# RXTE All-Sky Monitor Localization of SGR 1627–41
## 1. Introduction
The soft gamma repeaters (SGRs) were first identified as a separate class from “classical” gamma-ray bursts over ten years ago (Atteia et al. 1987; Kouveliotou et al. 1987; Laros et al. 1987). To date, four SGRs have been unambiguously identified. Attempts have been made to associate all these sources with supernova remnants (SNR). The location of SGR 0525–66 is consistent with the SNR N49 (Cline et al. 1982), and SGR 1806–20 has been associated with the SNR G10.0–0.3 (Kulkarni & Frail 1993; Kouveliotou et al. 1994; Kulkarni et al. 1994; Murakami et al. 1994). SGR 1900+14 has recently been localized to an error box of 1.6 square arcminutes, which lies just outside SNR G42.8+0.6 (Hurley et al. 1999b).
SGR 1627–41 was discovered with the BATSE instrument on 1998 June 15; a coarse location was promptly announced (Kouveliotou et al. 1998). Three days later, Hurley & Kouveliotou (1998) reported, based on data from GRB detectors in the Interplanetary Network (IPN), that the burst source was located within an annulus $`6'`$ in width. Further analysis reduced the width of the annulus to $`19''`$ (Hurley et al. 1999a). Earth limb considerations during bursts observed with BATSE restricted the burst source location along the IPN annulus to declinations between $`-43\mathrm{°}`$ and $`-49\mathrm{°}`$ (Woods et al. 1998). These localizations are all displayed in Figure 1. Woods et al. (1998) noted that the non-thermal core of the CTB 33 complex lay within this region. This core was identified as SNR G337.0–0.1 by Sarma et al. (1997), who estimated its distance to be $`11.0\pm 0.3`$ kpc. In response to the discovery of SGR 1627–41, this SNR was observed with the BeppoSAX Narrow Field Instruments. In this observation, a faint X-ray source was discovered at a location consistent with the IPN annulus, on the west side of G337.0–0.1 (Woods et al. 1999).
In this paper, we use observations by the Rossi X-ray Timing Explorer All-Sky Monitor of two bursts from SGR 1627-41 to constrain the source’s position along the IPN annulus, and we discuss the possible association of SGR 1627–41 with the SNR G337.0–0.1.
Fig. 1 – Localizations of SGR 1627–41. Known X-ray sources in the ASM catalog with typical brightnesses above $`50`$ mCrab are plotted as triangles. SNRs from the Green catalog are plotted as circles with diameters equal to the mean angular size listed in the catalog. The IPN annulus is not plotted near the SNR G337.0–0.1 to avoid obscuring the $`1.5'`$ circle.
## 2. Instrument
The ASM consists of three Scanning Shadow Cameras (SSCs) mounted on a motorized rotation drive. Each SSC contains a proportional counter with eight resistive anodes that are used to obtain a one-dimensional position along the direction parallel to the anodes for each detected event. The proportional counter views a $`12\mathrm{°}\times 110\mathrm{°}`$ (FWZI) field through a random-slit mask. The mask consists of six unique patterns, oriented such that the slits run perpendicular to the anodes.
The intensities of known sources in the field of view (FOV) are derived via a fit of model slit-mask shadow patterns to histograms of counts as a function of position in the detector. Normally, the residuals from a successful fit in the 1.5–12 keV band are then cross-correlated with each of the expected shadow patterns corresponding to a set of possible source directions which make up a grid covering the FOV. A peak in the resulting cross-correlation map indicates the possible presence and approximate location of a new, uncataloged X-ray source (Levine et al. 1996).
In addition to “position” data products, time-series data on the total number of counts registered in each SSC are recorded in 1/8 s bins. The position of each count in the detector is not preserved in the time-series mode.
## 3. Observations and Analysis
Two bursts were detected in SSC 3 of the ASM on 1998 June 17.943917 and 17.954243 (UTC). The time histories of these bursts are shown in Figure 2. Each burst was short ($`2`$ s), bright (5–12 keV fluxes of $`9`$ and $`23`$ Crab at peak), and hard (no significant flux below 5 keV), characteristic of SGR bursts. At these times, BATSE could not observe SGR 1627–41 due to Earth occultation (C. Kouveliotou 1998, private communication), but SGR 1627–41 was known to be active around this time, and the BATSE and IPN localizations of SGR 1627–41 were consistent with the FOV of the ASM at the times of these events. We therefore attribute these bursts to SGR 1627–41.
The energy spectra of bursts from SGR 1806–20 are known to drop rapidly below $`14`$ keV (Fenimore, Laros, & Ulmer 1994), and this property seems consistent with the ASM observations of SGR 1627–41 reported here. These two bursts were not detected in the lowest two energy channels of the ASM (1.5–5 keV). We therefore performed the cross-correlation analysis using only the data from the highest energy channel (5–12 keV). We found significant peaks in each of the two cross-correlation maps. The celestial locations of these peaks are consistent with each other and with the refined BATSE/IPN position. The first burst detection has a statistical significance of 4.3 $`\sigma `$, while the second burst was detected with a significance of 5.7 $`\sigma `$. These levels of significance do not take the number of trials in the search into account.
Since there are roughly 10000 independent position bins in the FOV of an SSC, the probability of measuring a noise peak of at least 4.3 $`\sigma `$ somewhere in the FOV is roughly 0.08, and the probability of measuring a 5.7 $`\sigma `$ peak or higher is roughly $`10^{-3}`$. There are roughly 60 independent position bins within the refined BATSE/IPN error box. The probability that both peaks would fall at random in the same location and that the common location should overlap the BATSE/IPN error box is $`60\times (10^{-4})^2\approx 10^{-6}`$. We are therefore confident that these peaks do represent detections of SGR 1627–41.
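The trials-corrected probabilities quoted above can be reproduced from the Gaussian tail probability. A minimal sketch, assuming one-sided Gaussian tails and the bin counts quoted in the text:

```python
import math

def tail_prob(n_sigma):
    """One-sided Gaussian tail probability P(X > n_sigma * sigma)."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

N_BINS = 10_000  # independent position bins in an SSC field of view

# Probability of a noise peak at least this high somewhere in the FOV
p_4p3 = N_BINS * tail_prob(4.3)   # roughly 0.08, as quoted
p_5p7 = N_BINS * tail_prob(5.7)   # well below 1e-3

# Probability that both peaks coincide by chance in one of the ~60
# independent bins overlapping the BATSE/IPN error box
p_coincide = 60 * (1.0 / N_BINS) ** 2   # ~ 6e-7, i.e. "roughly 1e-6"
```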
To refine the source position, we determined which regions of the IPN annulus were consistent with the ASM observations of the two bursts. At the time of the first burst, the FOV of SSC 3 did not cover positions along the annulus north of $`-45\mathrm{°}`$, while at the time of the second burst, the FOV did not cover positions along the annulus south of $`-50.6\mathrm{°}`$. We ran multiple trials of our ASM shadow-pattern fitting program for the 5–12 keV data from each of the two 90 s observations. In each trial, a source was assumed to be at one of 1600 locations along the center line of the segment of the IPN annulus with $`-50\mathrm{°}\le \delta \le -45\mathrm{°}`$.
Fig. 2 – Counts per 1-s bin around the times of two bursts from SGR 1627–41, as observed between 5–12 keV by SSC 3. The count rate includes contributions from all X-ray sources in the FOV, as well as the diffuse X-ray background. No background subtraction has been performed.
Each trial yields the fitted intensity of the source as well as a $`\chi ^2`$ goodness-of-fit statistic. For both observations, the best fit was found to be near $`\delta =-47.6\mathrm{°}`$ (Fig. 3).
For each of the two ASM observations, regions for which $`\chi ^2<\chi _{\mathrm{min}}^2+2.7`$, 4.0, or 6.6 yielded 90%, 95%, or 99% confidence intervals, respectively, for the source declination. These intervals are presumed to reflect counting statistics. We estimated the effect of systematic errors by projecting the measured magnitude of the position error for strong sources onto the direction of the BATSE/IPN error box. This magnitude was measured by Smith et al. (1999) to be $`1.9'`$ for 95% confidence intervals along the direction parallel to the proportional counter anodes. We added $`1.9'/\mathrm{cos}\theta `$ in quadrature with the errors estimated from the $`\chi ^2`$ values, where $`\theta `$, the angle between the IPN annulus and the anode direction, was $`7\mathrm{°}`$ for the first burst and $`43\mathrm{°}`$ for the second burst.
We thus derive two independent measurements of the declination of SGR 1627–41 along the IPN annulus: $`\delta _1=-47\mathrm{°}.603_{-0.051}^{+0.053}`$ and $`\delta _2=-47\mathrm{°}.640_{-0.096}^{+0.091}`$ at 95% confidence. We find a joint error box of $`\langle \delta \rangle =-47\mathrm{°}.621\pm 0.045`$ (J2000) by taking the weighted average of the two measurements. We calculate similar intervals for 90% and 99% confidence levels, using the same value for the systematic error. These three intervals are $`4.9'`$, $`5.4'`$, and $`6.3'`$ long, in order of increasing confidence. All three intervals are plotted as dark bars in Figure 3 and, together with the IPN annulus, as boxes in Figure 4. The center of these boxes lies $`0.13\mathrm{°}`$ below the galactic plane. These boxes are consistent with the $`1'`$ location of the persistent X-ray source localized by Woods et al. (1999), which is also plotted in Figure 4.
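Combining the two declination measurements can be sketched with standard inverse-variance weighting. This is an illustrative approximation: the quoted intervals are 95% (not 1 $`\sigma `$) half-widths, here symmetrized, and the paper may have combined the $`\chi ^2`$ curves directly.

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its combined error."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# Symmetrized 95% half-widths of the two ASM declination measurements (deg)
d1, e1 = -47.603, 0.052    # (0.051 + 0.053) / 2
d2, e2 = -47.640, 0.0935   # (0.096 + 0.091) / 2

mean, err = weighted_mean([d1, d2], [e1, e2])
# The combined half-width comes out near the quoted +/- 0.045 deg.
```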
Fig. 3 – The change in $`\chi ^2`$ as a function of declination along the IPN annulus, relative to the minimum values. The results from the burst of June 17.943917 are graphed as a solid line, while those from June 17.954243 are graphed as a dashed line. The location and extent of G337.0–0.1 is indicated by the dark horizontal bar below the curves. The weighted average error box at each confidence level is indicated by a darkened interval.
Since the position of the source is well constrained, the fits to the position data yield source intensities that can be used to estimate the number of counts actually detected from the source. Since there are no other known variable sources in the FOV, we can use the time-series data to estimate the peak flux of the bursts as well as investigate the possible presence of non-burst emission. Comparison of the count rates with the observed brightness of the Crab Nebula yields burst peak fluxes (1-s bins; 5–12 keV) of $`(1.1\pm 0.2)\times 10^{-7}`$ and $`(2.7\pm 0.3)\times 10^{-7}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. These values may be underestimates because the burst spectrum is substantially harder than the Crab spectrum in this energy range. We find that, for both bursts, the number of counts derived from the position data is consistent with the number of counts detected in the bursts as recorded in the time-series data. This yields a weak upper limit of $`2\times 10^{-8}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (3 $`\sigma `$) on any non-burst emission in the 1.5–12 keV band from the SGR during these observations.
A search for pulsations in the 5–12 keV time-series data was conducted by performing FFTs on 64 s of data after the burst events. In neither observation was any coherent signal between 0.015 and 4.000 Hz detected to an upper limit on the amplitude of $`2.4`$ c/s at 95% confidence. At the position of the ASM/IPN error box, this limit corresponds to a peak-to-peak modulation of $`340`$ mCrab.
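The outline of such a search is easy to sketch with an FFT of the 1/8 s time-series bins. This is a schematic illustration only: the Poisson data below are placeholders, not ASM data, and the actual pipeline's detection threshold is not reproduced here.

```python
import numpy as np

DT = 0.125            # 1/8 s time-series bins
T = 64.0              # seconds of data searched after each burst
n = int(T / DT)       # 512 samples

rng = np.random.default_rng(0)
rate = rng.poisson(lam=20.0, size=n).astype(float)  # placeholder counts

# Power spectrum of the mean-subtracted series
power = np.abs(np.fft.rfft(rate - rate.mean()))**2
freqs = np.fft.rfftfreq(n, d=DT)

# Search band used in the paper: 0.015-4.000 Hz
band = (freqs >= 0.015) & (freqs <= 4.0)
peak_freq = freqs[band][np.argmax(power[band])]
```

With 64 s of data the frequency resolution is 1/64 Hz, so the grid naturally spans the quoted 0.015–4 Hz search band.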
## 4. Discussion
The $`19''\times 4.9'`$ ASM/IPN error box for the bursting source passes within $`1.05'`$ of the center of SNR G337.0–0.1, which is
Fig. 4 – The joint ASM/IPN localization of the bursts from SGR 1627–41 is graphed over the MOST 0.8 GHz image (Whiteoak & Green, 1996) of SNR G337.0–0.1, the pronounced dark region. The width of the box is set by the IPN 3 $`\sigma `$ error annulus. The lengths represent three confidence levels for the ASM localization: 99% (dotted line), 95% (dashed line), and 90% (solid line). Also plotted is the error circle for the location of the weak persistent source observed by BeppoSAX (Woods et al. 1999).
a non-thermal shell, $`0.75'`$ in mean radius, embedded in a complex HII region (Sarma et al. 1997). Since arguments have been made to associate each of the three previously known SGRs with nearby SNRs, it is tempting to conclude that SGR 1627–41 is a magnetar that was born in the same supernova explosion that produced SNR G337.0–0.1. However, this is a very crowded region of the sky. The a posteriori probability of a chance alignment between the error box for SGR 1627–41 and a SNR cannot be discounted, and it has been argued (e.g., Gaensler & Johnston 1995) that the number of pulsar/SNR associations has been overestimated because the chance of unrelated objects appearing close together on the sky through geometric projection has been underestimated.
The north end of the ASM/IPN error box lies $`0.3'`$ outside the nominal edge of G337.0–0.1. We estimate the probability of a chance association after the method of Kulkarni & Frail (1993). We inflate the size of all the SNRs in Green’s catalog (1998) by $`0.3'`$ and convolve them with the ASM/IPN error box for SGR 1627–41. The fraction of the sky covered by the total of all the resulting areas gives the probability of a chance association, if the distribution of SNRs is uniform. The ASM/IPN error box is less than $`0.1\mathrm{°}`$ away from the galactic plane, and SNRs are strongly clustered along the galactic plane as well as toward the galactic center. We therefore apply the method by summing over the areas of the 19 SNRs in the region with galactic coordinates of $`327\mathrm{°}<\ell <347\mathrm{°}`$ and $`|b|<0.5\mathrm{°}`$. We obtain a probability of 5.4% that the ASM/IPN error box would fall within $`0.3'`$ of a SNR by chance.
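The convolution of an inflated SNR disc with a rectangular error box is a Minkowski sum, whose area is $`\pi r^2+2r(w+\ell )+w\ell `$. A sketch of the chance-probability estimate, with hypothetical SNR radii standing in for the actual Green catalog entries (so the 5.4% figure is not reproduced, only the method):

```python
import math

def convolved_area(r, box_w, box_l):
    """Area of a disc of radius r convolved (Minkowski sum) with a
    box_w x box_l rectangle; all lengths in degrees."""
    return math.pi * r**2 + 2.0 * r * (box_w + box_l) + box_w * box_l

# ASM/IPN error box, in degrees: ~19 arcsec x ~4.9 arcmin
BOX_W, BOX_L = 19.0 / 3600.0, 4.9 / 60.0
INFLATE = 0.3 / 60.0   # 0.3 arcmin allowance outside the SNR rim

# Hypothetical angular radii (degrees) standing in for the 19 remnants
snr_radii = [0.05, 0.10, 0.02, 0.08, 0.15, 0.03, 0.06, 0.04, 0.12, 0.07,
             0.05, 0.09, 0.02, 0.10, 0.06, 0.03, 0.08, 0.05, 0.11]

strip_area = 20.0 * 1.0   # 327 < l < 347 deg, |b| < 0.5 deg
p_chance = sum(convolved_area(r + INFLATE, BOX_W, BOX_L)
               for r in snr_radii) / strip_area
```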
Another method of evaluating the association between SGR 1627–41 and G337.0–0.1 is to consider the conditional probability of finding the SGR within the ASM error box given that it must be somewhere within the revised BATSE/IPN error box. The hypothesis that the SGR was born in the same explosion that created G337.0–0.1 gives us one model for the underlying probability distribution of the source location. We assume that the SGR has been traveling for $`10^4`$ y from the center of G337.0–0.1 (assumed to be 11 kpc from Earth) at a velocity drawn from a three-dimensional Maxwellian distribution with an rms value of 500 km/s. We convert this three-dimensional distribution to a distribution on the sky (a two-dimensional Gaussian with a standard deviation of $`1.6'`$), and renormalize this function by requiring that its integral over the BATSE/IPN error box be equal to unity. The integral of the resulting density function over the ASM 90% confidence error box yields a value of 0.749.
To evaluate a second model, we compute the probability of finding the SGR within the ASM error box if its location within the BATSE/IPN error box is drawn from a uniform distribution. Under that assumption, the probability is simply the ratio of the areas of the two error boxes, or 0.0136.
A comparison between these two probabilities indicates that the first hypothesis is a more reasonable explanation for the data than the second. This should not be taken as proof that the SGR actually did originate at the center of SNR G337.0–0.1, as it is also possible that the SGR may be associated with something other than G337.0–0.1 that belongs to a class of objects that clusters along the galactic plane.
An association between SGR 1627–41 and G337.0–0.1 does not require that unreasonable physical characteristics be attributed to the system. At an assumed distance of 11 kpc, the ASM/IPN error box lies 4 pc away from the center of G337.0–0.1, projected onto the plane of the sky. G337.0–0.1 itself is 5.1 pc in diameter. The radius of a SNR in the Sedov-Taylor phase of its expansion is given by $`R=(31.5\mathrm{pc})(E_{51}/n_0)^{1/5}t_5^{2/5}`$ (Shull, Fesen, & Saken 1989). If we assume $`E_{51}\approx 1`$, then $`t_5=n_0^{1/2}(R/31.5)^{5/2}`$. Since OH masers have been associated with this SNR (Frail et al. 1996), the local density must fall within the range $`(1\text{–}30)\times 10^4`$ cm<sup>-3</sup> (Lockett, Gauthier, & Elitzur 1999). This means the age of the remnant must fall between $`(2\text{–}10)\times 10^4`$ y. These values are within an order of magnitude of the estimated age of magnetars, $`10,000`$ y (Thompson & Duncan 1996), and they imply a projected velocity for the magnetar between 40 and 200 km/s. These are reasonable values for the projected velocity of a neutron star.
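Plugging the quoted numbers into the inverted Sedov-Taylor relation (with $`E_{51}=1`$, remnant radius 2.55 pc, and the maser-constrained density range) recovers the age and velocity ranges above. A minimal check:

```python
import math

# Sedov-Taylor radius R = 31.5 pc * (E51/n0)^(1/5) * t5^(2/5),
# inverted with E51 = 1: t5 = n0^(1/2) * (R / 31.5)^(5/2)
PC_KM = 3.086e13   # km per parsec
YR_S = 3.156e7     # seconds per year

R = 5.1 / 2.0      # SNR radius in pc
SEP = 4.0          # projected SGR / SNR-centre separation in pc

def age_years(n0):
    """SNR age for ambient density n0 (cm^-3), in years."""
    t5 = math.sqrt(n0) * (R / 31.5) ** 2.5
    return t5 * 1e5

def velocity_km_s(n0):
    """Projected magnetar velocity implied by the 4 pc separation."""
    return SEP * PC_KM / (age_years(n0) * YR_S)

# n0 = 1e4 cm^-3 gives ~2e4 y and ~200 km/s;
# n0 = 3e5 cm^-3 gives ~1e5 y and ~40 km/s.
```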
A distance of 11 kpc also does not demand an excessive energy budget for these bursts. Bursts from SGR 1806–20 have been observed to reach total peak luminosities of $`10^{42}`$ erg/s, with a third of the emission above 30 keV (Fenimore et al. 1994). The two bursts detected by the ASM from SGR 1627–41, assuming isotropic emission from an 11 kpc distance, reach peak 5–12 keV luminosities of 2 and 3 $`\times 10^{39}`$ erg/s, only 0.1% of the above total.
We thank Kevin Hurley for providing the IPN annulus and Peter Woods for providing the BeppoSAX source location. DS wishes to thank Derek Fox, Bryan Gaensler and Vicky Kaspi for helpful discussions. Support for this work was provided in part by NASA Contract NAS5-30612. The MOST is operated by the University of Sydney with support from the Australian Research Council and the Science Foundation for Physics within the University of Sydney.
# A Modified Magnitude System that Produces Well-Behaved Magnitudes, Colors, and Errors Even for Low Signal-to-Noise Ratio Measurements
## 1 Introduction
The advantages of using a logarithmic scale to measure astronomical fluxes are obvious: the magnitude scale is able to span a huge dynamic range, and when relative colors are needed they can be computed by simply differencing magnitudes measured in different bandpasses. These advantages are quite clear for bright objects where noise is not an issue. On the other hand, as fluxes become comparable to the sky and instrumental noise, the corresponding magnitudes are subject to large and asymmetric errors due to the singularity in the magnitude scale at zero flux; indeed, if a noisy measurement of an object’s flux happens to be negative, its magnitude is a complex number! Astronomers have generally handled these cases by specifying detection flags and somehow encoding the negative flux. These problems become even more pronounced as we work in multicolor space. An object can be well detected and measured in several of the bands, and still fail to provide a measurable flux in others. In flux space the object would be represented as a multivariate Gaussian probability distribution centered on its measured fluxes, not necessarily all positive. Important information is lost by simply replacing a flux by an arbitrary 2- or 3-$`\sigma `$ upper limit. In magnitude space, the error distribution of such an object has an infinite extent in some directions, making meaningful multicolor searches in a database impossible; all the non-detections in one or more bands must be isolated, and treated separately. One solution to this problem is to use linear units (e.g. Janskys), although this makes studies based on flux ratios (colors) inconvenient. This paper proposes an alternative solution, a modification of the definition of magnitudes, which preserves their advantages while avoiding their disadvantages.
## 2 The Inverse Hyperbolic Sine
We propose replacing the logarithm in the traditional definition of a magnitude with an inverse hyperbolic sine function. This function becomes a logarithm for large values of its argument, but is linear close to the origin.
$$\mathrm{sinh}^{-1}(x)=\mathrm{ln}\left[x+\sqrt{x^2+1}\right]\approx \begin{cases}\text{sgn}(x)\,\mathrm{ln}|2x|,&\text{if }|x|\gg 1\\ x,&\text{if }|x|\ll 1\end{cases}$$
(1)
The usual apparent magnitude $`m`$ can be written in terms of the dimensionless normalized flux $`x\equiv f/f_0`$ as
$$m\equiv -2.5\mathrm{log}_{10}x=-(2.5\mathrm{log}_{10}e)\mathrm{ln}x\equiv -a\mathrm{ln}x.$$
(2)
where $`f_0`$ is the flux of an object with magnitude $`0.0`$, and $`a\equiv 2.5\mathrm{log}_{10}e=1.08574`$ is Pogson’s ratio (Pogson 1856).
Let us define the new magnitude $`\mu `$ as
$$\mu (x)\equiv -a\left[\mathrm{sinh}^{-1}\left(\frac{x}{2b}\right)+\mathrm{ln}b\right]$$
(3)
Here $`a`$ and $`b`$ are constants, and $`b`$ is an arbitrary ‘softening’ which determines the flux level at which linear behaviour sets in. After some discussion, we have adopted the name ‘asinh magnitudes’ for $`\mu `$. Consider the asymptotic behaviour of $`\mu `$, for both high and low $`x`$:
$$\lim_{x\to \infty }\mu (x)=-a\mathrm{ln}x=m,\qquad \lim_{x\to 0}\mu (x)=-a\left[\frac{x}{2b}+\mathrm{ln}b\right].$$
(4)
Thus for $`x\to \infty `$, $`\mu `$ approaches $`m`$, for any choice of $`b`$. On the other hand when $`|x|\lesssim b`$, $`\mu `$ is linear in $`x`$; for $`x\ll -b`$, we gradually return to logarithmic behaviour, although this regime is never of astronomical interest for any reasonable choice of $`b`$. Intuition suggests that $`b`$ should be chosen to be comparable to the (normalized) flux of an object with a signal-to-noise ratio of about one; the following section discusses the choice of $`b`$, and the related question of $`\mu `$’s error distribution.
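The definition and its two asymptotic regimes translate directly into code; a minimal sketch of equations 1–3 (the tiny numerical values below are illustrative, not survey parameters):

```python
import math

A = 2.5 * math.log10(math.e)   # Pogson's ratio a, ~1.08574

def mu(x, b):
    """Asinh magnitude of normalized flux x with softening b (eq. 3)."""
    return -A * (math.asinh(x / (2.0 * b)) + math.log(b))

def m(x):
    """Classical magnitude (defined only for x > 0)."""
    return -2.5 * math.log10(x)

b = 1e-10
# Bright limit: for x >> b, mu(x, b) agrees with m(x) to high accuracy.
# Faint limit: mu stays real and finite at zero or negative flux,
# and is linear in x with slope -A/(2b) for |x| << b.
```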
## 3 The Errors in $`\mu `$ and the choice of the Softening Parameter $`b`$
Although the softening parameter $`b`$ can be any positive number, we show below that one particular selection has a couple of attractive features. In making our choice, we use the following guiding principles: 1) Since asinh magnitudes are being introduced to avoid the problems that classical magnitudes manifest in low signal-to-noise data, the differences between asinh and classical magnitudes should be minimized for high signal-to-noise data. 2) We should minimize $`\mu `$’s variance at low flux levels. The latter is not strictly required as a too-small value of $`b`$ merely stretches out $`\mu `$’s scale along with its errors, but it is convenient if $`\mu `$’s variance at zero flux is comparable to its variance at a signal-to-noise ratio of a few.
In reality, our measurement of the normalized flux $`x`$ will be noisy, with variance $`\sigma ^2`$. We wish to choose $`b`$ to minimize $`\mu `$’s variance, while keeping the difference $`m\mu `$ small. Let us therefore compute the variances of $`m`$ and $`\mu `$ (keeping only the linear terms in their Taylor series), and also their difference; arrows indicate the asymptotic behavior as $`x0`$:
$$\begin{array}{lll}\text{Var}(m)=&{\displaystyle \frac{a^2\sigma ^2}{x^2}}&\to {\displaystyle \frac{a^2\sigma ^2}{x^2}}\\ \text{Var}(\mu )=&{\displaystyle \frac{a^2\sigma ^2}{4b^2+x^2}}&\to {\displaystyle \frac{a^2\sigma ^2}{4b^2}}\\ m-\mu =&a\mathrm{ln}\left[{\displaystyle \frac{1+\sqrt{1+4b^2/x^2}}{2}}\right]&\to -a\,\text{sgn}(x)\mathrm{ln}\left({\displaystyle \frac{|x|}{b}}\right)\end{array}$$
(5)
What are the disadvantages of taking either too low or too high a value for $`b`$? Choosing a low value of $`b`$ causes the difference $`m\mu `$ to become smaller, i.e. the two magnitudes track each other better, but unfortunately $`\mu `$’s variance at $`x=0`$ varies as $`1/b^2`$. Choosing too high a value has the opposite effect: the difference explodes at low values of $`x`$, simply due to the singularity in the logarithm in the definition of $`m`$. At the same time, $`\mu `$’s variance remains small.
In order to balance these two competing effects, we shall determine $`b`$ by minimizing a penalty function containing terms due to both, added in quadrature.
The difference between the two magnitudes, normalised by $`m`$’s standard deviation, is given by
$$\delta (x)=\frac{m(x)-\mu (x)}{\sqrt{\text{Var}(m)}}\approx \frac{b}{\sigma }F\left(\frac{x}{b}\right),\quad \mathrm{where}\quad F(y)=y\,\mathrm{ln}\left[\frac{1+\sqrt{1+4/y^2}}{2}\right].$$
(6)
The function $`F(y)`$ has a maximum value of approximately 0.5 at $`y=0.7624`$, so the largest possible deviation between the two magnitude scales is
$$\left(m-\mu \right)_{\text{max}}\approx \frac{b}{2\sigma }\sqrt{\text{Var}(m)}$$
(7)
The other ‘cost’ associated with the choice of $`b`$ is the size of the error box for $`\mu `$ at $`x=0`$, which is
$$\sqrt{\text{Var}(\mu )|_{x=0}}=\frac{a\sigma }{2b}$$
(8)
The total penalty can be obtained by adding these two costs in quadrature:
$$ϵ=\delta _{\text{max}}^2+\text{Var}(\mu )|_{x=0}=\frac{b^2}{4\sigma ^2}+\frac{a^2\sigma ^2}{4b^2}=\frac{a}{4}\left[\frac{b^2}{a\sigma ^2}+\frac{a\sigma ^2}{b^2}\right]$$
(9)
which has the obvious minimum at $`b^2=a\sigma ^2`$. Thus the optimal setting is the value $`b=\sqrt{a}\sigma =1.042\sigma `$. As expected, $`b`$ is approximately equal to the noise in the flux. This choice of $`b`$ leads to $`m-\mu `$ having a maximum value of 0.52$`\sqrt{\text{Var}(m)}`$, implying that the difference between the two magnitudes is always smaller than the uncertainty in $`m`$ (see figure 1). For an object with zero measured flux, the error in $`\mu `$ is $`\pm 0.52`$ magnitudes. If the flux errors are Gaussian, so, to leading order, are the errors in $`\mu `$, as the transformation from counts to $`\mu `$ is linear for $`|x|\lesssim b`$.
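These claims are easy to verify numerically: $`F(y)`$ peaks near 0.5 around $`y\approx 0.76`$, the penalty is minimized at $`b=\sqrt{a}\sigma `$, and the resulting error at zero flux is about 0.52 mag. A quick check using grid scans rather than calculus:

```python
import math

A = 2.5 * math.log10(math.e)   # Pogson's ratio a

def F(y):
    """F(y) from equation 6."""
    return y * math.log((1.0 + math.sqrt(1.0 + 4.0 / y**2)) / 2.0)

# Maximum of F(y): about 0.5, near y = 0.76
ys = [0.01 * k for k in range(1, 500)]
y_star = max(ys, key=F)
F_max = F(y_star)

def penalty(b, sigma=1.0):
    """delta_max^2 + Var(mu)|_{x=0}, as in equation 9."""
    return b**2 / (4.0 * sigma**2) + A**2 * sigma**2 / (4.0 * b**2)

bs = [0.001 * k for k in range(1, 3000)]
b_star = min(bs, key=penalty)        # grid minimum, ~1.042 for sigma = 1
err_at_zero = A / (2.0 * b_star)     # ~0.52 mag, equation 8
```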
Figure 2 shows $`m`$ and $`\mu `$ as a function of signal-to-noise ratio, for this choice of $`b`$, along with their 1-$`\sigma `$ errors.
## 4 Application to Real Data
We have been working in terms of $`x\equiv f/f_0`$, but it is usually more convenient to use the measured fluxes directly. In terms of the non-normalised flux $`f`$, the expressions for $`m`$, $`\mu `$, and $`\text{Var}(\mu )`$ become
$`m=`$ $`m_0-2.5\mathrm{log}_{10}f,`$ (10)
$`\mu =`$ $`\left(m_0-2.5\mathrm{log}_{10}b^{\prime }\right)-a\,\mathrm{sinh}^{-1}\left(f/2b^{\prime }\right)`$ (11)
and
$`\text{Var}(\mu )=`$ $`{\displaystyle \frac{a^2\sigma ^{\prime 2}}{4b^{\prime 2}+f^2}}\to {\displaystyle \frac{a^2\sigma ^{\prime 2}}{4b^{\prime 2}}}`$ (12)
where $`m_0\equiv 2.5\mathrm{log}_{10}(f_0)`$, and $`b^{\prime }\equiv f_0b`$ and $`\sigma ^{\prime }\equiv f_0\sigma `$ are measured in real flux units (e.g. counts).
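Equations 11 and 12 translate directly into code; a sketch in terms of instrumental counts (the zero-point and noise values below are illustrative placeholders):

```python
import math

A = 2.5 * math.log10(math.e)   # Pogson's ratio a

def asinh_mag(f, b_counts, m0):
    """Asinh magnitude for flux f in counts (equation 11)."""
    return (m0 - 2.5 * math.log10(b_counts)) - A * math.asinh(f / (2.0 * b_counts))

def asinh_mag_var(f, sigma_counts, b_counts):
    """Variance of the asinh magnitude (equation 12)."""
    return A**2 * sigma_counts**2 / (4.0 * b_counts**2 + f**2)

m0 = 25.0                      # zero-point, illustrative
sigma0 = 100.0                 # nominal sky+detector noise, counts
b = math.sqrt(A) * sigma0      # optimal softening, b = sqrt(a) * sigma

depth = asinh_mag(0.0, b, m0)  # mu(0): the survey-depth measure
# sqrt(asinh_mag_var(0, sigma0, b)) is ~0.52 mag, as in section 3.
```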
An object with no measured flux in a given band has a $`\mu `$ value of $`m_02.5\mathrm{log}_{10}b^{}`$ (equation 11), in other words the classical magnitude of an object with a flux of $`b^{}`$. We note in passing that this value $`\mu (0)`$ is a convenient measure of the depth of a survey, containing information about both the noise properties of the sky and the image quality.
In the discussion above we considered the idealized case of all objects having the same error, dominated by sky noise. This case covers most objects found in a given deep survey, as most are detected at the flux limit and are typically at most marginally resolved. For bright objects, of course, the difference between $`m`$ and $`\mu `$ is entirely negligible.
The optimal measure of the flux of a faint stellar object is given by convolving its image with the PSF. If the noise is dominated by the sky and detector, the variance of the measured flux is independent of the object’s brightness, and is given by the background variance per unit area multiplied by the effective area of the PSF ($`4\pi \alpha ^2`$ if the PSF is Gaussian with FWHM $`2\sqrt{2\mathrm{ln}2}\alpha `$). If we decide upon a typical seeing quality and sky brightness for a given band, this defines $`\sigma `$’s nominal value, $`\sigma _0`$, which sets $`b`$ once and for all. Each band has its own value of $`b`$.
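The nominal noise $`\sigma _0`$, and hence $`b`$, follows from the background variance and the effective PSF area exactly as described; a sketch with placeholder observing conditions:

```python
import math

A = 2.5 * math.log10(math.e)   # Pogson's ratio a

def nominal_sigma(sky_var_per_pixel, fwhm_pixels):
    """Sky-limited PSF-flux noise: background variance per unit area times
    the effective PSF area, 4*pi*alpha^2 for a Gaussian PSF of width alpha."""
    alpha = fwhm_pixels / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.sqrt(sky_var_per_pixel * 4.0 * math.pi * alpha**2)

# Placeholder canonical conditions defining the band's fixed softening
sigma0 = nominal_sigma(sky_var_per_pixel=100.0, fwhm_pixels=3.0)
b = math.sqrt(A) * sigma0      # set once and for all, per band
```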
As observing conditions change so do measurement errors, with the result that the error in $`\mu `$ for very faint objects is no longer exactly the 0.52 magnitudes that it would be under canonical conditions. Whenever a precise error is needed for a given object’s $`\mu `$, it may be found by converting $`\mu `$ back to flux, or by applying equation 12; for faint objects this reduces to multiplying the quoted error by $`\sigma /\sigma _0`$. Failure to apply such a correction would mean that the quoted errors on $`\mu `$ were wrong.
It would be possible to choose $`b`$ separately for different parts of the sky, but this would make the conversion of $`\mu `$ back to flux impractically complicated, and a significant source of mistakes for users of the data. The behaviour of $`\mu `$ as $`b`$ changes is reasonably benign; the error at zero flux varies only linearly with $`b`$, as does the flux where $`m`$ and $`\mu `$ begin to diverge.
Care is also required whenever the measured flux has different noise properties, for example if the flux is measured within a circular aperture or a given isophote rather than using a PSF. In this case, the appropriate value of $`\sigma `$ may be much larger than the one used to set $`b`$, with the consequence that the error in $`\mu `$ at zero flux considerably underestimates the true uncertainty (the other case, where the effective aperture is smaller than the PSF, is unlikely to occur in practice). It would, of course, be possible, if confusing, to choose a different set of $`b`$ values for each type of (fixed size) aperture, although it seems unlikely that this would really be a good idea. As the discussion in section 3 showed, the consequences of even a grossly incorrect value of $`b`$ are not catastrophic; the asinh magnitudes still reduce to our familiar magnitudes for bright objects, and are still well defined for negative fluxes.
One place where special care will be needed is in measures of surface brightness, where $`m`$ and $`\mu `$ can depart quite strongly from one another even at levels where the flux is well determined. It may prove desirable to use a different value of $`b`$ for such measurements; they are after all never directly compared with total magnitudes.
Fan et al. (1999a) have used the asinh magnitude system to search for high-z quasars in preliminary SDSS data; examples of color-color plots employing asinh magnitudes may be found in Fan et al. (1999b).
## 5 Asinh Magnitudes and Colors
The ratio of two low signal-to-noise ratio measurements (for example, an object’s color) is statistically badly behaved (indeed, for Gaussian distributions if the denominator has zero mean, the ratio follows a Cauchy distribution and accordingly has no mean, let alone a variance!). What is the behavior of our asinh magnitudes when used to measure colors?
For objects detected at high signal-to-noise ratio, the difference in $`\mu `$ measured in two bands is simply a measure of the relative flux in the two bands. For faint objects this is no longer true, although the difference is well behaved. A non-detection in two bands has a well defined ‘color’ ($`\mu _1(0)-\mu _2(0)`$). As discussed above, the error in this color is approximately $`0.75\sigma /\sigma _0`$ magnitudes, assuming independent errors in the two bands. Equivalently, such a non-detection can be represented by an ellipsoid in multi-color space, centered at the point corresponding to zero flux in all bands, with principal axes $`0.52\sigma /\sigma _0`$ (in general $`\sigma /\sigma _0`$ will be different in each band).
As an illustration of the instability of the traditional definition of color for faint objects, consider two objects that have almost identical colors but which are near the detection limit of a survey. Their asinh colors will be very similar, but (due to the singularity of the logarithm as the flux goes to zero) their classical magnitudes may differ by an arbitrarily large amount.
Figure 3 shows the results of a simple Monte-Carlo simulation. We took a set of ‘objects’ with 2.51 times as much flux in one band as in the other (one magnitude), added Gaussian noise of fixed variance to each measurement, and tabulated the color measured using both classical and asinh magnitudes. The left hand panel shows the flux ratio, $`\mathrm{\Delta }m\equiv m_1-m_2`$, and $`\mathrm{\Delta }\mu \equiv \mu _1-\mu _2`$ as a function of signal-to-noise ratio; the right hand panels show histograms of their distribution in the range $`1\le S/N\le 3`$. At the right side of the plot, where the noise is less important, both $`\mathrm{\Delta }m`$ and $`\mathrm{\Delta }\mu `$ tend to $`-1`$, the correct value. As the noise becomes more important, the errors on the $`\mathrm{\Delta }m`$ plot grow (and an increasing fraction of points in the left panel is simply omitted as their fluxes are zero or negative). The $`\mathrm{\Delta }\mu `$ plot shows the ‘color’ tending to its value at zero flux, in this case 0.0, as the signal-to-noise ratio drops.
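The simulation is straightforward to reproduce in outline (random seed, sample sizes, and the softening choice $`b=\sqrt{a}\sigma `$ are ours; the figure's exact setup may differ):

```python
import math
import random

A = 2.5 * math.log10(math.e)   # Pogson's ratio a

def mu(x, b):
    """Asinh magnitude of normalized flux x with softening b."""
    return -A * (math.asinh(x / (2.0 * b)) + math.log(b))

random.seed(42)
SIGMA = 1.0
B = math.sqrt(A) * SIGMA       # optimal softening

def color(snr):
    """Asinh color of an object with a one-magnitude (2.512x) flux
    ratio between bands, at the requested band-1 signal-to-noise."""
    f1 = snr * SIGMA
    f2 = f1 / 2.512
    x1 = random.gauss(f1, SIGMA)
    x2 = random.gauss(f2, SIGMA)
    return mu(x1, B) - mu(x2, B)

bright = [color(1000.0) for _ in range(2000)]   # high S/N: tends to -1
faint = [color(0.0) for _ in range(2000)]       # zero flux: tends to 0
mean_bright = sum(bright) / len(bright)
mean_faint = sum(faint) / len(faint)
```

Every asinh color is finite, even when both measured fluxes scatter negative; the classical color would be undefined for a growing fraction of the faint sample.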
## 6 Summary
We have shown that an innovative use of inverse hyperbolic sines for a new magnitude scale can overcome most deficiencies of traditional magnitudes, while preserving their desirable features. The defining equations are
$$\mu =\left(m_0-2.5\mathrm{log}_{10}b^{\prime }\right)-a\mathrm{sinh}^{-1}\left(f/2b^{\prime }\right)\equiv \mu (0)-a\mathrm{sinh}^{-1}\left(f/2b^{\prime }\right)$$
and
$$\text{Var}(\mu )=\frac{a^2\sigma ^{\prime 2}}{4b^{\prime 2}+f^2}\underset{f\to 0}{\longrightarrow }\frac{a^2\sigma ^{\prime 2}}{4b^{\prime 2}}$$
where $`a\equiv 2.5\mathrm{log}_{10}e`$, $`f`$ is the measured flux, $`\sigma ^{\prime }`$ is the error in $`f`$ due to the sky and detector, and $`b^{\prime }`$ is a softening parameter (Equations 11 and 12).
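In code, the defining equations amount to only a few lines. The sketch below uses arbitrary illustrative values for the zero point, flux error, and softening parameter:

```python
import math

a = 2.5 * math.log10(math.e)  # a = 2.5 log10(e) ~ 1.0857

def asinh_mag(f, m0, b):
    """mu = (m0 - 2.5 log10 b') - a asinh(f / 2b')."""
    return (m0 - 2.5 * math.log10(b)) - a * math.asinh(f / (2.0 * b))

def asinh_mag_var(f, sigma, b):
    """Var(mu) = a^2 sigma'^2 / (4 b'^2 + f^2)."""
    return a**2 * sigma**2 / (4.0 * b**2 + f**2)

m0, b, sigma = 25.0, 0.1, 0.1   # illustrative values only

# For f >> b' the asinh magnitude approaches the classical magnitude...
f = 100.0
assert abs(asinh_mag(f, m0, b) - (m0 - 2.5 * math.log10(f))) < 1e-4
# ...while at zero flux both mu and its variance remain finite.
assert math.isfinite(asinh_mag(0.0, m0, b))
assert asinh_mag_var(0.0, sigma, b) == a**2 * sigma**2 / (4.0 * b**2)
```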
The principal advantages of these ‘asinh’ magnitudes are their equivalence to classical magnitudes when errors are negligible, their ability to represent formally negative fluxes, and their well behaved error distribution as the measured flux goes to zero. For high signal-to-noise ratios the difference of two asinh magnitudes is a measure of the flux ratio, while for noisy detections it becomes the statistically preferable flux difference; this allows meaningful color cuts even when an object is barely detected in some bands. Additionally, $`\mu (0)`$ provides a convenient way of summarizing the photometric depth of a survey for point-like objects, containing information about both the noise level of the system and the image quality.
Asinh magnitudes will be used in the SDSS catalogs.
###### Acknowledgements.
The authors would like to thank Xiaohui Fan, Don Schneider, and Michael Strauss for helpful comments on various versions of this paper, and Gillian Knapp for demonstrating that the whole question was more subtle than we had realised. The Sloan Digital Sky Survey (SDSS) is a joint project of The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, Princeton University, the United States Naval Observatory, and the University of Washington. Apache Point Observatory, site of the SDSS, is operated by the Astrophysical Research Consortium. Funding for the project has been provided by the Alfred P. Sloan Foundation, the SDSS member institutions, the National Aeronautics and Space Administration, the National Science Foundation and the U.S. Department of Energy.
# Apparent Metallic Behavior at 𝐵=0 of a Two Dimensional Electron System in AlAs
## Abstract
We report the observation of metallic-like behavior at low temperatures and zero magnetic field in two dimensional (2D) electrons in an AlAs quantum well. At high densities the resistance of the sample decreases with decreasing temperature, but as the density is reduced the behavior changes to insulating, with the resistance increasing as the temperature is decreased. The effect is similar to that observed in 2D electrons in Si-MOSFETs, and in 2D holes in SiGe and GaAs, and points to the generality of this phenomenon.
The question of whether or not a metal-insulator transition can occur in two dimensions at zero magnetic field has been of great recent interest. Using scaling arguments, Abrahams et al. showed that a non-interacting 2D carrier system with any amount of disorder will be localized at zero temperature. Subsequent experiments provided evidence to support this theoretical prediction. However, recently, various investigators have discovered 2D carrier systems that show metallic behavior. They observe that for a range of 2D densities ($`n`$), the resistivity of their samples decreases by nearly an order of magnitude as the temperature ($`T`$) is decreased. When $`n`$ is reduced below a critical density ($`n_c`$) they observe a transition to insulating behavior. One difference between the earlier and the more recent experiments is that the new samples have a much higher carrier mobility ($`\mu `$). One theoretical investigation asserts that this higher $`\mu `$ combines with a broken inversion symmetry in the confining potential well to allow spin orbit effects to create the metallic state, while another hypothesizes that the higher $`\mu `$ allows for stronger electron-electron interaction, and that this interaction causes the metallic state. However, there is still no clear model supported by experimental results to explain this metallic behavior.
So far, the metallic behavior has been observed in 2D electron systems (2DESs) in Si-MOSFETs, 2D hole systems (2DHSs) in GaAs/AlGaAs heterostructures and SiGe quantum wells (QWs), and now in a 2DES in AlAs. To provide an overview, and for further discussion, some of the important parameters of these systems are shown in Table I. AlAs is an interesting material for 2DESs because it combines some of the properties of GaAs with those of Si. Since it is grown in the same molecular beam epitaxy (MBE) systems as GaAs samples, very clean samples can be fabricated. However, it is similar to Si in that the minima of the AlAs conduction band are at the X-points of the Brillouin Zone. (The minima in Si are near the X-points.) In addition, in AlAs QWs like ours, which are grown on the (100) surface of the GaAs substrate, one can cause the electrons to occupy the conduction band ellipsoids either perpendicular to or parallel to the plane of the 2DES by varying the width of the QW. Our data indicate that in our sample, only one of the in-plane ellipsoids is occupied. By patterning Hall bars along and perpendicular to the direction of the occupied ellipsoid’s major axis, we are able to measure the resistance along these two directions. We observe anisotropy in the measured resistance, and the data along both directions show metallic behavior at high $`n`$ and insulating behavior at low $`n`$.
Our sample was grown by MBE on an undoped GaAs (100) substrate. The 2DES is confined to a 150 Å-wide AlAs QW which is separated from the dopant atoms (Si) by a 300 Å-wide front barrier of Al<sub>0.45</sub>Ga<sub>0.55</sub>As and a rear barrier consisting of 25 periods of a GaAs/AlAs (10.5 Å/8.5 Å) superlattice. On a sample from this wafer, we patterned two Hall bars oriented perpendicular to each other (L-shaped) along two in-plane crystallographic directions. The Hall bars were patterned by standard photolithographic techniques and a wet etch. Ohmic contacts were made by alloying AuGeNi in an N<sub>2</sub> and H<sub>2</sub> atmosphere for 10 minutes. A front gate of 350 Å Au on top of 50 Å Ti was deposited on top of the active regions of the Hall bars to control $`n`$.
Our $`T`$ dependence measurements were performed in a pumped <sup>3</sup>He refrigerator at $`T`$ from 0.28 K to 1.4 K. We measured $`T`$ using a calibrated RuO<sub>2</sub> resistor. We used the standard low-frequency AC lock-in technique with an excitation current of 1 nA to measure the four point resistance of the sample. The data were taken by fixing the front-gate voltage ($`V_g`$), and measuring both the longitudinal ($`R_{xx}`$) and transverse ($`R_{xy}`$) resistances as a function of perpendicular magnetic field ($`B`$). These magnetoresistance measurements were used to determine $`n`$ at that $`V_g`$. Gate leakage was monitored throughout the experiment and it never exceeded 10 pA. The $`T`$ was then raised to 1.4 K and continuously lowered back to the base $`T`$ (0.28 K) over a period of three hours. The densities and $`T`$ dependencies were repeatable at the same gate voltages. The results of $`R_{xx}`$ and $`R_{xy}`$ magnetoresistance measurements at 0.28 K for $`V_g=0`$ are shown in Fig. 1. Note the high quality of the data, with the appearance of Shubnikov-de Haas oscillations at a field as low as 0.6 T and the fractional quantum Hall effect at Landau level filling factor $`\nu =2/3`$ as well as at $`\nu =1/3`$ (see Ref. ).
Before presenting the $`T`$-dependence data, we will describe some of the characteristics of our AlAs 2DES. Several observations lead us to believe that in our sample only one in-plane conduction-band ellipsoid is occupied. First, previous cyclotron resonance (CR) measurements on samples from this wafer reveal a CR effective mass $`m_{CR}=0.46m_e`$. This mass is in excellent agreement with the expected CR mass if in-plane ellipsoids are occupied. It is very different from $`m_{CR}=m_t=0.19m_e`$ which would be observed if an ellipsoid perpendicular to the plane were occupied. Second, the data of Fig. 1 show minima in $`R_{xx}`$ and plateaus in $`R_{xy}`$ for both even and odd filling factors, and the Shubnikov-de Haas oscillations show no beating. Moreover, the two $`R_{xx}`$ traces from the two perpendicular Hall bars show an anisotropy in $`\mu `$. We conclude from these observations that only one of the two in-plane ellipsoids is occupied: the magnetoresistance data suggest that there is only one occupied subband, while the $`\mu `$ anisotropy indicates that the Fermi surface in the plane of the 2DES can be anisotropic, consistent with a single in-plane ellipsoid being occupied. It is possible that a slight angular deviation from the ideal growth direction could account for the lifting of the expected degeneracy of the in-plane ellipsoids, as a splitting of only a few meV would be sufficient to produce a single occupied subband.
We now compare the characteristics of the 2DES in our sample with those of Si-MOSFETs. First, the conduction band ellipsoids in bulk Si and AlAs are comparable, with similar values for $`m_l`$ and $`m_t`$. However, in contrast to our sample, the Si-MOSFET 2DESs which have been studied so far occupy out-of-plane ellipsoids. As a result, transport in the plane is isotropic with an effective mass $`m_t=0.19m_e`$. The mobilities at base $`T`$ of our sample for the trace shown in Fig. 1 ($`n=2.08\times 10^{11}`$ cm<sup>-2</sup>) are 6.1 m<sup>2</sup>/Vs for the high-$`\mu `$ direction and 4.2 m<sup>2</sup>/Vs for the low-$`\mu `$ direction. The highest mobilities we measure, for the highest density ($`n=2.73\times 10^{11}`$ cm<sup>-2</sup>), are 7.7 m<sup>2</sup>/Vs and 4.7 m<sup>2</sup>/Vs. These mobilities are comparable to the highest mobilities reported for Si-MOSFET 2DESs (see Table I). Finally, as seen in Fig. 1 and Ref. , our sample exhibits clear fractional quantum Hall effect, an effect rarely seen in Si-MOSFETs.
Figure 2 summarizes the $`T`$ dependence of the zero-$`B`$ resistivity ($`\rho `$) for a range of $`n`$ from $`2.73\times 10^{11}`$ cm<sup>-2</sup> to $`0.59\times 10^{11}`$ cm<sup>-2</sup>. The results for both high and low mobility directions are shown. As with other experiments, the data can be split into three regimes. In the lowest $`n`$ traces, the behavior is insulating throughout the $`T`$ range measured, with $`\rho `$ rising monotonically as $`T`$ is reduced. The highest $`n`$ traces show metallic behavior throughout the $`T`$ range, with $`\rho `$ decreasing monotonically as $`T`$ is reduced. For intermediate $`n`$, $`\rho `$ exhibits a nonmonotonic dependence on $`T`$: it initially rises as $`T`$ is lowered, shows a maximum, and then decreases with decreasing $`T`$. These data are qualitatively very similar to the results of previous experiments.
As already mentioned, a theoretical explanation for data like these, with experimental evidence to support it, does not exist. We expect, though, that the Fermi energy ($`E_F`$) might be an important parameter. In our sample, $`E_F`$ at the highest density is 16 K and at the lowest density is 3.6 K. Our $`T`$-range in these measurements (0.28 K to 1.4 K) is only about an order of magnitude smaller than $`E_F`$. Most of the 2D carrier systems investigated so far also have $`E_F`$ comparable to the $`T`$-range over which the experiment is done. This raises the possibility of finite-$`T`$ effects causing the metallic-like behavior. In fact, Henini et al. have fitted to their GaAs 2DHS data an expression \[$`\mu /\mu _o\approx 1-(T/E_F)^2`$\] describing the temperature dependence of screening and found that the fit was very good. Our data are not fit well by this equation, but we cannot rule out this effect because the exact expression for temperature-dependent screening depends on details of disorder and material parameters.
Another possible model has been put forth by Pudalov. He suggests that the metallic-like data may be fitted by an empirical dependence
$$\rho (T)=\rho _o+\rho _1\mathrm{exp}(-T_o/T).$$
(1)
The second term is intended to account for an energy gap caused by a spin-orbit interaction. For $`n`$ where our 2DES exhibits a metallic behavior throughout the measured $`T`$-range (traces a to f of Fig. 2), this equation fits our data well through the whole $`T`$-range. The fits are not shown in Fig. 2 because they are indistinguishable from the data. To show the accuracy of the fits, we present representative Arrhenius plots of ($`\rho -\rho _o`$) vs. $`1/T`$ in Fig. 3a. For clarity, only the curves for the high-$`\mu `$ direction data are shown; the curves for the low-$`\mu `$ direction are very similar. Clear exponential behavior is observed for more than a decade of ($`\rho -\rho _o`$) for the highest density traces, but as the density is reduced, the range over which exponential dependence is observed reduces to about a factor of 5. In Fig. 3b we show, as a function of $`n`$, the values of $`\rho _o`$, $`\rho _1`$, and $`T_o`$ deduced from fitting Eq. 1 to the data. As expected, $`\rho _o`$ rises monotonically as $`n`$ is reduced. Also, $`\rho _1`$ is seen to rise smoothly and monotonically, which is more evidence that the fits are meaningful. The $`T_o`$ vs. $`n`$ dependence seen in the lower part of Fig. 3b is qualitatively the same as what Hanein et al. observe for their 2DHS data. Both show a dependence that is close to linear, and that extrapolates to $`T_o=0`$ at $`n=0`$. We note that a decreasing $`T_o`$ with $`n`$ is consistent with $`T_o`$ being related to spin-orbit interaction: the spin-splitting energy in 2D carrier systems due to interface inversion asymmetry indeed typically decreases with decreasing 2D density. It is also interesting to compare the dimensionless ratio $`T_o/E_F`$ in our measurements to those of Hanein et al. Since both $`E_F`$ and $`T_o`$ vary approximately linearly with $`n`$ in the range where the behavior is metallic, this ratio is a constant for each experiment.
For the data of Hanein et al., $`T_o/E_F\approx 0.2`$, while for ours, $`T_o/E_F\approx 0.1`$. Despite a factor of two difference, these ratios are similar enough to suggest that $`T_o`$ and $`E_F`$ may be important physical parameters in all of the systems that show metallic behavior.
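Because Eq. (1) is linear in $`\rho _o`$ and $`\rho _1`$ once $`T_o`$ is fixed, such fits can be sketched with a simple scan over $`T_o`$ plus linear least squares. The data below are synthetic, chosen only to mimic a metallic trace; they are not the measured resistivities:

```python
import numpy as np

# Fit rho(T) = rho0 + rho1 * exp(-T0/T) (Eq. 1) to a synthetic trace.
rng = np.random.default_rng(1)
T = np.linspace(0.28, 1.4, 40)               # K, the measured range
rho = 1.0 + 4.0 * np.exp(-2.0 / T)           # "true" rho0=1, rho1=4, T0=2
rho += 0.005 * rng.standard_normal(T.size)   # small measurement noise

best = None
for T0 in np.linspace(0.5, 4.0, 351):        # scan the nonlinear parameter
    A = np.column_stack([np.ones_like(T), np.exp(-T0 / T)])
    coef, res, *_ = np.linalg.lstsq(A, rho, rcond=None)
    if best is None or res[0] < best[0]:
        best = (res[0], T0, coef)

_, T0_fit, (rho0_fit, rho1_fit) = best
print(f"rho0 = {rho0_fit:.2f}, rho1 = {rho1_fit:.2f}, T0 = {T0_fit:.2f}")
```

An Arrhenius plot of ($`\rho -\rho _o`$) against $`1/T`$ then has slope $`-T_o`$, which is how such fits are displayed in Fig. 3a.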
Taken together, the results of recent experiments make it very difficult to overlook the anomalous low-$`T`$ behavior in these systems. The similarity of the data and the parameters from various systems, and the inability of any current theory to describe them all, strongly suggest that there is new and interesting physics here. A look at Table I shows the similarities among some relevant parameters. The large values of the dimensionless parameter $`r_s`$ (the interparticle spacing measured in units of the effective Bohr radius) and of $`\mu `$ support the idea that electron-electron interaction plays a role in stabilizing the metallic state. The combination of rather small densities and large effective masses that leads to large $`r_s`$ values, on the other hand, also means small values of $`E_F`$. Ironically, precisely because of these small $`E_F`$ values, it is still questionable if it is meaningful to infer the existence of a zero-$`T`$ metallic state from the available finite-$`T`$ data: phenomena such as temperature dependent screening can indeed lead to a decrease in $`\rho `$ with decreasing $`T`$ at temperatures which are not negligible compared to $`E_F`$.
In conclusion, we present data from a new and different 2DES that shows the same metallic-like behavior and apparent metal-insulator transition recently observed in other 2D carrier systems. The generality of this phenomenon begs theoretical explanation.
We would like to thank Y. Hanein, D. Shahar, D. C. Tsui, and J. Yoon for useful discussions. This work was funded by the National Science Foundation.
# Ab initio calculations of the dynamical response of copper
## Abstract
The role of localized $`d`$-bands in the dynamical response of Cu is investigated, on the basis of ab initio pseudopotential calculations. The density-response function is evaluated in both the random-phase approximation (RPA) and a time-dependent local-density functional approximation (TDLDA). Our results indicate that in addition to providing a polarizable background which lowers the free-electron plasma frequency, $`d`$-electrons are responsible, at higher energies and small momenta, for a double-peak structure in the dynamical structure factor. These results are in agreement with the experimentally determined optical response of copper. We also analyze the dependence of dynamical scattering cross sections on the momentum transfer.
Noble-metal systems have been the focus of much experimental and theoretical work in order to get a better understanding of how the electronic properties of delocalized, free-electron-like electrons are altered by the presence of localized $`d`$-electrons. Silver is one of the best understood systems, where the free-electron plasma frequency is strongly renormalized (red-shifted) by the presence of a polarizable background of $`d`$-electrons . Unlike silver, copper presents no decoupling between $`sp`$ and $`d`$ orbitals, and a combined description of these one-electron states is needed to address both structural and electronic properties of this material. Though in the case of other materials, such as semiconductors, electron-hole interactions (excitonic renormalization) strongly modify the single-particle optical absorption profile , metals offer a valuable playground for investigations of dynamical exchange-correlation effects of interacting many-electron systems within a quasiparticle picture . Indeed, ab initio calculations of the dynamical response of a variety of simple metals, as obtained within the random-phase approximation (RPA), successfully account for the experimentally determined plasmon dispersion relations and scattering cross sections . Within the same many-body framework, ab initio calculations of the electronic stopping power of real solids have also been reported .
Since the pioneering investigations by Ehrenreich and Philipp on the optical absorption and reflectivity of Ag and Cu, a variety of measurements of the optical properties of copper has been reported. Nevertheless, there have been, to the best of our knowledge, no first-principles calculations of the dielectric response function of Cu that include the full effects of the crystal lattice. Furthermore, the dynamical density-response function is well known to be a key quantity in discussing one-electron properties in real metals, and it is also of basic importance in the description of low-dimensional copper systems studied in optical and time-resolved femtosecond experiments .
In this Rapid Communication we report a first-principles evaluation of the dynamical density-response function of Cu, as computed in the RPA and a time-dependent extension of local-density functional theory (TDLDA), after an expansion of $`4s^1`$ and $`3d^{10}`$ one-electron Bloch states in a plane-wave basis with a kinetic-energy cutoff of $`75\mathrm{Ry}`$. Though all-electron mixed basis schemes, such as the full-potential linearized augmented plane wave (LAPW) method, are expected to be well suited for the description of the response of localized $`d`$-electrons, plane-wave pseudopotential approaches offer a simple and well-defined scenario to describe ground-state properties and dynamical response functions. This approach has already been successfully incorporated in the description of inelastic lifetimes of excited electrons in copper , and could also be applied to the study of other noble and transition metals.
The key quantity in our calculations is the dynamical density-response function $`\chi (𝐫,𝐫^{};\omega )`$. For periodic crystals we Fourier transform this function into a matrix $`\chi _{𝐆,𝐆^{}}(𝐪,\omega )`$ which, in the framework of time-dependent density-functional theory (DFT), satisfies the matrix equation
$`\chi _{𝐆,𝐆^{}}(𝐪,\omega )=\chi _{𝐆,𝐆^{}}^0(𝐪,\omega )+\sum _{𝐆^{\prime \prime }}\sum _{𝐆^{\prime \prime \prime }}\chi _{𝐆,𝐆^{\prime \prime }}^0(𝐪,\omega )\left[v_{𝐆^{\prime \prime }}(𝐪)\delta _{𝐆^{\prime \prime },𝐆^{\prime \prime \prime }}+K_{𝐆^{\prime \prime },𝐆^{\prime \prime \prime }}^{xc}(𝐪,\omega )\right]\chi _{𝐆^{\prime \prime \prime },𝐆^{}}(𝐪,\omega ).`$ (1)
Here, the wave vector $`𝐪`$ is in the first Brillouin zone (BZ), $`𝐆`$ and $`𝐆^{}`$ are reciprocal lattice vectors, $`v_𝐆(𝐪)=4\pi /|𝐪+𝐆|^2`$ are the Fourier coefficients of the bare Coulomb potential, the kernel $`K_{𝐆,𝐆^{}}^{xc}(𝐪,\omega )`$ accounts for short-range exchange-correlation effects, and $`\chi _{𝐆,𝐆^{}}^0(𝐪,\omega )`$ are the Fourier coefficients of the density-response function of noninteracting Kohn-Sham electrons:
$`\chi _{𝐆,𝐆^{}}^0(𝐪,\omega )={\displaystyle \frac{1}{\mathrm{\Omega }}}{\displaystyle \sum _𝐤^{BZ}}{\displaystyle \sum _{n,n^{}}}{\displaystyle \frac{f_{𝐤,n}-f_{𝐤+𝐪,n^{}}}{E_{𝐤,n}-E_{𝐤+𝐪,n^{}}+\mathrm{\hbar }(\omega +\mathrm{i}\eta )}}\langle \varphi _{𝐤,n}|e^{-\mathrm{i}(𝐪+𝐆)𝐫}|\varphi _{𝐤+𝐪,n^{}}\rangle \langle \varphi _{𝐤+𝐪,n^{}}|e^{\mathrm{i}(𝐪+𝐆^{})𝐫}|\varphi _{𝐤,n}\rangle ,`$ (3)
where the second sum runs over the band structure for each wave vector $`𝐤`$ in the first BZ, $`f_{𝐤,n}`$ are Fermi factors, $`\eta `$ is a positive infinitesimal, and $`\mathrm{\Omega }`$ represents the normalization volume. The one-particle Bloch states $`\varphi _{𝐤,n}(𝐫)`$ and energies $`E_{𝐤,n}`$ are the self-consistent eigenfunctions and eigenvalues of the Kohn-Sham equations of DFT, which we solve within the so-called local-density approximation (LDA) for exchange and correlation effects. The electron-ion interaction is described by a non-local, norm-conserving, ionic pseudopotential in a non-separable form, so as to have a better description of the conduction bands entering Eq. (3).
The calculations presented below have been found to be well converged for all frequencies and wave vectors under study, and they have been performed by including conduction bands up to a maximum energy of $`80\mathrm{eV}`$ above the Fermi level. BZ integrations were performed by sampling on a $`10\times 10\times 10`$ Monkhorst-Pack mesh. In the RPA the kernel $`K_{𝐆,𝐆^{}}^{xc}(𝐪,\omega )`$ is taken to be zero. In the TDLDA the zero-frequency kernel, approximated within the LDA by a contact delta function, is adiabatically extended to finite frequencies. In both RPA and TDLDA, crystalline local field effects appearing through the dependence of the diagonal elements of the interacting response matrix $`\chi _{𝐆,𝐆^{}}(𝐪,\omega )`$ on the off-diagonal elements of the polarizability $`\chi _{𝐆,𝐆^{}}^0(𝐪,\omega )`$ have been fully included in our calculations.
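The solution of the matrix equation above is a standard linear-algebra step: writing $`K=v+K^{xc}`$, it can be recast as $`\chi =(1-\chi ^0K)^{-1}\chi ^0`$. The toy sketch below uses a random $`4\times 4`$ stand-in for $`\chi ^0`$ (not computed Cu matrix elements; a converged calculation involves many $`𝐆`$ vectors):

```python
import numpy as np

# Toy illustration: chi = chi0 + chi0 K chi  =>  chi = (I - chi0 K)^-1 chi0.
rng = np.random.default_rng(2)
nG = 4
chi0 = 0.1 * (rng.standard_normal((nG, nG)) + 1j * rng.standard_normal((nG, nG)))
v = np.diag(1.0 / np.arange(1.0, nG + 1.0))  # stand-in for 4*pi/|q+G|^2
Kxc = np.zeros((nG, nG))                     # RPA: xc kernel set to zero

K = v + Kxc
chi = np.linalg.solve(np.eye(nG) - chi0 @ K, chi0)

# The result satisfies the original Dyson-like matrix equation.
assert np.allclose(chi, chi0 + chi0 @ K @ chi)
```

In the TDLDA case one would replace `Kxc` by the adiabatic LDA kernel; only the matrix `K` changes.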
The properties of the long-wavelength limit ($`𝐪\to 0`$) of the dynamical density-response function are accessible by measurements of the optical absorption, through the imaginary part of the dielectric response function $`ϵ_{𝐆=0,𝐆^{}=0}(𝐪=0,\omega )`$. On the other hand, the scattering cross-section for inelastic scattering of either $`X`$-rays or fast electrons with finite momentum transfer $`𝐪+𝐆`$ is, within the first Born approximation, proportional to the dynamical structure factor
$$S(𝐪+𝐆,\omega )=-\frac{2}{v_𝐆(𝐪)}\mathrm{Im}\left[ϵ_{𝐆,𝐆}^{-1}(𝐪,\omega )\right],$$
(6)
where
$$ϵ_{𝐆,𝐆^{}}^{-1}(𝐪,\omega )=\delta _{𝐆,𝐆^{}}+v_{𝐆^{}}(𝐪)\chi _{𝐆,𝐆^{}}(𝐪,\omega ).$$
(7)
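Given $`\chi `$, the inverse dielectric matrix and the loss function follow directly. Another toy sketch (random stand-ins again, with a negative imaginary diagonal put in by hand so that the loss function comes out positive, as it must physically):

```python
import numpy as np

# Build eps^-1 = 1 + v chi and read S(q+G, w) off the diagonal.
rng = np.random.default_rng(3)
nG = 4
chi0 = 0.03 * (rng.standard_normal((nG, nG)) + 1j * rng.standard_normal((nG, nG)))
chi0 -= 0.2j * np.eye(nG)                    # Im(chi0) < 0 on the diagonal
v = np.diag(1.0 / np.arange(1.0, nG + 1.0))  # stand-in for 4*pi/|q+G|^2

chi = np.linalg.solve(np.eye(nG) - chi0 @ v, chi0)  # RPA: K_xc = 0
eps_inv = np.eye(nG) + chi @ v               # v is diagonal, so order is immaterial

G = 0
S = -2.0 / v[G, G] * eps_inv[G, G].imag      # dynamical structure factor
print(f"-Im eps^-1 = {-eps_inv[G, G].imag:.4f}, S = {S:.4f}")
```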
Fig. 1 exhibits, by solid lines, our results for both real and imaginary parts of the $`ϵ_{𝐆,𝐆}(𝐪,\omega )`$ dielectric function of copper, for a small momentum transfer of $`|𝐪+𝐆|=0.18a_0^{-1}`$ ($`a_0`$ is the Bohr radius), together with the optical data ($`𝐪=0`$) of Ref. (dashed lines). In this low-$`𝐪`$ limit, both RPA and TDLDA dynamical density-response functions coincide, and the dielectric function is obtained from the dynamical density-response function of noninteracting Kohn-Sham electrons. Corresponding values of the so-called energy-loss function $`-\mathrm{Im}\left[ϵ_{𝐆,𝐆}^{-1}(𝐪,\omega )\right]`$ are presented in Fig. 2, and a comparison between the imaginary parts of interacting $`\chi _{𝐆,𝐆}(𝐪,\omega )`$ and noninteracting $`\chi _{𝐆,𝐆}^0(𝐪,\omega )`$ density-response functions is displayed in the inset of this figure, showing that as the Coulomb interaction is turned on the oscillator strength is redistributed. Our results, as obtained for a small but finite momentum transfer, are in excellent agreement with the experimentally determined dielectric function, both showing a double-peak structure in $`-\mathrm{Im}\left[ϵ_{𝐆,𝐆}^{-1}(𝐪,\omega )\right]`$.
In order to investigate the role of localized $`d`$-bands in the dynamical response of copper, we have also used an ab initio pseudopotential with the $`3d`$ shell assigned to the core. The result of this calculation, displayed in Fig. 2 by a dotted line, shows that a combined description of both localized $`3d^{10}`$ and delocalized $`4s^1`$ electrons is needed to address the actual electronic response of copper. On the one hand, the role played in the long-wavelength limit by the Cu $`d`$-bands is to provide a polarizable background which lowers the free-electron plasma frequency by $`2.5\mathrm{eV}`$. We note from Fig. 1 that near $`8.5\mathrm{eV}`$ the real part of the dielectric function ($`\mathrm{Re}ϵ`$) is zero; however, the imaginary part ($`\mathrm{Im}ϵ`$) is not small, due to the existence of interband transitions at these energies which completely damp the free-electron plasmon. On the other hand, $`d`$-bands are also responsible, at higher energies, for a double-peak structure in the energy-loss function, which stems from a combination of band-structure effects and the building up of collective modes of $`d`$-electrons. Since these peaks occur at energies ($`20\mathrm{eV}`$ and $`30\mathrm{eV}`$) where $`\mathrm{Re}ϵ`$ is nearly zero (see Fig. 1), they are in the nature of collective excitations, the small but finite value of $`\mathrm{Im}ϵ`$ at these energies accounting for the width of the peaks.
A better insight onto the origin of the double-hump in the energy-loss function is achieved from Fig. 3, where the density of states (DOS) and the joint-density of states (J-DOS) of Cu are plotted. The high-energy peak present in the J-DOS spectrum at about $`25\mathrm{eV}`$, which appears as a result of transitions between $`d`$-bands at $`2\mathrm{eV}`$ below the Fermi level and unoccupied states with energies of $`23\mathrm{eV}`$ above the Fermi level, is responsible for the peak of electron-hole excitations in $`\mathrm{Im}ϵ`$ and $`\mathrm{Im}\chi ^0`$ at $`\omega =25\mathrm{eV}`$ (see Fig. 1 and the inset of Fig. 2). Hence, there is a combination, at high energies, of $`d`$-like collective excitations and interband electron-hole transitions, which results in a prominent double-peak in the loss spectrum.
Now we focus on the dependence of the energy-loss function on the momentum transfer $`𝐪+𝐆`$. As long as the $`3d`$ shell of Cu is assigned to the core, we find a well-defined free-electron plasmon for wave vectors up to the critical momentum transfer where the plasmon excitation enters the continuum of intraband particle-hole excitations. This free-electron plasmon, which shows a characteristic positive dispersion with wave vector, is found to be completely damped when a realistic description of $`3d`$ orbitals is included in the calculations. At higher energies and small momenta, $`d`$-like collective excitations originate a double-peak structure which presents no dispersion, as shown in Fig. 4. In this figure the RPA dynamical structure factor for $`𝐆=0`$ is displayed, as obtained for various values of $`q`$ along the (100) direction. For larger values of the momentum transfer $`𝐪+𝐆`$, single-particle excitations take over the collective ones, up to the point that above a given cutoff the spectrum is completely dominated by the kinetic-energy term.
In Fig. 5 we show the computed dynamical structure factor for $`|𝐪+𝐆|=1.91a_0^{-1}`$ along the (111) direction, in both RPA (solid line) and TDLDA (dashed line), together with the result of replacing the interacting $`\chi _{𝐆,𝐆}(𝐪,\omega )`$ matrix by its noninteracting counterpart $`\chi _{𝐆,𝐆}^0(𝐪,\omega )`$ (dotted line). The noninteracting dynamical structure factor (dotted line) now reproduces the main features of the full RPA and TDLDA calculations. The double peak of Fig. 2, which is in the nature of plasmons, is now replaced by a less pronounced double-hump originating from single electron-hole excitations. A similar double-peak has been found in the loss spectra of simple metals, which has been understood on the basis of the existence of a gap region for interband transitions. We also note that the effect of short-range exchange-correlation effects, which are absent in the RPA, is to reduce the effective electron-electron interaction, so that the TDLDA dynamical structure factor lies closer than its RPA counterpart to the result obtained for noninteracting Kohn-Sham electrons. The RPA dynamical structure factor of Cu is enhanced by up to 40% by the inclusion, within the TDLDA, of many-body local field corrections.
In summary, we have presented ab initio pseudopotential calculations of the dynamical density-response function of Cu, by including $`d`$-electrons as part of the valence complex. In the long-wavelength limit ($`𝐪\to 0`$), $`d`$-bands provide a polarizable background that lowers the free-electron plasma frequency. $`d`$-electrons are also responsible for a full damping of this $`s`$-like collective excitation and for the appearance of a $`d`$-like double-peak structure in the energy-loss function, in agreement with the experimentally determined optical response of copper. We have analyzed the dependence of the dynamical structure factor on the momentum transfer, and we have found that, for values of the momentum transfer over the cutoff wave vector for which collective excitations enter the continuum of intraband electron-hole pairs, a less-pronounced double-hump originates from the existence of interband electron-hole excitations. Experimental measurements of scattering cross sections in Cu would be desirable for the investigation of many-body effects, which we have approximated within RPA and TDLDA.
We thank P. M. Echenique for stimulating discussions. I.C. and J.M.P. acknowledge partial support by the Basque Unibertsitate eta Ikerketa Saila and the Spanish Ministerio de Educación y Cultura. A.R. acknowledges the hospitality of the Departamento de Física de Materiales, Universidad del País Vasco, San Sebastián, where part of this work was carried out.
# Microscopic Motion of Particles Flowing through a Porous Medium
## 1 Introduction
The transport of particulate suspensions through porous media is a process with numerous industrial applications such as deep bed filtration , hydrodynamic chromatography , migration of fines , ground water contamination , the flow of dilute stable emulsions , and hindered diffusion in membranes . In order to understand the behavior of these systems, one often needs to know the microscopic, or “pore-scale”, behavior of the suspended particles. Consider the example of deep bed filtration, where a suspension is injected into a filter made of porous material, in the hope that the suspended particles will be collected in the filter while clear(er) fluid passes through. A filtrate particle flowing through the pore space may be trapped by the geometric constraint of reaching a pore smaller than its diameter, or by other adhesive mechanisms such as electrostatic or van der Waals forces. Realistic porous media have an intricate randomly-sized and randomly-interconnected pore space with highly nontrivial flow paths. The degree to which suspended particles choose trajectories in the pore space with or without small constrictions, or tend to be attracted or repelled by the walls, or move towards or away from the other suspended particles will determine the dynamics of the filtration process. Recent experiments which focus on such microscopic particle behavior in filtration clearly illuminate how such pore-scale details can change the macroscopic properties of the system.
A useful way of describing the porous medium and particulate motion therein is to note that the pore space of a material like a random sphere pack involves relatively large open regions connected by relatively narrow channels: pores and throats, respectively. A typical pore is connected to several others through the throats. The streamlines for fluid flow, or the paths taken by suspended particles (in the absence of wall adhesion mechanisms), are then roughly unidirectional within the throats but branch at each pore, corresponding to the different connected neighboring pores which are accessible. A key issue for microscopic particulate motion is the “junction rule”: when a suspended particle reaches a pore, which of the neighboring pores does it choose to move towards next?
The necessity for such information is evident when constructing a quantitative description of the system. Consider, for example, the network model for filtration, which represents a filter medium as a ball-and-stick network of nodes interconnected by channels. The trajectory of the suspended particles in the network is largely determined by the motion of the particles at the nodes, which is specified by the junction rule. Furthermore, the interaction between two nearby particles has been found experimentally to be important. For example, a particle which is apparently trapped at some point in the pore space may escape from the trap if another suspended particle passes nearby. This effect can significantly change the distribution of the trapped particles, and such information must be included in the model. The problem which motivates this work is that very little quantitative information on these crucial effects is available. For the junction rule, there have been studies of the related problem of red blood cells passing through bifurcating tubes, where similar but not identical geometries were considered. In the filtration case, only rough estimates exist for both the junction probabilities and the escape rate of trapped particles.
In this paper, we study the microscopic motion of particles suspended in a fluid passing through a porous medium, using Stokesian Dynamics (SD) computer simulations. We construct our model porous media by fixing the positions of a set of spheres, representing the grains of the porous medium, and allowing other particles, representing the filtrate, to move through this fixed bed. The original SD code, which simulated only mobile particles, was modified appropriately to handle fixed particles as well. Given that we are interested here in relatively large non-colloidal particles, whose size is in the micron range, Brownian forces are not included in the calculations. Using this methodology, we describe the filtrate motion at junctions in two and three dimensional model porous media in terms of a quantity called the fractional particle flux (FPF). At a given pore, the FPF for any subsequent connected channel is defined as the fraction of particles choosing it out of all possible exit channels. We find that it is a good approximation to assume that the FPF of a channel is proportional to its cross-sectional area. Although this result agrees with one’s simplest intuitive expectation, we are not aware of any previous quantitative verification.
A suspended particle may spend a relatively long time in a pore before proceeding to an exit channel. To quantify this effect we have also measured the “waiting” time for a particle passing through two and three dimensional model porous media, and for a mobile particle passing near a single fixed particle. These calculations show that the waiting time is dominated by the two-particle interaction between the mobile particle and the fixed particle which divides the two exit channels. We also present an approximate analytic calculation of the waiting time, which is consistent with the numerical results. We further study the effect of another mobile particle on “hesitating” particles – those near bifurcating streamlines with large waiting times. The resulting motion of the two particles displays an unexpectedly rich range of possibilities, such as “push toward”, “push away”, “turn”, and “lead”, which we describe in more detail below.
Another important result is the condition under which geometrically trapped particles escape from the trap through the perturbation of a “bypassing” particle – another suspended particle passing nearby. We construct simple traps consisting of two and three fixed particles in two and three dimensions, respectively, and place a mobile particle in the trap. In agreement with observations, we find that a bypassing particle can “relaunch” the trapped particle through hydrodynamic interactions alone. We determine the trajectories of the mobile particles which cause relaunching as a function of the parameters characterizing the trap, and find that the condition for relaunching depends not only on the geometry of the trap but also on the surroundings. We then study the condition for relaunching in a two dimensional model porous medium, and obtain qualitatively similar results.
## 2 Stokesian Dynamics
Stokesian Dynamics (SD) is a numerical method for dynamically simulating the behavior of solid particles suspended in a viscous fluid at zero Reynolds number, based on the resistance matrix description of particle motion in creeping flow. When the Reynolds number based on the particle length scale is negligible, the generalized hydrodynamic forces $`\vec{F}^H`$ exerted on the particles in a suspension are
$$\vec{F}^H=-\mu R_{FU}(\vec{U}-\vec{U}^{\infty }),$$
(1)
where the components of $`\vec{U}`$ are the generalized particle velocities, $`\vec{U}^{\infty }`$ is the ambient flow velocity evaluated at the particle centers, and we have assumed that there is no bulk shear flow. The term generalized means that both translational and rotational components are included: the velocity vector contains angular velocity components, and torques are included in the force vector, so that $`\vec{U}`$ and $`\vec{F}^H`$ have $`6N`$ components, where $`N`$ is the total number of particles. $`R_{FU}`$ is the configuration-dependent resistance matrix that gives the hydrodynamic forces on the particles due to their motion relative to the fluid. The inverse of the resistance matrix is the mobility matrix $`M_{FU}`$.
We briefly describe the calculation of $`R_{FU}`$ in the SD method; a detailed discussion can be found in the original SD literature. The force density on the surface of each particle is expanded in a series of multipole moments, with terms higher than quadrupole ignored. The far-field (large interparticle distance) value of the mobility matrix $`M_{FU}`$ is expressed in terms of these moments. We invert $`M_{FU}`$ to get the form of the resistance matrix $`R_{FU}`$ relevant to the far-field region, and then add near-field (short interparticle distance) lubrication terms to the matrix. The resulting matrix $`R_{FU}`$ is accurate for both far and near fields, but not necessarily at intermediate distances. To determine accurate values of $`R_{FU}`$ at intermediate distances, the Stokes equations are solved numerically for a system of two spheres at various separations, and the resulting numerical values of $`R_{FU}`$ are included in the present code.
Neglecting inertia, the total force on each particle is zero,
$$\vec{F}^H+\vec{F}^P=0,$$
(2)
where $`\vec{F}^P`$ are the non-hydrodynamic forces acting on the particles, which we specify presently. Combining Eqs. (1) and (2), one can calculate the velocities of the particles, and from the velocities, the positions a short time later. For this new configuration the velocities can again be calculated, and this process can be repeated to generate the trajectories of the particles. In this paper we apply this method to the flow of suspensions through small subregions of rigid porous media. We construct a model porous medium by fixing the positions of certain particles at chosen locations, which requires some modifications of the SD method. The basic equation (1), the integration method for the (force-free) moving particles, and the resistance matrix remain unchanged. However, the additional condition $`\vec{U}=0`$ must be imposed on the fixed particles, and in effect we choose the value of an appropriate non-hydrodynamic force to enforce this condition.
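The update cycle just described, solving Eq. (1) under the force balance Eq. (2) and then advancing positions, can be sketched as follows. This is a minimal illustration, not the authors' code: the resistance matrix here is a diagonal Stokes-drag placeholder rather than the full SD construction, and only translational degrees of freedom are kept.

```python
import numpy as np

def sd_step(positions, R_FU, F_P, U_inf, mu=1.0, dt=1e-2):
    """One explicit time step: solve Eq. (1) with F^H = -F^P, then advect."""
    # Force balance gives U = U_inf + R_FU^{-1} F_P / mu.
    U = U_inf + np.linalg.solve(R_FU, F_P) / mu
    return positions + dt * U.reshape(positions.shape), U

# Two spheres in a uniform ambient flow (1,0,0); translational dofs only.
N = 2
pos = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
R = 6.0 * np.pi * np.eye(3 * N)      # placeholder: isolated-sphere Stokes drag
F_P = np.zeros(3 * N)                # force-free mobile particles
U_inf = np.tile([1.0, 0.0, 0.0], N)
new_pos, U = sd_step(pos, R, F_P, U_inf)
```

With no non-hydrodynamic forces the particles simply advect with the ambient flow; the interesting behavior of the paper enters through the configuration dependence of $`R_{FU}`$, which this placeholder omits.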
In the original SD formulation the velocities of the particles are determined by calculating $`\vec{F}^H`$ from the force-free condition Eq. (2), and inverting Eq. (1). To fix a particle’s position, we apply an additional “gluing force” that balances the hydrodynamic and other non-hydrodynamic forces. The difficulty is that the value of the gluing force is known only as part of the solution of the problem, so we cannot simply invert Eq. (1) to obtain the velocities. We instead calculate the velocities iteratively. For given values of the gluing forces, one can calculate the velocities of all the particles by inverting Eq. (1). The resulting velocities are, in general, not the correct solution: the velocities of the supposedly fixed particles are not zero. We define $`\mathrm{\Sigma }`$ as the sum of the squares of the speeds of the fixed particles, which is then a function of the gluing forces. The correct solution is obtained for the values of the gluing forces that make $`\mathrm{\Sigma }=0`$, and since $`\mathrm{\Sigma }`$ is non-negative, the problem is to find its global minimum in the parameter space of the gluing forces. In other words, the problem becomes the minimization of a function, and the method of steepest descent is effective. In practice, we begin the iterative solution from the gluing force of the previous time step, and deem a solution obtained when the average speed of the fixed particles is less than $`10^{-5}`$ of the ambient velocity. Fortunately, $`\mathrm{\Sigma }`$ turns out to be a smooth function of the gluing forces without other local minima, and typically $`50`$ iterations suffice at each time step.
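The gluing-force idea can be illustrated on a toy problem. Everything below is a stand-in (a random symmetric positive-definite matrix plays the role of the mobility matrix, and only translational degrees of freedom of two "particles" are kept). Because $`\mathrm{\Sigma }`$ is quadratic in the gluing forces, the toy stationarity condition is linear and can be solved directly; the SD code instead descends on $`\mathrm{\Sigma }`$ iteratively, warm-starting from the previous time step.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                 # toy: 2 particles x 3 translational dofs
A = rng.standard_normal((n, n))
M = np.linalg.inv(A @ A.T + n * np.eye(n))   # stand-in SPD mobility matrix
U_inf = np.ones(n)
fixed = np.array([3, 4, 5])           # dofs of the particle to be glued

def velocities(F_glue):
    F = np.zeros(n)
    F[fixed] = F_glue                 # mobile dofs stay force-free
    return U_inf + M @ F

def Sigma(F_glue):
    """Sum of squared fixed-particle velocity components."""
    return float(np.sum(velocities(F_glue)[fixed] ** 2))

# Sigma(F_glue) = |U_inf[fixed] + Mff F_glue|^2 is quadratic, so the toy
# minimum is reached by a single linear solve.
Mff = M[np.ix_(fixed, fixed)]
F_glue = -np.linalg.solve(Mff, velocities(np.zeros(3))[fixed])
```

The gluing force also acts back on the mobile degrees of freedom through the off-diagonal mobility block, which is exactly the hydrodynamic screening by the fixed bed that the method is designed to capture.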
To check the code, we examined a few simple two-particle configurations for which an analytic solution is available. At present, only three-dimensional SD codes for monodisperse particles exist, and in the remainder of this paper the radius of all particles is set to $`1`$. First, we consider head-on collisions, where we fix a particle at $`(0,0,0)`$ and place a moving particle at $`(-r,0,0)`$ in an ambient velocity field $`\vec{U}^{\infty }=(1,0,0)`$. (In all cases considered in this paper, no ambient angular velocity is applied, and since we are interested in the trajectories of spheres, for the most part we suppress the discussion of their individual angular velocities.) Our numerical results for the velocities are compared with those obtained analytically from the published expressions for the resistance matrix $`R_{FU}`$. The two velocities agree to within $`10^{-3}`$ percent for both near and far fields. We then consider tangential motion by placing the moving particle initially at $`(0,r,0)`$, without changing the other parameters. The quality of the agreement is similar to the head-on case. Another check is whether the solutions from the simulations exhibit the required symmetries. We place the moving particle initially at $`(\pm x_0,y_0,0)`$, with the fixed particle and ambient flow field as above. From the obvious geometrical symmetries of this system, the $`x`$ velocity and the absolute values of the $`y`$ velocity and the $`z`$ angular velocity should be the same for the two settings, but the signs of the $`y`$ velocity and the $`z`$ angular velocity should be reversed. For several combinations of $`x_0`$ and $`y_0`$, we confirm that the solution indeed possesses the required symmetry. Lastly, we also check the dependence of the solution on the size of the computational domain, with a periodic boundary condition applied in each direction. We find that a linear box size of $`10,000`$ is enough to ensure that the effect of the boundary is negligible.
Particles in an inertialess suspension cannot overlap with or even touch each other, since the radial component of the lubrication force diverges as two particles approach. In the simulation, however, we find that particles do overlap at times, due to the following numerical subtlety. When a moving particle approaches a fixed one, at sufficiently small interparticle distances the lubrication force indeed becomes large and pushes the particle away. However, since the force grows slowly, the distance at which the repulsion occurs is often very small. In our simulations this distance is typically smaller than $`10^{-13}`$, so in principle one has to calculate the trajectory of the particles to extremely high precision. The problem is that the simulation time at this accuracy is enormous: double precision is inadequate, quadruple precision variables are needed, and furthermore the time step has to be kept very small, less than $`10^{-4}`$. The CPU time required for even a two-particle simulation at this accuracy exceeds 50 hours on a DEC AlphaStation 500/500, so such an SD simulation in a model porous medium is not feasible.
This type of simulation is not only impractical but also unphysical. The surface roughness of ordinary solid particles is typically $`10^{-3}`$ to $`10^{-2}`$ of the radius. As a result the particles are not exactly spherical, and their flow behavior changes significantly when the two-particle separation is at the scale of the roughness. One semi-physical way to include the effects of roughness is to add an extra repulsive force at very short distances. In this paper, we instead follow the simpler procedure of Phung and Brady, who on one hand introduce a cutoff in the resistance matrix computation, and on the other simply ignore small overlaps. When the interparticle gap is smaller than the cutoff value of $`10^{-5}`$, the value of $`R_{FU}`$ at the cutoff is used. Accidental overlaps are allowed, provided the overlap distance is smaller than $`10^{-3}`$.
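The near-field regularization just described amounts to a simple clamp on the gap used in the resistance evaluation. A sketch with the quoted cutoff and overlap tolerance (the function name and error handling are ours, not the SD code's):

```python
# Conventions quoted above: gap clamped at 1e-5 when evaluating R_FU;
# accidental overlaps tolerated up to 1e-3, anything deeper is flagged.
GAP_CUTOFF = 1e-5
MAX_OVERLAP = 1e-3

def effective_gap(center_distance, radius=1.0):
    """Surface gap fed to the resistance functions for a particle pair."""
    gap = center_distance - 2.0 * radius
    if gap < -MAX_OVERLAP:
        raise RuntimeError("overlap beyond tolerance: %.3e" % gap)
    return max(gap, GAP_CUTOFF)
```

Clamping the gap freezes the lubrication singularity at a finite value, which is physically motivated by the surface roughness argument above.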
## 3 Fractional Particle Flux
Most pores in typical porous media have more than one exit channel connected to them, and the path taken by a particle in such a pore in proceeding to the next one depends in detail on microscopic variables such as the local geometry, the particle’s precise location, and the flow field (which in turn depends on the locations of the other particles). It is not feasible to study more than a small subregion of a porous medium in such fine detail, and it is useful to construct a more tractable model based on a network of nodes and links. A key ingredient in such a coarse-grained description is the fraction of particles passing through a given exit path, averaged over some of the microscopic variables. We refer to this quantity as the fractional particle flux (FPF).
Although the FPF is important in understanding porous media transport in general and processes such as filtration in particular, we are not aware of any direct measurement of this quantity. Although it is possible to follow the motion of a suitably tagged suspended particle in some detail through a laboratory porous medium, one would also have to map out the microscopic pore space, and the practical difficulties are evident. A related problem which has received some quantitative study is the flow of red blood cells through bifurcating tubes. When a tube bifurcates into two or more smaller tubes, a blood cell has to choose a branch at the junction. The fractional red cell flux is of great importance in determining the red cell distribution among the microvessels, and several direct and indirect measurements of it exist. The essential problem in the two systems is the same, but the geometries differ somewhat and blood cells are deformable, which may lead to qualitatively different behavior of the FPF.
We construct model porous media with fixed particles in two and three dimensions. First, we consider a two dimensional medium, where particles are confined to a plane, immersed in a three dimensional fluid. Although a two dimensional medium is not realistic, it allows us to study more detailed properties (note that computation time in SD simulations increases roughly as the square of the number of particles). Fortunately, we shall see that its qualitative behavior does not differ from that of a three dimensional medium, which we discuss later. The medium consists of the $`11`$ fixed particles shown in Fig. 1(a). The centers of all particles are in the $`z=0`$ plane, and we choose the coordinates of the lower left particle to be the origin. The “lattice constant” $`a`$ is the distance between neighboring particles in the same column, and also the distance between neighboring columns. The middle column can be shifted vertically, and we choose the $`y`$ coordinate of the center particle to be $`y_c+b`$, where $`y_c=3a/2`$. We call this system, characterized by the two parameters $`a`$ and $`b`$, the “2d-11” geometry. Recall that the current code can handle only monodisperse particles, whose radius is taken as $`1`$. The ambient flow field is $`(1,0,0)`$ in all cases, unless stated otherwise. We insert a moving particle at $`(-2,y_c+d,0)`$, with $`-a/2<d<a/2`$, just upstream of the first column (Fig. 1(b)). After the particle reaches the node behind the first column, it will proceed to either channel A or B depending on the initial parameter $`d`$.
Assuming that the distribution of the particle along the starting line ($`x=-2`$) is uniform, we can calculate the FPF from the range of $`d`$ within which the trajectory of the particle passes through channel A (or B). The FPF for a channel is often plotted against the fractional flow rate through the channel, but in the present simulation the flow rate of a channel is difficult to calculate, so we use the fractional channel width (FCW) instead. We define $`W_i`$ as the width of the $`y`$ interval corresponding to channel $`i`$ within the unit cell $`[y_c-a/2,y_c+a/2]`$, as shown in Fig. 1(b). The FCW of channel $`i`$ is defined as $`W_i`$ normalized by the lattice constant $`a`$.
We consider three values of the lattice constant, $`a=4.25`$, $`4.5`$, and $`5.0`$, and we vary the fractional channel width by changing $`b`$, using five values of $`b`$ for each $`a`$. For a given geometry, determined by $`a`$ and $`b`$, we can measure the FPF of a channel by studying the trajectories of particles starting from different locations characterized by $`d`$. Note that we can determine the FPF with less effort by focusing on a neighborhood of the value of $`d`$ at which the trajectory of the particle terminates at the center particle. The calculation of a typical trajectory requires about $`4`$ CPU hours on a workstation, and about 15 trajectories are needed to determine the FPF for a given geometry with fair accuracy. The FPF for channel A is shown against its FCW in Fig. 2. In this geometry, the FPF for FCW less than $`1/2`$ can be determined simply by symmetry.
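Assuming a single dividing offset, as the text indicates, locating the dividing trajectory is a one-dimensional root find, and the "about 15 trajectories" quoted above is consistent with a bisection to roughly $`10^{-3}`$ resolution. A sketch with a cheap stand-in for the expensive SD trajectory computation (the divider location is hypothetical):

```python
def make_channel(d_star):
    """Stand-in for an SD trajectory run: which channel does offset d take?"""
    return lambda d: "A" if d > d_star else "B"

def fpf_channel_A(channel, a, tol=1e-3):
    lo, hi = -a / 2.0, a / 2.0        # insertion offsets spanning the unit cell
    while hi - lo > tol:              # bisect on the dividing trajectory
        mid = 0.5 * (lo + hi)
        if channel(mid) == "A":
            hi = mid
        else:
            lo = mid
    d_star = 0.5 * (lo + hi)
    # uniform inlet distribution: FPF_A is the fraction of offsets above d_star
    return (a / 2.0 - d_star) / a

channel = make_channel(d_star=-0.9)   # hypothetical divider location
fpf = fpf_channel_A(channel, a=4.5)   # about 13 "trajectory" evaluations
```

Each bisection step costs one full trajectory computation, which is why narrowing the search to the neighborhood of the dividing value saves so much CPU time.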
The most prominent feature of the figure is that all the curves lie very close to the line FPF = FCW. In other words, it is a good approximation to distribute particles in proportion to the exit channel width. Small but finite deviations from this line are observed, especially for large FCW. Somewhat similar behavior is observed in simulations of the flow of red blood cells at a capillary bifurcation, but there are two noticeable differences. In the blood cell case, the FPF is larger than the FCW for large FCW, in contrast to the present case. By inspecting the trajectories, it appears that the difference is caused by the difference in the local geometry around the junction. Another difference is that in the present case the deviation increases as $`a`$ increases, in contrast to the cell case. This is a bit puzzling, since we expect the FPF to follow the relative width for larger lattice constants. Further simulations indicate that the deviation has a complicated dependence on $`a`$ before beginning to decrease at larger values, $`a>10`$.
We next study a model three dimensional porous medium, consisting of 13 fixed particles in three layers, forming a deformable hexagonal close packed (hcp) structure. The first and third layers contain three particles each, and the second (middle) layer contains seven, as shown in Fig. 3(a), where the layers are stacked in the $`x`$-direction normal to the plane of the figure. The $`y`$ and $`z`$ coordinates of the particles in the first and third layers are in register, while the middle layer can be translated in its plane to produce a family of porous structures. Specifically, the $`y`$ coordinate of the center particle in the second layer lies on the dotted (center) line of the first layer, and the layer is shifted in the $`z`$ direction by a variable amount $`b`$. When $`b=0`$ the orthocenter (small circle) in the first layer coincides with the center particle in the second layer, and the particles form precisely a hexagonal close packed cell. The second parameter describing the geometry is the distance between nearest-neighbor particles in the same layer, the lattice constant $`a`$. The interlayer distance is fixed at $`a(2/3)^{1/2}`$ in all cases studied. This system is referred to below as the “3d-13” geometry.
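Our reading of the 3d-13 construction can be made concrete as follows. The orientation conventions and particle ordering are our assumptions; only the stated counts and distances are taken from the text. With $`b=0`$, each outer-layer particle then sits at distance exactly $`a`$ from the middle-layer center particle, as expected for an hcp cell:

```python
import numpy as np

def geometry_3d13(a, b):
    """13 fixed-particle positions (x, y, z); layers stacked along x."""
    h = a * np.sqrt(2.0 / 3.0)                 # interlayer distance (from text)
    # middle layer: center particle plus 6 neighbors at distance a, shifted by b in z
    ring = [(h, a * np.cos(t), a * np.sin(t) + b)
            for t in np.pi / 3.0 * np.arange(6)]
    mid = ring + [(h, 0.0, b)]
    # outer layers: triangles of side a whose centroid projects, for b = 0,
    # onto the middle-layer center particle
    r = a / np.sqrt(3.0)
    tri = [(r * np.cos(t), r * np.sin(t))
           for t in np.pi / 3.0 + 2.0 * np.pi / 3.0 * np.arange(3)]
    outer1 = [(0.0, y, z) for (y, z) in tri]
    outer3 = [(2.0 * h, y, z) for (y, z) in tri]
    return np.array(outer1 + mid + outer3)

pts = geometry_3d13(10.0, 0.0)
```

The interlayer spacing $`a(2/3)^{1/2}`$ combined with the in-plane triangle radius $`a/\sqrt{3}`$ gives the contact distance $`a`$ between layers, which is the defining property of the hcp stacking.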
The ambient flow again has unit strength in the $`x`$ direction normal to the layers. We insert a mobile particle in front of the gap in the first layer and, depending on its initial coordinates, it is eventually carried around the central particle in the second layer and proceeds to one of the exit channels A, B or C shown in Fig. 3(b). The initial $`y`$ and $`z`$ coordinates of the mobile particle lie within the “unit cell”—the largest triangle in Fig. 3(b), where the projection of the particles onto the $`yz`$ plane is shown. Again, the question is the fractional particle flux through each channel. Due to symmetry, the FPFs of channels A and B are identical, and all three fluxes add to unity, so it suffices to give the FPF of channel C. As in the previous simulation, calculating the FPF involves finding the ranges of initial coordinates such that the trajectory of the mobile particle passes through channel C. However, a rather larger number of trajectories per cell, about 30, is required here, so we consider only the case $`a=10`$ and the four values $`b=-2,0,2,4`$. The calculation of one trajectory now requires about $`6`$ CPU hours.
The measured FPF for channel C is plotted against its fractional channel area (FCA) in Fig. 4. The FCA of channel $`i`$, which is an analog of the FCW in three dimensions, is defined as the projected area of the triangle $`W_i`$, normalized by the area of the unit cell—$`\sqrt{3}a^2/2`$. The FPF curve lies again very close to the FPF = FCA line, and the number of particles passing through a channel is to a very good approximation proportional to its channel area. Small deviations from the FPF = FCA line can be seen, similar to those in the 2d-11 geometry.
From these simulations of motion in two and three dimensional porous media, we find that the fractional particle flux for a channel is, to a good approximation, proportional to its channel width (in 2d) or area (in 3d). Since a particle at a given initial location always passes through the same channel, we conclude that the “mixing” at the node is not significant, which is consistent with observations of the flow of red blood cells. A contrasting assumption is commonly made for the motion of passive tracers in porous media flow. The alternative “complete mixing” rule assumes that a tracer particle (effectively, a particle small enough not to disturb the flow field) in a pore chooses an exit pore based only on the relative flux there, independent of its position of entry into the pore. Evidently, this assumption is reasonable only if the local Péclet number is small enough for the particle to diffuse substantially within the pore and lose memory of its initial streamline.
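At the network level, the area-proportional routing supported by these results reduces to a weighted random choice at each node; the "complete mixing" rule would weight by flux instead of cross-section. A toy sketch with hypothetical channel areas:

```python
import random

def pick_exit(weights, rng):
    """Choose exit i with probability weights[i] / sum(weights)."""
    x = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        x -= w
        if x <= 0.0:
            return i
    return len(weights) - 1

rng = random.Random(1)
areas = [3.0, 1.0]             # hypothetical exit-channel cross-sections
counts = [0, 0]
for _ in range(20000):
    counts[pick_exit(areas, rng)] += 1
frac_A = counts[0] / 20000.0   # should approach 3/4 under the area rule
```

In a full network simulation the same `pick_exit` call would simply receive flow rates instead of areas to switch between the two junction rules.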
## 4 Waiting Time near a Junction
Consider a particle in a pore with, for example, two possible exit paths. In the absence of suspended particles, the flow field has a dividing streamline terminating on the fixed particle, and a passive fluid particle on this streamline would reach the fixed particle only after infinite time. Suspended particles alter this picture somewhat, but there is still a single mathematical trajectory which reaches the fixed particle after infinite time, and when a particle lies close to this trajectory its velocity is small and it “waits” in the pore before proceeding to the next. Such behavior was observed in simulations of a particle passing through a bifurcating tube. Here, we study in detail the parameter dependence of the waiting time for model porous media.
We first study the waiting time numerically for the 2d-11 geometry. A mobile particle is inserted at $`(-2,y_c+d)`$, just upstream of the first layer (see Fig. 1(b)). This particle may dally in front of the center particle which divides channels A and B, and to quantify this effect we define the waiting time $`T_w`$ as the time the mobile particle spends in the interval $`(x_c-2)-0.1<x<(x_c-2)+0.1`$, where $`x`$ ($`x_c`$) is the $`x`$ coordinate of the mobile (center) particle. In Fig. 5(a), we plot $`T_w`$ against $`d`$ for $`a=4.5`$ and five values of $`b`$. (For the other values of $`a`$, 4.25 and 5.0, the qualitative features of the results are unchanged.) As seen in the figure, $`T_w`$ is sharply peaked around a $`b`$-dependent value $`d_{\mathrm{peak}}`$, with waiting times as much as 100 times larger than the large-$`d`$ values. Furthermore, the peaks are essentially all the same: when we superpose them by plotting $`T_w`$ against $`d-d_{\mathrm{peak}}`$ in Fig. 5(b), they roughly collapse to a single curve. Since the individual peaks intuitively correspond to the moving particle stagnating near the dividing fixed particle in the second layer, and do not seem to depend on exactly where the latter is located, one suspects that the waiting time is only sensitive to the two-particle interaction between the mobile and the center particles.
Similar behavior is found in three dimensions. In the 3d-13 geometry, we place a mobile particle in front of the first layer, along the vertical line passing through the center particle of the second layer in Fig. 3(b). Here, we define $`d`$ as the difference in $`z`$ coordinate between the mobile particle and the center particle of the second layer, and the waiting time $`T_w`$ as the amount of time the mobile particle spends in $`(x_c-2)-0.1<x<(x_c-2)+0.1`$, where $`x`$ ($`x_c`$) is the coordinate of the mobile (center) particle. In Fig. 6(a), we show $`T_w`$ for $`a=10`$ and four values of $`b`$. These curves are similar to those for the 2d-11 geometry and, when shifted by their peak positions $`d_{\mathrm{peak}}`$, again collapse to the single curve in Fig. 6(b). The collapsed curve is very similar to that of the 2d-11 geometry, further supporting the idea that the effect of the particles surrounding the mobile and center particles is not significant.
To pursue this simplifying idea, we consider the interaction of a mobile particle with only a single fixed particle. We fix the latter at the origin, and place the mobile particle initially at $`(-5,d,0)`$. We again define the waiting time $`T_w`$ as the time the mobile particle spends in $`-2.1<x<-1.9`$. In Fig. 7, we plot the waiting time for the two-particle system in a log-log plot, along with those of the 2d-11 ($`a=4.5,b=0`$ and $`b=0.45`$) and the 3d-13 geometries ($`a=10,b=0`$). Aside from a difference in overall scale, the waiting time variation for the different cases is identical, verifying that the two-particle interaction is the dominant factor. (The origin of the difference in time scale is simply a matter of a different superficial velocity: the asymptotic velocity far from the packing is the same in all cases, so the scale of the velocity near the two-particle subsystem depends on the superficial velocity within the packing. The 2d-11 geometry is most closely packed and has the lowest permeability, the lowest velocities, and the longest times. The two-particle system is the most open and has the highest velocities and shortest times.)
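The waiting-time diagnostic itself is just a time integral over a slab. A sketch on a synthetic trajectory that moves 100 times slower inside the slab, so that $`T_w`$ should come out close to $`0.2/0.01=20`$ (the trajectory here is invented for illustration, not an SD result):

```python
def waiting_time(ts, xs, x_lo=-2.1, x_hi=-1.9):
    """Total time the sampled trajectory spends inside the slab x_lo < x < x_hi."""
    T = 0.0
    for k in range(len(ts) - 1):
        if x_lo < xs[k] < x_hi:
            T += ts[k + 1] - ts[k]
    return T

# Synthetic trajectory: unit speed outside the slab, 100x slower inside.
dt, t, x = 1e-3, 0.0, -3.0
ts, xs = [t], [x]
while x < -1.0:
    v = 0.01 if -2.1 < x < -1.9 else 1.0
    x += v * dt
    t += dt
    ts.append(t)
    xs.append(x)
Tw = waiting_time(ts, xs)
```

In the actual simulations the slowdown inside the slab is not a fixed factor but the lubrication-dominated behavior analyzed next.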
The two-particle system can be treated analytically. We fix a particle at the origin and insert a mobile particle at $`(-2-d,0,0)`$, where $`0<d\ll 1`$ in the long waiting time limit. The ambient velocity field is $`\vec{U}^{\infty }=(\mathrm{cos}\theta ,\mathrm{sin}\theta ,0)`$, and the mobile particle moves in the $`z=0`$ plane (Fig. 8). From Eq. (1), the forces on the particles are
$`\left(\begin{array}{c}\vec{F}^1\\ \vec{F}^2\end{array}\right)=\mu \left(\begin{array}{cc}A_{11}& A_{12}\\ A_{21}& A_{22}\end{array}\right)\left(\begin{array}{c}\vec{U}^{\infty }-\vec{U}^1\\ \vec{U}^{\infty }-\vec{U}^2\end{array}\right),`$ (9)
where $`\vec{F}^i`$ and $`\vec{U}^i`$ are the hydrodynamic force and velocity of particle $`i`$, and we ignore the $`z`$ components of these vectors. At these near-touching distances, the contribution from the linear motion of the particle is much larger ($`1/\xi `$, where $`\xi `$ is the gap width) than that from the rotation ($`\mathrm{ln}(1/\xi )`$), so we ignore the contribution of the rotation of the particle to the forces. Imposing the force-free condition $`\vec{F}^1=0`$ and the fixing condition $`\vec{U}^2=0`$,
$`(X_{11}^A+X_{12}^A)U_x^{\infty }`$ $`=`$ $`X_{11}^AU_x^1`$
$`(Y_{11}^A+Y_{12}^A)U_y^{\infty }`$ $`=`$ $`Y_{11}^AU_y^1,`$ (10)
where $`X_{ij}^A`$ and $`Y_{ij}^A`$ are two-particle resistance functions. When the particles nearly touch,
$$U_y^1=\left(1+\frac{Y_{12}^A}{Y_{11}^A}\right)U_y^{\infty }\approx \left(1+\frac{Y_{12}^A}{Y_{11}^A}\right)b,$$
(11)
where $`b`$ is the impact parameter shown in Fig. 8. Substituting the near-field forms of the resistance functions, we have
$$U_y^1\approx \frac{3(A_{11}^Y(1)+A_{12}^Y(1))}{\mathrm{ln}(1/\xi )}b.$$
(12)
Here, $`A_{ij}^Y(1)`$ are known constants satisfying $`A_{11}^Y(1)+A_{12}^Y(1)>0`$, and $`\xi `$ is the gap distance between the two particles. Since the logarithmic factor varies slowly, we can ignore its variation, and the above equation becomes
$$U_y^1\approx \alpha b,$$
(13)
where $`\alpha `$ is a new constant. Note that the approximations made are valid at near-touching distances, within a certain range of $`\xi `$. We can estimate the waiting time as that required for the particle to proceed from $`y`$ to $`y+\delta y`$. Then,
$$T_w\approx \int _y^{y+\delta y}\frac{dy}{U_y^1}\approx \frac{1}{\alpha }\mathrm{ln}\left(1+\frac{\delta y}{y}\right),$$
(14)
where we use that $`b\approx y`$ at near-touching distances.
We determine $`\alpha `$ and $`\delta y`$ from a least-squares fit of Eq. (14) to the measured times for the two-particle simulation. The waiting time from Eq. (14) with these parameters ($`\alpha =0.166`$ and $`\delta y=0.329`$) is shown along with the measured waiting times in Fig. 7. The overall shape of the analytic curve agrees very well with the numerical computations for small values of the impact parameter, while the deviation at large $`d`$ is expected, since Eq. (14) holds only if the particles nearly touch.
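The fit of Eq. (14) can be reproduced on synthetic data: for $`d\ll \delta y`$ the model is linear in $`\mathrm{ln}d`$, so an ordinary straight-line fit recovers both parameters. The "measured" times below are generated from the quoted values $`\alpha =0.166`$ and $`\delta y=0.329`$, not from the actual simulations:

```python
import numpy as np

def model(d, alpha, dy):
    """Eq. (14): T_w = (1/alpha) ln(1 + dy/d)."""
    return np.log1p(dy / d) / alpha

d = np.logspace(-4, -2, 40)          # small impact parameters, d << delta_y
Tw = model(d, 0.166, 0.329)          # synthetic stand-in for measured times

# For d << dy, T_w ~ -(1/alpha) ln d + (1/alpha) ln dy: a line in ln d.
slope, intercept = np.polyfit(np.log(d), Tw, 1)
alpha_hat = -1.0 / slope
dy_hat = np.exp(-intercept / slope)
```

The logarithmic divergence of the waiting time as $`d\to 0`$ is what makes the log-log comparison in Fig. 7 the natural presentation.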
Given the agreement among the waiting times in the various systems, it is reasonable to conclude that two-body interactions control the tail of $`T_w`$. Two qualifications are in order, however. The porous systems we have considered are relatively “open”, and it is evident which fixed particle the mobile one interacts with. In a densely packed porous medium, particularly one involving heterogeneous shapes and sizes, some ambiguity may be present. Secondly, one may ask about the effects of other mobile particles. When the suspension is dilute, one may distinguish between the case of a mobile particle approaching another that is waiting in a pore, the subject of the next section, and the effects of perturbations in the velocity field induced by more distant suspended particles. An accurate treatment of the latter question requires a much more extensive set of simulations than we can provide at this time, but an approximate treatment may be given by considering the effect of noise on the waiting time. We used the 2d-11 geometry, with $`a=4.5`$ and $`b=0.9`$, and oscillated the particle at the lower left to provide a perturbation on the trajectory of the mobile particle. The amplitude of the oscillation was $`1`$ and its frequency $`0.1`$, comparable to the time scale of the particle motion. The measured waiting time with noise is not substantially different from that without, except that the peak is somewhat rounded, and we conclude that perturbations of this form do not alter the waiting time significantly.
## 5 Perturbation of a Waiting Particle
As discussed in the previous section, a particle may spend a significant amount of time around a junction, and we now consider the perturbations induced by other mobile particles in the vicinity, i.e., the effect of a “bypassing” particle on a “waiting” particle. This represents one special case of the hydrodynamic interaction between two mobile particles in a porous medium, but a particularly important one in applications such as deep bed filtration, where slow particles are likely to adhere or be left behind in the filter.
In the previous section, it was shown that the dynamics of particles moving slowly through pores is controlled by two-particle interactions, so we first consider the effect of a third mobile particle on a simple two-particle system. We fix a particle at the origin, and insert mobile particles at $`(-2.2,d_w,0)`$ and $`(-5,d_p,0)`$. The first mobile particle would wait near the fixed one if alone, and the second mobile particle perturbs it. We measure the waiting time $`T_w`$, again defined as the amount of time the waiter spends in $`-2.1<x<-1.9`$.
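The waiting-time bookkeeping in this setup is simple enough to make concrete. The sketch below is our own illustration (not the authors' SD code), assuming the trajectory's $`x`$-coordinate is sampled at a fixed time step:

```python
def waiting_time(xs, dt, x_lo=-2.1, x_hi=-1.9):
    """Total time the sampled x-coordinates spend inside [x_lo, x_hi]."""
    return dt * sum(1 for x in xs if x_lo <= x <= x_hi)

# A particle that slows down near the junction contributes many samples
# inside the window; a fast bypass contributes few.
slow = [-3.0, -2.5, -2.05, -2.0, -1.97, -1.93, -1.5, -1.0]
print(waiting_time(slow, dt=0.25))   # -> 1.0 (four samples in the window)
```

In an actual simulation the window and time step would be those of the SD trajectory; the counting itself is unchanged.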
In Fig. 9(a), we show the waiting time for $`d_w=0.1`$, plotted against $`d_p`$. For large and small values of $`d_p`$, the waiting time approaches that of the unperturbed particle ($`T_w\approx 10`$), but the notable features in the figure are a minimum at $`d_p=-0.5`$ and a maximum at $`d_p=0.6`$. The origin of these extrema can be understood by inspecting the trajectories of the particles. Around the value of $`d_p`$ at which the minimum of $`T_w`$ occurs, the perturbing particle passes below the fixed particle and pushes the waiting particle away from it. The waiting particle then easily escapes from the junction. Around the local maximum, the perturbing particle passes above the fixed particle and pushes the waiting particle towards it, making it harder for the waiting particle to escape from the junction.
For smaller values of $`d_w`$, the behavior becomes more complicated. We show the waiting time for $`d_w=0.01`$ against $`d_p`$ in Fig. 9(b). Here again, for small and large values of $`d_p`$, the waiting time approaches that of the unperturbed particle. However, there are now three maxima and three minima in the waiting time, compared to one each in the previous case. We label the minima as A ($`d_p=-2.0`$), B ($`d_p=-0.5`$), and C ($`d_p=0.6`$), and the maxima as D ($`d_p=-0.9`$), E ($`d_p=-0.1`$), and F ($`d_p=2.5`$). The minimum A is analogous to the “push away” case seen for $`d_w=0.1`$. Around minima B and C, the waiting particle escapes from the junction by following the perturbing particle. The perturbing particle, which seems to form a temporary bound state with the waiting particle, makes it easier for it to escape. In other words, it “leads” the waiting particle from the junction. The minimum B (C) occurs when the perturbing particle leads below (above) the fixed particle. We next consider the maxima; at D, the perturbing particle initially leads the waiting particle. However, the bond between the two breaks when the waiting particle cannot catch up to the leading particle. The waiting particle then changes its direction (“turns”), and proceeds to the opposite channel. This series of events increases the waiting time. The maximum E is due to a “head on” collision: when all three particles lie close to a straight line, we expect a large waiting time, since any motion orthogonal to the line, which is essential for escape, takes time. The last maximum (F) is analogous to the “push toward” case of $`d_w=0.1`$. As $`d_w`$ is decreased further, we observe the same mechanisms in the qualitative behavior of the waiting time. We have gone as low as $`d_w=0.0001`$, with no new phenomena appearing.
Next we ask whether the above “moves” are also observed in more realistic geometries, by studying the waiting time in the 2d-11 geometry (with $`a=5`$ and $`b=0`$). We insert one mobile particle at $`(x_c-2.2,y_c+d_w,0)`$, where $`(x_c,y_c)`$ are the coordinates of the center particle. A mobile perturbing particle is placed at $`(x_c-7,y_c+d_p,0)`$, and we measure the waiting time of the first mobile particle for a few combinations of $`d_w`$ and $`d_p`$. In Fig. 9(c), we plot the waiting time against $`d_p`$ for $`d_w=0.0001`$. The curve is not very different from that of the previous case, $`d_w=0.01`$. There are three maxima at $`d_p=-1.4,0.4`$ and $`1.6`$, and two minima at $`d_p=-0.1`$ and $`d_p=0.9`$, but the waiting time does not yet approach its unperturbed value in the current range of $`d_p`$. The origin of the extrema can again be connected to particular moves of the two particles. The three maxima are due to the “turn” of the waiting particle. The motion near the minimum at $`-0.1`$ is dominated by the “push away” move, and at the other minimum by the “lead” move. Note that in this geometry with many fixed particles, a single trajectory can contain multiple moves. For example, both the maximum at $`0.4`$ and the minimum at $`0.9`$ involve both the lead and turn moves. Near the maximum, the turn move gives the dominant contribution to the waiting time, while the lead move dominates near the minimum.
The waiting times for the 2d-11 configuration are not exactly the same as those for the three-particle configuration, which is not entirely surprising. Not only is the sequence of moves different, but new ones are present, and it would be difficult to infer the waiting time for a realistic porous medium from these simple studies alone. It is clear that such three- or multi-body interactions have a significant quantitative effect on transit times for particles through a porous medium, and this effect could at best be captured in an average way in simplified models, such as those based on network representations of porous media.
## 6 Relaunching of Trapped Particles
One of the important mechanisms for the capture of particles suspended in a fluid flowing through a porous medium (see, e.g., ) is geometrical trapping (or straining), where particles are caught in constrictions smaller than their diameter. It has been observed experimentally that such particles can escape (or be “relaunched”) from the trap due to another particle passing near the trapped one . These authors argue, albeit without direct evidence, that the relaunching is caused by hydrodynamic interactions between the two particles. These experiments also indicate that relaunching qualitatively changes the distribution of the trapped particles, and in particular the efficiency of a filter. Thus an understanding of the relaunching mechanism and its quantitative measurement are important for understanding the long-time behavior of particulate systems such as deep bed filters. We now consider this question within our model porous media, show that relaunching does occur through hydrodynamic interactions alone, and then estimate the relaunching rate.
We first study a two dimensional system, where all particles are confined to the $`z=0`$ plane. We form a trap using two fixed particles, whose geometry is determined by the distance $`r`$ between the two particle centers and the angle $`\theta `$ between the line joining the centers and the $`y`$-axis (Fig. 10). The coordinates of the two particles are $`(\pm (r/2)\mathrm{sin}\theta ,\pm (r/2)\mathrm{cos}\theta ,0)`$, and the ambient flow is $`(1,0,0)`$. We insert a mobile particle in the trap, barely touching the two fixed particles as shown in Fig. 10. In the absence of an additional particle, the mobile particle remains in the trap, and its exact coordinates are determined within the SD simulation. We then insert another mobile particle at $`(-5,d)`$, which is carried near the trap, and ask under what conditions the trapped particle is dislodged. Typically, for given values of $`r`$ and $`\theta `$, there is a finite range of $`d`$ where relaunching is observed: the $`r=2`$ results, for example, are plotted in Fig. 11. This behavior is qualitatively reasonable, since as $`\theta `$ increases, the “barrier” becomes more aligned with the ambient flow. (For $`r=2`$, if $`\theta `$ is much larger than $`30^{\circ }`$, we expect that the mobile particle will not touch the lower fixed particle, and will move away from the trap even in the absence of any disturbance.) We next fix $`\theta `$ at $`10`$ degrees, and study the dependence of the relaunching condition on $`r`$. We find that relaunching becomes less frequent as $`r`$ increases, and none at all is observed for $`r=3`$, which again can be understood in terms of the stability of the trapped particle. In order to calculate the average relaunching probability, one has to average over all possible trap configurations (over $`r`$ and $`\theta `$) and over the trajectory of the perturbing particle (over $`d`$). A very rough estimate of the probability, based on data similar to that above, is on the order of a few percent.
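The trap geometry of Fig. 10 is fully specified by $`(r,\theta )`$. As a small illustration (a hypothetical helper, not part of the SD code), the fixed-particle centres can be generated as:

```python
import math

def trap_centers(r, theta_deg):
    """Centers of the two fixed trap particles: a segment of length r,
    tilted by theta (degrees) away from the y-axis, centred on the origin."""
    th = math.radians(theta_deg)
    top = (0.5 * r * math.sin(th), 0.5 * r * math.cos(th), 0.0)
    bottom = tuple(-c for c in top)   # mirror image through the origin
    return top, bottom

# For r = 2 (touching unit spheres) the two centers are always r apart.
top, bottom = trap_centers(2.0, 10.0)
print(math.dist(top, bottom))
```

Scanning the relaunch interval in $`d`$ then amounts to running the SD simulation for each perturbing-particle offset and recording whether the trapped particle leaves the trap.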
An amusing aspect of relaunching is that, contrary to what one might expect, the perturbing particle does not “push” the trapped one away from the trap. Rather, we find that the perturbing particle always “leads” the trapped particle from the trap (see the previous section for the precise definitions of push and lead).
Next, we consider how relaunching is affected by a porous medium – the other fixed particles surrounding the trap. To this end, we replace the center particle in the 2d-11 geometry by the two-particle trap just discussed, giving a 12-particle porous medium characterized by the lattice constant $`a`$, the vertical shift of the second layer $`b`$, the distance between the two trap particles $`r`$, and the tilt angle $`\theta `$. We insert one mobile particle in the trap, and a second mobile particle at $`(-2,d,0)`$ just in front of the first layer. In general we find that while relaunching still occurs, the effect is noticeably suppressed by the surrounding particles. For example, if we fix $`a=8`$, $`b=0`$ and $`r=2`$ while varying $`\theta `$ and $`d`$, again no relaunching is observed for $`\theta =0`$; for $`\theta =10^{\circ }`$ the $`d`$-interval for relaunching is too narrow to be worth determining, and for $`\theta =20^{\circ }`$ relaunching is observed in the interval $`0.5\le d\le 0.9`$, narrower than in the open geometry. Again the mechanism at the individual trajectory level is that the perturbing particle leads the trapped particle away from the trap. Thus, while relaunching is observed in a model porous medium as well as in an open system, its likelihood is considerably reduced, by a factor of 5–10 for these parameters. In other cases, for example significantly smaller values of $`a`$, relaunching is again suppressed if not eliminated. In qualitative terms, one might say that the surrounding fixed particles have the effect of suppressing the velocity perturbations due to the mobile particle (via porous-medium hydrodynamic screening) as well as constraining the phase space available for the trapped one.
In order to check whether the dimensionality is relevant to relaunching, we considered a three dimensional triangular trap, consisting of three particles with a mobile particle trapped in the middle, and the ambient flow field orthogonal to the triangular plane. We find that relaunching occurs in a similar manner, with the trapped particle following the “leading” particle. The relaunching rate differs from the two dimensional case, however, and given the large parameter space involved, it is not feasible to provide a quantitative estimate of the rate for any given porous medium.
## 7 Summary
We have studied the microscopic behavior of particles suspended in fluid passing through a porous medium using Stokesian Dynamics numerical calculations. The porous medium was constructed by fixing the positions of certain particles in a fluid, with the appropriate modifications in the code, and the remainder were allowed to move under the action of an ambient flow and the hydrodynamic interactions due to all fixed and moving particles. We measured the fractional particle flux for two and three dimensional model porous media. We find that the FPF for a channel is, to an excellent approximation, proportional to its fractional channel width (in two dimensions) or area (in three). The details of the distribution seem to be highly dependent on the local geometry, in particular the distribution of particles at the channel entrances, so it is difficult to draw general conclusions about its form.
We examined the waiting time distribution for particles moving slowly in pore junctions, typically caused by a bifurcating flow path due to a particular fixed particle at the pore boundary. We compared the behavior of appropriate particle trajectories in two and three dimensional model porous media to motion under two-body interactions alone, treated either numerically or analytically. The results are essentially the same in all cases and indicate that the waiting time is dominated by the interaction between the mobile particle and the fixed particle which divides the possible flow paths.
We then considered the perturbations to a “waiting” particle near a junction due to a second mobile “bypassing” particle. We find that the interaction between the two particles leads to an unexpected and rich behavior, displaying several types of motions as the two mobile particles interact with each other. An important related quantity is the relaunching rate of geometrically trapped particles, representing the likelihood that a particle caught in a geometrical trap is released by the effects of a second particle nearby. We constructed simple traps in two and three dimensions, in which particles are trapped in a constriction. We find that the trapped particles can indeed be relaunched from the trap by this mechanism, and that the relaunching rate depends sensitively on the surrounding geometry of the trap.
A principal motivation for this work is to develop computational approaches to the dynamics of processes such as deep bed filtration, where there are innumerable microscopic details, by replacing them with a coarser-grained but tractable network description. Network modeling of fluid or passive-tracer flow in porous media seems to capture many of the relevant aspects, and we have been guided by the detailed experimental observations cited above in identifying pore-scale mechanisms to understand. This work has succeeded in part, in that we have shown that effects such as hydrodynamic relaunching have a firm basis, and we have explored many of the microscopic interactions controlling filtration dynamics. We are limited principally by the range of geometries we can construct, in particular the restriction to monodisperse particles, and by the relative slowness of SD computations in general. Future work will, we trust, alleviate these restrictions.
## Acknowledgment
We thank J. Brady and T. Phung for providing the Stokesian Dynamics code, and for helping us with its use, and E. Guazzelli for extensive discussions of the filtration experiments which motivated this study. This work was supported by the Department of Energy under Grant No. DE-FG02-93-ER14327. One of us (J.L.) is supported in part by SNU-CTP and Korea Science and Engineering Foundation through the Brain-Pool program. The simulations were performed at the Systems Engineering Research Institute in Korea.
## Figure Captions
(a) A two dimensional model porous medium consisting of $`11`$ spheres in a plane, characterized by the “lattice constant” $`a`$ and the vertical displacement $`b`$. (b) Possible trajectories A and B of a moving particle (shaded), initially at $`(-2,y_c+d,0)`$, in the ambient velocity field $`(1,0,0)`$.
The fractional particle flux (FPF) for channel A plotted against its fractional channel width (FCW) for the $`2d11`$ geometry, for $`a=4.25,4.5`$ and $`5.0`$. The dotted line represents FPF = FCW.
(a) The first and second layers, respectively, of a three dimensional porous medium. The third layer is identical to the first, and they are spaced in the (out-of-plane) $`x`$ direction by $`a\sqrt{2/3}`$, where the (in-plane) lattice constant is $`a`$. (b) A mobile particle is initially placed within the triangular “unit cell” at a small value of $`x`$, and then passes through one of the three channels (A, B or C). The figure here shows the projection onto the $`yz`$ plane.
Fractional particle flux (FPF) passing through channel C against its fractional channel area (FCA) for the $`3d11`$ geometry with $`a=10`$. The dotted line represents FPF = FCA.
(a) Waiting time $`T_w`$ for the 2d-11 geometry with $`a=4.5`$ and several values of $`b`$. (b) The curves in (a) are shifted by $`d_{\mathrm{peak}}`$, whereupon they approximately collapse into a single master curve.
(a) Waiting time $`T_w`$ for the 3d-13 geometry plotted against $`d`$, with $`a=10`$ and four values of $`b`$. (b) The curves in (a) are shifted by $`d_{\mathrm{peak}}`$, and again seem to collapse to a single master curve.
Waiting time for a two particle system is shown together with those for the 2d-11 ($`a=4.5,b=0`$ and $`b=0.45`$) and the 3d-13 ($`a=10,b=0`$) geometries, along with the analytic estimate Eq. (14) with $`\alpha =0.166`$ and $`\delta y=0.329`$.
Geometry of the two-particle system used for the analytical estimate of $`T_w`$. A particle (shaded) is fixed at $`(0,0)`$, and a mobile particle starts a distance $`d`$ away from it, in the ambient velocity field $`U^{\mathrm{\infty }}=(\mathrm{cos}\theta ,\mathrm{sin}\theta ,0)`$.
Waiting time distributions for the perturbation of a slow particle. A mobile particle is at $`(-2.2,d_w,0)`$ in front of a fixed particle, while a second mobile particle starting at $`(-5,d_p,0)`$ passes by. The waiting time for the first particle is plotted against $`d_p`$ for (a) $`d_w=0.1`$, and (b) $`d_w=0.01`$. (c) The waiting time for a mobile particle in the 2d-11 geometry, in the presence of another mobile particle. The variable $`d_p`$ is roughly the impact parameter of the second mobile particle.
Geometry for studying the relaunching of trapped particles. A mobile particle is in a model two-particle trap (shaded grey) characterized by $`r`$ and $`\theta `$, while another mobile particle starting at $`(-5,d,0)`$ passes near the trap.
The range of $`d`$ for which relaunching occurs plotted against $`\theta `$, for the geometry in Fig. 10 with $`r=2`$. The relaunching occurs in the region bounded by the two lines.
# Energy Landscape and Overlap Distribution of Binary Lennard-Jones Glasses
## 1 Distance Measure in Configuration Space
We study correlations among an ensemble of such metastable states, labelled by $`a`$. A glassy state $`a`$ is characterized by a spontaneous breaking of translational invariance, as signalled by a non-zero expectation value $`\langle \mathrm{exp}(i\stackrel{}{k}\cdot \stackrel{}{r}_i)\rangle _a\ne 0`$ of the Fourier components of the local density. Here $`\stackrel{}{r}_i`$ denotes the position of the $`i`$-th particle, $`i=1,\mathrm{},N`$. The local density is analogous to the local magnetization, which is non-zero in the spin glass phase, $`\langle s_i\rangle _a\ne 0`$, indicating the spontaneous breaking of the Ising symmetry. An overlap between two glassy configurations can be defined as
$$Q^{a,b}(\stackrel{}{k})=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\langle \mathrm{exp}(i\stackrel{}{k}\cdot \stackrel{}{r}_i)\rangle _a\langle \mathrm{exp}(-i\stackrel{}{k}\cdot \stackrel{}{r}_i)\rangle _b$$
(1)
in naive analogy to the spin glass order parameter $`Q^{a,b}=\frac{1}{N}\sum _{i=1}^N\langle s_i\rangle _a\langle s_i\rangle _b`$. For the glassy state, however, $`Q^{a,b}(\stackrel{}{k})`$ is not a good measure, because it depends on the labelling of the particles. In particular, two identical configurations with different labels can have a very small overlap according to the above definition. Permutations of identical particles do not give rise to new states in the space of physically distinguishable configurations. Hence, we need a distance measure which is invariant under all permutations $`\pi `$ of identical particles applied to one state only. This can be achieved by maximizing over all such permutations
$$Q^{a,b}(\stackrel{}{k})=\underset{\pi }{\mathrm{max}}\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\langle \mathrm{exp}(i\stackrel{}{k}\cdot \stackrel{}{r}_i)\rangle _a\langle \mathrm{exp}(-i\stackrel{}{k}\cdot \stackrel{}{r}_{\pi (i)})\rangle _b.$$
(2)
This overlap has the required invariance property, but its computation requires the enumeration of all permutations, which is computationally hard and can at most be done for very small systems.
The computationally hard problem can be avoided by working with densities. We specialize to zero temperature and only consider configurations of locally minimal energy. Such a configuration is uniquely characterized by a point measure $`\rho ^a(\stackrel{}{r})=(1/N)\sum _{i=1}^N\delta (\stackrel{}{r}-\stackrel{}{r}_i^a)`$, which by construction is independent of the labelling of the particles. To obtain a distance measure in configuration space, we regularise the point particle densities $`\delta (\stackrel{}{r})`$ to homogeneous $`\eta `$-spheres
$$\delta (\stackrel{}{r})\to \mathrm{\Delta }_\eta (\stackrel{}{r})=\frac{1}{V_\eta }\theta (\eta -|\stackrel{}{r}|).$$
(3)
Here $`V_\eta =4\pi \eta ^3/3`$ is the volume of the $`\eta `$-sphere, and we choose $`2\eta <r_{\mathrm{min}}^a=\mathrm{min}_{i,j}(|\stackrel{}{r}_i^a-\stackrel{}{r}_j^a|)`$ to guarantee that the spheres do not overlap and that the positions $`\stackrel{}{r}_i^a`$ can still be uniquely reconstructed from the regularised density. Such a regularisation is necessary because products of $`\delta `$ functions are ill defined, and the probability of coincidence of any two 3-dimensional vectors which are distributed smoothly in space is equal to zero.
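Enforcing the constraint $`2\eta <r_{\mathrm{min}}^a`$ numerically only requires the smallest pair distance of the configuration. A minimal sketch of our own (assuming a plain list of coordinates and ignoring periodic images):

```python
import itertools
import math

def min_pair_distance(points):
    """Smallest inter-particle distance r_min of a configuration."""
    return min(math.dist(p, q) for p, q in itertools.combinations(points, 2))

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
r_min = min_pair_distance(pts)
eta = r_min / 5          # the choice made below in the text
assert 2 * eta < r_min   # non-overlapping eta-spheres
print(r_min, eta)   # -> 1.0 0.2
```

For a simulation box one would replace `math.dist` by the minimum-image distance, but the bookkeeping is otherwise identical.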
A natural distance measure between two regularised densities is
$$Q^{a,b}=\frac{V_\eta }{N}\int \mathrm{d}^3x\underset{i,j=1}{\overset{N}{\sum }}\mathrm{\Delta }_\eta (\stackrel{}{x}-\stackrel{}{x}_i^a)\mathrm{\Delta }_\eta (\stackrel{}{x}-\stackrel{}{x}_j^b).$$
(4)
If we restrict $`\eta `$ to even smaller values, $`4\eta <r_{\mathrm{min}}:=\mathrm{min}(r_{\mathrm{min}}^a,r_{\mathrm{min}}^b)`$, then each sphere in state $`a`$ can overlap with at most one sphere in state $`b`$. Consequently, the product of two local regularised densities is non-zero only for the permutation $`j=\pi (i)`$ which identifies particles within the same $`\eta `$-sphere. The overlap is then given by the volume fraction of overlapping spheres in the two configurations, so that $`0\le Q^{a,b}\le 1`$.
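Because each sphere then overlaps at most one partner, Eq. (4) reduces to a sum of pairwise sphere-intersection (lens) volumes. The following sketch (our own illustration, not the authors' code) computes $`Q^{a,b}`$ this way for equal radii $`\eta `$, assuming open boundaries:

```python
import math

def lens_volume(s, eta):
    """Intersection volume of two spheres of radius eta at center distance s."""
    if s >= 2 * eta:
        return 0.0
    return math.pi * (2 * eta - s) ** 2 * (s + 4 * eta) / 12.0

def overlap(conf_a, conf_b, eta):
    """Q^{a,b} of Eq. (4): mean volume fraction of overlapping eta-spheres.
    Valid when 4*eta < r_min, so each sphere has at most one partner."""
    v_eta = 4.0 * math.pi * eta ** 3 / 3.0
    q = 0.0
    for pa in conf_a:
        s = min(math.dist(pa, pb) for pb in conf_b)  # the unique candidate partner
        q += lens_volume(s, eta) / v_eta
    return q / len(conf_a)

conf = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
print(overlap(conf, conf, eta=0.5))   # identical configurations give Q = 1 (up to rounding)
```

For identical configurations every lens volume equals $`V_\eta `$, so $`Q^{a,b}=1`$ as required; completely unrelated configurations give $`Q^{a,b}=0`$.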
It is obvious that $`\eta `$ cannot be made arbitrarily small, because then the distance measure loses its ability to discriminate between different structures. Very small $`\eta `$ will simply lead to maximal distances in most cases and ultimately reproduce the point measure. We have varied $`\eta `$ over the range $`0<\eta <r_{\mathrm{min}}/4`$ and computed the number of overlapping spheres for two configurations which are random and for two configurations which are similar (about 80% overlap). We find that the number of overlapping spheres is hardly sensitive to the choice of $`\eta `$ in the range $`r_{\mathrm{min}}/8<\eta <r_{\mathrm{min}}/4`$. For $`\eta <r_{\mathrm{min}}/8`$ the number of overlapping spheres decreases drastically with $`\eta `$ even for the strongly correlated configurations, so that $`Q^{a,b}`$ can no longer discriminate between strongly and poorly correlated states. In the following we fix $`\eta =r_{\mathrm{min}}/5`$.
Other distance measures have been used in the literature , in particular
$$D(a,b)=\underset{\pi }{\mathrm{min}}\underset{i=1}{\overset{N}{\sum }}\left(\stackrel{}{r}_i^a-\stackrel{}{r}_{\pi (i)}^b\right)^2,$$
(5)
which requires the solution of a hard computational problem. It has the further disadvantage of not being directly related to quantities which may be obtained from experiment (such as the density) or which appear naturally in any of the existing theories of supercooled liquids and structural glasses.
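For very small $`N`$ the minimization over permutations in Eq. (5) can be carried out by direct enumeration. The sketch below (our own illustration) does exactly that; for larger $`N`$ the minimization has the structure of a linear assignment problem, for which polynomial-time solvers exist:

```python
import itertools

def distance(conf_a, conf_b):
    """D(a,b) of Eq. (5): minimum over particle relabellings of the summed
    squared displacements. Brute force -- feasible only for tiny N."""
    n = len(conf_a)
    best = float("inf")
    for perm in itertools.permutations(range(n)):
        d = sum(sum((ra[k] - conf_b[j][k]) ** 2 for k in range(3))
                for ra, j in zip(conf_a, perm))
        best = min(best, d)
    return best

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
print(distance(a, b))   # -> 1.0 (the swapped labelling is optimal; identity gives 3.0)
```

The label-dependence criticized in the text is visible here: the identity labelling gives $`D=3`$, while the optimal relabelling gives $`D=1`$.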
A system in free space with pairwise interactions possesses translational and rotational symmetries, which may be broken explicitly by boundary conditions. The most convenient boundary conditions for computer simulations, namely a cubic box with periodic boundary conditions, leave a subgroup of the free-space symmetries unbroken. This subgroup $`𝒮`$ is generated by the translations and the cubic point group symmetries. As a consequence, for every minimum of $`H`$ found in the simulations there exists a whole orbit of symmetry-related, degenerate minima.
We are interested in the rate of occurrence, i.e. the probability distribution, of overlaps $`P(Q)=\sum _{a,b}P_aP_b\delta (Q-Q^{a,b})`$. Here the summation includes all symmetry-related states, which all have the same rate of occurrence $`P_a`$. The large number of symmetry-related states imposes two severe difficulties. First, many translated states (out of a continuum of states) have to be generated to obtain a reliable estimate of $`P(Q)`$ from numerical simulations. Second, the $`Q`$ values generated by all symmetry-related states may scatter considerably in the interval $`0\le Q\le 1`$. This may smear out structures in $`P(Q)`$ which would be clearly marked in an ensemble without residual symmetries. To avoid this problem we have broken the translational symmetry explicitly by fixing the centre-of-mass. Since total momentum is conserved in our simulations, this can be achieved by an appropriate choice of initial conditions (see below). The cubic point group symmetry has been left untouched: for each minimum found by our algorithm, we have generated the 48 equivalent states and included them in the histogram for $`Q^{a,b}`$. To check whether this procedure smears out relevant structure, we have also maximized the overlap over all 48 equivalent states.
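The 48 elements of the cubic point group are the signed permutations of the three coordinate axes, so the symmetry-related copies of a configuration (with the centre-of-mass at the origin) can be generated directly. A sketch of this enumeration:

```python
import itertools

def cubic_point_group():
    """All 48 signed-permutation matrices of the cubic point group."""
    ops = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product((1, -1), repeat=3):
            m = [[0, 0, 0] for _ in range(3)]
            for row in range(3):
                m[row][perm[row]] = signs[row]
            ops.append(m)
    return ops

def transform(m, r):
    """Apply one symmetry operation to a position vector."""
    return tuple(sum(m[i][j] * r[j] for j in range(3)) for i in range(3))

ops = cubic_point_group()
print(len(ops))   # -> 48 (6 axis permutations x 8 sign choices)
```

Applying every operation to a minimum and recomputing $`Q^{a,b}`$ gives either the full 48-state histogram or, after taking the maximum, the rotation-maximized overlap mentioned above.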
## 2 Simulations
The system under consideration is a binary mixture of large (L) and small (S) particles, with $`80\%`$ large and $`20\%`$ small particles. Small and large particles differ only in diameter, but have the same mass. They interact via a Lennard–Jones potential of the form $`U_{\alpha \beta }(r)=4ϵ_{\alpha \beta }[(\sigma _{\alpha \beta }/r)^{12}-(\sigma _{\alpha \beta }/r)^6]`$. All results are given in reduced units, where $`\sigma _{LL}`$ is used as the length unit and $`ϵ_{LL}`$ as the energy unit. The other values of $`ϵ`$ and $`\sigma `$ were chosen as follows: $`ϵ_{LS}=1.5,\sigma _{LS}=0.8,ϵ_{SS}=0.5,\sigma _{SS}=0.88`$. The systems were kept at a fixed density $`\rho =1.2`$. Periodic boundary conditions have been applied, and the potential has been truncated according to the minimum image rule and shifted to zero at the respective cutoff. For the systems with $`N=30`$ particles the cut-off is $`r_c=1.43`$, and for $`N=60`$ the cut-off is at $`r_c=1.7`$. The choice of the Lennard-Jones parameters follows recent simulations of Lennard-Jones glasses ; it is known to suppress recrystallization of the system on molecular dynamics time scales. The glass transition is believed to occur at the temperature $`T_g\approx 0.45`$ . Throughout this study we present results for systems with $`N=30`$ and $`N=60`$ particles, noting that most of the results have been verified for $`N=100`$.
Initially $`N`$ atoms are placed randomly inside a cubic simulation box of side-length $`L`$. The following steps are performed repeatedly:
* Heat up the system to a temperature $`T_{\mathrm{run}}`$.
* Let the system evolve for a time $`\tau _{\mathrm{run}}`$ using molecular dynamics.
* Locate the nearest local minimum by quenching the system to $`T=0`$ K along the steepest descent path.
Explicit breaking of translational invariance can be achieved by fixing the centre-of-mass of the system. A unique definition of the centre-of-mass $`\stackrel{}{R}_s`$ is not obvious for periodic boundary conditions. We start from the extended zone scheme representation of the simulation box. The infinite periodic density pattern is cut off at a finite volume $`V`$, so that the cubic point group symmetry stays intact. The centre-of-mass is the same for all such volumes, including the smallest one, the simulation box itself. The regularised $`\stackrel{}{R}_s`$ remains unchanged in the limit $`V\to \mathrm{\infty }`$.
Each run is initialized with the same particle positions and different velocities. The latter are drawn from a Maxwell-Boltzmann distribution at temperature $`T_{\mathrm{run}}`$, subject to the constraint that the total momentum vanishes. The total momentum is conserved in the molecular dynamics simulation as well as in the steepest descent algorithm. Thereby we generate an ensemble in which the centre-of-mass stays fixed in all states. In the following we shall also consider states which are related by cubic point group symmetries. To guarantee that the centre-of-mass remains unaffected by point group operations, we choose $`\stackrel{}{R}_s=0`$. We use standard molecular dynamics with the velocity form of the Verlet algorithm . Steepest descent is achieved with the conjugate gradient algorithm .
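The zero-total-momentum initialization can be sketched as follows (our own illustration, assuming unit masses and $`k_B=1`$ in reduced units; subtracting the drift perturbs the temperature only at order $`1/N`$):

```python
import math
import random

def maxwell_boltzmann_zero_drift(n, temperature, seed=1):
    """Gaussian velocities at the given temperature, with the mean removed
    per component so that the total momentum vanishes exactly."""
    rng = random.Random(seed)
    sigma = math.sqrt(temperature)   # sqrt(k_B T / m) with m = k_B = 1
    v = [[rng.gauss(0.0, sigma) for _ in range(3)] for _ in range(n)]
    for k in range(3):
        drift = sum(vi[k] for vi in v) / n
        for vi in v:
            vi[k] -= drift
    return v

v = maxwell_boltzmann_zero_drift(60, temperature=0.5)
p_total = [sum(vi[k] for vi in v) for k in range(3)]
print(max(abs(p) for p in p_total))   # numerically ~ 0
```

Since both the molecular dynamics integrator and the quench conserve momentum, every state sampled from such an initialization keeps the same centre-of-mass.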
## 3 Results
Simulating small systems requires careful checks to avoid sampling crystalline configurations. First, it can be quite helpful to inspect the structures visually. Second, we have applied a common neighbor analysis to detect the amount of polytetrahedral order (amorphous) and of close-packed structures (crystalline). Occasionally, the structural stability of low energy states has been verified at temperatures below the glass transition.
In fig. 2 we show a histogram of the energy per particle of locally minimal states for $`N=30`$ and $`N=60`$. For each system size, $`1000`$ minima have been generated, starting from $`T_{\mathrm{run}}=0.5`$. For the small system, crystalline states have the dominant basins of attraction, whereas for the larger system we found very few crystalline states (3 out of 1000).
Energy Landscape. The set of sampled local minima depends on $`T_{\mathrm{run}}`$. We have generated $`200`$ configurations for $`N=60`$ particles at each fixed $`T_{\mathrm{run}}`$ chosen in the interval $`0.3\le T_{\mathrm{run}}\le 1.8`$. For each $`T_{\mathrm{run}}`$ we have computed the mean energy, the lowest energy found, and the variance of the energy. The results are shown in fig. 2. For each configuration the length of the molecular dynamics simulation has been chosen as $`\tau _{\mathrm{run}}=0.5\times 10^6`$. This is sufficient to allow for equilibration at $`T_{\mathrm{run}}>T_g`$.
The mean energies per particle of the sampled minima are approximately constant ($`e\approx e_h=-5.2`$) at high temperatures and decrease significantly within a transition region. The lowest energy minima, $`e_l\approx -5.45`$, are found for $`T_{\mathrm{run}}\approx T_g`$, whereas for $`T_{\mathrm{run}}<T_g`$ sampling is restricted to one quasi-ergodic component. Thus extensive sampling of locally stable states is most effective for $`T_{\mathrm{run}}\approx T_g`$. At high temperatures, most of the weight is found in a broad band of locally stable states with high energies around $`e_h`$. The low energy structures are not sampled, although these minima are favoured by a relative Boltzmann factor $`\mathrm{exp}(N(e_h-e_l)/k_BT_{\mathrm{run}})`$. This implies that the configuration space of the high energy basins is much larger than that of the low energy basins, so that it overcompensates the relative Boltzmann factor. When $`T_{\mathrm{run}}`$ is lowered to temperatures around $`T_g`$, the Boltzmann factor becomes more important and eventually outweighs the configuration space volumes of the $`e_h`$ states. These results are in agreement with recent work of Sastry et al. , who showed that the transition is accompanied by the onset of typical, glass-like relaxation phenomena.
To assess quantitatively the performance of our algorithm in sampling low lying energy states, we compare the lowest energies sampled with the results of a recent study on the optimization properties of traditional simulation algorithms applied to Lennard-Jones glasses . The authors reported a lowest energy configuration for $`N=60`$ at $`e=-5.38`$, found by cooling down slowly from the liquid phase using standard molecular dynamics. The lowest state shown in fig. 2 is at $`e=-5.45`$. The difference in energy $`\delta e=0.07`$ corresponds to a factor of $`100`$ in computer time according to the estimate given in refs. . For $`T_{\mathrm{run}}\approx 0.5`$, around $`15\%`$ of all states sampled using our method are lower in energy than $`e=-5.38`$.
Distribution of overlaps. To investigate the distribution of overlaps we have generated $`2000`$ amorphous states for the $`N=60`$ particle system, keeping the centre-of-mass fixed. Each molecular dynamics trajectory has been initialized at $`T_{\mathrm{run}}=0.5`$ and has been run for $`\tau _{\mathrm{run}}=10^6`$ time steps. We compute the overlap of the large particles for all pairs $`(a,b)`$. This implies that the weights $`P_a`$ are approximated by the rate of occurrence of state $`a`$ in our simulation. Several arguments support our choice of ensemble: 1) The generated states are as low in energy as the best ones from other optimization procedures. 2) In agreement with recent simulations of Sastry et al., we find a qualitative change in the properties of sampled states at a well defined temperature $`T_g\approx 0.5`$, such that above $`T_g`$ a broad range of states is sampled, whereas below $`T_g`$ the system is landscape dominated, i.e. trapped in a metastable state. 3) The Stillinger-Weber method is controlled, reproducible, and sufficiently simple to allow for analytical calculations.
A histogram of overlaps is shown in fig. 4. We observe a most probable value around $`Q^{a,b}\approx 0.2`$ and no significant structure in the distribution. It is instructive to also compute the distribution of overlaps of liquid configurations. This distribution is compared to the histogram for glassy states in fig. 4. The distribution of overlaps for glassy states is $`35\%`$ broader and centered at a $`20\%`$ higher value. Otherwise no significant changes occur. Thus we conclude that the glassy minima are as different from one another as they can be. This conclusion also holds if we restrict our ensemble of glassy states to the lowest energy states.
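As a concrete illustration of the pairwise-overlap histogram described above, the sketch below computes overlaps for a toy ensemble of random configurations. The overlap definition used here (the fraction of particles of one state lying within a cutoff distance of some particle of the other) is one common choice for glasses and is an illustrative assumption, not necessarily the exact measure of this work; the particle numbers and cutoff are likewise invented.

```python
import itertools
import math
import random

def overlap(conf_a, conf_b, cutoff=0.3):
    """Fraction of particles of conf_a lying within `cutoff` of some particle
    of conf_b -- one common overlap definition for glasses (an illustrative
    choice, not necessarily the measure used in this work)."""
    hits = 0
    for pa in conf_a:
        if any(math.dist(pa, pb) < cutoff for pb in conf_b):
            hits += 1
    return hits / len(conf_a)

random.seed(1)
N = 20                       # particles per toy "amorphous state"
states = [[(random.random(), random.random(), random.random())
           for _ in range(N)] for _ in range(8)]

# Pairwise overlap histogram; here every state gets equal weight (in the text
# the weights P_a are the occurrence rates of state a in the simulation).
qs = [overlap(a, b) for a, b in itertools.combinations(states, 2)]
print(f"mean pairwise overlap: {sum(qs) / len(qs):.2f}")
```

By construction the overlap of a state with itself is exactly 1, which gives a quick sanity check on any such definition.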
The residual cubic point group symmetry does not affect the overlap distribution significantly. We have calculated overlaps by either maximizing over the $`48`$ symmetry related states or by including all symmetry related states with the same weight. The resulting distributions are shown in Fig. 4. As one would expect the distribution obtained by maximizing over all rotations is peaked at a higher value of $`Q`$, but does not reveal any additional structure.
Recently Coluzzi and Parisi computed the distribution of distances $`P(D)`$, using the distance measure of eq. (5) and simulating small systems ($`28`$ to $`36`$ particles) confined by soft walls. They find highly non-trivial distributions $`P(D)`$ in the glassy phase, which depend strongly on particle number. We have also applied our method to generate an ensemble of low energy states by rapid cooling for the same system. A straightforward evaluation of $`P(Q)`$ yields the bimodal distribution shown in fig. 5. A closer inspection, however, reveals that the structure is due to many imperfect crystalline states. Soft walls increase the probability for crystalline states, as compared to periodic boundary conditions, and for small systems ($`N\approx 30`$ particles) most of the particles sit on the surface of the sample.
# The X-ray Transient XTE J2012+381
## 1 Introduction
The transient X-ray source XTE J2012+381 was discovered by the RXTE All Sky Monitor (ASM) on 1998 May 24 \[Remillard, Levine & Wood 1998\]. Its ASM light curve is shown in Fig. 1. ASCA observations (White et al. 1998) revealed the ultrasoft spectrum plus hard power-law tail signature of a black hole candidate. A radio counterpart was suggested by Hjellming & Rupen \[Hjellming & Rupen 1998a\] and found to be close to an 18th magnitude star (Castro-Tirado & Gorosabel 1998, Garcia et al. 1998), the USNO A1.0 star 1275.13846761 \[Monet et al. 1996\]. Spectroscopy of this star showed a nearly featureless spectrum with Balmer and Na D absorption, and no apparent emission lines (Garcia et al. 1998). Our images obtained with the Jacobus Kapteyn Telescope (JKT) on La Palma, however, showed the presence of a faint red companion star 1.1 arcsec away, closely coincident with the radio source (Hynes and Roche 1998, Hjellming and Rupen 1998b). Infrared images also showed this second star \[Callanan et al. 1998\].
In this paper, we report in detail on the JKT photometry of the faint red star, which we suggest to be the true optical counterpart of XTE J2012+381. A spectrum obtained with the William Herschel Telescope (WHT) reveals weak H$`\alpha `$ emission in the fainter star supporting its identification with XTE J2012+381. The red star also appears somewhat fainter at the epoch of the WHT observations. Infrared imaging obtained with the United Kingdom Infrared Telescope (UKIRT) late in the X-ray decline clearly shows both stars and also suggests fading relative to the earlier observations of Callanan et al. (1998).
## 2 JKT Photometry
Multicolour photometry of the field of XTE J2012+381 was taken on 1998 June 3 through the JKT service programme. The JKT CCD camera was used with the TEK4 CCD and a standard UBVRI filter set. De-biasing and flat-fielding were performed with standard iraf tasks. A V band image of the field is shown in Fig. 2a. As the field is crowded, the daophot task was used to deblend point spread functions. I band images before and after subtraction of the PSF of USNO A1.0 star 1275.13846761 (Monet et al. 1996; hereafter the USNO star) are shown in Fig. 2b and 2c. A residual stellar image is clearly present after subtraction. Its offset relative to the USNO star was measured using V, R and I images; the results are consistent to 0.1 arcsec. Hence we determine the position of the star (Table 1). The position of the radio source (Hjellming & Rupen 1998b) is consistent with the fainter star to within uncertainties, but is difficult to reconcile with the USNO star. The fainter star is therefore a strong candidate for the optical counterpart of the radio source (and hence the X-ray source).
Absolute flux calibration was obtained from a colour-dependent fit to five stars from Landolt standard field 110 \[Landolt 1992\]. The fainter star could only be distinguished in V, R and I band images and its magnitudes (Table 2) indicate that it is very red, with $`\mathrm{V}-\mathrm{R}=1.4\pm 0.2`$. Our only quantitative estimate of the interstellar reddening comes from White et al. \[White et al. 1998\] who estimate a column density $`N_\mathrm{H}=(1.29\pm 0.03)\times 10^{22}`$ cm<sup>-2</sup>. Gas-to-dust scalings are at best approximate. Using the relations of Ryter, Cesarsky & Audouze (1975), Bohlin, Savage & Drake (1978) and Predehl & Schmitt (1995) we obtain $`E_{\mathrm{B}-\mathrm{V}}=1.9`$, 2.2 and 2.4 respectively, so we adopt this range as a reasonable estimate of reddening. Assuming the extinction curve of Cardelli, Clayton & Mathis (1989) this implies $`1.4<E_{\mathrm{V}-\mathrm{R}}<1.8`$ and hence an intrinsic colour of $`-0.6<\mathrm{V}-\mathrm{R}<0.2`$. This is consistent with other SXTs; the optical emission is likely dominated by a hot accretion disc. This colour would also be consistent with an early type star, but the lack of H$`\alpha `$ absorption (Sect. 3) would not be, unless it were almost exactly filled in by emission.
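For reference, the column-density-to-reddening conversion can be sketched with two commonly quoted gas-to-dust scalings. The coefficients below are standard literature values quoted from memory and should be treated as approximate; they land within the 1.9–2.4 range adopted in the text.

```python
# Convert the measured column density to reddening estimates using two
# commonly quoted gas-to-dust scalings (coefficients are approximate
# literature values, not necessarily the exact ones used in the text).
N_H = 1.29e22          # cm^-2, from White et al. (1998)

# Bohlin, Savage & Drake (1978): N_H / E(B-V) ~ 5.8e21 cm^-2 mag^-1
ebv_bohlin = N_H / 5.8e21

# Predehl & Schmitt (1995): N_H / A_V ~ 1.79e21 cm^-2 mag^-1, with R_V = 3.1
ebv_predehl = N_H / 1.79e21 / 3.1

print(f"E(B-V) = {ebv_bohlin:.1f} (Bohlin et al.), "
      f"{ebv_predehl:.1f} (Predehl & Schmitt)")
```

Both estimates fall in the middle of the adopted reddening range, illustrating why the text treats the spread of scalings as the dominant uncertainty.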
## 3 WHT Spectroscopy
XTE J2012+381 was observed with the WHT on 1998 July 20, when the object had faded to approximately half its peak X-ray brightness. The ISIS dual-beam spectrograph was used in single red arm mode to maximise throughput with the low-resolution R158R grating (2.9 Å pixel<sup>-1</sup>) and the TEK2 CCD. A 1 arcsec slit was used giving an instrumental resolution of 5.5 Å. Based on positions derived from the JKT photometry, the slit was aligned to pass through the line of centres of the USNO star and an isolated comparison star used to define the spatial profile as a function of wavelength. This comparison star was chosen to lie along the line of centres of the two blended stars. The three stars are co-linear to within the accuracy of our astrometry, i.e. the derived V, R and I positions of the red star scatter evenly to either side of the line of centres of the USNO star and comparison star. We estimate that the offset of the red star from the line of centres of the other two stars is therefore less than 0.1 arcsec, the scatter in the astrometry. The positioning of the slit was judged by eye using reflections off the slit jaws. There is therefore some uncertainty in centring the stars within the slit, but as the stars are co-linear to within a fraction of the slit width, this should not affect their relative brightness significantly. Standard iraf procedures were used to de-bias and flat-field the spectrum. Sky subtraction was performed by fitting fourth order polynomials to the sky background with the stellar profiles masked out. The subtracted image showed no significant residuals in the sky regions. Wavelength calibration was achieved using a fit to a copper-neon arc spectrum obtained immediately before the object spectrum and checked against the night sky emission lines.
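The masked low-order polynomial sky fit described above can be sketched as follows. The pixel grid, star positions, profile widths, and mask radii are invented for illustration, and the fit is a plain normal-equations least-squares solve rather than the iraf implementation.

```python
import math

# Toy version of the sky-subtraction step: fit a fourth-order polynomial to
# one spatial cut with the stellar profiles masked out, then subtract it.
def fit_poly(xs, ys, order):
    """Least-squares polynomial fit via the normal equations (stdlib only)."""
    n = order + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for c in range(n):                 # Gaussian elimination with pivoting
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    coef = [0.0] * n
    for c in range(n - 1, -1, -1):
        coef[c] = (b[c] - sum(A[c][k] * coef[k]
                              for k in range(c + 1, n))) / A[c][c]
    return coef                        # coef[i] multiplies x**i

pix = range(100)
u = [(x - 49.5) / 49.5 for x in pix]   # rescale to [-1, 1] for stability
sky = [100 + 5*t + 3*t**2 - 2*t**3 + t**4 for t in u]        # made-up sky
star = lambda x, c: 500 * math.exp(-0.5 * ((x - c) / 1.5) ** 2)
data = [s + star(x, 30) + star(x, 60) for x, s in zip(pix, sky)]

# mask out the stellar profiles, as in the text, and fit only the sky pixels
sky_idx = [i for i in pix if abs(i - 30) > 10 and abs(i - 60) > 10]
coef = fit_poly([u[i] for i in sky_idx], [data[i] for i in sky_idx], order=4)
model = [sum(c * t**k for k, c in enumerate(coef)) for t in u]
subtracted = [d - m for d, m in zip(data, model)]
```

In this noise-free toy the residuals in the sky regions vanish to machine precision, which is the analogue of the "no significant residuals" check quoted in the text.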
Although the seeing was good (spatial FWHM 0.9 arcsec) the stars are still heavily blended; a sample binned spatial cut is shown in Fig. 3. The third star on the slit was therefore used to define the spatial profile as a function of wavelength, by means of a Voigt profile fit. The profile parameters are smoothed in wavelength using a fourth-order polynomial fit and then fixed. The centre position of the profile varies by less than one pixel over the whole spectrum. The Gaussian core width and the Voigt damping parameter both vary by about 5 per cent. No smaller scale patterns in the residuals to this fit are apparent. The Voigt model was found to give a very good fit over most of the profile. It does somewhat overestimate the extreme wings of the profile, so we approximately correct this by defining a wavelength independent correction term as a function only of distance from profile centre. With this correction, no significant deviations between model and data can be discerned (see Fig. 3). The positions of the other two stars relative to the isolated comparison are taken as fixed, leaving only the amplitudes of two blended profiles to be fitted. This was done by $`\chi ^2`$ minimisation and is effectively a generalisation of the optimal extraction method \[Horne 1986\] to the multiple profile case. This method for separating the spectra of blended stars will be described in more detail in a subsequent paper in preparation.
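Once the profile shapes and centre separation are held fixed, each wavelength column reduces to a linear least-squares problem for the two amplitudes. The sketch below solves the resulting 2×2 normal equations, with Gaussians standing in for the Voigt profiles of the text; all positions and amplitudes are illustrative.

```python
import math

def gauss(y, centre, sigma=0.45):          # ~0.9 arcsec FWHM seeing
    return math.exp(-0.5 * ((y - centre) / sigma) ** 2)

def fit_two_amplitudes(ys, data, c1, c2):
    """Chi^2-minimizing amplitudes a1, a2 for data ~ a1*P1 + a2*P2, with the
    profile shapes and centres fixed (linear least squares, 2x2 system)."""
    p1 = [gauss(y, c1) for y in ys]
    p2 = [gauss(y, c2) for y in ys]
    s11 = sum(p * p for p in p1)
    s22 = sum(p * p for p in p2)
    s12 = sum(a * b for a, b in zip(p1, p2))
    b1 = sum(d * p for d, p in zip(data, p1))
    b2 = sum(d * p for d, p in zip(data, p2))
    det = s11 * s22 - s12 ** 2
    return (b1 * s22 - b2 * s12) / det, (b2 * s11 - b1 * s12) / det

ys = [i * 0.1 for i in range(120)]         # spatial axis [arcsec]
# bright star plus faint companion 1.1 arcsec away (amplitudes are made up)
data = [1000.0 * gauss(y, 5.0) + 60.0 * gauss(y, 6.1) for y in ys]
a1, a2 = fit_two_amplitudes(ys, data, 5.0, 6.1)
```

Because the two profiles are far from degenerate at 1.1 arcsec separation in 0.9 arcsec seeing, the system is well conditioned and the faint amplitude is recovered cleanly even at a flux ratio of ~17:1.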
A Telluric correction spectrum was formed by normalising the spectrum of the spectrophotometric standard BD+40 4032 \[Stone 1977\] observed later in the night. This has spectral type B2 III so H$`\alpha `$ and He i absorption lines were masked out. The correction spectrum was shifted slightly to compensate for spectrograph flexure between this exposure and that of our target. It was also rescaled to give a least-squares fit to the normalised sum of the three spectra on the slit. A short exposure of our target was made immediately before observing BD+40 4032 so we also use this to obtain absolute flux calibration. Both these calibration observations used a wide slit (4 arcsec) to ensure photometric accuracy.
The final calibrated spectra of both stars are shown in Fig. 4, binned $`\times 4`$ in wavelength. As can be seen, the Telluric corrections are good, though slight distortion of the spectrum remains, e.g. near 7250 Å. The only prominent non-atmospheric feature in either spectrum is H$`\alpha `$, in absorption in the USNO star (believed to be an early F-type star, Garcia et al. 1998), with an equivalent width of $`(6.5\pm 0.3)`$ Å, and apparent weak emission in its red companion, equivalent width $`(6.6\pm 1.5)`$ Å. The errors in equivalent width are statistical errors derived from the residuals of a low-order fit to the surrounding continuum. Assuming these errors are correct, our detection of H$`\alpha `$ is significant at the $`3\sigma `$ level.
The emission feature appears roughly square, with full width $`40\mathrm{\AA }`$ ($`\approx 1800`$ km s<sup>-1</sup>). It is broader than the absorption in the USNO star (FWHM $`\approx 10`$ Å) and so does not look like a simple reflection of the absorption line. It is also much wider than the night sky H$`\alpha `$ emission. We have repeated the extraction process without the correction to the profile wings. While this noticeably degrades the profile fit, it does not affect the H$`\alpha `$ emission feature. As a final test, we have used the derived spatial profile (including profile correction) to synthesise a new fake image in which the spectrum of the fainter star was left unchanged, but the H$`\alpha `$ line profile of the USNO star was used as a template to add artificial absorption lines (of the same strength and width) at 6100, 6425 and 6725 Å. We then repeated the extraction process on this fake image without allowing the profile correction to the line wings, i.e. using a model for the spatial profile which is known to be inadequate. This was done as a check that a combination of misfitting the spatial profile and a strong absorption line does not produce spurious emission. No emission features are seen in the spectrum of the fainter star at the position of the fake absorption lines. After subtraction of a low-order fit to the continuum, we measure the total residual counts in 15-pixel (44 Å) bins centred on each of the line positions. From the noise in the surrounding continuum, we estimate that the error on the total counts from such a bin is 130 counts. At 6100, 6425 and 6725 Å respectively we measure 150, 76 and $`-180`$ counts, a distribution (mean 14, rms 140) consistent with zero to within the error estimate. For the bin centred on H$`\alpha `$, however, we measure 550 counts, significantly larger than the estimated error.
We therefore conclude that the H$`\alpha `$ emission is unlikely to be an artifact of the de-blending process but represents real emission from the optical counterpart of the X-ray source, detected with $`3\sigma `$ confidence.
## 4 UKIRT Photometry
UKIRT observations were obtained on 1998 August 14. The weather was clear, winds were light and the humidity was low. The images were obtained using the IRCAM3 detector. The $`256\times 256`$ detector array has a scale of 0.286 arcsec pixel<sup>-1</sup>. The data on our target were collected between 12.51 and 13.57 UT, during which the airmass varied from 1.49 to 2.02. Initial data reduction was performed using ircamdr software. As for the JKT observations, point spread function fitting was done using daophot in iraf. Flux calibration was performed with respect to the UKIRT standard FS4. The primary data on the standard were taken between the J and K band observations of the source. These were compared with other standard observations taken at the beginning of the sequence as a check of consistency; the scatter in standard measurements is $`0.03`$ mag. Our magnitudes for the two stars are summarised in Table 3 together with the magnitude equivalent to the sum of their fluxes. Error estimates are derived from a combination of the consistency of standard measurements (as an indicator of the reliability of the calibration) and statistical errors determined by daophot.
## 5 Is the Optical/IR Counterpart Fading?
In order to assess whether the optical counterpart had faded between the JKT and WHT observations, we formed a model bandpass (based on filter transmission, CCD response and the extinction curve for the same airmass as the JKT observations). This was applied to our spectra of the three stars on the slit to produce synthetic R band magnitudes for comparison with the photometry. We estimate $`\mathrm{R}=17.23`$ for the USNO star, $`\mathrm{R}=20.23`$ for the faint optical counterpart to the X-ray source and $`\mathrm{R}=16.89`$ for the isolated star. Our spectra do not quite cover the full R bandpass, but the loss is a very small amount at both short and long wavelengths. We estimate that the effect of this, together with errors in the model bandpass introduces an uncertainty of no more than 0.05 magnitude when combined with the colour differences between the stars. The JKT and WHT observations of the USNO star and the slit comparison star are then approximately consistent, with perhaps a small systematic calibration error of less than 0.10 magnitudes. The much larger difference in the magnitudes for the X-ray source (0.33 magnitudes fainter at the second observation) then suggests that it has indeed faded optically between days 9 and 53.
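The magnitude arithmetic behind this comparison (and behind the combined magnitudes quoted for the blended pair) is simple flux arithmetic; the sketch below shows both conversions. The 0.33 mag value is taken from the text, while the combined-magnitude inputs are placeholders, not the Table 3 entries.

```python
import math

# A fading of 0.33 mag corresponds to this flux ratio:
ratio = 10 ** (-0.4 * 0.33)
print(f"flux ratio = {ratio:.3f} (i.e. ~{(1 - ratio) * 100:.0f}% fainter)")

# Magnitude equivalent to the summed flux of two blended stars
# (placeholder magnitudes, not the Table 3 values):
def combine(m1, m2):
    return -2.5 * math.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

print(f"combined magnitude of 15.5 + 17.0: {combine(15.5, 17.0):.2f}")
```

A useful check on `combine` is that two equal-brightness stars combine to a magnitude brighter by exactly $`2.5\mathrm{log}_{10}2\approx 0.75`$ mag.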
The infrared counterpart also appears fainter than earlier in outburst. Callanan et al. (1998) estimated $`\mathrm{J}=15.0\pm 0.1`$, $`\mathrm{K}=14.3\pm 0.1`$ near the peak of outburst for the two blended stars. The comparable combined magnitudes from our UKIRT observation are given in Table 3. Our observations are significantly different from those of Callanan et al. To produce this difference by fading of the fainter star would require a decline of 1.2 magnitudes in J and 1.0 magnitudes in K between days 9 and 82.
## 6 Discussion
The X-ray light curve of XTE J2012+381 (Fig. 1) shows many similarities to those of other soft X-ray transients \[Chen, Shrader & Livio 1997\], but there are some important differences. The secondary maximum peaks around day 29, only 22 days after the first peak. This is earlier in the outburst than in most systems (45–75 days, Chen, Livio & Gehrels 1993), but is not unprecedented (Shahbaz, Charles & King 1998). The decline from secondary maximum is clearly linear. In the paradigm of King & Ritter \[King & Ritter 1998\], this would imply that the disc is too large to be held in a high state by X-ray irradiation. A large disc would then imply a long orbital period. Such an interpretation is supported by observations of other SXTs (Shahbaz et al. 1998), in which extended linear decays are only seen in long-period systems.
A long orbital period in turn implies that XTE J2012+381 probably contains an evolved companion, similar to V404 Cyg. There are differences in the optical spectrum: for example, V404 Cyg showed very strong emission lines of hydrogen, helium and other species (Casares et al. 1991, Wagner et al. 1998), whereas XTE J2012+381 shows only weak H$`\alpha `$ emission. This may not be significant; V404 Cyg was unusual in this respect, and most SXTs show few emission lines during outburst. Another long period system, GRO J1655–40, has exhibited outbursts both with H$`\alpha `$ emission \[Bailyn et al. 1995\] and without \[Hynes et al. 1998\].
More puzzling is the apparent plateau in the X-ray brightness after day 95 at a level of $`\approx 12`$ mCrab. In this period, the flux rises gradually and culminates in a mini-outburst around day 145, approximately 120 days after the secondary maximum. This is an intriguing timescale, as both GRO J1655–40 (Harmon et al. 1995, Zhang et al. 1997) and GRO J0422+32 (Augusteijn, Kuulkers & Shaham 1993, Chevalier & Ilovaisky 1995) have also shown mini-outbursts separated by $`\approx 120`$ days. The lightcurves of these three systems are otherwise very different, and it is not clear that their mini-outbursts are caused by the same mechanism; see Chen, Shrader & Livio (1997) for a discussion of the models proposed for SXT rebrightenings.
We conclude that XTE J2012+381 shows similarities to other SXTs, especially long-period systems. The extended plateau state is, however, unusual. Continued monitoring is important to elucidate its nature. In particular it will be of interest to see if further mini-outbursts occur.
## Acknowledgements
The Jacobus Kapteyn and William Herschel Telescopes are operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the U.K. Particle Physics and Astronomy Research Council. The JKT observations were provided by Nic Walton as part of the JKT Service Programme and UKIRT data was obtained through the UKIRT Service Programme. RXTE results provided by the ASM/RXTE teams at MIT and at the RXTE SOF and GOF at NASA’s GSFC. RIH is supported by a PPARC Research Studentship and would like to acknowledge helpful discussion with Carole Haswell and others at Sussex, as well as constructive criticism from our referee, Michael Garcia. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
## 1 Program Summary
| Title of program: | SOPHIA 2.0 |
| --- | --- |
| Catalog number: | |
| Program obtainable | from A. Mücke |
| | e-mail: amuecke@physics.adelaide.edu.au |
| Computer on which the program | DEC-Alpha and Intel-Pentium based |
| has been thoroughly tested: | workstations |
| Operating system: | UNIX, Linux, Open-VMS |
| Programming language used: | FORTRAN 77 |
| Memory required to execute | $`<`$1 megabyte |
| No. of bits in a word | 64 |
| Has the code been vectorized? | no |
| Number of lines | 16500 |
| in distributed program: | |
| Other programs used in SOPHIA | Rndm |
| in modified form | Processor independent random number |
| | generator based on Ref. |
| | Jetset 7.4 |
| | Lund Monte Carlo for |
| | jet fragmentation |
| | DECSIB |
| | Sibyll routine which decays |
| | unstable particles |
| Nature of physical problem: | Simulation of minimum bias photo- |
| | production for astrophysical applications |
| Method of solution: | Monte Carlo simulation of individual events |
| | for given nucleon and photon energies, |
| | photon energies are sampled |
| | from various distributions. |
| Restrictions on the complexity | Incident ambient photon fields limited |
| of the problem | to power law and blackbody spectra so far. |
| | No tests were done for center-of-mass |
| | energies $`\sqrt{s}1000`$GeV. |
| Typical running time: | 10000 events at a center-of-mass energy |
| | of 1.5 GeV require a typical |
| | CPU time of about $`75`$ seconds. |
## 2 Introduction
The cosmic ray spectrum extends to extremely high energies. Giant air showers have been observed with energies exceeding $`10^{11}`$ GeV. Energy losses due to interactions with ambient photons can become important, even dominant, for such energetic nucleons above the threshold for pion production. Photoproduction of hadrons is expected to cause a distortion of the ultra-high energy cosmic ray (CR) spectrum by interactions of the nucleons with the microwave background (the Greisen-Zatsepin-Kuzmin cutoff), but it may also be relevant to the observed high energy gamma ray emission from jets of Active Galactic Nuclei (AGN) or Gamma-Ray Bursts (GRB). Moreover, it is the major source process for the predicted fluxes of very high energy cosmic neutrinos.
The photohadronic cross section at low interaction energies is dominated by the $`\mathrm{\Delta }`$(1232) resonance. Since the low energy region of the cross section is emphasized in many astrophysical applications, the cross section and decay properties of the prominent $`\mathrm{\Delta }`$-resonance have often been used as an approximation for photopion production and for the subsequent production of gamma rays and neutrinos. As discussed in earlier work, this approximation is only valid for a restricted number of cases, and does not describe sufficiently well the whole energy range of photohadronic interactions. A more sophisticated photoproduction simulation code is needed to cover the center-of-mass energy range of about $`\sqrt{s}=1`$–$`10^3`$ GeV, which is important in many astrophysical applications.
In this paper we present the newly developed Monte Carlo event generator SOPHIA (Simulations Of Photo Hadronic Interactions in Astrophysics), which we wrote as a tool for solving problems connected to photohadronic processes in astrophysical environments. The philosophy behind the development of SOPHIA has been to implement well established phenomenological models and symmetries of hadronic interactions in a way that correctly describes the available exclusive and inclusive photohadronic cross section data obtained at fixed target and collider experiments.
The paper is organized as follows. After introducing the kinematics of $`N\gamma `$-interactions (Sect. 3) we give a brief physical description of the relevant photohadronic interaction processes (Sect. 4). The implementation of these processes into the SOPHIA event generator, together with the method of cross section decomposition and parametrization, is described in Sect. 5. The structure of the program is outlined in Sect. 6, and a comparison of the model results with experimental data is provided in Sect. 7. The definitions of special functions and tables of parameters used in the cross section and final state parametrization, as well as a compilation of the important routines and functions used in the code, are given in the appendices.
Unless noted otherwise, natural units ($`\mathrm{\hbar }=c=e=1`$) are used throughout this paper, with GeV as the general unit. In this notation, cross sections will be in GeV<sup>-2</sup>. A general exception is Section 4, where numerical parametrizations of cross sections are given in $`\mu `$barn. The relevant conversion constant is $`(\mathrm{\hbar }c)^2=389.37966`$ GeV<sup>2</sup> $`\mu `$barn.
## 3 The physics of photohadronic interactions
### 3.1 Kinematics of $`N`$-$`\gamma `$ collisions
There are three reference frames involved in the description of an astrophysical photohadronic interaction: (i) the astrophysical lab frame (LF), (ii) the rest frame of the nucleon (NRF), and (iii) the center-of-mass frame (CMF) of the interaction. (In the following the subscript $`N`$ is used if no distinction between protons and neutrons is made; interactions for both protons and neutrons are implemented in SOPHIA.) For example, in the LF, the initial state can be characterized by the nucleon energy $`E_N`$, the photon energy $`ϵ`$, and the interaction angle $`\theta `$
$$\mathrm{cos}\theta =(\stackrel{}{p}_N\cdot \stackrel{}{p}_\gamma )/(\beta _NE_Nϵ).$$
(1)
where $`\stackrel{}{p}_N`$ and $`\stackrel{}{p}_\gamma `$ denote the nucleon and photon momenta. The Lorentz factor is $`\gamma _N=E_N/m_N=(1-\beta _N^2)^{-1/2}`$ with $`m_N`$ being the nucleon mass. The corresponding quantities in the NRF and CMF are marked with a prime and an asterisk, respectively. Fixed target accelerator experiments where a photon beam interacts with a proton target are performed in the NRF. In astronomical applications we assume that the LF can be chosen such that the photon distribution function is isotropic. The LF may therefore be different from the astronomical observer’s frame if, for example, the emission region is moving relative to the observer, as in AGN jets or GRB. The interaction rate of the nucleon in the LF is given by
$$R(E_N)=\frac{1}{8E_N^2\beta _N}\int _{ϵ_{\mathrm{th}}}^{\mathrm{\infty }}dϵ\frac{n(ϵ)}{ϵ^2}\int _{s_{\mathrm{th}}}^{s_{\mathrm{max}}}ds(s-m_N^2)\sigma _{N\gamma }(s),$$
(2)
where $`\sigma _{N\gamma }`$ is the total photohadronic cross section and
$$s=m_N^2+2E_Nϵ(1-\beta _N\mathrm{cos}\theta )=m_N^2+2m_Nϵ^{},$$
(3)
is the square of the center-of-mass energy. The lowest threshold energy for photomeson production is $`\sqrt{s_{\mathrm{th}}}=m_N+m_{\pi ^0}`$. The remaining quantities in Eq. (2) are $`ϵ_{\mathrm{th}}=(s_{\mathrm{th}}-m_N^2)/2(E_N+p_N)`$ and $`s_{\mathrm{max}}=m_N^2+2E_Nϵ(1+\beta _N)`$.
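A worked example of Eqs. (1)–(3): the sketch below evaluates $`s`$ and the threshold photon energy for an ultra-high-energy proton colliding head-on with a microwave-background photon. The chosen proton and photon energies are illustrative, not values from the text.

```python
import math

m_N = 0.938       # proton mass [GeV]
m_pi0 = 0.135     # neutral pion mass [GeV]
E_N = 1.0e11      # proton energy [GeV] (10^20 eV, GZK regime; illustrative)
eps = 1.0e-12     # photon energy [GeV] (~1e-3 eV, Wien tail of the CMB)

beta = math.sqrt(1.0 - (m_N / E_N) ** 2)
cos_theta = -1.0                                           # head-on collision
s = m_N**2 + 2.0 * E_N * eps * (1.0 - beta * cos_theta)    # Eq. (3)

s_th = (m_N + m_pi0) ** 2          # lowest photomeson threshold
p_N = beta * E_N
eps_th = (s_th - m_N**2) / (2.0 * (E_N + p_N))   # threshold photon energy [GeV]

print(f"sqrt(s) = {math.sqrt(s):.2f} GeV (threshold {math.sqrt(s_th):.2f} GeV)")
print(f"eps_th = {eps_th:.2e} GeV, i.e. ~{eps_th * 1e9:.1e} eV")
```

This illustrates why photons of only ~10<sup>-3</sup> eV suffice to push a 10<sup>20</sup> eV proton above the pion-production threshold, the origin of the GZK distortion mentioned in the introduction.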
The final state of the interaction is described by a number $`N_c`$ of possible channels, each of which has $`N_{f,c}`$ particles in the final state. At threshold, the phase space volume vanishes, which kinematically requires $`\sigma _c\to 0`$ for $`s\to s_{\mathrm{th},c}=[\sum _im_i]^2`$ for the partial cross section. Above threshold, each final state channel has $`3N_{f,c}-4`$ degrees of freedom given by the 3-momentum components $`(p_i,\chi _i,\varphi _i)`$ of the particles, constrained by energy and momentum conservation. Here $`p_i`$, $`\chi _i`$, and $`\varphi _i`$ are the particle momentum and its polar and azimuthal angles with respect to the initial nucleon momentum, respectively. One of the $`\varphi _i`$ angles can be chosen to be distributed isotropically since we consider only the scattering of unpolarized photons and nucleons; all other variables are determined by the interaction physics through the differential cross sections. A distinguished role in the final state is played by the “leading baryon”, which is considered to carry the baryonic quantum numbers of the incoming nucleon. For this particle, the Lorentz invariant 4-momentum transfer $`t=(P_N-P_{\mathrm{final}})^2`$ is often used as a final state variable. At small $`s`$, many interaction channels can be reduced to 2-particle final states, for which $`d\sigma /dt`$ gives a complete description.
### 3.2 Interaction processes
Photon-proton interactions are dominated by resonance production at low energies. The incoming baryon is excited to a baryonic resonance due to the absorption of the photon. Such resonances have very short life times and decay immediately into other hadrons. Consequently, the $`N\gamma `$ cross section exhibits a strong energy dependence with clearly visible resonance peaks. Another process being important at low energy is the incoherent interaction of photons with the virtual structure of the nucleon. This process is called direct meson production. Eventually, at high interaction energies ($`\sqrt{s}>2\mathrm{GeV}`$) the total interaction cross section becomes approximately energy-independent, while the contributions from resonances and the direct interaction channels decrease. In this energy range, photon-hadron interactions are dominated by inelastic multiparticle production (also called multipion production).
#### 3.2.1 Baryon resonance excitation and decay
The energy range from the photopion threshold energy $`\sqrt{s_{\mathrm{th}}}\approx 1.08`$ GeV for $`\gamma N`$-interactions up to $`\sqrt{s}\approx 2`$ GeV is dominated by the process of resonant absorption of a photon by the nucleon with the subsequent emission of particles, i.e. the excitation and decay of baryon resonances. The cross section for the production of a resonance with angular momentum $`J`$ is given by the Breit-Wigner formula
$$\sigma _{\mathrm{bw}}(s;M,\mathrm{\Gamma },J)=\frac{s}{(s-m_N^2)^2}\frac{4\pi b_\gamma (2J+1)s\mathrm{\Gamma }^2}{(s-M^2)^2+s\mathrm{\Gamma }^2},$$
(4)
where $`M`$ and $`\mathrm{\Gamma }`$ are the nominal mass and the width of the resonance. $`b_\gamma `$ is the branching ratio for photo-decay of the resonance, which is identical to the probability of photoexcitation. The decay of baryon resonances is generally dominated by hadronic channels. The exclusive cross sections for the resonant contribution to a hadronic channel with branching ratio $`b_c`$ can be written as
$$\sigma _c(s;M,\mathrm{\Gamma },J)=b_c\sigma _{\mathrm{bw}}(s;M,\mathrm{\Gamma },J),$$
(5)
with $`\sum _cb_c=1-b_\gamma \approx 1`$. Most decay channels produce two-particle intermediate or final states, some of them again involving resonances. For the pion-nucleon decay channel, $`N\pi `$, the angular distribution of the final state is given by
$$\frac{d\sigma _{N\pi }}{d\mathrm{cos}\chi ^{}}\propto \underset{\lambda =-J}{\overset{J}{\sum }}\left|f_{\frac{1}{2},\lambda }^Jd_{\lambda ,\frac{1}{2}}^J(\chi ^{})\right|^2,$$
(6)
where $`\chi ^{}`$ denotes the scattering angle in the CMF and $`f_{\frac{1}{2},\lambda }^J`$ are the $`N\pi `$-helicity amplitudes. The functions $`d_{\lambda ,\frac{1}{2}}^J(\chi ^{})`$ are commonly used angular distribution functions which are defined on the basis of spherical harmonics. The $`N\pi `$ helicity amplitudes can be determined from the helicity amplitudes $`A_{\frac{1}{2}}`$ and $`A_{\frac{3}{2}}`$ for photoexcitation (see Ref. for details), which are measured for many baryon resonances . The same expression applies to other final states involving a nucleon and an isospin-0 meson (e.g., $`N\eta `$). For decay channels with other spin parameters, however, the situation is more complex, and we assume for simplicity an isotropic decay of the resonance.
Baryon resonances are distinguished by their isospin into $`N`$-resonances ($`I=\frac{1}{2}`$, as for the unexcited nucleon) and $`\mathrm{\Delta }`$-resonances ($`I=\frac{3}{2}`$). The charge branching ratios $`b_{\mathrm{iso}}`$ of the resonance decay follow from isospin symmetry. For example, the branching ratios for the decay into a two-particle final state involving a $`N`$\- or $`\mathrm{\Delta }`$-baryon and an $`I=1`$ meson ($`\pi `$ or $`\rho `$) are given in Table 1. Here $`\mathrm{\Delta }I_3`$ is the difference in the isospin 3-component of the baryon between initial and final state (the baryon charge is $`Q_B=I_3+\frac{1}{2}`$). In contrast to the strong decay channels, the electromagnetic excitation of the resonance does not conserve isospin. Hence, the resonance excitation strengths for $`p\gamma `$ and $`n\gamma `$ interactions are not related to each other by isospin symmetry and have to be determined experimentally.
#### 3.2.2 Direct pion production
Direct pion production can be considered as electromagnetic scattering by virtual charged mesons, which are the quantum–mechanical representation of the (color-neutral) strong force field around the baryon. The interacting virtual meson gains enough momentum to materialize. Experimentally the direct production of charged pions is observed as a relatively structureless background in the $`N\pi ^\pm `$ and $`\mathrm{\Delta }\pi ^\pm `$ final states in photon-nucleon interactions.
In terms of Feynman graphs, this process is represented by the $`t`$-channel exchange of a meson. Here, $`t`$ is the squared 4-momentum transfer from the initial to the final state baryon. The graph has a strong vertex at the baryon branch and an electromagnetic vertex for the photon interaction. At the strong vertex, the baryon may be excited and change its isospin. Isospin combination rules determine the iso-branching ratios in the same way as for resonance decay (Table 1 for $`I_{\mathrm{res}}=\frac{1}{2}`$). The presence of the electromagnetic vertex requires that the particle the photon couples to is charged. Thus direct processes with $`\mathrm{\Delta }I_3=0`$ branches (e.g. $`\gamma p\pi ^0p`$) are strongly suppressed.
The low energy structure of the direct cross section is not well constrained. At high energies, Regge theory for pion exchange implies that $`\sigma _{\mathrm{dir}}(s)\propto s^{-2}`$ . The angular distribution of the process is strongly forward peaked and can be parametrized for small $`|t|`$ by
$$\frac{d\sigma _{\mathrm{dir}}}{dt}\propto \mathrm{exp}(b_{\mathrm{dir}}t),$$
(7)
with an experimentally determined slope of $`b_{\mathrm{dir}}\approx 12\,\mathrm{GeV}^{-2}`$ .
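In an event generator, a forward-peaked distribution like Eq. (7) is conveniently sampled by inverse transform. A minimal sketch, where the kinematic cutoff $`t_{\mathrm{min}}=-2`$ GeV² is an arbitrary illustrative choice (physically, $`t_{\mathrm{min}}`$ follows from the scattering kinematics):

```python
import math, random

def sample_t(b=12.0, t_min=-2.0, rng=random):
    """Inverse-transform sample of the squared momentum transfer t
    (GeV^2, t <= 0) from dsigma/dt ~ exp(b*t), Eq. (7), restricted
    to the interval t_min <= t <= 0."""
    u = rng.random()
    e_min = math.exp(b * t_min)
    # invert the normalized CDF (exp(b*t) - e_min) / (1 - e_min) = u
    return math.log(e_min + u * (1.0 - e_min)) / b

random.seed(1)
mean_t = sum(sample_t() for _ in range(100000)) / 100000
print(mean_t)  # close to -1/b, since |t_min| >> 1/b here
```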
The total cross section for a direct scattering process scales roughly as $`m_t^{-2}`$, where $`m_t`$ is the (nominal) mass of the exchanged virtual particle. Therefore, the direct production of pions is dominant, while the contributions from the exchange of heavier mesons are suppressed. The same applies to direct reactions which involve the exchange of a virtual baryon ($`u`$-channel exchange). However, with increasing energy, more and more channels add to the direct cross section, and this makes an explicit treatment difficult.
#### 3.2.3 High energy processes
Phenomenologically, high energy interactions can be interpreted as reggeon and pomeron exchange processes. Both the reggeon and the pomeron are quasi-particles which correspond to sums of certain Feynman diagrams in the Regge limit ($`|t|\ll s`$) . The cross sections for reggeon and pomeron exchange have different but universal energy dependences and account for all of the total cross section at high energy. Many different Regge-theory-based parametrizations of the cross section are possible. Here we use a recent cross section fit based on the Donnachie-Landshoff model
$$\sigma _{\mathrm{reg}}\propto \left(\frac{s-m_p^2}{s_0}\right)^{-0.34},\qquad \sigma _{\mathrm{pom}}\propto \left(\frac{s-m_p^2}{s_0}\right)^{0.095},$$
(8)
with the reference scale $`s_0=1`$ GeV<sup>2</sup>.
Concerning high energy processes, it is convenient to distinguish between diffractive and non-diffractive interactions. Diffractive interactions are characterized by the production of very few secondaries along the direction of the incoming particles. They correspond to the quasi-elastic exchange of a reggeon or pomeron between virtual hadronic states of the photon (mainly the vector mesons $`\rho ^0`$, $`\omega `$, and $`\varphi `$) and the nucleon. Because of the spacelike nature of the interaction, the angular distribution is strongly forward peaked, and can be parametrized by Eq. (7) with an energy-dependent slope $`b_{\mathrm{diff}}=6\,\mathrm{GeV}^{-2}+0.5\,\mathrm{GeV}^{-2}\,\mathrm{ln}(s/s_0)`$ . At high energies, the cross section of diffractive interactions is approximately a constant fraction of the total cross section. The relative contribution of the different vector mesons is predicted by theory to be $`\rho ^0:\omega =9:1`$. The diffractive production of $`\varphi `$ or heavier mesons is neglected in SOPHIA.
Our treatment of non-diffractive multiparticle production is based on the Dual Parton Model . This model can be considered as a phenomenological realization of the expansion of QCD for large numbers of colors and flavours, in connection with general ideas of duality and Gribov's Regge theory . It provides a well developed basic scheme for the simulation of high energy hadronic interactions. The model can be visualized as follows: (i) the incoming nucleon and photon are split into colored quark and diquark constituents, (ii) in the course of the interaction these constituents exchange color quantum numbers, and (iii) confinement and the color field forces result in color strings which fragment to hadrons.
To relate the contributions of reggeon and pomeron exchange to parton configurations, we use the correspondence of their respective amplitudes to certain color flow topologies, which are shown in Fig. 2. The pomeron exchange topology involves the formation of two color-neutral strings, while in the case of a reggeon topology only one string is stretched from the diquark to the quark of the photon. The quark and diquark flavors at the string ends are determined by the spin and valence flavor statistics of the nucleon. For photons, the charge difference between $`u`$ and $`d`$ quarks increases the probability that the photon couples to a $`u`$-$`\overline{u}`$ pair instead of a $`d`$-$`\overline{d}`$ pair. In the model we use the theoretically predicted ratio of 4:1 between these two combinations.
The longitudinal momentum fractions $`x`$ and $`1-x`$ of the partons connected to the string ends are given by Regge asymptotics . One gets for the valence quark ($`x`$) and diquark ($`1-x`$) distribution inside the nucleon
$$\rho (x)\propto \frac{1}{\sqrt{x}}(1-x)^{1.5}$$
(9)
and for the quark antiquark distribution inside the photon
$$\rho (x)\propto \frac{1}{\sqrt{x(1-x)}}.$$
(10)
The relatively small transverse momenta of the partons at the string ends are neglected.
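Both momentum-fraction distributions are easy to sample; Eq. (10) even has an analytic inverse CDF. The sketch below shows one possible sampling strategy, not the SOPHIA implementation:

```python
import math, random

def sample_x_photon(rng=random):
    """Quark momentum fraction in the photon, rho(x) ~ 1/sqrt(x(1-x)),
    Eq. (10); inverse transform of the CDF (2/pi) * arcsin(sqrt(x))."""
    return math.sin(0.5 * math.pi * rng.random()) ** 2

def sample_x_nucleon(rng=random):
    """Valence quark fraction in the nucleon, rho(x) ~ x**-0.5 * (1-x)**1.5,
    Eq. (9); rejection sampling: propose x = v^2 (density ~ x**-0.5) and
    accept with probability (1-x)**1.5."""
    while True:
        x = rng.random() ** 2
        if rng.random() < (1.0 - x) ** 1.5:
            return x

random.seed(2)
xs = [sample_x_nucleon() for _ in range(100000)]
print(sum(xs) / len(xs))  # Eq. (9) is a Beta(1/2, 5/2) density with mean 1/6
```

The diquark fraction is then simply $`1-x`$ for each nucleon sample.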
In the string fragmentation process, the kinetic energy of the initial partons is reduced by creating new quark-antiquark pairs which are color field-connected to the parent partons. This process continues until the available kinetic energy drops below the particle production threshold; the newly produced quarks then combine with the valence quarks to form hadrons (see, for example, ).
## 4 Implementation
In this section, all numerical expressions for cross sections are given in units of $`\mu \mathrm{barn}`$, unless noted otherwise.
### 4.1 Method of cross section parametrization
The basic models of photohadronic interaction processes described in Sect. 3 provide robust theoretical predictions, which we use for the parametrization of cross sections and final state distribution functions. For theoretically unpredictable parameters we use, where possible, the estimates given in the Review of Particle Properties (RPP) . Remaining parameters are determined by fits to available exclusive and inclusive data on $`\gamma p`$ and $`\gamma n`$ interactions, as compiled in standard reference series (, and references therein). Since the parameters published in the RPP generally allow some variation within a given error range, these parameters are then optimized against the data in an iterative process, until reasonable agreement with the data is obtained for a large set of interaction channels.
This method was previously described by Rachen , but has been considerably improved in the development of SOPHIA. It provides a minimum-bias description of photoproduction that reproduces a large set of available data while reducing possible bias due to data selection, since data are used only to fix model parameters. Considering the intended applications of SOPHIA to (i) astrophysical problems and (ii) the determination of background spectra in high energy experiments, we put particular emphasis on a good representation of inclusive cross sections and average final state properties over a wide range of interaction energies, while a good representation of complex exclusive channels is generally not expected.
### 4.2 Resonance production
Using Eq. (4), the contribution to the cross section from a resonance with mass $`M`$ and width $`\mathrm{\Gamma }`$ can be written as a function of the NRF photon energy $`ϵ^{}`$ as
$$\sigma (ϵ^{})=\frac{s}{(ϵ^{})^2}\frac{\sigma _0\mathrm{\Gamma }^2s}{(s-M^2)^2+\mathrm{\Gamma }^2s}.$$
(11)
The reduced cross section $`\sigma _0`$ is entirely determined by the resonance angular momentum and the electromagnetic excitation strength $`b_\gamma `$. We selected all baryon resonances listed in the RPP with certain existence (overall status: ****) and a well determined minimal photo-excitation strength of $`b_\gamma >10^{-4}`$ for either the $`p\gamma `$ or the $`n\gamma `$ excitation. The resonances fulfilling these criteria and their parameters, as implemented in SOPHIA after iterative optimization, are given in Table 2. The phase-space reduction close to the $`N\pi `$ threshold is heuristically taken into account by multiplying Eq. (11) by the linear quenching function $`\mathrm{Qf}(ϵ^{};0.152,0.17)`$ for the $`\mathrm{\Delta }(1232)`$-resonance, and by $`\mathrm{Qf}(ϵ^{};0.152,0.38)`$ for all other resonances. The function $`\mathrm{Qf}(ϵ^{};ϵ_{\mathrm{th}}^{},w)`$ is defined in Appendix A. The quenching width $`w`$ has been determined from comparison with the data of the total $`p\gamma `$ cross section, and of the exclusive channels $`p\pi ^0`$, $`n\pi ^+`$ and $`\mathrm{\Delta }^{++}\pi ^-`$ where most of the resonances contribute.
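As an illustration of Eq. (11) combined with the quenching function, consider the sketch below; the value of $`\sigma _0`$ used here is an assumed round number, not the tuned Table 2 entry:

```python
M_N = 0.938  # nucleon mass in GeV

def Qf(eps, eps_th, w):
    """Linear quenching function of Appendix A."""
    if eps <= eps_th:
        return 0.0
    return min((eps - eps_th) / w, 1.0)

def sigma_res(eps, M, Gamma, sigma0, w):
    """Quenched resonance cross section, Eq. (11) multiplied by Qf;
    eps is the NRF photon energy (GeV), sigma0 carries the units."""
    s = M_N ** 2 + 2.0 * M_N * eps
    bw = (s / eps ** 2) * sigma0 * Gamma ** 2 * s / ((s - M ** 2) ** 2 + Gamma ** 2 * s)
    return bw * Qf(eps, 0.152, w)

# illustrative Delta(1232) numbers (sigma0 = 31 mubarn is assumed here,
# not the Table 2 entry); the peak sits at s = M^2
eps_res = (1.232 ** 2 - M_N ** 2) / (2.0 * M_N)
print(sigma_res(eps_res, 1.232, 0.115, 31.0, 0.17))
```

At the peak the Breit-Wigner factor reduces to $`(s/ϵ^2)\sigma _0`$, so $`\sigma _0`$ directly sets the peak height.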
The major hadronic decay channels of these baryon resonances are $`N\pi `$, $`\mathrm{\Delta }\pi `$, and $`N\rho `$; for the $`N(1535)`$, there is also a strong decay into $`N\eta `$, and the $`N(1650)`$ contributes to the $`\mathrm{\Lambda }K`$ channel. The hadronic decay branching ratios $`b_c`$ are all well determined for these resonances and given in the RPP. However, a difficulty arises from the fact that branching ratios can be expected to be energy dependent because of the different masses of the decay products in different branches. In SOPHIA, we consider all secondary particles, including hadronic resonances, as particles of a fixed mass. This implies that, for example, the decay channel $`\mathrm{\Delta }\pi `$ is energetically forbidden for $`\sqrt{s}<m_\mathrm{\Delta }+m_\pi \approx 1.37`$ GeV. To accommodate this problem, we have developed a scheme of energy dependent branching ratios, which change at the thresholds for additional decay channels and are constant in between. The requirements are that (i) the branching ratio $`b_c=0`$ for $`ϵ^{}<ϵ_{\mathrm{th},c}^{}`$, and (ii) the average of the branching ratio over energy, weighted with the Breit-Wigner function, correspond to the average branching ratio given in the RPP for this channel. For all resonances, we considered no more than three decay channels, which leads to a unique solution of this scheme. No fits to data are required. In practice, however, the experimental error on many branching ratios allows for some freedom, which we have used to generate a scheme that optimizes the agreement with the data on different exclusive channels.
The hadronic branching ratios are given in Tab. 4 in Appendix B. To obtain the contribution to a channel with given particle charges, e.g. $`\mathrm{\Delta }^{++}\pi ^-`$, the hadronic branching ratio $`b_{\mathrm{\Delta }\pi }`$ has to be multiplied by the iso-branching ratios given in Tab. 1. We note that with the parameters $`b_\gamma `$, $`b_c`$ and $`b_{\mathrm{iso}}`$, the resonant contribution to all exclusive decay channels is completely determined.
The angular decay distributions for the resonances follow from Eq. (6). In SOPHIA, the kinematics of the decay channels into $`N\pi `$ is implemented in full detail (see Tab. 3). For other decay channels, we assume isotropic decay according to the phase space. Furthermore, there might be some mixing of the different scattering angular distributions since the sampled resonance mass, in general, does not coincide with its nominal mass. This effect is neglected in our work. Instead, we use the angular distributions applying to resonance decay at its nominal mass $`M`$.
The two decay products of a resonance may also decay subsequently. This decay is simulated to occur isotropically according to the available phase space.
### 4.3 Direct pion production
The cross section for direct meson production, unlike that for resonance production, is not completely determined by well-known parameters. The low and high energy constraints suggest the phenomenological parametrization
$$\sigma _{\mathrm{dir}}(ϵ^{})=\sigma _{\mathrm{max}}\mathrm{Pl}(ϵ^{};ϵ_{\mathrm{th}}^{},ϵ_{\mathrm{max}}^{},2),$$
(12)
where the function $`\mathrm{Pl}(ϵ^{};ϵ_{\mathrm{th}}^{},ϵ_{\mathrm{max}}^{},\alpha )`$ approaches zero for $`ϵ^{}=ϵ_{\mathrm{th}}^{}`$, goes through a maximum at $`ϵ^{}=ϵ_{\mathrm{max}}^{}`$, and follows the asymptotic behaviour $`(ϵ^{})^{-\alpha }`$. The definition of this function is given in Appendix A.
In SOPHIA, we consider explicitly direct channels with charged pion exchange, which dominate at low energies. The selection is further constrained by the fact that sufficient data are available only for the channels $`p\gamma \to n\pi ^+`$, $`n\gamma \to p\pi ^-`$, and $`p\gamma \to \mathrm{\Delta }^{++}\pi ^-`$. We note that proton and neutron induced direct reactions are strictly isospin-symmetric, so both proton and neutron data sets (when available) can be used in the fitting procedure. The high energy fits to the data for the $`\mathrm{\Delta }\pi `$ and $`N\pi `$ channels, i.e. $`\sigma _\pi \approx 18(ϵ^{})^{-2}`$ and $`\sigma _\mathrm{\Delta }\approx 26.4(ϵ^{})^{-2}`$ for $`ϵ^{}>1`$ GeV, were primarily used to fix $`\sigma _{\mathrm{max}}`$, while a best fit of $`ϵ_{\mathrm{max}}^{}`$ was obtained by comparing with the residuals of the low energy data after subtracting the resonance contribution. The adopted cross sections are
$$\sigma _{N\pi }(ϵ^{})=92.7\,\mathrm{Pl}(ϵ^{};0.152,0.25,2)+40\,\mathrm{exp}\left(-\frac{(ϵ^{}-0.29)^2}{0.002}\right)-15\,\mathrm{exp}\left(-\frac{(ϵ^{}-0.37)^2}{0.002}\right),$$
(13)
$$\sigma _{\mathrm{\Delta }\pi }(ϵ^{})=37.7\,\mathrm{Pl}(ϵ^{};0.4,0.6,2).$$
(14)
The two Gaussian-shaped functions included in the direct $`N\pi `$ cross section have been added to improve the representation of the total cross section in the energy region $`0.152`$ GeV $`<ϵ^{}<0.4`$ GeV, where otherwise only the well constrained $`\mathrm{\Delta }(1232)`$ resonance contributes significantly. For $`p\gamma `$\- ($`n\gamma `$-) interactions $`\sigma _{\mathrm{\Delta }\pi }`$ contributes to the $`\mathrm{\Delta }^{++}\pi ^-`$ ($`\mathrm{\Delta }^-\pi ^+`$) and $`\mathrm{\Delta }^0\pi ^+`$ ($`\mathrm{\Delta }^+\pi ^-`$) final states with a ratio 3:1 according to isospin combination rules (see Tab. 1).
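For reference, the direct cross sections translate directly into code once the template function Pl of Appendix A is available; the sketch below restates Pl inline so that it is self-contained:

```python
import math

def Pl(eps, eps_th, eps_max, alpha):
    """Template function of Appendix A: zero at eps_th, maximum value 1
    at eps_max, asymptotic behaviour ~ eps**(-alpha)."""
    if eps <= eps_th:
        return 0.0
    A = alpha * eps_max / eps_th
    return (((eps - eps_th) / (eps_max - eps_th)) ** (A - alpha)
            * (eps / eps_max) ** (-A))

def sigma_Npi_direct(eps):
    """Direct N pi cross section, Eq. (13), in mubarn; eps in GeV."""
    return (92.7 * Pl(eps, 0.152, 0.25, 2.0)
            + 40.0 * math.exp(-(eps - 0.29) ** 2 / 0.002)
            - 15.0 * math.exp(-(eps - 0.37) ** 2 / 0.002))

def sigma_Dpi_direct(eps):
    """Direct Delta pi cross section, Eq. (14), in mubarn."""
    return 37.7 * Pl(eps, 0.4, 0.6, 2.0)

print(sigma_Npi_direct(0.25), sigma_Dpi_direct(0.6))
```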
By comparison with the total cross section data we find that the resonant and direct interaction channels account for all of the total interaction cross section below the $`3\pi `$ threshold at $`ϵ^{}\approx 0.51`$ GeV. Above this threshold, and below the threshold for diffractive interactions at $`ϵ^{}\approx 1`$ GeV, where high energy processes set in, we find a residual cross section which can be parametrized as
$$\sigma _{\mathrm{lf}}=80.3\,(60.2)\,\mathrm{Qf}(ϵ^{};0.51,0.1)\,(ϵ^{})^{-0.34},$$
(15)
where the number 60.2 in brackets refers to $`n\gamma `$–interactions and 80.3 to $`p\gamma `$–collisions. The normalization cross section and the quenching width have been determined by a $`\chi ^2`$ minimization against the total cross section data for $`p\gamma `$ ($`n\gamma `$) interactions after subtraction of the respective resonant and direct contributions. The power law index for this contribution is taken, by analogy, from the high energy parametrization for reggeon exchange (note that $`ϵ^{}\propto s-m_N^2`$). Physically, this cross section represents the joint contribution of all $`t`$-channel scattering processes at low energies not considered so far, which is in principle similar to interactions at high energies. Consequently, we use an adapted string fragmentation model to simulate this contribution, and refer to it as low energy fragmentation hereafter.
### 4.4 High energy multipion production
In SOPHIA, we assume that the cross sections for diffractive and non-diffractive high energy interactions are proportional to each other at all energies. This assumption fixes the threshold for high energy interactions to the threshold of the $`N\rho `$ final state, which is nominally at $`ϵ^{}\approx 1.1`$ GeV. Because of the large width of the $`\rho `$ there should be some contribution also at lower energies. From a comparison of exclusive data on the $`N\rho `$ final state, and of the residuals of the total cross section data after subtracting the contributions of all low energy channels, we find a common threshold for high energy interactions of $`ϵ_{\mathrm{th},\mathrm{high}}^{}=0.85`$ GeV.
We restrict the diffractive channel to the non-resonant production of $`N\rho `$ and $`N\omega `$ final states, for which we assume the theoretically predicted relation $`\sigma _\rho =9\sigma _\omega `$. The ratio between diffractive and non-diffractive interactions is derived from the comparison with exclusive $`N\rho `$ data and total cross section data at high energy,
$$\sigma _{\mathrm{diff}}=0.15\sigma _{\mathrm{frag}}.$$
(16)
For the parametrization of $`\sigma _{\mathrm{frag}}`$, we use the power law representations of the reggeon and pomeron exchange cross section at high energies, and multiply them by an exponential quenching function $`1-\mathrm{exp}([ϵ_{\mathrm{th},\mathrm{high}}^{}-ϵ^{}]/a)`$. The relative contributions of the reggeon and pomeron cross sections, and the quenching parameter $`a`$, have been determined by an iterative $`\chi ^2`$ minimization method with respect to the total $`p\gamma `$ ($`n\gamma `$) cross section data after subtraction of all low energy contributions. We find
$$\sigma _{\mathrm{frag}}(ϵ^{})=\left[1-\mathrm{exp}\left(-\frac{ϵ^{}-0.85}{0.69}\right)\right]\left[28.8\,(26.0)\,(ϵ^{})^{-0.34}+58.3\,(ϵ^{})^{0.095}\right],$$
(17)
where we have used the high-energy behaviour given by Eq. (8).
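Eq. (17), together with the diffractive fraction of Eq. (16), translates directly into code; a minimal sketch:

```python
import math

def sigma_frag(eps, proton=True):
    """Non-diffractive multiparticle production, Eq. (17), in mubarn;
    eps is the NRF photon energy in GeV (26.0 replaces 28.8 for n-gamma)."""
    if eps <= 0.85:
        return 0.0
    quench = 1.0 - math.exp(-(eps - 0.85) / 0.69)
    reggeon = (28.8 if proton else 26.0) * eps ** (-0.34)
    pomeron = 58.3 * eps ** 0.095
    return quench * (reggeon + pomeron)

def sigma_diff(eps, proton=True):
    """Diffractive N rho / N omega production, Eq. (16)."""
    return 0.15 * sigma_frag(eps, proton)

print(sigma_frag(10.0), sigma_frag(100.0))  # slow rise at high energy (pomeron term)
```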
The string fragmentation is done with the Lund Monte Carlo Jetset 7.4 . This program is well suited to string fragmentation at high energies. Since for our purposes strings with rather small invariant masses also have to be hadronized, several parameters of this fragmentation code had to be tuned to obtain a reasonable description at low energies as well. Furthermore, in order to avoid double counting, all final states identical to the processes already considered by resonance production and direct interactions are rejected (note that this is not the case for low-energy fragmentation).
### 4.5 Initial state kinematics and photon radiation fields
The probability for interaction of a proton of energy $`E_N`$ with a photon of energy $`ϵ`$ from a radiation field with the photon density $`n(ϵ)`$ reads
$$𝒫(ϵ)=\frac{1}{R(E_N)}\frac{n(ϵ)}{8E_N^2\beta ϵ^2}\int _{s_{\mathrm{th}}}^{s_{\mathrm{max}}}ds\,(s-m_N^2)\,\sigma _{N\gamma }(s),$$
(18)
where $`R(E_N)`$ is the interaction rate given in Eq. (2), which also defines $`s_{\mathrm{th}}`$ and $`s_{\mathrm{max}}`$.
For a fixed nucleon energy, the CMF energy is sampled from the distribution
$$𝒫(s)=\mathrm{\Phi }^{-1}(s-m_N^2)\,\sigma _{N\gamma }(s)$$
(19)
with $`\mathrm{\Phi }=\int _{s_{\mathrm{th}}}^{s_{\mathrm{max}}}ds\,(s-m_N^2)\,\sigma _{N\gamma }(s)`$. The interaction angle follows from
$$\mathrm{cos}\theta =\frac{1}{\beta }\left(\frac{m_N^2-s}{2E_Nϵ}+1\right).$$
(20)
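The sampling of $`s`$ from Eq. (19) and the reconstruction of the interaction angle from Eq. (20) can be sketched as follows; the grid-based CDF inversion and the toy constant cross section are illustrative simplifications, not the SOPHIA implementation:

```python
import math, random

M_N = 0.938  # nucleon mass in GeV

def sample_s(E_N, eps, sigma, n_grid=512, rng=random):
    """Sample the squared CMF energy s from P(s) ~ (s - m_N^2) * sigma(s),
    Eq. (19), by tabulating the CDF on a grid between s_th and s_max
    (s_max corresponds to a head-on collision).  sigma is any callable;
    a toy constant stands in for the full parametrization here."""
    beta = math.sqrt(1.0 - (M_N / E_N) ** 2)
    s_th = (M_N + 0.135) ** 2                      # N pi threshold
    s_max = M_N ** 2 + 2.0 * E_N * eps * (1.0 + beta)
    grid = [s_th + (s_max - s_th) * (i + 0.5) / n_grid for i in range(n_grid)]
    cdf, total = [], 0.0
    for s in grid:
        total += (s - M_N ** 2) * sigma(s)
        cdf.append(total)
    u = rng.random() * total
    for s, c in zip(grid, cdf):
        if u <= c:
            return s
    return grid[-1]

def cos_theta(E_N, eps, s):
    """Interaction angle from Eq. (20)."""
    beta = math.sqrt(1.0 - (M_N / E_N) ** 2)
    return ((M_N ** 2 - s) / (2.0 * E_N * eps) + 1.0) / beta

# e.g. a 10^20 eV proton on a 1 eV ambient photon
random.seed(3)
s = sample_s(1.0e11, 1.0e-9, lambda x: 1.0)
print(s, cos_theta(1.0e11, 1.0e-9, s))
```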
Currently black body, power law, and broken power law radiation spectra are implemented in SOPHIA. The photon density $`n(ϵ)`$ for a blackbody radiation field of temperature $`T`$ is given in natural units by
$$n(ϵ)=\frac{1}{\pi ^2}\frac{ϵ^2}{\mathrm{exp}\left(\frac{ϵ}{kT}\right)-1},$$
(21)
where $`k`$ is the Boltzmann constant. For a power law photon spectrum the photon density is given by $`n(ϵ)=ϵ^{-\alpha }`$. The broken power law photon spectrum is given by
$`n(ϵ)=`$ $`ϵ^{-\alpha _1}`$ $`\text{for }ϵ<ϵ_b`$ (22)
$`n(ϵ)=`$ $`ϵ_b^{\alpha _2-\alpha _1}ϵ^{-\alpha _2}`$ $`\text{for }ϵ>ϵ_b`$ (23)
where $`ϵ_b`$ is the break energy, and $`\alpha _1`$, $`\alpha _2`$ are the photon indices below and above the break energy, respectively. Note that no absolute normalization of $`n(ϵ)`$ is necessary since it cancels in the definition of $`𝒫(ϵ)`$.
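The implemented photon densities can be sketched as follows (unnormalized, since the normalization cancels in $`𝒫(ϵ)`$); the break energy of 1 keV in the example is an arbitrary illustrative choice:

```python
import math

K_B = 8.617e-14  # Boltzmann constant in GeV/K

def n_blackbody(eps, T):
    """Blackbody photon number density of Eq. (21), natural units; eps in GeV."""
    return eps ** 2 / (math.pi ** 2 * (math.exp(eps / (K_B * T)) - 1.0))

def n_broken_pl(eps, alpha1, alpha2, eps_b):
    """Broken power law of Eqs. (22)-(23); the eps_b**(alpha2 - alpha1)
    prefactor makes the two branches continuous at the break."""
    if eps < eps_b:
        return eps ** (-alpha1)
    return eps_b ** (alpha2 - alpha1) * eps ** (-alpha2)

# continuity across the break at eps_b = 1e-6 GeV (1 keV)
print(n_broken_pl(0.999e-6, 1.0, 2.0, 1.0e-6), n_broken_pl(1.001e-6, 1.0, 2.0, 1.0e-6))
```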
## 5 Structure of the program
### 5.1 Source code
The SOPHIA source code consists of several files, each containing a number of routines:
* the main program, containing the routines which organize the various tasks to be performed; the input is also handled here, and some kinematic transformations needed as input to several routines are performed;
* the initialization routine for parameter settings;
* a collection of routines/functions needed for sampling the CMF energy squared $`s`$ and the photon energy $`ϵ`$ in the observer frame;
* the event generator for photomeson production in $`p\gamma `$ and $`n\gamma `$ collisions;
* the output routines.
### 5.2 The event generator
The simulation of the final state is performed by the photopion production event generator EVENTGEN. Together with the initialization routine INITIAL, EVENTGEN can be used separately for Monte Carlo event simulation. The user has to provide the nucleon code number L0, the nucleon energy E0 (in GeV), the photon energy $`ϵ`$ (in GeV), and the interaction angle $`\theta `$ in degrees.
EVENTGEN is structured as follows. First, the momenta of the incident particles are Lorentz-transformed into the CMF of the interaction. The cross section (in $`\mu `$barn) is calculated in the function CROSSECTION. The cross sections of the various channels considered in this code determine the probability distribution over the possible processes. The sampling of a process (resonance decay, direct channel, diffractive scattering, fragmentation) at a given NRF energy of the photon is carried out in the routine DEC\_INTER3.
For the resonance decay we sample the resonance at this energy using the Breit-Wigner formula as a probability distribution for a specific resonance (performed in DEC\_RES2). Its branching ratios define the decay mode (in DEC\_PROC2). The subsequent two-particle decay in the CM frame is carried out in RES\_DECAY3, and the decays of all unstable particles are then carried out in the SIBYLL routine DECSIB.
Secondary particle production is simulated in GAMMA\_H for direct and multiparticle production.
Finally, all final state particles are Lorentz-transformed back to the LF.
The output of the final states is organized in the common block /S\_PLIST/ P(2000,5), LLIST(2000), NP, Ideb. Here the array P(i,j) contains the 4-momenta and rest masses of the final state particles i in Cartesian coordinates (P(i,1) = $`P_x`$, P(i,2) = $`P_y`$, P(i,3) = $`P_z`$, P(i,4) = energy, P(i,5) = rest mass). LLIST() gives the code numbers of all final state particles, and NP is the number of stable final state particles.
### 5.3 Input/Output routines
Using the standard main program, the user provides the following input parameters.
* E0 = energy of incident proton (in GeV), or
* Emin, Emax = low/high energy cutoff of an energy grid of incident protons (in GeV)
* L0 = code number of the incident nucleon (L0 = 13: proton, L0 = 14: neutron)
* ambient photon field:
+ blackbody spectrum: TBB = temperature (in K)
+ straight/broken power law spectrum: ALPHA1, ALPHA2 = power law indices, EPSMIN = low energy cut off (in eV), EPSMAX = high energy cut off (in eV), EPSB = break energy (in eV)
* NTRIAL = number of inelastic interactions
* NBINS = number of bins for output particle spectra ($`\le 200`$ bins)
* DELX = stepsize of output particle spectra
For the calculation of the incident particle momenta we assume that the relativistic nucleon is moving along the positive $`z`$-axis.
The output is organized as follows. All the energy distributions $`(1/N_{\mathrm{eve}})dN_{\mathrm{part}}/d\mathrm{log}x`$ of produced particles are given with logarithmically equal bin sizes in the scaling variable $`x=E_{\mathrm{part}}/E0`$. Here $`N_{\mathrm{eve}}`$ denotes the number of simulated inelastic events and $`N_{\mathrm{part}}`$ is the number of secondary particles of a certain kind. The spectra of photons, protons, neutrons, electron neutrinos and antineutrinos, muon neutrinos and antineutrinos, electrons, and positrons are considered separately. They are stored in a file with the name xxxxxx.particle, with xxxxxx = input name (chosen by the user):
| particle = | ’gamma’ | $``$ | $`\gamma `$ | spectrum |
| --- | --- | --- | --- | --- |
| | ’muneu’ | $``$ | $`\nu _\mu `$ | spectrum |
| | ’muane’ | $``$ | $`\overline{\nu }_\mu `$ | spectrum |
| | ’e\_neu’ | $``$ | $`\nu _e`$ | spectrum |
| | ’eaneu’ | $``$ | $`\overline{\nu }_e`$ | spectrum |
| | ’elect’ | $``$ | $`e^{}`$ | spectrum |
| | ’posit’ | $``$ | $`e^+`$ | spectrum |
| | ’proto’ | $``$ | $`p`$ | spectrum |
| | ’neutr’ | $``$ | $`n`$ | spectrum |
The structure of a typical output file is:
| 1. line: | high/low energy cutoff of incident nucleon energy grid, |
| --- | --- |
| | number of energy bins of incident nucleon spectrum (=NINC), |
| | TBB or ALPHA1, ALPHA2, EPSMIN, EPSB, EPSMAX, incident particles |
| 2. line: | 1. number: | energy of incident nucleon |
| --- | --- | --- |
| | 2. number: | first number (=$`a`$) of non-zero energy bin of |
| | | particle spectrum |
| | 3. number: | last number (=$`b`$) of non-zero energy bin of |
| | | particle spectrum |
| 3. line: | particle spectrum | |
| | between bins $`a`$ and $`b`$ | |
## 6 Comparison to data
### 6.1 Total cross section
Fig. 3 (upper panel) shows the total cross section for $`\gamma p`$-interactions with the various contributions considered in SOPHIA. For simplicity, we show both fragmentation contributions together (low energy fragmentation and non-diffractive multipion production). The resonant process $`\gamma p\to \mathrm{\Delta }^+\to \pi ^0p`$ is the only one kinematically possible directly at the particle production threshold. Above the $`\pi ^+n`$ threshold, at very low energies ($`ϵ^{}<0.25\,\mathrm{GeV}`$), the direct channel $`\gamma p\to \pi ^+n`$ constitutes the largest contribution.
To assess the quality of the cross section parametrization, the differences between the experimental data on the total $`\gamma p`$ cross section and the cross section fit are shown in Fig. 3 (lower panel). The total $`\gamma n`$ cross section is overall similar, except at energies of about $`\sqrt{s}\approx 1.680`$ GeV ($`ϵ^{}\approx 1.035`$ GeV), where it is considerably smaller because of the different excitation strengths of the resonances at this energy.
### 6.2 Exclusive cross sections
Figs. 4 to 7 compare the output of SOPHIA with the data on specific final states as a function of the interaction energy. Such comparisons are important for models that aim to represent photohadronic interactions correctly over a wide energy range.
Fig. 4 compares the total cross sections for $`\pi ^0p`$ and $`\pi ^+n`$ production with experimental data. The major contributions come from the $`\mathrm{\Delta }`$(1232) resonance and the direct channel together with minor contributions from other resonances. The agreement with data in the threshold region is of great importance for many astrophysical applications where this is the dominating energy range in the case of steep proton and ambient photon spectra.
Fig. 5 compares the calculated and measured cross sections for final states involving charged and neutral pions. The general agreement of the model results with data is quite good, although there are some energy ranges that show minor deviations. It is important to note that it is difficult to fit exactly the experimental data without any detailed knowledge of the experimental setups and acceptance constraints, especially in cases like $`\gamma p\pi ^+\pi ^{}p`$ \+ neutrals, where the final state is not well defined.
The numbers of inelastic $`\gamma p`$ events with 1, 3, and 5 charged particles in the final state are shown in Fig. 6. This comparison is very sensitive to the description of multipion production processes as well as to the smooth transition between different particle production processes. It shows that the different channels are reasonably well modeled in SOPHIA.
Fig. 7 shows reaction cross sections for final states with equal numbers of pions, but having a different isospin of the produced nucleon, together with fixed-target data. This comparison is important for the correct simulation of the ratio of the proton to neutron numbers produced in $`\gamma p`$ collisions. Again the model results agree well with data.
### 6.3 Inclusive distributions
The rapidity distribution tests the kinematics of our simulations. The rapidity of a final state particle with energy $`E`$ is defined as
$$y=\frac{1}{2}\mathrm{ln}\left(\frac{E+p_{\parallel }}{E-p_{\parallel }}\right)=\mathrm{ln}\left(\frac{E+p_{\parallel }}{m_{\perp }}\right),$$
(24)
where $`p_{\parallel }`$ is its momentum component along the direction of the incoming particle. The transverse mass $`m_{\perp }`$ follows from $`m_{\perp }^2=E^2-p_{\parallel }^2`$. Rapidity is additive under Lorentz transformations, which keeps the rapidity distribution invariant under such transformations.
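The additivity can be checked numerically: a boost along the beam axis shifts every rapidity by the same particle-independent constant $`\mathrm{artanh}(\beta )`$, leaving the shape of the distribution unchanged. A small sketch:

```python
import math

def rapidity(E, p_par):
    """Rapidity of Eq. (24); p_par is the momentum along the beam axis."""
    return 0.5 * math.log((E + p_par) / (E - p_par))

def boost_z(E, p_par, beta):
    """Lorentz boost of (E, p_par) along the beam axis with velocity beta."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (E + beta * p_par), gamma * (p_par + beta * E)

# a 5 GeV pion, boosted with beta = 0.6
m_pi = 0.13957
E, pz = 5.0, math.sqrt(5.0 ** 2 - m_pi ** 2)
Eb, pzb = boost_z(E, pz, 0.6)
print(rapidity(Eb, pzb) - rapidity(E, pz))  # artanh(0.6) ~ 0.693
```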
Moffeit et al. have measured the rapidity distribution for the interaction $`\gamma p\to \pi ^-+`$ anything at three different beam energies $`ϵ^{}`$ = 2.8, 4.7 and 9.3 GeV. In Fig. 8 we compare the calculated rapidity distributions to these data. The agreement in both the width and the height of the distributions is good.
### 6.4 Multiplicities
In astrophysical environments the production of neutrinos results mainly from the decay of charged secondary pions ($`\pi ^+\to e^+\nu _\mu \overline{\nu }_\mu \nu _e`$, $`\pi ^-\to e^-\nu _\mu \overline{\nu }_\mu \overline{\nu }_e`$). Most of the gamma rays are produced via neutral pion decay ($`\pi ^0\to \gamma \gamma `$) and via synchrotron/Compton emission from the resulting relativistic $`e^-/e^+`$. Pion multiplicities (see Figs. 9, 10) are therefore instructive for understanding how the initial energy is distributed between the $`\nu `$- and $`\gamma `$-components. For non-astrophysical applications the charged pion multiplicity is a basic interaction parameter that provides a cumulative measure of many interaction channels.
In the resonance region the maximum of the neutral pion multiplicity is reached at the $`\mathrm{\Delta }(1232)`$-resonance (see Fig. 9). At threshold, neutral pion production is strongly suppressed in favour of $`\pi ^+`$ production (see Fig. 10), owing to the dominance of the direct channel. In the multipion production region at low energies, multiplicities grow approximately as $`s^{1/4}`$. The multiplicity distributions obtained from our simulations agree with the data of Moffeit et al. .
By counting the numbers of protons and neutrons produced in $`\gamma p`$ interactions, one can define a proton-to-neutron ratio. The SOPHIA prediction for the energy dependence of this ratio is shown in Fig. 11. The $`p/n`$ ratio reaches $`\approx 2.2`$ at high energy, which can be contrasted with the experimentally estimated value of about $`3.8`$ found by Meyer . The ratio derived in is essentially based on the same data as those to which our model is compared; Meyer estimated the experimentally unknown cross sections by isospin symmetry arguments, whereas in our case these cross sections are predicted by the Monte Carlo simulation. We conclude that the unmeasured cross sections are important for the $`p/n`$ ratio, and that the difference between the values 2.2 and 3.8 reflects the uncertainty due to the limited experimental data available.
## 7 Conclusions
The newly developed Monte Carlo event generator SOPHIA has been presented. It simulates the interactions of nucleons with photons over a wide range of energies. The simulation of the final state includes all interaction processes relevant to astrophysical applications. SOPHIA also contains the tools needed for such applications, for example for sampling the photon energy from different ambient soft-photon spectra and for sampling the nucleon–photon interaction angle. As an event generator, SOPHIA uses all available information on the interaction cross section, the final-state particle composition, and the kinematics of the interaction processes as provided by particle physics. Comparison with the available accelerator data shows that SOPHIA provides a good description of our current knowledge of photon–nucleon interactions.
## Acknowledgements
The work of AM and RJP is supported by the Australian Research Council. RE and TS acknowledge the support by the U.S. Department of Energy under Grant Number DE FG02 01 ER 40626. TS is also supported in part by the NASA grant NAG5–7009. The contribution of JPR was supported by NASA NAG5–2857 and by the EU-TMR network “Astro-Plasma-Physics” under the contract number ERBFMRX-CT98-0168.
## Appendix A Definition of functions
The functions $`\mathrm{Pl}(ϵ^{\prime },ϵ_{\mathrm{th}}^{\prime },ϵ_{\mathrm{max}}^{\prime },\alpha )`$ and $`\mathrm{Qf}(ϵ^{\prime },ϵ_{\mathrm{th}}^{\prime },w)`$ are defined by
$$\mathrm{Pl}(ϵ^{\prime },ϵ_{\mathrm{th}}^{\prime },ϵ_{\mathrm{max}}^{\prime },\alpha )=\left(\frac{ϵ^{\prime }-ϵ_{\mathrm{th}}^{\prime }}{ϵ_{\mathrm{max}}^{\prime }-ϵ_{\mathrm{th}}^{\prime }}\right)^{A-\alpha }\left(\frac{ϵ^{\prime }}{ϵ_{\mathrm{max}}^{\prime }}\right)^{-A}$$
(25)
for $`ϵ^{\prime }>ϵ_{\mathrm{th}}^{\prime }`$, and $`\mathrm{Pl}(ϵ^{\prime },ϵ_{\mathrm{th}}^{\prime },ϵ_{\mathrm{max}}^{\prime },\alpha )=0`$ otherwise, with $`A=\alpha ϵ_{\mathrm{max}}^{\prime }/ϵ_{\mathrm{th}}^{\prime }`$, and
$$\mathrm{Qf}(ϵ^{\prime },ϵ_{\mathrm{th}}^{\prime },w)=(ϵ^{\prime }-ϵ_{\mathrm{th}}^{\prime })/w$$
(26)
for $`ϵ_{\mathrm{th}}^{\prime }<ϵ^{\prime }<w+ϵ_{\mathrm{th}}^{\prime }`$, $`\mathrm{Qf}(ϵ^{\prime },ϵ_{\mathrm{th}}^{\prime },w)=0`$ for $`ϵ^{\prime }\le ϵ_{\mathrm{th}}^{\prime }`$, and $`\mathrm{Qf}(ϵ^{\prime },ϵ_{\mathrm{th}}^{\prime },w)=1`$ for $`ϵ^{\prime }\ge w+ϵ_{\mathrm{th}}^{\prime }`$.
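For reference, the two auxiliary functions translate directly into code; the following is a minimal Python sketch (function and argument names are ours), assuming the exponents A−α and −A with A = αε′<sub>max</sub>/ε′<sub>th</sub> as written above:

```python
def Pl(eps, eps_th, eps_max, alpha):
    """Eq. (25): zero below threshold; with this choice of A the function
    is stationary at eps = eps_max, where it takes the value 1."""
    if eps <= eps_th:
        return 0.0
    A = alpha * eps_max / eps_th
    return ((eps - eps_th) / (eps_max - eps_th)) ** (A - alpha) * (eps / eps_max) ** (-A)


def Qf(eps, eps_th, w):
    """Eq. (26): linear ramp from 0 to 1 over a width w above threshold."""
    if eps <= eps_th:
        return 0.0
    if eps >= eps_th + w:
        return 1.0
    return (eps - eps_th) / w
```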
## Appendix B Resonance branching ratios
## Appendix C Compilation of routines/functions
calculates cross section (in $`\mu `$barn) of a resonance with width GAMMA (in GeV), mass DMM (in GeV), maximum cross section SIGMA\_0 (in $`\mu `$barn) and NRF energy of the photon (in GeV) according to the Breit-Wigner formula
collection of functions (SINGLEBACK, TWOBACK) which calculate the cross section (at the NRF energy EPS\_PRIME of the incident photon) of the direct channel (not isospin-corrected)
computes the cross section (in $`\mu `$barn) for $`N\gamma `$-interaction at a given energy EPS\_PRIME (= photon energy in GeV in the proton’s rest frame); depending on the control variable NDIR it returns the total cross section (NDIR=3) or only a certain part of the cross section (NDIR=1: total resonance cross section, NDIR=4: direct channel cross section, NDIR=5: fragmentation cross section, NDIR=11-19: individual resonance cross sections)
returns reaction mode: decay of resonance (IMODE=6), direct pion production (IMODE=2 for $`N\pi `$ final states, IMODE=3 for $`\mathrm{\Delta }\pi `$ final states), fragmentation in resonance region (IMODE=5), diffractive scattering (IMODE=1 for $`N\rho `$ final states, IMODE=4 for $`N\omega `$ final states) and multipion production/fragmentation (IMODE=0) at a given energy EPS\_PRIME (in GeV)
returns in IPROC the decay mode for a given resonance IRES at energy EPS\_PRIME (in GeV) for incident nucleon with code number L0; IRANGE is the number of energy intervals corresponding to the (energy dependent) branching ratios of a specific resonance
returns sampled resonance number IRES (IRES = 1 …IRESMAX) at energy EPS\_PRIME (in GeV) for incident nucleon with code number L0
decay of unstable particles
calculates distribution of the squared CMF energy S (in GeV<sup>2</sup>)
interface routine for hadron-$`\gamma `$ collisions in CM frame;
E0 is CMF energy of the $`N\gamma `$-system; first particle ($`\gamma `$ or hadron $`N`$) goes in $`+z`$-direction; final state consists of protons, neutrons, gamma, leptons, neutrinos
initialization routine for parameter settings;
to be called before calling EVENTGEN
interface to Jetset fragmentation of system with CMF energy SQS (in GeV);
NP = number of secondaries produced
returns photon density (in photons/cm<sup>3</sup>/eV) at energy EPS (in eV) of blackbody radiation of temperature TBB (in K)
probability distribution for scattering angle of given resonance IRES and incident nucleon with code number L0 ;
ANGLESCAT is the cosine of scattering angle in CMF frame
calculates probability distribution for thermal photon field with temperature TBB (in K) at energy EPS (in eV)
calculates probability distribution at energy EPS (in eV) for power-law radiation n ∝ EPS<sup>-ALPHA</sup> between the limits EPSM1 …EPSM2 (in eV)
carries out 2-particle decay in CM frame of system with mass AMD (in GeV) into particles with code numbers LA, LB (stored in array LRES()), whereas COSTHETA is the cosine of the $`N\gamma `$-CM frame scattering angle; returns 5-momenta PRES() of the two particles;
NBAD=1 for kinematically not possible decays, otherwise NBAD=0
determines decay products for the decay of a given resonance IRES by a given decay process IPROC for an incident nucleon with code number L0, and carries out the two-particle decay of the resonance with squared CMF energy S; code numbers and (CMF) 5-momenta of produced particles are stored in arrays LLIST and P, respectively, in a common block;
NBAD=1 for kinematically not possible decays; otherwise NBAD=0
samples incident photon energy; if the blackbody temperature is set to TBB=0 the photon energy is sampled from a power law distribution with index ALPHA between given photon energies EPSMIN, EPSMAX (in eV); otherwise it is sampled from a distribution for a blackbody radiation with temperature TBB (in K)
samples the total CM frame energy S (in GeV<sup>2</sup>) for a given photon with energy EPS (in GeV) and proton with energy E0 (in GeV)
samples the cosine of the scattering angle ANGLESCAT for a given resonance IRES and incident nucleon INC
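As an illustration of the Breit-Wigner routine listed first above, the relativistic Breit-Wigner form can be sketched in Python as follows; the exact normalization and width prescription used in SOPHIA may differ, so treat this as a schematic rather than the actual routine:

```python
import math

M_P = 0.938272  # proton mass in GeV (assumption: proton target)


def s_of_eps_prime(eps_prime, m=M_P):
    """Squared CMF energy for a photon of NRF energy eps_prime (GeV)
    hitting a nucleon of mass m at rest."""
    return m * m + 2.0 * m * eps_prime


def breit_wigner(sigma0, gamma, mass, eps_prime):
    """Schematic relativistic Breit-Wigner resonance cross section (microbarn).

    sigma0: peak cross section, gamma: width (GeV), mass: resonance mass (GeV).
    Normalized so the cross section equals sigma0 at s = mass**2.
    """
    s = s_of_eps_prime(eps_prime)
    return sigma0 * s * gamma**2 / ((s - mass**2) ** 2 + s * gamma**2)
```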
# Spiral shocks in the accretion disc of IP Peg during outburst maximum
## 1 Introduction
High-energy phenomena such as jets emanating from AGNs and X-ray radiation emitted from binaries are related to accretion discs. Accretion discs are the most efficient machines for the extraction of energy and angular momentum. The physical conditions that lead to specific dynamical formations in discs, such as spiral arms, are as yet poorly constrained owing to the lack of observational data. Spiral structure in galaxies is attributed to a quasi-stationary wave which triggers star formation as it propagates through the disc (Bertin et al. 1989) or to a tidal pattern due to interaction with a satellite galaxy (e.g. M51; Toomre and Toomre 1972). Spiral waves in a disc have also been invoked to explain the proximity of giant extrasolar planets to their host stars (e.g. 51 Peg; Lin et al. 1996). Moreover, spiral shocks have been found in simulations of protosolar discs and have been proposed as a means of converting the gaseous protostellar disc into orbiting planetesimals (Boss 1997). Accretion discs in cataclysmic variables (a white dwarf accretes matter from a donor K-M star) evolve on short dynamical timescales (2-10 hours), allowing us to investigate their development in detail. In cataclysmic variables, the outburst origin in dwarf novae (disc or donor star) and the mechanism for the angular momentum transport of the disc material (‘viscosity’) are still unresolved issues, though fundamental to our understanding of accretion physics (Verbunt 1986). The exact structure of the disc, an $`\alpha `$-disc (Shakura and Sunyaev 1973) or a spiral-shock disc (Sawada, Matsuda & Hachisu 1986), is linked to the above issues. Simulations of such accretion discs have revealed double spiral shocks (Sawada, Matsuda & Hachisu 1986; Savonije, Papaloizou and Lin 1994; Stehle 1998). Recently, we indirectly detected spiral waves in the accretion disc of the eclipsing dwarf nova IP Peg during the rise to outburst (Steeghs, Harlaftis & Horne 1997).
Here, we report on subsequent observations of He II 4686, obtained during the maximum of the November 1996 outburst, which were aimed at probing the ionization structure of the spiral arms.
## 2 Observations
IP Peg, a double-eclipsing dwarf nova, shows semi-periodic outbursts every $`\sim `$3 months. During the third day of the November 1996 outburst (Fig. 1), we observed a complete binary cycle of 3.8 hours. We obtained 81 spectra, under $`\sim 1.6`$ arcsecond seeing, with a TEK CCD and the 235 camera of the Intermediate Dispersion Spectrograph on the 2.5m Isaac Newton Telescope at La Palma. The wavelength range covered is 4354–4747 Å at a velocity dispersion of 27 km s<sup>-1</sup> per pixel. The spectra were extracted using optimal extraction (Horne 1986) after debiasing, flat-fielding and sky-subtraction. The wavelength calibration was performed using 17 CuAr arc lines and a 3rd order polynomial fit and is accurate to 0.04Å. A comparison star had been included in the slit, which was subsequently used to correct the object spectra for atmospheric and slit losses. The absolute flux scale was defined by observing the flux standard Feige 15 (Oke 1990). The typical signal-to-noise ratio is 21 for the IP Peg spectra (and 46 for the comparison spectra). Flux uncertainties were estimated using Poisson statistics and a noise model for the CCD chip (gain of 1.1 electrons per ADU and readout noise of 5.7 electrons per pixel). We adopt the binary ephemeris from Wolf et al. (1993), $`T_o(HJD)=2445615.4156(4)+0.15820616(4)E`$, where $`T_o`$ is the inferior conjunction of the white dwarf. The He II $`\lambda 4542/\lambda 4686`$ flux ratio of 0.022-0.036, as well as the He I $`\lambda 6678/\lambda 4471`$ flux ratio of 0.71-0.87 (see also Marsh and Horne 1990), are consistent with recombination case B (for T = 5,000-20,000 K and electron density of $`10^4`$ cm<sup>-3</sup>; Osterbrock 1989). Given that the He I line is blended with the Mg II line, the above ratio is only a lower limit, thus favouring the temperatures nearer to 20,000 K.
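For reference, the orbital phase of each spectrum follows directly from the ephemeris quoted above; a minimal Python sketch (variable names are ours):

```python
T0 = 2445615.4156    # HJD of white-dwarf inferior conjunction (Wolf et al. 1993)
PERIOD = 0.15820616  # orbital period in days

def orbital_phase(hjd):
    """Orbital phase in [0, 1); phase 0 is the inferior conjunction
    of the white dwarf."""
    return ((hjd - T0) / PERIOD) % 1.0
```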
## 3 Resolving Spiral waves with Doppler tomography
The trailed spectra of the strongest emission line, He II, show very complex structure (left panel in Fig. 2). The peak separation changes with phase and is narrower at phase 0.50. The red peak of the profile is maximum at phases 0.45 and 0.95 where it has a velocity of 400 and 650 km s<sup>-1</sup>, respectively. Similarly, the blue peak has a clear maximum at phase 0.85 and a velocity of 500 km s<sup>-1</sup>. Other weaker components are immediately visible; a sharp component moves from red to blue at phase 0.5 (red star emission) and a low-velocity component is present at phases around 0.2 and 0.9. We reconstruct the Doppler image of the He II 4686 emission-line distribution using the trailed spectra (Marsh and Horne 1988). This imaging technique has been particularly successful in resolving the location of emission components such as the red star (IP Peg; Harlaftis et al. 1994), the gas stream (OY Car in outburst; Harlaftis and Marsh 1996), the bright spot (GS2000+25; Harlaftis et al. 1996) and spiral waves in the outer accretion disc (from H$`\alpha `$ and He I lines of IP Peg during rise to outburst; Steeghs, Harlaftis and Horne 1997).
The He II Doppler image of IP Peg during outburst (centre panel of Fig. 2) directly displays the locations of the various emission components: the inner side of the red star, a low-velocity component and the dominant accretion disc with extended spiral arms. For comparison with the observed data, the computed data from the image are also shown (right panel). The low-velocity component is centred at $`(V_x,V_y)=(100\pm 50,20\pm 70)`$ km s<sup>-1</sup> and has a FWHM of 270 km s<sup>-1</sup>; although it is seen in other dwarf novae during outburst, its origin is not understood (Steeghs et al. 1996). In Fig. 3 we display the non-axisymmetric data and Doppler image for comparison with a model. The non-axisymmetric Doppler image (top-left) is built by subtracting the median at each radius, with the white dwarf at its centre. A simple model of spiral arms (plus a red star and a low-velocity component; top-right) is built using a power law of $`V^{-1}`$ for the intensity, velocities between 400 and 800 km s<sup>-1</sup> and an azimuthal extent of 0.25 binary cycles. The computed data from the model (bottom-right; with the same noise content as the data) successfully reconstruct all the velocity features observed, such as the decreasing double-peak separation with an intensity jump at phase 0.6. The Bowen blend, He I 4471 and Mg II 4481 lines produce similar Doppler images.
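The simple arm model can be sketched as an intensity map in velocity coordinates; the following Python toy (all names and the exact arm parametrization are our illustration, not the code behind the figure) encodes the power-law intensity, the 400–800 km s<sup>-1</sup> velocity range and the quarter-cycle azimuthal extent:

```python
import numpy as np

def spiral_arm_map(n=128, vmax=1000.0, v1=400.0, v2=800.0, extent=0.25):
    """Toy two-armed spiral pattern in (Vx, Vy) velocity space (arbitrary units).

    Intensity follows a V**-1 power law between v1 and v2 (km/s); each arm
    spans `extent` binary cycles in azimuth and the two arms sit half a
    cycle apart.
    """
    v = np.linspace(-vmax, vmax, n)
    vx, vy = np.meshgrid(v, v)
    speed = np.hypot(vx, vy)
    azimuth = (np.arctan2(vy, vx) / (2.0 * np.pi)) % 1.0   # in binary cycles
    # distance (in cycles) to the nearer arm centre (arms at azimuth 0 and 0.5)
    arm_dist = np.minimum(azimuth % 0.5, 0.5 - (azimuth % 0.5))
    img = np.zeros((n, n))
    on_arm = (speed >= v1) & (speed <= v2) & (arm_dist <= extent / 2.0)
    img[on_arm] = v1 / speed[on_arm]   # V**-1 emissivity, normalized to 1 at v1
    return img
```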
## 4 Discussion
The high-ionization He II 4686 line is most suitable for clarifying the shock nature of the resolved spiral structure, since it is sensitive to higher temperatures and is not saturated as H$`\alpha `$ may be. We quantify the properties of the spiral structure by taking radial slices starting from the white dwarf at (0,-147) km s<sup>-1</sup> for each azimuth and then searching for the velocity at which the image intensity reaches its maximum value. We thus trace the centre of the spiral arms as a function of azimuth, recording the maximum intensity (bottom) and the velocity at that image value relative to the white dwarf (top). The two peaks in both velocity and intensity correspond to the spiral arms. In particular, the jump in intensity (a factor of more than two) and in velocity ($`\sim `$200–300 km s<sup>-1</sup>) is the characteristic signature of a shock. The shocks show an azimuthal extent of $`\sim `$90 degrees (phases 0.50–0.75 and 0.95–1.25) and a very shallow power-law index in emissivity ($`V^{-1}`$, or $`R^{-2.5}`$ assuming a Keplerian disc). The spiral shocks contribute about 15 per cent of the total disc emission and their opening angle is $`\sim 30`$ degrees assuming Keplerian flow (which places a firm upper limit). The Mach numbers at the shocks range between 28 and 50 for $`T=20,000`$ K and velocities between 400 and 700 km s<sup>-1</sup>. There are differences between the two arms: the arm centred at phase 0.65 (bottom-left arm on the Doppler image) is weaker in intensity (by 40 per cent) than the arm centred at phase 0.05. Also, the velocity ranges are 400–700 km s<sup>-1</sup> compared to 400–550 km s<sup>-1</sup>, and the radial width is 40 per cent smaller than the FWHM=240 km s<sup>-1</sup> of the arm at phase 0.05. Note that in the above we have discussed the velocities related to the maximum intensities of the image. The Doppler image shows that there is also some emission at velocities up to 1000 km s<sup>-1</sup> (or 850 km s<sup>-1</sup> relative to the white dwarf).
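Mach numbers of this order follow from the ratio of the shock velocity to the local sound speed; a rough Python check (the mean molecular weight mu = 1 and the use of the isothermal sound speed are our assumptions, so this is an order-of-magnitude estimate only):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6726e-27     # hydrogen atom mass, kg

def mach_number(v_kms, temp_k, mu=1.0):
    """Mach number of a flow of speed v_kms (km/s) in gas at temperature temp_k.

    Uses the isothermal sound speed c_s = sqrt(kT / (mu * m_H)); mu = 1.0 is
    an assumed mean molecular weight.
    """
    c_s = math.sqrt(K_B * temp_k / (mu * M_H)) / 1e3  # km/s
    return v_kms / c_s
```

For T = 20,000 K this gives a sound speed of roughly 13 km s<sup>-1</sup>, so flows of 400–700 km s<sup>-1</sup> are highly supersonic.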
Spiral arms have now been observed in 3 different outbursts of IP Peg (this work; the 1993 August outburst in Steeghs, Harlaftis and Horne 1997; the 1987 July outburst in Marsh & Horne 1990, where, in retrospect, only limited evidence for spiral structure can be discerned due to the incomplete binary coverage). The trailed spectra obtained by Hessman (1989) immediately after the end of an outburst of IP Peg do not show any structure other than the red star ‘S’-wave component. We tested this by degrading our He II spectra to the instrumental and time resolution of the Hessman spectra (195 km s<sup>-1</sup> and 12 phase bins, respectively). The signal-to-noise is similar, since the red star line component is visible. The spiral shocks can still be resolved in the simulated trailed spectra, suggesting that the shocks are either tightly wound up or have dissipated by the time the system reaches quiescence after an outburst. Spiral structure in systems other than IP Peg can also be found, such as in SS Cyg during outburst (see helium lines in Figures 7, 9 and 10; Steeghs et al. 1996) and in the H$`\alpha `$ trailed spectra of LB1800 (Still et al. 1998; Steeghs et al. 1998, in preparation). No evidence of spiral shocks is observed in short-period SU UMa-type stars, such as OY Car during outburst (Harlaftis and Marsh 1996) or SU UMa and YZ Cnc in outburst or quiescence (Harlaftis 1991). It may therefore be that spiral shocks only develop in systems above the period gap, where the disc, the companion star and the mass transfer rate are larger. Doppler maps of X-ray binaries do not show any evidence of spiral shocks (outburst of GRO J0422+32, Casares et al. 1995; quiescent map of A0620-00, Marsh et al. 1994; quiescent map of GS 2000, Harlaftis et al. 1996). Only the H$`\alpha `$ spectra of X 1822-371, the prototype system for vertical disc structure, may contain a hint of structure similar to spiral shocks (Harlaftis et al. 1997).
SPH simulations of the IP Peg accretion disc do show the development of transient spiral shocks (Armitage and Murray 1998), which agree quite well with the observed ones in velocity and azimuthal extent. Stehle (1998) uses hydrodynamical simulations of a hot accretion disc ($`T_{eff}=10,000`$–$`50,000`$ K, $`q=0.3`$) which confirm the development of stable spiral shocks in the disc over many orbital cycles (see also Steeghs and Stehle 1999). Fitting the He II spiral shock data (Fig. 4) with model data extracted from simulated discs should eventually constrain the physical conditions of the disc, such as the local Mach number and effective temperature. We aim to obtain similar coverage at the end of an outburst (to complete our phase coverage), so that we have a full picture of the dynamical evolution of the tidal shocks, in particular the effect of a decreasing $`\alpha `$ parameter (outburst decline) on the velocity range, opening angle and azimuthal extent of the spiral shocks.
## Acknowledgments
The data reduction and analysis was partly carried out at the St. Andrews STARLINK node. Use of software developed by T. Marsh is acknowledged. ETH was supported by the TMR contract RT ERBFMBICT960971 of the European Union. ETH and DS were partially supported by a joint research programme between the University of Athens, the British Council at Athens and the University of St. Andrews. DS acknowledges support by a University of St. Andrews research studentship. In this research, we have used, and acknowledge with thanks, data from the AAVSO International Database, based on observations submitted to the AAVSO by variable star observers worldwide.
# The electronic structure of the heavy fermion metal LiV2O4
## Abstract
The electronic structure of LiV<sub>2</sub>O<sub>4</sub>, the first reported heavy fermion compound without f-electrons, was studied by an ab-initio calculation method. As a result of the trigonal splitting and the d-d Coulomb interaction, one electron of the $`d^{1.5}`$ configuration of the V ion is localized, while the rest partially fill a relatively broad conduction band. The effective Anderson impurity model was solved by the Non-Crossing-Approximation (NCA) method, leading to an estimate of the single-site Kondo energy scale $`T_K`$. We then show how the so-called exhaustion phenomenon of Nozières for the Kondo lattice leads to a remarkable decrease of the heavy-fermion (or coherence) energy scale, $`T_{coh}\sim T_K^2/D`$ ($`D`$ is the typical bandwidth), comparable to the experimental result.
Heavy fermion systems are characterized by a large effective quasiparticle mass inferred from the strongly enhanced electronic specific heat coefficient and spin susceptibility at low temperatures . Until recently this effect was observed only for f-electron compounds containing lanthanide or actinide atoms. The transition metal oxide compound LiV<sub>2</sub>O<sub>4</sub> is the first reported heavy fermion system without f-electrons .
LiV<sub>2</sub>O<sub>4</sub> has a face-centered-cubic normal-spinel structure. The formal valence of the V-ions is $`V^{3.5+}`$, leading to 1.5 electrons/V in the 3d-band. The electronic specific heat coefficient $`\gamma (T)=C_e(T)/T`$ is extraordinarily large for a transition metal compound ($`\gamma (1K)\approx 0.42J/molK^2`$), decreasing rapidly with temperature to $`\approx 0.1J/molK^2`$ at 30 K. The spin susceptibility from 50 K to 400 K shows a Curie-Weiss \[$`\chi =C/(T-\theta )`$\] behavior corresponding to weakly antiferromagnetically coupled ($`\theta `$ = -30 to -60 K) vanadium local magnetic moments with $`S`$=1/2 and $`g\approx 2`$, but static magnetic ordering does not occur above 0.02 K and superconductivity is not observed above 0.01 K. The nearly temperature independent spin susceptibility $`\chi (T)`$ and Knight shift $`K(T)`$ for $`T<30K`$ are a sign of the disappearance of the V local moment in this temperature region .
In the traditional heavy fermion compounds there are two distinct types of electronic states: the localized f-orbitals of lanthanide or actinide atoms, which form local moments, and the delocalized s-, p-, d-orbitals, which are responsible for the metallic properties. The weak hybridization of the f-orbitals with the conduction states leads to the low temperature anomalies. If one of the 1.5 3d-electrons per V ion were in a localized orbital, and the rest in a relatively broad band, then the situation would be analogous to the f-compounds and all experimental facts could be qualitatively understood . As the general opinion was that these 1.5 electrons reside in the (same) band formed by the $`t_{2g}`$-orbitals, such a model was considered to be unrealistic. In this paper we will show that the trigonal point group symmetry of the V ion in LiV<sub>2</sub>O<sub>4</sub> lifts the degeneracy of the $`t_{2g}`$-band. As a result, the above-mentioned model is appropriate for estimating the heavy-fermion energy scale for this compound.
The normal-spinel crystal structure is formed by the edge-shared oxygen octahedra with V-atoms at the centres and Li-atoms between octahedra. The face-centered-cubic lattice has four V atoms in the unit cell which form a tetrahedron (Fig. 1). The total space group of the crystal is cubic but the local point group symmetry of V-ion crystallographic position is trigonal. The different trigonal axes of every V-atom in the unit cell are directed towards the centre of the tetrahedron.
The octahedral coordination of the oxygen ions around the V results in a strong splitting of the d-states into triply degenerate $`t_{2g}`$-orbitals with lower energy and doubly degenerate $`e_g`$-orbitals with higher energy. The band structure has three well separated sets of bands: a completely filled O-2p band, a partially filled $`t_{2g}`$ band and empty $`e_g`$ bands. As only partially filled bands are important for the physical properties, we will restrict our analysis to the $`t_{2g}`$ band.
In the trigonal symmetry crystal field the three $`t_{2g}`$ orbitals are split into the nondegenerate $`A_{1g}`$ and doubly degenerate $`E_g`$ representations of the $`D_{3d}`$ group. The $`A_{1g}`$-orbital is $`(xy+xz+yz)/\sqrt{3}`$ if the coordinate axes are directed along the V-O bonds. If the $`z`$-direction is chosen along the trigonal axis, then the $`A_{1g}`$-orbital is $`3z^2-r^2`$ (see Fig. 1) and the $`E_g`$ orbitals have their lobes in the plane perpendicular to $`z`$.
We have performed LDA calculations of the electronic structure of LiV<sub>2</sub>O<sub>4</sub> by the LMTO method . In Fig. 2 the results for the $`t_{2g}`$ band are presented, with the partial DOS projected onto the $`A_{1g}`$ and $`E_g`$ orbitals. While the partial $`E_g`$ DOS has a width of $`\sim `$ 2 eV, the $`A_{1g}`$ DOS is much narrower (a width of only $`\sim `$ 1 eV). The trigonal splitting is smaller than the band width but it is not negligible. We have estimated it as the difference of the centres of gravity of the $`A_{1g}`$ and $`E_g`$ DOS's and have found that the energy of the $`A_{1g}`$ orbital is 0.1 eV lower than the energy of the $`E_g`$ orbitals.
We have calculated the d-d Coulomb interaction parameter $`U`$ and have found a value of 3 eV, which is larger than the band width $`W\sim `$ 2 eV. Such a value of the $`U/W`$ ratio leads to the appearance of lower and upper Hubbard bands, with one electron localized in the former and the other 0.5 partially filling the latter. Which particular orbital forms the lower Hubbard band is determined by the sign of the trigonal splitting between the energies of the $`A_{1g}`$ and $`E_g`$ orbitals.
The effect of the d-d Coulomb interaction on the electronic structure of transition metal compounds can be treated by the LDA+U method . This method is basically a combination of the Hartree-Fock approximation to the multiband Hubbard model with LDA. The ground state of the LDA+U solution was indeed found to be a metal, with one electron nearly completely localized on the $`A_{1g}`$ orbital and the $`E_g`$ orbitals forming a relatively broad ($`\sim `$ 2 eV) band which is partially filled.
The partial DOS for the $`E_g`$ orbitals has a long low-energy tail (Fig. 2) and as a consequence the LDA occupancy of the $`E_g`$ orbital is significantly larger than the corresponding value for the $`A_{1g}`$ orbital. One would expect that after switching on the Coulomb interaction in the LDA+U calculation, the orbital whose occupation was larger becomes, in the course of the self-consistency iterations, more and more occupied at the expense of all other orbitals, and in the end the d-electron would be localized on one of the $`E_g`$ orbitals. In reality the situation is more complicated. Indeed, the Coulomb interaction energy is lowered by localization of the electron on any orbital. However, the total energy of the solution with the electron localized on the $`A_{1g}`$ orbital is lower than the energy of the solution with the $`E_g`$ orbital, due to the trigonal splitting. The rotationally invariant formulation of the LDA+U method allows the system to choose itself on which particular orbital (or which particular combination of the basis-set orbitals) the electrons will be localized. If one starts from the LDA orbital occupancies, then at the first stage of the self-consistency iterations the $`E_g`$ orbital, having the larger LDA occupation, becomes localized. However, further iterations cause ‘rotations’ in the five-dimensional space of 3d-orbitals, leading to the solution with the $`A_{1g}`$ orbital occupied. The system arrives at this solution independently of the starting point. The total energy as a functional of the orbital occupation matrix has only one minimum and the corresponding LDA+U equations have only one solution. The separation of the $`t_{2g}`$-states into the localized $`A_{1g}`$ orbital and the conduction-band $`E_g`$ orbitals shows that LiV<sub>2</sub>O<sub>4</sub> can be regarded as an analog of the f-systems. In order to estimate the strength of the interaction between the localized and the conduction electrons we have defined an effective Anderson impurity model.
The partial DOS for $`A_{1g}`$ orbital obtained in the LDA+U calculation $`n_{A_{1g}}(E)`$ was used to determine the position of the impurity state $`ϵ_f`$ and the hybridization function $`\mathrm{\Delta }(E)`$:
$$n_{A_{1g}}(E)=-\frac{1}{\pi }\mathrm{Im}\left(E-ϵ_f+i\mathrm{\Delta }(E)\right)^{-1}$$
(1)
The results are presented in Fig. 3. We then used the LDA+U results as input to estimate the Kondo energy scale for a single-site model. To solve this Anderson impurity model we have used a resolvent perturbation theory.
Within this approach one determines the renormalization of the impurity states by the hybridization with the surrounding medium. The latter is given by the hybridization function $`\mathrm{\Delta }(E)`$, and the approach leads to a perturbation expansion in terms of $`\mathrm{\Delta }(E)`$. A widely used approximation, the Non-Crossing-Approximation (NCA) , is well known to reproduce the low-energy scale of the underlying model , which is the single-site Kondo temperature $`T_K`$. Here we have extracted this quantity directly from the singlet-triplet splitting of the impurity states. For the hybridization function $`\mathrm{\Delta }(E)`$ shown in Fig. 3 we estimate a single-site Kondo temperature $`T_K\approx `$ 550 K. On the other hand, the Kondo temperature can be expressed via the Kondo exchange parameter $`J_K`$ as :
$$T_K=De^{-\frac{1}{2N(0)J_K}}$$
(2)
where $`D`$ is the effective Fermi energy and $`N(0)`$ the density of conduction states per spin. In ref. the value of $`N(0)`$ was estimated from the experimental value of the Pauli susceptibility as 2.9 states/(eV vanadium spin direction). With a value of $`D`$ of approximately 1 eV, this gives an estimate of $`J_K\approx `$ 670 K. In contrast to standard f-systems, where the direct exchange between the localized f-electrons and the conduction electrons is small, in our case both the localized and the conduction electrons are 3d-electrons, and there is a strong on-site exchange coupling between them, namely a ferromagnetic Hund coupling of the order of 1 eV. The presence of two types of d-electrons, namely those in conduction states and the electrons forming local moments, is known to result, in a lattice, in a double-exchange ferromagnetic interaction between the local moments. We have estimated the value of this intersite exchange coupling parameter from the results of the LDA+U calculation, using the formula derived as a second derivative of the total energy with respect to the angle between the local moment directions . Our calculation gave the value of the double-exchange parameter $`J_{dex}`$ = 530 K.
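Eq. (2) can be inverted to check the quoted estimate; a quick Python sketch (the eV-to-kelvin conversion is standard, and the default arguments are the values quoted above):

```python
import math

EV_IN_K = 11604.5  # 1 eV expressed in kelvin

def kondo_exchange(T_K, D_eV=1.0, N0=2.9):
    """Invert T_K = D * exp(-1 / (2 N(0) J_K)) for J_K, returned in kelvin.

    D_eV: effective bandwidth in eV; N0: density of states per eV per spin.
    """
    D_K = D_eV * EV_IN_K
    J_eV = -1.0 / (2.0 * N0 * math.log(T_K / D_K))
    return J_eV * EV_IN_K
```

With $`T_K`$ = 550 K this returns a value in the mid-600 K range, consistent with the $`J_K\approx `$ 670 K quoted above.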
However, there is another contribution to the interaction between the local moments. The Kondo exchange couples a local moment (the electron on the $`A_{1g}`$-orbital) on one site to the spin of the conduction electron ($`E_g`$-orbital) on the neighboring site, because, having different symmetry, the orbitals do not mix on the same site. As the spins of the $`E_g`$ and $`A_{1g}`$ electrons on the neighboring site are ‘strongly’ coupled by the Hund interaction, this gives an effective antiferromagnetic (AF) interaction between local moments on neighboring sites, approximately equal to the Kondo exchange parameter $`J_K`$. As a result there is a strong cancellation between these two processes, the ferromagnetic double exchange and the AF Kondo-induced exchange, so that the net exchange interaction is small. Simply subtracting the two terms leads to the estimate $`J_K-J_{dex}\approx 140K`$. The measured Curie-Weiss $`\mathrm{\Theta }`$ at high temperature ($`\mathrm{\Theta }\approx -50K`$) points to a more complete cancellation and a net AF exchange interaction that is an order of magnitude smaller than our estimate. This discrepancy is not surprising in view of the difficulty of making a reliable estimate in the presence of such a strong cancellation. Note that the inherent frustration that inhibits the onset of AF order in a spinel lattice does not enter the determination of the value of $`\mathrm{\Theta }`$. The small value of $`\mathrm{\Theta }`$ shows that the net exchange interaction between neighboring local moments is very weak.
A realistic lattice model for LiV<sub>2</sub>O<sub>4</sub> contains two competing terms coupling the conduction and localized states: the on-site ferromagnetic Hund coupling and the AF Kondo interaction which couples conduction and localized electrons on neighboring sites. This competition poses difficulties for a first-principles treatment. On the other hand, there are several arguments in favor of ignoring the ferromagnetic interactions relative to the Kondo interaction. For a single localized site, it is well known that the on-site ferromagnetic coupling between conduction and localized states scales to the weak-coupling limit at low temperatures and so can be ignored. It could be argued, however, that in the lattice this ferromagnetic coupling scales to the strong-coupling limit through the well-known double-exchange effect. But, as discussed above, the double-exchange effect is cancelled here by the AF interaction between the localized spins induced by the Kondo effect. Therefore it seems plausible to ignore, at least as a first step, the ferromagnetic interactions and treat only the AF interaction, so that the model is simply a Kondo lattice model. There may be some renormalization of the Kondo exchange parameter $`J_K`$, but for now we ignore that too.
In the Kondo lattice model there is also a competition between the induced AF interactions, which favor AF order, and the Kondo effect, which favors a singlet ground state. Here the former are very weak due to the cancellation effects discussed above, and as a result we are in the limit where the Kondo effect dominates. This leads to the formation of a heavy-fermion Landau Fermi liquid with a characteristic temperature scale for the onset of quantum coherence, $`T_{coh}`$. The exact value of $`T_{coh}`$ is difficult to estimate, but there are a number of strong arguments that $`T_{coh}\ll T_K`$, the single-site Kondo temperature. We summarize these arguments below.
In the single-site case, the local moment forms a singlet with conduction electrons within an energy $`T_K`$ of the Fermi energy. This gives the picture of a complex screening cloud which delocalizes the impurity spin at the Fermi level. The resulting non-perturbative ground state is of Fermi-liquid type, with all physical quantities depending on $`T_K`$. In the concentrated Kondo lattice, the number of conduction electrons per site available to screen the local moments is of order $`\text{n}_k=\frac{T_K}{D}\ll 1`$. There are too few conduction electrons to screen the whole spin array at the energy scale $`T_K`$: this is the well-known exhaustion phenomenon. In this sense, a macroscopic singlet ground state should form not at $`T_K`$ but at a much lower temperature $`T_{coh}`$. To estimate it, we can use the following simple thermodynamic argument. The condensation energy actually available to screen the (effective) localized spins is $`\text{E}_{cond}=\text{N}_V\text{n}_kT_K`$, where $`\text{N}_V`$ is the total number of V ions (we have one localized 3d electron per V). To stabilize a perfect singlet ground state, this energy must absorb the full entropy of the impurity lattice, $`\text{S}_{tot}=\text{N}_V\mathrm{ln}2`$. The energy scale at which the entropy of the spin array goes to zero can then be estimated as $`T_{coh}=\text{E}_{cond}/\text{S}_{tot}\sim \text{n}_kT_K=T_K^2/D`$. For $`T\lesssim T_{coh}`$, the prevalent bonds should be those formed between local moments, and $`T_{coh}`$ plays the role of the effective Kondo temperature in the lattice problem. It should be noted that these effects, predicted here by physical arguments, have recently indeed been observed in calculations for the periodic Anderson model within Dynamical Mean-Field Theory .
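The entropy balance above is straightforward to evaluate; in the sketch below, the values of $`T_K`$ and $`D`$ are hypothetical placeholders for illustration only, not the estimates used in the text.

```python
import math

# Exhaustion estimate T_coh = E_cond / S_tot, with E_cond = N_V * n_k * T_K,
# S_tot = N_V * ln(2), and n_k = T_K / D (k_B = 1, temperatures in kelvin).
# T_K and D below are hypothetical illustrative values.
T_K = 600.0   # assumed single-site Kondo temperature [K]
D = 1.5e4     # assumed conduction bandwidth [K]

n_k = T_K / D                 # conduction electrons per site within T_K of E_F
E_cond = n_k * T_K            # condensation energy per V ion [K]
S = math.log(2.0)             # spin-1/2 entropy per localized moment
T_coh = E_cond / S            # ~ n_k * T_K = T_K**2 / D up to the O(1) factor ln 2

print(f"n_k = {n_k:.3f}, T_coh ~ {T_coh:.0f} K (vs T_K = {T_K:.0f} K)")
```

Whatever the precise inputs, the structure of the estimate forces $`T_{coh}`$ to lie far below $`T_K`$ whenever $`\text{n}_k\ll 1`$.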
Taking into account the estimates for $`T_K`$ and $`D`$, we obtain $`T_{coh}\sim 25`$–$`40`$ K, which agrees well with the numerical result for the periodic Anderson model. It also seems to be in good agreement with the experimental result in LiV<sub>2</sub>O<sub>4</sub>. This scale should produce an enhanced linear specific heat and Pauli susceptibility, both proportional to $`N_V/T_{coh}`$, and a (normalized) Wilson ratio $`R_W`$ equal to one. Experimentally, $`R_W`$ is of order unity, as in conventional heavy-fermion metals.
In conclusion, our calculations using the LDA+U method give a theoretical justification of a model, discussed previously, with one of the 3d electrons per V localized in an $`A_{1g}`$-orbital and the remaining 0.5 electron/V in a conduction band state, primarily of $`E_g`$ symmetry. This leads to a lattice model with competing onsite ferromagnetic coupling due to Hund's rule and nearest-neighbor antiferromagnetic coupling due to the Kondo effect. We present arguments that such a model reduces to a Kondo lattice model. Estimates for the temperature scale for the onset of quantum coherence give a small value, much less than the single-site Kondo temperature, in agreement with experiment. The low-temperature heavy-fermion Fermi liquid is strongly correlated and is therefore a good candidate for a transition to unconventional superconductivity, if sufficiently perfect samples can be made.
Note added—Recently an LDA calculation of the electronic structure of LiV<sub>2</sub>O<sub>4</sub> was made by Eyert et al. , in which the partial density of states of the $`t_{2g}`$ band was analyzed using trigonal symmetry, but the possibility of orbital polarization of the electrons due to the Coulomb interaction was not investigated.
We wish to thank D. Johnston, D. Khomskii and D. Cox for stimulating discussions. This work was supported by the Russian Foundation for Basic Research (grants RFFI-98-02-17275 and RFFI-96-15-96598).
# Conductance enhancement in quantum point contact–semiconductor–superconductor devices
## I Introduction
Charge transport through a normal conductor–superconductor (NS) interface is accompanied by a conversion of quasiparticle current to a supercurrent. In the Andreev reflection, by which the conversion occurs, an electron-like quasiparticle in the normal conductor (with an excitation energy lower than the energy gap of the superconductor) incident on the NS interface is retroreflected into a hole-like quasiparticle (with reversal of its momentum and its energy relative to the Fermi level) and a Cooper pair is added to the condensate of the superconductor . For an ideal NS interface, the signature of Cooper pair transport and the Andreev scattering is a doubling of the conductance compared to the normal state conductance.
A theoretical framework for studies of the scattering at NS interfaces is provided by the Bogoliubov–de Gennes (BdG) formalism where the scattering states are eigenfunctions of the BdG equation
$$\left(\begin{array}{cc}\widehat{H}(𝐫)& \mathrm{\Delta }(𝐫)\\ \mathrm{\Delta }^{*}(𝐫)& -\widehat{H}^{*}(𝐫)\end{array}\right)\left(\begin{array}{c}u(𝐫)\\ v(𝐫)\end{array}\right)=E\left(\begin{array}{c}u(𝐫)\\ v(𝐫)\end{array}\right),$$
(1)
which is a Schrödinger-like equation in electron-hole space (Nambu space). Here $`\widehat{H}(𝐫)`$ is the single-particle Hamiltonian, $`\mathrm{\Delta }(𝐫)`$ is the pairing potential of the superconductor, $`E`$ is the excitation energy, and $`u(𝐫)`$ and $`v(𝐫)`$ are the wave functions of electron-like and hole-like quasiparticles.
The technological possibility of studying experimentally the interface between a two-dimensional electron gas (2DEG) in a semiconductor heterostructure and a superconductor has provided a playground for investigating the interplay between Andreev reflection and the mesoscopic effects seen in mesoscopic semiconductor structures . In recent years these technological efforts have revealed a variety of new mesoscopic phenomena, see Refs. and references therein. One class of studied devices comprises the quantum point contact (QPC) 2DEG-S and S-2DEG-S devices with the QPC in the normal region. The dc Josephson effect and the quantization of the critical current in QPC S-2DEG-S junctions have been studied extensively, both experimentally by e.g. Takayanagi and co-workers and theoretically by e.g. Beenakker and van Houten , Beenakker , and Furusaki, Takayanagi, and Tsukada .
The linear-response conducting properties of QPC 2DEG-S structures have been studied by several groups. In the analytical work of Beenakker a ballistic normal region with a QPC modeled by a saddle-point potential was considered. The effect of elastic impurity scattering was considered numerically by Takagaki and Takayanagi who considered a disordered region between a narrow-wide (NW) constriction and the superconductor. Both of these studies of the conductance were based on a scattering matrix (S–matrix) approach and the BdG formalism. In the numerical simulations of De Raedt, Michielsen, and Klapwijk , based on the time-dependent BdG equation, a wide-narrow-wide (WNW) constriction was considered. Here, the aim was to study the electron-hole conversion efficiency and the robustness of the back-focusing phenomena of the Andreev reflection.
One of the properties of the QPC is that most transmission eigenvalues are either close to zero or unity. For an ideal QPC 2DEG-S interface, the Andreev reflection will therefore give rise to a factor-of-two enhancement of the conductance $`G_{\mathrm{NS}}`$ compared to the normal state conductance $`G_\mathrm{N}`$ , which is quantized in units of $`2e^2/h`$ . However, as pointed out by van Houten and Beenakker , deviations from the simple factor-of-two enhancement should be expected when the position of the Fermi level does not correspond to a conductance plateau. The presence of impurity scattering in the normal region and/or interface roughness will also suppress the doubling of the conductance .
Using an S–matrix approach, we study the linear-response regime of a phase-coherent ballistic QPC 2DEG-S system where the QPC is modeled by a WNW constriction with a hard-wall confining potential, see Fig. 1. We report new results for the device studied by De Raedt *et al.* , which had a relative width $`W/W^{}=1.7\mu \mathrm{m}/(1.6\times 335\mathrm{\AA })\approx 31.72`$, an aspect ratio $`L_1/W^{}=5/1.6`$, and a relative length $`L_2/W^{}=20/1.6`$. Applying the S–matrix formalism instead of the computationally more demanding time-dependent BdG formalism, we are able to study a larger part of parameter space, where we also consider a barrier (with a normalized barrier strength $`Z`$) at the NS interface. We focus on the regime with only a few propagating modes in the QPC. In this regime the transmission eigenvalues of the QPC depend strongly on the actual position of the Fermi level. Even for an ideal interface this gives rise to a strong suppression of the conductance for certain positions of the Fermi level, as predicted by van Houten and Beenakker and subsequently seen in the work of Beenakker \[13, Fig. 1\]. In the presence of a barrier at the interface, the QPC gives rise to an enhanced tunneling through the barrier (compared to the case without a QPC), as in the reflectionless tunneling effect of diffusive junctions .
In the sequential tunneling limit the conductance can be found by considering the QPC and the interface as two series-connected resistive regions, and in the limit $`W\gg W^{}`$ the enhancement of the conductance compared to the normal state conductance vanishes even for ideal NS interfaces. This may explain the unexpectedly low conductance enhancement in the experimental results of Benistant *et al.* on Ag-Pb interfaces, where the current is injected into the Ag crystal through a point contact.
The text is organized as follows: in Section II the S–matrix formalism is introduced, in Section III we formulate our model, in Section IV the scattering scheme for the considered geometry is presented, and in Section V we present results of several applications of our scattering scheme. Finally, in Section VI discussion and conclusions are given.
## II Scattering matrix formalism
The scattering approach to coherent dc transport in superconducting hybrids follows closely the scattering theory developed for non-superconducting mesoscopic structures, see e.g. the text-book by Datta .
For an ideal NS interface, the interface acts as a phase-conjugating mirror within the Andreev approximation, with the rigid boundary condition for the pairing potential
$$\mathrm{\Delta }(𝐫)=\mathrm{\Delta }_0e^{i\phi }\mathrm{\Theta }(x-L),$$
(2)
where $`\mathrm{\Delta }_0`$ is the BCS energy gap , $`\phi `$ is the phase of the pairing potential, $`\mathrm{\Theta }(x)`$ is a Heaviside function, and $`L=L_1+L_2`$ is the length of the normal region (see Fig. 1).
In the linear-response regime in zero magnetic field, Beenakker found that the conductance $`GI/V`$ is given by
$`G_{\mathrm{NS}}`$ $`=`$ $`{\displaystyle \frac{4e^2}{h}}\mathrm{Tr}\left(tt^{\dagger }\left[2\widehat{1}-tt^{\dagger }\right]^{-1}\right)^2`$ (3)
$`=`$ $`{\displaystyle \frac{4e^2}{h}}{\displaystyle \sum _{n=1}^{N}}{\displaystyle \frac{T_n^2}{\left(2-T_n\right)^2}},`$ (4)
which, in contrast to the Landauer formula ,
$$G_\mathrm{N}=\frac{2e^2}{h}\mathrm{Tr}tt^{\dagger }=\frac{2e^2}{h}\sum _{n=1}^{N}T_n,$$
(5)
is a non-linear function of the transmission eigenvalues $`T_n`$ ($`n=1,2,\mathrm{\dots },N`$) of $`tt^{\dagger }`$. Here $`t`$ is the $`N\times N`$ transmission matrix of the normal region, $`N`$ being the number of propagating modes.
Equation (4) holds for an arbitrary disorder potential and is a multi-channel generalization of a conductance formula first obtained by Blonder, Tinkham and Klapwijk who considered a delta function potential as a model for the interface barrier potential. The computational advantage of Eq. (4) over the time-dependent BdG approach of De Raedt *et al.* is that we only need to consider the time-independent Schrödinger equation with a potential which describes the disorder in the normal region, so that we can use the techniques developed for quantum transport in normal conducting mesoscopic structures.
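The two forms of the conductance formula are easy to check against each other numerically. The sketch below (our own illustration, not from the paper) builds a random transmission matrix with prescribed eigenvalues of $`tt^{\dagger }`$ and evaluates Eqs. (3)–(5) in units of $`2e^2/h`$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random N x N transmission matrix t = U sqrt(T) V^dagger whose
# matrix t t^dagger has prescribed eigenvalues T_n in [0, 1].
N = 5
T_n = rng.uniform(0.0, 1.0, N)
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
V, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
t = U @ np.diag(np.sqrt(T_n)) @ V.conj().T

tt = t @ t.conj().T
M = tt @ np.linalg.inv(2.0 * np.eye(N) - tt)
G_NS_trace = 2.0 * np.trace(M @ M).real           # Eq. (3), units of 2e^2/h
G_NS_eig = 2.0 * np.sum(T_n**2 / (2.0 - T_n)**2)  # Eq. (4), units of 2e^2/h
G_N = np.sum(T_n)                                 # Eq. (5), units of 2e^2/h
```

The trace and eigenvalue forms agree, and $`G_{\mathrm{NS}}`$ never exceeds $`2G_\mathrm{N}`$, consistent with the factor-of-two bound for an ideal interface.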
For more details on e.g. finite bias and/or temperature, see Lesovik, Fauchère and Blatter , Lesovik and Blatter , and the reviews of Beenakker , and Lambert and Raimondi .
## III Model
We describe the geometry of Fig. 1 by the Hamiltonian
$$\widehat{H}(𝐫)=-\frac{\hbar ^2}{2m}\nabla ^2+V_\delta (𝐫)+V_\mathrm{c}(𝐫)-\mu ,$$
(6)
where $`\mu `$ is the chemical potential. The barrier potential is taken to be a Dirac delta function of strength $`H`$ :
$$V_\delta (𝐫)=H\delta (x-L),$$
(7)
and the transverse motion is limited by a hard-wall confining potential
$$V_\mathrm{c}(𝐫)=\{\begin{array}{ccc}0& ,& |y|<𝒲(x)/2\\ \mathrm{\infty }& ,& |y|\ge 𝒲(x)/2\end{array},$$
(8)
where the width $`𝒲(x)`$ defines the WNW constriction and is given by
$$𝒲(x)=\{\begin{array}{ccc}W& ,& x<0\\ W^{}& ,& 0\le x\le L_1\\ W& ,& x>L_1\end{array}.$$
(9)
The scattering states can be constructed as linear combinations of the eigenstates of the Schrödinger equation.
## IV Scattering scheme
In the following subsections, we consider the S–matrices of a system with the Hamiltonian in Eq. (6) relevant for the geometry shown in Fig. 1. The S-matrix $`S`$ of a scattering region relates the incident current amplitudes $`a_\mathrm{S}^\pm `$ to the outgoing current amplitudes $`b_\mathrm{S}^\pm `$. For a scattering region with two leads, $`S`$ is a $`2\times 2`$ block-matrix with sub-matrices $`S_{11}`$, $`S_{12}`$, $`S_{21}`$, and $`S_{22}`$, where the diagonal and off-diagonal sub-matrices are reflection and transmission matrices, respectively. The appropriate scattering scheme for three scattering regions (the WN and the NW constrictions, and the interface barrier potential, respectively) connected by ballistic conductors is shown in Fig. 2.
The WN constriction is described by the S–matrix $`S^{\mathrm{WN}}`$
$$\left(\begin{array}{c}b_{\mathrm{S}_1}^+\\ b_{\mathrm{S}_1}^{}\end{array}\right)=S^{\mathrm{WN}}\left(\begin{array}{c}a_{\mathrm{S}_1}^+\\ a_{\mathrm{S}_1}^{}\end{array}\right),$$
(10)
the narrow region of length $`L_1`$ by the propagation-matrix $`U^\mathrm{N}`$
$$\left(\begin{array}{c}a_{\mathrm{S}_1}^{}\\ a_{\mathrm{S}_2}^+\end{array}\right)=U^\mathrm{N}\left(\begin{array}{c}b_{\mathrm{S}_1}^{}\\ b_{\mathrm{S}_2}^+\end{array}\right),$$
(11)
the NW constriction by the S–matrix $`S^{\mathrm{NW}}`$
$$\left(\begin{array}{c}b_{\mathrm{S}_2}^+\\ b_{\mathrm{S}_2}^{}\end{array}\right)=S^{\mathrm{NW}}\left(\begin{array}{c}a_{\mathrm{S}_2}^+\\ a_{\mathrm{S}_2}^{}\end{array}\right),$$
(12)
the wide region of length $`L_2`$ by the propagation-matrix $`U^\mathrm{W}`$
$$\left(\begin{array}{c}a_{\mathrm{S}_2}^{}\\ a_{\mathrm{S}_3}^+\end{array}\right)=U^\mathrm{W}\left(\begin{array}{c}b_{\mathrm{S}_2}^{}\\ b_{\mathrm{S}_3}^+\end{array}\right),$$
(13)
and the delta function barrier by the S–matrix $`S^\delta `$
$$\left(\begin{array}{c}b_{\mathrm{S}_3}^+\\ b_{\mathrm{S}_3}^{}\end{array}\right)=S^\delta \left(\begin{array}{c}a_{\mathrm{S}_3}^+\\ a_{\mathrm{S}_3}^{}\end{array}\right).$$
(14)
To apply Eqs. (4) and (5) we need to calculate the composite transmission matrix $`t=S_{21}`$, which is a sub-matrix of the composite S–matrix $`S=S^{\mathrm{WN}}\star U^\mathrm{N}\star S^{\mathrm{NW}}\star U^\mathrm{W}\star S^\delta `$ relating the outgoing current amplitudes to the incoming current amplitudes,
$$\left(\begin{array}{c}b_{\mathrm{S}_1}^+\\ b_{\mathrm{S}_3}^{}\end{array}\right)=S\left(\begin{array}{c}a_{\mathrm{S}_1}^+\\ a_{\mathrm{S}_3}^{}\end{array}\right),S=\left(\begin{array}{cc}r& t^{}\\ t& r^{}\end{array}\right).$$
(15)
The meaning of the symbol $`\star `$ is found by eliminating the internal current amplitudes . As a final result we find the transmission matrix
$`t`$ $`=`$ $`S_{21}^\delta \left[\widehat{1}-U_{21}^\mathrm{W}\left\{S_{22}^{\mathrm{NW}}+S_{21}^{\mathrm{NW}}\left[\widehat{1}-U_{21}^\mathrm{N}S_{22}^{\mathrm{WN}}U_{12}^\mathrm{N}S_{11}^{\mathrm{NW}}\right]^{-1}U_{21}^\mathrm{N}S_{22}^{\mathrm{WN}}U_{12}^\mathrm{N}S_{12}^{\mathrm{NW}}\right\}U_{12}^\mathrm{W}S_{11}^\delta \right]^{-1}`$ (16)
$`\times U_{21}^\mathrm{W}S_{21}^{\mathrm{NW}}\left[\widehat{1}-U_{21}^\mathrm{N}S_{22}^{\mathrm{WN}}U_{12}^\mathrm{N}S_{11}^{\mathrm{NW}}\right]^{-1}U_{21}^\mathrm{N}S_{21}^{\mathrm{WN}}.`$ (17)
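As an illustration of the elimination behind the composition of S–matrices, a minimal sketch (our own notation; the block convention follows Eq. (15)) composes two current-conserving scatterers and checks that the composite S–matrix is again unitary:

```python
import numpy as np

def star(Sa, Sb):
    """Compose two S-matrices, Sa followed by Sb, by eliminating the current
    amplitudes in the region between them.  Each S-matrix is a tuple of blocks
    (r, t_back, t, r_back), as in Eq. (15)."""
    ra, tpa, ta, rpa = Sa
    rb, tpb, tb, rpb = Sb
    n = ra.shape[0]
    inv1 = np.linalg.inv(np.eye(n) - rpa @ rb)  # bounces a->b->a (from the left)
    inv2 = np.linalg.inv(np.eye(n) - rb @ rpa)  # bounces b->a->b (from the right)
    t = tb @ inv1 @ ta                          # composite forward transmission
    r = ra + tpa @ rb @ inv1 @ ta               # composite reflection, left side
    tp = tpa @ inv2 @ tpb                       # composite backward transmission
    rp = rpb + tb @ rpa @ inv2 @ tpb            # composite reflection, right side
    return (r, tp, t, rp)

def random_S(n, rng):
    """Random unitary (current-conserving) S-matrix, split into blocks."""
    Q, _ = np.linalg.qr(rng.normal(size=(2 * n, 2 * n))
                        + 1j * rng.normal(size=(2 * n, 2 * n)))
    return (Q[:n, :n], Q[:n, n:], Q[n:, :n], Q[n:, n:])

rng = np.random.default_rng(1)
n = 3
r, tp, t, rp = star(random_S(n, rng), random_S(n, rng))
S = np.block([[r, tp], [t, rp]])  # composite S-matrix, should again be unitary
```

Applying `star` repeatedly to the five scatterers of Fig. 2 reproduces the structure of Eqs. (16) and (17).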
### A Quantum point contact
We consider a QPC which we model by a WNW constriction defined by a hard-wall confining potential, see Fig. 1. This geometry has been considered by Szafer and Stone and Weisshaar, Lary, Goodnick and Tripathi in the context of conductance quantization of the QPC in a 2DEG, and recently by Kassubek, Stafford and Grabert in the context of conducting and mechanical properties of ideal two- and three-dimensional metallic nanowires. We follow Kassubek *et al.* and calculate the composite S–matrix $`S^{\mathrm{WNW}}=S^{\mathrm{WN}}\star U^\mathrm{N}\star S^{\mathrm{NW}}`$. In zero magnetic field, where all S–matrices satisfy $`S=S^T`$, the individual S–matrices are given by
$`S^{\mathrm{WN}}`$ $`=`$ $`\left(\begin{array}{cc}r_{\mathrm{NW}}^{\prime }& t_{\mathrm{NW}}\\ t_{\mathrm{NW}}^T& r_{\mathrm{NW}}\end{array}\right),`$ (20)
$`U^\mathrm{N}`$ $`=`$ $`\left(\begin{array}{cc}\widehat{0}& X^\mathrm{N}\\ X^\mathrm{N}& \widehat{0}\end{array}\right),`$ (23)
$`S^{\mathrm{NW}}`$ $`=`$ $`\left(\begin{array}{cc}r_{\mathrm{NW}}& t_{\mathrm{NW}}^T\\ t_{\mathrm{NW}}& r_{\mathrm{NW}}^{\prime }\end{array}\right),`$ (26)
where $`X_{nn^{}}^\mathrm{N}=\delta _{nn^{}}\mathrm{exp}\left(ik_nL_1\right)`$ describes the narrow region, with free propagation of propagating modes and exponential decay of evanescent modes. Here $`k_n=k_\mathrm{F}\sqrt{1-\left(n\pi /k_\mathrm{F}W^{}\right)^2}`$ is the longitudinal wave vector of mode $`n`$ in the narrow region. The S–matrices of the WN and NW constrictions are related through an exchange of leads.
By elimination of the internal current amplitudes we find the composite S–matrix
$$S^{\mathrm{WNW}}=\left(\begin{array}{cc}r_{\mathrm{WNW}}& t_{\mathrm{WNW}}\\ t_{\mathrm{WNW}}& r_{\mathrm{WNW}}\end{array}\right),$$
(27)
where
$`r_{\mathrm{WNW}}`$ $`=`$ $`r_{\mathrm{NW}}^{\prime }`$ (28)
$`+t_{\mathrm{NW}}\left[\widehat{1}-\left(X^\mathrm{N}r_{\mathrm{NW}}\right)^2\right]^{-1}X^\mathrm{N}r_{\mathrm{NW}}X^\mathrm{N}t_{\mathrm{NW}}^T,`$ (29)
$`t_{\mathrm{WNW}}`$ $`=`$ $`t_{\mathrm{NW}}\left[\widehat{1}-\left(X^\mathrm{N}r_{\mathrm{NW}}\right)^2\right]^{-1}X^\mathrm{N}t_{\mathrm{NW}}^T.`$ (30)
The S–matrix of the NW constriction can be found by matching scattering states which are eigenstates of the Schrödinger equation with the Hamiltonian in Eq. (6), keeping only the part of the potential which sets up the NW constriction. From this matching we find that
$`r_{\mathrm{NW}}`$ $`=`$ $`\left(\widehat{1}+\varrho \varrho ^T\right)^{-1}\left(\widehat{1}-\varrho \varrho ^T\right),`$ (31)
$`t_{\mathrm{NW}}`$ $`=`$ $`2\varrho ^T\left(\widehat{1}+\varrho \varrho ^T\right)^{-1},`$ (32)
$`r_{\mathrm{NW}}^{\prime }`$ $`=`$ $`\left(\varrho ^T\varrho +\widehat{1}\right)^{-1}\left(\varrho ^T\varrho -\widehat{1}\right),`$ (33)
where the elements of the $`\varrho `$-matrix can be written as $`\varrho _{nw}=\sqrt{K_w/k_n}\langle \varphi _n|\mathrm{\Phi }_w\rangle `$, with $`\langle \varphi _n|\mathrm{\Phi }_w\rangle =\int _{-\mathrm{\infty }}^{\mathrm{\infty }}dy\varphi _n(y)\mathrm{\Phi }_w(y)`$ the overlap between the transverse wave functions of mode $`n`$ in the narrow region and mode $`w`$ in the wide region. Here $`K_w=k_\mathrm{F}\sqrt{1-\left(w\pi /\eta k_\mathrm{F}W^{}\right)^2}`$ is the longitudinal wave vector of mode $`w`$ in the wide region and $`\eta \equiv W/W^{}`$ is the relative width of the constriction.
The overlap can be calculated analytically since its elements are overlaps between transverse wave functions $`\varphi _n`$ and $`\mathrm{\Phi }_w`$ which are either both sine or both cosine functions (the overlap between a sine and a cosine function is zero due to the odd and even character of the two functions). From the overlap matrix we get the following elements of the $`\varrho `$-matrix
$$\varrho _{nw}=\delta _{𝒫\left(n\right),𝒫\left(w\right)}\left(\frac{\left(\frac{k_\mathrm{F}W^{}}{\pi }\right)^2-\left(\frac{n}{\eta }\right)^2}{\left(\frac{k_\mathrm{F}W^{}}{\pi }\right)^2-n^2}\right)^{1/4}\times \{\begin{array}{ccc}\delta _{𝒫\left(n\right),1}(-1)^{(n+2)/2}\times \frac{4n\eta ^{3/2}\mathrm{sin}(w\pi /2\eta )}{\pi \left(n^2\eta ^2-w^2\right)}\hfill & ,& n\eta \ne w\hfill \\ \delta _{𝒫\left(n\right),-1}(-1)^{(n-1)/2}\times \frac{4n\eta ^{3/2}\mathrm{cos}(w\pi /2\eta )}{\pi \left(n^2\eta ^2-w^2\right)}\hfill & ,& n\eta \ne w\hfill \\ \eta ^{-1/2}\hfill & ,& n\eta =w\hfill \end{array},$$
(34)
where the parity $`𝒫(j)`$ of $`j`$ is $`𝒫(j)=1`$ if $`j`$ is even and $`𝒫(j)=-1`$ if $`j`$ is odd.
In the numerical evaluation of Eqs. (31)-(33) and Eqs. (29) and (30) it is crucial to let the number of modes in the narrow and wide regions extend over both propagating modes and evanescent modes. After all matrix inversions are performed, the reflection and transmission matrices are projected onto the propagating modes. In practice, numerical convergence of the reflection and transmission matrices is found for a finite cut-off in the number of evanescent modes. For the considered device, the number of evanescent modes is roughly ten times the number of propagating modes in the wide region corresponding to $`1000\times 1000`$ matrices.
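As an elementary check of the mode-matching scheme, the $`\varrho `$-matrix can be built by direct numerical quadrature of the mode overlaps. In the degenerate case $`\eta =W/W^{}=1`$ the overlaps reduce to $`\delta _{nw}`$, so Eqs. (31) and (32) must give $`r_{\mathrm{NW}}=0`$ and $`t_{\mathrm{NW}}=\widehat{1}`$; the sketch below (with arbitrarily chosen illustrative parameters, restricted to propagating modes) verifies this limiting case.

```python
import numpy as np

def mode(j, width, y):
    """Transverse hard-wall mode j of a strip of width `width` centered at y = 0."""
    chi = np.sqrt(2.0 / width) * np.sin(j * np.pi * (y + width / 2.0) / width)
    return np.where(np.abs(y) < width / 2.0, chi, 0.0)

eta = 1.0                  # relative width W/W' (eta = 1: no constriction at all)
Wp = 1.0                   # narrow width W' (arbitrary units)
kF = 10.5 * np.pi / Wp     # Fermi wave vector: modes n = 1..10 propagate
N = 5                      # number of propagating modes kept in the matrices

y = np.linspace(-eta * Wp / 2.0, eta * Wp / 2.0, 20001)
dy = y[1] - y[0]
n = np.arange(1, N + 1)
k = np.sqrt(kF**2 - (n * np.pi / Wp)**2)            # k_n in the narrow region
K = np.sqrt(kF**2 - (n * np.pi / (eta * Wp))**2)    # K_w in the wide region

# rho_{nw} = sqrt(K_w / k_n) <phi_n | Phi_w>, overlaps by direct quadrature.
rho = np.array([[np.sqrt(K[w] / k[i])
                 * np.sum(mode(i + 1, Wp, y) * mode(w + 1, eta * Wp, y)) * dy
                 for w in range(N)] for i in range(N)])

one = np.eye(N)
r_NW = np.linalg.inv(one + rho @ rho.T) @ (one - rho @ rho.T)   # Eq. (31)
t_NW = 2.0 * rho.T @ np.linalg.inv(one + rho @ rho.T)           # Eq. (32)
```

For $`\eta >1`$ the same quadrature reproduces the analytic overlaps, but, as stressed above, quantitatively correct S–matrices then require the evanescent modes as well.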
In the limit $`W\gg W^{}`$, Szafer and Stone employed a mean-field approximation for the overlap $`\langle \varphi _n|\mathrm{\Phi }_w\rangle `$ in which mode $`n`$ of the narrow region couples uniformly to only a band of modes (with the same parity as mode $`n`$) in the wide region within one level spacing, so that the elements of the $`\varrho `$-matrix take the form
$$\varrho _{nw}\approx \delta _{𝒫\left(n\right),𝒫\left(w\right)}\left(\frac{\left(\frac{k_\mathrm{F}W^{}}{\pi }\right)^2-\left(\frac{n}{\eta }\right)^2}{\left(\frac{k_\mathrm{F}W^{}}{\pi }\right)^2-n^2}\right)^{1/4}\times \{\begin{array}{ccc}\eta ^{-1/2}& ,& (n-1)\eta \le w<(n+1)\eta \\ 0& ,& \mathrm{otherwise}\end{array}.$$
(35)
Within this approximation, there is no mode mixing and $`tt^{\dagger }`$ becomes diagonal, with the transmission eigenvalues along the diagonal. This approximation was found to capture the results of an exact numerical calculation when used with the Landauer formula, Eq. (5), which is linear in the transmission eigenvalues $`T_n`$. However, for an NS interface, the conductance formula, Eq. (4), is non-linear in $`T_n`$, which also makes the off-diagonal components of $`tt^{\dagger }`$ important. As we shall see (in Fig. 3), the mean-field approximation cannot reproduce the results of an exact numerical calculation of $`G_{\mathrm{NS}}`$ as well as it does for $`G_\mathrm{N}`$.
### B Wide region
The wide region $`L_1<x<L`$ is described similarly to the narrow region by
$$U^\mathrm{W}=\left(\begin{array}{cc}\widehat{0}& X^\mathrm{W}\\ X^\mathrm{W}& \widehat{0}\end{array}\right),$$
(36)
where $`X_{ww^{}}^\mathrm{W}=\delta _{ww^{}}\mathrm{exp}\left(iK_wL_2\right)`$ describes both the free propagating modes and the exponential decay of the evanescent modes. Here $`K_w`$ is the introduced longitudinal wave vector of mode $`w`$ in the wide region.
### C Interface barrier potential
We consider an NS interface of width $`W`$ with a barrier which we model by a Dirac delta function potential, following Blonder, Tinkham and Klapwijk . The S–matrix elements for the delta function potential are found from a matching of scattering states which are eigenstates of the Schrödinger equation with the Hamiltonian in Eq. (6), keeping only the part of the potential consisting of the barrier at the interface. In zero magnetic field one finds the symmetric result
$$S^\delta =\left(\begin{array}{cc}r_\delta & t_\delta \\ t_\delta & r_\delta \end{array}\right),$$
(37)
with
$`\left(t_\delta \right)_{ww^{}}`$ $`=`$ $`\delta _{ww^{}}{\displaystyle \frac{1}{1+iZ/\mathrm{cos}\theta _w}},`$ (38)
$`\left(r_\delta \right)_{ww^{}}`$ $`=`$ $`\delta _{ww^{}}{\displaystyle \frac{-iZ/\mathrm{cos}\theta _w}{1+iZ/\mathrm{cos}\theta _w}},`$ (39)
where the normalized barrier strength is given by $`Z\equiv H/\hbar v_\mathrm{F}`$ and $`\mathrm{cos}\theta _w\equiv K_w/k_\mathrm{F}=\sqrt{1-\left(w\pi /k_\mathrm{F}W\right)^2}`$. The results differ from those of a one-dimensional calculation since we have taken the parallel degree of freedom into account. However, if we introduce an angle-dependent effective barrier strength $`Z_{\mathrm{eff}}(\theta _w)=Z/\mathrm{cos}\theta _w`$ , the transmission and reflection amplitudes can formally be written in the one-dimensional form of Ref. . The transmission eigenvalues of $`t_\delta t_\delta ^{\dagger }`$ are given by $`T_w^\delta =\left(1+Z_{\mathrm{eff}}^2(\theta _w)\right)^{-1}`$, in contrast to the mode-independent result $`T_w^\delta =\left(1+Z^2\right)^{-1}`$ of a one-dimensional calculation .
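Combined with Eqs. (4) and (5), these mode-resolved transmissions give the normalized conductance of the bare barrier without a QPC. A short sketch (our own construction, with illustrative parameters):

```python
import numpy as np

def g_delta_barrier(Z, kF_W, M):
    """Normalized conductance g = G_NS / G_N of a bare delta barrier at the
    NS interface, with mode-resolved transmissions T_w = 1/(1 + Z_eff**2),
    Z_eff = Z / cos(theta_w), combined via Eqs. (4) and (5)."""
    w = np.arange(1, M + 1)
    cos_th = np.sqrt(1.0 - (w * np.pi / kF_W)**2)  # requires M * pi < kF_W
    T = 1.0 / (1.0 + (Z / cos_th)**2)
    G_NS = 2.0 * np.sum(T**2 / (2.0 - T)**2)       # units of 2e^2/h
    G_N = np.sum(T)                                # units of 2e^2/h
    return G_NS / G_N
```

For $`Z=0`$ every $`T_w=1`$ and $`g=2`$; for strong barriers $`g`$ drops below one, reproducing the crossover from excess to deficit conductance.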
## V Results
### A Phase-coherent junction with ideal interface
For the case of coherent transport through an ideal 2DEG-S interface with a WNW constriction in the normal region, see lower insert of Fig. 3, the conductance $`G_{\mathrm{NS}}`$ and the normal state conductance $`G_\mathrm{N}`$ can be found from Eqs. (4) and (5) with the transmission matrix $`t=t_{\mathrm{WNW}}`$.
Figure 3 shows the conductance as a function of $`k_\mathrm{F}W^{}/\pi `$ based on a numerical calculation of $`t_{\mathrm{WNW}}`$ (full lines) and the mean-field approximation (dashed line) for a WNW constriction with an aspect ratio $`L_1/W^{}=1`$ and a relative width $`W/W^{}=31.72`$. The conductance $`G_{\mathrm{NS}}`$ is seen to be approximately quantized in units of $`4e^2/h`$, which is twice the unit of conductance of the normal state conductance $`G_\mathrm{N}`$. However, just above the thresholds ($`k_\mathrm{F}W^{}/\pi =1,2,3,\mathrm{\dots }`$), oscillations due to resonances in the narrow region of the constriction are observed. In the normal state these resonances are small, but in contrast to the Landauer formula, $`G_{\mathrm{NS}}`$ is not linear in the transmission eigenvalues, and this makes the resonances much more pronounced than in the normal state conductance. Another signature of the non-linearity of $`G_{\mathrm{NS}}`$, and of the importance of off-diagonal transmission, is that the mean-field approximation is in good agreement with the numerical calculation for $`G_\mathrm{N}`$ whereas it has difficulties in accounting for $`G_{\mathrm{NS}}`$. The sharpness of the resonances is to a certain extent due to the wide-narrow-wide constriction, and is suppressed in experiments with split-gate-defined constrictions. However, as shown by the simulations of Maaø, Zozulenko, and Hauge , resonance effects do persist even for more smooth connections of the narrow region to the 2DEG reservoirs.
The normalized conductance $`g\equiv G_{\mathrm{NS}}/G_\mathrm{N}`$, shown in the upper insert, is two on the conductance plateaus, but for certain “mode-fillings” of the constriction it is strongly suppressed; for $`k_\mathrm{F}W^{}/\pi \approx 2`$ (two propagating modes) we get $`g\approx 1.5`$. This effect, which occurs at the onset of new modes, was also seen in the calculations of Beenakker \[13, Fig. 1\]. As the number of modes increases, these dips vanish and the normalized conductance approaches its ideal value of two. The reason is simple: if the constriction has $`N`$ propagating modes, then $`N-1`$ of them will have a transmission of order unity and only a single mode (the one with the highest transverse energy) will have a transmission different from unity. As $`N`$ increases, the effect of this single mode on the normalized conductance becomes negligible, and from Eqs. (4) and (5) it follows that $`lim_{N\to \mathrm{\infty }}g=2`$.
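This counting argument is easily made quantitative. In the sketch below (our own illustration), $`N-1`$ modes are taken fully open and one mode has transmission $`\tau `$; the normalized conductance then approaches two as $`N`$ grows:

```python
def g_qpc_ideal_interface(N, tau):
    """g = G_NS / G_N for an ideal interface when the QPC has N - 1 fully open
    modes (T = 1) and one partially open mode with transmission tau."""
    G_NS = 2.0 * ((N - 1) + tau**2 / (2.0 - tau)**2)  # Eq. (4), units of 2e^2/h
    G_N = (N - 1) + tau                               # Eq. (5), units of 2e^2/h
    return G_NS / G_N
```

The deviation from two falls off as $`1/N`$, so the dips at mode onsets matter only in the few-mode regime studied here.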
Since the quasiparticle propagation is coherent and the Andreev scattering is the only back-scattering mechanism, the phase conjugation between electron-like and hole-like quasiparticles makes the conductance $`G_{\mathrm{NS}}`$ independent of the separation $`L_2`$ between the constriction and the interface. If evanescent modes in this region were also taken into account, the results would depend weakly on $`L_2`$, as found in the simulations of De Raedt *et al.* ; our results should be compared with theirs in the large-$`L_2`$ limit. As we shall see below, interfaces with a finite barrier (and thereby normal scattering at the interface) lead to size quantization and thereby to resonances which depend on $`L_2`$.
The back-focusing phenomenon of the Andreev reflected quasiparticles and the lowering of the normalized conductance due to a QPC in the normal region was studied by De Raedt, Michielsen and Klapwijk by solving the time-dependent BdG equation fully numerically. In their wave propagation simulations, the QPC is also modeled by a WNW constriction with a relative width $`W/W^{}=1.7\mu \mathrm{m}/(1.6\times 335\mathrm{\AA })\approx 31.72`$, an aspect ratio $`L_1/W^{}=5/1.6`$ and a relative length $`L_2/W^{}=20/1.6`$. For the particular “mode-filling” $`k_\mathrm{F}W^{}/\pi =3.2`$, they find a normalized conductance $`g=1.87<2`$, but the dependence on the “mode-filling” was not studied in detail.
In Fig. 4 we present a calculation of $`g`$ as a function of $`k_\mathrm{F}W^{}/\pi `$ for this specific geometry. The result of De Raedt *et al.* ($``$) is reproduced, but in general the normalized conductance is seen to have many resonances caused by the high aspect ratio of the constriction. In the range $`3<k_\mathrm{F}W^{}/\pi <4`$, the normalized conductance can be anything in the range $`1.655<g\le 2`$ depending on the position of the Fermi level, and though De Raedt *et al.* found the back-focusing phenomenon of the Andreev reflection to be very robust with respect to changes of the device parameters, the normalized conductance itself certainly depends strongly on the position of the Fermi level. The reason is that only those quasiparticles which enter the region between the constriction and the interface can be Andreev reflected and thus contribute to the conductance enhancement compared to the normal state conductance.
### B Phase-coherent junction with barrier at interface
We next consider coherent transport through an NS interface with a barrier at the interface and a WNW constriction at a distance $`L_2`$ from the interface, see lower right insert of Fig. 5. The conductance $`G_{\mathrm{NS}}`$ and the normal state conductance $`G_\mathrm{N}`$ are found from Eqs. (4) and (5) with the transmission matrix in Eq. (17).
In Fig. 5 we present a calculation of the normalized conductance $`g`$ as a function of the normalized barrier strength $`Z`$ for the device considered by De Raedt *et al.* . For the position of the Fermi level ($``$) considered by De Raedt *et al.*, the normalized conductance is only weakly suppressed (compared to a system without a QPC, see e.g. ) for low barrier scattering ($`Z<1`$), and only for a very high barrier strength ($`Z>2`$) does the normalized conductance approach the cross-over from an excess conductance ($`g>1`$) to a deficit conductance ($`g<1`$). The effect of the barrier for $`Z<1`$ is very similar to the reflectionless tunneling behavior in diffusively disordered junctions , where the net result is as if tunneling through the barrier were reflectionless. In the case of a QPC instead of a diffusive region there is a weak dependence on the barrier strength, and the tunneling is not perfectly reflectionless.
An interesting feature is the non-monotonic behavior of $`g`$ as a function of $`Z`$. For $`Z\rightarrow \mathrm{\infty }`$, the normalized conductance of course vanishes, but in some regions it increases with increasing barrier strength \[curve ($``$)\], and for $`Z\approx 1`$ it has the same value as for $`Z=0`$. This is purely an effect of size quantization in the cavity between the QPC and the barrier, which enters the conducting properties because of the fully coherent propagation of electrons and holes. However, changing e.g. the position of the Fermi level slightly \[curves ($`\mathrm{}`$) and ($`\mathrm{}`$)\] changes the quantitative behavior, although the overall suppression of $`g`$ with increasing $`Z`$ is maintained.
### C Incoherent junction
In junctions where the propagation in the cavity between the QPC and the NS interface is incoherent, the so-called sequential tunneling regime, the QPC and the NS interface can be considered as two series-connected resistive regions . This means that
$`G_{\mathrm{QPC}\mathrm{NS}}`$ $`=`$ $`\left(G_{\mathrm{QPC}}^1+G_{\mathrm{NS}}^1\right)^1,`$ (40)
$`G_{\mathrm{QPC}\mathrm{N}}`$ $`=`$ $`\left(G_{\mathrm{QPC}}^1+G_\mathrm{N}^1\right)^1,`$ (41)
where $`G_{\mathrm{QPC}}`$ and $`G_\mathrm{N}`$ are found from Eq. (5) with $`t=t_{\mathrm{QPC}}`$ and $`t=t_\delta `$, respectively, and $`G_{\mathrm{NS}}`$ from Eq. (4) with $`t=t_\delta `$. The normalized conductance can be written as
$$g\frac{G_{\mathrm{QPC}\mathrm{NS}}}{G_{\mathrm{QPC}\mathrm{N}}}=\frac{G_{\mathrm{NS}}}{G_\mathrm{N}}\times \frac{G_{\mathrm{QPC}}+G_\mathrm{N}}{G_{\mathrm{QPC}}+G_{\mathrm{NS}}},$$
(42)
and for $`W\gg W^{}`$ the major contribution to the resistance comes from the QPC, i.e., $`G_{\mathrm{QPC}}\ll G_\mathrm{N},G_{\mathrm{NS}}`$. This means that the enhancement of $`G_{\mathrm{NS}}`$ compared to $`G_\mathrm{N}`$ has a negligible effect on the total conductance, so that the normalized conductance approaches $`g\rightarrow 1`$.
For an ideal QPC and an ideal interface we have $`G_{\mathrm{QPC}}=\frac{2e^2}{h}N`$, $`G_{\mathrm{NS}}=\frac{4e^2}{h}M`$, and $`G_\mathrm{N}=\frac{2e^2}{h}M`$, where $`N`$ is the number of modes in the QPC and $`M`$ is the number of modes at the NS interface. The corresponding normalized conductance is shown in Fig. 6.
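For these ideal elements, Eqs. (40)-(42) reduce to a closed form, $`g=2(N+M)/(N+2M)`$, when conductances are measured in units of $`2e^2/h`$. A minimal sketch (function name is illustrative, not from the paper) evaluates Eq. (42) and reproduces the limit stated above: the enhancement washes out ($`g\rightarrow 1`$) once the interface carries many more modes than the QPC.

```python
def g_sequential(n_qpc, m_ns):
    """Normalized conductance g = G_QPC-NS / G_QPC-N, Eq. (42), for ideal
    elements in units of 2e^2/h: G_QPC = N, G_NS = 2M, G_N = M."""
    g_qpc, g_ns, g_n = n_qpc, 2 * m_ns, m_ns
    g_series_ns = 1.0 / (1.0 / g_qpc + 1.0 / g_ns)   # Eq. (40)
    g_series_n = 1.0 / (1.0 / g_qpc + 1.0 / g_n)     # Eq. (41)
    return g_series_ns / g_series_n                  # equals 2(N+M)/(N+2M)

print(g_sequential(1, 1))     # 4/3: modest enhancement for N = M = 1
print(g_sequential(1, 500))   # close to 1: enhancement washed out for M >> N
```

The closed form makes the trend in Fig. 6 explicit: the QPC resistance dominates the series combination, so the Andreev doubling of the interface conductance barely changes the total.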
The sequential tunneling behavior may provide an explanation for the unexpectedly small conductance enhancement seen in the experiments of Benistant *et al.* on Ag-Pb interfaces with injection of quasiparticles into an Ag crystal through a point contact. The condition for the electronic transport to be incoherent is that the distance between the point contact and the NS interface is longer than the correlation length $`L_c=\mathrm{Min}(\mathrm{\ell }_{\mathrm{in}},L_\mathrm{T})`$, $`\mathrm{\ell }_{\mathrm{in}}`$ being the inelastic scattering length and $`L_\mathrm{T}`$ the Thouless length . For the ballistic device studied by Benistant *et al.*, $`L_c=L_\mathrm{T}=\mathrm{\hbar }v_\mathrm{F}/k_\mathrm{B}T\approx 9\mu \mathrm{m}`$ (at $`T=1.2\mathrm{K}`$), which is much shorter than the distance between the point contact and the NS interface ($`200\mu \mathrm{m}`$). Lowering the temperature will increase the correlation length, and for sufficiently low temperatures ($`T\lesssim 0.05\mathrm{K}`$) we expect a cross-over from the sequential tunneling regime to the phase-coherent regime, where the Andreev mediated conductance enhancement should become observable.
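The length and temperature scales quoted here follow directly from $`L_\mathrm{T}=\mathrm{\hbar }v_\mathrm{F}/k_\mathrm{B}T`$. A quick check, assuming a textbook Fermi velocity for silver of about $`1.39\times 10^6`$ m/s (this value is not given in the text):

```python
hbar = 1.0546e-34   # J s
k_B = 1.381e-23     # J/K
v_F = 1.39e6        # m/s, Fermi velocity of Ag (assumed literature value)

L_T = hbar * v_F / (k_B * 1.2)          # Thouless length at T = 1.2 K
T_cross = hbar * v_F / (k_B * 200e-6)   # T at which L_T reaches the 200 um separation

print(f"L_T(1.2 K) = {L_T * 1e6:.1f} um")       # around 9 um, as quoted
print(f"crossover T = {T_cross * 1e3:.0f} mK")  # around 50 mK, consistent with T <~ 0.05 K
```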
## VI Discussion and conclusion
For an ideal 2DEG-S interface with a QPC in the normal region, the normalized conductance $`g\equiv G_{\mathrm{NS}}/G_\mathrm{N}`$ depends strongly on the position of the Fermi level, and only when the Fermi level corresponds to a conductance plateau is a doubling of the conductance found. The deviations from the factor-of-two enhancement, when the Fermi level does not correspond to a plateau, can be significant, and for a particular example of the WNW constriction we find that the normalized conductance can be suppressed to $`g\approx 1.5`$ in a system with only two propagating modes in the constriction. In the presence of a barrier at the 2DEG-S interface, the normalized conductance depends strongly on the longitudinal quantization in the cavity set up by the QPC and the barrier. Depending on the barrier strength, the length of the cavity, and the position of the Fermi level, this longitudinal quantization may give rise to both constructive and destructive interference in the transmission and thus also in the conductance. Perhaps surprisingly, the effect of the barrier is very much suppressed (compared to a system without a QPC, see e.g. ) due to a very strong back-scattering at the return of the quasiparticles to the normal probe. The localization of quasiparticles in the cavity gives rise to an almost reflectionless tunneling through the barrier, as is also found in systems with a diffusive normal region . The interference due to localization in the cavity will be smeared by a finite temperature, and is also expected to be suppressed by a finite inelastic scattering length compared to the length of the cavity .
For the sequential tunneling regime we find that the conductance enhancement vanishes as the number of modes at the interface becomes much larger than the number of modes in the QPC.
Our calculations show that the S–matrix approach provides a powerful alternative to the time-dependent Bogoliubov-de Gennes approach of De Raedt *et al.* in detailed studies of the conducting properties of nanoscale 2DEG-S devices. Even though the back-focusing phenomenon of the Andreev reflection is robust against changes in the geometry , the electron-hole conversion efficiency itself is not.
Finally, we stress that for a quantitative comparison to experimental systems, it is crucial to take different Fermi wave vectors and effective masses of the 2DEG and the superconductor into account .
## Acknowledgements
We would like to thank C.W.J. Beenakker, M. Brandbyge, J.B. Hansen, and H.M. Rønnow for useful discussions. NAM acknowledges financial support by the Nordic Academy for Advanced Study (NorFA) and HS acknowledges support by the European Community.
# The CLEO-III RICH Detector and Beam Test Results
## I Introduction
The CLEO detector is undergoing a major upgrade to phase III in parallel with a significant luminosity increase of the CESR electron-positron collider. These improvements will make precision measurements possible, especially of CP violation and rare B decays. One of the main goals of the CLEO III upgrade is to have excellent charged particle identification.
In order to achieve this goal, we have designed and are constructing a Ring Imaging Cherenkov (RICH) detector based on the 'proximity focusing' approach. It needs no optical focusing elements, but requires that the radiator be relatively thin compared to the expansion gap between the radiator and the photon detector. The detector can therefore be made flat and compact to fit in limited space. Our design is based on the pioneering work done by the Fast-RICH group.
When a charged particle passes through a medium, it radiates photons if the velocity of the particle exceeds the speed of light in the medium. The direction of the radiated photons with respect to the track is given by $`\mathrm{cos}\theta =1/(\beta n)`$, where $`n`$ is the refractive index of the medium and $`\beta =p/E`$. To distinguish between mass hypotheses for a track of momentum $`p`$, the relation $`\mathrm{\Delta }\mathrm{sin}^2\theta =\mathrm{\Delta }m^2/(p^2n^2)`$ applies.
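As a numerical illustration of these relations (assuming $`n=1.5`$ and standard pion and kaon masses; the actual design uses the dispersive index of LiF), the pi/K angle difference at the CLEO benchmark momentum of 2.65 GeV/c comes out close to the quoted 14.4 mrad:

```python
import math

def cherenkov_angle(p, m, n=1.5):
    """Cherenkov angle (rad) for momentum p and mass m in GeV units,
    from cos(theta) = 1/(beta*n) with beta = p/E."""
    beta = p / math.sqrt(p**2 + m**2)
    return math.acos(1.0 / (beta * n))

m_pi, m_K = 0.1396, 0.4937   # GeV/c^2
p = 2.65                     # GeV/c
d_theta = cherenkov_angle(p, m_pi) - cherenkov_angle(p, m_K)
print(f"pi/K Cherenkov angle difference: {d_theta * 1e3:.1f} mrad")  # ~14 mrad
```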
In a medium of $`n=1.5`$, a very fast particle ($`\beta \approx 1`$) radiates Cherenkov photons at an angle of about 840 mrad. For the CLEO III RICH design, our goal is to separate charged pions and kaons ($`\pi /\mathrm{K}`$) at $`p`$ = 2.65 GeV/c, the highest momentum that needs to be considered at CLEO for B decays. The Cherenkov angle difference between $`\pi `$ and $`K`$ at this momentum is 14.4 mrad . In order to obtain a very low fake rate with high efficiency, we would like to achieve a $`4\sigma `$ separation between $`\pi `$ and $`K`$ at all the momenta relevant for B decays, with the addition of a $`1.8\sigma `$ dE/dx contribution. This requires that the RICH provide a 4.0 mrad Cherenkov resolution per track. Our benchmark is thus an average of 12 photoelectrons per track, each with a resolution of $`\pm `$14 mrad.
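These numbers are mutually consistent, as a back-of-envelope check shows: 14 mrad per photon divided by the square root of 12 photoelectrons gives about 4.0 mrad per track, and combining the resulting RICH separation in quadrature with the $`1.8\sigma `$ dE/dx contribution recovers roughly $`4\sigma `$ (quadrature addition of independent separations is an approximation, not the paper's exact procedure):

```python
import math

sigma_photon = 14.0   # mrad, single-photon Cherenkov angle resolution
n_pe = 12             # photoelectrons per track

sigma_track = sigma_photon / math.sqrt(n_pe)
print(f"per-track resolution: {sigma_track:.1f} mrad")   # ~4.0 mrad

rich_sep = 14.4 / sigma_track            # pi/K angle difference over resolution
total_sep = math.hypot(rich_sep, 1.8)    # add the dE/dx separation in quadrature
print(f"combined separation: {total_sep:.1f} sigma")     # ~4 sigma
```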
## II Detector Design and construction
The overall structure of the CLEO III RICH detector is cylindrical, as shown in Fig. 1. It consists of compact photon detector modules at the outer radius and radiator crystals at the inner radius, separated by an expansion gap. The whole detector is divided into 30 sectors in the azimuthal direction. The RICH resides between the drift chamber and the electromagnetic calorimeter. The detector occupies the radial space between 80 and 100 cm, is 2.5 m long, and covers 81% of the solid angle .
The choice of the material for the radiator and detector windows is driven by the choice of Triethylamine (TEA) as the photosensitive material. The $`\mathrm{CH}_4/\mathrm{TEA}`$ gas mixture has a finite quantum efficiency in the VUV region (135-165 nm), which requires the use of fluoride crystals to ensure transparency. The Cherenkov angle resolution is dominated by chromatic dispersion in the emission of light . Of all the fluorides, we chose LiF as the radiator because of its smaller chromatic dispersion in the VUV region. The photon detector windows are made of $`\mathrm{CaF}_2`$.
The LiF radiators will be mounted on an inner carbon fiber cylinder as a $`14\times 30`$ array; each radiator is $`170\times 170\mathrm{m}\mathrm{m}^2`$ and 10 mm thick. The normal choice of radiator shape is a flat plate. However, since the total internal reflection angle of a 150 nm photon in LiF ($`n\approx 1.5`$ at 150 nm) is about $`42^{\circ }`$ and the Cherenkov angle is close to $`48^{\circ }`$, photons radiated by a track at normal incidence will be trapped, as shown in Fig. 2. Instead of tilting the plates to allow the light to get out near the center of the detector, we developed radiators with a 'sawtooth' pattern on their outer surface, which allows the Cherenkov photons to escape the radiator. We will use such sawtooth radiators for the central four of the 14 rings.
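The trapping argument can be verified directly with Snell's law: for $`n\approx 1.5`$ (here assumed constant, ignoring dispersion), a $`\beta \approx 1`$ track at normal incidence produces photons that strike the flat exit face beyond the critical angle.

```python
import math

n = 1.5  # LiF refractive index near 150 nm (approximate)
theta_crit = math.degrees(math.asin(1.0 / n))   # total-internal-reflection angle
theta_c = math.degrees(math.acos(1.0 / n))      # Cherenkov angle for beta ~ 1

print(f"critical angle:  {theta_crit:.1f} deg")  # ~42 deg
print(f"Cherenkov angle: {theta_c:.1f} deg")     # ~48 deg
# For a track at normal incidence, the photon meets the flat exit surface at
# theta_c > theta_crit, so it is totally internally reflected (trapped);
# the sawtooth surface changes the local incidence angle and lets it escape.
```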
The expansion gap (16 cm) is filled with high purity nitrogen gas. The volume is well sealed to minimize the contamination of $`\mathrm{O}_2`$ and $`\mathrm{H}_2\mathrm{O}`$, which absorb VUV photons.
The photon detector is a multi-wire proportional chamber with cathode pad readout. It is filled with $`\mathrm{CH}_4`$ gas bubbled through liquid TEA at $`15^{\circ }\mathrm{C}`$ (7% concentration). The entrance windows are 2 mm thick $`\mathrm{CaF}_2`$, coated with 100 $`\mu \mathrm{m}`$ wide silver traces to act as an electrode. Photons with a wavelength between 135 nm and 165 nm can generate single photoelectrons in the $`\mathrm{CH}_4/\mathrm{TEA}`$ mixture. The photoelectron drifts toward the 20 $`\mu \mathrm{m}`$ diameter Au-W anode wires, where avalanche multiplication takes place. Charge is induced on an array of $`8.0\times 7.5\mathrm{mm}^2`$ cathode pads, allowing a precise reconstruction of the position of the Cherenkov photon at the detector plane. The chamber is asymmetric, with a wire-to-pad distance of 1 mm and a wire-to-window distance of 3.5 mm, to improve the wire to cathode pad coupling.
## III Electronics
The choice of the readout electronics for CLEO III is governed by several considerations . First, the charge induced by a single photoelectron avalanche at moderate gain follows an exponential distribution, as shown in Fig. 3. As the most likely charge is zero, a very low noise system is necessary to achieve high efficiency. Furthermore, we want to reconstruct charged tracks accurately. We expect the pulse height from a charged track to be at least a factor of 20 higher than the mean gain for a single photoelectron. In addition, high resolution analog readout allows a more effective suppression of cluster overlaps. Lastly, in the final system, we will have 230,400 channels to be read out. An effective zero suppression algorithm is necessary to keep the data size manageable.
We use the VA\_RICH chip as the front end processor. It is a custom-designed 64-channel VLSI chip based on the VA series developed for Si microchip readout . It features low noise and large dynamic range. With a 2 pF input capacitor, its equivalent noise charge is about 150 electrons. Linearity is excellent up to $`\pm 4.5\times 10^5\mathrm{e}^{}`$ input. Two chips are wire bonded to one hybrid circuit. Sixty hybrids are mounted on each photon detector.
The signal travels over a 6 m long cable from the detector to a VME databoard. The databoard includes receivers, 12-bit differential ADCs, bias circuitry for the VA\_RICH chips, a circuit generating the timing sequence needed by the VA\_RICH chip, and a VME interface. In the final version, the databoards will include a DSP-based algorithm for common mode subtraction that will make the system less vulnerable to coherent noise sources. For the beam test, 8 prototype databoards in one crate were used. These databoards were not equipped with coherent noise subtraction, so all the channels were read out to monitor and correct for coherent noise fluctuations.
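Common-mode subtraction of the sort planned for the DSP can be illustrated with a simple per-event algorithm: subtract pedestals channel by channel, estimate the baseline shift shared by all channels of a chip with a robust statistic such as the median, and remove it. This is a generic sketch with hypothetical names and values, not the CLEO implementation:

```python
def common_mode_subtract(adc_counts, pedestals):
    """Per-event coherent-noise correction for one readout chip:
    subtract pedestals, then remove the median shift common to all channels."""
    residuals = [a - p for a, p in zip(adc_counts, pedestals)]
    common = sorted(residuals)[len(residuals) // 2]   # median as common-mode estimate
    return [r - common for r in residuals]

event = common_mode_subtract([512, 515, 611, 509, 514], [500] * 5)
print(event)  # [-2, 1, 97, -5, 0]: the hit channel stands out once the shared shift is gone
```

The median is preferred over the mean here so that a large genuine hit does not bias the common-mode estimate.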
## IV Beam Test setup
In order to test our design of the RICH detector, a comprehensive beam test was performed. The beam test setup is shown in Fig. 4.
The first two photon detectors built for the final CLEO III RICH were mounted in an aluminum box. In addition, one plane and two sawtooth LiF radiators were mounted in the box at the same distance from the photon detectors as designed for CLEO III RICH. The box was sealed and flushed with pure nitrogen. A schematic of the box can be found in Ref. .
The beam test was performed in a muon halo beam in the Meson East area of Fermilab. Apart from the RICH itself, the system consisted of trigger scintillator counters and a tracking system. We used two sets of MWPCs to measure the track position and angle. They provided 0.7 mm spatial resolution per station and a track angle resolution of 1 mrad. To simulate different track incidence angles, the RICH detector box was rotated to various polar and azimuthal angles.
## V Beam test electronics performance
During the three week beam test, the photon detectors were run at an average gain of $`4\times 10^4`$. The total electronic noise was about 1000 electrons. After coherent noise subtraction, the residual incoherent noise was 400 electrons, providing an average signal-to-noise ratio for single photon signal of 100:1.
An important effect that we discovered during the beam test was the temperature sensitivity of the analog +2 V and –2 V supplies necessary to bias the VA\_RICH. Fig. 5 shows this effect. This, in turn, affected the bias configuration of the chip and its pedestal. The $`\pm 2`$ V voltage regulator design has been changed to eliminate this problem.
## VI Results
Fig. 6 shows the displays of two single events, one from the plane radiator at $`30^{\circ }`$ track incidence and one from a sawtooth radiator at normal incidence. For the plane radiator, the image is a single arc, as shown. For the sawtooth radiator, two arcs on opposite sides of the charged track are visible, with the lower one truncated due to acceptance. The acceptance for images from plane radiators is about 85%, which is the maximum possible once the necessary chamber mounts and window frames are taken into account. The acceptance for sawtooth images was about 50% in this setup, and should be about 85% in the final system.
Channels recording a pulse height above a threshold of $`5\sigma `$ of the noise are selected for further analysis. The first step in the analysis is clustering. The center of a cluster is treated as the location of a photoelectron. About 2.2 pads per cluster are found, with 1.1 photoelectrons per cluster due to some unresolved photon overlaps. For each photoelectron, the trajectory is optically traced back through all media, with the assumption that it originates from the mid-point of the radiator. From this propagation path, the Cherenkov angle is calculated.
We extract the number of photoelectrons per track by fitting the single photon spectrum, taking the background into account. We apply a $`\pm 3\sigma `$ cut so as not to include non-Gaussian tails. The numbers of photoelectrons per track at different incident angles are shown in Fig. 7 (a). For the plane radiator, the average number of photoelectrons per track is between 12 and 15, varying with incident angle. For the sawtooth radiator the number is lower due to the limited acceptance. An extrapolation made to estimate the performance of the final system predicts 17-20 photoelectrons. Fig. 7 (b) shows the measured Cherenkov angle resolution per photon, $`\sigma _\gamma `$. The resolution is between 11 and 15 mrad.
The parameter that determines the particle identification power of this system is the Cherenkov angle resolution per track, $`\sigma _{track}`$. The resolutions are summarized in Fig. 8 for both plane and sawtooth radiators. The measured $`\sigma _{track}`$ increases with track angle. A Monte Carlo study was performed to simulate the resolution. The Monte Carlo reproduces the plane radiator data well. For the sawtooth radiator, the data show a worse resolution than the simulation. There are several sources of uncertainty in the simulation that could account for this discrepancy, such as the tracking error, beam background, and imperfect knowledge of the sawtooth shape.
The Monte Carlo study also shows that the contribution to $`\sigma _{track}`$ from the tracking errors is significant, especially at large incident track angles. This error enters through the uncertainty in the photon emission point. For the plane radiator at $`40^{\circ }`$, the contribution is about 4 mrad, while the total error is about 5.8 mrad. In the final CLEO III system, the tracking is expected to be substantially better than in the system used in the beam test. In order to obtain an approximate projection of the performance of the CLEO III RICH, we extrapolate the resolution from the beam test results, assuming no tracking error and the acceptance of the final system. This resolution is shown as the filled circles in Fig. 8.
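For the plane radiator at $`40^{\circ }`$, this extrapolation amounts (to first approximation) to removing the tracking contribution in quadrature, assuming independent Gaussian error sources; the published projection also folds in the final-system acceptance, which this one-liner ignores:

```python
import math

sigma_total = 5.8      # mrad, measured per-track resolution at 40 deg
sigma_tracking = 4.0   # mrad, Monte Carlo estimate of the tracking contribution

# Residual resolution if the tracking error were negligible
sigma_intrinsic = math.sqrt(sigma_total**2 - sigma_tracking**2)
print(f"intrinsic resolution: {sigma_intrinsic:.1f} mrad")  # 4.2 mrad
```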
## VII Summary and outlook
The beam test results indicate that the specifications of the CLEO III RICH design are fulfilled. In particular, the Cherenkov angle resolution per track reaches 4 mrad, which will provide $`4\sigma `$ $`\pi /\mathrm{K}`$ separation in CLEO III . The RICH detector is in the final stage of construction. We expect completion of the project in Summer 1999.
## VIII Acknowledgments
We would like to thank Fermilab for providing us with the necessary infrastructure and dedicated beam time. We give particular thanks to Chuck Brown and other colleagues from E866 for their hospitality.
# Noise-correlation-time-mediated localization in random nonlinear dynamical systems
## Abstract
We investigate the behavior of the residence times density function for different nonlinear dynamical systems with limit cycle behavior and perturbed parametrically with a colored noise. We present evidence that underlying the stochastic resonancelike behavior with the noise correlation time, there is an effect of optimal localization of the system trajectories in the phase space. This phenomenon is observed in systems with different nonlinearities, suggesting a degree of universality.
Stochastic resonance (SR) normally refers to the phenomenon by which an additive noise (usually considered uncorrelated) can enhance the coherent response of a periodically driven system. First proposed in climate model studies , SR was first experimentally verified by Fauve and Heslot , and since then this behavior has been predicted and observed in many different theoretical and experimental systems (see for an extensive review and a complete list of references). In particular, the presence of SR has been discussed in a great number of models including spatio-temporal systems , and has helped to understand how biological organisms may use noise to enhance the transmission of weak signals through nervous systems . Quite recently it has been numerically shown that SR can also occur in the absence of an external periodic force as a consequence of the intrinsic dynamics of the nonlinear system , a behavior that has been termed autonomous stochastic resonance. Most of the work on SR has traditionally focused on systems with additive noise, and with some exceptions (see, for instance, ) little attention has been given to cases where the noise perturbs the system parametrically, in spite of the well-known differences with the additive situation. With respect to non-white noise, the effect of additive colored noise on SR has been considered in periodically driven overdamped systems , showing that the correlation time can suppress SR monotonically, a feature demonstrated experimentally in . However, only very recently has the situation in which the system is subject to both multiplicative and colored noise been discussed in the literature. In the authors analyze the effect of multiplicative colored noise on periodically driven linear systems, discussing the appearance of SR by changing either the intensity or the correlation time of the noise.
For nonlinear models, in we considered a system without periodic external force but with an intrinsic limit cycle behavior, which was parametrically perturbed by an Ornstein-Uhlenbeck (OU) noise, finding a nonmonotonic behavior of the coherence in the system response when measured as a function of the noise correlation time, while no coherence enhancement was obtained when changing the noise intensity. A similar result has also been recently obtained analytically for an overdamped linear system periodically driven and parametrically perturbed by an OU process .
In this paper, we present numerical evidence which suggests that underlying the SR-like behavior as a function of the noise correlation time, there is a localization effect of the system trajectories in the phase space for a particular value of the correlation time. This is obtained in systems with intrinsic limit cycle, perturbed parametrically by an OU process and with different nonlinearities, which is also a clear indication that the phenomenon is not a peculiarity of an specific model.
We study three different $`2D`$ random systems. The delayed regulation model, known from population dynamics
$$x_{t+1}=\lambda _tx_t(1-x_{t-1}),$$
(1)
the Sel’kov model for glycolysis
$`\dot{x}`$ $`=`$ $`-x+\lambda _ty+x^2y,`$ (2)
$`\dot{y}`$ $`=`$ $`b-\lambda _ty-x^2y,`$ (3)
and the Odell model also from population dynamics
$`\dot{x}`$ $`=`$ $`x[x(1-x)-y],`$ (4)
$`\dot{y}`$ $`=`$ $`y(x-\lambda _t).`$ (5)
Here, $`t`$ takes discrete values in (1) or continuous values in (3) and (5), and in all cases we will consider the control parameter as a random variable $`\lambda _t=\lambda +\zeta _t`$, i.e., a deterministic part $`\lambda `$ plus a stochastic perturbation $`\zeta _t`$, which is assumed to be an OU process, i.e., a stationary Gaussian Markov noise with zero mean, $`<\zeta _t>=0`$, and exponential correlation $`<\zeta _t\zeta _t^{}>=\left(D/\tau \right)\mathrm{exp}\left(-\left|t-t^{}\right|/\tau \right)`$, where $`\tau `$ is the correlation time and $`D/\tau =\sigma ^2`$ is the variance of the noise. We will refer to the square root of the variance, $`\sigma `$, as the intensity of the noise. The deterministic counterparts of (1), (3) and (5) undergo a supercritical Hopf bifurcation at $`\lambda =\lambda _H`$ which, in the Sel’kov model, also depends on the parameter $`b`$.
The numerical integration has been carried out with $`\lambda `$ in the limit cycle parameter domain. The iteration of (1) has been recreated using an integral algorithm that guarantees the quality of the correlation function in the simulations of the noise at discrete times, while (3) and (5) have been integrated by an order $`2`$ explicit weak scheme . The results presented hereafter are independent of the initial conditions and were obtained after the decay of the initial transients.
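The authors' integral algorithm is not reproduced here, but the structure of such a simulation can be sketched with the exact one-step OU update (which preserves the prescribed variance and exponential correlation at discrete times) driving the delayed regulation map (1); all names and parameter values below are illustrative:

```python
import math, random

def ou_samples(sigma, tau, steps, seed=1):
    """Ornstein-Uhlenbeck noise at unit time steps via the exact update
    zeta' = a*zeta + b*N(0,1), a = exp(-1/tau), b = sigma*sqrt(1 - a*a),
    so the stationary std is sigma and the correlation decays as exp(-|dt|/tau)."""
    rng = random.Random(seed)
    a = math.exp(-1.0 / tau)
    b = sigma * math.sqrt(1.0 - a * a)
    zeta, out = 0.0, []
    for _ in range(steps):
        zeta = a * zeta + b * rng.gauss(0.0, 1.0)
        out.append(zeta)
    return out

# Drive the delayed regulation map (1) with the colored noise (illustrative values,
# lam slightly above the deterministic Hopf point):
lam, x_prev, x = 2.1, 0.4, 0.5
for zeta in ou_samples(sigma=0.02, tau=5.0, steps=500):
    x_prev, x = x, (lam + zeta) * x * (1.0 - x_prev)
```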
The observed fact that for a particular correlation time $`\tau _r`$ the coherence of the system oscillations has a maximum, and that the frequency of these oscillations is close to the deterministic one $`w_d`$, seems to indicate that for the resonant correlation time the probability that the system visits the attractors associated with the mean control parameter value $`\lambda =<\lambda _t>`$ also has a maximum. If this is the case, this maximum should be accompanied by a decrease in the probability of visiting other attractors associated with parameters far away from $`\lambda `$, or, in other words, should lead to an effect of concentration or localization of orbits around the attractor associated with $`\lambda `$ as soon as $`\tau \approx \tau _r`$. It is worth recalling that, because of changes in the stability properties, a somewhat similar localization effect can also occur in deterministic systems with time-dependent parameters, as is the case, for instance, in the well known parametric resonance phenomenon.
With the aim of studying the residence times distribution of the system on the different available attractors in the system's periodic or quasiperiodic domain, we consider a deterministic attractor $`\mathrm{\Lambda }(\lambda )`$, i.e., the attractor obtained with the deterministic counterpart of the stochastic system, evaluated at a particular value of the control parameter, $`\lambda `$. Next we divide the system phase space into $`N+1`$ attractors associated with $`N+1`$ values of the parameter separated by a distance $`\mathrm{\Delta }\lambda `$. In this way, a mesh is composed of concentric deterministic attractors centered around the stationary equilibrium state $`(x^{},y^{})_{\lambda <\lambda _H}`$, with $`\lambda `$ in the fixed point domain. This partition looks like the one shown in Fig. 1a. With this construction, we have a series of $`N+1`$ attractors $`\{\mathrm{\Lambda }(\lambda _{N/2_{-}}),\mathrm{\dots },\mathrm{\Lambda }(\lambda _{1_{-}}),\mathrm{\Lambda }(\lambda _0),\mathrm{\Lambda }(\lambda _{1_{+}}),\mathrm{\dots },\mathrm{\Lambda }(\lambda _{N/2_{+}})\}`$, where we use the definition $`\lambda _{k_{\pm }}\equiv \lambda \pm k\mathrm{\Delta }\lambda `$. This series divides the phase space into $`N`$ rings, each one denoted by $`\mathrm{\Gamma }(\gamma _k)\equiv (\mathrm{\Lambda }(\lambda _k),\mathrm{\Lambda }(\lambda _{k+1}))`$, where $`\gamma _k\equiv \left(\lambda _{k+1}+\lambda _k\right)/2`$ is the mean control parameter obtained from the control parameters that define the boundaries of the ring. The stochastic system is integrated on this mesh, and its evolution describes random trajectories, such as the one shown in Fig. 1b for the particular case of (1), visiting each ring of the mesh for a finite time. During the integration process we measure the residence time in the rings as follows: let $`t_1^k`$ and $`t_2^k`$ be the entrance and exit times of the ring $`\mathrm{\Gamma }(\gamma _k)`$, respectively.
The residence time in this ring is $`t(\gamma _k)=t_2^k-t_1^k`$, and we denote the residence time of the $`n`$th visit event to the ring $`\mathrm{\Gamma }(\gamma _k)`$ by $`t_n(\gamma _k)`$. Then, if during an integration time $`I`$, which is achieved by integrating $`R`$ realizations of $`M`$ time steps, there have been $`V_k`$ visit events to the ring $`\mathrm{\Gamma }(\gamma _k)`$, the mean residence time of the system in this ring is given by the mean of the residence events, that is, $`T(\mathrm{\Gamma }(\gamma _k))\equiv \sum _{n=1}^{V_k}\frac{t_n(\gamma _k)}{I}.`$ Such a determination of the residence times gives an alternative statistical measure of the resonant amplification described in . Therefore, given a pair $`(\sigma ,\tau )`$, the function defined by the histogram $`P(T)\equiv P(T(\gamma _k))\equiv P(\frac{T(\mathrm{\Gamma }(\gamma _k))}{\mathrm{\Delta }\lambda })`$ is a measure of the probability density for the system state to be in the region defined by the ring $`\mathrm{\Gamma }(\gamma _k)`$ . An example of such a histogram is depicted in Fig. 2 and shows that the system mostly visits the attractors surrounding the ring $`\mathrm{\Gamma }(<\lambda _t>)`$. We remark that we have carefully selected the simulation parameters to ensure that the partition does not contain overlapped attractors, so that it has a well defined meaning. An illustrative example of the residence times density function (RTDF) as a function of the correlation time is depicted in Fig. 3 for the three models. Obviously, the localization of the system trajectories depends strongly on $`\tau `$. The RTDF height shows a nonmonotonous behavior, reaching a maximum at a particular value $`\tau \equiv \tau ^{}`$ and, at the same value, the width $`W`$ calculated at the height $`h/\sqrt{e}`$ shows a remarkable minimum, as represented in the inset curves.
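In code, this bookkeeping reduces to accumulating the time spent in each ring and normalizing by the total integration time $`I`$; a minimal sketch with hypothetical names, for a trajectory already mapped to ring indices:

```python
from collections import defaultdict

def mean_residence_times(ring_sequence, dt=1.0):
    """Given the ring index occupied at each integration step, return
    T(Gamma(gamma_k)) = (sum of visit durations in ring k) / (total time I)."""
    total_time = len(ring_sequence) * dt
    time_in_ring = defaultdict(float)
    for k in ring_sequence:
        time_in_ring[k] += dt
    return {k: t / total_time for k, t in time_in_ring.items()}

print(mean_residence_times([0, 0, 1, 1, 1, 0, 2, 2]))
# {0: 0.375, 1: 0.375, 2: 0.25}
```

Binning these values per ring (divided by the ring width) then gives the histogram $`P(T)`$ described above.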
The correlation time of the parametric random perturbation acts as a tuner which controls (in a statistical sense) the behavior of the system, maximizing its localization in the region of the phase space surrounding $`<\lambda _t>`$. Furthermore, the ratio $`h/W`$ has a maximum for a particular value of $`\tau `$, and this optimal value depends on $`\lambda =<\lambda _t>`$, as can be appreciated in Fig. 4. Such a dependence enables us to relate the optimal correlation time for maximal localization, $`\tau ^{}`$, to the temporal scales of the deterministic counterparts. We first study the postponement of the bifurcation point caused by the multiplicative noise in order to obtain the postponed bifurcation point $`\lambda _H^{}(\sigma ,\tau )`$. We next calculate the effective distance to the bifurcation point $`\mathrm{\Delta }\lambda ^{}=|\lambda -\lambda _H^{}|`$, and measure from the deterministic temporal series the period, $`T^{}`$, of the oscillations when the system is evaluated at a distance $`\mathrm{\Delta }\lambda ^{}`$ from the deterministic bifurcation point. With this information, in Fig. 5 we plot the behavior of $`\tau ^{}`$ with the quantity $`\mathrm{\Delta }T^{}\equiv |T^{}-T(\lambda _H)|`$, where $`T(\lambda _H)`$ is the period of the deterministic system at precisely the Hopf bifurcation point. The curves can be fitted by a power law $`\tau ^{}\sim \left(\mathrm{\Delta }T^{}\right)^\alpha `$ with the exponents $`\alpha =0.59`$, $`0.58`$ and $`0.53`$ for (1), (3) and (5), respectively, and this seems to indicate that the localization behavior with $`\tau `$ is characterized by a unique exponent with value close to $`1/2`$. We note that for the case of (1) it is even possible to relate $`\mathrm{\Delta }T^{}`$ with the system's implicit periodicity $`T_2`$, thus recovering a relation similar to that calculated in .
In this way, these results relate the resonantlike behavior previously reported in using the quality factor $`\beta `$ to an increase (in the mean) of the localization of the orbits of the system.
From the behavior of the quantity $`h/W`$, it is clear that a concentration of orbits around a narrow range of bands in the phase space implies a greater weight of those particular frequencies in the power spectrum and, as a consequence, a nonmonotonous behavior qualitatively similar to that of Fig. 4 should be expected for $`\beta `$, indicating an increase of the coherence in the system response. This is indeed the case for all three models (with quality factors showing a maximum for values of the correlation time close to $`\tau ^{}`$), clearly indicating that the SR-like effect induced by colored noise in nonlinear systems with limit cycle behavior is quite general.
Summarizing, we have presented numerical evidence of a novel effect of enhanced localization of orbits mediated by the correlation time of a multiplicative OU process in nonlinear dynamical systems with limit cycle behavior. This effect is characterized by a power law with an exponent close to $`1/2`$ for all the models considered, in spite of their different nonlinearities. This behavior could indicate the universal character of the phenomenon, but further research is required to clarify this point. This work also relates the previously reported SR-like behavior to this localization effect.
We acknowledge financial support from DGESEIC (Spain) project PB97-0076.
# Spin-Orbit-Induced Kondo Size Effect in Thin Films with 5/2-spin Impurities
## I Introduction
The Kondo effect in samples of reduced dimensions (thin films, narrow wires) is one of the most challenging problems in the field. Most experiments have shown that the Kondo contribution to the resistivity is suppressed when the sample size is reduced or the disorder in the sample is increased. In addition, the different thermopowers of samples of different thickness gave further evidence for the existence of a size dependence . The previously examined samples were Au(Fe), Cu(Fe), and Cu(Cr) alloys, i.e., alloys with integer-spin impurities. Surprisingly, however, only a very weak size dependence has recently been found in Cu(Mn) alloys . The first proposed explanation, related to the size of the Kondo screening cloud, was ruled out both theoretically and experimentally . In the limit of strong disorder the experiments may be well explained by the theory of Phillips and co-workers based on weak localization .
In the dilute limit the theory of spin-orbit-induced magnetic anisotropy proposed by the authors was able to explain every experiment on thin layers with reduced dimensions, small disorder, and integer-spin impurities . Recently an elegant method which can be applied to a general geometry was developed by Fomin and co-workers . According to this theory the spin-orbit interaction of the conduction electrons on the non-magnetic host atoms can result in a magnetic anisotropy for the magnetic impurity. This anisotropy can be described by the Hamiltonian $`H_a=K_d(\mathrm{𝐧𝐒})^2`$ where $`𝐧`$ is the normal direction of the experienced surface element, $`𝐒`$ is the spin of the impurity, and $`K_d`$ is the anisotropy constant, which is always positive and inversely proportional to the distance $`d`$ of the impurity from the surface. Due to this anisotropy the spin multiplet splits according to the value of $`S_z`$. In the case of integer spin (e.g., $`S=2`$ for Fe), the lowest level is a singlet with $`S_z=0`$; thus at a given temperature the impurities close enough to the surface, where the splitting is greater than $`k_BT`$, cannot contribute to the Kondo resistivity. When the sample size becomes smaller, more and more impurity spins freeze out, reducing the amplitude of the Kondo resistivity .
The theory predicts, however, different behavior for samples with half-integer impurity spin (e.g., $`S=5/2`$ for Mn), which has recently been at the center of interest. In the spin-one-half case, where the anisotropy loses its meaning , no anomalous size dependence has indeed been found for Ce impurities by Roth and co-workers . In the half-integer $`S>1/2`$ case the lowest level is a doublet ($`S_z=\pm 1/2`$); thus even an impurity close to the surface (large anisotropy) contributes to the Kondo resistivity. Even accepting that the surface anisotropy reduces the free spin of the manganese to a doublet, it is far from trivial that the size dependence is then drastically suppressed; therefore an elaborate theory is required.
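The qualitative difference between integer and half-integer spin follows directly from the level scheme of the anisotropy Hamiltonian: the levels are $`K_dS_z^2`$, so an integer spin has a nondegenerate $`S_z=0`$ ground state while a half-integer spin retains a $`S_z=\pm 1/2`$ doublet. A minimal sketch of this level counting (pure illustration, not the paper's computation):

```python
from fractions import Fraction

def anisotropy_levels(two_S, K=1.0):
    """Energies K*Sz^2 for Sz = -S..S; two_S is 2S so that
    half-integer spins stay exact."""
    S = Fraction(two_S, 2)
    sz = [S - k for k in range(two_S + 1)]
    return sorted(K * float(m) ** 2 for m in sz)

def ground_state_degeneracy(two_S):
    """Count levels degenerate with the lowest one."""
    levels = anisotropy_levels(two_S)
    e0 = levels[0]
    return sum(1 for e in levels if abs(e - e0) < 1e-12)

# S = 2 (Fe): singlet ground state; S = 5/2 (Mn): doublet
```

For `two_S = 4` (Fe-like, $`S=2`$) the ground state is the singlet $`S_z=0`$, while for `two_S = 5` (Mn-like, $`S=5/2`$) the $`S_z=\pm 1/2`$ doublet survives arbitrarily large anisotropy.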
In this paper we calculate the Kondo resistivity in thin films of magnetic alloys with $`S=5/2`$ impurities with the help of the simple model and MRG calculation of Ref. . Fitting the function $`\mathrm{\Delta }\rho =B\mathrm{ln}T`$ to the Kondo resistivity, we calculate the coefficient $`B`$ as a function of the film thickness, which turns out to be quite different from the $`S=2`$ case. These results are compared to the recent experiment by Jacobs and Giordano on thin Cu(Mn) films, and we also compare the cases of alloys with different Kondo temperatures.
## II Kondo size effect in thin films with S=5/2 impurities
In the presence of the spin-orbit-induced surface anisotropy the Kondo Hamiltonian is
$`H`$ $`=`$ $`{\displaystyle \sum _{k,\sigma }}\epsilon _ka_{k\sigma }^{\dagger }a_{k\sigma }+H_a`$ (1)
$`+`$ $`{\displaystyle \sum _{\genfrac{}{}{0pt}{}{k,k^{},\sigma ,\sigma ^{}}{M,M^{}}}}J_{MM^{}}𝑺_{MM^{}}(a_{k\sigma }^{\dagger }𝝈_{\sigma \sigma ^{}}a_{k^{}\sigma ^{}}),`$ (2)
where $`a_{k\sigma }^{\dagger }`$ ($`a_{k\sigma }`$) creates (annihilates) a conduction electron with momentum $`k`$, spin $`\sigma `$ and energy $`\epsilon _k`$ measured from the Fermi level. The conduction electron band is taken with constant energy density $`\rho _0`$ for one spin direction, with a sharp and symmetric bandwidth cutoff $`D`$. $`𝝈`$ stands for the Pauli matrices, $`J_{MM^{}}`$’s are the effective Kondo couplings, and $`H_a=KM^2`$ is the anisotropy Hamiltonian when the quantization axis is parallel to $`𝐧`$. Applying the Callan-Symanzik multiplicative renormalization group (MRG) method to the problem, the next-to-leading logarithmic scaling equations for the dimensionless couplings $`j_{MM^{}}=\rho _0J_{MM^{}}`$ were calculated for any impurity spin in Ref. . A multiple-step scaling was performed, corresponding to the freezing out of different intermediate states due to the surface anisotropy. After exploiting the $`j_{M,M^{}}=j_{M^{},M}=j_{-M,-M^{}}`$ symmetries of the problem, the scaling equations (see Eq. (20) and (21) in Ref. ) were solved numerically in terms of the scaling parameter $`x=\mathrm{ln}(\frac{D_0}{D})`$.
The results for $`S=5/2`$, $`j_0=0.0294`$, and $`D_0=10^5`$ K, i.e., $`T_K=10^{-3}`$ K for Cu(Mn), can be seen in Fig. 1. The plot shows that when $`K/T`$ is large enough, at $`D=T`$ every coupling can be neglected except $`j_{\frac{1}{2}\frac{1}{2}}`$ and $`j_{\frac{1}{2},-\frac{1}{2}}`$. This corresponds to the freezing out of the higher $`|S_z|`$ states, but it can be seen clearly that the two lowest energy states remain significant even for large anisotropy.
The Kondo resistivity calculated from the running couplings at $`D=T`$ by solving the Boltzmann equations (see Ref. ) is
$$\rho _{\text{Kondo}}(K,T)=\frac{3}{4}\frac{m}{e^2}\frac{2\pi }{\epsilon _F\rho _0^2}\frac{c}{\int d\epsilon \left(-\frac{\partial f_0}{\partial \epsilon }\right)F^{-1}(\epsilon )}$$
(3)
where $`c`$ is the impurity concentration, $`\epsilon _F`$ is the Fermi energy, $`f_0`$ is the electron distribution function in the absence of an electric field, and $`F`$ is a function of the running couplings at $`D=T`$, spin factors, the strength of the anisotropy $`K`$, and the temperature, defined in Ref. . For thin films the Kondo resistivity calculated in the frame of a simple model where the two surfaces contribute to the anisotropy constant in an additive way as
$$K(d,t)=K_d+K_{t-d}=\frac{\alpha }{d}+\frac{\alpha }{t-d},$$
(4)
is
$$\overline{\rho }_{\text{Kondo}}(t,T)=\frac{1}{t}\int _0^t\rho _{\text{Kondo}}(K(x,t),T)\,dx$$
(5)
where $`\alpha `$ is the proportionality factor of the spin-orbit-induced surface anisotropy, $`t`$ is the thickness of the film, and we have used the fact that the Kondo contribution to the resistivity is smaller by a factor of $`10^3`$ than the residual normal impurity contribution (see Ref.  for details).
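Equations (4)–(5) amount to a one-dimensional quadrature of the local Kondo contribution over the impurity depth. A hedged sketch with a toy stand-in for the local amplitude (the real one requires the running couplings of the MRG calculation; here we only mimic a contribution that is suppressed, but nonzero, at large $`K/T`$, as for a half-integer spin):

```python
def K_of_x(x, t, alpha=1.0):
    """Additive two-surface anisotropy of Eq. (4): K = alpha/x + alpha/(t-x)."""
    return alpha / x + alpha / (t - x)

def kondo_amplitude_toy(K, T):
    """Toy stand-in for the local Kondo amplitude (NOT Eq. (3)): full bulk
    amplitude 1 for K << T, reduced but finite for K >> T, mimicking the
    surviving Sz = +/-1/2 doublet of a half-integer spin."""
    return 0.4 + 0.6 / (1.0 + K / T)

def film_average(t, T, alpha=1.0, n=1000):
    """Eq. (5): depth average of the local amplitude over impurity
    position x, by the midpoint rule (midpoints avoid the integrable
    endpoint divergences of K at x = 0 and x = t)."""
    dx = t / n
    return sum(kondo_amplitude_toy(K_of_x((i + 0.5) * dx, t, alpha), T)
               for i in range(n)) * dx / t
```

In this toy model a thinner film means a larger typical $`K`$ and hence a smaller depth-averaged amplitude, which saturates toward the bulk value for thick films — the qualitative trend discussed in the text.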
In Fig. 2 the resistivity as a function of $`\mathrm{ln}T`$ is shown for different $`t/\alpha `$ for $`S=5/2`$, $`j_0=0.0294`$, and $`D_0=10^5`$ K, i.e., $`T_K=10^{-3}`$ K as for Cu(Mn). Reducing the thickness of the film at given $`\alpha `$ thus reduces the Kondo contribution to the resistivity compared to the bulk value, but at given thickness there is a temperature below which the reduction turns into an increase. Fitting the logarithmic function $`B\mathrm{ln}T`$ to the Kondo resistivity, the coefficient $`B/B_{\text{bulk}}`$ as a function of the thickness is plotted in Fig. 3. Here the behavior very different from the $`S=2`$ case (cf. Ref. ) is clearly visible. First, the Kondo amplitude is reduced compared to the bulk value, but the dependence on the thickness is much weaker than for $`S=2`$. Second, for small $`t/\alpha `$ the coefficient does not go to zero as for $`S=2`$, but has a minimum at $`t/\alpha \approx 6\,\mathrm{K}^{-1}`$ and then changes sign and begins to increase. This reflects the fact that for an $`S=5/2`$ impurity in the presence of the anisotropy, the lowest energy states are the $`S_z=\pm \frac{1}{2}`$ doublet, which contributes to the resistivity even for large anisotropy (small distance from the surface), and whose contribution can even exceed the bulk value as a consequence of the spin factors in the scaling equations. Because of the wide range of the microscopic estimate of the anisotropy constant ($`\alpha \approx 100\,\text{Å}\,\mathrm{K}`$–$`10^4\,\text{Å}\,\mathrm{K}`$), we cannot give a precise microscopic prediction for the position of the minimum. According to the fits to the experimental results on Au(Fe) and Cu(Fe) , $`\alpha \approx 40`$–$`250\,\text{Å}\,\mathrm{K}`$, from which we obtain a rough estimate for the position of the minimum of $`t_{\mathrm{min}}\approx 240`$–$`1500\,\text{Å}`$. However, the theoretical calculation is not reliable in the region where $`j_{\frac{1}{2}\frac{1}{2}}`$ is already in the strong coupling limit. Thus the minimum may be a sign of the breakdown of the weak coupling calculation.
We have fitted in the temperature range $`T=1.4`$–$`3.9`$ K, i.e., below $`4`$ K, where the electron-phonon interaction can still be neglected, and well above the Kondo temperature, where the weak coupling approximation is justified, so that our perturbative calculation and the logarithmic form of the Kondo resistivity are valid. These results are in good agreement with the recent experiments of Jacobs and Giordano on thin films of Cu(Mn).
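Extracting the coefficient $`B`$ is a least-squares fit of $`\rho =B\mathrm{ln}T+\text{const}`$ over the quoted temperature window. A minimal sketch on synthetic data (the value of $`B`$ and the offset are illustrative):

```python
import numpy as np

def fit_B(T, rho):
    """Least-squares fit of rho = B*ln(T) + const over the sampled
    temperature window; returns the slope B."""
    B, const = np.polyfit(np.log(T), rho, 1)
    return B

# Synthetic resistivity with B = -0.8 (arbitrary units),
# sampled over the 1.4-3.9 K window used in the text.
T = np.linspace(1.4, 3.9, 30)
rho = -0.8 * np.log(T) + 5.0
B = fit_B(T, rho)
```

The same fit applied at each film thickness yields the $`B(t)`$ curves of Fig. 3.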
In Fig. 4 we examine how the function $`B/c`$ behaves for different Kondo temperatures, corresponding to different 5/2-spin alloys. It can be seen from the figure that a minimum in the $`\frac{B}{c}(t/\alpha )`$ function for small $`t/\alpha `$ is present at roughly the same place (i.e., $`t/\alpha \approx 5`$–$`8\,\mathrm{K}^{-1}`$). The behavior of the $`B`$ coefficient relative to $`B_{\mathrm{bulk}}`$ is roughly the same, but the absolute magnitude of the thickness dependence and of the reduction from the bulk value becomes larger for larger $`T_K`$ when we fit in the same temperature range, corresponding to the larger bulk value (see the caption of Fig. 4).
## III Conclusions
In this paper we have reexamined our previous calculation of the Kondo resistivity in thin films of alloys with $`S=5/2`$ impurities in the presence of spin-orbit-induced surface anisotropy. First we presented our results for the Kondo resistivity for $`S=5/2`$ and $`T_K=10^{-3}`$ K, i.e., for Cu(Mn), and fitted the function $`B\mathrm{ln}T`$ to it. The coefficient $`B`$ as a function of the film thickness is plotted in Fig. 3, from which we can see that there is a reduction of the resistivity compared to the bulk value, but the $`B`$ coefficient depends much more weakly on the thickness than in the $`S=2`$ case, and for small $`t/\alpha `$ it does not go to zero, but has a minimum at ca. $`t/\alpha =6\,\mathrm{K}^{-1}`$, below which it increases; this may merely be a sign of the breakdown of the weak coupling calculation. These results are in good agreement with the recent experiment on Cu(Mn) films . We do not, however, reproduce their factor of 30 difference from the bulk value.
We have then examined how the $`B/c`$ coefficient behaves for different Kondo temperatures, i.e., for different 5/2-spin alloys. We have found that the character of the $`B/c`$ function is the same, but the reduction from the bulk value and the thickness dependence are somewhat stronger for larger $`T_K`$.
Summarizing, the level structure of the impurity is very different for integer and half-integer spin: at the surface the spin has a degenerate ground state in the latter case, but we nevertheless obtained an essentially suppressed size dependence, which is far from trivial. The actual dependence is the result of an interplay between different effects, such as the strength of the anisotropy and the different amplitudes and temperature dependences of the coupling strengths of the electron-induced transitions between levels. The drastically different behavior of the integer and half-integer cases, found experimentally and determined theoretically, provides further support for the surface anisotropy as the origin of the size dependence.
We would like to thank N. Giordano for helpful discussions. This work was supported by Grant OTKA $`T024005`$ and $`T029813`$. One of us (A.Z.) benefited from the hospitality of the Meissner Institute and the LMU in Munich and supported by the Humboldt Foundation.
# Thermal Instability and Evaporation of Accretion Disc Atmospheres
## 1 Introduction
The vertical structure of accretion discs, based on the equations of hydrostatic equilibrium and energy transport together with detailed opacities and equation of state, has been studied extensively in the past (e.g. Mineshige & Osaki 1983, Smak 1982, Meyer & Meyer-Hofmeister 1982, Canizzo & Wheeler 1984, Shaviv & Wehrse 1986, Mineshige & Wood 1990). Most of these works assume that the heating prescription for vertically averaged disc structures originally given by Shakura & Sunyaev (1972), $`H\propto \alpha P`$, is also valid locally. As discussed earlier in Shaviv & Wehrse 1986 (hereafter SW), Adam et al. 1988 (hereafter ASSW), Czerny & King 1989 and Shaviv, Wickramasinghe & Wehrse 1998 (hereafter SWW), this assumption must lead to a thermal instability in the outer, low-density layers of the disc. The underlying mechanism is as follows. The cooling rate of a low-density optically thin gas at a given temperature is roughly proportional to the density squared, whereas the viscous heating rate is proportional to the density. Thus, when the density in the disc decreases outwards, the cooling decreases faster than the heating. As long as the cooling rate at constant pressure increases with temperature, the equilibrium temperature rises with decreasing density. More realistic cooling curves, however (e.g. Raymond et al. 1976), have a maximum in the cooling rate at a temperature between $`10^4`$ and $`10^5`$ K, above which the cooling rate decreases with increasing temperature. This leads to runaway heating.
In this paper we investigate this instability further, and discuss the likely consequences for the disc structure. ASSW have argued that this instability can lead to the formation of a stable “corona” with a temperature of a few times $`10^5`$ K over a large fraction of the disc surface, and SWW argue that it could lead to mass loss from the surface of the disc that can become comparable to the total mass accretion rate.
These conclusions depend strongly on the opacities used in these calculations, since the opacity essentially prescribes the cooling rate. SWW considered power-law opacities, and in their applications to Black Hole Soft X-ray Transients (BHSXTs) took only bremsstrahlung emission and absorption into account, which severely underestimates the cooling rate in the optically thin outer disc layers when they are cooler than $`10^7`$–$`10^8`$ K. ASSW used relatively old opacity tables and, as discussed in more detail below, used the Rosseland mean absorption coefficient in the thermal equilibrium equation rather than the Planck mean (see also Hubeny 1990). These two means can easily differ by a factor of $`10^3`$, so this assumption has a major effect on the results. Additionally, these papers did not consider the effect of irradiation of the outer disc layers by a much harder, but diluted, continuum from the central parts of the accretion disc or the accreting compact object.
We therefore reinvestigate the structure of the optically thin outer layers of accretion discs. The outline of this paper is as follows. First, we present some vertical structure models based on the two stream approximation. These show that the thermal structure of the outer layers is essentially determined by the thermal equilibrium curve in the pressure - temperature plane. We then argue (see also Mineshige & Wood 1990) that the description of heating and cooling processes in terms of mean opacities (as used in the set of accretion disc vertical structure equations) is not accurate enough to properly describe the thermal equilibrium, and present a set of detailed calculations of thermal equilibrium curves in the $`PT`$ plane obtained with the photoionisation equilibrium code MAPPINGS. Based on these curves, we then discuss the consequences of the thermal instability for the accretion disc. The results are applied to Cataclysmic variable (CV), and stellar mass black hole discs.
## 2 Method
### 2.1 Vertical Structure Equations
To solve for the detailed vertical disc structure we require a set of equations for the hydrostatic equilibrium and energy generation and transport. Since this paper deals mainly with processes in the outer, optically thin layers of accretion discs we neglect convective energy transport.
The hydrostatic equation is
$$\frac{dP}{dz}=-\mathrm{\Omega }^2z\rho (P,T)+\frac{\chi _R\rho }{c}F$$
(1)
where $`P`$ is the gas pressure, and $`\mathrm{\Omega }`$ the Keplerian angular velocity of the disc at the radius considered, $`\chi _R`$ is the Rosseland mean opacity and $`F`$ the radiation flux. Our equation of state
$$\rho =\rho (P,T)$$
(2)
includes the effects of ionization of hydrogen and helium, and is computed together with the mean opacities.
Most detailed vertical structure models calculate the radiative energy transport in the diffusion approximation (Meyer & Meyer-Hofmeister 1982, Canizzo & Wheeler 1984), which is clearly not appropriate for the optically thin outer regions of the disc that are the main emphasis of this work. The best way to solve the problem would be to treat the entire disc as a stellar atmosphere (see e.g. Hubeny 1990, Hubeny & Hubeny 1997) with a full treatment of the angle- and frequency-dependent radiation field, but in practice this is not straightforward, and also not necessary if one is only interested in the global behaviour of the solutions and not in the detailed line profiles. We therefore make two approximations. The frequency dependence is eliminated by using only frequency-integrated quantities and appropriate mean opacities, and the angle dependence is simplified by considering only an ingoing and an outgoing direction. This leads to the so-called grey two-stream approximation. Although approximate, this method still allows for a natural transition between optically thick and optically thin regions.
We base ourselves on the grey two-stream formalism, in which the outgoing and ingoing beams are taken to travel at an angle $`\theta `$ with respect to the normal of the surface, with $`\mathrm{cos}\theta =\frac{1}{\sqrt{3}}`$ and $`-\frac{1}{\sqrt{3}}`$ respectively. In a slightly different form (taking $`\mathrm{cos}\theta =1,-1`$), this formalism has been used in several papers calculating accretion disc vertical structure (SW, ASSW).
We first define the following quantities. The frequency-integrated intensities along the outgoing and ingoing streams will be called $`I^+`$ and $`I^{-}`$. In terms of these two quantities, the mean intensity $`J`$ and flux $`F`$ are defined as
$$J=\frac{1}{2}(I^++I^{-})$$
(3)
$$F=\frac{2\pi }{\sqrt{3}}(I^+-I^{-})$$
(4)
The radiative transfer equations for $`I^+`$ and $`I^{-}`$ are
$$\frac{1}{\sqrt{3}}\frac{dI^+}{dz}=-\chi \rho I^++j=-\chi \rho I^++\kappa \rho B(T)$$
(5)
$$-\frac{1}{\sqrt{3}}\frac{dI^{-}}{dz}=-\chi \rho I^{-}+j=-\chi \rho I^{-}+\kappa \rho B(T)$$
(6)
where $`\chi `$ and $`\kappa `$ are the mean opacity and the absorption coefficient (per gram), $`j`$ is the volume emissivity, and $`B(T)`$ the frequency integrated Planck function. The second equality follows from the assumption of LTE.
By adding and subtracting the transfer equations for $`I^+`$ and $`I^{-}`$ we derive the following equations for $`J`$ and $`F`$ (similar to the first and second moments of the full radiative transfer equation)
$$\frac{dJ}{dz}=-\frac{3\chi _R\rho }{4\pi }F$$
(7)
$$\frac{dF}{dz}=4\pi \kappa _P\rho \left(B(T)-J\right)$$
(8)
where $`\chi _R`$ is the Rosseland mean opacity, and $`\kappa _P`$ the Planck mean absorption coefficient (both per gram). Equation 8 implicitly assumes that the spectral shape of the mean intensity $`J`$ is not too different from the Planck function at the local temperature $`T`$. This could be a problem if the temperature in the optically thin outer disc layers would be very different from the disc effective temperature (corona-like), but this situation is not encountered in our solutions.
From the definitions of $`J`$ and $`F`$, and the condition that there is no incoming radiation at the outer boundary ($`I^{-}=0`$), and the symmetry condition in the disc midplane ($`I^+=I^{-}`$), we find the two boundary conditions
$$F=0\mathrm{at}z=0$$
(9)
$$F=\frac{4\pi }{\sqrt{3}}J\mathrm{at}z=z_{out}$$
(10)
where $`z_{out}`$ is the outer edge of the disc, which is predefined to lie at some very low pressure, say comparable to that of the interstellar medium.
In the two-stream approximation the temperature is not obtained by integrating the temperature gradient (as in the diffusion approximation), but is calculated locally from the two variables $`J`$ and $`P`$ by solving the thermal equilibrium equation
$$4\pi \kappa _P(P,T)\rho (P,T)\left(B(T)-J\right)=H_{visc}(P)$$
(11)
where $`H_{visc}`$ is the viscous heating rate per unit volume,
$$H_{visc}=1.5\alpha \mathrm{\Omega }P.$$
(12)
From our solutions of the vertical disc structure using the two-stream approximation as described above, we will find that the thermal structure of the outer layers of the disc (as defined by the temperature of the solution as a function of pressure) no longer depends on radiative transfer effects. This can be understood as follows. At low optical depth and low pressure, the flux $`F`$ is nearly constant, being determined by the dissipation in the underlying layers. Furthermore, we find that the outer boundary condition (10) is not only satisfied at the exact outer boundary, but is a very good approximation throughout the optically thin region. In this case the temperature equilibrium equation can be simplified to
$$4\pi \kappa _P(P,T)\rho (P,T)\left(\frac{\sigma _{SB}T^4}{\pi }-\frac{\sqrt{3}\sigma _{SB}T_{eff}^4}{4\pi }\right)=H_{visc}(P)$$
(13)
where $`\sigma _{SB}`$ is the Stefan-Boltzmann constant. Using this equation, we can solve for the heating/cooling equilibrium as a function of the local variables pressure and temperature in the outer layers of the disc, for given values of the central mass, mass accretion rate (equivalent to effective temperature) and radius, without having to consider the vertical disc structure.
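Equation (13) defines the thermal equilibrium locus in the $`(P,T)`$ plane. A hedged numerical sketch: with an illustrative Planck opacity $`\kappa _P\propto \rho `$ (free-free-like, so the optically thin cooling scales as $`\rho ^2`$, as in the instability argument of the Introduction) and an ideal-gas $`\rho (P,T)`$, one can bisect for the lowest equilibrium temperature at fixed pressure. All parameter values are placeholders, not the paper's; the temperature cap stands in for the turnover of a realistic cooling curve:

```python
import math

SIGMA_SB = 5.67e-5          # erg cm^-2 s^-1 K^-4
MU_MH_OVER_K = 6.05e-9      # mu*m_H/k_B for mu = 0.5, cgs (illustrative)

def residual(T, P, Teff, kappa0=1e11, alpha=0.3, Omega=5.6e-2):
    """Cooling minus heating in Eq. (13), with illustrative
    kappa_P = kappa0*rho and rho from the ideal gas law; Omega is
    roughly Keplerian at R = 3.5e9 cm around 1 M_sun (placeholder)."""
    rho = MU_MH_OVER_K * P / T
    cooling = 4.0 * kappa0 * rho**2 * SIGMA_SB * (
        T**4 - (math.sqrt(3) / 4.0) * Teff**4)
    heating = 1.5 * alpha * Omega * P
    return cooling - heating

def equilibrium_T(P, Teff, Tmax_factor=3.0, n=2000):
    """Lowest-T root of Eq. (13) at fixed P, by scan plus bisection.
    Returns None when no equilibrium exists below Tmax_factor*Teff --
    the stand-in here for the loss of equilibrium at low pressure."""
    Tlo = (math.sqrt(3) / 4.0) ** 0.25 * Teff  # emission term vanishes; residual < 0
    Ts = [Tlo + (Tmax_factor * Teff - Tlo) * i / n for i in range(n + 1)]
    for a, b in zip(Ts, Ts[1:]):
        if residual(a, P, Teff) < 0.0 <= residual(b, P, Teff):
            for _ in range(60):                 # bisection refinement
                m = 0.5 * (a + b)
                if residual(m, P, Teff) < 0.0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
    return None
```

With these toy numbers the equilibrium temperature sits just above the plateau value $`(\sqrt{3}/4)^{0.25}T_{eff}`$ at high pressure, rises as the pressure drops, and disappears entirely below a critical pressure — the behaviour described in the text.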
### 2.2 Opacities
Previous applications of the grey two-stream method to accretion disc vertical structure have not always taken the correct type of opacity average in the moment equations, and have used the Rosseland mean absorption coefficient rather than the Planck mean in equation 8 (ASSW). The Rosseland mean is basically a measure of how easily radiation can escape. It is designed to give the correct physical result when the diffusion approximation is used, and is thought to be a good approximation of the flux mean, hence its appearance in equation 7. It is mainly determined by how much of the spectral range has a relatively low opacity. Our numerical results confirm the assumption underlying the simplification made in previous works, in that the structure of the main body of the accretion disc is in fact extremely insensitive to the mean opacity in equation 8.
However, there is a problem with using the Rosseland mean when considering the structure of the optically thin regions that we are interested in in this paper. The optically thin cooling rate $`C`$ is given by definition as
$$C=4\pi \int \kappa _\nu B_\nu (T)\,d\nu =4\pi \kappa _PB(T)$$
(14)
with $`\kappa _P`$ the Planck mean opacity. If the cooling is mainly due to lines or other features with a high absorption coefficient over a narrow frequency range, the Planck mean can be much higher than the Rosseland mean, and the use of the Rosseland mean in equation 11 severely underestimates the cooling rate. Comparing Rosseland and Planck mean opacities from the Opacity Project, the difference can exceed a factor of $`10^3`$, and even for a continuum process like free-free absorption the Planck mean is about 40 times higher than the Rosseland mean (Rybicki & Lightman 1979). In our application it is therefore essential to use the Planck mean in equations 8, 11 and 13.
The Rosseland mean opacities we use are calculated with a program provided by R. Wehrse (private communication), and the Planck mean opacities are based on opacities from the Opacity Project (OP, Seaton et al. 1993) which are the state of the art in including as many bound-bound and bound-free processes as feasible. The reason we use different sources for the two mean opacities is as follows. The opacity program by Wehrse can calculate the opacities from physical principles over the entire range of pressures and temperatures needed for our accretion disc models, whereas the OP tables cover only a part of it. On the other hand, the program does not consider many spectral lines that are quite important for optically thin cooling. The Rosseland mean opacities from these two sources are virtually indistinguishable in the $`(P,T)`$ region where both are available, but the Planck mean resulting from Wehrse’s program is much lower than that from the OP data because many lines are not included.
Using the OP mean opacities for accretion discs two problems are encountered. The first is that the range of density and temperature for which these opacities have been computed do not overlap the entire regime for which they are needed, so that extrapolation to low density is necessary. The second one is that they are calculated for LTE, which will no longer be valid in the low density optically thin outer layers of the disc. In the low density limit we have
$$4\kappa _P\rho \sigma _{SB}T^4=\mathrm{\Lambda }n_H^2$$
(15)
where $`\mathrm{\Lambda }`$ is the “cooling function” calculated by many authors (e.g. Raymond et al. 1976, Sutherland & Dopita 1993). We therefore extrapolate the OP opacities to low densities assuming that below the lowest density available on the OP grid $`\kappa _P`$ is reduced proportional to the density. The “cooling function” that can be derived from the OP Planck mean opacities using equation 15 is mostly within a factor of a few from the detailed cooling functions quoted above, except for low temperatures below $`1.5\times 10^4`$K where it is significantly higher. In spite of these difficulties with the OP opacities, we still consider them an improvement over using the severely underestimated Planck mean opacities as calculated with Wehrse’s program.
## 3 Results
### 3.1 Vertical structure models
Here we first describe some results of our vertical structure calculations with the grey two-stream method to be able to put the results of our surface layer models into context. We solve the basic set of differential equations 1, 7, 8 with boundary conditions 9, 10 and $`T=T_c`$ at $`z=0`$, together with the auxiliary equations 2, 11 and 12, for the four variables $`P`$, $`T`$, $`J`$ and $`F`$ as a function of z. The integration is started in the midplane (z=0), and proceeds outwards. In previous applications of the two-stream grey method the integration was usually started at the surface, but because we want to find the point where the thermal equilibrium equation does not have a solution any more, we can not use this method here.
Rather than solving for the disc structure by iterating on the disc height for a given mass accretion rate (or flux at the surface boundary), we solve for a self consistent disc structure by iterating on the central density for a given central temperature $`T_c`$, and the mass accretion rate and surface density are determined by the model. The density at z=0 is iterated until the outer boundary condition (10) is fulfilled at a pressure where the optical depth is already very low (say $`10^3`$). This solution determines the main structure of the disc. We then do one more integration outwards with the midplane density and temperature from the converged solution, but now the integration does not stop at the pressure previously defined as the outer boundary, but continues outwards until the thermal equilibrium equation does not have a solution any more. It was carefully verified that the instability point obtained in this way does not depend on the choice of pressure at the outer boundary in the initial disc structure iterations as long as it lies at sufficiently low optical depth and pressure. Below we present some results for an accretion disc with $`\alpha =0.3`$ around a one solar mass compact object.
In Figure 1 we show the relation between surface density and effective temperature (the “S-curves”) for radii $`10^{10},3.5\times 10^9`$ and $`10^9`$ cm. The lowest $`T_{eff}`$ models have a $`\tau _R`$ of a few tenths, and are therefore already unreliable, since the radiation field is no longer approximately a Planck spectrum and the effective mean opacity will be very different from the Rosseland mean (Mineshige & Wood 1990). We therefore do not extend our models to lower optical depth discs. These curves reproduce reasonably well previous calculations carried out by other investigators (e.g. Canizzo & Wheeler 1984) in the diffusion approximation.
In Figure 2 the vertical structure of a disc model at $`R=10^{10}`$ cm with an accretion rate of $`\dot{\mathrm{M}}=10^{-9}\,\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$ is shown in detail. Plotted are the temperature, pressure and Rosseland mean optical depth as a function of height above the disc mid-plane. The cross on the temperature curve indicates the point where the optical depth is 1. The most pronounced feature is the long temperature plateau at low optical depth, which is explained in the next section. The temperature only starts to rise significantly, signalling the approach of the instability point, at a very low optical depth of less than $`10^{-9}`$. Note that there is no hydrostatic corona in this model: the temperature changes continuously, and at the pressure where the low-temperature equilibrium is lost there is no high-temperature equilibrium available to the gas. This is similar to the case when only free-free processes are considered (SWW).
Figure 3 shows the relation between temperature and pressure for a number of models with increasing mass transfer rate \[$`\dot{\mathrm{M}}_{disc}=10^{14},10^{15},10^{16}`$ and $`10^{17}\,\mathrm{g}\,\mathrm{s}^{-1}`$\] at $`R=3.5\times 10^9`$ cm. All structures have the same general morphology: pressure and temperature drop simultaneously with increasing height, until the point at which $`\tau _R=1`$ is reached. At this point the temperature becomes constant at a value slightly below the effective temperature, while the pressure drops by about 6 orders of magnitude. At very low pressure the temperature starts to rise again until the thermal instability sets in.
### 3.2 Simple surface layer models
In this section we show how the simplified thermal equilibrium equation 13 explains the behaviour of the full disc solution at low optical depth. In Figure 4 we show the solution of equation 13 in the $`T,P`$ plane for $`R=3.5\times 10^9`$ cm and $`\dot{\mathrm{M}}=10^{17}\,\mathrm{g}\,\mathrm{s}^{-1}`$. At high pressures there are two equilibrium temperatures. The high temperature solution mostly lies on an equilibrium curve with a positive $`\frac{d\mathrm{log}P}{d\mathrm{log}T}`$ gradient, and is unstable, since after a temperature perturbation at constant pressure (dictated by the hydrostatic equilibrium) the gas will tend to move farther away from the equilibrium line. The low temperature equilibrium, on the other hand, is stable. At low temperatures the equilibrium temperature is almost independent of the pressure. This is because $`H/(\kappa \rho )\ll \sigma _{SB}T_{eff}^4`$, so that the equilibrium temperature is given to high precision by $`T=(\frac{\sqrt{3}}{4})^{0.25}T_{eff}`$. Since at low densities $`\kappa \propto \rho `$, for low enough pressures the heating term becomes comparable to the absorption and emission terms, and the equilibrium temperature rises. The minimum in the thermal equilibrium curve defines the lowest pressure at which thermal equilibrium is possible.
In Figure 5 we show a close-up of the solution of equation 13 on the same scale as in Figure 3, for $`R=3.5\times 10^9`$ cm and $`\dot{\mathrm{M}}=10^{14},10^{15},10^{16}`$ and $`10^{17}`$ g s<sup>-1</sup>. Comparing Figure 5 with Figure 3, it is obvious that the disc models just follow the thermal equilibrium curve as soon as the optical depth becomes small. Therefore, to estimate the temperature and pressure at the instability point we do not have to calculate the entire disc model, but only the equilibrium curves.
From Figure 4 it is clear that these vertical structure models never form a significant static hot corona. We note, however, that for a narrow range of pressure and temperature close to the global minimum in the equilibrium curves, there is a second, higher temperature branch of negative $`\frac{d\mathrm{log}P}{d\mathrm{log}T}`$ where another stable equilibrium solution appears to be possible. Thus we see from the close-up of Figure 5 that for a very narrow range of $`\mathrm{log}\dot{\mathrm{M}}`$ between 15 and 16 the equilibrium curve has two minima, the higher temperature one ($`2\times 10^4`$ K) being slightly lower in pressure than the lower temperature ($`10^4`$ K) one. The difference in pressure between the two minima is so small, however, that this cannot be considered a significant effect. In any case, our cooling description is unlikely to be accurate to such fine detail because of uncertainties in the opacities.
## 4 Detailed surface layer models
Clearly the description of the radiative heating and cooling processes in terms of a single (very uncertain) mean opacity is not very satisfactory. The adequacy of using mean opacities for optically thin discs and accretion disc atmospheres is also addressed in detail by Mineshige & Wood 1990, who found that models with a frequency dependent opacity can lead to significantly different results from those with mean opacities. Furthermore, the model above neglects the possible effect of a dilute external radiation field from the centre of the accretion disc, or from the surface of the accreting star when one is present. This external radiation field can be quite important, as has already been shown in other models (e.g. Begelman, McKee & Shields 1983, Idan & Shaviv 1996).
The previous section has shown that, in the outer very optically thin layers of the accretion disc where the thermal instability occurs, the temperature as a function of pressure is given to high accuracy by the thermal equilibrium curve in the $`(P,T)`$ plane of an optically thin gas irradiated by the underlying disc (equation 13). This result allows us to construct a more accurate description of the low optical depth region by calculating such thermal equilibrium curves for optically thin irradiated gas in more detail with a different tool. It turns out that the pressure, temperature and optical depth in these outer layers are such that it is possible to calculate the thermal equilibrium curves with a photoionisation equilibrium code, with an extra heating term added to describe the viscous energy dissipation. We have used the code MAPPINGS (Sutherland & Dopita 1993). Like all other photoionisation codes, this code is designed to work for a low density environment, and does not make assumptions such as LTE. The downside is that such codes become less accurate when the density becomes very high, because collisional de-excitation for species other than hydrogen and helium is not included, which could quench some of the emission lines. This limit on the density means that we cannot consider the innermost parts of accretion discs around neutron stars or stellar mass black holes, because the pressures and densities at the point where the optical depth becomes small are too high for the photoionisation equilibrium code.
Thus, we consider the thermal equilibrium of an optically thin parcel of gas heated by viscous heating (equation 12) and irradiated by a radiation field with two components: an undiluted blackbody with a temperature equal to the effective temperature of the underlying accretion disc, and a diluted, generally much hotter blackbody coming from the central parts of the accretion flow. The fact that the disc models above indicate that the instability point lies at extremely low ($`10^{-7}`$–$`10^{-10}`$) Rosseland mean optical depth justifies our neglect of radiative transfer effects. We have also neglected the possible effect of shadowing of the radiation coming from the inner parts of the accretion flow by the disc at radii smaller than the one considered.
In terms of input parameters, the surface layer models are a function of the central mass $`M`$, the mass flow rate through the disc $`\dot{\mathrm{M}}_{disc}`$, the radius $`r`$ in the disc considered, the viscosity parameter $`\alpha `$, and the external radiation field.
### 4.1 Application to Cataclysmic Variable Discs
For the external irradiation of a CV disc we take the radiation field expected from the boundary layer. We assume that for accretion rates below $`10^{16}`$ g s<sup>-1</sup> the boundary layer is in a hot ($`T\sim T_{virial}\sim 10^8`$ K) state (Warner 1995), leading to a highly diluted, hard irradiating spectrum. For $`\dot{\mathrm{M}}_{disc}\ge 10^{16}`$ g s<sup>-1</sup>, the boundary layer is optically thick, and we take the radiating area to be $`2\pi R_{WD}h_{BL}`$, with $`R_{WD}`$ the white dwarf radius and $`h_{BL}=0.25R_{WD}`$. In all cases we assume that the total luminosity of the boundary layer is
$$L_{BL}=\frac{1}{2}\frac{GM\dot{\mathrm{M}}}{R_{WD}}.$$
(16)
That is, we assume that the heating is what is expected for a steady disc corresponding to the given $`\dot{\mathrm{M}}`$.
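To get a feel for the numbers, equation 16 can be evaluated for illustrative parameters; the white dwarf mass and radius below are assumptions of this sketch, not values used in the paper:

```python
G = 6.674e-8          # gravitational constant, cgs
M_SUN = 1.989e33      # solar mass, g

def boundary_layer_luminosity(m_wd, r_wd, mdot):
    """L_BL = (1/2) G M Mdot / R_WD (equation 16), all quantities in cgs."""
    return 0.5 * G * m_wd * mdot / r_wd

# Illustrative (assumed) white dwarf: 0.6 M_sun, R_WD = 8e8 cm
m_wd = 0.6 * M_SUN
r_wd = 8.0e8
for mdot in (1e14, 1e16, 1e17):  # g/s, spanning the range used in Figure 6
    l_bl = boundary_layer_luminosity(m_wd, r_wd, mdot)
    print(f"Mdot = {mdot:.0e} g/s -> L_BL = {l_bl:.2e} erg/s")
```

The luminosity is linear in $`\dot{\mathrm{M}}`$, as the steady-disc assumption in the text requires.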
Figure 6 shows the result of the thermal equilibrium analysis of the outer layers of a CV disc at $`r=3.5\times 10^9`$ cm, for mass flow rates through the disc ($`\dot{\mathrm{M}}_{disc}`$) ranging from $`10^{14}`$ to $`10^{17}`$ g s<sup>-1</sup>, and two choices of the viscosity parameter $`\alpha `$: 0.3 and 0.01. We show only the part of the equilibrium curve before or slightly beyond the minimum, since for higher temperatures only unstable equilibria exist.
The irradiation flux, and its spectrum, determines the pressure and the temperature at which thermal instability sets in, but does not change the overall qualitative behaviour of the thermal equilibrium curves. To demonstrate the former, we also show two models with the external radiation field turned off. These are the curves marked 17nr in Figure 6. We can see that, to good approximation, the effect of this on the equilibrium curve is the same as reducing the mass accretion rate in the models including external radiation by a factor of $`10^2`$.
The general morphology of the curves is quite similar to Figure 5. Most directly comparable to Figure 4 is the non-irradiated curve 17nr (solid), which has exactly the same parameters ($`\alpha =0.3`$, $`\dot{\mathrm{M}}=10^{17}`$ g s<sup>-1</sup>). In detail there are some differences. The temperature over most of the atmosphere is lower than that derived using a mean opacity, showing that the assumptions (mainly LTE) underlying the mean opacity calculations are not valid in the low density outer layers. The new minimum lies at a similar temperature ($`\mathrm{log}T\approx 4.5`$), but at a 10 times higher pressure. The inclusion of the external radiation field raises the critical pressure by an additional factor of 100. The detailed models are also similar to those in the previous section in that we never find hydrostatic coronae, since there is no hot equilibrium phase available to the gas once the pressure has dropped below the minimum.
Following Czerny & King 1989, we estimate the mass loss rate that can result from the thermal instability from the pressure and temperature at the instability point (see also SWW):
$$\dot{\mathrm{M}}_w(r)2\pi r^2\rho _{inst}c_{s,inst}$$
(17)
where $`\rho _{inst}`$ and $`c_{s,inst}`$ are the density and sound speed at the minimum in the thermal equilibrium curve. We consider this estimate to be an upper limit because the real mass loss rate is determined by $`\rho c_s`$ at the critical point where $`v=c_s`$. This critical point must lie at a higher $`z`$ than the thermal instability point where the outflow starts to be driven, and we know that $`\rho c_s\propto P/\sqrt{T}`$ is a decreasing function of height.
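A minimal numerical sketch of equation 17, assuming an ideal gas with mean molecular weight $`\mu =0.6`$ and an isothermal sound speed (both assumptions of this sketch), and using illustrative instability-point values for $`P`$ and $`T`$ rather than ones read off Figure 6:

```python
import math

K_B = 1.381e-16   # Boltzmann constant, erg/K
M_H = 1.673e-24   # hydrogen mass, g

def wind_mass_loss(r, p_inst, t_inst, mu=0.6):
    """Upper limit on the evaporation rate, equation 17:
    Mdot_w ~ 2 pi r^2 rho_inst c_s,inst, with an ideal-gas density and an
    isothermal sound speed (assumptions of this sketch), cgs units."""
    rho = p_inst * mu * M_H / (K_B * t_inst)       # ideal gas density
    c_s = math.sqrt(K_B * t_inst / (mu * M_H))     # isothermal sound speed
    return 2.0 * math.pi * r**2 * rho * c_s

# Illustrative (assumed) instability-point values at r = 3.5e9 cm
mdot_w = wind_mass_loss(r=3.5e9, p_inst=1e-2, t_inst=3e4)
print(f"Mdot_w ~ {mdot_w:.1e} g/s")
```

Because $`\rho c_s\propto P/\sqrt{T}`$, the estimate scales essentially linearly with the pressure at the instability point, which is why anything that raises the critical pressure (such as irradiation) raises the mass loss estimate.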
The dotted lines in Figure 6 are lines of equal mass loss as defined by equation 17, labelled with the logarithm of the mass loss rate from the cool disc. We see that the minimum in the thermal equilibrium curves occurs at a temperature and pressure that imply an evaporation rate that is always less than $`10^{-2}`$ times the accretion rate, with the largest relative mass loss occurring for the lowest accretion rates. We also find that the relative importance of evaporation (as defined by the ratio $`\dot{\mathrm{M}}_w/\dot{\mathrm{M}}_{disc}`$) is not very dependent on radius, being about 3 times higher at $`10^{10}`$ cm than at $`10^9`$ cm.
On the basis of the above calculations, we can conclude that mass loss by evaporation due to the mechanism considered here will occur, but cannot be very important for the overall structure of steady CV discs. One may ask if the mechanism could be more effective in non-steady discs, such as those found in dwarf novae, and perhaps be responsible for the central “holes” that appear to develop in some systems after an outburst. According to the limit cycle model for dwarf novae, the mass transfer rate through the disc oscillates between the values appropriate to the upper and the lower branch of the equilibrium S curves shown in Figure 1 (Osaki 1996), in general agreement with observations (Warner 1995). After a dwarf nova outburst the mass transfer rate through the disc could drop by a factor of $`10^3`$, from say $`\dot{\mathrm{M}}_{disc}=10^{17}`$ to $`\dot{\mathrm{M}}_{disc}=10^{14}`$ g s<sup>-1</sup>, but the disc may continue to be heated by a luminosity that is significantly higher than the value expected for a steady disc at the lower accretion rate. The irradiating luminosity could be supplied by soft photons from the white dwarf, heated by accretion during the immediately preceding outburst and by the accumulated effects of previous outbursts. Enhanced heating of the outer disc may also occur at the end of an outburst, when the luminosity of the inner regions of the disc far exceeds the luminosity expected for a steady disc at the mass transfer rate appropriate to the lower branch of the S curve.
In order to investigate the effects of non-steady heating, we carried out an additional series of calculations for the model with $`\dot{\mathrm{M}}_{disc}=10^{14}`$ g s<sup>-1</sup> where the heating was enhanced to correspond to mass transfer rates near the inner edge of the disc of $`\dot{\mathrm{M}}_{disc}=10^{15},10^{16}`$ and $`10^{17}`$ g s<sup>-1</sup> respectively (Figure 7). At the upper end of this range, the heating is mainly by UV photons at the appropriate black body temperature, and disc evaporation becomes very effective. These calculations show that non-steady heating, for instance due to radiation from the heated white dwarf, can increase disc evaporation significantly. However, an increase in irradiation by a factor of at least $`10^3`$ over the steady case is necessary for the evaporation rate to become comparable to the mass flow rate through the disc.
### 4.2 Application to stellar mass Black Hole discs
The main difference between the thermal equilibrium in the outer layers of an accretion disc around a stellar mass black hole and a CV is the role played by the external hard radiation field (e.g. Begelman, McKee & Shields 1983, Idan & Shaviv 1996). As shown above, for CVs this external radiation field has a mild effect on the steady state structure, but for black hole discs the radiation field completely changes the behaviour of the equilibrium curves. For discs around a stellar mass black hole, we assume that the temperature of the external radiation field is
$$T_{bb}=T_{d,eff}(r_{max})$$
(18)
where $`T_{d,eff}`$ is the effective temperature of the disc and $`r_{max}`$ is the radius of the disc where most energy is dissipated, at 10 Schwarzschild radii. The dilution factor $`ϵ`$ is
$$ϵ=\eta \frac{\pi r_{in}^2}{4\pi r^2}$$
(19)
with $`\eta `$ a geometrical correction factor for the fact that the central continuum source is likely to be beamed perpendicular to the disc. We take $`\eta `$ to be 0.1. In Figure 8 we show the thermal equilibrium curves for the outer layers of a disc at a radius of $`10^{10}`$ cm around a 10 $`\mathrm{M}_{\odot }`$ black hole, parameters appropriate for BHSXTs.
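As a sketch of the geometry entering equation 19, the dilution factor can be evaluated for a 10 $`\mathrm{M}_{\odot }`$ black hole; taking $`r_{in}`$ equal to 10 Schwarzschild radii (the radius where the text places the peak energy dissipation) is an assumption of this sketch:

```python
import math

G = 6.674e-8       # gravitational constant, cgs
C = 2.998e10       # speed of light, cm/s
M_SUN = 1.989e33   # solar mass, g

def dilution_factor(m_bh, r, eta=0.1, r_in_in_rs=10.0):
    """Dilution factor of equation 19: eps = eta * pi r_in^2 / (4 pi r^2).
    r_in = 10 Schwarzschild radii is an assumption of this sketch."""
    r_s = 2.0 * G * m_bh / C**2       # Schwarzschild radius
    r_in = r_in_in_rs * r_s
    return eta * math.pi * r_in**2 / (4.0 * math.pi * r**2)

# Disc annulus at r = 1e10 cm around a 10 M_sun black hole (as in Figure 8)
eps = dilution_factor(10.0 * M_SUN, 1.0e10)
print(f"epsilon ~ {eps:.1e}")
```

The strong $`r^{-2}`$ dilution is why the central radiation field must be much hotter than the local disc photosphere to matter; as the text shows, for black hole discs it nevertheless dominates the behaviour of the equilibrium curves.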
First we demonstrate the importance of the external radiation field by comparing a model with $`\dot{\mathrm{M}}_{disc}=10^{17}`$ g s<sup>-1</sup> with (the thick lines labelled with 17) and without (thin lines) radiation. The presence of the radiation field increases the temperature at the instability point from $`6\times 10^4`$ to $`7\times 10^5`$ K and the pressure from $`10^{-2}`$ to $`3\times 10^2`$ dyne cm<sup>-2</sup> for the $`\alpha =0.3`$ curve. According to our mass loss estimate equation 17, this implies a factor of $`10^4`$ increase in mass loss from the cool disc relative to the non-irradiated case. The models shown in Figure 8 also do not exhibit a real corona in the sense that a sudden transition to high temperatures occurs, although for the higher mass loss rates the temperature goes up to a million degrees before becoming unstable. We note that for larger radii and higher mass accretion rates a temperature discontinuity does form, with the temperature jumping from a few times $`10^4`$ K to about $`10^6`$ K, the latter corresponding to a well known equilibrium state of an X-ray irradiated low density gas (Krolik, McKee & Tarter 1981). The gas can remain in the hot state while the pressure drops by another factor of ten before becoming unstable.
It can be seen from Figure 8 that for the highest accretion rate considered ($`10^{18}`$ g s<sup>-1</sup>) and $`\alpha =0.01`$, the disc atmosphere does not become unstable at low pressures, but rather assumes a constant temperature as the pressure drops. This is because in this case the radiation field is strong enough to compensate for the viscous heating by inverse Compton cooling. Both the viscous heating rate and the Compton cooling rate are proportional to the pressure, so that the equilibrium remains valid as the pressure drops, and the atmosphere is stable everywhere. However, since the viscous heating rate is proportional to the disc angular velocity $`\mathrm{\Omega }\propto r^{-1.5}`$, and the Compton cooling is proportional to the energy density of the radiation field $`U_{rad}\propto r^{-2}`$, the cooling decreases faster with radius than the heating, and beyond a certain radius the thermal instability returns. In our case, we find that models with an accretion rate of $`10^{18}`$ g s<sup>-1</sup> and $`\alpha =0.01`$ become unstable for $`\mathrm{log}R>10.5`$. These results are consistent with those of Czerny & King 1989, who studied the stabilization of an accretion disc atmosphere by Compton cooling while neglecting other radiative cooling and heating processes. Czerny & King considered only radiation directly from the underlying disc. Because we take the external irradiation into account, we find more effective stabilization at larger radii than Czerny & King did.
We point out that the models for Compton heated winds from accretion discs of Begelman, McKee and Shields (1983) did not include viscous heating. Most of the effect of these Compton heated winds comes from the outer radii of the disc, where the virial temperature becomes comparable to the Compton temperature. In our calculations the region where Compton cooling can compensate for viscous heating does not extend out far enough to reach $`T_{comp}\sim T_{vir}`$, so that the regime considered in the Compton heated wind models is never reached. Thus, the Compton heated wind models require that $`\alpha \ll 0.01`$ in the corona in order to be valid.
Figure 8 also shows that if $`\alpha \approx 0.3`$, the vertical mass loss from the cool disc could become comparable to or greater than the mass flow through the disc for $`\dot{\mathrm{M}}_{disc}<10^{16}`$ g s<sup>-1</sup>, implying that steady, cold discs may not exist around stellar mass black holes in the low $`\dot{\mathrm{M}}`$ regime if the dissipation rate in the outer layers is high ($`\alpha >0.1`$). We have summarized our results for the relative mass flow out of the cool disc ($`\dot{\mathrm{M}}_w/\dot{\mathrm{M}}_{disc}`$) at different radii in Figure 9. Our results indicate that disc evaporation by a combination of viscous heating and irradiation is more important in the outer parts of a steady disc than in the inner parts. Only for the lowest accretion rate of $`10^{14}`$ g s<sup>-1</sup> is the entire range of radii we consider susceptible to complete evaporation, implying that a steady state cool disc may not exist beyond a radius of $`10^9`$ cm. However, the evaporation could remain a significant effect, at the 10% level or more, for all accretion rates considered. We re-emphasize that this conclusion does depend upon the effectiveness of viscous heating ($`\alpha `$) in the atmosphere, and that for smaller $`\alpha `$ the evaporation becomes important only for lower mass accretion rates and larger disc radii; e.g. for $`\alpha =0.01`$ the accretion rate and evaporation rate become comparable for $`R=10^{11}`$ cm and $`\dot{\mathrm{M}}_{disc}=10^{14}`$ g s<sup>-1</sup>.
The somewhat complex behaviour (crossing over) of the curves at log R = 10.5 in Figure 9 is due to a switch of the equilibrium state at which the lowest stable pressure occurs, between the cool equilibrium at a few times $`10^4`$ K and the hot equilibrium for X-ray irradiated gas at about $`10^6`$ K.
## 5 Discussion
We have shown that mass loss can occur from the surface of an accretion disc by evaporation due to thermal instability for a variety of conditions.
For accretion discs in cataclysmic variables, radiative heating by photons from smaller radii, in the form of hard X-rays (from an optically thin boundary layer) or $`10^5`$ K black body photons (from an optically thick boundary layer or the heated white dwarf surface), plays an important role in determining the mass loss rate due to atmospheric thermal instability. Under steady state conditions, the mass loss rate is expected to be less than a maximum of a few percent of the mass transfer rate through the disc, with the maximum being reached in low $`\dot{\mathrm{M}}`$ discs. However, in non-steady discs, mass loss from thermal instability could be a larger fraction of the mass transfer rate through the disc. Another effect that could significantly increase the evaporation rate is extra heating in the atmosphere, as would be expected if a significant buoyant flux of magnetic field escapes from the disc and is dissipated in the atmosphere, as has been suggested by some authors (Tout & Pringle 1992, but see also Stone et al. 1996). The thermal instability may therefore provide an explanation for the UV delay seen in some dwarf novae, which has been interpreted as evidence for the existence of “holes” in the disc during the low $`\dot{\mathrm{M}}`$ phase of the dwarf nova cycle, but other effects that come into play once the instability has led to a significant corona are probably a better candidate.
An example of this is the work of Meyer & Meyer-Hofmeister 1994 (MMH) and Liu et al. 1996, who found that evaporation from a CV disc can easily lead to significant mass loss. These models do not consider the stability of a cold disc by itself, but rather consider the situation in which the disc is sandwiched between coronae, and calculate the structure of such a disc-corona combination self-consistently. The evaporation in their case is due to thermal conduction from the viscously heated corona into the cold disc. They also take the dynamics of the evaporative outflow into account, an ingredient that is still missing from our exploratory calculations. Our results indicate that a cold disc by itself is indeed unstable, and that the formation of a corona is inevitable. Thus, evaporation by the mechanism modelled here may play a role by providing seed material for a corona, which may then develop into a coronal siphon flow in the manner outlined by MMH. We do treat the thermal equilibrium of the accretion disc outer layers in more detail than MMH, and show that the effect of the radiation field, which was not included by MMH, can be quite significant. We find that in our case the heating in the disc atmosphere is dominated by absorption of radiation over many pressure scale heights (even in the case of no radiation from the centre of the accretion flow), and not by viscous heating as modelled by MMH. We expect that for discs around black holes or neutron stars the irradiation effect is so strong that the disc evaporation picture developed by MMH for CV discs will have to be modified to include the irradiation.
Disc atmospheres around stellar mass black holes develop a thermal instability at much higher pressure than discs around white dwarfs due to the strong irradiation, and significant evaporation by thermal instability appears to be a real possibility. The importance of X-ray heating in determining the S curves of BHSXT discs and the disc instability has already been demonstrated by El Khoury and Wickramasinghe (1998). Our calculations have shown that X-ray heating from the inner regions of a black hole disc can be sufficient to drive evaporation at a rate comparable to or larger than the local mass transfer rate through the disc for $`\dot{\mathrm{M}}_{disc}<10^{16}`$ g s<sup>-1</sup>. The evaporation is strongest in the outer regions of the disc, and will be quenched by Compton cooling only at very high $`\dot{\mathrm{M}}_{disc}`$ ($`10^{18}`$ g s<sup>-1</sup>) and for low $`\alpha `$ in the accretion disc atmosphere. Thus, the thermal instability considered here may play a role in the transition of thin cool discs to hot advection dominated discs that has been postulated to explain the absence of a strong X-ray flux from some accreting black hole candidates (e.g. Narayan, McClintock & Yi 1996).
Our results could have important implications for the interpretation of observations of accretion discs around Black Hole Soft X-ray Transients. It is well documented that the profiles of the H and He lines seen in BHSXTs are rarely disc-like. Rather, phase dependent observations show strongly asymmetric and split lines which are inconsistent with an origin in a standard disc (Soria et al. 1998). We draw particular attention to the well studied system GRO1655-40. The spectral energy distribution during its 1996 outburst appears to evolve in time more like a black body than a disc (Hynes et al. 1998). The soft X-ray delay seen in this system (Hameury et al. 1997) also argues for a non-standard disc. Based on our calculations, it would appear that modelling the time dependent evolution of such systems would require consideration of evaporation, and of accretion from the hot, thick disc that could result from it.
## 6 Acknowledgements
We would like to thank Ralph Sutherland for making the photoionisation code MAPPINGS available to us. Our referee, Ivan Hubeny, has contributed significantly to making this paper clearer and more consistent.
# Note on Two Theorems in Nonequilibrium Statistical Mechanics
## Abstract
An attempt is made to clarify the difference between a theorem derived by Evans and Searles in 1994 on the statistics of trajectories in phase space and a theorem proved by the authors in 1995 on the statistics of fluctuations on phase space trajectory segments in a nonequilibrium stationary state.
preprint: RRU/3-cg
Recently a Fluctuation Theorem (FT) has been proved by the authors (GC) for fluctuations in nonequilibrium stationary states. Considerable confusion has been generated about the connection between this theorem and an earlier one by Evans and Searles (ES), so that it seemed worthwhile to try to clarify the present situation with regard to these two theorems.
In a 1993 paper by Evans, Cohen and Morriss, theoretical considerations led them to a computer experiment on the statistical properties of the fluctuations of a shear stress model (viscous current) - or the related entropy production rate - in a thermostatted sheared viscous fluid (plane Couette flow) in a nonequilibrium stationary state.
The Fluctuation Relation found in the simulation reads, in current notation:
$$\frac{\pi _\tau (p)}{\pi _\tau (-p)}\simeq e^{\tau \sigma _+p}$$
(1)
Here $`\pi _\tau (p)`$ is the probability of observing an average phase space contraction rate (which in the models considered has the interpretation of average entropy production rate) of size $`p\sigma _+`$ on one of many segments of duration $`\tau `$ on a long phase space trajectory of the dynamical system modeling the shearing fluid in a nonequilibrium stationary state; here $`\sigma (x)`$ denotes the phase space contraction rate near a phase point $`x`$ (i.e. the divergence of the equations of motion) and $`\sigma _+`$ is the average phase space contraction rate over positive infinite times, so that $`p`$ is a dimensionless characterization of the phase space contraction (with time average $`1`$). The approximation within which eq. (1) was observed was very convincing.
Under suitable assumptions, see below, a more precise formulation of eq. (1) was derived:
$$\underset{\tau \to \mathrm{\infty }}{lim}\frac{1}{\tau \sigma _+}\mathrm{ln}\frac{\pi _\tau (p)}{\pi _\tau (-p)}=p$$
(2)
where the validity of the fluctuation relation eq. (1) for asymptotically long times $`\tau `$ is more clearly expressed. Later, several other computer experiments have confirmed the relation eq. (2).
The original computer experiment was inspired by a theoretical argument for the relative probabilities of finding a phase space trajectory segment of length $`\tau `$ in a state $`x`$ with phase space contraction rate $`p`$ and in a state $`x^{\prime }`$ with rate $`-p`$. These theoretical considerations led to the correct prediction eq. (1), which was confirmed by the computer experiment, carried out independently of the theory.
Evans and Searles gave a derivation of a theorem which had a similar form to eq. (1). More precisely: let $`E_p`$ be the set of initial conditions of a dynamical system for phase space trajectories along which the phase space contraction is $`e^{p\sigma _+T}`$ in a time $`T`$. We denote by $`\mu _L(E_p)`$ its Liouville measure. Similarly, let $`\mu _L(E_{-p})`$ be the Liouville measure of the corresponding set of phase space trajectories along which the phase space contraction in time $`T`$ is $`e^{-p\sigma _+T}`$. Quite generally, and in all models considered in the literature relevant here, $`E_{-p}=IS_TE_p`$, if $`S_t`$ is the time evolution (Liouville) operator of the system, and if $`I`$ denotes the time reversal operation, so that $`t\to S_tx`$ is the phase space trajectory over time $`t`$ starting at $`x`$ at $`t=0`$. Hence $`E_{-p}`$, the set of points around which phase space contracts by $`-p\sigma _+T`$, is obtained by evolving forward over a time $`T`$ those in $`E_p`$ (which would contract by $`p\sigma _+T`$) and then inverting the velocities by the time reversal operator $`I`$. In fact, the sets $`E_p`$ and $`E_{-p}`$ are those considered by Evans and Searles.
Then the proof in is the following:
$`{\displaystyle \frac{\mu _L(E_p)}{\mu _L(E_{-p})}}`$ $`=`$ $`{\displaystyle \frac{\mu _L(E_p)}{\mu _L(IS_TE_p)}}=`$ (3)
$`={\displaystyle \frac{\mu _L(E_p)}{\mu _L(S_TE_p)}}`$ $`=`$ $`{\displaystyle \frac{\mu _L(E_p)}{\mu _L(E_p)e^{-p\sigma _+T}}}=e^{p\sigma _+T}`$ (4)
where one has used that the Liouville distribution is time reversal invariant, i.e. $`\mu _L(E)\equiv \mu _L(IE)`$ (although it is not stationary), to get the second equality, as well as the definition of phase space contraction in the third equality.
The arbitrary time interval $`T`$ includes the short times referring to the transient behavior of the system before reaching the nonequilibrium stationary state. In the derivation of eq. (4) only time reversal symmetry is used. Later it was argued that, under this assumption alone, eq. (4) also holds in the nonequilibrium stationary state $`\mu _{\mathrm{\infty }}`$, since eq. (4) is valid for any $`T`$ and the Liouville distribution $`\mu _L`$ would evolve in a sufficiently long time $`T`$ into a distribution $`S_T\mu _L`$ arbitrarily close to a nonequilibrium stationary state $`\mu _{\mathrm{\infty }}`$.
Therefore eq. (4) was reinterpreted and asserted to be identical to eq. (2), which, however, refers to the statistics of trajectory segments along a trajectory in a chaotic nonequilibrium stationary state $`\mu _{\mathrm{\infty }}`$, not to the statistics of independent trajectory histories emanating from the initial Liouville distribution $`\mu _L`$ under the time reversibility assumption.
In 1995 the authors proved eq. (2) based on a dynamical assumption, called the Chaotic Hypothesis (CH), which assured strong chaoticity (“Anosov system-like behavior”) for the systems for which eq. (2) held. In that work the name Fluctuation Theorem (FT) was first introduced for eq. (2), which was proposed as an explanation for the experimental result eq. (1). We will call this the GCFT.
It is worthwhile to emphasize again that, while the right hand sides of the eqs. (1) and (4) look very similar, they, as well as the left hand sides of these equations, really refer to entirely different physical situations.
Eq. (4) holds for any $`T`$ on trajectories with initial data sampled from the Liouville distribution at $`t=0`$, and it can be considered as a simple, but interesting, consequence, for reversible systems, of the very definition of phase space contraction. We will call it here the ESI, where the I refers to “identity”. The ESI is much more general than the FT in eq. (2), which needs, in addition to phase space contraction ($`\sigma _+>0`$) and time reversal symmetry, also the Chaotic Hypothesis. The proof of the ESI, fully described in eq. (4) above, is identical in essence to the original proof, which is much more involved.
In order to illustrate the fundamental difference between the two theorems we first give an example of a case where the more general ESI eq. (4) holds, while the GCFT eq. (2) does not.
To that end we consider a single charged particle in a periodic box, with charge $`e`$ moving in an electric field $`𝐄`$, i.e. a Lorentz gas without scatterers, and subject to a Gaussian thermostat (to obtain a nonequilibrium stationary state):
$$\dot{𝐪}=𝐩,\dot{𝐩}=e𝐄\alpha 𝐩$$
(5)
where the “thermostat” force $`-\alpha 𝐩`$, with $`\alpha =\frac{e𝐄\cdot 𝐩}{|𝐩|^2}`$, assures the reaching of a nonequilibrium stationary state of this system.
In this case one can solve explicitly the trivial equations of motion eq. (5) and check that the Liouville distribution $`\mu _L`$ indeed evolves towards a stationary state $`\mu _{\mathrm{\infty }}`$, which is simply a state in which the particle moves with constant speed parallel to $`𝐄`$. The ESI eq. (4) will (of course) hold for the phase space trajectories of this system sampled with the initial Liouville distribution $`\mu _L`$, but it will not be a fluctuation theorem, since there are no fluctuations. Also, the GCFT’s eq. (2) will not hold for the phase space trajectory segment fluctuations of this system, which is not a contradiction because the system is not chaotic.
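The claimed behaviour is easy to check numerically. The sketch below (a simple forward-Euler integration; the step size, field strength and initial momentum are arbitrary illustrative choices, not taken from the paper) integrates the momentum equation of eq. (5) and verifies that $`|𝐩|`$ is conserved by the Gaussian thermostat while the momentum aligns with $`𝐄`$ - a stationary state with no fluctuations:

```python
import math

def evolve(p0, e_field=(1.0, 0.0), e_charge=1.0, dt=1e-3, steps=20000):
    """Integrate p' = eE - alpha*p with alpha = eE.p/|p|^2 (equation 5)
    by forward Euler; the position decouples and is ignored here."""
    px, py = p0
    ex, ey = e_field
    for _ in range(steps):
        p2 = px * px + py * py
        alpha = e_charge * (ex * px + ey * py) / p2   # Gaussian thermostat
        px += dt * (e_charge * ex - alpha * px)
        py += dt * (e_charge * ey - alpha * py)
    return px, py

px, py = evolve((0.3, 1.0))
speed = math.hypot(px, py)
print(f"|p| = {speed:.4f} (initial {math.hypot(0.3, 1.0):.4f}), "
      f"angle to E = {math.degrees(math.atan2(py, px)):.2f} deg")
```

Since $`\dot{𝐩}`$ is the component of $`e𝐄`$ perpendicular to $`𝐩`$, the speed is an exact constant of the motion and the momentum direction relaxes monotonically onto $`𝐄`$, as stated in the text.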
From this simple example it follows that the two theorems cannot be equivalent, and the validity of eq. (4) cannot imply much, without extra assumptions, about the fluctuations (absent in this case) in the stationary state. Note that eq. (4) is an identity which is always valid in the systems considered. The system of eq. (5) is therefore a counterexample to the statement that eq. (4) implies eq. (2), i.e. to the statement that ESI implies GCFT.
Second, and more interestingly, one can try to derive the GCFT eq. (2) from the more general ESI eq. (4). One could then proceed as follows.
First one would need to show that on a subsequent trajectory segment of length $`\tau `$, after time $`T`$, the ratio of the probabilities of finding a phase space contraction of $`+p\sigma _+\tau `$ to that of finding $`p\sigma _+\tau `$ over this segment of length $`\tau `$, would be given by $`e^{p\sigma _+\tau }`$. Here $`p\sigma _+\tau `$ is any preassigned value of the phase space contraction. However, eq. (4) gives no information whatsoever about those points which after a time $`T`$ evolve into points which in the next $`\tau `$ units of time show a phase space contraction $`\pm p\sigma _+\tau `$. In other words, the ESI does not contain the detailed information needed to derive the GCFT.
If one adds the Chaotic Hypothesis to the time reversal symmetry assumed about the dynamical system in the ESI, one can use Sinai’s theorem to assert that such a system, starting from the initial Liouville distribution $`\mu _L`$, will indeed approach a chaotic nonequilibrium stationary distribution $`\mu _{\mathrm{\infty }}`$ supported (however) on a fractal attractor $`A`$ of zero Liouville measure, $`\mu _L(A)=0`$. This is, in this case, the SRB distribution $`\mu _{SRB}`$ of the system, which is the distribution used in the proof of the GCFT. However, for a proof of the GCFT, details of the SRB distribution are needed, precisely the details considered in that proof. That is, one has to make an appropriate (Markov) partition of the phase space and assign weights to the cells of increasingly finer partitions, leading to the SRB distribution. This then allows one to assign appropriate weights to those regions (of zero Liouville measure) in phase space that give rise to phase space contractions $`\pm p\sigma _+\tau `$ on trajectory segments of length $`\tau `$, leading to the GCFT.
A more advanced comparison between the ESI and the GCFT requires a more quantitative statement of the latter result. Namely, eq. (2) can be derived from the stronger relation:
$$\frac{\pi _\tau (p)}{\pi _\tau (-p)}=e^{(p\sigma _++O(\frac{T_{\mathrm{\infty }}}{\tau }))\tau }$$
(6)
where $`T_{\mathrm{\infty }}`$ is a time scale of the order of magnitude of the time necessary for the distribution $`S_T\mu _L`$, into which the Liouville distribution $`\mu _L`$ evolves in time $`T`$, to be “practically” indistinguishable from the stationary state that we denote by $`\mu _{\mathrm{\infty }}`$. Here the validity of the fluctuation relation (1) for asymptotically long times $`\tau `$ is more clearly expressed. The existence of the time $`T_{\mathrm{\infty }}`$ and its role in bounding the error term in eq. (6) are among the main results there. The time $`T_{\mathrm{\infty }}`$ appears as the range of the potential that generates the representation of $`\mu _{\mathrm{\infty }}`$ as a Gibbs state, using a symbolic dynamics representation of the SRB distribution on the Markov partition.
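As a toy illustration of the fluctuation relation (ours, not part of the original argument): if the distribution of $`p`$ over segments of length $`\tau `$ happened to be Gaussian with mean $`1`$ and variance $`2/(\sigma _+\tau )`$, eq. (6) would hold exactly, with no error term. A quick check:

```python
import math

sigma_plus, tau = 0.5, 10.0
var = 2.0 / (sigma_plus * tau)      # the variance that makes the relation exact

def log_pi(p):
    # log of a Gaussian density centred at p = 1 (normalisation cancels in ratios)
    return -(p - 1.0) ** 2 / (2.0 * var)

for p in (0.3, 0.7, 1.2):
    lhs = log_pi(p) - log_pi(-p)    # log [ pi_tau(p) / pi_tau(-p) ]
    rhs = p * sigma_plus * tau
    print(p, lhs, rhs)              # the two values coincide for every p
```

The agreement here is an algebraic identity of the Gaussian, not a dynamical statement; the content of the GCFT is precisely that chaotic dynamics produces a distribution with this property.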
Examining the ESI derivation above, one easily sees that the following
$$\frac{S_T\mu _L(E_p)}{S_T\mu _L(E_{-p})}=\frac{\mu _L(S_{-T}E_p)}{\mu _L(S_{-T}E_{-p})}=e^{(p\sigma _++O(\frac{T}{\tau }))\tau }$$
(7)
can also be derived; it is important to note that the argument leading to eq. (4) cannot say more than this. In particular, one must justify why $`T`$, which in principle should be large (strictly speaking infinite, so that $`S_T\mu _L`$ can be identified with $`\mu _{SRB}`$), can in fact be taken smaller than $`\tau `$. Justifying this requires assumptions (as the above counterexample indicates), like the mentioned Gibbs property of the SRB distribution, which is even stronger than requiring exponential decay of correlations (also implied by the CH). Otherwise, without extra assumptions, one has to take $`T`$ infinite first, with the result that eq. (7), and hence eq. (4), becomes empty in content. Of course one can argue that “on physical grounds” $`T`$ need not be taken infinite but only as large as some characteristic time scale for the approach to the attractor; but the precise meaning of this, and the assumptions under which it can be stated, is precisely what needs to be determined, particularly because in nonequilibrium systems the attractor $`A`$ is fractal with $`\mu _L(A)=0`$, and one may well doubt that the distribution $`S_T\mu _L`$ is ever close enough to $`\mu _{\mathrm{\infty }}`$ to allow comparing eqs. (6) and (7). This requires a convincing argument, if only because $`S_T\mu _L(A)=0`$.
Acknowledgements: E.G.D.C. is very much indebted to L. Rondoni for many clarifying discussions. The authors are also indebted to C. Liverani for suggestions on the proof of eq. (4) and to F. Bonetto, J. L. Lebowitz, and L. Rondoni for helpful discussions. E.G.D.C. gratefully acknowledges support from DOE grant DE-FG02-88-ER13847, while G.G. gratefully acknowledges support from Rutgers University, CNR-GNFM, and MPI.
# Schwarzschild black hole lensing
## I Introduction
The phenomena resulting from the deflection of electromagnetic radiation in a gravitational field are referred to as gravitational lensing (GL), and an object causing a detectable deflection is known as a gravitational lens. The basic theory of GL was developed by Liebes, Refsdal, and Bourassa and Kantowski. For detailed discussions of GL see the monograph by Schneider et al. and the reviews by Blandford and Narayan, Refsdal and Surdej, Narayan and Bartelmann, and Wambsganss.
The discovery of quasars in 1963 paved the way for observing point source GL. Walsh, Carswell and Weymann discovered the first example of GL. They observed twin images QSO 0957+561 A,B separated by $`5.7`$ arcseconds at the same redshift $`z_s=1.405`$ and mag $`17`$. Following this remarkable discovery more than a dozen convincing multiple-imaged quasars are known.
The vision of Zwicky that galaxies can be lensed was crystallized when Lynds and Petrosian and Soucail et al. independently observed giant blue luminous arcs of about $`20`$ arcseconds long in the rich clusters of galaxies. Paczyński interpreted these giant arcs to be distorted images of distant galaxies located behind the clusters. About $`20`$ giant arcs have been observed in the rich clusters. Apart from the giant arcs, there have been also observed weakly distorted arclets which are images of other faint background galaxies.
Hewitt et al. observed the first Einstein ring MG1131+0456 at redshift $`z_s=1.13`$. With high resolution radio observations, they found the extended radio source to actually be a ring of diameter about $`1.75`$ arcseconds. There are about half a dozen observed rings of diameters between $`0.33`$ to $`2`$ arcseconds and all of them are found in the radio waveband; some have optical and infrared counterparts as well.
The general theory of relativity has passed experimental tests in a weak gravitational field with flying colors; however, the theory has not been tested in a strong gravitational field. Testing the gravitational field in the vicinity of a compact massive object, such as a black hole or a neutron star, could be a possible avenue for such investigations. Dynamical observations of several galaxies show that their centres contain massive dark objects. Though there is no iron-clad evidence, indirect arguments suggest that these are supermassive black holes; at least, the case for black holes in the Galaxy as well as in NGC4258 appears to be strong . These could be possible observational targets to test the Einstein theory of relativity in a strong gravitational field through GL.
Immediately after the advent of the general theory of relativity, Schwarzschild obtained a static spherically symmetric asymptotically flat vacuum solution to the Einstein equations, which was later found to have an event horizon when maximally extended; thus this solution represents the gravitational field of a spherically symmetric black hole (see in Hawking and Ellis). Schwarzschild GL in the weak gravitational field region (for which the deflection angle is small) is well-known. Recently Kling et al. developed an iterative approach to GL theory based on approximate solutions of the null geodesics equations, and to illustrate their method they constructed the iterative lens equations and time of arrival equation for a single Schwarzschild lens. In this paper we obtain a lens equation that allows for the large bending of light near a black hole, model the Galactic supermassive “black hole” as a Schwarzschild lens, and study point source lensing in the strong gravitational field region, where the bending angle can be very large. Apart from a primary image and a secondary image (which are observed due to small bending of light in a weak gravitational field) we get a theoretically infinite sequence of images on both sides close to the optic axis; we term them relativistic images. The relativistic images are formed due to large bending of light in a strong gravitational field in the vicinity of $`3M`$, and are usually greatly demagnified (the magnification decreases very fast with an increase in the angular position of the source from the optic axis). Though the observation of relativistic images is a very difficult task (it is very unlikely that they will be observed in the near future), if it were ever accomplished it would support the general theory of relativity in a strong gravitational field, a regime inaccessible to any other known test of the theory, and would also give an upper bound to the compactness of the lens. This is the subject of study in this paper.
We use geometrized units (the gravitational constant $`G=1`$ and the speed of light in vacuum $`c=1`$, so that $`M\equiv MG/c^2`$).
## II Lens equation, magnification and critical curves
In this section we derive a lens equation that allows for the large bending of light near a black hole. The lens diagram is given in Fig.1. The line joining the observer $`O`$ and the lens $`L`$ is taken as the reference (optic) axis. The spacetime under consideration, with the lens (deflector) causing strong curvature, is asymptotically flat; the observer as well as the source are situated in the flat spacetime region (which can be embedded in an expanding Robertson-Walker universe).
$`SQ`$ and $`OI`$ are tangents to the null geodesic at the source and image positions, respectively; $`C`$ is where their point of intersection would be if there were no lensing object present. The angular positions of the source and the image are measured from the optic axis $`OL`$. $`\mathrm{\angle }LOI`$ (denoted by $`\theta `$) is the image position and $`\mathrm{\angle }LOS`$ (denoted by $`\beta `$) is the source position if there were no lensing object. $`\widehat{\alpha }`$ (i.e. $`\mathrm{\angle }OCQ`$) is the Einstein deflection angle. The null geodesic and the background broken geodesic path $`OCS`$ will be almost identical, except near the lens where most of the bending will take place. Given the vast distances from observer to lens and from lens to source, this will be a good approximation, even if the light goes round and round the lens before reaching the observer. We assume that the line joining the point $`C`$ and the location of the lens $`L`$ is perpendicular to the optic axis. This is a good approximation for small values of $`\beta `$. We draw perpendiculars $`LT`$ and $`LN`$ from $`L`$ on the tangents $`SQ`$ and $`OI`$, respectively; these represent the impact parameter $`J`$. $`D_s`$ and $`D_d`$ stand for the distances of the source and the lens from the observer, and $`D_{ds}`$ represents the lens-source distance, as shown in Fig. 1. Thus, the lens equation may be expressed as
$$\mathrm{tan}\beta =\mathrm{tan}\theta -\alpha ,$$
(1)
where
$$\alpha \equiv \frac{D_{ds}}{D_s}\left[\mathrm{tan}\theta +\mathrm{tan}\left(\widehat{\alpha }-\theta \right)\right].$$
(2)
The lens diagram gives
$$\mathrm{sin}\theta =\frac{J}{D_d}.$$
(3)
A gravitational field deflects a light ray and causes a change in the cross-section of a bundle of rays. The magnification of an image is defined as the ratio of the flux of the image to the flux of the unlensed source. According to Liouville’s theorem the surface brightness is preserved in gravitational light deflection. Thus, the magnification of an image turns out to be the ratio of the solid angles of the image and of the unlensed source (at the observer). Therefore, for a circularly symmetric GL, the magnification of an image is given by
$$\mu =\left(\frac{\mathrm{sin}\beta }{\mathrm{sin}\theta }\frac{d\beta }{d\theta }\right)^{-1}.$$
(4)
The sign of the magnification of an image gives the parity of the image. The singularities in the magnification in the lens plane are known as critical curves (CCs) and the corresponding values in the source plane are known as caustics. Critical images are defined as images of $`0`$-parity.
The tangential and radial magnifications are expressed by
$$\mu _t\equiv \left(\frac{\mathrm{sin}\beta }{\mathrm{sin}\theta }\right)^{-1},\mu _r\equiv \left(\frac{d\beta }{d\theta }\right)^{-1}$$
(5)
and singularities in these give tangential critical curves (TCCs) and radial critical curves (RCCs), respectively; the corresponding values in the source plane are known as tangential caustics (TCs) and radial caustics (RCs), respectively. Obviously, $`\beta =0`$ gives the TC and the corresponding values of $`\theta `$ are the TCCs. For small values of the angles $`\beta `$, $`\theta `$ and $`\widehat{\alpha }`$, equations $`(\text{1})`$ and $`(\text{4})`$ yield the approximate lens equation and magnification, respectively, which have been widely used in studying lensing in a weak gravitational field .
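In the small-angle limit with $`\widehat{\alpha }=4M/J`$, eq. (1) reduces to the familiar point-lens equation $`\beta =\theta -\theta _E^2/\theta `$, and eqs. (4) and (5) then give the standard weak-field magnifications. A short numerical check (ours, in units where $`\theta _E=1`$) recovers the classical total magnification $`(u^2+2)/(u\sqrt{u^2+4})`$ for a source offset $`u`$:

```python
import math

def images(beta):
    """Weak-field point lens: solve beta = theta - 1/theta (theta_E = 1)."""
    d = math.sqrt(beta * beta + 4.0)
    return (beta + d) / 2.0, (beta - d) / 2.0    # primary, secondary

def magnification(beta, theta):
    mu_t = theta / beta                   # tangential: (sin beta / sin theta)^-1
    mu_r = 1.0 / (1.0 + 1.0 / theta**2)   # radial: (d beta / d theta)^-1
    return mu_t * mu_r

u = 1.0                                   # source offset in units of theta_E
theta_p, theta_s = images(u)
total = abs(magnification(u, theta_p)) + abs(magnification(u, theta_s))
# classical point-lens result: (u^2 + 2) / (u * sqrt(u^2 + 4))
print(total, (u * u + 2.0) / (u * math.sqrt(u * u + 4.0)))
```

The secondary image has negative total parity (negative $`\mu _t`$, positive $`\mu _r`$), in line with the parity discussion above.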
## III Schwarzschild spacetime and the deflection angle
The Schwarzschild spacetime is expressed by the line element
$$ds^2=\left(1-\frac{2M}{r}\right)dt^2-\left(1-\frac{2M}{r}\right)^{-1}dr^2-r^2\left(d\vartheta ^2+\mathrm{sin}^2\vartheta d\varphi ^2\right),$$
(6)
where $`M`$ is the Schwarzschild mass. When this solution is maximally extended it has an event horizon at the Schwarzschild radius $`R_s=2M`$. The deflection angle $`\widehat{\alpha }`$ for a light ray with closest distance of approach $`r_o`$ is (see Chapters $`8.4`$ and $`8.5`$ of Weinberg)
$$\widehat{\alpha }\left(r_o\right)=2\int _{r_o}^{\mathrm{\infty }}\frac{dr}{r\sqrt{\left(\frac{r}{r_o}\right)^2\left(1-\frac{2M}{r_o}\right)-\left(1-\frac{2M}{r}\right)}}-\pi $$
(7)
and the impact parameter $`J`$ is
$$J=r_o\left(1-\frac{2M}{r_o}\right)^{-\frac{1}{2}}.$$
(8)
A timelike hypersurface $`\{r=r_0\}`$ in a spacetime is defined as a photon sphere if the Einstein bending angle of a light ray with the closest distance of approach $`r_0`$ becomes unboundedly large. For the Schwarzschild metric $`r_0=3M`$ is the photon sphere and thus the deflection angle $`\widehat{\alpha }`$ is finite for $`r_0>3M`$.
The Einstein deflection angle for large $`r_o`$ is
$$\widehat{\alpha }\left(r_o\right)=\frac{4M}{r_o}+\frac{4M^2}{r_o^2}\left(\frac{15\pi }{16}-1\right)+\mathrm{\cdots }$$
(9)
We mention the above result only for completeness, as it is not widely known in the literature. As we are interested in studying GL due to light deflection in a strong gravitational field, we will use Eq. $`(\text{7})`$ for all further calculations. Introducing radial distances in units of the Schwarzschild radius,
$$x=\frac{r}{2M},x_o=\frac{r_o}{2M},$$
(10)
the deflection angle $`\widehat{\alpha }`$ and the impact parameter $`J`$ take the form
$$\widehat{\alpha }\left(x_o\right)=2\int _{x_o}^{\mathrm{\infty }}\frac{dx}{x\sqrt{\left(\frac{x}{x_o}\right)^2\left(1-\frac{1}{x_o}\right)-\left(1-\frac{1}{x}\right)}}-\pi $$
(11)
and
$$J=2Mx_o\left(1-\frac{1}{x_o}\right)^{-\frac{1}{2}}.$$
(12)
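The deflection angle of eq. (11) is easy to evaluate numerically for any $`x_o>3/2`$. One way to do it (our own quadrature, not the authors’ code): substituting $`u=x_o/x`$ and then $`u=1-t^2`$ turns the improper, endpoint-singular integral into an integral of a smooth function over $`[0,1]`$:

```python
import math

def alpha_hat(x0, n=20000):
    """Deflection angle of eq. (11); x0 = r0/2M, valid for x0 > 1.5.
    After u = x0/x the integral becomes int_0^1 du/sqrt((1-1/x0) - u^2 + u^3/x0);
    setting u = 1 - t^2 factors out the endpoint zero, leaving a smooth integrand
    2/sqrt(2 - t^2 - (3 - 3t^2 + t^4)/x0) on [0, 1]."""
    def f(t):
        return 2.0 / math.sqrt(2.0 - t * t - (3.0 - 3.0 * t * t + t ** 4) / x0)
    h = 1.0 / n
    s = f(0.0) + f(1.0) + sum(f(i * h) * (4 if i % 2 else 2) for i in range(1, n))
    return 2.0 * (s * h / 3.0) - math.pi   # composite Simpson rule, then eq. (11)

print(alpha_hat(1000.0))      # weak field: ~ 4M/r0 = 2/x0 = 0.002, cf. eq. (9)
print(alpha_hat(1.604266))    # r0 = 3.208532 M: close to 3*pi/2 (~270 degrees)
print(alpha_hat(1.51))        # grows without bound as x0 -> 3/2 (photon sphere)
```

The first value checks the leading term of eq. (9); the second value matches the $`269.9999\mathrm{°}`$ quoted in Section V for $`r_o=3.208532M`$.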
In the computations in the following section we require the first derivative of the deflection angle $`\widehat{\alpha }`$ with respect to $`\theta `$. This is given by
$$\frac{d\widehat{\alpha }}{d\theta }=\widehat{\alpha }^{\prime }\left(x_o\right)\frac{dx_o}{d\theta },$$
(13)
where
$$\frac{dx_o}{d\theta }=\frac{x_o\left(1-\frac{1}{x_o}\right)^{\frac{3}{2}}\sqrt{1-\left(\frac{2M}{D_d}\right)^2x_o^2\left(1-\frac{1}{x_o}\right)^{-1}}}{\frac{M}{D_d}\left(2x_o-3\right)}$$
(14)
and the first derivative of $`\widehat{\alpha }`$ with respect to $`x_o`$ is
$$\widehat{\alpha }^{\prime }\left(x_o\right)=\frac{3-2x_o}{x_o^2\left(1-\frac{1}{x_o}\right)}\int _{x_o}^{\mathrm{\infty }}\frac{\left(4x-3\right)dx}{\left(3-2x\right)^2x\sqrt{\left(\frac{x}{x_o}\right)^2\left(1-\frac{1}{x_o}\right)-\left(1-\frac{1}{x}\right)}}.$$
(15)
## IV Lensing with the Galactic supermassive “black hole”
It is known that the Schwarzschild GL in a weak gravitational field gives rise to an Einstein ring when the source, lens and observer are aligned, and a pair of images (primary and secondary) of opposite parities when the lens components are misaligned. However, when the lens is a massive compact object a strong gravitational field is “available” for investigation. A light ray can pass close to the photon sphere and go around the lens once, twice, thrice, or many times (depending on the impact parameter $`J`$, provided $`J>3\sqrt{3}M`$) before reaching the observer. Thus, a massive compact lens gives rise, in addition to the primary and secondary images, to a large number (indeed, theoretically an infinite sequence) of images on both sides of the optic axis. We call these images (which are formed due to the bending of light through more than $`3\pi /2`$) relativistic images, as the light rays giving rise to them pass through a strong gravitational field before reaching the observer. We call the rings formed by bending of light through more than $`2\pi `$ relativistic Einstein rings.
We model the Galactic supermassive “black hole” as a Schwarzschild lens. This has mass $`M=2.8\times 10^6M_{\odot }`$ and distance $`D_d=8.5`$ kpc; therefore, the ratio of the mass to the distance is $`M/D_d\approx 1.57\times 10^{-11}`$. We consider a point source, with the lens situated half way between the source and the observer, i.e. $`D_{ds}/D_s=1/2`$. We allow the angular position of the source to change keeping $`D_{ds}`$ fixed.
We compute positions and magnifications of two pairs of outermost relativistic images as well as the primary and secondary images for different values of the angular positions of the source. These are shown in figures 2 and 3 and Table 1 (for relativistic images) and in Fig. 4 and Table 2 (for primary and secondary images). The angular positions of the primary and secondary images as well as the critical curves are given in arcseconds; those for relativistic images as well as relativistic critical curves are expressed in microarcseconds.
In Fig. 2 we show how the positions of the two outer relativistic images on each side of the optic axis change as the source position changes. To find the angular positions of images on the same side as the source we plot $`\alpha `$ (represented by continuous curves on the right side of the figure) and $`\mathrm{tan}\theta -\mathrm{tan}\beta `$ (represented by dashed curves) against $`\theta `$ for a given value of the source position $`\beta `$; the points of intersection give the image positions (see the right side of Fig. $`2`$).
Similarly, we plot $`\alpha `$ and $`\mathrm{tan}\theta -\mathrm{tan}\beta `$ vs. $`\theta `$, and the points of intersection give the image positions on the opposite side of the source (see the left side of Fig. $`2`$). We have taken $`\beta =0.075`$ radian ($`4.29718\mathrm{°}`$). In fact there is a sequence of theoretically infinitely many continuous curves which intersect a given dashed curve, giving rise to an infinite sequence of images on both sides of the optic axis. We have plotted only two sets of such curves (note that the third set of continuous curves comes very close to the second set and therefore it is not possible to show them in the same figure), demonstrating the appearance of two relativistic images on each side of the optic axis. For $`\beta =0`$ the points of intersection of the continuous curves with the dashed curve give an infinite sequence of relativistic tangential critical curves (relativistic Einstein rings). As $`\beta `$ increases any image on the same side as the source moves away from the optic axis, whereas any image on the opposite side of the source moves towards the optic axis. The displacement of relativistic images with respect to a change in the source position is very small (see Fig. $`2`$). The two sets of outermost relativistic images are formed at about $`17`$ microarcseconds from the optic axis.
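The quoted $`17`$ microarcsecond scale can be recovered from eqs. (1)-(3) and (11)-(12). For $`\beta =0`$ and $`D_{ds}/D_s=1/2`$, and for the tiny angles involved, the ring condition reduces to $`\widehat{\alpha }=2\theta +2k\pi `$; the outermost relativistic ring corresponds to one full loop, $`k=2`$. A self-contained sketch (our own root-finder, with $`M/D_d=1.57\times 10^{-11}`$ as in the text):

```python
import math

M_over_Dd = 1.57e-11     # Galactic "black hole": M/D_d, as quoted above

def alpha_hat(x0, n=20000):
    """Deflection angle of eq. (11); x0 = r0/2M.  After u = x0/x and u = 1 - t^2
    the integrand is smooth on [0, 1] (composite Simpson rule)."""
    f = lambda t: 2.0 / math.sqrt(2.0 - t * t - (3.0 - 3.0 * t * t + t ** 4) / x0)
    h = 1.0 / n
    s = f(0.0) + f(1.0) + sum(f(i * h) * (4 if i % 2 else 2) for i in range(1, n))
    return 2.0 * (s * h / 3.0) - math.pi

def theta_of(x0):
    # eq. (3) with eq. (12): sin(theta) = (2M/D_d) x0 (1 - 1/x0)^(-1/2)
    return math.asin(2.0 * M_over_Dd * x0 / math.sqrt(1.0 - 1.0 / x0))

def ring_condition(x0):
    # beta = 0, D_ds/D_s = 1/2: alpha_hat = 2*theta + 2*pi for the outermost ring
    return alpha_hat(x0) - 2.0 * theta_of(x0) - 2.0 * math.pi

lo, hi = 1.51, 1.60               # bracket: alpha_hat diverges as x0 -> 1.5
for _ in range(48):               # plain bisection
    mid = 0.5 * (lo + hi)
    if ring_condition(mid) > 0.0:
        lo = mid
    else:
        hi = mid
x0 = 0.5 * (lo + hi)
ring_muas = math.degrees(theta_of(x0)) * 3.6e9    # rad -> microarcseconds
print(x0, ring_muas)   # roughly x0 ~ 1.55 (r0 ~ 3.1 M), ring ~ 17 microarcsec
```

This is only a consistency check of the numbers quoted in the text, not a reproduction of the authors’ full image-position calculation (which also treats $`\beta \ne 0`$).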
In Fig. 3 we plot the tangential magnification $`\mu _t`$ as well as the total magnification $`\mu `$ vs. the image position $`\theta `$ near the two outermost relativistic tangential critical curves. The singularities in $`\mu _t`$ give the angular radii of the two relativistic Einstein rings. In Fig. 4 we plot the same for the primary-secondary images; the singularity in $`\mu _t`$ gives the angular position of the Einstein ring. The magnification for relativistic images falls extremely fast (as compared with the case of primary and secondary images) as the source position increases from perfect alignment. The tangential parity (sign of $`\mu _t`$) as well as the total parity (sign of $`\mu `$) are positive for all images on the same side of the source and negative for all images on the opposite side of the source. The radial parity (sign of $`\mu _r`$) is positive for all the images in Schwarzschild lensing.
In Table 3 we give the angular radii $`\theta _E`$ of the Einstein ring and of two relativistic Einstein rings. We also give the corresponding values of the deflection angle $`\widehat{\alpha }`$ and the closest distance of approach $`x_o`$ for the light rays giving rise to these rings. We define an “effective deflection angle” as $`\widehat{\alpha }`$ minus $`2\pi `$ times the number of revolutions the light ray has made before reaching the observer. Table 3 shows that the effective deflection angle for a ring decreases with decreasing angular radius, which is expected from the geometry of the lens diagram. The same is true for images on the same side of the optic axis, i.e. the effective deflection angle is smaller for images closer to the optic axis.
The supermassive “black holes” at the centres of $`NGC3115`$ and $`NGC4486`$ have $`M/D_d\approx 1.14\times 10^{-11}`$ and $`1.03\times 10^{-11}`$, respectively, which are very close to the case of the Galactic “black hole” we have studied. Therefore, if we study lensing with these “black holes” keeping $`D_{ds}/D_s=1/2`$, we will get approximately the same results. The angular radius of the Einstein ring in Schwarzschild lensing is expressed by $`\theta _E=\{4MD_{ds}/(D_dD_s)\}^{1/2}`$. For a source with $`D_{ds}<D_s`$ one has $`0<\left(D_{ds}/D_s\right)<1`$. If we consider $`D_{ds}/D_s`$ different from $`1/2`$ the magnitude of the Einstein ring can easily be estimated. As relativistic images are formed due to light deflection close to $`r_o=3M`$, their angular positions will be much less sensitive to a change in the value of $`D_{ds}/D_s`$. We have considered sources with $`D_d<D_s`$; however, sources with $`D_d>D_s`$ will also be lensed and will also give rise to relativistic images. Thus, all the sources of the universe will be mapped as relativistic images in the vicinity of the black hole (albeit as very faint images). Gravitational lensing by stellar-mass black holes will also give rise to relativistic images; however, unlike in the case of supermassive “black hole” lensing, these images will not be resolved from their primary and secondary images with present observational facilities.
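Plugging the numbers used above into this formula gives the scale of the (weak-field) Einstein ring; a one-line check (ours):

```python
import math

M_over_Dd = 1.57e-11        # Galactic "black hole" mass-to-distance ratio
Dds_over_Ds = 0.5           # lens half way between source and observer

# theta_E = {4 M D_ds / (D_d D_s)}^(1/2) = {4 (M/D_d)(D_ds/D_s)}^(1/2)
theta_E = math.sqrt(4.0 * M_over_Dd * Dds_over_Ds)       # radians
print(math.degrees(theta_E) * 3600.0)                    # ~ 1.16 arcseconds
```

This arcsecond-scale ring is what separates the primary-secondary pair from the microarcsecond-scale relativistic images discussed above.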
## V Relativistic images as test for general relativity in strong gravitational field
For the Galactic “black hole” lens, Fig. 2 and Table 1 (see caption (c)) show the angular positions of the two outermost sets of relativistic images (two images on each side of the optic axis) when a source position is given. In fact, there is a sequence of a large number of relativistic Einstein rings when the source, lens and observer are perfectly aligned, and when the alignment is “broken” there is a sequence of a large number of relativistic images on both sides of the optic axis. However, for a given source position their magnifications decrease very fast as the angular position $`\theta `$ decreases (see Table 1), and therefore the outermost set of images, one on each side of the optic axis, is observationally the most significant. The angular separations among relativistic images are too small to be resolved with presently available instruments and therefore all these images would appear at the same position; however, these relativistic images will be resolved from the primary and secondary images, and thus resolution is not a problem for observation of relativistic images.
If we observe a full or “broken” Einstein ring near the centre of a massive dark object at the centre of a galaxy with a faint relativistic image of the same source at the centre of the ring, we would expect that the central (relativistic) image would disappear after a short period of time. If seen, this would be a great success of the general theory of relativity in a strong gravitational field.
Observation of relativistic images would also give an upper bound on the compactness of the lens. To get a relativistic image a light ray has to suffer a deflection by an angle $`\widehat{\alpha }>3\pi /2`$. For the closest distance of approach $`r_o=3.208532M`$ the deflection angle $`\widehat{\alpha }=269.9999\mathrm{°}`$, and therefore $`r_o/M=3.208532`$ can be considered an upper bound to the compactness of the lens. The fact that the magnification of a relativistic image decreases very fast as the source position increases from perfect alignment with the lens and observer can be exploited to give a better estimate of the compactness of the lens. For the lens system considered in section four, the outermost relativistic Einstein ring has an angular radius of about $`16.898`$ microarcseconds, and this is formed by light rays bending at the closest distance of approach $`r_o\approx 3.09023M`$ (see Table $`3`$). As a relativistic image can be observed only very close to a relativistic TCC, the above value of $`r_o/M`$ gives an estimate of the compactness of the massive dark object.
There are some serious difficulties hindering the observation of the primary-secondary image pair near a galactic centre; the observation of relativistic images is much more difficult still. The extinction of electromagnetic radiation near the line of sight to galactic nuclei would be appreciable; the smaller the wavelength, the larger the extinction. Interstellar scattering and radiation at several frequencies from the material accreting onto the “black hole” would make these observations more difficult. Due to these obstacles no lensing event near a galactic centre has been observed to date, but it seems this is a very worthwhile project.
There are some additional difficulties in observing relativistic images. First, these images are strongly demagnified unless the source, lens and observer are highly aligned. When the source position $`\beta `$ decreases the magnification increases rapidly, and therefore one may possibly get observable relativistic images, but only if the source, lens and observer are highly aligned ($`\beta \ll 1`$ microarcsecond) and the source has a large surface brightness. Quasars and supernovae would be ideal sources for observations of relativistic images. The number of observed quasars is low (about $`10^4`$ ) and therefore the probability that a quasar will be highly aligned along the direction of any galactic centre of observed galaxies is extremely small. Similarly, there is a very small probability that a supernova will be strongly aligned with any galactic centre. We considered a normal star in the Galaxy to be a point source (note that we took $`D_{ds}/D_s=1/2`$). We cannot use the point source approximation when such a source is very close to the caustic ($`\beta =0`$), and therefore studies of extended source lensing are needed. Second, if relativistic images were observed it would be for a short period of time, because the magnification decreases very fast with an increase in the source position; however, the time scale for observation of relativistic images will be greater for lensing of more distant sources. It is highly improbable that the relativistic images would in fact be observed in a short observing period, and a long term project to search for such images would not have a reasonable probability of success. Nevertheless the possibility remains that such images might be detected through lucky observations in the vicinity of galactic centres.
## VI Summary
We obtained a lens equation which allows arbitrarily large values of the deflection angle, and used the deflection angle expression for the Schwarzschild metric obtained by Weinberg. This gives the bending angle of a light ray passing through the Schwarzschild gravitational field for a closest distance of approach $`r_o`$ in the range $`3M<r_o<\mathrm{\infty }`$. Using this we studied GL due to the Galactic “black hole” in a strong gravitational field.
Apart from a pair of images (primary and secondary) which are observed due to light deflection in a weak gravitational field, we find a sequence of a large number of relativistic images on both sides of the optic axis, due to large deflections of light in a strong gravitational field near the photon sphere $`r_o=3M`$. Among these relativistic images, the outermost pair is observationally the most important. Though these relativistic images are resolved from the primary and secondary images, there are serious difficulties in observing them. However, if such an observation were to succeed it would be a great triumph of the general theory of relativity and would also provide valuable information about the nature of massive dark objects. Observations of relativistic images would confirm the Schwarzschild geometry close to the event horizon; therefore these would strongly support the black hole interpretation of the lensing object.
In the investigations in this paper we modelled the massive compact objects as Schwarzschild lenses. However, it is worth investigating Kerr lensing to see the effect of rotation on lensing in a strong gravitational field, especially when the lens has a large ratio of intrinsic angular momentum to mass. There have been some studies of Kerr weak field lensing (see Rauch and Blandford and references therein). In passing, it is worth mentioning that any spacetime endowed with a photon sphere (as defined in Section III) and acting as a gravitational lens would give rise to relativistic images.
###### Acknowledgements.
Thanks are due to H. M. Antia, M. Dominik, J. Kormendy, J. Lehar, and D. Narasimha for helpful correspondence, and J. Menzies and P. Whitelock for helpful discussions on the visibility of images. This research was supported by FRD, S. Africa.
# Enhancement of singly and multiply strangeness in p-Pb and Pb-Pb collisions at 158A GeV/c
## I Introduction
Strangeness as a possible signature of the phase transition from a hadronic state to a QGP state was put forward about 16 years ago . It was based on the prediction that the production of strange quark pairs would be enhanced as a result of the approximate chiral symmetry restoration in a QGP state in comparison with a hadronic state. The strangeness enhancement in pA and AA collisions with respect to the superposition of nucleon-nucleon collisions has been investigated and confirmed by many experimental groups . However, alternative explanations exist; they are based on ‘conventional’ physics in the hadronic regime, such as rescattering, string-string interaction, etc. The first detailed theoretical study of strangeness production can be found in , where the enhanced relative yield of strange and multi-strange particles in nucleus-nucleus collisions with respect to proton-nucleus interactions was suggested as a sensitive signature of a QGP.
We have carried out a series of studies in recent years investigating strangeness enhancement within a hadron and string scenario , from which a Monte Carlo event generator, LUCIAE, was developed . Those studies indicate that including the rescattering of final-state hadrons is still not enough to reproduce the NA35 data on strange particle production. Reproducing the NA35 data further requires the mechanism of reduced strange quark suppression in string fragmentation, which contributes to the enhancement of the strange particle yield in nucleus-nucleus collisions with respect to the superposition of nucleon-nucleon collisions . Similarly, in order to reproduce the NA35 data, the RQMD generator, though equipped with rescattering, has to resort to the colour rope mechanism . In this picture it is assumed that neighbouring interacting strings may form a string cluster, called a colour rope, in pA and AA collisions. The colour rope then fragments in a collective way and tends to enhance the production of strange quark pairs from the colour field of the strings through an increase of the effective string tension.
It has been known for years that the strange quark suppression factor ($`\lambda `$ hereafter), i.e., the suppression of s quark pair production in the colour field with respect to u or d pair production, is not a constant in hadron-hadron collisions but is energy dependent, increasing from a value of 0.2 at the ISR energies to about 0.3 at the top of the SPS energies . In we proposed a mechanism for the energy dependence of $`\lambda `$ in hh collisions by relating the effective string tension to the production of hard gluon jets (mini-jets). A parameterized form was then obtained, which reproduces the energy dependence of $`\lambda `$ in hh collisions reasonably well . When the same mechanism is used in the study of pA and AA collisions it is found that $`\lambda `$ increases with the energy, mass, and centrality of the colliding system as a result of mini-jet (gluon) production stemming from the string-string interaction. Our model nicely reproduced the data of strange particle production in hh , pA, and AA collisions .
In this work we use the above ideas to study the recently published WA97 data on the enhanced production of singly and multiply strange particles in p-Pb and Pb-Pb collisions at 158A GeV/c. The study indicates that the WA97 data, which revealed that the enhancement of the strange particle yield in Pb-Pb collisions with respect to p-Pb collisions increases with centrality and with the s quark content of the multiply strange particles, can be explained in a hadron-string model, except for the $`\mathrm{\Omega }`$ yield in the Pb-Pb data.
## II Brief review of the LUCIAE model
The LUCIAE model is developed on the basis of the FRITIOF model . FRITIOF is a string model which started from the modelling of inelastic hadron-hadron collisions and has been successful in describing many experimental data from the low energies of the ISR regime all the way up to the SPS energies . In this model a hadron is assumed to behave like a massless relativistic string. A hadron-hadron collision is pictured as the multiple scattering of the partons inside the two colliding hadrons. In FRITIOF, during the collision the two hadrons are excited due to longitudinal momentum transfers and/or a Rutherford Parton Scattering (RPS). The highly excited states then emit bremsstrahlung gluons according to the soft radiation model. They are afterwards treated as excitations, i.e. Lund strings, and allowed to decay into final-state hadrons according to the Lund fragmentation scheme.
The FRITIOF model has been extended to describe hadron-nucleus and nucleus-nucleus collisions as well, by assuming that the reactions are a superposition of binary hadron-hadron collisions in which the geometry of the nucleus plays an important role, since the nuclei should then behave as “frozen” bags of nucleons. However, in relativistic nucleus-nucleus collisions there are generally many excited strings formed close to each other during a collision. Thus in the LUCIAE model a Firecracker model is proposed to deal with the collective string-string interaction. In the Firecracker model it is assumed that several strings from a relativistic heavy-ion reaction will form a cluster, and the strings inside such a cluster then interact in a collective way. We assume that the groups of neighbouring strings in a cluster may form interacting quantum states, so that both the emission of gluonic bremsstrahlung and the fragmentation properties can be affected by the large common energy density.
In a relativistic nucleus-nucleus collision there are generally many hadrons produced; however, FRITIOF does not include the final-state interactions. Thus in LUCIAE a rescattering model is devised to treat the reinteraction of the produced hadrons with each other and with the surrounding cold spectator matter. The distributions of the final-state hadrons are affected by the rescattering process. We refer to Refs. for the details and just give here the list of the reactions involved in LUCIAE, which are catalogued into
| $`\pi N\rightarrow \mathrm{\Delta }\pi `$ | $`\pi N\rightarrow \rho N`$ |
| --- | --- |
| $`NN\rightarrow \mathrm{\Delta }N`$ | $`\pi \pi \rightarrow k\overline{k}`$ |
| $`\pi N\rightarrow kY`$ | $`\pi \overline{N}\rightarrow \overline{k}\overline{Y}`$ |
| $`\pi Y\rightarrow k\mathrm{\Xi }`$ | $`\pi \overline{Y}\rightarrow \overline{k}\overline{\mathrm{\Xi }}`$ |
| $`\overline{k}N\rightarrow \pi Y`$ | $`k\overline{N}\rightarrow \pi \overline{Y}`$ |
| $`\overline{k}Y\rightarrow \pi \mathrm{\Xi }`$ | $`k\overline{Y}\rightarrow \pi \overline{\mathrm{\Xi }}`$ |
| $`\overline{k}N\rightarrow k\mathrm{\Xi }`$ | $`k\overline{N}\rightarrow \overline{k}\overline{\mathrm{\Xi }}`$ |
| $`\pi \mathrm{\Xi }\rightarrow k\mathrm{\Omega }^-`$ | $`\pi \overline{\mathrm{\Xi }}\rightarrow \overline{k}\overline{\mathrm{\Omega }^-}`$ |
| $`k\overline{\mathrm{\Xi }}\rightarrow \pi \overline{\mathrm{\Omega }^-}`$ | $`\overline{k}\mathrm{\Xi }\rightarrow \pi \mathrm{\Omega }^-`$ |
| $`\overline{N}N`$ annihilation | |
| $`\overline{Y}N`$ annihilation | |
where $`Y`$ refers to the $`\mathrm{\Lambda }`$ or $`\mathrm{\Sigma }`$ and $`\mathrm{\Xi }`$ refers to the $`\mathrm{\Xi }^-`$ or $`\mathrm{\Xi }^0`$. There are 364 reactions involved altogether.
In addition, the mechanism of the reduction of s quark suppression, i.e. the s quark suppression factor increasing with the energy, centrality, and mass of the colliding system, which is linked to the string tension, is included in LUCIAE via the parameterized formula
$$\kappa _{eff}=\kappa _0(1-\xi )^{-\alpha },$$
(1)
where $`\kappa _0`$ is the string tension of the pure $`q\overline{q}`$ string, $`\alpha `$ is a parameter $`\approx 3`$, and $`\xi `$ ($`\le 1`$) is calculated by
$$\xi =\frac{\mathrm{ln}(\frac{k_{max}^2}{s_0})}{\mathrm{ln}(\frac{s}{s_0})+\sum _{j=2}^{n-1}\mathrm{ln}(\frac{k_j^2}{s_0})},$$
(2)
which represents the extent to which a multigluon string deviates from a pure $`q\overline{q}`$ string.
The s quark suppression factor, $`\lambda `$, of two string states can thus be calculated by
$$\lambda _2=\lambda _1^{\frac{\kappa _{eff1}}{\kappa _{eff2}}},$$
(3)
where $`\kappa _{eff}`$ refers to the effective string tension of a multigluon string. Since $`\lambda `$ is always less than one, the above equation indicates that the larger the effective string tension, the greater the reduction of s quark suppression. The effective string tension is in turn related to the hard gluon kinks (mini-(gluon) jets) created on the string.
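The chain from the gluon kinematics in Eq. (2) through the effective tension of Eq. (1) to the rescaled suppression factor of Eq. (3) can be sketched numerically. In the Python sketch below the tension is taken as $`\kappa _{eff}=\kappa _0(1-\xi )^{-\alpha }`$, so that more gluon activity raises the tension; the value $`\alpha =3`$ and the sample kinematic inputs are illustrative assumptions only.

```python
import math

def xi(s, kmax2, k2_list, s0=1.0):
    # Eq. (2): deviation of a multigluon string from a pure q-qbar string
    return math.log(kmax2 / s0) / (math.log(s / s0) +
                                   sum(math.log(k2 / s0) for k2 in k2_list))

def kappa_eff(kappa0, xi_val, alpha=3.0):
    # Eq. (1): the effective string tension grows with the gluon activity xi
    return kappa0 * (1.0 - xi_val) ** (-alpha)

def rescale_lambda(lam1, kappa1, kappa2):
    # Eq. (3): lambda_2 = lambda_1 ** (kappa_eff1 / kappa_eff2)
    return lam1 ** (kappa1 / kappa2)

# A string with hard gluon kinks has kappa_eff > kappa_0, and therefore a
# larger (i.e. weaker) strange quark suppression factor than lambda_1 = 0.2.
x = xi(s=400.0, kmax2=4.0, k2_list=[2.0, 3.0])
lam2 = rescale_lambda(0.2, 1.0, kappa_eff(1.0, x))
```

For these sample inputs $`\lambda `$ comes out between 0.2 and 1, i.e. strangeness production is less suppressed on the multigluon string, which is the qualitative behaviour described above.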
It should be mentioned that the LUCIAE (FRITIOF) event generator runs together with the JETSET routine. In the JETSET routine there are model parameters PARJ(2) (i.e., $`\lambda `$) and PARJ(3). PARJ(3) is the extra suppression of strange diquark production compared with the normal suppression of a strange quark pair. Both PARJ(2) and PARJ(3) are responsible for the s quark (diquark) suppression and are related to the effective string tension (the relation of Eq. (3) holds for PARJ(3) as it does for $`\lambda `$). Besides $`\lambda `$ and PARJ(3) there is PARJ(1), which stands for the suppression of diquark-antidiquark pair production in the colour field in comparison with quark-antiquark pair production and is related to the effective string tension as well. The mechanism mentioned above is implemented in the program via these three parameters. How they affect the multiplicity distributions of the final-state particles can be found in .
## III Results and discussions
Table 1 gives the values of the JETSET parameters PARJ(1), PARJ(2) (i.e., $`\lambda `$), and PARJ(3), which vary with the centrality and the size of the collision system in p-Pb and Pb-Pb collisions at 158A GeV/c; these values seem quite reasonable.
Fig. 1a shows the calculated $`\mathrm{\Lambda }+\overline{\mathrm{\Lambda }}`$, $`\mathrm{\Xi }^-+\overline{\mathrm{\Xi }^-}`$, and $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ yields per event ($`|y-y_{cm}|\le 0.5`$ and p<sub>T</sub> $`>`$ 0 GeV/c) as a function of the number of participants in minimum-bias p-Pb collisions and in central (b=2) Pb-Pb collisions at 158A GeV/c (open symbols), compared with the WA97 data (full symbols) . The corresponding results in Pb-Pb collisions, after rescaling each yield according to its value in p-Pb, are given in Fig. 1b. One sees from Fig. 1a that the agreement between theory and experiment is quite good for $`\mathrm{\Lambda }+\overline{\mathrm{\Lambda }}`$ and $`\mathrm{\Xi }^-+\overline{\mathrm{\Xi }^-}`$; however, for $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ the theoretical results are lower than the experimental ones. This should be studied further both theoretically and experimentally. In Fig. 1b the theoretical results for $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ are also lower than the experimental ones; however, the trend of the strangeness enhancement increasing with centrality and with the s quark content of the strange particles is reproduced quite well.
Figs. 2 and 3 show, respectively, the calculated m<sub>T</sub> spectra ($`|y-y_{cm}|\le 0.5`$) of $`\mathrm{\Lambda }`$, $`\overline{\mathrm{\Lambda }}`$, $`\mathrm{\Xi }^-`$, $`\overline{\mathrm{\Xi }^-}`$ and $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ in p-Pb and Pb-Pb collisions at 158A GeV/c (open symbols). The corresponding full symbols in those figures are the WA97 data . One sees from Fig. 2 that the agreement between theory and experiment is reasonably good, except that the fluctuations in the theoretical $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ m<sub>T</sub> spectrum still have to be reduced. The situation in Fig. 3 is better: there the agreement between theory and experiment is reasonably good throughout.
In summary, we have used a hadron and string cascade model, LUCIAE, to investigate the WA97 observation that the strangeness enhancement increases with centrality and with the s quark content of the strange particles. Relying on the mechanism whereby the reduction of s quark suppression in string fragmentation leads to an enhancement of the strange particle yield in nucleus-nucleus collisions, the WA97 data could be reproduced nicely, except for the $`\mathrm{\Omega }`$ yield in Pb+Pb collisions, which needs to be studied further.
## IV ACKNOWLEDGMENTS
We would like to thank T. Sjöstrand for detailed instructions on using PYTHIA. This work was supported by the National Natural Science Foundation of China and the Nuclear Industry Foundation of China.
Figure Captions
> Fig. 1 a) The calculated $`\mathrm{\Lambda }+\overline{\mathrm{\Lambda }}`$, $`\mathrm{\Xi }^-+\overline{\mathrm{\Xi }^-}`$, and $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ yields per event ($`|y-y_{cm}|\le 0.5`$ and p<sub>T</sub> $`>`$ 0 GeV/c) as a function of the number of participants in p-Pb and Pb-Pb collisions at 158A GeV/c (open symbols) compared with the WA97 data (full symbols) . b) The calculated $`\mathrm{\Lambda }+\overline{\mathrm{\Lambda }}`$, $`\mathrm{\Xi }^-+\overline{\mathrm{\Xi }^-}`$, and $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ yields per event in Pb-Pb, expressed in units of the corresponding yields in p-Pb, as a function of the number of participants in Pb-Pb (open symbols); the full symbols are the corresponding WA97 data .
>
> Fig. 2 The calculated m<sub>T</sub> spectra ($`|y-y_{cm}|\le 0.5`$) of $`\mathrm{\Lambda }`$, $`\overline{\mathrm{\Lambda }}`$, $`\mathrm{\Xi }^-`$, $`\overline{\mathrm{\Xi }^-}`$ and $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ in p-Pb collisions at 158 GeV/c (open symbols) compared with the WA97 data (full symbols) .
>
> Fig. 3 The calculated m<sub>T</sub> spectra ($`|y-y_{cm}|\le 0.5`$) of $`\mathrm{\Lambda }`$, $`\overline{\mathrm{\Lambda }}`$, $`\mathrm{\Xi }^-`$, $`\overline{\mathrm{\Xi }^-}`$ and $`\mathrm{\Omega }^-+\overline{\mathrm{\Omega }^-}`$ in Pb-Pb collisions at 158A GeV/c (open symbols) compared with the WA97 data (full symbols) .
# Bäcklund transformations for many-body systems related to KdV
## 1. Introduction
Bäcklund transformations (BTs) are an important aspect of the theory of integrable systems which have traditionally been studied in the context of evolution equations. However, more recently there has been much interest in discrete systems or integrable mappings . Within the modern approach to separation of variables (reviewed by Sklyanin in ) this has led to the study of BTs for finite-dimensional Hamiltonian systems . The latter are canonical transformations including a Bäcklund parameter $`\lambda `$, and apart from being interesting integrable mappings in their own right they also lead to separation of variables when $`n`$ such mappings are applied to an integrable system with $`n`$ degrees of freedom. The sequence of Bäcklund parameters $`\lambda _j`$ together with a set of conjugate variables $`\mu _j`$ constitute the separation variables, and satisfy a new property called spectrality introduced in .
We proceed to develop these ideas with some new examples of BTs for $`n`$-body systems, namely the many-body generalisation of the case (ii) integrable Hénon-Heiles system, the Garnier system and the Neumann system on the sphere (see ). It is known that the case (ii) Hénon-Heiles system is equivalent to the stationary flow of fifth-order KdV , while the Garnier and Neumann systems may be obtained as restricted flows of the KdV hierarchy . Thus we derive BTs for these systems by reduction of the standard BT for KdV, which arises from the Darboux-Crum transformation for Schrödinger operators. The restriction of the Darboux transformation to the stationary flows of the modified (mKdV) hierarchy has been discussed in .
In the following section we describe how the reduction works in general, before specialising these considerations to each particular system and presenting the associated generating function for the BT. We note that these systems are examples of the reduced Gaudin magnet , so that we have the following Lax matrix
(1.1)
$$L(u)=\underset{j=1}{\overset{n}{\sum }}\frac{\ell _j}{u-a_j}+B(u),\qquad \ell _j=\left(\begin{array}{cc}S_j^3& S_j^{-}\\ S_j^+& -S_j^3\end{array}\right)$$
where (up to scaling) the $`S_j`$ satisfy $`n`$ independent copies of the standard $`sl(2)`$ algebra:
(1.2)
$$\{S_j^3,S_k^\pm \}=\pm 2\delta _{jk}S_k^\pm ,\qquad \{S_j^+,S_k^-\}=4\delta _{jk}S_k^3.$$
For the Hénon-Heiles and Garnier systems the matrix $`B(u)`$ is respectively quadratic and linear in the spectral parameter $`u`$, while for the Neumann system it is independent of $`u`$ and turns out to be constant due to the constraint that the particles lie on the sphere (hence the Poisson algebra (1.2) must be modified by Dirac reduction).
We have constructed the BT for the (non-reduced) $`sl(2)`$ Gaudin magnet with quasi-periodic boundary condition in , while some preliminary results on the classical Garnier system and two-body Hénon-Heiles system first appeared in .
## 2. Classical integrable systems and KdV
### 2.1. Restricting the BT
As is well known, the Darboux-Crum transformation consists of mapping the Schrödinger operator $`\partial _t^2+V-\lambda `$ to another operator $`\partial _t^2+\stackrel{~}{V}-\lambda `$ by factorizing the former and then reversing the order of factorization. Given an eigenfunction $`\varphi `$ satisfying
$$(\partial _t^2+V-\lambda )\varphi =0$$
we may set $`y=(\mathrm{log}[\varphi ])_t`$ and then
(2.1)
$$V=-y_t-y^2+\lambda ,\qquad \stackrel{~}{V}=y_t-y^2+\lambda ;$$
for $`\lambda =0`$ this is just the Miura map for KdV. Also given another eigenfunction $`\psi `$ of the Schrödinger operator with potential $`V`$ for a different spectral parameter $`u`$ we have
$$(\partial _t^2+V-u)\psi =0,\qquad (\partial _t^2+\stackrel{~}{V}-u)\stackrel{~}{\psi }=0$$
where the transformation to the new eigenfunction $`\stackrel{~}{\psi }`$ and its derivative may be given in matrix form as
(2.2)
$$\left(\begin{array}{c}\stackrel{~}{\psi }_t\\ \stackrel{~}{\psi }\end{array}\right)=k\left(\begin{array}{cc}-y& y^2+u-\lambda \\ 1& -y\end{array}\right)\left(\begin{array}{c}\psi _t\\ \psi \end{array}\right)$$
for any constant $`k`$. From (2.1) follows the standard formula for the Darboux-Bäcklund transformation of KdV, $`\stackrel{~}{V}=V+2(\mathrm{log}[\varphi ])_{tt}`$.
For what follows it will also be necessary to consider a product of eigenfunctions for a Schrödinger operator with potential $`V`$ and eigenvalue $`u`$,
$$f=\psi \psi ^{\prime }$$
with Wronskian $`\psi _t\psi ^{\prime }-\psi \psi _t^{\prime }=2m`$. It is well known that $`f`$ satisfies the Ermakov-Pinney equation
(2.3)
$$ff_{tt}-\frac{1}{2}f_t^2+2(V-u)f^2+2m^2=0.$$
If we now transform $`\psi `$ and $`\psi ^{\prime }`$ according to (2.2) then we find a new product of eigenfunctions $`\stackrel{~}{f}=\stackrel{~}{\psi }\stackrel{~}{\psi }^{\prime }`$ satisfying the same Ermakov-Pinney equation but with $`V`$ replaced by $`\stackrel{~}{V}`$, given explicitly by
(2.4)
$$\stackrel{~}{f}=(\lambda -u)^{-1}\frac{(Z^2-m^2)}{f},\qquad Z=\frac{1}{2}f_t-yf,$$
where we have set $`k^2=(\lambda -u)^{-1}`$ to ensure that the transformed eigenfunctions have the same Wronskian $`2m`$. It is also straightforward to show that, in terms of $`\stackrel{~}{f}`$, the quantity $`Z`$ can be written as $`Z=-\frac{1}{2}\stackrel{~}{f}_t-y\stackrel{~}{f}`$ (see ).
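These formulae can be exercised on explicit eigenfunctions. The sympy sketch below takes $`V=0`$, $`\lambda =1`$, $`u=-1`$ (choices of convenience only), builds $`\stackrel{~}{f}`$ from (2.4), and confirms that it satisfies the Ermakov-Pinney equation with the transformed potential, and that $`Z`$ is recovered from $`\stackrel{~}{f}`$ as $`-\frac{1}{2}\stackrel{~}{f}_t-y\stackrel{~}{f}`$.

```python
import sympy as sp

t = sp.symbols('t')
lam, u = sp.Integer(1), sp.Integer(-1)     # V = 0: phi_tt = lam*phi, psi_tt = u*psi
phi = sp.exp(t)                            # eigenfunction at lambda = 1
y = sp.diff(sp.log(phi), t)                # y = 1

psi1, psi2 = sp.cos(t), sp.sin(t)          # eigenfunctions at u = -1
f = psi1 * psi2
m = (sp.diff(psi1, t)*psi2 - psi1*sp.diff(psi2, t)) / 2   # half the Wronskian

Z = sp.diff(f, t)/2 - y*f
ftil = (Z**2 - m**2) / ((lam - u) * f)     # the transformed product, Eq. (2.4)

Vtil = sp.diff(y, t) - y**2 + lam          # transformed potential from (2.1)
EP = ftil*sp.diff(ftil, t, 2) - sp.diff(ftil, t)**2/2 + 2*(Vtil - u)*ftil**2 + 2*m**2
Zalt = -sp.diff(ftil, t)/2 - y*ftil

print(sp.simplify(EP), sp.simplify(Z - Zalt))   # 0 0
```

Here $`\stackrel{~}{f}`$ works out to $`-\frac{1}{2}\mathrm{cos}2t`$, and both checks reduce to zero identically in $`t`$.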
We can now describe how this transformation restricts to the finite-dimensional Hamiltonian systems presented below. The systems are expressed in variables $`(q_j,p_j)`$ which appear in the Lax matrix (1.1) via the identification
$$S_j^3=p_jq_j,\qquad S_j^-=-p_j^2+\frac{m_j^2}{q_j^2},\qquad S_j^+=q_j^2.$$
For Hénon-Heiles and Garnier the non-vanishing Poisson brackets are the standard ones $`\{p_j,q_k\}=\delta _{jk}`$ which provide a realization of the algebra (1.2); for the Neumann system on the sphere the bracket must be modified by Dirac reduction.
All of the systems are Liouville integrable, and thus have a complete set of Hamiltonians in involution, but for these purposes we concentrate on the Hamiltonian $`h`$ generating the flow corresponding to $`t`$ above (in KdV theory this is usually denoted $`x`$, the spatial variable). For this flow the Lax equation $`L_t=[N,L]`$ is the compatibility condition for the linear system
(2.5)
$$L(u)\mathrm{\Psi }=v\mathrm{\Psi },\qquad \mathrm{\Psi }_t=N\mathrm{\Psi };\qquad N=\left(\begin{array}{cc}0& u-V(q_j,p_j)\\ 1& 0\end{array}\right).$$
Observe that the second part of the linear system is just a Schrödinger equation for the potential $`V`$; for Neumann and Garnier this is a function of $`(q_j,p_j)`$ for $`j=1,\dots ,n`$, while for Hénon-Heiles there is an extra pair of conjugate variables $`(q_{n+1},p_{n+1})`$ such that $`V=q_{n+1}`$.
The equations of motion generated by this Hamiltonian take the form $`q_{j,t}=p_j`$ and
(2.6)
$$p_{j,t}=q_{j,tt}=(a_j-V(q_k,p_k))q_j-\frac{m_j^2}{q_j^3}$$
for $`j=1,\dots ,n`$; for Hénon-Heiles there are also equations for $`q_{n+1}`$ and $`p_{n+1}=q_{n+1,t}`$. The important thing to observe is that (2.6) is equivalent to the fact that $`S_j^+=q_j^2`$ satisfies the Ermakov-Pinney equation (2.3) corresponding to a Schrödinger equation with potential $`V`$ and eigenvalue $`a_j`$. Thus to obtain a Bäcklund transformation for these many-body systems we simply apply a Darboux-Crum transformation (2.1) to the potential $`V=V(q_j,p_j)`$ to obtain $`\stackrel{~}{V}=V(\stackrel{~}{q}_j,\stackrel{~}{p}_j)`$, and then we know that the solutions of the Ermakov-Pinney equation must transform according to (2.4). By this procedure we may explicitly construct the BT for the many-body systems below (or for any restricted flow of KdV), and it is then simple to calculate the generating function $`F(q_j,\stackrel{~}{q}_j)`$ of this canonical transformation, such that
$$dF=\underset{j}{\sum }(p_jdq_j-\stackrel{~}{p}_jd\stackrel{~}{q}_j).$$
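The equivalence between the Newton equations (2.6) and the Ermakov-Pinney equation for $`f=q_j^2`$ amounts to a one-line substitution, which the following sympy sketch checks symbolically (treating $`V`$ as frozen at an instant, which suffices since no time derivative of $`V`$ enters the identity):

```python
import sympy as sp

t, a, V, m = sp.symbols('t a V m')
q = sp.Function('q', positive=True)(t)
f = q**2

# Ermakov-Pinney expression (2.3) for f, with eigenvalue a and parameter m
EP = f*sp.diff(f, t, 2) - sp.diff(f, t)**2/2 + 2*(V - a)*f**2 + 2*m**2

# impose the Newton equation (2.6): q_tt = (a - V) q - m^2 / q^3
EP = EP.subs(sp.diff(q, t, 2), (a - V)*q - m**2/q**3)
print(sp.simplify(EP))   # 0
```

The velocity terms cancel pairwise and the remainder vanishes once the equation of motion is imposed.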
The discrete Lax equation for the BT,
$$ML=\stackrel{~}{L}M$$
where $`\stackrel{~}{L}=L(\stackrel{~}{q_j},\stackrel{~}{p_j};u)`$, is necessary to ensure the preservation of the spectral curve $`det(v-L(u))=0`$ (so that all the Hamiltonians in involution are preserved). This follows immediately from the properties of the Darboux-Crum transformation, since we know that the vector $`\mathrm{\Psi }`$ in the linear system (2.5) must transform according to (2.2), and hence we may take (setting $`k=1`$)
(2.7)
$$M=\left(\begin{array}{cc}-y& y^2+u-\lambda \\ 1& -y\end{array}\right).$$
Of course we must determine $`y`$ as a function of the dynamical variables. In the Garnier and Hénon-Heiles cases it turns out that the potential depends on coordinates only, $`V=V(q_j)`$, and so by adding the two equations in (2.1) we obtain
$$y(q_j,\stackrel{~}{q_j})=\pm \sqrt{\lambda -\frac{1}{2}(V+\stackrel{~}{V})};$$
to obtain the correct continuum limit of the discrete dynamics it is necessary to take the negative branch of the square root (see ). For the Neumann system $`V`$ depends on both coordinates and momenta, so the above does not yield $`y(q_j,\stackrel{~}{q_j})`$.
There is another way of writing $`L`$ which arises more naturally via reduction from the zero curvature representation of the KdV hierarchy , viz
$$L(u)=\left(\begin{array}{cc}\frac{1}{2}\mathrm{\Pi }_t& \hfill -\frac{1}{2}\mathrm{\Pi }_{tt}+(u-V)\mathrm{\Pi }\\ \mathrm{\Pi }& \hfill -\frac{1}{2}\mathrm{\Pi }_t\end{array}\right)$$
where
(2.8)
$$\mathrm{\Pi }(u)=\underset{j=1}{\overset{n}{\sum }}\frac{q_j^2}{u-a_j}+\mathrm{\Delta }(u).$$
$`\mathrm{\Delta }`$ is a polynomial in $`u`$ fixing the dynamical term $`B`$ in (1.1); we shall present the appropriate $`\mathrm{\Delta }`$ and $`B`$ in each case below. Clearly the $`t`$ derivatives of $`\mathrm{\Pi }`$ can be rewritten using the equations of motion to yield (1.1).
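As a cross-check of the two forms of the Lax matrix, one can verify directly that they agree once the equations of motion are used. The sympy sketch below does this for the Garnier case with $`n=1`$ and $`m_1=0`$; the matrix entries $`\pm \frac{1}{2}\mathrm{\Pi }_t`$ and $`-\frac{1}{2}\mathrm{\Pi }_{tt}+(u-V)\mathrm{\Pi }`$, together with the realization $`S^3=pq`$, $`S^-=-p^2`$, $`S^+=q^2`$, are the sign conventions assumed here.

```python
import sympy as sp

u, a, q, p = sp.symbols('u a q p')
V = 2*q**2                                   # Garnier potential, n = 1, m_1 = 0

Pi    = q**2/(u - a) + 1                     # Delta(u) = 1
Pi_t  = 2*p*q/(u - a)                        # using q_t = p
Pi_tt = (2*p**2 + 2*q*(a - V)*q)/(u - a)     # using p_t = (a - V) q

L_Pi = sp.Matrix([[Pi_t/2, -Pi_tt/2 + (u - V)*Pi],
                  [Pi,     -Pi_t/2]])

# Gaudin form (1.1): S3 = p q, S- = -p^2, S+ = q^2, with B = [[0, u - q^2], [1, 0]]
ell = sp.Matrix([[p*q, -p**2], [q**2, -p*q]])
L_G = ell/(u - a) + sp.Matrix([[0, u - q**2], [1, 0]])

print(sp.simplify(L_Pi - L_G))               # zero matrix
```

The difference simplifies to the zero matrix, confirming that rewriting the $`t`$ derivatives of $`\mathrm{\Pi }`$ with the equations of motion reproduces (1.1).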
Finally if we write the (hyper-elliptic) spectral curve as
$$v^2=R(u)$$
then it is easy to check that the spectrality property is satisfied for these systems, in the sense that defining the conjugate variable to $`\lambda `$ by
$$\mu =-2\frac{\partial F}{\partial \lambda }$$
we find that
$$L(\lambda )\mathrm{\Omega }=\mu \mathrm{\Omega }$$
with eigenvector $`\mathrm{\Omega }=(y,1)^T`$, or in other words $`\mu ^2=R(\lambda )`$ so that $`(\lambda ,\mu )`$ is a point on the spectral curve. Note that (as for the examples in ) this eigenvector spans the kernel of $`M`$,
$$M(\lambda )\mathrm{\Omega }=0.$$
We can also write $`y`$ explicitly in terms of both the old and the new variables related by the BT, thus:
(2.9)
$$y(q_j,p_j)=\frac{\mathrm{\Pi }_t(\lambda )+2\mu }{2\mathrm{\Pi }(\lambda )},\qquad y(\stackrel{~}{q_j},\stackrel{~}{p_j})=-\frac{(\stackrel{~}{\mathrm{\Pi }}_t(\lambda )-2\mu )}{2\stackrel{~}{\mathrm{\Pi }}(\lambda )};$$
clearly we denote $`\stackrel{~}{\mathrm{\Pi }}(\lambda )=\mathrm{\Pi }(\stackrel{~}{q_j},\stackrel{~}{p_j};\lambda )`$.
### 2.2. Generalised Hénon-Heiles system
For the many-body generalisation of case (ii) integrable Hénon-Heiles system, the Hamiltonian generating the $`t`$ flow takes the form
$$h=\frac{1}{2}\underset{j=1}{\overset{n+1}{\sum }}p_j^2+q_{n+1}^3+q_{n+1}\left(\frac{1}{2}\underset{j=1}{\overset{n}{\sum }}q_j^2+c\right)-\frac{1}{2}\underset{j=1}{\overset{n}{\sum }}\left(a_jq_j^2+\frac{m_j^2}{q_j^2}\right).$$
The original case (ii) integrable Hénon-Heiles system corresponds to $`n=1`$ with $`c=m_j=a_j=0`$. The link between stationary fifth-order KdV and the type (ii) system was noted by Fordy in , although this was anticipated in work of Weiss , who used Painlevé analysis to derive a BT and associated linear problem (a similar result also appears in ). None of these authors wrote a BT explicitly as a canonical transformation with parameter, although (without parameter) this was done for a non-autonomous version in .
For the Lax matrix $`L`$ of the generalised $`(n+1)`$-body Hénon-Heiles system we have $`\mathrm{\Delta }=-16u-8q_{n+1}`$ so that the extra term $`B(u)`$ is given by
$$B=\left(\begin{array}{cc}-4p_{n+1}& E\\ -16u-8q_{n+1}& 4p_{n+1}\end{array}\right),\qquad E=-16u^2+8q_{n+1}u-4q_{n+1}^2-\underset{j=1}{\overset{n}{\sum }}q_j^2-4c.$$
The equations of motion for $`h`$ imply that the squares of the first $`n`$ coordinates $`q_j^2`$ satisfy the Ermakov-Pinney equation (2.3) for $`m=m_j`$ with
$$V=q_{n+1}$$
and eigenvalue $`a_j`$. Thus the BT for the system can be calculated directly by applying the Darboux-Crum transformation to $`V=q_{n+1}`$, to yield $`\stackrel{~}{V}=\stackrel{~}{q}_{n+1}`$, and applying (2.4) to each $`q_j^2`$ for $`j=1,\mathrm{},n`$.
After some calculation the generating function for this canonical transformation is found to be
$$F(q_j,\stackrel{~}{q}_j;\lambda )=\underset{j=1}{\overset{n}{\sum }}\left(Z_j+\frac{m_j}{2}\mathrm{log}\left[\frac{Z_j-m_j}{Z_j+m_j}\right]\right)+\frac{16}{5}y^5+4(q_{n+1}+\stackrel{~}{q}_{n+1})y^3$$
$$+\left(2q_{n+1}^2+2q_{n+1}\stackrel{~}{q}_{n+1}+2\stackrel{~}{q}_{n+1}^2+\frac{1}{2}\underset{j=1}{\overset{n}{\sum }}(q_j^2+\stackrel{~}{q}_j^2)+2c\right)y,$$
where we have found it convenient to use the quantities $`Z_j(q_j,\stackrel{~}{q}_j)`$ and $`y(q_j,\stackrel{~}{q}_j)`$ defined by
(2.10)
$$Z_j^2=m_j^2+(\lambda -a_j)q_j^2\stackrel{~}{q}_j^2,$$
and
$$y=\sqrt{\lambda -\frac{1}{2}(q_{n+1}+\stackrel{~}{q}_{n+1})}.$$
In order to check the spectrality property, we have explicitly found that the eigenvalue of $`L(\lambda )`$ with eigenvector $`\mathrm{\Omega }=(y,1)^T`$ can be written as
$$\mu (q_j,\stackrel{~}{q}_j;\lambda )=-\underset{j=1}{\overset{n}{\sum }}\frac{Z_j}{\lambda -a_j}-\frac{1}{y}\frac{\partial F}{\partial y},$$
which precisely equals $`-2\frac{\partial F}{\partial \lambda }`$ as required.
### 2.3. Garnier system
For the Garnier system the $`t`$ flow is generated by the Hamiltonian
$$h=\frac{1}{2}\underset{j=1}{\overset{n}{\sum }}p_j^2+\frac{1}{2}(\underset{j=1}{\overset{n}{\sum }}q_j^2)^2-\frac{1}{2}\underset{j=1}{\overset{n}{\sum }}\left(a_jq_j^2+\frac{m_j^2}{q_j^2}\right).$$
This differs from the traditional Garnier system as in by the inclusion of extra inverse square terms. The Newton equations for the $`q_j`$ are
$$q_{j,tt}+2(\underset{k}{\sum }q_k^2)q_j=a_jq_j-\frac{m_j^2}{q_j^3},$$
so clearly for the standard restricted flows of KdV , when $`m_j=0`$, each $`q_j`$ is an eigenfunction of a Schrödinger operator with potential
$$V=2\underset{j}{\sum }q_j^2$$
and eigenvalue $`a_j`$, while in general $`q_j^2`$ is a product of eigenfunctions satisfying the Ermakov-Pinney equation for $`m=m_j`$.
The Lax matrix of the Garnier system has $`\mathrm{\Delta }=1`$, so $`L`$ takes the form (1.1) with
$$B=\left(\begin{array}{cc}0& u-\underset{j}{\sum }q_j^2\\ 1& 0\end{array}\right).$$
Applying the Darboux-Crum transformation we obtain a new potential
$$\stackrel{~}{V}=2\underset{j}{\sum }\stackrel{~}{q}_j^2,$$
and the corresponding BT induced on the Garnier system is equivalent to gauging $`L`$ by the matrix $`M`$ of the form (2.7) with
$$y=\sqrt{\lambda -\underset{j}{\sum }(q_j^2+\stackrel{~}{q}_j^2)}.$$
Finally we can calculate the generating function for this BT, which may be written as follows:
$$F(q_j,\stackrel{~}{q}_j;\lambda )=\underset{j=1}{\overset{n}{\sum }}\left(Z_j+\frac{m_j}{2}\mathrm{log}\left[\frac{Z_j-m_j}{Z_j+m_j}\right]\right)-\frac{1}{3}y^3,$$
where $`y(q_j,\stackrel{~}{q}_j)`$ is as above and $`Z_j`$ is given by the same expression (2.10) as for Hénon-Heiles. In we derived this generating function for the special case $`m_j=0`$ when the logarithm terms do not appear. To check spectrality we notice that $`L(\lambda )`$ has eigenvalue
$$\mu (q_j,\stackrel{~}{q}_j;\lambda )=-\underset{j=1}{\overset{n}{\sum }}\frac{Z_j}{\lambda -a_j}+y$$
with eigenvector $`\mathrm{\Omega }`$, and so we see that $`\mu =-2\frac{\partial F}{\partial \lambda }`$.
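For $`n=1`$ and $`m_1=0`$ the Garnier generating function can be verified symbolically. In the sympy sketch below, the branches $`Z_1=\sqrt{\lambda -a_1}q_1\stackrel{~}{q}_1`$ and $`y>0`$ are assumptions of convenience; the checks confirm that the derivatives of $`F`$ reproduce the Bäcklund relations $`Z_1=p_1q_1-yq_1^2`$ and $`Z_1=-\stackrel{~}{p}_1\stackrel{~}{q}_1-y\stackrel{~}{q}_1^2`$, and that the eigenvalue expression $`-Z_1/(\lambda -a_1)+y`$ coincides with $`-2\partial F/\partial \lambda `$.

```python
import sympy as sp

q, qt, lam, a = sp.symbols('q qt lambda a', positive=True)

Z = sp.sqrt(lam - a) * q * qt          # Z^2 = (lam - a) q^2 qt^2, i.e. m = 0 in (2.10)
y = sp.sqrt(lam - q**2 - qt**2)
F = Z - y**3 / 3                       # generating function for n = 1, m_1 = 0

p  = sp.diff(F, q)                     # p = dF/dq
pt = -sp.diff(F, qt)                   # ptilde = -dF/dqtilde
mu = -2 * sp.diff(F, lam)

print(sp.simplify(p*q - y*q**2 - Z))        # 0:  Z = p q - y q^2
print(sp.simplify(-pt*qt - y*qt**2 - Z))    # 0:  Z = -ptilde qtilde - y qtilde^2
print(sp.simplify(mu - (-Z/(lam - a) + y))) # 0:  spectrality
```

All three expressions vanish identically, so the map $`(q,p)\mapsto (\stackrel{~}{q},\stackrel{~}{p})`$ generated by $`F`$ is exactly the Darboux-induced BT on this small example.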
### 2.4. Neumann system on the sphere
For the Neumann system the $`t`$ flow is generated by
$$h=\frac{1}{2}\underset{j=1}{\overset{n}{\sum }}p_j^2-\frac{1}{2}\underset{j=1}{\overset{n}{\sum }}\left(a_jq_j^2+\frac{m_j^2}{q_j^2}\right).$$
Once again this has extra inverse square terms compared with the standard Neumann system . The Poisson bracket for this system is modified by constraining the particles to lie on a sphere, so that
(2.11)
$$(q,q)\equiv \underset{j}{\sum }q_j^2=const,\qquad (q,p)\equiv \underset{j}{\sum }q_jp_j=0$$
which results in the non-vanishing Dirac brackets
(2.12)
$$\{p_j,q_k\}=\delta _{jk}-\frac{q_jq_k}{(q,q)},\qquad \{p_j,p_k\}=\frac{q_jp_k-q_kp_j}{(q,q)}.$$
With this bracket the Hamilton equations are $`q_{j,t}=p_j`$ and (2.6) with
$$V=(q,q)^{-1}\underset{j}{\sum }\left(p_j^2+a_jq_j^2-\frac{m_j^2}{q_j^2}\right).$$
The Lax matrix for the Neumann system arises by setting $`\mathrm{\Delta }=0`$, which in (1.1) gives the following matrix $`B`$:
$$B=\left(\begin{array}{cc}0& (q,q)\\ 0& 0\end{array}\right).$$
In fact if we start from the linear system (2.5) and leave $`V`$ unspecified then (2.6) as well as the constraint $`(q,q)_t=0`$ are consequences of the Lax equation, and together these are sufficient to determine the form of $`V`$; this is also how the equations for the constrained Neumann system arise in a Lagrangian approach .
Given that the phase space is now degenerate with two Casimirs given by (2.11), it would appear that the standard sort of generating function will no longer be appropriate for describing a BT. It turns out that we can apply the Darboux-Crum transformation as before, and transform the quantities $`q_j^2`$ according to (2.4). In this way we obtain new variables $`\stackrel{~}{q}_j(q_k,p_k)`$ and $`\stackrel{~}{p}_j(q_k,p_k)`$, which are naturally written with the use of the quantity $`y(q_k,p_k)`$ given by the first formula in (2.9); on the Lax matrix this transformation arises by gauging with $`M`$ as in (2.7). Similarly the transformation can be inverted to give $`q_j(\stackrel{~}{q}_k,\stackrel{~}{p}_k)`$ and $`p_j(\stackrel{~}{q}_k,\stackrel{~}{p}_k)`$ written in terms of $`y(\stackrel{~}{q}_k,\stackrel{~}{p}_k)`$ given by the right hand formula of (2.9).
However, it would still be nice to write a generating function for this transformation. We have found that if we formally take
$$F(q_j,\stackrel{~}{q}_j;\lambda )=\underset{j=1}{\overset{n}{\sum }}\left(Z_j+\frac{m_j}{2}\mathrm{log}\left[\frac{Z_j-m_j}{Z_j+m_j}\right]+\frac{1}{2}y(q_j^2-\stackrel{~}{q}_j^2)\right)$$
with $`Z_j`$ given by (2.10) as usual, and regard $`y`$ as a sort of Lagrange multiplier (independent of the coordinates and $`\lambda `$), then we do indeed obtain the correct expressions
$$p_j=\frac{\partial F}{\partial q_j},\qquad \stackrel{~}{p}_j=-\frac{\partial F}{\partial \stackrel{~}{q}_j},$$
but these contain $`y`$ which is unspecified. If we then require that the constraints (2.11) are preserved under the BT applied from old to new variables or vice versa, then in either direction the constraints are preserved if and only if $`y`$ satisfies a quadratic equation with solution given respectively by the formulae (2.9). Alternatively if we require spectrality then the second component of the equation $`L(\lambda )\mathrm{\Omega }=\mu \mathrm{\Omega }`$ gives
$$\mu (q_j,\stackrel{~}{q}_j;\lambda )=-\underset{j=1}{\overset{n}{\sum }}\frac{Z_j}{\lambda -a_j}=-2\frac{\partial F}{\partial \lambda }$$
as required, while the first component gives (after making use of the formula (2.10) and the BT)
$$\mu =-\underset{j=1}{\overset{n}{\sum }}\frac{Z_j}{\lambda -a_j}+\frac{1}{y}\underset{j}{\sum }(q_j^2-\stackrel{~}{q}_j^2).$$
Hence spectrality requires that the second term vanishes, and so the first constraint (2.11) is preserved; the preservation of the second constraint is then an algebraic consequence of the BT.
Thus we see that for this BT we can write the new variables as functions of the old and vice-versa, but a formula for $`y(q_j,\stackrel{~}{q}_j;\lambda )`$ is lacking. Also this discretization of the Neumann system is apparently new, since it is exact (preserving the Lax matrix for the continuous system) unlike the Veselov or Ragnisco discretizations discussed in .
## 3. Conclusions
It would also be interesting to look at BTs with parameter in the non-autonomous case , where deformation with respect to the Bäcklund parameter would probably have to be introduced (corresponding to the associated isomonodromy problem).
## 4. Acknowledgements
ANWH thanks the Leverhulme Trust for providing a Study Abroad Studentship in Rome, and is grateful to J. Harnad and Y. Suris for useful conversations. VBK acknowledges the support from the EPSRC and the support from Istituto Nazionale di Fisica Nucleare for his visit to Rome. The authors would also like to thank the organisers of the meeting Integrable Systems: Solutions and Transformations in Guardamar, Spain (June 1998) where some of this work was carried out.
# A Model of Evolution with Interaction Strength
## Abstract
Interaction strength, denoted by $`\alpha _I`$, is introduced into a model of evolution in $`d`$-dimensional space. It is realized by imposing a constraint on the $`2d`$ differences between the fitness of any extremal site and those of its $`2d`$ nearest neighbours at each time step of the evolution of the model. For any given $`\alpha _I`$ ($`0<\alpha _I\le 1`$) the model can self-organize to a critical state. Two exact equations found in the Bak-Sneppen model still hold in our model for different $`\alpha _I`$. Simulations of one- and two-dimensional models for ten different values of $`\alpha _I`$ are given. It is found that the self-organized threshold, $`f_c`$, decreases with increasing $`\alpha _I`$. It is also shown that the critical exponent $`\gamma `$ and two basic exponents, $`\tau `$, the avalanche distribution exponent, and $`D`$, the avalanche dimension, are $`\alpha _I`$ dependent.
PACS number(s): 87.10.+e, 05.40.+j, 64.60.Lx
The concept of self-organized criticality (SOC) concerns the spatiotemporal complexity of systems that contain information over a wide range of length and time scales . It implies that through a dynamical process a system can start in a state with uncorrelated behavior and end up in a complex critical state with a high degree of correlation. This concept, together with the prototype sandpile model, was proposed by Bak, Tang and Wiesenfeld in 1987 . Self-organized criticality is so far the only known general mechanism for generating complexity , and hence a natural framework for understanding why nature is complex, not simple.
Systems which can exhibit SOC are common in physics, geography, biology, and even social sciences such as economics. Such complex phenomena are ubiquitous in the macroscopic world. Recently, it has been proposed by Meng et al. that SOC may exist in microscopic systems, at the level of quarks and gluons, as well as in the macroscopic world.
Evidence from biology suggests that the ecology of interacting species has self-organized to a critical state. In 1990, Bak, Chen, and Creutz created a cellular automaton in which a society of living organisms operates at, or very close to, the critical state when driven by random mutations. However, that model is very sensitive in the sense that small modifications of its details may drive it away from the critical state. The NKC model proposed by Kauffman and Johnsen can exhibit a transition from order to disorder, but the criticality that emerges in the system is obtained through parameter tuning, not self-organization. In 1993 , a simple model of evolution, the Bak-Sneppen model, was introduced by Bak and Sneppen. Instead of considering evolution on the individual level, they treat the coevolution of species on a coarse-grained scale: each whole species is represented by a single fitness, i.e., a random number chosen arbitrarily from a uniform distribution between zero and $`1`$, and mutations correspond to updating the fitnesses of a given extremal site and its two nearest neighbours with three new random numbers chosen from the same flat distribution between zero and $`1`$. Such a model of an evolving biology can self-organize to a critical steady state in which avalanches of all sizes occur. Most important of all, the model exhibits the punctuated equilibrium behavior observed in biology. Two exact equations describing self-organization and the behavior of the average avalanche size were found later . A hierarchical structure of avalanches was also observed in the B-S model. It was even proposed in Ref. that the formation of fractal structures, the appearance of $`\frac{1}{f}`$ noise, diffusion with anomalous Hurst exponents, Levy flights, and punctuated equilibria can all be related to the same underlying avalanche dynamics.
Our model of evolution is based on the B-S model, but differs decisively in the mechanism driving the interaction between neighbouring species. We also consider the coevolution of an interacting system of species, and each species is represented by a single fitness, i.e., a random number chosen arbitrarily from a flat distribution between zero and $`1`$. But when considering the interaction between neighbouring species we impose a constraint on the differences between the fitness of the extremal species and those of its nearest neighbours at each time step. Before specifying the constraint, let us first look at another case of evolution: a non-interactive biology. In a non-interactive biology each species would tend to evolve towards a stable state where its fitness approaches $`1`$, but the evolution process is extremely slow. If we use $`\alpha _I`$ to denote the interaction strength, which represents the degree of interaction between the extremal site and its nearest neighbouring species, it is natural to let $`\alpha _I`$ be $`1`$ in the B-S model and zero in the non-interactive biology. These two values of $`\alpha _I`$ then correspond to the two extremal cases of interaction . If $`\alpha _I`$ is allowed to take any value between zero and $`1`$, what can we do with our model? Several questions arise:
1) What does interaction strength mean for an evolution model?
2) How can the interaction strength $`\alpha _I`$ be defined and incorporated into a model of evolution?
3) If these two questions are answered, are the model of evolution and its features affected by the value of the interaction strength?
It will be shown in the following text that these questions can be answered successfully.
Our model is intended to describe the evolution of an ecosystem consisting of a number of species. Following the ideas proposed in Ref. , each species is represented by a single fitness. The fitness may represent the population of a whole species or the living capability of the species: a high fitness may imply that the population of the species is large or that its living capability is great, and vice versa, and a change of fitness may imply a change in the population or living capability of the species. So it is natural to expect that a species with a low fitness is more likely to change, namely to mutate in biological terms, in order to live better and/or longer in nature. Only through mutations can a species with a bad fitness have the chance of acquiring a better fitness and thus avoid extinction. Furthermore, the fitness of each species is affected by the other species that form the rest of the ecosystem. Any adaptive change of any species may change the fitness and the fitness landscape of its coevolutionary partners coupled in the same ecosystem. Species may thus interact with each other through, say, a food chain. Hence, a species with a high fitness may live well and comfortably unless its badly-off neighbours are about to mutate.
Our model is defined and simulated through the following items:
(1) A number of, say, $`L^d`$ species are located on a $`d`$-dimensional lattice of linear size $`L`$. Initially, random numbers chosen arbitrarily from a uniform distribution between zero and $`1`$, $`p(f)`$, are assigned independently to each species. At each time step,
(2) choose the extremal site, that is, the species with the lowest fitness, $`f_{\mathrm{min}}`$, among all the species and update it by assigning a new random number also chosen from $`p(f)`$ to it and
(3) mutate those of its $`2d`$ neighbours whose fitnesses satisfy the constraint $`f_i-f_{\mathrm{min}}<\alpha `$ by assigning new random numbers between zero and $`1`$ to them ($`f_i`$ denotes the fitness of the $`i`$th nearest neighbour); $`\alpha `$ is a parameter between zero and $`1`$ which is fixed for a given model.
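As a concrete illustration of rules (1)-(3), here is a minimal Python sketch of a single update step for the $`d=1`$ case on a ring (the function name `step` and the use of periodic boundaries are our own illustrative choices, not specified by the rules above):

```python
import random

def step(f, alpha, rng):
    """One update of the d=1 model: replace the extremal fitness, then
    every nearest neighbour whose fitness f_i obeys f_i - f_min < alpha."""
    L = len(f)
    i = min(range(L), key=f.__getitem__)      # extremal site
    fmin = f[i]
    targets = [i]                             # the extremal site is always updated
    for j in ((i - 1) % L, (i + 1) % L):      # its 2d = 2 nearest neighbours
        if f[j] - fmin < alpha:               # the interaction constraint
            targets.append(j)
    for k in targets:
        f[k] = rng.random()                   # fresh uniform fitness in [0, 1)
    return fmin, len(targets) - 1             # (f_min, m) -- an m-event
```

For $`\alpha =1`$ both neighbours always satisfy the constraint, and for $`\alpha =0`$ neither does, recovering the two limiting cases.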
It should be pointed out that different values of $`\alpha `$ correspond to different versions of the model, or even, one might say, different models. Consider the two special values $`\alpha =0`$ and $`\alpha =1`$. For $`\alpha =0`$ the model reduces to the non-interactive biology: the difference between the fitness of any neighbour and that of the extremal site is never negative, so none of the neighbours of any extremal species is ever chosen for updating. For $`\alpha =1`$ the model reduces to a $`d`$-dimensional B-S model: the difference between the fitness of any neighbour and that of the extremal site is always less than $`1`$, so at each time step all $`2d`$ nearest neighbours of the extremal site are chosen for updating.
For a given $`\alpha `$ we let the model evolve: from the beginning, when the first extremal species is chosen, we determine how many of its $`2d`$ neighbours are chosen for updating at the same time step, according to the constraint on the fitness differences. The updating process, i.e., the evolution of the model, continues indefinitely. For a given $`\alpha `$ satisfying $`0<\alpha <1`$ the updating of the $`2d`$ nearest neighbours of the extremal site at each time step has many possible outcomes: maybe none of the $`2d`$ neighbours is updated, maybe half of them are chosen for updating, etc. If we do not distinguish the neighbours and only count the number updated at each time step, there are clearly $`2d+1`$ possibilities. Say, at some time step, an extremal site and $`m`$ of its $`2d`$ nearest neighbours are chosen for updating according to the constraint; we call such an updating an $`m`$-event. As the evolution goes on we observe many such events with different values of $`m`$, and if the evolution time is large enough we obtain all $`2d+1`$ kinds of events, with $`m`$ ranging from $`0`$ to $`2d`$. If we define $`P_d(m)`$ as the probability of an $`m`$-event among all events during a time period $`T`$, i.e.,
$$P_d(m)=\frac{N(m)}{N_T},$$
(1)
where $`N(m)`$ is the number of $`m`$-events during the time period $`T`$, $`N_T`$ the total number of all events, and $`d`$ denotes the dimension. In the $`T\rightarrow \mathrm{\infty }`$ limit ($`T\gg L^d`$) we obtain the distribution of $`m`$-events, that is,
$$P_d(m)=\underset{T\rightarrow \mathrm{\infty }}{lim}\frac{N(m)}{N_T}.$$
(2)
And $`P_d(m)`$ should satisfy the normalization,
$$\sum _{m=0}^{2d}P_d(m)=1.$$
(3)
Next we define the interaction strength $`\alpha _I`$ in terms of $`P_d(m)`$:
$$\alpha _I=\underset{T\rightarrow \mathrm{\infty }}{lim}\frac{1}{2d}\sum _{m=0}^{2d}mP_d(m).$$
(4)
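Eqs. (1)-(4) can be estimated directly from a finite-time simulation. The following sketch, for $`d=1`$ with periodic boundaries (the function name and all identifiers are our own), runs the model for $`T`$ steps and returns the empirical $`P_d(m)`$ and $`\alpha _I`$:

```python
import random

def measure_alpha_I(L, alpha, T, d=1, seed=0):
    """Run the d=1 model for T steps; return (P, alpha_I) where P[m]
    estimates P_d(m) and alpha_I = (1/2d) * sum_m m * P[m], cf. Eq. (4)."""
    rng = random.Random(seed)
    f = [rng.random() for _ in range(L)]
    counts = [0] * (2 * d + 1)
    for _ in range(T):
        i = min(range(L), key=f.__getitem__)          # extremal site
        fmin = f[i]
        hit = [j % L for j in (i - 1, i + 1) if f[j % L] - fmin < alpha]
        for k in [i] + hit:
            f[k] = rng.random()
        counts[len(hit)] += 1                         # record the m-event
    P = [c / T for c in counts]
    alpha_I = sum(m * p for m, p in enumerate(P)) / (2 * d)
    return P, alpha_I
```

By construction this gives $`\alpha _I=0`$ for $`\alpha =0`$ and $`\alpha _I=1`$ for $`\alpha =1`$, matching the two extremal cases.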
One can see that $`\alpha _I`$ is actually the statistical fraction of updated neighbours among the $`2d`$ nearest ones during the evolution. This is easily seen from the two extremal cases $`\alpha =0`$ and $`\alpha =1`$. For $`\alpha =0`$, $`\alpha _I=0`$, because $`P_d(0)=1`$ and $`P_d(m)=0`$ for $`0<m\le 2d`$; for $`\alpha =1`$, $`\alpha _I=1`$, because $`P_d(2d)=1`$ and $`P_d(m)=0`$ for $`0\le m<2d`$. That is, in the non-interactive biology the interaction strength $`\alpha _I`$ is zero, while in the B-S model $`\alpha _I`$ is $`1`$. Hence, it is natural to expect that this definition of $`\alpha _I`$ gives a good description of how the strength of interaction between neighbouring sites changes with $`\alpha `$; that $`0<\alpha _I<1`$ for $`0<\alpha <1`$, with $`\alpha _I`$ increasing as $`\alpha `$ increases; and that there is a one-to-one correspondence between $`\alpha _I`$ and $`\alpha `$. With this in mind we measure the distribution $`P_d(m)`$ and $`\alpha _I`$ on the computer for ten different values of $`\alpha `$ in one- and two-dimensional models. Simulation results are given in Fig. 1 and Fig. 2 respectively. Plots (a) and (b) in Fig. 1 show the distribution of $`m`$-events ($`0\le m\le 2d`$) for ten different values of $`\alpha `$ in one- and two-dimensional models respectively. They clearly show that $`P_d(0)`$ decreases with increasing $`\alpha `$ and that $`P_d(2d)`$, i.e., $`P_{d=1}(2)`$ for $`d=1`$ and $`P_{d=2}(4)`$ for $`d=2`$, increases as $`\alpha `$ increases. Note the change of $`P_d(m)`$ ($`0<m<2d`$) with increasing $`\alpha `$: there exists a peak in the curve of the $`\alpha `$ dependence of $`P_d(m)`$ ($`0<m<2d`$), which ensures the normalization of $`P_d(m)`$. The two plots in Fig. 2 show the dependence of $`\alpha _I`$ on $`\alpha `$ in one- and two-dimensional models respectively. As shown in Fig. 2, $`\alpha _I`$ increases almost linearly with $`\alpha `$, and, most important of all, one $`\alpha _I`$ corresponds to one and only one $`\alpha `$. From this one can clearly see that our definition of $`\alpha _I`$ is explicitly related to $`\alpha `$ and hence can be incorporated into our model naturally. Relating Fig. 1 and Fig. 2, one sees that increasing the interaction strength increases $`P_d(2d)`$; that is, as $`\alpha _I`$ increases, any extremal site affects its nearest neighbours more strongly. Furthermore, different values of $`\alpha _I`$ correspond to different interaction strengths: generally speaking, larger $`\alpha _I`$ represents stronger interaction, and vice versa. Hence, the definition of $`\alpha _I`$ provides a good description of the strength of interaction between neighbouring sites, and $`\alpha _I`$ is a good quantity for describing the interaction. In the following, the dependence of the self-organized threshold and of some critical exponents on $`\alpha _I`$ will be given; since $`\alpha _I`$ is in one-to-one correspondence with $`\alpha `$ and increases as $`\alpha `$ increases, it is convenient and equivalent to present the dependence of these quantities on $`\alpha `$.
With the model defined, it is natural to investigate whether it can self-organize to a critical state, that is, to look for the “fingerprint” of SOC . If so, it is also worthwhile to know whether the criticality is sensitive to the value of $`\alpha `$. In addition, the self-organization of the system (a dynamical process whereby a system starts in a state with uncorrelated behavior and ends up in a complex state with a high degree of correlation) and punctuated equilibrium, the most important feature of evolution, should also be observed.
Fig. 3 shows the space-time fractal activity pattern for a one-dimensional model of size $`L=100`$ with $`\alpha =0.8`$. We track the updated sites at each time step; $`S`$ and $`R`$ in the figure denote the number of updated time steps and the locations of updated sites respectively. Simulations of one- and two-dimensional models for other values of $`\alpha `$ exhibit similar space-time fractal activity. Indeed, spatiotemporal complexity emerges in our model, and its appearance is independent of the value of $`\alpha `$ chosen.
We now explain the quantities appearing in the following equations, for readers not so familiar with SOC. $`f_{\mathrm{min}}`$ denotes the extremal fitness at each time step of the evolution. $`G(s)`$, the gap appearing in punctuated equilibrium, is an envelope function that tracks the increasing peaks in $`f_{\mathrm{min}}`$. Its definition is : at time step $`s`$, the gap $`G(s)`$ is the maximum of all the minimum random numbers chosen, $`f_{\mathrm{min}}(s^{\prime })`$, for all $`0\le s^{\prime }\le s`$. $`f_c`$ is the value of $`G(s)`$ in the critical state, i.e.,
$$f_c=\underset{s\rightarrow \mathrm{\infty }}{lim}G(s).$$
(5)
$`S_{G(s)}`$ is the size of the avalanches corresponding to plateaus in $`G(s)`$, during which $`f_{\mathrm{min}}(s)<G(s)`$, and $`\left\langle S\right\rangle _{G(s)}`$ is the average value of $`S_{G(s)}`$. An avalanche is defined as a sequence of subsequent mutations below a certain threshold. Hence, with this definition, there is a hierarchy of avalanches, each defined by its respective threshold. So we can have $`f_0`$-avalanches, where $`f_0`$ is an auxiliary parameter between zero and $`1`$ used to define avalanches; a more detailed definition of the $`f_0`$-avalanche is given in Ref. . The size of an avalanche, $`S`$, is the number of subsequent mutations below the threshold. $`\gamma `$ is a critical exponent which governs the divergence of $`\left\langle S\right\rangle _{G(s)}`$ as $`s`$ approaches infinity. $`n_{\mathrm{cov}}`$ is the number of sites covered by an avalanche; apparently $`n_{\mathrm{cov}}\le (2d+1)S`$ ($`S`$ is the avalanche size) in $`d`$-dimensional space. $`\left\langle n_{\mathrm{cov}}\right\rangle `$ denotes the average value of $`n_{\mathrm{cov}}`$. The quantities defined above have their counterparts in our model, so we use the same definitions while making minor changes to their symbols.
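Given only the sequence of extremal fitnesses $`f_{\mathrm{min}}(s)`$, both the gap $`G(s)`$ and the $`f_0`$-avalanche sizes are mechanical to extract. A sketch (the helper below is ours, not from the paper):

```python
def gap_and_avalanches(fmin_seq, f0):
    """G(s) is the running maximum of f_min(s') for s' <= s; an
    f0-avalanche is a maximal run of consecutive steps with f_min < f0,
    and its size S is the length of that run."""
    G, g = [], float("-inf")
    for x in fmin_seq:
        g = max(g, x)
        G.append(g)
    sizes, run = [], 0
    for x in fmin_seq:
        if x < f0:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:
        sizes.append(run)     # close an avalanche still open at the end
    return G, sizes
```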
Following the method used in Ref. , we monitor the extremal signal $`f_{\mathrm{min}}`$ as a function of $`s`$ during the transient in the one- and two-dimensional models for different values of $`\alpha `$. Again, we observe a Devil’s staircase in all these cases. Fig. 4 shows the punctuated equilibrium behavior in a one-dimensional model of size $`L=100`$ with $`\alpha =0.5`$. Hence, punctuated equilibrium does emerge in our model of evolution, and its emergence is $`\alpha `$ independent.
Observations from the above simulations suggest that our model can self-organize to a critical state, but how to determine the self-organized threshold $`f_c`$ remains a hard problem. Fortunately, Ref. provides a method: two exact equations for the Bak-Sneppen model are presented there, and with a reasonable scaling ansatz for the average avalanche size, $`f_c`$ can be determined very accurately. It is straightforward to expect that the two exact equations found in the Bak-Sneppen model have counterparts in our model; following their derivation, we can directly write down two similar exact equations, with only minor changes to the symbols of some quantities. The introduction of the interaction strength $`\alpha _I`$ suggests that some quantities are $`\alpha _I`$, and hence $`\alpha `$, dependent, as seen in our simulations: as $`\alpha `$ increases, $`f_c`$ has a tendency to decrease. With this in mind we replace the symbols of some quantities:
$`G(s)\rightarrow G(s,\alpha ),`$ (6)
$`f_c\rightarrow f_c(\alpha ),`$ (7)
$`\gamma \rightarrow \gamma (\alpha ),`$ (8)
$`\tau \rightarrow \tau (\alpha ),`$ (9)
$`D\rightarrow D(\alpha ).`$ (10)
The two exact equations are given below,
$$\frac{\mathrm{d}G(s,\alpha )}{\mathrm{d}s}=\frac{1-G(s,\alpha )}{L^d\left\langle S\right\rangle _{G(s,\alpha )}}$$
(11)
$$\frac{\mathrm{d}\mathrm{ln}\left\langle S\right\rangle _{f_0}}{\mathrm{d}f_0}=\frac{\left\langle n_{\mathrm{cov}}\right\rangle _{f_0}}{1-f_0}$$
(12)
In order to solve the two equations, a scaling ansatz for $`\left\langle S\right\rangle _{G(s,\alpha )}`$ should be given,
$$\left\langle S\right\rangle _{G(s,\alpha )}\propto \left[f_c(\alpha )-G(s,\alpha )\right]^{-\gamma (\alpha )}.$$
(13)
Inserting Eq. (13) into Eq. (12) one obtains
$$\gamma (\alpha )=\underset{f_0\rightarrow f_c(\alpha )}{lim}\frac{\left\langle n_{\mathrm{cov}}\right\rangle _{f_0}\left[f_c(\alpha )-f_0\right]}{1-f_0}.$$
(14)
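A simple numerical illustration of the ansatz (13) is to scan candidate thresholds and least-squares fit $`\mathrm{ln}\left\langle S\right\rangle `$ against $`\mathrm{ln}[f_c-G]`$. The routine below is a hypothetical sketch of this idea, not the authors' actual procedure (which is based on Eq. (14)):

```python
import math

def fit_fc_gamma(G_vals, S_avg, fc_grid):
    """Scan candidate thresholds fc and least-squares fit
    log<S> = -gamma * log(fc - G) + const, cf. the ansatz (13);
    return the (fc, gamma) pair minimising the residual."""
    best = None
    for fc in fc_grid:
        if fc <= max(G_vals):
            continue                        # need fc - G > 0 for the log
        xs = [math.log(fc - g) for g in G_vals]
        ys = [math.log(s) for s in S_avg]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        resid = sum((y - my - slope * (x - mx)) ** 2
                    for x, y in zip(xs, ys))
        if best is None or resid < best[0]:
            best = (resid, fc, -slope)
    _, fc, gamma = best
    return fc, gamma
```

On synthetic data generated exactly from the ansatz, the scan recovers both the threshold and the exponent.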
Using Eq. (14) we can determine $`f_c(\alpha )`$ and $`\gamma (\alpha )`$ very accurately. We measure $`f_c(\alpha )`$ and $`\gamma (\alpha )`$ in the one- and two-dimensional models for different values of $`\alpha `$ and, as expected, both quantities are $`\alpha `$ dependent. Fig. 6 shows the dependence of $`f_c(\alpha )`$ on $`\alpha `$, and plot (a) in Fig. 7 that of $`\gamma (\alpha )`$ on $`\alpha `$. In addition, we measure two basic exponents of the model, $`\tau (\alpha )`$, the avalanche size distribution exponent, and $`D(\alpha )`$, the avalanche dimension, and find that they are also $`\alpha `$ dependent. Plots (b) and (c) in Fig. 7 show their dependence on $`\alpha `$ respectively.
The following text presents our analysis of the simulations and our conclusions. Let us first check the “fingerprint” of SOC. Fig. 5 shows the avalanche distribution $`P_{\mathrm{aval}}(S)`$ for a one-dimensional model of size $`L=100`$ with $`\alpha =0.7`$. Similar distributions are found in one- and two-dimensional models for other values of $`\alpha `$. Indeed, power laws emerge in our model for all values of $`\alpha `$, with exponents that are $`\alpha `$ dependent. Now let us turn to the results for $`f_c(\alpha )`$. The two plots, (a) and (b), in Fig. 6 show the dependence of $`f_c(\alpha )`$ on $`\alpha `$ in the one- and two-dimensional models respectively, and display similar behavior. Firstly, in both figures $`f_c(\alpha )`$ decreases as $`\alpha `$ increases. This behavior is not difficult to understand: as $`\alpha `$, i.e., $`\alpha _I`$, increases, the chance for the nearest neighbours of a given extremal site to be chosen for updating at each time step becomes greater. This can be seen directly from Fig. 1: $`P_d(2d)`$ increases with $`\alpha `$ and reaches $`1`$ for $`\alpha =1`$, while $`P_d(0)`$ decreases with $`\alpha `$ and reaches zero at $`\alpha =1`$. So, with increasing $`\alpha `$, that is, increasing $`\alpha _I`$, more neighbours are involved in the evolution, and hence the threshold $`f_c(\alpha )`$ is lowered further. This can be seen in another way by comparing the values of $`f_c(\alpha )`$ for the same $`\alpha `$ in the one- and two-dimensional models: $`f_c(1)`$ in the one-dimensional model is greater than in the two-dimensional model. In the one-dimensional model with $`\alpha =1`$, when an extremal site is chosen, its two nearest neighbours are also chosen for updating at each time step, while in the two-dimensional model with $`\alpha =1`$ an extremal site is updated together with its four nearest neighbours. We mention these two cases to show that increasing the average number of neighbours involved in the evolution lowers the self-organized threshold. Thus, increasing the interaction strength increases the number of neighbouring sites involved in the evolution and hence lowers the value of $`f_c(\alpha )`$. Secondly, both plots of Fig. 6 show that $`f_c(\alpha )`$ decreases almost linearly with $`\alpha `$ for $`0<\alpha <0.6`$ and decreases asymptotically for $`\alpha `$ between $`0.6`$ and $`1.0`$. This implies that the effect of the constraint on the evolution becomes less explicit as $`\alpha `$ increases: when the interaction strength is small the effect is very explicit, but as the interaction strength grows the effect is shown less explicitly. Specifically, we measure $`f_c(\alpha )`$ at $`\alpha =1`$ for the one- and two-dimensional models. We find $`f_c(1)=0.668\pm 0.001`$ for the $`d=1`$ system of size $`L=100`$, which is very close to the value in Refs. and , who found $`f_c(1)=0.660702\pm 0.00003`$; for the $`d=2`$ system of size $`L=20`$, $`f_c(1)=0.334\pm 0.00006`$, which is close to the corresponding value in Ref. , who found $`f_c(1)=0.328855\pm 0.000004`$. In addition, for $`d=1`$ we find $`f_c(0.1)=0.96388\pm 0.000002`$ for a system of size $`L=100`$, and for $`d=2`$, $`f_c(0.1)=0.9179\pm 0.00001`$ for a system of size $`L=20`$.
We also measure $`\gamma (\alpha )`$ for different $`\alpha `$ in the one- and two-dimensional models and find that $`\gamma `$ is also $`\alpha `$ dependent; the dependence of $`\gamma (\alpha )`$ on $`\alpha `$ is given in Fig. 7. In plot (a) of Fig. 7, $`\gamma (\alpha )`$ first increases and then decreases as $`\alpha `$ increases. For $`d=1`$, $`\gamma (1)=2.6166\pm 0.00004`$, which is close to the value found in Refs. , who found $`\gamma (1)=2.70\pm 0.01`$. For $`d=2`$, we find $`\gamma (1)=2.249\pm 0.0004`$, which is not in agreement with Ref. , who found $`\gamma (1)=1.70\pm 0.01`$. This may be due to our small system size; a larger system should yield more precise results.
For different values of $`\alpha `$ in the one- and two-dimensional models we measure two basic exponents: $`\tau (\alpha )`$, which characterizes the distribution of avalanche sizes, and $`D(\alpha )`$, the avalanche dimension. The definition of $`\tau (\alpha )`$ is $`P_{\mathrm{aval}}(S)\propto S^{-\tau (\alpha )}`$, and that of $`D(\alpha )`$ is $`\left\langle n_{\mathrm{cov}}\right\rangle \propto S^{D(\alpha )/d}`$. Results for these two basic exponents are given in Fig. 7(b) and Fig. 7(c) respectively. The plots show that $`\tau (\alpha )`$ first decreases as $`\alpha `$ increases and then changes slowly with $`\alpha `$. Specifically, for $`d=1`$ we find $`\tau (1)=0.858\pm 0.001`$, which is not in agreement with Ref. , who measured $`\tau =1.07\pm 0.001`$, but agrees with Ref. , who measured $`\tau =0.8\pm 0.1`$; this is because our size $`L=100`$ is close to the size $`L=64`$ in Ref. but far from the size $`L=10^4`$ in Ref. . For $`d=2`$, $`\tau (1)=1.131\pm 0.0005`$, which is close to Ref. , who measured $`\tau =1.245\pm 0.01`$.
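As an aside, the exponent of a power-law size distribution can also be estimated without histogram binning, via the continuous maximum-likelihood (Hill) estimator. This is offered as a generic illustration, not as the authors' measurement procedure:

```python
import math

def tau_mle(sizes, s_min=1.0):
    """Continuous maximum-likelihood estimate of tau in
    P(S) ~ S**(-tau) for S >= s_min:
    tau = 1 + n / sum(log(S_i / s_min))."""
    tail = [s for s in sizes if s >= s_min]
    return 1.0 + len(tail) / sum(math.log(s / s_min) for s in tail)
```

For a Pareto sample with $`\tau =2`$ the estimator recovers the exponent to within a few percent from a few thousand avalanches.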
Fig. 7(c) shows the dependence of $`D(\alpha )`$ on $`\alpha `$ for the one- and two-dimensional models. It can be inferred from the plots that $`D(\alpha )`$ first decreases rapidly and then slowly as $`\alpha `$ increases. For $`d=1`$, $`D(1)=2.4189\pm 0.0001`$, which is close to Ref. , who measured $`D=2.43\pm 0.01`$, and for $`d=2`$, $`D(1)=2.94\pm 0.0001`$, which agrees with Refs. , who measured $`D=2.92\pm 0.02`$. It is clearly shown that for all $`\alpha `$, $`D(\alpha )`$ is larger than the spatial dimension $`d`$.
It should be emphasized that the interaction strength $`\alpha _I`$ is not a parameter tuning the model to a critical state: once a model is chosen, the corresponding $`\alpha _I`$ is fixed during the whole evolution process. Thus, the criticality that emerges in our model is self-organized, not tuned, and its appearance is independent of the $`\alpha _I`$ chosen, which the reader can test on a PC. This strongly supports the idea that SOC does not depend on the dynamical details of the system; that is, self-organized criticality should be universal. Furthermore, self-organized fractal growth is basically different from growth processes described, say, by (variants of) the Kardar-Parisi-Zhang (KPZ) equation . The KPZ equation is scale invariant by symmetry, so its criticality is not self-organized. SOC cannot, even in principle, be regarded as sweeping a system through a critical point, in contrast to the claims in Ref. . SOC should be an attractor of the complex system, but this attractor is vastly different from the “strange attractor” found in chaos.
As shown in this paper and in others , interaction plays a very important role in models which exhibit SOC. If there is no interaction between individuals in a system, the system will evolve towards a frozen state and the evolution process will be indefinitely long. This is clearly shown by the non-interactive biology, in which the fitness of each species tends to $`1`$. Our model, and the simulations of its different versions corresponding to different degrees of interaction between neighbours, imply that only a coevolutionary system can evolve to a self-organized critical state, even when the interaction strength is relatively small. From our simulations we have learned that it takes a longer time for the system to evolve to a critical state when the interaction strength $`\alpha _I`$ is smaller. It is also worthwhile to perform two simulations of the models in which $`\alpha _I`$ is very close to zero and to $`1`$ respectively: in the former case we expect the model to evolve to a frozen state, while in the latter case we approach the B-S model.
Another important feature of our model is that the fitness itself is directly involved in the interaction, which is not the case in the B-S model. In a coevolutionary system, the interaction between any extremal species and its neighbours should involve the features of the extremal site and those of its neighbours. Because of the model's simplicity each species has only one feature, its fitness, represented by a random number chosen arbitrarily from a flat distribution between zero and $`1`$; hence this feature should enter the evolution model. Through a constraint related to the interaction strength, the fitness is injected into the evolution process of the system. Our model thus shows that evolution is indeed a coevolutionary phenomenon, in agreement with Darwin's view of the evolution of biology .
Thus, in conclusion:
(1) A simple model of evolution with interaction strength defined and incorporated is proposed. Models with different interaction strengths ($`0<\alpha _I\le 1`$) can self-organize to critical states.
(2) Simulations of one- and two-dimensional models with various interaction strengths show that $`f_c(\alpha _I)`$ decreases as $`\alpha _I`$ increases. It is also shown that $`\gamma (\alpha _I)`$ and two basic exponents, $`\tau (\alpha _I)`$ and $`D(\alpha _I)`$, are $`\alpha _I`$ dependent.
This work was supported by the NSFC in China. The authors thank Prof. T. Meng and the other members of our small group on SOC for fruitful discussions during his stay in Wuhan.
Figure Captions:
Fig. 1: Distribution of $`m`$-events for different $`\alpha `$ in the evolution of (a) one-dimensional models of size $`L=100`$ and (b) two-dimensional models of size $`L=20`$.
Fig. 2: Dependence of interaction strength $`\alpha _I`$ on $`\alpha `$ in the evolution of one-dimensional models of size $`L=100`$ and two-dimensional models of size $`L=20`$.
Fig. 3: Space-time fractal activity pattern for one-dimensional evolution model with $`\alpha =0.8`$. $`S`$ is the number of updated steps and $`R`$ is the location of updated sites.
Fig. 4: Punctuated equilibrium behavior emerges in one-dimensional evolution models of size $`L=100`$ with $`\alpha =0.5`$. $`G(s)`$ is the gap that tracks the peaks of extremal signal, $`f_{\mathrm{min}}`$, in the transient. $`f_c`$ in the plot is about $`0.8556`$.
Fig. 5: Avalanche distribution for one-dimensional model of size $`L=100`$ with $`\alpha =0.7`$. $`S`$, the size of avalanche, is the number of subsequent mutations below the threshold $`0.728`$. $`P_{\mathrm{aval}}(S)`$ denotes the distribution of $`S`$. The slope of the curve is about $`0.774\pm 0.001`$.
Fig. 6: Dependence of $`f_c(\alpha )`$ on $`\alpha `$ for one-dimensional evolution models of size $`L=100`$ and two-dimensional models of size $`L=20`$.
Fig. 7: (a) Dependence of $`\gamma (\alpha )`$ on $`\alpha `$ for one-dimensional evolution models of size $`L=100`$ and two-dimensional models of size $`L=20`$. (b) Dependence of $`\tau (\alpha )`$ on $`\alpha `$ for one-dimensional evolution models of size $`L=100`$ and two-dimensional models of size $`L=20`$. (c) Dependence of $`D(\alpha )`$ on $`\alpha `$ for one-dimensional evolution models of size $`L=100`$ and two-dimensional models of size $`L=20`$.
# Overview of Secondary Anisotropies of the CMB
## 1. Introduction
The Cosmic Microwave Background (CMB) provides a unique probe of the early universe (see White, Scott, & Silk 1994 for a review). If CMB fluctuations are consistent with inflationary models, future ground-based and satellite experiments will yield accurate measurements of most cosmological parameters (see Zaldarriaga, Spergel, & Seljak 1997; Bond, Efstathiou, & Tegmark 1997 and references therein). These measurements rely on the detection of primordial anisotropies produced at the surface of last scattering. However, various secondary effects produce fluctuations at lower redshifts. The study of these secondary fluctuations (or extragalactic foregrounds) is important in order to isolate primordial fluctuations. In addition, secondary fluctuations are interesting in their own right, since they provide a wealth of information on the local universe.
In this contribution, I present an overview of the different extragalactic foregrounds of the CMB. The foregrounds produced by discrete sources, the thermal Sunyaev-Zel’dovich (SZ) effect, the Ostriker-Vishniac (OV) effect, the Integrated Sachs-Wolfe (ISW) effect, gravitational lensing, and other effects are briefly described. I show their relative importance on the multipole-frequency plane, and pay particular attention to their impact on the future CMB missions MAP (Bennett et al. 1995) and Planck Surveyor (Bersanelli et al. 1996). A more detailed account of each extragalactic foreground can be found in the other contributions to this volume. In this article, I have focused on the latest literature, and have not aimed for bibliographical completeness. This overview is based on a more detailed study of extragalactic foregrounds in the context of the MAP mission (Refregier et al. 1998).
## 2. Comparison of Extragalactic Foregrounds
To assess the relative importance of the extragalactic foregrounds, I decompose the temperature fluctuations of the CMB into the usual spherical harmonic basis, $`\frac{\delta T}{T_0}(\theta )=\sum _{\ell ,m}a_{\ell m}Y_{\ell m}(\theta )`$, and form the averaged multipole moments $`C_{\ell }\equiv \langle |a_{\ell m}|^2\rangle `$. Following Tegmark & Efstathiou (1996), I consider the quantity $`\mathrm{\Delta }T_{\ell }\equiv \left[\ell (2\ell +1)C_{\ell }/4\pi \right]^{\frac{1}{2}}T_0`$, which gives the rms temperature fluctuations per $`\mathrm{ln}\ell `$ interval centered at $`\ell `$. Another useful quantity that they considered is the value $`\ell =\ell _{eq}`$ for which the foreground fluctuations equal the CMB fluctuations, i.e. for which $`C_{\ell }^{\mathrm{foreground}}\simeq C_{\ell }^{\mathrm{CMB}}`$. Note that, since the foregrounds do not necessarily have a thermal spectrum, $`\mathrm{\Delta }T_{\ell }`$ and $`\ell _{eq}`$ generally depend on frequency.
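In code, the conversion from a multipole moment to the rms fluctuation per logarithmic interval is a one-liner. A minimal sketch (the function name is mine, and the numerical value of $`T_0`$ is an assumed standard value, not taken from this article):

```python
import math

T0 = 2.726  # CMB temperature in K (assumed standard value)

def delta_T(ell, C_ell):
    """rms temperature fluctuation per ln(ell) interval, in the units
    of T0: DeltaT_ell = [ell * (2*ell + 1) * C_ell / (4*pi)]**0.5 * T0."""
    return math.sqrt(ell * (2 * ell + 1) * C_ell / (4.0 * math.pi)) * T0
```

Because the foreground spectra are generally frequency dependent, `C_ell` here would itself be a function of frequency when applied to a foreground rather than to the CMB.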
The comparison is summarized in table 1 and in figure 1. Table 1 shows $`\mathrm{\Delta }T_{\ell }`$ and $`\ell _{eq}`$ for each of the major extragalactic foregrounds at $`\nu =94`$ GHz and $`\ell =450`$, which corresponds to a FWHM angular scale of about $`\theta \simeq 0.3`$ deg. These values were chosen to be relevant to the MAP W-band ($`\nu \simeq 94`$ GHz and $`\theta _{\mathrm{beam}}\simeq 0.21\mathrm{°}`$). I also indicate whether each foreground component has a thermal spectrum.
Figure 1 summarizes the importance of each of the extragalactic foregrounds in the multipole-frequency plane. It should be compared to the analogous plot for galactic foregrounds (and discrete sources) shown in Tegmark & Efstathiou (1996; see also Tegmark 1997 for an updated version). These figures show regions in the $`\ell `$-$`\nu `$ plane in which the foreground fluctuations exceed the CMB fluctuations, i.e. in which $`C_{\ell }^{\mathrm{foreground}}>C_{\ell }^{\mathrm{CMB}}`$. As a reference for $`C_{\ell }^{\mathrm{CMB}}`$, a COBE-normalized CDM model with $`\mathrm{\Omega }_b=0.05`$ and $`h=0.5`$ was used. Also shown in figure 1 is the region in which MAP and Planck Surveyor are sensitive, i.e. in which $`\mathrm{\Delta }C_{\ell }^{\mathrm{noise}}<C_{\ell }^{\mathrm{CMB}}`$, where $`\mathrm{\Delta }C_{\ell }^{\mathrm{noise}}`$ is the rms noise uncertainty of the instrument. Note that this figure is only intended to illustrate the domains of importance of the different foregrounds qualitatively.
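Given model spectra for a foreground and for the CMB, the crossover multipole $`\ell _{eq}`$ can be located by a simple scan. A sketch with toy spectra (my own illustration; the actual spectra are the ones discussed in the following subsections):

```python
def ell_eq(C_fg, C_cmb, ells):
    """Return the first multipole in `ells` at which the foreground
    power reaches the CMB power, or None if it never does."""
    for ell in ells:
        if C_fg(ell) >= C_cmb(ell):
            return ell
    return None

# Toy spectra: a flat (white-noise) foreground against a steeply
# falling reference spectrum; the crossing defines ell_eq.
crossing = ell_eq(lambda l: 1e-4, lambda l: 1.0 / l**2, range(2, 2000))
# crossing == 100, since 1/l**2 drops to 1e-4 at l = 100
```

Because foreground spectra depend on frequency, repeating this scan at each observing frequency maps out the boundaries of the foreground regions shown in figure 1.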
In the following, I briefly describe each extragalactic foreground and comment on its respective entries in table 1 and figure 1.
### 2.1. Discrete Sources
Discrete sources produce positive, point-like, non-thermal fluctuations. While not much is known about discrete source counts around $`\nu \simeq 100`$ GHz, several models have been constructed by interpolating between radio and IR observations (Toffolatti et al. 1998; Gawiser & Smoot 1997; Gawiser et al. 1998; Sokasian et al. 1998). Here, I adopt the model of Toffolatti et al. and consider the two flux limits $`S<1`$ and 0.1 Jy for source removal in table 1. The sparsely dotted region in figure 1 shows the discrete source region for $`S<1`$ Jy. In the context of CMB experiments, Poisson shot noise dominates over clustering for discrete sources (see Toffolatti et al.). As a result, the discrete source power spectrum, $`C_{\ell }^{\mathrm{discrete}}`$, is essentially independent of $`\ell `$.
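The flatness of $`C_{\ell }^{\mathrm{discrete}}`$ follows from Poisson statistics: the shot-noise power is the flux-squared-weighted integral of the source counts below the removal threshold, with no $`\ell `$ dependence. A sketch for illustrative Euclidean counts, $`dN/dS=AS^{5/2}`$ (a toy model of my own, not the Toffolatti et al. counts):

```python
def poisson_cl(A, s_cut):
    """Shot-noise power for Poisson-distributed sources with Euclidean
    counts dN/dS = A * S**-2.5, after removing all sources brighter
    than s_cut:
        C_l = integral_0^s_cut S**2 (dN/dS) dS = 2 * A * sqrt(s_cut).
    The result does not depend on the multipole l (white noise)."""
    return 2.0 * A * s_cut ** 0.5
```

In this toy model, lowering the flux cut from 1 Jy to 0.1 Jy reduces the shot-noise power by a factor $`\sqrt{10}`$, which is why table 1 lists both flux limits.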
### 2.2. Thermal Sunyaev-Zel’dovich Effect
The hot gas in clusters and superclusters of galaxies affects the spectrum of the CMB through inverse Compton scattering. This effect, known as the Sunyaev-Zel’dovich effect (for reviews see Sunyaev & Zel’dovich 1980; Rephaeli 1995), results from both the thermal and bulk motion of the gas. I first consider the thermal SZ effect, which typically has a larger amplitude and a non-thermal spectrum (see §2.3 below for a discussion of the kinetic SZ effect). The CMB fluctuations produced by the thermal SZ effect have been studied using the Press-Schechter formalism (see Bartlett 1997 for a review), and on large scales using numerical simulations (Cen & Ostriker 1992; Scaramella, Cen, & Ostriker 1993) and semi-analytical methods (Persi et al. 1995). Here, I consider the SZ power spectrum, $`C_{\ell }^{\mathrm{SZ}}`$, calculated by Persi et al. (see their figure 5). In table 1, I consider their calculation both with and without bright cluster removal. In figure 1, only the spectrum without cluster removal is shown.
### 2.3. Ostriker-Vishniac Effect
In addition to the thermal SZ effect described above, the hot intergalactic medium can produce thermal CMB fluctuations as a result of its bulk motion. While this effect essentially vanishes to first order, the second order term in perturbation theory, the Ostriker-Vishniac effect (Ostriker & Vishniac 1986; Vishniac 1987), can be significant on small angular scales. The power spectrum of the OV effect depends on the ionization history of the universe, and has been calculated by Hu & White (1996) and Jaffe & Kamionkowski (1998; see also Persi et al. 1995). I use the results of Hu & White (see their figure 5), who assumed that the universe was fully reionized beyond a redshift $`z_r`$. In table 1, I consider the two values $`z_r=10`$ and 50, while in figure 1, I only plot the region corresponding to $`z_r=50`$. For consistency, the standard CDM power spectrum is still used as a reference, even though the primordial power spectrum would be damped in the event of early reionization. (Using the damped primordial spectrum makes, at any rate, only small corrections to both table 1 and figure 1.)
### 2.4. Integrated Sachs-Wolfe Effect
The Integrated Sachs-Wolfe (ISW) effect describes thermal CMB fluctuations produced by time variations of the gravitational potential along the photon path (Sachs & Wolfe 1967). Linear density perturbations produce non-zero ISW fluctuations only in a $`\mathrm{\Omega }_m\ne 1`$ universe. Non-linear perturbations produce fluctuations for any geometry, an effect often called the Rees-Sciama effect (Rees & Sciama 1968). Tuluie & Laguna (1995) have shown that intrinsic changes in the gravitational potentials of the inhomogeneities, together with the bulk motion of the structures across the sky, generate CMB anisotropies in the range $`10^7<\frac{\mathrm{\Delta }T}{T}<10^6`$ on scales of about $`1^{}`$ (see also Tuluie et al. 1996). The power spectrum of the ISW effect in a CDM universe was computed by Seljak (1996a; see also references therein). In table 1, I consider two values of the density parameter, namely $`\mathrm{\Omega }h=0.25`$ and $`0.5`$. In figure 1, only the $`\mathrm{\Omega }h=0.25`$ case is shown. As above, the standard CDM ($`\mathrm{\Omega }=1`$, $`h=0.5`$) spectrum is still used as a reference.
### 2.5. Gravitational Lensing
Gravitational lensing is produced by spatial perturbations in the gravitational potential along the line of sight (see Schneider, Ehlers, & Falco 1992; Narayan & Bartelmann 1996). This effect does not directly generate CMB fluctuations, but modifies existing background fluctuations. The effect of lensing on the CMB power spectrum was calculated by Seljak (1996b) and Metcalf & Silk (1997). Recently, Zaldarriaga & Seljak (1998a) included the lensing effect in their CMB spectrum code (CMBFAST; Seljak & Zaldarriaga 1996). This code was used to compute the absolute lensing correction $`|\mathrm{\Delta }C_{\ell }^{\mathrm{lens}}|`$ to the standard CDM spectrum, including nonlinear evolution. The results are shown in table 1 and figure 1.
### 2.6. Other Extragalactic Foregrounds
In addition to the effects discussed above, other extragalactic foregrounds can cause secondary anisotropies. For instance, patchy reionization produced by the first generation of stars or quasars can cause second order CMB fluctuations through the Doppler effect (Aghanim et al. 1996a,b; Gruzinov & Hu 1998; Knox, Scoccimaro, & Dodelson 1998; Peebles & Juszkiewicz 1998). Calculations of the spectrum of this effect are highly uncertain, but show that the resulting CMB fluctuations could be of the order of 1 $`\mu `$K on 10 arcminute scales for extreme patchiness. More likely patchiness parameters make the effect negligible on these scales, but potentially important on arcminute scales. Another potential extragalactic foreground is that produced by the kinetic SZ effect from Ly$`\alpha `$ absorption systems, as recently proposed by Loeb (1996). The resulting CMB fluctuations are of the order of a few $`\mu `$K on arcminute scales, and about one order of magnitude lower on 10 arcminute scales. Because of the uncertainties in the models for these two foregrounds, and because they are small on 10 arcminute scales, they are not included in table 1 and figure 1.
## 3. Discussion and Conclusion
An inspection of table 1 shows that, at 94 GHz and $`\ell =450`$, the power spectra of the largest extragalactic foregrounds considered are a factor of 5 below the primordial CDM spectrum. As can be seen in figure 1, the dominant foregrounds for MAP and Planck Surveyor are discrete sources, the thermal SZ effect, and gravitational lensing. Note that, for Planck Surveyor, these three effects produce fluctuations which are close to the sensitivity of the instrument. The OV and ISW effects will produce fluctuations of the order of $`1\mu `$K, and are thus less important for a measurement of the power spectrum. The effect of gravitational lensing is now incorporated in CMB codes such as CMBFAST, and can thus be taken into account in the estimation of cosmological parameters. The other two dominant extragalactic contributions, discrete sources and the thermal SZ effect, must also be accounted for, but are more difficult to model. Note that, on large angular scales, extragalactic foregrounds produce relatively small fluctuations, and are thus not detectable in the COBE maps (Boughn & Jahoda 1993; Bennett et al. 1993; Banday et al. 1996; Kneissl et al. 1997).
While I have concentrated above on the power spectrum, secondary anisotropies are also a source of non-gaussianity in CMB maps. Discrete sources and the SZ effect from clusters of galaxies mainly produce Poisson fluctuations and are thus clearly non-gaussian. The other extragalactic foregrounds (SZ, OV, ISW, and lensing) are also non-gaussian and trace large-scale structures in the local universe. As a consequence of the latter fact, extragalactic foregrounds can be probed by cross-correlating CMB maps with galaxy catalogs, which act as tracers of the large scale structure. Such a technique can be used to detect the ISW effect (Boughn et al. 1998, and references therein), gravitational lensing (Suginohara et al. 1998), and the SZ effect from superclusters (Refregier et al. 1998). Gravitational lensing is particularly interesting since it produces a specific non-gaussian signature (Bernardeau 1998). This signature can be used to reconstruct the gravitational potential projected along the line of sight (Zaldarriaga & Seljak 1998b). Further non-gaussian signatures result from the fact that the different extragalactic foregrounds are spatially correlated. For instance, a detection of the cross-correlation signal between gravitational lensing and the ISW and SZ effects would allow us to determine the fraction of ionized gas and the time evolution of the gravitational potential (Goldberg & Spergel 1998; Seljak & Zaldarriaga 1998). A detection of secondary anisotropies would help break the degeneracy between cosmological parameters measured from primary anisotropies alone.
#### Acknowledgments.
I thank David Spergel and Thomas Herbig for active collaboration and discussions on this project. This work was supported by the MAP MIDEX program.
## References
Aghanim, N., Desert, F.-X., Puget, J. L., & Gispert, R. 1996a, A&A, 311, 1; see also erratum to appear in A&A, preprint astro-ph/9811054
Aghanim, N., Puget, J. L., & Gispert, R. 1996b, in Microwave Background Anisotropies, proceedings des XVIièmes Rencontres de Moriond, p. 407, eds. Bouchet, F.R., Gispert, R., Guiderdoni, B., & Trân Thanh Vân, J.
Bartlett, J. G. 1997, course given at From Quantum Fluctuations to Cosmological Structures, Casablanca, Dec. 1996, preprint astro-ph/9703090
Banday, A. J., Górski, K. M., Bennett, C. L., Hinshaw, G., Kogut, A., & Smoot, G. F. 1996, ApJ, 468, L85
Bennett, C. L., Hinshaw, W. G., Banday, A., Kogut, A., Wright, E. L., Loewenstein, K., & Cheng, E. S. 1993, ApJ, 414, L77
Bennett, C.L. et al. 1995, BAAS 187.7109; see also http://map.gsfc.nasa.gov
Bernardeau, F. 1998, A&A, 338, 767
Bersanelli, M. et al. 1996, COBRAS/SAMBA, Report on Phase A Study, ESA Report D/SCI(96)3; see also http://astro.estec.esa.nl/Planck/
Bond, J.R., Efstathiou, G., & Tegmark, M. 1997, MNRAS, 291, L33
Boughn, S. P., & Jahoda, K. 1993, ApJ, 412, L1
Boughn, S. P., Crittenden, R.G., & Turok, N.G. 1998, NewA, 3, 275
Cen, R., & Ostriker, J. P. 1992, ApJ, 393, 22
Gawiser, E., & Smoot, G. 1997, ApJ, 480, L1
Gawiser, E., Jaffe, A., & Silk, J., 1998, submitted to ApJ, preprint astro-ph/9811148
Goldberg, D., & Spergel, D. 1998, preprint astro-ph/9811251
Gruzinov, A. & Hu, W. 1998, submitted to ApJ, preprint astro-ph/9803188
Hu, W. & White, M. 1996, A&A, 315, 33
Jaffe, A. H., & Kamionkowski, M. 1998, to appear in Phys.Rev.D, preprint astro-ph/9801022
Kneissl, R., Egger, R., Hasinger, G., Soltan, A. M., & Trümper, J. 1997, A&A, 320, 685
Knox, L., Scoccimaro, R., & Dodelson, S. 1998, preprint astro-ph/9805012
Loeb, A. 1996, ApJ, 471, L1
Metcalf, R. B., & Silk, J. 1997, ApJ, 489, 1
Narayan, R., & Bartelmann, M. 1996, Lectures on Gravitational Lensing, preprint astro-ph/9606001
Ostriker, J. P. & Vishniac, E. T. 1986, ApJ, 306, L51
Peebles, P. J. E. & Juszkiewicz, R. 1998, preprint astro-ph/9804260
Persi, F. M., Spergel, D. N., Cen, R., & Ostriker, J. P. 1995, ApJ, 442, 1
Rees, M. & Sciama, D. W. 1968, Nature, 217, 511
Sachs, R. K. & Wolfe, A.M. 1967, ApJ, 147, 73
Refregier, A., Spergel, D., & Herbig, T. 1998, submitted to ApJ, preprint astro-ph/9806349
Rephaeli, Y. 1995, ARA&A, 33, 541
Scaramella, R., Cen, R., & Ostriker, J. P. 1993, ApJ, 416, 399
Schneider, P., Ehlers, J., & Falco, E. E. 1992, Gravitational Lenses, (New York: Springer-Verlag)
Seljak, U. 1996a, ApJ, 460, 549
Seljak, U. 1996b, ApJ, 463, 1
Seljak, U. & Zaldarriaga, M. 1996, ApJ, 469, 437
Seljak, U. & Zaldarriaga, M. 1998, preprint astro-ph/9811123
Spergel, D. & Goldberg, D. 1998, preprint astro-ph/9811252
Sokasian, A., Gawiser, E., & Smoot, G. F. 1998, submitted to ApJ, preprint astro-ph/9811311
Suginohara, M., Suginohara, T., & Spergel, D 1998, ApJ, 495, 511
Sunyaev, R. A. & Zeldovich, Y. B. 1980, ARA&A, 18, 537
Tegmark, M. & Efstathiou, G. 1996, MNRAS, 281, 1297
Tegmark, M. 1997, to appear in ApJ, preprint astro-ph/9712038
Toffolatti, L., Argüeso Gómez, F., De Zotti, G., Mazzei, P., Franceschini, A., Danese, L., & Burigana, C. 1998, MNRAS, 297, 117
Tuluie, R., Laguna, P. 1995, ApJ, 445, L73
Tuluie, R., Laguna, P., & Anninos, P. 1996, ApJ, 463, 15
Vishniac, E. T. 1987, ApJ, 322, 597
White, M., Scott, D., & Silk, J. 1994, ARA&A, 32, 319
Zaldarriaga, M., Spergel, D. N., & Seljak, U. 1997, ApJ, 488, 1
Zaldarriaga, M. & Seljak, U. 1998a, preprint astro-ph/9803150
Zaldarriaga, M. & Seljak, U. 1998b, preprint astro-ph/9810257
# On localization of acoustic waves
## I Introduction
When propagating through media containing many scatterers, waves will be scattered by each scatterer. The scattered waves will be scattered again by other scatterers. This process is repeated to establish an infinite recursive pattern of rescattering between scatterers, forming a multiple scattering process which causes the scattering characteristics of the scatterers to change. Multiple scattering of waves is responsible for a wide range of fascinating phenomena. This includes, on large scales, twinkling light in the evening sky, modulation of ambient sound at ocean surfaces, and acoustic scintillation from turbulent flows and fish schools. On smaller scales, phenomena such as white paint, random lasers, electron transport in impure solids, and photonic band gaps in periodic structures are also explained in terms of multiple scattering. Under proper conditions, multiple scattering leads to a phenomenon termed localization, which has now become an everyday experience.
Wave localization is a ubiquitous phenomenon. It refers to situations in which transmitted waves in a scattering medium are trapped in space and will remain confined in the neighborhood of the initial site until dissipated. The concept of localization was conceived from the theory describing the disorder-induced conductor-insulator transition in electronic systems. Since its inception, wave localization has stimulated considerable interest among scientists from many disciplines. Several monographs have been devoted to the subject (e. g. Ref. ). Wave localization may be realized in a variety of situations. In disordered solids, electron localization is common. Localization has also been reported for microwaves in random scattering rods and spheres, and recently for light in a ground gallium-arsenide suspension in methanol. Acoustic localization has also been studied both theoretically and experimentally. Research suggests that acoustic localization may be observed in bubbly liquids.
Despite the tremendous efforts, however, no deeper insight into localization has been documented in the literature. The general perception remains that enhanced backscattering is a precursor to wave localization and that disorder is an essential ingredient of localization. Important questions, such as how and when localization occurs, remain an unsolved puzzle.
In a recent paper, we proposed to study the wave localization phenomenon by investigating wave propagation in liquid media containing many air-filled bubbles. There are several advantages to studying sound in bubbly liquids. (1) Air-filled bubbles are strong acoustic scatterers. At low frequencies, around $`ka\simeq 0.0136`$, resonant scattering appears and the scattering strength is greatly enhanced, making this an ideal system to study strong scattering. Here $`k`$ is the acoustic wavenumber in water and $`a`$ is the radius of the bubbles. (2) The scattering function of a single bubble has been well studied and has a simple form. The scattering function of a spherical bubble can be found in many textbooks, whereas the scattering function of a deformed bubble, such as an ellipsoidal bubble, has also been derived analytically in recent work. The scattering function of a single bubble has a simple isotropic resonant form, permitting many simplifications. Furthermore, and perhaps more important, this isotropic scattering feature remains valid even when the bubbles are subject to significant deformation. (3) Each term in the scattering function has a clear physical meaning and can be modified as needed. When thermal exchange and viscosity effects are taken into account, absorption shows up, but in a way such that it can be turned off or adjusted. This allows one to unequivocally isolate localization due to scattering from attenuation caused by absorption, making waves in bubbly water an ideal system for studying phenomena related to multiple scattering.
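The low-frequency resonance quoted above is the monopole (Minnaert) resonance of an air bubble. As a plausibility check of the quoted $`ka\simeq 0.0136`$, the standard Minnaert formula can be evaluated; this is a sketch of my own, with material parameters (adiabatic index, ambient pressure, water density and sound speed) assumed rather than taken from the paper:

```python
import math

def minnaert_ka(gamma=1.4, p0=1.013e5, rho=998.0, c=1480.0):
    """ka at the Minnaert monopole resonance of an air bubble in water.

    The Minnaert relation gives omega0 * a = sqrt(3 * gamma * p0 / rho),
    so the resonant k0 * a = omega0 * a / c is independent of the bubble
    radius. Parameter values are assumed, not taken from the paper."""
    return math.sqrt(3.0 * gamma * p0 / rho) / c

print(minnaert_ka())  # ~0.014, consistent with the quoted ka of about 0.0136
```

The radius independence of the resonant $`ka`$ is what makes the low-frequency behavior of bubbly water scale with bubble size, a feature exploited later in the paper.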
In this paper, we first discuss general aspects of wave localization in a 3D system. An intuitive picture is proposed to describe wave localization and is subsequently supported by numerical examples of acoustic propagation in bubbly liquids. Wave localization is a phase transition, and a novel method is proposed to describe this phase transition. It is shown that when localization occurs, all air-filled bubbles, each acting as a resonant scatterer, display a surprising collective behavior.
## II General aspects
Consider a plane wave normally incident upon a semi-infinite random medium. The transport equation for the total energy intensity $`I`$ may be intuitively written as
$$\frac{dI}{dx}=-\alpha I,$$
(1)
where $`\alpha `$ represents decay along the path traversed. After penetrating into the random medium, the wave will be scattered by random inhomogeneities. As a result, the wave coherence starts to decrease, giving way to incoherence. Extinction of the coherent intensity $`I_C`$ is described by
$$\frac{dI_C}{dx}=-\gamma I_C,$$
(2)
with the attenuation constant $`\gamma `$. Eqs. (1) and (2) lead to the exponential solutions
$$I(x)=I(0)e^{-\alpha x},\text{ and }I_C(x)=I(0)e^{-\gamma x}.$$
(3)
In deriving these equations, the boundary condition was used; it states that $`I(0)=I_C(0)`$ as no scattering has been incurred yet at the interface. According to energy conservation, the incoherent intensity $`I_D`$ (diffusive) is thus
$$I_D(x)=I(x)I_C(x).$$
(4)
When there is no absorption, the decay constant $`\alpha `$ is expected to vanish and the total intensity will then be constant along the propagation path. Then, the coherent energy gradually decreases due to random scattering and transforms to the diffusive energy, while the sum of the two forms of energy remains a constant. This scenario, however, changes when localization occurs. Even without absorption, the total intensity can be localized near the interface due to multiple scattering. When this happens, $`\alpha `$ does not vanish. The transport of the total intensity may be still described by Eq. (1), and the inverse of $`\alpha `$ would then refer to the localization length.
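The transport model above can be evaluated directly. A minimal sketch (the function name is mine; $`\gamma \alpha `$ is assumed so that the diffusive part stays non-negative):

```python
import math

def intensities(x, alpha, gamma, I0=1.0):
    """Total, coherent, and diffusive intensity at depth x for the
    simple transport model: the total intensity decays with constant
    alpha, the coherent part with constant gamma (gamma >= alpha
    assumed), and the diffusive part is their difference, by energy
    conservation and the boundary condition I(0) = I_C(0)."""
    total = I0 * math.exp(-alpha * x)
    coherent = I0 * math.exp(-gamma * x)
    diffusive = total - coherent
    return total, coherent, diffusive
```

With `alpha = 0` (no absorption, no localization) the total intensity stays constant while coherent energy is steadily converted into diffusive energy; a nonzero `alpha` in the absence of absorption signals localization, with `1/alpha` the localization length.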
The above perceptual description is illustrated in Fig. 1. Without or with little absorption, the energy propagation is anticipated to follow the behavior depicted in (a). When localization sets in, the wave will be trapped within an $`e`$-folding distance of the penetration point, as depicted in (b). In the non-localization case, the diffusive intensity increases steadily as more and more scattering occurs, complying with Milne diffusion. In the localized state, the diffusive energy increases initially, but the increase is eventually stopped by the interference of multiply scattered waves. One may ask whether this intuitive picture is supported by actual situations. In the rest of the paper, we inspect this question by considering acoustic waves in bubbly water.
## III Acoustic localization in bubbly liquids
Consider sound emission from a bubble cloud. For simplicity, the shape of the cloud is taken as spherical. Such a model eliminates irrelevant edge effects, and is useful for separating phenomena pertinent to the discussion. A total of $`N`$ bubbles of the same radius $`a`$ are randomly distributed within the cloud. The volume void fraction, i.e. the space occupied by bubbles per unit volume, is taken as $`\beta `$. A monochromatic acoustic source is located at the center of the cloud. Adaptation of such a model to other geometries and situations is straightforward. The wave transmitted from the source propagates through the bubble layer, where multiple scattering occurs, and then reaches a receiver located at some distance from the cloud. The multiple scattering in the bubbly layer is described by a set of self-consistent equations. The energy transmission can be solved numerically in a rigorous fashion.
A traditional way to study wave localization is to calculate the Green’s function for the energy transport, leading to the Bethe-Salpeter equation. Under certain approximations, such as long times and large distances from the initial site, a solution to this equation can be obtained in the form of a diffusion formula, in which a mean free path and a diffusion coefficient can be defined; these have been used as the basis for discussion in the literature. Alternatively, certain situations allow direct computation of the energy transport, without recourse to unnecessary approximations. In that case, information about wave propagation can be inferred straightaway. Acoustic wave propagation is one such situation. The propagation is calculated incorporating all multiple scattering by the self-consistent scheme. In the following, we inspect the features of acoustic localization from three aspects and then present a discussion.
### A Frequency response
First consider the frequency response. In Fig. 2, the transmission in arbitrary units is plotted against frequency in terms of $`ka`$ for two different bubble sizes. Here $`k`$ is the usual wavenumber, and the bubble void fraction is $`10^{-3}`$. The figure clearly suggests that a narrow region is opened in which transmission is virtually forbidden. This inhibition gap ranges roughly from $`ka=0.015`$ to $`0.02`$. When the void fraction is reduced to about $`10^{-5}`$, the localization disappears. To show that the inhibition is not due to absorption, the situation with the absorption turned off is also plotted, as the dotted line, for $`a=2`$ cm. The comparison between the cases with and without absorption excludes absorption as the cause of the propagation hindrance. In fact, when the absorption factor of a single bubble is increased, the localization is degraded.
### B Distance variation
To unambiguously identify the transmission inhibition region as the localization range, it is proper to study the spatial dependence of the energy propagation. Fig. 3(a) presents the total transmission and its coherent and diffusive parts, scaled by the geometrical spreading factor $`r^2`$, as a function of propagation distance scaled by the radius of the bubble cloud. The solid, dotted and broken lines refer to the total, diffusive and coherent intensity, respectively. The bubble radius is 2 cm, and the frequency is taken as $`ka=0.0171`$, which lies in the localization regime. When plotting the data on a natural logarithmic scale in Fig. 3(b), we found that nearly all the data rest on a straight line. In (b), the straight line refers to the fitted curve and the crosses refer to the numerical data. The slope of the line amounts to a constant $`\alpha =0.195`$, which is found to be true for other bubble sizes as well. This suggests that the intensity varies with propagation distance $`r`$ as
$$I\propto \frac{e^{-\alpha r/a}}{r^2}.$$
(5)
Therefore the transport equation for the total wave can be written as
$$\frac{dI}{dr}=-\left(\frac{\alpha }{a}+\frac{2}{r}\right)I,$$
(6)
which is the equivalent of Eq. (1) in spherical geometry. The second term on the right-hand side of Eq. (6) denotes the geometrical spreading effect. From Eq. (5), the localization length is computed to be $`l=5.13a`$. At this length, $`kl\simeq 0.0877`$; therefore the Ioffe-Regel criterion for localization is satisfied, providing another verification. At this frequency we also calculated the attenuation length due to thermal and viscous absorption to be about $`118a`$, which is much larger than the observed localization length.
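The quoted numbers can be checked against one another. A small sketch (a helper of my own, using the fitted decay constant $`\alpha =0.195`$ and the working frequency, for which $`ka\simeq 0.0171`$):

```python
def localization_check(alpha, ka):
    """Localization length in units of the bubble radius, l/a = 1/alpha,
    and the Ioffe-Regel product k*l = ka * (l/a)."""
    l_over_a = 1.0 / alpha
    return l_over_a, ka * l_over_a

l_over_a, kl = localization_check(0.195, 0.0171)
# l/a ≈ 5.13 and k*l ≈ 0.0877 < 1, so the Ioffe-Regel criterion is met
```

The same check applied at other bubble sizes reproduces the near-constant ratio $`l/a5`$ reported below, since the fitted $`\alpha `$ is size independent within the localization regime.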
The same calculation has also been performed for other bubble sizes. We found that within the localization regime, the ratio between the localization length and the bubble size is nearly constant, around $`5`$, as long as the bubble radius is large enough, roughly larger than 10 $`\mu `$m. For too small a bubble, the absorption due to thermal exchange and viscosity effects is significant; the transmission then becomes size dependent and the scaling behavior disappears. These features do not appear for $`ka`$ outside the localization regime. Fig. 3(a) bears a remarkable analogy to the perception demonstrated in Fig. 1(b).
### C Collective behavior
Upon incidence, each air bubble acts as a secondary pulsating point source. The radiated wave from the $`i`$-th bubble ($`i=1,2,\mathrm{\ldots },N`$) can be written as $`A_iG_0(\stackrel{}{r}-\stackrel{}{r}_i)`$, where $`G_0(\stackrel{}{r}-\stackrel{}{r}_i)`$ is the usual 3D Green’s function and $`\stackrel{}{r}_i`$ denotes the position of the bubble. The complex coefficient $`A_i`$ refers to the effective strength of the secondary source, and is computed incorporating all multiple scattering effects. The total wave at any point in space is the sum of the direct wave from the transmitting source and the radiated waves from all bubbles.
We express $`A_i`$ as $`|A_i|\mathrm{exp}(i\theta _i)`$; the modulus $`|A_i|`$ represents the strength, and $`\theta _i`$ the phase, of the secondary source. We assign a unit vector $`\stackrel{}{u}_i`$, hereafter termed the phase vector, to each phase $`\theta _i`$, and these vectors are represented on an Argand diagram in the $`xy`$ plane. That is, the starting point of each phase vector is positioned at the center of the individual scatterer, with an angle with respect to the positive $`x`$-axis equal to the phase: $`\stackrel{}{u}_i=\mathrm{cos}\theta _i\widehat{x}+\mathrm{sin}\theta _i\widehat{y}`$. Setting the phase of the initial emitting source to zero, numerical experiments are carried out to study the behavior of the phases of the bubbles and the spatial distribution of the acoustic energy. Figure 4 shows the Argand diagrams of the phase vectors, and the energy distribution, for three frequencies in terms of $`ka`$ for one arbitrary random configuration of bubbles.
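The degree of phase alignment among the bubbles can be quantified by the magnitude of the average phase vector, a standard order parameter (my own addition; the paper itself only displays the Argand diagrams):

```python
import math

def phase_order(thetas):
    """Magnitude of the mean phase vector, (1/N)|sum_i exp(i*theta_i)|.

    Returns 1 when all secondary sources oscillate in phase and a value
    near 0 when the phases point in uniformly spread directions."""
    n = len(thetas)
    cx = sum(math.cos(t) for t in thetas) / n
    cy = sum(math.sin(t) for t in thetas) / n
    return math.hypot(cx, cy)
```

In the localized regime of Fig. 4, this quantity would be close to 1 (all bubbles in phase), and near 0 in the non-localized regimes, making it a convenient scalar signature of the phase transition.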
We observe that for frequencies smaller than a certain value, there is no obvious order for the directions of the phase vectors, nor for the energy distribution. The phase vectors point to various directions, and no energy localization appears. The random behavior in the directions and energy distribution is attributed to the boundary effect of a finite number of the scatterers. In effect, as the wave is not localized, it can propagate through and is reflected by the asymmetric border; all bubbles can experience the effect via strong multiple scattering.
As the frequency increases, an order in the phase vectors and energy localization becomes evident. The case with $`ka=0.0164`$ clearly shows that the energy is localized near the source, and amazingly, all bubbles oscillate completely in phase, but exactly out of phase with the transmitting source. Such collective behavior allows for efficient cancellation of incoming waves. The energy distribution decays exponentially, which sets the localization length. The localization behavior is independent of the outer boundary and always appears for sufficiently large $`\beta `$ and $`N`$. When the frequency increases further, exceeding a certain amount, the in-phase order disappears. Meanwhile, the wave becomes non-localized again. This is illustrated by the case of $`ka=0.1`$.
Further numerical investigation shows that the pattern depicted in Fig. 4 holds qualitatively for other bubble sizes, as long as the bubble is not too small; when the bubble is too small, viscous and thermal effects dominate and the localization phenomenon disappears. The features are always valid for a sufficiently large bubble void fraction.
### D On localization
Now a question immediately follows: What can we learn about wave localization from the above discussion? First, wave localization refers to trapping of the total wave energy, which may be indicated by the exponential decay of transmitted energies along the distance traveled by the wave. However, localization due to scattering must be differentiated from attenuation due to absorption: it is well known that when absorption is present, energies also decrease with the traveling distance. Second, when localization occurs, the wave is trapped near the point of transmission. If a continuous wave is pumped into the system, several scenarios can be conjectured. (1) Energy builds up in the neighborhood of the transmission point until the amplitude is so large that localization fails. (2) Once localization occurs, no more energy can be pumped into the system. The second scenario may be hinted at by Fig. 4, which shows that the anti-phase collective behavior allows for efficient cancellation of the transmitted waves. This is analogous to the situation of the electrical current in a conductor with resistance $`R`$ connected to a battery with potential $`V`$. The electrical power is $`V^2/R`$. When a sufficient amount of impurities is added, the conductor becomes an insulator, i.e. $`R\mathrm{}`$; the battery can then no longer inject power into the medium. Which scenario describes the actual situation remains an open problem. However, the present study seems to hint at the second hypothesis.
## IV Summary
In summary, we have considered the behavior of acoustic localization in water containing many air-filled bubbles, using a simple numerical model. The localization behavior was investigated and shown to follow the description of a simple transport equation. This work provides the backbone of a simple intuitive picture of wave localization in random media. A novel approach has been proposed for describing the localization phase transition: a collective behavior is shown to appear when wave localization occurs. A more detailed discussion of the collective behavior and a consideration of 2D situations will be published elsewhere.
## Acknowledgments
The work received support from the National Science Council of ROC and from the National Central University in the form of a CO-OP scholarship to EH. One of us (AA) also acknowledges support from the NSC and the Spanish Ministry of Education in the form of postdoctoral fellowships.
## Figure Captions
A conceptual illustration of wave localization
Transmission as a function of $`ka`$ for two different bubble radii
Transmission as a function of propagation distance and localization length
Left column: Argand diagrams for the two-dimensional phase vectors lying parallel to the $`xy`$ plane. Right column: spatial distribution of acoustic energy (arbitrary scale).
Table 1: Spectrum and leptonic constants of the pseudoscalar $`b\overline{c}`$ mesons.
BARI-TH/99-333
Radiative Leptonic $`B_c`$ Decays
P. Colangelo <sup>1</sup><sup>1</sup>1E-mail address: COLANGELO@BARI.INFN.IT, F. De Fazio <sup>2</sup><sup>2</sup>2E-mail address: DEFAZIO@BARI.INFN.IT
Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Italy
> Abstract
>
> We analyze the radiative leptonic $`B_c`$ decay mode $`B_c\mathrm{}\nu \gamma `$ ($`\mathrm{}=e,\mu `$) using a QCD-inspired constituent quark model. The predicted branching ratio, $`(B_c\mathrm{}\nu \gamma )3\times 10^{-5}`$, confirms that this channel is experimentally promising in view of the large number of $`B_c`$ mesons expected to be produced at the future hadron facilities.
Recently, the CDF Collaboration at the Fermilab Tevatron has reported the observation of the $`B_c`$ meson, the lowest mass $`\overline{b}c`$ ($`b\overline{c}`$) bound state, through the semileptonic decay mode $`B_c^\pm J/\psi \mathrm{}^\pm X`$ . The measured mass and lifetime of the meson are
$`M_{B_c}`$ $`=`$ $`6.40\pm 0.39(\mathrm{stat})\pm 0.13(\mathrm{syst})\mathrm{GeV}`$ (1)
$`\tau (B_c)`$ $`=`$ $`0.46_{0.16}^{+0.18}(\mathrm{stat})\pm 0.03(\mathrm{syst})\mathrm{ps}.`$ (2)
The particular interest of this observation is related to the fact that the meson ground-state with open beauty and charm can decay only weakly, thus providing the rather unique opportunity of investigating weak decays in a heavy quarkonium-like system. Moreover, by studying this meson important information can be obtained not only on fundamental parameters, such as the element $`V_{cb}`$ of the Cabibbo-Kobayashi-Maskawa mixing matrix, but also on the strong dynamics responsible for the binding of the quarks inside the hadron. Understanding such dynamics is one of the most important issues in the analysis of heavy hadrons . The observation of the $`B_c`$ meson at the Tevatron confirms that $`B_c`$ physics will gain an important role at the future hadron facilities, where a large production rate of such particles is expected; in particular, at the Large Hadron Collider (LHC), which will be operating at CERN, it is estimated that $`4.5\times 10^{10}`$ $`B_c^+`$ mesons will be produced per year for a machine luminosity of $`=10^{34}`$ cm<sup>-2</sup> sec<sup>-1</sup> at $`\sqrt{s}=14`$ TeV .
$`B_c`$ meson decays can be classified according to the mechanism inducing the processes at quark level. Neglecting Cabibbo-suppressed and penguin-induced transitions, such mechanisms are:
* the $`b`$-quark transition $`bcW^{}`$, with the $`\overline{c}`$ quark having the role of spectator; the corresponding final states are of the kind $`J/\psi \pi `$, $`J/\psi \mathrm{}\nu `$;
* the charm quark transition $`\overline{c}\overline{s}W^{}`$, with $`b`$ as spectator and possible final states $`B_s\pi `$, $`B_s\mathrm{}\nu `$, etc.;
* the annihilation modes $`\overline{c}bW^{}`$.
The first two mechanisms are responsible for the largest part of the $`B_c`$ decay width . In particular, the measurement (2) provides us with an indication that the dominant $`B_c`$ decay mechanism is the $`c`$-quark decay, which implies a $`B_c`$ lifetime in the range $`\tau _{B_c}=(0.40.7)ps`$ , whereas dominance of the $`b`$-quark decay mechanism would produce a longer lifetime: $`\tau _{B_c}=(1.11.2)ps`$ . Various analyses of $`B_c`$ transitions induced by the two mechanisms are available in the literature ; for example, a QCD sum rule calculation of $`B_c`$ semileptonic decays suggested the dominance of the charm transition .
As far as the annihilation processes are concerned, the leptonic radiative decay mode $`B_c\mathrm{}\nu \gamma `$ and the leptonic decay without photon in the final state represent a minor fraction of the $`B_c`$ full width. Nevertheless, their analysis is of particular interest, both from the phenomenological and the theoretical point of view. From the phenomenological side, $`B_c`$ annihilation modes are governed by $`V_{cb}`$; therefore they are Cabibbo-enhanced with respect to the analogous $`B_u`$ decays, and represent new channels to access this matrix element. From the theoretical viewpoint, the purely leptonic and the radiative leptonic $`B_c`$ transitions are interesting since, in the nonrelativistic limit of the quark dynamics, both their rates can be expressed in terms of a single hadronic parameter, the $`B_c`$ leptonic decay constant $`f_{B_c}`$ . In this limit, a relation between the widths of the processes $`B_c\mathrm{}\nu \gamma `$ and $`B_c\mathrm{}\nu `$ can be worked out :
$$R_{\mathrm{}}=\frac{\mathrm{\Gamma }(B_c\mathrm{}\nu \gamma )}{\mathrm{\Gamma }(B_c\mathrm{}\nu )}0.40r_{\mathrm{}}$$
(3)
with $`r_{\mathrm{}}={\displaystyle \frac{\alpha }{4\pi }}{\displaystyle \frac{m_{B_c}^2}{m_{\mathrm{}}^2}}`$. Eq.(3) implies that the width of the radiative leptonic $`B_c`$ decay into muons is nearly equal to the purely leptonic width $`\mathrm{\Gamma }(B_c\mu \nu )`$, whereas, in the case of electrons in the final state, the radiative leptonic mode is largely dominant.
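Eq.(3) is easy to check numerically. The sketch below inserts the central CDF mass of eq.(1) and standard values of $`\alpha `$ and the lepton masses (inserted here for illustration), reproducing both statements above: for muons the radiative and purely leptonic widths are comparable, while for electrons the radiative mode dominates by several orders of magnitude.

```python
import math

alpha = 1.0 / 137.036   # fine-structure constant
m_Bc  = 6.40            # GeV, central CDF value from eq.(1)
m_mu  = 0.10566         # GeV, muon mass
m_e   = 0.000511        # GeV, electron mass

def R_ell(m_ell):
    """Nonrelativistic-limit ratio of eq.(3): R = 0.40 * (alpha/4pi) * (m_Bc/m_ell)^2."""
    r_ell = (alpha / (4.0 * math.pi)) * (m_Bc / m_ell) ** 2
    return 0.40 * r_ell

R_mu = R_ell(m_mu)   # ~0.85: radiative and purely leptonic muon widths comparable
R_e  = R_ell(m_e)    # ~4e4: the radiative mode is overwhelmingly dominant for electrons
```

The value $`R_\mu 0.85`$ obtained here is the nonrelativistic estimate that the text later compares against the relativistic-model result $`R_\mu 0.4`$.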
Eq.(3) presents uncertainties coming from the used values of the charm and beauty quark masses. Moreover, there could be corrections if the quark dynamics in the $`B_c`$ meson deviates from the nonrelativistic regime, and the size of the corrections is useful for understanding the theoretical uncertainty affecting the ratio (3).
Corrections to the ratio (3) can be estimated by considering a model for the $`B_c`$ meson where relativistic effects in the constituent quark dynamics are, at least partially, taken into account; the present letter is devoted to such a study.
In order to analyze the decay mode $`B_c\mathrm{}\nu \gamma `$ ($`\mathrm{}=e,\mu `$), we follow the method adopted in ref. to investigate the analogous $`B_u`$ transition. Let us consider the process
$$B_c^{}(p)\mathrm{}^{}(p_1)\overline{\nu }(p_2)\gamma (k,ϵ)$$
(4)
whose amplitude can be written as
$$𝒜(B_c\mathrm{}\nu \gamma )=\frac{G_F}{\sqrt{2}}V_{cb}ϵ^\nu \left(L^\mu \mathrm{\Pi }_{\mu \nu }ief_{B_c}p^\mu \stackrel{~}{L}_{\mu \nu }\right),$$
(5)
with $`G_F`$ the Fermi constant and $`ϵ`$ the photon polarization vector. The current $`L^\mu =\overline{\mathrm{}}(p_1)\gamma ^\mu (1\gamma _5)\nu (p_2)`$ represents the weak leptonic current in (4). The hadronic function $`\mathrm{\Pi }_{\mu \nu }(p,k)`$ is the correlator
$$\mathrm{\Pi }_{\mu \nu }(p,k)=id^4xe^{iqx}<0|T[J_\mu (x)V_\nu (0)]|B_c(p)>$$
(6)
with $`J_\mu (x)=\overline{c}(x)\gamma _\mu (1\gamma _5)b(x)`$ the weak hadronic current governing (4) and $`V_\nu `$ given by $`V_\nu (0)=e[Q_c\overline{c}(0)\gamma _\nu c(0)+Q_b\overline{b}(0)\gamma _\nu b(0)]`$, $`eQ_c`$ and $`eQ_b`$ being the charm and beauty quark electric charges. The momentum $`q`$ is defined as $`q=p_1+p_2`$. Therefore, the first term in (5) correspond to the photon emitted from the meson, and the second term represents the contribution of the photon emitted by the charged lepton leg. Notice that the $`B_c`$ leptonic constant $`f_{B_c}`$ is defined by the matrix element
$$<0|\overline{c}\gamma _\mu \gamma _5b|B_c(p)>=if_{B_c}p_\mu $$
(7)
and that $`\stackrel{~}{L}_{\mu \nu }`$ reads
$$\stackrel{~}{L}_{\mu \nu }=\overline{\mathrm{}}(p_1)\gamma ^\nu \frac{1}{\overline{)}p_1+\overline{)}km_l}\gamma ^\mu (1\gamma _5)\nu (p_2).$$
(8)
The hadronic function $`\mathrm{\Pi }_{\mu \nu }(p,k)`$ can be expanded in independent Lorentz structures
$$\mathrm{\Pi }_{\mu \nu }(p,k)=\alpha p_\mu p_\nu +\beta k_\mu k_\nu +\zeta k_\mu p_\nu +\delta p_\mu k_\nu +\xi g_{\mu \nu }+i\eta ϵ_{\mu \nu \rho \sigma }p^\rho k^\sigma $$
(9)
(with the invariant functions $`\alpha ,\mathrm{}\eta `$ depending on $`pk`$) so that the requirement of gauge invariance for the amplitude (5) implies the condition:
$$(\alpha +\zeta )(pk)+\xi ief_{B_c}=0.$$
(10)
We shall see in the following that this condition is satisfied in our calculation.
The final expression of the amplitude, eq.(5),
$$𝒜(B_c\mathrm{}\nu \gamma )=\frac{G_F}{\sqrt{2}}V_{cb}ϵ^\nu L^\mu [(\alpha +\zeta )(k_\mu p_\nu pkg_{\mu \nu })+i\eta ϵ_{\mu \nu \rho \sigma }p^\rho k^\sigma ],$$
(11)
is given in terms of the form factors $`\eta `$ and $`\zeta +\alpha `$, related to the vector and the axial vector weak current contributions to the process (4), respectively.
In order to compute the invariant functions in (11) we employ the constituent quark model developed in ref. to describe the static properties of mesons containing heavy quarks. In the correlator (6) we write the state of the pseudoscalar $`(b\overline{c})`$ meson at rest in terms of a wave-function $`\psi _{B_c}`$ and of quark and antiquark creation operators:
$$|B_c^{}>=i\frac{\delta _{\alpha \beta }}{\sqrt{3}}\frac{\delta _{rs}}{\sqrt{2}}𝑑\stackrel{}{k}\psi _{B_c}(\stackrel{}{k})b^{}(\stackrel{}{k},r,\alpha )c^{}(\stackrel{}{k},s,\beta )|0>;$$
(12)
$`\alpha `$ and $`\beta `$ are colour indices, $`r`$ and $`s`$ spin indices; the operator $`b^{}`$ creates a $`b`$ quark with momentum $`\stackrel{}{k}`$, while $`c^{}`$ creates a charm antiquark with momentum $`\stackrel{}{k}`$. The wave-function $`\psi _{B_c}`$, describing the quark momentum distribution in the meson, is obtained as a solution of the Salpeter equation
$$\left\{\sqrt{\stackrel{}{k}^2+m_b^2}+\sqrt{\stackrel{}{k}^2+m_c^2}m_{B_c}\right\}\psi _{B_c}(\stackrel{}{k})+𝑑\stackrel{}{k^{}}V(\stackrel{}{k},\stackrel{}{k^{}})\psi _{B_c}(\stackrel{}{k^{}})=0$$
(13)
stemming from the quark-antiquark Bethe-Salpeter equation, in the approximation of an instantaneous interaction represented by the interquark potential $`V`$. Within the model in , $`V`$ is chosen as the Richardson potential , which in $`r`$-space reads:
$$V(r)=\frac{8\pi }{332n_f}\mathrm{\Lambda }\left[\mathrm{\Lambda }r\frac{f(\mathrm{\Lambda }r)}{\mathrm{\Lambda }r}\right],$$
(14)
with $`\mathrm{\Lambda }`$ a parameter, $`n_f`$ the number of active flavours, and the function $`f(t)`$ given by:
$$f(t)=\frac{4}{\pi }_0^{\mathrm{}}𝑑q\frac{sin(qt)}{q}\left[\frac{1}{ln(1+q^2)}\frac{1}{q^2}\right].$$
(15)
The linear increase of the potential (14) at large distances provides QCD confinement; at short distances the potential behaves as $`\frac{\alpha _s(r)}{r}`$, with $`\alpha _s(r)`$ logarithmically decreasing with the distance $`r`$, thus reproducing the asymptotic freedom property of QCD. A smearing procedure at short distances is also adopted, to account for effects of the relativistic kinematics . Finally, spin interaction effects are neglected since in the case of heavy mesons the chromomagnetic coupling is of the order of the inverse heavy quark masses.
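The confining slope can be read off directly. Assuming the subtraction term $`f(\mathrm{\Lambda }r)/(\mathrm{\Lambda }r)`$ in eq.(14) vanishes at large $`r`$, the potential becomes linear with string tension $`\sigma =8\pi \mathrm{\Lambda }^2/(332n_f)`$. The quick evaluation below uses the value $`\mathrm{\Lambda }=397`$ MeV quoted later in the text; $`n_f=3`$ light flavours is an assumption, since the text leaves $`n_f`$ unspecified.

```python
import math

Lambda_QCD = 0.397   # GeV, the parameter Lambda fitted in the text
n_f = 3              # active flavours (assumed here; not quoted in the text)

# Large-r limit of the Richardson potential, eq.(14): V(r) ~ sigma * r,
# assuming f(Lambda r)/(Lambda r) -> 0 at large r.
sigma = (8.0 * math.pi / (33.0 - 2.0 * n_f)) * Lambda_QCD ** 2   # GeV^2

hbar_c = 0.19733                      # GeV fm
sigma_gev_per_fm = sigma / hbar_c     # ~0.74 GeV/fm, a typical confining slope
```

The resulting $`\sigma 0.15`$ GeV<sup>2</sup> is in the usual ballpark of phenomenological string tensions, which is one reason the Richardson form works for quarkonium spectra.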
The interest of eq.(13) is that relativistic effects are taken into account at least in the quark kinematics. Therefore, the same wave-equation can be used to study heavy-light as well as heavy-heavy quark systems, and one case can be obtained from the other by continuously varying a parameter, the $`u(c)`$ quark mass. From the analysis of the solutions, a comparison between the two cases, the heavy-light and the heavy-heavy quark meson systems, can be meaningfully performed.
Eq.(13) can be solved by numerical methods, as described in , fixing the masses $`m_c`$ and $`m_b`$ of the constituent quarks, together with the parameter $`\mathrm{\Lambda }`$, in such a way that the charmonium and bottomonium spectra are reproduced. The chosen values for the parameters are: $`m_b=4.89`$ GeV and $`m_c=1.452`$ GeV, with $`\mathrm{\Lambda }=397`$ MeV . A fit of the heavy-light meson masses also fixes the values of the constituent light-quark masses: $`m_u=m_d=38`$ MeV . For the $`b\overline{c}`$ system all the input parameters required in (13) are fixed from the analysis of other channels, and the predictions do not depend on new external quantities. The numerical solution of (13) produces the spectrum of the $`b\overline{c}`$ bound states; the predicted masses of the first three radial $`S`$wave resonances are reported in Table 1.
The spectrum in Table 1 agrees with other theoretical determinations based on constituent quark models . As for QCD sum rules and lattice QCD, the value $`m_{B_c}=6.35`$ GeV was found using two-point function QCD sum rules , whereas a recent lattice QCD analysis predicts $`m_{B_c}=6.388\pm 0.009\pm 0.098\pm 0.015`$ GeV, with the largest error related to the quenched approximation . Within the errors, the mass of the $`B_c`$ meson in Table 1 agrees with the CDF measurement reported in eq.(1).
Also the wave-function $`\psi _{B_c}`$ can be obtained by solving eq.(13). We use the covariant normalization:
$$\frac{1}{(2\pi )^3}𝑑\stackrel{}{k}|\psi _{B_c}(\stackrel{}{k})|^2=2m_{B_c}$$
(16)
and define, in the $`B_c`$ meson rest-frame, the reduced wave-function $`u_{B_c}(k)`$ ($`k=|\stackrel{}{k}|`$):
$$u_{B_c}(k)=\frac{k\psi _{B_c}(k)}{\sqrt{2}\pi }$$
(17)
which is normalized as: $`_0^{\mathrm{}}𝑑k|u_{B_c}(k)|^2=2m_{B_c}`$. In fig.1 the $`B_c`$ meson wave-function $`u_{B_c}(k)`$ is depicted, together with the function $`u_{B_u}`$ of the $`B_u`$ meson.
The meson wave-functions, together with the predictions of the spectrum of the bound states, are the main results of the model; they allow us to calculate hadronic quantities such as the leptonic decay constants and the strong couplings to light mesons . In particular, using $`u_{B_c}(k)`$ depicted in fig.1, we can infer the size of the deviation from the nonrelativistic limit in the Salpeter equation (13). As a matter of fact, the average squared quark momentum $`<k^2>`$ turns out to be $`<k^2>=0.95`$ GeV<sup>2</sup>, and the ratios $`<k^2>/m_c^2=0.43`$ and $`<k^2>/m_b^2=0.04`$. In the case of the $`B_u`$ meson, the average squared quark momentum is $`<k^2>=0.47`$ GeV<sup>2</sup>, to be compared to $`m_u^2=1.4\times 10^3`$ GeV<sup>2</sup>. Therefore, in the case of $`B_c`$ mesons, deviations from the nonrelativistic limit, although small, are not negligible, mainly due to the term related to the charm quark.
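These dimensionless measures of relativistic motion follow directly from the numbers quoted above; the small mismatch with the quoted 0.43 presumably reflects rounding of the quoted inputs.

```python
m_c, m_b, m_u = 1.452, 4.89, 0.038   # GeV, constituent masses fixed in the text
k2_Bc = 0.95                         # GeV^2, <k^2> in the B_c meson
k2_Bu = 0.47                         # GeV^2, <k^2> in the B_u meson

# Dimensionless measures of the deviation from the nonrelativistic limit
ratio_c = k2_Bc / m_c ** 2   # ~0.45 (text quotes 0.43): not negligible for charm
ratio_b = k2_Bc / m_b ** 2   # ~0.04: the b quark is safely nonrelativistic
ratio_u = k2_Bu / m_u ** 2   # ~3e2: the light quark in the B_u is fully relativistic
```

The three ratios quantify the hierarchy stated in the text: nonrelativistic $`b`$, mildly relativistic $`c`$, and a fully relativistic light quark in the $`B_u`$.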
Before coming to the analysis of the radiative leptonic $`B_c`$ decay and to the calculation of the ratio (3), let us evaluate the leptonic constants $`f_{B_c^n}`$ of the $`S`$-wave excitations, defined by matrix elements analogous to (7). The numerical results, obtained from the eigenfunctions of eq.(13), are reported in Table 1. A few comments are in order. The value of $`f_{B_c}`$ in Table 1 is compatible, within the theoretical errors, with the result $`f_{B_c}=360\pm 60`$ MeV obtained from QCD sum rules (the uncertainty related to the input parameters of the potential model is estimated at $`\pm 15\%`$ for the leptonic constants ). On the other hand, the leptonic constants in Table 1 are smaller than the outcome of several constituent quark models considered in , based on a purely nonrelativistic description of the $`b\overline{c}`$ system. Notice that the decreasing values of $`f_{B_c^n}`$, for increasing radial number $`n`$, are due to the nodes of the wave-functions of the radial excitations. Finally, the $`B_c`$ leptonic constant turns out to be compatible with a lattice NRQCD determination .
Let us now consider the correlator (6). Writing the state $`|B_c>`$ as in (12) and expanding the T-product according to the Wick theorem, we can express the product of the two currents $`J_\mu (x)V_\nu (0)`$ in terms of quark creation and annihilation operators; then, exploiting the anticommutation relations between such operators, we can derive an expression for $`\mathrm{\Pi }_{\mu \nu }`$ in terms of the $`B_c`$ wave function, $`u_{B_c}`$, which is analogous to the expression reported in for the $`B_u`$ decay.
It is useful to relate the invariant functions in (9) to the various components of the hadronic tensor $`\mathrm{\Pi }_{\mu \nu }`$. In the $`B_c`$ rest-frame $`p=(m_{B_c},\stackrel{}{0})`$ and choosing $`k^\mu =(k^0,0,0,k^0)`$ one gets from eq.(6):
$`\zeta (k^0)`$ $`=`$ $`{\displaystyle \frac{1}{m_{B_c}k^0}}(\mathrm{\Pi }_{30}\mathrm{\Pi }_{33}+\mathrm{\Pi }_{11})`$
$`\alpha (k^0)`$ $`=`$ $`{\displaystyle \frac{1}{m_{B_c}^2}}(\mathrm{\Pi }_{00}+\mathrm{\Pi }_{33}\mathrm{\Pi }_{30}\mathrm{\Pi }_{03})`$
$`\eta (k^0)`$ $`=`$ $`i{\displaystyle \frac{1}{m_{B_c}k^0}}\mathrm{\Pi }_{12}(k^0).`$ (18)
Therefore, the condition (10) ensuring gauge invariance can be written as
$$ief_{B_c}=\frac{k^0}{m_{B_c}}\left(\mathrm{\Pi }_{00}+\mathrm{\Pi }_{33}\mathrm{\Pi }_{30}\mathrm{\Pi }_{03}\right)+\mathrm{\Pi }_{30}\mathrm{\Pi }_{33},$$
(19)
a condition that must be checked in our analysis. The explicit calculation, using the expression of $`\mathrm{\Pi }_{\mu \nu }`$ in terms of the meson wave-function , shows that eq.(19) is verified provided that the leptonic constant $`f_{B_c}`$ is given by
$$f_{B_c}=\sqrt{3}\frac{1}{2\pi m_{B_c}}_0^{\mathrm{}}𝑑k^{}k^{}u_{B_c}(k^{})\left[\frac{(E_b+m_b)(E_c+m_c)}{E_bE_c}\right]^{\frac{1}{2}}\left[1\frac{k^2}{(E_b+m_b)(E_c+m_c)}\right]$$
(20)
with $`E_{b,c}=\sqrt{k^2+m_{b,c}^2}`$. Indeed, eq.(20) is the expression for $`f_{B_c}`$ obtained in the framework of the constituent quark model , hence the gauge invariance property of the amplitude (5) is preserved in our calculation. <sup>3</sup><sup>3</sup>3In the case of the radiative leptonic $`B_u`$ decay, the contribution proportional to $`f_B`$ turns out to be numerically negligible.
Having checked the property of gauge invariance, we can simply compute the decay width of the mode (4) and the photon energy distribution. Notice that we compute the photon energy spectrum for a photon energy larger than $`100`$ MeV, which represents the photon energy resolution we assume in our analysis.
The expression for the width of the decay (4), considering massless leptons in the final state ($`m_{\mathrm{}}0`$), is
$$\mathrm{\Gamma }(B_c\mathrm{}\nu \gamma )=\frac{G_F^2|V_{cb}|^2}{3(2\pi )^3}_0^{m_{B_c}/2}𝑑k^0k^0(m_{B_c}2k^0)[|\mathrm{\Pi }_{11}+ief_{B_c}|^2+|\mathrm{\Pi }_{12}|^2].$$
(21)
Using the data in (1),(2), together with $`V_{cb}=0.04`$, we predict:
$$\mathrm{\Gamma }(B_c\mathrm{}\nu \gamma )=4\times 10^{-17}\mathrm{GeV},(B_c\mathrm{}\nu \gamma )=3\times 10^{-5}.$$
(22)
The energy spectrum of the photon, computed using eq.(21), is depicted in fig.2; it looks symmetric with respect to the point $`E_\gamma =m_{B_c}/4`$, as observed also in .
The result (22) confirms that the rate of the radiative leptonic $`B_c`$ decays into muons (and electrons) is sizeable, and could be accessed at the future hadronic machines, such as LHC.
Let us now consider the ratio (3). In order to determine it, we need the width of the purely leptonic $`B_c`$ decay into electrons and muons, given by
$$\mathrm{\Gamma }(B_c\mathrm{}\nu )=\frac{G_F^2}{8\pi }|V_{cb}|^2f_{B_c}^2\left(\frac{m_{\mathrm{}}}{m_{B_c}}\right)^2m_{B_c}^3\left(1\frac{m_{\mathrm{}}^2}{m_{B_c}^2}\right)^2,$$
(23)
which depends on $`f_{B_c}`$. Using the value in Table 1 we obtain:
$$\mathrm{\Gamma }(B_c\mu \nu )=1.1\times 10^{-16}\mathrm{GeV},(B_c\mu \nu )=7.8\times 10^{-5},$$
(24)
corresponding to $`R_\mu ={\displaystyle \frac{\mathrm{\Gamma }(B_c\mu \nu \gamma )}{\mathrm{\Gamma }(B_c\mu \nu )}}0.4`$. This result must be compared to the value $`R_\mu 0.8`$ obtained using eq.(3) , and suggests that the corrections to the nonrelativistic limit, although small, play a role in modifying the prediction (3). In any case, the widths of the radiative leptonic and the purely leptonic $`B_c`$ transitions are still comparable, a result very different from the case of radiative leptonic $`B_u`$ decays, which are enhanced with respect to the purely leptonic decay modes into muons and electrons. In the latter case, the helicity suppression, displayed by the factor $`\left(\frac{m_{\mathrm{}}}{m_{B_q}}\right)^2`$ in (23) and producing very small decay rates, is efficiently evaded by the presence of the photon in the final state also in the case of muons, so that the branching fraction of the radiative leptonic $`B_u`$ modes into muons turns out to be enhanced by one order of magnitude with respect to the purely leptonic muon decay .
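Eq.(23) and the quoted numbers can be cross-checked. In the sketch below $`G_F`$, the muon mass, $`\mathrm{}`$, and the CDF mass and lifetime are standard or measured inputs; $`f_{B_c}=0.43`$ GeV is an assumed illustrative value (the Table 1 entry is not reproduced in this excerpt), chosen to be consistent with the quoted widths; and the radiative width is reconstructed from the quoted branching ratio of about $`3\times 10^{-5}`$ together with the measured lifetime.

```python
import math

G_F  = 1.16637e-5    # GeV^-2, Fermi constant
V_cb = 0.04          # CKM element, as used in the text
f_Bc = 0.43          # GeV; assumed illustrative value (Table 1 entry not shown here)
m_Bc = 6.40          # GeV, eq.(1)
m_mu = 0.10566       # GeV, muon mass
tau  = 0.46e-12      # s, B_c lifetime, eq.(2)
hbar = 6.5821e-25    # GeV s

# Purely leptonic width, eq.(23); note the helicity-suppression factor (m_l/m_Bc)^2
x = (m_mu / m_Bc) ** 2
Gamma_munu = (G_F ** 2 / (8.0 * math.pi)) * V_cb ** 2 * f_Bc ** 2 \
             * x * m_Bc ** 3 * (1.0 - x) ** 2       # ~1.1e-16 GeV

BR_munu = Gamma_munu * tau / hbar                   # ~7.8e-5, cf. eq.(24)

# Radiative width reconstructed from the quoted branching ratio ~3e-5
Gamma_rad = 3.0e-5 * hbar / tau
R_mu = Gamma_rad / Gamma_munu                       # ~0.4, the relativistic-model ratio
```

With these inputs the purely leptonic width, its branching fraction, and the ratio $`R_\mu 0.4`$ are all reproduced consistently.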
As for the decay into electrons, we predict $`(B_ce\nu _e)2\times 10^{-9}`$.
The decay mode $`B_c\mathrm{}\nu \gamma `$ has been investigated, as mentioned above, in the framework of the nonrelativistic quark model (NRQM) , and also using the light front quark model (LFQM) and light-cone QCD sum rules (LCSR) . In Table 2 we report the various predictions, obtained using the value of $`f_{B_c}`$ reported in Table 1. We also report the estimates of the ratio
$$r=\frac{N_{B_u}}{N_{B_c}}=\frac{(B_u\mathrm{}\nu \gamma )}{(B_c\mathrm{}\nu \gamma )}$$
(25)
which represents the relative fraction of the final state $`\mathrm{}\nu \gamma `$ coming from the different sources $`B_u`$ and $`B_c`$.
Our small result, $`r=0.03`$, means that a large number of $`\mathrm{}\nu \gamma `$ final states should come from $`B_c`$ decays; the actual fraction of radiative leptonic final states is obtained by multiplying eq.(25) by the probabilities of producing $`B_u`$ and $`B_c`$ mesons from $`b`$ quarks.
Let us conclude our analysis, based on a QCD-inspired relativistic constituent quark model, by observing that the quark dynamics inside the $`B_c`$ meson can modify the prediction (3). The range $`[0.40.8]`$ for the ratio (3) can therefore be interpreted as the theoretical uncertainty on this quantity. The measurement of the radiative leptonic $`B_c`$ decay rate, together with the purely leptonic rate, although challenging from the experimental viewpoint, is one of the expected results in the LHC era.
Acknowledgments
We thank G. Nardulli and N. Paver for discussions.
Figure 1: $`\mathrm{{\rm Y}}`$ meson dissociation time in a quark-gluon plasma as a function of temperature.
Upsilon ($`\mathrm{{\rm Y}}`$) Dissociation in Quark-Gluon Plasma
Sidi Cherkawi Benzahra<sup>1</sup><sup>1</sup>1benzahra@physics.spa.umn.edu
School of Physics and Astronomy
University of Minnesota
Minneapolis, MN 55455
## Abstract
I consider the dissociation of $`\mathrm{{\rm Y}}`$ due to absorption of a thermal gluon. I discuss the dissociation rate in terms of the energy density, the number density, and the temperature of the quark-gluon plasma. I compare this to the effect due to screening.
When the bottom quark or antiquark is struck by a high-energy gluon, the upsilon meson can dissociate. The medium, the quark-gluon plasma, is full of gluons that can cause this dissociation by exciting the color-singlet $`b\overline{b}^{(1)}`$ into a color-octet continuum state. The bottom quark absorbs energy from the gluon field. When the bottom quark and antiquark are close to each other, asymptotic freedom comes into play, and the binding energy can be derived in the same way as for the hydrogen atom; to a good approximation there is a close parallel between the two problems . For the singlet state the energy is:
$$E=\frac{4}{9}\alpha _s^2m_Q,$$
(1)
and the radius is:
$$a=\frac{3}{2\alpha _sm_Q}.$$
(2)
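For orientation, eqs.(1) and (2) can be evaluated numerically. Both $`\alpha _s0.4`$ and $`m_Q4.9`$ GeV below are illustrative values assumed here; the text does not quote the inputs behind its 850 MeV threshold.

```python
import math

alpha_s = 0.4      # illustrative strong coupling at the Upsilon scale (assumption)
m_Q = 4.9          # GeV, illustrative bottom constituent mass (assumption)
hbar_c = 0.19733   # GeV fm

# Hydrogen-like binding energy and Bohr radius of the singlet state, eqs.(1)-(2)
E = (4.0 / 9.0) * alpha_s ** 2 * m_Q    # ~0.35 GeV binding energy
a = 3.0 / (2.0 * alpha_s * m_Q)         # Bohr radius in GeV^-1
a_fm = a * hbar_c                       # ~0.15 fm, much smaller than a light hadron
```

The sub-fermi radius is what justifies the claim that a thermal gluon's wavelength at the dissociation threshold fits the singlet state.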
The wavelength of the gluon that dissociates this state fits the radius of the singlet state of $`\mathrm{{\rm Y}}`$ to a good approximation. The S matrix in this case is:
$$S_{fi}=i_{\mathrm{}}^{\mathrm{}}𝑑t\mathrm{octet}grE^a\mathrm{cos}\theta \mathrm{singlet}$$
(3)
where $`\mathrm{E}^\mathrm{a}`$ is the color electric field. Taking the singlet state to be a 1s-wavefunction and the octet to be a plane wave, and using the dipole moment matrix, the calculation of the S matrix in terms of the relative momentum of the $`b\overline{b}^{(8)}`$ pair can be tedious, but straightforward. The ionization of the hydrogen atom by electromagnetic radiation leads to a similar S matrix .
$$S_{fi}=\frac{32\pi gE^a(\omega )ka^5\mathrm{cos}\theta }{\sqrt{6}\sqrt{\pi a^3V}(1+k^2a^2)^3}.$$
(4)
Here $`\omega `$ is the difference between the octet state energy and the singlet state energy, k is the relative momentum of the bottom quark in the octet state, a is the radius of the singlet state, and V is the quantization volume. Assuming color neutrality of the medium
$$E_i^a(\omega )E_j^b(\omega )=\frac{1}{24}\delta _{ij}\delta _{ab}E(\omega )^2.$$
(5)
Using Fermi’s Golden Rule, the transition rate is:
$$R_{fi}=2\pi \rho (k)\mathrm{octet}grE\mathrm{cos}\theta \mathrm{singlet}^2$$
(6)
where
$$\rho (k)=\frac{m_QVk\mathrm{sin}\theta d\theta d\varphi }{8\pi ^3}.$$
(7)
Integrating over k, the transition probability becomes
$$P_{fi}=\frac{2}{3}\pi \alpha _sa^2E(\omega )^2.$$
(8)
There is a threshold energy of about 850 MeV for the dissociation of the upsilon meson into two highly energetic bottom quarks. Only gluons with energy exceeding $`\omega _{min}`$= 850 MeV can dissociate the upsilon. So the relevant energy density is not simply the average energy density, but that of the gluons with energy above the threshold. In deconfined matter, such as the quark-gluon plasma, gluons in a medium at a temperature of 200 MeV are expected to have an average momentum of 600 MeV . Some higher-energy gluons therefore exceed the 850 MeV threshold and can dissociate the upsilon.
Dividing both sides of eq.(8) by the total interaction time, $`\tau _{fi}`$, I get
$$\mathrm{\Gamma }_{\mathrm{dis}}=\frac{2}{3}\pi a^2\alpha _s\frac{E(\omega )^2}{\tau _{fi}},$$
(9)
where $`E(\omega )^2/\tau _{fi}`$ is the color-electric power density of the medium, which can be evaluated analytically for a variety of models. One such model is a dilute gas of color charges. Denoting the Casimir of the color charge by $`Q^2`$ in this model, it is found that
$$\frac{E(\omega )^2}{\tau _{fi}}\frac{\pi }{2}\alpha _sQ^2\stackrel{~}{\rho }(w),$$
(10)
where $`\stackrel{~}{\rho }(w)`$ is the weighted average of the density of charges in the medium. Combining eqs.(9) and (10), and introducing the number density of gluons, the dissociation rate of the upsilon meson becomes :
$$\mathrm{\Gamma }_{dis}\frac{8}{9}\pi ^3\alpha _s^2a^2n.$$
(11)
This dissociation rate can be calculated in terms of the temperature of the quark-gluon plasma. But I will only include the gluons with energy exceeding the threshold energy of dissociation, $`\omega _{min}`$. For a medium of gluons
$$n=N/V=\frac{1}{2\pi ^2}_{\omega _{min}}^{\mathrm{}}𝑑\omega \frac{\omega ^2}{\mathrm{exp}(\omega /T)1},$$
(12)
which gives us
$$\tau _{dis}\frac{m_Q^2}{\pi _{k=1}^{\mathrm{}}\left[\left(\frac{T}{k}\right)\omega _{\mathrm{min}}^2+2\left(\frac{T}{k}\right)^2\omega _{\mathrm{min}}+2\left(\frac{T}{k}\right)^3\right]e^{k\omega _{\mathrm{min}}/T}}.$$
(13)
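The closed form in eq.(13) follows from expanding the Bose factor as a geometric series, $`1/(e^{\omega /T}1)=_ke^{k\omega /T}`$, and integrating $`\omega ^2e^{k\omega /T}`$ term by term. The sketch below checks this series against direct numerical integration of eq.(12) (Simpson's rule; the common factor $`1/2\pi ^2`$ is omitted on both sides, and the temperature and threshold are the illustrative values discussed in the text).

```python
import math

T, w_min = 0.5, 0.85   # GeV: illustrative temperature and threshold energy

# Series form: sum_k exp(-k w_min/T) [T w_min^2/k + 2 T^2 w_min/k^2 + 2 T^3/k^3]
series = sum(math.exp(-k * w_min / T)
             * (T * w_min ** 2 / k + 2 * T ** 2 * w_min / k ** 2 + 2 * T ** 3 / k ** 3)
             for k in range(1, 200))

# Direct integral of w^2/(exp(w/T)-1) from w_min upward
# (the integrand dies off exponentially, so a finite upper limit suffices)
def f(w):
    return w ** 2 / (math.exp(w / T) - 1.0)

a, b, n = w_min, w_min + 60.0 * T, 20000          # n even, for composite Simpson
h = (b - a) / n
integral = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
integral *= h / 3.0
```

The two evaluations agree to high precision, confirming the bracketed sum that appears in the denominator of eq.(13).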
The minimum temperature required to achieve deconfinement is generally understood to be about 150 to 200 MeV. RHIC, the Relativistic Heavy Ion Collider, is the first collider designed specifically to create this plasma. It may reach a temperature of 500 MeV. CERN, on the other hand, hopes to reach a temperature of 1 GeV by colliding heavy nuclei in the Large Hadron Collider. Inserting a temperature of 500 MeV into eq.(13), I get a dissociation time of 4 fm/c, which is comparable to the life span of the quark-gluon plasma (a typical life span is about 2 to 5 fm/c ). If I use the CERN 1 GeV temperature, I get $`\tau _{\mathrm{dis}}0.55\text{fm/c}`$. See figure 1.
In addition to the dissociation of upsilon by gluons, there is another kind of dissociation which is caused by the screening of the color charges of the quarks in the medium . In the high temperature deconfined phase, the bottom-antibottom free energy $`V_{b\overline{b}}`$, which is the Debye potential with inverse screening length $`m_{el}`$, is given by
$$V_{b\overline{b}}=\frac{4}{3}\frac{\alpha _s}{r}e^{m_{el}r}.$$
(14)
Using a variational calculation with an exponential trial wavefunction, $`Ae^{r/a_T}`$, I look for a critical value of $`m_{el}`$ where the upsilon meson is no longer bound. I find this critical $`m_{el}`$ to be
$$m_{el}=\frac{2}{3}\alpha _sm_Q.$$
(15)
Here I use $`\alpha _s(m_b)=0.2325\pm 0.0044`$. Inserting $`m_{el}`$ in the equation
$$m_{el}^2=\frac{1}{3}g^2(N+\frac{N_f}{2})T^2,$$
(16)
where N=3 from the SU(N) group and $`N_f=3`$ is the number of light flavors, and using the temperature-dependent coupling constant as given by
$$\frac{g^2}{4\pi }=\frac{6\pi }{27\mathrm{ln}\left(T/50\mathrm{MeV}\right)}$$
(17)
I find that the ground state of upsilon is unbound at a temperature of T=250 MeV. Above this temperature, the effect of screening does not allow the upsilon meson to exist in a 1s state.
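The screening estimate can be reproduced by a simple root search on eqs.(15)-(17). The constituent bottom mass is not quoted in the text, so $`m_b4.5`$ GeV below is an assumption chosen for illustration; with it, the dissociation temperature comes out close to the quoted 250 MeV.

```python
import math

alpha_s = 0.2325     # alpha_s(m_b), as quoted in the text
m_b = 4.5            # GeV; illustrative constituent mass (assumption, not quoted)
N, N_f = 3, 3        # colors and light flavours, as in the text

m_el_crit = (2.0 / 3.0) * alpha_s * m_b          # eq.(15): critical screening mass

def m_el_sq(T):
    """Screening mass squared from eqs.(16)-(17), with T in GeV."""
    g2 = 4.0 * math.pi * 6.0 * math.pi / (27.0 * math.log(T / 0.050))
    return (g2 / 3.0) * (N + N_f / 2.0) * T ** 2

# m_el_sq(T) is monotonically increasing on this bracket, so bisect for the
# temperature at which the screening mass reaches the critical value
lo, hi = 0.10, 0.60
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if m_el_sq(mid) < m_el_crit ** 2:
        lo = mid
    else:
        hi = mid
T_dis = 0.5 * (lo + hi)   # ~0.24 GeV: the 1s Upsilon is unbound above this T
```

Above this temperature the Debye-screened potential of eq.(14) no longer supports a bound 1s state in the variational estimate.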
According to the above results, the dissociation time of the upsilon meson is comparable to the life span of the quark-gluon plasma. If we calculate the dissociation time for the $`J/\psi `$ using the same method, we find it to be even shorter than that of the $`\mathrm{{\rm Y}}`$. The mechanism works even better for the $`J/\psi `$ meson because its quark, c, is light compared to the b quark of the $`\mathrm{{\rm Y}}`$, and the binding energy of the $`J/\psi `$ is smaller than that of the $`\mathrm{{\rm Y}}`$. Screening plays the major role when the temperature of the quark-gluon plasma exceeds 250 MeV. If there is a suppression of upsilon mesons, it is more likely due to the screening effect than to the absorption of, or collisions with, high-energy gluons. I conclude that high-energy gluons and, above all, the effect of screening are the essential agents of the dissociation of the upsilon meson in the quark-gluon plasma.
Acknowledgment
I am indebted to Benjamin Bayman, Mohamed Belkacem, Paul Ellis, Joseph Kapusta, and Stephen Wong for their help and advice on this paper.
References
B. Müller, preprint nucl-th/9806023 v2, 14 July 1998 (unpublished).
L. I. Schiff, Quantum Mechanics (McGraw-Hill, New York, 1968).
D. Kharzeev and H. Satz, Physics Letters B 334, 155 (1994).
S. A. Bass et al., preprint nucl-th/9902055, 22 February 1999.
T. Matsui and H. Satz, Physics Letters B 178, 416 (1986).
J. I. Kapusta, Finite-Temperature Field Theory (Cambridge University Press, Cambridge, 1989).
M. Jamin and A. Pich, preprint hep-ph/9702276, 6 February 1997.
# The Nature of Boxy/Peanut-Shaped Bulges in Spiral Galaxies
## 1 Introduction
Many spiral galaxies display boxy or peanut-shaped bulges when viewed edge-on. Unfortunately, statistical studies of the incidence of these objects have not used objective criteria to quantify the boxiness of the bulges, and they are therefore hard to compare and yield moderately different results. Nevertheless, it seems clear that at least 20-30% of edge-on spirals possess a boxy/peanut-shaped (hereafter B/PS) bulge (see Jarvis 1986; Shaw 1987; de Souza & dos Anjos 1987). Spiral galaxies with a B/PS bulge are therefore a significant class of objects.
The fact that the boxy/peanut shape is seen only in edge-on spirals indicates that the shape is related to the vertical distribution of light. Compared to the usual $`R^{1/4}`$ light distribution of spheroids, B/PS bulges have excess light above the plane at large galactocentric radii (see Shaw 1993). Furthermore, the three-dimensional (hereafter 3D) light distribution of B/PS bulges must be even more extreme than their projected surface brightness (see, e.g., Binney & Petrou 1985 for axisymmetric models). B/PS bulges also appear to rotate cylindrically, i.e. their rotation is independent of the height above the plane (e.g. Kormendy & Illingworth 1982; Rowley 1986).
Several models have been proposed to explain the structure of B/PS bulges (e.g. Combes & Sanders 1981; May, van Albada, & Norman 1985; Binney & Petrou 1985). However, despite their prevalence and interesting structural and dynamical properties, B/PS bulges remain poorly studied observationally, probably because the edge-on projection makes the interpretation of observational data difficult.
In this paper, we present new kinematical data for a large sample of spiral galaxies with a B/PS bulge. Our main goals are to determine their 3D structure and to identify their likely formation mechanism. We use our data to show that B/PS bulges are simply thick bars viewed edge-on.
In § 2, we discuss the two main scenarios proposed for the formation of B/PS bulges – accretion of satellite galaxies and buckling of a bar. We describe ways of discriminating between the two scenarios using kinematical data in § 3. The observations are presented in § 4 and the results discussed at length in § 5. We conclude in § 6.
## 2 Formation Mechanisms
### 2.1 Accretion
Accretion of external material such as satellite galaxies was the early favoured mechanism to explain the formation of B/PS bulges in spiral galaxies. Binney & Petrou (1985) and Rowley (1986, 1988) showed that it is possible to construct axisymmetric cylindrically rotating B/PS bulges from relatively simple distribution functions. Binney & Petrou adopted a distribution function with a third integral of motion, in addition to the energy $`E`$ and angular momentum along the symmetry axis $`L_z`$. This integral favoured orbits reaching a particular height above the plane. Rowley adopted a two-integral distribution function, with a truncation depending on both $`E`$ and $`L_z`$. Both authors argued that the required distribution functions can form naturally through the accretion of material onto a host spiral galaxy. Binney & Petrou (1985) additionally argued that, for the accreted material to form a B/PS bulge, the velocity dispersion of the satellite must be much lower than its orbital speed, and the decay timescale must be much longer than the orbital time. This ensures that the accreted material stays clustered in phase space.
Accretion scenarios face several problems. At one extreme, one can consider the accretion of several small satellite galaxies. However, only a narrow range of orbital energy and angular momentum can lead to the formation of a B/PS bulge, so it is improbable that many satellite galaxies would all share these properties. Remaining satellites should then be present, but they are not observed (Shaw 1987). The accretion of a single large companion is also ruled out by the large velocity dispersion and small decay timescale involved (Binney & Petrou 1985). Similarly, the merger of two spiral galaxies of similar sizes (or of a spiral and a small elliptical) seems an unlikely route to form B/PS bulges. This would require a fairly precise alignment of the spins and orbital angular momenta of the two galaxies. We recall that about a third of all spiral galaxy bulges would have to be formed this way.
From the arguments presented above, it seems that the accretion of a small number of moderate-sized satellites is the only accretion scenario which may lead to the formation of B/PS bulges. In favour of accretion is the fact that the best examples of B/PS bulges are found in small groups (e.g. NGC 128, ESO 597- G 036). In addition, the possibly related X-shaped galaxies can form through the accretion of a satellite galaxy (e.g. Whitmore & Bell 1988; Hernquist & Quinn 1989). On the other hand, no evidence of accretion (arcs, shells, filaments, etc.) was detected by Shaw (1993) near spirals with a B/PS bulge, which argues against any kind of accretion. Furthermore, spiral galaxies with a B/PS bulge are not found preferentially in clusters (Shaw 1987). We are thus led to the conclusion that, while accretion of external material may play a role in the formation of some B/PS bulges, it is unlikely to be the primary formation mechanism.
### 2.2 Bar-Buckling
Bars can form naturally in $`N`$-body simulations of rotationally supported stellar disks (e.g. Sellwood 1981; Athanassoula & Sellwood 1986), due to global bisymmetric instabilities (e.g. Kalnajs 1971, 1977). Based on 3D $`N`$-body simulations, Combes & Sanders (1981) were the first to suggest that B/PS bulges may arise from the thickening of bars in the disks of spiral galaxies. Their results were confirmed and their suggestion supported by later works (see, e.g., Combes et al. 1990; Raha et al. 1991). In short, the simulations show that, soon after a bar develops, it buckles and settles with an increased thickness and vertical velocity dispersion, appearing boxy-shaped when seen end-on and peanut-shaped when seen side-on. The B/PS bulges so formed are cylindrically rotating, as required.
Toomre (1966) first considered the buckling of disks, in highly idealised models, and found that, if the vertical velocity dispersion in a disk is less than about a third of the velocity dispersion in the plane, the disk will be unstable to buckling modes (fire-hose or buckling instability; see also Fridman & Polyachenko 1984; Araki 1985). Bar formation in a disk makes the orbits within the bar more eccentric without much affecting their perpendicular motions, thereby providing a natural mechanism for the bar to buckle. Resonances between the rotation of the bar and the vertical oscillations of the stars can also make the disk vertically unstable (see Pfenniger 1984; Combes et al. 1990; Pfenniger & Friedli 1991). Irrespective of exactly how a bar buckles, the final shape of the bar is probably due to orbits trapped around the 2:2:1 periodic orbit family (see, e.g., Pfenniger & Friedli 1991).
Bar-buckling is the currently favoured mechanism for the formation of B/PS bulges. In particular, it provides a natural way to form B/PS bulges in isolated spiral galaxies, which accretion scenarios are unable to do. A number of facts also suggest a connection between (thick) bars and B/PS bulges, although they do not support bar-buckling directly. In particular, the fraction of edge-on spirals possessing a B/PS bulge is similar to the fraction of strongly barred spirals among face-on systems ($`30\%`$; see, e.g., Sellwood & Wilkinson 1993; Shaw 1987). In a few cases, a bar can also be directly associated with a B/PS bulge from morphological arguments. NGC 4442 is such an example (Bettoni & Galletta 1994).
The main observational problem faced by the bar-buckling mechanism is that B/PS bulges seem to be shorter (relative to the disk diameter) than real bars or the strong bars formed in $`N`$-body simulations. However, this might simply be due to projection (a given surface brightness level being reached at a smaller radius in a more face-on disk) and no proper statistics have yet been compiled. The long term secular evolution of bars is also poorly understood. For example, it is known that bars can transfer angular momentum to a spheroidal component very efficiently (e.g. Sellwood 1980; Weinberg 1985). On the other hand, Debattista & Sellwood (1998) showed that, while a bar can be slowed down, it continues to grow and the boxy/peanut shape is preserved. It is less clear what happens if a bar is strongly perturbed. However, Norman, Sellwood, & Hasan (1996) showed that, if a bar is destroyed through the growth of a central mass concentration (e.g. Hasan & Norman 1990; Friedli & Benz 1993), the boxy/peanut shape is also destroyed.
The merits of the bar-buckling mechanism significantly outweigh these potential problems, and the bar-buckling scenario to form B/PS bulges remains largely unchallenged at the moment, despite very little observational support. The main aim of this paper is to test this mechanism, by looking for bars in a large sample of spiral galaxies with a B/PS bulge. Although this stops short of a direct verification of buckling, it does test directly for a possible relationship between bars and B/PS bulges. We will come back to this distinction in § 5.
Although it is not part of our program, the Milky Way is a primary example of such a galaxy. The Galactic bulge is boxy-shaped (Weiland et al. 1994) and it is now well established that it is bar-like (e.g. Binney et al. 1991; Paczyński et al. 1994; see Kuijken 1996 for a brief review of the subject).
We note that “hybrid” scenarios have also been proposed to explain the formation of B/PS bulges, and have gathered recent support from statistical work on the environment of B/PS bulges by Lütticke & Dettmar (1998). An interaction or merger can excite (or accelerate) the development of a bar in a disk which is stable (or quasi-stable) against bar formation (e.g. Noguchi 1987; Gerin, Combes, & Athanassoula 1990; Miwa & Noguchi 1998). Bars formed this way are then free to evolve to a boxy/peanut shape in the manner described above (see, e.g., Mihos et al. 1995), and the bulges thus formed owe as much to interactions as they do to the bar-buckling instability. However, the shape of the bulges is due to the buckling of the bars and interaction is merely a way to start the bar formation process. In that sense, hybrid scenarios for the formation of B/PS bulges are really bar-buckling scenarios, and the possible accretion of material is not directly related to the issue of the bulges’ shape.
## 3 Observational Diagnostics
Our main goal with the observations presented in this paper is to look for the presence of a bar in the disk of spiral galaxies possessing a B/PS bulge. There is no straightforward photometric way to identify a bar in an edge-on spiral. The presence of a plateau in the major-axis light profile of the disk has often been invoked as an indicator of a bar (e.g. de Carvalho & da Costa 1987; Hamabe & Wakamatsu 1989). However, axisymmetric or quasi-axisymmetric features (e.g. a lens) can be mistaken for a bar and end-on bars may remain undetected. Kuijken & Merrifield (1995; see also Merrifield 1996) were the first to demonstrate that bars could be identified kinematically in external edge-on spiral galaxies. They showed that periodic orbits in a barred galaxy model produce characteristic double-peaked line-of-sight velocity distributions when viewed edge-on. This gives their modeled spectra a spectacular “figure-of-eight” (or X-shaped) appearance, which they were able to observe in the long-slit spectra of the B/PS systems NGC 5746 and NGC 5965 (see Fig. 6 for examples). Their approach is analogous to that used in the Galaxy with longitude-velocity diagrams (e.g. Peters 1975; Mulder & Liem 1986; Binney et al. 1991).
Bureau & Athanassoula (1999, hereafter BA99) refined the dynamical theory of Kuijken & Merrifield (1995). They used periodic orbit families in a barred galaxy model as building blocks to model the structure and kinematics of real galaxies. They showed that the global structure of a position-velocity diagram<sup>1</sup><sup>1</sup>1Position-velocity diagrams (PVDs) show the projected density of material in a system as a function of line-of-sight velocity and projected position. (hereafter PVD) taken along the major axis of an edge-on system is a reliable bar diagnostic, particularly the presence of gaps between the signatures of the different families of periodic orbits. Athanassoula & Bureau (1999, hereafter AB99) produced similar bar diagnostics using hydrodynamical simulations of the gaseous component alone. They showed that, when $`x_2`$ orbits are present (corresponding to the existence of an inner Lindblad resonance (hereafter ILR)), the presence of gaps in a PVD, between the signature of the $`x_2`$-like flow and that of the outer parts of the disk, reliably indicates the presence of a bar. If no $`x_2`$ orbits are present, one must rely on indirect evidence to argue for the presence of a bar (see, e.g., Contopoulos & Grosbøl 1989 for a review of periodic orbits in barred spirals). The gaps are a direct consequence of the shocks which develop in relatively strong bars. These shocks drive an inflow of gas toward the center of the galaxies and deplete the outer (or entire) bar regions (see, e.g., Athanassoula 1992). The simulations of AB99 can be directly compared with the emission line spectra presented here, and will form the basis of our argument.
The models of BA99 and AB99 can also be used to determine the viewing angle with respect to a bar, as the signature present in the PVDs changes with the orientation of the line-of-sight. In particular, when a bar is seen end-on, the $`x_1`$ orbits (and $`x_1`$-like flow, both elongated parallel to the bar) reach very high radial velocities, while the $`x_2`$ orbits (and $`x_2`$-like flow, both elongated perpendicular to the bar) show only relatively low velocities. The opposite is true when a bar is seen side-on. In addition, the presence or absence of $`x_2`$ orbits can somewhat constrain the mass distribution and bar pattern speed of an observed galaxy.
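The qualitative difference between these PVD components can be illustrated with a toy calculation (ours, not from BA99 or AB99). For a transparent, edge-on, axisymmetric disk, gas on a circular orbit at galactocentric radius $`R`$ contributes a line-of-sight velocity $`v=V(R)x/R`$ at projected position $`x`$. A flat rotation curve then yields a broad envelope up to the terminal velocity, whereas solid-body rotation collapses the PVD onto the single line $`v=\mathrm{\Omega }x`$:

```python
def vlos_range(x, rot_curve, R_max=10.0, n=4000):
    """Minimum and maximum line-of-sight velocity at projected position x
    for a transparent, edge-on, axisymmetric disk (toy model)."""
    R0 = max(abs(x), 1e-6)                      # the sight line only pierces R >= |x|
    Rs = [R0 + (R_max - R0) * i / (n - 1) for i in range(n)]
    vs = [rot_curve(R) * x / R for R in Rs]     # circular-orbit projection
    return min(vs), max(vs)

flat = lambda R: 200.0          # km/s, flat rotation curve
solid = lambda R: 20.0 * R      # km/s, solid-body rotation

vmin_f, vmax_f = vlos_range(5.0, flat)    # broad range, peaking at the terminal velocity
vmin_s, vmax_s = vlos_range(5.0, solid)   # a single velocity: v = Omega * x
```

In a real barred disk, the shocks and gas depletion described above remove material from parts of this envelope, producing the characteristic gaps in the observed PVDs.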
We have not developed specific observational criteria to identify past or current accretion of material in the observed galaxies. As discussed in § 2.1, accretion will occur through interactions or merger events. We will take as the signature of such events, and of possible accretion, the presence of irregularities in the observed PVDs, in particular strong asymmetries about the center of an object (see Fig. 6 for examples).
## 4 Observations
### 4.1 The Sample
Our sample of galaxies consists of 30 edge-on spirals selected from the catalogues of Jarvis (1986), Shaw (1987), and de Souza & dos Anjos (1987) (spiral galaxies with a B/PS bulge), and from the catalogue of Karachentsev, Karachentseva, & Parnovsky (1993) (spirals with extreme axial ratios; see also Karachentsev et al. 1993). In order to have enough spatial resolution in the long-slit spectroscopy, but still be able to image the galaxies relatively quickly with a small-field near-infrared (hereafter NIR) camera, we have selected objects with bulges larger than 0$`\stackrel{}{\mathrm{.}}`$6 in diameter and disks smaller than about 7$`\stackrel{}{\mathrm{.}}`$0 (at the 25 B mag arcsec<sup>-2</sup> level). NIR imaging is important to refine the classification of the bulges and to subsequently study the vertical structure of the identified bars. All objects are accessible from the south ($`\delta 15\mathrm{°}`$). Three-quarters (23/30) of the galaxies either have probable companions or are part of a group or cluster. A few of these probably are chance alignments, so it is fair to say that we are not biased either against or for galaxies in a dense environment. We should therefore be able to estimate reliably the importance of accretion in the formation of B/PS bulges.
Of the sample galaxies described above, 80% (24/30) have a boxy or peanut-shaped bulge, while 20% (6/30) have a more spheroidal bulge morphology and constitute a “control” sample. Of the former group, it turned out that 17 galaxies have emission lines extending far enough in the disk to apply the diagnostics developed by BA99 and AB99 with the ionised gas; all galaxies in the control group fulfill this condition. In this paper, we will thus concentrate on a main sample of 17 edge-on spiral galaxies with a B/PS bulge and a comparison sample of 6 edge-on spiral galaxies with more spheroidal bulges. The galaxies in each sample are listed in Tables 1 and 2, respectively, along with information on their properties and environment. The galaxies with no or confined emission are listed in Table 3. For those, stellar kinematics must be used to search for the presence of a bar. We note that the galaxy type listed in Tables 1–3 is not precise to more than one or two morphological types, because of the difficulty of classifying edge-on spirals. The bulge-to-disk ratio is effectively the only criterion left to classify the objects.
Other than the catalogue of Karachentsev et al. (1993), we are not aware of any general catalogue of edge-on spiral galaxies. This makes it difficult to build a large and varied control sample including edge-on spiral galaxies with large bulges (the Karachentsev et al. 1993 catalogue is restricted to galaxies with major to minor axis ratio $`a/b7`$). Such a catalogue would be very useful, and could probably be constructed from an initial list of candidates taken from a catalogue such as the RC3 (de Vaucouleurs et al. 1991), which would then be inspected on survey material.
### 4.2 Observations and Data Reduction
Our spectroscopic data were acquired between December 1995 and May 1997 (total of 39 nights) using the Double Beam Spectrograph on the 2.3 m telescope at Siding Spring Observatory. A $`1752\times 532`$ pixels SITE ST-D06A thinned CCD was used. The observations discussed in this paper were obtained with the red arm of the spectrograph centered on the H$`\alpha `$ $`\lambda 6563`$ emission line. All galaxies were observed using a $`1\stackrel{}{\mathrm{.}}8\times 400\mathrm{}`$ slit aligned with the major axis. For objects with a strong dust lane, the slit was positioned just above it. The spectral resolution is about 1.1 Å FWHM (0.55 Å pixel<sup>-1</sup>) and the spatial scale is 0$`\stackrel{}{\mathrm{.}}`$9 pixel<sup>-1</sup>. These data can be directly compared with the gas dynamical models of AB99.
When no emission line was detected in an object, the red arm of the spectrograph was moved to the Ca II absorption line triplet. The blue arm was always centered on the Mg $`b`$ absorption feature. These data will form the core of a future paper discussing stellar kinematics in the sample galaxies (including the galaxies in Table 3). Total exposure times on both arms ranged from 12000 to 21000 s on each object.
The spectra were reduced using the standard procedure within IRAF. The data were first bias-subtracted, using both vertical and horizontal overscan regions, and then using bias frames. If necessary, the data were also dark-subtracted. The spectra were then flatfielded using flattened continuum lamp exposures, and wavelength-calibrated using bracketing arc lamp exposures for each image. The data were then rebinned to a logarithmic wavelength (linear velocity) scale corresponding to 25 km s<sup>-1</sup> pixel<sup>-1</sup>. The spectra were then corrected for vignetting along the slit using sky exposures. All exposures of a given object were then registered and offset along the spatial axis, corrected to a heliocentric rest frame, and combined. The resulting spectra were then sky-subtracted using source-free regions of the slit on each side of the objects. The sky subtraction was less than perfect in some cases, mainly because of difficulties in obtaining a uniform focus of the spectrograph along the entire length of the slit. This was particularly troublesome for objects like IC 2531 and NGC 5746 which have sizes comparable to that of the slit (see, e.g., Fig. 6a). In order to isolate the emission lines, the continuum emission of the objects was then subtracted using a low-order fit to the data in the spectral direction. The resulting spectra constitute the basis of our discussion in the next section.
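For reference, a constant velocity scale corresponds to a constant step in $`\mathrm{ln}\lambda `$ of $`v/c`$. The sketch below (our own illustration, not the actual reduction script) builds such a grid around H$`\alpha `$ and recovers the quoted dispersion of about 0.55 Å pixel<sup>-1</sup>:

```python
import math

C_KMS = 299792.458              # speed of light, km/s
V_PIX = 25.0                    # velocity width per pixel, km/s
DLN_LAMBDA = V_PIX / C_KMS      # constant step in ln(lambda)

lam_halpha = 6563.0             # H-alpha rest wavelength, Angstroms
grid = [lam_halpha * math.exp(i * DLN_LAMBDA) for i in range(1024)]

step_at_halpha = grid[1] - grid[0]   # ~0.55 Angstrom near H-alpha
```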
We note that in regions with bright continuum emission, like the center of some galaxies, the continuum subtraction can leave high shot noise in the data, which should not be confused with line emission in the grayscale plots of Figure 6. This is the case for example in the spectra of ESO 240- G 011, NGC 1032, and NGC 4703. The effect is perhaps best seen when a bright star is subtracted, such as in the spectra of NGC 2788A or NGC 1032 (see Fig. 6).
We have not extracted rotation curves from our data. This is because the entire two-dimensional spectrum, or PVD, is required to identify the signature of a bar in an edge-on spiral galaxy (see BA99 and AB99). Evidence of interaction and of possible accretion of material is also more easily seen in the PVDs. We present the \[N II\] $`\lambda 6584`$ emission line rather than H$`\alpha `$ because it is not affected by underlying stellar absorption.
### 4.3 Results
We present the emission line spectrum for the sample galaxies which have extended emission only. The PVDs of the galaxies in the main sample and the control sample are shown in Figure 6 and Figure 6, respectively. In order to illustrate the range of galaxy type and bulge morphology in the sample, and to allow connections to be made between bulge morphologies and kinematical features in the disks, each PVD is accompanied by a registered image of the corresponding galaxy (from the Digitized Sky Survey) on the same spatial scale. We discuss here the trends observed across the data set.
The most important trend, and the main result of this paper, is that most galaxies in the main sample show a clear bar signature in their PVD (as described in § 3). Of the 17 galaxies in the main sample of spirals with a B/PS bulge and extended emission lines, we conclude that 14 are barred. In these objects, the PVD clearly shows a strong and steep inner component, associated with an $`x_2`$-like flow, and a slowly-rising almost solid-body component, associated with the outer disk, and joining the flat section of the rotation curve in the outer parts. The two components are separated by a gap, caused by the absence (or low density) of gaseous material with $`x_1`$-like kinematics in the outer bar regions<sup>2</sup><sup>2</sup>2The PVDs of NGC 128 and IC 2531 do not display a bar signature, but Emsellem & Arsenault (1997) and Bureau & Freeman (1997) showed, using other data, that each galaxy harbours a bar.. The best examples of this type of bar signature are seen in the PVD of earlier-type objects, like NGC 2788A, NGC 5746, and IC 5096. However, the signature is still clearly visible in the PVD of galaxies as late as ESO 240- G 011.
In the main sample, only one galaxy, NGC 4469, may be axisymmetric, with no evidence of either a bar or interaction (although it is not possible to rule out an interaction which would have occurred a long time ago, leaving no observable trace). Two galaxies, NGC 3390 and ESO 597- G 036, have a disturbed, strongly asymmetric PVD, which we ascribe to a recent interaction (obvious in the case of ESO 597- G 036). These interactions may have led to the accretion of material. The results for the entire main sample are summarised in Table 1.
Another significant result of this study is that no galaxy in the control sample shows evidence for a bar. Four of the six galaxies appear to be axisymmetric, without evidence for either a bar or interaction, and two galaxies (NGC 5084 and NGC 7123) have a disturbed PVD, indicating they underwent an interaction recently and possibly accreted material. These results are tabulated in Table 2.
## 5 Discussion
### 5.1 The Structure of Boxy/Peanut-Shaped Bulges
The only previous study of this kind was that of Kuijken & Merrifield (1995), who proposed the method and considered two galaxies; Bureau & Freeman (1997) presented preliminary results of the current work. This is thus the first systematic observational study of the relationship between bars and B/PS bulges. In summary, our main result is that, based on new kinematical data, 14 of the 17 galaxies with a B/PS bulge in our sample are barred, and the remaining 3 galaxies show evidence of interaction or may be axisymmetric. None of the 6 galaxies without a B/PS bulge in our sample shows any indication of a bar. This means that most B/PS bulges are due to the presence of a thick bar viewed edge-on, and only a few may be due to the accretion of external material. In addition, the more spheroidal bulges (i.e. non-B/PS) do seem axisymmetric. It appears then that most B/PS bulges are edge-on bars, and that most bars are B/PS when viewed edge-on. However, the small size of the control sample prevents us from making a stronger statement about this converse. There is also a continuum of bar strengths in nature and we would expect to have intermediate cases. The galaxies NGC 3957 and NGC 4703 may represent such objects: one could argue that their PVDs display weak bar signatures, and indeed their bulges are the most flattened in the control sample.
If bars were unrelated to the structure of bulges, we would have expected only about 5 galaxies in the main sample to be strongly barred, and about 2 galaxies in the control sample (about 30% of spirals are strongly barred, Sellwood & Wilkinson 1993). Clearly, our results are incompatible with these expectations. Recent results by Merrifield & Kuijken (1999) also support our conclusions. Based on a smaller sample of northern edge-on spirals, they show clearly that as bulges become more B/PS, the complexity and strength of the bar signature in the PVD also increase.
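The degree of incompatibility can be quantified with a simple binomial estimate (ours, not from the paper): if bulge shape and bars were unrelated and the strong-bar fraction were 30%, finding 14 or more barred galaxies among 17 would be extremely improbable:

```python
import math

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

BAR_FRACTION = 0.3                          # assumed strong-bar fraction among spirals
p_main = binom_tail(17, 14, BAR_FRACTION)   # 14/17 barred in the main sample
p_ctrl = (1.0 - BAR_FRACTION)**6            # 0/6 barred in the control sample
```

Here `p_main` comes out at roughly $`10^5`$, while finding no bars among the six control galaxies is unremarkable (`p_ctrl` $`0.12`$).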
Our association of bars and B/PS bulges supports the bar-buckling mechanism for the formation of the latter. However, we do not test directly for buckling, but rather for the presence of a barred potential in the plane of the disks. Because bars form rapidly and buckle soon after, on a timescale of only a few dynamical times (see, e.g., Combes et al. 1990), it is unlikely that any galaxy in this nearby sample would have been caught in the act. Thus, other mechanisms which could lead to thick bars cannot be excluded. In addition, we have no way of determining how the bars themselves formed, or even whether they formed spontaneously in isolation or through interaction with nearby galaxies or companions. Therefore, the possibility of hybrid scenarios for the formation of B/PS bulges, where a bar is formed because of an interaction and subsequently thickens due to the buckling instability, remains (see § 2.2).
We have looked mainly at objects with large bulges; only two of the galaxies studied are late-type spirals (ESO 240- G 011 and IC 5176). This is a selection effect due to the difficulty of identifying very small B/PS bulges. It would therefore be interesting to search for bars in very late edge-on spiral galaxies, and verify whether thin bars do exist: the bar in ESO 240- G 011 is not very thick, but it is thicker (isophotally) than the disk. H I synthesis imaging is probably the best way to achieve this goal, as these objects are often dusty and H I-rich. Such bars may even provide a novel way to constrain the total (luminous and dark) mass distribution of spirals, in a manner analogous to the use of warps or flaring, as buckling is sensitive to the presence of a dark halo (e.g. Combes & Sanders 1981). We will report on H I synthesis observations of a few objects in our sample in a later paper.
Our observations revive the old issue of the exact nature of bulges. In face-on systems, one can often clearly identify a bar and a more nearly axisymmetric component usually referred to as the bulge. However, Kormendy (1993) has argued that, at least in some examples, these apparent bulges may just be structures in the disk. In edge-on spirals, we are not aware of any galaxies displaying two separate vertically extended components. This raises the question of whether the bars and bulges of face-on systems are really two distinct structural and dynamical components, despite the fact that they can be separated photometrically. Our data on edge-on galaxies tightly link the presence of a bar with the presence of a B/PS bulge, which suggests that bars and B/PS bulges are very closely related. This view is supported by theoretical and modelling work on barred spiral galaxies (e.g. Pfenniger 1984; Pfenniger & Friedli 1991), as well as by some photometric studies (e.g. Ohta 1996). However, more work is required to settle the issue. Kinematical data covering whole bulges would be particularly useful.
BA99 and AB99 proposed diagnostics, again based on the structure of the observed PVD, to determine the viewing angle with respect to a bar in an edge-on disk (see § 3). When the bar is seen close to side-on, the maximum line-of-sight velocity reached by the $`x_2`$-like flow is similar to or larger than the flat portion of the rotation curve, and the component of the PVD associated with that flow is very steep. When the bar is seen close to end-on, the $`x_2`$-like flow only reaches low velocities and extends to relatively large projected distances. These diagnostics are ideally suited to test the main prediction of $`N`$-body models, that bars are peanut-shaped when seen side-on and boxy-shaped when seen end-on (see, e.g., Combes & Sanders 1981; Raha et al. 1991).
Of the 12 barred galaxies in the main sample for which it is possible to apply these criteria (we exclude NGC 128 and IC 2531), two-thirds (8/12) seem to be consistent with the above prediction of $`N`$-body simulations. For example, in IC 4937, a galaxy with a peanut-shaped bulge, it is clear that the steep inner component associated with the $`x_2`$-like flow extends to higher velocities than the outer parts of the disk (see Fig. 6). This situation is reversed in NGC 1886, which has a boxy-shaped bulge. However, caution is required when interpreting this result. Firstly, the present classification of the shape of the bulges is affected by both dust and the low dynamic range of the material used (Digitized Sky Survey). To remedy this problem, we have acquired $`K`$-band images of all the sample galaxies and will report on these observations in a future paper. Secondly, no clear prediction has been made from $`N`$-body simulations about the viewing angle at which the transition from a boxy to a peanut-shaped bulge occurs. In that regard, it would be useful to apply quantitative measurements of the boxiness and “peanutness” of the bulges to both simulation results and observational data (see, e.g., Bender & Möllenhoff 1987 and Athanassoula et al. 1990 for two possible methods). Thirdly, because the $`x_2`$-like flow is located near the center of the galaxies, the velocities it reaches depend somewhat on the central concentration of the objects (which affects the circular velocity in the central regions). This obviously varies significantly amongst the galaxies in our sample. Therefore, while our observations may support the prediction of $`N`$-body models concerning the orientation of the bar in B/PS bulges, we believe that it is premature to make a detailed quantitative comparison of the data with the models.
For a more detailed comparison with theory, data of higher signal-to-noise and higher spatial resolution than the average PVD presented here would be very desirable. However, it would be worthwhile to model individually the best PVDs obtained in the present study (e.g. NGC 5746, IC 5096, and a few others). This would very likely lead to tight constraints on the mass distribution and bar properties of the galaxies, including the orientation and pattern speed of the bars (see BA99; AB99). The $`K`$-band images could also be used to constrain the mass distributions. Comparing the data with the kinematics (or simply the rotation curve) predicted from an axisymmetric deprojection would provide an easy test of the shape of the bulges. On a related subject, the significant thickness of bars suggests that their 3D structure should be taken into account when deriving the potential of more face-on systems from NIR images.
In that regard, we should stress that the bar diagnostics we used rely on the presence of an $`x_2`$-like flow in the center of the galaxies, and thus on the existence of ILRs (or, at least, one ILR; see BA99; AB99). A priori, barred disks or B/PS bulges need not have ILRs, but at least 13 of the 17 galaxies in the main sample do (we additionally exclude NGC 128 here). Our data therefore strongly support the view that barred spiral galaxies generally have ILRs (see also Athanassoula 1991, 1992).
### 5.2 Dust and Emission Line Ratios in Boxy/Peanut-Shaped Bulges
Because many galaxies in our sample have a prominent dust lane, it is important to consider the effects dust may have on our observations. Its principal consequence in edge-on systems is to limit the depth to which the line-of-sight penetrates the disk. To bypass this problem, we selected many galaxies to be slightly inclined, so it was possible in those objects to position the slit just above the dust lane and have a line-of-sight that still goes through most of the disk, as required for a comparison with the models of BA99 and AB99. In the few cases with a strong dust lane and a perfectly edge-on disk, we tried again to offset the slit slightly. Unfortunately, with the observational set-up available at the telescope, it was difficult to position the slit with great precision. The objects where we suspected that dust could affect our observations are indicated in Tables 1–3. A large dust optical depth produces an almost featureless PVD, as one sees only an outer annulus of material in the disk, and the rotation curve appears slowly-rising and solid-body (see, e.g., Bosma et al. 1992). The only objects showing obvious signs of extinction are IC 2531<sup>3</sup><sup>3</sup>3This is confirmed by the H I radio synthesis data of Bureau & Freeman (1997), which reveal a complex PVD with a bar signature., NGC 4703, and possibly ESO 443- G 042. Because we see a lot of structure in most PVDs, including the PVDs of objects with a significant dust lane, we do not believe that our results are significantly affected by dust. We do detect a clear bar signature in most galaxies in the main sample.
This statement is strengthened by the fact that all the PVDs showing a bar signature are close to symmetric. AB99 showed that the bar signature would be strongly asymmetric in a very dusty disk, and this is not observed. Similarly, it is improbable that irregular dust distributions would lead to such well-ordered and symmetric PVDs (very large and localized dust “patches” would be required to create the prominent gaps observed in many objects).
An unexpected but interesting prospect raised by our observations concerns emission line ratios. For many of the barred galaxies in the main sample, the emission line ratios in the central regions are different from those expected of typical H II regions. For 9 barred galaxies out of 14, mostly those with the strongest bar signatures, the \[N II\] $`\lambda 6584`$ to H$`\alpha `$ $`\lambda 6563`$ ratio in the bulge region is greater than unity. In particular, in a few objects, the steep inner component of the PVD, associated with the $`x_2`$-like flow, is much stronger in \[N II\] than it is in H$`\alpha `$, while the slowly rising component, associated with the outer disk, has a \[N II\]/H$`\alpha `$ ratio typical of H II regions. In fact, the inner component can be almost absent in H$`\alpha `$. We illustrate these behaviours in Figure 6, which shows the PVDs of the galaxies NGC 5746 and IC 5096 in the H$`\alpha `$ and \[N II\] $`\lambda 6548,6584`$ lines.
It is possible that the H$`\alpha `$ emission line is weakened by the underlying stellar absorption. This suggestion is supported by the fact that the spectra of many of the galaxies in the main sample show strong Balmer absorption lines. However, the H$`\alpha `$ absorption would need to be very large to account for the extreme \[N II\]/H$`\alpha `$ ratios observed in some objects (e.g. IC 5096). The strong Balmer lines observed in many objects are nevertheless interesting in themselves, and indicate that a significant intermediate age ($`1`$ Gyr) stellar population is present in the central regions of the disk of many of the galaxies. It would be interesting to investigate if these past bursts of star formation can be related to the presence of the bars.
The high emission line ratios are interesting for two reasons. Firstly, high \[N II\]/H$`\alpha `$ ratios are commonly believed to be produced by shocks (see, e.g., Binette, Dopita, & Tuohy 1985; Dopita & Sutherland 1996). This is consistent with the view that B/PS bulges are barred spirals viewed edge-on. The steep inner components of the PVDs, which display high \[N II\]/H$`\alpha `$ ratios, are associated with an $`x_2`$-like flow and the nuclear spirals observed in many barred spiral galaxies (AB99). Athanassoula (1992) showed convincingly that these are the locus of shocks. Secondly, if one were to derive H$`\alpha `$ and \[N II\] rotation curves from the data, by taking the upper envelope of the PVDs (the standard method), the H$`\alpha `$ and \[N II\] rotation curves would significantly differ for many objects. The \[N II\] lines would yield rapidly rising rotation curves flattening out at small radii, while the H$`\alpha `$ line would yield slowly rising rotation curves flattening out at relatively large radii. Mass models derived from such data would thus yield qualitatively different results, and our understanding of galactic dynamics and structure gained from this type of work could be seriously erroneous (at least for highly inclined spirals). Of course, now that these galaxies are known to be barred, their rotation curves should not be used directly for mass modelling, as they are not a good representation of the circular velocity.
Data such as those presented in Figure 6 also open up the possibility of determining the ionisation conditions and abundance of the gas in different regions of the galaxies in a single observation. Because the deprojected location of each component of the PVDs is known (see AB99), this provides an efficient way to study the effects of bars on the interstellar medium of galaxies on various scales.
## 6 Conclusions
In this paper, we discussed the various mechanisms proposed for the formation of the boxy and peanut-shaped bulges observed in some edge-on spiral galaxies. We argued that accretion scenarios were unlikely to account for most boxy/peanut-shaped (B/PS) bulges, but that bar-buckling scenarios, discovered through $`N`$-body simulations, had this potential. Using recently developed kinematical bar diagnostics, we searched for bars in a large sample of edge-on spiral galaxies with a B/PS bulge. Of the 17 galaxies where the diagnostics could be applied using emission lines, 14 galaxies were shown to be barred, 2 galaxies were significantly disturbed, and 1 galaxy seemed to be axisymmetric. In a control sample of 6 galaxies with spheroidal bulges, none appeared to be barred.
Our study supports the bar-buckling mechanism for the formation of B/PS bulges. Our results imply that most B/PS bulges are due to the presence of a thick bar that we are viewing edge-on, while only a few may be due to the accretion of external material. Furthermore, spheroidal bulges do appear to be axisymmetric. This suggests that all bars are B/PS. Our observations also seem to support the main prediction of $`N`$-body simulations, that bars appear boxy-shaped when seen end-on and peanut-shaped when seen side-on. However, this issue should be revisited in a more quantitative manner in the future. With our data, we have no way of determining whether the bars leading to B/PS bulges have formed in isolation or through interactions and mergers. The association of B/PS bulges and bars is entirely consistent with the properties of the bulge of the Milky Way, which is known to be both boxy and bar-like.
We considered the effects of dust on our observations, but concluded that it does not affect our results significantly. We have also shown that emission line ratios correlate with kinematical structures in many barred galaxies. This makes possible a direct study of the large-scale effects of bars on the interstellar medium in disks.
Our study opens up the possibility to study observationally the vertical structure of bars. This was not possible before and represents an interesting spin-off from the use of bar diagnostics in edge-on spiral galaxies. To this end, we have obtained $`K`$-band images of all our sample galaxies, and will report on this work in a future paper.
We thank the staff of Mount Stromlo and Siding Spring Observatories for their assistance during and after the observations. We also thank A. Kalnajs, E. Athanassoula, A. Bosma, and L. Sparke for useful discussions. M. B. acknowledges the support of an Australian DEET Overseas Postgraduate Research Scholarship and a Canadian NSERC Postgraduate Scholarship during the conduct of this research. The Digitized Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital form with the permission of these institutions. The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
# UPR-843T, PUPT-1856 Moduli Spaces of Fivebranes on Elliptic Calabi-Yau Threefolds
## 1 Introduction
If string theory is to be a model of supersymmetric particle physics, one should be able to find four-dimensional vacua preserving $`𝒩=1`$ supersymmetry. The first examples of such backgrounds arose as geometrical vacua, from the compactification of the heterotic string on a Calabi–Yau threefold. Recent advances in string duality have provided new ways of obtaining geometrical $`𝒩=1`$ vacua, by compactifying other limits of string theory and, in particular, by including branes in the background.
Of particular interest is the strongly coupled limit of the $`E_8\times E_8`$ heterotic string. The low-energy effective theory is eleven-dimensional supergravity compactified on an $`S^1/Z_2`$ orbifold interval, with a set of $`E_8`$ gauge fields on each of the ten-dimensional orbifold fixed planes . To construct a theory with $`𝒩=1`$ supersymmetry one further compactifies on a Calabi–Yau threefold $`X`$ . One is then free to choose general $`E_8`$ bundles which satisfy the hermitian Yang–Mills equations. Furthermore, one can include some number of five-branes. The requirements of four-dimensional Lorentz invariance and supersymmetry mean that these branes must span the four-dimensional Minkowski space, while the remaining two dimensions wrap a holomorphic curve within the Calabi–Yau threefold .
In the low-energy four-dimensional effective theory, there is an array of moduli. There are geometrical moduli describing the dimensions of the Calabi–Yau manifold and orbifold interval. There are bundle moduli describing the two $`E_8`$ gauge bundles. And finally, there are moduli describing the positions of the fivebranes. It is this last set of moduli, together with the generic low-energy gauge fields on the fivebranes, upon which we shall focus in this paper. We note that, although we consider this problem in the specific case of M fivebranes, the moduli spaces are quite general and should have applications to other supersymmetric brane configurations.
We shall look at this question in the particular case of an elliptically fibered Calabi–Yau threefold with section. This expands on the discussion we gave in a recent letter . There we used this special class of manifolds to construct explicitly an array of new particle physics vacua. The general structure of the constructions was given in detail in a second paper . This is the companion paper which explains the structure of the five-brane moduli space as well as the nature of gauge enhancement on the five-branes.
The constructions in and used the analysis of gauge bundles on elliptic Calabi–Yau threefolds given by Friedman, Morgan and Witten , Donagi and Bershadsky et al. . The vacua preserved, for example, $`SU(5)`$ or $`SO(10)`$ gauge symmetry with three families. The presence of the five-branes allowed a much larger class of possible backgrounds. The number of families is given by an index first calculated in this context by Andreas and Curio . Curio also gave explicit examples of bundles where this index was three. Subsequently, the case of non-simply connected elliptic Calabi–Yau threefolds with bisections has been considered in . We note that, in the M theory context, another class of explicit models with non-standard gauge bundles but with orbifold Calabi–Yau spaces was first constructed in , while the generic form of the effective four- and five-dimensional theories, including fivebranes, is given in .
In constructing these vacua the fivebranes cannot be chosen arbitrarily . The boundaries of $`S^1/Z_2`$ and the fivebranes are all magnetic sources for the four-form field strength $`G`$ of eleven-dimensional supergravity. The fact that there can be no net magnetic charge for $`G`$ in the Calabi–Yau threefold fixes the homology class of the fivebranes in terms of the gauge bundles and $`X`$ itself. As discussed in , to describe real fivebranes this homology class must be “effective” in $`X`$. Mathematically, then, the problem we wish to solve is to find the moduli space of holomorphic curves in $`X`$ in a given effective homology class.
Specific to the M theory case, we will also include the moduli corresponding to moving the fivebranes in $`S^1/Z_2`$ and, in addition, their axionic moduli partners . These latter fields are compact scalars which arise as zero modes of the self-dual three-form $`h`$ on the fivebranes. The other zero modes of $`h`$ lead to low-energy gauge fields. Generically the gauge group is $`U(1)^g`$, where $`g`$ is the genus of the holomorphic curve . Since we will be able to calculate $`g`$ at each point in the moduli space, we will also be able to identify the low-energy gauge multiplets.
One consequence of considering elliptically fibered Calabi–Yau threefolds with section is that there is a dual F theory description . For the fivebranes, those wrapping purely on the fiber of $`X`$ correspond to threebranes on the F theory side . Rajesh and Diaconescu and Rajesh have recently argued that fivebranes lying wholly in the base of $`X`$ correspond to blow-ups of the corresponding curve in the F theory vacuum. We will not comment in detail on this interesting correspondence. However, we will show that, locally, the moduli spaces match those expected from duality to F theory. We will also comment on how the global structure is encoded on the M theory side through a twisting of the axion modulus. This will be discussed further in . An additional point we will only touch on is the structure of additional low-energy fields which can appear when fivebranes intersect. We will, however, clearly identify these points in moduli space in our analysis. Finally, we will also ignore any non-perturbative corrections. In general, since the low-energy theory is only $`𝒩=1`$, one expects that some of the directions in moduli space are lifted by non-perturbative effects, in particular by instantonic membranes stretching between fivebranes.
Specifically, we do the following. In section 2, we briefly review the anomaly cancellation condition in heterotic M-theory, discuss how that constraint leads to non-perturbative vacua with fivebranes and review some aspects of homology and cohomology theory required in our analysis. Properties of elliptically fibered Calabi–Yau threefolds and a discussion of their algebraic and effective classes are presented in section 3. Section 4 is devoted to studying the simple case of the moduli spaces of fivebranes wrapped purely on the elliptic fiber. We also comment on the global structure of the moduli space and the relation to F theory. In section 5, we present two examples of fivebranes wrapping curves with a component in the base. We analyze, in detail, the moduli space of these two examples, including the generic low-energy gauge groups on and possible intersections of the fivebrane. Techniques developed in sections 4 and 5 are generalized in section 6, where we give a procedure for the analysis of the moduli spaces of fivebranes wrapped on any holomorphic curve, generically with both a fiber and a base component. We note a particular exceptional case which occurs when the fivebrane wraps an exceptional divisor in the base. Finally, in sections 7, 8 and 9 we make these methods concrete by presenting three specific examples, two with a del Pezzo base and one with a Hirzebruch base. Two of these examples correspond to realistic three-family, non-perturbative vacua in Hořava-Witten theory.
## 2 Heterotic M theory vacua with fivebranes
### 2.1 Conditions for a supersymmetric background
As we have discussed in the introduction, the standard way to obtain heterotic vacua in four dimensions with $`𝒩=1`$ supersymmetry, is to compactify eleven-dimensional M theory on the manifold $`X\times S^1/Z_2\times M_4`$ . Here $`X`$ is a Calabi–Yau threefold, $`S^1/Z_2`$ is an orbifold interval, while $`M_4`$ is four-dimensional Minkowski space. This background is not an exact solution, but is good to first order in an expansion in the eleven-dimensional Planck length. To match to the low-energy particle physics parameters, the Calabi–Yau threefold is chosen to be the size of the Grand Unified scale, while the orbifold is somewhat larger . In general, there is a moduli space of different compactifications. There are the familiar moduli corresponding to varying the complex structure and the Kähler metric on the Calabi–Yau threefold. Similarly, one can vary the size of the orbifold. These parameters all appear as massless fields in the low-energy four-dimensional effective action. In general, there are additional low-energy scalar fields coming from zero-modes of the eleven-dimensional three-form field $`C`$.
The second ingredient required to specify a supersymmetric background is to choose the gauge bundle on the two orbifold planes. In general, one can turn on background gauge fields in the compact Calabi–Yau space. Supersymmetry implies that these fields cannot be arbitrary. Instead, they are required to satisfy the hermitian Yang–Mills equations
$$F_{ab}=F_{\overline{a}\overline{b}}=0,\qquad g^{a\overline{b}}F_{a\overline{b}}=0$$
(2.1)
Here $`a`$ and $`b`$ are holomorphic indices in $`X`$, $`\overline{a}`$ and $`\overline{b}`$ are antiholomorphic indices, while $`g_{a\overline{b}}`$ is the Kähler metric on $`X`$. Having fixed the topology of the gauge bundle, that is, how the bundle is patched over the Calabi–Yau manifold, there is then a set of different solutions to these equations. There are additional low-energy moduli which interpolate between these different solutions. In general, the full moduli space of bundles is hard to analyze. However, when the Calabi–Yau threefold is elliptically fibered, the generic structure of this moduli space can be calculated and has been discussed in and also in .
The final ingredient to the background is that one can include fivebranes . In order to preserve supersymmetry and four-dimensional Lorentz invariance, the fivebranes must span the four-dimensional Minkowski space while the remaining two dimensions wrap a holomorphic curve within the Calabi–Yau threefold . In addition, each brane must be parallel to the orbifold fixed planes. Thus it is localized at a single point in $`S^1/Z_2`$. Again, there are a set of moduli giving the positions of the five-branes within the Calabi–Yau manifold as well as in the orbifold interval. As we will discuss below, there are also extra moduli coming from the self-dual tensor fields on the fivebranes . These fields generically give some effective $`𝒩=1`$ gauge theory in four-dimensions. Finding the moduli space of the fivebranes, and some information about the effective gauge theory which arises on the fivebrane worldvolumes will be the goal of this paper.
In summary, the M theory background is determined by choosing
* a spacetime manifold of the form $`X\times S^1/Z_2\times M_4`$, where $`X`$ is a Calabi–Yau threefold
* two $`E_8`$ gauge bundles, $`V_1`$ and $`V_2`$, satisfying the hermitian Yang–Mills equations (2.1) on $`X`$
* a set of fivebranes parallel to the orbifold fixed planes and wrapped on holomorphic curves within $`X`$.
This ensures that we preserve $`𝒩=1`$ supersymmetry in the low-energy four-dimensional effective theory.
### 2.2 Cohomology condition
The above conditions are not sufficient to ensure that one has a consistent background. Anomaly cancellation on both the ten-dimensional orbifold fixed planes and the six-dimensional fivebranes is possible only because each is a magnetic source for the supergravity four-form field strength $`G=dC`$ . This provides an inflow mechanism to cancel the anomaly on the lower dimensional space. In general, the magnetic sources for $`G`$ are five-forms. Explicitly, if $`0x^{11}\pi \rho `$ parameterizes the orbifold interval, one has
$$dG=J_1\delta (x^{11})+J_2\delta (x^{11}-\pi \rho )+\underset{i}{\sum }J_5^{(i)}\delta (x^{11}-x^{(i)})$$
(2.2)
where $`J_1`$ and $`J_2`$ are four-form sources on the two fixed planes and $`J_5^{(i)}`$ is a delta-function four-form source localized at the position of the $`i`$-th five-brane in $`X`$. The explicit one-form delta functions give the positions of the orbifold fixed planes at $`x^{11}=0`$ and $`x^{11}=\pi \rho `$ and the five-branes at $`x^{11}=x^{(i)}`$ in $`S^1/Z_2`$.
Compactifying on $`X\times S^1/Z_2`$, we have the requirement that the net charge in the internal space must vanish, since there is nowhere for flux to escape. Equivalently, the integral of $`dG`$ over any five-cycle in $`X\times S^1/Z_2`$ must be zero since $`dG`$ is exact. Integrating over the orbifold interval then implies that the integral of $`J_1+J_2+\sum _iJ_5^{(i)}`$ over any four-cycle in $`X`$ must vanish. Alternatively, this means that the sum of these four-forms must be zero up to an exact form, that is, they must vanish cohomologically.
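As a compact restatement of this argument (a sketch in our own notation, not taken verbatim from the original derivation):

```latex
% Sketch: integrating the Bianchi identity (2.2) over the orbifold
% interval leaves a four-form condition on X.  For every four-cycle
% C_4 in X,
\int_{C_4}\Big(J_1+J_2+\sum_i J_5^{(i)}\Big)=0 ,
% which, since it holds for all four-cycles, is equivalent to the
% vanishing of the total source class in de Rham cohomology:
[J_1]+[J_2]+\sum_i\big[J_5^{(i)}\big]=0
\quad\text{in}\quad H^4_{\mathrm{DR}}(X,\mathbf{R}) .
```

This is the cohomological statement that is unpacked source by source in what follows.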
Explicitly, the source on each orbifold plane is proportional to
$$J_n\propto \mathrm{tr}F_n\wedge F_n-\frac{1}{2}\mathrm{tr}R\wedge R$$
(2.3)
where $`F_n`$ for $`n=1,2`$ is the $`E_8`$ field strength on the $`n`$-th fixed plane, while $`R`$ is the spacetime curvature. The full cohomology condition can then be written as
$$\lambda (TX)=w(V_1)+w(V_2)+[W]$$
(2.4)
with
$`w(V)`$ $`=-{\displaystyle \frac{1}{60\cdot 8\pi ^2}}\mathrm{tr}_{\mathrm{𝟐𝟒𝟖}}F\wedge F`$ (2.5)
$`\lambda (TX)`$ $`={\displaystyle \frac{1}{2}}p_1(TX)=-{\displaystyle \frac{1}{2\cdot 8\pi ^2}}\mathrm{tr}_\mathrm{𝟔}R\wedge R`$
where the right-hand sides of these expressions really represent cohomology classes, rather than the forms themselves. The traces are in the adjoint $`\mathrm{𝟐𝟒𝟖}`$ of $`E_8`$ and the vector representation of $`SO(6)`$. $`[W]`$ represents the total cohomology class of the five-branes, which we will discuss in a moment. Note that $`\lambda `$ is half the first Pontrjagin class. It is, in fact, an integer class because we are on a spin manifold. On a Calabi–Yau threefold it is equal to the second Chern class $`c_2(TX)`$, where the tangent bundle $`TX`$ is viewed as an $`SU(3)`$ bundle and the trace is in the fundamental representation. Thus, the cohomology condition simplifies to
$$[W]=c_2(TX)-w(V_1)-w(V_2)$$
(2.6)
What do we mean by the cohomology class $`[W]`$? We recall that we associated four-form delta function sources to the five-branes in $`X`$. The class $`[W]`$ is then the cohomology class of the sum of all these sources. Recall that the five-branes wrap on holomorphic curves within the Calabi–Yau threefold. The sum of the five-branes thus represents an integer homology class in $`H_2(X,𝐙)`$. In general, one can then use Poincaré duality to associate an integral cohomology class in $`H^4(X,𝐙)`$ to the homology class of the fivebranes, or also a de Rham class in $`H_{\mathrm{DR}}^4(X,𝐑)`$. This is the class $`[W]`$ which enters the cohomology condition, though we will throughout use the same expression $`[W]`$ for the integral homology class in $`H_2(X,𝐙)`$, the integral cohomology class in $`H^4(X,𝐙)`$, and the de Rham cohomology class in $`H_{\mathrm{DR}}^4(X,𝐑)`$.
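The duality map invoked here is the standard one; schematically (our notation):

```latex
% Poincare duality on the real six-dimensional manifold X sends a
% fivebrane curve C (a two-cycle) to a four-form class [C],
% characterized by the pairing
\int_C \alpha \;=\; \int_X [C]\wedge\alpha
\qquad\text{for all closed two-forms }\alpha\text{ on }X ,
% so the total class of a collection of wrapped fivebranes is the
% sum over its components:
[W]=\sum_i\,[C_i]\ \in\ H^4(X,\mathbf{Z}) .
```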
### 2.3 Homology classes and effective curves
Let us now turn to analyzing the cohomology condition (2.6) in more detail. One finds that the requirement that $`[W]`$ correspond to the homology class of a set of supersymmetric fivebranes puts a constraint on the allowed bundle classes .
Since the sources are all four-forms, equation (2.6) is clearly a relation between de Rham cohomology classes $`H_{\mathrm{DR}}^4(X,𝐑)`$. However, in fact, the sources are more restricted than this. In general, they are all in integral cohomology classes. By this we mean that their integral over any four-cycle in the Calabi–Yau threefold gives an integer. (As noted above, this is even true when we no longer have a Calabi–Yau threefold but only a spin manifold, and $`c_2(TX)`$ is replaced by $`\frac{1}{2}p_1(X)`$.) The class $`[W]`$ is integral because it is Poincaré dual to an integer sum of fivebranes, an element of $`H_2(X,𝐙)`$. Note that there is a general notion of the integer cohomology group $`H^p(X,𝐙)`$ which, in general, includes discrete torsion groups such as $`𝐙_2`$. This maps naturally to de Rham cohomology $`H_{\mathrm{DR}}^p(X,𝐑)`$. However, it is important to note that the map is not injective. Torsion elements in $`H^p(X,𝐙)`$ are lost. The integral classes to which we refer in this paper are to be identified with the images of $`H^p(X,𝐙)`$ in $`H_{\mathrm{DR}}^p(X,𝐑)`$.
In general, $`[W]`$ cannot be just any integral class. We have seen that supersymmetry implies that fivebranes are wrapped on holomorphic curves within $`X`$. Thus $`[W]`$ must correspond to the homology class of holomorphic curves. Furthermore, $`[W]`$ must refer to some physical collection of fivebranes. Included in $`H_2(X,𝐙)`$ are negative classes like $`-[C]`$ where $`C`$ is, for example, a holomorphic curve in $`X`$. These have cohomology representatives which would correspond to the “absence” of a five-brane, contributing a negative magnetic charge to the Bianchi identity for $`G`$ and negative stress-energy. Such states are physically not allowed. The condition that $`[W]`$ describes physical, holomorphic fivebranes further constrains $`c_2(TX)`$, $`w(V_1)`$ and $`w(V_2)`$ in the cohomology condition (2.6).
In order to formalize these constraints, we need to introduce some definitions. We will use the following terminology.
* A curve is a holomorphic complex curve in the Calabi–Yau manifold. A curve is reducible if it can be written as the union of two curves.
* A class is a homology class in $`H_2(X,𝐙)`$ (or the Poincaré dual cohomology class in $`H^4(X,𝐙)`$). In general, it may or may not have a representative which is a holomorphic curve. If it does, then a class is irreducible if it has an irreducible representative. Note that there may be other curves in the class which are reducible, but the class is irreducible if there is at least one irreducible representative.
* A class which can be written as a sum of irreducible classes with arbitrary integer coefficients is called algebraic.
* A class is effective if it can be written as the sum of irreducible classes with positive integer coefficients.
Note that we will occasionally use analogous terminology to refer to surfaces (or divisors) in $`X`$. These are holomorphic complex surfaces in the Calabi–Yau threefold, so they have four real dimensions, and their classes lie in $`H_4(X,𝐙)`$.
Physically, the above definitions correspond to the following. A curve $`W`$ describes a collection of supersymmetric fivebranes wrapped on holomorphic two-cycles in the Calabi–Yau space. A reducible curve is the union of two or more separate five-branes. A general class in $`H_2(X,𝐙)`$ has representatives which are a general collection of five-branes, perhaps supersymmetric, perhaps not, and maybe including “negative” fivebranes of the form mentioned above. An algebraic class, on the other hand, has representatives which are a collection of only five-branes wrapped on holomorphic curves and so supersymmetric, but again includes the possibility of negative fivebranes. Finally, an effective class has representatives which are collections of supersymmetric fivebranes but exclude the possibility of non-physical negative fivebrane states.
From these conditions, we see that the constraint on $`[W]`$ is that we must choose the Calabi–Yau threefold and the gauge bundles $`V_1`$ and $`V_2`$ such that
$`[W]`$ must be effective (2.7)
As it stands, it is not clear that $`[W]=c_2(TX)-w(V_1)-w(V_2)`$ is algebraic, let alone effective. However, supersymmetry implies that both the tangent bundle and the gauge bundles are holomorphic. There is then a useful theorem that the classes of holomorphic bundles are algebraic<sup>1</sup><sup>1</sup>1This is a familiar result for Chern classes (see ). For $`E_8`$, or other groups, it can be seen by taking any matrix representation of the group and treating it as a vector bundle, that is, by embedding $`E_8`$ in $`GL(n,𝐂)`$. The second Chern class of the vector bundle is then algebraic and is some integer multiple $`p`$ of the class $`w(V)`$, where the factor is related to the quadratic Casimir of the representation. We conclude that $`w(V)`$ is rationally algebraic: it is integral, and a further integral multiple of it is algebraic., and so $`[W]`$ is in fact necessarily algebraic. However, there remains the condition that $`[W]`$ must be effective which does indeed constrain the allowed gauge bundles on a given Calabi–Yau threefold.
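The distinction between the algebraic and effective conditions can be summarized in a basis of irreducible classes (a schematic restatement in our notation):

```latex
% Expanding the fivebrane class in irreducible classes [C_a] of
% H_2(X,Z):
[W]=\sum_a n_a\,[C_a],
\qquad
\text{algebraic: } n_a\in\mathbf{Z},
\qquad
\text{effective: } n_a\geq 0 ,
% so effectiveness excludes the unphysical "negative" fivebrane
% contributions while still describing supersymmetric wrapped branes.
```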
### 2.4 The theory on the fivebranes, $`𝒩=1`$ gauge theories and the fivebrane moduli space
While two of the fivebrane dimensions are wrapped on a curve within the Calabi–Yau manifold, the remaining four dimensions span uncompactified Minkowski space. The low-energy massless degrees of freedom on a given fivebrane consequently fall into four-dimensional $`𝒩=1`$ multiplets. At a general point in moduli space there are a set of complex moduli $`(m_i,\overline{m}_i)`$ describing how the fivebrane curve can be deformed within the Calabi–Yau three-fold. These form a set of chiral multiplets. In addition, there is a single real modulus $`x^{11}`$ describing the position of the fivebrane in the orbifold interval. This is paired under supersymmetry with an axion $`a`$ which comes from the reduction of the self-dual three-form degree of freedom, $`h`$, on the fivebrane to form a further chiral multiplet. When the fivebrane is non-singular, that is, does not intersect itself, touch another fivebrane, or pinch, at any point, the remaining degrees of freedom are a set of $`U(1)`$ gauge multiplets, where the gauge fields also arise from the reduction of the self-dual three-form. The number of $`U(1)`$ fields is given by the genus $`g`$ of the curve. In summary, generically, we have
chiral multiplets: $`(x^{11},a),(m_i,\overline{m}_i)`$ (2.8)
vector multiplets: $`g`$ multiplets with $`U(1)^g`$ gauge group
for each distinct fivebrane.
When the fivebrane becomes singular, new degrees of freedom can appear. These correspond to membranes stretched between parts of the same fivebrane, or the fivebrane and other fivebranes, which shrink and become massless when the fivebrane becomes singular. They may be new chiral or vector multiplets. In the following, we will not generally identify all the massless degrees of freedom at singular configurations but, rather, concentrate on describing the degrees of freedom on the smooth parts of the moduli space.
In conclusion, we have seen that fixing the Calabi–Yau manifold and gauge bundles, in general, fixes an element $`[W]`$ of $`H_2(X,𝐙)`$ describing the homology class of the holomorphic curve in $`X`$ on which the fivebranes are wrapped. In order to describe an actual set of fivebranes, $`[W]`$ must be effective, which puts a constraint on the choice of gauge bundles. In general, there are a great many different arrangements of fivebranes in the same homology class. The fivebranes could move about within the Calabi–Yau threefold and also in the orbifold interval. In addition, there can be transitions where branes split and join. The net effect is that there is, in general, a complicated moduli space of fivebranes parameterizing all the different possible combinations. In the low-energy effective theory on the fivebranes, the moduli space is described by a set of chiral multiplets. In order to describe the structure of this moduli space, it is clear that we need to analyze the moduli space of all the holomorphic curves in the class $`[W]`$, including the possibility that each fivebrane can move in $`S^1/Z_2`$ and can have a different value of the axionic scalar $`a`$.
## 3 Elliptically fibered Calabi–Yau manifolds
The moduli spaces we will investigate in detail in this paper are those for fivebranes wrapped on smooth elliptically fibered Calabi–Yau threefolds $`X`$. Consequently, in this section we will briefly summarize the structure of $`X`$, then identify the generic algebraic classes, and finally determine the conditions for these classes to be effective.
### 3.1 Properties of elliptically fibered Calabi–Yau threefolds
An elliptically fibered Calabi–Yau threefold $`X`$ consists of a base $`B`$, which is a complex surface, and an analytic map
$$\pi :XB$$
(3.1)
with the property that for a generic point $`b\in B`$, the fiber $`E_b=\pi ^{-1}(b)`$ is an elliptic curve. That is, $`E_b`$ is a Riemann surface of genus one with a particular point, the origin $`p`$, identified. In particular, we will require that there exist a global section, denoted $`\sigma `$, defined to be an analytic map
$$\sigma :BX$$
(3.2)
that assigns to every point $`b\in B`$ the origin $`\sigma (b)=p\in E_b`$. We will sometimes refer to this as the zero section. The requirement that the elliptic fibration have a section is crucial for duality to F theory, though from the M theory point of view it is not necessary.
In order to be a Calabi–Yau threefold, the canonical bundle of $`X`$ must be trivial. From the adjunction formula, this implies that the normal bundle to the section, $`N_{B/X}`$, which is a line bundle over $`B`$ and tells us how the elliptic fiber twists as one moves around the base, must be related to the canonical bundle of the base, $`K_B`$. In fact,
$$N_{B/X}=K_B.$$
(3.3)
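This relation follows in one line from the adjunction formula, using the triviality of the canonical bundle of $`X`$ (a short derivation in our own notation, with $`\sigma `$ standing for the divisor $`\sigma (B)`$):

```latex
% Adjunction for the divisor \sigma(B) in X, with K_X trivial:
K_B \;=\; \bigl( K_X \otimes \mathcal{O}_X(\sigma) \bigr)\big|_{\sigma}
    \;=\; \mathcal{O}_X(\sigma)\big|_{\sigma}
    \;=\; N_{B/X} .
```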
Further conditions appear if one requires that the Calabi–Yau threefold be smooth. The canonical bundle $`K_B`$ is then constrained so that the only possibilities for the base manifold are as follows:
* for a smooth Calabi–Yau manifold the base $`B`$ can be a del Pezzo ($`dP_r`$), Hirzebruch ($`F_r`$) or Enriques surface, or a blow-up of a Hirzebruch surface.
These are the only possibilities we will consider. The structure of these surfaces is discussed in detail in an appendix to . In the following, we will adopt the notation used there.
It will be useful to recall that, in general, there is a set of points in the base at which the fibration becomes singular. These form a curve, the discriminant curve $`\mathrm{\Delta }`$, which is in the homology class $`-12K_B`$, as can be shown explicitly by considering the Weierstrass form of the fibration.
### 3.2 Algebraic classes on $`X`$
Since $`[W]`$ is algebraic, we need to identify the set of algebraic classes on our elliptically fibered Calabi–Yau manifold. This was discussed in , but here we will be more explicit. It will be useful to identify these classes both in $`H_2(X,𝐙)`$ and $`H_4(X,𝐙)`$. In general, the full set of classes will depend on the particular fibration in question. However, there is a generic set of classes which are always present, independent of the fibration, and this is what we will concentrate on.
Simply because we have an elliptic fibration, the fiber at any given point is a holomorphic curve in $`X`$. Consequently, one algebraic class in $`H_2(X,𝐙)`$ which is always present is the class of the fiber, which we will call $`F`$. The existence of a section means there is also a holomorphic surface in $`X`$. Thus the class of the section, which we will call $`D`$, defines an algebraic class in $`H_4(X,𝐙)`$.
Some additional algebraic classes may be inherited from the base $`B`$. In general, $`B`$ has a set of algebraic classes in $`H_2(B,𝐙)`$. One useful fact is that for all the bases which lead to smooth Calabi–Yau manifolds, one finds that every class in $`H_2(B,𝐙)`$ is algebraic. This follows from the Lefschetz theorem which tells us that we can identify algebraic classes on a surface $`S`$ with the image of integer classes in the Dolbeault cohomology $`H^{1,1}(S)`$. One then has the following picture. In general, the image of $`H^2(S,𝐙)`$ is a lattice of points in $`H^2(S,𝐑)`$. Choosing a complex structure on $`S`$ corresponds to fixing an $`h^{1,1}`$-dimensional subspace within $`H^2(S,𝐑)`$ describing the space $`H^{1,1}(S)`$. Generically, no lattice points will intersect the subspace and so there are no algebraic classes on $`S`$. The exception is when $`h^{2,0}=0`$, which is the case for all the possible bases $`B`$. Then the subspace is the whole space $`H^2(S,𝐑)`$ so all classes in $`H^2(S,𝐙)`$ are algebraic.
If $`\mathrm{\Omega }`$ is an algebraic class in $`H_2(B,𝐙)`$, there are two ways it can lead to a class in $`X`$. First, one can use the section $`\sigma `$ to form a class in $`H_2(X,𝐙)`$. If $`C`$ is some representative of $`\mathrm{\Omega }`$, then the inclusion map $`\sigma `$ gives a curve $`\sigma (C)`$ in $`X`$. The homology class of this curve in $`H_2(X,𝐙)`$ is denoted by $`\sigma _*\mathrm{\Omega }`$. Second, we can use the projection map $`\pi `$ to pull $`\mathrm{\Omega }`$ back to a class in $`H_4(X,𝐙)`$. For a given representative $`C`$, one forms the fibered surface $`\pi ^{-1}(C)`$ over $`C`$. The homology class of this surface in $`H_4(X,𝐙)`$ is then denoted by $`\pi ^*\mathrm{\Omega }`$. This structure is indicated in Figure 1.
In general, these maps may have kernels. For instance, two curves which are non-homologous in $`B`$ might be homologous once one embeds them in the full Calabi–Yau threefold. In fact, we will see that this is not the case. One way to show this, which will be useful in the following, is to calculate the intersection numbers between the classes in $`H_2`$ and $`H_4`$. We find
$$\begin{array}{ccccc}& & & \multicolumn{2}{c}{H_2(X,𝐙)}\\ & & & \sigma _*\mathrm{\Omega }^{\prime }& F\\ & & & & \\ \hfill H_4(X,𝐙)& \hfill \begin{array}{c}\pi ^*\mathrm{\Omega }\\ D\end{array}& & \begin{array}{c}\mathrm{\Omega }\cdot \mathrm{\Omega }^{\prime }\\ K_B\cdot \mathrm{\Omega }^{\prime }\end{array}& \begin{array}{c}0\\ 1\end{array}\end{array}$$
(3.4)
where the entries in the first column are the intersections of classes in $`B`$. The intersection of $`\sigma _*\mathrm{\Omega }^{\prime }`$ with $`D`$ is derived by adjunction, recalling that the normal bundle to $`B`$ is $`N_{B/X}=K_B`$. Two classes are equivalent if they have the same intersection numbers. If we take a set of classes $`\mathrm{\Omega }_i`$ which form a basis of $`H_2(B,𝐙)`$, we see that the matrix of intersection numbers of the form given in (3.4) is non-degenerate. Thus, for each nonzero $`\mathrm{\Omega }\in H_2(B,𝐙)`$, we get nonzero classes $`\sigma _*\mathrm{\Omega }`$ and $`\pi ^*\mathrm{\Omega }`$ in $`H_2(X,𝐙)`$ and $`H_4(X,𝐙)`$.
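This non-degeneracy can be made concrete with a small numerical check. The sketch below is our own illustrative code (not from the paper), written for the specific choice $`B=dP_8`$ used in the examples of Section 5; it builds the intersection matrix of (3.4) for a basis of $`H_2(B,𝐙)`$ and verifies that it is invertible:

```python
import numpy as np

# Our own numerical sketch, taking B = dP_8 (as in the examples of Section 5).
# Basis of H_2(B,Z): l, E_1, ..., E_8 with l.l = 1, E_i.E_j = -delta_ij, l.E_i = 0.
Q = np.diag([1] + [-1] * 8)                      # intersection form on B
K_B = np.array([-3, 1, 1, 1, 1, 1, 1, 1, 1])     # K_B = -3l + E_1 + ... + E_8

# Intersection matrix of (3.4) between H_4(X,Z) = {pi^* Omega_i, D} (rows)
# and H_2(X,Z) = {sigma_* Omega_j, F} (columns).
n = 9
M = np.zeros((n + 1, n + 1))
M[:n, :n] = Q             # pi^* Omega_i . sigma_* Omega_j = Omega_i . Omega_j
M[n, :n] = Q @ K_B        # D . sigma_* Omega_j = K_B . Omega_j
M[n, n] = 1               # D . F = 1;  pi^* Omega_i . F = 0 already

# Non-degeneracy: distinct classes on B give distinct classes on X.
assert round(abs(np.linalg.det(M))) == 1
print(int(round(abs(np.linalg.det(M)))))
```

Since the fiber column is zero except for the entry $`DF=1`$, the determinant reduces to that of the base intersection form, which is unimodular.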
As we mentioned, the algebraic classes we have identified so far are generic, always present independently of the exact form of the fibration. There are two obvious sources of additional classes. Consider $`H_4(X,𝐙)`$. First, we could have additional sections non-homologous to the zero-section $`\sigma `$. Second, the pull-backs of irreducible classes on $`B`$ could split so that $`\pi ^*(\mathrm{\Omega })=\mathrm{\Sigma }_1+\mathrm{\Sigma }_2`$. This splitting comes from the fact that there can be curves on the base over which the elliptic curve degenerates, for example, into a pair of spheres. New classes appear from wrapping the four-cycle over either one sphere or the other. Now consider $`H_2(X,𝐙)`$. We see that the possibility of degeneration of the fiber means that the fiber class $`F`$ can similarly split, with representatives wrapped, for instance, on one sphere or the other. Finally, the presence of new sections means there is a new way to map curves from $`B`$ into $`X`$ and, in general, classes in $`H_2(B,𝐙)`$ will map under the new section to new classes in $`H_2(X,𝐙)`$.
In all our discussions in this paper, we will ignore these additional classes. This will mean that our moduli space discussion is not in general complete. However, this restriction will allow us to analyze generic properties of the moduli space. In summary, we have identified the generic algebraic classes $`\mathrm{\Omega }`$ in $`H_2(X,𝐙)`$ as classes in $`B`$ (since these are all algebraic for the bases in question) mapped via the section into $`X`$, together with the fiber class $`F`$; while in $`H_4(X,𝐙)`$ the generic algebraic classes are the pull-backs $`\pi ^*(\mathrm{\Omega })`$ of classes in $`B`$, together with the class $`D`$ of the section. Furthermore, distinct algebraic classes in $`B`$ lead to distinct algebraic classes in $`X`$.
### 3.3 Effective classes on $`X`$
We argued in the previous section that a generic algebraic class $`[W]`$ in $`H_2(X,𝐙)`$ on a general $`X`$ can be written as
$$[W]=\sigma _*\mathrm{\Omega }+fF$$
(3.5)
where $`\mathrm{\Omega }`$ is an algebraic class in $`B`$ (which is then mapped to $`X`$ via the section) and $`F`$ is the fiber class, while $`f`$ is some integer. If $`[W]`$ is to be the class of a set of fivebranes it must be effective. What are the conditions, then, on $`\mathrm{\Omega }`$ and $`f`$ such that $`[W]`$ is effective?
We showed in that the following is true. First, for a base which is any del Pezzo or Enriques surface, $`[W]`$ is effective if and only if $`\mathrm{\Omega }`$ is an effective class in $`B`$ and $`f\ge 0`$. Second, this is also true for a Hirzebruch surface $`F_r`$, except when $`\mathrm{\Omega }`$ happens to contain the negative section $`𝒮`$ and $`r\ge 3`$. Here, following the notation of , we write a basis of algebraic classes on $`F_r`$ as the negative section $`𝒮`$ and the fiber $`ℰ`$. In this case, there is a single additional irreducible class $`\sigma _*𝒮+(2-r)F`$. In this paper, for simplicity, we will not consider these exceptional cases in which the statement fails. Thus, under this restriction, we have that
$$[W]=\sigma _*\mathrm{\Omega }+fF\text{ is effective in }X\iff \mathrm{\Omega }\text{ is effective in }B\text{ and }f\ge 0$$
(3.6)
This reduces the question of finding the effective curves in $`X`$ to knowing the generating set of effective curves in the base $`B`$. For the set of base surfaces $`B`$ we are considering, finding such generators is always possible (see for instance ).
The derivation of this result goes as follows. If $`\mathrm{\Omega }`$ is effective in $`B`$ and $`f`$ is non-negative, then, since effective curves in $`B`$ map under the section to effective curves in $`X`$, the class $`[W]`$ is clearly effective. One can also prove that the converse is true in almost all cases. One sees this as follows. First, unless a curve lies purely in the fiber, in which case $`\mathrm{\Omega }=0`$, the fact that $`X`$ is elliptically fibered means that all curves $`W`$ project to curves in the base. The class $`[W]`$ similarly projects to the class $`\mathrm{\Omega }`$. The projection of an effective class must be effective; thus if $`[W]`$ is effective in $`X`$ then so is $`\mathrm{\Omega }`$ in $`B`$. The only question then is whether there are effective, irreducible curves in $`X`$ with negative $`f`$. To address this, we use the fact that any irreducible class in $`H_2(X,𝐙)`$ must have non-negative intersection with any effective class in $`H_4(X,𝐙)`$ unless all the representative curves are contained within the representative surfaces. We start by noting that if $`\mathrm{\Omega }`$ is an effective class in $`B`$ then $`\pi ^*\mathrm{\Omega }`$ must be an effective class in $`H_4(X,𝐙)`$. This can be seen by considering any given representative of $`\mathrm{\Omega }`$ in $`B`$ and its inverse image in $`X`$. From the intersection numbers given in (3.4) and the generic form of $`[W]`$ (3.5) we see that, if $`\mathrm{\Omega }^{\prime }`$ is an effective class in $`B`$, then
$`\pi ^*\mathrm{\Omega }^{\prime }\cdot [W]`$ $`=\mathrm{\Omega }^{\prime }\cdot \mathrm{\Omega }`$ (3.7)
$`D\cdot [W]`$ $`=K_B\cdot \mathrm{\Omega }+f`$
From the first intersection one simply deduces again that if $`[W]`$ is effective then so is $`\mathrm{\Omega }`$. Now suppose that $`f`$ is non-zero. Then $`W`$ cannot be contained within $`B`$ and so, from the second expression, we have $`f\ge -K_B\cdot \mathrm{\Omega }`$. We recall that for del Pezzo and Enriques surfaces $`-K_B`$ is nef, so that its intersection with any effective class $`\mathrm{\Omega }`$ is non-negative. Thus, we must have $`f\ge 0`$ for $`[W]`$ to be effective. The exception is a Hirzebruch surface $`F_r`$ with $`r\ge 3`$. We then have $`-K_B\cdot ℰ=2>0`$ but $`-K_B\cdot 𝒮=2-r<0`$. This allows the existence of effective classes of the form $`\sigma _*𝒮+fF`$ with $`f`$ negative. Indeed, the existence of such a class can be seen as follows. Consider a representative curve $`C`$ of $`𝒮`$ in $`F_r`$ with $`r\ge 3`$ (in fact, the representative is unique). It is easy to see that $`C`$ is topologically $`𝐏^1`$ (see equation (9.10) below). The surface $`S_C=\pi ^{-1}(C)`$ above $`C`$ should thus be an elliptic fibration over $`𝐏^1`$. However, as shown in equation (9.11) below, $`C`$ is in fact contained within the discriminant curve $`\mathrm{\Delta }`$ of the Calabi–Yau fibration. Thus all the fibers over $`C`$ are singular. The generic singular fiber is a $`𝐏^1`$, suggesting that $`S_C`$ is a $`𝐏^1`$ fibration over $`𝐏^1`$. In fact, it can be shown that $`S_C`$ is indeed itself the Hirzebruch surface $`F_{r-2}`$ (or a blow-up of such a surface). What class is our original curve $`C`$ in the new surface $`F_{r-2}`$? If we write the classes of $`F_{r-2}`$ as $`𝒮^{\prime }`$ and $`ℰ^{\prime }`$, we identify $`ℰ^{\prime }=F`$, since this is just the fiber class of the $`F_{r-2}`$. In addition, one can show that $`𝒮=𝒮^{\prime }+(r-2)ℰ^{\prime }`$. However, we know that $`𝒮^{\prime }`$ itself is an irreducible class, so $`𝒮^{\prime }=𝒮+(2-r)F`$ is irreducible in $`H_2(X,𝐙)`$. Thus we see there is one new irreducible class with negative $`f`$ which saturates the condition that $`f\ge -(r-2)`$.
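The intersection numbers entering this argument are easy to verify from the standard $`F_r`$ intersection form, $`𝒮\cdot 𝒮=-r`$, $`𝒮\cdot ℰ=1`$, $`ℰ\cdot ℰ=0`$, with $`-K_B=2𝒮+(r+2)ℰ`$. The helper below is our own illustrative code, not from the paper:

```python
# Our own check (not the paper's code) of the F_r intersection numbers used
# in the Hirzebruch discussion.  A class a*S + b*E is stored as (a, b), with
# S.S = -r, S.E = 1, E.E = 0, and -K_B = 2S + (r+2)E on F_r.
def dot(x, y, r):
    """Intersection number of x = (a, b) and y = (c, d) on F_r."""
    return -r * x[0] * y[0] + x[0] * y[1] + x[1] * y[0]

for r in range(7):
    minus_K = (2, r + 2)
    assert dot(minus_K, (0, 1), r) == 2        # -K_B . E = 2 for every r
    assert dot(minus_K, (1, 0), r) == 2 - r    # -K_B . S = 2 - r : negative once r >= 3
print([dot((2, r + 2), (1, 0), r) for r in range(7)])
```

The printed list makes the exceptional cases visible: $`-K_B\cdot 𝒮`$ first turns negative at $`r=3`$.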
## 4 The moduli space for fivebranes wrapping the elliptic fiber and the role of the axion
Probably the simplest example of a fivebrane moduli space is the case where the fivebranes wrap only the elliptic fiber of the Calabi–Yau threefold. By way of introduction to calculating moduli spaces, in this section, we will consider this case, first for a single fivebrane and then for a collection of fivebranes. These configurations are well understood in the dual F-theory picture as collections of D3-branes . We end the section with a discussion of the connection between our results and the F-theory description.
### 4.1 $`[W]=F`$
If the fivebrane wraps a fiber only once, the class of the fivebrane curve is simply given by
$$[W]=F$$
(4.1)
A fivebrane wrapping any of the elliptic fibers will be in this class. One might imagine that there are other fivebranes in this class, where not all the fivebrane lies at the same point in the Calabi–Yau threefold. Instead, as one moves along the fivebrane in the fiber direction, the fivebrane could have a component in the base directions. However, if the curve is to be holomorphic, every point in the fivebrane curve must lie over the same point in the base. Similarly, in order to preserve $`𝒩=1`$ supersymmetry, the brane must be parallel to the orbifold fixed planes, so it is also at a fixed point in the orbifold. Since these position moduli are independent, the moduli space appears to be $`B\times S^1/Z_2`$. The two complex coordinates on $`B`$ form a pair of chiral $`𝒩=1`$ superfields. The metric on this part of the moduli space should simply come from the Kähler metric on the base $`B`$.
However, we have, thus far, ignored the axionic scalar, $`a`$, on the fivebrane world volume. We have argued that this is in a chiral multiplet with the orbifold modulus $`x^{11}`$. Furthermore, the axion is compact, describing an $`S^1`$. However, at the edges of the orbifold this changes. It has been argued in that there is a transition when a fivebrane reaches the boundary. At the boundary, the brane can be described by a point-like $`E_8`$ instanton. New low-energy fields then appear, corresponding to moving in the instanton moduli space. Similarly, some of the fivebrane moduli disappear. Throughout this transition the low-energy theory remains $`𝒩=1`$. Thus, since the $`x^{11}`$ degree of freedom disappears in the transition, so must the axionic degree of freedom. Consequently, the axionic $`S^1`$ moduli space must collapse to a point at the boundary. We see that the full $`(x^{11},a)`$ moduli space is just the fibration of $`S^1`$ over the interval $`S^1/Z_2`$, where the $`S^1`$ fiber collapses to a point at the boundaries; that is, the orbifold and axion part of the moduli space is simply $`S^2=𝐏^1`$.
The fact that the axionic degree of freedom disappears on the boundary can be seen in another way. In the fivebrane equation of motion, one can write the self-dual three-form field strength $`h`$ in terms of a two-form potential, $`b`$, in combination with the pull-back onto the fivebrane worldvolume of the eleven-dimensional three-form potential $`C`$ as
$$h=db-C$$
(4.2)
Under the $`Z_2`$ orbifold symmetry $`C`$ is odd unless it has a component in the direction of the orbifold. Since the fivebrane must be parallel to the orbifold fixed-planes this is not the case. This implies that $`h`$ must also be odd. Consequently, $`h`$ must be zero on the orbifold fixed planes implying that the axion $`a`$ also disappears on the boundary.
In summary, the full moduli space is given locally by
$$ℳ(F)=B\times 𝐏_a^1.$$
(4.3)
where the subscript on $`𝐏_a^1`$ denotes that this part of moduli space describes the axion multiplet. Globally, this $`𝐏^1`$ could twist as we move in $`B`$; so $`ℳ(F)`$ is really a $`𝐏^1`$ bundle over $`B`$. We will return to this point below. What about the vector degrees of freedom? Since the fiber is elliptic, the fivebrane curve must be topologically a torus. Thus we have
$$g=\text{genus}(W)=1$$
(4.4)
and there is a single $`U(1)`$ vector multiplet in the low-energy theory.
### 4.2 $`[W]=fF`$
The generalization to the case where the fivebrane class is a number of elliptic fibers is straightforward. The class
$$[W]=fF$$
(4.5)
where $`f\ge 1`$, means we have a collection of curves which together wrap the fiber $`f`$ times. In general, we could have one component which wraps $`f`$ times, or two or more components each wrapping fewer times. In the limiting case, there are $`f`$ distinct components, each wrapping only once. A single component must wrap entirely at the same point in the base. In addition, it must sit at a fixed point in the orbifold interval and must have a single value of the axionic scalar. Two or more distinct components can wrap at different points in the base and have different values of $`x^{11}`$ and $`a`$.
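The possible component structures are labeled by partitions of $`f`$ into wrapping numbers; a toy enumeration (our own illustration, not the paper's notation):

```python
from functools import lru_cache

# Toy enumeration (our own illustration): the ways the class f*F can split into
# components wrapping n_1 >= n_2 >= ... times, with n_1 + n_2 + ... = f, are
# exactly the partitions of the integer f.
@lru_cache(maxsize=None)
def partitions(f, largest=None):
    """All partitions of f as descending tuples."""
    if largest is None:
        largest = f
    if f == 0:
        return ((),)
    out = []
    for n in range(min(f, largest), 0, -1):
        out.extend((n,) + rest for rest in partitions(f - n, n))
    return tuple(out)

# f = 4: a single 4-wrapped brane, 3+1, 2+2, 2+1+1, or four singly wrapped branes.
assert len(partitions(4)) == 5
print(partitions(4))
```

Each partition then carries its own base, orbifold, and axion moduli for every component, as discussed next.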
As homology cycles, there is no distinction between the case where some number $`n`$ of singly wrapped components overlap, lying at the same point in the base, and the case where there is a single component wrapping $`n`$ times. Both cases represent the same two-cycle in the Calabi–Yau manifold. Physically, they could be distinguished if the $`n`$ singly wrapped components were at different points in $`S^1/Z_2`$ or had different values of the axion. However, if the values of $`x^{11}`$ and $`a`$ were also the same, by analogy with D branes, we would expect that we could not then distinguish, in terms of the scalar fields on the branes, the $`n`$ singly wrapped fivebranes from a single brane wrapped $`n`$ times. From the discussion in the last section, each singly wrapped fivebrane has a moduli space given locally by $`B\times 𝐏_a^1`$. Thus for $`f`$ components, we expect the full scalar field moduli space locally has the form
$$ℳ(fF)=\left(B\times 𝐏_a^1\right)^f/𝐙_f$$
(4.6)
where we have divided out by permutations since the fivebranes are indistinguishable. The ambiguous points in moduli space, which could correspond either to a number of singly wrapped fivebranes or to a single multiply wrapped fivebrane, are then the places where two or more of the points in the $`f`$ factors of $`B\times 𝐏_a^1`$ coincide. Note, in addition, that this is again only the local structure of $`ℳ(fF)`$. We do not know how the $`𝐏_a^1`$ factors twist as we move the fivebranes in the base. Thus, globally, $`ℳ(fF)`$ is the $`𝐙_f`$ quotient of a $`(𝐏^1)^f`$ bundle over $`B^f`$.
In a similar way, the gauge symmetry on the fivebranes also follows by analogy with D-branes. At a general point in the moduli space, we have $`f`$ distinct fivebranes, each wrapping a torus and so, as in the previous section, each carrying a single $`U(1)`$ gauge field. When two branes collide in the Calabi–Yau threefold, and are at the same point in the orbifold and have the same value of the axion, we expect the symmetry to be enhanced to $`U(2)`$. The new massless states come from membranes stretched between the fivebranes. The maximal enhancement occurs when all the fivebranes collide and the group becomes $`U(f)`$.
### 4.3 Duality to F theory and twisting the axion
The results of the last two sections are extremely natural from the F theory point of view. It has been argued that fivebranes wrapping an elliptic fiber of $`X`$ correspond to threebranes spanning the flat $`M_4`$ space on the type IIB side . To understand the correspondence, we first very briefly review the relation between M and F theory .
The duality states that the heterotic string on an elliptically fibered Calabi–Yau threefold $`X`$ is dual to F theory on a Calabi–Yau fourfold $`X^{\prime }`$ fibered by K3 over the same base $`B`$. The M theory limit of the heterotic string we consider here is consequently also dual to the same F theory configuration. In addition, the duality requires that the K3 fibers should themselves be elliptically fibered. This means that the fourfold $`X^{\prime }`$ also has a description as an elliptic fibration over a threefold base $`B^{\prime }`$. Since the base of an elliptically fibered K3 manifold is simply $`𝐏^1`$, this implies that $`B^{\prime }`$ must be a $`𝐏^1`$ fibration over $`B`$. As a type IIB background, the spacetime is $`B^{\prime }\times M_4`$, where $`M_4`$ is flat Minkowski space. The complex structures of the elliptic fibers of $`X^{\prime }`$ then encode how the IIB scalar doublet, the dilaton and the Ramond–Ramond scalar, vary as one moves over the ten-dimensional manifold $`B^{\prime }\times M_4`$. As such, they describe some configuration of seven-branes in type IIB.
M theory fivebranes which wrap the elliptic fiber in $`X`$ map to threebranes spanning $`M_4`$ in the dual F theory vacuum. As such, the threebrane is free to move in the remaining six compact dimensions. Thus we expect that the threebrane moduli space is simply $`B^{\prime }`$. However, we have noted that $`B^{\prime }`$ is a $`𝐏^1`$ fibration over $`B`$. Thus we see that locally the moduli space as calculated on the F theory side exactly coincides with the moduli space of the fivebrane given in (4.3) above. The $`𝐏^1`$ fiber in $`B^{\prime }`$ is precisely the orbifold coordinate $`x^{11}`$ together with the axion $`a`$. For a collection of $`f`$ threebranes, we expect the moduli space is simply promoted to the symmetric product $`B^{\prime f}/Z_f`$. Again, locally, this agrees with the moduli space (4.6) of the corresponding M theory fivebranes. Similarly, it is well known that a threebrane carries a single $`U(1)`$ gauge field, as does the M theory fivebrane. For a collection of $`f`$ threebranes this is promoted to $`U(f)`$, which was really the motivation for our claim for the vector multiplet structure calculated in the M theory picture.
In general, the arguments given in the previous two sections were only sufficient to give the local structure of the axion multiplet part of the fivebrane moduli space. We did not determine how the axion fiber $`𝐏_a^1`$ twisted as one moved the fivebrane in the Calabi–Yau manifold. From duality with F theory, we have seen that, in general, we expect this twisting is non-trivial. In fact, it can also be calculated from the M theory side. We will not give the details here but simply comment on the mechanism. A full description will be given elsewhere . The key is to recall that the self-dual three-form on the fivebrane (4.2) depends on the pull-back of the supergravity three-form potential $`C`$. This leads to holonomy for the axion degree of freedom as one moves the fivebrane within the Calabi–Yau threefold. The holonomy can be non-trivial if the field strength $`G`$ is non-trivial. However, from the modified Bianchi identity (2.2), we see that this is precisely the case when there are non-zero sources from the boundaries of $`S^1/Z_2`$ and also from the fivebranes in the bulk. In general, one can calculate how the axion twists, and hence how $`𝐏_a^1`$ twists, in terms of the different sources.
This phenomenon is interesting but not central to the structure of the fivebrane moduli spaces, such as the dimension of the space, how its different branches intersect, or what the genus of the fivebrane curve is. Thus, for simplicity, in the rest of this paper we will ignore the issue of how $`a`$ twists as one moves a given collection of fivebranes within the Calabi–Yau manifold. Consequently, the moduli spaces we quote will strictly be correct only locally for the axion degrees of freedom. So that it is clear where the extra global structure can appear, we will always label the $`𝐏^1`$ degrees of freedom associated with the axions as $`𝐏_a^1`$.
## 5 Two examples with fivebranes wrapping curves in the base
The discussion of the moduli space becomes somewhat more complicated once one includes classes where the fivebrane wraps a curve in the base manifold. Again, we will take two simple examples to illustrate the type of analysis one uses. In both cases, we will assume, for specificity, that the base manifold is a $`dP_8`$ surface, though the methods of our analysis would apply to any base $`B`$. Throughout, we will use the notation and results of . A $`dP_8`$ surface is a $`𝐏^2`$ surface blown up at eight points, $`p_1,\dots ,p_8`$. In general there are nine algebraic classes in the base: the class $`l`$ inherited from the class of lines in the $`𝐏^2`$ and the eight classes of the blown-up points $`E_1,\dots ,E_8`$. In the following, we will often describe curves in $`dP_8`$ in terms of the corresponding plane curve in $`𝐏^2`$.
### 5.1 $`[W]=\sigma _*l-\sigma _*E_1`$
We first take an example where the fivebrane class includes no fiber components
$$[W]=\sigma _*l-\sigma _*E_1$$
(5.1)
where $`\sigma _*l`$ and $`\sigma _*E_1`$ are the images in the Calabi–Yau manifold of the corresponding classes in the base. Since $`\mathrm{\Omega }=l-E_1`$ is an effective class in the base (see ), we see from (3.6) that $`[W]`$ is effective in $`X`$, as required.
If we knew that the curve lay only in the base, the moduli space would then simply be the space of curves in a $`dP_8`$ surface in the class $`l-E_1`$, which is relatively easy to calculate. In general, however, $`W`$ lies somewhere in the full Calabi–Yau threefold. The fact that its homology class is the image of a homology class in the base does not imply that $`W`$ is stuck in $`B`$. Nonetheless, we do know that, under the projection map $`\pi `$ from $`X`$ to $`B`$, the curve $`W`$ must project onto a curve $`C`$ in the base, as shown in Figure 2. Furthermore, the class of $`C`$ must be $`\mathrm{\Omega }=l-E_1`$ in $`B`$. What we can do is find the moduli space of such curves $`C`$ in the base and then ask, for each such $`C`$, what set of curves $`W`$ in the full Calabi–Yau manifold would project onto $`C`$. That is to say, the full moduli space should have a fibered structure. The base of this space will be the moduli space of curves $`C`$ in $`B`$, while the fiber above a given curve $`C`$ is the set of curves $`W`$ in $`X`$ which project onto that $`C`$.
In our example, the moduli space of curves $`C`$ in the class $`\mathrm{\Omega }=l-E_1`$ is relatively easy to analyze. In $`𝐏^2`$, $`\mathrm{\Omega }`$ describes the class of lines through one point, $`p_1`$. A generic line in $`𝐏^2`$ is the zero locus of a homogeneous polynomial of degree one,
$$ax+by+cz=0$$
(5.2)
where $`[x,y,z]`$ are homogeneous coordinates on $`𝐏^2`$. Since the overall coefficient is irrelevant, a given line is fixed by giving $`[a,b,c]`$ up to an overall scaling. Thus the moduli space of lines is itself $`𝐏^2`$. Furthermore, we see that a given point in the line is specified by fixing, for instance, $`x`$ and $`y`$ up to an overall scaling. Consequently, we see that, topologically, a line in $`𝐏^2`$ is just a sphere $`𝐏^1`$. For the class $`\mathrm{\Omega }`$, we further require that the line pass through a given point $`p_1=[x_1,y_1,z_1]`$. This provides a single linear constraint on $`a`$, $`b`$ and $`c`$,
$$ax_1+by_1+cz_1=0$$
(5.3)
We now have only the set of lines radiating from $`p_1`$ and the moduli space is reduced to $`𝐏^1`$. Topologically, the line in $`𝐏^2`$ is still just a sphere and, generically, its image in $`dP_8`$ will also be a sphere.
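The constraint (5.3) can be made concrete numerically: the solutions $`[a,b,c]`$ form a two-dimensional linear subspace, whose projectivization is the $`𝐏^1`$ of lines through $`p_1`$. The sketch below is our own, with an arbitrarily chosen point:

```python
import numpy as np

# Numerical sketch (our own, with an arbitrarily chosen point): lines
# a*x + b*y + c*z = 0 through a fixed p1 = [x1, y1, z1] form the null space of
# the 1 x 3 constraint matrix; it is 2-dimensional, so projectivizing leaves a
# P^1 worth of lines through p1.
p1 = np.array([[1.0, 2.0, 3.0]])   # hypothetical homogeneous coordinates of p1

_, s, Vt = np.linalg.svd(p1)       # null space = rows of Vt beyond the rank
null_dim = 3 - int(np.sum(s > 1e-12))
assert null_dim == 2

for line in Vt[1:]:                # each such line really contains p1
    assert abs(line @ p1[0]) < 1e-10
print(null_dim)
```

A 2-dimensional space of coefficients, taken up to overall scale, is exactly a $`𝐏^1`$, in agreement with the text.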
There are, however, seven special points in the moduli space. A general line passing through $`p_1`$ will not intersect any other blown-up point. However, there are seven special lines radiating from $`p_1`$ which also pass through a second blown-up point. (To be a $`dP_8`$ manifold, the eight blow-up points must be in general position, so no three are ever in a line.) This is shown in Figure 3. Let us consider one of these seven lines, say the one which passes through $`p_2`$. The transform of such a line to $`dP_8`$ splits into two curves
$$C=C_1+C_2$$
(5.4)
The first component $`C_1`$ projects back to the line in $`𝐏^2`$. The second component corresponds to a curve wrapping the blown up $`𝐏^1`$ at $`p_2`$ and so has no analog in $`𝐏^2`$. Specifically, the classes of the two curves are
$$[C_1]=l-E_1-E_2,\qquad [C_2]=E_2$$
(5.5)
Using the results in the Appendix to , we see that
$$[C_1]\cdot [C_1]=[C_2]\cdot [C_2]=-1$$
(5.6)
It follows that both curves are in exceptional classes in $`dP_8`$ and so cannot be deformed within the base. Hence, no new moduli for moving in the base appear when the curve splits.
From the form of (5.5), we see that, when the curve splits, $`C_1`$ remains a line in $`𝐏^2`$ so is topologically still a sphere, while $`C_2`$ wraps the blown up $`𝐏^1`$ and, so, is also topologically a sphere. Furthermore, the intersection number
$$[C_1]\cdot [C_2]=1$$
(5.7)
implies that the two spheres intersect at one point. What has happened is that the single sphere $`C`$ has pinched off into a pair of spheres as shown in Figure 4. In summary, for the moduli space of curves $`C`$ in the base, in the homology class $`\mathrm{\Omega }=lE_1`$, we have,
$$\begin{array}{ccc}[C]& \text{genus}& \text{moduli space}\\ & & \\ l-E_1& 0& 𝐏^1-7\text{ pts.}\\ (l-E_1-E_i)+(E_i)& 0+0& \text{single pt.}\end{array}$$
(5.8)
where in the second line $`i=2,\mathrm{\ldots},8`$.
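The intersection numbers (5.6) and (5.7) follow from the standard pairing on $`H_2(dP_8,𝐙)`$ in the basis $`(l,E_1,\mathrm{\ldots},E_8)`$, namely $`l\cdot l=1`$, $`E_i\cdot E_j=-\delta _{ij}`$, $`l\cdot E_i=0`$. A minimal numerical check (hypothetical helper names):

```python
def dot(u, v):
    """Intersection pairing on H_2(dP_8, Z) in the basis (l, E_1, ..., E_8):
    l.l = 1, E_i.E_j = -delta_ij, l.E_i = 0."""
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

C1 = [1, -1, -1, 0, 0, 0, 0, 0, 0]   # [C_1] = l - E_1 - E_2
C2 = [0,  0,  1, 0, 0, 0, 0, 0, 0]   # [C_2] = E_2

assert dot(C1, C1) == -1             # both classes are exceptional, so
assert dot(C2, C2) == -1             # neither curve can be deformed in B
assert dot(C1, C2) == 1              # the two spheres meet at one point
```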
The next step is to find, for a given curve $`C`$, how many curves $`W`$ there are in the full Calabi–Yau space which project onto $`C`$. Furthermore, $`W`$ must be in the homology class $`\sigma_* l-\sigma_* E_1`$. Let us start with a curve $`C`$ at a generic point in the moduli space (5.8), that is, a point in the first line of the table where the curve has not split. Any curve $`W`$ which projects onto $`C`$ must lie somewhere in the space of the elliptic fibration over $`C`$. Thus, we are interested in studying the complex surface
$$S_C=\pi^{-1}(C)$$
(5.9)
This structure is shown in Figure 2. By definition, this surface is an elliptic fibration over $`C`$, which means it is a fibration over $`𝐏^1`$. In general, the surface will have some number of singular fibers. This is equal to the intersection number between the discriminant curve $`\Delta`$, which gives the position of all the singular fibers on $`B`$, and the base curve $`C`$. Recall that $`[\Delta ]=-12K_B`$. Using the results summarized in the Appendix to , since the base is a $`dP_8`$ surface and intersection numbers depend only on homology classes, we have
$$[\Delta ]\cdot [C]=12\left(3l-E_1-\mathrm{\cdots}-E_8\right)\cdot \left(l-E_1\right)=24$$
(5.10)
Thus we see that, generically, $`S_C`$ is an elliptic fibration over $`𝐏^1`$ with 24 singular fibers. This implies that
$$S_C\text{ is a K3 surface}$$
(5.11)
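The count of 24 singular fibers is a two-line computation in the same intersection lattice; the following sketch (hypothetical names) reproduces it:

```python
def dot(u, v):
    """Intersection pairing on H_2(dP_8, Z) in the basis (l, E_1, ..., E_8)."""
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

Delta = [36] + [-12] * 8              # [Delta] = -12 K_B = 12(3l - E_1 - ... - E_8)
C     = [1, -1, 0, 0, 0, 0, 0, 0, 0]  # [C] = l - E_1

assert dot(Delta, C) == 24            # 24 singular fibers, so S_C is a K3
```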
The curve $`C`$ is the zero section of the fibration. Further, projection gives us a map from the actual curve $`W`$ to its image $`C`$ in the base. The projection only wraps $`C`$ once, so, since $`C`$ is not singular, the map is invertible and $`W`$ must also be a section of $`S_C`$. Our question then simplifies to asking: what is the moduli space of sections of $`S_C`$ in the class $`\sigma_* l-\sigma_* E_1`$?
To answer this question, we start by identifying the algebraic classes in $`S_C`$. We know that we have at least two classes inherited from the Calabi–Yau threefold: the class of the zero section $`C`$, which we write as $`D_C`$, and the class of the elliptic fiber, $`F_C`$. Specifically, under the inclusion map
$$i_C:S_C\to X$$
(5.12)
$`D_C`$ and $`F_C`$ map into the corresponding classes in $`X`$
$`i_{C*}D_C`$ $`=[C]=\sigma_* l-\sigma_* E_1`$ (5.13)
$`i_{C*}F_C`$ $`=F`$
where $`i_{C*}`$ is the induced map between homology classes
$$i_{C*}:H_2(S_C,𝐙)\to H_2(X,𝐙)$$
(5.14)
These are the only relevant generic classes in $`X`$. However, there may be additional classes on $`S_C`$ which map to the same class in $`X`$, so that the map $`i_{C*}`$ has a kernel. That is to say, two curves which are homologous in $`X`$ may not be homologous in $`S_C`$. However, we note that a generic K3 surface would have no algebraic classes, since $`h^{2,0}\ne 0`$ (see the discussion in section 3.2). Given that in our case of an elliptically fibered K3 with section we have at least two algebraic classes, the choice of complex structure on $`S_C`$ cannot be completely general. However, generically, we have no reason to believe that there are any further algebraic classes. For particular choices of complex structure additional classes may appear but, since here we are considering the generic properties of the moduli space, we will ignore this possibility.
Now, we require that $`W`$, like $`C`$, is also in the class $`\sigma_* l-\sigma_* E_1`$ in the full Calabi–Yau space. This immediately implies, given the map (5.13), that $`W`$ is also in the class $`D_C`$ of the zero section $`C`$ in $`S_C`$. Furthermore, we can calculate the self-intersection number of this class within $`S_C`$. This can be done as follows. Recall that the Riemann–Hurwitz formula applied to the curve $`C`$ states that
$$2g-2=\mathrm{deg}K_C$$
(5.15)
where $`g`$ is the genus and $`K_C`$ is the cohomology class of the canonical bundle of $`C`$. The adjunction formula then gives
$$\mathrm{deg}K_C=\left(K_{S_C}+D_C\right)\cdot D_C$$
(5.16)
where $`K_{S_C}`$ is the canonical class of the K3 surface $`S_C`$. Using the fact that the canonical class of a K3 surface is zero, $`K_{S_C}=0`$, and that $`C`$ is a sphere so $`g=0`$, it follows from (5.15) and (5.16) that
$$D_C\cdot D_C=-2$$
(5.17)
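The step from (5.15) and (5.16) to the self-intersection of the section is simple arithmetic, which the sketch below (hypothetical function name) makes explicit; the same formula with $`K_S\cdot D=-1`$ gives the $`dP_9`$ value used later in the text.

```python
def section_self_intersection(g, K_dot_D):
    """Solve 2g - 2 = (K_S + D).D = K_S.D + D.D for the self-intersection D.D."""
    return 2 * g - 2 - K_dot_D

# K3 surface: K_S = 0 and the section is a sphere (g = 0).
assert section_self_intersection(g=0, K_dot_D=0) == -2
# dP_9 surface: K_S = -F and D.F = 1, so K_S.D = -1.
assert section_self_intersection(g=0, K_dot_D=-1) == -1
```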
Since its self-intersection is negative, the section cannot be deformed at all within the surface $`S_C`$. In conclusion, we see that there is, generically, no moduli space of curves $`W`$ which project onto $`C`$. Rather, the only curve in $`S_C`$ in the class $`\sigma_* l-\sigma_* E_1`$ is the section $`C`$ itself. We see that, generically, the curve $`W`$ can only move in the base of $`X`$ and cannot be deformed in a fiber direction.
Recall that a fivebrane wrapped on $`W`$ also has a modulus describing its position in $`S^1/Z_2`$, as well as the axionic modulus. Together, as was discussed in the previous section, these form a $`𝐏^1`$ moduli space. Thus, we conclude that the moduli space associated with a generic curve in (5.8) is locally simply
$$\mathcal{M}_{\text{generic}}=\left(𝐏^1-7\text{ pts.}\right)\times 𝐏_a^1$$
(5.18)
As discussed above, since the axion can be twisted, globally, this extends to a $`𝐏_a^1`$ bundle over $`𝐏^1`$. Physically, we have a single fivebrane wrapping an irreducible curve in the Calabi–Yau threefold, which lies entirely within the base $`B`$. The curve can be deformed in the base, which gives the first factor in the moduli space, but cannot be deformed in the fiber direction. It can also move in the orbifold interval and have different values for the axionic modulus, which gives the second factor in (5.18). Since the curve has genus zero, there are no vector fields in the low-energy theory.
Thus far we have discussed the generic part of the moduli space. The full moduli space will have the form
$$\mathcal{M}(\sigma_* l-\sigma_* E_1)=\left[\left(𝐏^1-7\text{ pts.}\right)\times 𝐏_a^1\right]\cup \mathcal{M}_{\text{non-generic}}$$
(5.19)
where the additional piece $`\mathcal{M}_{\text{non-generic}}`$ describes the moduli space at each of the seven special points where the curve $`C`$ splits into two components. To analyze this part of the moduli space, we must consider each component separately, but we can use the same procedure as above. The fact that the image $`C`$ splits means that the original curve $`W`$ must also split in $`X`$
$$W=W_1+W_2$$
(5.20)
with $`C_1`$ being the projection onto the base of $`W_1`$ and $`C_2`$ the projection of $`W_2`$. Let us consider the case where the line in $`𝐏^2`$ also intersects $`p_2`$. Then the homology classes of $`W_1`$ and $`W_2`$ must split as
$$[W_1]=\sigma_* l-\sigma_* E_1-\sigma_* E_2,\qquad [W_2]=\sigma_* E_2$$
(5.21)
Note that one might imagine adding $`nF`$ to $`[W_1]`$ and $`-nF`$ to $`[W_2]`$, still leaving the total $`[W]`$ unchanged and having the correct projection onto the base. However, from (3.6), we see that one class would then not be effective and so, since $`W_1`$ and $`W_2`$ must each correspond to a physical fivebrane, such a splitting is not allowed.
If we start with $`W_1`$, to find the curves in $`X`$ which project onto $`C_1`$ and are in the homology class $`\sigma_* l-\sigma_* E_1-\sigma_* E_2`$, we begin, as above, with the surface $`S_{C_1}=\pi^{-1}(C_1)`$ above $`C_1`$. Calculating the number of singular fibers, we find
$$[\Delta ]\cdot [C_1]=12\left(3l-E_1-\mathrm{\cdots}-E_8\right)\cdot \left(l-E_1-E_2\right)=12$$
(5.22)
Since $`C_1`$ is a sphere, we have an elliptic fibration over $`𝐏^1`$ with 12 singular fibers, which implies that
$$S_{C_1}\text{ is a }dP_9\text{ surface}$$
(5.23)
Similarly, if we consider $`[C_2]=E_2`$,
$$[\Delta ]\cdot [C_2]=12\left(3l-E_1-\mathrm{\cdots}-E_8\right)\cdot E_2=12$$
(5.24)
Since the curve $`C_2`$ is also a sphere, it follows that we again have an elliptic fibration over $`𝐏^1`$ with 12 singular fibers, and hence we also have
$$S_{C_2}\text{ is a }dP_9\text{ surface}$$
(5.25)
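The fiber counts (5.22) and (5.24) can be checked in the same intersection lattice as before (hypothetical helper names); note how the 24 singular fibers of the K3 surface split evenly between the two components:

```python
def dot(u, v):
    """Intersection pairing on H_2(dP_8, Z) in the basis (l, E_1, ..., E_8)."""
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

Delta = [36] + [-12] * 8                # [Delta] = 12(3l - E_1 - ... - E_8)
C1 = [1, -1, -1, 0, 0, 0, 0, 0, 0]      # [C_1] = l - E_1 - E_2
C2 = [0,  0,  1, 0, 0, 0, 0, 0, 0]      # [C_2] = E_2

assert dot(Delta, C1) == 12             # S_{C_1} is a dP_9
assert dot(Delta, C2) == 12             # S_{C_2} is a dP_9
assert dot(Delta, C1) + dot(Delta, C2) == 24   # the K3's fibers split 12 + 12
```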
Thus, we are considering the degeneration of the K3 surface $`S_C`$, which had 24 singular fibers, into a pair of $`dP_9`$ surfaces $`S_{C_1}`$ and $`S_{C_2}`$, each with 12 singular fibers. On a given $`dP_9`$ surface, say $`S_{C_1}`$, we are guaranteed, as for the K3 surface, that there are at least two algebraic classes, the section class $`D_{C_1}`$ and the fiber class $`F_{C_1}`$. However, the $`dP_9`$ case is more interesting than that of a K3 surface, since there are always additional algebraic classes. On a $`dP_9`$ surface, $`h^{2,0}=0`$. Consequently, as was discussed in section 3.2, whatever complex structure one chooses, all classes in $`H_2(dP_9,𝐙)`$ are algebraic. Thus, one finds that the algebraic classes on $`dP_9`$ form a 10-dimensional lattice. Since there are only two distinguished classes on the Calabi–Yau threefold (namely $`\sigma_* l-\sigma_* E_1`$ and the fiber class $`F`$), this implies that distinct classes in $`S_{C_1}`$ must map to the same class in $`X`$. That is to say, curves which are not homologous in $`S_{C_1}`$ are homologous once one considers the full threefold $`X`$.
The full analysis of the extra classes on $`S_{C_1}`$ will be considered in section 6.3. In our particular case, it will turn out that, for $`W_1`$ to be in the same class as $`C_1`$ in the full Calabi–Yau threefold, it must also be in the same class within $`S_{C_1}`$. Thus, we are again interested in the moduli space of the section class $`D_{C_1}`$ in $`S_{C_1}`$. Now, we recall (see, for instance, the Appendix to ) that the canonical class of a $`dP_9`$ surface is simply $`K_{S_{C_1}}=-F_{C_1}`$. So, by the calculation analogous to (5.17), using the fact that $`D_{C_1}\cdot F_{C_1}=1`$ since $`C_1`$ is a section, we have that
$$D_{C_1}\cdot D_{C_1}=-1$$
(5.26)
This means that the curve $`C_1`$ cannot be deformed within $`S_{C_1}`$. Thus, as in the K3 case, the only possible $`W_1`$ is the section $`C_1`$ itself.
An identical calculation goes through for the other component $`W_2`$. Furthermore, the analysis is the same at each of the other six exceptional points in moduli space. Given that the curve has split into two components at each of these points, we have two separate moduli describing the position of each component in $`S^1/Z_2`$ as well as two moduli describing the axionic degree of freedom for each component. It follows that
$$\mathcal{M}_{\text{non-generic}}=7\left(𝐏_a^1\times 𝐏_a^1\right)$$
(5.27)
where $`7(𝐏^1\times 𝐏^1)=𝐏^1\times 𝐏^1\cup \mathrm{\cdots}\cup 𝐏^1\times 𝐏^1`$. We find, then, that the full moduli space has a branched structure,
$$\mathcal{M}(\sigma_* l-\sigma_* E_1)=\left[\left(𝐏^1-7\text{ pts.}\right)\times 𝐏_a^1\right]\cup 7\left(𝐏_a^1\times 𝐏_a^1\right)$$
(5.28)
where, globally, the first component, $`\mathcal{M}_{\text{generic}}`$, in fact, extends to a $`𝐏_a^1`$ bundle over $`𝐏^1`$. We can also describe the way each copy of $`𝐏_a^1\times 𝐏_a^1`$ is attached to $`\mathcal{M}_{\text{generic}}`$: the diagonal of $`𝐏_a^1\times 𝐏_a^1`$, the set of points where the two components intersect, is glued to a fiber of the $`𝐏_a^1`$ bundle $`\mathcal{M}_{\text{generic}}`$.
Physically, as we discussed above, at a generic point in the moduli space we have a single fivebrane wrapping a curve which lies solely in the base of the Calabi–Yau threefold and is topologically a sphere. The curve can be moved in the base and in $`S^1/Z_2`$ but not in the fiber direction. In moving around the base there are seven special points where the fivebrane splits into two curves intersecting at one point, as in Figure 4. These are each fixed in both the base and the fiber of the Calabi–Yau manifold, but can now each move independently in $`S^1/Z_2`$. The two fivebranes can then be separated so that they no longer intersect. In making the transition from one of these branches of the moduli space to the case where there is a single fivebrane, the two fivebranes must be at the same point in $`S^1/Z_2`$ and have the same value of the axionic scalar $`a`$. They can then combine and be deformed away within the base as a single curve. This structure is shown in Figure 5. Note that, unlike the pure fiber case (4.6), the two curves $`W_1`$ and $`W_2`$ are distinguishable, since they wrap different cycles in the base, so we do not have to be concerned with modding out by discrete symmetries.
Since in our example all the curves are topologically spheres, there are generically no vector fields in the low-energy theory. However, at the points where there is a transition between the two-fivebrane branch and the single fivebrane branch, additional low-energy fields can appear. These correspond to membranes which stretch between the two fivebranes becoming massless as the fivebranes intersect.
### 5.2 $`[W]=\sigma_* l-\sigma_* E_1+F`$
We can generalize the previous example by including a fiber component in the class of $`W`$, so that
$$[W]=\sigma_* l-\sigma_* E_1+F$$
(5.29)
Note that from (3.6) this class is effective.
We immediately see that one simple possibility is that $`W`$ splits into two curves
$$W=W_0+W_F$$
(5.30)
where
$$[W_0]=\sigma_* l-\sigma_* E_1,\qquad [W_F]=F$$
(5.31)
The moduli space of the $`W_0`$ component will be exactly the same as in our previous example, while for the pure fiber component, as given in equation (4.3), the moduli space is locally $`B\times 𝐏_a^1`$. Since the base in this example is $`dP_8`$, we conclude that, when the curve splits, this part of the moduli space is just the product of the moduli spaces for $`W_0`$ and $`W_F`$, that is
$$\mathcal{M}(\sigma_* l-\sigma_* E_1)\times \left(dP_8\times 𝐏_a^1\right)$$
(5.32)
where $`\mathcal{M}(\sigma_* l-\sigma_* E_1)`$ was given above in (5.28). Physically we have two fivebranes, one wrapped on the fiber and one on the base, which can each move independently. As discussed above, for the curve wrapped on the base there are certain special points in moduli space where it splits into a pair of fivebranes, so that, at these special points, we have a total of three independent fivebranes. Since the curves of the base are all topologically spheres, their genus is zero. Hence, the only vector multiplets come from the fivebrane wrapping the fiber which, being topologically a torus with $`g=1`$, gives a $`U(1)`$ theory. Generically, the fivebrane wrapping the fiber $`W_F`$ does not intersect the fivebranes in the base $`W_0`$. However, there is a curve of points in the moduli space of $`W_F`$ where both fivebranes are in the same position in $`S^1/Z_2`$, with the same value of $`a`$, and $`W_F`$ lies above $`W_0`$ in the Calabi–Yau fibration. Generically, this gives a single intersection. However, there is a special point, when the base curve splits, as in (5.20) and (5.21), and the fiber component intersects exactly the point where the two base curves intersect. At such a point in moduli space, we have three fivebranes intersecting at a single point in the Calabi–Yau threefold. These different possible intersections are shown in Figure 6. Generically, we expect there to be additional multiplets in the low-energy theory at such points.
We might expect that there is also a component of the moduli space where the curve $`W`$ does not split at all, that is, where we have a single fivebrane in the class $`\sigma_* l-\sigma_* E_1+F`$. To analyze this second possibility we simply follow the analysis given above, where we first consider the moduli space of the image $`C`$ of $`W`$ in the base and then find the moduli space of curves which project down to the same given curve $`C`$.
Since the projection of $`F`$ onto the base is zero, the image of $`[W]`$ in the base is $`\Omega =l-E_1`$, as above. Thus many of the results of the previous discussion carry over to this situation. The moduli space of $`C`$ is given by (5.8). At a generic point in moduli space $`S_C=\pi^{-1}(C)`$ is a K3 surface, while at the seven special points where $`C`$ splits, the surface above each of the two components of $`C`$ is a $`dP_9`$ surface. If we consider first a generic point in the moduli space, $`W`$ is again a section of the K3 surface, but must now be in the class $`\sigma_* l-\sigma_* E_1+F`$. How many such sections are there? It turns out that for a generic K3 surface there are none. We can see this as follows. By adjunction, since the genus of $`C`$ was zero, we showed in equation (5.17) that its class in $`S_C`$ satisfies $`D_C\cdot D_C=-2`$. The identical calculation applies to any section, since all sections have genus zero. Thus, in particular, we have
$$[W]_C\cdot [W]_C=-2$$
(5.33)
where by $`[W]_C`$ we mean the class of $`W`$ in $`S_C`$. However, from the map between classes (5.13), it is clear that $`[W]_C=D_C+F_C`$. Since $`D_C`$ is the class of the zero section, we have $`D_C\cdot F_C=1`$. For the fiber class we always have $`F_C\cdot F_C=0`$. Hence, we must also have
$$[W]_C\cdot [W]_C=0$$
(5.34)
This contradiction implies that there can be no sections of $`S_C`$ in the class $`\sigma_* l-\sigma_* E_1+F`$. In other words, we have shown that, generically, we cannot have just a single fivebrane in the class $`\sigma_* l-\sigma_* E_1+F`$. Rather, the fivebrane always splits into a pure fiber component and a pure base component, as described above.
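The contradiction is just lattice arithmetic: on the K3 surface a section has self-intersection $`-2`$ by adjunction, while the class $`D_C+F_C`$ has self-intersection $`-2+2\cdot 1+0=0`$. A sketch (hypothetical names):

```python
# Intersection numbers on the generic K3 surface S_C.
D_sq, DF, F_sq = -2, 1, 0     # D_C.D_C, D_C.F_C, F_C.F_C

def self_int(m):
    """Self-intersection of the class D_C + m F_C."""
    return D_sq + 2 * m * DF + m * m * F_sq

assert self_int(0) == -2      # what adjunction demands of any section
assert self_int(1) == 0       # what the class D_C + F_C actually gives
assert self_int(1) != self_int(0)   # so no section lies in the class with an F
```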
What about the special points in the moduli space where the curve $`C`$ splits into two? Do we still have to have a separate pure fiber component? The answer is no, for the reason that, as discussed above, the space above each component is a $`dP_9`$ surface and, unlike the K3 case, there are many more algebraic classes on $`dP_9`$ than just the zero section and the fiber. Specifically, suppose there is no separate pure fiber component in $`W`$ and consider the point where $`C`$ splits into $`C_1+C_2`$ with $`[C_1]=l-E_1-E_2`$ and $`[C_2]=E_2`$. The actual curve $`W`$ must also split into $`W_1`$ and $`W_2`$. Given that each component must be effective, we then have two possibilities, depending on which component includes the fiber class
$$\begin{array}{c}[W_1]=\sigma_* l-\sigma_* E_1-\sigma_* E_2+F,\qquad [W_2]=\sigma_* E_2\\ \text{or}\\ [W_1]=\sigma_* l-\sigma_* E_1-\sigma_* E_2,\qquad [W_2]=\sigma_* E_2+F\end{array}$$
(5.35)
Let us concentrate on the first case, although a completely analogous analysis holds in the second. Above, we calculated the number of curves in the case where the class contains no $`F`$. We found, for instance, that if $`[W_2]=\sigma_* E_2`$ then $`W_2`$ is required to be precisely the section $`C_2`$ and there is no moduli space for moving the curve in the fiber direction. The situation is richer, however, for the class with an $`F`$ component. We will discuss this in more detail in section 6.3 below, but it turns out that there are 240 different sections of $`dP_9`$ in the class $`[W_1]=\sigma_* l-\sigma_* E_1-\sigma_* E_2+F`$. It is a general result, just repeating the calculation that led to (5.26), that any section of a $`dP_9`$ surface has self-intersection $`-1`$. Consequently, none of the 240 different sections in the class $`\sigma_* l-\sigma_* E_1-\sigma_* E_2+F`$ can be deformed in the fiber direction and, hence, they simply provide a discrete set of different $`W_1`$ which all map to the same $`C_1`$. Furthermore, one can show that none of these sections intersect the base of the Calabi–Yau manifold. Thus, since, in the case we are considering, $`W_2`$ lies solely in the base, we find that $`W_1`$ and $`W_2`$ can never overlap. Their relative positions within the Calabi–Yau threefold are shown in Figure 7.
It is important to note that these curves are completely stuck within the Calabi–Yau threefold. They cannot combine into a single curve and move away from the exceptional point in the moduli space of $`C`$ (the projection of $`W`$ into the $`dP_8`$ base) where $`C`$ splits into two curves. Furthermore, we have argued that they cannot move in the fiber. Thus, the only moduli for this component of the moduli space are the positions of the two fivebranes in the orbifold interval and the values of their axions, giving a moduli space of
$$𝐏_a^1\times 𝐏_a^1$$
(5.36)
Furthermore, since all the sections of $`dP_9`$ are topologically spheres, there are no vector multiplets in this part of the moduli space. As we noted above, the fivebranes cannot overlap. Hence, there is no possibility of additional multiplets appearing. Finally, we note that there were seven ways $`C`$ could split into $`C_1+C_2`$, and, for each splitting, $`W`$ can decompose in one of two ways (5.35). Since for each decomposition there are 240 distinct sections, we see that there is a grand total of 3360 ways of making the analogous decomposition to the one we have just discussed.
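The grand total quoted above is simply the product of the three counts in the text:

```python
splittings  = 7    # special points where C splits into C_1 + C_2
assignments = 2    # which component carries the fiber class F, as in (5.35)
sections    = 240  # sections of dP_9 in the class containing F (section 6.3)

assert splittings * assignments * sections == 3360
```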
In conclusion, we see that the full moduli space for $`[W]=\sigma_* l-\sigma_* E_1+F`$ has a relatively rich structure. It splits into a large number of disconnected components. The largest component is where $`W`$ splits into separate fiber and base components, $`W=W_0+W_F`$. The moduli space is then given by (5.32), which includes the possibility of the base component splitting. There are then 3360 disconnected components where $`W`$ splits into two irreducible components, one of which includes the fiber class $`F`$. We can summarize this structure in a table
$$\begin{array}{ccc}[W]& \text{genus}& \text{moduli space}\\ & & \\ & & \\ (\sigma_* l-\sigma_* E_1)+(F)& 0+1& \left(\left[𝐏^1-7\text{ pts.}\right]\times 𝐏_a^1\right)\times \left(dP_8\times 𝐏_a^1\right)\\ (\sigma_* l-\sigma_* E_1-\sigma_* E_2)+(\sigma_* E_2)+(F)& 0+0+1& 𝐏_a^1\times 𝐏_a^1\times \left(dP_8\times 𝐏_a^1\right)\\ & & \\ (\sigma_* l-\sigma_* E_1-\sigma_* E_2+F)+(\sigma_* E_2)& 0+0& 𝐏_a^1\times 𝐏_a^1\end{array}$$
(5.37)
Here, the first column gives the homology classes in $`X`$ of the different components of $`W`$. The first two rows describe the moduli space (5.32) where $`W`$ splits into $`W_0+W_F`$, first for a generic component in the base and then at one of the seven points where the base component splits. The final row describes one of the 3360 disconnected components of the moduli space (5.36). From the genus count, we see that in the first two cases we expect a $`U(1)`$ gauge field on the fivebrane which wraps the fiber, while in the last case there are no vector multiplets. As we have noted, the component of the moduli space given in the first two rows has the possibility that the fivebranes intersect, leading to additional low-energy fields, as depicted in Figure 6. The disconnected components have no such enhancement mechanism. Furthermore, their moduli space is severely restricted since neither fivebrane can move within the Calabi–Yau manifold.
## 6 General procedure for analysis of the moduli space
From the examples above, we can distill a general procedure for the analysis of the generic moduli space. We start with a general fivebrane curve in the class
$$[W]=\sigma_* \Omega +fF$$
(6.1)
Furthermore, $`[W]`$ is assumed to be effective, so that, by (3.6), $`\Omega`$ is some effective class in $`H_2(B,𝐙)`$ and $`f\ge 0`$. We need first to find the moduli space of the projection $`C`$ of $`W`$ onto the base. One then finds all the curves in the full Calabi–Yau threefold in the correct homology class which project onto $`C`$. In general, we will find that the above case, where the space $`S_C=\pi^{-1}(C)`$ above $`C`$ was a K3 surface, is the typical example. There, we found that no irreducible curve which projects onto $`C`$ could include a fiber component in its homology class. Hence, $`[W]`$ splits into a pure base and a pure fiber component. One exception, as we saw above, is when $`S_C`$ is a $`dP_9`$ surface. In the following, we will start by analysing the generic case and then give a separate discussion for the case where $`dP_9`$ appears.
### 6.1 Decomposition of the moduli space
Any curve $`W`$ in the Calabi–Yau threefold can be projected to the base using the map $`\pi `$. In general, $`W`$ may have one or many components. Typically, a component will project to a curve in the base. However, there may also be components which are simply curves wrapping the fiber at different points in the base. These curves will all project to points in the base rather than curves. Thus, the first step in analyzing the moduli space is to separate out all such curves. We therefore write $`W`$ as the sum of two components, each of which may be reducible,
$$W=W_0+W_F$$
(6.2)
but with the assumption that none of the components of $`W_0`$ are pure fiber components. For general $`[W]`$ given in (6.1), the classes of these components are
$$[W_0]=\sigma_* \Omega +nF\qquad [W_F]=(f-n)F$$
(6.3)
with $`0\le n\le f`$, since each class must be separately effective. Note that, although $`W_0`$ has no components which are pure fiber, its class may still involve $`F`$ since, in general, components of $`W_0`$ can wrap around the fiber as they wrap around a curve in the base. Except when $`n=f`$, so that $`W_F=0`$, $`W`$ has at least two components in this decomposition and so there are at least two fivebranes.
In general, the decomposition (6.2) splits the moduli space into $`f+1`$ different components depending on how we partition the $`f`$ fiber classes between $`[W_0]`$ and $`[W_F]`$. Within a particular component, the moduli space is a product of the moduli spaces of $`W_0`$ and $`W_F`$. If $`\mathcal{M}_F((f-n)F)`$ is the moduli space of $`W_F`$ and $`\mathcal{M}_0(\sigma_* \Omega +nF)`$ is the moduli space of $`W_0`$, we can then write the full moduli space as
$$\mathcal{M}(\sigma_* \Omega +fF)=\underset{n=0}{\overset{f}{\bigcup}}\mathcal{M}_0(\sigma_* \Omega +nF)\times \mathcal{M}_F((f-n)F)$$
(6.4)
The problem is then reduced to finding the form of the moduli spaces for $`W_0`$ and $`W_F`$. The latter moduli space has already been analyzed in section 4.2. We found that, locally,
$$\mathcal{M}_F((f-n)F)=\left(B\times 𝐏_a^1\right)^{f-n}/𝐙_{f-n}$$
(6.5)
Thus we are left with $`\mathcal{M}_0(\sigma_* \Omega +nF)`$, which can be analyzed by the projection techniques we used in the preceding examples.
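The partitioning in (6.3) and (6.4) can be enumerated mechanically; the sketch below (hypothetical names, labels purely illustrative) lists the $`f+1`$ components for a small value of $`f`$:

```python
def components(f):
    """The f+1 ways of partitioning f fiber classes between [W_0] and [W_F],
    as in (6.3): [W_0] = sigma_* Omega + n F and [W_F] = (f - n) F."""
    return [(f"sigma_*Omega + {n}F", f"{f - n}F") for n in range(f + 1)]

assert len(components(2)) == 3          # f = 2 gives three components in (6.4)
assert components(2)[0] == ("sigma_*Omega + 0F", "2F")
assert components(2)[2] == ("sigma_*Omega + 2F", "0F")
```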
Projecting $`W_0`$ onto the base, we get a curve $`C`$ in the class $`\Omega`$ in $`B`$. Let us call the moduli space of such curves in the base $`\mathcal{M}_B(\Omega )`$. This space is relatively easy to analyze since we know the form of $`B`$ explicitly. In general, it can be quite complicated, with different components and branches as curves degenerate and split. To find the full moduli space $`\mathcal{M}_0(\sigma_* \Omega +nF)`$, we fix a point in $`\mathcal{M}_B(\Omega )`$, giving a particular curve $`C`$ in $`B`$ which is the projection of the original curve $`W_0`$ in the Calabi–Yau threefold. In the following, we will assume that $`C`$ is not singular. By this we mean that it does not, for instance, cross itself or have a cusp in $`B`$. If it is singular, it is harder to analyze the space of curves $`W_0`$ which project onto $`C`$. In general, $`C`$ splits into $`k`$ components, so that
$$C=C_1+\mathrm{\cdots}+C_k$$
(6.6)
We then also have
$$\Omega =\Omega _1+\mathrm{\cdots}+\Omega _k$$
(6.7)
where $`\mathrm{\Omega }_i=[C_i]`$ and, so, must be an effective class on the base for each $`i`$. Clearly if the curve in the base has more than one component then so does the original curve $`W_0`$, so that
$$W_0=W_1+\mathrm{\cdots}+W_k$$
(6.8)
with $`\pi (W_i)=C_i`$. In general, the class of $`W_0`$ will be partitioned into a sum of classes of the form
$$[W_i]=\sigma_* \Omega _i+n_iF$$
(6.9)
where, for each curve to be effective, $`n_i\ge 0`$ and $`n_1+\mathrm{\cdots}+n_k=n`$, leading to a number of different possible partitions.
One now considers a particular component $`C_i`$. To find the moduli space over $`C_i`$, one needs to find all the curves $`W_i`$ in $`X`$ in the cohomology class $`\sigma_* \Omega _i+n_iF`$ which project onto $`C_i`$. Recall, in addition, that we have assumed in our original partition (6.2) that $`[W_i]`$ contains no pure fiber components. Repeating this procedure for each component and for each partition of the $`n`$ fibers into $`\{n_i\}`$ gives the moduli space over a given point $`C`$ in $`\mathcal{M}_B(\Omega )`$ and, hence, the full moduli space.
Consequently, we have reduced the problem of finding the full moduli space $`\mathcal{M}_0(\sigma_* \Omega +nF)`$ to the following question. To simplify notation, let $`R`$ be the given irreducible curve $`C_i`$ in the base, and let $`V`$ be the corresponding curve $`W_i`$ in the full Calabi–Yau threefold. Let us further write the class $`\Omega _i`$ of $`C_i`$ as $`\Lambda`$ and write $`m`$ for $`n_i`$. Our general problem is then to find, for the given irreducible curve $`R`$ in the effective homology class $`\Lambda`$, all the curves $`V`$ in the Calabi–Yau threefold in the class $`\sigma_* \Lambda +mF`$, with $`m\ge 0`$, which project onto $`R`$. Necessarily, all the curves $`V`$ which project onto $`R`$ lie in the surface $`S_R=\pi^{-1}(R)`$. By construction, $`S_R`$ is elliptically fibered over the base curve $`R`$. Furthermore, typically, the map from $`V`$ to $`R`$ wraps $`R`$ only once. It is possible that $`R`$ is some number $`q`$ of completely overlapping curves in $`B`$, so that $`[R]=q\Gamma`$ for some effective class $`\Gamma`$ in $`B`$. Then the map from $`V`$ to $`R`$ wraps the base curve $`q`$ times. This will occur in one of the examples we give later in the paper, but here, since we are discussing generic properties, let us ignore this possibility. Then, assuming $`R`$ is not singular, the map is invertible and we see that $`V`$ must be a section of the fibered surface $`S_R`$.
Furthermore, we note that $`S_R`$ can be characterized by the genus $`g`$ of the base curve $`R`$ and the number of singular fibers. The former is, by adjunction, given by
$$2g-2=\left(K_B+\Lambda \right)\cdot \Lambda$$
(6.10)
The latter is also a function only of the class $`\Lambda`$ of $`R`$ and can be found by intersecting the discriminant class $`[\Delta ]`$ with $`\Lambda`$. Since $`[\Delta ]=-12K_B`$, the number of singular fibers must be of the form $`12p`$ with
$$p=-K_B\cdot \Lambda$$
(6.11)
where $`p`$ is an integer. If $`p`$ is negative, the curve $`R`$ lies completely within the discriminant curve of the elliptically fibered Calabi–Yau manifold. This means that every fiber above $`R`$ is singular. The form of $`S_R`$ then depends on the structure of the particular fibration of the Calabi–Yau threefold. Since we want to consider generic properties of the moduli space, we will ignore this possibility and restrict ourselves to the case where $`p`$ is non-negative. We note that this is not very restrictive. For del Pezzo and Enriques surfaces $`-K_B`$ is nef, meaning that its intersection with any effective class in the base is non-negative. Hence, since $`\Lambda`$ must be effective, we necessarily have $`p\ge 0`$. The only exceptions are Hirzebruch surfaces $`F_r`$ with $`r\ge 3`$ where $`\Lambda`$ includes the negative section $`𝒮`$.
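For the $`dP_8`$ base of section 5, the invariants (6.10) and (6.11) are quick to evaluate; the sketch below (hypothetical helper names) recovers the two cases met earlier, $`g=0,p=2`$ (a K3 surface over the line) and $`g=0,p=1`$ (a $`dP_9`$ surface over an exceptional curve):

```python
def dot(u, v):
    """Intersection pairing on H_2(dP_8, Z) in the basis (l, E_1, ..., E_8)."""
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

K_B = [-3, 1, 1, 1, 1, 1, 1, 1, 1]        # K_B = -3l + E_1 + ... + E_8

def genus(L):                             # adjunction, eq. (6.10)
    return (dot([k + c for k, c in zip(K_B, L)], L) + 2) // 2

def p(L):                                 # eq. (6.11); 12p singular fibers
    return -dot(K_B, L)

line = [1, -1, 0, 0, 0, 0, 0, 0, 0]       # Lambda = l - E_1
exc  = [1, -1, -1, 0, 0, 0, 0, 0, 0]      # Lambda = l - E_1 - E_2

assert (genus(line), p(line)) == (0, 2)   # sphere, 24 singular fibers: K3
assert (genus(exc),  p(exc))  == (0, 1)   # sphere, 12 singular fibers: dP_9
```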
Finally, then, finding the full moduli space $`_0(\sigma _{}\mathrm{\Omega }+nF)`$ has been reduced to the following problem
* For a given irreducible curve $`R`$ in the base $`B`$ with homology class $`\Lambda`$, find the moduli space of sections $`V`$ of the surface $`S_R=\pi^{-1}(R)`$ in the homology class $`\sigma_* \Lambda +mF`$ in the full Calabi–Yau threefold $`X`$, where $`m\ge 0`$.
$`S_R`$ is characterized by $`g=\text{genus}(R)`$, as given in (6.10), and by $`p`$, where $`12p`$ is the number of singular elliptic fibers, as given in (6.11). Consequently, we write this moduli space as $`\mathcal{M}(g,p;m)`$.
We will assume that $`p\ge 0`$. This is necessarily true, except when $`B`$ is an $`F_r`$ surface with $`r\ge 3`$ and $`\Lambda`$ contains the class of the section at infinity $`𝒮`$.
### 6.2 The generic form of $`\mathcal{M}(g,p;m)`$
To understand the sections of $`S_R`$, we start by finding the algebraic classes on $`S_R`$. As for the K3 surface, a generic surface $`S`$ has no algebraic classes, since $`h^{2,0}\ne 0`$. For $`S_R`$, we know that two classes are necessarily present, the class of the zero section $`D_R`$ and the fiber class $`F_R`$. However, generically, there need not be any other classes. Additional classes may appear for special choices of complex structure, but here we will consider only the generic case. The obvious exception is the case where $`g=0`$ and $`p=1`$. From equations (6.10) and (6.11), this implies that $`R`$ is an exceptional curve in $`B`$. The surface $`S_R`$ is then an elliptic fibration over $`𝐏^1`$ with 12 singular fibers, which is a $`dP_9`$ surface. In this case $`h^{2,0}=0`$ and every class in $`H_2(S_R,𝐙)`$ is algebraic. We will return to this case in the next section. The inclusion map $`i_R:S_R\to X`$ gives a natural map between classes in $`S_R`$ and in $`X`$
$$i_R:H_2(S_R,𝐙)\to H_2(X,𝐙)$$
(6.12)
In general, with only two classes the map is simple. By construction, we have
$`i_RD_R`$ $`=\sigma _{}\mathrm{\Lambda }`$ (6.13)
$`i_RF_R`$ $`=F`$
Just as in the K3 example, we will find that the existence of only two classes strongly constrains the moduli space $`(g,p;m)`$. We can find the analog of the contradiction of equations (5.33) and (5.34) as follows. Let $`K_{S_R}`$ be the cohomology class of the canonical bundle of the surface. Since both $`V`$ and $`R`$ are sections, they have the same genus $`g`$. By the Riemann–Hurwitz formula and adjunction, we have
$$2g-2=\left(K_{S_R}+D_R\right)D_R=K_{S_R}D_R+D_RD_R$$
(6.14)
where $`D_R`$ is the class of the zero section, and
$$2g-2=\left(K_{S_R}+[V]_R\right)[V]_R=K_{S_R}[V]_R+[V]_R[V]_R$$
(6.15)
where $`[V]_R`$ is the class of $`V`$ in $`S_R`$. For the fiber, a similar calculation, using the fact that it has genus one and that $`F_RF_R=0`$, since two generic fibers do not intersect, gives
$$0=\left(K_{S_R}+F_R\right)F_R=K_{S_R}F_R$$
(6.16)
Finally, since we require in $`X`$ that $`[V]=\sigma _{}\mathrm{\Lambda }+mF`$ and since we assume that no additional classes exist on $`S_R`$,
$$[V]_R=D_R+mF_R$$
(6.17)
Substituting this expression into (6.15) and using (6.14), we find
$$[V]_R[V]_R=D_RD_R$$
(6.18)
On the other hand, given that $`F_RD_R=1`$ because the fiber intersects a section at one point, we can compute the self-intersection of $`[V]_R=D_R+mF_R`$, explicitly, yielding
$$[V]_R[V]_R=D_RD_R+2m$$
(6.19)
Comparing (6.18) with (6.19), we are left with the important conclusion that we must have $`m=0`$. This implies that, generically, no component of $`W_0`$ can contain any fibers in its homology class. We conclude that
$$(g,p;m)=\emptyset \quad \text{unless}\quad m=0$$
(6.20)
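The step from (6.14) to (6.19) uses only bilinearity of the intersection form together with $`F_RF_R=0`$, $`K_{S_R}F_R=0`$ and $`D_RF_R=1`$. A minimal numerical check of this bookkeeping (a sketch; the helper name and scan ranges are our own, not the paper's):

```python
def genus_of_section_class(m, d, g):
    """Arithmetic genus of [V]_R = D_R + m F_R on S_R by adjunction,
    given D_R.D_R = d, D_R.F_R = 1, F_R.F_R = 0, K.F_R = 0 and
    K.D_R = 2g - 2 - d, which is adjunction (6.14) for the zero section."""
    K_dot_V = (2 * g - 2 - d) + m * 0       # K.(D + mF)
    V_dot_V = d + 2 * m * 1 + m * m * 0     # (D + mF).(D + mF)
    return (K_dot_V + V_dot_V + 2) // 2

# A section V must have the same genus g as the zero section; scanning m
# shows that this forces m = 0, reproducing the conclusion (6.20).
for d in range(-5, 1):      # sample values of D_R.D_R
    for g in range(4):      # sample genera of the base curve
        allowed = [m for m in range(10) if genus_of_section_class(m, d, g) == g]
        print(d, g, allowed)    # allowed is always [0]
```

The scan simply confirms that the genus of $`D_R+mF_R`$ grows linearly with $`m`$, so only $`m=0`$ reproduces the genus of the zero section.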
In fact, we can go further. Since we require $`m=0`$, we see from (6.17) that $`V`$ is in the same class $`D_R`$ as the zero section $`R`$. Using the Riemann–Roch formula and Kodaira’s description of elliptically fibered surfaces, one can show that the canonical class in the cohomology of $`S_R`$ is given by
$$K_{S_R}=\left(2g-2+p\right)F_R$$
(6.21)
Then, substituting this expression into (6.14), we see that
$$D_RD_R=-p$$
(6.22)
Thus we see that for $`p>0`$, $`R`$ is an exceptional divisor and cannot be deformed in $`S_R`$. Consequently, all the fivebrane can do is move in the orbifold direction and change its value of $`a`$, so we have a moduli space of $`𝐏_a^1`$. If $`p=0`$, the fibration is locally trivial. It may or may not be globally trivial. However, it is always globally trivial when pulled back to some finite cover of the base. Every section is in the class $`D_R`$ and the moduli space simply corresponds to moving the fivebrane in the fiber direction, in $`S^1/Z_2`$ and in $`a`$. If the original fibration was globally trivial, this yields a moduli space of $`E\times 𝐏_a^1`$, where $`E`$ is an elliptic curve describing motion of $`V`$ in the fiber direction. If it was only locally trivial, then these deformations in the fiber directions still make sense over the cover, but only a finite subset of them descends, so the actual moduli space in this case consists of some finite number of copies of $`𝐏_a^1`$.
In summary, we see that, generically, if we exclude the case $`g=0`$, $`p=1`$ where $`S_R`$ is a $`dP_9`$ surface,
$$(g,p;m)=\emptyset \quad \text{for}\quad m>0$$
(6.23)
while if $`m=0`$ we have,
$$(g,p;0)=\{\begin{array}{cc}N𝐏_a^1\hfill & \text{if }p=0\text{ and the fibration is not globally trivial}\hfill \\ E\times 𝐏_a^1\hfill & \text{if }p=0\text{ and the fibration is globally trivial}\hfill \\ 𝐏_a^1\hfill & \text{if }p>0\hfill \end{array}$$
(6.24)
where $`N`$ is some integer depending on the global structure of $`S_R`$. In each case, the gauge group on the fivebrane is given by $`U(1)^g`$, where $`g`$ is the genus of the curve $`R`$ in the base $`B`$. Since both $`R`$ and $`V`$ are sections of $`S_R`$, this is equal to the genus of the curve $`V`$ in the space $`X`$.
### 6.3 The $`dP_9`$ exception and $`(0,1;m)`$
As we have mentioned, the obvious exception to the above analysis is when $`S_R`$ is a $`dP_9`$ surface. This occurs when the base curve $`R`$ is topologically $`𝐏^1`$ and there are 12 singular elliptic fibers in $`S_R`$, that is, if $`[R]=\mathrm{\Lambda }`$ in the base $`B`$ satisfies
$$2g-2=\left(K_B+\mathrm{\Lambda }\right)\mathrm{\Lambda }=-2,\qquad p=-K_B\mathrm{\Lambda }=1$$
(6.25)
As we mentioned above, this implies that $`R`$ is an exceptional curve in the base. In this case, $`D_R`$ and $`F_R`$ are not the only generic algebraic classes on $`S_R`$. Rather, since $`h^{2,0}=0`$, every integer class in $`dP_9`$ is algebraic. The surface $`dP_9`$ can be described as the plane $`𝐏^2`$ blown up at nine points which are at the intersection of two cubic curves. Consequently, there are ten independent algebraic classes on $`dP_9`$, the image $`l^{}`$ of the class of a line in $`𝐏^2`$ and the nine exceptional divisors, $`E_i^{}`$ for $`i=1,\ldots ,9`$, corresponding to the blown-up points. Here we use primes to distinguish these classes from classes in the base $`B`$ of the Calabi–Yau threefold (specifically, the $`E_i`$ classes in $`B`$ when the base is a del Pezzo surface).
The point here is that, in the full Calabi–Yau manifold $`X`$, there are only two independent classes associated with $`S_R`$, namely, the class of the base curve $`\sigma _{}\mathrm{\Lambda }`$ and the fiber class $`F`$. Consequently, if $`i_R:S_RX`$ is the inclusion map from the $`dP_9`$ surface into the Calabi–Yau threefold, the corresponding map between classes
$$i_R:H_2(S_R,𝐙)\to H_2(X,𝐙)$$
(6.26)
must have a non-trivial kernel, since it maps the ten independent classes on $`S_R`$ mentioned above into only two in $`X`$. Recall that our goal is to find the set of sections $`V`$ of $`S_R`$ which are in the class $`\sigma _{}\mathrm{\Lambda }+mF`$ in $`X`$. Previously, we found that generically there were no such sections unless $`m=0`$. This followed from equations (6.14) to (6.19). Now, the appearance of a kernel in the map (6.26) means that $`[V]_R`$ is no longer necessarily of the form $`D_R+mF_R`$, as in equation (6.17). Consequently we can no longer conclude that we must have $`m=0`$. In fact, as we will see, there are several different sections $`V`$ in the same class $`\sigma _{}\mathrm{\Lambda }+mF`$ in $`X`$ with $`m>0`$.
Let us start by recalling why a $`dP_9`$ surface is also an elliptic fibration. Viewed as $`𝐏^2`$ blown up at nine points, we have the constraint that the nine points lie at the intersection of two cubics. This is represented in Figure 8. The cubics can be written as two third-order homogeneous polynomials in homogeneous coordinates $`[x,y,z]`$ on $`𝐏^2`$. Let us call these polynomials $`f`$ and $`g`$. Clearly any linear combination $`af+bg`$ defines a new cubic polynomial. By construction, this polynomial also passes through the same nine points. Since the overall scale does not change the cubic, the set of cubics is given by specifying $`a`$ and $`b`$ up to overall scaling. Thus, we have a $`𝐏^1`$ of cubics passing through the nine points. Since each cubic defines an elliptic curve, we can think of this as an elliptic fibration over $`𝐏^1`$. Furthermore, the cubics cannot intersect anywhere else in $`𝐏^2`$. The space of cubics spans the whole of the plane and, further, it blows up each intersection point into a $`𝐏^1`$ of distinct points, one point in $`𝐏^1`$ for each cubic passing through the intersection. Thus the space of cubics gives an alternative description of the $`dP_9`$ surface. In addition, we note that each of the exceptional divisors $`E_i^{}`$, the blow-ups of the intersection points, is a $`𝐏^1`$ which intersects each fiber at a single point, and so corresponds to a section of the fibration. Furthermore, the anti-canonical class of $`dP_9`$ is given by
$$-K_{S_R}=3l^{}-E_1^{}-\cdots -E_9^{}=F_R$$
(6.27)
and is precisely the class of the cubics passing through the nine intersection points. It follows that
$$K_{S_R}=-F_R$$
(6.28)
which, since $`g=0`$ and $`p=1`$, agrees with the general expression (6.21).
It is natural to ask if there are other sections of $`dP_9`$ aside from the exceptional curves $`E_i^{}`$. We first note that any section of $`dP_9`$ is exceptional, with $`[V]_R[V]_R=-1`$. This follows from (6.15) and (6.27), recalling that any section will have genus zero and intersects the fiber (the anti-canonical class) only once. Next, we recall that there is a notion of addition of points on elliptic curves. Consequently, we can add sections point-wise to get a new section. Thus, we see that the set of sections forms an infinite Abelian group containing all the exceptional curves on the $`dP_9`$ surface. From the point of view of curves in $`𝐏^2`$, these additional exceptional classes correspond to curves of higher degree passing through the nine intersection points with some multiplicity (see for instance ). In general, we can write an exceptional class (except the classes $`E_i^{}`$) as
$$Q=ql^{}-\sum _iq_iE_i^{}$$
(6.29)
such that
$$q,q_i\ge 0,\qquad q^2-\sum _iq_i^2=-1,\qquad 3q-\sum _iq_i=1$$
(6.30)
where the first condition is required in order to describe an effective curve in $`𝐏^2`$, the second condition gives $`QQ=-1`$ and the third condition gives $`QF_R=1`$. The appearance of an infinite number of exceptional classes means that these equations, with $`i=1,\ldots ,9`$, have an infinite number of solutions.
Having identified all the relevant classes in $`dP_9`$, we can now turn to a description of the map (6.26). First, we note that the group structure of the set of sections means that we can choose any section as part of the basis of classes. In our case, we have singled out one section as the class of the base curve $`R`$. Thus, without loss of generality, we can identify this class with one of the $`E_i^{}`$, for example, $`E_9^{}`$. Thus we set
$$D_R=E_9^{}$$
(6.31)
By construction, we know that $`D_R`$ maps to $`\sigma _{}\mathrm{\Lambda }`$ and $`F_R`$ maps to $`F`$ under $`i_R`$. Thus, in terms of $`l^{}`$ and $`E_i^{}`$, we have, using (6.27) and the generic result (6.13),
$$\begin{array}{c}i_RE_9^{}=\sigma _{}\mathrm{\Lambda }\\ i_R\left(3l^{}-E_1^{}-\cdots -E_9^{}\right)=F\end{array}$$
(6.32)
We now need to understand how the remaining independent classes $`E_1^{},\ldots ,E_8^{}`$ map under $`i_R`$. We first note that, since all these classes are sections, they must project onto the class $`\mathrm{\Lambda }`$ in the base. The only ambiguity is how many multiples of the fiber class $`F`$ each contains. Thus, we know
$$i_RE_i^{}=\sigma _{}\mathrm{\Lambda }+c_iF$$
(6.33)
for some $`c_i`$. The easiest way to calculate $`c_i`$ is to recall that a class in $`H_2(X,𝐙)`$ is uniquely determined by its intersection numbers with a basis of classes in $`H_4(X,𝐙)`$. A suitable basis was given in section 3.2. In particular, we can consider the intersection of $`E_i^{}`$ with the class $`D`$ of the base $`B`$ of the full Calabi–Yau threefold. We note that $`E_i^{}E_9^{}=0`$ for $`i=1,\ldots ,8`$, so the extra sections $`E_i^{}`$ do not intersect the base $`R`$ of the $`dP_9`$. However, since $`R`$ describes the intersection of $`B`$ with the $`dP_9`$ surface $`S_R`$, we see that the extra classes cannot intersect $`B`$. Thus, we must have
$$i_RE_i^{}D=0\quad \text{for}\quad i=1,\ldots ,8$$
(6.34)
From the table of intersections (3.4), using (6.25), we see that we must have $`c_i=1`$ for $`i=1,\ldots ,8`$. Together with the result (6.32) we find, in conclusion, that the map (6.26) is given by
$`i_Rl^{}`$ $`=3\sigma _{}\mathrm{\Lambda }+3F`$ (6.35)
$`i_RE_i^{}`$ $`=\sigma _{}\mathrm{\Lambda }+F\quad \text{for}\quad i=1,\ldots ,8`$
$`i_RE_9^{}`$ $`=\sigma _{}\mathrm{\Lambda }`$
demonstrating explicitly that the map has a kernel.
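The map (6.35) can be checked by simple bookkeeping. In the sketch below we record each generator's image as a pair of coefficients in the basis $`(\sigma _{}\mathrm{\Lambda },F)`$; the dictionary encoding is ours, not the paper's:

```python
# Images of the dP9 generators under i_R, read off from (6.35):
image = {'l': (3, 3), 'E9': (1, 0)}
image.update({f'E{i}': (1, 1) for i in range(1, 9)})

def push(cls):
    """Push a class {generator: coefficient} through the map (6.35)."""
    return tuple(sum(c * image[k][j] for k, c in cls.items()) for j in (0, 1))

# The fiber F_R = 3l' - E_1' - ... - E_9' maps to F = (0, 1), as in (6.32)
F_R = {'l': 3, **{f'E{i}': -1 for i in range(1, 10)}}
print(push(F_R))                    # (0, 1)

# The zero section D_R = E_9' maps to sigma_* Lambda = (1, 0)
print(push({'E9': 1}))              # (1, 0)

# Ten generators land in a rank-2 lattice, so the kernel has rank 8;
# for instance E_1' - E_2' is a nonzero class killed by the map.
print(push({'E1': 1, 'E2': -1}))    # (0, 0)
```

Since the ten independent generators of $`H_2(S_R,𝐙)`$ map onto a rank-two lattice spanned by $`(1,0)`$ and $`(1,1)`$, the kernel has rank eight.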
Having identified the map, we would now like to return to our original question, which was how many sections are there in the class $`\sigma _{}\mathrm{\Lambda }+mF`$ and what is their moduli space. The second part is easy to answer. We have noted that all sections are exceptional with self-intersection $`-1`$. This implies that they cannot be moved within the $`dP_9`$ surface. However, they can move in the orbifold and have different values of the axion. Consequently, each section has a moduli space of $`𝐏_a^1`$. If there are a total of $`N(m)`$ sections for a given $`m`$, then the total moduli space $`(0,1;m)`$ has the form
$$(0,1;m)=N(m)𝐏_a^1$$
(6.36)
Determining the value of $`N(m)`$ is a problem in discrete mathematics. Recall that the class of a general section has the form given in (6.29). Under the map $`i_R`$, this maps to
$$i_RQ=\sigma _{}\mathrm{\Lambda }+\left(q_9+1\right)F$$
(6.37)
where we have used the condition $`3q-\sum _iq_i=1`$. Thus, we can summarize the problem as finding the number of solutions to
$`q^2-{\displaystyle \sum _i}q_i^2+1`$ $`=0`$ (6.38)
$`3q-{\displaystyle \sum _i}q_i`$ $`=1`$
with
$$q\ge 0,\qquad q_i\ge 0,\qquad q_9=m-1$$
(6.39)
We will not solve this problem in general, but just note the solution of $`m=0`$ and $`m=1`$. The former case is not actually included in the above form, since it implies $`q_9=-1`$. However, this case is easy to analyze since, with $`m=0`$, we see, from (3.4) and (6.25), that the intersection of $`[V]=\sigma _{}\mathrm{\Lambda }`$ with the base class $`D`$ of the Calabi–Yau manifold is $`-1`$. Consequently, a component of $`V`$ must lie in the base $`B`$. Since, by assumption, $`V`$ is irreducible, this means that the whole of $`V`$ lies in $`B`$. Hence, $`V`$ can only be the base section $`R`$ in the $`dP_9`$ surface. Thus we conclude that, for a $`dP_9`$ surface $`S_R`$, $`N(0)=1`$. That is,
$$(0,1;0)=𝐏_a^1$$
(6.40)
corresponding to moving the single curve $`R`$ in $`S^1/Z_2`$. This result was used in analyzing the example in section 5.1.
For $`m=1`$, there are 240 solutions to the equations (6.38), that is $`N(1)=240`$. The easiest way to see this is to note that $`m=1`$ implies $`q_9=0`$. Thus, we can ignore $`E_9^{}`$. Effectively, one is then trying to count the number of exceptional curves on a $`dP_8`$ surface. This is known to be a finite number, 240. Explicitly, they are of the following forms
$`E_i^{}`$ (6.41)
$`l^{}-E_i^{}-E_j^{}`$
$`2l^{}-E_{i_1}^{}-\cdots -E_{i_5}^{}`$
$`3l^{}-2E_i^{}-E_{i_1}^{}-\cdots -E_{i_6}^{}`$
$`4l^{}-2E_{i_1}^{}-\cdots -2E_{i_3}^{}-E_{i_4}^{}-\cdots -E_{i_8}^{}`$
$`5l^{}-2E_{i_1}^{}-\cdots -2E_{i_6}^{}-E_{i_7}^{}-E_{i_8}^{}`$
$`6l^{}-2E_{i_1}^{}-\cdots -2E_{i_7}^{}-3E_{i_8}^{}`$
where all the indices run from $`1`$ to $`8`$. One can see, using (6.35), that all these classes map under $`i_R`$ to $`\sigma _{}\mathrm{\Lambda }+F`$. Again, since none of these sections can move in the $`dP_9`$ surface, the moduli space is simply 240 copies of $`𝐏_a^1`$,
$$(0,1;1)=240𝐏_a^1$$
(6.42)
This result was used in analyzing the example in section 5.2. We also note that none of these sections intersects the base section $`R`$.
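The count $`N(1)=240`$ can be verified by brute-force enumeration of the conditions (6.38) and (6.39). In the sketch below we relax $`q_i\ge 0`$ to $`q_i\ge -1`$ so that the classes $`E_i^{}`$ themselves are included in the count, and we bound the search using the maximal degree and multiplicities visible in the list (6.41); the function name is ours:

```python
from itertools import product

def count_sections(m):
    """Enumerate solutions of (6.38) with q_9 = m - 1, as in (6.39).
    q_i = -1 is allowed so that the classes E_i' are counted too;
    the bounds q <= 6 and q_i <= 3 are read off from the list (6.41)."""
    q9 = m - 1
    total = 0
    for q in range(7):
        for qs in product(range(-1, 4), repeat=8):
            if (q * q - sum(x * x for x in qs) - q9 * q9 + 1 == 0
                    and 3 * q - sum(qs) - q9 == 1):
                total += 1
    return total

print(count_sections(1))   # 240, in agreement with (6.42)
```

The same routine, with wider search bounds, could in principle be used to count sections for larger $`m`$, though the enumeration grows quickly.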
## 7 A three-family $`dP_8`$ example
In the previous section we have given a general procedure for analyzing the moduli space of fivebranes wrapped on a holomorphic curve in a particular effective class in the Calabi–Yau manifold. The problem is of particular importance because it has been shown that including fivebranes in supersymmetric M theory compactifications greatly enlarges the number of vacua with reasonable grand unified gauge groups and three families of matter.
In the remaining sections, we apply this procedure to some specific examples which arise in the construction of phenomenological models. We will find that the moduli spaces are very rich. Nonetheless, we will find that there are isolated parts of moduli space where the number of moduli is greatly reduced. We will not describe the full moduli spaces here, but rather consider various characteristic components.
Let us start with the example given in and expanded upon in . There, the base $`B`$ was a $`dP_8`$ surface and the class of the fivebranes was given by
$$[W]=2\sigma _{}E_1+\sigma _{}E_2+\sigma _{}E_3+17F$$
(7.1)
where $`l`$ and $`E_i`$ for $`i=1,\ldots ,8`$ are the line class and the exceptional blow-up classes in the $`dP_8`$. Since $`2E_1+E_2+E_3`$ is effective in $`dP_8`$, this describes an effective class in the Calabi–Yau manifold. This choice of fivebrane class, together with a non-trivial $`E_8`$ bundle $`V_1`$, led to a low-energy $`SU(5)`$ theory with three families.
### 7.1 General decomposition
Let us follow exactly the procedure laid down in the previous section. First, we separate from $`W`$ all the pure fiber components, writing it as the sum of $`W_0`$ and $`W_F`$ as in (6.2). We write
$$[W_0]=2\sigma _{}E_1+\sigma _{}E_2+\sigma _{}E_3+nF,\qquad [W_F]=(17-n)F$$
(7.2)
with $`0\le n\le 17`$. Unless $`n=17`$, we have at least two separate fivebranes. This splits the moduli space into several different components depending on the partition of 17 into $`n`$ and $`17-n`$, as given in (6.4). The moduli space for the pure fiber component $`W_F`$ is just the familiar form, read off from (6.5)
$$((17-n)F)=\left(dP_8\times 𝐏_a^1\right)^{17-n}/𝐙_{17-n}$$
(7.3)
More interesting is the analysis of the $`W_0`$ moduli space. As described above, the first step in the analysis is to project $`W_0`$ onto the base. This gives the curve $`C`$ which is in the homology class
$$[C]=2E_1+E_2+E_3$$
(7.4)
We then need to find the moduli space $`_B(2E_1+E_2+E_3)`$ of $`C`$ in the base. We recall that the del Pezzo surface $`dP_8`$ can be viewed as $`𝐏^2`$ blown up at eight points. The exceptional classes $`E_i`$ each have a unique representative, namely the exceptional curve $`𝐏^1`$ at the $`i`$-th blown-up point. Furthermore, we have the intersection numbers $`E_iE_j=-\delta _{ij}`$. Thus $`[C]`$ has a negative intersection number with each of $`E_1`$, $`E_2`$ and $`E_3`$. This implies that it must have a component contained completely within each of the exceptional curves described by $`E_1`$, $`E_2`$ and $`E_3`$. Since these are all distinct, $`C`$ must be reducible into three components
$$C=C_1+C_2+C_3$$
(7.5)
where
$$[C_1]=2E_1,\qquad [C_2]=E_2,\qquad [C_3]=E_3$$
(7.6)
None of these components can be moved in the base, since they all have negative self-intersection number. Consequently, we have
$$_B(2E_1+E_2+E_3)=\text{single pt.}$$
(7.7)
corresponding to three fivebranes, each wrapping a different exceptional curve.
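The forced splitting can be confirmed directly from the $`dP_8`$ intersection form. Below is a small sketch (the vector encoding and helper names are ours): a class $`al+b_1E_1+\cdots +b_8E_8`$ is written as the tuple $`(a,b_1,\ldots ,b_8)`$.

```python
def dot(u, v):
    """Intersection form on dP8: l.l = 1, E_i.E_j = -delta_ij, l.E_i = 0."""
    return u[0] * v[0] - sum(x * y for x, y in zip(u[1:], v[1:]))

def E(i):
    """The exceptional class E_i as a coefficient vector (a, b1..b8)."""
    return tuple(1 if j == i else 0 for j in range(9))

# [C] = 2E_1 + E_2 + E_3
C = tuple(2 * a + b + c for a, b, c in zip(E(1), E(2), E(3)))
print([dot(C, E(i)) for i in (1, 2, 3)])   # [-2, -1, -1]: each E_i is a component

# Each component is rigid: self-intersection -1 and genus 0 by adjunction
K = (-3,) + (1,) * 8    # canonical class K = -3l + E_1 + ... + E_8
for i in (1, 2, 3):
    g = (dot(K, E(i)) + dot(E(i), E(i)) + 2) // 2
    print(dot(E(i), E(i)), g)              # -1 0 each time
```

The negative intersections with $`E_1`$, $`E_2`$ and $`E_3`$ reproduce the argument above, and the self-intersection $`-1`$ shows that none of the three components can move.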
Since the projection $`C`$ splits, so must the curve $`W_0`$ itself. We must have
$$W_0=W_1+W_2+W_3$$
(7.8)
We can partition the fiber class in $`[W_0]`$ in different ways. In general we write
$$[W_1]=2\sigma _{}E_1+n_1F,\qquad [W_2]=\sigma _{}E_2+n_2F,\qquad [W_3]=\sigma _{}E_3+n_3F$$
(7.9)
with $`n_1+n_2+n_3=n`$ and $`n_i\ge 0`$ since each curve must be separately effective. The problem of finding the full moduli space has now been reduced to finding the moduli space of $`W_1`$, $`W_2`$ and $`W_3`$ separately.
We see that, unless $`n=17`$, we have at least four separate fivebranes, one wrapping the pure fiber curve $`W_F`$ and one wrapping each of the three curves $`W_1`$, $`W_2`$ and $`W_3`$. As discussed in section 4.2, the pure fiber component can move in the base $`B`$, as well as in the orbifold. Furthermore, it has transitions where it separates into more than one fivebrane. The components $`W_i`$, meanwhile, are stuck above fixed exceptional curves in the base. They are free to move in the orbifold and may be free to move in the fiber direction (this will be discussed in the following sections). The $`W_i`$ components cannot intersect since the exceptional curves in the base over which they are stuck cannot intersect. However, the pure fiber components can intersect the $`W_i`$, leading to the possibility of additional low-energy fields appearing.
### 7.2 The $`W_2`$ and $`W_3`$ components
We start by analyzing the $`W_2`$ and $`W_3`$ components. As usual, we are interested in the number of sections of the surface $`S_{C_i}`$ which are in the class $`\sigma _{}E_i+n_iF`$ in the full Calabi–Yau manifold for $`i=2,3`$. In each case, the base curve $`C_i`$ wraps an exceptional curve $`𝐏^1`$ of one of the blown-up points in $`dP_8`$. Such a case is familiar from the examples given in sections 5.1 and 5.2. As expected, since $`C_i`$ wraps an exceptional curve, the corresponding surface $`S_{C_i}`$ is a $`dP_9`$ surface. Explicitly, for both $`C_2`$ and $`C_3`$ we have
$$g_i=\text{genus}(C_i)=0$$
(7.10)
and the number of singular fibers in the fibration is given by $`12p_i`$, where
$$\begin{array}{cc}\hfill p_i& =-K_{dP_8}E_i\hfill \\ & =\left(3l-E_1-\cdots -E_8\right)E_i=1\hfill \end{array}$$
(7.11)
Thus, we see that
$$S_{C_i}\text{ is a }dP_9\text{ surface for }i=2,3$$
(7.12)
The moduli space for each $`W_i`$, for $`i=2,3`$, is then the moduli space of sections of the $`dP_9`$ surface in the homology class $`\sigma _{}E_i+n_iF`$ in the full Calabi–Yau space $`X`$. Since in this case $`g_i=0`$ and $`p_i=1`$, it follows that we are interested in the moduli spaces $`(0,1;n_i)`$ discussed in section 6.3.
Let us concentrate on $`W_2`$, since the moduli space of $`W_3`$ is completely analogous. We recall from section 6.3 that, for a given $`n_2`$, there are a finite number of sections of the $`dP_9`$ in the class $`\sigma _{}E_2+n_2F`$ in $`X`$. Furthermore, all these sections are exceptional with self-intersection $`1`$ and so there is no moduli space for moving these curves within $`dP_9`$. Since the curve $`C_2`$ is also fixed in the base, we see that there are no moduli for moving $`W_2`$ in the Calabi–Yau threefold. All we are left with are the moduli for moving in $`S^1/Z_2`$ and the axion modulus.
We showed in section 6.3 that there was precisely one section in the class $`\sigma _{}E_2`$ and 240 in $`\sigma _{}E_2+F`$. Rather than do a general analysis, let us consider some examples for which $`n_2=2`$. That is
$$[W_2]=\sigma _{}E_2+2F$$
(7.13)
Consequently, we will be interested in the moduli space $`(0,1;2)`$. We know from the previous discussion that
$$(0,1;2)=N(2)𝐏_a^1$$
(7.14)
Here, we will not evaluate $`N(2)`$ but content ourselves with several specific examples. From the general map (6.35), with $`\mathrm{\Lambda }=E_2`$, we see that we can, for instance, write the class of $`W_2`$ as
$$[W_2]=i_{C_2}\left(l^{}-E_1^{}-E_9^{}\right)$$
(7.15)
or
$$[W_2]=i_{C_2}\left(2l^{}-E_1^{}-E_2^{}-E_3^{}-E_4^{}-E_9^{}\right)$$
(7.16)
or many other decompositions which we will not discuss here. We might be tempted to include the case where
$$[W_2]=i_{C_2}\left(3l^{}-E_2^{}-\cdots -E_9^{}\right)$$
(7.17)
However, this is not a section. This can be seen directly from the fact that it fails to satisfy the equations (6.38). Alternatively, we note that the nine blown-up points in $`dP_9`$ are not in general position. If a cubic passes through eight of them, then it also passes through the ninth. Consequently, the class $`3l^{}-E_2^{}-\cdots -E_9^{}`$, which is the class of a cubic through eight of the nine points, always splits into two classes: the fiber $`F_{C_2}=3l^{}-E_1^{}-\cdots -E_9^{}`$ and the base $`E_1^{}`$. Hence, the example (7.17) is always reducible, splitting into a pure fiber component and the section in the class $`E_1^{}`$.
Since all the sections are genus zero like the base, we have a simple table for the cases given in (7.15) and (7.16)
$$\begin{array}{ccc}[W_2]\text{ in }dP_9& \text{genus}& \text{moduli space}\\ & & \\ l^{}-E_1^{}-E_9^{}& 0& 𝐏_a^1\\ 2l^{}-E_1^{}-E_2^{}-E_3^{}-E_4^{}-E_9^{}& 0& 𝐏_a^1\end{array}$$
(7.18)
All the other possible sections in the class $`\sigma _{}E_2+2F`$ will have the same genus and moduli space.
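These statements can be checked against the section conditions (6.38). A small sketch (the helper name is ours; `qs` lists the coefficients $`q_1,\ldots ,q_9`$):

```python
def is_section(q, qs):
    """Test whether Q = q l' - sum_i qs[i] E_i' satisfies (6.38),
    i.e. Q.Q = -1 and Q.F_R = 1."""
    return (q * q - sum(x * x for x in qs) == -1
            and 3 * q - sum(qs) == 1)

# (7.15): l' - E_1' - E_9' is a section, with q_9 = 1 and hence m = 2
print(is_section(1, [1, 0, 0, 0, 0, 0, 0, 0, 1]))   # True
# (7.16): 2l' - E_1' - E_2' - E_3' - E_4' - E_9' likewise
print(is_section(2, [1, 1, 1, 1, 0, 0, 0, 0, 1]))   # True
# (7.17): 3l' - E_2' - ... - E_9' fails the first condition
print(is_section(3, [0, 1, 1, 1, 1, 1, 1, 1, 1]))   # False
```

The failure of (7.17) is exactly the reducibility noted above: its self-intersection is $`+1`$ rather than $`-1`$.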
### 7.3 The $`W_1`$ Component
The remaining component $`W_1`$ is considerably more interesting. We start by noting that the projection is $`C_1=2R`$, where $`R`$ is the exceptional curve corresponding to $`E_1`$. This implies that the projection map from $`W_1`$ to $`R`$ is a double cover. For the same reason as in the previous section, the fibered space $`S_R`$ above $`R`$ satisfies
$$S_R\text{ is a }dP_9\text{ surface}$$
(7.19)
However, the fact that $`W_1`$ is a double cover implies that, unlike all the cases we have considered thus far, $`W_1`$ is not a section. Hence, the $`W_1`$ moduli space is not given by $`(0,1;n_1)`$. However, we can still analyze its moduli space using the general map between classes (6.35).
Suppose, for simplicity, we choose $`n_1=2`$, that is
$$[W_1]=2\sigma _{}E_1+2F$$
(7.20)
since other cases can be analyzed in an analogous way. As above, we will not consider all possibilities for the class of $`W_1`$ in the $`dP_9`$. Instead, we will restrict our discussion to a few interesting examples. For instance, using (6.35) with $`\mathrm{\Lambda }=E_1`$, some possibilities are
$`[W_1]`$ $`=i_R\left(2E_1^{}\right)`$ (7.21a)
$`[W_1]`$ $`=i_R\left(E_1^{}+E_2^{}\right)`$ (7.21b)
$`[W_1]`$ $`=i_R\left(l^{}-E_1^{}\right)`$ (7.21c)
$`[W_1]`$ $`=i_R\left(3l^{}-E_1^{}-\cdots -E_7^{}\right)`$ (7.21d)
We can analyze the moduli spaces in the $`dP_9`$ surface of each of these different cases by considering the right-hand sides of (7.21) as classes of curves through some number of points in $`𝐏^2`$. In examples (7.21a) and (7.21b), the curves are always reducible into two exceptional curves in the $`dP_9`$ and, hence, have no moduli for moving in $`dP_9`$. Since the curve $`R`$ is also fixed in the base, these cases have no moduli for moving in the Calabi–Yau manifold at all.
The third case, (7.21c), corresponds to a line (topologically $`𝐏^1`$) in $`𝐏^2`$ through one point. As such, as discussed in section 5.1, its moduli space is $`𝐏^1`$. Furthermore, there are special points in the moduli space where the line passes through one of the other blown-up points in $`dP_9`$ and the curve becomes reducible (in analogy to the process described in Figure 3). We can have, for instance,
$$W_1=U_1+U_2$$
(7.22)
where the classes in $`dP_9`$ are
$$[U_1]_R=l^{}-E_1^{}-E_2^{},\qquad [U_2]_R=E_2^{}$$
(7.23)
What was previously a single sphere has now been reduced to a pair of spheres, each an exceptional curve, which intersect at a single point (as in Figure 4).
The last case, (7.21d), is even more interesting. This class corresponds to a cubic in $`𝐏^2`$ passing through 7 points. A cubic is an elliptic curve and so has genus one. A general cubic has a moduli space of $`𝐏^9`$. However, by being restricted to pass through seven points, the remaining moduli space is simply $`𝐏^2`$. At special points in the moduli space the cubic can degenerate. First, we can have a double point. This occurs when the discriminant of the curve vanishes. It corresponds to one of the cycles of the torus pinching, as shown in Figure 9. The vanishing of the discriminant is a single additional condition on the parameters and so gives a curve, which we will call $`\mathrm{\Delta }_{W_1}`$, in $`𝐏^2`$. When the cubic degenerates, the blown up curve is a sphere. Thus, it has changed genus. We can now go one step further. At certain places, the discriminant curve $`\mathrm{\Delta }_{W_1}`$ in $`𝐏^2`$ has a double point. This corresponds to places where the curve becomes reducible. The curve $`W_1`$ splits into two
$$W_1=U_1+U_2$$
(7.24)
with, for instance, $`U_1`$ and $`U_2`$ describing a line and a conic,
$$[U_1]_R=l^{}-E_1^{}-E_2^{},\qquad [U_2]_R=2l^{}-E_3^{}-\cdots -E_7^{}$$
(7.25)
Note that each of the resulting curves is exceptional and so has no moduli space in the $`dP_9`$. However, we note that this splitting can happen $`\left(\genfrac{}{}{0pt}{}{7}{2}\right)=21`$ different ways. The two curves intersect at two points, corresponding to a double pinching of the torus into a pair of spheres, as in Figure 9. For completeness, we note that there is one further singularity possible. That is where the cubic develops a cusp. However, the topology remains that of a sphere, so we will not distinguish these points.
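The rigidity of the two components, their two intersection points, and the count of 21 splittings can all be checked in the $`dP_9`$ intersection form (the tuple encoding is ours):

```python
from math import comb

# Classes on the dP9 as (a, b1..b9) for a l' + b_1 E_1' + ... + b_9 E_9',
# with l'.l' = 1, E_i'.E_j' = -delta_ij, l'.E_i' = 0.
def dot(u, v):
    return u[0] * v[0] - sum(x * y for x, y in zip(u[1:], v[1:]))

U1 = (1, -1, -1, 0, 0, 0, 0, 0, 0, 0)      # l' - E_1' - E_2'
U2 = (2, 0, 0, -1, -1, -1, -1, -1, 0, 0)   # 2l' - E_3' - ... - E_7'

# Two rigid spheres meeting at two points, as in the double pinching
print(dot(U1, U1), dot(U2, U2), dot(U1, U2))   # -1 -1 2

# The number of such splittings: choose which 2 of the 7 points lie on the line
print(comb(7, 2))   # 21
```

The intersection number $`U_1U_2=2`$ confirms that the line and the conic meet twice, matching the double pinching of the torus described above.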
We can summarize these branches of moduli space in the following table
$$\begin{array}{ccc}[W_1]\text{ in }dP_9& \text{genus}& \text{moduli space}\\ & & \\ & & \\ 2E_1^{}& 0+0& 𝐏_a^1\times 𝐏_a^1/𝐙_2\\ & & \\ E_1^{}+E_2^{}& 0+0& 𝐏_a^1\times 𝐏_a^1\\ & & \\ l^{}-E_1^{}& 0& \left(𝐏^1-8\text{ pts.}\right)\times 𝐏_a^1\\ \left(l^{}-E_1^{}-E_2^{}\right)+\left(E_2^{}\right)& 0+0& 𝐏_a^1\times 𝐏_a^1\\ & & \\ 3l^{}-E_1^{}-\cdots -E_7^{}& 1& \left(𝐏^2-\mathrm{\Delta }_{W_1}\right)\times 𝐏_a^1\\ 3l^{}-E_1^{}-\cdots -E_7^{}& 0& \left(\mathrm{\Delta }_{W_1}-21\text{ pts.}\right)\times 𝐏_a^1\\ \left(l^{}-E_1^{}-E_2^{}\right)+\left(2l^{}-E_3^{}-\cdots -E_7^{}\right)& 0+0& 𝐏_a^1\times 𝐏_a^1\end{array}$$
(7.26)
Note that in the first line, we mod out by $`𝐙_2`$ since the fivebranes wrap the same curve in the Calabi–Yau manifold and so are indistinguishable. This is not the case in the second example.
We note that there is another type of splitting possible for the fourth example. The cubic could pass through one of the additional blown-up points. It would then pass through eight of the nine points. However, as we have discussed above, it must then also pass through the ninth point. The curve $`W_1`$ would then decompose into a pure fiber component plus two sections, namely
$$W_1=U_1+U_2+U_3$$
(7.27)
with
$$[U_1]_R=F_R=3l^{}-E_1^{}-\cdots -E_9^{},\qquad [U_2]_R=E_8^{},\qquad [U_3]_R=E_9^{}$$
(7.28)
Thus we see that, unlike the case for $`W_2`$ and $`W_3`$, it is also possible to have a transition where a fiber component splits off from $`W_1`$.
Let us end this section with an important observation. We have not discussed the full moduli space of $`W`$ here but only various characteristic branches. There is, however, a certain type of branch that we would like to emphasize. Consider a branch where we choose $`n=17`$, so that $`W_F=0`$, and split the curve $`W`$ as follows
$$W=W_0=W_1+W_2+W_3$$
(7.29)
with, for example
$$[W_1]=2\sigma _{}E_1+2F,\qquad [W_2]=\sigma _{}E_2+5F,\qquad [W_3]=\sigma _{}E_3+10F$$
(7.30)
From our previous discussion, $`W_2`$ and $`W_3`$ are required to be sections of the $`dP_9`$ surfaces above the exceptional curves in the classes $`E_2`$ and $`E_3`$ respectively. As such, they have no moduli to move within the Calabi–Yau threefold. Furthermore, we have seen an example in the second line of the table (7.26) where $`W_1`$ splits into two components, neither of which can move in the Calabi–Yau space. Consequently, we see that there is a component of moduli space where we simply have four fivebranes, each wrapping a fixed curve within the Calabi–Yau threefold. Furthermore, none of the fivebranes can intersect. We then have a very simple moduli space. It is
$$𝐏_a^1\times 𝐏_a^1\times 𝐏_a^1\times 𝐏_a^1$$
(7.31)
corresponding to moving each fivebrane in $`S^1/Z_2`$ and changing the value of the axions. Thus, even when the fivebrane class is relatively complicated, we see that there are components of the moduli space with very few moduli.
## 8 A second three-family $`dP_8`$ example
In this section, we will briefly discuss a second example of a realistic fivebrane moduli space. Again, we will take the base
$$B\text{ is a }dP_8\text{ surface}$$
(8.1)
and choose the fivebrane class
$$[W]=\sigma _{}l-\sigma _{}E_1+\sigma _{}E_2+\sigma _{}E_3+27F$$
(8.2)
This gives a three-family model with an unbroken $`SU(5)`$ gauge group, as can be seen explicitly using the rules given in and (taking $`\lambda =\frac{1}{2}`$ in the equations given there). From the condition (3.6), since $`l-E_1+E_2+E_3`$ is effective in the base, $`[W]`$ is an effective class as required.
To calculate the moduli space, first, one separates the pure fiber components, partitioning $`W`$ as $`W=W_0+W_F`$, as in (6.2), with
$$[W_0]=\sigma _{}l-\sigma _{}E_1+\sigma _{}E_2+\sigma _{}E_3+nF,\qquad [W_F]=\left(27-n\right)F$$
(8.3)
where $`0\le n\le 27`$. As usual, unless $`n=27`$, this implies that we have at least two distinct fivebranes. This partition splits the moduli space into 28 components, as in (6.4). The moduli space of the $`W_F`$ component is the usual symmetric product given in (6.5)
$$((27-n)F)=\left(dP_8\times 𝐏_a^1\right)^{27-n}/𝐙_{27-n}$$
(8.4)
Next, we analyze the moduli space of $`W_0`$ for a given $`n`$. If we project $`W_0`$ onto the base, we get the curve $`C`$ in the class
$$[C]=l-E_1+E_2+E_3$$
(8.5)
We then need to find the moduli space $`_B(l-E_1+E_2+E_3)`$ of such curves in the base. The last two classes correspond to curves wrapping exceptional blow-ups and so, as in the previous section, $`C`$ must be reducible. We expect that there is always one component wrapping the exceptional curve in $`E_2`$ and another component wrapping the exceptional curve in $`E_3`$.
In general, there are eight distinct parts of $`ℳ_B(l-E_1+E_2+E_3)`$, with different numbers of curves in the base. In general, $`C`$ decomposes into $`k`$ curves as
$$C=C_1+\mathrm{}+C_k$$
(8.6)
with
$$[C_i]=\mathrm{\Omega }_i,\mathrm{\Omega }_1+\mathrm{}+\mathrm{\Omega }_k=l-E_1+E_2+E_3$$
(8.7)
As $`C`$ splits, so does $`W_0`$. In general we can partition the $`n`$ fiber components in different ways, so that
$$W_0=W_1+\mathrm{}+W_k,[W_i]=\sigma _{}\mathrm{\Omega }_i+n_iF$$
(8.8)
where $`n_i≥0`$ and $`n_1+\mathrm{}+n_k=n`$. The eight different parts of the moduli space of $`C`$ can be summarized as follows
$$\begin{array}{ccc}[C_i]& \text{genus}(C_i)& \text{moduli space}\\ & & \\ & & \\ \begin{array}{c}[C_1]=l-E_1\\ [C_2]=E_2\\ [C_3]=E_3\end{array}& \begin{array}{c}0\\ 0\\ 0\end{array}& 𝐏^1-7\text{ pts.}\\ & & \\ \begin{array}{c}[C_1]=l-E_1-E_2\\ [C_2]=2E_2\\ [C_3]=E_3\end{array}& \begin{array}{c}0\\ 0\\ 0\end{array}& \text{single pt.}\\ & & \\ \begin{array}{c}[C_1]=l-E_1-E_3\\ [C_2]=E_2\\ [C_3]=2E_3\end{array}& \begin{array}{c}0\\ 0\\ 0\end{array}& \text{single pt.}\\ & & \\ \begin{array}{c}[C_1]=l-E_1-E_i\\ [C_2]=E_2\\ [C_3]=E_3\\ [C_4]=E_i\end{array}& \begin{array}{c}0\\ 0\\ 0\\ 0\end{array}& \begin{array}{c}\text{single pt.}\\ i=4,\mathrm{},8\end{array}\end{array}$$
(8.9)
The analysis is very similar to that in section 5.1. The $`C_2`$ and $`C_3`$ components are always stuck on the exceptional curves in $`E_2`$ and $`E_3`$ and so have no moduli. In the first row of the table, $`C_1`$ is a curve in the $`dP_8`$ corresponding to a line through the blown-up point $`p_1`$. As such it has a moduli space of $`𝐏^1`$, except for the seven special points where it also passes through one of the other seven blown-up points. If it passes through one of the points $`p_4,\mathrm{},p_8`$, the curve splits into two curves and we have a total of four distinct fivebranes. This case is given in the last row of (8.9). If it passes through $`p_2`$ or $`p_3`$, generically, we still only have three components, but now $`C_2`$ or $`C_3`$ is in the class $`2E_2`$ or $`2E_3`$ respectively. These are the second and third rows of (8.9). All the curves are spheres and so have genus zero. In conclusion, we have
$$ℳ_B(l-E_1+E_2+E_3)=𝐏^1$$
(8.10)
There are seven special points in this moduli space: the five cases given in the last row of (8.9), where $`C`$ splits into four curves, and the two further points where the classes of $`C_2`$ and $`C_3`$ change as given in the second and third rows of (8.9). We see that, unlike the previous example, $`ℳ_B`$ is more than just a single point.
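As a quick cross-check (our own sketch, not a calculation from the text), the arithmetic behind table (8.9) can be verified mechanically with the standard intersection form on $`dP_8`$ and the adjunction formula; the multiplicity-two classes $`2E_2`$ and $`2E_3`$ are excluded from the genus check, since adjunction applies to reduced irreducible curves.

```python
# Basis (l, E_1..E_8) on dP_8: l.l = 1, E_i.E_j = -delta_ij, l.E_i = 0,
# canonical class K = -3l + sum_i E_i.  Adjunction: g = 1 + (C.C + K.C)/2.
def dot(x, y):
    return x[0] * y[0] - sum(a * b for a, b in zip(x[1:], y[1:]))

def cls(a, *E):                  # class a*l plus (index, coeff) pairs on E_i
    v = [a] + [0] * 8
    for i, c in E:
        v[i] = c
    return v

K = [-3] + [1] * 8
def genus(C):
    return 1 + (dot(C, C) + dot(K, C)) // 2

total = cls(1, (1, -1), (2, 1), (3, 1))        # l - E_1 + E_2 + E_3
rows = [
    [cls(1, (1, -1)), cls(0, (2, 1)), cls(0, (3, 1))],                  # generic
    [cls(1, (1, -1), (2, -1)), cls(0, (2, 2)), cls(0, (3, 1))],         # through p_2
    [cls(1, (1, -1), (3, -1)), cls(0, (2, 1)), cls(0, (3, 2))],         # through p_3
    [cls(1, (1, -1), (4, -1)), cls(0, (2, 1)), cls(0, (3, 1)), cls(0, (4, 1))],  # i = 4
]
for parts in rows:
    assert [sum(c) for c in zip(*parts)] == total    # decomposition adds up to [C]
# the reduced, irreducible pieces are all genus-zero spheres
for C in [cls(1, (1, -1)), cls(1, (1, -1), (2, -1)),
          cls(1, (1, -1), (4, -1)), cls(0, (2, 1))]:
    assert genus(C) == 0
```

Each listed decomposition indeed sums to $`l-E_1+E_2+E_3`$, and every reduced component is rational, consistent with the genus column of (8.9).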
Let us concentrate on a generic point in moduli space, as in the first row of (8.9). As we noted above, $`W_0`$ then splits into three distinct components,
$$W_0=W_1+W_2+W_3$$
(8.11)
with
$$[W_1]=\sigma _{}l-\sigma _{}E_1+n_1F,[W_2]=\sigma _{}E_2+n_2F,[W_3]=\sigma _{}E_3+n_3F$$
(8.12)
where $`n_1+n_2+n_3=n`$ and $`n_i≥0`$.
We note that $`W_2`$ and $`W_3`$ are exactly of the form we analyzed in section 7.2 above. Recall that each component is a curve stuck above the exceptional curve $`C_2`$ or $`C_3`$ in the base. The space $`S_{C_i}=\pi ^{-1}(C_i)`$, for $`i=2,3`$, above each exceptional curve was given by
$$S_{C_i}\text{ is a }dP_9\text{ surface for }i=2,3$$
(8.13)
This means we have $`g_i=0`$ and $`p_i=1`$ and the corresponding moduli spaces are given by $`ℳ(0,1;n_i)`$. These spaces were given in (6.36) and are a discrete number $`N(n_i)`$ of copies of $`𝐏_a^1`$, corresponding to different sections of a $`dP_9`$ surface. We have
$$ℳ(0,1;n_i)=N(n_i)𝐏_a^1\text{ for }i=2,3$$
(8.14)
There are no moduli for moving the curves within either the base or the fiber of the Calabi–Yau manifold. Since $`g_i=0`$ there are no vector multiplets on the fivebranes.
We now turn to $`W_1`$. This is essentially of the form we considered in section 5.2. There, we showed that the surface $`S_{C_1}`$ is given by
$$S_{C_1}\text{ is a K3 surface}$$
(8.15)
That is to say, the genus $`g_1`$ of $`C_1`$ is zero and $`p_1=2`$, so there are 24 singular fibers. Thus we are interested in the moduli space $`ℳ(0,2;n_1)`$. From the general discussion in section 6.2, we find from equation (6.24) that the moduli space for $`W_1`$ is empty unless $`n_1=0`$. One then has
$$ℳ(0,2;0)=𝐏_a^1$$
(8.16)
Putting this together with the moduli space for $`W_2`$ and $`W_3`$ given in (8.14), and recalling the form of the $`C`$ moduli space (8.9), we can write, for this generic part of the moduli space of $`W_0`$,
$$\begin{array}{c}ℳ_0(\sigma _{}l-\sigma _{}E_1+\sigma _{}E_2+\sigma _{}E_3+nF)\hfill \\ \hfill =\left(\left[𝐏^1-7\text{ pts.}\right]\times 𝐏_a^1\right)\times N(n_2)𝐏_a^1\times N(n_3)𝐏_a^1+ℳ_{\text{non-generic}}\end{array}$$
(8.17)
where we must have $`n_1=0`$, and $`n_2+n_3=n`$ in the decomposition (8.12). We have three distinct fivebranes, two of which, $`W_2`$ and $`W_3`$, are stuck in the Calabi–Yau threefold. The third fivebrane can move within the base of the Calabi–Yau. Each fivebrane wraps a curve of genus zero and so there are no vector multiplets. The full $`W`$ moduli space is constructed, for each $`n`$, from the product of this space together with the $`W_F`$ moduli space given in (8.4).
We will not consider all the seven possible exceptional cases in the moduli space of $`C`$, listed in (8.9). Rather, consider just one of the cases where $`C`$ splits into four components,
$$[C_1]=l-E_1-E_4,[C_2]=E_2,[C_3]=E_3,[C_4]=E_4$$
(8.18)
with the corresponding split of $`W_0`$ as
$`[W_1]`$ $`=\sigma _{}l-\sigma _{}E_1-\sigma _{}E_4+n_1F,`$ $`[W_2]`$ $`=\sigma _{}E_2+n_2F,`$ (8.19)
$`[W_3]`$ $`=\sigma _{}E_3+n_3F,`$ $`[W_4]`$ $`=\sigma _{}E_4+n_4F`$
with $`n_i≥0`$ and $`n_1+n_2+n_3+n_4=n`$. Let us further assume that $`n=27`$ so that $`W_F=0`$ and there are no pure fiber components. Each of the $`C_i`$ is an exceptional curve in the base. Consequently, we have
$$S_{C_i}\text{ is a }dP_9\text{ surface for }i=1,2,3,4$$
(8.20)
(For $`C_1`$ the calculation is just as in section 5.2). Thus we have $`g_i=0`$ and $`p_i=1`$ for each curve and so we have the moduli spaces
$$ℳ(0,1;n_i)=N(n_i)𝐏_a^1\text{ for }i=1,2,3,4$$
(8.21)
where, as usual, $`N(n_i)`$ counts the number of distinct sections in each $`dP_9`$. In particular, we are no longer required to take $`n_1=0`$. We now have a total of four distinct fivebranes, wrapping $`W_1`$, $`W_2`$, $`W_3`$ and $`W_4`$. All these curves are stuck in the Calabi–Yau threefold, so that a given connected part of the moduli space has the form
$$𝐏_a^1\times 𝐏_a^1\times 𝐏_a^1\times 𝐏_a^1$$
(8.22)
(Here we have assumed that either $`n_1`$ or $`n_4`$ is non-zero so that this branch of moduli space is not connected to the generic branch discussed above.) As in the previous section, we see that there are disconnected components of the moduli space with very few moduli. Each curve has genus zero, so there are no vector multiplet degrees of freedom.
## 9 Two simple Hirzebruch examples
In this section, we will briefly discuss two simple examples where the base is a Hirzebruch surface $`F_r`$. These are $`𝐏^1`$ fibrations over $`𝐏^1`$, characterized by a non-negative integer $`r`$. Following the notation of , they have two independent algebraic classes, the class $`𝒮`$ of the section of the fibration at infinity and the fiber class $`ℱ`$. These have the following intersection numbers
$$𝒮𝒮=-r,𝒮ℱ=1,ℱℱ=0$$
(9.1)
The canonical bundle is given by
$$K_B=-2𝒮-(2+r)ℱ$$
(9.2)
Effective classes in $`F_r`$ are of the form $`\mathrm{\Omega }=a𝒮+bℱ`$ with $`a`$ and $`b`$ non-negative. Thus, from (3.6), a general class of effective curves in the Calabi–Yau threefold can be written as
$$[W]=a\sigma _{}𝒮+b\sigma _{}ℱ+fF$$
(9.3)
where $`a`$, $`b`$ and $`f`$ are all non-negative. (Note, however, that, as discussed above equation (3.6), there is actually an additional effective class for $`r≥3`$, which we will ignore here.)
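The intersection arithmetic on $`F_r`$ can be checked with a few lines of code (our own sketch; the quantity $`p`$ below, which fixes the expected number $`12p`$ of singular elliptic fibers over a curve $`C`$, anticipates the computations in sections 9.1 and 9.2):

```python
# Classes on F_r written as (a, b) meaning a*S + b*Fib, with S.S = -r,
# S.Fib = 1, Fib.Fib = 0 and canonical class K_B = -2S - (2+r)Fib.
def dot(x, y, r):
    (a1, b1), (a2, b2) = x, y
    return -r * a1 * a2 + a1 * b2 + a2 * b1

def p(C, r):                      # expected number of singular fibers = 12p
    K = (-2, -(2 + r))
    return -dot(K, C, r)

S, Fib = (1, 0), (0, 1)
for r in range(6):
    assert dot(S, S, r) == -r and dot(S, Fib, r) == 1 and dot(Fib, Fib, r) == 0
    assert p(S, r) == 2 - r       # section at infinity (section 9.1 below)
    assert p(Fib, r) == 2         # fiber class: S_C is a K3 (section 9.2 below)
```

In particular, $`p`$ is negative for the section class when $`r≥3`$, which is the non-generic case excluded in section 9.1.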
For any realistic model with three families of matter and realistic gauge groups, if the hidden $`E_8`$ group is unbroken, then the coefficients $`a`$ and $`b`$ are typically large. The moduli space is then relatively complicated to analyze. Thus, for simplicity, we will consider only two very simple cases with either a single $`𝒮`$ class or a single $`ℱ`$ class.
### 9.1 $`[W]=\sigma _{}𝒮+fF`$
We consider first
$$B\text{ is an }F_r\text{ surface with }r≥2$$
(9.4)
and
$$[W]=\sigma _{}𝒮+fF$$
(9.5)
using the general procedure we outlined above. Note that we require $`r≥2`$ to exclude the trivial case of $`F_0`$, which is just the product $`𝐏^1\times 𝐏^1`$, and $`F_1`$, which is actually the del Pezzo surface $`dP_1`$.
We recall that the first step is to split off any pure fiber components from $`W`$, writing it as a sum of $`W_0`$ and $`W_F`$ as in (6.2)
$$[W_0]=\sigma _{}𝒮+nF,[W_F]=(f-n)F$$
(9.6)
with $`0≤n≤f`$. As usual, unless $`n=f`$, this implies we have at least two distinct fivebranes. This splits the moduli space into $`f+1`$ components as in (6.4). The moduli space of $`W_F`$ has the familiar form, from (6.5),
$$ℳ((f-n)F)=\left(F_r\times 𝐏_a^1\right)^{f-n}/𝐙_{f-n}$$
(9.7)
Next we turn to analyzing the $`W_0`$ moduli space for a given $`n`$. Projecting $`W_0`$ onto the base gives the curve $`C`$ in the homology class
$$[C]=𝒮$$
(9.8)
Our first step is then to find the moduli space $`_B(𝒮)`$ of $`C`$ in the base. This, however, is very simple. Since the self-intersection of $`𝒮`$ is negative, there is a unique representative of the class $`𝒮`$, namely the section at infinity. Thus
$$ℳ_B(𝒮)=\text{single point}$$
(9.9)
The second step is then to find all the curves $`W_0`$ in the class $`\sigma _{}𝒮+nF`$ in the Calabi–Yau threefold which project onto $`C`$. To answer this, we need to characterize the surface $`S_C=\pi ^{-1}(C)`$. We recall that this will be an elliptic fibration over $`C`$ and is characterized by the genus $`g`$ of $`C`$ and the number $`12p`$ of singular fibers. Since $`C`$ is a section of the Hirzebruch surface, it must be topologically $`𝐏^1`$. Consequently
$$g=\text{genus}(C)=0$$
(9.10)
The expected number of singular fibers is $`12p`$, where
$`p`$ $`=-K_B[C]`$ (9.11)
$`=\left(2𝒮+(2+r)ℱ\right)𝒮=2-r`$
For $`r≥3`$ this seems to predict that we have a negative number of singular fibers. Actually, this reflects the fact that the curve $`C`$ is contained within the discriminant curve of the Calabi–Yau elliptic fibration. Thus the elliptic fiber over $`C`$ is singular everywhere. This is a non-generic case we wish to avoid in discussing the moduli space. Thus, we will assume $`r<3`$. Given that $`F_0`$ and $`F_1`$ do not give new surfaces, we are left with restricting to $`r=2`$ and so
$$B\text{ is }F_2$$
(9.12)
Then we have $`p=0`$ which implies the elliptic fibration is locally trivial. If we assume that the surface $`S_C`$ is also a globally trivial fibration, it is then simply the product
$$S_C=𝐏^1\times E$$
(9.13)
where $`E`$ is an elliptic curve. Comparing with the notation of section 6, we have $`g=p=0`$ and so we are interested in the moduli space $`ℳ(0,0;n)`$. However, we have argued, (6.24), that these spaces are empty unless $`n=0`$. In this case $`ℳ(0,0;0)=E\times 𝐏_a^1`$, where the first factor comes from moving the fivebrane within $`S_C`$ and the second factor comes from moving it within $`S^1/Z_2`$. (Again, here we are assuming that $`S_C`$ is a globally as well as locally trivial fibration.) In particular, the curve is a section of $`S_C`$ and, so, can lie at any point on the elliptic curve $`E`$.
Thus, we see that the only partition (9.6) allowed is where $`n=0`$. In that case the moduli space of the curve $`W_0`$ is simply
$$ℳ_0(\sigma _{}𝒮)=E\times 𝐏_a^1$$
(9.14)
The full moduli space is then the product
$$ℳ(\sigma _{}𝒮+fF)=\left(E\times 𝐏_a^1\right)\times \frac{\left(F_2\times 𝐏_a^1\right)^f}{𝐙_f}$$
(9.15)
Generically, we have $`f+1`$ fivebranes. One is $`W_0`$, which lies over the section at infinity in $`F_2`$ and can be at any position in the elliptic fiber over the base as well as in $`S^1/Z_2`$. The other $`f`$ fivebranes are pure fiber components, which can each be at arbitrary points in the base and in $`S^1/Z_2`$. Since the genus of $`C`$ is zero, so is the genus of $`W_0`$. Thus, generically, the only vector multiplets come from $`W_F`$. At a generic point in moduli space, we therefore have $`U(1)^f`$ gauge symmetry. It is possible for the fiber components to intersect $`W_0`$, in which case we might expect new massless fields to appear in the low-energy theory.
### 9.2 $`[W]=\sigma _{}ℱ+fF`$
Now consider the case where again
$$B\text{ is an }F_r\text{ surface with }r≥2$$
(9.16)
but we take
$$[W]=\sigma _{}ℱ+fF$$
(9.17)
As always, we first split off the pure fiber components in $`W`$, writing $`W=W_0+W_F`$ with $`f+1`$ different partitions
$$[W_0]=\sigma _{}ℱ+nF,[W_F]=(f-n)F$$
(9.18)
The moduli space of $`W_F`$ is the familiar form
$$ℳ((f-n)F)=\left(F_r\times 𝐏_a^1\right)^{f-n}/𝐙_{f-n}$$
(9.19)
To analyze the moduli space of $`W_0`$, we project onto the base, giving the curve $`C`$ with
$$[C]=ℱ$$
(9.20)
Thus $`C`$ is in the fiber class of the $`F_r`$ base. Since $`F_r`$ is a $`𝐏^1`$ fibration over $`𝐏^1`$, the fiber can lie at any point in the base $`𝐏^1`$ so we have that the moduli space of $`C`$ is given by
$$ℳ_B(ℱ)=𝐏^1$$
(9.21)
Next we need to find all the curves $`W_0`$ which project onto a given $`C`$. We first characterize the surface $`S_C=\pi ^{-1}(C)`$. Since $`C`$ is a fiber of $`F_r`$, it must be topologically $`𝐏^1`$. Hence
$$g=\text{genus}(C)=0$$
(9.22)
The number of singular fibers, $`12p`$, in the elliptic fibration $`S_C`$ is given by
$`p`$ $`=-K_B[C]`$ (9.23)
$`=\left(2𝒮+(2+r)ℱ\right)ℱ=2`$
for any $`r`$. This implies that $`S_C`$ is an elliptic fibration over $`𝐏^1`$ with 24 singular fibers, that is
$$S_C\text{ is a K3 surface}$$
(9.24)
In the notation of section 6, we have $`g=0`$ and $`p=2`$ and so we are interested in the moduli space $`ℳ(0,2;n)`$. However, we have argued, (6.24), that these spaces are empty unless $`n=0`$, in which case $`ℳ(0,2;0)=𝐏_a^1`$. That is, the curve is completely stuck within $`S_C`$, but is free to move within $`S^1/Z_2`$.
Thus the only partition (9.18) allowed is where $`n=0`$. In this case, the full moduli space of $`W_0`$ is then given by
$$ℳ_0(\sigma _{}ℱ)=𝐏^1\times 𝐏_a^1$$
(9.25)
The first factor of $`𝐏^1`$ reflects the moduli space of curves $`C`$ in the base (9.21). The second factor reflects the fact that, for a given $`C`$, the moduli space of $`W_0`$ is $`ℳ(0,2;0)=𝐏_a^1`$. We note that there are no moduli for moving $`W_0`$ in the direction of the elliptic fiber. The full moduli space is then given by the product
$$ℳ(\sigma _{}ℱ+fF)=\left(𝐏^1\times 𝐏_a^1\right)\times \frac{\left(F_r\times 𝐏_a^1\right)^f}{𝐙_f}$$
(9.26)
As in the previous example, generically we have $`f+1`$ fivebranes. One is $`W_0`$, which can be deformed within the base and can move in $`S^1/Z_2`$, but has no moduli for moving in the elliptic fiber. The other $`f`$ fivebranes are pure fiber components, which can each be at arbitrary points in the base and in $`S^1/Z_2`$. Again, since the genus of $`C`$ is zero, so is the genus of $`W_0`$. Thus, generically, the only vector multiplets come from $`W_F`$. At a generic point in moduli space we have, therefore, $`U(1)^f`$ gauge symmetry. It is possible for the fiber components to intersect $`W_0`$, in which case we might expect new massless fields to appear in the low-energy theory.
### Acknowledgments
B.A.O. and D.W. would like to thank Angel Uranga for helpful discussions. R.D. is supported in part by an NSF grant DMS-9802456 as well as by grants from the University of Pennsylvania Research Foundation and Hebrew University. B.A.O. is supported in part by a Senior Alexander von Humboldt Award, by the DOE under contract No. DE-AC02-76-ER-03071 and by a University of Pennsylvania Research Foundation Grant. D.W. is supported in part by the DOE under contract No. DE-FG02-91ER40671.
## 1 Introduction
In the standard construction of a first-order equilibrium phase transition via the Gibbs criteria from a quark-gluon plasma (QGP) to a hot and dense hadron gas (HG), the appearance of a discontinuity in the entropy per baryon ratio (s/n) makes the phase transition at fixed temperature T and fixed chemical potential $`\mu `$ irreversible. Recently several papers have addressed the question of conserving the entropy per baryon s/n across the phase boundary. Leonidov et al. have proposed a bag model equation of state (EOS) for the QGP, consisting of a massless, free gas of quarks and gluons, using a ($`\mu `$, T) dependent bag constant $`B(\mu ,T)`$ in an isentropic equilibrium phase transition from a QGP to the HG at constant T and $`\mu `$. Later Patra and Singh extended this idea to remedy some anomalous behaviour of such a bag constant $`B(\mu ,T)`$ through the inclusion of perturbative QCD corrections in the EOS for the QGP. They have also explored the consequences of such a bag constant for the deconfining phase transition in relativistic heavy-ion collisions as well as in the early Universe case.
The above mentioned analysis refers only to stationary systems. But in the context of modern experiments on ultrarelativistic heavy-ion collisions, the dynamical evolution of the system within the framework of hydrodynamical models has to be incorporated as well. Recently Chernavskaya has suggested a double phase transition model via an intermediate phase containing massive constituent quarks and pions. He claimed that the continuity condition on the s/n ratio alone does not ensure the equilibrium character of a first-order transition in dynamically evolving systems, as it does for all equilibrium processes; the enthalpy of the system must be conserved as well. Subsequently he has also criticized the work of Ref. 1 and 2 on the grounds of twin constraints arising from the Gibbs–Duhem equilibrium relation and enthalpy conservation for an evolving system. The aim of the present note is to logically counter both these criticisms below.
## 2 Gibbs–Duhem Relation
Ref. suggests that if $`ϵ`$ is the energy density and p the pressure, then form-invariance of the relation
$`ϵ+p-\mu n=sT`$ (1)
imposes the condition
$`\mu {\displaystyle \frac{\partial B}{\partial \mu }}=-T{\displaystyle \frac{\partial B}{\partial T}}`$ (2)
We comment that eq.(2) is untenable for two reasons. Firstly, it would imply that B, instead of being a function of two independent variables $`\mu `$ and T, would depend only on the single variable $`\mu `$/T, as can be verified by direct differentiation. Secondly, eq.(2) would make it impossible to apply the iterative analytical procedure of solving the basic partial differential equation based on s/n in the extreme regions $`\mu `$ → 0, T → ∞ as well as $`\mu `$ → ∞, T → 0. This would imply a conflict with the QCD sum rule results.
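The first point can be illustrated numerically (our own sketch, with an arbitrary smooth test function standing in for the bag constant, not a physical EOS): any B that depends on $`\mu `$ and T only through the ratio $`\mu `$/T satisfies $`\mu \partial B/\partial \mu +T\partial B/\partial T=0`$ identically,

```python
import math

def B(mu, T):                     # arbitrary test function of mu/T only
    x = mu / T
    return 200.0 + 10.0 * math.tanh(x) + x ** 2

def d(f, mu, T, wrt, h=1e-6):     # central finite difference
    if wrt == "mu":
        return (f(mu + h, T) - f(mu - h, T)) / (2.0 * h)
    return (f(mu, T + h) - f(mu, T - h)) / (2.0 * h)

mu, T = 0.3, 0.15                 # arbitrary test point; units irrelevant
assert abs(mu * d(B, mu, T, "mu") + T * d(B, mu, T, "T")) < 1e-5
```

so a bag constant obeying such a relation carries only one independent variable, which is the objection raised above.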
## 3 Enthalpy Condition
Ref. mentions that, if $`\omega `$ = $`ϵ+P`$ is the enthalpy density, then for an evolving hydrodynamic system containing the mixed phase of the QGP and hadronic gas, the conservation of enthalpy per baryon $`\omega `$/n gives an additional constraint. We comment that this constraint is redundant, i.e., it does not give new information. This is so because even if the system is evolving with time, we can sit in the local comoving frame where the relation (1) is expected to hold. Then
$`{\displaystyle \frac{\omega }{n}}={\displaystyle \frac{(ϵ+P)}{n}}=T{\displaystyle \frac{s}{n}}+\mu `$ (3)
implying that, for any given T and $`\mu `$, the conservations of $`\omega `$/n and s/n are equivalent.
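A toy numerical check of this equivalence (ours; the numbers are arbitrary and carry no physical significance): any two states at the same T and $`\mu `$ that satisfy relation (1) and share the same s/n automatically share the same $`\omega `$/n,

```python
T, mu, s_over_n = 0.160, 0.450, 12.7      # arbitrary boundary point (GeV units)

def omega_over_n(n):                      # any baryon density n
    s = s_over_n * n
    eps_plus_p = T * s + mu * n           # relation (1): eps + p = Ts + mu*n
    return eps_plus_p / n                 # omega/n = T*(s/n) + mu

# "QGP-like" and "HG-like" densities give the same omega/n:
assert abs(omega_over_n(0.15) - omega_over_n(0.62)) < 1e-12
assert abs(omega_over_n(0.15) - (T * s_over_n + mu)) < 1e-12
```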
# On Beaming Effects in Afterglow Light Curves
## 1 INTRODUCTION
Beaming of relativistic ejecta in GRBs has been postulated by many authors in order to ease the GRB energy budget (see, e.g., Mészáros, Rees, & Wijers 1998 and refs. therein). There are basically two ways to verify the beaming observationally: one is statistical and is based on counting the afterglow-like transient sources and comparing their rate with the GRB rate, and the second is related to the beaming effects predicted to be imprinted in the afterglow light curves of individual objects (Rhoads 1997). Applying the first method to the X-ray transient sources, Grindlay (1999) found results consistent with no beaming, i.e., with no difference between the GRB and X-ray transient rates. However, as was pointed out by Woods & Loeb (1999), conclusive results about an excess (or its lack) of X-ray transients over GRBs must await much more sensitive future instruments. This is because a statistically significant contribution to the excess of X-ray transients over GRBs is expected to be provided only by weak X-ray transients, those representing afterglow phases when the bulk Lorentz factor of the radiating shell drops below the inverse of its angular size. Similar studies can be performed also in the optical and radio bands (Rhoads 1997; Woods & Loeb 1999).
In individual objects, the beaming related effects are expected to be imprinted in the optical and X-ray afterglow light curves. The lateral expansion of the shocked, relativistic plasma causes the front of the blast wave, at some moment, to start growing faster than it would in a purely conical outflow (Rhoads 1997). Due to this, the blast wave begins to decelerate faster than it would without the sideways outflow, and this produces a break in the light curve; the break appears the sooner, the larger the beaming factor is. Such a break is claimed to be present in the light curve of GRB 990123, the most energetic GRB up to date (Kulkarni et al. 1999). Sari, Piran, & Halpern (1999) speculate that afterglows with very steep light curves are highly beamed. Possibly the break in such objects is not recorded because it took place before the optical follow-ups.
Up to now, all theoretical studies of the light-curve breaks have been analytical and are based on: a power-law approximation of the blast wave dynamics, a broken power-law approximation of the radiation spectra, and an “on-axis” relation between the observed flux and the emitted flux (Rhoads 1997, 1999; Kulkarni et al. 1999; Sari, Piran, & Halpern 1999). In this paper, we treat the dynamics using the prescription given by Blandford and McKee (1976). The evolution of the radiation spectrum is calculated exactly, by computing the time evolution of the electrons from the continuity equation and by computing the observed luminosity through integrating the emitted radiation over the “$`t=\mathrm{const}`$” surfaces. Our results show that the change of the light-curve slope is significant, but smaller than predicted analytically. More importantly, the light curves steepen very slowly, so that it is difficult to assign a specific time location to the break. In order to better demonstrate the beaming effect, we compare our results with the spherical case. We also show how the light curve should look if there is no lateral expansion.
In §2 we collect the equations which are used to compute the blast wave speed, the evolution of the electrons, and the afterglow light curves. In §3 we present the results of our numerical studies of afterglows produced by beamed ejecta, and in §4 we compare them with simple analytical estimates.
## 2 BASIC EQUATIONS
### 2.1 Dynamics
The deceleration of a blast wave is described by the following equations (Blandford & McKee 1976; Chiang & Dermer 1998):
$$\frac{d\mathrm{\Gamma }}{dm}=-\frac{\mathrm{\Gamma }^2-1}{M},$$
(1)
$$\frac{dM}{dr}=\frac{dm}{dr}[\mathrm{\Gamma }-ϵ_{rad}ϵ_e(\mathrm{\Gamma }-1)],$$
(2)
and
$$dm/dr=\mathrm{\Omega }_jr^2\rho =2\pi r^2(1-\mathrm{cos}\theta _j)\rho ,$$
(3)
where $`\mathrm{\Gamma }`$ is the bulk Lorentz factor of the blast wave, $`M`$ is the total mass including internal energy, $`r`$ is the distance from the central engine to the blast wave, $`dm`$ is the rest mass swept up in the distance $`dr`$, $`\rho `$ is the mass density of the external medium, $`ϵ_e`$ is the fraction of dissipated energy converted to relativistic electrons, $`ϵ_{rad}`$ is the fraction of electron energy which is radiated, and $`\theta _j`$ is the angular size of the blast wave. This angular size is not constant but increases due to thermal expansion (Rhoads 1997),
$$\theta _j\equiv \frac{a}{r}=\theta _{j0}+\frac{v_l^{}}{c\mathrm{\Gamma }},$$
(4)
where the speed of the lateral expansion, $`v_l^{}`$, is assumed by Rhoads (1999) to be equal to the sound speed in the relativistic plasma, $`c_s=c/\sqrt{3}`$, but considered by Sari et al. (1999) to be relativistic. Noting that the plasma in the blast wave is continuously loaded with fresh gas which initially has no lateral bulk speed, one can expect that in reality $`v_l^{}`$ does not reach a relativistic value and settles somewhere between $`c_s`$ and $`\beta _\mathrm{\Gamma }c`$, and in general depends on $`r`$ and on $`\theta _j`$.
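The equations above can be sketched numerically as follows (our own minimal sketch, not the code used for the figures below): we integrate the deceleration in the adiabatic limit $`ϵ_{rad}=0`$ with the parameters of §3; the total energy normalization, step count and initial radius are illustrative choices.

```python
import math

m_p, c = 1.67e-24, 3.0e10                       # cgs
rho, Gamma0, theta_j0 = m_p, 300.0, 0.2
# energy within the initial cone, taking E_0/Omega = 1e54 ergs / 4pi sr
E0 = 1.0e54 / (4.0 * math.pi) * 2.0 * math.pi * (1.0 - math.cos(theta_j0))

def evolve(v_lat, r_min=1e15, r_max=1e18, steps=4000):
    Gamma, M = Gamma0, E0 / (Gamma0 * c ** 2)   # M0 from E0 = Gamma0*M0*c^2
    lnr = math.log(r_min)
    dlnr = (math.log(r_max) - lnr) / steps
    for _ in range(steps):
        r = math.exp(lnr)
        theta = min(theta_j0 + v_lat / (c * Gamma), 0.5 * math.pi)   # Eq. (4)
        dm = 2.0 * math.pi * (1.0 - math.cos(theta)) * r ** 3 * rho * dlnr
        Gamma += -(Gamma ** 2 - 1.0) / M * dm                        # Eq. (1)
        M += Gamma * dm                          # Eq. (2) with eps_rad = 0
        lnr += dlnr
    return Gamma

g_cone = evolve(0.0)                             # theta_j = const
g_side = evolve(c / math.sqrt(3.0))              # sound-speed lateral outflow
assert 5.0 < g_cone < 30.0    # close to Gamma0*(r0/r)**1.5 ~ 12 at 1e18 cm
assert g_side < g_cone        # sideways expansion decelerates the wave faster
```

Even this crude Euler integration reproduces the qualitative behaviour of Fig. 1: a smooth deceleration, steeper when the lateral outflow is switched on.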
### 2.2 Electron energy distribution
We assume that the electrons are injected with the power low energy distribution
$$Q=K\gamma ^{-p},$$
(5)
with the minimum energy of injected electrons
$$\gamma _m=\frac{ϵ_e(\mathrm{\Gamma }-1)m_p}{m_e}\frac{p-2}{p-1}.$$
(6)
The maximum energy of injected electrons for a given magnetic field, $`B^{}`$, is assumed to be given by (de Jager et al. 1996):
$$\gamma _{max}\simeq 4\times 10^7\left(\frac{B^{}}{1\mathrm{G}}\right)^{-1/2}$$
(7)
Normalization of the injection function, $`K`$, is provided by
$$L_{e,inj}^{}\equiv \int _{\gamma _m}^{\gamma _{max}}Q\gamma m_ec^2𝑑\gamma =ϵ_e\frac{dE_{acc}^{}}{dt^{}},$$
(8)
where
$$\frac{dE_{acc}^{}}{dt^{}}=\frac{dr}{dt^{}}\frac{dE_{acc}^{}}{dr}=\frac{dr}{dt^{}}\frac{dm}{dr}c^2(\mathrm{\Gamma }-1)=\mathrm{\Omega }_jr^2\rho \beta _\mathrm{\Gamma }\mathrm{\Gamma }(\mathrm{\Gamma }-1)c^3,$$
(9)
is the rate of accreted kinetic energy, $`dr=c\beta _\mathrm{\Gamma }\mathrm{\Gamma }dt^{}`$, $`\beta _\mathrm{\Gamma }=\sqrt{\mathrm{\Gamma }^2-1}/\mathrm{\Gamma }`$, and $`t^{}`$ is the time measured in the blast wave comoving frame.
The evolution of the electron energy distribution is given by the continuity equation
$$\frac{\partial N_\gamma }{\partial r}=-\frac{\partial }{\partial \gamma }\left(N_\gamma \frac{d\gamma }{dr}\right)+Q,$$
(10)
where
$$\frac{d\gamma }{dr}=-f(r)\gamma ^2-g\frac{\gamma }{r},$$
(11)
are the electron energy losses. In both equations above, the derivatives over comoving time $`t^{}`$ have been replaced by the derivatives over the distance $`r`$, according to the relation $`\partial /\partial t^{}=c\beta _\mathrm{\Gamma }\mathrm{\Gamma }\partial /\partial r`$. The first term on the rhs of Eq. (11) represents synchrotron plus Compton energy losses, i.e.,
$$f(r)=\frac{\sigma _T}{6m_ec^2}\frac{B^2}{\beta _\mathrm{\Gamma }\mathrm{\Gamma }}(1+u_s^{}/u_B^{}),$$
(12)
where $`u_B^{}=B^2/8\pi `$ is the magnetic energy density, and $`u_s^{}`$ is the energy density of the synchrotron radiation, both as measured in the blast wave frame. The second term on the rhs of Eq. (11) represents the adiabatic losses. The parameter $`g`$ depends on the geometry of the expansion; for 2-dimensional (lateral) expansion $`g=2/3`$, and for 3-dimensional expansion $`g=1`$.
We calculate the magnetic field following Chiang & Dermer (1999)
$$u_B^{}\equiv \frac{(B^{})^2}{8\pi }=ϵ_B\kappa \rho c^2\mathrm{\Gamma }^2,$$
(13)
where $`\kappa `$ is the compression ratio and $`ϵ_B`$ parameterizes the departure of the magnetic field intensity from its equipartition value.
### 2.3 Synchrotron spectrum
The evolution of the synchrotron spectrum in the blast wave frame is given by
$$L_{syn,\nu ^{}}^{}(r)=\int N_\gamma (r)P(\nu ^{},\gamma )𝑑\gamma ,$$
(14)
where $`P(\nu ^{},\gamma )`$ is the power spectrum of synchrotron radiation of a single electron in isotropic magnetic field (see, e.g., Chiaberge & Ghisellini 1999).
The apparent monochromatic synchrotron luminosity as a function of time (a light curve) is calculated from
$$L_{syn,\nu }(t,\theta _{obs})=\int _{\mathrm{\Omega }_j}\frac{L_{syn,\nu ^{}}^{}[r(\stackrel{~}{\theta })]𝒟^3}{\mathrm{\Omega }_j}d\mathrm{cos}\stackrel{~}{\theta }d\stackrel{~}{\varphi },$$
(15)
where $`𝒟=1/\mathrm{\Gamma }(1-\beta _\mathrm{\Gamma }\mathrm{cos}\stackrel{~}{\theta })`$ is the Doppler factor of the blast wave at the angle $`\stackrel{~}{\theta }`$. The coordinates ($`\stackrel{~}{\theta },\stackrel{~}{\varphi }`$) are chosen so that the observer is located at $`\stackrel{~}{\theta }=0`$ ($`\theta =\theta _{obs}`$) and the jet axis is at $`\stackrel{~}{\theta }=\theta _{obs}`$. The integral is taken over the surfaces
$$t=\int \frac{(1-\beta _\mathrm{\Gamma }\mathrm{cos}\stackrel{~}{\theta })}{c\beta _\mathrm{\Gamma }}𝑑r=\mathrm{const},$$
(16)
enclosed within the blast wave boundaries, $`\mathrm{\Omega }_j`$.
### 2.4 Inverse-Compton radiation
We assume hereafter that cooling of relativistic electrons is dominated by synchrotron radiation, i.e. that $`u_s^{}≪u_B^{}`$. This condition will be verified and discussed in Appendix A.
## 3 RESULTS
We have used the following parameters of the afterglow model in our calculations: initial energy per solid angle, $`E_0/\mathrm{\Omega }_{j0}=10^{54}\mathrm{ergs}/4\pi `$; $`\mathrm{\Gamma }_0=300`$; $`\theta _{j0}=0.2`$; $`\kappa =4`$; $`\rho =m_p/1\mathrm{c}\mathrm{m}^3`$; $`ϵ_e=0.1`$ (the quasi-adiabatic case); $`ϵ_B=0.03`$; $`p=2.4`$. The parameters were not chosen to fit any specific observations, but rather to demonstrate the difference between simple analytical predictions and self-consistent numerical calculations regarding the beaming effects in the light curves.
In Fig. 1 we present the dependence of the bulk Lorentz factor of the blast wave on its distance from the central engine. We show three solutions for three different values of $`v_l^{}`$: $`v_l^{}=0`$ (thin line); $`v_l^{}=c/\sqrt{3}`$ (solid line); and $`v_l^{}=c`$ (dotted line). For $`v_l^{}=0`$ ($`\theta _j=\mathrm{const}`$) and $`r_0≪r≪r_{nr}`$, the bulk Lorentz factor is well approximated by
$$\mathrm{\Gamma }\simeq \mathrm{\Gamma }_0\left(r_0/r\right)^{3/2},$$
(17)
where
$$r_0\equiv \left(\frac{3E_0}{\mathrm{\Gamma }_0^2\rho c^2\mathrm{\Omega }_j}\right)^{1/3}\simeq 1.2\times 10^{17}\mathrm{cm},$$
(18)
is the radius where deceleration of the GRB ejecta by sweeping of interstellar gas starts to be efficient, and
$$r_{nr}\equiv r_0\left(\frac{\mathrm{\Gamma }_0}{2}\right)^{2/3}\simeq 3.4\times 10^{18}\mathrm{cm},$$
(19)
is the radius above which the blast wave becomes nonrelativistic. Fig. 1 demonstrates that the steepening of the $`\mathrm{\Gamma }(r)`$ curves due to lateral outflow is very smooth, without any sharp break like the one predicted analytically to take place at the distance at which $`\mathrm{\Gamma }`$ drops below $`1/\theta _{j0}`$, i.e. at
$$r_D\equiv r_0(\mathrm{\Gamma }_0\theta _{j0})^{2/3}\simeq 1.8\times 10^{18}\mathrm{cm}.$$
(20)
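The characteristic radii quoted above follow directly from the model parameters; a quick arithmetic check (ours, taking the energy reading $`E_0/\mathrm{\Omega }_{j0}=10^{54}\mathrm{ergs}/4\pi `$ of the parameter list):

```python
import math
m_p, c = 1.67e-24, 3.0e10                 # cgs
rho, Gamma0, theta_j0 = m_p, 300.0, 0.2
E0_over_Omega = 1.0e54 / (4.0 * math.pi)  # ergs per steradian

r0 = (3.0 * E0_over_Omega / (Gamma0 ** 2 * rho * c ** 2)) ** (1.0 / 3.0)  # Eq. (18)
r_nr = r0 * (Gamma0 / 2.0) ** (2.0 / 3.0)                                  # Eq. (19)
r_D = r0 * (Gamma0 * theta_j0) ** (2.0 / 3.0)                              # Eq. (20)

assert abs(r0 / 1.2e17 - 1.0) < 0.05      # ~1.2e17 cm
assert abs(r_nr / 3.4e18 - 1.0) < 0.05    # ~3.4e18 cm
assert abs(r_D / 1.8e18 - 1.0) < 0.05     # ~1.8e18 cm
```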
In Fig. 2 we show the radial dependence of the rate of kinetic energy accreted by the blast wave, $`dE_{acc}^{}/dt^{}`$ (see Eq. 9). As one could expect, the larger the lateral outflow speed, the larger the accretion rate is. The steepening of the curves at large $`r`$ is due to the transition from the relativistic regime ($`\mathrm{\Gamma }>2`$) to the nonrelativistic regime, where $`dE_{acc}^{}/dt^{}`$ is significantly reduced, and becomes $`∝(\mathrm{\Gamma }-1)`$ (see Eq. 9). When divided by $`ϵ_e`$, the curves in Fig. 2 also illustrate the $`r`$ dependence of the injection luminosity of relativistic electrons (see Eq. 8).
In Fig. 3 we show the time evolution of the electron energy distribution, $`N_\gamma `$, multiplied by $`\gamma ^2`$. The curves are calculated at such values of the radius $`r`$ that the signal produced on the axis $`\stackrel{~}{\theta }=0`$ reaches the observer $`t=1`$, $`10`$, $`10^2`$, …, $`10^7`$ seconds after “the signal” from $`r=0`$. The relation between $`r`$ and $`t`$ is
$$t=\int _0^r\frac{(1-\beta _\mathrm{\Gamma })}{c\beta _\mathrm{\Gamma }}𝑑r\simeq \int _0^r\frac{1}{2c\mathrm{\Gamma }^2}𝑑r.$$
(21)
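To get a feel for these numbers (our own sketch, using the approximate power-law $`\mathrm{\Gamma }(r)`$ of Eq. (17) rather than the full numerical solution), note that the observer time grows roughly as $`r^4`$ beyond $`r_0`$:

```python
c, Gamma0, r0 = 3.0e10, 300.0, 1.2e17    # cgs; r0 from Eq. (18)

def Gamma(r):                            # piecewise power law, Eq. (17)
    return Gamma0 if r < r0 else Gamma0 * (r0 / r) ** 1.5

def t_obs(r, steps=100000):              # midpoint rule for Eq. (21)
    dr = r / steps
    return sum(dr / (2.0 * c * Gamma((i + 0.5) * dr) ** 2) for i in range(steps))

assert 20.0 < t_obs(r0) < 25.0           # ~22 s when the deceleration starts
assert 3.0e4 < t_obs(10.0 * r0) < 1.0e5  # roughly half a day by r = 10 r0
```

So the observed epochs of hours to days sampled in Fig. 3 correspond to radii of order $`10^{18}`$ cm.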
The peak positions of the $`N_\gamma \gamma ^2`$ curves mark the Lorentz factor of those electrons which carry most of the leptonic energy at a given distance. For an injection spectral index $`2<p<3`$, the peak is located at $`\gamma _m`$ given by Eq. (6), and this is the case in our model. Another characteristic energy is
$$\gamma _c=6.1\times 10^{20}\frac{m_p}{ϵ_B\kappa \rho }\frac{1}{r\mathrm{\Gamma }},$$
(22)
below which the time scale of electron energy losses due to synchrotron radiation is longer than the dynamical time scale. We present the dependence of $`\gamma _m`$ and $`\gamma _c`$ on the radius $`r`$ in Fig. 4.
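For orientation, Eq. (22) can be evaluated directly. In the sketch below the values of $`ϵ_B`$, $`\kappa `$ and the external density are assumed for illustration (they are not the paper's parameters), and $`\rho `$ is taken as $`nm_p`$ for a uniform medium:

```python
M_P_GRAMS = 1.6726e-24  # proton mass, g

def gamma_c(r_cm, Gamma, eps_B=0.01, kappa=4.0, n_ext=1.0):
    # Eq. (22), with assumed eps_B, kappa and external number density;
    # rho = n_ext * m_p for a uniform hydrogen medium.
    rho = n_ext * M_P_GRAMS  # g / cm^3
    return 6.1e20 * M_P_GRAMS / (eps_B * kappa * rho) / (r_cm * Gamma)
```

With these assumed inputs, $`\gamma _c`$ comes out of order $`10^4`$ at $`r\sim 10^{17}`$ cm and grows as the blast wave decelerates, which is the qualitative behaviour shown in Fig. 4.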
We can see from Fig. 4 that for the first 5 curves presented in Fig. 3, $`\gamma _m>\gamma _c`$. In this case, in accordance with the analytical predictions, the electron spectra at $`\gamma >\gamma _m`$ are well described by a power-law function, $`N_\gamma \propto \gamma ^{-s}`$, with index $`s=p+1`$. For $`\gamma _c<\gamma <\gamma _m`$, crude analytical estimates predict $`s=2`$, which in our plot would be represented by horizontal lines. This, however, is expected to hold only for $`\gamma _c\ll \gamma <\gamma _m`$. In our model the ratio $`\gamma _m/\gamma _c`$ is not large enough to leave room for $`s=2`$, and instead there is a smooth transition to the very hard low-energy tail populated by electrons through adiabatic losses. For $`\gamma _c>\gamma _m`$, which is the case for the top 4 curves in Fig. 3, the predicted electron spectra should have slope $`s=p+1`$ for $`\gamma >\gamma _c`$ and $`s=p`$ for $`\gamma _m<\gamma <\gamma _c`$. The former is seen, but the latter, again because of the narrow range between $`\gamma _c`$ and $`\gamma _m`$, does not apply. Instead, the log-energy distribution there is curved, smoothly joining the high-energy portion of the electron spectrum with its low-energy adiabatic part. Note that the very steep low-energy tails of the curves at the top of the plot result from the fact that there is not enough time for electrons to drift adiabatically to lower energies. Note also that the details of the low-energy parts of the electron energy distribution are unimportant, because the contribution of electrons from these parts to the observed radiation is negligible.
In Fig. 5 we present the observed radiation spectra computed for the same sequence of $`t`$ as the electron energy distributions shown in Fig. 3. It should be pointed out, however, that unlike in simple analytical calculations, they are computed by integrating the electron radiation over $`t=\mathrm{const}`$ surfaces (see Eq. 16), i.e. taking into account the light travel differences between photons emitted at different $`\stackrel{~}{\theta }`$’s. The observed radiation spectra peak around $`h\nu \approx \mathrm{\Gamma }\gamma _m^2(B/B_{cr})m_ec^2`$, where the right-hand-side quantities are calculated at the $`r`$ given by Eq. (16) and $`B_{cr}=2\pi m_e^2c^3/(he)\approx 4.4\times 10^{13}`$ Gauss. As we can see from Fig. 5, the high-energy and low-energy parts of the observed radiation spectra are well described by power-law functions $`L_\nu \propto \nu ^\alpha `$, with $`\alpha =-p/2=-1.1`$ and $`\alpha =1/3`$, respectively. The former is produced, as predicted analytically, by electrons with $`\gamma >`$ Max$`[\gamma _m;\gamma _c]`$; the latter represents the low-energy synchrotron radiation of electrons with energies $`\gamma <`$ Min$`[\gamma _c;\gamma _m]`$. These high- and low-energy portions of the spectrum are joined very smoothly, without any intermediate power-law segment. This smoothing of the observed radiation spectra results mainly from the fact that the observed radiation at any given moment is contributed by radiation from $`t=\mathrm{const}`$ surfaces, i.e. from different radii.
The light curves, computed for $`\nu =4.2\times 10^{14}`$ Hz and $`\nu =2.5\times 10^{17}`$ Hz, are shown in Fig. 6. In this calculation we used $`v_l^{\prime }=c/\sqrt{3}`$ and considered two different locations of the observer, $`\stackrel{~}{\theta }=0`$ and $`\stackrel{~}{\theta }=0.28`$. The latter case corresponds to an observer located outside the initial ejecta cone.
In order to better demonstrate the beaming effect and its dependence on $`v_l^{\prime }`$, we plot in Fig. 7 four optical light curves: three for different values of $`v_l^{\prime }`$ ($`0`$, $`c/\sqrt{3}`$, and $`c`$), and a fourth for a spherical outburst. We can see that for the models with lateral expansion the steepening of the light curves extends over more than two time decades and, down to the nonrelativistic regime, does not reach the analytically predicted slope $`\beta =p`$ ($`L_\nu \propto t^{-\beta }`$). A sharp break is found only for the $`v_l^{\prime }=0`$ model, where it emerges shortly after $`t(r_D)`$, as theoretically predicted.
We should note here that our calculations for the model with lateral expansion are not fully self-consistent, because the Doppler factor includes only the radial component of the bulk motion. This, however, is expected to affect only the results of the $`v_l=c`$ model, where we overestimate the radiation contributed by the blast wave edge.
## 4 DISCUSSION AND CONCLUSIONS
Afterglows provide an exceptional opportunity to study whether, and by how much, the GRB ejecta are beamed. As predicted by Rhoads (1997), beamed outflows should diverge from the cone geometry while being decelerated by sweeping up the external gas. The sideways outflow of the shocked relativistic plasma increases the front area of the blast wave, leading to faster deceleration. Rhoads (1997) showed, using simple analytical arguments, that this should be imprinted on the light curve as a break around $`t(r_D)`$, i.e. when $`\mathrm{\Gamma }`$ drops below $`(c_l/c)/\theta `$. There the light curve should steepen, changing the slope from $`\beta =(3p-2)/4`$ (for $`\gamma >\mathrm{Max}[\gamma _c;\gamma _m]`$) or from $`\beta =3(p-1)/4`$ (for $`\gamma _m<\gamma <\gamma _c`$) (Sari, Piran, & Narayan 1998) to $`\beta =p`$ (Rhoads 1999). Our numerical results agree only qualitatively with these predictions. The steepening does occur; however, the slope change is smaller (to about $`2.0`$ instead of $`p=2.4`$) and is extended over more than two decades of the observed time.
There are two reasons why, contrary to simple analytical estimates, a distinct break does not emerge in our calculations. First, as shown in Fig. 1, the dynamics of the blast wave is affected by the lateral outflow very smoothly over the whole deceleration phase, and not just around $`r_D`$ (note that at $`r_D`$ the blast wave area is already almost $`4`$ times larger than it would be without the sideways expansion). Second, the radiation observed at any given moment $`t`$ is contributed by plasma which at larger $`\theta `$ emits from smaller $`r`$; since at smaller $`r`$ the plasma moves faster and radiates more strongly than at larger $`r`$, the contribution of the off-axis plasma to the observed radiation is larger than in the analytical case, where the contribution is taken from $`r=\mathrm{const}`$ surfaces.
The steepening of the light curve is predicted also for beamed ejecta without lateral outflows. In this case, because the dynamics do not change and the light travel effects are small at $`r>r_b`$ (note that there the Doppler cone becomes narrower than the ejecta cone), the break is very well localized, occurring just around the time $`\mathrm{\Gamma }`$ drops below $`1/\theta _j`$, and the light curve steepens by $`\mathrm{\Delta }\beta =3/4`$, in accordance with analytical predictions (see, e.g., Mészáros & Rees 1999).
It should be emphasized, however, that our treatment of the dynamics with sideways expansion is based on the approximation that at any $`r`$ the material is uniformly distributed across the blast wave. In reality, the lateral outflow can create a $`\theta `$-dependent structure, with the density of the swept-up material and the radial bulk Lorentz factor decreasing sideways; in this case the break in the light curve may become more prominent. 2D relativistic hydrodynamic simulations are required to verify this.
This project was partially supported by NSF grant No. AST-9529175, ITP/NSF grant No. PHY94-07194, and the Polish KBN grant No. 2P03D 00415. MS and RM thank the Fellows of ITP/UCSB for their hospitality during the visit and for participation in the program “Black Hole Astrophysics”. RM thanks NASA for support under the Long Term Space Astrophysics grant NASA-NAG-6337. MS acknowledges financial support from NASA/RXTE and ASTRO-E observing grants to USRA (G. Madejski, PI).
## Appendix A INVERSE COMPTON COOLING
The ratio of inverse Compton luminosity to synchrotron luminosity is given by
$$\frac{L_C^{\prime }}{L_s^{\prime }}=\frac{u_s^{\prime }}{u_B^{\prime }},$$
(A1)
where
$$u_s^{\prime }\approx \frac{L_s^{\prime }}{2c\mathrm{\Omega }_jr^2},$$
(A2)
$$L_s^{\prime }=(1-\eta _C)ϵ_{rad}ϵ_e\frac{dE_{acc}^{\prime }}{dt^{\prime }},$$
(A3)
$`dE_{acc}^{\prime }/dt^{\prime }`$ and $`u_B^{\prime }`$ are given by Eqs. (9) and (13), respectively, and $`\eta _C=L_C^{\prime }/(L_C^{\prime }+L_s^{\prime })`$. Using all these relations in Eq. (A1), we find that for $`\mathrm{\Gamma }\gg 1`$
$$\frac{L_C^{\prime }}{L_s^{\prime }}=\frac{(1-\eta _C)ϵ_{rad}ϵ_e}{2\kappa ϵ_B},$$
(A4)
and noting that $`L_C^{\prime }/L_s^{\prime }=\eta _C/(1-\eta _C)`$, we obtain
$$\eta _C=\frac{(1+2\chi )-\sqrt{1+4\chi }}{2\chi },$$
(A5)
where
$$\chi =\frac{ϵ_{rad}ϵ_e}{2ϵ_B\kappa }.$$
(A6)
For $`\chi \ll 1`$ this gives $`\eta _C\approx \chi `$.
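Eq. (A5) is the physical root of the quadratic $`\chi \eta ^2-(1+2\chi )\eta +\chi =0`$ obtained by combining Eq. (A4) with $`L_C^{\prime }/L_s^{\prime }=\eta _C/(1-\eta _C)`$. A few-line numerical check of the root and of its limits:

```python
import math

def eta_C(chi):
    # Physical root (0 < eta_C < 1) of chi*eta^2 - (1+2*chi)*eta + chi = 0,
    # i.e. Eq. (A5).
    return ((1.0 + 2.0 * chi) - math.sqrt(1.0 + 4.0 * chi)) / (2.0 * chi)
```

For $`\chi \ll 1`$ the root reduces to $`\eta _C\approx \chi `$, while $`\eta _C\to 1`$ as $`\chi \to \mathrm{\infty }`$, i.e. Compton losses dominate for large $`\chi `$.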
For $`\gamma _m>\gamma _c`$ practically all the energy transferred to electrons is radiated ($`ϵ_{rad}\approx 1`$), and the inverse Compton process is energetically negligible ($`\eta _C<1/2`$) if $`ϵ_B>10^{-2}`$. For $`\gamma _m<\gamma _c`$, the luminosity peaks at $`\nu _c`$ and
$$ϵ_{rad}\approx \left(\frac{\gamma _m}{\gamma _c}\right)^{p-2}.$$
(A7)
In the latter case, the inverse Compton process is energetically unimportant if
$$\frac{\gamma _c}{\gamma _m}\gtrsim \left(\frac{10^{-2}}{ϵ_B}\right)^{\frac{1}{p-2}}.$$
(A8)
One can easily check, using the above criteria and Fig. 4, that for our specific model the inverse Compton process does not dominate electron cooling at any moment.
It should be noted, however, that the inverse Compton process can be imprinted on the afterglow light curves even if Compton cooling is less efficient than synchrotron cooling. At the moment the Compton component drifts down into the observed band, the light curve is expected to flatten. Chiang & Dermer (1999) demonstrated that this effect can be visible in the X-ray light curves; in the optical band it appears very late and is by then too weak to be observed, especially if outshone by the host galaxy.
# Supersymmetric CP violating Phases and the LSP relic density and detection rates
## 1. Introduction
CP violation is an important test for physics beyond the standard model. In the standard model, only one CP violating phase exists, in the Kobayashi-Maskawa matrix. However, in the supersymmetric standard model there are many complex parameters, in addition to the Yukawa couplings, which lead to new sources of CP violation. These include the mass coefficient $`\mu `$ of the bilinear term involving the two Higgs doublets, the $`SU(3)`$, $`SU(2)`$ and $`U(1)`$ gaugino masses $`M_3`$, $`M_2`$ and $`M_1`$, and the parameters $`A_f`$ and $`B`$, which respectively are the coefficients of the supersymmetry breaking trilinear and bilinear couplings (the subscript $`f`$ denotes the flavor index). In the minimal supersymmetric standard model (MSSM), only two of these phases are physical. Through appropriate field redefinitions we end up with the phase of $`\mu `$ ($`\varphi _\mu `$) and the phase of $`A`$ ($`\varphi _A`$) as the physical phases which cannot be rotated away. The phase of $`B`$ is fixed by the condition that $`B\mu `$ is real. It is known that, unless these phases are sufficiently small, their contributions to the neutron electric dipole moment (EDM) exceed the experimental limit $`1.1\times 10^{-25}`$ e cm. Recently, the effect of these phases on the EDM of the neutron was examined in a model with dilaton-dominated supersymmetry (SUSY) breaking, taking into account the cancellation mechanism between the different contributions. It was shown that for a wide region of the parameter space the phase of $`\mu `$ is constrained to be of order $`10^{-1}`$, while the phase of $`A`$ is strongly correlated with that of $`\mu `$ in order not to violate the bound on the neutron EDM. The effect of SUSY CP violating phases on the relic density of the LSP has been considered for the MSSM case in Ref. , and for the supersymmetric standard model coupled to N=1 supergravity in Ref. . It was shown in Ref.
that the upper bound on the LSP mass (from $`\mathrm{\Omega }h^2\le 0.25`$) is relaxed from 250 GeV to 650 GeV. The effect of CP violation on the direct detection rates of the LSP in the MSSM has also been considered in Ref. . We argue that such a large upper bound on the LSP mass is not possible in the model we consider here. We show that in the case of a bino-like LSP the chance of the CP phases having a significant effect is very small. The impact of the phases on the direct and indirect detection rates is an important issue, and we present some details here. The paper is organized as follows. In section 2 we discuss the effect of the CP phases on the LSP mass and purity within the string inspired model considered in Ref. . In section 3 we compute the relic abundance of the LSP for low and intermediate values of $`\mathrm{tan}\beta `$. We find that the CP phases have almost no effect on the LSP relic density, so that the upper bound on the LSP mass obtained in Ref. remains unchanged. In section 4 we discuss the large $`\mathrm{tan}\beta `$ ($`\sim m_t/m_b`$) case and again find that there is no significant effect of the CP phases on the LSP relic density. In section 5 we show that the CP phases can have a substantial effect on the LSP detection rates. Our conclusions are given in section 6.
## 2. String inspired model
We will consider the string inspired model which has been recently studied in Ref. . In this model, the dilaton $`S`$ and overall modulus field $`T`$ both contribute to SUSY breaking. The soft scalar masses $`m_i`$ and the gaugino masses $`M_a`$ are given as
$`m_i^2`$ $`=`$ $`m_{3/2}^2(1+n_i\mathrm{cos}^2\theta ),`$ (1)
$`M_a`$ $`=`$ $`\sqrt{3}m_{3/2}\mathrm{sin}\theta e^{i\alpha _S},`$ (2)
where $`m_{3/2}`$ is the gravitino mass, $`n_i`$ is the modular weight of the chiral multiplet, and $`\mathrm{sin}\theta `$ defines the ratio between the $`F`$-terms of $`S`$ and $`T`$ (for example, the limit $`\mathrm{sin}\theta \to 1`$ corresponds to dilaton-dominated SUSY breaking). The phase $`\alpha _S`$ originates from the $`F`$-term of $`S`$.
The $`A`$-terms can be written as
$`A_{ijk}`$ $`=`$ $`\sqrt{3}m_{3/2}\mathrm{sin}\theta e^{i\alpha _S}-m_{3/2}\mathrm{cos}\theta (3+n_i+n_j+n_k)e^{i\alpha _T},`$ (3)
where $`n_i`$, $`n_j`$ and $`n_k`$ are the modular weights of the fields that are coupled by this $`A`$-term. One needs a correction term in eq (3) when the corresponding Yukawa coupling depends on the moduli fields. However, the $`T`$-dependent Yukawa coupling includes a suppression factor, and so we ignore it. Finally, the phase $`\alpha _T`$ originates from the $`F`$-term of $`T`$. The magnitude of the soft SUSY breaking term $`B\mu H_1H_2`$ depends on the way one generates a ‘natural’ $`\mu `$-term. Here we take $`\mu `$ and $`B`$ as free parameters and fix them by requiring successful electroweak (EW) symmetry breaking. As stated earlier, the gaugino masses as well as the $`A`$-terms and the $`B`$-term are, in general, complex. We have the freedom to rotate $`M_a`$ and $`A_{ijk}`$ at the same time. Here we use the basis in which $`M_a`$ is real. Similarly, we can rotate the phase of $`B`$ so that $`B\mu `$ itself is real; in other words, $`\varphi _B=-\varphi _\mu `$. In this basis, the $`A`$-terms contain a single phase, $`\alpha _A\equiv \alpha _T-\alpha _S`$. As shown in eqs. (1-3), the values of the soft SUSY breaking parameters at the string scale depend on the modular weights of the matter states. The modular weights $`n_i`$ of the matter fields are normally negative integers. Following the approach of Ref. , the ‘natural’ values of the modular weights for matter fields (in the case of $`Z_N`$ orbifolds) are -1, -2, -3 and -4. It was shown in Ref. that the following modular weights for the quark and lepton superfields are favorable for EW breaking:
$$n_Q=n_U=n_{H_1}=-1,$$
and
$$n_D=n_L=n_E=n_{H_2}=-2.$$
Under this assumption we have
$$A_t=A_b=\sqrt{3}m_{3/2}\mathrm{sin}\theta +m_{3/2}\mathrm{cos}\theta e^{i\alpha _A},$$
(4)
and
$$A_\tau =\sqrt{3}m_{3/2}\mathrm{sin}\theta +2m_{3/2}\mathrm{cos}\theta e^{i\alpha _A},$$
(5)
Given the boundary conditions in eqs. (1-3) at the compactification scale, we determine the evolution of the couplings and the mass parameters according to their one loop renormalization group equation in order to estimate the mass spectrum of the SUSY particles at the weak scale. The radiative EW symmetry breaking imposes the following conditions on the renormalized quantities:
$$m_{H_1}^2+m_{H_2}^2+2\mu ^2>2B\mu ,$$
(6)
$$(m_{H_1}^2+\mu ^2)(m_{H_2}^2+\mu ^2)<(B\mu )^2,$$
(7)
$$\mu ^2=\frac{m_{H_1}^2-m_{H_2}^2\mathrm{tan}^2\beta }{\mathrm{tan}^2\beta -1}-\frac{M_Z^2}{2},$$
(8)
and
$$\mathrm{sin}2\beta =\frac{2B\mu }{m_{H_1}^2+m_{H_2}^2+2\mu ^2},$$
(9)
where $`\mathrm{tan}\beta =\langle H_2^0\rangle /\langle H_1^0\rangle `$ is the ratio of the two Higgs VEVs that give masses to the up and down type quarks, and $`m_{H_1}^2`$, $`m_{H_2}^2`$ are the two soft Higgs mass-squared parameters at the EW scale. Using the above equations we can determine $`|\mu |`$ and $`B`$ in terms of $`m_{3/2}`$, $`\theta `$ and $`\alpha _A`$; the phase $`\varphi _\mu `$ remains undetermined. Since we are interested in investigating the effect of the supersymmetric phases $`\alpha _A`$ and $`\varphi _\mu `$ on the relic density of the LSP and on its direct and indirect detection rates, we first study the allowed regions of these phases and later impose the constraints (derived in Ref. ) from the experimental bounds on the electric dipole moments. The neutralinos $`\chi _i^0`$ ($`i=1,2,3,4`$) are the physical (mass) superpositions of the Higgsinos $`\stackrel{~}{H}_1^0`$, $`\stackrel{~}{H}_2^0`$ and the two neutral gauginos $`\stackrel{~}{B}^0`$ (bino) and $`\stackrel{~}{W}_3^0`$ (wino). The neutralino mass matrix is given by
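Eqs. (8) and (9) can be inverted numerically: given the running soft Higgs mass-squared parameters and $`\mathrm{tan}\beta `$, they fix $`|\mu |^2`$ and $`B\mu `$. The sketch below implements this step with illustrative input values (they are not the values obtained from the RGE running in the paper):

```python
import math

MZ = 91.19  # Z boson mass, GeV

def ewsb_mu2_Bmu(mH1_sq, mH2_sq, tan_beta):
    # Eq. (8): |mu|^2 from the EW minimization conditions.
    t2 = tan_beta ** 2
    mu2 = (mH1_sq - mH2_sq * t2) / (t2 - 1.0) - 0.5 * MZ ** 2
    # Eq. (9): sin(2 beta) = 2 B mu / (mH1^2 + mH2^2 + 2 mu^2).
    sin2b = 2.0 * tan_beta / (1.0 + t2)
    Bmu = 0.5 * sin2b * (mH1_sq + mH2_sq + 2.0 * mu2)
    return mu2, Bmu
```

A valid parameter point must return $`|\mu |^2>0`$; points that do not are excluded by the requirement of radiative EW symmetry breaking.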
$$M_N=\left(\begin{array}{cccc}M_1& 0\hfill & -M_Z\mathrm{cos}\beta \mathrm{sin}\theta _W& \hfill M_Z\mathrm{sin}\beta \mathrm{sin}\theta _W\\ 0& M_2\hfill & M_Z\mathrm{cos}\beta \mathrm{cos}\theta _W& \hfill -M_Z\mathrm{sin}\beta \mathrm{cos}\theta _W\\ -M_Z\mathrm{cos}\beta \mathrm{sin}\theta _W& M_Z\mathrm{cos}\beta \mathrm{cos}\theta _W\hfill & 0& \hfill -\mu e^{i\varphi _\mu }\\ M_Z\mathrm{sin}\beta \mathrm{sin}\theta _W& -M_Z\mathrm{sin}\beta \mathrm{cos}\theta _W\hfill & -\mu e^{i\varphi _\mu }& \hfill 0\end{array}\right),$$
(10)
where $`M_1`$ and $`M_2`$ now refer to ‘low energy’ quantities whose asymptotic values are given in equation (2). The lightest eigenstate $`\stackrel{~}{\chi }_1^0`$ is a linear combination of the original fields:
$$\stackrel{~}{\chi }_1^0=N_{11}\stackrel{~}{B}+N_{12}\stackrel{~}{W}^3+N_{13}\stackrel{~}{H}_1^0+N_{14}\stackrel{~}{H}_2^0,$$
(11)
where the unitary matrix $`N_{ij}`$ relates the $`\stackrel{~}{\chi }_i^0`$ fields to the original ones. The entries of this matrix depend on $`m_{3/2}`$, $`\theta `$ and $`\varphi _\mu `$. The dependence of the $`\stackrel{~}{\chi }_1^0`$ (LSP) mass on $`\varphi _\mu `$ is shown in figure 1 for $`m_{3/2}=100`$ GeV, $`\mathrm{cos}^2\theta =1/2`$ and $`\alpha _A=\pi /2`$.
A useful parameter for describing the neutralino composition is the gaugino “purity” function
$$f_g=|N_{11}|^2+|N_{12}|^2$$
(12)
We plot this function versus $`\varphi _\mu `$ in figure (2), which clearly shows that the LSP is essentially a pure bino.
These two figures show that the neutralino mass and composition are only slightly dependent on the supersymmetric phase $`\varphi _\mu `$.
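The content of Eqs. (10)-(12) can be reproduced numerically: build $`M_N`$ at a trial parameter point, extract the mass eigenstates from the Hermitian combination $`M_N^{}M_N`$ (whose eigenvalues are the squared masses and whose eigenvectors give the composition), and evaluate the gaugino purity of the lightest state. The sign conventions and input values below are illustrative assumptions, not the paper's fit:

```python
import numpy as np

MZ, SW2 = 91.19, 0.231
SW, CW = np.sqrt(SW2), np.sqrt(1.0 - SW2)

def lsp_mass_and_purity(M1, M2, mu, tan_beta, phi_mu):
    b = np.arctan(tan_beta)
    sb, cb = np.sin(b), np.cos(b)
    mue = mu * np.exp(1j * phi_mu)          # complex mu entry of Eq. (10)
    MN = np.array([
        [M1,         0.0,        -MZ*cb*SW,  MZ*sb*SW],
        [0.0,        M2,          MZ*cb*CW, -MZ*sb*CW],
        [-MZ*cb*SW,  MZ*cb*CW,    0.0,      -mue],
        [MZ*sb*SW,  -MZ*sb*CW,   -mue,       0.0]], dtype=complex)
    # For a complex symmetric mass matrix the |masses|^2 are the
    # eigenvalues of MN^dagger MN; the eigenvectors give the composition.
    w, v = np.linalg.eigh(MN.conj().T @ MN)
    lsp = v[:, 0]                             # lightest state (ascending order)
    mass = np.sqrt(w[0])
    purity = abs(lsp[0])**2 + abs(lsp[1])**2  # Eq. (12): bino + wino fraction
    return mass, purity
```

For $`M_1\ll M_2,|\mu |`$ the lightest state sits close to $`M_1`$ and the purity is near unity, which is the bino-dominated regime discussed in the text.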
## 3. Relic Abundance Calculation for low and <br>intermediate values of $`\mathrm{tan}\beta `$
In this section we compute the relic density of the LSP in the case of low $`\mathrm{tan}\beta `$ (i.e. $`\mathrm{tan}\beta \simeq 3`$), as well as for intermediate $`\mathrm{tan}\beta `$ values (i.e. $`\mathrm{tan}\beta \simeq 15`$). We use a standard method in which we expand the thermally averaged cross section $`\langle \sigma _Av\rangle `$ as
$$\langle \sigma _Av\rangle =a+bv^2+\cdots ,$$
(13)
where $`v`$ is the relative velocity, $`a`$ is the s-wave contribution at zero relative velocity, and $`b`$ contains contributions from both the s and p waves. The relic abundance is then given by
$$\mathrm{\Omega }_\chi h^2=\frac{\rho _\chi }{\rho _c/h^2}\simeq 2.82\times 10^8Y_{\mathrm{\infty }}(m_\chi /GeV),$$
(14)
where
$$Y_{\mathrm{\infty }}^{-1}=0.264g_{\ast }^{1/2}M_Pm_\chi \left(\frac{a}{x_F}+\frac{3b}{x_F^2}\right),$$
(15)
$`h`$ is the well-known Hubble parameter, $`0.4\le h\le 0.8`$, and $`\rho _c\simeq 2\times 10^{-29}h^2\mathrm{g}\mathrm{cm}^{-3}`$ is the critical density of the universe. The freeze-out temperature is given by
$$x_F=\mathrm{ln}\frac{0.0764M_P(a+6b/x_F)c(2+c)m_\chi }{\sqrt{g_{}x_F}}$$
(16)
Here $`x_F=m_\chi /T_F`$, $`M_P=1.22\times 10^{19}`$ GeV is the Planck mass, and $`g_{\ast }`$ ($`8\le \sqrt{g_{\ast }}\le 10`$) is the effective number of relativistic degrees of freedom at $`T_F`$. Also, $`c=1/2`$, as explained in Ref. . Given that the LSP is bino-like, the annihilation proceeds predominantly into leptons, with the other channels either closed or suppressed. It is worth noting that squark exchange is suppressed due to the large squark masses, as figure (3) shows.
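Eqs. (14)-(16) define a standard pipeline: iterate the implicit Eq. (16) to self-consistency for $`x_F`$, then form $`Y_{\mathrm{\infty }}`$ and $`\mathrm{\Omega }_\chi h^2`$. A minimal sketch follows; the values of $`g_{\ast }`$ and of the expansion coefficients $`a`$ and $`b`$ below are illustrative assumptions, whereas in the paper they come from the computed annihilation cross section:

```python
import math

M_PLANCK = 1.22e19   # Planck mass, GeV
G_STAR = 81.0        # relativistic d.o.f. at freeze-out (assumed, sqrt ~ 9)
C_FO = 0.5           # the constant c in Eq. (16)

def freeze_out_x(m_chi, a, b, x0=20.0, iters=100):
    # Eq. (16) is implicit in x_F; solve by fixed-point iteration.
    x = x0
    for _ in range(iters):
        x = math.log(0.0764 * M_PLANCK * (a + 6.0 * b / x)
                     * C_FO * (2.0 + C_FO) * m_chi / math.sqrt(G_STAR * x))
    return x

def omega_h2(m_chi, a, b):
    # Eqs. (14) and (15); m_chi in GeV, a and b in GeV^-2.
    xF = freeze_out_x(m_chi, a, b)
    Y_inf = 1.0 / (0.264 * math.sqrt(G_STAR) * M_PLANCK * m_chi
                   * (a / xF + 3.0 * b / xF**2))
    return 2.82e8 * Y_inf * m_chi
```

For a p-wave dominated bino ($`a\approx 0`$), a larger $`b`$ means more efficient annihilation and hence a smaller relic density, which is why heavier sleptons (smaller $`b`$) push $`\mathrm{\Omega }h^2`$ up.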
The annihilation process is dominated by the exchange of the right slepton. In fact, the masses of the right sleptons are essentially independent of $`\alpha _A`$ and $`\varphi _\mu `$ unless there is a significant amount of slepton mixing; here, the off-diagonal elements of the mass matrices are much smaller than the diagonal elements $`M_{l_L}^2`$ and $`M_{l_R}^2`$. Furthermore, since the LSP is essentially a bino, it depends only slightly on the phase of $`\mu `$, as figure (1) confirms. Therefore, we find that the constraint on the relic density, $`0.1\le \mathrm{\Omega }_{\mathrm{LSP}}\le 0.9`$ with $`0.4\le h\le 0.8`$, leads, as figure (4) shows, to the previously known upper bound on the LSP mass found in the case of vanishing SUSY phases, namely $`m_\chi \lesssim 250`$ GeV.
This result is different from the one discussed in Ref. , where it was claimed that the CP violating phases have a significant effect, in that the cosmological upper bound on the bino mass is increased from 250 GeV to 650 GeV. This enhancement can be traced to the assumptions made in that model, namely that all the scalar masses are equal and of order $`M_W`$ at the weak scale. Also, the sfermion mixings were assumed to be large (with $`\mu \sim `$ TeV). It turns out that such assumptions may lead to unacceptable charge and color breaking, as explained in Ref. . Moreover, they cannot be motivated from supergravity or superstring models. We have also considered the relic density for intermediate $`\mathrm{tan}\beta `$ (i.e. $`\mathrm{tan}\beta \simeq 15`$). We find no significant difference between this and the low $`\mathrm{tan}\beta `$ case. The upper bound on the LSP mass is still of order 250 GeV.
## 4. SUSY CP phases with Large $`\mathrm{tan}\beta `$
We now extend our study to the case where $`\mathrm{tan}\beta `$ is large ($`\sim 50`$). In a large class of supersymmetric models with flavor $`U(1)`$ symmetry, $`\mathrm{tan}\beta \sim (\frac{m_t}{m_b})ϵ^n`$, where $`ϵ\simeq 0.2`$ ($`\simeq `$ the Cabibbo angle) is a ‘small’ expansion parameter, and $`n=0,1,2`$ (see for instance and references therein). For $`\mathrm{tan}\beta \sim \frac{m_t}{m_b}`$, it is known that the phenomenological aspects of these models are very different from the small $`\mathrm{tan}\beta `$ case. In particular, radiative EW symmetry breaking is an important non-trivial issue. Non-universality, such as $`m_{H_1}^2>m_{H_2}^2`$ at the Planck scale, is favored for successful EW breaking with large $`\mathrm{tan}\beta `$. Furthermore, non-universality of the squark and slepton masses can affect the symmetry breaking as well as other phenomenological aspects. We have adopted this non-universality in our choice of the modular weights in section 2. In the large $`\mathrm{tan}\beta `$ case the Higgs potential has two characteristic features. It follows from the minimization conditions that
$$m_2^2\simeq -\frac{M_Z^2}{2},$$
(17)
$$m_3^2\simeq \frac{M_A^2}{\mathrm{tan}^2\beta }\approx 0,$$
(18)
with
$$M_A^2=m_1^2+m_2^2>0.$$
(19)
Here, $`m_i^2=m_{H_i}^2+\mu ^2`$, $`i=1,2`$, and $`m_3^2=B\mu `$. A combination of eqs. (17) and (19) gives the following constraint on the low energy parameters
$$m_1^2-m_2^2>M_Z^2,$$
(20)
i.e. $`m_{H_1}^2-m_{H_2}^2>M_Z^2`$. In order to have electroweak breaking in the large $`\mathrm{tan}\beta `$ case, the difference between the masses of the two Higgs fields should satisfy the above inequality. In our model we find that this inequality is indeed satisfied, and the EW symmetry is broken at the weak scale. Also, one of the stau leptons ($`\stackrel{~}{\tau }_R`$) has a ‘small’ mass of order O(100) GeV and happens to be the lightest slepton; it therefore dominates the LSP annihilation process. This relaxes the upper bound on the LSP mass from 250 to 300 GeV. Thus, even in the large $`\mathrm{tan}\beta `$ case the effect of the supersymmetric phases is relatively small, as figure (5) shows. This essentially follows because the diagonal elements of the stau mass matrices in this model are larger than the off-diagonal ones, i.e., there is no large mixing, as well as from the fact that the LSP is bino-like.
## 5. CP phases and detection rates of the LSP
We have seen that the effects of the CP violating phases on the neutralino relic density are very small. In this section we examine the effect of these phases on the event rates of relic neutralinos scattering off nuclei in terrestrial detectors. Direct detection experiments provide the most natural way of searching for neutralino dark matter. Any large CP violating phases can affect the detection rate, as we will see below. It is interesting to note that the measured event rate may shed light on the values of the supersymmetric phases. The differential detection rate is given by
$$\frac{dR}{dQ}=\frac{\sigma \rho _\chi }{2m_\chi m_r^2}F^2(Q)_{v_{min}}^{\mathrm{}}\frac{f_1(v)}{v}𝑑v,$$
(21)
where $`f_1(v)`$ is the distribution of speeds relative to the detector. The reduced mass is $`m_r=\frac{m_\chi m_N}{m_\chi +m_N}`$, where $`m_N`$ is the mass of the nucleus, $`v_{min}=(\frac{Qm_N}{2m_r^2})^{1/2}`$, $`Q`$ is the energy deposited in the detector, and $`\rho _\chi `$ is the density of neutralinos near the Earth. $`\sigma `$ is the elastic-scattering cross section of the LSP with a given nucleus. In general $`\sigma `$ has two contributions: a spin-dependent contribution arising from $`Z^0`$ and $`\stackrel{~}{q}`$ exchange diagrams, and a spin-independent (scalar) contribution due to the Higgs and squark exchange diagrams. For a $`{}_{}{}^{76}Ge`$ detector, since the total nuclear spin of $`{}_{}{}^{76}Ge`$ is equal to zero, we have contributions only from the scalar part.
$$\sigma =\frac{4m_r^2}{\pi }[Zf_p+(A-Z)f_n]^2,$$
(22)
where $`Z`$ is the nuclear charge and $`A-Z`$ is the number of neutrons. The expressions for $`f_p`$ and $`f_n`$, and their dependence on the SUSY phases, can be found in Ref. . The effect of the CP violating phases enters through the neutralino eigenvector components $`N_{ij}`$, and also through the matrices that diagonalize the squark mass matrices. Finally, $`F(Q)`$ in (21) is the nuclear form factor. We use the standard parameterization
$$F(Q)=\frac{3j_1(qR_1)}{qR_1}e^{-\frac{1}{2}q^2s^2},$$
(23)
where the momentum transfer $`q^2=2m_NQ`$, $`R_1=(R^2-5s^2)^{1/2}`$ with $`R=1.2A^{1/3}`$ fm, and $`A`$ is the mass number of $`{}_{}{}^{76}Ge`$. $`j_1`$ is the spherical Bessel function and $`s\approx 1`$ fm. The ratio $`R`$ of the event rate with non-vanishing CP violating phases to the event rate in the absence of these phases is presented in figure (6). The solid curve corresponds to the case $`\alpha _A=0`$, while the dashed one corresponds to $`\alpha _A=\pi /2`$.
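Eq. (23) can be implemented in a few lines. The unit handling below (GeV to fm via $`\mathrm{}c`$) and the choice of recoil-energy argument are ours; the shape follows Eq. (23), with $`s\approx 1`$ fm and the standard nuclear-radius scaling $`R\approx 1.2A^{1/3}`$ fm:

```python
import math

HBARC = 0.19733  # hbar*c in GeV fm

def form_factor(Q_keV, A=76):
    # Eq. (23) with s = 1 fm, R = 1.2 A^(1/3) fm, R1 = sqrt(R^2 - 5 s^2).
    m_N = 0.9315 * A                           # nuclear mass, GeV
    q = math.sqrt(2.0 * m_N * Q_keV * 1e-6)    # momentum transfer, GeV
    q_fm = q / HBARC                           # convert to fm^-1
    s = 1.0
    R = 1.2 * A ** (1.0 / 3.0)
    R1 = math.sqrt(R * R - 5.0 * s * s)
    x = q_fm * R1
    j1 = math.sin(x) / x**2 - math.cos(x) / x  # spherical Bessel j1
    return 3.0 * j1 / x * math.exp(-0.5 * (q_fm * s) ** 2)
```

The form factor approaches unity at zero momentum transfer and suppresses the rate at larger recoil energies, which is why the differential rate in Eq. (21) falls faster than the velocity integral alone would suggest.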
From this figure it is clear that, in the model we are considering, the CP violating phases can significantly affect the event rates for direct detection of the LSP. The phase $`\varphi _\mu `$ reduces the value of $`R`$, while the phase $`\alpha _A`$ of the trilinear coupling increases it. However, as explained in Ref. , $`\varphi _\mu `$ is constrained by the experimental limit on the electric dipole moment to be of order $`10^{-1}`$. For completeness, we also examine the effect of the CP violating phases on the indirect detection rates of the LSP in the halo. The observation of energetic neutrinos from the annihilation of LSPs that accumulate in the sun or in the earth is a promising method for detecting them. The technique for detecting such energetic neutrinos is the observation of upward-going muons produced by charged-current interactions of the neutrinos in the rock below the detector. The flux of such muons from neutralino annihilation in the sun is given by
$$\mathrm{\Gamma }=2.9\times 10^{-8}\mathrm{m}^2\mathrm{yr}^1\mathrm{tanh}^2(t/\tau )\left(\frac{\rho _\chi }{0.3\mathrm{GeV}\mathrm{cm}^{-3}}\right)f(m_\chi )\zeta (m_\chi )\left(\frac{m_\chi }{GeV}\right)^2\left(\frac{f_p}{GeV^{-2}}\right)^2.$$
(24)
The neutralino-mass dependence of the capture rates is described by
$$f(m_\chi )=\underset{i}{}f_i\varphi _iS_i(m_\chi )F_i(m_\chi )\frac{m_i^3m_\chi }{(m_\chi +m_i)^2},$$
(25)
where the quantities $`\varphi _i`$ and $`f_i`$ describe the distribution of element $`i`$ in the sun and are listed in Ref. , the quantity $`S_i(m_\chi )=S(\frac{m_\chi }{m_{N_i}})`$ is the kinematic suppression factor for the capture of a neutralino of mass $`m_\chi `$ by a nucleus of mass $`m_{N_i}`$, and $`F_i(m_\chi )`$ is the form factor suppression for the capture of a neutralino of mass $`m_\chi `$ by nucleus $`i`$. Finally, the function $`\zeta (m_\chi )`$ describes the energy spectrum from neutralino annihilation for a given mass.
In Figure (7) we present the ratio of the muon flux from neutralinos captured in the sun with non-vanishing CP violating phases to that with vanishing $`\varphi _A`$, for $`\rho _\chi =0.3GeV/cm^3`$. We see that the predicted muon flux increases as the phase of the $`A`$-term is increased. We can understand this significant effect of the CP violating phases on the detection rate as follows. The phases affect the neutralino eigenvector components $`N_{ij}`$ and the squark mass matrices; consequently, they have a significant effect on the neutralino coupling to quarks. The spin-independent contribution, as also shown in Refs. and , decreases with increasing phase of $`\mu `$, and goes in the other direction as the phase of the $`A`$-term is increased. This leads to the same behaviour for the elastic scattering cross section, which translates this dependence on the phases of $`\mu `$ and $`A`$ into the detection rates, as figures 6 and 7 confirm.
## 6. Conclusions
We have studied the impact of the CP violating phases from the soft SUSY breaking terms in string-inspired models on the LSP, its purity and its relic abundance. For different values of $`\mathrm{tan}\beta `$ (of order unity, intermediate, and of order $`m_t/m_b`$), we found that these phases have no significant effect on the LSP relic density, so that the upper bound on the LSP mass is essentially unchanged. We also examined the effect of these phases on the direct and indirect detection rates. We found that increasing the value of the phase $`\varphi _\mu `$ leads to a decrease in the event rates, while the phase $`\varphi _A`$ of the trilinear coupling has the opposite effect.
## Acknowledgments
S.K. would like to acknowledge the support provided by the Fulbright Commission and the hospitality of the Bartol Research Institute. Q.S. is supported in part by DOE Grant No. DE-FG02-91ER40626 and by the NATO contract number CRG-970149.
# X-ray timing behaviour of Cygnus X-2 at low intensities
## 1 Introduction
Cygnus X-2 is one of the brightest persistent low-mass X-ray binaries. It varies on time scales from milliseconds to months (e.g. Kuulkers, van der Klis & Vaughan 1996, Wijnands, Kuulkers & Smale 1996, Wijnands et al. 1997a, 1998a). The primary is a neutron star, while the donor star is an A9 subgiant. They orbit each other with a period of ∼9.8 days (Cowley, Crampton & Hutchings 1979; Casares, Charles & Kuulkers 1998). The mass accretion rate is high (Ṁ∼10<sup>18</sup> g s<sup>-1</sup>), giving rise to near-Eddington X-ray luminosities (see Smale 1998). The source shows the Z source behaviour in X-ray colour-colour and hardness-intensity diagrams, with associated fast timing (≲100 s) properties that are characteristic of such high X-ray luminosities. It shows type I X-ray bursts (see Smale 1998, and references therein) and kilohertz (kHz) quasi-periodic oscillations (QPO; Wijnands et al. 1998a). Five other persistent high-luminosity neutron stars show similar X-ray behaviour. They are referred to as “Z” sources, because of the Z shape of the tracks they trace out in the colour-colour diagram (Hasinger & van der Klis 1989).
The limbs of the Z are, from top to bottom, called the horizontal branch, normal branch and flaring branch. It is thought that the mass-accretion rate increases from the horizontal branch, through the normal branch, to the flaring branch. On the horizontal branch and the upper part of the normal branch, QPO are present with frequencies varying between $``$15 and $``$60 Hz (called horizontal branch QPO or HBO), together with a noise component below $``$20 Hz (called low-frequency noise or LFN). On the normal branch different QPO (called normal branch QPO or NBO) are present with frequencies of 5–7 Hz. In the Z sources Sco X-1 and GX 17+2 the normal branch QPO merge smoothly into flaring branch QPO (called FBO) with frequencies of up to $``$20 Hz on the lower part of the flaring branch. No such flaring branch QPO have been reported in Cyg X-2, although $``$26 Hz QPO were seen when Cyg X-2 was in the upper part of the flaring branch, during an intensity ‘dip’ (Kuulkers & van der Klis 1995).
The Rossi X-ray Timing Explorer (RXTE) has opened up a new window on low-mass X-ray binaries in the millisecond regime. kHz QPO (for a review see e.g. van der Klis 1998) have been detected in all six Z sources (van der Klis et al. 1996, 1997; Wijnands et al. 1997b, 1998a, 1998b; Jonker et al. 1998; Zhang, Strohmayer & Swank 1998). The frequency of the kHz QPO increases with increasing mass-accretion rate.
Of the Z sources, Cyg X-2 displays the most noticeable variations in the X-ray intensity on long time scales (days to months, e.g. Smale & Lochner 1992; Wijnands et al. 1996; see also Kong, Charles & Kuulkers 1998). These so-called secular variations (which have been empirically divided into three intensity intervals, called low, medium and high intensity state) recur on a time scale of $``$78 days and are associated with systematic changes in position and shape of the Z track in the colour-colour and hardness-intensity diagrams (Kuulkers et al. 1996; Wijnands et al. 1996; 1997a). They are clearly distinct from the process by which the source traces out the Z track itself on a time scale of hours to a day. As the source goes from the medium intensity to high intensity state (or vice versa) the fast timing properties in at least the normal branch change (Wijnands et al. 1997a).
On one occasion when Cyg X-2 was very faint in X-rays, no clear Z pattern was seen in the colour-colour and hardness-intensity diagram, but only a long diagonal branch associated with flaring behaviour which is stronger at higher energies (see Kuulkers et al. 1996). Since the fast timing properties of the source in the low intensity state were unknown we proposed to observe the source in this rare state by using the RXTE All Sky Monitor (ASM) to trigger pointed observations. In this paper we report on the results of these observations.
## 2 Observations and analysis
The RXTE Proportional Counter Array (PCA, Bradt, Rothschild & Swank 1993) obtained Target of Opportunity observations of Cyg X-2 on 1996 October 31 05:29–08:00 UTC (orbital phase according to Casares et al. 1998: $`\varphi _{\mathrm{orb}}`$$``$0.48–0.49, where phase zero corresponds to X-ray source superior conjunction), 1997 September 28 09:24–18:13 UTC ($`\varphi _{\mathrm{orb}}`$$``$0.22–0.26) and 1997 September 29 04:36–13:42 UTC ($`\varphi _{\mathrm{orb}}`$$``$0.30–0.34), when the ASM rate dropped below $``$20 counts s<sup>-1</sup> SSC<sup>-1</sup>. Most of the data were collected with all five proportional counter units (PCUs) on, simultaneously at a time resolution of 16 s (129 photon energy channels, effectively covering 2–60 keV) and at resolutions down to 16 $`\mu `$s using various timing (event, binned and single-bit) modes covering the 2–60 keV range.
We constructed colour-colour and hardness-intensity diagrams from the 16 s data using the same energy ranges as Wijnands et al. (1998a). The intensity is defined as the 3-PCU count rate in the energy band 2.0–16.0 keV, whereas the soft and hard colours are defined as the logarithm of the count rate ratios between 3.5–6.4 keV and 2.0–3.5 keV and between 9.7–16.0 keV and 6.4–9.7 keV, respectively. All count rates were corrected for background.
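The colour and intensity definitions above amount to simple band arithmetic. As an illustration, here is a minimal sketch — the band-rate variable names are ours, and the actual extraction of background-corrected rates from PCA channel data is not shown:

```python
import math

def colours(r_20_35, r_35_64, r_64_97, r_97_160):
    """Intensity and X-ray colours from background-corrected count
    rates (counts/s, 3 PCUs) in the four energy bands (keV)."""
    intensity = r_20_35 + r_35_64 + r_64_97 + r_97_160    # 2.0-16.0 keV
    soft = math.log10(r_35_64 / r_20_35)    # (3.5-6.4) / (2.0-3.5) keV
    hard = math.log10(r_97_160 / r_64_97)   # (9.7-16.0) / (6.4-9.7) keV
    return intensity, soft, hard
```

Each 16 s data point then contributes one (soft, hard) point to the colour-colour diagram and one (colour, intensity) point to each hardness-intensity diagram.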
Power density spectra were made from the high time resolution data using 16 s data stretches, also in the same energy range as Wijnands et al. (1998a), i.e., 5.0–60 keV. In order to study the low-frequency ($``$100 Hz) behaviour we fitted the 0.125–256 Hz power spectra with a constant representing the dead time modified Poisson noise, Lorentzians or exponentially cut-off power laws to describe peaked noise components, and a power law describing the underlying continuum (called very-low-frequency noise or VLFN). To search for kHz QPO we fitted the 256–2048 Hz power spectra with a function described by a constant and a Lorentzian to describe any QPO. Errors quoted for the power spectral parameters were determined using $`\mathrm{\Delta }\chi ^2`$=1. Upper limits were determined using $`\mathrm{\Delta }\chi ^2`$=2.71, corresponding to 95% confidence levels.
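The composition of the low-frequency fit function described above can be sketched as follows. This is a schematic of the model only — the parameter names are ours, and the actual fits also allow exponentially cut-off power laws for the peaked-noise components:

```python
import numpy as np

def psd_model(f, c_poisson, a_qpo, f0, fwhm, a_vlfn, alpha):
    """Low-frequency (0.125-256 Hz) power-spectrum model: a constant
    for the dead-time-modified Poisson level, a Lorentzian for a QPO
    (or peaked-noise) component, and a power law for the VLFN."""
    hwhm = fwhm / 2.0
    qpo = a_qpo * (hwhm / np.pi) / ((f - f0) ** 2 + hwhm ** 2)
    vlfn = a_vlfn * f ** (-alpha)
    return c_poisson + qpo + vlfn
```

Upper limits then correspond to scanning a component's normalization until the fit $`\chi ^2`$ rises by 2.71 above its minimum (95% confidence), as quoted throughout Section 3.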
## 3 Results
### 3.1 Colour-colour and hardness-intensity diagrams
In Fig. 1 we show the colour-colour diagram and in Figs. 2 and 3 the hardness-intensity diagrams of the individual observations (a–c) and combined (d) together with the data points of Wijnands et al. (1998a). All our data points correspond to X-ray intensities of $``$2000–2500 counts s<sup>-1</sup> (3 PCUs), so we succeeded in catching the source at its lowest intensity levels. The observations obtained in 1996 October seem to be extensions towards lower intensity on the horizontal branch. This is apparent in both the hard hardness-intensity diagram and colour-colour diagram by comparing with the data of Wijnands et al. (1998a). In the soft hardness-intensity diagram, however, the 1996 October data fall slightly below their horizontal branch.
The observations obtained in 1997 September, however, cannot be immediately placed within the general Z pattern behaviour of the source as defined by the Wijnands et al. (1998a) data. In both hardness-intensity diagrams the 1997 September data describe a slightly curved branch, which does not fall on top of the earlier data points. In the colour-colour diagram the September 29 observations trace out a curved track; the September 28 observations fall on top of the September 29 data points and on top of the upper part of the normal branch of Wijnands et al. (1998a). It looks as if the curved track represents the lower part of the normal branch and the flaring branch, but shifted to higher soft and hard colours. However, in the hardness-intensity diagrams no clear indications of “flaring” or “dipping” behaviour (see Kuulkers et al. 1996) can be found.
### 3.2 Power spectra
#### 3.2.1 1996 October
The mean power spectrum (5–60 keV) of the 1996 October data (Fig. 4a; total of $``$6.3 ksec) clearly showed horizontal branch QPO near $``$19 Hz together with a higher harmonic near $``$38 Hz on top of a low-frequency noise component, confirming that the source was at the left end of the horizontal branch. A fit to this power spectrum (LFN + 2 QPO) resulted in a reduced $`\chi ^2`$ of 1.72 for 114 degrees of freedom (dof). This is not a good fit; in fact, a close inspection of the power spectrum reveals that the harmonic is not well fitted with this model. We therefore added another cut-off power-law component with a cut-off near 20 Hz, i.e. a so-called high-frequency noise (HFN) component, which significantly improved the fit at high frequencies: reduced $`\chi ^2=1.42`$ for 112 dof. Moreover, the resulting fit describes the second harmonic much better: the centroid frequency of the harmonic is 37.6$`\pm `$0.2 Hz, compared to 36.5$`\pm `$0.3 Hz without the high-frequency noise component. The ratio of the harmonic frequency to the QPO frequency is 1.96$`\pm `$0.01 (compared with 1.90$`\pm `$0.01 without the high-frequency noise component). The full-width-at-half-maximum (FWHM) is $``$10 Hz, compared to $``$20 Hz without the high-frequency noise component. The resulting fit to the power spectrum including the high-frequency noise component is shown in Fig. 4a.
The QPO and noise components are strong enough to allow the data to be divided into two parts. We computed the S<sub>Z</sub> values, which measure the position along the Z in the hard hardness-intensity diagram, using the Z track of Wijnands et al. (1998a). In Table 1 we give the results of fits to the power spectra corresponding to the two selected regions. As Cyg X-2 moves further onto the horizontal branch (to lower inferred mass-accretion rate), the frequencies of the horizontal branch QPO and the harmonic decrease, as expected.
We found no evidence for kHz QPO with upper limits of $``$3.4%, when fixing the FWHM at 150 Hz. This is significantly different from earlier observations in the same part of the horizontal branch (see Section 4.3).
#### 3.2.2 1997 September
The variability during both the September 28 and 29 observations was low. In Table 2 and Figs. 4b–d we give the results of the fits to the power spectra for the September 28 and 29 observations.
The mean power spectrum of the September 28 observations (total of $``$22.6 ksec) showed weak power-law noise ($``$1.3%, Fig. 4b). Weak QPO near $``$40 Hz are, however, discernible (see inset in Fig. 4b). We fitted these QPO and found that they were significant at the $``$4$`\sigma `$ level, as estimated both from an F-test for the inclusion of the QPO \[$`\chi ^2`$/dof=104/97 vs. $`\chi ^2`$/dof=130/100\] and from the 68% confidence error-scan of the integral power in $`\chi ^2`$-space, i.e. $`\mathrm{\Delta }\chi ^2`$=1. Taking into account the number of trials decreases the significance to $``$3$`\sigma `$. Subdividing the colour-colour and hardness-intensity diagram tracks of this observation did not reveal significant differences in the power spectral shapes.
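The F-test quoted above compares the $`\chi ^2`$ improvement from adding the QPO component to the fit quality of the richer model. With the numbers given (reduced fit 130/100 dof without the QPO, 104/97 dof with it), a minimal sketch (the function name is ours):

```python
def f_statistic(chi2_without, dof_without, chi2_with, dof_with):
    """F statistic for adding model components (here, one Lorentzian
    QPO, i.e. three extra free parameters) to a power-spectrum fit."""
    n_extra = dof_without - dof_with
    delta_per_param = (chi2_without - chi2_with) / n_extra
    return delta_per_param / (chi2_with / dof_with)

# Numbers quoted in the text: chi^2/dof = 130/100 vs. 104/97
F = f_statistic(130.0, 100, 104.0, 97)   # about 8.1 for 3 vs 97 dof
```

The tail probability of F(3, 97) at this value corresponds to the $``$4$`\sigma `$ single-trial significance quoted above, reduced to $``$3$`\sigma `$ after accounting for the number of trials.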
Since the colour-colour diagram of the September 29 observations (total of $``$21.0 ksec) indicates the presence of two different branches we decided to investigate the power spectra by selecting these branches as indicated by the two regions denoted ‘A’ and ‘B’ in Fig. 1b. The power spectra are displayed in Fig. 4c and Fig. 4d, for regions ‘A’ and ‘B’, respectively. The mean power spectrum for region ‘A’ shows only a power-law noise component (rms $``$1.4%), whereas the mean power spectrum for region ‘B’ shows a weak peaked-noise component between $``$2–20 Hz peaking near 6–7 Hz (rms $``$3%), on top of a power-law noise component (rms $``$1%).
Since the September 28 observation and part of the September 29 observations are parallel to the normal branch of the Wijnands et al. (1998a) observations, we investigated the power spectra for normal branch QPO. None were found, with upper limits of $``$1.3% and $``$0.8% for the September 28 and 29 observations, respectively, for typical values of the frequency and FWHM of 5.5 Hz and 2.5 Hz, respectively. The upper limits on normal branch QPO in regions ‘A’ and ‘B’ of the September 29 observation are $``$0.8% and $``$1.1%, respectively. Upper limits on the strength of QPO in the September 29 observations similar to those found in the September 28 observations near $``$40 Hz are $``$1.3%. We also searched the September 28 and 29 power spectra for the presence of kHz QPO but found none. Upper limits are $``$3% and $``$2.8% for the September 28 and 29 observations, respectively.
## 4 Discussion
We performed RXTE observations when the ASM indicated that Cyg X-2 was at overall low intensities. These were successfully performed in October 1996 and September 1997. It appears that we obtained data in two different kinds of low intensity “states” on the two occasions. In the next subsections we discuss the two observations separately, and investigate the kHz QPO properties.
### 4.1 October 1996
During our first observation in October 1996 we found the source in the left part of the horizontal branch, based on the position of the source in the colour-colour and hardness-intensity diagrams and on the presence of horizontal branch QPO at $``$19 Hz and their harmonic at twice this frequency. Previous EXOSAT (Hasinger 1987) and RXTE (Focke 1996; Smale 1998) observations already showed horizontal branch QPO in the same frequency range. Assuming that the horizontal/normal-branch vertex is at the same location in the hard hardness-intensity diagram as derived by Wijnands et al. (1998a), the horizontal branch QPO frequencies found during the October 1996 observations are lower than expected at the same position in the Z. This most likely indicates that the Z track of our observation was shifted with respect to that of Wijnands et al. (1998a). This is supported by the fact that our observation is located below the horizontal branch of Wijnands et al. (1998a) in the soft hardness-intensity diagram, similar to what was seen in EXOSAT data by Kuulkers et al. (1996).
We found evidence for the presence of a cut-off power-law component in the power spectrum with a cut-off frequency near 20 Hz. A similar component has been observed previously in Cyg X-2 (Hasinger & van der Klis 1989, Wijnands et al. 1997a) and in other Z sources (Hasinger & van der Klis 1989, Hertz et al. 1992, Kuulkers et al. 1994, 1997, Kamado, Kitamoto & Miyamoto 1997), and is mostly referred to as high-frequency noise. High-frequency noise is strongest in the horizontal branch. We note, however, that our observed high-frequency noise strength is higher than that reported previously for Cyg X-2. This may be due to the higher energy range we investigated compared to that of Hasinger & van der Klis (1989) and Wijnands et al. (1997a). The high-frequency noise has been observed to become stronger at higher energies (e.g. Dieters & van der Klis 1999).
### 4.2 September 1997
The September 1997 observations do not show clear Z behaviour in the colour-colour diagram and hardness-intensity diagrams, although we cannot rule out the possibility that a “complete” Z was traced out on a longer timescale. However, the September 29 observations show a curved branch in the colour-colour diagram, which might be part of the Z, i.e. the lower normal branch and the lower flaring branch, but shifted to higher colour values. Moreover, the September 28 observation is aligned with the normal branch, suggesting it to be the same branch. It is known that the source hardens, i.e. the Z-pattern shifts to higher colour values, when it is at overall lower intensities (Kuulkers et al. 1996, Wijnands et al. 1996).
The situation is less clear for the hardness-intensity diagrams. The hardness-intensity diagrams of the September observations are more reminiscent of those reported by Vrtilek et al. (1986). Such shapes are seen when the source intensity is at an overall low level (see Kuulkers et al. 1996). However, neither the hardness-intensity diagrams nor the colour-colour diagram of the September observations resembles the large diagonal branch seen with EXOSAT in 1983 (see Kuulkers et al. 1996), which also occurred during a low intensity state.
For the first time we have been able to examine the rapid variability at low overall intensities. We find that the very-low-frequency variability during the RXTE September observations is low, i.e. $``$1.0–1.4% (0.1–1 Hz, 5–60 keV). Such low variability was also found for the very-low-frequency noise in the medium intensity level (1–20 keV; Wijnands et al. 1997a). Our observed very-low-frequency-noise component is unusually flat. Its index ($`\alpha `$$``$0.6–0.7) is consistent with extrapolating the observed decrease in index along the normal branch (Wijnands et al. 1997a) from the high ($`\alpha `$$``$1.5–1.7) to medium ($`\alpha `$$``$1) intensity level down to the low intensity level.
In order to compare our RXTE observations with the 1983 “diagonal branch” observations of Cyg X-2 we calculated power spectra of the EXOSAT data using 64-s data stretches. These data were obtained with 0.25 s time resolution and no energy information (1–20 keV; so-called “I3” data from the HER3 mode, see e.g. Kuulkers 1995). We used all data during which the collimator response was 100% and all detectors were on source (total of $``$16.5 ksec). The resulting 0.02–2 Hz average power spectrum (corrected for instrumental noise, see Berger & van der Klis 1998) can be well described ($`\chi _{\mathrm{red}}^2`$ of 1.06 for 42 dof) by a steep power law ($`\alpha `$=2.0$`\pm `$0.2) with 1.9$`\pm `$0.1% rms (0.01–1 Hz). Clearly, during the 1983 observations the very-low-frequency noise was much steeper than during the September 1997 observations.
We found evidence for weak ($``$2%, 5–60 keV) QPO at $``$40 Hz during the September 28 observations. Since it has been observed in Cyg X-2 (Wijnands et al. 1997a) that the horizontal branch QPO frequency (and rms amplitude) decreases from the horizontal/normal-branch connection ($``$55 Hz) down the normal branch (down to $``$45 Hz), we can interpret our observed QPO as horizontal branch QPO occurring in the lower/middle part of the normal branch. The fact that the horizontal branch QPO on the normal branch has a similar width (i.e. 10–20 Hz FWHM; e.g. Wijnands et al. 1997a, 1998a) as we see in our observations ($``$15 Hz) supports this identification.
Since the normal branch QPO become more prominent when going from the high to the medium intensity level, we searched for normal branch QPO in our data. None were seen, with upper limits of $``$1% (5–60 keV), which is below the strength seen in the normal branch of the medium intensity level ($``$1–2.5%, 1–20 keV; Wijnands et al. 1997a). However, when during the September 29 observations the source went from the inferred normal branch to the inferred flaring branch, a broad ($``$13 Hz) noise component appeared, which peaked near 6–7 Hz. Interestingly, similar broad noise components have been reported in the lower part of the flaring branch of other observations, but with somewhat lower strength, i.e. $``$2% (1–20 keV; Hasinger & van der Klis 1989; Hasinger et al. 1990; Kuulkers & van der Klis 1995; Wijnands et al. 1997a) compared to our $``$3% (5–60 keV). It is apparent from Wijnands et al. (1997a) that these “flaring branch QPO” become stronger from the high to the medium intensity level. Our observations extend this trend to lower overall intensities.
### 4.3 KiloHertz QPO
No kHz QPO were found during the October 1996 observations, with upper limits ($``$3%, 5–60 keV) that are significantly lower than previously observed by Wijnands et al. (1998a) in the same part of the horizontal branch as inferred from the horizontal branch QPO frequency (4–5%, 5–60 keV). It is, however, consistent with the upper limits quoted by Smale (1998) when the source was also in the horizontal branch ($``$1%, 4–11 keV), but at higher overall intensities. As noted by Smale (1998), this may indicate that the strength of the kHz QPO (at the same position in the Z) changes as a function of the overall intensity level. Unfortunately, for our October 1996 observations we cannot infer to which overall intensity level they correspond.
During the September 1997 observations we found no indication for kHz QPO with upper limits of $``$3% (5–60 keV). This is consistent with the upper limits reported previously in the normal/flaring-branch region (2–4%, 5–60 keV; Wijnands et al. 1998a).
## 5 Conclusion
Using RXTE we observed Cyg X-2 at low overall intensities, for the first time with sufficient time resolution. In October 1996 we found the source in the leftmost part of the horizontal branch. Our observations show horizontal branch QPO properties which are generally consistent with earlier observations in this part of the Z track, but also indicate significant variations in the strength of the kHz QPO there. We conclude that we have seen parts of the normal branch and flaring branch during our September 1997 observations, when the source was seen at low overall intensities. They do not, however, resemble the behaviour seen during a rare low intensity state in 1983. Such a rare state may be observed when the overall intensity is even lower than during our observations. The properties of the very-low-frequency noise during our September low-intensity observations (low amplitude, flat power law slope) are consistent with extrapolation from those seen in previous observations at higher intensity. However, the lack of normal branch QPO during our observations is not consistent with the observed trends, and suggests that the normal branch QPO amplitude is either non-monotonically related to intensity or varies independently from this parameter.
It has been suggested that obscuration of the inner accretion disk regions and the neutron star by the outer accretion disk causes the low overall observed intensities at certain times and the high to medium to low intensity level variations. Such a configuration might be due to the precession of a warped accretion disk, mainly based on the rather strict periodicity of the overall intensity variations on time scales of months (see e.g. Wijnands et al. 1996, Wijers & Pringle 1999). We note that obscuration effectively hardens the spectrum, which leads to the changes in the position of the Z in the colour-colour diagram (see Kuulkers et al. 1996, Wijnands et al. 1996). Scattering in the outer disk would affect the variability amplitudes by light travel time smearing down to a frequency of order 0.01 Hz. While this picture would explain the monotonic decrease in very-low-frequency noise amplitude with decreasing intensity, it seems inconsistent with the flattening of its power law index and the non-monotonic dependence of normal branch QPO amplitude on intensity. A model where the low intensity states are associated with changes in the character of the inner accretion flow itself therefore seems favoured.
## Acknowledgements
This work was supported in part by the Netherlands Organization for Scientific Research (NWO) and by the Netherlands Foundation for Research in Astronomy (ASTRON) under grants PGS 78-277 and 781-76-017, respectively. EK thanks the Astronomical Institute “Anton Pannekoek”, where part of the analysis was done, for its hospitality.
|
no-problem/9904/nucl-th9904048.html
|
ar5iv
|
text
|
# Dynamical Interpretation of Chemical Freeze-Out Parameters.
## Abstract
It is shown that the condition for chemical freeze-out, an average energy per hadron of approximately 1 GeV, selects the softest point of the equation of state, namely the point where the ratio of pressure to energy density, $`p(\epsilon )/\epsilon `$, has a minimum. The sensitivity to the equation of state used is discussed. The previously proposed mixed phase model, which is consistent with lattice QCD data, naturally leads to the chemical freeze-out condition.
Over the last few years the question of chemical equilibrium in heavy ion collisions has attracted much attention. Assuming thermal and chemical equilibrium within a statistical model, it has now been shown that it is indeed possible to describe the hadronic abundances produced at beam energies ranging from 1 to 200 AGeV. The observation was made that the chemical freeze-out parameters obtained at CERN/SPS, BNL/AGS and GSI/SIS all lie on a unique freeze-out curve in the $`T\mu _B`$ plane. Recently, a surprisingly simple interpretation of this curve has been proposed: the hadronic composition of the final state is determined solely by an energy of approximately 1 GeV per hadron in the rest frame of the system under consideration. In this letter we propose a dynamical interpretation of the chemical freeze-out curve and show that it is intimately related to the softest point of the equation of state, defined by the minimum of the ratio $`{\displaystyle \frac{p}{\epsilon }}(\epsilon )`$ as a function of $`\epsilon `$. Our considerations are essentially based on the recently proposed mixed phase model, which is consistent with the available QCD lattice data. The underlying assumption of the Mixed Phase (MP) model is that unbound quarks and gluons may coexist with hadrons, forming a homogeneous quark/gluon–hadron phase. Since the mean distance between hadrons and quarks/gluons in this mixed phase may be of the same order as that between hadrons, their interaction with unbound quarks/gluons plays an important role in defining the order of the phase transition.
Within the MP model the effective Hamiltonian is written in the quasi particle approximation with the density-dependent mean–field interaction. Under quite general requirements of confinement for color charges, the mean–field potential of quarks and gluons is approximated by the following form:
$$U_q(\rho )=U_g(\rho )=\frac{A}{\rho ^\gamma }$$
(1)
with the total density of quarks and gluons
$$\rho =\rho _q+\rho _g+\underset{j}{}n_j\rho _j$$
where $`\rho _q`$ and $`\rho _g`$ are the densities of unbound quarks and gluons outside of hadrons, while $`\rho _j`$ is the density and $`n_j`$ the number of valence quarks inside a hadron of type $`j`$. The presence of the total density $`\rho `$ in (1) corresponds to the inclusion of the interaction between all components of the mixed phase. The approximation (1) recovers two important limiting cases of the QCD interaction: as $`\rho \to 0`$ the interaction potential goes to infinity, i.e. an infinite energy would be needed to create an isolated quark or gluon, which ensures the confinement of color objects; in the opposite limit of high energy density, corresponding to $`\rho \to \infty `$, we obtain the asymptotic freedom regime.
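The two limiting regimes just described follow directly from the form of Eq. (1); schematically:

```latex
U_q(\rho) = \frac{A}{\rho^{\gamma}}, \quad \gamma > 0:
\qquad
\lim_{\rho \to 0} U_q(\rho) = \infty \ \ \text{(confinement)},
\qquad
\lim_{\rho \to \infty} U_q(\rho) = 0 \ \ \text{(asymptotic freedom)} .
```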
The use of a density-dependent potential (1) for quarks and a hadronic potential described by a modified non-linear mean–field model requires certain constraints, related to thermodynamic consistency, to be fulfilled . For the chosen form of the Hamiltonian these conditions require that $`U_g(\rho )`$ and $`U_q(\rho )`$ should be independent of the temperature. From these conditions one also obtains an expression for the form of the quark–hadron potential .
A detailed study of the pure gluonic $`SU(3)`$ case with a first order phase transition allows one to fix the values of the parameters as $`\gamma =0.62`$ and $`A^{1/(3\gamma +1)}=250`$ MeV. These values are then generalized to the $`SU(3)`$ system including quarks. For the case of quarks of two light flavors at zero baryon density, $`n_B=0`$, the MP model is consistent with the results from lattice QCD, with a deconfinement temperature $`T_{dec}=153`$ MeV and a crossover-type deconfinement phase transition. The model can be extended to baryon-rich systems in a parameter-free way.
A particular consequence of the MP model is that for $`n_B=0`$ the ‘softest point’ of the equation of state, as defined in , is located at a comparatively low energy density: $`\epsilon _{SP}\simeq 0.45\mathrm{GeV}/\mathrm{fm}^3`$. This value of $`\epsilon `$ is close to the energy density inside a nucleon, and thus reaching it signals that we are dealing with a single ‘big’ hadron consisting of deconfined matter. For baryonic matter the softest point is gradually washed out at $`n_B\gtrsim 0.4n_0`$. As shown in , this behavior differs drastically both from the interacting hadron gas model, which has no soft point, and from the two-phase approach based on the bag model, which has a first order phase transition by construction and a softest point at $`\epsilon _{SP}>1\mathrm{GeV}/\mathrm{fm}^3`$ independent of $`n_B`$. These differences should manifest themselves in the expansion dynamics.
In Fig.1 we show trajectories of the evolution of central Au+Au collisions in the $`T\mu _B`$ plane together with the freeze-out parameters obtained from hadronic abundances. The initial state was estimated using a transport model starting from a cylinder in the center-of-mass frame with radius $`R=4fm`$ and length $`L=2R/\gamma _{c.m.}`$ as described in . The subsequent isoentropic expansion was calculated using a scaled hydrodynamical model with the MP equation of state. As seen from the figure, the turning points of these trajectories correlate nicely with the extracted freeze-out parameters, as was noted in , as well as with the smooth curve corresponding to a fixed energy per hadron in the hadronic gas model .
The observed correlation is further elucidated in Fig.2. The quantity $`p/\epsilon `$ is closely related to the square of the velocity of sound and characterizes the expansion speed (in simple hydrodynamic models, for example the transverse expansion of a cylindrical source, the evolution is governed by the pressure-to-enthalpy ratio, $`p/(p+\epsilon )`$), so the system lives longest around the softest point, which allows the strongly interacting components to reach chemical equilibrium. It is also seen that the position of the softest point correlates with the average energy per hadron being about 1 GeV in all nuclear cases and even for $`p\overline{p}`$ collisions. One should note that the quantity $`\epsilon /\rho _{had}`$, where $`\epsilon `$ is the total energy density, coincides with the $`E_{had}/N_{had}`$ considered in only when there are no unbound quarks/gluons in the system. In the MP model all components interact with each other and therefore the quantity $`E_{had}`$ is not defined. The admixture of unbound quarks at the softest point $`\epsilon _{SP}`$ amounts to about $`13\%`$ and $`8\%`$ at beam energies $`E_{lab}=150`$ and $`10AGeV`$, respectively.
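The ‘softest point’ singled out above is simply the minimum of $`p/\epsilon `$ over energy density. A toy illustration (the parameterization below is purely illustrative — the actual MP-model equation of state is far more involved):

```python
import numpy as np

def softest_point(eps, p):
    """Energy density at which p/eps is minimal, for tabulated
    energy densities eps > 0 and pressures p = p(eps)."""
    ratio = p / eps
    i = np.argmin(ratio)
    return eps[i], ratio[i]

# Toy EoS with a dip in p/eps near 0.45 GeV/fm^3 (illustrative only):
eps = np.linspace(0.1, 2.0, 191)
p = 0.2 * eps + 0.1 * (eps - 0.45) ** 2
e_sp, r_sp = softest_point(eps, p)
```

Around $`\epsilon _{SP}`$ the ratio $`p/\epsilon `$ — and hence the driving force of the expansion — is smallest, so hydrodynamical trajectories linger there; this is the dynamical origin of the freeze-out regularity discussed here.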
The MP equation of state plays a decisive role for the regularity considered here, as it determines both the order of the phase transition and the deconfinement temperature. The two-phase (bag) model exhibits a first order phase transition with $`T_{dec}=160`$ MeV and has a spatially separated Gibbs mixed phase, but the corresponding trajectories in the $`T\mu _B`$ plane are quite different from those in the MP model, as shown in . The exit point from the Gibbs mixed phase at $`E_{lab}=150`$ AGeV is close to the corresponding freeze-out point in Fig.1. However, the large differences noted above in $`\epsilon _{SP}`$ and in its dependence on $`n_B`$, mainly caused by the different type of the predicted phase transition, do not lead to the observed correlation with the softest point position over the whole energy range considered. The interacting hadron gas model has no softest point effect, as was demonstrated in . This fact is also seen from Fig.2: at $`E_{lab}=2AGeV`$ the quark admixture is practically negligible ($`1\%`$) and, instead of a minimum, there is a monotonic fall-off specific to hadronic models, with a small irregularity in $`p/\epsilon `$ near the point $`\epsilon /\rho _{had}=1`$ GeV. (Note that at SIS energies the chemical freeze-out point practically coincides with the thermal freeze-out.)
It is noteworthy that, similarly to the results presented in Fig.2, the softest point of the equation of state correlates with an average energy per quark of $`\epsilon /\rho \simeq 350`$ MeV, which is close to the constituent quark mass. So, at higher values of $`\epsilon /\rho `$ we are dealing with a strongly-interacting mixture of highly-excited hadrons and unbound massive quarks/gluons forming (in accordance with Landau’s idea ) an ‘amorphous’ fluid suitable for hydrodynamic treatment. Below the softest point the interaction decreases, the relative fraction of unbound quarks/gluons decreases, higher hadronic resonances decay into baryons and light mesons, and thereby the value of $`\epsilon /\rho `$ goes down.
In summary, the unified description of the chemical freeze-out parameters found in is naturally related to the fact that the proposed condition $`E_{had}/N_{had}\simeq 1`$ GeV selects the softest point of the equation of state, where the strongly interacting system stays for a long time. Such a clear correlation is observed for the equation of state of the mixed phase model but not in purely hadronic nor in two-phase models. In this respect the success of the MP model in the dynamical interpretation of the freeze-out regularity may be considered as an argument in favor of a crossover type of deconfinement phase transition in the $`SU(3)`$ system with massive quarks.
We thank B. Friman, Yu. Ivanov and W. Nörenberg for useful discussions. E.G.N. and V.D.T. gratefully acknowledge the hospitality at the Theory Group of GSI, where some part of this work has been done. J.C. gratefully acknowledges the hospitality of the physics department of the University of Bielefeld. This work was supported in part by BMBF under the program of scientific-technological collaboration (WTZ project RUS-656-96).
Figure captions
Fig.1. The compiled chemical freeze-out parameters (borrowed from ) obtained from the observed hadronic abundances, and the dynamical trajectories calculated for central $`Au+Au`$ collisions at different beam energies $`E_{lab}`$ with the mixed-phase equation of state. The smooth dashed curve is calculated in the hadronic gas model for $`E_{had}/N_{had}=1GeV`$ .
Fig.2. The ratio of pressure to energy density, $`p/\epsilon `$, versus the average energy per hadron, $`\epsilon /\rho _{had}`$, for the evolution of different systems. The upper curve corresponds to $`p\overline{p}`$ collisions at $`\sqrt{S}=40GeV`$ with isentropic expansion from a sphere with $`R=1fm`$. The other cases are calculated for central $`Au+Au`$ collisions at the given beam energies under the same conditions as in Fig.1.
# Crystallography and Riemann Surfaces
## 1 Introduction
Crystallography is concerned with point sets in $`^n`$ that are discrete and distributed more or less uniformly. In “classical” crystallography, periodicity was imposed as well, but this restriction is not considered as fundamental in the “modern” era. With the discovery of intermetallic quasicrystals in the early 1980s, it became clear that there exist aperiodic point sets that share a basic property with periodic point sets. This property should really be associated with a distribution: in this case, the distribution formed by placing a Dirac delta at each point of the set. In these terms, the class of aperiodic sets singled out by crystallography is characterized by the property that the Fourier transform of the corresponding distribution has support on a lattice . The rank of this “Fourier lattice” equals the dimension of space for periodic sets, and exceeds it (but is still finite), in the case of *quasiperiodic* sets. The vertex set of the Penrose tiling of the plane is a familiar example of a quasiperiodic set.
The standard construction of quasiperiodic sets $`𝒜`$ begins by embedding $`^n=Y`$ in a larger Euclidean space, $`^{m+n}=X\times Y`$. Into $`X\times Y`$ one then immerses a smooth $`m`$-manifold $`𝒮`$ that is (i) transversal to $`Y`$, and (ii) invariant under the action of a lattice $`\mathrm{\Lambda }`$ generated by $`n+m`$ linearly independent translations in $`X\times Y`$. Point sets $`𝒜Y`$ are obtained as sections (“cuts”) of $`𝒮`$ by spaces parallel to $`Y`$. More formally, in terms of the standard projections
$`\pi _X`$ $`:𝒮X`$ (1)
$`\pi _Y`$ $`:𝒮Y,`$
the section of $`𝒮`$ at $`xX`$ is the set
$$𝒜(x)=\pi _Y\pi _{X}^{}{}_{}{}^{1}(x).$$
(2)
Periodicity or quasiperiodicity of $`𝒜(x)`$ is determined by the rank of the lattice $`\mathrm{\Lambda }_Y=\mathrm{\Lambda }Y`$. Since the generators of $`\mathrm{\Lambda }`$ were assumed to be linearly independent, $`\mathrm{rk}(\mathrm{\Lambda }_Y)n`$. Quasiperiodicity corresponds to $`\mathrm{rk}(\mathrm{\Lambda }_Y)<n`$, with complete absence of periodicity characterized by $`\mathrm{\Lambda }_Y=\{0\}`$. An important motivation for constructing quasiperiodic sets, in this context, is the fact that symmetry groups that cannot be realized by periodic point sets in $`^n`$, *can* be realized by periodic surfaces in $`^{m+n}`$.
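For intuition, the simplest instance of this construction can be sketched numerically: take $`m=n=1`$, let $`\mathrm{\Lambda }`$ be $`^2`$ rotated so that its slope is $`1/\varphi `$ ($`\varphi `$ the golden ratio), and let $`𝒮`$ be the $`\mathrm{\Lambda }`$-orbit of an interval in $`X`$. The section is then the Fibonacci chain. This is standard cut-and-project material, included only as an illustration; the helper name and parameters below are ours, not taken from this paper.

```python
import math

def fibonacci_section(x=0.0, m=30):
    """A(x) = pi_Y(pi_X^{-1}(x)) for the simplest model set:
    S = P + Lambda, with Lambda = Z^2 rotated so tan(theta) = 1/phi
    and P the projection of the unit square onto X (the 'window')."""
    phi = (1 + math.sqrt(5)) / 2
    c, s = phi / math.hypot(1, phi), 1 / math.hypot(1, phi)
    w = c + s                          # window width in X
    pts = []
    for p in range(-m, m + 1):
        for q in range(-m, m + 1):
            u = -p * s + q * c         # X (internal) coordinate of the lattice point
            v = p * c + q * s          # Y (physical) coordinate
            if abs(u - x) <= w / 2:    # the surface S lies over the cut at x
                pts.append(v)
    # keep only the central region, fully covered by the finite p, q range
    return sorted(v for v in pts if abs(v) <= m / 2)

pts = fibonacci_section()
gaps = sorted({round(b - a, 9) for a, b in zip(pts, pts[1:])})
```

Here $`\mathrm{\Lambda }Y=\{0\}`$, so $`\mathrm{rk}(\mathrm{\Lambda }_Y)=0`$ and the section has no periods at all: the gaps between successive points take exactly two values, with ratio $`\varphi `$.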
Transversality and periodicity are relatively mild restrictions on the manifold $`𝒮`$, called the “atomic surface” by physicists. A further restriction, one which leads to point sets called “model sets” , is to require that $`𝒮`$ is the $`\mathrm{\Lambda }`$-orbit of a polytope in $`X`$. The algorithm which constructs $`𝒜(x)`$ from such $`𝒮`$ naturally leads to the terminology “window” or “acceptance domain” for the corresponding polytopes. Model sets can always be organized into finitely many tile shapes, and, because of this simplicity, have dominated the study of quasiperiodic sets.
A different viewpoint on the construction of $`𝒮`$, pioneered by Kalugin and Katz , emphasizes the continuity properties of $`𝒜(x)`$ with respect to $`x`$. Consider in more detail the construction of a model set: $`𝒮=𝒫+\mathrm{\Lambda }`$, where $`𝒫X`$ is a polytope. Now, if $`x𝒫+\pi _X(\lambda )`$ for some $`\lambda \mathrm{\Lambda }`$, then $`y=\pi _Y(\lambda )𝒜(x)`$. But now consider what happens when $`x`$ crosses the boundary of $`𝒫+\pi _X(\lambda )`$. As $`x`$ “falls off the edge of the earth”, the corresponding point $`y`$ in the point set $`𝒜(x)`$ disappears. By the same process, of course, points can spontaneously appear “out of thin air”. To gain control over these processes, Kalugin and Katz advocated a restriction on $`𝒫`$, in relation to $`\mathrm{\Lambda }`$, such that whenever $`x`$ falls off the edge of one polytope, $`𝒫+\pi _X(\lambda )`$, it falls within another, say $`𝒫+\pi _X(\lambda ^{})`$. This restriction corresponds mathematically to the statement that the boundaries of the disconnected components of $`𝒮=𝒫+\mathrm{\Lambda }`$ can be “glued” together to form a topological manifold without boundary.
In the process of restoring transversality to the glued complex of polytopes one encounters the problems addressed by singularity theory. The map $`\pi _X`$ should now be a smooth (but not necessarily 1-to-1) map of $`m`$-manifolds. In the trivial situation, when $`\pi _X`$ has no singularities, $`𝒮`$ must be diffeomorphic to a collection of hyperplanes. This is the situation explored by Levitov for point sets in two and three dimensions and various symmetry groups.
When $`𝒮`$ is a generic 2-manifold, we have the classic result of Whitney that the stable singularities of smooth maps, such as $`\pi _X`$, are folds and cusps, having respectively codimension one and two. Because the cusp is always accompanied by two folds, the locus of singular values of $`\pi _X`$ consists of curves. The space $`X`$ is thus populated by singular curves such that whenever $`x`$ crosses a curve, a pair of points in $`𝒜(x)`$ merge and annihilate. One motivation for the present work was the desire to eliminate this point-merging singularity to the greatest extent possible.
By giving the 2-manifold $`𝒮`$ a complex structure, and identifying $`X`$ with the complex plane, we impose additional regularity by insisting that $`\pi _X`$ is locally holomorphic. The singularities of $`\pi _X`$ will then be isolated points. A construction that naturally leads to a $`\pi _X`$ with this property is to let $`𝒮`$ be (locally) the graph of a holomorphic function, $`f:XY`$. Globally this corresponds to a Riemann surface $`𝒮`$ immersed in $`^2=X\times Y`$ and having an atlas of compatible charts in $`X`$. The other ingredient needed by our construction is some way to guarantee that $`𝒮`$ is invariant with respect to a lattice $`\mathrm{\Lambda }`$. We meet this challenge by using conformal maps between triangles to define a fundamental graph of $`𝒮`$. Schwarz reflections in the triangle edges extend this graph and generate the isometry group of $`𝒮`$. For appropriate choices of triangles, the isometry group has a lattice subgroup with the desired properties.
In the second half of this paper we classify a subset of all Riemann surfaces generated by conformal maps of triangles. This subset is characterized by the property that the conformal map is regular at one vertex of the triangles and that the edges at this vertex make the largest possible angle, $`\pi /2`$. With the only other restriction being that the corresponding point sets $`𝒜(x)`$ are discrete in $`Y`$, one arrives at a set of seven surfaces. Four of these are quasiperiodic. The point set obtained from a section of one of them is shown in Figure 1. Also shown in Figure 1 is a much studied model set : a tiling of boats, stars, and jester’s-caps (whose vertices coincide with a subset of the Penrose-tiling vertex set). The point set determined by the Riemann surface can be said to be approximated by the vertex set of the tiling by a systematic process that renders the Riemann surface piecewise flat. The other surfaces obtained in our partial classification, when flattened, also produce familiar tilings (Fig. 3).
## 2 Riemann surfaces generated by conformal maps
### 2.1 Immersed Riemann surfaces
We consider Riemann surfaces as analytically continued holomorphic functions, interpreted geometrically as surfaces immersed in $`^2`$. Our treatment follows closely the notation and terminology of Ahlfors .
###### Definition 2.1.
A *function element* $`F=(U,f)`$ consists of a domain $`U`$ and a holomorphic function $`f:U`$.
###### Definition 2.2.
Function elements $`F_1=(U_1,f_1)`$ and $`F_2=(U_2,f_2)`$ are *direct analytic continuations* of each other iff $`V=U_1U_2\mathrm{}`$ and $`f_1=f_2`$ when restricted to $`V`$.
###### Definition 2.3.
The *complete, global analytic function determined by function element* $`F_0=(U_0,f_0)`$ is the maximal collection of function elements $``$ such that for any $`F_i`$ there exists a chain of function elements $`F_0,\mathrm{},F_i`$, all in $``$, with every link in the chain a direct analytic continuation.
Up to this point the set of function elements comprising a complete, global analytic function $``$ only possesses the discrete topology, where $`F_1F_2=\mathrm{}`$ whenever $`F_1`$ and $`F_2`$ are distinct elements of $``$. By refining this topology we can identify $``$ with a surface and, ultimately, a Riemann surface. Consider a pair of function elements in $``$, $`F_1=(U_1,f_1)`$ and $`F_2=(U_2,f_2)`$. In the refined topology we define the intersection by
$$F_1F_2=\{\begin{array}{cc}F_3=(U_3,f_3)\hfill & \text{if }F_1\text{ and }F_2\text{ are related by direct analytic continuation,}\hfill \\ & \\ \mathrm{}\hfill & \text{otherwise,}\hfill \end{array}$$
(3)
where $`U_3=U_1U_2`$ and $`f_3`$ is $`f_1=f_2`$ restricted to $`U_3`$. It is straightforward to check that this defines a valid topology. Moreover, the projection $`\pi :`$ given by
$$\pi :(U,f)U,$$
(4)
provides the complex charts that identify $``$ with a Riemann surface.
Throughout the rest of this paper we will mostly be interested in Riemann surfaces immersed in $`^2`$.
###### Definition 2.4.
Let $``$ be a complete, global analytic function. The *immersed Riemann surface $`𝒮`$ corresponding to $``$* is the image of the immersion $`\mathrm{\Psi }:^2`$ given by
$$\mathrm{\Psi }:(U,f)\{(x,f(x)):xU\}.$$
(5)
###### Notation.
We denote the first component of $`^2`$ by $`X`$, the second by $`Y`$.
If we restrict the immersion $`\mathrm{\Psi }`$ to a single function element, $`F_0=(U_0,f_0)`$, we obtain the *graph*
$$𝒮_0=\{(x,f_0(x))X\times Y:xU_0\}$$
(6)
Thus $`𝒮_0`$ represents a piece of $`𝒮`$ and in fact determines all of $`𝒮`$; $`𝒮`$ is connected because every pair of function elements in a complete global analytic function is related by a chain of direct analytic continuations. $`𝒮`$ is the *completion* of $`𝒮_0`$.
###### Notation.
We write $`[𝒮_0]`$ to denote the completion of the graph $`𝒮_0`$.
Since all subsequent references to “Riemann surface” will be as a surface immersed in $`X\times Y`$, we drop the qualifier “immersed” below. We also omit the term “complete”, since the only instances of incomplete surfaces, graphs, will always be identified as such. Given a Riemann surface $`𝒮`$, we will frequently make use of the projections
$`\pi _X`$ $`:𝒮X,`$ (7)
$`\pi _Y`$ $`:𝒮Y.`$
The historical construction of Riemann surfaces we have followed can be criticized for its inequivalent treatment of the spaces $`X`$ and $`Y`$. We can correct this fault by insisting that the functions $`f`$ appearing in the function elements $`(U,f)`$ are not just holomorphic in their respective domains $`U`$, but *conformal* (holomorphic with holomorphic inverse). The graph (6) could then be equally written as
$$𝒮_0=\{(f_{0}^{}{}_{}{}^{1}(y),y)X\times Y:yV_0\},$$
(8)
where $`V_0=f_0(U_0)`$. If this “inversion”, or interchange of $`X`$ with $`Y`$, is to work for all function elements $`(U,f)`$, then one must remove all points $`x_0U`$, where $`f`$ behaves locally as $`f(x)f(x_0)=c(xx_0)^m+\mathrm{}`$, with $`m>1`$. These correspond to branch points of the map $`\pi _Y`$. Conversely, had we begun with the inverted function elements our Riemann surface would have *included* branch points of $`\pi _X`$, i.e. points where $`f`$ is singular. In keeping with tradition we augment our definition of a Riemann surface $`𝒮`$ to *include* all points $`(x_0,y_0)`$ where $`𝒮`$ behaves locally like the algebraic curve $`(yy_0)^n=c(xx_0)^m`$, where $`m`$ and $`n`$ are positive integers.
###### Definition 2.5.
A point $`(x,y)𝒮`$ is *regular* if the corresponding complete, global analytic function contains a function element $`(U,f)`$, with $`xU`$ and $`f`$ conformal at $`x`$. A point which is not regular is *singular*.
### 2.2 Transformations
Two transformations of Riemann surfaces will be needed in our discussion of symmetry properties. These are defined in terms of their action on the spaces $`X`$ and $`Y`$ and induce a transformation on Riemann surfaces as subsets of $`X\times Y`$. Let $`(x,y)`$ be a general point in $`X\times Y`$ and define the following transformations:
$`\tau (a,b;c,d)`$ $`:(x,y)(ax+b,cy+d)`$ (9)
$`\sigma `$ $`:(x,y)(\overline{x},\overline{y})`$ (10)
Transformation $`\tau `$ (for complex constants $`a`$, $`b`$, $`c`$, and $`d`$) is the general bilinear map while $`\sigma `$ corresponds to Schwarz reflection (componentwise complex conjugation).
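These transformations are simple enough to sketch directly (the helper functions below are hypothetical, for illustration only). One composition identity worth checking, since it is used later in the proof of Lemma 2.5, is that the product of two Schwarz reflections across lines meeting at angle $`\alpha `$ (in both $`X`$ and $`Y`$) is the rotation $`r(2\alpha ,2\alpha )`$:

```python
import cmath

def tau(a, b, c, d):                  # Eq. (9): the general bilinear map
    return lambda p: (a * p[0] + b, c * p[1] + d)

def sigma(p):                         # Eq. (10): componentwise conjugation
    return (p[0].conjugate(), p[1].conjugate())

def r(theta, phi):                    # rotation, Eq. (11)
    return tau(cmath.exp(1j * theta), 0, cmath.exp(1j * phi), 0)

def schwarz(theta, phi):
    """Reflection across the lines through the origin at angles theta
    (in X) and phi (in Y): a Schwarz reflection of the form sigma * g."""
    rot, inv = r(theta, phi), r(-theta, -phi)
    return lambda p: rot(sigma(inv(p)))

alpha = 0.4
p = (0.3 + 0.7j, -1.1 + 0.2j)
q = schwarz(alpha, alpha)(schwarz(0.0, 0.0)(p))   # sigma' after sigma
expect = r(2 * alpha, 2 * alpha)(p)
```

Applying the two reflections in succession reproduces `r(2*alpha, 2*alpha)` exactly, for any point `p`.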
###### Lemma 2.1.
If $`𝒮`$ is a Riemann surface and $`T`$ is either of the transformations $`\tau `$ or $`\sigma `$, then $`T𝒮`$ is again a Riemann surface.
###### Proof.
Write $`T(x,y)=(T_Xx,T_Yy)`$ where $`T_X`$ and $`T_Y`$ are just maps of the complex plane. Since $`𝒮`$ corresponds to a complete global analytic function $``$, we need to verify that $`T𝒮`$ corresponds to some other complete global analytic function $`_T`$. From our definitions we see that $`_T`$ is obtained from $``$ by substituting each function element $`F=(U,f)`$ by $`F_T=(T_XU,T_YfT_X^1)`$. It is easily checked that $`T_X`$ is open and $`T_YfT_X^1`$ is holomorphic for both of the transformations being considered. Thus $`F_T`$ remains a valid function element. One also verifies that the direct analytic continuation relationships among function elements are unchanged by these transformations. ∎
###### Corollary 2.2.
If $`𝒮_0`$ is a graph and $`T`$ is either of the transformations $`\tau `$ or $`\sigma `$, then $`[T𝒮_0]=T[𝒮_0]`$.
Two special transformations are rotations and translations, for which we introduce the following notation:
$`r(\theta ,\varphi )`$ $`=\tau (e^{i\theta },0;e^{i\varphi },0)`$ (11)
$`t(u,v)`$ $`=\tau (0,u;0,v).`$ (12)
More generally, transformations $`T:X\times YX\times Y`$ which act isometrically on the spaces $`X`$ and $`Y`$ are just the products of Euclidean motions in $`X`$ and $`Y`$. Isometries of Euclidean spaces normally include reflections; to preserve the structure of the immersed Riemann surface, however, any reflection in $`X`$ (complex conjugation) must be accompanied by a reflection in $`Y`$.
###### Definition 2.6.
The *group of isometries of* $`X\times Y`$ is the group of transformations generated by $`\sigma `$, $`r(\theta ,\varphi )`$, and $`t(u,v)`$.
In what follows we use the term “isometry” only in this sense. Isometries which preserve a Riemann surface $`𝒮`$ are called isometries of $`𝒮`$ and form a group. The maximal group of isometries is called the isometry group of $`𝒮`$.
###### Definition 2.7.
The group of *proper isometries* of $`X\times Y`$ is the normal subgroup of isometries generated by $`r(\theta ,\varphi )`$ and $`t(u,v)`$. Any element of the coset, $`\sigma g`$, where $`g`$ is a proper isometry, is called a *Schwarz reflection*.
### 2.3 Surfaces generated by conformal maps of triangles
We now focus on the class of Riemann surfaces determined by graphs which solve a purely geometrical problem: the conformal map between two bounded triangular regions, $`PX`$ and $`QY`$. The Riemann mapping theorem asserts there is a three-parameter family of conformal maps $`f:PQ`$ that extend to homeomorphisms of the closures $`\overline{P}`$ and $`\overline{Q}`$. To fix these parameters we require that the three vertices of $`\overline{P}`$ map to the vertices of $`\overline{Q}`$. This defines the graph
$$P|Q=\{(x,f(x)):xP\},$$
(13)
and a corresponding Riemann surface $`[P|Q]`$. The closure of $`P|Q`$ is defined analogously and is written $`\overline{P}|\overline{Q}`$. One of the main benefits of using a conformal map of triangles to determine a Riemann surface $`𝒮`$ is that its isometry group can be understood simply in terms of its action on a partition of $`𝒮`$ into *tiles*.
Just as $`\overline{P}`$ can be decomposed into an interior $`P`$, edges which bound $`P`$, and vertices which bound each edge, there is a corresponding cell decomposition of the graph $`\overline{P}|\overline{Q}`$. For example, if $`P_1`$ is one vertex of $`\overline{P}`$, and $`f(P_1)=Q_1`$ is its image in $`\overline{Q}`$, then we use the symbol $`P_1|Q_1`$ to represent the corresponding vertex of $`\overline{P}|\overline{Q}`$. Each vertex of $`\overline{P}|\overline{Q}`$ is associated with two angles, a vertex angle of $`P`$ and the corresponding vertex angle of $`Q`$. Let the three angle pairs be $`\alpha _i,\beta _i`$, $`i=1,2,3`$. If $`\alpha _i=\beta _i`$ for all $`i`$, then $`P`$ is similar to $`Q`$ and $`f`$ is just a linear map. Because the corresponding Riemann surface would be trivial (a plane) we exclude this case. It is impossible to have $`\alpha _i\beta _i`$ for just one $`i`$ since then the angle sum could not be $`\pi `$ in both triangles. Thus we must have at least two vertices with unequal angles. At these vertices $`f`$ fails to be conformal. Any vertex of $`\overline{P}|\overline{Q}`$ where the corresponding angles in $`P`$ and $`Q`$ are unequal will be called a *singular vertex*. The singular vertices of $`\overline{P}|\overline{Q}`$ are the only singular points of $`\overline{P}|\overline{Q}`$.
The edges of $`\overline{P}|\overline{Q}`$ (associated with each pair of vertices $`ij=12,13,23`$) are effectively the generators of the isometry group of $`[P|Q]`$. By $`P_{12}|Q_{12}`$ we mean the graph given by the restriction of $`f`$ to the edge $`P_{12}`$ of $`\overline{P}`$ with image an edge $`Q_{12}`$ of $`\overline{Q}`$. Consider the triangles $`P^{}`$ and $`Q^{}`$ obtained from $`P`$ and $`Q`$ by reflection in these edges. The graph $`P^{}|Q^{}`$, determined by the conformal map $`g:P^{}Q^{}`$, is clearly related to $`P|Q`$ by an isometry of $`X\times Y`$, a Schwarz reflection which we call $`\sigma _{12}`$. Because $`\sigma _{12}`$ fixes every point of $`P_{12}|Q_{12}`$, we have that $`f(x)=g(x)`$ for all $`xP_{12}`$. A basic result from complex analysis then tells us that the function elements $`(P,f)`$ and $`(P^{},g)`$ are related by analytic continuation. Thus $`[P|Q]=[P^{}|Q^{}]=[\sigma _{12}(P|Q)]=\sigma _{12}[P|Q]`$, by Corollary 2.2.
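The action that a Schwarz reflection $`\sigma _{ij}`$ induces on the $`X`$-projection is elementary Euclidean reflection across the line through an edge. A minimal numeric sketch (the helper names and the sample triangle are ours, for illustration only):

```python
def reflect(z, a, b):
    """Reflect a complex point z across the line through a and b."""
    d = (b - a) / abs(b - a)                 # unit direction of the edge
    return a + d * ((z - a) / d).conjugate()

def reflect_triangle(tri, i, j):
    """Image of a triangle (3-tuple of complex vertices) under
    reflection in its edge with endpoints tri[i] and tri[j]."""
    a, b = tri[i], tri[j]
    return tuple(reflect(z, a, b) for z in tri)

P = (0j, 1 + 0j, 0.5 + 0.8j)                 # an illustrative triangle in X
P2 = reflect_triangle(P, 0, 1)               # the triangle P' of the text
```

Reflecting `P2` in the same edge returns `P`, and the edge endpoints are fixed — the involutive property that makes the edge reflections generators of a group.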
###### Definition 2.8.
The group $`G`$ generated by the Schwarz reflections $`\sigma _{ij}`$ which fix the three edges of a triangular graph $`P|Q`$ is called the *edge group* of $`P|Q`$. The edge group of $`P|Q`$ is a subgroup of the isometry group of $`[P|Q]`$.
In order to show that the edge group of a triangular graph is the *maximal* isometry group, we first need to refine the sets on which these groups act.
###### Notation.
The symbol $`\stackrel{ˇ}{𝒮}`$ corresponds to the Riemann surface $`𝒮`$ whose singular points have been removed.
###### Definition 2.9.
A *real curve* of the Riemann surface $`𝒮`$ is any curve $`\mathrm{\Gamma }\stackrel{ˇ}{𝒮}`$, homeomorphic to $``$, and pointwise invariant with respect to a Schwarz reflection.
Since both $`\pi _X:\stackrel{ˇ}{𝒮}X`$ and $`\pi _Y:\stackrel{ˇ}{𝒮}Y`$ are immersions, the map $`\pi _Y\pi _{X}^{}{}_{}{}^{1}:\pi _X(\mathrm{\Gamma })\pi _Y(\mathrm{\Gamma })`$ is an immersion as well. Thus it makes sense to use our graph notation, $`\mathrm{\Gamma }=\gamma |\delta `$, for real curves, where $`\gamma =\pi _X(\mathrm{\Gamma })`$ and $`\delta =\pi _Y(\mathrm{\Gamma })`$. A real curve $`\gamma |\delta `$ is geometrically no different from the edge of a triangular graph; the projections $`\gamma `$ and $`\delta `$ are always straight lines. Any real curve is isometric with the graph of a real analytic function.
The three real curves which bound the triangular graph $`P|Q`$ generate a topological cell decomposition of $`\overline{P}|\overline{Q}`$ into vertices, edges, and the graph $`P|Q`$ itself. The generators of the edge group, $`\sigma _{ij}`$, acting on $`\overline{P}|\overline{Q}`$, generate three closed graphs, each having one edge in common with $`\overline{P}|\overline{Q}`$. By continuing this construction we obtain a cell decomposition of $`[P|Q]`$ into 2-cells isometric with $`P|Q`$, 1-cells isometric with one of the edges of $`\overline{P}|\overline{Q}`$, and points. The cell complex as a whole defines a tiling $`𝒯`$; the 2-cells by themselves form a set of tiles, $`𝒯_2`$, and every element of $`𝒯_2`$ can be expressed as $`g(P|Q)`$, where $`g`$ is an element of the edge group, $`G`$.
To show that $`G`$ is the maximal isometry group we first need to check that the tiling $`𝒯`$ is *primitive*, that is, there is no refinement of the tiles $`𝒯_2`$ by additional real curves within $`[P|Q]`$ we may have missed. For this it suffices to check that there are no real curves within $`P|Q`$. Before we can prove this statement we need some basic properties of real curves.
###### Definition 2.10.
A real curve is *complete* if it is not a proper subset of any other real curve.
###### Lemma 2.3.
The closure in $`𝒮`$ of a complete real curve $`\gamma |\delta \stackrel{ˇ}{𝒮}`$, if bounded, has singular endpoints.
###### Proof.
Without loss of generality let $`\gamma `$ and $`\delta `$ lie on the real axes of, respectively, $`X`$ and $`Y`$. The functions $`f`$ of the function elements $`(U,f)`$, which represent $`𝒮`$ locally, will then have power series on the real axis (of $`X`$) with real coefficients. Since a real power series when analytically continued along the real axis continues to be real, we can continue $`\gamma |\delta `$ until we encounter either a singularity of $`f`$ or a zero of $`f^{}`$ (i.e. a singularity of $`f^1`$ on the real axis of $`Y`$). ∎
The next Lemmas deal with the angles formed by intersecting real curves.
###### Definition 2.11.
The *angle between lines* $`\gamma `$ and $`\gamma ^{}`$ (in $`X`$ or $`Y`$), denoted $`\mathrm{}(\gamma ,\gamma ^{})`$, is the smallest counterclockwise rotation required to make $`\gamma `$ parallel to $`\gamma ^{}`$.
###### Lemma 2.4.
If real curves $`\gamma |\delta \stackrel{ˇ}{𝒮}`$ and $`\gamma ^{}|\delta ^{}\stackrel{ˇ}{𝒮}`$ intersect, then $`\mathrm{}(\gamma ,\gamma ^{})=\mathrm{}(\delta ,\delta ^{})`$.
###### Proof.
Near the point of intersection $`\stackrel{ˇ}{𝒮}`$ is represented by a function element $`(U,f)`$ where $`f`$ is conformal. The equality of angles, formed by a pair of lines in $`X`$ and their images by $`f`$ in $`Y`$, is simply the geometrical statement that $`f`$ is conformal. ∎
###### Lemma 2.5.
Only a finite number $`n>1`$ of real curves can intersect at any point of a nontrivial Riemann surface and the angle formed by any pair must be a multiple of $`\pi /n`$.
###### Proof.
Suppose $`\gamma |\delta `$ and $`\gamma ^{}|\delta ^{}`$ intersect with angle $`\mathrm{}(\gamma ,\gamma ^{})=\mathrm{}(\delta ,\delta ^{})=\alpha >0`$ on a nontrivial Riemann surface $`𝒮`$; for convenience, let $`(0,0)`$ be the point of intersection. These curves are fixed by Schwarz reflections $`\sigma `$ and $`\sigma ^{}`$ respectively, and $`\sigma ^{}\sigma =r(2\alpha ,2\alpha )`$ is an isometry of $`𝒮`$. The neighborhood of the point of intersection is the graph
$$𝒮_0=\{(x,f(x)):xU\},$$
(14)
where $`U`$ is a neighborhood of the origin in $`X`$, and $`f`$ is conformal at $`x=0`$. The Taylor series for $`f`$ at the origin has the form
$$f(x)=\underset{k=1}{\overset{\mathrm{}}{}}a_kx^k,$$
(15)
where $`a_10`$. A short calculation shows
$$r(2\alpha ,2\alpha )𝒮_0=\{(x,f_\alpha (x)):xU_\alpha \},$$
(16)
where $`U_\alpha =e^{i2\alpha }U`$ is again a neighborhood of the origin, and
$$f_\alpha (x)=\underset{k=1}{\overset{\mathrm{}}{}}a_ke^{i2\alpha (1k)}x^k.$$
(17)
Since $`r(2\alpha ,2\alpha )`$ is an isometry, the Taylor series for $`f`$ and $`f_\alpha `$ must agree, term by term. Now if $`\alpha =\pi \omega `$ and $`\omega `$ is irrational, then $`\omega (1k)`$ can be an integer only for $`k=1`$ (so that $`e^{i2\alpha (1k)}=1`$). But this requires $`a_k=0`$ for $`k>1`$ which is impossible since $`𝒮`$ is nontrivial. Thus we may assume $`\omega =p/q`$ where $`p`$ and $`q`$ are relatively prime positive integers, $`p<q`$ (since $`\alpha <\pi `$), and $`a_{1+mq}0`$ for some integer $`m>0`$.
Now let $`\gamma ^{\prime \prime }|\delta ^{\prime \prime }`$ be any real curve that intersects $`\gamma |\delta `$ at the origin; then $`\mathrm{}(\gamma ,\gamma ^{\prime \prime })=\mathrm{}(\delta ,\delta ^{\prime \prime })=\pi (p^{}/q^{})`$ by the argument just given, where $`p^{}`$ and $`q^{}`$ are relatively prime positive integers, $`p^{}<q^{}`$. However, since $`a_{1+mq}0`$, we must have $`e^{i2\pi (p^{}/q^{})mq}=1`$, or that $`q^{}`$ divides the product $`mq=n`$. This shows that $`\mathrm{}(\gamma ,\gamma ^{\prime \prime })`$ is a multiple of $`\pi /n`$, for some $`n>1`$. ∎
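The term-by-term identification of the series (15) and (17) can be verified directly: the point of $`r(2\alpha ,2\alpha )𝒮_0`$ lying over $`e^{i2\alpha }x`$ must be $`e^{i2\alpha }f(x)`$, for any coefficients and any angle. A short numeric check (the coefficients and angle below are arbitrary choices):

```python
import cmath

a = [0, 1.0, 0.2 - 0.1j, 0.05j]     # Taylor coefficients a_0..a_3, a_1 != 0
alpha = 0.7                          # an arbitrary rotation angle

def f(x):                            # Eq. (15)
    return sum(ak * x**k for k, ak in enumerate(a))

def f_alpha(x):                      # Eq. (17)
    return sum(ak * cmath.exp(2j * alpha * (1 - k)) * x**k
               for k, ak in enumerate(a))

x = 0.3 + 0.4j
lhs = f_alpha(cmath.exp(2j * alpha) * x)   # point of r(2a,2a)S_0 over rotated x
rhs = cmath.exp(2j * alpha) * f(x)         # rotated image of (x, f(x))
```

Agreement of `lhs` and `rhs` for all `x` is exactly the statement that $`r(2\alpha ,2\alpha )`$ maps the graph of $`f`$ onto the graph of $`f_\alpha `$; requiring $`f_\alpha =f`$ then forces the rationality constraints on $`\alpha `$ derived above.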
Clearly any triangular graph with a nontrivial isometry must be “isosceles” and fails to be primitive because it can be decomposed into two isometric tiles. This is made precise by the following Lemma.
###### Lemma 2.6.
Let $`P|Q`$ be a nontrivial triangular graph with trivial isometry group, then $`P|Q`$ contains no real curves.
###### Proof.
Suppose $`P|Q`$ contains a real curve and call its completion $`\gamma |\delta `$. We recall that $`\gamma `$, and its closure in $`X`$, $`\overline{\gamma }`$, are straight lines and $`\overline{\gamma }`$ cannot have an endpoint within $`P`$ (Lemma 2.3). The possible geometrical relationships between $`\overline{\gamma }`$ and $`P`$ are diagrammed in Figure 2. Since $`P|Q`$ is nontrivial, at least two vertices are singular and are shown circled in each diagram. Either $`\overline{\gamma }`$ intersects two edges of $`P`$, as in cases $`A`$ and $`B`$, or, it intersects an edge and the opposite vertex which may be singular (case $`C`$) or possibly regular (case $`D`$). The vertex labels on the diagram refer to our notation for the vertex angles and edges. For example, $`\alpha _1`$ and $`\beta _1`$ are the angles in $`P`$ and $`Q`$, respectively, of vertex 1; $`P_{12}|Q_{12}`$ is the edge (real curve) bounded by vertices 1 and 2, etc.
Case $`A`$ is easily disposed of using Lemma 2.4:
$`\mathrm{}(\overline{\gamma },P_{12})=\mathrm{}(\overline{\gamma },P_{13})+\alpha _1=\mathrm{}(\overline{\delta },Q_{13})+\alpha _1`$ $`=\mathrm{}(\overline{\delta },Q_{12})\beta _1+\alpha _1`$ (18)
$`=\mathrm{}(\overline{\gamma },P_{12})\beta _1+\alpha _1.`$
This is impossible because vertex 1 is singular ($`\alpha _1\beta _1`$).
By using Schwarz reflection to imply the existence of additional real curves, the remaining cases either reduce to case $`A`$ or imply the existence of a singularity within $`P|Q`$ or one of its edges — neither of which is possible.
First consider case $`B`$. Let $`\gamma `$ intersect $`P_{13}`$ at $`x_1`$ and $`P_{23}`$ at $`x_2`$, forming angles $`\theta _1`$ and $`\theta _2`$ (see Fig. 2). Any other complete real curve with projection $`\gamma ^{}`$ which intersects $`x_1`$ makes a finite angle with $`\gamma `$ by Lemma 2.5. Thus we may assume the angles $`\theta _1`$ and $`\theta _2`$ are the smallest possible (for a $`\gamma `$ that intersects both $`P_{13}`$ and $`P_{23}`$). Since one of $`\theta _1`$ and $`\theta _2`$ must be greater than $`\pi /2`$, we assume without loss of generality it is $`\theta _1`$. If we now reflect $`\gamma `$ in $`P_{13}`$ we obtain a real curve with projection $`\gamma ^{}`$ such that $`\gamma ^{}`$ intersects $`P_{13}`$ but not $`P_{23}`$. Thus case $`B`$ always reduces to cases $`A`$ or $`C`$.
In case $`C`$ we consider the sequence of real curves with projections $`\gamma _k`$, where $`\gamma _0=P_{23}`$, $`\gamma _1=\gamma `$, and $`\gamma _{k+1}`$ is the image of $`\gamma _{k1}`$ under reflection in $`\gamma _k`$. Let $`\theta _k`$ be the angle formed at vertex 2 in $`P`$ by $`\gamma _k`$. Clearly for some $`k`$ we arrive at a $`\gamma ^{}=\gamma _k`$ such that $`\theta =\theta _k\alpha _2/2`$ (see Fig. 2). This leads to three subcases: $`C_1`$, where $`\mathrm{}(\gamma ^{},P_{13})\pi /2`$, $`C_2`$, where $`\gamma ^{}`$ and $`P_{13}`$ are perpendicular, and $`C_3`$, where $`\mathrm{}(\gamma ^{},P_{13})\pi /2`$. In case $`C_2`$, $`\theta <\alpha _2/2`$ since otherwise $`P|Q`$ would have a nontrivial isometry (reflection in $`\gamma ^{}|\delta ^{}`$). All three subcases immediately lead to contradictions. In $`C_1`$, reflecting $`\gamma ^{}`$ in $`P_{13}`$ presents us with a $`\gamma ^{\prime \prime }`$ satisfying case $`A`$. In $`C_3`$, the image of vertex 1 under reflection in $`\gamma ^{}`$ implies a singularity within $`P`$; in $`C_2`$ the same reflection implies a singularity on $`P_{13}`$.
Case $`D`$: we either have $`\mathrm{}(\gamma ,P_{12})=\pi /2`$, case $`D_1`$, or $`\mathrm{}(\gamma ,P_{12})\pi /2`$, case $`D_2`$. Since $`P`$ has no nontrivial isometry, a reflection in $`\gamma `$ in case $`D_1`$ would place the image of either vertex 1 or 2 (both singular) somewhere on $`P_{12}`$. In $`D_2`$, a Schwarz reflection of $`\gamma `$ leads to case $`A`$. ∎
###### Theorem 2.7.
Let $`P|Q`$ be a nontrivial triangular graph with trivial isometry group, then the maximal isometry group of $`[P|Q]`$ is the edge group of $`P|Q`$.
###### Proof.
We use the real curves to decompose $`[P|Q]`$ into a set of tiles (2-cells) $`𝒯_2`$. Lemma 2.6 tells us that $`P|Q𝒯_2`$, so that $`𝒯_2=G(P|Q)`$, where $`G`$ is the edge group of $`P|Q`$. On the other hand, if $`h`$ is an isometry of $`[P|Q]`$, then $`h(P|Q)=P^{}|Q^{}𝒯_2`$, where $`P^{}|Q^{}=g(P|Q)`$ for some $`gG`$. But since $`P|Q`$ has no nontrivial isometries, the map $`h^1g:P|QP|Q`$ must be the identity and $`h=g`$. ∎
We conclude this section with a formula for the topological genus of a Riemann surface $`[P|Q]`$ compactified by the translation subgroup of its isometry group, the *lattice group* $`\mathrm{\Lambda }`$ of $`[P|Q]`$.
###### Definition 2.12.
The *vertex groups* $`G_i`$, $`(i=1,2,3)`$, of a triangular graph $`P|Q`$, are the subgroups of the edge group of $`P|Q`$ generated by the adjacent edges of, respectively, the three vertices of $`P|Q`$.
###### Theorem 2.8.
Let $`P|Q`$ be a nontrivial triangular graph with trivial isometry group. Let $`G`$ be the isometry group of $`[P|Q]`$, $`\mathrm{\Lambda }`$ its lattice group, and $`G_i`$, $`(i=1,2,3)`$, the three vertex groups of $`P|Q`$. If $`|G/\mathrm{\Lambda }|`$ is finite, the genus $`g`$ of the surface $`[P|Q]/\mathrm{\Lambda }`$ satisfies
$$22g=|G/\mathrm{\Lambda }|\left(\underset{i=1}{\overset{3}{}}\frac{1}{|G_i|}\frac{1}{2}\right).$$
(19)
###### Proof.
If $`|G/\mathrm{\Lambda }|`$ is finite we can view $`[P|Q]/\mathrm{\Lambda }`$ as a finite cell complex. We can relate the number of 0-cells, $`N_0`$, and the number of 1-cells, $`N_1`$, in this complex to the number of 2-cells, $`N_2`$. Since every 2-cell is bounded by three 1-cells, each of which bounds exactly one other 2-cell, $`N_1=(3/2)N_2`$. Similarly, the boundary of each 2-cell contains three 0-cells (the vertices $`i=1,2,3`$), each of which belongs to the boundary of a number of 2-cells equal to the order of the corresponding vertex group, $`|G_i|`$. Thus $`N_0=(_{i=1}^3|G_i|^1)N_2`$. Finally, since $`P|Q`$ has no nontrivial isometry, and $`G/\mathrm{\Lambda }`$ acts transitively on the 2-cells of $`[P|Q]/\mathrm{\Lambda }`$, $`N_2=|G/\mathrm{\Lambda }|`$. The result (19) follows from Euler’s formula, $`22g=N_2N_1+N_0`$. ∎
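Formula (19) involves only the combinatorial data $`|G/\mathrm{\Lambda }|`$ and $`|G_i|`$, so it can be evaluated exactly with rational arithmetic. In the sketch below the group orders in the examples are made up to exercise the formula; they are not taken from the classification in this paper.

```python
from fractions import Fraction

def genus(n_cells, vertex_orders):
    """Genus from Eq. (19): 2 - 2g = |G/Lambda| * (sum_i 1/|G_i| - 1/2),
    with n_cells = |G/Lambda| and vertex_orders = (|G_1|, |G_2|, |G_3|)."""
    chi = n_cells * (sum(Fraction(1, v) for v in vertex_orders)
                     - Fraction(1, 2))        # Euler characteristic 2 - 2g
    g = (2 - chi) / 2
    if g.denominator != 1 or g < 0:
        raise ValueError("inconsistent group data")
    return int(g)
```

For instance, sixteen 2-cells with vertex groups of orders (4, 8, 8) give $`\chi =0`$ and $`g=1`$ (a torus), while sixty-four 2-cells with orders (4, 8, 16) give $`g=3`$.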
### 2.4 Discreteness and uniformity
The whole point of immersing a Riemann surface $`𝒮`$ in $`X\times Y=ℂ^2`$ is that by forming sections of $`𝒮`$, i.e. intersections with the lines $`\{x\}\times Y`$, $`x\in X=ℂ`$, one obtains patterns of points. A very primitive property of a point set, normally taken for granted in crystallography, is discreteness.
###### Definition 2.13.
The *section* of the Riemann surface $`𝒮`$ at $`x`$ is the set
$$𝒜(x)=\pi _Y\pi _X^{-1}(x).$$
(20)
The basic property of a holomorphic function, that its zeros form a discrete set, translates to the statement that the preimages $`\pi _X^{-1}(x)`$ are discrete in a Riemann surface $`𝒮`$. We use the stronger property that $`\pi _X^{-1}(x)`$ is discrete in $`X\times Y`$ to define a *discrete Riemann surface*. This is equivalent to the following statement about sections:
###### Definition 2.14.
A Riemann surface $`𝒮`$ is *discrete* if its sections $`𝒜(x)`$ are discrete in $`Y`$ for every $`xX`$.
All the statements we can make about discreteness of a Riemann surface $`𝒮`$ hinge upon properties of the lattice group of $`𝒮`$, $`\mathrm{\Lambda }`$. One property is the rank, $`\mathrm{rk}(\mathrm{\Lambda })`$, given by the cardinality of a minimal set of generators of $`\mathrm{\Lambda }`$. The orbit of the origin of $`X\times Y`$, $`\mathrm{\Lambda }(0,0)`$, is called a lattice and is also represented by the symbol $`\mathrm{\Lambda }`$. When $`\mathrm{rk}(\mathrm{\Lambda })=4`$, a second property is the determinant of the lattice, $`det\mathrm{\Lambda }`$. If $`det\mathrm{\Lambda }>0`$, the four generators of $`\mathrm{\Lambda }`$ are linearly independent (as vectors in $`X\times Y`$); if $`det\mathrm{\Lambda }=0`$ the generators are linearly dependent and $`\mathrm{\Lambda }`$ (as a lattice) is not discrete in $`X\times Y`$. A lattice with $`\mathrm{rk}(\mathrm{\Lambda })>4`$ is never discrete in $`X\times Y`$.
###### Notation.
The standard measure for a set $`A`$ is written $`|A|`$. If $`A`$ is a region in $``$ then $`|A|`$ is its area; if $`A`$ is a set of points, then $`|A|`$ is its cardinality. Finally, if $`\mathrm{\Lambda }`$ is a rank 4 lattice, then $`|\mathrm{\Lambda }|=\sqrt{det\mathrm{\Lambda }}`$ is the volume in $`X\times Y`$ of its fundamental region.
The following Lemma provides a necessary condition for discreteness:
###### Lemma 2.9.
The lattice group $`\mathrm{\Lambda }`$, of a discrete, nontrivial Riemann surface $`𝒮`$, is discrete (as a lattice) in $`X\times Y`$ and in particular, $`\mathrm{rk}\,\mathrm{\Lambda }\le 4`$.
###### Proof.
If $`\mathrm{\Lambda }`$ is not discrete we can find a sequence of $`t(u,v)\in \mathrm{\Lambda }`$ such that both $`u\to 0`$ and $`v\to 0`$. Let $`(x_0,y_0)`$ be a regular point of $`𝒮`$, then $`y_0\in 𝒜(x_0)`$. Near $`(x_0,y_0)`$ we can represent $`𝒮`$ by the graph
$$U|V=\{(x,f(x)):xU\},$$
(21)
where $`U`$ is a neighborhood of $`x_0`$ in $`X`$, $`f`$ is conformal in $`U`$, and $`f(x_0)=y_0`$. Since $`t(u,v)`$ is an isometry, $`t(u,v)(U|V)\subset 𝒮`$. From
$$t(u,v)(U|V)=\{(x+u,f(x)+v):x\in U\},$$
(22)
we see that $`f(x_0-u)+v\in 𝒜(x_0)`$, since $`x_0-u\in U`$ as $`u`$ can be arbitrarily small. Since $`𝒮`$ is discrete, $`y_0`$ is isolated in $`Y`$ and there must be a subsequence $`t(u^{},v^{})`$ such that $`f(x_0-u^{})+v^{}=f(x_0)`$. If, within the sequence $`t(u^{},v^{})`$, there is a subsequence $`t(u^{\prime \prime },v^{\prime \prime })`$ with $`u^{\prime \prime }=0`$, then $`v^{\prime \prime }=f(x_0)-f(x_0-u^{\prime \prime })=0`$ and we have a contradiction. Thus there must be a subsequence with $`u^{\prime \prime }\ne 0`$. Since $`f`$ is conformal at $`x_0`$,
$`\underset{u^{\prime \prime }\to 0}{lim}{\displaystyle \frac{f(x_0)-f(x_0-u^{\prime \prime })}{u^{\prime \prime }}}`$ $`=f^{}(x_0)`$ (23)
$`=\underset{(u^{\prime \prime },v^{\prime \prime })\to (0,0)}{lim}{\displaystyle \frac{v^{\prime \prime }}{u^{\prime \prime }}}.`$
But the second limit, above, is independent of $`x_0`$ so we are forced to conclude that $`f^{}`$ is constant. This is impossible because $`𝒮`$ is nontrivial. ∎
The point sets studied in crystallography normally are Delone sets and have the property of being *uniformly discrete*. For a point set in $`ℝ^n`$ this means there exists a real number $`r>0`$ such that a spherical neighborhood of radius $`r`$ about any point of the set contains no other point of the set. For the sets $`𝒜(x)`$ generated by Riemann surfaces this property is clearly too strong: it is violated whenever $`x`$ is near a branch point of $`\pi _X`$. We therefore adopt a weaker form of this property which is nevertheless stronger than discreteness and useful in establishing the existence of the density.
###### Definition 2.15.
Let $`B_r(y)Y`$ be an open disk of radius $`r`$ centered at $`y`$. A Riemann surface $`𝒮`$, with sections $`𝒜(x)`$, is *finitely discrete* if for some $`r>0`$, $`|𝒜(x)B_r(y)|`$ is uniformly bounded above for all $`xX`$ and $`yY`$.
Since a disk of radius $`r^{}`$ can always be covered by finitely many disks of radius $`r`$, the finitely discrete property holds for any $`r^{}`$ once it has been established for a particular $`r`$.
We now introduce the class of Riemann surfaces which is the focus of this study.
###### Definition 2.16.
A Riemann surface is *crystallographic* if its lattice group $`\mathrm{\Lambda }`$ has rank 4 and $`|\mathrm{\Lambda }|>0`$.
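Whether a given set of four generating translations yields $`|\mathrm{\Lambda }|>0`$ can be tested numerically. The sketch below is our own illustration (not part of the text): it takes the first four rotates of a translation $`t(2a,2b)`$ under $`r(2\alpha ,2\beta )`$ with $`\alpha =\pi p/n`$, $`\beta =\pi q/n`$, writes them as real 4-vectors in $`X\times Y`$, and computes their determinant; the octagonal case $`(n,p,q)=(8,1,3)`$ comes out nondegenerate.

```python
import math

def star_generators(n, p, q):
    """First four rotates of t(2a, 2b) under r(2*alpha, 2*beta),
    written as real 4-vectors (Re u, Im u, Re v, Im v) in X x Y."""
    alpha, beta = math.pi * p / n, math.pi * q / n
    a, b = math.cos(alpha), math.cos(beta)
    vecs = []
    for k in range(4):
        u = 2 * a * complex(math.cos(2 * k * alpha), math.sin(2 * k * alpha))
        v = 2 * b * complex(math.cos(2 * k * beta), math.sin(2 * k * beta))
        vecs.append([u.real, u.imag, v.real, v.imag])
    return vecs

def det4(m):
    """4x4 determinant by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    det = 1.0
    for i in range(4):
        pivot = max(range(i, 4), key=lambda r: abs(m[r][i]))
        if abs(m[pivot][i]) < 1e-12:
            return 0.0          # rank-deficient: generators dependent
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            det = -det
        det *= m[i][i]
        for r in range(i + 1, 4):
            f = m[r][i] / m[i][i]
            for c in range(i, 4):
                m[r][c] -= f * m[i][c]
    return det
```

A nonzero determinant here is the numerical counterpart of $`det\mathrm{\Lambda }>0`$ in Definition 2.16.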
###### Definition 2.17.
Any isometry $`g`$ of $`X\times Y`$ can uniquely be expressed in the form $`g=g_0\lambda `$, where $`g_0`$ fixes the origin and $`\lambda `$ is a translation. The *derived point group* of the isometry group $`G`$, $`\psi (G)`$, is the image of $`G`$ by the homomorphism $`\psi :g\mapsto g_0`$. Since $`\mathrm{Ker}\psi =\mathrm{\Lambda }`$, the lattice group of $`G`$, we have the isomorphism $`\psi (G)\cong G/\mathrm{\Lambda }`$.
It is important to remember that $`\psi (G)`$ need not be a subgroup of $`G`$; nevertheless, the lattice group of $`G`$ is always left invariant by $`\psi (G)`$, just as it is invariant within $`G`$. Furthermore, if a Riemann surface is crystallographic, then the action (by conjugation) of $`\psi (G)`$ on its lattice group $`\mathrm{\Lambda }`$ is a faithful representation of $`G/\mathrm{\Lambda }`$. Since the isometry group of a (finite rank) lattice is finite, we have that $`G/\mathrm{\Lambda }`$, for a crystallographic Riemann surface, is always finite.
###### Lemma 2.10.
A Riemann surface determined by a triangular graph $`P|Q`$, if crystallographic, is finitely discrete.
###### Proof.
Let $`B_r(x,y)\subset X\times Y`$ be an open ball of radius $`r`$ centered at an arbitrary point $`(x,y)`$. Consider the piece of the Riemann surface within this ball, $`𝒮_B=[P|Q]\cap B_r(x,y)`$, and the projection $`\pi _X:𝒮_B\to X`$. $`[P|Q]`$ is finitely discrete if there is a uniform upper bound on the number of preimages $`\pi _X^{-1}(x)`$.
$`[P|Q]`$ is covered by the orbit of closed graphs, $`G(\overline{P}|\overline{Q})`$, where $`G`$ is the edge group of $`P|Q`$. Let $`\mathrm{\Lambda }`$ be the lattice group of $`G`$, then $`G(\overline{P}|\overline{Q})`$ is the union of cosets, $`H_i(\overline{P}|\overline{Q})`$, $`i=1,\mathrm{\dots },N`$, where $`N=|G/\mathrm{\Lambda }|`$ is finite because $`[P|Q]`$ is crystallographic. Again, because $`[P|Q]`$ is crystallographic, all but finitely many graphs in $`H_i(\overline{P}|\overline{Q})`$ have empty intersection with a ball of radius $`r`$, in particular, $`B_r(x,y)`$. Thus we have a bound (independent of $`x`$ and $`y`$) on the number of graphs in $`G(\overline{P}|\overline{Q})`$ which intersect $`B_r(x,y)`$. But each graph contains at most one preimage of $`x`$ under $`\pi _X`$; hence $`|\pi _X^{-1}(x)|`$ is uniformly bounded above. ∎
The sections $`𝒜(x)`$ of a crystallographic Riemann surface also possess a uniformity with respect to the parameter $`x`$. Our handle on this property is provided, in part, by the smooth behavior of $`𝒜(x)`$ with $`x`$. Before we can proceed, however, we need to be aware of two point sets in $`X`$ which create problems: branch points (of $`\pi _X`$) and crossing points.
###### Definition 2.18.
A point $`(x,y)\in 𝒮`$ is a *self-intersection point* if in the description of $`𝒮`$ as a complete global analytic function there exist function elements $`(U,f)`$ and $`(V,g)`$, such that $`x\in U\cap V`$, $`f\ne g`$ in $`U\cap V`$, and $`f(x)=g(x)=y`$. The point $`x\in X`$ is called a *crossing point*.
###### Lemma 2.11.
A Riemann surface determined by a triangular graph $`P|Q`$ has countably many self-intersection points and $`\pi _X`$ has countably many branch points.
###### Proof.
$`[P|Q]`$ is the union of countably many closed graphs $`\overline{P}_i|\overline{Q}_i`$ given by the orbit of $`\overline{P}|\overline{Q}`$ under the action of the edge group. Since each graph has at most three singular points, $`\pi _X`$ has countably many branch points. If there were uncountably many self-intersection points then uncountably many must arise from one pair of distinct graphs, say $`\overline{P}_i|\overline{Q}_i`$ and $`\overline{P}_j|\overline{Q}_j`$. Let $`f_i`$ and $`f_j`$ be the corresponding conformal maps; then $`f_i(x)=f_j(x)`$ would have uncountably many solutions $`x\in \overline{P}_i\cap \overline{P}_j`$. Thus either $`f_i=f_j`$, a contradiction, or the zeroes of $`f_i-f_j`$ would not be isolated, another impossibility. ∎
###### Lemma 2.12.
Let $`[P|Q]`$ be crystallographic, $`X_c\subset X`$ its crossing points, and $`X_b\subset X`$ the branch points of $`\pi _X`$; then for any pair $`x,x^{}\in X\setminus (X_b\cup X_c)`$, there exists a bijection of sections of $`[P|Q]`$,
$$\mathrm{\Psi }:𝒜(x)\to 𝒜(x^{}),$$
(24)
such that $`|y-\mathrm{\Psi }(y)|`$ is uniformly bounded above for $`y\in 𝒜(x)`$.
###### Proof.
We arrive at $`\mathrm{\Psi }`$ by composing bijections
$`\mathrm{\Psi }_1`$ $`:𝒜(x)\to 𝒜(x^{\prime \prime }),`$ (25)
$`\mathrm{\Psi }_2`$ $`:𝒜(x^{\prime \prime })\to 𝒜(x^{}),`$
such that $`|y-\mathrm{\Psi }_1(y)|`$ and $`|y^{\prime \prime }-\mathrm{\Psi }_2(y^{\prime \prime })|`$ are (correspondingly) uniformly bounded. The Lemma then follows by application of the triangle inequality.
Since $`[P|Q]`$ is crystallographic, we can partition $`X\times Y`$ into translates of a bounded fundamental region, $`V(0)`$, of its lattice $`\mathrm{\Lambda }`$. Thus for any pair $`x,x^{}X`$ we can write
$`(x,0)`$ $`\in V(0)+\lambda ,`$ (26)
$`(x^{},0)`$ $`\in V(0)+\lambda ^{},`$
where $`\lambda ,\lambda ^{}\mathrm{\Lambda }`$. Consider the point
$$(x^{\prime \prime },y^{\prime \prime })=(x,0)+\lambda ^{}-\lambda \in V(0)+\lambda ^{}.$$
(27)
Since
$$(x^{\prime \prime }-x^{},y^{\prime \prime })=(x^{\prime \prime },y^{\prime \prime })-(x^{},0)\in V(0)-V(0),$$
(28)
both $`|x^{\prime \prime }-x^{}|`$ and $`|y^{\prime \prime }|`$ have upper bounds independent of $`x`$ and $`x^{}`$. Now
$$𝒜(x)=\{y\in Y:(x,y)\in [P|Q]\},$$
(29)
and
$$𝒜(x)+y^{\prime \prime }=\{y^{}\in Y:(x,y^{}-y^{\prime \prime })\in [P|Q]\}.$$
(30)
But $`(x,y^{}-y^{\prime \prime })\in [P|Q]`$ iff $`(x,y^{}-y^{\prime \prime })+\lambda ^{\prime \prime }\in [P|Q]`$, where $`\lambda ^{\prime \prime }\in \mathrm{\Lambda }`$. Choosing
$$\lambda ^{\prime \prime }=\lambda ^{}-\lambda =(x^{\prime \prime }-x,y^{\prime \prime }),$$
(31)
we obtain
$$𝒜(x)+y^{\prime \prime }=\{y^{}\in Y:(x^{\prime \prime },y^{})\in [P|Q]\}=𝒜(x^{\prime \prime }).$$
(32)
As our first bijection we take the translation $`\mathrm{\Psi }_1(y)=y+y^{\prime \prime }`$, where $`|y^{\prime \prime }|`$ is uniformly bounded from above. The point of this intermediate step is that for $`\mathrm{\Psi }_2`$ we need consider only pairs of sections with bounded separation $`|x^{\prime \prime }-x^{}|`$.
In constructing $`\mathrm{\Psi }_2`$ we avoid branch points and crossing points. Since $`x\in X\setminus (X_b\cup X_c)`$, equation (27) implies $`x^{\prime \prime }\in X\setminus (X_b\cup X_c)`$. Let $`\gamma :[0,1]\to X\setminus (X_b\cup X_c)`$ be a smooth rectifiable curve with $`\gamma (0)=x^{\prime \prime }`$ and $`\gamma (1)=x^{}`$. To show that $`\gamma `$ exists we recall that $`X_b\cup X_c`$ is countable. We can then find $`\gamma `$ in the uncountable family of circular arcs with endpoints $`x^{\prime \prime }`$ and $`x^{}`$, since each point of $`X_b\cup X_c`$ can eliminate at most one arc.
The curve $`\gamma (t)`$ generates a homotopy of the sections $`𝒜(x^{\prime \prime })`$ and $`𝒜(x^{})`$. At each point $`y^{\prime \prime }\in 𝒜(x^{\prime \prime })`$, $`\gamma (t)`$ is lifted to a unique curve $`\gamma (t)|\delta (t)\subset [P|Q]`$ with endpoint $`(\gamma (0),\delta (0))=(x^{\prime \prime },y^{\prime \prime })`$ and we define our second bijection by $`\mathrm{\Psi }_2(y^{\prime \prime })=\delta (1)`$. To finish the proof we need to show that $`|\delta (1)-\delta (0)|`$ is uniformly bounded.
Let $`\stackrel{ˇ}{P}`$ be the closed subset of $`\overline{P}`$ that is a suitably small distance $`r`$ or greater from any of its vertices that are branch points of $`\pi _X`$. Let $`\stackrel{ˇ}{P}|\stackrel{ˇ}{Q}\subset \overline{P}|\overline{Q}`$ be the corresponding graph. The orbit under the edge group, $`\stackrel{ˇ}{𝒮}=G(\stackrel{ˇ}{P}|\stackrel{ˇ}{Q})\subset [P|Q]`$, is a Riemann surface from which all the points of ramification (of the map $`\pi _X`$) have been “cut out”. The complement, $`\widehat{𝒮}=[P|Q]\setminus \stackrel{ˇ}{𝒮}`$, is the disjoint union of the branched neighborhoods of all the points of ramification. It is possible to find curves $`\gamma (t)`$, such as the circular arcs considered above, where the branched neighborhoods $`\widehat{𝒮}_i`$ visited by $`\gamma (t)|\delta (t)`$ are visited only once, for $`t\in T_i\subset [0,1]`$. Also, because we can bound the length $`L`$ of $`\gamma `$ and there is a minimum distance between branch points (on the branched covering of $`X`$), the number of such subintervals $`T_i`$ is bounded, i.e. $`i=1,\mathrm{\dots },N`$. The bounds on $`N`$ and $`L`$ are uniform bounds, independent of the points $`x`$ and $`x^{}`$.
Two additional bounds are needed before we can proceed to bound $`|\delta (1)-\delta (0)|`$. The first is an upper bound $`D`$ on the diameter of the projection of a branched neighborhood, $`\pi _Y(\widehat{𝒮}_i)`$. This follows from the fact that $`\widehat{𝒮}_i`$ is isometric with the branched neighborhood $`\widehat{𝒮}_j`$ of a vertex of $`\overline{P}|\overline{Q}`$, and $`\widehat{𝒮}_j\subset G_j(P|Q)`$, where $`G_j`$ is the corresponding vertex group. Clearly the maximum diameter of $`\pi _Y(G_j(P|Q))`$ is bounded because $`Q`$ is bounded.
The map $`f:P\to Q`$ (which defines $`P|Q`$), when restricted to $`\stackrel{ˇ}{P}`$ is conformal and $`|f^{}|`$ has a maximum value, $`\mu `$, since $`\stackrel{ˇ}{P}`$ is closed. This means that if $`\gamma (t)|\delta (t)\in \stackrel{ˇ}{P}|\stackrel{ˇ}{Q}`$, then
$$\left|\frac{d\delta }{dt}\right|=|f^{}(\gamma )|\left|\frac{d\gamma }{dt}\right|\le \mu \left|\frac{d\gamma }{dt}\right|.$$
(33)
Because $`\stackrel{ˇ}{𝒮}`$ is generated from $`\stackrel{ˇ}{P}|\stackrel{ˇ}{Q}`$ by the action of $`G`$, this bound applies globally, for $`\gamma (t)|\delta (t)\subset \stackrel{ˇ}{𝒮}`$.
We are now ready to complete the proof:
$$|\delta (1)-\delta (0)|=\left|\int _{[0,1]}\frac{d\delta }{dt}\,dt\right|\le \left|\int _{\bigcup _{i=1}^{N}T_i}\frac{d\delta }{dt}\,dt\right|+\left|\int _{[0,1]\setminus \bigcup _{i=1}^{N}T_i}\frac{d\delta }{dt}\,dt\right|.$$
(34)
For each piece of the curve in a branched neighborhood we have
$$\left|\int _{T_i}\frac{d\delta }{dt}\,dt\right|\le D,$$
(35)
while in the complement ($`\stackrel{ˇ}{𝒮}`$),
$`\left|{\displaystyle \int _{[0,1]\setminus \bigcup _{i=1}^{N}T_i}}\frac{d\delta }{dt}\,dt\right|`$ $`\le {\displaystyle \int _{[0,1]\setminus \bigcup _{i=1}^{N}T_i}}\left|\frac{d\delta }{dt}\right|dt`$ (36)
$`\le \mu {\displaystyle \int _{[0,1]\setminus \bigcup _{i=1}^{N}T_i}}\left|\frac{d\gamma }{dt}\right|dt`$
$`\le \mu L.`$
Inequality (34) thus becomes
$$|\delta (1)-\delta (0)|\le ND+\mu L.$$
(37)
For discrete Riemann surfaces with sufficiently uniform sections $`𝒜(x)`$, one can define their *density*.
###### Definition 2.19.
Let $`B_R(0)Y`$ be a disk of radius $`R`$ centered at the origin. The limit
$$\rho (x)=\underset{R\to \mathrm{\infty }}{lim}\frac{|B_R(0)\cap 𝒜(x)|}{|B_R(0)|},$$
(38)
if it exists and is finite, is the density of $`𝒜(x)`$.
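Definition 2.19 can be seen at work on the simplest example: the unit square lattice $`ℤ+iℤ`$ standing in for a section $`𝒜(x)`$ (our own illustration, not a surface from the text). The counting ratio converges to density 1, up to a boundary error of order $`1/R`$ (the Gauss circle problem):

```python
import math

def lattice_density(R):
    """Ratio |B_R(0) cap (Z + iZ)| / |B_R(0)| from Definition 2.19,
    computed by brute-force counting of integer points in the disk."""
    count = sum(1 for x in range(-R, R + 1) for y in range(-R, R + 1)
                if x * x + y * y <= R * R)
    return count / (math.pi * R * R)
```

For $`R=100`$ the ratio already agrees with the limiting density 1 to better than one part in a thousand.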
With the aid of Lemmas 2.10 and 2.12 we can show that, for a crystallographic Riemann surface generated by a triangular graph, $`\rho (x)`$ exists and is (essentially) independent of $`x`$.
###### Notation.
The standard volume form in $`X`$ is $`\omega _X=dx\wedge d\overline{x}`$, its pullback on a Riemann surface $`𝒮`$ is written $`\pi _X^{*}\omega _X`$.
###### Theorem 2.13.
If $`[P|Q]`$ is crystallographic with lattice group $`\mathrm{\Lambda }`$, its sections $`𝒜(x)`$ have density
$$\rho =\frac{1}{|\mathrm{\Lambda }|}\int _{[P|Q]/\mathrm{\Lambda }}\pi _X^{*}\omega _X,$$
(39)
independent of $`x`$, provided $`x`$ is not a crossing point or a branch point of $`\pi _X`$. If $`G`$ is the edge group of $`P|Q`$, then
$$\rho =\frac{|G/\mathrm{\Lambda }||P|}{|\mathrm{\Lambda }|}.$$
(40)
###### Proof.
###### Notation.
The expression $`c=\mathrm{O}(1/R)`$ indicates there exist constants $`c_1`$ and $`c_2`$ (independent of $`R`$) such that for sufficiently large $`R`$, $`c_1/R<c<c_2/R`$.
Let $`B_R(0)Y`$ be a disk of radius $`R`$ centered at the origin and let
$$N_R(x)=|B_R(0)\cap 𝒜(x)|.$$
(41)
We first obtain a bound on the difference, $`N_R(x)-N_R(x^{})`$, when neither $`x`$ nor $`x^{}`$ is a crossing point or a branch point of $`\pi _X`$. By Lemma 2.12 there exists a bijection $`\mathrm{\Psi }:𝒜(x)\to 𝒜(x^{})`$ such that if $`y\in B_R(0)\cap 𝒜(x)`$, then $`\mathrm{\Psi }(y)\in B_{R+d}(0)\cap 𝒜(x^{})`$, where $`d>0`$ is a constant independent of $`R`$, $`x`$, and $`x^{}`$. This shows
$`N_R(x)`$ $`\le |B_{R+d}(0)\cap 𝒜(x^{})|`$ (42)
$`=N_R(x^{})+|(B_{R+d}(0)\setminus B_R(0))\cap 𝒜(x^{})|.`$
We can cover the annulus $`B_{R+d}(0)\setminus B_R(0)`$ by $`M_R`$ disks $`B_r(y^{})`$ of a fixed radius $`r>0`$, where, for sufficiently large $`R`$, $`M_R<mR`$ and $`m`$ is a constant independent of $`R`$. By Lemma 2.10, $`|B_r(y^{})\cap 𝒜(x^{})|<n`$, where $`n`$ is independent of $`x^{}`$ and $`y^{}`$. Thus $`N_R(x)-N_R(x^{})<mnR`$. Combining this bound with the bound obtained by interchanging $`x`$ and $`x^{}`$, we arrive at the statement
$$N_R(x)-N_R(x^{})=|B_R(0)|\,\mathrm{O}(1/R).$$
(43)
We now introduce a disk $`C_R(0)\subset X`$ and consider the region $`W_R=C_R(0)\times B_R(0)\subset X\times Y`$. Since the set of branch points and crossing points is countable and has zero measure in $`X`$, and $`\pi _X`$ is otherwise smooth,
$$\int _{\pi _X([P|Q]\cap W_R)}\omega _X=\int _{[P|Q]\cap W_R}\pi _X^{*}\omega _X.$$
(44)
Because $`[P|Q]`$ is crystallographic, we can partition $`X\times Y`$ into translates of a bounded fundamental region of its lattice, $`V(0)`$. Let $`\pi _\mathrm{\Lambda }:[P|Q]\to [P|Q]/\mathrm{\Lambda }`$ be the standard projection on the quotient. On $`[P|Q]\cap V(0)`$ the map $`\pi _\mathrm{\Lambda }`$ is 1-to-1 and
$$\int _{[P|Q]\cap V(0)}\pi _X^{*}\omega _X=\int _{[P|Q]/\mathrm{\Lambda }}(\pi _\mathrm{\Lambda }^{-1})^{*}\pi _X^{*}\omega _X=\rho |\mathrm{\Lambda }|.$$
(45)
This defines $`\rho `$, which we can make positive by appropriate choice of orientation on $`[P|Q]`$.
Turning now to the region $`W_R`$, there is a maximal subset $`\mathrm{\Lambda }_{-}\subset \mathrm{\Lambda }`$ such that $`\mathrm{\Lambda }_{-}+V(0)\subset W_R`$ and a smallest subset $`\mathrm{\Lambda }_+\subset \mathrm{\Lambda }`$ such that $`W_R\subset \mathrm{\Lambda }_++V(0)`$. If $`\lambda \in \mathrm{\Lambda }_+\setminus \mathrm{\Lambda }_{-}`$, then $`V_\lambda =(\lambda +V(0))\cap W_R`$ is a proper subset of a fundamental region and
$$0<\int _{[P|Q]\cap V_\lambda }\pi _X^{*}\omega _X<\rho |\mathrm{\Lambda }|.$$
(46)
From this it follows that
$$\rho |\mathrm{\Lambda }_{-}||\mathrm{\Lambda }|<\int _{[P|Q]\cap W_R}\pi _X^{*}\omega _X<\rho |\mathrm{\Lambda }_+||\mathrm{\Lambda }|,$$
(47)
and, from straightforward estimates of $`\mathrm{\Lambda }_+`$ and $`\mathrm{\Lambda }_{-}`$, we conclude
$$\int _{[P|Q]\cap W_R}\pi _X^{*}\omega _X=\rho |W_R|(1+\mathrm{O}(1/R)).$$
(48)
The projection $`\pi _X([P|Q]\cap W_R)`$ covers the disk $`C_R(0)`$ multiple times, the multiplicity at the point $`x\in C_R(0)`$ being the number $`N_R(x)`$ defined above. Thus
$$\int _{\pi _X([P|Q]\cap W_R)}\omega _X=\int _{C_R(0)}N_R(x)\,\omega _X.$$
(49)
We can again neglect the countable set of branch points $`X_b`$ and crossing points $`X_c`$ to argue, for $`x_0\in X\setminus (X_b\cup X_c)`$ fixed,
$`{\displaystyle \int _{C_R(0)}}N_R(x)\omega _X`$ $`={\displaystyle \int _{C_R(0)}}N_R(x_0)\omega _X+{\displaystyle \int _{C_R(0)}}(N_R(x)-N_R(x_0))\omega _X`$
$`=N_R(x_0)|C_R(0)|+|C_R(0)||B_R(0)|\mathrm{O}(1/R),`$
where in the last step we used (43). Combining (44), (48), (49), (2.4), and using $`|W_R|=|B_R(0)||C_R(0)|`$, we obtain
$$\frac{N_R(x_0)}{|B_R(0)|}=\rho (1+\mathrm{O}(1/R)),$$
(50)
and thus
$$\underset{R\to \mathrm{\infty }}{lim}\frac{N_R(x_0)}{|B_R(0)|}=\rho .$$
(51)
To evaluate $`\rho `$ from (45), we regard $`[P|Q]/\mathrm{\Lambda }`$ as $`|G/\mathrm{\Lambda }|`$ equivalence classes of tiles, all isometric to $`P|Q`$. The result (40) follows because the integral of the form $`\pi _X^{*}\omega _X`$ over $`P|Q`$ is just the volume of $`\pi _X(P|Q)=P`$ in $`X`$. ∎
Formula (39) for the density was introduced by Kalugin to extend the notion of stoichiometry to quasicrystals. Because the 2-form $`\pi _X^{*}\omega _X`$ is closed, this formula gives the same density for 2-manifolds homologous in the torus $`(X\times Y)/\mathrm{\Lambda }`$. One must remember, however, that this homology invariant only corresponds to the true density when the map $`\pi _X`$ is orientation preserving (see equation (49)).
## 3 Classification of discrete Riemann surfaces generated by conformal maps of right triangles
### 3.1 Conformal maps of right triangles
The simplest nontrivial conformal maps of triangles, $`f:P\to Q`$, are those where one of the vertices of the corresponding graph $`P|Q`$ is regular. The edges of $`P|Q`$ adjacent to this vertex are real curves, and by Lemma 2.5, belong to a set of $`n>1`$ real curves intersecting at the same vertex with minimum angle $`\pi /n`$. This suggests that among those maps with one regular vertex, the simplest case is $`n=2`$, i.e. the conformal map of right triangles with the right angle being the regular vertex.
Without loss of generality, we give $`P`$ and $`Q`$ a standard scale, position, and angular orientation as specified by the vertices $`x_i|y_i`$ of the corresponding graph $`P|Q`$:
$`x_1|y_1`$ $`=0|0`$
$`x_2|y_2`$ $`=\mathrm{cos}\alpha |\mathrm{cos}\beta `$
$`x_3|y_3`$ $`=e^{i\alpha }|e^{i\beta },`$
where $`0<\alpha <\pi /2`$ and $`0<\beta <\pi /2`$ are two free real parameters. Below we frequently use the abbreviations $`a=\mathrm{cos}\alpha `$, $`b=\mathrm{cos}\beta `$. $`P`$ and $`Q`$ have angles $`\alpha `$ and $`\beta `$, respectively, at vertex 1, and the corresponding complementary angles at vertex 3. A nontrivial graph $`P|Q`$ has $`\alpha \ne \beta `$ with vertices 1 and 3 singular; vertex 2 is regular.
The Schwarz-Christoffel formula gives the conformal map $`f`$ as the composition $`f=hg^1`$ where $`g`$ and $`h`$ map the upper half plane of $`Z=`$ conformally onto, respectively, $`P`$ and $`Q`$. Explicitly:
$`g(z)`$ $`=A{\displaystyle \int _0^z}z^{\frac{\alpha }{\pi }-1}(1-z)^{-\frac{1}{2}}𝑑z,`$ (52)
$`h(z)`$ $`=B{\displaystyle \int _0^z}z^{\frac{\beta }{\pi }-1}(1-z)^{-\frac{1}{2}}𝑑z.`$ (53)
In both (52) and (53) the branches of the fractional powers are chosen so that the integrands are real and positive for $`z\in (0,1)`$. The normalization factors $`A`$ and $`B`$ are positive real numbers determined by the conditions $`g(1)=a`$ and $`h(1)=b`$. Further properties of $`g`$ and $`h`$ are easily checked, in particular, $`g(\mathrm{\infty })=e^{i\alpha }`$ and $`h(\mathrm{\infty })=e^{i\beta }`$.
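The normalization can be made explicit: the integral in (52) taken from $`0`$ to $`1`$ is the Euler Beta function $`B(\alpha /\pi ,1/2)`$, so $`A=a/B(\alpha /\pi ,1/2)`$ (the Beta function here is unrelated to the constant $`B`$ in (53), and similarly for that constant with $`\beta `$ in place of $`\alpha `$). A small numerical sketch of this, using the standard Gamma-function identity for Beta:

```python
import math

def beta(x, y):
    """Euler Beta: B(x, y) = Gamma(x) * Gamma(y) / Gamma(x + y)."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def normalization(angle):
    """Normalization factor in (52)-(53): g(1) = cos(angle) forces
    A * int_0^1 z^(angle/pi - 1) (1 - z)^(-1/2) dz = cos(angle),
    and the integral equals B(angle/pi, 1/2)."""
    return math.cos(angle) / beta(angle / math.pi, 0.5)
```

A quick consistency check is $`B(1/2,1/2)=\pi `$, which the Gamma identity reproduces to machine precision.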
### 3.2 Isometry groups
###### Notation.
Let $`K`$ be a set of elements of a group $`G`$, and let $`k\in G`$ be some element. We denote by $`\langle K\rangle `$ the subgroup generated by the elements of $`K`$, and by $`\{k\}_G`$ the conjugacy class of $`k`$ in $`G`$.
In the case of right triangles, $`P|Q`$ can have a nontrivial isometry only if $`\alpha =\pi /4`$ and $`\beta =\pi /4`$. But this makes $`P|Q`$ trivial. From Theorem 2.7 we know that for nontrivial $`P|Q`$ the isometry group $`G`$ is just the edge group generated by the three Schwarz reflections:
$`\sigma _{12}`$ $`=\sigma ,`$ (54)
$`\sigma _{13}`$ $`=r(2\alpha ,2\beta )\sigma ,`$
$`\sigma _{23}`$ $`=t(2a,2b)r(\pi ,\pi )\sigma .`$
From the isometry group
$$G=\langle \sigma _{12},\sigma _{13},\sigma _{23}\rangle =\langle \sigma ,r(2\alpha ,2\beta ),t(2a,2b)r(\pi ,\pi )\rangle ,$$
(55)
we wish to extract the lattice group $`\mathrm{\Lambda }`$. Helpful in this enterprise are the vertex group
$$G_1=\langle \sigma ,r(2\alpha ,2\beta )\rangle ,$$
(56)
and its cyclic subgroup,
$$R=\langle r(2\alpha ,2\beta )\rangle .$$
(57)
Sets of translations invariant with respect to $`G_1`$, or *stars*, play a central role in the construction of $`\mathrm{\Lambda }`$. In what follows we will need two stars:
$`\mathrm{\Sigma }`$ $`=\{t(2a,2b)\}_{G_1},`$ (58)
$`\mathrm{\Sigma }^{-1}`$ $`=r(\pi ,\pi )\mathrm{\Sigma }r(\pi ,\pi )=\{t(-2a,-2b)\}_{G_1}.`$
The main result is contained in the following Lemma:
###### Lemma 3.1.
Let $`G`$ be the isometry group of the Riemann surface $`[P|Q]`$ generated by the conformal map of the right triangles specified in (3.1) and let $`R`$ be defined by (57), $`G_1`$ by (56), $`\mathrm{\Sigma }`$ and $`\mathrm{\Sigma }^{-1}`$ by (58). If $`r(\pi ,\pi )\in R`$, then $`G`$ has lattice group $`\mathrm{\Lambda }=\langle \mathrm{\Sigma }\rangle `$ and $`G=\mathrm{\Lambda }G_1`$; otherwise, $`G`$ has lattice group $`\mathrm{\Lambda }=\langle \mathrm{\Sigma }\mathrm{\Sigma }^{-1}\rangle `$ and $`G=\mathrm{\Lambda }G_1\cup t(2a,2b)r(\pi ,\pi )\mathrm{\Lambda }G_1`$.
###### Proof.
First consider the case $`r(\pi ,\pi )\in R`$; then
$$G=\langle \sigma ,r(2\alpha ,2\beta ),t(2a,2b)\rangle .$$
(59)
Now consider the group $`H=\mathrm{\Lambda }G_1`$, where $`\mathrm{\Lambda }=\langle \mathrm{\Sigma }\rangle `$ is normal in $`H`$. Clearly $`H\subset G`$. Moreover, one easily verifies $`gH=Hg=H`$, where $`g`$ is any of the three generators of $`G`$. These two facts together show $`G=H`$; $`\mathrm{\Lambda }`$ is clearly the lattice group of $`G`$.
Next consider the case $`r(\pi ,\pi )\notin R`$. For the generators of $`G`$ we must now use (55). Consider the group $`\stackrel{~}{G}=\stackrel{~}{\mathrm{\Lambda }}G_1`$, where $`\stackrel{~}{\mathrm{\Lambda }}=\langle \mathrm{\Sigma }\mathrm{\Sigma }^{-1}\rangle `$ is normal in $`\stackrel{~}{G}`$. Clearly $`\stackrel{~}{G}\subset G`$. In contrast to the previous case, we can now only verify that $`g\stackrel{~}{G}g^{-1}=\stackrel{~}{G}`$, where $`g`$ is any of the three generators in (55). Thus $`\stackrel{~}{G}`$ is normal in $`G`$. $`\stackrel{~}{G}`$ has index at most two, since multiplication of $`\stackrel{~}{G}`$ by the generators of $`G`$ produces at most two, possibly distinct, cosets: $`\stackrel{~}{G}`$ and $`\stackrel{~}{G}^{}=t(2a,2b)r(\pi ,\pi )\stackrel{~}{G}`$. But if $`\stackrel{~}{G}^{}=\stackrel{~}{G}`$, then we would have some $`\stackrel{~}{\lambda }\in \stackrel{~}{\mathrm{\Lambda }}`$ and some $`g_1\in G_1`$ such that $`\stackrel{~}{\lambda }g_1=t(2a,2b)r(\pi ,\pi )`$, or $`g_1=\stackrel{~}{\lambda }^{-1}t(2a,2b)r(\pi ,\pi )`$. Since $`g_1`$ fixes the origin, $`\stackrel{~}{\lambda }^{-1}t(2a,2b)`$ must be the trivial translation and $`g_1=r(\pi ,\pi )`$. This contradicts our assumption $`r(\pi ,\pi )\notin R`$ and we conclude that $`\stackrel{~}{G}`$ and $`\stackrel{~}{G}^{}`$ are distinct. Let $`\mathrm{\Lambda }`$ be the lattice group of $`G`$. Clearly $`\stackrel{~}{\mathrm{\Lambda }}\subset \mathrm{\Lambda }`$. Now suppose $`\lambda \in \mathrm{\Lambda }`$ but $`\lambda \notin \stackrel{~}{\mathrm{\Lambda }}`$. Since then $`\lambda \notin \stackrel{~}{G}`$, we must have $`\lambda \in \stackrel{~}{G}^{}`$, that is, $`\lambda =t(2a,2b)r(\pi ,\pi )g_1\stackrel{~}{\lambda }`$ for some $`g_1\in G_1`$ and $`\stackrel{~}{\lambda }\in \stackrel{~}{\mathrm{\Lambda }}`$. But this implies $`r(\pi ,\pi )g_1=t(-2a,-2b)\lambda \stackrel{~}{\lambda }^{-1}`$, a translation, and we arrive at the contradiction $`r(\pi ,\pi )g_1=1`$. Thus $`\mathrm{\Lambda }=\stackrel{~}{\mathrm{\Lambda }}`$. ∎
### 3.3 The discreteness restriction
The requirement that $`[P|Q]`$ is discrete places strong constraints on the angles $`\alpha `$ and $`\beta `$ of the right triangles $`P`$ and $`Q`$.
###### Lemma 3.2.
If $`[P|Q]`$ is discrete, then $`\alpha `$ and $`\beta `$ are rational multiples of $`\pi `$.
###### Proof.
By Lemma 2.9, $`[P|Q]`$ has a discrete lattice, i.e. the orbit $`\mathrm{\Lambda }(0,0)`$ is discrete in $`X\times Y`$. For right triangles, Lemma 3.1 gives us $`\mathrm{\Lambda }=\langle \mathrm{\Sigma }\rangle `$ if $`r(\pi ,\pi )\in R`$, $`\mathrm{\Lambda }=\langle \mathrm{\Sigma }\mathrm{\Sigma }^{-1}\rangle `$ otherwise. Thus discreteness of $`[P|Q]`$ implies discreteness of the star $`\mathrm{\Sigma }(0,0)`$ in $`X\times Y`$. Since $`\mathrm{\Sigma }=\{t(2a,2b)\}_{G_1}=\{t(2a,2b)\}_R`$, $`\mathrm{\Sigma }(0,0)`$ is just the orbit of $`t(2a,2b)(0,0)`$ under action of the group $`R`$ generated by $`r(2\alpha ,2\beta )`$. Clearly $`\mathrm{\Sigma }(0,0)`$ lies in a 2-torus $`S_1\times S_1`$ embedded in $`X\times Y`$. Since $`\mathrm{\Sigma }(0,0)`$ is discrete, there is a disjoint union of neighborhoods, each containing just one element of $`\mathrm{\Sigma }(0,0)`$. Moreover, since $`R`$ acts by isometries of $`X\times Y`$ and acts transitively on $`\mathrm{\Sigma }(0,0)`$, there is a uniform lower bound on the volumes of these neighborhoods. This implies the existence of disjoint neighborhoods in $`S_1\times S_1`$, again with a uniform lower bound on their measure. Since $`S_1\times S_1`$ has finite measure, this is only possible if $`R`$ has finite order. ∎
###### Notation.
Given positive integers $`m`$ and $`n`$, $`\mathrm{GCD}(m,n)`$ is their greatest common divisor, $`\mathrm{LCM}(m,n)`$ their least common multiple.
Lemma 3.2 allows us to write $`\alpha =\pi (i/k)`$, $`\beta =\pi (j/l)`$, where $`i`$, $`j`$, $`k`$ and $`l`$ are positive integers and $`\mathrm{GCD}(i,k)=\mathrm{GCD}(j,l)=1`$. Since $`R=\langle r(2\pi (i/k),2\pi (j/l))\rangle `$, we identify $`n=\mathrm{LCM}(k,l)`$ as the order of $`R`$. A more convenient parameterization is given by
$$\alpha =\pi \frac{p}{n},\beta =\pi \frac{q}{n},$$
(60)
where $`p=i(n/k)`$, $`q=j(n/l)`$ are positive integers with no common factors that are also factors of $`n`$.
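A minimal sketch of this reparameterization (our own convenience code, not from the text): starting from $`\alpha =\pi i/k`$, $`\beta =\pi j/l`$ in lowest terms, it returns $`n=\mathrm{LCM}(k,l)`$ together with the $`p`$ and $`q`$ of (60):

```python
from math import gcd

def standard_params(i, k, j, l):
    """Convert alpha = pi*i/k, beta = pi*j/l (fractions in lowest terms)
    to the common-denominator form (60): alpha = pi*p/n, beta = pi*q/n,
    with n = LCM(k, l) the order of R."""
    assert gcd(i, k) == 1 and gcd(j, l) == 1, "fractions must be reduced"
    n = k * l // gcd(k, l)              # LCM(k, l)
    return n, i * (n // k), j * (n // l)
```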
The transformation
$$t(\mathrm{sin}\alpha ,\mathrm{sin}\beta )\,r(-\pi /2,-\pi /2)\,t(-\mathrm{cos}\alpha ,-\mathrm{cos}\beta )\,\sigma ,$$
(61)
has the effect of replacing the angles $`\alpha `$ and $`\beta `$ by $`\alpha ^{}=\pi /2-\alpha `$ and $`\beta ^{}=\pi /2-\beta `$ in the definition of the triangles $`P`$ and $`Q`$. Without loss of generality we may therefore take the smallest of the four angles $`\alpha `$, $`\alpha ^{}`$, $`\beta `$ and $`\beta ^{}`$, and rename this angle $`\alpha `$. Transformation (61), as well as an interchange of the spaces $`X`$ and $`Y`$, will then give all other cases of triangles $`P`$ and $`Q`$ (in their standard position, orientation and scale). From $`0<\alpha \le \alpha ^{}`$ we obtain
$$0<p\le \frac{n}{4}.$$
(62)
The inequalities $`\alpha \le \beta `$ and $`\alpha \le \beta ^{}`$, plus the condition to avoid triviality, $`\alpha \ne \beta `$, then give
$$p<q\le \frac{n}{2}-p.$$
(63)
Together, inequalities (62) and (63) have no solution unless
$$n\ge 6.$$
(64)
Since the vertex group $`G_1`$ is generated by $`R`$ and $`\sigma `$,
$$|G_1|=2n.$$
(65)
We can use transformation (61), which has the effect of interchanging the angles at vertices 1 and 3, to compute the order of $`G_3`$. Let $`n^{}`$, $`p^{}`$ and $`q^{}`$ be the integers parameterizing the angles at vertex 3 (as in (60)); then
$$n^{}=\frac{2n}{\mathrm{GCD}(2n,n-2p,n-2q)}$$
(66)
$`{\displaystyle \frac{p}{n}}+{\displaystyle \frac{p^{}}{n^{}}}`$ $`={\displaystyle \frac{1}{2}}`$
$`{\displaystyle \frac{q}{n}}+{\displaystyle \frac{q^{}}{n^{}}}`$ $`={\displaystyle \frac{1}{2}},`$
and,
$$|G_3|=2n^{}.$$
(67)
Finally, since the group of the regular vertex is generated by the reflections in two real curves which intersect at right angles,
$$|G_2|=4.$$
(68)
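Relations (66) can be exercised directly. The sketch below (illustrative, with the integer arithmetic made explicit) returns $`(n^{},p^{},q^{})`$ for vertex 3, and the outputs satisfy $`p/n+p^{}/n^{}=q/n+q^{}/n^{}=1/2`$:

```python
from math import gcd

def vertex3_params(n, p, q):
    """Parameters (n', p', q') of the complementary angles at vertex 3,
    from (66): n' = 2n / GCD(2n, n - 2p, n - 2q), with
    p' = n'(n - 2p)/(2n) and q' = n'(n - 2q)/(2n) exact integers."""
    d = gcd(2 * n, gcd(n - 2 * p, n - 2 * q))
    n2 = 2 * n // d
    p2 = n2 * (n - 2 * p) // (2 * n)    # equals (n - 2p)/d, an integer
    q2 = n2 * (n - 2 * q) // (2 * n)    # equals (n - 2q)/d, an integer
    return n2, p2, q2
```

For instance $`(n,p,q)=(8,1,3)`$ gives $`(n^{},p^{},q^{})=(8,3,1)`$, so $`|G_3|=2n^{}=16=|G_1|`$ in that case.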
From (55) we find
$$\psi (G)=\langle G_1,r(\pi ,\pi )\rangle =\langle \sigma ,R,r(\pi ,\pi )\rangle $$
(69)
for the derived point group of $`G`$. Recognizing $`n=|R|`$ as the order of an element of the isometry group of the *lattice* $`\mathrm{\Lambda }`$, we can use the following theorem of Senechal and Hiller, and the requirement of discreteness, to bound $`n`$ from above.
###### Theorem 3.3 (Senechal, Hiller).
Let $`N(n)`$ be the smallest integer such that the group $`\mathrm{GL}(N(n),ℤ)`$ has an element of order $`n`$, then
$$N(n)=\sum _{p_i^{m_i}\ne 2}\varphi (p_i^{m_i}),$$
(70)
where $`p_1^{m_1}p_2^{m_2}\mathrm{\cdots }`$ is the prime factorization of $`n`$ and $`\varphi (k)`$ is Euler’s totient function: the number of positive integers less than and relatively prime to $`k`$ (note: the prime $`2`$ is included in the sum only if its exponent is greater than $`1`$).
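Formula (70) is easy to evaluate by trial factorization. This sketch (our own implementation of the stated formula) reproduces the restriction used in Lemma 3.4: the $`n`$ with $`N(n)\le 4`$ are exactly $`n\le 6`$ and $`n=8,10,12`$.

```python
def totient(m):
    """Euler's phi(m) by trial factorization."""
    result, k, d = m, m, 2
    while d * d <= k:
        if k % d == 0:
            result -= result // d
            while k % d == 0:
                k //= d
        d += 1
    if k > 1:
        result -= result // k
    return result

def N(n):
    """Smallest N such that GL(N, Z) has an element of order n (Theorem 3.3):
    sum of phi(p^m) over the prime powers p^m exactly dividing n,
    skipping the term p^m = 2."""
    total, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            pm = 1
            while n % d == 0:
                pm *= d
                n //= d
            if pm != 2:
                total += totient(pm)
        d += 1
    if n > 1 and n != 2:        # a remaining prime factor > sqrt(original n)
        total += totient(n)
    return total
```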
###### Lemma 3.4.
If a Riemann surface $`[P|Q]`$ generated by the conformal map of right triangles $`P`$ and $`Q`$ is discrete, then the angles of $`P`$ and $`Q`$ are given by (60) and either $`n\le 6`$, or $`n=8,10`$ or $`12`$.
###### Proof.
Let $`G`$ be the isometry group of $`[P|Q]`$, $`\psi (G)`$ its derived point group, and $`\mathrm{\Lambda }`$ its lattice group. Since conjugation by $`g\in \psi (G)`$ leaves $`\mathrm{\Lambda }`$ invariant, consider the automorphisms $`\mathrm{\Phi }_g:\mathrm{\Lambda }\to \mathrm{\Lambda }`$ given by $`\mathrm{\Phi }_g(\lambda )=g\lambda g^{-1}`$. If $`\mathrm{\Lambda }`$ has rank $`N`$, then the homomorphism $`\mathrm{\Psi }:\psi (G)\to \mathrm{Aut}(\mathrm{\Lambda })`$, where $`\mathrm{\Psi }(g)=\mathrm{\Phi }_g`$, induces a representation of $`\psi (G)`$ by integral $`N\times N`$ matrices of determinant $`\pm 1`$. In fact, $`\mathrm{\Psi }`$ is an isomorphism since for any of the generators $`g`$ of $`\psi (G)`$ we can easily find a $`\lambda \in \mathrm{\Lambda }`$ which is not fixed by $`\mathrm{\Phi }_g`$ (it suffices to look within the star $`\mathrm{\Sigma }`$ or, if $`r(\pi ,\pi )\notin R`$, $`\mathrm{\Sigma }\cup \mathrm{\Sigma }^{-1}`$). Since $`R\subseteq \psi (G)`$ has order $`n`$, there must be an element of order $`n`$ in $`\mathrm{GL}(N,\mathbb{Z})`$. By Theorem 3.3 we must have $`N\ge N(n)`$. On the other hand, if $`[P|Q]`$ is discrete, then $`\mathrm{\Lambda }`$ must be discrete (as a lattice in $`X\times Y`$), which is possible only if $`N\le 4`$. Since $`\varphi (p^m)=p^{m-1}(p-1)`$, we need only consider the values of $`\varphi `$ for powers of small primes: $`\varphi (4)=2`$, $`\varphi (8)=4`$, $`\varphi (3)=2`$, $`\varphi (5)=4`$ (all other primes and higher powers yield values greater than $`4`$). From these facts we obtain just the values of $`n`$ given in the statement of the Lemma. ∎
With Lemma 3.4 and inequalities (62) and (63), the set of possible combinations of $`n`$, $`p`$ and $`q`$ is already finite. Several of these combinations can be eliminated by the following Lemma which provides a lower bound on $`\mathrm{rk}(\mathrm{\Lambda })`$ when either $`p`$ or $`q`$ is a nontrivial divisor of $`n`$.
###### Lemma 3.5.
Let $`\mathrm{\Lambda }`$ be the lattice group of a discrete Riemann surface $`[P|Q]`$ with $`P`$ and $`Q`$ defined by parameters $`n`$, $`p`$ and $`q`$ (60); then if $`p>1`$ and $`p`$ divides $`n`$, $`\mathrm{rk}(\mathrm{\Lambda })\ge \varphi (n/p)+\varphi (n/\mathrm{GCD}(n,q))`$ (and the same statement with $`p`$ and $`q`$ interchanged).
###### Proof.
Suppose $`p`$ divides $`n`$ and $`n/p=d>1`$. We may assume that $`p`$ does not divide $`q`$ since otherwise $`n`$, $`p`$ and $`q`$ would have $`p`$ as a common divisor. Let $`r=r(2\alpha ,2\beta )=r(2\pi /d,2\pi (q/n))`$; then $`R=\langle r\rangle `$ and $`R_p=\langle r^d\rangle `$ is a subgroup of $`R`$ of order $`p`$. By looking in $`\mathrm{\Sigma }`$ (or $`\mathrm{\Sigma }\cup \mathrm{\Sigma }^{-1}`$), we can find a translation $`t_0=t(u,v)\in \mathrm{\Lambda }`$ such that $`u\ne 0`$ and $`v\ne 0`$. Now consider the two products of translations
$`t_1`$ $`=t(u_1,v_1)={\displaystyle \prod _{s\in R_p}}(st_0s^{-1}),`$ (71)
$`t_2`$ $`=t(u_2,v_2)={\displaystyle \prod _{k=1}^{d}}(s_kt_0s_k^{-1}),`$
where $`s_k`$ is any element of the coset $`r^kR_p`$, and $`t_2`$ depends on the particular choice of coset elements. Evaluating the products we find
$`u_1`$ $`=pu,`$ (72)
$`v_1`$ $`=\left({\displaystyle \sum _{m=1}^{p}}e^{2\pi i(qm/p)}\right)v.`$
Equation (72) implies $`e^{2\pi i(q/p)}v_1=v_1`$, and, since $`p`$ does not divide $`q`$, we conclude $`v_1=0`$. Thus $`t_1=t(pu,0)`$. Similarly, we find
$$u_2=\left(\sum _{k=1}^{d}e^{2\pi i(k/d)}\right)u=0.$$
(73)
On the other hand, $`v_2`$ is changed just by making a different choice for one coset element $`s_k`$ (again because $`p`$ does not divide $`q`$). Thus we can always make a choice such that $`t_2=t(0,v_2)`$, where $`v_2\ne 0`$.
Now $`\{t_1\}_R=\mathrm{\Lambda }_X`$ is a lattice in $`X`$ isomorphic to the cyclotomic lattice $`\mathbb{Z}[e^{2\pi i/d}]`$, while $`\{t_2\}_R=\mathrm{\Lambda }_Y`$ is a lattice in $`Y`$ isomorphic to $`\mathbb{Z}[e^{2\pi i(q/n)}]`$. Since $`\mathrm{\Lambda }\supseteq \mathrm{\Lambda }_X\oplus \mathrm{\Lambda }_Y`$, $`\mathrm{rk}(\mathrm{\Lambda })\ge \mathrm{rk}(\mathrm{\Lambda }_X)+\mathrm{rk}(\mathrm{\Lambda }_Y)`$. The statement of the Lemma follows from the well known formula for the rank of a cyclotomic lattice. ∎
As an example of the application of Lemma 3.5, consider the case $`n=8`$, $`p=1`$ and $`q=2`$. For these numbers Lemma 3.5 gives $`\mathrm{rk}(\mathrm{\Lambda })\ge \varphi (4)+\varphi (8)=6`$, and the corresponding Riemann surface would not be discrete. Together, Lemma 3.4, Lemma 3.5 and the inequalities (62) and (63) limit the set of possibilities for $`n`$, $`p`$, and $`q`$ to the combinations $`(n,p,q)=(6,1,2)`$, $`(8,1,3)`$, $`(10,1,3)`$, $`(10,1,4)`$, $`(12,1,5)`$, and $`(12,2,3)`$. All other combinations (consistent with the inequalities) correspond to Riemann surfaces whose lattice groups have ranks exceeding 4 and therefore cannot be discrete. We have no general method to settle the discreteness of these remaining candidates and therefore considered them case by case. The procedure was to obtain generators $`e_i`$ of the finite sets $`\mathrm{\Sigma }`$, or $`\mathrm{\Sigma }\cup \mathrm{\Sigma }^{-1}`$ if $`r(\pi ,\pi )\notin R`$, since these generate the lattice group $`\mathrm{\Lambda }`$. The computations were performed in Mathematica using the function LatticeReduce, which implements the Lenstra–Lenstra–Lovász (LLL) lattice reduction algorithm. With the exception of the case $`(n,p,q)=(10,1,4)`$, which was found to have rank 8, all others proved to have rank 4 and nonvanishing determinant. To these 5 cases of crystallographic Riemann surfaces, two more should be added which simply correspond to an interchange of the spaces $`X`$ and $`Y`$ (obtained by interchanging $`p`$ and $`q`$). Although the interchangeability of these spaces was assumed in the derivation of inequalities (62) and (63), clearly the sections $`𝒜(x)`$ will detect a difference. For example, there is no isometry which relates the surfaces $`(10,1,3)`$ and $`(10,3,1)`$ (or their sections). On the other hand, the use of transformation (61) shows that the surface $`(10,3,1)`$ is isometric with the surface described by $`(n^{\prime },p^{\prime },q^{\prime })=(5,1,2)`$ (where now $`p^{\prime }<q^{\prime }`$).
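The elimination just described can be replayed mechanically. A sketch (names ours), assuming as in the proof of Lemma 3.5 that $`n`$, $`p`$ and $`q`$ have no common divisor:

```python
from math import gcd

def totient(k):
    return sum(1 for j in range(1, k + 1) if gcd(j, k) == 1)

def lemma35_bound(n, p, q):
    # lower bound on rk(Lambda) when p > 1 divides n (Lemma 3.5)
    return totient(n // p) + totient(n // gcd(n, q))

survivors = []
for n in (6, 8, 10, 12):                        # Lemma 3.4
    for p in range(1, n // 4 + 1):              # inequality (62)
        for q in range(p + 1, n // 2 - p + 1):  # inequality (63)
            if gcd(n, gcd(p, q)) != 1:
                continue
            # apply Lemma 3.5 with (p, q) and with the roles interchanged
            ruled_out = (p > 1 and n % p == 0 and lemma35_bound(n, p, q) > 4) \
                     or (q > 1 and n % q == 0 and lemma35_bound(n, q, p) > 4)
            if not ruled_out:
                survivors.append((n, p, q))
```

This reproduces exactly the six candidate triples listed above; of these, only $`(10,1,4)`$ is subsequently removed by the rank computation.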
However, not all interchanges of $`p`$ and $`q`$ produce a new Riemann surface. For example, $`(n,p,q)=(8,3,1)`$ corresponds, by (66), to $`(n^{\prime },p^{\prime },q^{\prime })=(8,1,3)`$ (the values before the interchange).
We summarize this discussion by our main Theorem:
###### Theorem 3.6.
Up to linear transformations, there are seven discrete Riemann surfaces $`[P|Q]`$ generated by conformal maps of right triangles $`P`$ and $`Q`$. Each of these surfaces is crystallographic; their properties, in particular the integers $`n`$, $`p`$ and $`q`$ which specify the angles of $`P`$ and $`Q`$, are given in Table 1.
###### Proof.
The properties listed in Table 1 are simple consequences of general results. Since
$$G/\mathrm{\Lambda }\cong \psi (G)=\langle \sigma ,R,r(\pi ,\pi )\rangle ,$$
(74)
we see that $`G/\mathrm{\Lambda }`$ is isomorphic to an abstract group generated by an element of order 2, $`\stackrel{~}{\sigma }`$, and an element $`\stackrel{~}{r}`$, which has order $`2|R|=2n`$, if $`n=|R|`$ is odd (and therefore $`r(\pi ,\pi )\notin R`$) or $`n`$ is even and just one of $`p`$ and $`q`$ is odd (since then the element of order two in $`R`$ is either $`r(\pi ,0)`$ or $`r(0,\pi )`$). Otherwise ($`n`$ even, $`p`$ and $`q`$ both odd), $`r(\pi ,\pi )\in R`$ and $`\stackrel{~}{r}`$ has order $`|R|=n`$. These generators have the relation $`\stackrel{~}{\sigma }\stackrel{~}{r}\stackrel{~}{\sigma }=\stackrel{~}{r}^{-1}`$ and imply $`G/\mathrm{\Lambda }\cong d_n`$, the dihedral group of order $`2n`$, or $`G/\mathrm{\Lambda }\cong d_{2n}`$.
To compute the genus we use the orders of the vertex groups, (65), (67) and (68), in formula (19):
$$2-2g=|G/\mathrm{\Lambda }|\left(\frac{1}{2n}+\frac{1}{2n^{\prime }}-\frac{1}{4}\right).$$
(75)
The geometry of $`\mathrm{\Lambda }`$ is completely specified by the Gram matrix formed from its generators $`e_k`$, $`k=1,\dots ,4`$:
$$(M_{kl})=\langle e_k,e_l\rangle ,$$
(76)
where $`\langle \cdot ,\cdot \rangle `$ is the standard inner product. If we consider each $`e_k`$ as a vector in $`\mathbb{R}^4`$ and form the $`4\times 4`$ matrix $`E`$ whose rows are $`e_k`$, then $`M=EE^{\text{tr}}`$ and $`|\mathrm{\Lambda }|=|\det E|=\sqrt{\det M}`$. Using the LLL algorithm it was found that the generators of $`\mathrm{\Sigma }`$ and $`\mathrm{\Sigma }\cup \mathrm{\Sigma }^{-1}`$ (and hence $`\mathrm{\Lambda }`$) could always be written as, respectively
$`e_k`$ $`=2(ae^{i2k\alpha },be^{i2k\beta })`$ (77)
$`e_k`$ $`=2(a(e^{i2\alpha }-1)e^{i2k\alpha },b(e^{i2\beta }-1)e^{i2k\beta }).`$
To help identify the lattice geometry it was sometimes necessary to define a new basis $`E^{\prime }=SE`$, where $`S\in \mathrm{SL}(4,\mathbb{Z})`$. The new Gram matrix is then given by $`M^{\prime }=SMS^{\text{tr}}`$. Details of this analysis, for the seven combinations of $`(n,p,q)`$ in Table 1, are provided in the appendix.
The only additional data needed in the density formula (40) is the triangle area $`|P|=(1/4)\mathrm{sin}2\pi (p/n)`$.
The last column of Table 1 identifies which surfaces have doubly periodic sections. In general, since the kernel of the homomorphism $`\pi _X:\mathrm{\Lambda }\to \pi _X(\mathrm{\Lambda })`$ is given by the lattice $`\mathrm{\Lambda }_Y=\mathrm{\Lambda }\cap Y`$,
$$\mathrm{rk}\mathrm{\Lambda }_Y=\mathrm{rk}\mathrm{\Lambda }-\mathrm{rk}\pi _X(\mathrm{\Lambda }).$$
(78)
Surfaces with doubly periodic sections have $`\mathrm{rk}\mathrm{\Lambda }_Y=2`$, while $`\mathrm{rk}\mathrm{\Lambda }_Y=0`$ corresponds to completely quasiperiodic sections. These are the only cases that occur, since (from either form in (77)) $`\pi _X(\mathrm{\Lambda })\cong \mathbb{Z}[e^{2\pi i(p/n)}]`$, the cyclotomic lattice with rank $`\varphi (n/\mathrm{GCD}(n,p))`$. ∎
### 3.4 Piecewise flat surfaces and model sets
Suppose a Riemann surface $`𝒮`$ is deformed into another surface, $`\stackrel{~}{𝒮}`$, not necessarily representable locally by graphs of holomorphic functions. As long as the deformation preserves the isometry group and transversality with respect to $`Y`$, all the crystallographically relevant properties of the point set $`𝒜(x)`$ will be maintained in the corresponding deformed point set $`\stackrel{~}{𝒜}(x)`$. Kalugin’s formula (39), for example, makes this invariance explicit for the density.
When a Riemann surface is generated by a triangular graph $`P|Q`$, deformations that preserve isometry and transversality are easily specified by the map defining the fundamental graph, $`\stackrel{~}{f}:P\to Q`$. We recall that in the Riemann surface, $`\stackrel{~}{f}`$ is holomorphic and extends to a homeomorphism on the closure $`\overline{P}`$. For the deformed surface $`\stackrel{~}{𝒮}`$ we continue to use the edge group $`G`$, defined by the geometry of the triangles $`P`$ and $`Q`$, to form the orbit of the fundamental graph, but insist only that $`\stackrel{~}{f}`$ is a homeomorphism. While still preserving isometry and transversality, we will even go a step further and modify the *topology* of $`\stackrel{~}{𝒮}`$ by defining $`\stackrel{~}{𝒮}`$ as the orbit under $`G`$ of an *open* graph. Under these circumstances, when $`\stackrel{~}{𝒮}`$ is a collection of disconnected components, we are free to relax the condition that $`\stackrel{~}{f}`$ is a homeomorphism. In fact, we will primarily be interested in the case when $`\stackrel{~}{f}`$ is a constant map, thereby making each piece of $`\stackrel{~}{𝒮}`$ flat.
Let $`P|y`$ represent the graph of the constant map, $`\stackrel{~}{f}(P)=y`$. We will weigh the merits of various choices of $`yY`$ as giving optimal “approximations” of the map $`P|Q`$ defined by the conformal map of triangles. By choosing $`y=Q_i`$, a vertex of $`Q`$, we go the furthest in restoring partial connectedness to $`\stackrel{~}{𝒮}`$. This is because the action of the vertex group $`G_i`$ on $`P|Q_i`$ generates a flat polygon (possibly stellated) composed of $`|G_i|`$ triangles. Thus $`|G_i|`$ surface pieces will have been “aligned” by this choice of $`y`$. With the exception of the $`n=5`$ and $`n=10`$ surfaces, $`|G_1|=|G_3|=2n>|G_2|`$, suggesting that either one of the singular vertices is a good choice for $`y`$.
There is another criterion, however, that applies uniformly to all the surfaces and even distinguishes among the two singular vertices. A natural question to ask is: which value of $`y\in Q`$ “occurs with the highest frequency” in the graph $`P|Q`$? To give this question a proper probabilistic interpretation, we suppose that $`x`$ is sampled uniformly in $`P`$ and ask for the probability that $`|f(x)-y_0|<\mathrm{\Delta }`$, where $`f`$ is the conformal map of triangles and $`\mathrm{\Delta }`$ is the radius of a small disk about $`y_0\in Q`$. Since $`f`$ is conformal, this condition (for $`\mathrm{\Delta }\to 0`$) is equivalent to $`|x-x_0|<\mathrm{\Delta }/|f^{\prime }(x_0)|`$, where $`x_0=f^{-1}(y_0)`$. Thus the probability of finding $`y`$ in a neighborhood of $`y_0`$ is maximized by minimizing $`|f^{\prime }(x_0)|`$. At a singular point $`f^{\prime }(x_0)`$ either vanishes or diverges, and in our case vanishes for $`x_0=Q_1`$ since we always have $`p<q`$. We therefore choose $`y=Q_1`$ for all of our surfaces.
The result of flattening the seven Riemann surfaces in Table 1 by this prescription is particularly simple. In all cases $`𝒫=G_1(P|Q_1)`$ is a regular $`(n/p)`$-gon covered $`p`$ times (we restore connectedness to $`𝒫`$ by including edges incident to vertex 1). This means that if each point of $`\stackrel{~}{𝒜}(x)`$ is counted with multiplicity $`p`$, then $`\stackrel{~}{𝒜}(x)`$ and $`𝒜(x)`$ have the same density. Using Lemma 3.1, we have
$$\stackrel{~}{𝒮}=\mathrm{\Lambda }𝒫,\quad \mathrm{\Lambda }=\langle \mathrm{\Sigma }\rangle ,$$
(79)
if $`r(\pi ,\pi )\in R`$; otherwise,
$$\stackrel{~}{𝒮}=\left(\mathrm{\Lambda }\cup \mathrm{\Lambda }\,t(2a,2b)\,r(\pi ,\pi )\right)𝒫,\quad \mathrm{\Lambda }=\langle \mathrm{\Sigma }\cup \mathrm{\Sigma }^{-1}\rangle .$$
(80)
With the exception of the surface $`(5,1,2)`$, $`r(\pi ,\pi )𝒫=𝒫`$, and we have the simpler description:
$$\stackrel{~}{𝒮}=\langle \mathrm{\Sigma }\rangle 𝒫,$$
(81)
since
$$\langle \mathrm{\Sigma }\cup \mathrm{\Sigma }^{-1}\rangle \cup \langle \mathrm{\Sigma }\cup \mathrm{\Sigma }^{-1}\rangle \,t(2a,2b)=\langle \mathrm{\Sigma }\rangle .$$
(82)
Figure 1 compares the point sets $`𝒜(x)`$ and $`\stackrel{~}{𝒜}(x)`$, given by the flattening process just described, of the Riemann surface $`(5,1,2)`$ of Table 1. Edges have been added to $`\stackrel{~}{𝒜}(x)`$ to aid in the visualization of three tile shapes: the boat, star, and jester’s cap. There is a one-to-one correspondence between the two point sets, with most pairs having quite small separations. In any case, we are guaranteed the separation of corresponding points never exceeds 1, the diameter of triangle $`Q`$. $`\stackrel{~}{𝒜}(x)`$, the vertex set of a popular tiling model, is a Delone set. $`𝒜(x)`$ fails to be a Delone set because triples of points appear with arbitrarily short separations. That three points always coalesce in this way is a signature of the order of the branch points of $`\pi _X`$ for this surface. $`𝒜(x)`$, on the other hand, has a “dynamical” advantage over $`\stackrel{~}{𝒜}(x)`$. Seen as atoms in a crystal or quasicrystal, the positions $`𝒜(x)`$ evolve continuously (in fact analytically) with $`x`$ (viewed as a parameter), while the atoms in $`\stackrel{~}{𝒜}(x)`$ experience discontinuous “jumps”, and for the most part never move at all. The singular loci of $`x\in X`$ for the two point sets have different dimensionalities: 0 for $`𝒜(x)`$, 1 for $`\stackrel{~}{𝒜}(x)`$; the dynamics of $`𝒜(x)`$ is thus more regular also in this sense. To emphasize this point, we note that if $`\gamma (t)`$ is almost any curve in $`X`$, then $`𝒜(\gamma (t))`$ is a regular homotopy (see Lemma 2.12).
The sets $`\stackrel{~}{𝒜}(x)`$ are examples of *model sets*, as defined by Moody, and therefore belong to the larger family of *Meyer sets*. That the set $`\stackrel{~}{𝒜}(x)`$ shown in Figure 1 can be organized into a finite set of tile shapes, for example, is a general property of model sets. Model sets obtained by flattening the other six Riemann surfaces of Table 1 are shown in Figure 3. In the three periodic cases, of course, even $`𝒜(x)`$ is a Meyer set and flattening is not necessary if that is our only goal. We flattened these surfaces because the sets $`\stackrel{~}{𝒜}(x)`$ are then particularly symmetric. In the three quasiperiodic cases the sets $`\stackrel{~}{𝒜}(x)`$ again organize themselves into tilings that have been discussed in the quasicrystal literature.
## Acknowledgements
I thank the Aspen Center for Physics, where a large part of this paper was written. Noam Elkies provided a useful suggestion for the proof of Lemma 2.12.
## Appendix A Appendix
For each of the entries in Table 1 we give below the corresponding Gram matrix for the generators (one of the two forms in (77)). From the transformed Gram matrices we see that the lattice geometries are in all cases simple root lattices ($`A_n`$, $`D_n`$, $`Z_n`$) or direct products. The lattices for $`(5,1,2)`$ and $`(12,3,4)`$ are obtained, respectively, from the lattices of $`(10,1,3)`$ and $`(12,2,3)`$ by an interchange of the spaces $`X`$ and $`Y`$.
$`(6,1,2)`$
$`M=`$ $`\left[\begin{array}{cccc}6& 0& 3& 0\\ 0& 6& 0& 3\\ 3& 0& 6& 0\\ 0& 3& 0& 6\end{array}\right]`$ $`\mathrm{\Lambda }A_2\times A_2|\mathrm{\Lambda }|=27`$
$`(8,1,3)`$
$`M=`$ $`\left[\begin{array}{cccc}4& 2& 0& 2\\ 2& 4& 2& 0\\ 0& 2& 4& 2\\ 2& 0& 2& 4\end{array}\right]`$ $`S=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 1& 1& 1\\ 0& 0& 0& 1\end{array}\right]`$
$`SMS^{\text{tr}}=`$ $`\left[\begin{array}{cccc}4& 0& 0& 2\\ 0& 4& 0& 2\\ 0& 0& 4& 2\\ 2& 2& 2& 4\end{array}\right]`$ $`\mathrm{\Lambda }D_4|\mathrm{\Lambda }|=8`$
$`(10,1,3)`$
$`M={\displaystyle \frac{1}{2}}`$ $`\left[\begin{array}{cccc}10& 5& 0& 0\\ 5& 10& 5& 0\\ 0& 5& 10& 5\\ 0& 0& 5& 10\end{array}\right]`$ $`\mathrm{\Lambda }A_4|\mathrm{\Lambda }|={\displaystyle \frac{25}{4}}\sqrt{5}`$
$`(12,1,5)`$
$`M=`$ $`\left[\begin{array}{cccc}4& 3& 2& 0\\ 3& 4& 3& 2\\ 2& 3& 4& 3\\ 0& 2& 3& 4\end{array}\right]`$ $`S=\left[\begin{array}{cccc}1& 1& 0& 0\\ 0& 0& 1& 1\\ 1& 0& 1& 1\\ 0& 1& 1& 0\end{array}\right]`$
$`SMS^{\text{tr}}=`$ $`\left[\begin{array}{cccc}2& 1& 0& 0\\ 1& 2& 0& 0\\ 0& 0& 2& 1\\ 0& 0& 1& 2\end{array}\right]`$ $`\mathrm{\Lambda }A_2\times A_2|\mathrm{\Lambda }|=3`$
$`(12,2,3)`$
$`M={\displaystyle \frac{1}{2}}`$ $`\left[\begin{array}{cccc}14& 3& 1& 6\\ 3& 14& 3& 1\\ 1& 3& 14& 3\\ 6& 1& 3& 14\end{array}\right]`$ $`S=\left[\begin{array}{cccc}1& 0& 1& 0\\ 0& 1& 0& 1\\ 1& 1& 1& 0\\ 0& 1& 1& 1\end{array}\right]`$
$`SMS^{\text{tr}}={\displaystyle \frac{1}{2}}`$ $`\left[\begin{array}{cccc}6& 3& 0& 0\\ 3& 6& 0& 0\\ 0& 0& 8& 0\\ 0& 0& 0& 8\end{array}\right]`$ $`\mathrm{\Lambda }A_2\times Z_2|\mathrm{\Lambda }|=6\sqrt{3}`$
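As a sanity check on the tabulated data, the relation $`|\mathrm{\Lambda }|=\sqrt{\det M}`$ can be evaluated directly from three of the printed Gram matrices; a sketch (names ours):

```python
from fractions import Fraction
from math import sqrt, isclose

def det(rows):
    # exact determinant by fraction-arithmetic Gaussian elimination
    m = [[Fraction(x) for x in r] for r in rows]
    n, d = len(m), Fraction(1)
    for i in range(n):
        piv = next(r for r in range(i, n) if m[r][i] != 0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return d

lattice_volume = lambda M: sqrt(det(M))

M_612  = [[6, 0, 3, 0], [0, 6, 0, 3], [3, 0, 6, 0], [0, 3, 0, 6]]
M_1013 = [[Fraction(k, 2) for k in r] for r in
          [[10, 5, 0, 0], [5, 10, 5, 0], [0, 5, 10, 5], [0, 0, 5, 10]]]
M_1215 = [[4, 3, 2, 0], [3, 4, 3, 2], [2, 3, 4, 3], [0, 2, 3, 4]]
```

One finds $`27`$, $`\frac{25}{4}\sqrt{5}`$ and $`3`$ for $`(6,1,2)`$, $`(10,1,3)`$ and $`(12,1,5)`$ respectively, in agreement with the table.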
# Non-symmetric convex domains have no basis of exponentials
Illinois J. Math., to appear
Mihail N. Kolountzakis — Department of Mathematics, 1409 W. Green St, University of Illinois, Urbana, IL 61801, USA (e-mail: kolount@math.uiuc.edu); current address: Department of Mathematics, University of Crete, 714 09 Iraklio, GREECE (e-mail: kolount@math.uch.gr). Partially supported by the U.S. National Science Foundation, under grant DMS 97-05775.
December 1998; revised October 1999
## Abstract
A conjecture of Fuglede states that a bounded measurable set $`\mathrm{\Omega }\subseteq \mathbb{R}^d`$, of measure $`1`$, can tile $`\mathbb{R}^d`$ by translations if and only if the Hilbert space $`L^2(\mathrm{\Omega })`$ has an orthonormal basis consisting of exponentials $`e_\lambda (x)=\mathrm{exp}2\pi i\langle \lambda ,x\rangle `$. If $`\mathrm{\Omega }`$ has the latter property it is called spectral. We generalize a result of Fuglede, that a triangle in the plane is not spectral, proving that every non-symmetric convex domain in $`\mathbb{R}^d`$ is not spectral.
§0. Introduction
Let $`\mathrm{\Omega }`$ be a measurable subset of $`\mathbb{R}^d`$ of measure $`1`$ and $`\mathrm{\Lambda }`$ be a discrete subset of $`\mathbb{R}^d`$. We write
$`e_\lambda (x)`$ $`=`$ $`\mathrm{exp}2\pi i\langle \lambda ,x\rangle ,(x\in \mathbb{R}^d),`$
$`E_\mathrm{\Lambda }`$ $`=`$ $`\{e_\lambda :\lambda \in \mathrm{\Lambda }\}\subseteq L^2(\mathrm{\Omega }).`$
The inner product and norm on $`L^2(\mathrm{\Omega })`$ are
$$\langle f,g\rangle _\mathrm{\Omega }=\int _\mathrm{\Omega }f\overline{g},\text{ and }\|f\|_\mathrm{\Omega }^2=\int _\mathrm{\Omega }\left|f\right|^2.$$
Definition 1 The pair $`(\mathrm{\Omega },\mathrm{\Lambda })`$ is called a spectral pair if $`E_\mathrm{\Lambda }`$ is an orthonormal basis for $`L^2(\mathrm{\Omega })`$. A set $`\mathrm{\Omega }`$ will be called spectral if there is $`\mathrm{\Lambda }\subseteq \mathbb{R}^d`$ such that $`(\mathrm{\Omega },\mathrm{\Lambda })`$ is a spectral pair. The set $`\mathrm{\Lambda }`$ is then called a spectrum of $`\mathrm{\Omega }`$.
Example: If $`Q_d=(-1/2,1/2)^d`$ is the cube of unit volume in $`\mathbb{R}^d`$ then $`(Q_d,\mathbb{Z}^d)`$ is a spectral pair.
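The cube example can be checked directly: on $`Q_d`$ one has $`\langle e_m,e_n\rangle _{Q_d}=\widehat{\mathrm{𝟏}_{Q_d}}(m-n)`$, a product of one-dimensional sinc factors, which equals $`1`$ for $`m=n`$ and $`0`$ for distinct integer vectors. A quick numerical sketch (function name ours):

```python
from math import sin, pi, prod, isclose

def ft_cube_indicator(t):
    # Fourier transform of the indicator of Q_d = (-1/2, 1/2)^d at t in R^d:
    # a product of sinc factors sin(pi t_j)/(pi t_j)
    return prod(1.0 if tj == 0 else sin(pi * tj) / (pi * tj) for tj in t)

# <e_m, e_n> over the cube equals ft_cube_indicator(m - n); for integer
# vectors this is 1 when m == n and 0 otherwise, so E_{Z^d} is orthonormal
```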
We write $`B_R(x)=\{y\in \mathbb{R}^d:\left|x-y\right|<R\}`$. Definition 2 (Density)
(i) The set $`\mathrm{\Lambda }\subseteq \mathbb{R}^d`$ has uniformly bounded density if for each $`R>0`$ there exists a constant $`C>0`$ such that $`\mathrm{\Lambda }`$ has at most $`C`$ elements in each ball of radius $`R`$ in $`\mathbb{R}^d`$.
(ii) The set $`\mathrm{\Lambda }\subseteq \mathbb{R}^d`$ has density $`\rho `$, and we write $`\rho =\mathrm{dens}\mathrm{\Lambda }`$, if we have
$$\rho =\lim _{R\to \infty }\frac{\left|\mathrm{\Lambda }\cap B_R(x)\right|}{\left|B_R(x)\right|},$$
uniformly for all $`x\in \mathbb{R}^d`$.
We define translational tiling for complex-valued functions below. Definition 3 Let $`f:\mathbb{R}^d\to \mathbb{C}`$ be measurable and $`\mathrm{\Lambda }\subseteq \mathbb{R}^d`$ be a discrete set. We say that $`f`$ tiles with $`\mathrm{\Lambda }`$ at level $`w`$, and sometimes write “$`f+\mathrm{\Lambda }=w\mathbb{R}^d`$”, if
$$\sum _{\lambda \in \mathrm{\Lambda }}f(x-\lambda )=w,\text{for almost every (Lebesgue) }x\in \mathbb{R}^d,$$
(1)
with the sum above converging absolutely a.e. If $`\mathrm{\Omega }\subseteq \mathbb{R}^d`$ is measurable we say that $`\mathrm{\Omega }+\mathrm{\Lambda }`$ is a tiling when $`\mathrm{𝟏}_\mathrm{\Omega }+\mathrm{\Lambda }=w\mathbb{R}^d`$, for some $`w`$. If $`w`$ is not mentioned it is understood to be equal to $`1`$.
Remarks
1. If $`f\in L^1(\mathbb{R}^d)`$ and $`\mathrm{\Lambda }`$ has uniformly bounded density one can easily show (see \[KL96\] for the proof in one dimension, which works in higher dimension as well) that the sum in (1) converges absolutely a.e. and defines a locally integrable function of $`x`$.
2. In the very common case when $`f\in L^1(\mathbb{R}^d)`$ and $`\int _{\mathbb{R}^d}f\ne 0`$ the condition that $`\mathrm{\Lambda }`$ has uniformly bounded density follows easily from (1) and need not be postulated a priori.
3. It is easy to see that if $`f\in L^1(\mathbb{R}^d)`$, $`\int _{\mathbb{R}^d}f\ne 0`$ and $`f+\mathrm{\Lambda }`$ is a tiling then $`\mathrm{\Lambda }`$ has a density and the level of the tiling $`w`$ is given by
$$w=\int _{\mathbb{R}^d}f\cdot \mathrm{dens}\mathrm{\Lambda }.$$
From now on we restrict ourselves to tiling with functions in $`L^1`$ and sets of finite measure.
Example: $`Q_d+\mathbb{Z}^d`$ is a tiling.
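Both the cube tiling and the level formula $`w=\int f\cdot \mathrm{dens}\mathrm{\Lambda }`$ can be illustrated numerically in one dimension; a sketch (names ours) with $`f`$ the indicator of $`(-1/2,1/2)`$, so $`\int f=1`$:

```python
def f(x):
    # indicator of the interval (-1/2, 1/2); its integral is 1
    return 1.0 if -0.5 < x < 0.5 else 0.0

def level(x, step, n=1000):
    # sum over lambda in step*Z (truncated); dens(step*Z) = 1/step,
    # so away from tile boundaries the sum should equal 1/step
    return sum(f(x - k * step) for k in range(-n, n + 1))
```

At a non-boundary point the truncated sum gives level $`1`$ for $`\mathrm{\Lambda }=\mathbb{Z}`$ and level $`2`$ for $`\mathrm{\Lambda }=\frac{1}{2}\mathbb{Z}`$, matching the formula.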
The following conjecture is still unresolved.
Conjecture: (Fuglede \[F74\]) If $`\mathrm{\Omega }\subseteq \mathbb{R}^d`$ is bounded and has Lebesgue measure $`1`$ then $`L^2(\mathrm{\Omega })`$ has an orthonormal basis of exponentials if and only if there exists $`\mathrm{\Lambda }\subseteq \mathbb{R}^d`$ such that $`\mathrm{\Omega }+\mathrm{\Lambda }=\mathbb{R}^d`$ is a tiling.
Remark: It is not hard to show \[F74\] that $`L^2(\mathrm{\Omega })`$ has a basis $`\mathrm{\Lambda }`$ which is a lattice (i.e., $`\mathrm{\Lambda }=A\mathbb{Z}^d`$, where $`A`$ is a non-singular $`d\times d`$ matrix) if and only if $`\mathrm{\Omega }+\mathrm{\Lambda }^{\ast }`$ is a tiling. Here
$$\mathrm{\Lambda }^{\ast }=\{\mu \in \mathbb{R}^d:\langle \mu ,\lambda \rangle \in \mathbb{Z},\forall \lambda \in \mathrm{\Lambda }\}$$
is the dual lattice of $`\mathrm{\Lambda }`$ (we have $`\mathrm{\Lambda }^{\ast }=A^{-\text{tr}}\mathbb{Z}^d`$).
Fuglede \[F74\] showed that the disk and the triangle in $`\mathbb{R}^2`$ are not spectral domains.
In this note we prove the following generalization of Fuglede’s triangle result.
###### Theorem 1
Let $`\mathrm{\Omega }`$ have measure $`1`$ and be a convex, non-symmetric, bounded open set in $`\mathbb{R}^d`$. Then $`\mathrm{\Omega }`$ is not spectral.
The set $`\mathrm{\Omega }`$ is called symmetric with respect to $`0`$ if $`y\in \mathrm{\Omega }`$ implies $`-y\in \mathrm{\Omega }`$, and symmetric with respect to $`x_0\in \mathbb{R}^d`$ if $`y\in \mathrm{\Omega }`$ implies that $`2x_0-y\in \mathrm{\Omega }`$. It is called non-symmetric if it is not symmetric with respect to any $`x_0\in \mathbb{R}^d`$. For example, in any dimension a simplex is non-symmetric.
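For polytopes the definition is easy to test computationally: the only possible center of symmetry is the centroid of the vertex set (in a symmetric polytope the vertices pair up antipodally about the center), so it suffices to reflect the vertices through that point. A sketch (function name ours):

```python
def is_centrally_symmetric(vertices):
    # vertices: list of equal-length coordinate tuples; rounding guards
    # against floating-point noise in the reflected coordinates
    d = len(vertices[0])
    c = [sum(v[i] for v in vertices) / len(vertices) for i in range(d)]
    pts = {tuple(round(x, 9) for x in v) for v in vertices}
    refl = {tuple(round(2 * c[i] - v[i], 9) for i in range(d))
            for v in vertices}
    return pts == refl
```

As expected, a planar simplex fails the test while a square or a parallelogram passes it.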
It is known \[V54, M80\] that every convex body that tiles $`\mathbb{R}^d`$ by translation is a centrally symmetric polytope and that each such body also admits a lattice tiling and, therefore (see the remark after Fuglede’s conjecture above), its $`L^2`$ admits a lattice spectrum. Given Theorem 1, to prove Fuglede’s conjecture restricted to convex domains, one still has to prove that any symmetric convex body that is not a tile admits no orthonormal basis of exponentials for its $`L^2`$.
In §1 we derive some necessary and some sufficient conditions for $`f+\mathrm{\Lambda }`$ to be a tiling. These conditions roughly state that tiling is equivalent to a certain tempered distribution associated with $`\mathrm{\Lambda }`$ being “supported” on the zero set of $`\widehat{f}`$ plus the origin. Similar conditions had been derived in \[KL96\] but here we have to work with less smoothness for $`\widehat{f}`$. To compensate for the lack of smoothness we work with compactly supported $`\widehat{f}`$ and nonnegative $`f`$ and $`\widehat{f}`$, conditions which are fulfilled for our problem.
In §2 we restate the property that $`\mathrm{\Omega }`$ is spectral as a tiling problem for $`\left|\widehat{\mathrm{𝟏}_\mathrm{\Omega }}\right|^2`$ and use the conditions derived in §1 to prove Theorem 1. What makes the proof work is that when $`\mathrm{\Omega }`$ is a non-symmetric convex set the set $`\mathrm{\Omega }-\mathrm{\Omega }`$ has volume strictly larger than $`2^d\mathrm{vol}\mathrm{\Omega }`$.
§1. Fourier-analytic conditions for tiling
Our method relies on a Fourier-analytic characterization of translational tiling, which is a variation of the one used in \[KL96\]. We define the (generally unbounded) measure
$$\delta _\mathrm{\Lambda }=\sum _{\lambda \in \mathrm{\Lambda }}\delta _\lambda ,$$
where $`\delta _\lambda `$ represents a unit mass at $`\lambda \in \mathbb{R}^d`$. If $`\mathrm{\Lambda }`$ has uniformly bounded density then $`\delta _\mathrm{\Lambda }`$ is a tempered distribution (see for example \[R73\]) and therefore its Fourier Transform $`\widehat{\delta _\mathrm{\Lambda }}`$ is defined and is itself a tempered distribution.
The action of a tempered distribution (see \[R73\]) $`\alpha `$ on a Schwartz function $`\varphi `$ is denoted by $`\alpha (\varphi )`$. The Fourier Transform of $`\alpha `$ is defined by the equation
$$\widehat{\alpha }(\varphi )=\alpha (\widehat{\varphi }).$$
The support $`\mathrm{supp}\alpha `$ is the smallest closed set $`F`$ such that for any smooth $`\varphi `$ of compact support contained in the open set $`F^c`$ we have $`\alpha (\varphi )=0`$.
###### Theorem 2
Suppose that $`f\ge 0`$ is not identically $`0`$, that $`f\in L^1(\mathbb{R}^d)`$, $`\widehat{f}\ge 0`$ has compact support and $`\mathrm{\Lambda }\subseteq \mathbb{R}^d`$. If $`f+\mathrm{\Lambda }`$ is a tiling then
$$\mathrm{supp}\widehat{\delta _\mathrm{\Lambda }}\subseteq \{x\in \mathbb{R}^d:\widehat{f}(x)=0\}\cup \left\{0\right\}.$$
(2)
Proof of Theorem 2. Assume that $`f+\mathrm{\Lambda }=w\mathbb{R}^d`$ and let
$$K=\left\{\widehat{f}=0\right\}\cup \left\{0\right\}.$$
We have to show that
$$\widehat{\delta _\mathrm{\Lambda }}(\varphi )=0,\quad \forall \varphi \in C_c^{\infty }(K^c).$$
Since $`\widehat{\delta _\mathrm{\Lambda }}(\varphi )=\delta _\mathrm{\Lambda }(\widehat{\varphi })`$ this is equivalent to $`\sum _{\lambda \in \mathrm{\Lambda }}\widehat{\varphi }(\lambda )=0`$, for each such $`\varphi `$. Notice that $`h=\varphi /\widehat{f}`$ is a continuous function, but not necessarily smooth. We shall need that $`\widehat{h}\in L^1`$. This is a consequence of a well-known theorem of Wiener \[R73, Ch. 11\]. We denote by $`𝕋^d=\mathbb{R}^d/\mathbb{Z}^d`$ the $`d`$-dimensional torus.
Theorem (Wiener)
If $`g\in C(𝕋^d)`$ has an absolutely convergent Fourier series
$$g(x)=\sum _{n\in \mathbb{Z}^d}\widehat{g}(n)e^{2\pi i\langle n,x\rangle },\qquad \widehat{g}\in \ell ^1(\mathbb{Z}^d),$$
and if $`g`$ does not vanish anywhere on $`𝕋^d`$ then $`1/g`$ also has an absolutely convergent Fourier series. Assume that
$$\mathrm{supp}\varphi ,\mathrm{supp}\widehat{f}\subseteq \left(-\frac{L}{2},\frac{L}{2}\right)^d.$$
Define the function $`F`$ to be:
(i) periodic in $`\mathbb{R}^d`$ with period lattice $`(L\mathbb{Z})^d`$,
(ii) to agree with $`\widehat{f}`$ on $`\mathrm{supp}\varphi `$,
(iii) to be non-zero everywhere and,
(iv) to have $`\widehat{F}\in \ell ^1(\mathbb{Z}^d)`$, i.e.,
$$\widehat{F}=\sum _{n\in \mathbb{Z}^d}\widehat{F}(n)\delta _{L^{-1}n},$$
is a finite measure in $`\mathbb{R}^d`$.
One way to define such an $`F`$ is as follows. First, define the $`(L\mathbb{Z})^d`$-periodic function $`g\ge 0`$ to be $`\widehat{f}`$ periodically extended. The Fourier coefficients of $`g`$ are $`\widehat{g}(n)=L^{-d}f(n/L)\ge 0`$. Since $`g,\widehat{g}\ge 0`$ and $`g`$ is continuous at $`0`$ it is easy to prove that $`\sum _{n\in \mathbb{Z}^d}\widehat{g}(n)=g(0)`$, and therefore that $`g`$ has an absolutely convergent Fourier series.
Let $`ϵ`$ be small enough to guarantee that $`\widehat{f}`$ (and hence $`g`$) does not vanish on $`(\mathrm{supp}\varphi )+B_ϵ(0)`$. Let $`k`$ be a smooth $`(L\mathbb{Z})^d`$-periodic function which is equal to $`1`$ on $`(\mathrm{supp}\varphi )+L\mathbb{Z}^d`$ and equal to $`0`$ off $`(\mathrm{supp}\varphi +B_ϵ(0))+L\mathbb{Z}^d`$, and satisfies $`0\le k\le 1`$ everywhere. Finally, define
$$F=kg+(1-k).$$
Since both $`k`$ and $`g`$ have absolutely summable Fourier series and this property is preserved under both sums and products, it follows that $`F`$ also has an absolutely summable Fourier series. And by the nonnegativity of $`g`$ we get that $`F`$ is never $`0`$, since $`k=0`$ on $`Z(\widehat{f})+L\mathbb{Z}^d`$.
By Wiener’s theorem, $`\widehat{F^{-1}}\in \ell ^1(\mathbb{Z}^d)`$, i.e., $`\widehat{F^{-1}}`$ is a finite measure on $`\mathbb{R}^d`$. We now have that
$$\left(\frac{\varphi }{\widehat{f}}\right)^{\wedge }=\widehat{\varphi F^{-1}}=\widehat{\varphi }\ast \widehat{F^{-1}}\in L^1(\mathbb{R}^d).$$
This justifies the interchange of the summation and integration below:
$`{\displaystyle \sum _{\lambda \in \mathrm{\Lambda }}}\widehat{\varphi }(\lambda )`$ $`=`$ $`{\displaystyle \sum _{\lambda \in \mathrm{\Lambda }}}\left({\displaystyle \frac{\varphi }{\widehat{f}}}\widehat{f}\right)^{\wedge }(\lambda )`$
$`=`$ $`{\displaystyle \sum _{\lambda \in \mathrm{\Lambda }}}\left(\left({\displaystyle \frac{\varphi }{\widehat{f}}}\right)^{\wedge }\ast \widehat{\widehat{f}}\right)(\lambda )`$
$`=`$ $`{\displaystyle \sum _{\lambda \in \mathrm{\Lambda }}}{\displaystyle \int _{\mathbb{R}^d}}\left({\displaystyle \frac{\varphi }{\widehat{f}}}\right)^{\wedge }(y)f(y-\lambda )\,dy`$
$`=`$ $`{\displaystyle \int _{\mathbb{R}^d}}\left({\displaystyle \frac{\varphi }{\widehat{f}}}\right)^{\wedge }(y){\displaystyle \sum _{\lambda \in \mathrm{\Lambda }}}f(y-\lambda )\,dy`$
$`=`$ $`w{\displaystyle \int _{\mathbb{R}^d}}\left({\displaystyle \frac{\varphi }{\widehat{f}}}\right)^{\wedge }(y)\,dy`$
$`=`$ $`w{\displaystyle \frac{\varphi }{\widehat{f}}}(0)`$
$`=`$ $`0,`$
as we had to show.
$`\mathrm{}`$
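The Wiener-theorem step invoked in the proof above can be illustrated numerically in one dimension with $`g(x)=3+\mathrm{cos}2\pi x`$, which is continuous and never vanishes on the torus; the Fourier coefficients of $`1/g`$ decay geometrically, so its series converges absolutely. A sketch (all names ours; coefficients approximated by Riemann sums):

```python
from math import cos, pi
import cmath

g = lambda x: 3.0 + cos(2 * pi * x)   # nonvanishing on the torus
h = lambda x: 1.0 / g(x)

def coeff(k, N=4096):
    # Riemann-sum approximation of the k-th Fourier coefficient of h
    return sum(h(m / N) * cmath.exp(-2j * pi * k * m / N)
               for m in range(N)) / N

c = {k: coeff(k) for k in range(-25, 26)}
total = sum(abs(v) for v in c.values())   # finite: absolute summability
recon = lambda x: sum(v * cmath.exp(2j * pi * k * x)
                      for k, v in c.items()).real
```

The truncated series reconstructs $`1/g`$ to high accuracy, as Wiener's theorem predicts it must once the coefficients are absolutely summable.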
For a set $`A\subseteq \mathbb{R}^d`$ and $`\delta >0`$ we write
$$A_\delta =\{x\in \mathbb{R}^d:\mathrm{dist}(x,A)<\delta \}.$$
We shall need the following partial converse to Theorem 2.
###### Theorem 3
Suppose that $`f\in L^1(\mathbb{R}^d)`$, and that $`\mathrm{\Lambda }\subseteq \mathbb{R}^d`$ has uniformly bounded density. Suppose also that $`O\subseteq \mathbb{R}^d`$ is open and
$$\mathrm{supp}\widehat{\delta _\mathrm{\Lambda }}\setminus \left\{0\right\}\subseteq O\text{ and }O_\delta \subseteq \left\{\widehat{f}=0\right\},$$
(3)
for some $`\delta >0`$. Then $`f+\mathrm{\Lambda }`$ is a tiling at level $`\widehat{f}(0)\widehat{\delta _\mathrm{\Lambda }}(\left\{0\right\})`$.
Proof. Let $`\psi :\mathbb{R}^d\to \mathbb{R}`$ be smooth, have support in $`B_1(0)`$ and $`\widehat{\psi }(0)=1`$ and for $`ϵ>0`$ define the approximate identity $`\psi _ϵ(x)=ϵ^{-d}\psi (x/ϵ)`$. Let
$$f_ϵ=\widehat{\psi _ϵ}f,$$
which has rapid decay.
First we show that $`f_ϵ+\mathrm{\Lambda }`$ is a tiling. That is, we show that the convolution $`f_ϵ\ast \delta _\mathrm{\Lambda }`$ is a constant. Let $`\varphi `$ be any Schwartz function. Then
$$f_ϵ\delta _\mathrm{\Lambda }(\varphi )=\widehat{f_ϵ}\widehat{\delta _\mathrm{\Lambda }}(\widehat{\varphi }(x))=\widehat{\delta _\mathrm{\Lambda }}(\widehat{\varphi }(x)\widehat{f_ϵ}).$$
The function $`\widehat{\varphi }(x)\widehat{f_ϵ}`$ is a Schwartz function whose support intersects $`\mathrm{supp}\widehat{\delta _\mathrm{\Lambda }}`$ only at $`0`$, since, for small enough $`ϵ>0`$,
$$\mathrm{supp}\widehat{\varphi }\widehat{f_ϵ}\mathrm{supp}\widehat{f_ϵ}(\mathrm{supp}\widehat{f})_ϵO^c.$$
Hence, for each Schwartz function $`\varphi `$
$$f_ϵ\ast \delta _\mathrm{\Lambda }(\varphi )=\widehat{\varphi }(0)\widehat{f_ϵ}(0)\widehat{\delta _\mathrm{\Lambda }}(\left\{0\right\}),$$
which implies
$$f_ϵ\ast \delta _\mathrm{\Lambda }(x)=\widehat{f_ϵ}(0)\widehat{\delta _\mathrm{\Lambda }}(\left\{0\right\}),\text{a.e.}(x).$$
We also have that $`\sum _{\lambda \in \mathrm{\Lambda }}\left|f(x-\lambda )\right|`$ is finite a.e. (see Remark 1 following the definition of tiling), hence, for almost every $`x\in \mathbb{R}^d`$
$$\sum _{\lambda \in \mathrm{\Lambda }}\left|f(x-\lambda )-f_ϵ(x-\lambda )\right|=\sum _{\lambda \in \mathrm{\Lambda }}\left|f(x-\lambda )\right|\left|1-\widehat{\psi _ϵ}(x-\lambda )\right|,$$
which tends to $`0`$ as $`ϵ\to 0`$. This proves
$$\sum _{\lambda \in \mathrm{\Lambda }}f(x-\lambda )=\widehat{f}(0)\widehat{\delta _\mathrm{\Lambda }}(\left\{0\right\}),\text{a.e.}(x).$$
$`\mathrm{}`$
§2. Proof of the main result
We now make some remarks that relate the property of $`E_\mathrm{\Lambda }`$ being a basis for $`L^2(\mathrm{\Omega })`$ to a certain function tiling $`^d`$ with $`\mathrm{\Lambda }`$.
Assume that $`\mathrm{\Omega }`$ is a bounded open set of measure $`1`$. Notice first that
$$\langle e_\lambda ,e_x\rangle _\mathrm{\Omega }=\widehat{\mathrm{𝟏}_\mathrm{\Omega }}(x-\lambda ).$$
The set $`E_\mathrm{\Lambda }`$ is an orthonormal basis for $`L^2(\mathrm{\Omega })`$ if and only if for each $`f\in L^2(\mathrm{\Omega })`$
$$\|f\|_\mathrm{\Omega }^2=\sum _{\lambda \in \mathrm{\Lambda }}\left|\langle e_\lambda ,f\rangle _\mathrm{\Omega }\right|^2,$$
and, by the completeness of the exponentials in $`L^2`$ of a large cube containing $`\mathrm{\Omega }`$, it is necessary and sufficient that
$$\sum _{\lambda \in \mathrm{\Lambda }}\left|\widehat{\mathrm{𝟏}_\mathrm{\Omega }}(x-\lambda )\right|^2=1,$$
(4)
for each $`x\in \mathbb{R}^d`$. In other words, a necessary and sufficient condition for $`(\mathrm{\Omega },\mathrm{\Lambda })`$ to be a spectral pair is that $`\left|\widehat{\mathrm{𝟏}_\mathrm{\Omega }}\right|^2+\mathrm{\Lambda }`$ is a tiling at level $`1`$. Notice also that $`\left|\widehat{\mathrm{𝟏}_\mathrm{\Omega }}\right|^2`$ is the Fourier Transform of $`\mathrm{𝟏}_\mathrm{\Omega }\ast \stackrel{~}{\mathrm{𝟏}_\mathrm{\Omega }}`$, which has support equal to the set $`\overline{\mathrm{\Omega }-\mathrm{\Omega }}`$. We use the notation $`\stackrel{~}{f}(x)=\overline{f(-x)}`$.
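In dimension $`d=1`$, with $`\mathrm{\Omega }=[0,1]`$ and $`\mathrm{\Lambda }=\mathbb{Z}`$ (a spectral pair), condition (4) reduces to the classical identity $`\sum _n\mathrm{sinc}^2(x-n)=1`$. A quick numerical check of ours (assuming the $`e^{-2\pi ix\xi }`$ convention for the Fourier transform):

```python
import math

def ft_sq(x):
    # squared modulus of the FT of the indicator of [0,1]:
    # |FT|^2(x) = (sin(pi x) / (pi x))^2
    if x == 0.0:
        return 1.0
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2

def collapse(x, cutoff=2000):
    # partial sum of sum_{n in Z} |FT|^2(x - n)  (Lambda = Z)
    return sum(ft_sq(x - n) for n in range(-cutoff, cutoff + 1))

# condition (4): the translates sum to 1 at every x
for x in (0.0, 0.3, 0.5, 1.7):
    assert abs(collapse(x) - 1.0) < 1e-3
```

The tail beyond the cutoff is bounded by roughly $`2/(\pi ^2N)`$, which is why the tolerance `1e-3` suffices for `cutoff=2000`.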
Proof of Theorem 1: Write $`K=\mathrm{\Omega }-\mathrm{\Omega }`$, which is a symmetric, open convex set. Assume that $`(\mathrm{\Omega },\mathrm{\Lambda })`$ is a spectral pair. We can clearly assume that $`0\in \mathrm{\Lambda }`$. It follows that $`\left|\widehat{\mathrm{𝟏}_\mathrm{\Omega }}\right|^2+\mathrm{\Lambda }`$ is a tiling and hence that $`\mathrm{\Lambda }`$ has uniformly bounded density, has density equal to $`1`$, and $`\widehat{\delta _\mathrm{\Lambda }}(\left\{0\right\})=1.`$
By Theorem 2 (with $`f=\left|\widehat{\mathrm{𝟏}_\mathrm{\Omega }}\right|^2`$, $`\widehat{f}=\mathrm{𝟏}_\mathrm{\Omega }\ast \stackrel{~}{\mathrm{𝟏}_\mathrm{\Omega }}(-x)`$) it follows that
$$\mathrm{supp}\,\widehat{\delta _\mathrm{\Lambda }}\setminus \left\{0\right\}\subseteq K^c.$$
Let $`H=K/2`$ and write
$$f(x)=\mathrm{𝟏}_H\ast \stackrel{~}{\mathrm{𝟏}_H}(x)=\int _{\mathbb{R}^d}\mathrm{𝟏}_H(y)\mathrm{𝟏}_H(y-x)\,dy.$$
The function $`f`$ is supported in $`\overline{K}`$ and has nonnegative Fourier Transform
$$\widehat{f}=\left|\widehat{\mathrm{𝟏}_H}\right|^2.$$
We have
$$\int _{\mathbb{R}^d}\widehat{f}=f(0)=\mathrm{vol}\,H$$
and
$$\widehat{f}(0)=\int _{\mathbb{R}^d}f=(\mathrm{vol}\,H)^2.$$
By the Brunn-Minkowski inequality (see for example \[G94, Ch. 3\]), for any convex body $`\mathrm{\Omega }`$,
$$\mathrm{vol}\,\frac{1}{2}(\mathrm{\Omega }-\mathrm{\Omega })\geq \mathrm{vol}\,\mathrm{\Omega },$$
with equality only in the case of symmetric $`\mathrm{\Omega }`$. Since $`\mathrm{\Omega }`$ has been assumed to be non-symmetric it follows that
$$\mathrm{vol}H>1.$$
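The strict inequality can be checked on a concrete non-symmetric body. The sketch below (ours) discretizes the triangle $`\mathrm{\Omega }`$ with vertices $`(0,0),(1,0),(0,1)`$, for which the exact values are $`\mathrm{vol}\,\mathrm{\Omega }=1/2`$ and $`\mathrm{vol}\,\frac{1}{2}(\mathrm{\Omega }-\mathrm{\Omega })=3/4`$ (the difference body is a hexagon of area 3), a ratio of 3/2:

```python
# Grid-based check of vol((Omega - Omega)/2) >= vol(Omega) for a
# non-symmetric Omega: the triangle with vertices (0,0),(1,0),(0,1).
n = 40
h = 1.0 / n
# lattice points of the (closed) triangle
pts = [(i, j) for i in range(n + 1) for j in range(n + 1) if i + j <= n]
# lattice points of Omega - Omega: all pairwise differences
diffs = {(p[0] - q[0], p[1] - q[1]) for p in pts for q in pts}
vol_T = len(pts) * h * h                 # ~ 1/2
vol_half_K = len(diffs) * h * h / 4.0    # ~ vol((Omega - Omega)/2) = 3/4
assert vol_half_K > vol_T
assert abs(vol_half_K / vol_T - 1.5) < 0.1
```

The grid slightly overestimates both volumes (closed-set discretization), which is why a loose tolerance is used for the ratio.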
For
$$1>\rho >\left(\frac{1}{\mathrm{vol}H}\right)^{1/d}$$
consider
$$g(x)=f(x/\rho )$$
which is supported properly inside $`K`$, and has
$$g(0)=f(0)=\mathrm{vol}\,H,\qquad \int _{\mathbb{R}^d}g=\rho ^d\int _{\mathbb{R}^d}f=\rho ^d(\mathrm{vol}\,H)^2.$$
Since $`\mathrm{supp}\,g`$ is properly contained in $`K`$, Theorem 3 implies that $`\widehat{g}+\mathrm{\Lambda }`$ is a tiling at level $`\int _{\mathbb{R}^d}\widehat{g}\cdot \mathrm{dens}\,\mathrm{\Lambda }=\int _{\mathbb{R}^d}\widehat{g}=g(0)=\mathrm{vol}\,H`$. However, the value of $`\widehat{g}`$ at $`0`$ is $`\int _{\mathbb{R}^d}g=\rho ^d(\mathrm{vol}\,H)^2>\mathrm{vol}\,H`$, and, since $`\widehat{g}\geq 0`$ and $`\widehat{g}`$ is continuous, this is a contradiction.
$`\mathrm{}`$
§3. Bibliography
# High-Frequency Hopping Conductivity of a Disordered 2D System in the IQHE Regime
## Introduction
If one places a semiconducting heterostructure on a piezoelectric crystal in which a surface acoustic wave (SAW) propagates, the SAW is attenuated through the interaction of the electrons of the heterostructure with the electric field of the SAW. This is the basis of the acoustic method pioneered by Wixforth for the investigation of GaAs/AlGaAs heterostructures. It has been found that in a GaAs/AlGaAs heterostructure in the IQHE regime the acoustically measured conductivity $`\sigma ^{hf}`$ does not coincide with the $`\sigma ^{dc}`$ obtained from direct-current measurements: $`\sigma ^{dc}=0`$, whereas $`\sigma ^{hf}`$ has a finite value. This difference was explained within the conventional model of electrons being localized in the IQHE regime, so that the conductivity mechanism for direct current differs from that for alternating current. For localized electrons in the hopping conduction regime the hf conductivity is a complex quantity: $`\sigma ^{hf}=\sigma _1^{hf}-i\sigma _2^{hf}`$. The SAW absorption coefficient, $`\mathrm{\Gamma }`$, and the relative change of the SAW velocity, $`\mathrm{\Delta }V/V`$, can be written as:
$`\mathrm{\Gamma }=8.68{\displaystyle \frac{K^2}{2}}kA{\displaystyle \frac{(\frac{4\pi \sigma _1}{\epsilon _sV})t(k)}{[1+(\frac{4\pi \sigma _2}{\epsilon _sV})t(k)]^2+[(\frac{4\pi \sigma _1}{\epsilon _sV})t(k)]^2}},`$ (1)
$$A=8b(k)(\epsilon _1+\epsilon _0)\epsilon _0^2\epsilon _s\mathrm{exp}(-2k(a+d)),$$
$$\frac{\mathrm{\Delta }V}{V}=\frac{K^2}{2}A\frac{(\frac{4\pi \sigma _2}{\epsilon _sV})t(k)+1}{[1+(\frac{4\pi \sigma _2}{\epsilon _sV})t(k)]^2+[(\frac{4\pi \sigma _1}{\epsilon _sV})t(k)]^2},$$
where $`K^2`$ is the electromechanical coupling coefficient of the piezoelectric substrate, $`k`$ and $`V`$ are the SAW wavevector and velocity respectively, $`a`$ is the width of the vacuum gap between the piezoelectric and the 2DEG, $`d`$ is the depth of the 2D layer, $`\epsilon _1`$, $`\epsilon _0`$ and $`\epsilon _s`$ are the dielectric constants of lithium niobate, vacuum and the semiconductor respectively, and $`b`$ and $`t`$ are complex functions of $`a`$, $`d`$, $`\epsilon _1`$, $`\epsilon _0`$ and $`\epsilon _s`$.
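Given measured $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }V/V`$, Eq. (1) can be inverted for $`\sigma _1`$ and $`\sigma _2`$ in closed form. A sketch of ours in terms of the dimensionless combinations $`x=(4\pi \sigma _1/\epsilon _sV)t(k)`$ and $`y=(4\pi \sigma _2/\epsilon _sV)t(k)`$, with the known prefactors divided out (the numerical values below are hypothetical, not measured ones):

```python
# With u = Gamma and w = Delta V / V after dividing out the prefactors
# of Eq. (1):
#   u = x / ((1+y)^2 + x^2),   w = (1+y) / ((1+y)^2 + x^2),
# and since u^2 + w^2 = 1 / ((1+y)^2 + x^2) the inversion is closed-form.

def forward(x, y):
    d = (1.0 + y) ** 2 + x ** 2
    return x / d, (1.0 + y) / d          # (u, w)

def invert(u, w):
    d = 1.0 / (u * u + w * w)            # recover (1+y)^2 + x^2
    return u * d, w * d - 1.0            # (x, y)

x0, y0 = 0.3, 1.2                        # hypothetical values
u, w = forward(x0, y0)
x1, y1 = invert(u, w)
assert abs(x1 - x0) < 1e-12 and abs(y1 - y0) < 1e-12
```

This round trip is how one would extract $`\sigma _1(H,T)`$ and $`\sigma _2(H,T)`$ from the two measured acoustic quantities.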
The aim of this work is to determine Re$`\sigma ^{hf}`$(H,T) and Im$`\sigma ^{hf}`$(H,T) in the IQHE regime from measurements of $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }V/V`$ of the SAW ($`f`$=30 MHz, T=1.5–4.2 K, H up to 7 T), and to analyze the localization mechanism of the 2D electrons.
## The Experimental results and discussion
Fig. 1 illustrates the experimental dependencies of $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }V/V`$ on H at T=1.5 K for sample 1 ($`n=2.7\times 10^{11}\,\mathrm{cm}^{-2}`$). Since $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }V/V`$ are determined by the 2DEG conductivity (Eq. (1)), quantization of the electron spectrum in the magnetic field, which leads to the SdH oscillations, results in similar features in the $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }V/V`$ traces of Fig. 1. Fig. 2a presents the $`\sigma _1(T)`$ dependencies determined from $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }V/V`$ (Fig. 1) using Eq. (1) for H=5.5, 2.7 and 1.8 T ($`\nu `$=2, 4, 6 respectively), where $`\nu =nch/eH`$ is the filling factor. Fig. 2b shows the $`\sigma _2(T)`$ dependencies derived from $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }V/V`$ for H=5.5, 2.7 and 1.8 T. Fig. 3 illustrates the dependencies of $`\sigma _1`$ and $`\sigma _2`$ on magnetic field near H=5.5 T ($`\nu `$=2) at different temperatures. In a number of papers (see, e.g., ) devoted to the study of magnetoresistance in the IQHE regime it was established that in the IQHE plateau regions at low T the dominant conductivity mechanism is variable range hopping.
The hf conductivity in this case is determined by the two-site model and is associated with electronic transitions between the localized states of “tight” pairs of impurities, which determine $`\sigma _1`$. In this case the following relation holds:
$`{\displaystyle \frac{Re\sigma }{Im\sigma }}={\displaystyle \frac{\sigma _1}{\sigma _2}}={\displaystyle \frac{\pi }{2\,\mathrm{ln}(\omega _{ph}/\omega )}},`$ (2)
where $`\omega =2\pi f`$ is the SAW frequency and $`\omega _{ph}`$ is the characteristic phonon frequency, of the order of $`10^{12}`$–$`10^{13}\,\mathrm{s}^{-1}`$. The calculation using Eq. (2) gives $`\sigma _1/\sigma _2=0.15`$ (f=30 MHz). For all samples it was experimentally found that $`\sigma _1/\sigma _2=0.14\pm 0.03`$ at T=1.5 K and H in the middle of the Hall plateaux. This fact suggests that the mechanism that determines $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }V/V`$ of the SAW interacting with localized electrons is hf hopping conductivity.
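Reading Eq. (2) as the standard two-site result $`\sigma _1/\sigma _2=\pi /[2\,\mathrm{ln}(\omega _{ph}/\omega )]`$ (our reading of the garbled source), the quoted ratio can be reproduced directly:

```python
import math

# Eq. (2) evaluated for f = 30 MHz and omega_ph in the stated range
# 1e12 - 1e13 s^-1; the result brackets the quoted value ~0.15.
omega = 2.0 * math.pi * 30e6
ratios = [math.pi / (2.0 * math.log(w_ph / omega))
          for w_ph in (1e12, 1e13)]
assert 0.17 < ratios[0] < 0.19      # ~0.183 for omega_ph = 1e12
assert 0.14 < ratios[1] < 0.15      # ~0.144 for omega_ph = 1e13
```

That the measured $`\sigma _1/\sigma _2=0.14\pm 0.03`$ falls inside this bracket is the consistency check invoked in the text.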
To analyze the dependencies $`\sigma _1(H,T)`$ and $`\sigma _2(H,T)`$ we suppose that they are determined by two mechanisms: hf hopping between the localized impurity states and thermal activation to the upper Landau band. The value of $`\sigma _2`$ is proportional to the number of “tight” pairs, which changes with thermal activation ($`\sigma _2=0`$ for delocalized electrons). The dependence $`\sigma _2(H,T)`$ therefore reflects the change in the number of these pairs.
The dependence $`\sigma _1(H)`$ is similar to the SdH-oscillating behaviour of $`\sigma ^{dc}`$, but $`\sigma _1>\sigma ^{dc}=0`$. $`\sigma _1(T)`$ can be presented as a sum of two terms: $`\sigma _1=\sigma _1^h+\sigma _1^a`$, where $`\sigma _1^h=0.11\sigma _2^h`$ is the hf hopping conductivity and $`\sigma _1^a=\sigma _0\,\mathrm{exp}(-\mathrm{\Delta }E/kT)`$ is the conductivity in the upper Landau band due to electrons activated from the states at the Fermi level, with activation energy $`\mathrm{\Delta }E=\mathrm{}\omega _c/2-C/2`$ ($`\mathrm{}\omega _c`$ is the cyclotron energy, $`C`$ is the Landau band width). From the plotted dependence of $`\mathrm{ln}(\sigma _1-\sigma _1^h)`$ on 1/T, $`\mathrm{\Delta }E`$ was found for H=5.5, 2.7 and 1.8 T. One can see (inset of Fig. 2a) that $`\mathrm{\Delta }E`$ is a linear function of H with a slope corresponding to $`0.5\mathrm{}\omega _c`$, which allows one to derive the width of the Landau band, probably broadened by the random impurity potential; the width turns out to be $`C`$=2 meV.
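The extraction of $`\mathrm{\Delta }E`$ from the slope of $`\mathrm{ln}(\sigma _1-\sigma _1^h)`$ versus $`1/T`$ can be sketched on synthetic data (all parameter values below are hypothetical, not the measured ones):

```python
import math

# Sketch: recovering Delta E from sigma_1(T) = sigma_1^h + sigma_0
# exp(-Delta E / kT) via a straight-line fit of ln(sigma_1 - sigma_1^h)
# against 1/T, as done in the text.
k_B = 8.617e-2                          # Boltzmann constant, meV/K
dE, sigma0, sigma_h = 1.5, 3.0, 0.2     # Delta E in meV (hypothetical)
Ts = [1.5, 2.0, 2.5, 3.0, 3.5, 4.2]
sig = [sigma_h + sigma0 * math.exp(-dE / (k_B * T)) for T in Ts]

xs = [1.0 / T for T in Ts]
ys = [math.log(s - sigma_h) for s in sig]
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
assert abs(-slope * k_B - dE) < 1e-9    # slope = -Delta E / k_B
```

Since the synthetic data are exactly Arrhenius, the fit recovers the activation energy to machine precision; real data would of course scatter.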
## Acknowledgements
The work was supported by RFFI (N 98-02-18280) and Minnauki (N 97-1043).
# Scaling for the Percolation Backbone
## Abstract
We study the backbone connecting two given sites of a two-dimensional lattice separated by an arbitrary distance $`r`$ in a system of size $`L`$. We find a scaling form for the average backbone mass: $`M_B\sim L^{d_B}G(r/L)`$, where $`G`$ can be well approximated by a power law for $`0\le x\le 1`$: $`G(x)\sim x^\psi `$ with $`\psi =0.37\pm 0.02`$. This result implies that $`M_B\sim L^{d_B-\psi }r^\psi `$ for the entire range $`0<r<L`$. We also propose a scaling form for the probability distribution $`P(M_B)`$ of backbone mass for a given $`r`$. For $`r\approx L`$, $`P(M_B)`$ is peaked around $`L^{d_B}`$, whereas for $`r\ll L`$, $`P(M_B)`$ decreases as a power law, $`M_B^{-\tau _B}`$, with $`\tau _B\approx 1.20\pm 0.03`$. The exponents $`\psi `$ and $`\tau _B`$ satisfy the relation $`\psi =d_B(\tau _B-1)`$, and $`\psi `$ is the codimension of the backbone, $`\psi =d-d_B`$.
PACS numbers: 64.60.Ak, 05.45.Df
The percolation problem is a classical model of phase transitions, as well as a useful model for describing connectivity phenomena, and in particular for describing porous media . At the percolation threshold $`p_c`$, the mass of the largest cluster scales with the system size $`L`$ as $`M\sim L^{d_f}`$. The fractal dimension $`d_f`$ is related to the space dimension $`d`$ and to the order parameter and correlation length exponents $`\beta `$ and $`\nu `$ by $`d_f=d-\beta /\nu `$ . In two dimensions, $`d_f=91/48`$ is known exactly.
An interesting subset of the percolation cluster is the backbone which is obtained by removing the non-current carrying bonds from the percolation cluster. The structure of the backbone consists of blobs and links. The backbone can in fact be further partitioned into subsets according to the magnitude of the electric current carried. The backbone is relevant to transport properties and fracture . The fractal dimension $`d_B`$ of the backbone can be defined via its typical mass $`M_B`$, which scales with the system size $`L`$ as $`M_B\sim L^{d_B}`$. The backbone dimension is an independent exponent and its exact value is not known. A current numerical estimate is $`d_B=1.6432\pm 0.0008`$.
The operational definition of the backbone has an interesting history. Customarily, one defines the backbone using parallel bars, and looks for the percolation cluster (and the backbone) which connects the two sides of the system. A different situation arises in oil field applications , where one studies the backbone connecting two wells separated by an arbitrary distance $`r`$. This situation is important for transport properties, since in oil recovery one injects water at one point and recovers oil at another point . From a fundamental point of view, it is important to understand how the percolation properties depend on different boundary conditions.
We study here the backbone connecting two points separated by an arbitrary distance $`r`$ in a two-dimensional system of linear size $`L`$. One goal is to understand the distribution of the backbone mass $`M_B(r,L)`$, and how its average value scales with $`r`$ and $`L`$ in the entire range $`0<r<L`$.
We choose two sites $`A`$ and $`B`$ belonging to the infinite percolating cluster on a two-dimensional square lattice (the fraction of bonds is $`p=p_c=1/2`$). $`A`$ and $`B`$ are separated by a distance $`r`$ and symmetrically located between the boundaries. Using the burning algorithm, we determine the backbone connecting these two points for values of $`L`$ ranging from $`100`$ to $`1000`$. For each value of $`L`$, we consider a sequence of values of $`r`$ with $`2\le r\le L-2`$. In order to test the universality of the exponents, we perform our study on three lattices: square, honeycomb and triangular. For simplicity, we restrict our discussion here to the square lattice, as we find similar results for the other two lattices.
We begin by studying the backbone mass probability distribution $`P(M_B)`$. We show that $`P(M_B)`$ obeys a simple scaling form in the entire range of $`r/L`$,
$$P(M_B)\sim \frac{1}{r^{d_B}}F\left(\frac{M_B}{r^{d_B}}\right),$$
(1)
where $`F(x)`$ is a scaling function, whose shape depends on the ratio $`r/L`$.
For $`r\approx L`$, it seems reasonable to assume that $`P(M_B)`$ will be peaked around its average value $`\langle M_B\rangle \sim L^{d_B}`$. The data collapse predicted by Eq. (1) is represented in Fig. 1(a). In this case, the scaling function $`F`$ is peaked at approximately $`L^{d_B}`$.
However, the case $`r\ll L`$ is far less clear. In fact, we expect for $`r\ll L`$ that the backbone mass fluctuates greatly from one realization to another, since its minimum value can be of order $`r`$ and its maximum can be of order $`L^{d_f}`$. Fig. 1(b) shows a log-log plot of $`P(M_B)`$. It has a lower cut-off of order $`r`$ (since the backbone must connect points $`A`$ and $`B`$) and an upper cut-off of order $`L^{d_B}`$. We find good data collapse (Fig. 1(c)), which indicates that the scaling function $`F`$ is a power law in the range from $`r^{d_B}`$ to $`L^{d_B}`$, with exponent approximately $`\tau _B\approx 1.20\pm 0.03`$ (there is a cut-off at $`M_B\sim L^{d_B}`$ not shown here). The exponent $`\tau _B`$ is connected to the blob size distribution since typically the two sites belong to the same blob, and the sampling of backbones is equivalent to a sampling of the blobs. From ,
$$\frac{d}{d_B}=\tau _B.$$
(2)
This relation gives the estimate $`\tau _B\approx 1.22`$, in good agreement with our numerical simulation.
We note that for larger values of $`M_B`$, a “bump” (indicated by an arrow in Fig. 1(b)) located at approximately $`L^{d_B}`$ appears and assumes increasing importance when $`r`$ approaches $`L`$.
We now study the average backbone mass $`M_B`$. From dimensional considerations, the $`r`$ dependence can only be a function of $`r/L`$. We thus propose the following Ansatz:
$$M_B(r,L)=L^{d_B}G\left(\frac{r}{L}\right).$$
(3)
In Fig. 2(a), we show, on a double logarithmic scale, $`M_B`$ versus $`r`$ for different values of $`L`$. In order to test Eq. (3), we scale the data of Fig. 2(a). The data collapse is obtained using $`d_B=1.65`$ and is shown in Fig. 2(b). This (log-log) plot supports the scaling Ansatz (3). Moreover, one can see that the scaling function $`G`$ is, surprisingly, a pure power law on the entire range $`[0,1]`$, with exponent $`\psi =0.37\pm 0.02`$.
The results (1) and (3) are consistent, since if (1) holds with a power law behavior for the scaling function, $`F(x)\sim x^{-\tau _B}`$ for $`x>1`$, and $`F(x)=0`$ for $`x<1`$, then the average mass is given by
$$M_B(r,L)=\int _r^{L^{d_B}}F\left(\frac{M}{r^{d_B}}\right)M\,\frac{dM}{r^{d_B}}.$$
(4)
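The scaling of the integral in (4) can be checked numerically. The sketch below (ours) uses the closed form of the integral for $`F(x)=x^{-\tau _B}`$ and estimates the exponents of $`r`$ and $`L`$ by finite differences in log-log space:

```python
import math

# Verify that Eq. (4) with F(x) = x^{-tau_B} behaves as
# L^{d_B - psi} r^{psi}, where psi = d_B (tau_B - 1).
d_B, tau_B = 1.6432, 1.22
psi = d_B * (tau_B - 1.0)

def M_B(r, L):
    # closed form of the integral in Eq. (4):
    # r^{d_B (tau_B - 1)} * [M^{2-tau_B} / (2-tau_B)] from r to L^{d_B}
    a = 2.0 - tau_B
    return r ** psi * ((L ** d_B) ** a - r ** a) / a

L0, r0 = 1.0e4, 10.0
psi_est = math.log(M_B(2 * r0, L0) / M_B(r0, L0)) / math.log(2.0)
ell_est = math.log(M_B(r0, 2 * L0) / M_B(r0, L0)) / math.log(2.0)
assert abs(psi_est - psi) < 1e-2            # exponent of r
assert abs(ell_est - (d_B - psi)) < 1e-2    # exponent of L
```

The small residuals come from the lower limit of the integral, which is negligible once $`L/r`$ is large, exactly as the text assumes.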
Assuming that $`L/r`$ is large enough, the integral in (4) can be approximated as $`L^{d_B-\psi }r^\psi `$, where
$$\psi =d_B(\tau _B-1)$$
(5)
In our simulation $`\tau _B\approx 1.20\pm 0.03`$, which leads to the value $`\psi \approx 0.33\pm 0.05`$, in reasonable agreement with the value measured directly from the average mass.
Moreover, using Eq. (2) together with Eq. (5), we obtain
$$\psi =d-d_B$$
(6)
which means that $`\psi `$ is the codimension of the fractal backbone.
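The exponent relations (2), (5) and (6) can be checked against the accepted value of $`d_B`$ (a consistency sketch, not a new measurement):

```python
# Consistency of relations (2), (5), (6) with d = 2 and the accepted
# d_B = 1.6432 +/- 0.0008.
d, d_B = 2.0, 1.6432
tau_B = d / d_B                  # Eq. (2): ~1.217
psi_5 = d_B * (tau_B - 1.0)      # Eq. (5)
psi_6 = d - d_B                  # Eq. (6): the codimension, ~0.357
assert abs(tau_B - 1.217) < 1e-3
assert abs(psi_5 - psi_6) < 1e-12    # (5) and (6) agree algebraically
assert abs(psi_6 - 0.37) < 0.02      # within the measured 0.37 +/- 0.02
```

The algebraic agreement of (5) and (6) is immediate: $`d_B(d/d_B-1)=d-d_B`$.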
To summarize, we find that for any value of $`r/L`$ the scaling form, Eq. (1), for the probability distribution is valid. The shape of the scaling function $`F`$ depends on $`r/L`$, being a peaked distribution for $`r\approx L`$ and a power law for $`r\ll L`$. The average backbone mass varies with $`r`$ and $`L`$ according to Eq. (4). For fixed system size, it varies as $`M_B\sim r^\psi `$ (for $`0<r<L`$). The value of $`\psi `$ is small ($`\psi \approx 0.37`$), indicating that the backbone mass does not change drastically as $`r`$ changes. On the other hand, the exponent governing the variation of $`M_B`$ with $`L`$ for fixed $`r`$ is expected to be larger, with $`M_B\sim L^{d_B-\psi }`$. This exponent $`d_B-\psi `$ is not equal to the fractal dimension $`d_B`$ of the backbone, but is smaller by an amount equal to $`\psi `$.
Acknowledgements. We would like to thank P. Gopikrishnan for help with the simulations and L. A. N. Amaral, A. Coniglio, N. V. Dokholyan, Y. Lee, P. R. King and V. Plerou for stimulating discussions. MB thanks the DGA for financial support. The Center for Polymer Studies is supported by the NSF and BP Amoco.
# Difference Ramsey Numbers and Issai Numbers
Aaron Robertson (webpage: www.math.temple.edu/~aaron/)
This paper is part of the author’s Ph.D. thesis under the direction of Doron Zeilberger.
This paper was supported in part by the NSF under the PI-ship of Doron Zeilberger.
Department of Mathematics, Temple University
Philadelphia, PA 19122
email: aaron@math.temple.edu
Classification: 05D10, 05D05
## Abstract
We present a recursive algorithm for finding good lower bounds for the classical Ramsey numbers. Using notions from this algorithm we then give some results for generalized Schur numbers, which we call Issai numbers.
Introduction
We present two new ideas in this paper. The first deals with classical Ramsey numbers; here we give a recursive algorithm for finding so-called difference Ramsey numbers. Using the ideas from this first part we then define Issai numbers, a generalization of the Schur numbers, and give some easy results, values, and bounds for these Issai numbers.

Recall that $`N=R(k_1,k_2,\mathrm{},k_r)`$ is the minimal integer with the following property:

Ramsey Property: If we $`r`$-color the edges of the complete graph on $`N`$ vertices, then there exists $`j`$, $`1\le j\le r`$, such that a monochromatic $`j`$-colored complete graph on $`k_j`$ vertices is a subgraph of the $`r`$-colored $`K_N`$.

To find a lower bound, $`L`$, for one of these Ramsey numbers, it suffices to find an edgewise coloring of $`K_L`$ which avoids the Ramsey property. To this end, we will restrict our search to the subclass of difference graphs. After presenting some results, we will show that the Issai numbers are a natural consequence of the difference Ramsey numbers, and a natural extension of the Schur numbers.

The Difference Ramsey Numbers part of this article is accompanied by the Maple package AUTORAMSEY. It has also been translated into Fortran77 and is available as DF.f at the author’s website. The Issai Numbers part of this article is accompanied by the Maple package ISSAI. All computer packages are available for download at the author’s website.
Difference Ramsey Numbers
Our goal here is to find good lower bounds for the classical Ramsey numbers. Hence, we wish to find edgewise colorings of complete graphs which avoid the Ramsey Property. Our approach is to construct a recursive algorithm to find the best possible colorings among those colorings we search. Since searching all possible colorings of a complete graph on any nontrivial number of vertices is not feasible by today’s computing standards, we must restict the class of colored graphs to be searched. The class of graphs we will search will be the class of difference graphs. Definition: Difference Graph: Consider the complete graph on $`n`$ vertices, $`K_n`$. Number the vertices $`1`$ through $`n`$. Let $`i<j`$ be two vertices of $`K_n`$. Let $`B_n`$ be a set of arbitrary integers between $`1`$ and $`n1`$. Call $`B_n`$ the set of blue differences on $`n`$ vertices. We now color the edges of $`K_n`$ as follows: if $`jiB_n`$ then color the edge connecting $`i`$ and $`j`$ blue, otherwise color the edge red. The resulting colored graph will be called a difference graph. Given $`k`$ and $`l`$, a difference graph with the maximal number of vertices which avoids both a blue $`K_k`$ and a red $`K_l`$ will be called a maximal difference Ramsey graph. Let the number of vertices of a maximal difference Ramsey graph be $`V`$. Then we will define the difference Ramsey number, denoted $`D(k,l)`$, to be $`V+1`$. Further, since the class of difference graphs is a subclass of all two-colored complete graphs, we have that $`D(k,l)R(k,l)`$. Hence, by finding the difference Ramsey numbers, we are finding lower bounds for the classical Ramsey numbers. Before we present the computational aspect of these difference Ramsey numbers, we establish an easy result: $`D(k,l)D(k1,l)+D(k,l1)`$, which is analagous to the upper bound derived from Ramsey’s proof \[GRS p. 3\], does not follow from Ramsey’s proof.
To see this, consider the difference Ramsey number $`D(3,3)=6`$. Let the set of red differences be $`R_6=\{1,2,4\}`$ (and thus the set of blue differences is $`B_6=\{3,5\}`$). Call this difference graph $`D_6`$. In Ramsey’s proof, a vertex $`v`$ is isolated. The next step is to notice that, regardless of the choice of $`v`$, the number of red edges from $`v`$ to $`D_6\setminus \{v\}`$ is at least $`D(2,3)=3`$. Call $`G`$ the graph induced by the vertices connected to $`v`$ by a red edge. If $`v\ne 1,6`$ then $`G`$ has $`3`$ vertices, otherwise it has $`4`$ vertices. Either way, the number of vertices of $`G`$ is at least $`D(2,3)=3`$.

In order for Ramsey’s argument to work in the difference graph situation, we must show that $`G`$ is isomorphic to a difference graph. Assume there exists an isomorphism, $`\varphi :\{1,2,3,4,5,6\}\to \{1,2,3,4,5,6\}`$, such that the vertex set of $`G`$ is mapped onto $`\{1,2,3\}`$ or $`\{1,2,3,4\}`$ (depending on the number of vertices of $`G`$), and the edge coloring is preserved. Then $`\varphi (G)`$ would be a difference graph. Notice now that $`\varphi (v)\in \{4,5,6\}`$. For any choice of $`\varphi (v)`$ we obtain the contradiction that the difference $`1`$ must be both red and blue (for different edges). Hence, no such isomorphism can exist, and we cannot use the difference Ramsey number property to conclude that the inequality holds.
However, the difference Ramsey numbers seem to be, for small values, quite close to the Ramsey numbers. This may just be a case of the Law of Small Numbers, but numerical evidence from this paper leads us to make the following

Conjecture 1: $`D(k,l)\le D(k-1,l)+D(k,l-1)`$.
The set of difference graphs is a superclass of the often-searched circular (or cyclic) graphs (see the survey \[CG\] by Chung and Grinstead), which are similarly defined. The distinction is that, using the notation above, for a graph to be circular we require that if $`b\in B_n`$, then we must have $`n-b\in B_n`$. By removing this circular condition, we remove from the coloring the dependence on $`n`$ (the number of vertices), and can thereby construct a recursive algorithm to find the set of maximal difference Ramsey graphs.

The recursive step in the algorithm is described as follows. A difference graph on $`n`$ vertices consists of $`B_n`$, the set of blue differences, and $`R_n`$, the set of red differences. Thus $`B_n\cup R_n=\{1,2,3,\mathrm{},n-1\}`$. To obtain a difference graph on $`n+1`$ vertices, we consider the difference $`d=n`$. If $`B_n\cup \{d\}`$ avoids a blue clique, then we have a difference graph on $`n+1`$ vertices where $`B_{n+1}=B_n\cup \{d\}`$ and $`R_{n+1}=R_n`$. (Note that now $`B_{n+1}\cup R_{n+1}=\{1,2,3,\mathrm{},n\}`$.) Likewise, if $`R_n\cup \{d\}`$ avoids a red clique, then we have a different difference graph on $`n+1`$ vertices with $`B_{n+1}=B_n`$ and $`R_{n+1}=R_n\cup \{d\}`$. Hence, we have a simple recursion which is not possible with circular graphs. (By increasing the number of vertices from $`n`$ to $`n+1`$, a circular graph goes from being circular to being completely noncircular: if $`b\in B_n`$, then $`n-b\notin B_n`$.) We can now use our recursive algorithm to find automatically (and, we must note, theoretically, due to time and memory constraints, but with much less time and memory than would be required to search all graphs) all maximal difference Ramsey graphs for any given $`k`$ and $`l`$.
About the Maple Package AUTORAMSEY
AUTORAMSEY is a Maple package that automatically computes the difference graph(s) with the maximum number of vertices that avoids both a blue $`K_k`$ and a red $`K_l`$. Hence, this package automatically finds lower bounds for the Ramsey number $`R(k,l)`$. In the spirit of automation, and to take another step towards AI, AUTORAMSEY can create a verification Maple program tailored to the maximal graph(s) calculated in AUTORAMSEY (that can be run at your leisure) and can write a paper giving the lower bound for the Ramsey number $`R(k,l)`$ along with a maximal difference graph that avoids both a blue $`K_k`$ and a red $`K_l`$. The computer generated program is a straightforward program that can be used to (double) check that the results obtained in AUTORAMSEY do indeed avoid both a blue $`K_k`$ and a red $`K_l`$. Further, this program can be easily altered (with instructions on how to do so) to search two-colored complete graphs for $`k`$-cliques and $`l`$-anticliques.
AUTORAMSEY has also been translated into Fortran77 as DF.f to speed up the algorithm implementation. The code for the translated programs (dependent upon the clique sizes we are trying to avoid) is available for download at my webpage.
The Algorithm
Below we will give the pseudocode which finds the maximal difference Ramsey graph(s). Hence, it also will find the exact value of the difference Ramsey numbers $`D(k,l)`$. Because the number of difference graphs is of order $`2^n`$ as compared to $`2^{n^2/2}`$ for all colored graphs, the algorithm can feasibly work on larger Ramsey numbers.
Let $`𝒟_n`$ be the class of difference graphs on $`n`$ vertices. Let GoodSet be the set of difference graphs that avoid both a blue $`K_k`$ and a red $`K_l`$.
Let m = min(k, l).
Find D_{m-1}, our starting point.
Set GoodSet = D_{m-1}.
Set j = m - 1.
WHILE flag ≠ 0 do
  FOR i from 1 to |GoodSet| do
    Take T ∈ GoodSet, where T is of the form T = [B_j, R_j],
      with B_j and R_j the blue and red difference sets on j vertices.
    Consider S_B = [B_j ∪ {j}, R_j] and S_R = [B_j, R_j ∪ {j}].
    If S_B avoids both a blue K_k and a red K_l then
      NewGoodSet := NewGoodSet ∪ {S_B}.
    If S_R avoids both a blue K_k and a red K_l then
      NewGoodSet := NewGoodSet ∪ {S_R}.
    Repeat FOR loop with a new T.
  If |NewGoodSet| = 0 then RETURN GoodSet and set flag = 0.
  Otherwise, set GoodSet = NewGoodSet, NewGoodSet = {}, and j = j + 1.
  Repeat WHILE loop.
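The pseudocode above translates directly into Python. The following is a sketch of ours (the paper's actual implementations are the Maple package AUTORAMSEY and the Fortran77 program DF.f); the clique check examines $`(k-1)`$-subsets of a difference set whose pairwise differences must lie back in the set:

```python
from itertools import combinations

def has_clique(D, k):
    # pairwise test: the color class with difference set D contains a
    # K_k iff some (k-1)-subset of D has all pairwise differences
    # back in D
    Dset = set(D)
    return any(all(y - x in Dset for x, y in combinations(K, 2))
               for K in combinations(sorted(Dset), k - 1))

def difference_ramsey(k, l, max_n=60):
    # good holds the (blue, red) difference-set pairs on n vertices;
    # extend by the new difference d = n exactly as in the pseudocode
    good = [(frozenset(), frozenset())]   # the one graph on 1 vertex
    n = 1
    while n < max_n:
        d = n
        new_good = []
        for B, R in good:
            if not has_clique(B | {d}, k):
                new_good.append((B | {d}, R))
            if not has_clique(R | {d}, l):
                new_good.append((B, R | {d}))
        if not new_good:
            return n + 1   # maximal difference Ramsey graph has n vertices
        good, n = new_good, n + 1
    return None
```

Under these assumptions, `difference_ramsey(3, 3)` returns `6`; per the tables below it should also give, e.g., `difference_ramsey(3, 4) = 9`.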
For this algorithm to be efficient, the subroutine which checks whether or not a monochromatic clique is avoided must be very quick. We use the following lemma to achieve quick results in the Fortran77 code. (The Maple code is mainly for separately checking, with a different, much slower, but more straightforward, algorithm, the Fortran77 code for small cases.)

Lemma 1: Define the binary operation $`\circ `$ by $`x\circ y=|x-y|`$. Let $`D`$ be a set of differences. If $`D`$ contains a $`k`$-clique, then there exists $`K\subseteq D`$, with $`|K|=k-1`$, such that for all distinct $`x,y\in K`$, $`x\circ y\in D`$.

Proof: We will prove the contrapositive. Let $`K=\{d_1,d_2,\mathrm{},d_{k-1}\}`$. Order and rename the elements of $`K`$ so that $`d_1<d_2<\mathrm{}<d_{k-1}`$. Let $`v_0<v_1<\mathrm{}<v_{k-1}`$ be the vertices of a would-be $`k`$-clique, where $`d_i=v_i-v_0`$. By supposition, there exist $`I<J`$ such that $`d_J\circ d_I=d_J-d_I\notin D`$. This is the edge connecting $`v_J`$ with $`v_I`$. Since this edge is not in $`D`$, $`D`$ contains no $`k`$-clique.

By using this lemma we need only check pairs of elements in a $`k`$-set, rather than constructing all possible colorings using the $`k`$-set. Further, we need not worry about the ordering of the pairs; the operation $`\circ `$ is commutative.
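Lemma 1 can be checked empirically by comparing the pairwise criterion with a brute-force search over vertex subsets (a small randomized sketch of ours):

```python
import random
from itertools import combinations

def clique_by_vertices(D, k, n):
    # brute force: look for k vertices in {1..n} all of whose pairwise
    # differences lie in D
    Dset = set(D)
    return any(all(b - a in Dset for a, b in combinations(vs, 2))
               for vs in combinations(range(1, n + 1), k))

def clique_by_pairs(D, k):
    # Lemma 1 criterion: some (k-1)-subset of D has all pairwise
    # differences back in D
    Dset = set(D)
    return any(all(y - x in Dset for x, y in combinations(K, 2))
               for K in combinations(sorted(Dset), k - 1))

random.seed(1)
n = 12
for _ in range(200):
    D = {d for d in range(1, n) if random.random() < 0.5}
    for k in (3, 4):
        assert clique_by_vertices(D, k, n) == clique_by_pairs(D, k)
```

The two tests agree on every random difference set, while the pairwise version avoids enumerating vertex subsets of the whole graph.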
Some Results
It is easy to find lower bounds for $`R(k,l)`$, so we must show that the algorithm gives “good” lower bounds. Below are two tables of the difference Ramsey number results obtained so far. The first table is of the difference Ramsey number values. The second table is of the number of maximal difference Ramsey graphs. If we are considering the diagonal Ramsey number $`R(k,k)`$, then the number of maximal difference graphs takes into account the symmetry of colors; i.e. we do not count a reversal of colors as a different difference graph. Where lower bounds are listed we have made constraints on the size of the set GoodSet in the algorithm due to memory and/or (self-imposed) time restrictions.
Difference Ramsey Numbers

| $`k`$ \ $`l`$ | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | 6 | 9 | 14 | 17 | 22 | 27 | 36 | 39 | 46 |
| 4 |  | 18 | 25 | 34 | 47 | ≥53 | ≥62 |  |  |
| 5 |  |  | 42 | ≥57 |  |  |  |  |  |
Number of Maximal Difference Ramsey Graphs

| $`k`$ \ $`l`$ | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | 1 | 2 | 3 | 7 | 13 | 13 | 4 | 21 | 6 |
| 4 |  | 1 | 6 | 24 | 21 | n/a | n/a |  |  |
| 5 |  |  | 11 | n/a |  |  |  |  |  |
When we compare our results to the well-known maximal Ramsey graphs for $`R(3,3)`$, $`R(3,4)`$, $`R(3,5)`$, $`R(4,4)`$ \[GG\], and $`R(4,5)`$ \[MR\], we find that the program has found the critical colorings for all of these numbers. The classical coloring in \[GRS\] for $`R(3,4)`$ is not a difference graph, and hence is not found by the program. More importantly, however, the program does find a difference graph on $`8`$ vertices that avoids both a blue $`K_3`$ and a red $`K_4`$. Hence, for the Ramsey numbers found by Gleason and Greenwood \[GG\], and for $`R(4,5)`$, found by McKay and Radziszowski \[MR\], we have found critical Ramsey graphs which are also difference graphs.
The algorithm presented above can be trivially extended to search difference graphs with more than two colors. The progress made so far in this direction follows.
Multicolored Difference Ramsey Numbers
The algorithm presented here can be applied to an arbitrary number of colors. For three colors, the recursive step in the algorithm simply becomes the addition of the next difference to each of the three color sets $`B_n`$, $`R_n`$, and $`G_n`$ ($`G`$ for green). Everything else remains the same. Hence, the alteration of the program to any number of colors is a simple one. The main hurdle encountered while searching difference graphs of more than two colors is that the size of the set GoodSet in the algorithm grows very quickly. In fact, for most multicolored difference Ramsey numbers a full search of all difference graphs consumed the system’s memory within seconds.
$`D(3,3,3)=15`$
$`D(3,3,4)=30`$
$`D(3,3,5)=42`$
$`D(3,3,6)\ge 60`$
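A value such as $`D(3,3,3)=15`$ is certified from below by exhibiting a good 3-coloring of the differences $`1,\ldots ,13`$. The Python sketch below (not the paper's Fortran77 or Maple code; the particular partition is a classical sum-free partition of $`\{1,\ldots ,13\}`$, assumed here for illustration) verifies such a witness:

```python
def triangle_free(cls):
    """In a difference graph, a color class is triangle-free exactly
    when it is sum-free: no d1 + d2 (d1, d2 in the class, possibly
    equal) lands back in the class."""
    s = set(cls)
    return all(a + b not in s for a in s for b in s)

# A sum-free 3-partition of {1,...,13} (a classical Schur partition,
# used here for illustration); it witnesses D(3,3,3) >= 15.
partition = [{1, 4, 10, 13}, {2, 3, 11, 12}, {5, 6, 7, 8, 9}]
```

Since the partition covers $`\{1,\ldots ,13\}`$ and every class is sum-free, the corresponding difference graph on $`14`$ vertices has no monochromatic triangle.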
We note here that $`D(3,3,6)\ge 60`$ implies that $`R(3,3,6)\ge 60`$, which is a new result. The previous best lower bound was $`54`$ \[SLZL\]. The coloring on $`59`$ vertices is cyclic, hence we need only list the differences up to $`29`$:

Color 1: 5, 12, 13, 14, 16, 20, 22

Color 2: 10, 15, 19, 24, 26, 27

Color 3: 1, 2, 3, 4, 6, 7, 8, 9, 11, 17, 18, 21, 23, 25, 28, 29
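Part of this witness is easy to verify mechanically. The Python sketch below (our own check, not the paper's code) confirms that colors 1 and 2 are triangle-free in the cyclic sense; the remaining condition, that color 3 contains no $`K_6`$, needs a clique search and is omitted here for brevity.

```python
def cyclic_triangle_free(n, diffs):
    """In a cyclic (circulant) difference coloring on n vertices, a color
    class is triangle-free iff no two of its differences (taken together
    with their mirror images n - d) sum, mod n, to another one."""
    s = set(diffs) | {n - d for d in diffs}
    return all((a + b) % n not in s for a in s for b in s)

color1 = {5, 12, 13, 14, 16, 20, 22}
color2 = {10, 15, 19, 24, 26, 27}
```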
Future Directions
Currently the algorithm which searches for the maximal difference Ramsey graphs is a straightforward search. In other words, if the memory requirement exceeds the space in the computer, the algorithm will only return a lower bound. In the future this algorithm should be adapted to backtrack searches or network searching. For a backtrack search we would note at which difference the memory barrier is reached and then start splitting up the searches. This would create a tree-like structure. We would then check all leaves on this tree and choose the maximal graph. For network searching, the same type of backtrack algorithm would be used, except that different branches of the tree would be sent to different computers. This would be much quicker, but of course would cost much more in computer facilities.
Issai Numbers
Issai Schur proved in 1916 the following theorem, which is considered the first theorem of Ramsey type and helped spark activity in Ramsey Theory.

Schur’s Theorem: Given $`r`$, there exists an integer $`N=N(r)`$ such that any $`r`$-coloring of the integers $`1`$ through $`N`$ must admit a monochromatic solution to $`x+y=z`$.

We may extend this to the following theorem:

Theorem 1: Given $`r`$ and $`k`$, there exists an integer $`N=N(r,k)`$ such that any $`r`$-coloring of the integers $`1`$ through $`N`$ must admit a monochromatic solution to $`\sum _{i=1}^{k-1}x_i=x_k`$.

This is not a new theorem; in fact it is a special case of Rado’s Theorem \[GRS, p. 56\]. We will, however, present a simple proof which relies only on the notions already presented in this paper.

Proof: Consider the $`r`$-colored difference Ramsey number $`N=D(k,k,\ldots ,k)`$. Then any $`r`$-coloring of $`K_N`$ must have a monochromatic $`K_k`$ subgraph. Let the vertices of this subgraph be $`\{v_0,v_1,\ldots ,v_{k-1}\}`$, with the differences $`d_i=v_i-v_0`$. By ordering and renaming we may assume that $`d_1<d_2<\ldots <d_{k-1}`$. Since $`K_k`$ is monochromatic, the edges $`\overline{v_{i-1}v_i}`$, $`i=1,2,\ldots ,k-1`$, and $`\overline{v_{k-1}v_0}`$ must all be the same color. Since the $`r`$-colored $`K_N`$ is a difference graph, the differences $`(d_{i+1}-d_i)`$, $`i=1,2,\ldots ,k-2`$, $`d_1`$, and $`d_{k-1}`$ must all be assigned the same color. Hence we have the monochromatic solution $`d_1+\sum _{i=1}^{k-2}(d_{i+1}-d_i)=d_{k-1}`$.
Using this theorem we will define Issai numbers. But first, another definition is in order.

Definition: Schur $`k`$-tuple. We will call a $`k`$-tuple $`(x_1,x_2,\ldots ,x_k)`$ a Schur $`k`$-tuple if $`\sum _{i=1}^{k-1}x_i=x_k`$. In the case where $`k=3`$, the $`3`$-tuple $`(x,y,x+y)`$ is called a Schur triple.

In Schur’s theorem the only parameter is $`r`$, the number of colors. Hence, a Schur number is defined to be the minimal integer $`S=S(r)`$ such that any $`r`$-coloring of the integers $`1`$ through $`S`$ must contain a monochromatic Schur triple. It is known that $`S(2)=5`$, $`S(3)=14`$, and $`S(4)=45`$. The Schur numbers have been generalized in \[BB\] and \[S\] in directions different from what will be presented here. We will extend the Schur numbers in the same fashion as the Ramsey numbers were extended from $`R(k,k)`$ to $`R(k,l)`$.

Definition: Issai Number. Let $`S=S(k_1,k_2,\ldots ,k_r)`$ be the minimal integer such that any $`r`$-coloring of the integers from $`1`$ to $`S`$ must have a monochromatic Schur $`k_i`$-tuple, for some $`i\in \{1,2,\ldots ,r\}`$. $`S`$ will be called an Issai number.

The existence of these Issai numbers is trivially implied by the existence of the difference Ramsey numbers $`D(k_1,k_2,\ldots ,k_r)`$. In fact, we have the following result:

Lemma 2: $`S(k_1,k_2,\ldots ,k_r)\le D(k_1,k_2,\ldots ,k_r)-1`$.

Proof: By definition, there exists a minimal integer $`N=D(k_1,k_2,\ldots ,k_r)`$ such that any $`r`$-coloring of $`K_N`$ must contain a monochromatic $`K_{k_i}`$, for some $`i\in \{1,2,\ldots ,r\}`$. Using the same reasoning as in the proof of Theorem 1 and the fact that the differences in the difference graph are $`1,2,\ldots ,N-1`$, we have the stated inequality.

Using this new definition and notation, it is already known that $`S(3,3)=5`$, $`S(3,3,3)=14`$, and $`S(3,3,3,3)=45`$. We note here that since $`D(3,3,3)=15`$ we immediately have $`S(3,3,3)\le 14`$, whereas before, since $`R(3,3,3)=17`$, we had only that $`S(3,3,3)\le 16`$.
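For small cases these numbers can be checked by brute force. The Python sketch below (our own illustration, not the ISSAI package) verifies $`S(3,3)=5`$, consistent with Lemma 2 and $`D(3,3)=6`$:

```python
from itertools import product

def has_mono_schur_triple(coloring):
    """coloring: dict mapping each integer 1..N to a color.  Look for a
    monochromatic Schur triple x + y = z (x = y allowed)."""
    N = len(coloring)
    return any(coloring[x] == coloring[y] == coloring[x + y]
               for x in range(1, N + 1) for y in range(x, N + 1 - x))

def schur_number_two_colors():
    """Smallest N such that every 2-coloring of 1..N contains a
    monochromatic Schur triple (S(3,3) in the paper's notation)."""
    N = 1
    while True:
        if all(has_mono_schur_triple(dict(zip(range(1, N + 1), bits)))
               for bits in product([0, 1], repeat=N)):
            return N
        N += 1
```

The search confirms that $`\{1,4\}`$ versus $`\{2,3\}`$ avoids monochromatic triples on $`\{1,\ldots ,4\}`$, while no 2-coloring of $`\{1,\ldots ,5\}`$ does.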
Attempts to find a general bound for $`S(k,l)`$ have been unsuccessful. The values below lead me to make the following seemingly trivial conjecture:

Conjecture 2: $`S(k-1,l)\le S(k,l)`$.

The difficulty here is that a monochromatic Schur $`k`$-tuple in no way implies the existence of a monochromatic Schur $`(k-1)`$-tuple. To see this, consider the following coloring of $`\{1,2,\ldots ,9\}`$. Color $`\{1,3,5,9\}`$ red, and the other integers blue. Then we have the red Schur $`4`$-tuple $`(1,3,5,9)`$, since $`1+3+5=9`$. However, no red Schur triple exists in this coloring.
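This example can be checked mechanically. In the Python sketch below, `mono_schur_tuples` is our own helper (repeated summands are allowed, as in the definition of a Schur tuple):

```python
from itertools import combinations_with_replacement

def mono_schur_tuples(cls, k):
    """All Schur k-tuples (x1, ..., x_{k-1}, x_k), with
    x1 + ... + x_{k-1} = x_k, lying entirely inside the class cls."""
    s = set(cls)
    return [t + (sum(t),)
            for t in combinations_with_replacement(sorted(s), k - 1)
            if sum(t) in s]

red = {1, 3, 5, 9}
```

Besides $`(1,3,5,9)`$, the search also turns up tuples with repeated summands, such as $`(1,1,1,3)`$; in all cases the class contains Schur $`4`$-tuples but no Schur triple.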
Some Issai Values and Colorings
We used the Maple package ISSAI to calculate the exact values as well as the exceptional colorings given below. ISSAI is written for two colors, but can easily be extended to any number of colors. The value $`S(3,3)=5`$ has been known since before Schur proved his theorem. The value $`S(4,4)=11`$ follows from Beutelspacher and Brestovansky \[BB\], who more generally show that $`S(k,k)=k^2-k-1`$. The remaining values are new.
Issai Numbers
$$\begin{array}{ccccccc}& & & & & & \\ & \hfill l& 3& 4& 5& 6& 7\\ k\hfill & & & & & & \\ & & & & & & \\ & & & & & & \\ 3\hfill & & 5& 7& 11& 13& 17\\ & & & & & & \\ 4\hfill & & & 11& 14& & \end{array}$$
The exceptional colorings found by ISSAI are as follows. Let $`S(k,l)`$ denote the minimal number such that any $`2`$-coloring of the integers from 1 to $`S(k,l)`$ must contain either a red Schur $`k`$-tuple or a blue Schur $`l`$-tuple. It is enough to list only those integers colored red:
| S(3,4)\>6: | Red: 1,6 |
| --- | --- |
| S(3,5)\>10: | Red: 1,3,8,10 |
| S(4,4)\>10: | Red: 1,2,9,10 |
| S(3,6)\>12: | Red: 1,3,10,12 |
| S(4,5)\>13: | Red: 1,2,12,13 |
| S(3,7)\>16: | Red: 1,3,5,12,14,16 |
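Each of these lower-bound witnesses can be verified directly. The following Python sketch (our own check, not the ISSAI code) confirms that every listed coloring avoids both a red Schur $`k`$-tuple and a blue Schur $`l`$-tuple:

```python
from itertools import combinations_with_replacement

def has_schur_tuple(cls, k):
    """True iff the class contains x1 + ... + x_{k-1} = x_k
    (repetitions among the summands allowed)."""
    s = set(cls)
    return any(sum(t) in s
               for t in combinations_with_replacement(sorted(s), k - 1))

# The exceptional colorings listed above: (k, l, n, red part of 1..n).
colorings = [
    (3, 4, 6,  {1, 6}),
    (3, 5, 10, {1, 3, 8, 10}),
    (4, 4, 10, {1, 2, 9, 10}),
    (3, 6, 12, {1, 3, 10, 12}),
    (4, 5, 13, {1, 2, 12, 13}),
    (3, 7, 16, {1, 3, 5, 12, 14, 16}),
]

def witnesses_lower_bound(k, l, n, red):
    """Check that the red/blue 2-coloring of 1..n avoids both a red
    Schur k-tuple and a blue Schur l-tuple, so S(k, l) > n."""
    blue = set(range(1, n + 1)) - red
    return not has_schur_tuple(red, k) and not has_schur_tuple(blue, l)
```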
Acknowledgment I would like to thank my advisor, Doron Zeilberger, for his guidance, his support, and for sharing his mathematical philosophies. I would also like to thank Hans Johnston for his expertise and help with my Fortran code. Further, I would like to thank Daniel Schaal for his help with some references.
References

\[BB\] A. Beutelspacher and W. Brestovansky, Generalized Schur Numbers, Lecture Notes in Mathematics (Springer), 969, 1982, 30-38.

\[C\] F. Chung, On the Ramsey Numbers $`N(3,3,\ldots ,3)`$, Discrete Mathematics, 5, 1973, 317-321.

\[CG\] F.R.K. Chung and C.M. Grinstead, A Survey of Bounds for Classical Ramsey Numbers, Journal of Graph Theory, 7, 1983, 25-37.

\[E\] G. Exoo, On Two Classical Ramsey Numbers of the Form $`R(3,n)`$, SIAM Journal of Discrete Mathematics, 2, 1989, 5-11.

\[GG\] A. Gleason and R. Greenwood, Combinatorial Relations and Chromatic Graphs, Canadian Journal of Mathematics, 7, 1955, 1-7.

\[GR\] C. Grinstead and S. Roberts, On the Ramsey Numbers $`R(3,8)`$ and $`R(3,9)`$, Journal of Combinatorial Theory, Series B, 33, 1982, 27-51.

\[GRS\] R. Graham, B. Rothschild, and J. Spencer, Ramsey Theory, John Wiley and Sons, 1980, 74-76.

\[GY\] J.E. Graver and J. Yackel, Some Graph Theoretic Results Associated with Ramsey’s Theorem, Journal of Combinatorial Theory, 4, 1968, 125-175.

\[K\] J. G. Kalbfleisch, Chromatic Graphs and Ramsey’s Theorem, Ph.D. Thesis, University of Waterloo, 1966.

\[MR\] B. D. McKay and S. P. Radziszowski, $`R(4,5)=25`$, Journal of Graph Theory, 19, 1995, 309-322.

\[Rad\] S. Radziszowski, Small Ramsey Numbers, Electronic Journal of Combinatorics, Dynamic Survey DS1, 1994, 28pp.

\[RK\] S. Radziszowski and D. L. Kreher, On $`R(3,k)`$ Ramsey Graphs: Theoretical and Computational Results, Journal of Combinatorial Mathematics and Combinatorial Computing, 4, 1988, 207-212.

\[S\] D. Schaal, On Generalized Schur Numbers, Cong. Numer., 98, 1993, 178-187.

\[SLZL\] Su Wenlong, Luo Haipeng, Zhang Zhengyou, and Li Guiqing, New Lower Bounds of Fifteen Classical Ramsey Numbers, to appear in Australasian Journal of Combinatorics.
# Experimental Tests of Relativistic Gravity
## 1 Introduction
Einstein’s gravitation theory can be thought of as defined by two postulates. One postulate states that the action functional describing the propagation and self-interaction of the gravitational field is
$$S_{\mathrm{gravitation}}=\frac{c^4}{16\pi G}\int \frac{d^4x}{c}\sqrt{g}R(g).$$
(1)
A second postulate states that the action functional describing the coupling of all the fields describing matter and its electro-weak and strong interactions (leptons and quarks, gauge and Higgs bosons) is a (minimal) deformation of the special relativistic action functional used by particle physicists (the so-called “Standard Model”), obtained by replacing everywhere the flat Minkowski metric $`\eta _{\mu \nu }=\mathrm{diag}(-1,+1,+1,+1)`$ by $`g_{\mu \nu }(x^\lambda )`$ and the partial derivatives $`\partial _\mu \equiv \partial /\partial x^\mu `$ by $`g`$-covariant derivatives $`\nabla _\mu `$. Schematically, one has
$$S_{\mathrm{matter}}=\int \frac{d^4x}{c}\sqrt{g}\,\mathcal{L}_{\mathrm{matter}}[\psi ,A_\mu ,H;g_{\mu \nu }].$$
(2)
Einstein’s theory of gravitation is then defined by extremizing the total action functional, $`S_{\mathrm{tot}}[g,\psi ,A,H]=S_{\mathrm{gravitation}}[g]+S_{\mathrm{matter}}[\psi ,A,H,g]`$.
Although, seen from a wider perspective, the two postulates (1) and (2) follow from the unique requirement that the gravitational interaction be mediated only by massless spin-2 excitations , the decomposition in two postulates is convenient for discussing the theoretical significance of various tests of General Relativity. Let us discuss in turn the experimental tests of the coupling of matter to gravity (postulate (2)), and the experimental tests of the dynamics of the gravitational field (postulate (1)). For more details and references we refer the reader to or .
## 2 Experimental tests of the coupling between matter and gravity
The fact that the matter Lagrangian depends only on a symmetric tensor $`g_{\mu \nu }(x)`$ and its first derivatives (i.e. the postulate of a universal “metric coupling” between matter and gravity) is a strong assumption (often referred to as the “equivalence principle”) which has many observable consequences for the behaviour of localized test systems embedded in given, external gravitational fields. In particular, it predicts the constancy of the “constants” (the outcome of local non-gravitational experiments, referred to local standards, depends only on the values of the coupling constants and mass scales entering the Standard Model) and the universality of free fall (two test bodies dropped at the same location and with the same velocity in an external gravitational field fall in the same way, independently of their masses and compositions).
Many sorts of data (from spectral lines in distant galaxies to a natural fission reactor phenomenon which took place at Oklo, Gabon, two billion years ago) have been used to set limits on a possible time variation of the basic coupling constants of the Standard Model. The best results concern the electromagnetic coupling, i.e. the fine-structure constant $`\alpha _{\mathrm{em}}`$. A recent reanalysis of the Oklo phenomenon gives a conservative upper bound
$$-6.7\times 10^{-17}\,\mathrm{yr}^{-1}<\frac{\dot{\alpha }_{\mathrm{em}}}{\alpha _{\mathrm{em}}}<5.0\times 10^{-17}\,\mathrm{yr}^{-1},$$
(3)
which is much smaller than the cosmological time scale $`10^{-10}\,\mathrm{yr}^{-1}`$. It would be interesting to confirm and/or improve the limit (3) by direct laboratory measurements comparing clocks based on atomic transitions having different dependences on $`\alpha _{\mathrm{em}}`$. \[Current atomic clock tests of the constancy of $`\alpha _{\mathrm{em}}`$ give the limit $`|\dot{\alpha }_{\mathrm{em}}/\alpha _{\mathrm{em}}|<3.7\times 10^{-14}\,\mathrm{yr}^{-1}`$ .\]
The universality of free fall has been verified at the $`10^{-12}`$ level both for laboratory bodies , e.g. (from the last reference in )
$$\left(\frac{\mathrm{\Delta }a}{a}\right)_{\mathrm{Be}\mathrm{Cu}}=(-1.9\pm 2.5)\times 10^{-12},$$
(4)
and for the gravitational accelerations of the Moon and the Earth toward the Sun ,
$$\left(\frac{\mathrm{\Delta }a}{a}\right)_{\mathrm{Moon}\mathrm{Earth}}=(-3.2\pm 4.6)\times 10^{-13}.$$
(5)
In conclusion, the main observable consequences of the Einsteinian postulate (2) concerning the coupling between matter and gravity (“equivalence principle”) have been verified with high precision by all experiments to date (see Refs. , for discussions of other tests of the equivalence principle). The traditional paradigm (first put forward by Fierz ) is that the extremely high precision of free fall experiments ($`10^{12}`$ level) strongly suggests that the coupling between matter and gravity is exactly of the “metric” form (2), but leaves open possibilities more general than eq. (1) for the spin-content and dynamics of the fields mediating the gravitational interaction. We shall provisionally adopt this paradigm to discuss the tests of the other Einsteinian postulate, eq. (1). However, we shall emphasize at the end that recent theoretical findings suggest a new paradigm.
## 3 Tests of the dynamics of the gravitational field in the weak field regime
Let us now consider the experimental tests of the dynamics of the gravitational field, defined in General Relativity by the action functional (1). Following first the traditional paradigm, it is convenient to enlarge our framework by embedding General Relativity within the class of the most natural relativistic theories of gravitation which satisfy exactly the matter-coupling tests discussed above while differing in the description of the degrees of freedom of the gravitational field. This class of theories is that of the metrically-coupled tensor-scalar theories, first introduced by Fierz in a work where he noticed that the class of non-metrically-coupled tensor-scalar theories previously introduced by Jordan would generically entail unacceptably large violations of the equivalence principle. The metrically-coupled (or equivalence-principle respecting) tensor-scalar theories are defined by keeping the postulate (2), but replacing the postulate (1) by demanding that the “physical” metric $`g_{\mu \nu }`$ (coupled to ordinary matter) be a composite object of the form
$$g_{\mu \nu }=A^2(\phi )g_{\mu \nu }^{},$$
(6)
where the dynamics of the “Einstein” metric $`g_{\mu \nu }^{}`$ is defined by the action functional (1) (written with the replacement $`g_{\mu \nu }g_{\mu \nu }^{}`$) and where $`\phi `$ is a massless scalar field. \[More generally, one can consider several massless scalar fields, with an action functional of the form of a general nonlinear $`\sigma `$ model \]. In other words, the action functional describing the dynamics of the spin 2 and spin 0 degrees of freedom contained in this generalized theory of gravitation reads
$`S_{\mathrm{gravitational}}[g_{\mu \nu }^{},\phi ]`$ $`=`$ $`{\displaystyle \frac{c^4}{16\pi G_{}}}{\displaystyle \int \frac{d^4x}{c}\sqrt{g_{}}}`$ (7)
$`\times `$ $`[R(g_{})-2g_{}^{\mu \nu }\partial _\mu \phi \,\partial _\nu \phi ].`$
Here, $`G_{}`$ denotes some bare gravitational coupling constant. This class of theories contains an arbitrary function, the “coupling function” $`A(\phi )`$. When $`A(\phi )=\mathrm{const}.`$, the scalar field is not coupled to matter and one falls back (with suitable boundary conditions) on Einstein’s theory. The simple, one-parameter subclass $`A(\phi )=\mathrm{exp}(\alpha _0\phi )`$, with $`\alpha _0`$ a real constant, is the Jordan-Fierz-Brans-Dicke theory , , . In the general case, one can define the (field-dependent) coupling strength of $`\phi `$ to matter by
$$\alpha (\phi )\equiv \frac{\partial \mathrm{ln}A(\phi )}{\partial \phi }.$$
(8)
It is possible to work out in detail the observable consequences of tensor-scalar theories and to contrast them with the general relativistic case (see, e.g., ref. ).
Let us now consider the experimental tests of the dynamics of the gravitational field that can be performed in the solar system. Because the planets move with slow velocities $`(v/c\sim 10^{-4})`$ in a very weak gravitational potential $`(U/c^2\sim (v/c)^2\sim 10^{-8})`$, solar system tests allow us only to probe the quasi-static, weak-field regime of relativistic gravity (technically described by the so-called “post-Newtonian” expansion). In the limit where one keeps only the first relativistic corrections to Newton’s gravity (first post-Newtonian approximation), all solar-system gravitational experiments, interpreted within tensor-scalar theories, differ from Einstein’s predictions only through the appearance of two “post-Einstein” parameters $`\overline{\gamma }`$ and $`\overline{\beta }`$ (related to the usually considered Eddington parameters $`\gamma `$ and $`\beta `$ through $`\overline{\gamma }\equiv \gamma -1`$, $`\overline{\beta }\equiv \beta -1`$). The parameters $`\overline{\gamma }`$ and $`\overline{\beta }`$ vanish in General Relativity, and are given in tensor-scalar theories by
$$\overline{\gamma }=-2\frac{\alpha _0^2}{1+\alpha _0^2},$$
(9)
$$\overline{\beta }=+\frac{1}{2}\frac{\beta _0\alpha _0^2}{(1+\alpha _0^2)^2},$$
(10)
where $`\alpha _0\equiv \alpha (\phi _0)`$, $`\beta _0\equiv \partial \alpha (\phi _0)/\partial \phi _0`$; $`\phi _0`$ denoting the cosmologically-determined value of the scalar field far away from the solar system. Essentially, the parameter $`\overline{\gamma }`$ depends only on the linearized structure of the gravitational theory (and is a direct measure of its field content, i.e. whether it is pure spin 2 or contains an admixture of spin 0), while the parameter $`\overline{\beta }`$ parametrizes some of the quadratic nonlinearities in the field equations (cubic vertex of the gravitational field).
All currently performed gravitational experiments in the solar system, including perihelion advances of planetary orbits, the bending and delay of electromagnetic signals passing near the Sun, and very accurate range data to the Moon obtained by laser echoes, are compatible with the general relativistic predictions $`\overline{\gamma }=0=\overline{\beta }`$ and give upper bounds on both $`\left|\overline{\gamma }\right|`$ and $`\left|\overline{\beta }\right|`$ (i.e. on possible fractional deviations from General Relativity). The best current limits come from: (i) VLBI measurements of the deflection of radio waves by the Sun, giving : $`-3.8\times 10^{-4}<\overline{\gamma }<2.6\times 10^{-4}`$, and (ii) Lunar Laser Ranging measurements of a possible polarization of the orbit of the Moon toward the Sun (“Nordtvedt effect” ) giving : $`4\overline{\beta }-\overline{\gamma }=-0.0007\pm 0.0010`$.
The corresponding bounds on the scalar coupling parameters $`\alpha _0`$ and $`\beta _0`$ are: $`\alpha _0^2<1.9\times 10^{-4}`$, $`-8.5\times 10^{-4}<(1+\beta _0)\alpha _0^2<1.5\times 10^{-4}`$. Note that if one were working in the more general (and more plausible; see below) framework of theories where the scalar couplings violate the equivalence principle one would get much stronger constraints on the basic coupling parameter $`\alpha _0`$, of order $`\alpha _0^2<10^{-7}`$ .
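As a quick consistency check (our own sketch, not a computation from the paper), one can invert eq. (9) to see how the quoted VLBI bound on $`|\overline{\gamma }|`$ translates into the bound on $`\alpha _0^2`$:

```python
# Invert eq. (9): |gamma_bar| = 2 a / (1 + a), with a = alpha_0^2,
# so a = |gamma_bar| / (2 - |gamma_bar|).
gamma_bar_limit = 3.8e-4  # magnitude of the VLBI light-deflection bound
alpha0_sq_limit = gamma_bar_limit / (2.0 - gamma_bar_limit)
```

For such small $`|\overline{\gamma }|`$ this is essentially $`|\overline{\gamma }|/2`$, reproducing the quoted $`\alpha _0^2<1.9\times 10^{-4}`$.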
The parametrization of the weak-field deviations between generic tensor-scalar theories and Einstein’s theory has been extended to the second post-Newtonian order . Only two post-post-Einstein parameters, $`\epsilon `$ and $`\zeta `$, representing a deeper layer of structure of the gravitational interaction, show up. These parameters have been shown to be already significantly constrained by binary-pulsar data: $`|\epsilon |<7\times 10^{-2}`$, $`|\zeta |<6\times 10^{-3}`$.
## 4 Tests of the dynamics of the gravitational field in the strong field regime
In spite of the diversity, number and often high precision of solar system tests, they have an important qualitative weakness : they probe neither the radiation properties nor the strong-field aspects of relativistic gravity. Fortunately, the discovery and continuous observational study of pulsars in gravitationally bound binary orbits has opened up an entirely new testing ground for relativistic gravity, giving us an experimental handle on the regime of strong and/or radiative gravitational fields.
The fact that binary pulsar data allow one to probe the propagation properties of the gravitational field is well known. This comes directly from the fact that the finite velocity of propagation of the gravitational interaction between the pulsar and its companion generates damping-like terms in the equations of motion, i.e. terms which are directed against the velocities. \[This can be understood heuristically by considering that the finite velocity of propagation must cause the gravitational force on the pulsar to make an angle with the instantaneous position of the companion , and was verified by a careful derivation of the general relativistic equations of motion of binary systems of compact objects \]. These damping forces cause the binary orbit to shrink and its orbital period $`P_b`$ to decrease. The measurement, in some binary pulsar systems, of the secular orbital period decay $`\dot{P}_b\equiv dP_b/dt`$ thereby gives us a direct experimental probe of the damping terms present in the equations of motion.
The fact that binary pulsar data allow one to probe strong-field aspects of relativistic gravity is less well known. The a priori reason for saying that they should is that the surface gravitational potential of a neutron star $`Gm/c^2R\sim 0.2`$ is a mere factor 2.5 below the black hole limit (and a factor $`10^8`$ above the surface potential of the Earth). Due to the peculiar “effacement” properties of strong-field effects taking place in General Relativity , the fact that pulsar data probe the strong-gravitational-field regime can only be seen when contrasting Einstein’s theory with more general theories. In particular, it has been found in tensor-scalar theories that a self-gravity as strong as that of a neutron star can naturally (i.e. without fine tuning of parameters) induce order-unity deviations from general relativistic predictions in the orbital dynamics of a binary pulsar thanks to the existence of nonperturbative strong-field effects. \[The adjective “nonperturbative” refers here to the fact that this phenomenon is nonanalytic in the coupling strength of the scalar field, eq. (8), which can be as small as wished in the weak-field limit\]. As far as we know, this is the first example where large deviations from General Relativity, induced by strong self-gravity effects, occur in a theory which contains only positive energy excitations and whose post-Newtonian limit can be arbitrarily close to that of General Relativity.
A comprehensive account of the use of binary pulsars as laboratories for testing strong-field gravity will be found in ref. . Two complementary approaches can be pursued : a phenomenological one (“Parametrized Post-Keplerian” formalism), or a theory-dependent one , , .
The phenomenological analysis of binary pulsar timing data consists in fitting the observed sequence of pulse arrival times to the generic DD timing formula whose functional form has been shown to be common to the whole class of tensor-multi-scalar theories. The least-squares fit between the timing data and the parameter-dependent DD timing formula allows one to measure, besides some “Keplerian” parameters (“orbital period” $`P_b`$, “eccentricity” $`e`$, …), a maximum of eight “post-Keplerian” parameters: $`k,\gamma ,\dot{P}_b,r,s,\delta _\theta ,\dot{e}`$ and $`\dot{x}`$. Here, $`k\equiv \dot{\omega }P_b/2\pi `$ is the fractional periastron advance per orbit, $`\gamma `$ a time dilation parameter (not to be confused with its post-Newtonian namesake), $`\dot{P}_b`$ the orbital period derivative mentioned above, and $`r`$ and $`s`$ the “range” and “shape” parameters of the gravitational (“Shapiro”) time delay caused by the companion. The important point is that the post-Keplerian parameters can be measured without assuming any specific theory of gravity. Now, each specific relativistic theory of gravity predicts that, for instance, $`k,\gamma ,\dot{P}_b,r`$ and $`s`$ (to quote parameters that have been successfully measured from some binary pulsar data) are some theory-dependent functions of the (unknown) masses $`m_1,m_2`$ of the pulsar and its companion. Therefore, in our example, the five simultaneous phenomenological measurements of $`k,\gamma ,\dot{P}_b,r`$ and $`s`$ determine, for each given theory, five corresponding theory-dependent curves in the $`m_1`$–$`m_2`$ plane (through the 5 equations $`k^{\mathrm{measured}}=k^{\mathrm{theory}}(m_1,m_2)`$, etc.). This yields three $`(3=5-2)`$ tests of the specified theory, according to whether the five curves meet at one point in the mass plane, as they should.
\[In the most general (and optimistic) case, discussed in , one can phenomenologically analyze both timing data and pulse-structure data (pulse shape and polarization) to extract up to nineteen post-Keplerian parameters.\] The theoretical significance of these tests depends upon the physics lying behind the post-Keplerian parameters involved in the tests. For instance, as we said above, a test involving $`\dot{P}_b`$ probes the propagation (and helicity) properties of the gravitational interaction. But a test involving, say, $`k,\gamma ,r`$ or $`s`$ probes (as shown by combining the results of and ) strong self-gravity effects independently of radiative effects.
Besides the phenomenological analysis of binary pulsar data, one can also adopt a theory-dependent methodology , , . The idea here is to work from the start within a certain finite-dimensional “space of theories”, i.e. within a specific class of gravitational theories labelled by some theory parameters. Then by fitting the raw pulsar data to the predictions of the considered class of theories, one can determine which regions of theory-space are compatible (at say the 90% confidence level) with the available experimental data. This method can be viewed as a strong-field generalization of the parametrized post-Newtonian formalism used to analyze solar-system experiments. When non-perturbative strong-field effects are absent one can parametrize strong-gravity effects in neutron stars by using an expansion in powers of the “compactness” $`c_A\equiv -2\partial \mathrm{ln}m_A/\partial \mathrm{ln}G\sim Gm_A/c^2R_A`$. Ref. has then shown that the observable predictions of generic tensor-multi-scalar theories could be parametrized by a sequence of “theory parameters”, $`\overline{\gamma },\overline{\beta },\beta _2,\beta ^{},\beta ^{\prime \prime },\beta _3,(\beta \beta ^{}),\mathrm{}`$ representing deeper and deeper layers of structure of the relativistic gravitational interaction beyond the first-order post-Newtonian level parametrized by $`\overline{\gamma }`$ and $`\overline{\beta }`$. When non-perturbative strong-field effects develop, one cannot use the multi-parameter approach just mentioned. A useful alternative approach is then to work within specific, low-dimensional “mini-spaces of theories”. Of particular interest is the two-dimensional mini-space of tensor-scalar theories defined by the coupling function $`A(\phi )=\mathrm{exp}\left(\alpha _0\phi +\frac{1}{2}\beta _0\phi ^2\right)`$.
The predictions of this family of theories (parametrized by $`\alpha _0`$ and $`\beta _0`$) are analytically described, in weak-field contexts, by the post-Einstein parameter (9), and can be studied in strong-field contexts by combining analytical and numerical methods .
Let us now briefly summarize the current experimental situation. Concerning the first discovered binary pulsar PSR$`1913+16`$ , it has been possible to measure with accuracy the three post-Keplerian parameters $`k,\gamma `$ and $`\dot{P}_b`$. From what was said above, these three simultaneous measurements yield one test of gravitation theories. After subtracting a small ($`10^{-14}`$ level in $`\dot{P}_b`$ !), but significant, perturbing effect caused by the Galaxy , one finds that General Relativity passes this $`(k\gamma \dot{P}_b)_{1913+16}`$ test with complete success at the $`10^{-3}`$ level. More precisely, one finds ,
$`\left[{\displaystyle \frac{\dot{P}_b^{\mathrm{obs}}\dot{P}_b^{\mathrm{galactic}}}{\dot{P}_b^{\mathrm{GR}}[k^{\mathrm{obs}},\gamma ^{\mathrm{obs}}]}}\right]_{1913+16}`$ $`=`$ $`1.0032\pm 0.0023(\mathrm{obs})`$ (11)
$`\pm `$ $`0.0026(\mathrm{galactic})`$
$`=`$ $`1.0032\pm 0.0035,`$
where $`\dot{P}_b^{\mathrm{GR}}[k^{\mathrm{obs}},\gamma ^{\mathrm{obs}}]`$ is the GR prediction for the orbital period decay computed from the observed values of the other two post-Keplerian parameters $`k`$ and $`\gamma `$.
This beautiful confirmation of General Relativity is an embarrassment of riches in that it probes, at the same time, the propagation and strong-field properties of relativistic gravity ! If the timing accuracy of PSR$`1913+16`$ could improve by a significant factor two more post-Keplerian parameters ($`r`$ and $`s`$) would become measurable and would allow one to probe separately the propagation and strong-field aspects . Fortunately, the discovery of the binary pulsar PSR$`1534+12`$ (which is significantly stronger than PSR$`1913+16`$ and has a more favourably oriented orbit) has opened a new testing ground, in which it has been possible to probe strong-field gravity independently of radiative effects. A phenomenological analysis of the timing data of PSR$`1534+12`$ has allowed one to measure the four post-Keplerian parameters $`k,\gamma ,r`$ and $`s`$ . From what was said above, these four simultaneous measurements yield two tests of strong-field gravity, without mixing of radiative effects. General Relativity is found to pass these tests with complete success within the measurement accuracy , . The most precise of these new, pure strong-field tests is the one obtained by combining the measurements of $`k`$, $`\gamma `$ and $`s`$. Using the most recent data one finds agreement at the 1% level:
$$\left[\frac{s^{\mathrm{obs}}}{s^{\mathrm{GR}}[k^{\mathrm{obs}},\gamma ^{\mathrm{obs}}]}\right]_{1534+12}=1.007\pm 0.008.$$
(12)
Recently, it has also been possible to extract the “radiative” parameter $`\dot{P}_b`$ from the timing data of PSR$`1534+12`$. Again, General Relativity is found to be fully consistent (at the $`15\%`$ level) with the additional test provided by the $`\dot{P}_b`$ measurement. Note that this gives our second direct experimental confirmation that the gravitational interaction propagates as predicted by Einstein’s theory.
More recently, measurements of the pulse shape of PSR $`1913+16`$ have detected a time variation of the pulse shape compatible with the prediction that the general relativistic spin-orbit coupling should cause a secular change in the orientation of the pulsar beam with respect to the line of sight (“geodetic precession”). As envisaged long ago, this precession will cause the pulsar to disappear (around 2035) and to remain invisible for hundreds of years.
A theory-dependent analysis of the published pulsar data on PSRs $`1913+16`$, $`1534+12`$ and $`0655+64`$ (a dissymmetric system constraining the existence of dipolar radiation) has recently been performed within the $`(\alpha _0,\beta _0)`$-space of tensor-scalar theories introduced above. This analysis proves that binary-pulsar data exclude large regions of theory-space which are compatible with solar-system experiments. This is illustrated in Fig. 9 of Ref. which shows that $`\beta _0`$ must be larger than about $`-5`$, while any value of $`\beta _0`$ is compatible with weak-field tests as long as $`\alpha _0`$ is small enough.
## 5 Was Einstein 100% right ?
Summarizing the experimental evidence discussed above, we can say that Einstein’s postulate of a pure metric coupling between matter and gravity (“equivalence principle”) appears to be, at least, $`99.9999999999\%`$ right (because of universality-of-free-fall experiments), while Einstein’s postulate (1) for the field content and dynamics of the gravitational field appears to be, at least, $`99.9\%`$ correct both in the quasi-static-weak-field limit appropriate to solar-system experiments, and in the radiative-strong-field regime explored by binary pulsar experiments. Should one apply Occam’s razor and decide that Einstein must have been $`100\%`$ right, and then stop testing General Relativity ? My answer is definitely no !
First, one should continue testing a basic physical theory such as General Relativity to the utmost precision available simply because it is one of the essential pillars of the framework of physics. This is the fundamental justification of an experiment such as Gravity Probe B (the Stanford gyroscope experiment), which will advance by one order of magnitude our experimental knowledge of post-Newtonian gravity.
Second, some very crucial qualitative features of General Relativity have not yet been verified : in particular the existence of black holes, and the direct detection on Earth of gravitational waves. Hopefully, the LIGO/VIRGO network of interferometric detectors will observe gravitational waves early in the next century.
Last, some theoretical findings suggest that the current level of precision of the experimental tests of gravity might be naturally (i.e. without fine tuning of parameters) compatible with Einstein being actually only 50% right ! By this we mean that the correct theory of gravity could involve, on the same fundamental level as the Einsteinian tensor field $`g_{\mu \nu }`$, a massless scalar field $`\phi `$.
Let us first question the traditional paradigm, according to which special attention should be given to tensor-scalar theories respecting the equivalence principle. This class of theories was, in fact, introduced in a purely ad hoc way so as to prevent too violent a contradiction with experiment. However, it is important to notice that the scalar couplings which arise naturally in theories unifying gravity with the other interactions systematically violate the equivalence principle. This is true both in Kaluza-Klein theories (which were the starting point of Jordan’s theory) and in string theories. In particular, it is striking that (as first noted by Scherk and Schwarz) the dilaton field $`\mathrm{\Phi }`$, which plays an essential role in string theory, appears as a necessary partner of the graviton field $`g_{\mu \nu }`$ in all string models. Let us recall that $`g_s=e^\mathrm{\Phi }`$ is the basic string coupling constant (measuring the weight of successive string loop contributions) which determines, together with other scalar fields (the moduli), the values of all the coupling constants of the low-energy world. This means, for instance, that the fine-structure constant $`\alpha _{\mathrm{em}}`$ is a function of $`\mathrm{\Phi }`$ (and possibly of other moduli fields). In intuitive terms, while Einstein proposed a framework where geometry and gravitation were united as a dynamical field $`g_{\mu \nu }(x)`$, i.e. a soft structure influenced by the presence of matter, string theory extends this idea by proposing a framework where geometry, gravitation, gauge couplings and gravitational couplings all become soft structures described by interrelated dynamical fields. Symbolically, one has $`g_{\mu \nu }(x)`$, $`g^2(x)`$, $`G(x)`$. This spatiotemporal variability of coupling constants entails a clear violation of the equivalence principle.
In particular, $`\alpha _{\mathrm{em}}`$ would be expected to vary on the Hubble time scale (in contradiction with the limit (3) above), and materials of different compositions would be expected to fall with different accelerations (in contradiction with the limits (4), (5) above).
The most popular idea for reconciling gravitational experiments with the existence, at a fundamental level, of scalar partners of $`g_{\mu \nu }`$ is to assume that all these scalar fields (which are massless before supersymmetry breaking) will acquire a mass after supersymmetry breaking. Typically one expects this mass $`m`$ to be in the TeV range. This would ensure that scalar exchange brings only negligible, exponentially small corrections $`\mathrm{exp}(-mr/\mathrm{\hbar }c)`$ to the general relativistic predictions concerning low-energy gravitational effects. However, the interesting possibility exists that the mass $`m`$ is in the milli-eV range, corresponding to observable deviations from usual gravity below one millimeter.
But, the idea of endowing the scalar partners of $`g_{\mu \nu }`$ with a nonzero mass is fraught with many cosmological difficulties. Though these cosmological difficulties might be solved by a combination of ad hoc solutions (e.g. introducing a secondary stage of inflation to dilute previously produced dilatons), a more radical solution to the problem of reconciling the existence of the dilaton (or any moduli field) with experimental tests and cosmological data has been proposed (see also Ref. , which considered an equivalence-principle-respecting scalar field). The main idea of Ref. is that string-loop effects (i.e. corrections depending upon $`g_s=e^\mathrm{\Phi }`$ induced by worldsheets of arbitrary genus in intermediate string states) may modify the low-energy, Kaluza-Klein type matter couplings $`(e^{-2\mathrm{\Phi }}F_{\mu \nu }F^{\mu \nu })`$ of the dilaton (or moduli) in such a manner that the VEV of $`\mathrm{\Phi }`$ is cosmologically driven toward a finite value $`\mathrm{\Phi }_m`$ where it decouples from matter. For such a “least coupling principle” to hold, the loop-modified coupling functions of the dilaton, $`B_i(\mathrm{\Phi })=e^{-2\mathrm{\Phi }}+c_0+c_1e^{2\mathrm{\Phi }}+\mathrm{}+`$ (nonperturbative terms), must exhibit extrema for finite values of $`\mathrm{\Phi }`$, and these extrema must have certain universality properties. A natural way in which the required conditions could be satisfied is through the existence of a discrete symmetry in scalar space. \[For instance, a symmetry under $`\mathrm{\Phi }\to -\mathrm{\Phi }`$ would guarantee that all the scalar coupling functions reach an extremum at the self-dual point $`\mathrm{\Phi }_m=0`$\].
A study of the efficiency of this mechanism of cosmological attraction of $`\phi `$ towards $`\phi _m`$ ($`\phi `$ denoting the canonically normalized scalar field in the Einstein frame, see Eq. (7)) estimates that the present vacuum expectation value $`\phi _0`$ of the scalar field would differ (in an rms sense) from $`\phi _m`$ by
$$\phi _0-\phi _m\simeq 2.75\times 10^{-9}\times \kappa ^{-3}\mathrm{\Omega }_m^{-3/4}\mathrm{\Delta }\phi .$$
(13)
Here $`\kappa `$ denotes the curvature of the gauge coupling function $`\mathrm{ln}B_F(\phi )`$ around the maximum $`\phi _m`$, $`\mathrm{\Omega }_m`$ denotes the present cosmological matter density in units of $`10^{-29}`$ g cm<sup>-3</sup>, and $`\mathrm{\Delta }\phi `$ the deviation $`\phi -\phi _m`$ at the beginning of the (classical) radiation era. Equation (13) predicts (when $`\mathrm{\Delta }\phi `$ is of order unity<sup>1</sup><sup>1</sup>1However, $`\mathrm{\Delta }\phi `$ could be $`\ll 1`$ if the attractor mechanism already applies during an early stage of potential-driven inflation.) the existence, at the present cosmological epoch, of many small, but not unmeasurably small, deviations from General Relativity proportional to the square of $`\phi _0-\phi _m`$. This provides a new incentive for trying to improve by several orders of magnitude the various experimental tests of Einstein’s equivalence principle. The most sensitive way to look for a small residual violation of the equivalence principle is to perform improved tests of the universality of free fall. The mechanism of Ref. suggests a specific composition-dependence of the residual differential acceleration of free fall and estimates that a non-zero signal could exist at the very small level
$$\left(\frac{\mathrm{\Delta }a}{a}\right)_{\mathrm{rms}}^{\mathrm{max}}\simeq 1.36\times 10^{-18}\kappa ^{-4}\mathrm{\Omega }_m^{-3/2}(\mathrm{\Delta }\phi )^2,$$
(14)
where $`\kappa `$ is expected to be of order unity (or smaller, leading to a larger signal, in the case where $`\phi `$ is a modulus rather than the dilaton).
Let us emphasize that the strength of the cosmological scenario considered here as a counterargument to applying Occam’s razor lies in the fact that the very small number on the right-hand side of eq. (14) has been derived without any fine tuning or use of small parameters, and turns out to be naturally smaller than the $`10^{-12}`$ level presently tested by equivalence-principle experiments (see equations (4), (5)). The estimate (14) gives added significance to the project of a Satellite Test of the Equivalence Principle (nicknamed STEP, and currently studied by NASA, ESA and CNES) which aims at probing the universality of free fall of pairs of test masses orbiting the Earth at the $`10^{-18}`$ level.
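A rough numerical illustration (my own, taking order-unity values for $`\kappa `$, $`\mathrm{\Omega }_m`$ and $`\mathrm{\Delta }\phi `$ as the text assumes) of why the estimate (14) lands between the present equivalence-principle limits and the projected STEP sensitivity:

```python
# Evaluate Eq. (14) for order-unity parameters (an assumption, not a fit)
kappa, omega_m, dphi = 1.0, 1.0, 1.0
delta_a_over_a = 1.36e-18 * kappa**-4 * omega_m**-1.5 * dphi**2

current_limit = 1e-12   # present universality-of-free-fall tests
step_goal = 1e-18       # target sensitivity of the STEP proposal

print(delta_a_over_a < current_limit)   # -> True: no conflict with data
print(delta_a_over_a >= step_goal)      # -> True: within reach of STEP
```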
# SECOND HARMONICS AND COMPENSATION EFFECT IN CERAMIC SUPERCONDUCTORS
One of the most fascinating discoveries in condensed matter physics is the paramagnetic Meissner effect (PME) in certain ceramic superconductors. The nature of the unusual paramagnetic behaviour may be related to the appearance of spontaneous supercurrents (or of orbital moments). The latter appear due to the existence of $`\pi `$-junctions characterized by negative Josephson couplings. Furthermore, Sigrist and Rice argued that the PME in the high-$`T_c`$ superconductors is consistent with $`d`$-wave superconductivity. This effect is successfully reproduced in a single-loop model as well as in a model of interacting junction-loops.
The mechanism of the PME based on the $`d`$-wave symmetry of the order parameter remains ambiguous because it is not clear why this effect could not be observed in many ceramic materials. More importantly, the paramagnetic response has been seen even in the conventional Nb and Al superconductors. In order to explain the PME in terms of conventional superconductivity one can employ the idea of flux compression inside a sample. Such a phenomenon becomes possible in the presence of inhomogeneities or of the sample boundary. Thus the intrinsic mechanism leading to the PME is still under debate.
Recently Heinzel et al. have shown that the PME may be analyzed by the compensation technique based on the measurement of the second harmonics of the magnetic ac susceptibility. Their key observation is that the so-called compensation effect (CE) appears only in the samples which show the PME but not in those which do not. Overall, this effect may be detected in the following way. The sample is cooled in the external dc field down to a low temperature and then the field is switched off. At the fixed low $`T`$ the second harmonics are monitored by applying the dc and ac fields to the sample. Due to the presence of non-zero spontaneous orbital moments the remanent magnetization or, equivalently, the internal field appears in the cooling process. If the direction of the external dc field is identical to that during the field-cooled (FC) procedure, the induced shielding currents will reduce the remanence. Consequently, the absolute value of the second harmonics $`|\chi _2|`$ decreases until the signal of the second harmonics is minimized at a field $`H_{dc}=H_{com}`$. Thus the CE is a phenomenon in which the external and internal fields are compensated and the second harmonics become zero.
The goal of this paper is to explain the CE theoretically by Monte Carlo simulations. Our starting point is based on the possible existence of the chiral glass phase in which the remanence necessary for observing the CE should occur in the cooling procedure. Such remanence phenomenon is similar to what happens in spin glass. Furthermore, the PME related to the CE can also be observed in the chiral glass phase. There are several experimental results which appear to corroborate the existence of such a novel glassy phase in ceramic high-$`T_c`$ superconductors.
In the chiral glass phase the frustration due to existence of 0- and $`\pi `$-junctions (0-junctions correspond to positive Josephson contact energies) leads to non-zero supercurrents. The internal field (or the remanent magnetization) induced by the supercurrents in the cooling process from high temperatures to the chiral glass phase may compensate the external dc field.
We model ceramic superconductors by the three-dimensional XY model of the Josephson network with finite self-inductance. We show that in the FC regime the CE appears in the samples which show the PME but not in those containing only $`0`$-junctions. In the zero field cooled (ZFC) regime decreasing the external dc field also gives rise to the CE in the frustrated ceramics. Both of these findings agree with the experimental data of Heinzel et al.
We neglect the charging effects of the grains and consider the following Hamiltonian
$`\mathcal{H}=-{\displaystyle \underset{<ij>}{\sum }}J_{ij}\mathrm{cos}(\theta _i-\theta _j-A_{ij})+`$ (1)
$`{\displaystyle \frac{1}{2\mathcal{L}}}{\displaystyle \underset{p}{\sum }}(\mathrm{\Phi }_p-\mathrm{\Phi }_p^{ext})^2,`$ (2)
$`\mathrm{\Phi }_p={\displaystyle \frac{\varphi _0}{2\pi }}{\displaystyle \underset{<ij>}{\overset{p}{\sum }}}A_{ij},\quad A_{ij}={\displaystyle \frac{2\pi }{\varphi _0}}{\displaystyle \int _i^j}\vec{A}(\vec{r})\cdot d\vec{r},`$ (3)
where $`\theta _i`$ is the phase of the condensate of the grain at the $`i`$-th site of a simple cubic lattice, $`\vec{A}`$ is the fluctuating gauge potential at each link of the lattice, $`\varphi _0`$ denotes the flux quantum, $`J_{ij}`$ denotes the Josephson coupling between the $`i`$-th and $`j`$-th grains, $`\mathcal{L}`$ is the self-inductance of a loop (an elementary plaquette), while the mutual inductance between different loops is neglected. The first sum is taken over all nearest-neighbor pairs and the second sum is taken over all elementary plaquettes on the lattice. Fluctuating variables to be summed over are the phase variables, $`\theta _i`$, at each site and the gauge variables, $`A_{ij}`$, at each link. $`\mathrm{\Phi }_p`$ is the total magnetic flux threading through the $`p`$-th plaquette, whereas $`\mathrm{\Phi }_p^{ext}`$ is the flux due to an external magnetic field applied along the $`z`$-direction,
$$\mathrm{\Phi }_p^{ext}=\{\begin{array}{cc}HS\hfill & \text{if }p\text{ is on the }<xy>\text{ plane}\hfill \\ 0\hfill & \text{otherwise},\hfill \end{array}$$
(4)
where $`S`$ denotes the area of an elementary plaquette. The external field $`H`$ includes the dc and ac parts and it is given by
$$H=H_{dc}+H_{ac}\mathrm{cos}(\omega t).$$
(5)
It should be noted that the dc field is necessary to generate even harmonics.
In the present paper, we consider two models with two types of bond distributions. Model I: the sign of the Josephson couplings could be either positive (0-junction) or negative ($`\pi `$-junction) and the spin-glass-type bimodal ($`\pm J`$) distribution of $`J_{ij}`$ is taken. The coexistence of 0- and $`\pi `$-junctions gives rise to frustration even in zero external field and the chiral glass phase may occur at low temperatures. Model II: the interactions $`J_{ij}`$ are assumed to be ’ferromagnetic’ and distributed uniformly between 0 and 2$`J`$. Obviously, there is no frustration in zero external field in this model. It has also been demonstrated that the PME is present in model I but not in model II.
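The two bond distributions can be sketched as follows (a minimal illustration; the function names are mine, not the paper's):

```python
import random

J = 1.0

def bonds_model_I(n, rng):
    """Model I: bimodal +/-J couplings (0- and pi-junctions)."""
    return [rng.choice((J, -J)) for _ in range(n)]

def bonds_model_II(n, rng):
    """Model II: 'ferromagnetic' couplings, uniform on [0, 2J]."""
    return [rng.uniform(0.0, 2.0 * J) for _ in range(n)]

rng = random.Random(0)
b1 = bonds_model_I(10_000, rng)
b2 = bonds_model_II(10_000, rng)
# Model I contains negative (pi) junctions and is frustrated in zero
# field; model II has only non-negative couplings and is not.
print(any(j < 0 for j in b1), all(j >= 0 for j in b2))  # -> True True
```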
The ac linear susceptibility of models I and II has been studied by Monte Carlo simulations. It was found that, due to the frustration, model I exhibits much stronger dissipation than model II in the low frequency regime. Here we go beyond our previous calculations of the linear ac susceptibility. We study the second harmonics as a function of the dc field. In this way, we can make a direct comparison with the CE observed in the experiments. The second harmonics of a similar Josephson network model with a finite self-inductance were considered by Wolf and Majhofer. However, these authors dealt with the two-dimensional version of model II and the CE was not studied. In this paper we are mainly interested in the CE in the frustrated three-dimensional system described by model I.
The dimensionless magnetization along the $`z`$-axis, normalized per plaquette, $`\stackrel{~}{m}`$, is given by
$$\stackrel{~}{m}=\frac{1}{N_p\varphi _0}\underset{p\in <xy>}{\sum }(\mathrm{\Phi }_p-\mathrm{\Phi }_p^{ext}),$$
(6)
where the sum is taken over all $`N_p`$ plaquettes on the $`<xy>`$ plane of the lattice. The real and imaginary parts of the ac second order susceptibility $`\chi _2^{}(\omega )`$ and $`\chi _2^{\prime \prime }(\omega )`$ are calculated as
$`\chi _2^{\prime }(\omega )`$ $`=`$ $`{\displaystyle \frac{1}{\pi h_{ac}}}{\displaystyle \int _{-\pi }^\pi }\stackrel{~}{m}(t)\mathrm{cos}(2\omega t)d(\omega t),`$ (7)
$`\chi _2^{\prime \prime }(\omega )`$ $`=`$ $`{\displaystyle \frac{1}{\pi h_{ac}}}{\displaystyle \int _{-\pi }^\pi }\stackrel{~}{m}(t)\mathrm{sin}(2\omega t)d(\omega t),`$ (8)
where $`t`$ denotes the Monte Carlo time. The dimensionless ac field $`h_{ac}`$, dc field $`h_{dc}`$ and inductance $`\stackrel{~}{\mathcal{L}}`$ are defined as follows
$`h_{ac}={\displaystyle \frac{2\pi H_{ac}S}{\varphi _0}},h_{dc}={\displaystyle \frac{2\pi H_{dc}S}{\varphi _0}},`$ (9)
$`\stackrel{~}{\mathcal{L}}=(2\pi /\varphi _0)^2\mathcal{L}J.`$ (10)
The dependence of $`\stackrel{~}{\mathcal{L}}`$ on the parameters of the system, such as the critical current and the typical size of the grains, is discussed in Ref. .
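Numerically, Eqs. (7)-(8) become discrete sums over one ac period of the recorded magnetization. A small self-contained sketch (the synthetic signal and function names here are illustrative, not the paper's code):

```python
import math

def second_harmonics(m, h_ac):
    """Discrete version of Eqs. (7)-(8): project m(t), sampled at len(m)
    equally spaced phases omega*t in [-pi, pi), onto cos/sin(2*omega*t)."""
    n = len(m)
    chi_re = chi_im = 0.0
    for k, mk in enumerate(m):
        wt = -math.pi + 2.0 * math.pi * k / n
        chi_re += mk * math.cos(2.0 * wt)
        chi_im += mk * math.sin(2.0 * wt)
    norm = (2.0 * math.pi / n) / (math.pi * h_ac)   # d(omega t) / (pi * h_ac)
    return chi_re * norm, chi_im * norm

# Synthetic magnetization with known second-harmonic content
h_ac, a, b, n = 0.1, 3e-3, -1e-3, 512
wts = [-math.pi + 2.0 * math.pi * k / n for k in range(n)]
m = [a * math.cos(2 * wt) + b * math.sin(2 * wt) for wt in wts]

chi_re, chi_im = second_harmonics(m, h_ac)
print(round(chi_re / (a / h_ac), 6), round(chi_im / (b / h_ac), 6))  # -> 1.0 1.0
```

The projection recovers the known amplitudes divided by $`h_{ac}`$, as expected from the normalization of Eqs. (7)-(8).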
Our results have been obtained by employing Monte Carlo simulations based on the standard Metropolis updating technique. While Monte Carlo simulations involve no real dynamics, one can still expect that they give useful information on the long-time behavior of the system. In fact, the amplitude of the ac field we use is much smaller than the typical energy of the dc part. On the other hand, the characteristic time for the sintered samples, which are believed to be captured by our model, is of order $`10^{-12}`$ s. This time has the same order of magnitude as a single Monte Carlo step. So the period of oscillations chosen in the present work is much longer than the characteristic time (see below). For such a weak and slowly changing ac field the system can be regarded as being in quasi-equilibrium and the Monte Carlo updating may be applied. A priori, the validity of this approximation is not clear, but it may be justified by comparing our results with those obtained by other approaches to the dynamics such as considered in Ref. . For the first harmonics, our method and the method of Ref. yield results that agree qualitatively. Furthermore, our results presented in Fig. 1 for the second harmonics are also in qualitative agreement with the corresponding results obtained by solving the equations of motion. So one can expect that the standard Monte Carlo may actually give reasonable results for the CE.
We choose the gauge where the bond variables $`A_{ij}`$ along the $`z`$-direction are fixed to be zero. The lattices studied are simple cubic with $`L\times L\times L`$ sites, and free boundary conditions are adopted. In all calculations presented below, we take $`L=8`$ and $`\omega =0.001`$. The sample average is taken over 20-40 independent bond realizations. $`\chi _2(\omega )`$ has been estimated following the procedure of Ref. . Namely, at the beginning of a given Monte Carlo run, we first switch on the field (5). Then, after waiting for initial $`t_0`$ Monte Carlo steps per spin (MCS), we start to monitor the time variation of the magnetization, $`t_0`$ being chosen so that all transient phenomena can be considered extinct. We set $`t_0`$ to be $`2\times 10^4`$ MCS. After passing the point $`t=t_0`$, $`\stackrel{~}{m}(t)`$ is averaged over typically 200 periods, each period containing $`t_T`$ MCS ($`t_T=2\pi /\omega `$). The real and imaginary parts of the second order ac susceptibility are then extracted via Eqs. (7) and (8). We set $`h_{ac}=0.1`$, corresponding to $`0.016`$ flux quantum per plaquette. Smaller values of $`h_{ac}`$ turned out to leave the results almost unchanged.
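As an illustration of the Metropolis updating referred to above, here is a drastically simplified toy (my own, not the paper's simulation): a one-dimensional ring of phases with bimodal $`\pm J`$ bonds, omitting the gauge variables $`A_{ij}`$ and the inductance term of Eq. (1):

```python
import math, random

N, J, T = 64, 1.0, 0.1
rng = random.Random(7)
theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
bonds = [rng.choice((J, -J)) for _ in range(N)]   # bond between i and i+1

def energy():
    return -sum(bonds[i] * math.cos(theta[i] - theta[(i + 1) % N])
                for i in range(N))

def sweep():
    """One Metropolis sweep: propose a small phase move at each site."""
    for i in range(N):
        old = theta[i]
        new = old + rng.uniform(-0.5, 0.5)
        l, r = (i - 1) % N, (i + 1) % N
        dE = (-bonds[l] * math.cos(theta[l] - new)
              - bonds[i] * math.cos(new - theta[r])
              + bonds[l] * math.cos(theta[l] - old)
              + bonds[i] * math.cos(old - theta[r]))
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            theta[i] = new

e0 = energy()
for _ in range(200):
    sweep()
print(energy() < e0)  # -> True: the chain relaxes toward low energy
```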
The dependence of $`|\chi _2|`$, $`|\chi _2|=\sqrt{(\chi _2^{\prime })^2+(\chi _2^{\prime \prime })^2}`$, on $`h_{dc}`$ at $`T=0.1J`$ is presented in Fig. 1. For small values of $`\stackrel{~}{\mathcal{L}}`$, an oscillation of $`|\chi _2|`$ shows up. Such oscillation has been found for the two-dimensional superconductors in Ref. and its nature is related to the lattice periodicity. Our new observation is that the oscillatory behavior is still present in the superconductors with $`0`$- and $`\pi `$-junctions (model I), but to a lesser extent compared to model II. It is clear from Fig. 1 that $`|\chi _2|`$ does not decrease at large $`h_{dc}`$ but gets saturated. This is an artifact of the assumption that the Josephson contact energies $`J_{ij}`$ are field-independent. The field dependence of $`J_{ij}`$ should remove the saturation of $`|\chi _2|`$ at strong dc fields.
In order to study the difference between model I and model II through the CE we have to consider the weak field region where the PME may be observed. For model I the PME appears clearly for $`h_{dc}\lesssim 1`$. So the largest $`h_{dc}`$ we take is 1. In this weak field regime there is no periodicity of $`|\chi _2|`$ versus $`h_{dc}`$ which may complicate the study of the CE. The chiral glass phase is found to exist below a critical value of the inductance, $`\stackrel{~}{\mathcal{L}}_c`$, where $`5\lesssim \stackrel{~}{\mathcal{L}}_c\lesssim 7`$. One has to choose, therefore, an $`\stackrel{~}{\mathcal{L}}`$ which is smaller than its critical value, and in what follows we take $`\stackrel{~}{\mathcal{L}}=4`$.
In this paper we focus on the system size $`L=8`$, $`\stackrel{~}{\mathcal{L}}=4`$, $`\omega =0.001`$, and $`T=0.1`$. Our preliminary studies show that the qualitative results do not depend on the choice of the parameters of the system.
Fig. 2 shows the dependence of the second harmonics on $`h_{dc}`$ in the FC regime for the superconductors described by model I. Our calculations follow exactly the experimental procedure of Heinzel et al. First the system is cooled in the dc field $`h_{dc}=1`$ from $`T=0.7`$ down to $`T=0.1`$, which is below the paramagnet–chiral glass transition temperature $`T_c\approx 0.17`$. The temperature step is chosen to be equal to 0.05. At each temperature, the system is evolved through 2$`\times 10^4`$ Monte Carlo steps. When the lowest temperature is reached, the dc field used in cooling is switched off and we apply the combined field given by Eq. (5). We monitor the second harmonics, reducing the dc field from $`h_{dc}=1`$ to zero stepwise by an amount of $`\mathrm{\Delta }h_{dc}=0.05`$. $`|\chi _2|`$ reaches a minimum at the compensation field $`h_{com}=0.7\pm 0.05`$. At this point, similar to the experimental findings, the intersection of $`\chi _2^{\prime }`$ and $`\chi _2^{\prime \prime }`$ is observed. This fact indicates that at $`h_{com}`$ the system is really in the compensated state. Furthermore, in accord with the experiments, at the compensation point the real and imaginary parts should change their sign. Our results show that $`\chi _2^{\prime }`$ changes its sign roughly at $`h_{dc}=h_{com}`$. A similar behavior is also displayed by $`\chi _2^{\prime \prime }`$, but it is harder to observe due to the smaller amplitude of $`\chi _2^{\prime \prime }`$.
Fig. 3 shows the dependence of the second harmonics on $`h_{dc}`$ in the FC regime for model II. The calculations are carried out in the same way as for model I. A difference is that we start to cool the system from $`T=1.4`$, which is above the superconducting transition point $`T_s\approx 0.9`$ ($`T_s`$ is estimated from the maximum of the specific heat for $`\stackrel{~}{\mathcal{L}}=4`$; the results are not shown here). The temperature step is set equal to 0.1. Obviously, $`|\chi _2|`$ decreases with decreasing $`h_{dc}`$ monotonically. Thus, there is no CE, because the remanent magnetization does not appear in the cooling process. This result is again in accord with the experimental data.
We now turn to the ZFC regime. The experiments show that no CE can be expected if, after the ZFC procedure, one increases the dc field. However, if the field is decreased, a remanent magnetization is developed and the CE appears. The results of our simulations for the ceramic superconductors described by model I are shown in Fig. 4. As in the FC regime, the system is cooled from $`T=0.7`$ to $`T=0.1`$ but without the external field. Then at $`T=0.1`$ we apply the field given by Eq. (5) and study three cases. In one of them $`h_{dc}`$ is decreased from $`h_{dc}=1`$ to -0.5. The values of $`|\chi _2|`$ are represented by solid circles in Fig. 4. The CE is clearly seen at $`h_{com}=0.15\pm 0.05`$. At this point the real and imaginary parts of the second harmonics also intersect (the results are not shown). It is not surprising that $`h_{com}`$ in the ZFC regime appears to be smaller than in the FC regime. Fig. 4 also shows the dependence of $`|\chi _2|`$ on the dc field when it changes from $`h_{dc}=0`$ to 1 (open hexagons) and from $`h_{dc}=0`$ to -0.5 (open squares). Obviously, no CE is observed in this case. The results presented in Fig. 4 qualitatively agree with those shown in Fig. 2 of Ref. .
In conclusion we have shown that the CE may be explained, at least qualitatively, by using the chiral glass picture of the ceramic superconductors. The CE is shown to appear in the chiral glass phase in which the PME is present but not in the samples without the PME.
We thank M. Cieplak for a critical reading of the manuscript and H. Kawamura, D. Dominguez, A. Majhofer and S. Shenoy for discussions. Financial support from the Polish agency KBN (Grant number 2P03B-025-13) is acknowledged.
# Atmospheric neutrinos: phenomenological summary and outlook
## 1 Evidence for the disappearance of muon neutrinos
The data of Super–Kamiokande (SK) and other detectors have given strong evidence that $`\nu _\mu `$’s and $`\overline{\nu }_\mu `$’s ‘disappear’. This evidence comes from the observation of three experimental effects: (i) the detection of an up–down asymmetry for the $`\mu `$–like events, (ii) the detection of a small $`\mu /e`$ ratio, (iii) the detection of a distortion of the zenith angle distribution and a suppression of the $`\nu `$–induced upward-going muon flux. The three effects are listed in order of ‘robustness’ with respect to systematic uncertainties. The statistical significance of the effects in SK, especially for (i) and (ii), is very strong. In the following we will discuss how theoretical uncertainties in the predictions cannot ‘reabsorb’ the observed effects, but do play a role in the interpretation of the data.
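For reference, the disappearance interpretation rests on the standard two-flavour vacuum oscillation probability (a textbook formula, not written out in the text; the parameter values below are illustrative):

```python
import math

def p_mumu(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Two-flavour nu_mu survival probability:
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Roughly SK-like parameters: dm^2 ~ 3e-3 eV^2, maximal mixing
dm2, s22t = 3e-3, 1.0
p_down = p_mumu(15.0, 1.0, dm2, s22t)   # down-going: short baseline
p_up_avg = 1.0 - 0.5 * s22t             # up-going: fast oscillations average out
print(round(p_down, 2), p_up_avg)  # -> 1.0 0.5
```

Down-going neutrinos travel too short a distance to oscillate, while up-going ones cross the Earth and are suppressed on average: this is the origin of the up–down asymmetry of point (i).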
## 2 Systematic uncertainties in the predictions
In the prediction of the event rates for an atmospheric $`\nu `$ experiment one needs to: (i) consider an initial flux of cosmic ray particles, (ii) model the hadronic showers produced by these particles in the Earth atmosphere, (iii) describe the $`\nu `$ cross sections, (iv) describe the detector response to $`\nu `$ interactions (and possible background sources). Here we will consider only the first three ‘theoretical’ elements of the calculation, and will argue that there are significant uncertainties that influence the absolute normalization, the shape of the energy spectrum, the angular distribution, and the $`\mu /e`$ ratio of the events. The primary cosmic ray (c.r.) flux has been a major source of uncertainty (see for a detailed discussion) because of the discrepant<sup>1</sup><sup>1</sup>1It is highly unlikely that the observed differences are the result of time variations. results obtained by two groups (Webber 79 and LEAP 87) differing by $`50\%`$ (see fig. 1). Recently, new measurements of the c.r. proton flux have given results consistent with the lower normalization. If the lower normalization is accepted as correct, the uncertainty in the primary c.r. flux can be reduced; however, the descriptions of the primary c.r. flux (see fig. 1) used in the two calculations of the atmospheric $`\nu `$ fluxes, Honda et al. (HKKM) and Bartol, which are used in predictions for SK and other detectors, are then too high (by $`30\%`$ and $`10\%`$, respectively).
A second important source of uncertainty is our lack of knowledge of the properties of particle production in $`p`$–nucleus and nucleus–nucleus interactions. The calculations of Bartol and HKKM use different descriptions for the multiplicity and energy spectrum of the pions produced in $`p`$–Nitrogen (Oxygen) interactions. For the same primary c.r. flux, this would result in a $`20\%`$ higher $`\nu `$ event rate for the Bartol calculation. Some controversy exists about which description of hadronic interactions is in better agreement with the existing data. An experimental program, studying in detail the structure of particle production in the relevant energy range (a broad region centered at $`E_0\sim 20`$ GeV), would result in an improvement in the predictions.
The similar normalization of the two calculations is the result of a cancellation between a higher (lower) primary c.r. flux and a lower (higher) $`\nu `$ yield per primary particle for the HKKM (Bartol) calculation. This cancellation, to a large extent, is not casual, but is the consequence of fitting the (same) data on $`\mu ^\pm `$ fluxes at ground level. This underlines the importance of these measurements. It is very desirable to repeat them with greater accuracy. High altitude measurements with balloons also offer great potential.
A third source of ‘theoretical’ uncertainty, of comparable importance to the other two, is related to the description of $`\sigma _\nu `$. At high $`E_\nu `$, when most of the phase space for $`\nu `$ interactions is in the deep–inelastic region, $`\sigma _\nu `$ is reliably calculable in terms of well determined parton distribution functions (PDF’s). However, for $`E_\nu \lesssim 1`$ GeV the description of $`\sigma _\nu `$ is theoretically more difficult. Quasi–elastic scattering is the most important mode, but events with the production of one or more pions (where the additional particles are undetected or are reabsorbed in the target nucleus) are also important contributions to the signal. The production of $`\mathrm{\Delta }`$’s and other resonances is important, and nuclear effects have to be included. A relatively small modification in the description of a fraction of $`\sigma _\nu `$ in the SK Monte Carlo, the choice of a new set of PDF’s (GRV94LO replacing the CCFR parametrization), has resulted in an increase of the predicted number of partially contained events by approximately 7% (compare the MC predictions in and ). It appears very difficult to calculate $`\sigma _\nu `$ accurately from first principles in the relevant energy region. The existing data do not determine the absolute value of the cross section and the energy spectrum of the final state lepton better than $`15\%`$. Additional data could help in improving the situation. The K2K $`\nu `$ beam, with a spectrum not too different from the atmospheric one, offers interesting possibilities.
## 3 Robust properties of the predictions and observed effects
Two properties of the $`\nu `$ fluxes are to a large extent independent of the details of the calculation and provide ‘self-calibration’ methods: (i) the fluxes are approximately up/down symmetric: $`\varphi _{\nu _\alpha }(E_\nu ,\theta )\simeq \varphi _{\nu _\alpha }(E_\nu ,\pi -\theta )`$; (ii) the $`\nu _\mu `$ and $`\nu _e`$ fluxes are strictly related to each other because they are produced in the decay chain of the same charged mesons (as in $`\pi ^+\rightarrow \nu _\mu \mu ^+`$ followed by $`\mu ^+\rightarrow \overline{\nu }_\mu \nu _ee^+`$). Writing $`\varphi _{\nu _\mu }(E,\theta )=r(E_\nu ,\theta )\times \varphi _{\nu _e}(E,\theta )`$, the factor $`r(E_\nu ,\theta )`$ varies slowly with energy and angle and is quite insensitive to the details of the calculation. These two properties are at the basis of the robustness of the evidence for oscillations. The up–down symmetry follows as a simple, purely geometrical consequence of two assumptions: the primary c.r. flux is isotropic, and the Earth is spherically symmetric. The c.r. flux at a distance of 1 A.U. from the Sun is isotropic to a precision better than $`10^{-3}`$, as can be measured by observing in a fixed direction and looking for time variations while the Earth rotates. The isotropy is spoiled by the geomagnetic field, which bends the particle trajectories and prevents the lowest-rigidity ones from reaching the Earth’s surface, introducing directional (east–west) and location (latitude) effects.
These effects vanish at large momentum (see fig. 2). The measurement by SK of an (oscillation-independent) east–west effect for atmospheric $`\nu `$’s, in agreement with predictions, is an important test that validates the calculations. Note that at Kamioka (near the magnetic equator) geomagnetic effects act in the direction opposite to $`\nu `$–oscillations, and produce an up–going $`\nu `$ flux larger than the down–going one (both magnetic poles are below the detector); at the Soudan mine (near the magnetic pole) the opposite is true. Note also that the predicted asymmetry at low energy (see fig. 2) has some model dependence, with the Bartol calculation predicting a higher no–oscillation asymmetry. This is important for the detection of a zenith angle modulation in the Soudan detector and is also relevant for the interpretation of the SK sub–GeV events.
The right panel of fig. 2 shows how different calculations of the atmospheric $`\nu `$ flux predict very similar $`\mu /e`$ ratios. This however refers to a fixed value of $`E_\nu `$. In fig. 3
we show an estimate of the energy distributions of the neutrinos that produce the SK events; note how the distributions for $`\mu `$- and $`e`$-like events differ. For the multi–GeV samples, a harder $`\nu `$ spectrum, or a faster rise with energy of $`\sigma _\nu `$, results in a larger (smaller) increase in the predicted rate of $`\mu `$–like ($`e`$–like) events, therefore in a smaller double ratio $`R`$ (because of a larger denominator), and finally in a larger $`\mathrm{\Delta }m^2`$ (for the same mixing) in the $`\nu _\mu \leftrightarrow \nu _\tau `$ interpretation to explain the larger suppression. At this conference, SK has presented a new estimate of the (90% C.L.) allowed region in the ($`\mathrm{sin}^22\theta `$, $`\mathrm{\Delta }m^2`$) plane for the $`\nu _\mu \leftrightarrow \nu _\tau `$ hypothesis, considering a larger exposure and a slightly modified MC calculation. The new allowed region is smaller than the previously published one, no longer including the interval $`|\mathrm{\Delta }m^2|\approx 0.5`$–$`1.0\times 10^{-3}`$ eV<sup>2</sup>, a result very encouraging for the LBL programs. The use of a new set of PDF’s in the description of $`\sigma _\nu `$ has the qualitative effect of enhancing the contribution of high energy events and, by the argument outlined above, is an important contribution to the exclusion of the low $`\mathrm{\Delta }m^2`$ interval.
## 4 Outlook
The detection of oscillations in atmospheric $`\nu `$ experiments is a result of great importance. The detailed study of this phenomenon and the precise measurement of the parameters (masses and mixing) involved are a great opportunity and a difficult challenge. SK has a remarkable potential to obtain more convincing evidence and more precise measurements. New data on primary c.r. fluxes, hadron–nucleus interactions, $`\nu `$–nucleus interactions and $`\mu ^\pm `$ fluxes could help the interpretation of present and future data. Long-baseline $`\nu `$ beams also have the potential to confirm the results and study the phenomenon; this could happen very soon with the K2K project. The existence of two (similar) LBL projects in the US and in Europe is seen by some as a beneficial case of scientific competition, and by others as a dangerous waste of resources.
Acknowledgments: Special thanks to prof. Y. Suzuki for kind explanations.
# Quantal distribution functions in non-extensive statistics and an early universe test revisited
## Acknowledgments
U.T. is a TUBITAK Münir Birsel Foundation Fellow and acknowledges partial support from Ege University Research Fund under the Project Number 97 FEN 025. During the course of this research, D.F.T. was a Chevening Scholar of the British Council Foundation and acknowledges partial support from CONICET and Fundación Antorchas.
# Functional Renormalization Description of the Roughening Transition
## 1 Introduction
The roughening transition has been studied in great detail, both theoretically and experimentally. Direct analogies with the (two-dimensional) $`XY`$-model or the Coulomb gas furthermore make this problem particularly enticing. More recently, the role of disorder in the roughening transition, or in the properties of the $`XY`$ model, has attracted considerable interest. In particular, replica calculations and Functional Renormalization Group (frg) methods have been applied to this problem, with sometimes conflicting results. In this paper, we wish to reconsider the problem of the roughening transition in the absence of disorder, from a frg point of view, where the flow is not a priori projected onto the first harmonic of the periodic potential. Within a local renormalization scheme, we establish exact equations for the evolution of the full periodic potential $`V(\phi )`$ and the surface tension $`\gamma `$ with the length scale $`L=e^{\mathrm{}}`$, which we analyze both numerically and analytically in the low temperature phase. If we start with a sinusoidal periodic potential, the shape of the fixed-point potential $`V^{}(\phi )`$ evolves to a nearly parabolic one, with matching points becoming more and more singular as the length scale increases. The nature of the singularity is investigated in detail close to the fixed point, that is for small values of the rescaled temperature $`\overline{T}={\displaystyle \frac{T}{2\pi \gamma \lambda ^2}}`$, where $`\lambda `$ is the periodicity of the potential and $`\gamma `$ the elastic stiffness. We find that the width $`\mathrm{\Delta }\phi `$ of the singular region scales as $`L^{-3g(\overline{T})/5}`$, where $`g(\overline{T})`$ governs the scaling of the surface stiffness with the length scale according to $`\gamma (L)\sim L^{g(\overline{T})}`$. The exponent $`g(\overline{T})`$ tends towards $`2`$, with negative corrections which we calculate, when $`\overline{T}`$ goes to $`0`$ (i.e. for $`L\rightarrow \mathrm{}`$).
The paper is organized as follows. In section $`\mathrm{𝟐}`$, we introduce the model: we outline the calculations involved and discuss the differences with the approach of Nozières and Gallet (ng), and briefly examine the problem for $`d<2`$. We then explain in section $`\mathrm{𝟑}`$, by a mean field argument the origin of the singularity that develops during the renormalization flow. In section $`\mathrm{𝟒}`$, we present a scaling form for the renormalized potential, around its maxima and close to the fixed point, which accounts for the nature of the singularity. Using our renormalization group flow, we compute in section $`\mathrm{𝟓}`$ the step energy as a function of temperature. Finally, in section $`\mathrm{𝟔}`$, we look at the case of a contact line in a periodic potential, as this is a physical realization of a non local elastic stiffness.
## 2 Model and functional renormalization group
We consider an elastic interface whose height fluctuations are described by a profile $`\mathrm{\Phi }(x)`$, where $`x`$ is a $`d`$-dimensional vector, in the presence of a deterministic periodic potential $`V`$. Supposing that the slope of the interface is everywhere small, the energy of the system is:
$$H[\mathrm{\Phi }]=\frac{\gamma }{2}\int d^dx(\nabla \mathrm{\Phi }(x))^2+\int d^dxV\left(\frac{\mathrm{\Phi }(x)}{\lambda }\right)$$
(1)
where $`\gamma `$ is the elastic stiffness and $`\lambda `$ the periodicity of the potential. In the absence of periodic potential, the height fluctuations of the surface on a length scale $`L`$ scale as $`L^{2d}`$. For $`d>2`$, the interface is therefore always flat. For the critical dimension $`d=2`$, the interface is rough only if the temperature exceeds a certain critical temperature $`T_R`$. When the potential $`V`$ is harmonic, this model is the continuous version of the Sine-Gordon model. ng have studied the statics of this problem using a two-parameter renormalization group scheme, and have written flow equations for $`\gamma (L)`$ and the amplitude $`v_o(L)`$ of the periodic potential. They suppose that during the flow, $`v_o`$ remains small compared with the temperature and neglect all higher harmonics of the potential. Correspondingly, within this procedure, the renormalization scheme ceases to be valid when $`v_o`$ becomes of the order of the temperature. In the low temperature ‘flat’ phase, this occurs after a finite renormalization since $`v_o`$ grows with distance.
In our calculation, we consider a general periodic function with the only constraint that it should be sufficiently smooth (we shall explain this more quantitatively in the following). Since we re-sum the whole perturbation expansion in $`v_o/T`$, there is however no constraint on the amplitude of the potential, and the renormalization procedure can be carried on any length scale without interruption. The relevant coupling constant appears to be $`v_o/\gamma `$ rather than $`v_o/T`$. During the renormalization flow, we keep track of the whole function $`V(\phi )`$ instead of projecting onto the first harmonic, so that we have a more quantitative knowledge of the behaviour of the potential for low temperatures.
Technically, we proceed by considering the partition function:
$$Z=\int 𝑑\mathrm{\Phi }(x)e^{-\beta H[\mathrm{\Phi }(x)]}$$
(2)
We perform the renormalization procedure by splitting the field $`\mathrm{\Phi }`$ into a slowly-varying and a rapidly-varying part as:
$$\mathrm{\Phi }(x)=\mathrm{\Phi }^<(x)+\mathrm{\Phi }^>(x)$$
(3)
The Fourier modes $`k`$ of $`\mathrm{\Phi }^<`$ are such that $`0\le |k|\le |\mathrm{\Lambda }|/s`$, and those of $`\mathrm{\Phi }^>`$, such that $`|\mathrm{\Lambda }|/s\le |k|\le |\mathrm{\Lambda }|`$, where $`s=e^{d\mathrm{}}`$, $`|\mathrm{\Lambda }|`$ being a high-momentum cut-off, of the order of $`1/a`$, where $`a`$ is the lattice spacing. We integrate over the fast modes in the partition function and retain only the terms that renormalize the gradient term and the potential term. The other terms that are generated are discarded as irrelevant. Within this renormalization scheme, our calculation is exact. After some algebra detailed in appendix $`𝐀`$, we obtain a set of flow equations for $`d=2`$, for $`\overline{V}={\displaystyle \frac{V}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}}`$ and $`\overline{T}={\displaystyle \frac{T}{2\pi \gamma \lambda ^2}}`$, the rescaled potential and temperature (note that $`\overline{V}`$ and $`\overline{T}`$ are dimensionless):
$$\begin{array}{c}\frac{d\overline{V}}{d\mathrm{}}=(2-g)\overline{V}-\pi \frac{\overline{V}_{}^{}{}_{}{}^{2}}{(1+\overline{V}^{\prime \prime })}+\frac{\overline{T}}{2}\mathrm{ln}(1+\overline{V}^{\prime \prime })\hfill \\ \\ \frac{d\gamma }{d\mathrm{}}=g\gamma \hfill \\ \\ \frac{d\overline{T}}{d\mathrm{}}=-g\overline{T}\hfill \end{array}$$
(4)
where $`g`$ is given by:
$$g=4\pi \int _0^1𝑑\phi \frac{\overline{V}_{}^{}{}_{}{}^{2}(\phi )\overline{V}^{\prime \prime \prime 2}(\phi )}{\left(1+\overline{V}^{\prime \prime }(\phi )\right)^5}+\frac{\overline{T}}{4}\int _0^1𝑑\phi \frac{\overline{V}^{\prime \prime \prime 2}(\phi )}{\left(1+\overline{V}^{\prime \prime }(\phi )\right)^4}$$
(5)
These equations call for some comments.
* The relevant perturbative parameter appears to be $`\overline{V}`$, rather than $`V/T`$. In the limit $`\overline{V}\ll 1`$, and in the case where the potential is purely harmonic (i.e. $`V(\phi )=v_o\mathrm{cos}(2\pi \phi )`$), the rg equations read:
$$\begin{array}{c}\frac{du_o}{d\mathrm{}}=\left(2-\frac{\pi T}{\gamma \lambda ^2}\right)u_o\hfill \\ \\ \frac{d\gamma }{d\mathrm{}}=2\pi ^4\left(\frac{2\pi T}{\gamma \lambda ^2}\right)\frac{u_o^2}{\gamma ^2\lambda ^4}\hfill \end{array}$$
(6)
where $`u_o=v_o/|\mathrm{\Lambda }|^2`$. The first equation is trivial and identical to the one in ng, and immediately leads to the value of the roughening transition temperature: $`T_R=2\gamma _{\mathrm{}}\lambda ^2/\pi `$, where $`\gamma _{\mathrm{}}`$ is the renormalized value of $`\gamma `$. The second is close to, but different from, the one obtained in the particular renormalization scheme used by ng: near the critical temperature $`T_R`$, the coefficient between parentheses is equal to $`4`$ in our case and to $`0.4`$ according to ng.
* The renormalization of the surface tension, as measured by $`g`$, is always positive. One can check that, as has been pointed out by ng, if the initial potential is parabolic (i.e. $`V(\varphi )=v_0\varphi ^2`$), then the coefficient $`g`$ vanishes identically, and there is no renormalization of the surface tension. This is indeed expected since in this (quadratic) case, all modes are decoupled.
* The flow equations only make sense if $`\overline{V}^{\prime \prime }>-1`$. We have checked numerically that if this condition is satisfied at the beginning, it prevails throughout the flow. On the other hand, if the initial potential is so steep that this condition is violated, the perturbative calculation is meaningless. This comes from the fact that metastable states, where the surface zig-zags between nearby minima of the potential, appear at the smallest length scales. In this respect, it is useful to note that the last term of the flow equation on $`\overline{V}`$ comes from the integration of the Gaussian fluctuations of the fast field around the slow field; the condition $`\overline{V}^{\prime \prime }>-1`$ is a stability condition for these fast modes. If the unrenormalized potential is harmonic (i.e. $`V(\phi )=v_o\mathrm{cos}(2\pi \phi )`$), and the unrenormalized surface tension given by $`\gamma _o`$, then this condition reads $`{\displaystyle \frac{v_o}{\gamma }}\left({\displaystyle \frac{2\pi }{\lambda |\mathrm{\Lambda }|}}\right)^2<1`$, which simplifies to $`{\displaystyle \frac{v_o}{\gamma }}<1`$ in the case where $`\lambda =a`$. If the initial value of the potential is too large, one actually expects the transition to become first order (but see ). Indeed, a variational calculation predicts the transition to become first order when $`{\displaystyle \frac{v_o}{\gamma }}\sim 1`$.
* The fundamentally new term in the above equation is the second one, proportional to $`\overline{V}_{}^{}{}_{}{}^{2}`$ and independent of temperature. This term leads to the appearance of singularities in the flow equation: up to second order in $`\overline{V}`$, this equation is close to Burgers’ equation (see below), for which it is well known that shocks develop in time. The fact that this term survives even in the zero-temperature limit is at first sight strange, since one could argue that for $`T=0^+`$ there are no longer any thermal fluctuations, and thus no renormalization. This argument is not correct because we are computing a partition function, thereby implicitly assuming that the infinite-time limit is taken before the zero-temperature limit. Such a non-trivial renormalization has also been found in the context of pinned manifolds, and can be understood very simply using a mean-field approximation, which we detail in the next section.
* For completeness let us consider the case $`1<d<2`$. By simple scaling arguments, we can see that for $`d<1`$ the interface is always rough. For $`1<d<2`$, we have a roughening transition between a flat phase and a rough phase. In this case, we allow $`\mathrm{\Phi }`$ to renormalize and suppose that $`\lambda `$ renormalizes in the same way according to:
$$\frac{d\lambda }{d\mathrm{}}=\zeta \lambda $$
(7)
The other flow equations in terms of the rescaled parameters $`\overline{V}={\displaystyle \frac{V}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}}`$ and $`\overline{T}={\displaystyle \frac{K_d|\mathrm{\Lambda }|^{d-2}T}{\gamma \lambda ^2}}`$ (with $`K_d=S_d/(2\pi )^d`$, where $`S_d`$ is the area of the $`d`$-dimensional unit sphere) now read:
$$\begin{array}{c}\frac{d\overline{V}}{d\mathrm{}}=(d-g_d-2\zeta )\overline{V}-\pi \frac{\overline{V}_{}^{}{}_{}{}^{2}}{(1+\overline{V}^{\prime \prime })}+\frac{\overline{T}}{2}\mathrm{ln}(1+\overline{V}^{\prime \prime })\hfill \\ \\ \frac{d\gamma }{d\mathrm{}}=(g_d+d-2)\gamma \hfill \\ \\ \frac{d\overline{T}}{d\mathrm{}}=-(g_d+d-2+2\zeta )\overline{T}\hfill \end{array}$$
(8)
where $`g_d`$ is given by:
$$g_d=\frac{2}{d}g,$$
(9)
with $`g`$ given by equation (5) above. These equations have a non trivial fixed point for $`g_d=2d`$ and $`\zeta =0`$. This corresponds to a rescaled temperature $`T_R`$ and a renormalized rescaled potential $`\overline{V}`$ such that equations (8) and (9) are satisfied. For $`1<d<2`$, we obtain $`T_R`$ numerically by proceeding as follows: we self-consistently solve the differential equation on $`\overline{V}`$ obtained by putting $`{\displaystyle \frac{d\overline{V}}{d\mathrm{}}}=0`$ for different fixed rescaled temperatures $`\overline{T}`$, imposing that $`g`$ is given by equation (5). This enables us to plot $`g`$ as a function of $`\overline{T}`$. The rescaled temperature $`\overline{T}_R`$ corresponding to the transition temperature is such that $`g_d=2d`$. In the figure (1), we have plotted the result for $`d=3/2`$, with $`\gamma _o=1`$. In that case, $`\overline{T}1.2`$.
## 3 Mean field analysis and effective potential
In this section, we show, on a simplified mean-field version of the model, how the non-linear term in $`(V^{})^2`$ arises in the flow equation of the potential. Using a discrete formulation of the problem and replacing the local elasticity modeled by the surface tension term by a coupling to all neighbours, we can rewrite the energy as:
$$H_{\mathrm{𝑚𝑓}}(\{\mathrm{\Phi }\}_i,\overline{\mathrm{\Phi }})=\frac{\gamma }{2a}\underset{i}{\overset{N}{\sum }}(\mathrm{\Phi }_i-\overline{\mathrm{\Phi }})^2+a\underset{i}{\overset{N}{\sum }}V(\mathrm{\Phi }_i)$$
(10)
where $`\overline{\mathrm{\Phi }}`$ is the center of mass of the system, $`a`$ the lattice spacing and $`L=Na`$. Implementing the constraint $`\overline{\mathrm{\Phi }}=1/N{\displaystyle \mathrm{\Phi }_i}`$ by means of a Lagrange multiplier in the partition function, we have:
$$Z[\overline{\mathrm{\Phi }}]=\int 𝑑\eta \underset{i}{\prod }d\mathrm{\Phi }_ie^{-\beta H_{\mathrm{𝑚𝑓}}\left(\{\mathrm{\Phi }\}_i,\overline{\mathrm{\Phi }}\right)-\eta \left(N\overline{\mathrm{\Phi }}-{\displaystyle \underset{i}{\sum }}\mathrm{\Phi }_i\right)}$$
(11)
which can also be expressed as:
$$Z[\overline{\mathrm{\Phi }}]=\int 𝑑\eta e^{N\mathrm{log}z(\overline{\mathrm{\Phi }},\eta )+N{\displaystyle \frac{\eta ^2a}{2\beta \gamma }}}$$
(12)
where
$$z(\overline{\mathrm{\Phi }},\eta )=\int 𝑑\mathrm{\Phi }e^{-{\displaystyle \frac{\beta \gamma }{2a}}\left(\overline{\mathrm{\Phi }}-\mathrm{\Phi }+{\displaystyle \frac{\eta a}{\beta \gamma }}\right)^2-\beta aV\left(\mathrm{\Phi }\right)}=z\left(\overline{\mathrm{\Phi }}+\frac{\eta a}{\beta \gamma }\right)$$
(13)
We are left with a simpler problem since we now have a one-body problem. We introduce the auxiliary partition function $`z_R(\mathrm{\Psi },\tau )`$ defined as:
$$z_R(\mathrm{\Psi },\tau )=\sqrt{\frac{\beta \gamma }{2\pi a\tau }}\int 𝑑\mathrm{\Phi }e^{-{\displaystyle \frac{\beta \gamma }{2a}}{\displaystyle \frac{(\mathrm{\Psi }-\mathrm{\Phi })^2}{\tau }}-\beta aV\left(\mathrm{\Phi }\right)}$$
(14)
Up to a multiplicative constant, one has $`z\left(\overline{\mathrm{\Phi }}+{\displaystyle \frac{\eta a}{\beta \gamma }}\right)=z_R\left(\overline{\mathrm{\Phi }}+{\displaystyle \frac{\eta a}{\beta \gamma }},\tau =1\right)`$, where $`z_R(\mathrm{\Psi },\tau )`$ verifies the diffusion equation:
$$\frac{\partial z_R}{\partial \tau }=\frac{a}{\beta \gamma }\frac{\partial ^2z_R}{\partial \mathrm{\Psi }^2}$$
(15)
with an initial condition given by:
$$z_R(\mathrm{\Psi },\tau =0)=e^{-\beta aV\left(\mathrm{\Psi }\right)}$$
(16)
Defining now the effective pinning potential $`V_R`$ as:
$$aV_R(\overline{\mathrm{\Phi }}+\frac{\eta a}{\beta \gamma },\tau )=-T\mathrm{log}z_R(\overline{\mathrm{\Phi }}+\frac{\eta a}{\beta \gamma },\tau )$$
(17)
we can then easily show that $`V_R`$ is the Hopf–Cole solution of the non-linear Burgers’ equation:
$$\frac{\partial V_R}{\partial (\tau a)}=\frac{T}{\gamma }\frac{\partial ^2V_R}{\partial \mathrm{\Psi }^2}-\frac{a}{\gamma }\left(\frac{\partial V_R}{\partial \mathrm{\Psi }}\right)^2$$
(18)
where the temperature-independent non-linear term $`(\partial V_R/\partial \mathrm{\Psi })^2`$ indeed appears.
It is easy to show that when $`N\rightarrow \mathrm{}`$ or $`T\rightarrow 0`$, the original partition function can be solved by a saddle-point method, leading after a change of variables to an effective potential per unit length:
$$V_{\mathrm{𝑒𝑓𝑓}}(\overline{\mathrm{\Phi }})=V_R(u,\tau =1)-\frac{\gamma }{2a^2}(\overline{\mathrm{\Phi }}-u)^2$$
(19)
where $`u`$ is given by:
$$\frac{\gamma }{2a}(u-\overline{\mathrm{\Phi }})=\frac{\partial V_R}{\partial \phi }(\phi ,\tau =1)|_{\phi =u}$$
(20)
Now, by changing $`V_R`$ to $`-V_R`$, one can see that $`V_{\mathrm{𝑒𝑓𝑓}}`$ can also be written as the solution of a Burgers’ equation. It is known from results on the Burgers’ equation that, with ‘time’ $`\tau `$, the effective potential $`V_R`$ develops shocks, smoothed out at finite temperature, between which it has a parabolic shape. The appearance of singularities is due to the non-linear term in the partial differential equation, which indeed survives in the limit $`T=0`$. It is interesting to see how this ‘toy’ renormalization group captures some important features of the full scheme, such as the one shown above for a non-disordered potential.
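This Hopf–Cole structure can be illustrated numerically. The sketch below (ours; units $`\gamma =a=\lambda =1`$, a cosine potential, and all parameter values purely illustrative) smears $`e^{-S/T}`$ with the Gaussian kernel and takes $`-T\mathrm{log}`$: at high $`T`$ the effective potential is essentially flat, while at low $`T`$ it approaches the zero-temperature minimization, with near-parabolic arcs matched at smoothed shocks:

```python
import numpy as np

def V_R(psi_vals, T, tau=1.0):
    """Hopf-Cole: V_R = -T log z_R, z_R a Gaussian smearing of exp(-V/T).
    Units gamma = a = 1; V(phi) = cos(2*pi*phi)."""
    phi = np.linspace(-4.0, 4.0, 8001)
    dphi = phi[1] - phi[0]
    out = []
    for psi in psi_vals:
        S = (psi - phi)**2/(2*tau) + np.cos(2*np.pi*phi)
        m = S.min()
        z = np.sum(np.exp(-(S - m)/T)) * dphi   # stabilized log-sum-exp
        out.append(m - T*np.log(z))
    return np.array(out)

psi = np.linspace(-0.5, 0.5, 201)
hot  = V_R(psi, T=1.0)    # strong thermal smearing: almost flat
cold = V_R(psi, T=0.02)   # near T -> 0: parabolic arcs with a shock at psi = 0

# zero-temperature limit: plain minimization over phi
phi = np.linspace(-4.0, 4.0, 8001)
exact0 = np.array([np.min((p - phi)**2/2 + np.cos(2*np.pi*phi)) for p in psi])
```

At $`T=0.02`$ the computed potential tracks the zero-temperature min-convolution up to a small entropic offset, and it retains an order-one modulation over the period, while at $`T=1`$ the modulation is washed out.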
## 4 Analysis for small $`\overline{T}`$
In this section, we go back to the model introduced in section $`\mathrm{𝟐}`$ and analyze the nature of $`\overline{V}`$ close to the low temperature fixed point, that is for small values of the rescaled temperature $`\overline{T}`$. Since $`g>0`$ in the low temperature phase, this corresponds to the large scale structure of the renormalized potential for all temperatures $`T<T_R`$.
Expanding $`\overline{V}`$ around one of its minima as $`\overline{V}(\phi )=\overline{V}_m+\frac{1}{2}\kappa (\phi -\phi ^{})^2`$, and replacing $`\overline{V}`$ in the flow equation (4), we have
$$\begin{array}{c}\frac{d\overline{V}_m}{d\mathrm{}}=(2-g)\overline{V}_m+\frac{\overline{T}}{2}\mathrm{ln}(1+\kappa )\hfill \\ \\ \frac{d\kappa }{d\mathrm{}}=(2-g)\kappa -2\pi \frac{\kappa ^2}{1+\kappa }\hfill \end{array}$$
(21)
One can actually check that a parabolic shape for $`\overline{V}`$ is exactly preserved by the renormalization flow. However, since the potential has to be periodic, these parabolas should match periodically around each maximum, that is for $`\phi -\phi ^{}=\pm 1/2,\pm 3/2,\mathrm{}`$. The region of the maximum is therefore expected to be singular. To investigate the nature of the renormalized periodic potential around its maximum value, we will thus make a scaling ansatz on $`\overline{V}^{\prime \prime }`$ for small $`\overline{T}`$. For our perturbative calculation to be valid, we expect $`\overline{V}^{\prime \prime }(0)`$ to be $`>-1`$. Now, since we expect a singularity to develop as $`\overline{T}`$ goes to $`0`$, it is probable (and actually self-consistently checked) that $`\overline{V}^{\prime \prime }(0)`$ should tend towards $`-1`$. As $`\overline{T}`$ goes to $`0`$, we thus make the scaling ansatz:
$$1+\overline{V}^{\prime \prime }(\phi )=\overline{T}^\delta ℱ^{}\left(\frac{\phi }{\overline{T}^\alpha }\right)$$
(22)
where $`ℱ^{}(0)>0`$. This means that the width of the singular region behaves as $`\mathrm{\Delta }\phi \sim \overline{T}^\alpha `$. Hence, in the scaling region:
$$\overline{V}^{}(\phi )=-\phi +\overline{T}^{\delta +\alpha }ℱ\left(\frac{\phi }{\overline{T}^\alpha }\right)$$
(23)
with $`ℱ(0)=0`$ to ensure that $`\phi =0`$ is a maximum of $`\overline{V}`$. Integrating once more the above equation, one finds:
$$\overline{V}(\phi )=\overline{V}_M-\frac{\phi ^2}{2}+\overline{T}^{\delta +2\alpha }𝒢\left(\frac{\phi }{\overline{T}^\alpha }\right)$$
(24)
with $`𝒢^{}=ℱ`$. Replacing this last equation in the flow equation for $`\overline{V}`$, we obtain:
$$\frac{d\overline{V}_M}{d\mathrm{}}=(2-g)\overline{V}_M+\frac{\overline{T}}{2}\mathrm{log}\overline{T}^\delta $$
(25)
Suppose that equations (21) have a fixed point as $`\overline{T}`$ goes to zero, and that close to the fixed point one can neglect the left-hand side of these equations. This leads to the relation $`\kappa =(2-g)/(2\pi -2+g)`$. Supposing moreover that the parabolic solution extends almost over a whole period, and that the correction brought about by the rounding off of the singularity around the maxima of the potential is negligible, we also have
$$(\overline{V}_M-\overline{V}_m)\simeq \frac{\kappa }{2}=\frac{2-g}{4\pi -4+2g}$$
(26)
Now, subtracting equation (21) from equation (25), we find in the limit $`\overline{T}\rightarrow 0`$:
$$\frac{d}{d\mathrm{}}(\overline{V}_M-\overline{V}_m)=(2-g)(\overline{V}_M-\overline{V}_m)+\frac{\delta }{2}\overline{T}\mathrm{log}\overline{T}$$
(27)
Combining equations (26) and (27), we find that for the previous equation to have a fixed point as $`\overline{T}\rightarrow 0`$, $`g\rightarrow 2`$ with negative corrections as:
$$(2-g)\simeq \sqrt{2\pi \delta \overline{T}\mathrm{log}\frac{1}{\overline{T}}}$$
(28)
This result is independent of the way we calculate $`g`$, the correction to the surface tension. In particular, it shows that at zero temperature, the surface tension diverges as $`(L/a)^2`$, where $`L`$ is the size of the system.
We can deduce an equation satisfied by $`ℱ^{}`$ by plugging the ansatz for the derivatives of $`\overline{V}`$ into the flow equation for $`\overline{V}^{}`$:
$$\frac{d\overline{V}^{}}{d\mathrm{}}=(2-g-2\pi )\overline{V}^{}+2\pi \frac{\overline{V}^{}}{1+\overline{V}^{\prime \prime }}+\pi \overline{V}_{}^{}{}_{}{}^{2}\frac{\overline{V}^{\prime \prime \prime }}{(1+\overline{V}^{\prime \prime })^2}+\frac{\overline{T}}{2}\frac{\overline{V}^{\prime \prime \prime }}{1+\overline{V}^{\prime \prime }}$$
(29)
Close to the fixed point, we again suppose that, to leading order in $`\overline{T}`$, $`{\displaystyle \frac{d\overline{V}}{d\mathrm{}}}=0`$ in the above equation. Plugging in the ansatz for $`\overline{V}^{}`$ and $`\overline{V}^{\prime \prime }`$, and keeping the leading terms, we get to lowest order in $`\overline{T}`$:
$$-\frac{2\pi u}{ℱ^{}(u)}\overline{T}^{\alpha -\delta }+\frac{\pi u^2ℱ^{\prime \prime }(u)}{ℱ^{\prime 2}(u)}\overline{T}^{\alpha -\delta }+\frac{ℱ^{\prime \prime }(u)}{2ℱ^{}(u)}\overline{T}^{1-\alpha }=0$$
(30)
We can show that necessarily $`\alpha -\delta =1-\alpha `$. Indeed, if $`1-\alpha <\alpha -\delta `$, $`ℱ^{\prime \prime }`$ would be equal to zero, while if $`1-\alpha >\alpha -\delta `$, $`ℱ^{}(0)`$ would be equal to zero, both alternatives being thus impossible. Hence, equation (30) can be rewritten as
$$\frac{1}{2}\frac{d}{du}\mathrm{log}ℱ^{}-\pi \frac{d}{du}\left(\frac{u^2}{ℱ^{}}\right)=0$$
(31)
which yields after integration:
$$ℱ^{}(u)\mathrm{log}\left(\frac{ℱ^{}(u)}{ℱ^{}(0)}\right)=2\pi u^2$$
(32)
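Since equation (32) only defines the scaling function implicitly, it can be inverted numerically. The sketch below (ours, with the value of the function at the origin set to $`1.38`$, the number obtained at the end of this section) recovers the increasing branch by bisection and checks that the first integral of equation (31) is indeed constant:

```python
import numpy as np

F0 = 1.38   # value of the scaling function at u = 0, fixed by g(0) = 2

def Fprime(u):
    """Invert eq. (32): solve y*log(y/F0) = 2*pi*u^2 for y >= F0 by bisection."""
    target = 2*np.pi*u**2
    f = lambda y: y*np.log(y/F0) - target
    a, b = F0, 2.0*F0
    while f(b) < 0:          # bracket the root
        b *= 2.0
    for _ in range(200):     # bisect to machine precision
        m = 0.5*(a + b)
        a, b = (m, b) if f(m) < 0 else (a, m)
    return 0.5*(a + b)

u = np.linspace(0.0, 3.0, 301)
Fp = np.array([Fprime(x) for x in u])
# first integral of eq. (31): 0.5*log(F') - pi*u^2/F' should be constant
resid = 0.5*np.log(Fp) - np.pi*u**2/Fp - 0.5*np.log(F0)
```

The solution increases monotonically from its value at the origin, and the residual of the first integral vanishes to numerical accuracy.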
At this stage, we can note that the exponent relation
$$2\alpha -\delta =1$$
(33)
is independent of the scheme used to calculate $`g`$ (see Appendix).
In the rest of this section, we calculate the exponents $`\alpha `$ and $`\delta `$, and, using $`g(\overline{T}\rightarrow 0)=2`$, we also obtain $`ℱ^{}(0)`$. These results now somewhat depend on the precise renormalization scheme we use to calculate the correction $`g`$ to the surface tension. Replacing the derivatives of $`\overline{V}`$ by their expressions in terms of $`ℱ^{}`$ and $`ℱ^{\prime \prime }`$ in equation (5), and changing variables from $`\phi `$ to $`u={\displaystyle \frac{\phi }{\overline{T}^\alpha }}`$, we get to lowest order in $`\overline{T}`$:
$$g(\overline{T})\simeq \overline{T}^{\alpha -3\delta }8\pi \int _0^{\mathrm{}}𝑑uu^2\frac{ℱ^{\prime \prime 2}(u)}{ℱ^{\prime 5}(u)}+\overline{T}^{1-\alpha -2\delta }\frac{1}{2}\int _0^{\mathrm{}}𝑑u\frac{ℱ^{\prime \prime 2}(u)}{ℱ^{\prime 4}(u)}$$
(34)
From the exponent relation $`2\alpha -\delta =1`$ derived previously, and the fact that $`g(\overline{T}\rightarrow 0)`$ is finite, we have another exponent relation $`\alpha =3\delta `$, so that $`\alpha =3/5`$ and $`\delta =1/5`$. We can also show that $`g(\overline{T}\rightarrow 0)`$ can be expressed in terms of $`ℱ^{}(0)`$. From expression (32), one can see that $`ℱ^{}`$ is a strictly increasing function on $`[0,\mathrm{})`$, so we can change variables from $`ℱ^{}`$ to its inverse function. Defining a new variable $`x`$, which takes its values in $`[e,\mathrm{})`$, as
$$x=\frac{eℱ^{}(u)}{ℱ^{}(0)}$$
(35)
we have:
$$u=\left(\frac{ℱ^{}(0)}{2\pi e}\right)^{1/2}(x\mathrm{log}(x/e))^{1/2}$$
(36)
and
$$\frac{du}{dx}=\frac{1}{2}\frac{\mathrm{log}(x)}{(x\mathrm{log}(x/e))^{1/2}}\left(\frac{ℱ^{}(0)}{2\pi e}\right)^{1/2}$$
(37)
We can now express $`g(\overline{T}\rightarrow 0)`$ in terms of an integral over $`x`$ from $`e`$ to $`\mathrm{}`$ as
$$g(0)=\frac{2}{\pi ^2}\left(\frac{2\pi e}{ℱ^{}(0)}\right)^{5/2}\left\{\int _e^{\mathrm{}}𝑑x\frac{(x\mathrm{log}(x/e))^{3/2}}{x^5\mathrm{log}(x)}+\int _e^{\mathrm{}}𝑑x\frac{(x\mathrm{log}(x/e))^{1/2}}{x^4\mathrm{log}(x)}\right\}$$
(38)
Hence, using the fact that $`g(\overline{T}\rightarrow 0)=2`$, we finally find the constant $`ℱ^{}(0)\approx 1.38`$.
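This last number can be checked by direct quadrature of the two integrals in equation (38); in the sketch below (ours), the substitution $`x=e^{1+t}`$ maps both integrands onto simple exponentially damped functions of $`t`$:

```python
import numpy as np

# with x = e^(1+t): log(x/e) = t, log(x) = 1 + t, dx = x dt, and both
# integrands in eq. (38) reduce to e^(-5/2) t^(3/2) e^(-5t/2)/(1+t)
# and e^(-5/2) t^(1/2) e^(-5t/2)/(1+t) respectively
t = np.linspace(1e-9, 40.0, 400001)
dt = t[1] - t[0]
w = np.exp(-2.5*t)/(1.0 + t)
J1 = np.exp(-2.5)*np.sum(t**1.5 * w)*dt
J2 = np.exp(-2.5)*np.sum(t**0.5 * w)*dt

# impose g(0) = (2/pi^2) * (2*pi*e/F0)^(5/2) * (J1 + J2) = 2, solve for F0
F0 = 2*np.pi*np.e * ((J1 + J2)/np.pi**2) ** 0.4
```

Solving the condition $`g(0)=2`$ for the constant indeed returns a value close to $`1.38`$.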
## 5 Step energy as a function of temperature
From physical considerations we know that below the roughening temperature the interface grows by forming terraces; an important quantity governing the kinetics of growth is therefore the step energy. The width $`\xi `$ of a step and its energy per unit length $`\beta _S`$ can be obtained by comparing the elastic energy and the potential energy of a profile $`\mathrm{\Phi }(x)`$ which changes by one period over the length $`\xi `$. Requiring that these two energies are of the same order of magnitude leads to $`\gamma /\xi ^2\sim v_o`$, where $`v_o`$ is the amplitude of the periodic potential, or $`\xi \sim \sqrt{{\displaystyle \frac{\gamma }{v_o}}}`$, and a step energy which scales as $`\beta _S\sim \sqrt{v_o\gamma }`$. Since a step profile includes Fourier modes such that $`\xi ^{-1}<k<|\mathrm{\Lambda }|`$, it is natural to use in the above equations the values of $`\gamma `$ and $`v_o`$ calculated for the length $`L=ae^{\mathrm{}}=\xi `$. Since $`\xi (L)\sim L/\sqrt{\overline{v}_o(L)}`$, one sees that this corresponds to stopping the renormalization procedure when $`\overline{v}_o(L)\sim 1`$. We have integrated numerically the rg flow, starting from $`\gamma =1`$ and from harmonic potentials of various amplitudes $`v_o\ll 1`$, and stopping at an arbitrary value of $`\overline{v}`$, chosen here to be $`\overline{v}_c=0.4`$. <sup>2</sup><sup>2</sup>2Other values of $`\overline{v}_c`$ would not change the qualitative features reported below, provided $`\overline{v}_c`$ is not too large. The resulting step energy as a function of temperature is plotted in Figure (2). For $`T`$ close to $`T_R`$, one finds that $`\xi `$ diverges as $`e^{1/\sqrt{T_R-T}}`$, as it should, since our rg flow essentially boils down to the standard one. For small temperatures, however, we find that $`\beta _S`$ tends to a finite value with a linear slope in temperature. This slope is seen to decrease as the initial amplitude of the potential $`\overline{v}_o`$ increases.
For $`\overline{v}_o=0.01`$, $`\beta _S`$ decreases by $`30\%`$ when $`T`$ increases from $`0`$ to $`0.25T_R`$. This decrease falls to $`10\%`$ for $`\overline{v}_o=0.1`$.
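The balance argument above is easy to tabulate; a minimal sketch with illustrative bare values ($`\gamma =1`$ in lattice units; the amplitudes below are assumptions for the demonstration, not the fitted experimental ones):

```python
# Step width and step energy from the energy balance above:
# gamma/xi^2 ~ v_o  =>  xi ~ sqrt(gamma/v_o)  and  beta_S ~ sqrt(v_o*gamma),
# so weaker pinning potentials give wider and cheaper steps.
gamma = 1.0
for v_o in (0.01, 0.1, 1.0):
    xi = (gamma / v_o) ** 0.5       # step width in units of a
    beta_s = (v_o * gamma) ** 0.5   # step energy per unit length
    print(v_o, round(xi, 2), round(beta_s, 2))
```

As expected, $`\xi `$ and $`\beta _S`$ trade off against each other at fixed $`\gamma `$.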
Experiments on Helium 4, on the other hand, have established that the step energy depends only very weakly on temperature at low temperatures, varying by not more than $`5\%`$ when the temperature goes from $`0.05T_R`$ to $`0.25T_R`$. This suggests that the initial amplitude of the potential is of the same order as $`\gamma _o`$: in this case, the width of the step is of order $`a`$, and the bare parameters are not renormalized except possibly very close to $`T_R`$. The conclusion that experiments must be in the regime $`\overline{v}_o\sim 1`$ agrees with earlier work, where $`\overline{v}_o`$ is called $`t_c`$ (up to a numerical prefactor); there, $`v_o/\gamma _o`$ was estimated to be $`0.05`$. Since our rg flow is different from the one obtained by Nozières and Gallet, the values of the physical parameters obtained by a fit of our theory to the experiments will actually differ.
## 6 Case of the contact line
In this section, we repeat the previous analysis for the case of a contact line on a periodic substrate. The roughness of a contact line on a disordered substrate, at zero temperature, has been studied analytically and compared with experimental results for the case of superfluid helium on a disordered cesium substrate, where the disorder arises from randomly distributed wettable heterogeneities which are oxidized areas of the substrate. A physical realization of the theoretical situation we consider here could be achieved by preparing a substrate with equally spaced oxidized lines, which would act as periodic pinning grooves. In this case the critical dimension is $`d=1`$. We denote by $`\mathrm{\Phi }`$ the position of the line with respect to a mean position. The energy of the system is the sum of an elastic term and a potential term given by:
$$H[\mathrm{\Phi }]=\frac{\gamma }{2}\int \frac{dk}{2\pi }|k||\mathrm{\Phi }(k)|^2+\int _0^LdxV(\mathrm{\Phi }(x))$$
(39)
where $`L`$ is the length of the substrate and $`\gamma `$ the stiffness.
The renormalization procedure is carried out as before, except that the propagator is now given by $`G(k)={\displaystyle \frac{1}{\beta \gamma |k|}}`$. Moreover, the renormalization of the stiffness now comes only from the scale change, leading to the much simpler flow equation for $`\gamma `$:
$$\frac{d\gamma }{d\ell }=\gamma $$
(40)
Defining as before the rescaled parameters $`\overline{V}`$ and $`\overline{T}`$ with $`\overline{V}={\displaystyle \frac{V}{\gamma \lambda ^2|\mathrm{\Lambda }|}}`$ and $`\overline{T}={\displaystyle \frac{2T}{\gamma \lambda ^2}}`$, the flow equations for $`\overline{V}`$ and $`\overline{T}`$ read:
$$\frac{d\overline{V}}{d\ell }=\overline{V}-\frac{\overline{V}^{\prime 2}}{1+\overline{V}^{\prime \prime }}+\frac{\overline{T}}{2}\mathrm{log}(1+\overline{V}^{\prime \prime })$$
(41)
and
$$\frac{d\overline{T}}{d\ell }=-\overline{T}$$
(42)
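A quick numerical sketch of this flow, written out as $`d\overline{V}/d\ell =\overline{V}-\overline{V}^{\prime 2}/(1+\overline{V}^{\prime \prime })+(\overline{T}/2)\mathrm{log}(1+\overline{V}^{\prime \prime })`$ and $`d\overline{T}/d\ell =-\overline{T}`$, shows the curvature $`1+\overline{V}^{\prime \prime }`$ collapsing near the maxima of $`\overline{V}`$, the precursor of the shock structure described next. Grid size, step, and initial values are illustrative choices, not those used for the figures:

```python
import numpy as np

# Euler integration of the contact-line flow on one period of phi,
# dV/dl = V - V'^2/(1+V'') + (T/2) log(1+V''),  dT/dl = -T,
# with spectral (FFT) derivatives; parameters are illustrative only.
N = 128
phi = np.arange(N) / N                     # phi in [0, 1)
k = 2j * np.pi * np.fft.fftfreq(N) * N     # spectral d/dphi

def deriv(V, order):
    return np.real(np.fft.ifft(k**order * np.fft.fft(V)))

V = 0.005 * np.cos(2 * np.pi * phi)        # small harmonic initial potential
T, dl = 0.02, 1e-3
for _ in range(700):                       # flow up to l = 0.7
    Vp, Vpp = deriv(V, 1), deriv(V, 2)
    V = V + dl * (V - Vp**2 / (1 + Vpp) + 0.5 * T * np.log(1 + Vpp))
    T += dl * (-T)
# The amplitude has grown while min(1 + V'') has dropped: the potential
# steepens around its maxima, where the matched-parabola shocks will form.
print(round(V.max() - V.min(), 4), round((1 + deriv(V, 2)).min(), 3))
```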
During the flow, $`\overline{T}`$ flows to zero and the renormalized rescaled potential $`\overline{V}`$ develops shocks between which it has a parabolic shape. We characterize the singularities that develop around the maxima of $`\overline{V}`$ by the following scaling ansatz:
$$1+\overline{V}^{\prime \prime }(\phi )=\overline{T}^\delta e^{-\frac{A}{\overline{T}}}\mathcal{F}^{\prime }\left(\frac{\phi }{\overline{T}^\alpha e^{-\frac{B}{\overline{T}}}}\right)$$
(43)
where $`\mathcal{F}^{\prime }(0)>0`$ and $`\mathcal{F}(0)=0`$. Putting $`u={\displaystyle \frac{\phi }{\overline{T}^\alpha e^{-\frac{B}{\overline{T}}}}}`$, this implies that for $`u\lesssim 1`$,
$$\overline{V}^{\prime }(u)=-\overline{T}^\alpha e^{-\frac{B}{\overline{T}}}u+\overline{T}^{\delta +\alpha }e^{-\frac{A+B}{\overline{T}}}\mathcal{F}(u)$$
(44)
and
$$\overline{V}(u)=\overline{V}_M-\overline{T}^{2\alpha }e^{-\frac{2B}{\overline{T}}}\frac{u^2}{2}+\overline{T}^{\delta +2\alpha }e^{-\frac{A+2B}{\overline{T}}}\mathcal{G}(u)$$
(45)
with $`\mathcal{G}^{\prime }=\mathcal{F}`$. Plugging the previous expressions into the flow equation for $`\overline{V}^{\prime }`$:
$$\frac{d\overline{V}^{\prime }}{d\ell }=\overline{V}^{\prime }-2\overline{V}^{\prime }\frac{\overline{V}^{\prime \prime }}{1+\overline{V}^{\prime \prime }}+\overline{V}^{\prime 2}\frac{\overline{V}^{\prime \prime \prime }}{(1+\overline{V}^{\prime \prime })^2}+\frac{\overline{T}}{2}\frac{\overline{V}^{\prime \prime \prime }}{1+\overline{V}^{\prime \prime }}$$
(46)
and supposing that $`{\displaystyle \frac{d\overline{V}^{\prime }}{d\ell }}=0`$ at the fixed point, we obtain, to leading order as $`\overline{T}`$ goes to zero:
$$-\frac{2u}{\mathcal{F}^{\prime }(u)}\overline{T}^{\alpha -\delta }e^{\frac{A-B}{\overline{T}}}+\frac{u^2\mathcal{F}^{\prime \prime }(u)}{\mathcal{F}^{\prime 2}(u)}\overline{T}^{\alpha -\delta }e^{\frac{A-B}{\overline{T}}}+\frac{\mathcal{F}^{\prime \prime }(u)}{2\mathcal{F}^{\prime }(u)}\overline{T}^{1-\alpha }e^{\frac{B}{\overline{T}}}=0$$
(47)
Since $`\mathcal{F}^{\prime }(0)>0`$, we obtain a non-trivial solution only if
$$2\alpha -\delta =1\quad \mathrm{and}\quad A-B=B$$
(48)
and $`\mathcal{F}^{\prime }`$ is again a solution of equation (32). We can obtain the values of the parameters $`A`$ and $`B`$ by considering separately the singular part and the regular part of the renormalized rescaled potential $`\overline{V}`$ close to the fixed point. We expand $`\overline{V}`$ around one of its minima $`\phi ^*`$ as:
$$\overline{V}(\phi )=\overline{V}_m+\frac{\kappa }{2}(\phi -\phi ^*)^2$$
(49)
and plug the resulting expression in the flow equation for $`\overline{V}`$. This yields:
$$\begin{array}{cc}\frac{d\overline{V}_m}{d\ell }\hfill & =\overline{V}_m+\frac{\overline{T}}{2}\mathrm{log}(1+\kappa )\hfill \\ & \\ \frac{d\kappa }{d\ell }\hfill & =\kappa -\frac{2\kappa ^2}{1+\kappa }\hfill \end{array}$$
(50)
We note that the fixed point value for $`\kappa `$ is now finite as $`\overline{T}`$ goes to zero and is given by $`\kappa ^*=1`$. Similarly, the flow equation for $`\overline{V}(0)=\overline{V}_M`$ is:
$$\frac{d\overline{V}_M}{d\ell }=\overline{V}_M+\frac{\overline{T}}{2}\mathrm{log}\left(\overline{T}^\delta e^{-\frac{A}{\overline{T}}}\mathcal{F}^{\prime }(0)\right)$$
(51)
Combining equations (50) and (51), the flow equation for the amplitude of $`\overline{V}`$ is given to leading order as $`\overline{T}\to 0`$ by:
$$\frac{d}{d\ell }(\overline{V}_M-\overline{V}_m)=(\overline{V}_M-\overline{V}_m)-\frac{A}{2}$$
(52)
Now, we expect that the singularity brings only a small correction to the parabolic part of the rescaled potential $`\overline{V}`$, so that at the fixed point the amplitude of $`\overline{V}`$ is given by $`{\displaystyle \frac{\kappa ^*}{2}}`$, leading to $`A=1`$. Determining the values of the exponents $`\alpha `$ and $`\delta `$ would require the analysis of subdominant terms. The conclusion of this section is that, in the case of the contact line, the width of the singular region of the renormalized potential decreases exponentially with length scale: the potential quickly becomes a succession of matched parabolas.
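The finite fixed point $`\kappa ^*=1`$ of the curvature flow can be confirmed directly; a minimal sketch (Euler stepping, arbitrary positive initial curvature):

```python
# Integrate d(kappa)/dl = kappa - 2*kappa**2/(1+kappa): any kappa_0 > 0
# flows to the finite fixed point kappa* = 1, the nonzero root of
# kappa*(1-kappa)/(1+kappa) = 0.
kappa, dl = 0.2, 1e-3
for _ in range(40000):                 # flow "time" l = 40
    kappa += dl * (kappa - 2 * kappa**2 / (1 + kappa))
print(round(kappa, 4))                 # -> 1.0
```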
## 7 Conclusion
In this paper, we studied the problem of the thermal roughening transition using a frg formalism. We have shown that below the roughening temperature, the periodic potential on large length scales cannot be described by its lowest harmonic, and that during the flow shocks are generated in the effective pinning potential. We expect that this result is more generally valid, and also holds in the case of a disordered pinning potential. By performing a resummation of our perturbation expansion, our results are in principle valid in the strong coupling regime, where the coupling constant is proportional to $`V/\gamma `$ (rather than $`V/T`$). Correspondingly, we stop the renormalization procedure not when $`V(L)\sim T`$ (as in ng), but rather when $`L`$ reaches the size of the objects under investigation (for example the width of the steps). By comparing our numerical results with the experimental determination of the step energy of liquid Helium 4, we have concluded that the surfaces of Helium 4 crystals are such that the coupling to the lattice is of the same order of magnitude as the surface tension. This is in qualitative agreement with Balibar et al., who estimate $`v_o/\gamma _o\sim 0.05`$.
## Acknowledgements
We wish to thank Sebastien Balibar, T. Emig and M. Mézard for very interesting discussions.
## Appendix A Derivation of the flow equations
In this appendix, we sketch the procedure to obtain the flow equations (4). We consider the partition function
$$Z=\int d[\mathrm{\Phi }]e^{-\beta H\left[\mathrm{\Phi }\right]}$$
(53)
where $`H`$ is the Hamiltonian given by (1). We split the field $`\mathrm{\Phi }`$ into a fast-moving and a slow-moving component and average over the fast-moving part. We can rewrite the partition function, up to a multiplicative constant, as:
$$Z=\int d[\mathrm{\Phi }^<]e^{-\frac{\beta \gamma }{2}\int _<\frac{d^dk}{(2\pi )^d}\left|k\right|^2\left|\mathrm{\Phi }^<\left(k\right)\right|^2}<e^{-\beta \int d^dxV\left(\frac{\mathrm{\Phi }(x)}{\lambda }\right)}>_o$$
(54)
where $`<\mathrm{}>_o`$ represents the thermal average with respect to the Gaussian weight:
$$e^{-\frac{\beta \gamma }{2}\int _>\frac{d^dk}{(2\pi )^d}\left|k\right|^2\left|\mathrm{\Phi }^>\left(k\right)\right|^2}$$
(55)
In the rest of this section we denote $`{\displaystyle \frac{d^dk}{(2\pi )^d}}`$ by $`\stackrel{~}{d}k`$.
### A.1 Renormalization of the periodic potential $`V`$
We look for contributions to the potential $`V`$ resulting from the above averaging which are of the same form as the terms present in the Hamiltonian before starting the renormalization procedure, and which are of order $`d\ell `$. These terms are represented by connected graphs and are all obtained by expanding the potential term with respect to $`\mathrm{\Phi }^>`$ up to second order. There are only two ways of obtaining such graphs of order $`d\ell `$:
• By contracting $`p`$ two-legged terms $`-{\displaystyle \frac{\beta }{2}}{\displaystyle \int d^dx\left(\frac{\mathrm{\Phi }^>(x)}{\lambda }\right)^2V^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)}`$ with $`1\le p<\infty `$. We must calculate:
$$\begin{array}{cc}\frac{1}{p!}\left(-\frac{\beta }{2\lambda ^2}\right)^p\hfill & \prod _{j=1}^{p}\int d^dx_jV^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x_j)}{\lambda }\right)\hfill \\ & \\ & \prod _{j=1}^{p}\int \stackrel{~}{d}k_j\stackrel{~}{d}k_j^{\prime }e^{i(k_1+k_1^{\prime })x_1+\cdots +i(k_p+k_p^{\prime })x_p}<\mathrm{\Phi }^>(k_1)\cdots \mathrm{\Phi }^>(k_p^{\prime })>_o\hfill \end{array}$$
(56)
which gives after averaging over the fast modes:
$$\frac{(-1)^p}{2p}\left(\frac{1}{\gamma \lambda ^2}\right)^p\prod _{j=1}^{p}\int d^dx_jV^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x_j)}{\lambda }\right)\prod _{j=1}^{p}\int \stackrel{~}{d}k_j\frac{e^{ik_1(x_1-x_2)+\cdots +ik_p(x_p-x_1)}}{|k_1|^2\cdots |k_p|^2}$$
(57)
In the above expression, the spatial dependence of $`V^{\prime \prime }`$ is slowly varying, and since the integral is dominated by the region where the $`x_j`$'s are close to one another, we can, with little error, treat these terms as approximately equal to $`V^{\prime \prime }\left({\displaystyle \frac{\mathrm{\Phi }^<(x_1)}{\lambda }}\right)`$. After integrating over the remaining $`x_j`$'s and summing over $`p`$, we are left with:
$$\beta \,d\ell \,\frac{\mathcal{K}_d|\mathrm{\Lambda }|^dT}{2}\int d^dx\mathrm{log}\left(1+\frac{V^{\prime \prime }}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}\right)$$
(58)
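For reference, the sum over $`p`$ behind the logarithm is the elementary series

$$\sum _{p=1}^{\infty }\frac{(-1)^p}{2p}y^p=-\frac{1}{2}\mathrm{log}(1+y),\qquad y=\frac{V^{\prime \prime }}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}$$

the overall sign being fixed once the result is read as a contribution to the effective Hamiltonian $`\beta H`$ rather than to the exponent of the partition function.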
• By contracting $`2`$ one-legged terms $`-\beta {\displaystyle \int d^dx\left(\frac{\mathrm{\Phi }^>(x)}{\lambda }\right)V^{\prime }\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)}`$ with $`p`$ two-legged terms $`-{\displaystyle \frac{\beta }{2}}{\displaystyle \int d^dx\left(\frac{\mathrm{\Phi }^>(x)}{\lambda }\right)^2V^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)}`$ with $`0\le p<\infty `$. To illustrate our method, we begin with $`p=0`$. Expressing the fast modes in Fourier space, we have in discrete space:
$$\frac{1}{2!}\beta ^2\left(\frac{1}{\lambda ^2}\right)\left(\frac{a}{L}\right)^{2d}\sum _x\sum _yV^{\prime }\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)V^{\prime }\left(\frac{\mathrm{\Phi }^<(y)}{\lambda }\right)\sum _k\sum _{k^{\prime }}e^{ikx+ik^{\prime }y}<\mathrm{\Phi }^>(k)\mathrm{\Phi }^>(k^{\prime })>_o$$
(59)
which gives after averaging over the fast modes:
$$\frac{\beta }{2}\left(\frac{1}{\gamma \lambda ^2}\right)a^{2d}\sum _x\sum _yV^{\prime }\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)V^{\prime }\left(\frac{\mathrm{\Phi }^<(y)}{\lambda }\right)\frac{1}{L^d}\sum _k\frac{e^{ik(x-y)}}{|k|^2}$$
(60)
In the above expression, the main contribution comes from the part with $`x=y`$, while the part with $`x\ne y`$ has a phase which averages almost to zero. Using the fact that $`|\mathrm{\Lambda }|={\displaystyle \frac{2\pi }{a}}`$, the result is:
$$-\beta \,d\ell \,\frac{K_d(2\pi )^d}{2}\int d^dx\frac{V^{\prime 2}(\mathrm{\Phi }^<(x)/\lambda )}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}$$
(61)
Proceeding in a similar way for $`1\le p<\infty `$, we have
$$\begin{array}{cc}\beta ^2\hfill & \left(-\frac{\beta }{2}\right)^p\frac{1}{(p+2)!}\frac{(p+1)(p+2)}{2}\left(\frac{1}{\lambda ^2}\right)^{p+1}\left(\frac{a}{L}\right)^{(p+2)d}\hfill \\ & \\ & \sum _{x,y,x_1\cdots x_p}V^{\prime }\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)V^{\prime }\left(\frac{\mathrm{\Phi }^<(y)}{\lambda }\right)V^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x_1)}{\lambda }\right)\cdots V^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x_p)}{\lambda }\right)\hfill \\ & \\ & \sum _{k,k^{\prime },k_1\cdots k_p^{\prime }}e^{ikx+ik^{\prime }y+i(k_1+k_1^{\prime })x_1+\cdots +i(k_p+k_p^{\prime })x_p}<\mathrm{\Phi }^>(k)\mathrm{\Phi }^>(k^{\prime })\mathrm{\Phi }^>(k_1)\cdots \mathrm{\Phi }^>(k_p^{\prime })>_o\hfill \end{array}$$
(62)
which yields after retaining the $`x=y`$ part and averaging over the fast modes:
$$\begin{array}{cc}\frac{\beta }{2}(-1)^pa^d\left(\frac{1}{\gamma \lambda ^2}\right)^{p+1}\prod _{j=1}^{p}\int d^dx_j\int d^dx\hfill & V^{\prime 2}\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)\prod _{j=1}^{p}V^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x_j)}{\lambda }\right)\hfill \\ & \\ & \prod _{j=1}^{p}\int \stackrel{~}{d}k_j\stackrel{~}{d}k\frac{e^{ik(x-x_1)+ik_1(x_1-x_2)+\cdots +ik_p(x_p-x)}}{|k|^2|k_1|^2\cdots |k_p|^2}\hfill \end{array}$$
(63)
Treating the above expression as in equation (57), we are left with:
$$\beta \,d\ell \,(-1)^{p+1}\frac{K_d(2\pi )^d}{2}\int d^dx\frac{V^{\prime 2}(\mathrm{\Phi }^<(x)/\lambda )}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}\left(\frac{V^{\prime \prime }(\mathrm{\Phi }^<(x)/\lambda )}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}\right)^p$$
(64)
Summing up over $`p`$, we finally obtain:
$$-\beta \,d\ell \,\frac{K_d(2\pi )^d}{2}\int d^dx\frac{V^{\prime 2}(\mathrm{\Phi }^<(x)/\lambda )}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}\left(1+\frac{V^{\prime \prime }(\mathrm{\Phi }^<(x)/\lambda )}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}\right)^{-1}$$
(65)
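The sum over $`p`$ here is a plain geometric series,

$$\sum _{p=0}^{\infty }(-y)^p=\frac{1}{1+y},\qquad y=\frac{V^{\prime \prime }}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}$$

which is what produces the denominator in equation (65); convergence of the series requires $`|V^{\prime \prime }|<\gamma \lambda ^2|\mathrm{\Lambda }|^2`$, the final expression being understood as a resummation beyond that radius.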
Taking into account the rescaling of the potential term, and supposing we are in the flat phase so that $`\mathrm{\Phi }`$ and $`\lambda `$ are not rescaled, we obtain for $`d=2`$:
$$\frac{dV}{d\ell }=2V-\pi \frac{\left({\displaystyle \frac{V^{\prime 2}}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}}\right)}{1+\left({\displaystyle \frac{V^{\prime \prime }}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}}\right)}+\frac{T}{4\pi \gamma }\mathrm{log}\left(1+\frac{V^{\prime \prime }}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}\right)$$
(66)
This flow equation can be rewritten in terms of the rescaled parameters $`\overline{V}={\displaystyle \frac{V}{\gamma \lambda ^2|\mathrm{\Lambda }|^2}}`$ and $`\overline{T}={\displaystyle \frac{T}{2\pi \gamma \lambda ^2}}`$. Putting $`g={\displaystyle \frac{1}{\gamma }}{\displaystyle \frac{d\gamma }{d\ell }}`$, we have:
$$\frac{d\overline{V}}{d\ell }=(2-g)\overline{V}-\pi \frac{\overline{V}^{\prime 2}}{1+\overline{V}^{\prime \prime }}+\frac{\overline{T}}{2}\mathrm{log}(1+\overline{V}^{\prime \prime })$$
(67)
### A.2 Renormalization of the surface tension $`\gamma `$
The contributions to the gradient term are obtained from equations (57) and (63).
• Consider first the contribution due to equation (63). Here, $`x`$ is fixed, since we have imposed that it be equal to $`y`$. We can thus expand the $`V^{\prime \prime }`$ terms with respect to this variable as:
$$V^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x_j)}{\lambda }\right)=V^{\prime \prime }\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)+\frac{1}{\lambda }(x_j-x)\cdot \mathrm{\nabla }\mathrm{\Phi }^<(x)V^{\prime \prime \prime }\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)$$
(68)
The contribution of equation (63) to the gradient term is given by:
$$\begin{array}{cc}\frac{\beta \gamma }{2}a^d(-1)^{p+1}\hfill & \left(\frac{1}{\gamma \lambda ^2}\right)^{p+2}\sum _{n<m}\prod _{j=1}^{p}\int d^d\stackrel{~}{x}_j\int d^dx\hfill \\ & \\ & (\stackrel{~}{x}_n\cdot \mathrm{\nabla }\mathrm{\Phi }^<(x))(\stackrel{~}{x}_m\cdot \mathrm{\nabla }\mathrm{\Phi }^<(x))V^{\prime 2}\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)V^{\prime \prime \,p-2}\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)V^{\prime \prime \prime \,2}\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)\hfill \\ & \\ & \prod _{j=1}^{p}\int \stackrel{~}{d}k_j\stackrel{~}{d}k\frac{e^{-ik\stackrel{~}{x}_1+ik_1(\stackrel{~}{x}_1-\stackrel{~}{x}_2)+\cdots +ik_p\stackrel{~}{x}_p}}{|k|^2|k_1|^2\cdots |k_p|^2}\hfill \end{array}$$
(69)
After integrating over $`\stackrel{~}{x}_i`$ for $`i\ne m,n`$, we are left with:
$$\begin{array}{cc}\hfill \frac{\beta \gamma }{2}a^d(-1)^{p+1}\frac{1}{\gamma ^{p+2}}\sum _{n<m}& \int d^dx\,d^d\stackrel{~}{x}_m\,d^d\stackrel{~}{x}_n\sum _{\nu =1}^{d}\stackrel{~}{x}_m^\nu \stackrel{~}{x}_n^\nu (\mathrm{\nabla }\mathrm{\Phi }^<(x))_\nu ^2\hfill \\ & \\ \hfill V^{\prime 2}\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)& V^{\prime \prime \,p-2}\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)V^{\prime \prime \prime \,2}\left(\frac{\mathrm{\Phi }^<(x)}{\lambda }\right)\hfill \\ & \\ & \int \stackrel{~}{d}k\,\stackrel{~}{d}k^{\prime }\,\stackrel{~}{d}k^{\prime \prime }\frac{e^{-ik\stackrel{~}{x}_n+ik^{\prime \prime }\stackrel{~}{x}_m+ik^{\prime }(\stackrel{~}{x}_n-\stackrel{~}{x}_m)}}{(|k|^2)^n(|k^{\prime }|^2)^{m-n}(|k^{\prime \prime }|^2)^{p+1-m}}\hfill \end{array}$$
(70)
Using the fact that $`ix^\nu e^{ikx}={\displaystyle \frac{\partial }{\partial k^\nu }}e^{ikx}`$, and an integration by parts, we finally get, after summing over $`n<m`$ and over $`p`$:
$$\frac{\beta \gamma }{2}d\ell \frac{4}{d}K_d(2\pi )^d\int d^dx\frac{\overline{V}^{\prime 2}\left({\displaystyle \frac{\mathrm{\Phi }^<(x)}{\lambda }}\right)\overline{V}^{\prime \prime \prime \,2}\left({\displaystyle \frac{\mathrm{\Phi }^<(x)}{\lambda }}\right)}{\left(1+\overline{V}^{\prime \prime }\left({\displaystyle \frac{\mathrm{\Phi }^<(x)}{\lambda }}\right)\right)^5}(\mathrm{\nabla }\mathrm{\Phi }^<(x))^2$$
(71)
Since we are looking for the contribution to the gradient term, only the projection of the periodic function $`{\displaystyle \frac{\overline{V}^{\prime 2}\overline{V}^{\prime \prime \prime \,2}}{\left(1+\overline{V}^{\prime \prime }\right)^5}}`$ on the zeroth harmonic counts. The contribution of equation (63) to the elastic constant $`\gamma `$ is thus:
$$\gamma \,d\ell \,\frac{4}{d}K_d(2\pi )^d\int _0^1d\phi \frac{\overline{V}^{\prime 2}(\phi )\overline{V}^{\prime \prime \prime \,2}(\phi )}{\left(1+\overline{V}^{\prime \prime }(\phi )\right)^5}$$
(72)
• Consider now equation (57). In order to obtain a term of the form $`(\mathrm{\nabla }\mathrm{\Phi }^<(x))^2`$, we have to expand two $`V^{\prime \prime }`$ terms. We proceed as follows: if we choose to expand $`V^{\prime \prime }(\mathrm{\Phi }^<(x_n)/\lambda )`$ and $`V^{\prime \prime }(\mathrm{\Phi }^<(x_m)/\lambda )`$ with $`m<n`$, we perform the expansion with respect to $`(x_m+x_n)/2`$. The $`V^{\prime \prime }`$ terms thus give:
$$\frac{1}{4}\left((x_n-x_m)\cdot \mathrm{\nabla }\mathrm{\Phi }^<\left(\frac{x_n+x_m}{2}\right)\right)^2V^{\prime \prime \prime \,2}\left(\frac{x_n+x_m}{2}\right)V^{\prime \prime \,p-2}\left(\frac{x_n+x_m}{2}\right)$$
(73)
Integrating over $`x_j`$ for $`j\ne m,n`$, we get:
$$\begin{array}{cc}\frac{\beta \gamma }{2}\hfill & \frac{(-1)^p}{p}\frac{T}{\gamma }\left(\frac{1}{\gamma \lambda ^2}\right)^p\sum _{m<n}\int d^dx_m\,d^dx_n\hfill \\ & \\ & \left(\mathrm{\nabla }\mathrm{\Phi }^<\left(\frac{x_n+x_m}{2}\right)\right)_\nu ^2V^{\prime \prime \prime \,2}\left(\frac{x_n+x_m}{2}\right)V^{\prime \prime \,p-2}\left(\frac{x_n+x_m}{2}\right)\hfill \\ & \\ & \int \stackrel{~}{d}k\,\stackrel{~}{d}k^{\prime }(x_m-x_n)_\nu ^2\frac{e^{i(k-k^{\prime })(x_m-x_n)}}{(|k|^2)^{p-n+m}(|k^{\prime }|^2)^{n-m}}\hfill \end{array}$$
(74)
Writing $`(x_m-x_n)_\nu ^2e^{i(k-k^{\prime })(x_m-x_n)}={\displaystyle \frac{\partial }{\partial k_\nu }}{\displaystyle \frac{\partial }{\partial k_\nu ^{\prime }}}e^{i(k-k^{\prime })(x_m-x_n)}`$, and performing the rest of the calculation as described above, we find that the contribution to the elastic term, in terms of the rescaled parameters $`\overline{V}`$ and $`\overline{T}`$, is given by
$$\gamma \,d\ell \,\frac{\overline{T}}{2d}\int _0^1d\phi \frac{\overline{V}^{\prime \prime \prime \,2}(\phi )}{\left(1+\overline{V}^{\prime \prime }(\phi )\right)^4}$$
(75)
finally leading, for $`d=2`$, to the renormalization of $`\gamma `$ given in the main text. If we had chosen another expansion scheme to obtain the contribution to the gradient term, for instance if we had expanded the terms with respect to the centre of mass, we would have obtained a somewhat different value for $`g`$. This would only affect the precise value of the exponents $`\alpha `$ and $`\delta `$ obtained in the text, but not the qualitative features of the solution.
# DIMER ORDER WITH STRIPED CORRELATIONS IN THE 𝐽₁-𝐽₂ HEISENBERG MODEL
## I Introduction
There has been considerable study, over the last decade, of the frustrated spin-$`\frac{1}{2}`$ square-lattice Heisenberg antiferromagnet (the “$`J_1\text{-}J_2`$ antiferromagnet”). These studies include exact diagonalizations on small systems, spin-wave calculations, series expansions, and field-theoretic large-$`N`$ expansions.
These studies, and others, have provided a substantial body of evidence that the ground state of this system, in the region $`0.4\lesssim J_2/J_1\lesssim 0.6`$, has no long-range magnetic order and has a gap to spin excitations. For $`J_2/J_1\lesssim 0.4`$ the model has conventional antiferromagnetic Néel order, whereas for $`J_2/J_1\gtrsim 0.6`$ the system orders in a columnar $`(\pi ,0)`$ phase. Whether this “intermediate phase” is a spatially homogeneous spin liquid, or whether it has some type of spontaneously broken symmetry leading to a more subtle type of long-range order, has not been conclusively established.
Zhitomirsky and Ueda have proposed a plaquette resonating valence bond (RVB) phase, which breaks translational symmetry along both $`x`$ and $`y`$ axes, but preserves the symmetry of interchange of the two axes. The horizontal and vertical dimers resonate within a plaquette. An early series study had investigated the relative stability of various spontaneously dimerized states and had concluded that a columnar dimerized phase was the most promising candidate for the intermediate region, in agreement with the large-$`N`$ expansions. Zhitomirsky and Ueda claim their plaquette phase has a lower energy than this columnar dimer phase, but we find this to be incorrect.
Further support for the columnar dimer scenario comes from recent work of Kotov et al., who combine an analytic many-body theory with extended series and diagonalization results to study the nature and stability of the excitations in the intermediate region. It is argued that where the Néel phase becomes unstable the system will develop not only a gap for triplet excitations but also a gapped low-energy singlet which reflects the spontaneous symmetry breaking. This is clearly seen in the calculations. At $`J_2/J_1\approx 0.38`$ a second-order transition occurs, with the energies of the Néel phase and the dimerized phase joining smoothly, and the energy gap and dimerization vanishing.
It is the aim of this paper to further investigate, using series methods, the competing possibilities of columnar dimerization versus plaquette order in the intermediate region of the $`J_1\text{-}J_2`$ antiferromagnet. It is conceivable that both occur, with a transition from one to the other. However, such a transition, reflecting a change of symmetry, is expected to be first-order and not well suited to series methods. If both phases are locally stable the most direct way to compare them is by comparison of the ground state energies. If one is unstable this should show up by the closing of an appropriate gap or by the divergence of an appropriate susceptibility. In this paper we calculate the ground state energy and the singlet and triplet excitation spectra by series expansions about a disconnected plaquette Hamiltonian. We also calculate the susceptibility for the dimer phase to break translational symmetry in the direction perpendicular to the dimers. This susceptibility will be large if there is substantial resonance in the dimer phase and will diverge if there is an instability to the plaquette RVB phase.
Combining the plaquette expansion results with the dimer expansions of Kotov et al., a very interesting picture emerges for the quantum disordered phase. We find that the plaquette phase is unstable and hence is not the ground state for this model. The dimer phase, on the other hand, is stable. However, there is substantial resonance in the dimer phase. The spin-spin correlations are not simply those of isolated dimers. Instead, the nearest-neighbor correlations are nearly identical along the rungs and chains of dimer columns. In contrast, the correlations from one dimer column to the next are much weaker. The spin-gap phase appears separated from the Néel phase by a second-order transition, whereas it is separated from the columnar phase by a first-order transition. These results are in remarkable agreement with the large-$`N`$ theories. The existence of a quantum critical point separating an antiferromagnetic phase and a quantum disordered phase with striped correlations in a microscopic model makes this critical point a particularly interesting one. The role of doping and its implications for high-$`T_c`$ materials deserves further attention.
## II Series Expansions and Results
We study the Hamiltonian
$$H=J_1\sum _{\mathrm{n}.\mathrm{n}.}𝐒_i\cdot 𝐒_j+J_2\sum _{\mathrm{n}.\mathrm{n}.\mathrm{n}.}𝐒_i\cdot 𝐒_j$$
(1)
where the first sum runs over the nearest neighbor and the second over the second nearest neighbor spin pairs of the square-lattice. We denote the ratio of couplings as $`y=J_2/J_1`$. The linked-cluster expansion method has been previously reviewed in several articles, and will not be repeated here. To carry out the series expansion about the disconnected-plaquette state for this system, we take the interactions denoted by the thick solid and dashed bonds in Fig. 1 as the unperturbed Hamiltonian, and the rest of the interactions as a perturbation. That is, we define the following Hamiltonian
$$H=H_0+H_1$$
(2)
where the unperturbed Hamiltonian ($`H_0`$) and perturbation ($`H_1`$) are
$`H_0`$ $`=`$ $`J_1{\displaystyle \sum _{ij\in A}}𝐒_i\cdot 𝐒_j+J_2{\displaystyle \sum _{ij\in B}}𝐒_i\cdot 𝐒_j`$ (3)
$`H_1`$ $`=`$ $`\lambda J_1{\displaystyle \sum _{ij\in C}}𝐒_i\cdot 𝐒_j+\lambda J_2{\displaystyle \sum _{ij\in D}}𝐒_i\cdot 𝐒_j`$ (5)
and the summations are over intra-plaquette nearest-neighbor bonds (A), intra-plaquette second nearest-neighbor bonds (B), inter-plaquette nearest-neighbor bonds (C), inter-plaquette second nearest-neighbor bonds (D), shown in Fig. 1. With this Hamiltonian, one can carry out an expansion in powers of $`\lambda `$, and at $`\lambda =1`$ one recovers the original Hamiltonian in Eq. (1). Thus, although we expand about a particular state, i.e. a plaquette state, our results at $`\lambda =1`$ describe the original system without broken symmetries, provided no intervening singularity is present. Such perturbation expansions about an unperturbed plaquette Hamiltonian have been used previously to study Heisenberg models for CaV<sub>4</sub>O<sub>9</sub>.
It is instructive to consider the states of an isolated plaquette. There are two singlet states, one with energy $`(-2+y/2)J_1`$ and the other with energy $`-(3y/2)J_1`$. The former is the ground state for $`y<1`$ and corresponds to pair singlets resonating between the vertical and horizontal bonds of the plaquette. It is even under a $`\pi /2`$ rotation. The latter is the ground state for $`y>1`$ and is odd under a $`\pi /2`$ rotation. The wavefunctions for these two singlet states are
$`\psi _1`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{12}}}\left[\left(\begin{array}{cc}+& +\\ -& -\end{array}\right)+\left(\begin{array}{cc}+& -\\ +& -\end{array}\right)+\left(\begin{array}{cc}-& -\\ +& +\end{array}\right)+\left(\begin{array}{cc}-& +\\ -& +\end{array}\right)-2\left(\begin{array}{cc}+& -\\ -& +\end{array}\right)-2\left(\begin{array}{cc}-& +\\ +& -\end{array}\right)\right]`$ (18)
$`=`$ $`{\displaystyle \frac{1}{\sqrt{3}}}\left[\left(\text{two horizontal dimers}\right)+\left(\text{two vertical dimers}\right)\right]`$ (22)
$`\psi _2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[\left(\begin{array}{cc}+& +\\ -& -\end{array}\right)-\left(\begin{array}{cc}+& -\\ +& -\end{array}\right)+\left(\begin{array}{cc}-& -\\ +& +\end{array}\right)-\left(\begin{array}{cc}-& +\\ -& +\end{array}\right)\right]`$ (31)
$`=`$ $`\left[\left(\text{two vertical dimers}\right)-\left(\text{two horizontal dimers}\right)\right]`$ (35)
where the arrays give the $`S^z`$ patterns on the plaquette and “dimers” denotes nearest-neighbor pairs of spins forming singlets. There are three triplet states, one with energy $`(-1+y/2)J_1`$ and a degenerate pair with energy $`-(y/2)J_1`$; like the singlets, these have a level crossing at $`y=1`$. Under a $`\pi /2`$ rotation the former is odd, while the latter two are even and odd, respectively. Finally there is a quintuplet state at $`(1+y/2)J_1`$, which is even under a $`\pi /2`$ rotation. For $`y<1/2`$ and $`y>2`$ the first excited state of the plaquette is a triplet, while for $`1/2<y<2`$ it is the other singlet. These states and corresponding energies are shown in Figure 2. The eigenstates of $`H_0`$, the unperturbed Hamiltonian, are direct products of these plaquette states.
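The plaquette spectrum quoted above is easy to verify numerically; a minimal check (the labeling of sites 0-3 around the square, with diagonals (0,2) and (1,3), is our assumption for the demonstration):

```python
import numpy as np

# Diagonalize H = J1 (S0.S1 + S1.S2 + S2.S3 + S3.S0) + J2 (S0.S2 + S1.S3)
# on the 16-dimensional Hilbert space of one plaquette.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(mat, site):          # embed a single-site operator in the 4-spin space
    ops = [I2] * 4
    ops[site] = mat
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def heis(i, j):                  # S_i . S_j
    return sum(site_op(s, i) @ site_op(s, j) for s in (sx, sy, sz))

J1, y = 1.0, 0.5
H = J1 * (heis(0, 1) + heis(1, 2) + heis(2, 3) + heis(3, 0)) \
    + y * J1 * (heis(0, 2) + heis(1, 3))
E = np.linalg.eigvalsh(H)
print(round(float(E[0]), 6))     # -1.75 = (-2 + y/2) J1 at y = 1/2
```

At $`y=1/2`$ the levels come out as $`-7/4`$ (resonating singlet), $`-3/4`$ (fourfold: the odd singlet is degenerate with one triplet at exactly this coupling), $`-1/4`$ (the degenerate triplet pair), and $`5/4`$ (quintuplet).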
To derive the plaquette expansions we identify each plaquette as a 16-state quantum object, and these lie at the sites of a square lattice with spacing $`2a`$, where $`a`$ is the original lattice spacing. Interactions between plaquettes connect first- and second-neighbor sites on this new lattice. The cluster data is thus identical to that used by us previously to derive Ising expansions for this model. Because there are 16 states at each cluster site, the vector space grows very rapidly with the number of sites and thus limits the maximum attainable order for plaquette expansions to considerably less than can be achieved for dimer or Ising expansions.
We have computed the ground state energy $`E_0`$ to order $`\lambda ^7`$, for fixed values of the coupling ratio $`y`$. The series are analysed using integrated differential approximants, evaluated at $`\lambda =1`$ to give the ground state energy of the original Hamiltonian. The estimates, with error bars representing confidence limits, are shown in Figure 3. For comparison we also show previous results obtained from Ising expansions and dimer expansions. We find that, in the intermediate region, the ground state energies of the plaquette and dimer phases are very close to each other and cannot be used to distinguish between them. The dimer expansion yields slightly lower energies near the transition to the Néel phase. We do not draw any conclusions from this.
Zhitomirsky and Ueda have claimed that the ground state energy from a second-order plaquette expansion is $`-0.63`$ (at $`y=0.5`$), much lower than the dimer expansion result $`-0.492`$. This result appears incorrect. At $`J_2/J_1=\frac{1}{2}`$ the ground state energy is given by
$`4E_0/NJ_1`$ $`=`$ $`-7/4-277\lambda ^2/1440-0.001357\lambda ^3-0.0210609\lambda ^4`$ (37)
$`-0.000319586\lambda ^5-0.00580643\lambda ^6-0.001822686\lambda ^7+O(\lambda ^8)`$
The second order result (at $`\lambda =1`$) is $`E_0/NJ_1=-0.485`$, rather than $`-0.63`$. We note that if the second order coefficient were 4 times larger, the resulting energy would be $`-0.62986`$.
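The numbers above can be checked by summing the series directly (coefficients transcribed from equation (37); the per-site energy divides the per-plaquette quantity by 4):

```python
# Plaquette-expansion series for 4 E0 / (N J1) at J2/J1 = 1/2, Eq. (37).
coeffs = {0: -7/4, 2: -277/1440, 3: -0.001357, 4: -0.0210609,
          5: -0.000319586, 6: -0.00580643, 7: -0.001822686}

def e0_per_site(lam, max_order=7):
    return sum(c * lam**n for n, c in coeffs.items() if n <= max_order) / 4

print(round(e0_per_site(1.0, max_order=2), 4))   # -0.4856, the 2nd-order value
print(round(e0_per_site(1.0), 4))                # -0.4932, the full 7th-order sum
```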
We have also derived series, to order $`\lambda ^6`$, for the singlet and triplet excitation energies, $`\mathrm{\Delta }_s(k_x,k_y)`$, $`\mathrm{\Delta }_t(k_x,k_y)`$ using the method of Gelfand, and taking as unperturbed eigenfunctions the corresponding plaquette states. The low order terms for $`J_2/J_1=0.5`$ are given by:
$`\mathrm{\Delta }_s(k_x,k_y)/J_1`$ $`=`$ $`1-301\lambda ^2/1440+137\lambda ^3/86400+217\lambda ^3\mathrm{cos}(k_x)\mathrm{cos}(k_y)/172800`$ (39)
$`+(5\lambda ^2/168-9\lambda ^3/9600)[\mathrm{cos}(k_x)+\mathrm{cos}(k_y)]/2`$
$`\mathrm{\Delta }_t(k_x,k_y)/J_1`$ $`=`$ $`1-3691\lambda ^2/30240-(2\lambda /3-11\lambda ^2/720)[\mathrm{cos}(k_x)+\mathrm{cos}(k_y)]/2`$ (42)
$`-\lambda ^2[\mathrm{cos}(2k_x)+\mathrm{cos}(2k_y)]/120+(\lambda /3-5\lambda ^2/96)\mathrm{cos}(k_x)\mathrm{cos}(k_y)`$
$`-\lambda ^2[\mathrm{cos}(2k_x)\mathrm{cos}(k_y)+\mathrm{cos}(k_x)\mathrm{cos}(2k_y)]/90+7\lambda ^2\mathrm{cos}(2k_x)\mathrm{cos}(2k_y)/360`$
The full series are available on request. We first consider the triplet excitations. Figure 4 shows $`\mathrm{\Delta }_t(k_x,k_y)`$ along high symmetry directions in the Brillouin zone for $`\lambda =0.5`$ and various coupling ratios $`y`$. For $`\lambda \lesssim 0.6`$ the series are well converged, and direct summation and integrated differential approximants give essentially identical results. We find that the minimum gap occurs at $`(0,0)`$ for $`J_2/J_1\lesssim 0.55`$ and moves to $`(\pi ,0)`$ for $`J_2/J_1\gtrsim 0.55`$. Next we seek to locate the critical point $`\lambda _c`$ where the triplet gap vanishes. This is done using Dlog Padé approximants to the gap series at the appropriate $`(k_x,k_y)`$. In practice this works well when the minimum gap lies at $`(0,0)`$. For $`J_2=0`$ we find a critical point at $`\lambda _c=0.555(10)`$. We can compare this result with recent work of Koga et al., who obtain $`\lambda _c=0.112`$ from a modified spin-wave theory and $`\lambda _c\simeq 0.54`$ from a 4th order plaquette expansion. The critical point $`\lambda _c`$ increases with increasing $`y`$. At $`y=0.5`$, at the approximate centre of the intermediate phase, we find $`\lambda _c\simeq 0.89(7)`$. This result has some uncertainty but, if accurate, means that the plaquette phase becomes unstable before the full Hamiltonian ($`\lambda =1`$) is reached. The associated critical exponent $`\nu `$ describing the vanishing of the triplet gap is about 0.7 for $`J_2/J_1<0.4`$, suggesting that the transition lies in the universality class of the classical $`d=3`$ Heisenberg model. On the other hand, for $`J_2/J_1\gtrsim 0.4`$ the exponent $`\nu `$ is about 0.4. This supports the existence of an intermediate phase lying in a different universality class.
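The Dlog Padé step used here can be illustrated with a short, self-contained sketch (our own illustration, not the actual series of the paper): for a series assumed to behave as $`(1-\lambda /\lambda _c)^\nu `$, one builds the series of $`d\mathrm{ln}f/d\lambda `$, fits a Padé approximant $`P/Q`$, and reads $`\lambda _c`$ off a pole of $`Q`$ and $`\nu `$ off the residue there.

```python
import numpy as np

def dlog_pade(f_coeffs, L, M):
    """Locate a critical point from a series f(la) ~ (1 - la/la_c)**nu.
    Forms the series of g = f'/f, fits its [L/M] Pade approximant P/Q,
    and returns (la_c, nu): the smallest positive real pole of Q and
    the residue of g there."""
    f = np.asarray(f_coeffs, dtype=float)
    g = np.zeros(len(f) - 1)
    for n in range(len(g)):  # recurrence: (n+1) f_{n+1} = sum_k g_k f_{n-k}
        g[n] = ((n + 1) * f[n + 1] - sum(g[k] * f[n - k] for k in range(n))) / f[0]
    # Denominator Q = 1 + q1*la + ... + qM*la^M, fixed by orders L+1..L+M
    A = np.array([[g[L + i - j] for j in range(1, M + 1)] for i in range(1, M + 1)])
    q = np.linalg.solve(A, -np.array([g[L + i] for i in range(1, M + 1)]))
    Q = np.concatenate(([1.0], q))
    P = np.array([sum(Q[j] * g[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    roots = np.roots(Q[::-1])
    la_c = min(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)
    nu = np.polyval(P[::-1], la_c) / np.polyval(np.polyder(Q[::-1]), la_c)
    return la_c, nu

# Demo on a synthetic series (1 - la/0.555)**0.7 * (1 + la/2)**0.3,
# whose log-derivative is exactly of Pade type [1/2]:
def _binom_series(alpha, x, N):
    c = [1.0]
    for n in range(1, N + 1):
        c.append(c[-1] * (alpha - n + 1) / n * x)
    return c

f = np.convolve(_binom_series(0.7, -1 / 0.555, 4), _binom_series(0.3, 0.5, 4))[:5]
la_c, nu = dlog_pade(f, L=1, M=2)
print(la_c, nu)  # recovers la_c ~ 0.555 and nu ~ 0.7
```

On a truncated physical series one would vary $`L`$, $`M`$ and the truncation order to estimate error bars, as is done in the text.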
Figure 5 shows the singlet excitation energy $`\mathrm{\Delta }_s(k_x,k_y)`$ along high symmetry directions in the Brillouin zone for $`\lambda =0.5`$ and the same coupling ratios $`y`$ as Figure 4. Again the series are well converged, and direct summation and integrated differential approximants give essentially identical results. We find that the minimum gap occurs at $`(0,0)`$ for all $`J_2/J_1`$. We also note that for $`J_2/J_1=0.5`$ the triplet and singlet excitations have the same gap at $`\lambda =0`$, but at $`\lambda =0.5`$ the singlet gap is considerably larger than the triplet gap; this probably means that the triplet gap closes before the singlet gap at $`J_2/J_1=0.5`$. The critical point obtained from Dlog Padé approximants to the singlet gap is also generally slightly larger than that obtained from the triplet gap around $`J_2/J_1=0.5`$ (see Fig. 6).
The full phase diagram in the parameter space of $`J_2/J_1`$ and $`\lambda `$ could be very interesting from the point of view of quantum phase transitions, but may not be easy to determine by numerical methods. Some possible scenarios are shown in Fig. 7. One possibility is that the plaquette phase, for all $`J_2/J_1`$, has an instability to some magnetic phase, and the dimerized phase exists only very close to $`\lambda =1`$ inside the magnetic phases. A second possibility is that the plaquette-Néel critical line meets the Néel-dimer critical line at some multicritical point at a value of $`J_2/J_1`$ around $`0.5`$, after which there is a first order transition between the plaquette and the dimer phases. A third possibility is that the plaquette-Néel, Néel-dimer and plaquette-columnar critical lines all meet at some multicritical point. The numerically determined phase diagram is particularly uncertain in the interesting region $`0.5\lesssim J_2/J_1\lesssim 0.6`$, where incommensurate correlations could also become important.
Lastly we have derived expansions for a number of generalized susceptibilities. These are defined by adding an appropriate field term
$$\mathrm{\Delta }H=h\underset{ij}{\sum }Q_{ij}$$
(43)
to the Hamiltonian and computing the susceptibility from
$$\chi _Q=-\frac{1}{N}\underset{h\to 0}{lim}\frac{\partial ^2E_0(h)}{\partial h^2}$$
(44)
A divergence of any susceptibility signals an instability of that phase with respect to the particular type of order incorporated in $`\chi `$.
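Numerically, the limit above is just a second derivative at $`h=0`$, which a central difference approximates; a minimal sketch (the toy $`E_0(h)`$ below is made up for testing, not a series result):

```python
def susceptibility(E0_per_site, h=1e-4):
    """chi_Q = -(1/N) d^2 E0 / dh^2 at h = 0, via a central second difference.
    E0_per_site(h) returns the ground-state energy per site, so the 1/N
    factor is already included."""
    return -(E0_per_site(h) - 2.0 * E0_per_site(0.0) + E0_per_site(-h)) / h**2

# Toy check: E0(h) = e0 - (chi/2) h^2 + O(h^4) should give back chi.
toy = lambda h: -0.66 - 0.5 * 3.2 * h**2 + 0.1 * h**4
print(susceptibility(toy))  # about 3.2
```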
We have computed two different susceptibilities from the plaquette expansion. One is the antiferromagnetic (Néel) susceptibility with the operator $`Q_{ij}`$
$$Q_{i,j}=(-1)^{i+j}S_{i,j}^z$$
(45)
The other is the dimerization susceptibility with the operator $`Q_{ij}`$
$$Q_{i,j}=𝐒_{i,j}𝐒_{i+1,j}-𝐒_{i,j}𝐒_{i,j+1}$$
(46)
which breaks the symmetry of interchange of the $`x`$ and $`y`$ axes. We have computed series to order $`\lambda ^5`$ for the antiferromagnetic susceptibility and to order $`\lambda ^4`$ for the dimerization susceptibility. The series have been analyzed by Dlog Padé approximants. The series for the antiferromagnetic susceptibility shows the same critical points (within error bars) as those obtained from the triplet gap for $`J_2/J_1\lesssim 0.4`$. The series for the dimerization susceptibility is very irregular, and does not yield useful results. For example, for $`J_2/J_1=0.5`$, the series is:
$$\chi _d=629/90+101\lambda /300+2.0097647\lambda ^2-0.269629\lambda ^3+0.438527\lambda ^4+O(\lambda ^5)$$
(47)
For completeness, we also compute the susceptibility for the dimer phase to become unstable to the plaquette phase from an expansion about isolated columnar dimers, by adding the following field term:
$$\mathrm{\Delta }H=h\underset{i,j}{\sum }(-1)^j𝐒_{i,j}𝐒_{i,j+1}$$
(48)
which breaks the translational symmetry in the direction perpendicular to the dimers. The series has been computed up to order $`\lambda ^7`$ (note that $`\lambda `$ here is the dimerization parameter).
An analysis of the series shows that this susceptibility becomes very large as $`\lambda \to 1`$, for all $`J_2/J_1`$, and the critical $`\lambda `$ where the susceptibility appears to diverge approaches unity from above as $`J_2/J_1`$ is increased to $`0.5`$. This implies that there are staggered bond correlations in the direction perpendicular to the dimers, which extend over a substantial range. An interesting question is, in the absence of the plaquette phase as discussed earlier, what could these correlations represent? At this stage it is useful to recall another calculation by Kotov et al. Within the dimer expansion, they calculated two different dimer order parameters,
$$D_x=|<𝐒_{i,j}𝐒_{i+1,j}>-<𝐒_{i+1,j}𝐒_{i+2,j}>|,$$
(49)
and,
$$D_y=|<𝐒_{i,j}𝐒_{i+1,j}>-<𝐒_{i,j}𝐒_{i,j+1}>|,$$
(50)
where the elementary dimers connect spins at $`i,j`$ and $`i+1,j`$. They found that for $`0.4\lesssim J_2/J_1\lesssim 0.5`$, $`D_y`$ is nearly zero, whereas $`D_x`$ only goes to zero at the critical point. These results suggest that the dimer phase consists of strongly correlated two-chain ladders, which are then weakly correlated from one ladder to the next. This striped nature of the spin correlations in the dimer phase has not been noted before and is clearly a very interesting result. The situation for $`J_2/J_1\gtrsim 0.5`$ is again less clear. As discussed before, there are many possibilities for the phase diagram in that region, and much longer series are needed to throw more light on the situation. Perhaps there is an interesting multicritical point in that region of the phase diagram.
## III Discussion
We have attempted to further elucidate the nature of the intermediate, magnetically disordered, phase of the spin-$`\frac{1}{2}`$ $`J_1\text{-}J_2`$ Heisenberg antiferromagnet on the square lattice. This phase is believed to occur in the range $`0.4\lesssim J_2/J_1\lesssim 0.6`$. Our approach has been to derive perturbation expansions (up to order $`\lambda ^7`$) for the ground state energy, singlet and triplet excitation energies, and various susceptibilities, starting from a system of decoupled plaquettes ($`\lambda =0`$) and extrapolating to the homogeneous lattice ($`\lambda =1`$). We have also derived expansions about an unperturbed state of isolated columnar dimers (“dimer phase”). Both of these have been proposed as candidates for the intermediate phase.
We find that the ground state energy for both plaquette and dimer phases are very similar, any difference lying within the error bars. From this result alone we cannot favor one phase over the other.
The analysis of the singlet and triplet excitation spectra suggests an instability of the plaquette phase. In particular, in the disconnected plaquette expansions, Dlog Padé analysis indicates that the gaps vanish for $`\lambda `$ less than unity. The gap appears to close first for the triplets and then for the singlets. This is the strongest evidence that the plaquette phase is not realized in this model. We should mention, however, that the critical exponents associated with the vanishing of the gaps are rather small ($`<0.4`$) and the gaps close not too far from $`\lambda =1`$; with a relatively short series, this result should therefore be treated with some caution. One could ask why the energy series appear to converge well despite the instability. This, however, is a well-known feature of series expansions: quantities with weak singularities may continue to give reasonable values even when extrapolated past the singularity.
A consistent interpretation of these results is that within the parameter space of our non-uniform Hamiltonian, the plaquette phase is first unstable to a magnetic phase, which then must give way to the columnar dimer phase. Similar results for the instability of the staggered dimer phase were suggested before by Gelfand et al. However, the full phase diagram in the $`J_2/J_1`$ and $`\lambda `$ parameter space is difficult to obtain reliably, especially near the transition to the columnar phase. There are possibilities of some novel multicritical points, which deserve further attention.
One of our most interesting results is the finding of striped spin correlations in the dimer phase. In this phase, the nearest neighbor spin correlations are nearly equal along the rungs and along the chains of a two-spin column, and there are extended bond correlations along the chains. However, spin correlations from one column to the next are much weaker. In other words, the dimers are strongly resonating along vertical columns. The existence of a quantum critical point separating an antiferromagnetic phase from such a quantum disordered phase with striped correlations is a very interesting feature of this model, which deserves further attention in the context of high-$`T_c`$ materials.
###### Acknowledgements.
We would like to thank Subir Sachdev and Oleg Sushkov for many useful discussions. This work has been supported in part by a grant from the National Science Foundation (DMR-9616574) (R.R.P.S.), the Gordon Godfrey Bequest for Theoretical Physics at the University of New South Wales, and by the Australian Research Council (Z.W., C.J.H. and J.O.). The computation has been performed on Silicon Graphics Power Challenge and Convex machines. We thank the New South Wales Centre for Parallel Computing for facilities and assistance with the calculations.
# Compacton-like Solutions for Modified KdV and other Nonlinear Equations
## Abstract
We present a compacton-like solution of the modified KdV equation and compare its properties with those of compactons and solitons. We further show that the nonlinear Schrödinger equation with a source term, as well as other higher-order KdV-like equations, also possesses compact solutions of a similar form.
Compactons are a new class of localized solutions for families of fully nonlinear, dispersive, partial differential equations. Unlike solitons, which, although highly localized, still have infinite span, these solutions have compact support: they vanish identically outside a finite region. Hence these solitary waves have been christened compactons . Remarkably, there is strong numerical evidence that the collision of two compactons is elastic, a feature characterizing solitons. These equations, arising in the context of pattern formation in nonlinear media, seem to have only a finite number of local conservation laws; yet the behaviour of the solutions closely mimics that of the solitons of integrable models.
The first two-parameter family of fully nonlinear, dispersive equations $`K(m,n)`$ admitting compacton solutions is of the form,
$$u_t+(u^m)_x+(u^n)_{3x}=0,\quad m>0,\quad 1<n<3,$$
(1)
with $`u_t\equiv \partial u/\partial t`$ and $`u_x\equiv \partial u/\partial x`$. These equations arose in the process of understanding the role of nonlinear dispersion in the formation of structures like liquid drops . The compacton solution of $`K(2,2)`$ reads,
$$u_c=\frac{4\lambda }{3}\mathrm{cos}^2\left(\frac{x-\lambda t}{4}\right),$$
(2)
when $`|x-\lambda t|\le 2\pi `$ and $`u_c=0`$ otherwise.
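One can check symbolically that this profile solves the traveling-wave form of the $`K(2,2)`$ equation, $`-\lambda u^{}+(u^2)^{}+(u^2)^{\prime \prime \prime }=0`$, inside its support; a short sympy sketch (our own illustration):

```python
import sympy as sp

xi, lam = sp.symbols('xi lam', positive=True)   # xi = x - lam*t
u = sp.Rational(4, 3) * lam * sp.cos(xi / 4)**2

# Traveling-wave K(2,2): u_t + (u^2)_x + (u^2)_{3x} = 0 becomes
# -lam*u' + (u^2)' + (u^2)''' = 0 in the comoving coordinate.
residual = -lam * sp.diff(u, xi) + sp.diff(u**2, xi) + sp.diff(u**2, xi, 3)

# Spot-check inside the support |xi| < 2*pi (lam = 1.3 is arbitrary):
vals = [abs(float(residual.subs({lam: 1.3, xi: v}).evalf())) for v in (0.3, 1.1, 2.0)]
print(max(vals) < 1e-9)  # True
```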
Unlike solitons, the width here is independent of the velocity; the amplitude, however, depends on it. It has been shown that $`K(2,2)`$ admits only four local conservation laws. Some other representative compactons are,
$$u_c=[37.5\lambda -(x-\lambda t)^2]/30,$$
(3)
and
$$u_c=\pm \sqrt{3\lambda /2}\mathrm{cos}((x-\lambda t)/3);$$
(4)
these are the solutions of the $`K(2,3)`$ and $`K(3,3)`$ equations, respectively.
The $`K(m,n)`$ family is not derivable from a first order Lagrangian, except for $`n=1`$ . A generalized sequence of KdV-like equations, which could be given a Lagrangian formulation, have also been shown to admit compacton solutions. These equations
$$u_t+u_xu^{l-2}+\alpha [2u_{3x}u^p+4pu^{p-1}u_xu_{2x}+p(p-1)u^{p-2}(u_x)^3]=0,$$
(5)
have the same terms as in Eq. (1); the relative weights of the terms are different. Further generalizations, to a one-parameter generalized KdV equation and to two-parameter odd-order KdV equations , enlarged the class of evolution equations which admit solutions with compact support. Solutions of this type have also appeared in the context of baby Skyrmions .
The stability of the compacton solutions was considered in Ref. ; it was shown, by linear stability analysis as well as by the Lyapunov stability criterion, that these solutions are stable for arbitrary values of the nonlinear parameters.
None of the evolution equations possessing compacton solutions have been shown to be integrable; in fact, some are non-integrable and possess only a finite number of conserved quantities. Hence it is of great interest to search for compacton-like solutions of integrable nonlinear equations. In particular, one would like to compare their properties with solitons on one hand and compactons on the other. Furthermore, the possibility of these compact solutions arising from nonlinear equations relevant to physical problems would make them amenable to experimental detection.
In this note, we first show the existence of a compacton-like solution to the modified KdV (MKdV) equation ,
$$u_t+u^2u_x+u_{3x}=0.$$
(6)
The solution is of the form,
$$u_c(x,t)=\frac{\sqrt{32}}{3}k\frac{\mathrm{cos}^2k(x-4k^2t)}{(1-\frac{2}{3}\mathrm{cos}^2k(x-4k^2t))},$$
(7)
in the region $`|x-4k^2t|\le \frac{\pi }{2k}`$ and zero otherwise.
Note that, for this compact solution, the amplitude is proportional to the square root of the velocity, while the width is inversely proportional to it. This behaviour is reminiscent of solitons. Though the second derivative of the solution is discontinuous at the boundaries, it is a strong solution of the equation of motion, a feature similar to the compactons. We would like to point out that $`u_c(x,t)`$, when unconstrained, is a periodic solution of the MKdV equation, which, to the best of our knowledge, has not appeared in the literature earlier. Judicious truncation of the periodic solution, by confining it to the fundamental strip, gives the compact support for this solution. Though this is a solution of an integrable equation, one needs to be careful with the conserved quantities, in light of the fact that some derivatives of the solution are discontinuous at the boundaries.
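The claim that this is a strong solution can be verified directly: in the comoving coordinate $`\xi =x-4k^2t`$ the MKdV equation reduces to $`-4k^2u^{}+u^2u^{}+u^{\prime \prime \prime }=0`$, and the residual vanishes inside the support (a sympy sketch, our own illustration):

```python
import sympy as sp

xi, k = sp.symbols('xi k', positive=True)   # xi = x - 4*k**2*t
u = sp.sqrt(32) / 3 * k * sp.cos(k * xi)**2 \
    / (1 - sp.Rational(2, 3) * sp.cos(k * xi)**2)

# Traveling-wave MKdV with velocity 4k^2:
residual = -4 * k**2 * sp.diff(u, xi) + u**2 * sp.diff(u, xi) + sp.diff(u, xi, 3)

# Spot-check inside the support |xi| < pi/(2k), here with k = 1:
vals = [abs(float(residual.subs({k: 1, xi: v}).evalf())) for v in (0.2, 0.7, 1.3)]
print(max(vals) < 1e-9)  # True
```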
The Hamiltonian from which the MKdV equation can be derived by a variational principle,
$$H=\frac{1}{2}_{\mathrm{}}^{\mathrm{}}u_x^2𝑑x\frac{1}{12}_{\mathrm{}}^{\mathrm{}}u^4𝑑x,$$
(8)
and the momentum expression, given by,
$$P=\frac{1}{2}_{\mathrm{}}^{\mathrm{}}u^2𝑑x,$$
(9)
are well defined at the edges.
Explicitly, for the above solution, the energy and momentum are given respectively by $`E_c=-(16/3)k^3`$ and $`P_c=4\pi k`$. For the soliton solution of the MKdV equation
$$u_s(x,t)=\sqrt{6}k\mathrm{sech}k(x-k^2t),$$
(10)
the corresponding quantities are $`E_s=-2k^3`$ and $`P_s=6k`$. Hence the compacton, for positive values of $`k`$, has lower energy than the soliton. The situation is opposite for negative values of $`k`$. Other conserved quantities, involving the even derivatives of the solutions, are not well defined at the boundaries. In this sense, these solutions are similar to compactons.
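The soliton values of the energy and momentum are easy to reproduce from Eqs. (8) and (9) by direct numerical integration (a small sketch with $`k=1`$):

```python
import numpy as np

k = 1.0
x = np.linspace(-30.0, 30.0, 600001)
dx = x[1] - x[0]
u = np.sqrt(6) * k / np.cosh(k * x)                  # MKdV soliton profile
ux = -np.sqrt(6) * k**2 * np.tanh(k * x) / np.cosh(k * x)

E = 0.5 * np.sum(ux**2) * dx - np.sum(u**4) * dx / 12.0   # Eq. (8)
P = 0.5 * np.sum(u**2) * dx                               # Eq. (9)
print(round(E, 4), round(P, 4))  # -2.0 6.0, i.e. -2k^3 and 6k
```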
Next, we show that this compacton-like solution of the MKdV equation is also a strong solution of the nonlinear Schrödinger equation (NLSE) and higher order KdV-like equations.
The solution with compact support obeys the NLSE with a source term:
$$iq_t+\frac{1}{2}q_{xx}+|q|^2q-\eta =0.$$
(11)
Using the following ansatz,
$$q(x,t)=e^{i[\psi (\xi )-\omega t]}a(\xi ),$$
(12)
where $`\xi =x-vt`$, and choosing the source term as $`\eta (\xi )=Ke^{i[\psi (\xi )-\omega t]}`$, we can separate the real and imaginary parts of the equation as,
$$v\psi ^{}a+\omega a+\frac{a^{\prime \prime }}{2}-\frac{(\psi ^{})^2a}{2}+a^3-K=0,$$
(13)
$$-va^{}+\frac{\psi ^{\prime \prime }a}{2}+\psi ^{}a^{}=0.$$
(14)
Eq.(14) can be straightforwardly solved to give
$$\psi ^{}=v+\frac{P}{a^2},$$
(15)
where $`P`$ is the integration constant. Choosing $`P=0`$, we arrive at the following solutions for the functions $`\psi (\xi )`$ and $`a(\xi )`$:
$$\psi (\xi )=v\xi ,$$
(16)
and
$$a(\xi )=(\frac{16K}{27})^{1/3}\frac{\mathrm{cos}^2[(\frac{27}{16})^{\frac{1}{6}}K^{\frac{1}{3}}(x-vt)]}{(1-\frac{2}{3}\mathrm{cos}^2[(\frac{27}{16})^{\frac{1}{6}}K^{\frac{1}{3}}(x-vt)])},$$
(17)
where $`\omega `$ is related to $`v`$ by
$$2\omega +v^2=-\frac{27}{4}(16K/27)^{2/3}.$$
(18)
Note that the solution exists for values of $`2\omega \le -\frac{27}{4}(16K/27)^{2/3}`$, so that $`v^2\ge 0`$. We would like to add that we have taken the simplest solution of the equation involving the phase. In principle, one can choose $`P\ne 0`$, which may give rise to shock-wave type solutions.
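With $`\psi =v\xi `$ the real part of the equation reduces to the profile equation $`a^{\prime \prime }+(2\omega +v^2)a+2a^3=2K`$, which the solution above satisfies with $`2\omega +v^2=-\frac{27}{4}(16K/27)^{2/3}`$; a numerical spot-check (our own sketch, taking $`K=1`$):

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)
K = 1
beta = sp.Rational(27, 16)**sp.Rational(1, 6)        # (27/16)^(1/6) * K^(1/3)
A = sp.Rational(16, 27)**sp.Rational(1, 3)           # amplitude (16K/27)^(1/3)
a = A * sp.cos(beta * xi)**2 / (1 - sp.Rational(2, 3) * sp.cos(beta * xi)**2)

Omega = -sp.Rational(27, 4) * sp.Rational(16, 27)**sp.Rational(2, 3)  # 2*omega + v^2
residual = sp.diff(a, xi, 2) + Omega * a + 2 * a**3 - 2 * K

# Points inside the support |xi| < pi/(2*beta):
vals = [abs(float(residual.subs(xi, v).evalf())) for v in (0.1, 0.5, 1.2)]
print(max(vals) < 1e-9)  # True
```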
It is worth emphasizing that the NLSE plays a significant role in nonlinear optics . Since $`q`$ there represents the electric field, the corresponding source term $`\eta `$ can be understood as a dipole source. Hence, a nonlinear fluid medium with moving dipoles is a plausible source of this type of solitary wave.
We now show that this compacton-like solution also satisfies higher order KdV like equations:
$$u_t+(lu+mu^4)u_x+5u^2u_{3x}+pu_{5x}=0,$$
(19)
where $`l=\frac{10}{3p},m=\frac{5}{3p}`$ and $`k^2=\frac{1}{4p}`$. The solution of this equation is of the form
$$u_c(x,t)=\frac{4}{3}\frac{\mathrm{cos}^2k(x-4k^2t)}{(1-\frac{2}{3}\mathrm{cos}^2k(x-4k^2t))}.$$
(20)
Note that for this solution the amplitude is independent of the velocity, whereas the width depends on it. Interestingly, the following fifth-order KdV-like equation,
$$u_t+lu^4u_x+u_x^3+u^2u_{3x}+qu_{5x}=0,$$
(21)
has a compact solution of the form
$$u_c(x,t)=\frac{2\sqrt{10}}{3}\frac{\mathrm{cos}^2k(x-4k^2t)}{(1-\frac{2}{3}\mathrm{cos}^2k(x-4k^2t))},$$
(22)
for $`l=\frac{243}{10q}`$ and $`k^2=\frac{1}{4q}`$.
In conclusion, we have shown the existence of solutions with compact support for the MKdV, nonlinear Schrödinger and higher-order KdV-like equations. Since the MKdV equation manifests itself in diverse physical phenomena, it would be exciting if these waves could be realized in an experimental situation. For this purpose, one needs to check the stability of these solutions; this question is currently under study. Similarly, one can enquire about the possibility of such solutions occurring in other NLSE-type equations arising in the context of nonlinear optics.
ACKNOWLEDGEMENTS
The work of CNK is supported by CSIR, INDIA, through S.R.A. Scheme. We acknowledge useful discussions with Prof. V. Srinivasan.
# Acknowledgments
The author would like to thank M. Bando and T. Noguchi for useful discussions and careful reading of the manuscript. This work was supported in part by the Grant-in-Aid for JSPS Research Fellowships.
# 𝐾⁺→𝜋⁺𝜇⁺𝜇⁻ IN E865 AT BNL
## Introduction and Theoretical Background
The primary interest of E865 is the forbidden decay $`K^+\to \pi ^+\mu ^+e^{-}`$. Results from two independent analyses have yielded a limit of $`2\times 10^{-10}`$ for a preliminary 1995 data set; the data from 1996 and 1998 have a statistical reach of order $`10^{-11}`$. However, an important by-product of this experiment has been that E865 has significantly increased the world sample sizes for several other $`K^+`$ decay modes with 3 charged particles in the final state. (Final states containing a $`\pi ^0`$ are included, since the $`\pi ^0`$ is detected through $`\pi ^0\to \gamma e^+e^{-}`$.) A detailed understanding of these K decays, among them $`\pi ^+e^+e^{-}`$ and $`\pi ^+\mu ^+\mu ^{-}`$, tests models of the weak interaction and low energy QCD.
The focus of this paper is a new measurement of the branching fraction for $`K^+\to \pi ^+\mu ^+\mu ^{-}`$. The previous best measurement, from E787 , based on a small number of fully reconstructed events and about 200 partially reconstructed events, is $`(5.0\pm 1.0)\times 10^{-8}`$; they did not make a form factor measurement, because of limited acceptance and incomplete event information. With specific assumptions about the form of the interaction (typically vector, with a linear $`q^2`$ dependence, as in Ke3), the expected $`\pi ^+\mu ^+\mu ^{-}/\pi ^+e^+e^{-}`$ ratio can be calculated from the E865 results, and from theory, and compared with the experimental observation. For $`\pi ^+e^+e^{-}`$ we use the most precise results, a branching fraction of $`(2.82\pm 0.04\pm 0.15)\times 10^{-7}`$ and a $`\lambda `$ of $`0.182\pm 0.01\pm 0.007`$, based on our 10000 events .
Short distance contributions to $`K\to \pi ll`$ are only of order $`10^{-9}`$. Long distance contributions come close to the observed $`\pi ^+e^+e^{-}`$ rate and give a $`\pi ^+\mu ^+\mu ^{-}`$ to $`\pi ^+e^+e^{-}`$ ratio of about 0.22-0.24. Vector and $`a_1`$ meson dominance is an example of a simple model with a parameter free prediction for the branching fractions but small form factor dependence on $`q^2`$, similar to Ke3. Chiral Perturbation theory parameters allow a range of predictions for the form factor. In $`O(p^4)`$ , the form factor and its $`q^2`$ dependence are tightly correlated with the branching ratio. Using an $`O(p^6)`$ calculation of an explicit ”pion loop term” with a polynomial of the expected Chiral Perturbation behavior at $`O(p^6)`$ gives additional parameters and flexibility.
The E787 $`\pi ^+\mu ^+\mu ^{-}/\pi ^+e^+e^{-}`$ ratio, $`0.18\pm 0.04`$, is about 1.5 $`\sigma `$ below predictions.
## Experimental Apparatus
E865 at BNL is a magnetic spectrometer illuminated by an intense unseparated 6 GeV/c beam of $`10^8`$ $`K^+`$ and $`2\times 10^9`$ $`\pi ^+`$ per 1.6 sec AGS pulse. Momentum measured in the spectrometer is compared with the energy deposit in the 600-module, 15 r.l. deep ”Shashlik” calorimeter. Electron and positron identification is done by two threshold Cerenkov counters, with $`H_2`$ on the left (primarily negative particles) and $`CH_4`$ on the right (primarily positive particles). A 24 plane proportional tube - iron plate range stack identifies muons. The $`\pi ^+e^+e^{-}`$ data were taken parasitically in 1995 and 1996, and the $`\pi ^+\mu ^+\mu ^{-}`$ data in a 1997 reduced intensity run.
## Event Selection and Analysis
The $`\pi ^+\mu ^+\mu ^{-}`$ events are normalized to the $`\pi ^+\pi ^+\pi ^{-}`$ final state, which has similar kinematics. The trigger for all modes requires three particles in a kinematically plausible configuration in the hodoscope counters and the calorimeter. The analysis required: a good reconstructed vertex; a reasonable vector momentum; and two electrons or two muons, one negatively charged on the left and one positively charged on the right. For the $`\pi ^+e^+e^{-}`$ events, Cerenkov counter light is required both in the trigger and in the analysis; for the $`\pi ^+\mu ^+\mu ^{-}`$ events, muon chamber signals were required. Cuts on track chi-square help eliminate the primary background for the $`\pi ^+\mu ^+\mu ^{-}`$ events, secondary decays ($`\pi \to \mu \nu `$) from $`K^+\to \pi ^+\pi ^+\pi ^{-}`$ .
The information described qualitatively above was combined quantitatively into an ”event likelihood”. The stability of the branching fraction as a function of event likelihood is shown in the plot on the left in Figure 1, where the quantity R (proportional to the branching fraction) is plotted against the event likelihood cut. R is defined as $`r_\mu /r_{3\pi }`$, where $`r_i=N_i^{data}/N_i^{mc}`$ for $`i=\mu \mu `$ or $`3\pi `$, and $`N^{data}`$ and $`N^{mc}`$ are respectively the numbers of accepted data and simulated events. While R is relatively stable, the number of signal events (not shown because of space constraints) drops from 700 to 200 as the event likelihood cut moves from -19 to -10. This drop in signal is accompanied by a drop in the background-to-signal ratio from $`40\%`$ to $`2\%`$, reflecting the large admixture of background at large negative values of the event likelihood, and the loss of signal as the event likelihood approaches -10. For our final result, we use an event likelihood cut of -13, which gives about 400 signal events.
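The normalization logic reduces to a double ratio multiplied by the known $`\pi ^+\pi ^+\pi ^{-}`$ branching fraction; the sketch below shows only that structure — the event counts are placeholders, not E865 data, and real prescale and acceptance bookkeeping is omitted:

```python
import math

def branching_fraction(n_data_mu, n_mc_mu, n_data_3pi, n_mc_3pi, br_3pi):
    """B(pi+ mu+ mu-) = R * B(pi+ pi+ pi-), with R the double ratio of
    accepted data to simulated events; MC acceptances largely cancel in R."""
    R = (n_data_mu / n_mc_mu) / (n_data_3pi / n_mc_3pi)
    # naive Poisson error on the signal sample only (illustration)
    err = R * math.sqrt(1.0 / n_data_mu)
    return R * br_3pi, err * br_3pi

# Placeholder numbers, for structure only:
b, db = branching_fraction(400, 8.0e5, 2.0e6, 4.0e7, 5.59e-2)
print(f"{b:.2e} +- {db:.2e}")
```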
The effective mass of the $`\pi ^+\mu ^+\mu ^{-}`$ final state is shown in the right hand side of Figure 1. Background from $`K^+\to \pi ^+\pi ^+\pi ^{-}`$, with two pions decaying to muons, is shown by the dark-shaded curve at low effective masses.
The systematic error is estimated as $`7\%`$, dominated by $`3\%`$ from selection criteria and background subtraction, and $`4\%`$ from normalization uncertainties.
## Results and Discussion
The $`\mu \mu `$ effective mass distribution ($`q^2`$) and the $`\mathrm{cos}\theta _{\pi \mu ^+}`$ distribution (where $`\theta _{\pi \mu ^+}`$ is the angle between the $`\pi ^+`$ and $`\mu ^+`$ in the $`\mu \mu `$ center-of-mass frame) are not shown, but are consistent with a vector interaction in the decay, with a form factor slope $`\lambda \simeq 0.2`$, as seen in the $`\pi ^+e^+e^{-}`$ events . Fit contours for the branching fraction and $`\lambda `$ (the form factor $`q^2`$ dependence) are shown in Figure 2, assuming a linear $`q^2`$ dependence as in Ke3.
The $`\pi ^+\mu ^+\mu ^{-}`$ branching fraction and $`\lambda `$ are larger than expected in a simple meson dominance model but agree with expectations from $`\pi ^+e^+e^{-}`$. Our present understanding is that our $`\pi ll`$ data (branching ratios and $`q^2`$ dependence, taken together; Ref. and this paper) are inconsistent with Chiral Perturbation theory at $`O(p^4)`$ , but, due to the additional parameters available, are consistent with an O($`p^6`$) calculation . Detailed systematic studies and comparison with Chiral Perturbation theory are in progress.
## Acknowledgments
We thank our home country research institutes for financial support. The experiment would not have been possible without the valiant and sustained efforts of the AGS machine operators and technical support crew. The Pitt group owes a large debt of gratitude to high school teacher Ivan Ober and many students who helped in design, construction and commissioning of the Cerenkov counters, especially Elizabeth Battiste, Rebecca Chapman, Amy Freedman, Tuan Lu, Cindy Miller, Melinda Nickelson, Tim Stever, Paula Pomianowski, and Craig Valine.
# 1 Introduction
Despite the impressive success of the Standard Model, few are convinced that it is the final theory of particle interactions. For example, the supersymmetric modification of the Standard Model yields a very promising framework in which we are able to understand the stability of the electroweak scale. The Minimal Supersymmetric Standard Model (MSSM) provides a plethora of new phenomenological predictions which range from new charged and colored particles actively searched for in accelerators, to cold dark matter candidates, to new CP-violating phenomena such as the electric dipole moments of the neutron and electron which are generated if the additional CP-violating phases in the MSSM are non-zero. In this work, we study in detail the predictions of the MSSM for the electric dipole moment of the mercury atom and derive the constraints on the MSSM phases from the experimental limits on $`d_{Hg}`$.
The null experimental results for the electric dipole moments (EDMs) of the electron, neutron, heavy atoms and diatomic molecules can in general place very strong constraints on the CP-violating sector of a new theory and probe energy scales which are inaccessible for direct observations at colliders . In general, the relevant contribution to the dipole moments at scales of $``$1 GeV can be parameterized in terms of effective operators of different dimensions suppressed by corresponding powers of a high scale $`M`$ where these operators were generated:
$$\mathcal{L}_{eff}=\underset{n\ge 4}{\sum }\frac{c_{ni}}{M^{n-4}}𝒪_i^{(n)},$$
(1)
Here $`𝒪_i^{(n)}`$ are operators of dimension $`n`$, with its field content, Lorentz structures, etc., denoted by $`i`$. The fields relevant for the low-energy dynamics of interest are gluons, the three light quarks, the electron, and the electromagnetic field. This general form is independent of the particular construction of the new theory, and the details of a given model enter only through the values of the coefficients $`c_{ni}/M^{n-4}`$.
In the MSSM, the number of operators which can generate an EDM is considerably smaller than in the generic case. In fact, all four-fermion operators are numerically insignificant: they can be generated in the MSSM only with an additional factor of order $`(m_q/M_{SUSY})^2`$, modulo a possible nontrivial flavor structure of the soft-breaking sector. Here we assume the minimal scenario with flavor-blind breaking of supersymmetry, and we can therefore safely drop all four-fermion CP-violating operators. Hence, the relevant part of the effective Lagrangian at the scale of 1 GeV contains the theta term, the three-gluon Weinberg operator, the EDMs of the quarks and electron, and the color EDMs (CEDMs) of the quarks,
$`\mathcal{L}_{eff}`$ $`=`$ $`\theta {\displaystyle \frac{g_s^2}{32\pi ^2}}G_{\mu \nu }^a\stackrel{~}{G}_{\mu \nu }^a+w{\displaystyle \frac{g_s^3}{6}}f^{abc}G_{\mu \nu }^a\stackrel{~}{G}_{\nu \alpha }^bG_{\alpha \mu }^c`$
$`+i{\displaystyle \underset{i=u,d,s}{\sum }}{\displaystyle \frac{d_i}{2}}\overline{q}_iF_{\mu \nu }\sigma _{\mu \nu }\gamma _5q_i+i{\displaystyle \underset{i=u,d,s}{\sum }}{\displaystyle \frac{\stackrel{~}{d}_i}{2}}\overline{q}_ig_st^aG_{\mu \nu }^a\sigma _{\mu \nu }\gamma _5q_i+i{\displaystyle \frac{d_e}{2}}\overline{e}F_{\mu \nu }\sigma _{\mu \nu }\gamma _5e.`$
We will assume here that the PQ mechanism of $`\theta `$-relaxation eliminates $`\theta \sim O(1)`$ and sets $`\theta `$ to $`\theta _{eff}`$ at the minimum of the axion potential. When both the CEDMs and the Weinberg operator are absent, the value of $`\theta _{eff}`$ is exactly zero. However, nonzero $`w`$ and $`\stackrel{~}{d}_i`$ induce a linear term in the axion potential, and the effective value of $`\theta `$ is different from zero. This value leads to an additional contribution to the EDM of the neutron, usually ignored in the literature.
The coefficients in front of the operators in Eq. (1) can be calculated for any given model of CP-violation and then evolved down to the low-energy scale, using standard renormalization group techniques. In the MSSM, in particular, one can compute effective Lagrangian (1) for any given point in the supersymmetric parameter space. Then, to get the final predictions for EDMs, one has to take various matrix elements for these operators over hadronic, nuclear and atomic states . In most cases this is a source of major uncertainty, especially when hadronic physics is involved. The exception is the case of a paramagnetic atom, in which the EDM is generated by the electron EDM $`d_e`$, and where the effects of nuclear CP-odd moments induced by the rest of the operators in (1) can be safely neglected. The EDM of <sup>205</sup>Tl is extremely sensitive to $`d_e`$ due to a very large relativistic enhancement factor $`c600`$, which relates the EDM of the atom with $`d_e`$, $`d_{Tl}=cd_e`$. The experimental bound on the EDM of the thallium atom , combined with good stability of atomic calculations (see and references therein), leads to the following limit on the EDM of the electron:
$$d_e<4\times 10^{-27}\ e\,\mathrm{cm}.$$
(3)
Therefore, the calculation of $`d_e`$ in the MSSM gives the most reliable limits on CP-violating phases. It is clear, however, that the electron EDM limit alone cannot exclude the possibility of large CP-violating phases. This is because $`d_e`$, like any other coefficient in Eq. (1), is in general a function of several CP-violating phases, and mutual cancellations are possible. This is what happens, for example, in the MSSM with the minimal number of parameters in the soft-breaking sector (see recent works ). In the MSSM, it is well known that there are two independent CP-violating phases, $`\theta _\mu `$ and $`\theta _A`$, associated with the supersymmetric Higgs mass parameter $`\mu `$ and the soft supersymmetry breaking trilinear parameter $`A`$. The calculation of the relevant one-loop diagrams determines $`d_e`$ as a function of these two phases. If the phases are small, $`d_e`$ is simply a linear combination of $`\theta _A`$ and $`\theta _\mu `$. Therefore, even a constraint as strong as that given in (3) leaves a band on the $`\theta _A`$–$`\theta _\mu `$ plane along which a cancellation occurs and the phases are not constrained. In general, a second constraint could be expected to lift this degeneracy and place a strong constraint on both phases. It has been common to use the limit on the neutron EDM as this second constraint. Although there are large uncertainties in the calculation of the neutron EDM, as we argue below, when the limit on the neutron EDM is used, cancellations in the electron EDM occur in many of the same regions as cancellations in the neutron EDM. Therefore, one is led to the conclusion that large phases are still possible.
In what follows, we critically reexamine the reliability of the calculation of the EDM of the neutron in the MSSM. We demonstrate that this calculation is subject to very large hadronic uncertainties, which makes the extraction of limits on the CP-violating phases in the MSSM tenuous. Instead, we propose that useful limits may be obtained from the limit on the EDM of the mercury atom. This EDM arises from the T-odd nucleon-nucleon interaction in the MSSM, induced mainly by the CEDMs of the light quarks. This interaction gives rise to an EDM of the mercury atom by inducing the Schiff moment of the mercury nucleus. We demonstrate that the degree of QCD uncertainty in this calculation is in fact smaller than in the case of $`d_n`$, and that it is possible to calculate the T-odd nucleon-nucleon interaction as a function of the different MSSM phases. As an example, we proceed with the calculation of the EDM of the mercury atom at one specific point of the supersymmetric parameter space, where all squark and gaugino masses and the $`|\mu |`$ and $`|A|`$ parameters are set equal. This “pilot” calculation demonstrates the sensitivity of $`d_{Hg}`$ to the CP-violating phases of the MSSM. We find in this case that $`d_{Hg}`$ provides somewhat better limits on the CP-violating phases than $`d_e`$. We proceed further and combine the mercury EDM and electron EDM constraints to exclude most of the parameter space in the $`\theta _A`$–$`\theta _\mu `$ plane in this toy example. Finally, we consider more realistic constraints when the supersymmetry breaking scalar and gaugino masses are unified at the GUT scale. In this case, we find that the limits on the CP-violating phases obtained from $`d_{Hg}`$ are no longer more restrictive than those from $`d_e`$, as the RG evolution of the soft-breaking parameters makes the squarks and gluino heavier than the sleptons, charginos and neutralinos.
The combined limits are still very powerful as the cancellation of different supersymmetric contributions typically occur in different regions of parameter space.
## 2 The Neutron EDM in the MSSM
Limits on the neutron EDM are commonly used to set constraints on new CP violating interactions. In particular, the upper limit to $`d_n`$ is often used to limit the size of the CP-violating phases in the MSSM . The current experimental limit on the EDM of the neutron is
$$d_n<1.1\times 10^{-25}\ e\,\mathrm{cm}.$$
(4)
Indeed, the EDM of the neutron receives contributions from all operators listed above in Eq. (1) except $`d_e`$. However, there is a complication in using the neutron EDM as compared to the electron EDM: QCD uncertainties make the extraction of limits on the CP-violating phases in the fundamental Lagrangian problematic. We demonstrate two aspects of this problem below.
The most straightforward contribution to the EDM of the neutron is due to the quark EDM operators. It is usually estimated using nonrelativistic SU(6) quark model. The result,
$$d_n=\frac{4}{3}d_d-\frac{1}{3}d_u,$$
(5)
can be compared, in fact, with the model calculations and lattice simulations of light quark tensor charges in the nucleon . The matrix elements for the tensor charges of the nucleon are defined by
$$N|\overline{\psi }_q\sigma _{\mu \nu }\psi _q|N=\delta q\overline{N}\sigma _{\mu \nu }N,$$
(6)
whereas the axial charges are defined by
$$N|\overline{\psi }_q\gamma _\mu \gamma _5\psi _q|N=\mathrm{\Delta }q\overline{N}\gamma _\mu \gamma _5N$$
(7)
In the naive quark model (NQM), both Lorentz structures correspond to the spin of a nonrelativistic quark. In this case $`\delta u=\mathrm{\Delta }u=-1/3`$, $`\delta d=\mathrm{\Delta }d=4/3`$, $`\delta s=\mathrm{\Delta }s=0`$ for the neutron, yielding eq. (5). Note that isospin symmetry gives us $`(\mathrm{\Delta }u)_n=(\mathrm{\Delta }d)_p`$, etc. However, as argued in , since it appears that the contribution to the nucleon spin from the strange quark ($`\mathrm{\Delta }s`$) is non-vanishing , the naive quark model may not be sufficient to describe the quark EDM contribution to the neutron EDM. While it is not the axial charges but rather the tensor charges which need to be considered for the calculation of the neutron EDM, the departure of the axial charge values from their NQM values indicates that more realistic (non-NQM) values of the tensor charges ($`\delta q`$) must be used. According to calculations based on lattice QCD , the tensor charges of the up and down quarks in the proton should be read as $`\delta u\simeq 0.8`$ and $`\delta d\simeq -0.23`$. This means that the naive nonrelativistic formula predicts the EDM of the neutron due to the quark EDMs to be 1.5-1.7 times larger than the lattice result. Slightly different values, $`\delta u\simeq 1.1`$ and $`\delta d\simeq -0.4`$, can be derived from the SU(3) chiral quark soliton model . The tensor charge of the strange quark is found to be consistent with zero in both methods . This is due to the fact that the $`\overline{s}\sigma _{\mu \nu }s`$ operator is odd under charge conjugation, which must result in a Zweig-type suppression of this matrix element in the neutron state. Even with the usual $`m_s/m_d`$ enhancement of this operator, it is unlikely to be important. This does not exclude other possible CP-violating operators involving the $`s`$-quark, such as the CEDM or generic four-fermion operators, as their contributions to the EDM of the neutron can be significant .
Departures from the predictions of the non-relativistic quark model were recently considered in .
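As a quick numerical illustration of the 1.5-1.7 factor quoted above, one can compare the two linear combinations directly. This is only a sketch: the choice $`d_u=d_d/2`$ is an illustrative assumption, mimicking quark EDMs roughly proportional to the quark masses, and is not taken from the text.

```python
# Illustrative comparison of the NQM formula (5) with the lattice tensor
# charges for the neutron, d_n ~ 0.8 d_d - 0.23 d_u.
# ASSUMPTION: d_u = d_d / 2, mimicking d_q proportional to m_q (m_u/m_d ~ 0.5).
d_d = 1.0
d_u = 0.5 * d_d
nqm = (4.0 / 3.0) * d_d - (1.0 / 3.0) * d_u      # eq. (5)
lattice = 0.8 * d_d - 0.23 * d_u                 # lattice tensor charges
print(f"NQM / lattice ~ {nqm / lattice:.2f}")
```

With this input the ratio comes out near the upper end of the quoted range.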
Unfortunately, the quantitative evaluation of the remaining contributions to the neutron EDM is complicated due to our lack of knowledge about strong interaction dynamics at 1 GeV and below. Typically, one resorts to Naive Dimensional Analysis (NDA) , formulated within the constituent quark framework. This method is, however, only an order-of-magnitude estimate, to be used when other methods of calculation fail to produce an answer. When the problem of estimating $`d_N`$ due to $`\stackrel{~}{d}_{u,d}`$ is considered, there are several possible answers in the literature:
$`d_N\simeq 0.7e(\stackrel{~}{d}_u+\stackrel{~}{d}_d)`$ $`\mathrm{Ref}.\text{[9]}`$ (8)
$`d_N\sim {\displaystyle \frac{eg_s}{4\pi }}(O(1)\stackrel{~}{d}_u+O(1)\stackrel{~}{d}_d)`$ $`\mathrm{NDA},\mathrm{Ref}.\text{[20]}`$ (9)
We have chosen a normalization where $`g_s`$ is included in the definition of the operator in (1) and correspondingly include an additional $`g_s`$ in the estimate (9). The first result is based on a combination of chiral perturbation theory and QCD sum rules. The latter estimate is derived with the use of NDA <sup>1</sup><sup>1</sup>1We note that the estimate in is suppressed by an additional factor of $`g_s/4\pi `$.. For a realistic choice of the strong coupling constant at the scale of $`1`$ GeV, $`g_s\simeq \sqrt{0.5\cdot 4\pi }\simeq 2.5`$, the overall numerical coefficient in eq. (9) is about 3.6 times smaller than in (8). Estimates based on NDA imply that for natural relations among coefficients, $`d_i/e\sim \stackrel{~}{d}_i`$, the effects of color EDMs on the electric dipole moment of the neutron are negligible and the result can indeed be approximated by the linear combination of EDMs of quarks.
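The relative size of the two estimates can be checked with a short numerical sketch, using the value $`\alpha _s(1\,\mathrm{GeV})\simeq 0.5`$ quoted above:

```python
import math

# Compare the overall coefficients of the sum-rule estimate (8) and the
# NDA estimate (9), with alpha_s(1 GeV) ~ 0.5 as in the text.
g_s = math.sqrt(0.5 * 4.0 * math.pi)     # g_s(1 GeV) ~ 2.5
nda_coeff = g_s / (4.0 * math.pi)        # coefficient of the NDA estimate (9)
ratio = 0.7 / nda_coeff                  # sum-rule coefficient of (8) over NDA
print(f"g_s ~ {g_s:.2f}, (8)/(9) coefficient ratio ~ {ratio:.1f}")
```

The ratio lands close to the "about 3.6 times smaller" statement above, within the precision of the rounded inputs.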
In fact, it is possible to show that the CEDMs can lead to a substantially larger contribution to the neutron EDM than some of the predictions based on NDA. The easiest way to see that CEDMs can be numerically important is to calculate the effective $`\theta `$-term induced by CEDMs in the presence of the PQ symmetry and then use the result for $`d_N(\theta )`$. This value, $`\theta _{eff}(\stackrel{~}{d}_i)`$, can be calculated within the current algebra approach, in a manner similar to the calculation of the vacuum topological susceptibility . The dynamically induced theta term can be expressed in the following compact form:
$$\theta _{eff}=\frac{m^2}{2}\left(\frac{\stackrel{~}{d}_u}{m_u}+\frac{\stackrel{~}{d}_d}{m_d}+\frac{\stackrel{~}{d}_s}{m_s}\right).$$
(10)
Here, $`m^2`$ is the ratio of the quark-gluon condensate to the quark condensate. It is known to good accuracy from QCD sum rules that,
$$m^2=\frac{\langle 0|g_s\overline{q}(G\sigma )q|0\rangle }{\langle 0|\overline{q}q|0\rangle }\simeq 0.8\ \text{GeV}^2.$$
(11)
The accuracy of the estimate (10) is of order $`m_{\pi ,K}^2/m_\eta ^{}^2`$, which is acceptable for our purposes. If no interference with other terms is expected, then the expression (10) must be less than the current limit on $`\theta `$, extracted from the same neutron EDM data. Using the fact that in the simplest variant of the MSSM, $`\stackrel{~}{d}_d/m_d=\stackrel{~}{d}_s/m_s`$, and assuming for a moment that this is the only contribution to the EDM of the neutron, one can obtain the following, quite stringent, level of sensitivity for the CEDM:
$$\stackrel{~}{d}_d<10^{-25}\ \mathrm{cm}.$$
(12)
This fact alone suggests that CEDMs may contribute significantly to the EDM of the neutron, typically at the level of the prediction (9) and an order of magnitude above NDA predictions. Remarkably, the main uncertainty in the limit (12) comes not from the calculation of $`\theta (CEDM)`$, but rather from the principal difficulties in calculating $`d_N(\theta )`$. In the standard approach , the chiral loop diagram is used to estimate $`d_N(\theta )`$. This loop is logarithmically divergent in the exact chiral limit and therefore is distinguished from the rest of the contributions. For realistic values of the parameters, however, this logarithm is not large and other contributions can be equally important. This makes the whole calculation problematic even in predicting the sign of the $`\theta `$ term contribution to $`d_N`$.
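The arithmetic behind the sensitivity estimate (12) can be sketched in a few lines. The bound on $`\theta `$ itself is an assumed input here, taken to be of order $`10^{-9}`$ from neutron EDM data; it is not specified in the text.

```python
# Sketch of the sensitivity estimate (12): theta_eff from eq. (10), with
# d_d/m_d = d_s/m_s and the u-quark term neglected, must respect the
# bound on theta.
# ASSUMPTION: theta_max ~ 1e-9 (not from the text).
hbar_c_cm = 1.97e-14          # conversion: 1 GeV^-1 = 1.97e-14 cm
m_sq = 0.8                    # condensate ratio m^2 in GeV^2, eq. (11)
m_d = 9.5e-3                  # down-quark mass in GeV (text value)
theta_max = 1e-9              # assumed bound on theta

# theta_eff ~ (m^2/2) * 2 * d_d/m_d  =>  d_d < theta_max * m_d / m^2
d_d_max = theta_max * m_d / m_sq * hbar_c_cm
print(f"d_d sensitivity ~ {d_d_max:.1e} cm")
```

The result is of order $`10^{-25}`$ cm, as in (12).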
Besides $`d_N(\theta (CEDM))`$, one should also consider direct CEDM-induced contributions to the EDM of the neutron which can be computed within the same chiral loop approach . Combining different contributions, we can symbolically write the result for the EDM of the neutron in the following form:
$$d_N\simeq 0.8d_d-0.23d_u+e\left[\stackrel{~}{d}_u\left(c_1\mathrm{ln}\frac{\mathrm{\Lambda }}{m_\pi }+c_2\right)+\stackrel{~}{d}_d\left(c_3\mathrm{ln}\frac{\mathrm{\Lambda }}{m_\pi }+c_4\right)+\stackrel{~}{d}_s\left(c_5\mathrm{ln}\frac{\mathrm{\Lambda }}{m_K}+c_6\right)\right].$$
(13)
The coefficients $`c_1`$, $`c_3`$ and $`c_5`$ were estimated in Ref. to be $`c_1\mathrm{ln}(m_\rho /m_\pi )=c_3\mathrm{ln}(m_\rho /m_\pi )\simeq 0.7`$ and $`c_5\simeq 0.1`$. The cutoff parameter $`\mathrm{\Lambda }`$ corresponds to scales where chiral perturbation theory breaks down, that is, of order $`m_\rho `$. In the exact chiral limit, $`m_\pi ,m_K\to 0`$ and the logarithmic terms dominate. In practice, however, the logarithmic terms are numerically indistinguishable from the coefficients $`c_2`$, $`c_4`$ and $`c_6`$, which are a priori comparable with $`c_1`$, $`c_3`$ and $`c_5`$ and are not calculable in this approach. It is clear then that these terms can change both the magnitude and the signs of the different contributions to $`d_N`$. Therefore, although very useful as an order-of-magnitude estimate, Eq. (13) fails to provide $`d_N`$ as a known function of the individual $`\stackrel{~}{d}_i`$-contributions and, ultimately, of the different CP-violating phases.
As emphasized in Ref. , the NDA estimate of $`d_n(\theta )`$ essentially reproduces the calculation of Ref. . The source of the disagreement in the case of $`d_n(CEDMs)`$ can be traced to the problem of estimating the CP-odd $`\pi ^+pn`$–vertex, proportional to the matrix element $`\langle p|\overline{u}g_s(G\sigma )d|n\rangle `$. In Ref. this matrix element was estimated to be -1.5 GeV<sup>2</sup> and is essentially proportional to the quark-gluon condensate parameter $`m^2\simeq 0.8\mathrm{GeV}^2`$ (11). On the other hand, it can be shown that NDA suggests for this matrix element a value of order $`4\pi f_\pi ^2\sim \mathrm{GeV}^2/(4\pi )`$, i.e. one order of magnitude smaller. This difference is related to the fact that NDA assumes nonrelativistic quarks whose chromomagnetic interactions are suppressed, whereas QCD sum rules use a more realistic description of hadronic properties in terms of vacuum quark–gluon condensates.
To summarize this discussion, the extraction of reliable limits on the CP-violating phases in the MSSM from the EDM of the neutron is difficult and uncertain. Even the best estimates of $`d_n`$, based on the “chiral logarithm” approach , bear a large degree of uncertainty and cannot produce a precise prediction for $`d_n`$ as a function of the CP-violating phases. Useful limits are still available from the electron EDM; however, the magnitude of the phases is not terribly constrained on this basis alone, due to cancellations among the various MSSM contributions to $`d_e`$. Fortunately, the EDM of the neutron is not the only source of information about CP-violation in the strongly-interacting sector. Limits on T-violating nuclear forces are provided by experiments aimed at the detection of the EDMs of diamagnetic atoms, among which the EDM of the <sup>199</sup>Hg atom is the most constraining. In what follows, we will discuss the constraints these limits provide, both alone and in conjunction with the electron EDM limits.
## 3 CP-violating nucleon-nucleon interaction in MSSM
The limits on T-odd nuclear forces extracted from the atomic experiments are in general very important for particle physics . In the case of diamagnetic atoms, the most impressive limit is obtained for the EDM of <sup>199</sup>Hg :
$$d_{Hg}<9\times 10^{-28}\ e\,\mathrm{cm}.$$
(14)
The electric screening of the electric dipole moments of the atom’s constituents is violated by the finite size of the nucleus and can be conveniently expressed through the Schiff moment $`S`$, which parametrizes the effective interaction between the electron and the nucleus of spin $`𝐈`$, $`V_{eff}=eS\,(𝐈\cdot \mathbf{\nabla })\delta (𝐫)`$ . Atomic calculations derive the atomic EDM as a function of $`S`$ and translate the experimental result (14) into a limit on the Schiff moment of the nucleus:
$`d_{Hg}=S\times 3.2\times 10^{-18}\ \text{fm}^{-2}`$
$`S<2.8\times 10^{-10}\ e\,\text{fm}^3.`$ (15)
The Schiff moment of the nucleus can be induced either due to the Schiff moment of the valence nucleons or due to the breaking of time invariance in the nucleon-nucleon interaction, the latter being enhanced by the collective effects in the nucleus. The calculation of the Schiff moment of the nucleus, originating from various $`\overline{N}N\overline{N}^{}i\gamma _5N^{}`$ interactions was done in the single particle approximation with square-well and Woods-Saxon potentials . The results show that the Schiff moment of mercury is primarily sensitive to the $`\overline{p}p\overline{n}i\gamma _5n`$ interaction. If we parameterize the coefficient in front of this interaction as $`\xi G_F/\sqrt{2}`$, the nuclear calculation provides us with the following value for $`S`$:
$$S=1.8\times 10^{-7}\,\xi \ e\,\text{fm}^3.$$
(16)
Combined with Eq. (15), it gives the following constraint on $`\xi `$:
$$\xi <1.9\times 10^{-3}.$$
(17)
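The chain of numbers in (14)–(17) can be cross-checked directly. In this sketch the coefficient in (15) is treated as converting $`S`$ in $`e\,\text{fm}^3`$ to $`d_{Hg}`$ in $`e\,\mathrm{cm}`$ (an assumption about the units, which the extraction leaves ambiguous):

```python
# Cross-check of the chain (14) -> (15) -> (16) -> (17).
# ASSUMPTION: the coefficient of (15) converts S [e fm^3] to d_Hg [e cm].
d_hg_max = 9e-28              # e cm, eq. (14)
dhg_per_S = 3.2e-18           # (e cm) per (e fm^3), eq. (15)
S_per_xi = 1.8e-7             # e fm^3, eq. (16)

S_max = d_hg_max / dhg_per_S  # e fm^3
xi_max = S_max / S_per_xi
print(f"S < {S_max:.1e} e fm^3, xi < {xi_max:.1e}")
```

The Schiff-moment bound reproduces (15) exactly; the $`\xi `$ bound comes out near $`1.6\times 10^{-3}`$, the same order as (17).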
The calculation of the strength of the $`\overline{p}p\overline{n}i\gamma _5n`$ interaction induced by the different operators in (1) was considered in . The effective theta term, the Weinberg three-gluon operator and the CEDMs of quarks can all generate this interaction. Numerically, the contributions provided by the CEDMs of the up and down quarks are the most important, and we concentrate our analysis on them, trying to incorporate the effect of $`\stackrel{~}{d}_s`$ as well.
Following , we approximate the T-violating nucleon-nucleon interaction by pseudoscalar exchange, as shown in Fig. 1. In the limit of exact chiral symmetry this exchange has a power-like singularity $`m_\pi ^{-2}`$, to be compared with the logarithmic singularity in the case of the EDM of the neutron. The CP violation resides in the proton–meson vertex, which can be calculated with QCD sum rules and current algebra techniques. The CP-conserving meson–neutron vertex is sufficiently well known from SU(3) relations in baryon octet decay amplitudes and from the axial charges of the nucleons. If only $`\stackrel{~}{d}_u`$ and $`\stackrel{~}{d}_d`$ are present, pion exchange dominates over $`\eta `$ exchange by a factor $`m_\eta ^2/m_\pi ^2\simeq 16`$. In the MSSM, though, the strange quark CEDM is enhanced relative to that of the down quark by a factor $`m_s/m_d`$, and $`\eta `$ meson exchange is not a priori negligible. In the chiral approach, the CP-violating vertices of interest can be reduced to the following set of matrix elements:
$`\overline{g}_{\pi pp}={\displaystyle \frac{\stackrel{~}{d}_u+\stackrel{~}{d}_d}{4f_\pi }}\left(p|\overline{u}g_s(G\sigma )u\overline{d}g_s(G\sigma )d|p\right)+`$
$`{\displaystyle \frac{\stackrel{~}{d}_u\stackrel{~}{d}_d}{4f_\pi }}\left(p|\overline{u}g_s(G\sigma )u+\overline{d}g_s(G\sigma )d|pm^2p|\overline{u}u+\overline{d}d|p\right)`$
$`\overline{g}_{\eta pp}={\displaystyle \frac{\stackrel{~}{d}_s}{\sqrt{3}f_\pi }}\left(p|\overline{s}g_s(G\sigma )s|pm^2p|\overline{s}s|p\right)`$ (18)
Here $`m^2`$ is the ratio of quark-gluon condensate to quark condensate introduced earlier in Eqs. (10) and (11). At this point our results are already slightly different from . Namely, we have included additional contributions related to the fact that the octet combination of color EDM operators has the quantum numbers of the $`\pi ^0`$ and $`\eta `$ fields which can therefore be produced from the vacuum. $`\pi ^0`$, for example, can be “rescattered” on the nucleon with an amplitude proportional to $`(m_d+m_u)N|\overline{u}u+\overline{d}d|N`$. As a result, the diagram shown in Fig. 2 is responsible for a contribution directly proportional to $`m^2`$ which is effectively of the same order as the direct contribution considered in .
Further calculation relies on QCD sum rules and low-energy theorems in QCD. Matrix elements from $`qg_s(G\sigma )q`$ operators were evaluated in :
$$\langle p|\overline{q}g_s(G\sigma )q|p\rangle \simeq \frac{5}{3}m^2\langle p|\overline{q}q|p\rangle .$$
(19)
The matrix elements over the proton can be obtained from baryon mass splittings and pion-nucleon scattering data. Here we take the following values for the matrix elements of $`\overline{q}q`$ over the nucleon :
$$\langle p|\overline{u}u|p\rangle \simeq 4.8;\langle p|\overline{d}d|p\rangle \simeq 4.1;\langle p|\overline{s}s|p\rangle \simeq 2.8$$
(20)
These values of the $`\overline{q}q`$ matrix elements correspond to the choice $`m_u=4.5`$ MeV, $`m_d=9.5`$ MeV and $`m_s=175`$ MeV. The values of these matrix elements, together with the factorization formula (19), suggest that T-odd nucleon-nucleon forces are primarily sensitive to $`\stackrel{~}{d}_u-\stackrel{~}{d}_d`$ and insensitive to $`\stackrel{~}{d}_u+\stackrel{~}{d}_d`$, simply because the contribution to $`\overline{g}_{\pi pp}`$ proportional to $`\stackrel{~}{d}_u+\stackrel{~}{d}_d`$ in (18) is relatively suppressed by
$$\frac{\overline{g}_{\pi pp}(\stackrel{~}{d}_u+\stackrel{~}{d}_d)}{\overline{g}_{\pi pp}(\stackrel{~}{d}_u-\stackrel{~}{d}_d)}\simeq \frac{2\langle p|\overline{u}u-\overline{d}d|p\rangle }{\langle p|\overline{u}u+\overline{d}d|p\rangle }\simeq 0.2$$
(21)
In this sense, the contribution furnished by $`\theta _{eff}`$ is numerically insignificant, because $`\overline{g}_{\pi pp}`$ generated by $`\theta `$ is also proportional to $`\langle p|\overline{u}u-\overline{d}d|p\rangle `$.
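The suppression factor in (21) follows directly from the matrix elements (20); a one-line check, keeping only the leading factorized pieces, gives a value close to the quoted 0.2:

```python
# Check of the suppression ratio (21) from the matrix elements of eq. (20).
uu = 4.8                      # <p| ubar u |p>
dd = 4.1                      # <p| dbar d |p>
ratio = 2.0 * (uu - dd) / (uu + dd)
print(f"suppression of the (d_u + d_d) combination ~ {ratio:.2f}")
```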
Thus, these simple considerations suggest that due to the numerical dominance of the triplet combination of color EDM operators, the final answer for $`\xi `$ takes the following form:
$$\xi =G_F^{-1}\frac{3g_{\pi pp}m^2}{f_\pi m_\pi ^2}(\stackrel{~}{d}_d-\stackrel{~}{d}_u-0.012\stackrel{~}{d}_s),$$
(22)
We can see that the contribution of the strange quark CEDM is numerically suppressed, mainly due to the additional smallness of the CP-conserving $`\eta NN`$ interaction as compared to $`g_{\pi NN}`$.
Combining equations (15), (16) and (22), we arrive at the following prediction for the EDM of the mercury atom:
$$d_{Hg}=(\stackrel{~}{d}_d-\stackrel{~}{d}_u-0.012\stackrel{~}{d}_s)\times 3.2\times 10^{-2}e,$$
(23)
where the numerical coefficient $`3.2\times 10^{-2}`$ corresponds to the choice of light quark masses given above. Using the experimental limit (14), we deduce a very strong constraint on the following combination of the CEDMs of quarks:
$$|\stackrel{~}{d}_d-\stackrel{~}{d}_u-0.012\stackrel{~}{d}_s|<3.0\times 10^{-26}\ \mathrm{cm}.$$
(24)
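The limit (24) is simply the ratio of the experimental bound (14) to the coefficient in (23); a quick check:

```python
# The limit (24) as the ratio of the bound (14) to the coefficient in (23).
d_hg_max = 9e-28              # e cm, eq. (14)
coeff = 3.2e-2                # e, eq. (23)
cedm_limit = d_hg_max / coeff # cm
print(f"|d_d - d_u - 0.012 d_s| < {cedm_limit:.1e} cm")
```

This reproduces the quoted $`3\times 10^{-26}`$ cm up to rounding.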
It is important to note that the quark EDM operators cannot induce a large value for $`S`$. They do not induce the $`\overline{n}i\gamma _5n\overline{p}p`$ interaction, and their contribution to the Schiff moment of the nucleus is associated only with the electric dipole moment of the external valence nucleon . Current limits on $`d_{Hg}`$ are only sensitive to quark EDMs larger than $`10^{-24}\ e\,\mathrm{cm}`$, and thus these operators can be safely neglected. Similarly, the potential contribution from the three-gluon operator $`GG\stackrel{~}{G}`$ to $`d_{Hg}`$ is small. We rely here on the QCD sum rule estimates , which show no significant contribution from this operator to the T-odd nucleon-nucleon forces and thus to the EDM of mercury.
Finally, we would like to comment on the accuracy of the predictions (23) and (24), distinguishing between the error in the overall coefficients and the errors in the relative coefficients of the $`\stackrel{~}{d}_i`$-proportional contributions. The uncertainties of the atomic calculations of $`d_{Hg}(S)`$ and the nuclear calculations of $`S(\xi )`$ mostly affect the overall coefficients. Although the uncertainty in the overall coefficient can be significant , it is acceptable for our purpose, as it influences only the width of the allowed region in the $`\theta _\mu `$–$`\theta _A`$ plane. What is more important, however, is that the relative coefficients in front of the individual $`\stackrel{~}{d}_i`$ in eqs. (23) and (24) can be predicted more reliably, and we estimate that the accuracy of keeping the triplet combination $`\stackrel{~}{d}_d-\stackrel{~}{d}_u`$ and neglecting $`\stackrel{~}{d}_d+\stackrel{~}{d}_u`$, eq. (21), is at the level of 20%. In effect, this makes the constraints imposed by $`d_{Hg}`$ much more useful than those provided by $`d_n`$. Another advantage of the approach for calculating $`d_{Hg}`$ and $`d_n`$, developed in refs. and applied here, is that it reduces the error from the poor knowledge of the light quark masses. Indeed, even in the case of the naïve formula for the EDM of the neutron, $`d_n\simeq (4d_d-d_u)/3`$, the individual quark EDM contributions are proportional to $`m_{u,d}`$, which are known only to 50%. In the present approach, the answer for $`\xi `$ is ultimately proportional to a linear combination of $`m_i\langle 0|\overline{q}q|0\rangle `$, which can be rewritten as $`f_\pi ^2m_\pi ^2`$ times a function that depends only on the ratios of the light quark masses, known to much better accuracy than the masses themselves.
## 4 The limits on the MSSM CP-violating phases
In previous work , limits on the neutron and electron dipole moments were used to constrain the two independent phases (of $`\mu `$ and $`A`$) in the MSSM, assuming that all the terms in the Higgs potential and all gaugino masses are real and that all of the $`A`$-parameters are equal at the GUT scale and share a common phase. In absolute terms, the phases are not overly constrained: $`\theta _\mu \stackrel{<}{_{}}0.3`$ for $`\theta _A\sim \pi /2`$. The reason for the lax limits is a set of cancellations among the various contributions to the EDMs. Furthermore, in some regions of parameter space, these cancellations occur simultaneously for the electron and neutron EDMs.
As we argued above, there are several reasons to suspect that the limit due to the neutron EDM must be treated with caution. Instead, we have argued that the limit coming from the EDM of Hg is the result of a “cleaner” calculation and carries fewer QCD uncertainties. In what follows, we will explore in detail the limits on the two phases using the constraint based on the EDM of <sup>199</sup>Hg (24) derived above. We will compare these constraints on the phases to those obtained from the electron EDM. As we will see, the cancellations in the EDMs do not always occur at the same points in parameter space. To demonstrate the importance of the mercury EDM limit, we first consider a SUSY model with a single mass scale. We then present general results which assume gaugino and sfermion mass universality at the GUT scale.
Following , we analyze the limits on $`\theta _A`$ and $`\theta _\mu `$ for different values of the supersymmetric parameters. To demonstrate the sensitivity of the mercury EDM to a common scale of the supersymmetric masses with arbitrary and uncorrelated phases, we choose $`m_{\stackrel{~}{f}}\simeq M_{\lambda _i}\simeq |\mu |\simeq |A_k|`$ at the electroweak scale and take $`\mathrm{tan}\beta =2`$. In Figures 3a and 3b, we show the sensitivity of the EDM of the mercury atom for the cases $`\mathrm{sin}\theta _A=1,\mathrm{sin}\theta _\mu =0`$ and $`\mathrm{sin}\theta _A=0,\mathrm{sin}\theta _\mu =1`$. At this particular point of the supersymmetric parameter space, all of the calculations simplify significantly. When all soft-breaking parameters are sufficiently heavy, close to the TeV scale, the chargino and gluino propagators can simply be expanded in $`v_1/M`$ or $`v_2/M`$, and only the zeroth and first order terms in the expansions need be kept. If needed, for lower values of the gaugino masses, the results can be generalized to include all effects of mixing in the gluino and chargino sectors.
The calculation of the chromoelectric dipole moments of quarks in the MSSM was performed in a series of papers . When the CEDMs of quarks are induced by $`\theta _A`$ (as in Fig. 3a), the result is dominated by gluino exchange, with very small corrections coming from $`\lambda _1`$-exchange:
$`\stackrel{~}{d}_d=\eta {\displaystyle \frac{m_d|A|\mathrm{sin}\theta _A}{16\pi ^2M^3}}\left({\displaystyle \frac{5g_3^2}{18}}-{\displaystyle \frac{g_1^2}{108}}\right)`$ (25)
$`\stackrel{~}{d}_u=\eta {\displaystyle \frac{m_u|A|\mathrm{sin}\theta _A}{16\pi ^2M^3}}\left({\displaystyle \frac{5g_3^2}{18}}+{\displaystyle \frac{g_1^2}{54}}\right).`$
Here $`\eta `$ denotes the renormalization group factor which reflects the QCD evolution of the color EDM from the weak scale to 1 GeV. When the color EDM operator is defined as in eq. (1), its anomalous dimension is negative and small, so that the overall renormalization of $`\stackrel{~}{d}_i`$ is not important. An alternative definition of the color EDM operator, frequently occurring in the literature, is $`\frac{1}{2}\stackrel{~}{d}^{}\overline{q}t^aG_{\mu \nu }^a\sigma _{\mu \nu }\gamma _5q`$, where $`g_s`$ is included in $`\stackrel{~}{d}^{}`$. Defined this way, the operator acquires a renormalization factor roughly proportional to $`g_s(1\mathrm{G}\mathrm{e}\mathrm{V})/g_s(M_Z)\simeq 2`$, which is smaller than the value 3.3 quoted in . This is because in Refs. a very large coupling constant at low energies, $`\alpha _s\sim 2`$, is used. There is, however, an important numerical contribution to $`\eta `$ which reflects the suppression of the light quark masses at the high energy scale, $`m_d(M_Z)/m_d(1\mathrm{G}\mathrm{e}\mathrm{V})`$. We choose to use the low energy values for $`m_u`$ and $`m_d`$, 4.5 and 9.5 MeV, and include the quark mass RG factor in $`\eta `$. For a scale $`M`$ of order $`M_Z`$, $`\eta `$ is numerically close to 0.35 and is mainly due to the suppression of the quark masses at the high energy scale. This suppression factor was omitted in Ref. , where $`m_u(M_Z)=8`$ MeV is used.
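The quoted running-coupling ratio can be checked directly from the values $`\alpha _s(1\,\mathrm{GeV})\simeq 0.5`$ (as above) and $`\alpha _s(M_Z)\simeq 0.118`$ (the latter an assumed input, not from the text):

```python
import math

# Direct check of the ratio g_s(1 GeV)/g_s(M_Z) ~ 2.
# ASSUMPTION: alpha_s(1 GeV) ~ 0.5 (text value), alpha_s(M_Z) ~ 0.118.
ratio = math.sqrt(0.5 / 0.118)
print(f"g_s(1 GeV)/g_s(M_Z) ~ {ratio:.1f}")
```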
Combining all numerical factors, we obtain the following value for the EDM of <sup>199</sup>Hg:
$$d_{Hg}=e\,1.5\times 10^{-2}\frac{5\alpha _3}{72\pi }\frac{(m_d-m_u-0.012m_s)|A|\mathrm{sin}\theta _A}{M^3}\simeq 2\times 10^{-27}\left(\frac{1\mathrm{T}\mathrm{e}\mathrm{V}}{M}\right)^2e\,\mathrm{cm},$$
(26)
where we simply take $`|A|=M`$. We see from Fig. 3a that the mercury EDM places a constraint on $`M`$, $`M\stackrel{>}{_{}}1.5`$ TeV.
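The numerical estimate in (26) and the quoted bound on $`M`$ can be reproduced with a short script. This is a sketch with assumed inputs that are not all in the text: $`\alpha _3\simeq 0.09`$ at the TeV scale and the conversion $`1\,\mathrm{GeV}^{-1}=1.97\times 10^{-14}`$ cm.

```python
import math

# Evaluate eq. (26) at M = 1 TeV, with |A| = M and sin(theta_A) = 1, then
# solve for the M at which d_Hg saturates the bound (14).
# ASSUMPTION: alpha_3 ~ 0.09 at the TeV scale.
hbar_c_cm = 1.97e-14                      # 1 GeV^-1 in cm
alpha_3 = 0.09
m_d, m_u, m_s = 9.5e-3, 4.5e-3, 0.175     # GeV, text values
M = 1000.0                                # GeV

prefactor = 1.5e-2 * 5.0 * alpha_3 / (72.0 * math.pi)
combo = m_d - m_u - 0.012 * m_s           # GeV
d_hg = prefactor * combo * M / M**3 * hbar_c_cm     # e cm
M_min_tev = (M / 1000.0) * math.sqrt(d_hg / 9e-28)  # d_Hg scales as 1/M^2
print(f"d_Hg(1 TeV) ~ {d_hg:.1e} e cm, bound: M > {M_min_tev:.1f} TeV")
```

The output is consistent with the $`2\times 10^{-27}`$ estimate in (26) and the $`M\stackrel{>}{_{}}1.5`$ TeV bound, within the precision of the assumed coupling.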
In the other case, with $`\mathrm{sin}\theta _A=0,\mathrm{sin}\theta _\mu =1`$, we have to include $`\lambda _2`$-higgsino and $`\lambda _1`$-higgsino exchanges as well, so that the result for the CEDMs, (shown in Fig. 3b), is as follows:
$`\stackrel{~}{d}_d=\eta {\displaystyle \frac{m_d|\mu |\mathrm{tan}\beta \mathrm{sin}\theta _\mu }{16\pi ^2M^3}}\left({\displaystyle \frac{5g_3^2}{18}}+{\displaystyle \frac{g_2^2}{8}}+{\displaystyle \frac{g_1^2}{216}}\right)`$ (27)
$`\stackrel{~}{d}_u=\eta {\displaystyle \frac{m_u|\mu |\mathrm{cot}\beta \mathrm{sin}\theta _\mu }{16\pi ^2M^3}}\left({\displaystyle \frac{5g_3^2}{18}}+{\displaystyle \frac{g_2^2}{8}}+{\displaystyle \frac{7g_1^2}{216}}\right).`$
As a result, the contribution of the up quark relative to that of the down quark is suppressed by $`m_u/(m_d\mathrm{tan}^2\beta )`$. Numerically, the gluino exchange diagram again dominates, with less than a 10% contribution coming from $`\lambda _2`$-higgsino exchange. In this case, the limit is somewhat stronger, giving $`M\stackrel{>}{_{}}3`$ TeV.
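The same exercise for the $`\mathrm{sin}\theta _\mu =1`$ case, using (27) with the small $`g_1`$ terms dropped, reproduces the quoted bound within the accuracy of the assumed couplings ($`\alpha _3\simeq 0.09`$ and $`\alpha _2\simeq 0.034`$ at the SUSY scale are assumptions, not from the text):

```python
import math

# Evaluate d_Hg for sin(theta_mu) = 1, tan(beta) = 2, via eqs. (23) and (27)
# with the g_1 terms dropped.
# ASSUMPTIONS: alpha_3 ~ 0.09, alpha_2 ~ 0.034; eta ~ 0.35 as in the text.
hbar_c_cm = 1.97e-14
eta = 0.35
g3_sq = 4.0 * math.pi * 0.09
g2_sq = 4.0 * math.pi * 0.034
m_d, m_u, m_s = 9.5e-3, 4.5e-3, 0.175     # GeV, text values
tan_b = 2.0
M = 1000.0                                # GeV, |mu| = M

C = 5.0 * g3_sq / 18.0 + g2_sq / 8.0      # common loop coefficient
combo = m_d * tan_b - m_u / tan_b - 0.012 * m_s * tan_b   # GeV
d_hg = 3.2e-2 * eta * C / (16.0 * math.pi**2) * combo * M / M**3 * hbar_c_cm
M_min_tev = math.sqrt(d_hg / 9e-28)       # d_Hg scales as 1/M^2, M in TeV
print(f"d_Hg(1 TeV) ~ {d_hg:.1e} e cm, bound: M > {M_min_tev:.1f} TeV")
```

The bound lands near 3 TeV, roughly twice the $`\theta _A`$ result, driven mainly by the $`\mathrm{tan}\beta `$ enhancement of $`\stackrel{~}{d}_d`$.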
As one can see, the EDM of mercury is sensitive to the scale of supersymmetric masses as high as 1.5-3 TeV. This can be compared with the sensitivity of the EDM of the electron, which we calculate at the same point of the supersymmetric parameter space, taking the slepton masses equal to the squark masses:
$`d_e`$ $`=`$ $`{\displaystyle \frac{m_e|A|\mathrm{sin}\theta _A}{16\pi ^2M^3}}{\displaystyle \frac{g_1^2}{12}}`$ (28)
$`d_e`$ $`=`$ $`{\displaystyle \frac{m_e|\mu |\mathrm{tan}\beta \mathrm{sin}\theta _\mu }{16\pi ^2M^3}}\left({\displaystyle \frac{5g_2^2}{24}}+{\displaystyle \frac{g_1^2}{24}}\right).`$
The limits based on the electron EDM for the two cases considered are weaker, as can be seen from Figs. 3a and 3b, where the limits on $`M`$ are 0.4 and 1.7 TeV, respectively.
There is also the possibility of destructive interference between two contributions induced by the CP-violating phases. Again, we choose the supersymmetric parameters to be equal and fix them in the range $`250`$–$`750`$ GeV. Figs. 4a–4c show the combined exclusion plots. The two bands correspond to the parts of the parameter space where the mercury or electron (Tl) constraints are lifted by the cancellation of different supersymmetric contributions; the allowed area lies at the intersection of these two bands. The band corresponding to the mercury EDM constraint has a different slope than that of the electron EDM, mainly because $`d_{Hg}`$ is far more sensitive to $`\theta _A`$. We observe that both phases are sufficiently constrained at low values of $`M`$.
## 5 EDMs in mSUGRA and Cosmological Constraints
We now consider the constraints on $`CP`$-violating phases in mSUGRA-like models, i.e. models with unified gaugino and sfermion masses. We recall that to one loop, the phase of $`\mu `$ does not evolve with scale, but the phases of $`A_u,A_d`$ and $`A_e`$ must be run separately from the unification scale to low energies. We follow the analysis of , but with two changes. First, we replace constraints from the neutron electric dipole moment with limits from the EDM of Hg, discussed above. Second, we include recent results on the effect of coannihilations of neutralinos with staus on the neutralino relic density . The latter has the effect of weakening the cosmological upper bound on the gaugino masses. This is demonstrated in Fig. 5, where the light shading indicates the region of the $`\{m_0,m_{1/2}\}`$ plane which yields a neutralino relic abundance in the cosmologically preferred range $`0.1\le \mathrm{\Omega }_{\stackrel{~}{\chi }}h^2\le 0.3`$. The upper limit of the light shaded region crosses below the line $`m_{\stackrel{~}{\chi }}=m_{\stackrel{~}{\tau }_\mathrm{R}}`$ at $`m_{1/2}\simeq 1400\mathrm{GeV}`$; for greater $`m_{1/2}`$, either the relic density violates the upper bound $`\mathrm{\Omega }_{\stackrel{~}{\chi }}h^2\le 0.3`$ (which follows from a lower limit of $`12\mathrm{Gyr}`$ on the age of the universe) or the lightest supersymmetric particle is a stau, leading to an unacceptable abundance of charged dark matter. Here we have taken $`\mathrm{tan}\beta =2`$, but the light shaded region is quite insensitive to $`\mathrm{tan}\beta `$ for the values of $`\mathrm{tan}\beta `$ we consider, as well as to the phase of $`\mu `$. For comparison, the dashed lines demarcate the inferred cosmologically preferred region if one ignores the effects of neutralino-slepton coannihilation.
Whereas in , the constraint $`\mathrm{\Omega }_{\stackrel{~}{\chi }}h^2\le 0.3`$ yielded an upper bound of $`450\mathrm{GeV}`$ on $`m_{1/2}`$, we now have to consider larger values of $`m_{1/2}`$. However, we will see that this does not affect the upper bound on $`\theta _\mu `$.
In contrast to the results of the previous section, we find that in mSUGRA-like models, constraints from the electron EDM are typically more restrictive than those from the EDM of Hg. This difference arises because in models with gaugino masses unified at the GUT scale, the gluino tends to be considerably heavier than the neutralino and charginos, and this suppresses the contribution to $`d_{Hg}`$ from the quark chromoelectric dipole moments due to gluino exchange. We recall that cancellations between the chargino and neutralino exchange contributions to the electron EDM allow for large values of $`\theta _\mu `$ . A similar effect also applies in the case of the Hg EDM, where cancellations can occur between the gluino exchange and neutralino and chargino exchange contributions to the quark chromoelectric dipole moments. The power of combining the electron and Hg limits lies in the fact that for fixed $`\theta _\mu `$ and $`\theta _A`$, the cancellations in the electron and Hg dipole moments occur for different, and often non-overlapping, ranges in $`m_{1/2}`$. Thus the combined limits are stronger than either limit alone.
Following , we compute the electron and Hg EDMs in mSUGRA as a function of $`\theta _\mu ,\theta _A`$ and $`m_{1/2}`$ for fixed $`A_0,m_0`$ and $`\mathrm{tan}\beta `$. In Figs. 6a-c we display the minimum value of $`m_{1/2}`$ required to bring both the electron and Hg EDMs below their experimental limits, for $`\mathrm{tan}\beta =2`$ and $`m_0=130\mathrm{GeV}`$. We exclude points which violate the current LEP2 chargino and slepton mass bounds . The EDMs are computed on a $`40\times 40`$ grid in $`\{\theta _\mu ,\theta _A\}`$, and features smaller than the grid size are not significant. Although the dependence of the EDMs on $`m_{1/2}`$ is not monotonic, there is still a minimum permitted value of $`m_{1/2}`$, due to cancellations. In the zone labeled “I”, $`m_{1/2}^{\mathrm{min}}<200\mathrm{GeV}`$, while the zones labeled “II”, “III”, “IV” and “V” correspond to $`200\mathrm{GeV}<m_{1/2}^{\mathrm{min}}<300\mathrm{GeV}`$, $`300\mathrm{GeV}<m_{1/2}^{\mathrm{min}}<450\mathrm{GeV}`$, $`450\mathrm{GeV}<m_{1/2}^{\mathrm{min}}<600\mathrm{GeV}`$ and $`m_{1/2}^{\mathrm{min}}>600\mathrm{GeV}`$, respectively. Comparing with Fig. 5, we see that values of $`m_{1/2}`$ larger than about $`600\mathrm{GeV}`$ are cosmologically excluded for this value of $`m_0`$. Therefore, region V corresponds to an excluded region in the phase plane. Of course, for this value of $`\mathrm{tan}\beta `$, the current Higgs mass bound requires enormous sfermion masses $`\gtrsim 1\mathrm{TeV}`$, which are cosmologically prohibited. We have chosen to plot our results for $`\mathrm{tan}\beta =2`$ in order to compare with our previous results . Qualitatively similar conclusions apply for larger $`\mathrm{tan}\beta `$, which we summarize at the end of this section.
Figure 2 of Ref. displays the contours corresponding to our Figs. 6a-c, but imposing only the constraint from the electron EDM. (In we take $`m_0=100\mathrm{GeV}`$, rather than $`130\mathrm{GeV}`$; however, taking $`m_0=130\mathrm{GeV}`$ makes only a small change in the displayed contours and a slight reduction in the upper bound on $`\theta _\mu `$.) Note that in we do not include a contour corresponding to $`450\mathrm{GeV}<m_{1/2}^{\mathrm{min}}<600\mathrm{GeV}`$, as this region would be cosmologically excluded in the absence of coannihilations of neutralinos with sleptons, whose effects were not included in . The effect of including the Hg EDM bounds is particularly significant at large $`A_0`$, where the cancellations are enhanced and the bounds on $`\theta _\mu `$ are weakest. Here the widths of the allowed regions in $`m_{1/2}`$ at fixed $`\theta _A`$ and $`\theta _\mu `$ are narrowest, leaving less opportunity for overlap between the ranges allowed by the electron and Hg EDMs, respectively. Indeed, for $`A_0=1.5\mathrm{TeV}`$, the upper bound on $`\theta _\mu `$ is reduced from $`0.3\pi `$, in the case of the electron EDM alone, to $`0.18\pi `$ when the two constraints are combined, and, further, the width of the allowed region in $`\theta _\mu `$ is considerably narrowed. The reduction in the bound on $`\theta _\mu `$ is minimal for small $`A_0`$, where the bounds on $`\theta _\mu `$ are strongest. However, notice that in the case $`A_0=300\mathrm{GeV}`$, $`m_{1/2}^{\mathrm{min}}`$ at the largest allowed values of $`\theta _\mu `$ is shifted from less than $`200\mathrm{GeV}`$ in the case of the electron EDM alone to between 200 and 300 $`\mathrm{GeV}`$ for the combined bound. For $`A_0=1\mathrm{TeV}`$ and $`1.5\mathrm{TeV}`$, $`m_{1/2}^{\mathrm{min}}`$ lies above $`300\mathrm{GeV}`$ at the largest $`\theta _\mu `$.
We note that the larger values of $`m_{1/2}`$ which neutralino-slepton coannihilation permit do not increase the maximum $`\theta _\mu `$, for this value of $`m_0`$. This is because, as we see from Fig. 6, the region of mutual cancellations happens to lie at lower $`m_{1/2}`$, between 300 and 400$`\mathrm{GeV}`$. The widths of the allowed regions in $`m_{1/2}`$ are typically between 50 and 80$`\mathrm{GeV}`$ for the lightest shaded zones in Fig. 6b and 6c and greater than 80$`\mathrm{GeV}`$ almost everywhere in Fig. 6a. Larger $`m_{1/2}`$ does, however, widen the allowed swath in $`\theta _\mu `$, by the region labeled “IV” in Fig. 6. It helps in particular at small $`\theta _\mu `$, where the electron EDM can be beaten down sufficiently by taking heavy gaugino masses, without resorting to cancellations between different contributions. At large $`m_{1/2}`$, the Hg EDM typically provides little constraint on $`\theta _\mu `$ due to the heaviness of the gluinos. As $`m_0`$ is increased, the regions of cancellation shift, and the maximum value of $`\theta _\mu `$ slowly decreases.
These effects are enhanced at larger $`m_0`$ and $`m_{1/2}`$, in the cosmologically allowed “trunk” which lies on top of the $`\stackrel{~}{\tau }_R`$ LSP region (see Fig. 5). The allowed region narrows as $`m_0`$ increases, and the $`\mathrm{\Omega }_{\stackrel{~}{\chi }}h^2=0.3`$ contour crosses the line $`m_{\stackrel{~}{\tau }_R}=m_{\stackrel{~}{\chi }}`$, giving upper bounds on $`m_{1/2}`$ and $`m_0`$ at $`m_{1/2}\simeq 1400\mathrm{GeV}`$, $`m_0\simeq 300\mathrm{GeV}`$. The trunk region yields much larger sparticle masses than are cosmologically permitted in the absence of coannihilations, and this can suppress the contributions to the electric dipole moments sufficiently so that significant cancellations between the various contributions are not necessary. For low $`A_0`$, where the bounds on $`\theta _\mu `$ are tightest, the bounds on $`\theta _\mu `$ are somewhat relaxed in the trunk area. In Fig. 6d, we display the allowed region in the $`\{\theta _\mu ,\theta _A\}`$ plane for $`A_0=300\mathrm{GeV},m_0=200\mathrm{GeV}`$. For this value of $`m_0`$, $`m_{1/2}`$ is cosmologically restricted to lie between $`850\mathrm{GeV}`$ and $`950\mathrm{GeV}`$. In the light region labeled “A”, the EDMs are below the experimental limits for all $`850\mathrm{GeV}\le m_{1/2}\le 950\mathrm{GeV}`$, while in the regions labeled “B”, only part of this range of $`m_{1/2}`$ satisfies the EDM constraints. The dark regions at large $`|\theta _\mu |`$ require $`m_{1/2}>950\mathrm{GeV}`$ to satisfy the EDM bounds, and so these regions are cosmologically excluded, as they yield a stau LSP. The upper bound on $`\theta _\mu /\pi `$ is relaxed to $`0.055`$. Taking $`m_{1/2}`$ and $`m_0`$ at their maximal values allows $`\theta _\mu /\pi `$ up to about 0.1.
For large $`A_0`$, where $`\theta _\mu `$ can take its maximal values, the bound on $`\theta _\mu `$ does not weaken in the trunk region. As above, this is due to the fact that at larger $`\theta _\mu `$, cancellations are still required to bring the EDMs below their experimental limits, and the regions of cancellations occur at lower $`m_{1/2}`$. Even taking $`m_{1/2}`$ and $`m_0`$ at their largest cosmologically permitted values does not allow for $`\theta _\mu `$ larger than the bounds in Fig. 6b,c. Further, since the regions of low $`m_{1/2}`$ are cosmologically forbidden at large $`m_0`$, the bounds on $`\theta _\mu `$ at large $`A_0`$ actually decrease for large $`m_0`$. Thus the presence of the coannihilation trunk region does not increase the overall combined cosmology/EDM bound on the phase $`\theta _\mu `$.
Lastly, we plot in Fig. 7 the maximum value of $`\theta _\mu `$ allowed by the electron and Hg electric dipole moments and the upper limit on $`\mathrm{\Omega }_{\stackrel{~}{\chi }}h^2`$, as a function of $`\mathrm{tan}\beta `$. The thick lines are for $`m_0=100\mathrm{GeV}`$, while the thin lines are for $`m_0=200\mathrm{GeV}`$ and show the effect on the bounds described above as one moves into the trunk region.
## 6 Conclusions
We have shown that the calculation of the EDM of the neutron as a function of the different MSSM phases is problematic due to large uncertainties related to the contributions of the color EDMs. This is in contrast to the electric dipole moment of the mercury atom, which is induced by the T-odd nucleon-nucleon interaction. In the chiral limit the coefficient $`\xi `$, characterizing the strength of the T-odd forces, has a power-like singularity $`\propto m_\pi ^{-2}`$, whereas $`d_N\propto \mathrm{log}m_\pi ^2`$ in the same chiral approach. It is apparent that the $`\pi ^0`$ and $`\eta `$ exchange diagrams dominate both parametrically and numerically and therefore yield a very good approximation to the magnitude of the T-odd interaction. The final result is proportional to $`(\stackrel{~}{d}_d-\stackrel{~}{d}_u-0.012\stackrel{~}{d}_s)\times 3.2\times 10^{-2}e`$ and can be further developed in terms of the CP-violating phases of the MSSM.
There are two serious problems with the calculation of the T-odd nuclear forces due to the effective interaction (1) with the coefficients provided by the MSSM. The first is the status of the factorization in Eq. (19), related to the low-energy theorem in the $`0^+`$ channel. Following Refs. , we have taken $`\langle p|\overline{q}g_s(G\sigma )q|p\rangle \simeq 1.3\mathrm{GeV}^2\langle p|\overline{q}q|p\rangle `$. We note that a dedicated sum rule calculation of this quantity and/or its simulation on the lattice is highly desirable, for it is the main source of uncertainty in the calculation of T-odd nuclear forces. The second potentially troublesome point is the effective negative sign between the $`\stackrel{~}{d}_d`$ and $`\stackrel{~}{d}_s`$ contributions. Although the numerical suppression in front of $`\stackrel{~}{d}_s`$ is quite strong and $`\stackrel{~}{d}_d`$ dominates, destructive interference is still possible in both cases, $`\theta _A\ne 0`$ and $`\theta _\mu \ne 0`$.
In this paper, we have considered first a very specific part of the supersymmetric parameter space, in which all squark, slepton and gaugino masses were chosen to be equal. The theoretical prediction for $`d_{Hg}`$ exhibits remarkable sensitivity to the scale of the soft-breaking mass parameters, as high as 1.5–3 TeV. When the scale is fixed below 1 TeV, $`d_{Hg}`$ limits both phases. The constraints on the CP-violating supersymmetric phases obtained in this way are the strongest constraints so far.
We have also considered the combined constraints from the Hg and electron EDMs in mSUGRA, in which all supersymmetry-breaking gaugino masses, soft scalar masses, and soft trilinear terms are separately unified at the GUT scale. In this case, the sensitivity of the Hg EDM is weakened due to the relative size of the gluino mass. Nevertheless, the results are as strong as, or stronger than, the combined results from $`d_e`$ and $`d_N`$ (particularly when $`|A|`$ is large and the limits are weakest). The improvement in the limit is due to the fact that cancellations among the contributions to the EDMs occur in slightly different regions of the SUSY parameter space.
## 7 Acknowledgments
M.P. would like to thank I.B. Khriplovich, A. Ritz, A.I. Vainshtein and A.R. Zhitnitsky for numerous important discussions. The work of M.P. and K.O. was supported in part by DOE grant DE–FG02–94ER–40823. The work of T.F. was supported in part by DOE grant DE–FG02–95ER–40896 and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation.
# OBSERVATIONAL CONSEQUENCES OF MANY-WORLDS QUANTUM THEORY

Alberta-Thy-04-99, quant-ph/9904004
(1999 May 3)
## Abstract
Contrary to an oft-made claim, there can be observational distinctions (say for the expansion of the universe or the cosmological constant) between “single-history” quantum theories and “many-worlds” quantum theories. The distinctions occur when the number of observers is not uniquely predicted by the theory. In single-history theories, each history is weighted simply by its quantum-mechanical probability, but in many-worlds theories in which random observations are considered, there should also be the weighting by the numbers or amounts of observations occurring in each history.
Quantum mechanics is so mysterious that its precise content or interpretation is not agreed upon even by leading physicists. Although the number of versions or interpretations of quantum mechanics is huge, here I wish to focus upon two main classes of interpretations, which I shall call “single-history” versions and “many-worlds” versions, and show how they might be distinguished observationally. (Similar observational distinctions can be made between analogous “single-history” and “many-worlds” versions of classical physics, but since we know that the universe is quantum, here I shall focus on quantum theories.)
In single-history versions, the quantum formalism gives probabilities for various alternative sequences of events, but only one choice among the possible alternatives is assumed to occur in actuality. For example, a wavefunction that gives nonzero amplitudes for many different alternative events may be assumed to undergo a sequence of collapses to give a single sequence of actually occurring events, which may be considered to be a unique history.
On the other hand, the many-worlds versions began with Everett’s relative-state formalism in which the wavefunction never collapses. In a suitable basis each component of the wavefunction may be considered to be a different “world,” leading to this interpretation’s being labeled the “many-worlds” interpretation.
The consistent or decohering histories formulation of quantum mechanics does not by itself imply whether only a single coarse-grained history actually occurs, or whether many do, and the probabilities of histories that it gives do not depend on whether only one, or instead many, of the histories are actual rather than merely possible. However, I am considering probabilities for observations rather than merely probabilities for histories, so the consistent or decohering histories formalism needs to be extended in order to calculate these probabilities of interest here. The extension then depends on whether many, or only one, of the histories are actual.
It is often claimed that there is no observational distinction between many-worlds and single-history versions of a quantum theory, but here I shall refute that claim.
In processes with fixed observers that remember their observations, it does seem to be true that there is generally no distinction that a single observer can make between single-history and many-worlds quantum theories that are otherwise identical. This is because the measure for each observation in a many-worlds theory is then proportional to the probability of that observation in the corresponding single-history theory. This result depends upon the lack of interference between “worlds” in which different observations are made, which is assured if the memory records of the different observations are orthogonal.
To circumvent this no-observable-distinction result, David Deutsch has proposed an experiment in which an observer “splits” into two copies which make different observations and remember the fact of observation, but not the distinct observations themselves, and so can in principle be rejoined coherently back into a single copy. However, doing this in practice appears to be technologically extremely challenging.
On the other hand, what I wish to demonstrate here is that if different “worlds” do not have the same number of observers, then the measures for observations in the many-worlds theory can be different from being merely proportional to the probabilities in the corresponding single-history theory. Then what an observer would be typically expected to observe in the two theories can be distinct.
Consider a theory of quantum cosmology that gives a quantum state for the universe in which there are different “worlds” with greatly different numbers of observers. For calculating how typical various observations are, in a single-history theory one should weight the “worlds” purely by how probable they are, but in a many-worlds theory, one should weight the “worlds” not only by their quantum mechanical measures (the analogue in a deterministic many-worlds theory of the probabilities in an indeterministic single-histories theory), but also by how much observation occurs within each “world.” This distinction leads to different predictions as to which observations would be typical within the two types of theories.
As a grossly oversimplified illustration, consider the example in which a quantum cosmology theory gave a quantum state (before any possible collapse) that had one “world” with observers, and a second one with none. Suppose that the first “world” had a measure of 0.0000000001 and the second one had a measure of 0.9999999999.
In the single-history version of this theory, these two normalized measures would be the probabilities for the two “worlds,” so the probability would be extremely low that this theory led to any observers. A non-null observation would thus have such a low likelihood within this single-history theory that it would be strong evidence against this theory.
On the other hand, in the many-worlds version of this theory, both “worlds” would exist, with the measures indicating something like the “amount” by which they exist. But since the observations that occur in the first “world” definitely exist within this many-worlds theory as realities and not just as possibilities, the existence of an observation is not evidence against this many-worlds theory.
To put it another way, for considering observations within a many-worlds theory, one must multiply the measure for each world by a measure for the observations within that world. (Crudely, one may use the number of observations within the world, though in a final theory I would expect a refinement, so that, for example, a human’s observation is weighted more heavily than an ant’s). When one does this for the example above, the first “world” makes up the entirety of the weighting in the many-worlds theory, even though in the single-history theory that “world” has an extremely low probability and would be quite unexpected.
Now consider a second toy theory in which there are two “worlds” that both have observers, but their numbers and observations differ. For example, let World A last just barely long enough for it to have $`10^{10}`$ observers, all during the recontracting stage fairly near a big crunch, and let World B last much longer than the age range at which observers occur and have $`10^{90}`$ observers when the universe is expanding. Suppose World A has measure almost unity and World B has measure $`10^{-30}`$.
In the single-history version of this theory, these (normalized) measures are probabilities, so with near certainty, we can deduce that we should be in World A and see a contracting universe in this theory. Our actual observation of an expanding universe would then be strong evidence against this single-history theory.
On the other hand, in the many-worlds version of this theory, all of the observations actually exist. To calculate which observations are typical, one needs the measures for the observations themselves. Presumably these are given by the expectation values of certain operators associated with the corresponding observations . Crudely one might suppose the total for all the observations within one “world” is roughly proportional to the number of observers within that “world,” multiplied by the measure for the “world.” At this level of approximation, the total measure for the observations in the many-worlds version of this second toy model is thus $`10^{10}`$ for World A and $`10^{60}`$ for World B. Therefore, an observation chosen at random in this many-worlds theory is $`10^{50}`$ times more likely to be from World B, with the universe observed to be expanding, than from World A, with the universe seen to be contracting. Our actual observation of an expanding universe would then be consistent with this theory.
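The weighting arithmetic behind both toy models can be spelled out in a few lines. This sketch (illustrative only; the measures and observer counts are the ones stated in the text) multiplies each “world’s” quantum measure by its number of observers to get the many-worlds observation measure:

```python
# Observation-weighting arithmetic for the two toy cosmologies in the text.
# Single-history theories weight each world by its quantum measure alone;
# many-worlds theories also weight by the amount of observation in the world.

# First toy model: an observer-bearing world of measure 1e-10, and an
# empty world of measure 0.9999999999.
worlds_1 = {"with observers": (1e-10, 1), "empty": (1 - 1e-10, 0)}

# Second toy model: World A (contracting, 10^10 observers, measure ~1)
# and World B (expanding, 10^90 observers, measure 1e-30).
worlds_2 = {"A (contracting)": (1.0, 1e10), "B (expanding)": (1e-30, 1e90)}

def observation_weights(worlds):
    """Unnormalized many-worlds observation measure: measure x observers."""
    return {name: mu * n for name, (mu, n) in worlds.items()}

w1 = observation_weights(worlds_1)
w2 = observation_weights(worlds_2)
print(w1)  # all of the observation measure sits in the observer world
ratio = w2["B (expanding)"] / w2["A (contracting)"]
print(f"B:A observation ratio = {ratio:.0e}")
```

The ratio comes out as $`10^{60}/10^{10}=10^{50}`$, reproducing the factor by which a random observation favors the expanding World B.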
Thus in this second toy cosmological model, we can reject its single-history version because of the low probability it gives, not for our existence this time, but for whether we see the universe expanding. In this way observations can in principle be used to distinguish between many-worlds and single-history quantum theories.
In these examples, the statistical predictions of what a random observer should be expected to observe would be the same for a many-worlds theory and for the corresponding single-history theory if the latter had its quantum-mechanical probability for each history also weighted by the number of observers in that history, but I am assuming that this is not the case. Note that one could still get observable distinctions even if the single-history theory had a sequence of wavefunction collapses, each of which had the weighting by the number of observers in each branch at the time of the collapse, but I shall not consider further this possibility either.
There is the challenge that at present we apparently do not know enough about the quantum state of the universe to say with certainty whether our observations favor a many-worlds theory or a single-history theory. Nevertheless, I can summarize some highly speculative evidence that gives a preliminary suggestion that a many-worlds theory might be observationally favored.
This evidence starts with the Hartle-Hawking ‘no-boundary’ proposal for the quantum state of the universe , which of course is quite speculative but seems to me to be the most elegant sketch so far of a proposal (certainly not technically complete at present) for the quantum state of the universe. Under certain unproven assumptions and approximations, in a homogeneous, isotropic three-sphere minisuperspace toy model with a single massive inflaton scalar field, the no-boundary proposal leads in the semiclassical regime to a set of “worlds” or macroscopic classical spacetimes that are Friedmann-Robertson-Walker universes with various amounts of inflation and hence various total lifetimes and maximum sizes, and with measure approximately proportional to $`e^{\pi a_0^2}`$, where $`a_0`$ is the radius of the Euclidean four-dimensional hemisphere where the solution nucleates .
This nucleating radius $`a_0`$ is inversely proportional to the initial value of the inflaton scalar field, multiplied by its mass $`m`$ in Planck units, whereas the growth factor during inflation, and the lifetime of the resulting Friedmann-Robertson-Walker universe, go exponentially with the square of the initial value of the inflaton scalar field. Therefore, if one works out the quantum measure in terms of the volume of the universe at the end of inflation (say $`V`$ in Planck units), one finds that at the tree level it is very roughly proportional to $`\mathrm{exp}[(4.5\pi /m^2)/(\mathrm{ln}m^3V+1.5\mathrm{ln}\mathrm{ln}m^3V)]`$ for large values of $`m^3V`$ (the universe volume in units of the cube of the reduced Compton wavelength of the inflaton scalar field).
Since the inflaton mass $`m`$ is very small in Planck units, say roughly $`10^{-6}`$ , the factor of $`4.5\pi /m^2`$ is very large, say roughly $`10^{13}`$, and one gets an utterly enormous exponential peak in the measure at relatively small values of $`m^3V`$. (There is a cutoff in $`m^3V`$ at a value of order unity, below which there is no inflationary solution , so the measure distribution does not actually have a divergence.) The expression above for the measure rapidly decreases with increasing volume and then flattens out to become asymptotically constant when $`m^3V`$ gets large in comparison with $`\mathrm{exp}(4.5\pi /m^2)`$.
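The shape of this tree-level measure is easy to exhibit numerically if one works with logarithms (the exponents involved, of order $`10^{13}`$, overflow ordinary floating point). A sketch, using $`m=10^{-6}`$ in Planck units as above and writing $`x=\mathrm{ln}m^3V`$:

```python
import math

# Logarithm of the tree-level measure quoted in the text,
#   ln(measure) ~ (4.5*pi/m^2) / (ln(m^3 V) + 1.5*ln(ln(m^3 V))),
# as a function of x = ln(m^3 V).  Working in log space avoids overflow,
# since 4.5*pi/m^2 ~ 1.4e13 for m = 1e-6.
m = 1e-6
C = 4.5 * math.pi / m**2

def log_measure(x):
    """x = ln(m^3 V); valid for x >= 1 so that ln(x) is defined."""
    return C / (x + 1.5 * math.log(x))

for x in (1.0, 10.0, 1e3, 1e6, 1e12, C):
    print(f"x = {x:10.3e}   ln(measure) ~ {log_measure(x):10.3e}")
```

The output shows the enormous peak near the small-$`m^3V`$ cutoff ($`\mathrm{ln}(\mathrm{measure})`$ of order $`10^{13}`$ at $`x`$ of order unity) and the approach to a constant once $`x`$ is of order $`4.5\pi /m^2`$, i.e. once $`m^3V`$ is comparable to $`\mathrm{exp}(4.5\pi /m^2)`$.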
If one takes at face value the expression above for the measure for all values of $`m^3V`$ above its lower cutoff (at some number of order unity), then although the measure has an utterly enormous exponential peak at small values of $`m^3V`$, this is in turn overwhelmed by the divergence one gets when one integrates the measure (actually a measure density) to infinite values of $`m^3V`$. Then the total measure would be completely dominated by universes with arbitrarily large amounts of inflation. This means that with unit normalized probability, our universe would be arbitrarily large and arbitrarily flat when one ignores density fluctuations from corrections to the homogeneous isotropic minisuperspace model .
However, the expression above for the measure is purely at the tree-level or zero-loop approximation, ignoring prefactors that are expected to distort the measure distribution significantly for $`m^3V`$ large in comparison with $`\mathrm{exp}(4.5\pi /m^2)`$, because these enormous universes are generated by inflation that starts with the inflaton potential exceeding the Planck density, where one cannot trust the tree-level approximation or any other approximation we have at present.
If the correct quantum measure distribution diverges when one integrates to infinity the spatial volume shortly after the end of inflation, then the universe is most probably arbitrarily large and very near the critical density (spatially very flat), whether a many-worlds or a single-history quantum theory is correct, and so our observation of a universe near the critical density would not distinguish between the two possibilities.
However, if the correct quantum measure density is cut off or damped for large initial values of the inflaton energy density so that one does not get arbitrarily large universes with certainty, then the enormous exponential peak in the distribution at small universes is likely to dominate and (in a single-history version in which the quantum state collapses to a single macroscopic Friedmann-Robertson-Walker universe) make the universe most probably have only a small amount of inflation and a very short lifetime, not sufficient to produce observers, like the first world in the first example above. If one said that somehow the quantum state collapsed to a Friedmann-Robertson-Walker universe that gets large enough for observers, then the most probable universe history under this requirement would be one that lasts just barely long enough for observers before the final big crunch. In this case the observers would most likely exist only near the end of the universe, when it is recollapsing, like World A in the second example above, which is contrary to our observations of an expanding universe. Thus a single-history version of this theory with the quantum measure cut off to produce a normalizable probability distribution would most likely be refuted by our observations of an expanding universe.
On the other hand, if one took a many-worlds version of this quantum cosmology theory, one would have to weight the “worlds” (classical universes) by something like the number of observers within them. One would expect this number to be proportional to the volume of space at the time and other conditions when observers can exist (other factors being equal) . Therefore, in the many-worlds version one would multiply the quantum measure given above for the “worlds” (the “bare” probability distribution for universe configurations ) by something like $`V`$ to get the measure for observations (the “observational” probability distribution ).
The result, $`V\mathrm{exp}[(4.5\pi /m^2)/(\mathrm{ln}m^3V+1.5\mathrm{ln}\mathrm{ln}m^3V)]`$, is then sufficiently rapidly rising with large $`m^3V`$ that the part with large $`m^3V`$, even if cut off at $`m^3V`$ of order $`\mathrm{exp}(4.5\pi /m^2)`$, dominates over the exponentially large peak near the minimum value of $`m^3V`$. There is thus enough space for the no-boundary proposal to be consistent with our observations of a large and expanding universe , but this argument implicitly assumed a many-worlds version of the no-boundary proposal. A similar assumption had been made earlier in the broader context of eternal stochastic inflation . In a single-history version, it seems plausible that the Hartle-Hawking ‘no-boundary’ quantum state may collapse with nearly unit probability to a classical universe configuration that only lasts of the order of the Compton wavelength of the inflaton scalar field, presumably far too short to be consistent with our observations.
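The claim that the observer-weighted (“observational”) measure rises with volume at large $`m^3V`$ can likewise be checked in log space. In this sketch (illustrative only, again with $`m=10^{-6}`$ and $`x=\mathrm{ln}m^3V`$) we add $`\mathrm{ln}V=x-3\mathrm{ln}m`$ to the bare log-measure and confirm that it grows at large $`x`$, unlike the bare measure:

```python
import math

# Bare vs observer-weighted ("observational") log-measure for the quoted
# tree-level distribution, with m = 1e-6 in Planck units and x = ln(m^3 V).
m = 1e-6
C = 4.5 * math.pi / m**2

def log_bare(x):
    return C / (x + 1.5 * math.log(x))

def log_observational(x):
    # weight each world by its volume V: ln V = x - 3 ln m
    return (x - 3.0 * math.log(m)) + log_bare(x)

xs = [1e7, 1e9, 1e11, 1e13]
bare = [log_bare(x) for x in xs]
obs = [log_observational(x) for x in xs]
print("bare falls:    ", [f"{v:.3e}" for v in bare])
print("weighted rises:", [f"{v:.3e}" for v in obs])
```

This is the behavior behind the statement in the text that the large-$`m^3V`$ worlds dominate the observational measure in the many-worlds version.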
This suggestive evidence against a single-history quantum cosmology theory is of course not yet conclusive, since we do not yet know what the quantum state of the universe is. Indeed, the ‘tunneling’ wavefunction proposals of Vilenkin, Linde, and others predict that the “bare” quantum measure for small universes is exponentially suppressed, rather than enhanced as discussed above for the Hartle-Hawking ‘no-boundary’ proposal. The ‘tunneling’ proposals would thus apparently be consistent with our observations whether one used a many-worlds version or a single-history version. But the possibility is open that increased theoretical understanding of quantum cosmology may lead us to favor a quantum theory, such as the ‘no-boundary’ one may turn out to be when it is better understood, that is consistent with our observations only in its many-worlds version rather than in its single-history version.
Another tentative piece of observational evidence in favor of many-worlds quantum theory is a comparison with the calculation of likely values of the cosmological constant. If the assumptions of that paper are correct, and if the “subuniverses” used there are the “worlds” used here (“terms in the state vector”) rather than different spacetime regions within one “world” (“local bangs”), then our observational evidence of the cosmological constant is consistent with many-worlds quantum theory but not with single-history quantum theory. However, we need a better understanding of physics to know whether the assumptions are correct (such as the assumption that “the cosmological constant takes a variety of values in different ‘subuniverses’ ”).
Therefore, when we better understand fundamental physics and quantum cosmology, the observational evidence of the expansion of the universe and of the cosmological constant may lead us to favor many-worlds quantum theories over single-history quantum theories.
I am grateful for very helpful discussions with Meher Antia, Jerry Finkelstein, Jim Hartle, and Jacques Mallah. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada.
|
no-problem/9904/gr-qc9904061.html
|
ar5iv
|
text
|
# Generalized Second Law in Cosmology From Causal Boundary Entropy
## Abstract
A classical and quantum mechanical generalized second law of thermodynamics in cosmology implies constraints on the effective equation of state of the universe in the form of energy conditions that are obeyed by many known cosmological solutions; it forbids certain cosmological singularities and is compatible with entropy bounds. This second law is based on the conjecture that causal boundaries and not only event horizons have geometric entropies proportional to their area. In string cosmology the second law provides new information about non-singular solutions.
Cosmological singularities have been investigated, relying on the celebrated singularity theorems of Hawking and Penrose, who concluded that if sources in Einstein’s equations obey certain energy conditions, cosmological singularities are inevitable. Entropy considerations were brought in only much later, when Bekenstein argued that if the entropy of a visible part of the universe obeys the usual entropy bound from nearly flat space situations, certain cosmological singularities are thermodynamically unacceptable. Recently, Veneziano suggested that since a black hole larger than a cosmological horizon cannot form, the entropy of the universe is always bounded. This suggestion is related, although not always equivalent, to the application of the holographic principle in cosmology.
I propose a concrete classical and quantum mechanical form of a generalized second law (GSL) of thermodynamics in cosmology, valid also in situations far from thermal equilibrium, discuss various entropy sources, such as thermal, geometric and quantum entropy, apply GSL to study cosmological solutions, and show that it is compatible with entropy bounds. GSL allows a more detailed description of how, and if, cosmological singularities are evaded. The proposed GSL is different from GSL for black holes, but the idea that in addition to normal entropy other sources of entropy have to be included has some similarities.
That systems with event horizons, such as black holes and a deSitter universe, have entropy proportional to the area of their horizon is by now an accepted fact. The proposed GSL is based on the (reasonable) conjecture that causal boundaries and not only event horizons have geometric entropies proportional to their area. However, since the conjecture has not been proved yet, further investigation could reveal that it is incorrect or applies only in special situations. A proof of the conjecture will put our results on a much firmer ground.
The starting point of our classical discussion is the definition of the total entropy of a domain containing more than one cosmological horizon. For a given scale factor $`a(t)`$, and a Hubble parameter $`H(t)=\dot{a}/a`$, the number of cosmological horizons within a given comoving volume $`V=a(t)^3`$ is simply the total volume divided by the volume $`|H(t)|^{-3}`$ of a single horizon, $`n_H=a(t)^3|H(t)|^3`$ (we will ignore numerical factors of order unity, use units in which $`c=1`$, $`G_N=1/16\pi `$, $`\mathrm{\hbar }=1`$ and discuss only flat, homogeneous, and isotropic cosmologies). If the entropy within a given horizon is $`S^H`$, then the total entropy is given by $`S=n_HS^H`$. Classical GSL requires that the cosmological evolution, even when far from thermal equilibrium, must obey $`dS\ge 0`$, in addition to Einstein’s equations. In particular,
$$n_H\partial _tS^H+\partial _tn_HS^H\ge 0.$$
(1)
In general, there could be many sources and types of entropy, and the total entropy is the sum of their contributions. If, in some epoch, a single type of entropy makes a dominant contribution to $`S^H`$, for example, of the form $`S^H=|H|^\alpha `$, $`\alpha `$ being a constant characterizing the type of entropy source, and therefore $`S=(a|H|)^3|H|^\alpha `$, eq.(1) becomes an explicit inequality,
$$3H+(3+\alpha )\frac{\dot{H}}{H}\ge 0,$$
(2)
which can be translated into energy conditions constraining the energy density $`\rho `$, and the pressure $`p`$ of (effective) sources. Using the Friedman-Robertson-Walker (FRW) equations,
$`H^2`$ $`=`$ $`{\displaystyle \frac{1}{6}}\rho `$ (3)
$`\dot{H}`$ $`=`$ $`-{\displaystyle \frac{1}{4}}(\rho +p)`$ (4)
$`\dot{\rho }`$ $`+`$ $`3H(\rho +p)=0,`$ (5)
and assuming $`\alpha >-3`$ (which we will see later is a reasonable assumption) and of course $`\rho >0`$, we obtain
$`{\displaystyle \frac{p}{\rho }}`$ $`\le `$ $`{\displaystyle \frac{2}{3+\alpha }}-1\quad \text{for}\quad H>0,`$ (6)
$`{\displaystyle \frac{p}{\rho }}`$ $`\ge `$ $`{\displaystyle \frac{2}{3+\alpha }}-1\quad \text{for}\quad H<0.`$ (7)
Adiabatic evolution occurs when the inequalities in eqs.(6,7) are saturated.
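The chain from eq. (2) to the saturated equations of state can be checked numerically. The following sketch (ours, not part of the paper) uses a flat power-law cosmology $`a\propto t^{2/(3(1+w))}`$: it verifies that $`\dot{S}=S[3H+(3+\alpha )\dot{H}/H]`$ for $`S=(a|H|)^3|H|^\alpha `$, and that $`S`$ is exactly conserved when $`w=p/\rho `$ saturates eq. (6), i.e. $`w=2/(3+\alpha )-1`$:

```python
# Sketch (not from the paper): check eq.(2) and the saturated bound (6).
def S(t, w, alpha):
    """Total entropy (a|H|)^3 |H|^alpha for a ~ t^p, p = 2/(3(1+w))."""
    p = 2.0 / (3.0 * (1.0 + w))
    a, H = t**p, p / t
    return (a * abs(H))**3 * abs(H)**alpha

# (i) dS/dt equals S * [3H + (3+alpha) Hdot/H] (finite-difference check)
w, alpha, t, eps = 1.0/3.0, -1.0, 2.0, 1e-6
p = 2.0 / (3.0 * (1.0 + w))
H, Hdot = p/t, -p/t**2
dS_num  = (S(t+eps, w, alpha) - S(t-eps, w, alpha)) / (2*eps)
dS_form = S(t, w, alpha) * (3*H + (3+alpha)*Hdot/H)
assert abs(dS_num - dS_form) < 1e-6 * abs(dS_form)

# (ii) entropy is conserved exactly at the saturated equation of state
for alpha in (-1.5, -2.0):              # thermal and geometric indices
    w_sat = 2.0/(3.0 + alpha) - 1.0     # eq.(6) saturated: w = 2/(3+alpha)-1
    assert abs(S(1.0, w_sat, alpha) - S(10.0, w_sat, alpha)) < 1e-12
```

For $`\alpha =-3/2`$ this reproduces adiabatic radiation ($`w=1/3`$); for $`\alpha =-2`$ it gives the stiff fluid $`w=1`$ discussed below.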
A few remarks about the allowed range of values of $`\alpha `$ are in order. First, note that the usual adiabatic expansion of a radiation dominated universe with $`p/\rho =1/3`$ corresponds to $`\alpha =-3/2`$. Adiabatic evolution with $`p/\rho <-1`$, for which the null energy condition is violated, would require a source for which $`\alpha <-3`$. This is problematic since it does not allow a flat space limit of vanishing $`H`$ with finite entropy. The existence of an entropy source with $`\alpha `$ in the range $`\alpha <-2`$ does not allow a finite $`\partial _tS`$ in the flat space limit and is therefore suspected of being unphysical. Finally, the equation of state $`p=-\rho `$ (deSitter inflation) cannot be described as adiabatic evolution for any finite $`\alpha `$.
Let us discuss in more detail three specific examples. First, as already noted, we have verified that thermal entropy during radiation dominated (RD) evolution can be described without difficulties, as expected. In this case, $`\alpha =-\frac{3}{2}`$ reproduces the well known adiabatic expansion, but also allows entropy production. The present era of matter domination requires a more complicated description since in this case one source provides the entropy, and another source the energy.
The second case is that of the conjectured geometric entropy $`S_g`$, whose source is the existence of a cosmological horizon. The concept of geometric entropy is closely related to the holographic principle, and it has appeared in this connection recently in discussions of cosmological entropy bounds. For a system with a cosmological horizon, $`S_g^H`$ is given by (ignoring numerical factors of order unity)
$$S_g^H=|H|^{-2}G_N^{-1}.$$
(8)
The equation of state corresponding to adiabatic evolution with dominant $`S_g`$ is obtained by substituting $`\alpha =-2`$ into eqs.(6,7), leading to $`p/\rho =1`$ for positive and negative $`H`$. This equation of state is simply that of a free massless scalar field, also recognized as the two dilaton-driven inflation (DDI) $`(\pm )`$ vacuum branches of ‘pre-big-bang’ string cosmology in the Einstein frame. This was found for the $`(+)`$ branch in the string frame as an “empirical” observation. In general, for the case of dominant geometric entropy, GSL requires, for positive $`H`$, $`p\le \rho `$, a condition obtained also using a different argument. Note that deSitter inflation (DSI) is definitely allowed. For negative $`H`$, GSL requires $`\rho \le p`$, and therefore forbids, for example, a time reversed history of our universe, or a contracting deSitter universe with a negative constant $`H`$, unless some additional entropy sources appear.
The third case is that of quantum entropy $`S_q`$, associated with quantum fluctuations. This form of entropy has been discussed previously. Specific quantum entropy for a single physical degree of freedom is approximately given by (again, ignoring numerical factors of order unity)
$$s_q=\int d^3k\mathrm{ln}n_k,$$
(9)
where $`n_k\gg 1`$ are occupation numbers of quantum modes. Note that quantum entropy is large for highly excited quantum states, such as the squeezed states obtained by amplification of quantum fluctuations during inflation. Quantum entropy does not seem to be expressible in general as $`S_q^H=|H|^\alpha `$, since occupation numbers depend on the whole history of the evolution. We will discuss this form of entropy in more detail later, when the quantum version of GSL is proposed.
We would like to show that it is possible to formally define a temperature, and that the definition is compatible with a generalized form of the first law of thermodynamics. Recall that the first law for a closed system states that $`TdS=dE+pdV=(\rho +p)dV+Vd\rho `$. Let us now consider the case of a single entropy source and formally define a temperature $`T`$, $`T^{-1}=\left(\frac{\partial S}{\partial E}\right)_V=\frac{\partial s}{\partial \rho }`$, since $`E=\rho V`$ and $`S=sV`$. Using eq.(3) and $`s=|H|^{\alpha +3}`$, we obtain $`\frac{\partial s}{\partial \rho }=\frac{\alpha +3}{12}|H|^{\alpha +1}`$, and therefore
$$T=\frac{12}{\alpha +3}|H|^{-\alpha -1}.$$
(10)
Note that to ensure positive temperatures $`\alpha >-3`$, a condition which we have already encountered. Note also that for $`\alpha >-1`$, $`T`$ diverges in the flat space limit, and therefore such a source is suspected of being unphysical, leading to the conclusion that the physical range of $`\alpha `$ is $`-2\le \alpha \le -1`$. A compatibility check requires $`T^{-1}=\frac{\partial s/\partial t}{\partial \rho /\partial t}`$, which indeed yields a result in agreement with (10). Yet another thermodynamic relation, $`p/T=\left(\frac{\partial S}{\partial V}\right)_E`$, leads to $`p=sT-\rho `$ and therefore to $`p/\rho =\frac{2}{\alpha +3}-1`$ for adiabatic evolution, in complete agreement with eqs.(6,7). For $`\alpha =-2`$, eq.(10) implies $`T_g=|H|`$, and for ordinary thermal entropy $`\alpha =-3/2`$ reproduces the known result, $`T=|H|^{1/2}`$.
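The consistency checks quoted above can be made explicit. A small sketch (our illustration, in the paper's units where $`\rho =6H^2`$ and $`s=|H|^{\alpha +3}`$): eq. (10) agrees with $`T^{-1}=\partial _ts/\partial _t\rho `$, and $`p=sT-\rho `$ returns the adiabatic equation of state:

```python
alpha = -1.5                   # thermal-entropy index, inside -2 <= alpha <= -1
H, Hdot = 0.3, -0.02           # arbitrary instantaneous values, H > 0

s, rho = H**(alpha + 3), 6*H**2
sdot   = (alpha + 3) * H**(alpha + 2) * Hdot      # ds/dt
rhodot = 12 * H * Hdot                            # drho/dt

T_eq10 = 12.0/(alpha + 3) * H**(-alpha - 1)       # eq.(10)
assert abs(1.0/T_eq10 - sdot/rhodot) < 1e-12      # T^{-1} = ds/drho

p = s * T_eq10 - rho                              # from p/T = (dS/dV)_E
assert abs(p/rho - (2.0/(alpha + 3) - 1.0)) < 1e-12
```

Running the same lines with $`\alpha =-2`$ gives $`T_g=|H|`$ and $`p=\rho `$.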
We turn now to discuss entropy bounds, GSL and cosmological singularities. First, we discuss compatibility of entropy bounds and GSL, and then use GSL to derive a new bound relevant to cosmological singularities. Bekenstein suggested that in flat space there is a universal entropy bound on the maximal entropy content in a region containing energy $`E`$ and of size $`L`$, $`S<EL`$, and then applied this idea to cosmology, by choosing the particle horizon $`d_p=a(t)\int \frac{dt^{\prime }}{a(t^{\prime })}`$ as $`L`$. Recently Veneziano argued that since a black hole larger than the horizon cannot form, the largest entropy in a region corresponds to having just one black hole per Hubble volume $`H^{-3}`$, namely (introducing the Planck mass $`M_p=G_N^{-1/2}`$) that $`s\le M_p^2|H|`$ and
$$S^H\le M_p^2|H|^{-2}.$$
(11)
This conjecture was further supported in subsequent work. Perhaps a link between the two distinct entropy bounds can be established by choosing instead of $`d_p`$ the Hubble radius $`H^{-1}`$; since $`E_H=M_p^2|H|^{-1}`$, the condition $`S<EL`$ is translated into eq.(11). Note that when applied to non-inflationary cosmology, particle horizon and Hubble radius are about the same and therefore both bounds give similar constraints on $`S^H`$. A consequence of bound (11) is therefore that geometric entropy should always be the dominant source of entropy,
$$S^H\le S_g^H.$$
(12)
An example of an expanding and recontracting universe with some matter and a small negative cosmological constant has been presented, for which bound (11) seems to be violated. This example involves an epoch in which the causal range is very different from $`|H|^{-1}`$, and is quite interesting, but its resolution will not affect our conclusions for the cases we are interested in, in which $`H^2`$ is at least as large as $`|\dot{H}|`$.
Is GSL compatible with entropy bounds? Let us start answering this question by considering a universe undergoing decelerated expansion, that is $`H>0`$, $`\dot{H}<0`$. For entropy sources with $`\alpha >-2`$, going backwards in time, $`H`$ is prevented by the entropy bound (12) from becoming too large. This requires that at a certain moment in time $`\dot{H}`$ has reversed sign, or at least vanished. GSL allows such a transition. Evolving from the past towards the future, and looking at eq.(2) we see that a transition from an epoch of accelerated expansion $`H>0`$, $`\dot{H}>0`$, to an epoch of decelerated expansion $`H>0`$, $`\dot{H}<0`$, can occur without violation of GSL. But later we discuss a new bound appearing in this situation when quantum effects are included.
For a contracting universe with $`H<0`$, and if sources with $`\alpha >-2`$ exist, the situation is more interesting. Let us check whether in an epoch of accelerated contraction $`H<0`$, $`\dot{H}<0`$, GSL is compatible with entropy bounds. If an epoch of accelerated contraction lasts, it will inevitably run into a future singularity, in conflict with bound (12). This conflict could perhaps have been prevented if at some moment in time the evolution had turned into decelerated contraction with $`H<0`$, $`\dot{H}>0`$. But a brief look at eq.(2), $`\dot{H}\le -\frac{3}{3+\alpha }H^2`$, shows that decelerated contraction is not allowed by GSL. The conclusion is that for the case of accelerated contraction GSL and the entropy bound are not compatible.
To resolve the conflict between GSL and the entropy bound, we propose adding a missing quantum entropy term $`dS_{Quantum}=-\mu dn_H`$, where $`\mu (a,H,\dot{H},\mathrm{\dots })`$ is a “chemical potential” motivated by the following heuristic argument. Specific quantum entropy is given by (9), and we consider for the moment one type of quantum fluctuations that preserves its identity throughout the evolution. Changes in $`S_q`$ result from the well known phenomenon of freezing and defreezing of quantum fluctuations. For example, quantum modes whose wavelength is stretched by an accelerated cosmic expansion to the point that it is larger than the horizon, become frozen (“exit the horizon”), and are lost as dynamical modes, and conversely quantum modes whose wavelength shrinks during a period of decelerated expansion (“reenter the horizon”), thaw and become dynamical again. Taking into account this “quantum leakage” of entropy requires that the first law be modified as in open systems, $`TdS=dE+PdV-\mu dN`$, as has been suggested before.
In a universe going through a period of decelerated expansion, containing some quantum fluctuations which have reentered the horizon (e.g., a homogeneous and isotropic background of gravitational waves), physical momenta simply redshift, but since no new modes have reentered, and since occupation numbers do not change by simple redshift, then within a fixed comoving volume, entropy does not change. However, if there are some frozen fluctuations outside the horizon “waiting to reenter” then there will be a change in quantum entropy, because the minimal comoving wave number of dynamical modes $`k_{min}`$, will decrease due to the expansion, $`k_{min}(t+\delta t)<k_{min}(t)`$. The resulting change in quantum entropy, for a single physical degree of freedom, is $`\mathrm{\Delta }s_q=\int _{k_{min}(t+\delta t)}^{k_{min}(t)}k^2dk\mathrm{ln}n_k`$, and since $`k_{min}(t)=a(t)H(t)`$, $`\mathrm{\Delta }S_q=\int _{a(t+\delta t)H(t+\delta t)}^{a(t)H(t)}k^2dk\mathrm{ln}n_k=-\mathrm{\Delta }(aH)^3\mathrm{ln}n_{k=aH}`$, provided $`\mathrm{ln}n_k`$ is a smooth enough function. Therefore, for $`N`$ physical degrees of freedom, and since $`n_H=(aH)^3`$,
$$dS_q=-\mu Ndn_H,$$
(13)
where the parameter $`\mu `$ is taken to be positive. Obviously, the result depends on the spectrum $`n_k`$, but typical spectra are of the form $`n_k\propto k^\beta `$, and therefore we may take as a reasonable approximation $`\mathrm{ln}n_k\approx \mathrm{constant}`$ for all $`N`$ physical degrees of freedom.
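The step to eq. (13) is just this mode-counting integral; here is a numerical check (our sketch; the explicit factor $`1/3`$ from $`\int k^2dk`$ is one of the order-unity factors the text drops). With $`\mathrm{ln}n_k\approx \mathrm{ln}n`$ constant, the entropy released as $`k_{min}=aH`$ drops from $`k_1`$ to $`k_2`$ equals $`-\frac{1}{3}\mathrm{\Delta }(aH)^3\mathrm{ln}n`$:

```python
ln_n = 5.0                 # assumed flat spectrum: ln n_k ~ const
k1, k2 = 1.0, 0.8          # k_min = aH before / after a decelerated step

# midpoint-rule quadrature of the mode integral between the two horizons
M = 100000
dk = (k1 - k2) / M
integral = sum((k2 + (i + 0.5)*dk)**2 * ln_n for i in range(M)) * dk

delta_n_H = k2**3 - k1**3                  # change in (aH)^3, negative here
exact = -delta_n_H / 3.0 * ln_n            # dS_q = -(1/3) ln n * d(aH)^3
assert abs(integral - exact) < 1e-6 * exact
assert integral > 0                        # entropy grows as modes reenter
```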
We adopt proposal (13) in general,
$`dS`$ $`=`$ $`dS_{Classical}+dS_{Quantum}`$ (14)
$`=`$ $`dn_HS^H+n_HdS^H-\mu Ndn_H,`$ (15)
where $`S^H`$ is the classical entropy within a cosmological horizon. In particular, for the case that $`S^H`$ is dominated by a single source $`S^H=|H|^\alpha `$,
$$\left(3H+3\frac{\dot{H}}{H}\right)n_H(S^H-\mu N)+\alpha \frac{\dot{H}}{H}n_HS^H\ge 0.$$
(16)
Quantum modified GSL (16) allows a transition from accelerated to decelerated contraction. As a check, look at $`H<0`$, $`\dot{H}=0`$; in this case modified GSL requires $`3H(S^H-\mu N)\ge 0`$, which, if $`\mu N\ge S^H`$, is allowed. If the dominant form of entropy is indeed geometric entropy, the transition from accelerated to decelerated contraction is allowed already at $`|H|\simeq M_p/\sqrt{N}`$. In models where $`N`$ is a large number, such as grand unified theories and string theory where it is expected to be of the order of 1000, the transition can occur at a scale much below the Planck scale, at which classical general relativity is conventionally expected to adequately describe background evolution.
If we reconsider the transition from accelerated to decelerated expansion and require that (16) holds, we discover a new bound derived directly from GSL, compatible with, but not relying on, bound (12). Consider the case in which $`\dot{H}`$ and $`H`$ are positive, or $`H`$ positive and $`\dot{H}`$ negative but $`|\dot{H}|\ll H^2`$, relevant to whether the transition is allowed by GSL. In this case, (16) reduces to $`S^H-\mu N\ge 0`$, that is, GSL puts a lower bound on the classical entropy within the horizon. If geometric entropy is the dominant source of entropy as expected, GSL puts a lower bound on geometric entropy $`S_g^H\ge \mu N`$, which yields an upper bound on $`H`$,
$$H\le \frac{M_p}{\sqrt{N}}.$$
(17)
The scale that appeared previously in the resolution of the conflict between entropy bounds and GSL for a contracting universe has reappeared in (17), and remarkably, (17) is the same bound obtained using different arguments. Bound (17) forbids a large class of singular homogeneous, isotropic, spatially flat cosmologies by bounding their curvature.
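To attach numbers to bound (17) (our illustration; the values of $`N`$ are assumptions in the spirit of the text's estimate of order 1000):

```python
import math

for N in (100, 1000):                  # assumed counts of light degrees of freedom
    H_max = 1.0 / math.sqrt(N)         # upper bound on H, in units of M_p
    print(f"N = {N:4d}:  H <= {H_max:.4f} M_p")

assert abs(1.0/math.sqrt(1000) - 0.0316) < 5e-4   # well below the Planck scale
```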
An interesting study case is ‘pre-big-bang’ string cosmology. In this scenario the evolution of the universe starts from a state of very small curvature and string coupling, undergoes a phase of dilaton-driven inflation (DDI) which joins smoothly standard radiation dominated (RD) cosmology, thus giving rise to a singularity free inflationary cosmology. The graceful exit transition from DDI to RD has been studied intensely, with the following scenario emerging: first, classical corrections limit the curvature by trapping the universe in an algebraic fixed point, a linear dilaton deSitter solution, and then quantum corrections limit the string coupling and end the transition. Modified GSL supports this exit scenario, clarifies the conditions for the existence of the algebraic fixed point, determines new energy conditions, and constrains sources required to complete a graceful exit transition.
We present here the case of dominant $`S_g`$. A candidate geometric entropy is given by the analog of eq.(8), substituting $`M_p^2=e^{-\varphi }M_S^2`$, $`M_S`$ being the (constant) string mass and $`\varphi `$ the dilaton, so that $`S^H=e^{-\varphi }H^{-2}`$. The expression for $`n_H`$ is unchanged. Condition (2) now reads $`3H+\frac{\dot{H}}{H}-\dot{\varphi }\ge 0`$. Using $`\dot{\overline{\varphi }}=\dot{\varphi }-3H`$, we obtain $`\frac{\dot{H}}{H}-\dot{\overline{\varphi }}\ge 0`$, leading, by using one of the string cosmology equations of motion, $`\overline{\sigma }-2\dot{H}+2H\dot{\overline{\varphi }}=0`$, for $`H>0`$, which is the natural choice for the pre-big-bang phase, to the energy condition $`\overline{\sigma }\ge 0`$. An immediate consequence is that if $`\dot{H}`$ vanishes, then $`\dot{\overline{\varphi }}<0`$, so an algebraic fixed point necessarily has to occur for $`\dot{\overline{\varphi }}<0`$. The same conclusion has been reached previously by different methods. Further investigation is required to clarify the correct comparison to the analysis in ordinary FRW cosmology.
###### Acknowledgements.
Work supported in part by the Israel Science Foundation. It is a pleasure to thank R. Madden and G. Veneziano for useful discussions and helpful suggestions and comments, S. Foffa and R. Sturani for discussions about string cosmology, D. Eichler and J. Donoghue for discussions, and J. Bekenstein, R. Easther, N. Kaloper and A. Linde for comments on the manuscript.
|
no-problem/9904/cond-mat9904369.html
|
ar5iv
|
text
|
# Low-Temperature Spin Dynamics of Doped Manganites: roles of Mn-𝑡_{2𝑔} and 𝑒_𝑔 and O-2𝑝 states
## I figure captions
Fig.1 The spin wave dispersions obtained by (a) LMTO calculations and (b) the tight binding approach (model A) along the symmetry directions $`\mathrm{\Gamma }`$–X, X–M and M–R shown as a function of doping.
Fig. 2 The doping dependence of the exchange couplings $`J_1`$, $`J_2`$, $`J_4`$ and $`J_8`$ between atoms at ($`a`$ 0 0), ($`a`$ $`a`$ 0), (2$`a`$ 0 0) and (3$`a`$ 0 0), where $`a`$ is the lattice parameter.
Fig. 3 The (a) minority-spin and (c) majority-spin $`d`$ partial density of states within models A, B and C. The spin wave dispersions along $`\mathrm{\Gamma }`$–X as a function of doping within models (b) B and (d) C are shown along with (e) the combined contributions of models B and C. $`y`$ refers to the hole concentration in the majority-spin $`e_g`$ band with reference to its half-filled case. $`z`$ is the electron concentration in the minority-spin $`t_{2g}`$ band. $`x`$ is the net concentration of the doped holes and is given by $`x=y-z`$.
Fig. 4 The dependence of the spin wave energies on the $`e_g`$ hole doping $`y`$ along $`\mathrm{\Gamma }`$X within model D. The hopping between oxygen atoms and the $`t_{2g}`$ orbitals on the Mn atom have been left out of the model. (a) $`pd\sigma `$=-2.02 eV and (b) $`pd\sigma `$=-2.25 eV.
Fig. 5 The variation of the exchange couplings $`J_1`$, $`J_2`$, $`J_4`$ and $`J_8`$ with the $`e_g`$ hole doping $`y`$. Open circles are for the case (model B) including the hopping between oxygen atoms and $`pd\sigma `$=-2.02 eV. Open and filled squares are for the cases (model D) without the hopping between oxygen atoms: $`pd\sigma `$=-2.02 eV (open squares) and $`pd\sigma `$=-2.25 eV (filled squares). The $`t_{2g}`$ orbitals on the Mn atom have been left out of the basis set.
|
no-problem/9904/cond-mat9904096.html
|
ar5iv
|
text
|
# A new quantum phase between the Fermi glass and the Wigner crystal in two dimensions
## Abstract
For intermediate Coulomb energy to Fermi energy ratios $`r_s`$, spinless fermions in a random potential form a new quantum phase which is neither a Fermi glass nor a Wigner crystal. Studying small clusters, we show that this phase gives rise to an ordered flow of enhanced persistent currents, for disorder strengths and ratios $`r_s`$ where a metallic phase has been recently observed in two dimensions.
An important parameter for a system of charged particles is the Coulomb energy to Fermi energy ratio $`r_s`$. In a disordered two-dimensional system, the ground state is obvious in two limits. For large $`r_s`$, the charges form a kind of pinned Wigner crystal, the Coulomb repulsion being dominant over the kinetic energy and the disorder. For small $`r_s`$, the interaction becomes negligible and the ground state is a Fermi glass with localized one electron states. There is no theory for intermediate $`r_s`$, while many transport measurements following the pioneering works of Kravchenko et al and made with electron and hole gases give evidence of an intermediate metallic phase in two dimensions, observed for instance when $`6<r_s<9`$ for a hole gas in GaAs heterostructures. A simple model of spinless fermions with Coulomb repulsion in small disordered $`2d`$ clusters exhibits a new ground state characterized by an ordered flow of enhanced persistent currents for those values of $`r_s`$. In a given cluster, as we turn on the interaction, the Fermi ground state can be followed from $`r_s=0`$ up to a first level crossing. A second crossing occurs at a larger threshold after which the ground state can be followed to the limit $`r_s\to \mathrm{\infty }`$. There is then an intermediate state between the two crossings. In small clusters, the location of the crossings depends on the considered potentials, but a study over the statistical ensemble of the currents supported by the ground state gives us two well defined values $`r_s^F`$ and $`r_s^W`$: Mapping the system on a torus threaded by an Aharonov-Bohm flux, we denote respectively $`I_l`$ and $`I_t`$ the total longitudinal (direction enclosing the flux) and transverse parts of the driven current. One finds for their typical amplitudes $`|I_t|\propto \mathrm{exp}(-r_s/r_s^F)`$ and $`I_l\propto \mathrm{exp}(-r_s/r_s^W)`$ with $`r_s^F<r_s^W`$.
Below $`r_s^F`$, the flux gives rise to a glass of local currents and the sign of $`I_l`$ can be diamagnetic or paramagnetic, depending on the random potentials. Above $`r_s^F`$, the transverse current is suppressed while an ordered flow of longitudinal currents persists up to $`r_s^W`$, where charge crystallization occurs. The sign of $`I_l`$ can be paramagnetic or diamagnetic depending on the filling factor (as for the Wigner crystal), but does not depend on the random potentials (in contrast to the Fermi glass). One finds $`r_s^F`$ and $`r_s^W`$ in agreement with the values delimiting the new metallic phase when $`0.3<k_Fl<3`$, $`k_F`$ and $`l`$ denoting the Fermi wave vector and the elastic mean free path respectively. For $`k_Fl<1`$, $`I_l`$ is strongly increased between $`r_s^F`$ and $`r_s^W`$. This suggests that the intermediate phase of our model is related to the new metal observed in two dimensions by transport measurements, which we now briefly review.
In exceptionally clean GaAs/AlGaAs heterostructures, an insulator-metal transition (IMT) of a hole gas results from an increase of the hole density induced by a gate. This occurs at $`r_s\approx 35`$, in close agreement with $`r_s^W\approx 37`$, where charge crystallization takes place according to Monte Carlo calculations, and makes it highly plausible that the observed IMT comes from the quantum melting of a pinned Wigner crystal. The values of $`r_s`$ where an IMT has been previously seen in various systems (Si-Mosfet, Si-Ge, GaAs) correspond to different degrees of disorder (measured by the elastic scattering time $`\tau `$). Those $`r_s`$ drop quickly from $`35`$ to a constant value $`r_s\approx 8`$–$`10`$ when $`\tau `$ becomes smaller. This is again compatible with $`r_s^W\approx 7.5`$ given by Monte Carlo calculations for a solid-fluid transition in presence of disorder. If the observed IMTs are due to interactions, it might be expected that this metallic phase will cease to exist as the carrier density is further increased. This is indeed the case for a hole gas in GaAs heterostructures at $`r_s\approx 6`$, where an insulating state appears, characteristic of a Fermi glass with electron-electron interactions.
In this work, we take advantage of exact diagonalization techniques for large sparse matrices (Lanczos method) where tiny changes of energy can be precisely studied. This restricts us to small clusters and low filling factors. Fortunately, the dependence on particle number has proved to be remarkably weak in many cases. In the clean limit, calculations with 6-8 particles give the condensation of the electron gas into an incompressible quantum fluid when a magnetic field is applied. Pikus and Efros have obtained $`r_s^W\approx 35`$ from $`6\times 6`$ clusters with $`6`$ particles, close to $`r_s^W\approx 37`$ obtained by Tanatar and Ceperley for the thermodynamic limit. In the disordered limit which we consider, there is another reason for expecting weak finite size effects. When the energy levels do not depend very much on the boundary conditions, the periodic repetition of the same cluster cannot drastically differ from the thermodynamic limit obtained from an ensemble of different clusters. This usual localization criterion applies for insulators such as the Fermi glass or the pinned Wigner crystal. Small cluster approximations should then be sufficient for small and large $`r_s`$. This explains why the critical factors $`r_s`$ which we will discuss are close to the thermodynamic limit given by the experiments. Finite size effects can be important only if one has a metal for intermediate $`r_s`$.
We consider a simple model of $`N=4`$ Coulomb interacting spinless fermions in a random potential defined on a square lattice with $`L^2=36`$ sites. The Hamiltonian reads:
$`H=-t{\displaystyle \underset{<i,j>}{\sum }}c_i^{\mathrm{\dagger }}c_j+{\displaystyle \underset{i}{\sum }}v_in_i+U{\displaystyle \underset{i\ne j}{\sum }}{\displaystyle \frac{n_in_j}{2r_{ij}}}.`$ (1)
$`c_i^{\mathrm{\dagger }}`$ ($`c_i`$) creates (destroys) an electron in the site $`i`$, $`t`$ is the strength of the hopping terms between nearest neighbors (kinetic energy) and $`r_{ij}`$ is the inter-particle distance for a $`2d`$ torus. The random potential $`v_i`$ of the site $`i`$ with occupation number $`n_i=c_i^{\mathrm{\dagger }}c_i`$ is taken from a box distribution of width $`W`$. The interaction strength $`U`$ yields a Coulomb energy to Fermi energy ratio $`r_s=U/(2t\sqrt{\pi n_e})`$ for a filling factor $`n_e=N/L^2`$. The disorder to hopping energy ratio $`W/t`$ is chosen such that $`k_Fl`$ takes values where the IMT has been observed. A Fermi golden rule approximation for $`\tau `$ gives $`k_Fl\approx 192\pi n_e(t/W)^2`$. One has $`n_e=1/9`$, $`W/t=5,10,15`$ corresponding to $`k_Fl=2.7,0.67`$ and $`0.3`$ respectively.
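The quoted numbers can be reproduced directly from the golden-rule formula (a check of the text's values; the $`U/t=10`$ used for $`r_s`$ below is our own illustrative choice, not a parameter quoted here):

```python
import math

n_e = 4/36                                   # filling factor N/L^2 = 1/9
kFl = lambda W_over_t: 192*math.pi*n_e / W_over_t**2
for W_over_t, quoted in ((5, 2.7), (10, 0.67), (15, 0.3)):
    assert abs(kFl(W_over_t) - quoted) < 0.05, (W_over_t, kFl(W_over_t))

# r_s = U/(2t sqrt(pi n_e)); a hypothetical U/t = 10 gives r_s ~ 8.5,
# i.e. inside the 6 < r_s < 9 window quoted for the metallic phase
r_s = lambda U_over_t: U_over_t / (2*math.sqrt(math.pi*n_e))
assert 8 < r_s(10) < 9
```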
The boundary conditions are always taken periodic in the transverse $`y`$-direction, and such that the system becomes a torus enclosing an Aharonov-Bohm flux $`\varphi `$ in the longitudinal $`x`$-direction. Imposing $`\varphi =\pi /2`$ ($`\varphi =\pi `$ corresponds to anti-periodic condition), one drives a persistent current of total longitudinal and transverse components given by
$$I_l=-\frac{\partial E(\varphi )}{\partial \varphi }|_{\varphi =\pi /2}=\frac{\sum _iI_i^l}{L}$$
(2)
and $`I_t=\sum _iI_i^t/L`$ respectively. The local current $`I_i^l`$ flowing at the site $`i`$ in the longitudinal direction is defined by $`I_i^l=2\mathrm{Im}\langle \mathrm{\Psi }_0|c_{i_{x+1},i_y}^{\mathrm{\dagger }}c_{i_x,i_y}|\mathrm{\Psi }_0\rangle `$ and by a corresponding expression for $`I_i^t`$. The response is paramagnetic if $`I_l>0`$ and diamagnetic if $`I_l<0`$. We begin by showing behaviors characteristic of a single cluster when $`r_s`$ varies.
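As a much-reduced illustration of these definitions (our sketch: a non-interacting disordered 1D ring rather than the paper's interacting 36-site 2D cluster), the flux can be inserted through Peierls phases and the persistent current obtained as a finite-difference derivative of the ground-state energy:

```python
import numpy as np

def ground_energy(phi, v, t=1.0, N=4):
    """Ground-state energy of N non-interacting spinless fermions on an
    L-site ring with site potentials v and Aharonov-Bohm flux phi
    (Peierls phase phi/L on each bond)."""
    L = len(v)
    H = np.diag(v).astype(complex)
    phase = np.exp(1j * phi / L)
    for i in range(L):
        H[i, (i + 1) % L] += -t * phase
        H[(i + 1) % L, i] += -t * np.conj(phase)
    return np.linalg.eigvalsh(H)[:N].sum()    # fill the N lowest orbitals

rng = np.random.default_rng(0)
v = rng.uniform(-2.5, 2.5, size=12)           # box disorder, W/t = 5

d = 1e-4                                      # finite-difference step
I_l = -(ground_energy(np.pi/2 + d, v) - ground_energy(np.pi/2 - d, v)) / (2*d)

# the spectrum depends only on the total flux modulo 2*pi (gauge invariance)
assert abs(ground_energy(0.7, v) - ground_energy(0.7 + 2*np.pi, v)) < 1e-9
assert abs(I_l) > 1e-10                       # a nonzero persistent current
```

The sign convention $`I_l=-\partial E/\partial \varphi `$ follows eq. (2); the interacting problem replaces the one-body diagonalization by a Lanczos diagonalization in the many-body basis.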
Fig. 1 corresponds to $`k_Fl<1`$ ($`W/t=15`$). Looking at the low-energy part of the spectrum, one can see that, as the interaction is gradually turned on, the classification of the levels remains unchanged up to the first avoided crossing, beyond which a Landau theory of the Fermi glass is certainly no longer possible. Examining the electronic density $`\rho _i=\langle \mathrm{\Psi }_0|n_i|\mathrm{\Psi }_0\rangle `$ of the ground state $`|\mathrm{\Psi }_0\rangle `$, we have checked that in the Fermi glass it is essentially concentrated in the minima of the site potential. After the second avoided crossing, $`\rho _i`$ is negligible except on four sites forming a lattice of charges as close as possible to the triangular Wigner-crystal network allowed by the imposed square lattice. The degeneracy of the crystal is removed by the disorder, the array being pinned on $`4`$ sites of favorable energies.
For the same cluster, we have calculated $`C(r)=N^{-1}\sum _i\rho _i\rho _{i-r}`$ and the parameter $`\gamma =\mathrm{max}_rC(r)-\mathrm{min}_rC(r)`$ used by Pikus and Efros for characterizing the melting of the crystal: $`\gamma =1`$ for a crystal and $`0`$ for a liquid. Calculated for the ground state and the first excited state, $`\gamma (r_s)`$ allows us to identify the second crossing with the melting of the crystal. Moreover, one can see that the crystal becomes unstable in the intermediate phase, while the ground state is related to the first excitation of the crystal (Fig. 1, bottom left). Around the crossings, the longitudinal current $`I_l`$ and the participation ratio $`\xi _s=N^2(\sum _i\rho _i^2)^{-1}`$ of the ground state (i.e., the number of sites that it occupies) are enhanced (Fig. 1, bottom right). The general picture is somewhat reminiscent of strongly disordered chains, where level crossings associated with charge reorganizations of the ground state are accompanied by enhancements of the persistent currents. Fig. 1 is representative of the ensemble, with the restriction that the location of the crossings fluctuates from one sample to another, as does the sign (paramagnetic or diamagnetic) of $`I_l`$ below the first crossing, in contrast to $`1d`$.
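The two diagnostics used here are simple to compute from a site-occupation array. A small sketch (our own implementation of the definitions above, assuming periodic shifts on the torus):

```python
import numpy as np

def melting_gamma(rho):
    """Pikus-Efros crystallinity parameter from the correlator
    C(r) = N^-1 * sum_i rho_i * rho_{i-r} over periodic shifts r:
    gamma = max_r C(r) - min_r C(r); 1 for a pinned crystal, ~0 for a liquid."""
    N = rho.sum()
    Lx, Ly = rho.shape
    C = np.array([[(rho * np.roll(np.roll(rho, rx, 0), ry, 1)).sum() / N
                   for ry in range(Ly)] for rx in range(Lx)])
    return C.max() - C.min()

def participation_ratio(rho):
    # xi_s = N^2 / sum_i rho_i^2: roughly the number of occupied sites
    return rho.sum() ** 2 / (rho ** 2).sum()
```

A uniform density on the $`6\times 6`$ lattice gives $`\gamma =0`$ and $`\xi _s=36`$, while four unit charges on a periodic sublattice give $`\gamma =1`$ and $`\xi _s=4`$, matching the limiting values quoted in the text.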
Fig. 2 corresponds to $`k_Fl>1`$ ($`W/t=5`$). The previous level crossings are now almost suppressed by a stronger level repulsion, and charge crystallization occurs more continuously. There is instead a broad enhancement of $`I_l`$ which, in contrast to Fig. 1, is not accompanied by a corresponding increase of $`\xi _s`$, which smoothly decreases from $`20`$ of the $`36`$ possible sites down to $`4`$ when charge crystallization becomes perfect. A transition of the persistent current, from a disordered array of loops towards an ordered flow as $`r_s`$ increases, has been noticed by Berkovits and Avishai. To illustrate this phenomenon, the total transverse current $`I_t`$ is shown in Fig. 2. One can see that $`I_t`$ is suppressed at $`r_s\approx 5`$ while $`I_l`$ continues to increase up to $`r_s\approx 15`$. We have checked that a disordered array of loops persists up to $`r_s\approx 5`$, followed by an ordered flow of enhanced longitudinal currents persisting up to $`r_s\approx 15`$. The disordered array of loops gives rise to a diamagnetic or paramagnetic current $`I_l`$, depending on the microscopic disorder. The ordered flow gives rise to a paramagnetic $`I_l`$. However, Coulomb repulsions do not always yield a paramagnetic response. For instance, $`4\times 6`$ clusters with $`N=6`$ always become diamagnetic at large $`r_s`$. One can only conclude that the sign of the response in $`2d`$ does not depend on the random potential when $`r_s`$ is large enough to suppress $`I_t`$. In $`1d`$, Leggett’s theorem states that the sign of $`I_l`$ depends on the parity of $`N`$ only, for all disorder and interaction strengths. The proof is based on the nature of “non symmetry dictated nodal surfaces”, which is trivial in $`1d`$ but which has a quite complicated topology in higher $`d`$. It is likely that such a theorem could be extended to $`2d`$ when the transverse flow is suppressed.
We now present a statistical study of an ensemble of $`10^3`$ clusters for $`W/t=5,10,15`$. At the top left of Fig. 3, one can see an increase of the mean $`I_l`$ by about one order of magnitude around $`r_s\approx 7`$ for $`W/t=5`$. We note that the persistent currents measured in an ensemble of mesoscopic rings are typically larger than the theoretical prediction neglecting interactions by a similar amount. At the top right of Fig. 3, the fraction of diamagnetic clusters is given as a function of $`r_s`$, showing that the enhancement of the mean is partially related to the suppression of the diamagnetic currents. This suppression is faster for weak disorders. The mean number $`\xi _s`$ of sites occupied by the ground state is given at the middle right of Fig. 3, showing a negligible increase when $`W/t=15`$ at low $`r_s`$ and a regular decay otherwise. The paramagnetic $`I_{l,p}`$ and diamagnetic $`I_{l,d}`$ longitudinal currents, and $`|I_t|`$, have log-normal distributions for all values of $`r_s`$ when $`W/t\ge 5`$. The stronger the disorder, the better the log-normal shape of the distribution (see middle left of Fig. 3). The average of the logarithms gives the typical values shown in the bottom part of Fig. 3. On the left, the longitudinal currents $`I_l`$ are given, the diamagnetic responses $`I_{l,d}`$ (filled symbols) being separated from the paramagnetic responses $`I_{l,p}`$ (empty symbols), while the transverse currents $`I_t`$ are given on the right side. The log-averages decay exponentially as $`I_{l,d}\sim |I_t|\sim \mathrm{exp}(-r_s/r_s^F)`$ and $`I_{l,p}\sim \mathrm{exp}(-r_s/r_s^W)`$ when $`r_s`$ is large enough. The variances of $`\mathrm{log}|I_t|`$ and $`\mathrm{log}I_l`$ increase as $`r_s/r_s^F`$ and $`r_s/r_s^W`$ above $`r_s^F`$ and $`r_s^W`$ respectively. The values of $`r_s^F`$ and $`r_s^W`$ extracted from the exponential fits (straight lines of Fig. 3) are given in Fig. 4, where a sketch of the phase diagram is proposed.
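The extraction of $`r_s^F`$ and $`r_s^W`$ amounts to a linear fit of the log-averaged currents versus $`r_s`$. A sketch on synthetic data (our own illustration, assuming log-normal currents whose typical value decays as $`\mathrm{exp}(-r_s/r_s^W)`$):

```python
import numpy as np

rng = np.random.default_rng(1)
rs_grid = np.linspace(8, 30, 12)
rs_W_true = 6.0

# synthetic ensemble: ln(I) is Gaussian at each r_s (log-normal currents),
# with a typical value decaying as exp(-r_s / r_s^W)
lnI = -1.0 - rs_grid[:, None] / rs_W_true + 0.8 * rng.standard_normal((12, 1000))
typical = lnI.mean(axis=1)                 # log-average = typical current
slope, intercept = np.polyfit(rs_grid, typical, 1)
rs_W_fit = -1.0 / slope                    # recovered decay scale
```

With $`10^3`$ samples per point, the fitted scale reproduces the input one to well within a few percent, which is the sense in which the straight lines of Fig. 3 determine $`r_s^F`$ and $`r_s^W`$.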
Fig. 3 and Fig. 4 show that a simple model of spinless fermions with Coulomb repulsion in a random potential can account for the critical carrier densities and disorder strengths where the IMT occurs. The comparison between the curve $`r_s(\tau )`$ given in Ref. (summarizing the factors $`r_s`$ where the IMT has been observed) and the curve $`r_s^W(k_Fl)`$ of Fig. 4 (characterizing the suppression of $`I_l`$) is very striking. The value $`r_s=6`$ where the reentry has been observed in Ref. is also compatible with the curve $`r_s^F(k_Fl)`$ characterizing the suppression of $`I_t`$. We have not indicated in the proposed phase diagram the difference between $`k_Fl>1`$ (where $`I_l`$ has a strong enhancement which may be the signature of a new metal at the thermodynamic limit) and $`k_Fl<1`$ (where $`I_l`$ persists up to $`r_s^W`$ without noticeable enhancement). A study of the size dependence at a fixed filling factor will be necessary to establish whether an IMT driven by an increase of $`k_Fl`$ occurs at intermediate $`r_s`$. This is unfortunately out of reach of exact diagonalization techniques. Another striking difference is that $`\xi _s`$ and $`I_l`$ convey similar information when $`k_Fl<1`$, while the increase of $`I_l`$ is accompanied by a decrease of $`\xi _s`$ when $`k_Fl>1`$. This suggests that transport at intermediate $`r_s`$ results more from a collective motion of charges than from a delocalization of individual charges. The spin degrees of freedom are not included in our model, the orbital part of the wave function being totally anti-symmetrized. This restriction is quite important for short range screened interactions, but is certainly less severe for long range interactions and low densities. However, there is much experimental evidence that spin effects play a role. A more complex phase diagram is possible when spins are included.
But even for $`2d`$ spinless fermions, these small cluster studies allow us to conclude that there is a new quantum phase, clearly separated from the Fermi glass and from the Wigner crystal, identified by a plastic flow of currents without charge crystallization, which is likely to give a new metal in the thermodynamic limit when $`k_Fl>1`$.
This work is partially supported by a TMR network of the EU.
# RADIATIVELY–DRIVEN OUTFLOWS AND AVOIDANCE OF COMMON–ENVELOPE EVOLUTION IN CLOSE BINARIES
## 1. INTRODUCTION
The question of what happens to a compact object that is fed mass at rates far higher than its Eddington limit has a long history (Shakura & Sunyaev, 1973; Kafka & Mészáros, 1976; Begelman, 1979). In the context of accreting binary systems, this problem is particularly acute because of the possibility of common–envelope (CE) evolution at such rates. That is, the accreting component may be unable either to accept or to expel the mass at a sufficiently high rate to avoid the formation of an envelope engulfing the entire binary system. The frictional drag of this envelope can shrink the binary orbit drastically. If the resulting release of orbital energy is enough to unbind the envelope, the binary will emerge from the common envelope with a smaller separation; if not, the binary components may coalesce. CE evolution is probably required for the formation of binaries such as cataclysmic variables, in which the binary separation is far smaller than the radius of the accreting white dwarf’s red giant progenitor. However, it is in general an open question whether CE evolution occurs in any given binary.
This question is thrown into sharp relief by recent work on the evolution of the low–mass X–ray binary Cygnus X–2 (King & Ritter, 1999), which has a period of 9.84 d. The rather precise spectroscopic information found by Casares, Charles, & Kuulkers (1998), together with the observed effective temperature of the secondary, shows that this star has a mass definitely below $`0.7\mathrm{M}_{}`$ and yet a luminosity of order $`150\mathrm{L}_{}`$. King & Ritter (1999) consider several possible explanations and show that the only viable one is that Cygnus X–2 is a product of early massive Case B evolution. Here ‘Case B’ means that the mass–losing star has finished core hydrogen–burning, and is expanding across the Hertzsprung gap: ‘early’ means that the stellar envelope is radiative rather than convective, and ‘massive’ that the helium core is non–degenerate; see Kippenhahn & Weigert, 1967. In Cygnus X–2 an initially more massive ($`M_{2i}\sim 3.5\mathrm{M}_{}`$) secondary transferred mass on a thermal timescale ($`\sim 10^6`$ yr) to the neutron star. This idea gives a satisfying fit to the present observed properties of Cyg X–2, as well as a natural explanation for the large white dwarf companion masses found in several millisecond pulsar binaries with short orbital periods. CE evolution cannot have occurred, as Cyg X–2’s long orbital period means that there was far too little orbital energy available for the CE mechanism to have ejected so much mass. Thus an inescapable feature of this picture is that the neutron star is evidently able to eject essentially all of the matter (2–3 $`\mathrm{M}_{}`$) transferred to it at highly super–Eddington rates $`\sim 10^{-6}\mathrm{M}_{}`$ yr<sup>-1</sup>. Indeed, the neutron star mass in Cyg X–2 is rather close to the canonical value of $`1.4\mathrm{M}_{}`$. The aim of this paper is to determine under what conditions such expulsion can occur without the system going into a common envelope.
## 2. EXPULSION BY RADIATION PRESSURE
There are essentially two views as to the fate of matter dumped onto a compact object at a highly super–Eddington rate. In spherically–symmetric, dissipative accretion of an electron–scattering medium, the luminosity generated by infall down to radius $`R`$ will reach the Eddington limit at a radius
$$R_{\mathrm{ex}}\sim \left(\frac{\dot{M}_{\mathrm{tr}}}{\dot{M}_{\mathrm{Edd}}}\right)R_S,$$
(1)
where $`\dot{M}_{\mathrm{tr}}`$ is the mass infall rate at large radius (i.e. the mass transfer rate from the companion star in our case), $`\dot{M}_{\mathrm{Edd}}=L_{\mathrm{Edd}}/c^2`$ is the Eddington accretion rate, and $`R_S`$ is the Schwarzschild radius (Begelman, 1979). This is also the “trapping radius”, below which photon diffusion outward cannot overcome the advection of photons inward. If the compact object is a black hole, the radiation generated in excess of the Eddington limit can thus be swept into the black hole, and lost. If the compact object is a neutron star, however, radiation pressure building up near the star’s surface must resist inflow in excess of $`\dot{M}_{\mathrm{Edd}}`$, causing the stalled envelope to grow outward. This situation would lead to the formation of a common envelope.
The outcome may be very different if the accretion flow has even a small amount of angular momentum. Shakura & Sunyaev (1973) suggested that super–Eddington flow in an accretion disk would lead to the formation of a strong wind perpendicular to the disk surface, which could carry away most of the mass. Such a model (an “Adiabatic Inflow–Outflow Solution”, or ADIOS) was elaborated by Blandford & Begelman (1999: hereafter BB99), who considered radiatively inefficient accretion flows in general. BB99 recalled that viscous transfer of angular momentum also entails the transfer of energy outward. If the disk were unable to radiate efficiently (as would be the case at $`R<R_{\mathrm{tr}}`$), the energy deposited in the material well away from the inner boundary would unbind it, leading to the creation of a powerful wind. BB99 described a family of self-similar models in which the mass inflow rate decreases inward as $`\dot{M}\propto r^n`$ with $`0<n<1`$. The exact value of $`n`$ depends on the physical processes depositing energy and angular momentum in the wind. If these are very efficient (e.g., mediated by highly organized magnetic torques) $`n`$ could be close to zero, in which case little mass would be lost. However, if the wind is produced inefficiently, $`n`$ would have to be close to 1 and the mass flux reaching the central parts of the accretion disk would be much smaller than the mass transferred from the secondary. For example, two-dimensional hydrodynamical simulations of the evolution of a non-radiative viscous torus (Stone, Pringle & Begelman 1999) show the development of a convectively driven circulation with little mass reaching the central object, and $`n\approx 1`$. The development of this strong mass loss is generic and is not related to the assumption of self-similarity.
In effect, what is happening is that the energy liberated by a small fraction of the mass reaching the deep gravitational potential serves to unbind the majority of the matter, which is weakly bound at large distances.
While the specific details of mass loss from super–Eddington flow have not been worked out (in particular, radiation-dominated convection is poorly understood), it is reasonable to assume that the wind will be produced inefficiently, with $`n\approx 1`$ as in the case of hydrodynamic convection. We also assume that most of the matter will be blown away from $`R_{\mathrm{ex}}`$. Applying equation (1), we find
$$R_{\mathrm{ex}}\approx 1.3\times 10^{14}\dot{m}_{\mathrm{tr}}\mathrm{cm},$$
(2)
where $`\dot{m}_{\mathrm{tr}}`$ is the mass transfer rate expressed in $`\mathrm{M}_{}`$ yr<sup>-1</sup>. Note that $`R_{\mathrm{ex}}`$ is independent of the mass of the compact accretor. Since we restrict attention to electron scattering opacity only, we require that hydrogen should be strongly ionized at $`R_{\mathrm{ex}}`$. This is ensured by requiring the radiation temperature near $`R_{\mathrm{ex}}`$ to exceed $`T_H\approx 10^4`$ K. Since the luminosity emerging from $`R_{\mathrm{ex}}`$ is close to the Eddington limit, we require $`L_{\mathrm{Edd}}\gtrsim 4\pi R_{\mathrm{ex}}^2\sigma T_H^4`$, which is satisfied if $`R_{\mathrm{ex}}/R_S\lesssim 10^7m_1^{-1/2}`$, or equivalently (using equation)
$$\dot{M}_{\mathrm{tr}}\lesssim 10^7\dot{M}_{\mathrm{Edd}}m_1^{-1/2}\approx 2\times 10^{-2}m_1^{1/2}\mathrm{M}_{}\mathrm{yr}^{-1}$$
(3)
where $`m_1=M_1/\mathrm{M}_{}`$ is the mass of the compact accretor (black hole or neutron star).
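The numerical coefficients in equations (2) and (3) follow from standard constants, taking $`\dot{M}_{\mathrm{Edd}}=L_{\mathrm{Edd}}/c^2`$ with the electron-scattering Eddington luminosity. A quick check (our own sketch; constant values are assumptions, not from the paper):

```python
import math

# cgs constants (assumed standard values)
G, c = 6.674e-8, 2.998e10
m_p, sigma_T = 1.673e-24, 6.652e-25
Msun, yr = 1.989e33, 3.156e7

def mdot_edd_gs(m1):
    # Eddington accretion rate L_Edd/c^2 in g/s for accretor mass m1 (in Msun)
    L_edd = 4 * math.pi * G * m1 * Msun * m_p * c / sigma_T
    return L_edd / c**2

def r_expulsion_cm(mdot_tr, m1=1.0):
    # equation (1): R_ex ~ (Mdot_tr / Mdot_Edd) * R_S, with mdot_tr in Msun/yr
    R_S = 2 * G * m1 * Msun / c**2
    return (mdot_tr * Msun / yr) / mdot_edd_gs(m1) * R_S

coeff = r_expulsion_cm(1.0)                   # ~1.3e14 cm (eq. 2)
limit = 1e7 * mdot_edd_gs(1.0) * yr / Msun    # ~2e-2 Msun/yr (eq. 3, m1 = 1)
```

The accretor mass indeed drops out of `r_expulsion_cm`, since $`\dot{M}_{\mathrm{Edd}}`$ and $`R_S`$ both scale linearly with $`m_1`$.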
CE evolution will be avoided if $`R_{\mathrm{ex}}`$ is smaller than the accretor’s Roche lobe radius $`R_1`$. If the accretor is the less massive star (as will generally hold in cases of interest) we can use standard formulae to write
$$r_1=1.9m_1^{1/3}P_\mathrm{d}^{2/3},$$
(4)
where $`r_1=R_1/\mathrm{R}_{}`$ and $`P_\mathrm{d}`$ is the orbital period measured in days. Combining with equation (2) gives
$$\dot{M}_{\mathrm{tr}}\lesssim 10^{-3}m_1^{1/3}P_\mathrm{d}^{2/3}\mathrm{M}_{}\mathrm{yr}^{-1}.$$
(5)
This form of the limit can be compared directly with observation if we have estimates of the transfer rate, orbital period and the accretor mass. For more systematic study it is useful to replace the dependence on the accretor’s Roche lobe by that on its companion’s. Thus, since the mass transfer rate is specified by properties of the companion star, which is assumed to fill its Roche lobe radius $`R_2`$, we eliminate $`R_1`$ from the condition $`R_{\mathrm{ex}}\lesssim R_1`$ by using the relation
$$\frac{R_1}{R_2}=\left(\frac{m_1}{m_2}\right)^{0.45},$$
(6)
(cf King et al., 1997) where $`M_2=m_2\mathrm{M}_{}`$ is the companion mass. Writing $`R_2=r_2\mathrm{R}_{}`$ we finally get the limit
$$\dot{M}_{\mathrm{tr}}\lesssim 5\times 10^{-4}m_1^{0.45}m_2^{-0.45}r_2\mathrm{M}_{}\mathrm{yr}^{-1}$$
(7)
on the mass transfer rate if CE evolution is not to occur.
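The chain from equation (2) to the two forms of the limit can be written as a small calculator (a sketch of ours; function names and the numerical cross-check are assumptions, not from the paper):

```python
def roche_radius_rsun(m1, P_days):
    # eq. (4): Roche lobe radius of the (less massive) accretor, in R_sun
    return 1.9 * m1 ** (1 / 3) * P_days ** (2 / 3)

def max_mdot_from_period(m1, P_days):
    # eq. (5): largest transfer rate (Msun/yr) with R_ex still inside the lobe
    return 1e-3 * m1 ** (1 / 3) * P_days ** (2 / 3)

def max_mdot_from_donor(m1, m2, r2):
    # eq. (7): the same limit via the donor's mass (Msun) and radius (Rsun)
    return 5e-4 * m1 ** 0.45 * m2 ** (-0.45) * r2
```

Consistency check: inserting the eq. (5) limit back into eq. (2) should return the accretor's Roche lobe radius, to within the rounding of the coefficients.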
## 3. AVOIDANCE OF COMMON ENVELOPE EVOLUTION
By specifying the nature of the companion star we fix $`m_2,r_2`$ and $`\dot{M}_{\mathrm{tr}}`$ in (7), and so can examine whether CE evolution is likely in any given case. Rapid mass transfer occurs if the companion star is rather more massive than the accretor, since then the act of transferring mass shrinks the donor’s Roche lobe. The mass transfer proceeds on a dynamical or thermal timescale depending on whether the donor star’s envelope is largely convective or radiative (e.g. Savonije, 1983). In the first case, CE evolution is quite likely to ensue, as the mass transfer rate rises to very high values. However, even in this case it is worth checking the inequality (7) in numerical calculations, as the e–folding time for the mass transfer is $`t_e\sim (H/R_2)t_M`$, where $`H`$ is the stellar scaleheight and $`t_M`$ is the mass transfer timescale set by whatever process (e.g., nuclear evolution) brought the donor into contact with its Roche lobe initially. For main–sequence and evolved stars we have $`H/R_2\sim 10^{-4},10^{-2}`$ respectively. Thus $`t_e`$ may be long enough that the companion mass is exhausted before (7) is violated.
Thermal–timescale mass transfer is rather gentler, and offers the possibility of avoidance of CE evolution. In addition to the case mentioned above, thermal–timescale mass transfer will also occur if the donor star is crossing the Hertzsprung gap and has not yet developed a convective envelope (i.e., is not close to the Hayashi line), even if it is the less massive star. Detailed calculations (Kolb, 1998) show that in both cases the mass transfer rate is given roughly by
$$\dot{M}_{\mathrm{tr}}\sim \frac{M_2}{t_{\mathrm{KH}}},$$
(8)
where
$$t_{\mathrm{KH}}=3\times 10^7\frac{m_2^2}{r_2l_2}\mathrm{yr}$$
(9)
was the Kelvin–Helmholtz time of the star when it left the main sequence, and $`L_2=l_2\mathrm{L}_{}`$ was its luminosity. (Note that by definition the donor is not in thermal equilibrium, so an originally main–sequence donor will develop a non–equilibrium structure as mass transfer proceeds.) The condition of a radiative envelope requires a main–sequence mass $`m_2\gtrsim 1`$, so we may take
$$r_2\simeq m_2^{0.8},l_2\simeq m_2^3.$$
(10)
Inserting in (9) and (8) we find
$$\dot{M}_{\mathrm{tr}}\simeq 3\times 10^{-8}m_2^{2.8},$$
(11)
so comparing with (7) we require
$$m_2\lesssim 53m_1^{0.18}$$
(12)
and thus (from 11)
$$\dot{M}_{\mathrm{tr},\mathrm{max}}\simeq 2\times 10^{-3}m_1^{0.51}\mathrm{M}_{}\mathrm{yr}^{-1}.$$
(13)
Hence we expect CE evolution to be avoided in thermal–timescale mass transfer from a main–sequence star, or from a Hertzsprung gap star, provided that it has a radiative envelope. This is in agreement with the assumption of no CE evolution in Cyg X–2 made by King & Ritter (1999), where the initial donor mass was about $`3.5\mathrm{M}_{}`$.
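The algebra from (11) to (13) is short enough to verify directly (our own sketch; the Cyg X-2 numbers used in the check are those quoted in the introduction):

```python
def mdot_thermal(m2):
    # eq. (11): thermal-timescale transfer rate in Msun/yr, radiative donor
    return 3e-8 * m2 ** 2.8

def m2_limit(m1):
    # equate eq. (11) with eq. (7), using r2 = m2**0.8 from eq. (10):
    #   3e-8 * m2**2.8 = 5e-4 * m1**0.45 * m2**(-0.45) * m2**0.8
    #   =>  m2**2.45 = (5e-4 / 3e-8) * m1**0.45, which is eq. (12)
    return (5e-4 / 3e-8) ** (1 / 2.45) * m1 ** (0.45 / 2.45)
```

For $`m_1=1`$ this recovers $`m_2\lesssim 53`$ and $`\dot{M}_{\mathrm{tr},\mathrm{max}}\approx 2\times 10^{-3}\mathrm{M}_{}`$ yr<sup>-1</sup>; a Cyg X-2-like donor of $`3.5\mathrm{M}_{}`$ sits far below the limit, with a transfer rate of order $`10^{-6}\mathrm{M}_{}`$ yr<sup>-1</sup>.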
## 4. CONCLUSIONS
We have derived a general criterion for the avoidance of common–envelope evolution in a binary in which the accretor is a neutron star or a black hole. This shows that thermal–timescale mass transfer from a main–sequence star is unlikely to lead to CE evolution, as is mass transfer from a Hertzsprung gap star, provided that the envelope is radiative. The first possibility allows the early massive Case B evolution inferred by King & Ritter (1999) for the progenitor of Cyg X–2. SS433 may be an example of the second possibility, with a fairly massive donor star. We will discuss this possibility in detail in a future paper.
The considerations of this paper suggest that common–envelope evolution with a neutron–star or black–hole accretor generally requires an evolved donor with a deep convective envelope. This represents a slight restriction on some of the routes invoked in the possible formation of Thorne–Żytkow objects.
This research was carried out at the Institute for Theoretical Physics and supported in part by the National Science Foundation under Grant No. PHY94–07194. ARK gratefully acknowledges support by the UK Particle Physics and Astronomy Research Council through a Senior Fellowship. MCB acknowledges support from NSF grant AST95–29170 and a Guggenheim Fellowship.
# Predicting Stellar Angular Sizes
## 1 Introduction
Prediction of stellar angular sizes is a tool that has come to be used with greater frequency with the advent of high resolution astronomical instrumentation. Structure at the tens to hundreds of milliarcsecond (mas) level is now being routinely observed with the Hubble Space Telescope, speckle interferometry, and adaptive optics systems. Single milliarcsecond observations, selectively available for many years with the technique of lunar occultations, are now becoming less specialized as prototype interferometers in the optical and near infrared evolve towards facility class instruments. For all of these telescopes and techniques, it is often desirable to predict the angular sizes of stars, to select either appropriate targets or calibration sources.
Detailed photometric and spectrophotometric predictive methods provide results with high accuracy (1-2% diameters; cf. Blackwell & Lynas-Gray 1998, Cohen et al. 1996). However, diameters from these methods require large amounts of data that are often difficult to obtain, and as such, are available for a limited number of objects. For the general sample of stars, only limited information is available, and spectral typing, photometry, and parallaxes are all less available and less accurate as one examines stars at greater distances. Deriving expected angular sizes is a greater challenge in this case. Fortunately the general availability of $`B`$ or $`V`$ band data, and the forthcoming release of the data from the 2MASS and DENIS surveys, which have limiting magnitudes of $`K>14.3`$ and $`13.5`$, respectively (Beichman et al. 1998, Epchtein 1997), will provide at least broad-band photometry for these more distant sources. Given these databases, a method based strictly upon this widely available data is of general utility. In this paper a method based solely upon $`K`$ and either $`B`$ or $`V`$ broad-band photometry will be presented, and it will be shown that angular sizes for a wide variety of sources can be robustly predicted with merely two-color information. A similar relationship is discussed by Mozurkewich et al. (1991), who present a ‘distance normalized uniform disk angular diameter’ as a function of $`RI`$ color, but with a limited number ($`N=12`$) of objects to calibrate the relationship. Related to these methods is the study of stellar surface brightness as a function of $`VK`$ color published by Di Benedetto (1993), which built on the previous work by Barnes & Evans (Barnes & Evans 1976, Barnes et al. 1976, 1978).
## 2 Sources of Data
The relationship between angular size and color to be presented in §3 is strictly empirical. The angular sizes and photometry utilized to calibrate the method are all available in the literature, and in many cases are also online, and their sources are presented below.
### 2.1 Available Angular Size Data
As a test of the method, we shall be examining its predictions against known angular diameters. For stars that have evolved off of the main sequence, angular diameters as determined in the near-infrared are preferred, as limb darkening - and the need for models to compensate for it - is less than at shorter wavelengths. There are four primary sources in the literature of near-infrared angular diameters (primarily K band):
Kitt Peak. The lunar occultation papers by Ridgway and his coworkers (Ridgway et al. 1974, Ridgway 1977, Ridgway et al. 1977, 1979, 1980a, 1980b, 1982a, 1982b, 1982c, Schmidtke et al. 1986) established the field of measuring angular sizes of cool stars in the near-infrared. This effort is no longer active.
TIRGO. The lunar occultation papers by Richichi and his coworkers (Richichi et al. 1988, 1991, 1992a, 1992b, 1995, 1998a, 1998b, 1998c, 1999, Di Giacomo et al. 1991) have further developed this particular technique of diameter determinations. The group is continuing to explore the high-resolution data obtainable from lunar occultations. The recent publications from the TIRGO group include data from medium to large aperture telescopes (1.23m - 3.5m), along with concurrent photometry.
IOTA. The K band angular diameters papers from the Infrared-Optical Telescope Array by Dyck and his coworkers (Dyck et al. 1996a, 1996b, 1998, van Belle et al. 1996, 1997, 1999b) provided a body of information on normal giant and supergiant stars, and also on more evolved sources such as carbon stars and Mira variables. Recently, results from this interferometer using the FLUOR instrument have become available (Perrin et al. 1998).
PTI. Although there is only one angular diameter paper currently available from the Palomar Testbed Interferometer (van Belle et al. 1999a), 69 objects are presented in the manuscript from this highly automated instrument.
Altogether, this collection from the literature represents 92 angular diameters for 67 carbon stars and Miras, and 197 angular diameters for 190 giant and supergiant stars. In addition to these near-infrared observations of evolved objects, shorter wavelength observations were used to obtain diameters for main sequence objects – few near infrared observations exist for these smaller sources. These objects were culled from the catalog by Fracassini et al. (1988), limiting the investigation to direct angular size measures found in that catalog: lunar occultations, eclipsing and spectroscopic binaries, and the intensity interferometer observations of Hanbury Brown et al. (1974). Unfortunately, this sample of 50 main sequence objects is much smaller than the evolved star sample, largely reflecting the current resolution limits of roughly 1 mas in both the interferometric and lunar occultation approaches: a one solar radius object at a distance of 10 pc has an angular size of 0.92 mas. Furthermore, many of the main sequence stars did not have sufficient photometry to be used in the technique discussed in §3. Fortunately, added to this sample are the well-calibrated measurements for the Sun (Allen 1973).
Shorter wavelength observations of giant and supergiant stars, while available (e.g., Hutter et al. 1989, Mozurkewich et al. 1991), were not utilized in this study for two reasons. First, there are complications arising from reconciling angular diameters inferred from short wavelength ($`\lambda <1.2\mu `$m) observations with the desired Rosseland mean diameters for these cooler stars. Second, the majority of the data collected on these stars, represented in the Mark III interferometer database, remains unpublished. Fortunately, these data are anticipated to be published soon (Mozurkewich 1999) and will be complemented by additional short-wavelength data from the NPOI interferometer (Nordgren 1999).
### 2.2 Sources of Photometry
The widespread availability of Internet access, coupled with the electronic availability of most (if not all) of the photometric catalogs, has made the task of researching archival photometry much more tractable. When photometry was not directly available from the telescope observations, the archival sources utilized in this investigation were as follows:
General Data. One of the more thorough references on stellar objects is SIMBAD (Egret et al. 1991; http://simbad.u-strasbg.fr/ (France) and http://simbad.harvard.edu/ (US Mirror)). In addition to the web-based query forms, one may also obtain information from SIMBAD by telnet and email. It is important to note that SIMBAD is merely a clearing house of information from a wide variety of sources and is not an original source in and of itself; any information that ends up being crucial to the merit of an astrophysical investigation should be checked against its primary source.
Infrared Photometry ($`\lambda >1\mu `$m). The Catalog of Infrared Observations (CIO), an extensive collection of IR photometry by Gezari et al. (1993), has been updated, although the most recent version is available only online (Gezari, Pitts & Schmitz 1997). The latter catalog can be queried with individual stars or lists of objects at VizieR (Genova et al. 1997; http://vizier.u-strasbg.fr/ (France) and http://adc.gsfc.nasa.gov/viz-bin/VizieR (US Mirror)). As with the SIMBAD data, the CIO is merely a collection of the data in the literature, and examination of the primary sources is advised. Also, as noted in the introduction, the forthcoming release of the 2MASS and DENIS catalogs will greatly augment the collective database of near-infrared photometry (whose home pages are http://www.ipac.caltech.edu/2mass/ and http://www-denis.iap.fr/denis.html, respectively).
Visual Photometry. The General Catalog of Photometric Data (GCPD) provides a large variety of wide- to narrow-band visual photometric catalogs (Mermilliod et al. 1997; http://obswww.unige.ch/gcpd/gcpd.html). For variable stars, the American Association of Variable Star Observers (AAVSO) and its French counterpart, the Association Française des Observateurs d’Etoiles Variables (AFOEV) are both excellent sources of epoch-specific visible light photometry (Percy & Mattei 1993, Gunther 1996; http://www.aavso.org/ and http://cdsweb.u-strasbg.fr/afoev/, respectively).
## 3 Zero Magnitude Angular Size versus $`VK`$, $`BK`$ Colors
The large body of angular sizes now available allows for direct predictions of expected angular sizes, bypassing many astrophysical considerations, such as atmospheric structure, distance, spectral type, reddening, and linear size. To compare angular sizes of stars at different distances, one approach is to scale the sizes relative to a zero magnitude of $`V=0`$:
$$\theta _{V=0}=\theta \times 10^{V/5}.$$
(1)
The angular size thus becomes a measure of apparent surface brightness (a more detailed discussion of related quantities may be found in Di Benedetto 1993.) Conversion between a $`V=0`$ zero magnitude angular size, $`\theta _{V=0}`$, and actual angular size, $`\theta `$, is trivial with a known $`V`$ magnitude and the equation above. The same approach has been employed for $`K=0`$ (see Dyck et al. 1996a) and will also be applied in this paper to $`B=0`$. Given the general prevalence of $`V`$ band and the inclusion of $`B`$ band data in the 2MASS catalog, the apparent angular size approach will be developed here for $`VK`$ and $`BK`$ colors.
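The conversion in equation (1) and its inverse can be sketched as follows (our own helper names, not from the paper):

```python
def theta_zero_mag(theta_mas, mag):
    # eq. (1): scale a measured angular size (mas) to the size the star
    # would show at magnitude zero -- an apparent surface-brightness measure
    return theta_mas * 10 ** (mag / 5)

def theta_from_zero_mag(theta0_mas, mag):
    # inverse conversion back to the true angular size, given the magnitude
    return theta0_mas * 10 ** (-mag / 5)
```

The same pair of conversions applies unchanged to the $`K=0`$ and $`B=0`$ scalings discussed below.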
### 3.1 Evolved Sources: Giant and Supergiant Stars
163 normal giant and supergiant stars found in the interferometry and lunar occultation papers were also found to have available $`V`$ photometry. By examining their near-infrared angular sizes, we can establish a relationship between $`V=0`$ zero magnitude angular size and $`VK`$ color:
$$\theta _{V=0}=10^{0.669\pm 0.052+0.223\pm 0.010\times (VK)}.$$
(2)
The errors on the 2 parameters in the equation above are $`1\sigma `$ errors determined from a $`\chi ^2`$ minimization; given 2 degrees of freedom in the equation, $`\mathrm{\Delta }\chi ^2=2.30`$ about the $`\chi ^2`$ minimum for this case (Press et al. 1992). Similar error calculations will be given for all other relationships reported in this manuscript. Examining the distribution of the differences between the fit and the measured values, $`\mathrm{\Delta }\theta _{V=0}`$, we find an approximately Gaussian distribution with the rms value of the 163 differences yielding a fractional error of $`(\mathrm{\Delta }\theta _{V=0}/\theta _{V=0})_{rms}=11.7\%`$.
Similarly, for $`BK`$ color, 136 giant and supergiant stars had available photometry, resulting in the following fit:
$$\theta _{B=0}=10^{0.648\pm 0.072+0.220\pm 0.012\times (BK)},$$
(3)
with an rms error of 10.8%.
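A hedged sketch of how equations (2) and (3) might be used in practice; the function names and the sample photometry below are illustrative, not from the text, and the roughly 11% rms scatter of the fits applies to any prediction:

```python
def theta_giant(color, band="VK"):
    """Zero magnitude angular size (mas) of a giant or supergiant
    from its V-K or B-K color, using the central fit coefficients
    of equations (2) and (3)."""
    if band == "VK":
        return 10 ** (0.669 + 0.223 * color)  # equation (2)
    if band == "BK":
        return 10 ** (0.648 + 0.220 * color)  # equation (3)
    raise ValueError("band must be 'VK' or 'BK'")

def predict_apparent_size(v_mag, k_mag):
    """Apparent angular size (mas) of a giant from V and K alone."""
    theta_v0 = theta_giant(v_mag - k_mag, "VK")
    return theta_v0 * 10 ** (-v_mag / 5.0)

# Illustrative K giant with V = 0.85, K = -2.90 (V-K = 3.75):
# the predicted apparent size is roughly 22 mas.
size = predict_apparent_size(0.85, -2.90)
```

Only two photometric measurements are required, which is what makes the zero magnitude approach so inexpensive to apply.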
The relationship appears valid over a $`VK`$ range of 2.0 to 8.0. Blueward of $`VK=2.0`$, the subsample is too small ($`N=3`$) to confidently indicate whether or not the fit is valid, in spite of the goodness of fit for the whole subsample. The same is true redward of $`VK=8.0`$. Also, for stars redward of approximately $`VK=8`$, care must be taken to exclude variable stars (both semiregular and Miras). The data points and the fit noted above may be seen in Figure 1; $`\theta _{V=0}`$ and its standard deviation by $`VK`$ bin are given in Table 1. The Miras are plotted separately in Figure 2 and will be discussed below.
For $`BK`$ between 3.0 and 7.5, the relationship exhibits similar, if not slightly superior, validity. As with the $`VK`$ color, the relationship appears to remain valid blueward of the short edge of that range, down to $`BK=1`$, but the data are sparse. Redward of $`BK=7.5`$, the relationship also exhibits potential confusion with the Mira variable stars, although there appears to be less degeneracy; this is possibly due to a lesser availability of $`B`$ band data on these very red sources. The data points and the fit noted above may be seen in Figure 3; $`\theta _{B=0}`$ and its standard deviation by $`BK`$ bin are given in Table 2.
The potential misclassification of more evolved sources such as carbon stars and variables (Miras or otherwise) as normal giant and supergiant stars is a significant secondary consideration. For the dimmer sources for which little data is available, non-classification is perhaps the more appropriate term. What is reassuring with regards to the issue of classification errors is that the robust relationships between $`(\theta _{V=0},VK)`$ and $`(\theta _{B=0},BK)`$ are valid for stars of luminosity class I, II, and III, and that the more evolved stars occupy a redder range of $`BK`$ and $`VK`$ colors (cf. §3.2). Since the $`\theta _{V=0}`$ and $`\theta _{B=0}`$ relationships are insensitive to errors in luminosity class, this method is more robust than the linear radius-distance method, particularly for those stars in the $`2.0<VK<6.0`$ and $`3.0<BK<7.5`$ ranges, where few if any stars of significant variability exist. This relationship is also considerably easier to employ than the method of blackbody fits.
### 3.2 Evolved Sources: Variable Stars
By examining the 2.2 $`\mu `$m angular sizes for the 87 observations of 65 semiregular variables, Mira variables and carbon stars (broadly classified here as ‘variable stars’) found in the literature, we can establish a relationship between $`V=0`$ zero magnitude angular size and $`VK`$ color:
$$\theta _{V=0}=10^{0.789\pm 0.119+0.218\pm 0.014\times (VK)}.$$
(4)
The rms error associated with this fit is 26%. The data points and the fit noted above may be seen in Figure 3. Similarly, for $`BK`$ color, 19 evolved sources had available photometry for 29 angular size observations, resulting in the following fit:
$$\theta _{B=0}=10^{0.840\pm 0.096+0.211\pm 0.008\times (BK)},$$
(5)
with an rms error of 20%.
For the variable stars, the relationship appears valid over $`VK`$, $`BK`$ ranges of 5.5 to 13.0 and 9.0 to 16.0, respectively. Redward of $`VK=13`$, the sample is too small ($`N=3`$) to confidently indicate whether or not the fit is valid, in spite of the goodness of fit for the general sample. It is interesting to note that the slopes of the fits for the variable stars and for the giant/supergiant stars are statistically identical for both $`VK`$ and $`BK`$ colors; only the intercepts are different. This corresponds to a $`\theta _{V=0}`$ size factor of $`1.40\pm 0.15`$ between the smaller normal stars and the larger variable stars for a given $`VK`$ color, and a corresponding $`\theta _{B=0}`$ size factor of $`1.34\pm 0.21`$.
### 3.3 Main Sequence Stars
By examining the objects in the Fracassini catalog (1988; specifically, many objects from Hanbury Brown et al. 1974), there appear to be similar relationships between the $`VK`$ & $`BK`$ colors and the $`\theta _{V=0}`$ & $`\theta _{B=0}`$ angular sizes. The sample set of stars with adequate photometry is unfortunately limited to 11 objects. However, the Hanbury Brown objects and the Sun are measured with high accuracy and allow for accurate calibration of the stellar zero magnitude angular sizes in the ranges of $`-0.4<VK<+1.5`$ and $`-0.6<BK<+2.0`$. Limiting the fit analysis to the robust measurements from Hanbury Brown and for the Sun, the relationships between the colors and their zero magnitude angular sizes are
$`\theta _{V=0}=10^{0.500\pm 0.023+0.264\pm 0.012\times (VK)}\text{,}`$ and (6)
$`\theta _{B=0}=10^{0.500\pm 0.012+0.290\pm 0.016\times (BK)}.`$ (7)
The resulting rms errors are only 2.2% for both the $`VK`$ and $`BK`$ relationships. The $`\theta _{V=0}`$ versus $`VK`$ data for these objects are plotted in Figure 4; the $`\theta _{B=0}`$ versus $`BK`$ data are similar in appearance and will not be plotted. The relationship holds not only for the B and A type objects in the $`-0.5<VK<+0.5`$ range, but also for the Sun at $`VK\approx 1.5`$. Also plotted is the fit for giants and supergiants, which has a slightly different slope; the two fits are shown intersecting at $`VK\approx 2.5`$, although due to poor sampling in this region it is unclear how (or if) the two functions truly join.
### 3.4 Analysis of Errors
As was given in §3.1, the rms fractional error between the measured and predicted values for $`\theta _{V=0}`$ versus $`VK`$ for giants and supergiants is $`(\mathrm{\Delta }\theta _{V=0}/\theta _{V=0})_{rms}=11.7\%`$. There are three components of this error: (1) Angular size errors, (2) Errors in $`VK`$, and (3) Deviations in the relationship due to unparameterized phenomena, which shall be broadly labeled ‘natural dispersion’ in the relationship and will be discussed in more detail below. For the first component, the rms fractional error of the 163 measured $`\theta `$ values found in the literature is $`(\mathrm{\Delta }\theta /\theta )_{rms}=6.9\%`$. For the photometry, given the heterogeneous sources, we estimate that the $`V`$ and $`K`$ photometry will have errors between 0.1 and 0.2 magnitudes (resulting in $`VK`$ color errors of 0.14 to 0.28 magnitudes), which would result in a size error of 3.1-6.3%. Finally, subtracting these two sources of measurement error in quadrature from the measured dispersion, a natural dispersion in the relationship between 7.0 and 8.9% remains. A similar analysis for the giant/supergiant $`\theta _{B=0}`$ versus $`BK`$ results in $`(\mathrm{\Delta }\theta /\theta )_{rms}=7.0\%`$ for the 136 observations, indicating 5.2-7.6% of natural dispersion. For both of these relationships, the natural dispersion is a factor as significant as the errors in angular size, and potentially the dominant factor.
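The quadrature bookkeeping of this section can be sketched as follows; all quantities are fractional errors, and the numbers in the comments are the giant/supergiant $`VK`$ values quoted above:

```python
import math

def natural_dispersion(rms_total, rms_size, rms_phot):
    """Subtract angular size and photometric size errors in
    quadrature from the total observed scatter; what remains is
    the 'natural dispersion' of the relationship."""
    resid = rms_total ** 2 - rms_size ** 2 - rms_phot ** 2
    return math.sqrt(max(resid, 0.0))

# 11.7% total scatter, 6.9% size error, 3.1-6.3% photometric size
# error -> 7.0-8.9% natural dispersion, as quoted in the text.
upper = natural_dispersion(0.117, 0.069, 0.031)
lower = natural_dispersion(0.117, 0.069, 0.063)
```

The `max(..., 0.0)` guard simply keeps the square root real when measurement errors fully explain the observed scatter, as happens for the main sequence stars below.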
For the main sequence stars, the errors in angular size for both colors were $`(\mathrm{\Delta }\theta /\theta )_{rms}=4.5\%`$; the errors in photometry were expected to be no different than those of the giant/supergiant stars, at 0.1-0.2 magnitudes per photometric band. The main sequence stars exhibited no measurable level of natural dispersion: the observed rms spread in both the $`VK`$ and $`BK`$ relationships can be fully accounted for by angular size and color errors.
For the variable stars, the difficulties in obtaining contemporaneous photometry result in larger measurement error, despite steps taken to ensure epoch-dependent observations. As such, the errors are expected to be between 0.2 and 0.4 magnitudes for the individual $`V`$ and $`K`$ measurements. The resulting natural dispersion of 20-23% for the $`VK`$ relationship, and 12-16% for $`BK`$, dominates the angular size dispersion of $`(\mathrm{\Delta }\theta /\theta )_{rms}=10\%`$ for both colors.
The specific nature of the natural dispersion term in the rms error is potentially due to stellar surface properties that affect current one-dimensional angular size determination techniques. The limited observations of individual objects with two-dimensional and more complete spatial frequency coverage have indicated asymmetries in stellar atmospheres that could potentially affect size determinations from both interferometry and lunar occultations. Early measurements of this nature included the detection of asymmetries in the envelope of $`o`$ Cet with speckle interferometry (Karovska et al. 1991). Direct imaging of the surface of $`\alpha `$ Ori has provided evidence of a large hot spot on that supergiant’s surface (Gilliland & Dupree 1996). More recently, similar evidence for aspheric shapes of other Miras has been obtained, also with HST (Lattanzi et al. 1997), and evidence for more complicated morphologies in the structure of the M5 supergiant VY CMa in the near-IR has been obtained using nonredundant aperture masking on Keck 1 (Monnier et al. 1999, Tuthill et al. 1999). Various atmospheric phenomena, such as nonradial pulsations, spots on the stellar surface, and rotational distortion of the stellar envelope, potentially explain these observations. The progressive increase in observed natural dispersion along stellar evolutionary states, from undetectable levels for the main sequence stars to dominant levels for the most evolved sources, is consistent with the onset of these phenomena, which are more significantly associated with extended atmospheres.
Interstellar Extinction. A brief discussion of the potential impact of interstellar extinction upon the results presented herein is warranted. The empirical reddening determination made by Mathis (1980), which agrees very well with van de Hulst’s theoretical reddening curve number 15 (see Johnson 1968), predicts that $`A_K=0.11A_V`$. From that relative reddening value, the effect of interstellar reddening upon the various angular size expressions may be derived to be:
$`\theta _{V=0}^{\prime }=\theta _{V=0}\times 10^{0.225\times [(VK)^{\prime }-(VK)]}`$\text{,} and (8)
$`\theta _{B=0}^{\prime }=\theta _{B=0}\times 10^{0.218\times [(BK)^{\prime }-(BK)]}.`$ (9)
Comparison of the slopes of equations (2) and (3) with (8) and (9) demonstrates that the angular size predictions for giant and supergiant stars are almost wholly unaffected by the effects of interstellar extinction: any apparent reddening of a star’s $`VK`$ or $`BK`$ color is accompanied by an increase in the associated zero magnitude angular size, along the slope of the prediction lines. This effect is independent of the absolute amount of reddening encountered by a star, since it is a relative effect between the two bandpasses of a given color.
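The near-cancellation can be checked numerically; a sketch using the quoted Mathis (1980) value $`A_K=0.11A_V`$ and the central coefficients of equation (2), with an illustrative color and extinction:

```python
def reddening_bias(true_vk, a_v):
    """Ratio of the equation (2) prediction for a reddened giant
    to its true reddened zero magnitude size (equation 8)."""
    d_vk = (1.0 - 0.11) * a_v                  # color excess E(V-K)
    predicted = 10 ** (0.669 + 0.223 * (true_vk + d_vk))
    actual = 10 ** (0.669 + 0.223 * true_vk) * 10 ** (a_v / 5.0)
    return predicted / actual

# Even 5 magnitudes of visual extinction biases the giant
# prediction by under 2 percent:
ratio = reddening_bias(3.0, 5.0)
```

The residual bias scales as $`10^{(0.225-0.223)\times E(VK)}`$, i.e., only the tiny slope mismatch between the fit and the reddening vector matters.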
For main sequence stars, the difference in slope between the prediction lines and the reddening effect indicates a gradual underestimation of actual stellar size as reddening increases. Based upon typical reddening values of $`A_V=0.8`$ to 1.9 mag/kpc, a 2.2% effect (consistent with the expected level of error in the angular size prediction) will be present for stars with $`A_V=0.18`$, corresponding to distances between 95 and 225 pc. For the variable stars, the slopes also skew, but slightly less so, and in the opposite sense: the gradual trend will be to overestimate sizes for reddened sources. A 20% effect for these stars will be present at $`A_V=16.9`$, corresponding to distances between 8.8 and 21 kpc, clearly not a significant factor for the current accuracy of either the $`VK`$ or $`BK`$ relationship.
## 4 Comparison of the Various Methods
Previous approaches for estimating angular sizes have included estimates of stellar linear size coupled with distance measurements or estimates, and the extraction of angular sizes by treating the objects as blackbody radiators. The release of the Hipparcos catalog (Perryman et al. 1997), with its parallax data, has increased the utility of the first method. Spectral type and $`VK`$ color have been explored as indicators of intrinsic linear size for giant stars (van Belle et al. 1999a). Similarly, spectral type can be used to predict linear size for main sequence stars (Allen 1973), although this relationship appears to be poorly characterized. There does not appear to be a $`VK`$-linear radius relationship presented for these stars in the literature, which would be consistent with both photometric bands being on the Rayleigh-Jeans tail of the blackbody curve for these hotter ($`T>6000K`$) objects. The relative errors for predicting stellar angular diameters were calculated as discussed in §3 for the stars in the §2 sample using these alternative methods, and are summarized in Table 3. For all of the stars in question, deriving an apparent angular size from a $`\theta _{V=0}`$ or $`\theta _{B=0}`$ zero magnitude angular size delivers the best results.
## 5 Conclusion
The new approach of establishing the $`\theta _{V=0}`$ and $`\theta _{B=0}`$ zero magnitude angular sizes appears to be an unrecognized yet powerful tool for predicting the apparent angular sizes of stars of all classes. The very modest data requirements of this method make it an ideal tool for quantification of this fundamental stellar parameter.
Part of the work described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology under contract with the National Aeronautics and Space Administration. I would like to thank Andy Boden, Mark Colavita, Mel Dyck, Steve Ridgway, and Bob Thompson for thoughtful comments during the development of this manuscript, and an anonymous referee who provided valuable feedback during the publication process. This research has made use of the SIMBAD, VizieR, and AFOEV databases, operated by the CDS, Strasbourg, France. In this research, we have used, and acknowledge with thanks, data from the AAVSO International Database, based on observations submitted to the AAVSO by variable star observers worldwide.
# Optimal Path in Two and Three Dimensions
## Abstract
We apply the Dijkstra algorithm to generate optimal paths between two given sites on a lattice representing a disordered energy landscape. We study the geometrical and energetic scaling properties of the optimal path, where the energies are taken from a uniform distribution. Our numerical results for both two and three dimensions suggest that the optimal path for random uniformly distributed energies is in the same universality class as the directed polymers. We present physical realizations of polymers in a disordered energy landscape for which this result is relevant.
Recently, there has been much interest in the problem of finding the optimal path in a disordered energy landscape. The optimal path can be defined as follows. Consider a $`d`$-dimensional lattice, where each bond is assigned a random energy value taken from a given distribution. The optimal path between two sites is defined as the path on which the sum of the energies is minimal. This problem is of relevance to various fields, such as spin glasses , protein folding , paper rupture , and the traveling salesman problem . Though much effort has been devoted to studying this problem, the general solution is still lacking. Two approaches have been developed recently to study this problem. Cieplak et al. applied the max-flow algorithm to a two-dimensional energy landscape. Another approach is to restrict the path to be directed, that is, the path cannot turn backwards. This approach is the directed polymer problem, which has been extensively studied in the past years, see e.g., .
In this manuscript, we adapt the Dijkstra algorithm from graph theory for generating the optimal path on a lattice with randomly distributed positive energies assigned to the bonds. This algorithm enables us to generate the optimal path between any two sites on the lattice, not restricted to directed paths. We study the geometrical and energetic properties of the optimal paths in $`d=2`$ and $`3`$ dimensions for a uniform random distribution of energies. We calculate the scaling exponents for the width and the energy fluctuations of the optimal path. We find that for both $`d=2`$ and $`d=3`$ the exponents are very close to those of directed polymers, suggesting that the non-directed optimal path (NDOP) is in the same universality class as the directed polymer (DP). Our results are in agreement with those found by Cieplak et al. for the two-dimensional case. This result indicates that in the case of uniformly distributed energies the NDOPs are self-affine and overhangs do not play an important role in the geometry of NDOPs.
Our results are relevant, for example, in the following polymer realizations: (i) Consider a $`d`$-dimensional energy landscape in which there is a spherical regime of randomly distributed high energies, while outside this sphere the energies are zero or have very low values. Consider as well a polymer of length $`N`$, one end of which is attached to the center of the sphere while the other is free. The radius of the sphere is $`rN`$ (see Fig. 1a). The section of the polymer inside the sphere will reach the lowest energy path, which is the optimal path studied here, i.e., with a self-affine structure. (ii) Consider a polymer in a $`d`$-dimensional energy landscape which is divided into alternating strips of disordered low and high energies (see Fig. 1b). In the strips of high energies the polymer is expected to behave like the optimal path.
The Dijkstra algorithm enables one to find the optimal path from a given source site to each site on a $`d`$-dimensional lattice. During the execution of the algorithm, each site on the lattice belongs to one of three sets (see Fig. 2):
* The first set includes sites for which their optimal path to the source site have been already found.
* The second set includes sites that are relaxed at least once but their optimal path to the source has not been determined yet. This set is the perimeter of the first set.
* The third set includes all sites on the lattice which have not been visited yet.
The algorithm itself consists of two parts, (i) initialization and (ii) the main loop. The main loop, in its turn, is composed of (a) the search and (b) the relaxation processes.
In the initialization part we prepare the lattice in the following way. Each bond is assigned a random energy value taken from a given distribution. Each site is assigned an energy value of infinity. We pick a source site, assign it an energy value of zero, and insert it into the second set.
After that we enter the main loop. We perform the search among the sites of the second set and find the one with the minimal energy value. Then we add it to the first set and proceed to the relaxation process. This site is called the added site. The relaxation process deals with sites neighboring the added site that do not belong to the first set.
In the relaxation process we compare two values: the energy value of the neighboring site, and the sum of the energy value of the added site and the energy value of the bond between these sites. If the value of the sum is smaller, then we a) assign it to the neighboring site, b) connect the neighboring site and the added site by the path (thick bond in Fig. 2), c) if the neighboring site belongs to the second set, break its previous connection to another site (thick bond), d) if the neighboring site does not belong to the second set, insert it into the second set. These four steps are demonstrated in Fig. 2.
Normally, the main loop stops when the second set is empty; however, one might wish to break the loop earlier, e.g., at the moment when the first set reaches the edge of the lattice, in order to avoid boundary effects.
Each site that belongs to the first set is connected to the source by a permanent path (thick bonds) that does not change during the execution of the algorithm, so if we stop the algorithm at any given time, the first set will still be valid.
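A minimal sketch of the procedure just described, on a square lattice with uniformly distributed bond energies (names are illustrative; a binary heap plays the role of the second set, popped sites join the first set, and `prev` stores the permanent path links):

```python
import heapq
import random

def optimal_paths(n, source=(0, 0), seed=1):
    """Dijkstra on an n x n square lattice with random positive bond
    energies; returns the minimal path energy and the predecessor
    link ('thick bond') for every site reachable from `source`."""
    rng = random.Random(seed)
    bond = {}                          # lazily generated bond energies
    def energy(a, b):
        key = (min(a, b), max(a, b))   # canonical key for the bond
        if key not in bond:
            bond[key] = rng.random()
        return bond[key]

    dist = {source: 0.0}               # best known path energy per site
    prev = {source: None}              # permanent path links
    done = set()                       # the 'first set'
    heap = [(0.0, source)]
    while heap:
        d, site = heapq.heappop(heap)  # search: minimal energy site
        if site in done:
            continue                   # stale heap entry, skip
        done.add(site)
        x, y = site
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nb[0] < n and 0 <= nb[1] < n) or nb in done:
                continue
            cand = d + energy(site, nb)          # relaxation
            if cand < dist.get(nb, float("inf")):
                dist[nb] = cand
                prev[nb] = site                  # re-link the thick bond
                heapq.heappush(heap, (cand, nb))
    return dist, prev
```

The optimal path to any site is recovered by following `prev` back to the source, and breaking the loop early (e.g., once the first set reaches the lattice edge) only requires a test after the pop.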
We simulate both DPs and NDOPs on a square lattice in the following way. Let $`x,y`$ be the horizontal and vertical axes. We choose the origin to be the source site and study the optimal paths connecting it with all the sites on the line between $`[0,t]`$ and $`[t,0]`$ for different values of $`t`$. The generalization to three dimensions is straightforward. The random energies assigned to bonds are taken from a uniform distribution. We find that our results are independent of the distribution interval.
In Fig. 3 we compare a configuration of DP and NDOP on the same disordered energy landscape. It is seen that in the NDOP only very few overhangs exist. To test the effect of the overhangs we calculate the mean end-to-end distance $`R`$ of the global optimal path (thick line in Fig. 3) as a function of its length $`\mathrm{}`$. The global optimal path is the minimal energy path among all the paths with the same value of $`t`$. Our numerical results clearly indicate the asymptotic relation $`\mathrm{}\sim R`$, showing that the NDOPs are self-affine . We should compare this result to the strong disorder limit where the paths can be regarded as self-similar fractals, with $`\mathrm{}\sim R^{d_{\mathrm{opt}}}`$, where $`d_{\mathrm{opt}}\approx 1.22`$ in $`d=2`$ and $`d_{\mathrm{opt}}\approx 1.42`$ in $`d=3`$ .
In order to compare NDOP to DP we study several properties, such as the roughness exponent $`\xi `$ and the energy fluctuation exponent $`\zeta `$ in two and three dimensions, as well as the distribution of the endpoints of DP and NDOP. The above exponents are defined by the relations $`W\equiv \langle h^2\rangle ^{1/2}\sim t^\xi `$ and $`\mathrm{\Delta }E\equiv \langle (E-\langle E\rangle )^2\rangle ^{1/2}\sim t^\zeta `$. Here, $`h`$ is the transverse fluctuation of the global optimal path, which is the distance between its endpoint and the line $`x=y`$; $`E`$ is the energy of the global optimal path, which is the sum of all bond energies along the path. The average is taken over different realizations of randomness. Fig. 4 shows the dependence of the width $`W`$ and energy fluctuation $`\mathrm{\Delta }E`$ of DP and NDOP on $`t`$ in two and three dimensions. The points are the data for both DP and NDOP and the dashed lines represent the exponents of the DP. Our results indicate that the exponents for the NDOP are very close to those of DP (see also Table I).
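The exponents $`\xi `$ and $`\zeta `$ are simply slopes on log-log axes; a sketch of such a fit on synthetic data built with the accepted $`d=2`$ directed polymer roughness value $`\xi =2/3`$ (the data are illustrative, not the paper's measurements):

```python
import math

def fit_exponent(ts, ws):
    """Least squares slope of log w versus log t."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(w) for w in ws]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic widths obeying the scaling W ~ t^(2/3):
ts = [2 ** k for k in range(4, 12)]
ws = [t ** (2.0 / 3.0) for t in ts]
xi = fit_exponent(ts, ws)
```

On real data from the Dijkstra runs, `ws` would hold the ensemble-averaged widths (or energy fluctuations, for $`\zeta `$) at each $`t`$.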
Our results may be related to the recent findings that the roughness exponents of the minimal energy domain wall in the random Ising model and of the fracture interface are the same in $`d=2`$ . In these cases, similar to our case, although overhangs may occur they do not play an important role.
In summary, our results suggest that the optimal path in the case of uniformly distributed energies, for any energy interval, is in a different universality class from the strong disorder limit but in the same universality class as directed polymers. This result is relevant to several questions regarding the equilibrium state of polymers in different realizations of disordered energy landscape.
# The fluid mechanics of dark matter formation
## 1 Introduction
Jeans’s theory fails because it is linear. Linear theories typically fail when applied to nonlinear processes; for example, applications of the linearized Navier-Stokes equations to problems of fluid mechanics give laminar solutions, but observations show that actual flows are turbulent when the Reynolds number is large.
Cosmology (e.g., Peebles (1993), Padmanabhan (1993)) relies exclusively on Jeans’s theory in modelling the formation of structure by gravitational instability. Because $`L_J`$ for baryonic (ordinary) matter in the hot plasma epoch following the Big Bang is larger than the Hubble scale of causality $`L_H=ct`$, where $`c`$ is the speed of light and $`t`$ is the time, no baryonic structures can form until the cooling plasma forms neutral gas. Star and galaxy formation models invented to accommodate both Jeans’s theory and the observations of early structure formation have employed a variety of innovative maneuvers and concepts. Nonbaryonic “cold dark matter” was invented to permit nonbaryonic condensations in the plasma epoch with gravitational potential wells that could guide the early formation of baryonic galaxy masses. Fragmentation theories were proposed to produce $`M_{\mathrm{}}`$-stars rather than Jeans-superstars at the $`10^5M_{\mathrm{}}`$ proto-globular-cluster Jeans mass of the hot primordial gas.
Both of these concepts have severe fluid mechanical difficulties according to the Gibson (1996) theory. Cold dark matter cannot condense at the galactic scales needed because its nonbaryonic, virtually collisionless, nature requires it to have an enormous diffusivity, with supergalactic $`L_{SD}`$ length scales. Fragmentation theories (e.g., Low and Lynden-Bell (1976)) are based on a faulty condensation premise that implies large velocities and a powerful turbulence regime that would produce a first generation of large stars, with minimum mass determined by the turbulent Schwarz scale $`L_{ST}\sim \epsilon ^{1/2}/(\rho G)^{3/4}`$, where $`\epsilon `$ is the viscous dissipation rate of the turbulence, and a flurry of starbursts, supernovas, and metal production that is not observed in globular star clusters. The population of small, long-lived, metal-free, globular cluster stars observed is strong evidence of a quiet, weakly turbulent formation regime.
## 2 Jeans’s acoustic theory
Jeans considered the problem of gravitational condensation in a large body of nearly constant density, nearly motionless gas. Viscosity and diffusivity were ignored. The density and momentum conservation equations were linearized by dropping second order terms after substituting mean plus fluctuating values for the density, pressure, gravitational potential, and velocity. Details of the derivation are given in many cosmological texts (e.g., Kolb and Turner (1994), p342) so they need not be repeated here. The mean gravitational force $`\nabla \varphi `$ is assumed to be zero, violating the Poisson equation
$$\nabla ^2\varphi =4\pi G\rho ,$$
(1)
where $`\varphi `$ is the gravitational potential, in what is known as the Jeans swindle. Cross-differentiating the linearized perturbation equations produces a single, second order differential equation satisfied by Fourier modes propagating at the speed of sound $`V_s`$. From the dispersion equation
$$\omega ^2=V_s^2k^2-4\pi G\rho ,$$
(2)
where $`\omega `$ is the frequency and $`k`$ is the wavenumber, a critical wavenumber $`k_J=(4\pi G\rho /V_s^2)^{1/2}`$ exists, called the Jeans wavenumber. For $`k`$ less than $`k_J`$, $`\omega `$ is imaginary and the mode grows exponentially with time. For $`k`$ larger than $`k_J`$, the mode is a propagating sound wave. Density was assumed to be a function only of pressure (the barotropic assumption).
Either the barotropic assumption or the linearization of the momentum and density equations is sufficient to reduce the problem to one of acoustics. Physically, sound waves provide density nuclei at wavecrests that can trigger gravitational condensation if their time of propagation $`\lambda /V_s`$ for wavelength $`\lambda `$ is longer than the *gravitational free fall time* $`\tau _g\sim (\rho G)^{-1/2}`$. Setting the two times equal gives the Jeans gravitational instability criterion: gravitational condensation occurs only for $`\lambda \ge L_J`$.
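As a numerical sketch, the Jeans scale implied by the dispersion relation can be evaluated directly, taking $`L_J=2\pi /k_J=V_s(\pi /G\rho )^{1/2}`$ (the sample sound speed and density in the usage comment are illustrative only):

```python
import math

G = 6.674e-11  # Newtonian gravitational constant, m^3 kg^-1 s^-2

def jeans_length(v_sound, rho):
    """L_J = 2 pi / k_J = V_s * (pi / (G rho))^(1/2), in metres."""
    return v_sound * math.sqrt(math.pi / (G * rho))

def jeans_mass(v_sound, rho):
    """Mass inside a cube of side L_J (order of magnitude only)."""
    return rho * jeans_length(v_sound, rho) ** 3

# Illustrative cold cloud: V_s ~ 200 m/s, rho ~ 1e-17 kg/m^3
# gives L_J of order 1e16 m.
L = jeans_length(200.0, 1e-17)
```

Both functions grow with sound speed and shrink with density, which is why the hot primordial gas has such enormous Jeans masses.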
Jeans’s analysis fails to account for the effects of gravity, diffusivity, or fluid mechanical forces upon nonacoustic density maxima and density minima; that is, points surrounded on all sides by either lower or higher density. These move approximately with the fluid velocity, not $`V_s`$ (Gibson (1968)). The evolution of such *zero gradient points* and associated *minimal gradient surfaces* is critical to turbulent mixing theory (Gibson et al. (1988)). Turbulence scrambles passive scalar fields such as temperature, chemical species concentration and density to produce nonacoustic extrema, saddle points, doublets, saddle lines and minimal gradient surfaces. A quasi-equilibrium develops between convection and diffusion at such zero gradient points and minimal gradient surfaces that is the basis of a universal similarity theory of turbulent mixing (Gibson (1991)) analogous to the universal similarity theory of Kolmogorov for turbulence. Just as turbulent velocity fields are damped by viscosity at the Kolmogorov length scale $`L_K\sim (\nu /\gamma )^{1/2}`$, where $`\nu `$ is the kinematic viscosity and $`\gamma `$ is the rate-of-strain, scalar fields like temperature are damped by diffusivity at the Batchelor length scale $`L_B\sim (D/\gamma )^{1/2}`$, where $`D`$ is the molecular diffusivity. This prediction has been confirmed by laboratory experiments and numerical simulations (Gibson et al. (1988)) for the range $`10^{-2}\le Pr\le 10^5`$, where the Prandtl number $`Pr\equiv \nu /D`$.
On cosmological length scales, density fields scrambled by turbulence are not necessarily dynamically passive but may respond to gravitational forces. In the density conservation equation
$$\partial \rho /\partial t+v_i(\partial \rho /\partial x_i)=D_{\mathrm{eff}}\partial ^2\rho /\partial x_j\partial x_j$$
(3)
the effective diffusivity of density $`D_{\mathrm{eff}}\sim D-L^2/\tau _g`$ is affected by gravitation in the vicinity of minimal density gradient features, and reverses its sign to negative if the feature size $`L`$ is larger than the diffusive Schwarz scale $`L_{SD}`$ (Gibson and Schild 1998a ). $`L_{SD}\sim (D^2/\rho G)^{1/4}`$ is derived by setting the diffusive velocity $`v_D\sim D/L`$ of an isodensity surface a distance $`L`$ from a minimal gradient configuration equal to the gravitational velocity $`v_g\sim L/\tau _g`$. Thus, nonacoustic density maxima in a quiescent, otherwise homogeneous, fluid are absolutely unstable to gravitational condensation, and nonacoustic density minima are absolutely unstable to void formation, on scales larger than $`L_{SD}`$.
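A sketch of the sign reversal, assuming the explicit form $`D_{\mathrm{eff}}\approx D-L^2/\tau _g`$ implied by the argument above; the numerical values are illustrative, with a deliberately huge diffusivity to echo the nonbaryonic case:

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def schwarz_diffusive(D, rho):
    """L_SD ~ (D^2/(rho*G))^(1/4): the scale where the diffusive
    velocity D/L equals the gravitational velocity L/tau_g."""
    return (D * D / (rho * G)) ** 0.25

def d_eff(D, rho, L):
    """Effective diffusivity near a feature of scale L; a negative
    value means gravity beats diffusion (condensation or voids)."""
    tau_g = 1.0 / math.sqrt(rho * G)
    return D - L * L / tau_g

# With D ~ 1e26 m^2/s and rho ~ 1e-15 kg/m^3, L_SD comes out
# supergalactic (~2e19 m), and D_eff changes sign exactly there.
L_sd = schwarz_diffusive(1e26, 1e-15)
```

Because $`L_{SD}^2=D\tau _g`$, the sign flip at exactly $`L=L_{SD}`$ follows algebraically, not just numerically.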
Jeans believed from his analysis (Jeans (1929)) that sound waves with $`\lambda L_J`$ would grow in amplitude indefinitely, producing unlimited kinetic energy from his gravitational instability. This is clearly incorrect, since any wavecrest that collects a finite quantity of mass from the ambient fluid will also collect its zero momentum and become a nonacoustic density nucleus. From the enormous Jeans mass values indicated at high temperature, he believed he had proved his speculation that the cores of galaxies consisted of hot gas (emerging from other Universes!) and not stars, which could only form in the cooler (smaller $`L_J`$) spiral arms, thrown into cold outer space by centrifugal forces of the spinning core. The concepts of *pressure support* and *thermal support* often used to justify Jeans’s theory are good examples of bad dimensional analysis, lacking any proper physical basis.
## 3 Fluid mechanical theory
Gravitational condensation on a nonacoustic density maximum is limited by either diffusion or by viscous, magnetic or turbulent forces at the diffusive, viscous, magnetic, or turbulent Schwarz scales $`L_{SX}`$, whichever is largest, where $`X`$ is $`D,V,M,T`$, respectively (Gibson (1996); Gibson and Schild 1998a ). Magnetic forces are assumed to be unimportant for the cosmological conditions of interest. Gravitational forces $`F_g\sim \rho ^2GL^4`$ equal viscous forces $`F_V\sim \rho \nu \gamma L^2`$ at $`L_{SV}\sim (\nu \gamma /\rho G)^{1/2}`$, and turbulent forces $`F_T\sim \rho \epsilon ^{2/3}L^{8/3}`$ at $`L_{ST}\sim \epsilon ^{1/2}/(\rho G)^{3/4}`$. Kolmogorov’s theory is used to estimate the turbulent forces as a function of length scale $`L`$.
The criterion (Gibson 1996, 1997a, 1997b; Gibson and Schild 1998a, 1998b) for gravitational condensation or void formation at scale $`L`$ is therefore
$$L\left(L_{SX}\right)_{\mathrm{max}};X=D,V,M,T.$$
(4)
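For readers who want to experiment with criterion (4), the three Schwarz scales can be evaluated directly from the dimensional balances above. The following sketch is illustrative only; the function names are ours, and all inputs are assumed to be in SI units.

```python
G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2

def schwarz_scales(D, nu, gamma, eps, rho):
    """Diffusive, viscous, and turbulent Schwarz scales (SI units).

    L_SD = (D^2/(rho G))^(1/4), L_SV = (nu gamma/(rho G))^(1/2),
    L_ST = eps^(1/2)/(rho G)^(3/4), from the force balances in the text.
    """
    L_SD = (D**2 / (rho * G)) ** 0.25
    L_SV = (nu * gamma / (rho * G)) ** 0.5
    L_ST = eps**0.5 / (rho * G) ** 0.75
    return L_SD, L_SV, L_ST

def condensation_possible(L, D, nu, gamma, eps, rho):
    # Criterion (4): L must exceed the largest Schwarz scale
    return L >= max(schwarz_scales(D, nu, gamma, eps, rho))
```

With the plasma-epoch parameters quoted in the next section (ν ≈ 6×10²⁷ m² s⁻¹, γ ≈ 10⁻¹² s⁻¹, ρ ≈ 10⁻¹⁵ kg m⁻³), the viscous scale comes out near the 3×10²⁰ m horizon scale.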
## 4 Structures in the plasma epoch
Without the Jeans constraint, structure formation begins in the early stages of the hot plasma epoch after the Big Bang when decreasing viscous forces first permit gravitational decelerations and sufficient time has elapsed for the information about density variations to propagate; that is, the decreasing viscous Schwarz scale $`L_{SV}`$ becomes smaller than the increasing Hubble scale $`L_Hct`$, where $`c`$ is the velocity of light. Low levels of temperature fluctuations of the primordial gas indicated by the COsmic Background Explorer (COBE) satellite ($`\delta T/T10^{-5}`$) constrain the velocity fluctuations $`\delta v/c10^{-5}`$ to levels of very weak turbulence. Setting the observed mass of superclusters $`10^{46}`$ kg equal to the Hubble mass $`\rho L_H^3`$ computed from Einstein’s equations (Weinberg (1972), Table 15.4) indicates the time of first structure was $`10^{12}`$ s, or $`\mathrm{30\hspace{0.17em}000}`$ y (Gibson 1997b ).
Setting $`L_H\mathrm{3\hspace{0.17em}10}^{20}`$ m (10 kpc) = $`L_{SV}`$ gives $`\nu \mathrm{6\hspace{0.17em}10}^{27}`$ $`\mathrm{m}^2\mathrm{s}^{-1}`$ with $`\rho 10^{-15}`$ $`\mathrm{kg}\mathrm{m}^{-3}`$ and $`\gamma 1/t=10^{-12}`$ $`\mathrm{rad}\mathrm{s}^{-1}`$. Such a large viscosity suggests a neutrino-electron-proton coupling mechanism, presumably through the Mikheyev-Smirnov-Wolfenstein (MSW) effect (Bahcall (1997)), supporting the Neutrino-98 claim that neutrinos have mass.
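The viscosity estimate in this paragraph is just the inversion of $`L_{SV}(\nu \gamma /\rho G)^{1/2}`$ at the horizon-crossing condition. A quick check (our sketch, SI units, with the negative exponents written out):

```python
G = 6.674e-11                           # m^3 kg^-1 s^-2
L_H, rho, gamma = 3e20, 1e-15, 1e-12    # m; kg m^-3; s^-1 (values used in this paragraph)

# From L_SV = (nu*gamma/(rho*G))**0.5 with L_SV = L_H, solve for nu:
nu = L_H**2 * rho * G / gamma
print(f"nu ~ {nu:.1e} m^2 s^-1")        # ~ 6e27, the viscosity quoted above
```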
The viscous condensation mass $`\rho L_{SV}^3`$ decreases to about $`10^{42}`$ kg (Gibson (1996)) as the Universe expands and cools to the plasma-gas transition at $`t10^{13}`$ s, or $`\mathrm{300\hspace{0.17em}000}`$ y, based on Einstein’s equations to determine $`T`$ and $`\rho `$ and assuming the usual dependence of viscosity $`\nu `$ on temperature $`T`$ (Weinberg (1972)). Assuming that gravitational decelerations occur whenever they are possible, we see that protosupercluster, protocluster, and protogalaxy structure formation should be well underway before the emergence of the primordial gas.
## 5 Primordial fog formation
The first condensation scales of the primordial gas mixture of hydrogen and helium are the largest Schwarz scale and an initial length scale $`L_{IC}(RT/\rho G)^{1/2}`$, equal to the Jeans scale $`L_J`$ but independent of Jeans’s linear perturbation stability analysis and of acoustics, where $`R`$ is the gas constant of the mixture. From the ideal gas law $`p/\rho =RT`$ we see that density increases can be compensated by pressure increases with no change in temperature in a uniform temperature gas, and that gravitational forces $`F_g\rho ^2GL^4`$ will dominate the resulting pressure gradient forces $`F_ppL^2=\rho RTL^2`$ for length scales $`LL_{IC}`$. Taking $`R\mathrm{5\hspace{0.17em}000}`$ $`\mathrm{m}^2\mathrm{s}^{-2}\mathrm{K}^{-1}`$, $`\rho 10^{-18}`$ $`\mathrm{kg}\mathrm{m}^{-3}`$ (Weinberg (1972)) and $`T\mathrm{3\hspace{0.17em}000}`$ K gives a condensation mass $`\rho L_{IC}^310^5M_{\odot }`$, the mass of a globular cluster of stars. Because the temperature of the primordial gas was observed to be quite uniform by COBE, we can expect the protogalaxy masses of primordial gas emerging from the plasma epoch to immediately fragment into proto-globular-cluster (PGC) gas objects on $`L_{IC}\mathrm{3\hspace{0.17em}10}^{17}`$ m (10 pc) scales, with subfragments at $`\left(L_{SX}\right)_{\mathrm{max}}`$.
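The globular-cluster mass estimate follows directly from $`L_{IC}`$; this order-of-magnitude check is ours (the solar mass is an added constant, and the parameter values are those quoted in the paragraph):

```python
G, M_sun = 6.674e-11, 1.99e30        # SI units; M_sun is our added constant
R, T, rho = 5.0e3, 3.0e3, 1e-18      # m^2 s^-2 K^-1, K, kg m^-3

L_IC = (R * T / (rho * G)) ** 0.5    # initial condensation scale, m
M_IC = rho * L_IC**3                 # condensation mass, kg
print(f"L_IC ~ {L_IC:.1e} m, M_IC ~ {M_IC / M_sun:.1e} M_sun")
```

Within factors of order unity this reproduces the ~10¹⁷–10¹⁸ m scale and the ~10⁵ solar-mass globular-cluster mass quoted above.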
The kinematic viscosity $`\nu `$ of the primordial gas mixture decreased by a factor of about a trillion from plasma values at transition, to $`\nu \mathrm{2.4\hspace{0.17em}10}^{12}`$ $`\mathrm{m}^2\mathrm{s}^{-1}`$, assuming the density within the PGC objects is about $`10^{-17}`$ $`\mathrm{kg}\mathrm{m}^{-3}`$. Therefore, the viscous Schwarz scale $`L_{SV}(\mathrm{2.4\hspace{0.17em}10}^{12}\times 10^{-13}/(10^{-17}\times \mathrm{6.7\hspace{0.17em}10}^{-11}))^{1/2}=\mathrm{1.9\hspace{0.17em}10}^{13}`$ m, so the viscous Schwarz mass $`M_{SV}L_{SV}^3\rho =\mathrm{6.8\hspace{0.17em}10}^{22}`$ kg, or $`M_{SV}=\mathrm{6.8\hspace{0.17em}10}^{24}`$ kg using $`\rho =10^{-18}`$. The turbulent Schwarz mass $`M_{ST}\mathrm{8.8\hspace{0.17em}10}^{22}`$ kg assuming $`10\%`$ of the COBE temperature fluctuations are due to turbulent red shifts ($`[(\delta v/c)/(\delta T/T)]=10^{-1}`$) as a best estimate.
We see that the entire universe of primordial H-He gas turned to fog soon after the plasma-gas transition, with primordial fog particle (PFP) mass values in the range $`10^{23}`$ to $`10^{25}`$ kg depending on the estimated density and turbulence levels of the gas. The time required to form a PFP is set by the time required for void regions to grow from minimum density points and maximum density saddle points to surround and isolate the condensing PFP objects (Gibson and Schild 1998a ). Voids grow as rarefaction waves with a limiting maximum velocity $`V_s`$ set by the second law of thermodynamics, so the minimum PFP formation time is $`\tau _{PFP}(L_{SX})_{\mathrm{max}}/V_s`$, or about $`10^3`$ y. Full condensation of the PFP to form a dense core near hydrostatic equilibrium requires a much longer time, near the gravitational free fall time $`\tau _g\mathrm{2\hspace{0.17em}10}^6`$ y.
Radiation heat transport during the PFP condensation period before the creation of dust should have permitted cooling to temperatures near those of the expanding universe. After about a billion years hydrogen dew point and freezing point temperatures (20-13 K) would be reached, forming the micro-brown-dwarf conditions expected for these widely separated ($`10^310^4`$ AU) small planetary objects that comprise most of the baryonic dark matter of the present universe and the materials of construction for the stars and heavy elements. Because such frozen objects occupy an angle of less than a micro-arcsecond viewed from their average separation distance, they are invisible to most observations except by gravitational microlensing, or if a nearby hot star brings these volatile comets out of cold storage.
## 6 Observations
A variety of observations confirm the new theory that fluid mechanical forces and diffusion limit gravitational condensation (Gibson (1996)), and confute Jeans’s (1902 & 1929) acoustic criterion:
* quasar microlensing at micro-brown-dwarf frequencies (Schild (1996)),
* tomography of dense galaxy clusters indicating diffuse (nonbaryonic) superhalo dark matter at $`L_{SD}`$ scales with $`D_{nb}10^{28}\mathrm{m}^2\mathrm{s}^{-1}`$ (Tyson and Fischer (1995)),
* the Gunn-Peterson missing gas sequestered as PFPs,
* the dissipation of ‘gas clouds’ in the $`Ly\alpha `$ forest,
* extreme scattering events, cometary globules, FLIERS, ansae, Herbig-Haro ‘chunks’, etc.
Evidence that the dark matter of galaxies is dominated by small planetary mass objects has been accumulating from reports of many observers that the multiple images of lensed quasars twinkle at corresponding high frequencies. After several years spent resolving a controversy about the time delay between images Q0957+561A,B, to permit correction for any intrinsic fluctuations of the light intensity of the source by subtraction of the properly phased images, Schild (1996) announced that the lensing galaxy mass comprises $`10^{-6}M_{\odot }`$ “rogue planets” that are “likely to be the missing mass.”
Star-microlensing studies from the Large Magellanic Cloud have failed to detect lensing at small planetary mass frequencies, thus excluding this quasar-microlensing population as the Galaxy halo missing mass (Alcock et al. (1998), Renault et al. (1998)). However, the exclusion is based on the unlikely assumption that the number density of such small objects is uniform. The population must have mostly primordial gas composition since no cosmological model predicts this much baryonic mass of any other material, and must be primordial since it constitutes the material of construction, and an important stage, in the condensation of the gas to form stars. Gravitational aggregation is a nonlinear, self-similar, cascade process likely to produce an extremely intermittent lognormal spatial distribution of the PFP number density, with mode value orders of magnitude smaller than the mean. Since star-microlensing from a small solid angle produces a small number of independent samples, the observations estimate the mode rather than the mean, resolving the observational conflict (Gibson and Schild 1998b ).
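The mode-versus-mean point can be made quantitative: for a lognormal number density with ln-density variance σ², the ratio mean/mode = exp(3σ²/2), independently of the location parameter, so a sparsely sampled survey of an intermittent distribution estimates a value orders of magnitude below the mean. A small sketch (the σ values are illustrative, not fitted to any cascade model):

```python
import math

def mean_over_mode(sigma):
    # lognormal: mean = exp(mu + sigma**2/2), mode = exp(mu - sigma**2),
    # so the ratio exp(1.5*sigma**2) does not depend on mu
    return math.exp(1.5 * sigma**2)

for sigma in (1.0, 2.0, 3.0):
    print(f"sigma = {sigma}: mean/mode ~ {mean_over_mode(sigma):.1e}")
```

Even modest intermittency (σ of a few) separates mode and mean by many orders of magnitude.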
## 7 Conclusions
1. Jeans’s gravitational instability criterion $`LL_J`$ is irrelevant to gravitational structure formation in cosmology and astrophysics, and is egregiously misleading in all of its applications.
2. The correct criterion for gravitational structure formation is that $`L`$ must be larger than the largest Schwarz scale; that is, $`L\left(L_{SX}\right)_{\mathrm{max}}`$, where $`X`$ is $`D,V,M,T`$, depending on whether diffusion or viscous, magnetic or turbulent forces limit the gravitational effects.
3. Structure formation began in the plasma epoch with protosupercluster to protogalaxy decelerations.
4. Gravitational condensations began soon after the plasma-gas transition, forming micro-brown-dwarfs, clustered in PGCs, that persist as the dominant dark matter component of inner galactic halos (50 kpc).
5. The present fluid mechanical theory and its cosmological consequences regarding the forms of baryonic and nonbaryonic dark matter (Gibson (1996)) are well supported by observations, especially the quasar-microlensing of Schild (1996) and his inference that the lens galaxy mass of Q0957+561A,B is dominated by small rogue planets (interpreted here as PFPs).
6. Star-microlensing studies that rule out MBDs as the Galaxy missing mass (Alcock et al. (1998), Renault et al. (1998)), contrary to the quasar-microlensing evidence and the present theory, are subject to extreme undersampling errors from their unwarranted assumption of a uniform number density distribution, rather than extremely intermittent lognormal distributions expected from nonlinear aggregational cascades of such small objects as they form nested clusters and stars (Gibson and Schild 1998b ).
###### Acknowledgements.
Numerous helpful suggestions were provided by Rudy Schild.
# ENERGY LANDSCAPES, SUPERGRAPHS, AND “FOLDING FUNNELS” IN SPIN SYSTEMS
## I INTRODUCTION
The concept of energy landscapes has played a significant role in elucidating the kinetics of protein folding . An energy landscape can be visualized by using the so-called disconnectivity graphs that show patterns of pathways between the local energy minima of a system. A pathway consists of consecutive moves that are allowed kinetically. The pathways indicated in a disconnectivity graph are selected to be those which provide a linkage at the lowest energy cost among all possible trajectories between two destinations. Thus, at each predetermined value of a threshold energy, the local energy minima are represented as divided into disconnected sets of minima which are mutually accessible through energy barriers. The local minima which share the lowest energy barrier are joined at a common node and are said to be a part of a basin corresponding to the threshold.
The disconnectivity graphs have proved to be useful tools to elucidate the energy landscape of a model of a short peptide and of several simple molecular systems. In particular, Wales, Miller, and Walsh have constructed disconnectivity graphs for the archetypal energy landscapes of a cluster of 38 Lennard-Jones atoms, the molecule of C<sub>60</sub>, and 20 molecules of water. The work on the Lennard-Jones systems has been recently extended by Doye et al. . The graph for a well folding protein is expected to have an appearance of a “palm tree.” This pattern has a well developed basin of the ground state and it also displays several branches to substantially higher lying local energy minima. Such a structure seems naturally associated with the existence of a folding funnel. The atomic level studies of the 4-monomer peptide considered by Becker and Karplus yield a disconnectivity graph which suggests that this expected behavior may be correct. Bad folders are expected to have disconnectivity graphs similar to either a “weeping willow” or a “banyan tree” in which there are many competing low lying energy minima.
We accomplish several tasks in this paper. The first of these, as addressed in Sec. II, is to construct disconnectivity graphs for two lattice heteropolymers the dynamics of which have been already studied exactly . One of them is a model of a protein, in the sense that it has excellent folding properties, and we shall refer to it as a good folder. The other has very poor folding properties, i.e. it is a bad folder and is thus a model of a random sequence of aminoacids. We show that, indeed, only the good folder has a protein-like disconnectivity graph.
In Sec. III we study the archetypal energy landscapes corresponding to small two dimensional (2D) Ising spin systems with the ferromagnetic and spin glassy exchange couplings. We demonstrate that disordered ferromagnets have protein-like disconnectivity graphs whereas spin glasses behave like bad folders. This is consistent with the concept of minimal structural frustration , or maximal compatibility, that has been introduced to explain why natural proteins have properties which differ from those characterizing random sequences of aminoacids. It is thus expected that spin systems which have the minimal frustration in the exchange energy, i.e. the disordered ferromagnets, would be the analogs of proteins. In fact, we demonstrate that the kinetics of “folding,” i.e. the kinetics of getting to the fully aligned ground state of the ferromagnet by evolving from a random state, depends on temperature, $`T`$, the way a protein does. Finding a ground state of a similarly sized spin glass takes place significantly longer.
The disconnectivity graphs characterize the phase space of a system and, therefore, they relate primarily to the equilibrium properties – the dynamics is involved only through a definition of what kinds of moves are allowed, but their probabilities of being implemented are of no consequence. Note that even if the disconnectivity graph indicates a funnel-like structure, the system may not get there if the temperature is not right. Thus a demonstration of the existence of a funnel must involve an actual dynamics. In fact, another kind of connectivity graphs between local energy minima has been introduced recently precisely to describe the $`T`$-dependent dynamical linkages in the context of proteins. We shall use the phrase a “dynamical connectivity graph” to distinguish this concept from that of a “disconnectivity graph” of Becker and Karplus. The idea behind the dynamical connectivity graphs is rooted in a coarse grained description of the dynamics through mapping of the system’s trajectories to underlying effective states. In ref. , the effective states are the local energy minima arising as a result of the steepest descent mapping. In ref. , the steepest descent procedure is followed by an additional mapping to a closest maximally compact conformation. The steepest descent mapping has been already used to describe glasses and spin glasses in terms of their inherent, or hidden, valley structures.
In the dynamical connectivity graphs, the linkages are not uniform in strength. Their strengths are defined by the frequency with which the two effective states are visited sequentially during the temporal evolution. The strengths are thus equal to the transition rates and they vary significantly from linkage to linkage and as a function of $`T`$. An additional characteristic used in such graphs is the fraction of time spent in a given effective state, without making a transition. This can be represented by varying sizes of symbols associated with the state.
In the context of these developments, it seems natural to combine the two kinds of coarse-graining graphs, equilibrium and dynamical, into single entities – the supergraphs. Such supergraphs can be constructed by placing the information about the $`T`$-dependent dynamical linkages on the energy landscape represented by the disconnectivity graph. This procedure is illustrated in Sec. IV for the case of the two heteropolymers discussed in Sec. II. The procedure is then applied to selected spin systems. In each case, knots of significant dynamical connectivities within the ground state basin develop around a temperature at which the specific heat has a maximum. These knots disintegrate on lowering the $`T`$ if the system is a spin glass or a bad folder. For good folders and non-uniform ferromagnets the dynamical linkages within the ground state basin remain robust.
We hope that this kind of combined characterization, by the supergraphs, of both the dynamics and equilibrium pathways existing in many body systems might prove revealing also in the case of other systems, e.g., such as the molecular systems considered in ref. .
## II ENERGY LANDSCAPES IN 2D LATTICE PROTEINS
Lattice models of heteropolymers allow for an exact determination of the native state, i.e. of the ground state of the system, and are endowed with a simplified dynamics. These two features have allowed for significant advancement in understanding of protein folding .
Here, we consider two 12-monomer sequences of model heteropolymers, $`A`$ and $`B`$, on a two-dimensional square lattice. These sequences have been defined in terms of Gaussian contact energies (the mean equal to $`-1`$ and the dispersion to 1, roughly) in ref. . They have been studied in great detail by the master equation and Monte Carlo approaches. Sequences $`A`$ and $`B`$ have been established to be the good and bad folders respectively. Among the $`\mathrm{15\hspace{0.33em}037}`$ different conformations that a 12-monomer sequence can take, 495 are the local energy minima for sequence $`A`$ and 496 for sequence $`B`$. The minima are either $`V`$\- or $`U`$-shaped. The $`U`$-shaped minima are those in which a move that does not change the energy is allowed, provided there are no moves that lower the energy. Both kinds of minima arise as a result of the steepest descent mapping from states generated along a Monte Carlo trajectory and both kinds are included in the disconnectivity graphs.
Constructing a disconnectivity graph requires determination of the energy barriers between each pair of the local energy minima. We do this through an exact enumeration. We divide the energy scale into discrete partitions of resolution $`\mathrm{\Delta }E`$ (we consider $`\mathrm{\Delta }E`$=0.5) and ask between what minima there is a pathway which does not exceed the threshold energy set at the top of the partition. These minima can then be grouped into clusters which are disconnected from each other. Local minima belonging to one cluster are connected by pathways in which the corresponding barriers do not exceed a threshold value of energy whereas the local minima that belong to different clusters are separated by energy barriers which are higher than the threshold level. At a sufficiently high value of the energy threshold all minima belong to one cluster. Enumeration of the pathways involves storing a table of size $`\mathrm{15\hspace{0.33em}037}\times 14`$ because each conformation may have up to 14 possible moves within the dynamics considered in ref. . (16-monomer heteropolymers can also be studied in this exact way – within any resolution $`\mathrm{\Delta }E`$.)
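The grouping into disconnected clusters at each threshold can be sketched with a union-find structure: two minima are merged whenever their best connecting barrier lies below the threshold. This is an illustrative reimplementation, not the code used for the enumeration; the minimum labels are placeholders.

```python
def disconnectivity_clusters(minima, barriers, threshold):
    """Group local minima into sets mutually accessible below `threshold`.

    minima:   list of labels for the local energy minima
    barriers: dict {(a, b): lowest barrier energy on any pathway from a to b}
    """
    parent = {m: m for m in minima}

    def find(x):  # find cluster representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), e in barriers.items():
        if e <= threshold:  # pathway stays below the threshold: same cluster
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

    clusters = {}
    for m in minima:
        clusters.setdefault(find(m), []).append(m)
    return list(clusters.values())
```

Sweeping the threshold upward in steps of $`\mathrm{\Delta }E`$ and recording where clusters merge yields the nodes of the disconnectivity graph.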
Figure 1 shows the resulting disconnectivity graph for sequence $`A`$. For clarity, we show only that portion of the graph which involves the local minima with energies smaller than $`-5`$ (there are 206 such minima). Throughout this paper, the symbol $`E`$ denotes energy measured in terms of the coupling constants in the Hamiltonian and is thus a dimensionless quantity. The native state, denoted as NAT in Figure 1, belongs to the most dominant valley. One can see that the graph contains a remarkable “palm tree” branch that provides a linkage to the native state. This branch is where a dynamically defined folding funnel is expected to be confined. The large size of this branch, associated with a big energy gap between the native state and other minima, indicates large thermodynamic stability. At low temperatures, the glassy effects set in and contributions due to non-native valleys become significant. The local minimum denoted by TRAP in Figure 1 has been identified in ref. as giving rise to the longest lasting relaxation processes in the limit of $`T`$ tending to 0.
The disconnectivity tree for sequence $`B`$ is shown in Figure 2. Again, only the minima with energies smaller than $`-5`$ are displayed (there are 203 such minima). In this case, there are several local energy minima which are bound to compete with the native state. The corresponding branches have comparable lengths and morphologies. The dynamics is thus expected not to be confined merely to the native basin. Instead, the system is bound to be frustrated in terms of which branch to choose to evolve in. At low $`T`$’s the valley containing the TRAP conformation is responsible for the longest relaxation and poor folding properties.
Other examples of disconnectivity trees for protein related systems have been recently constructed with the use of Go-like models (in which the aminoacid-aminoacid interactions are restricted to the native contacts) and they confirm the general pattern of differences in morphology between good and bad foldability as illustrated by Figures 1 and 2.
It should be noted that there are many ways to map out the multidimensional energy landscape of proteins. In particular, extensive energy landscape explorations for the HP lattice heteropolymers have been done with the use of the pathway maps . The pathway maps show the actual microscopic paths through conformations. The paths are enumerated either exactly or statistically, and thus provide a detailed but implicit representation of the energy landscape. The resulting “flow diagrams” indicate patterns of allowed kinetic connections between actual conformations, together with the energy barriers involved. They can also be additionally characterized by Monte Carlo determined probabilities to find a given path at a temperature under study. In this way, preferable pathways and important transition states can be identified. This approach is similar in spirit to the one undertaken by Leopold et al. in which the folding funnel is identified through determination of weights associated with paths that lead to the native state.
The coarse grained representation of energy landscapes in protein-like systems through the disconnectivity trees is quite distinct from that obtained through the pathway maps. The disconnectivity graphs indicate only the one best path for each pair of the local energy minima by showing the terminal points and the value of the energy barrier necessary to travel this path. This reduced information is precisely what allows one to provide an explicit and essentially automatic visualization of the energy landscapes.
The $`T`$-dependent frequencies of passages between conformations in the pathway maps give an account of the dynamics in the system. This information on the dynamics, however, does not easily fit the description provided by the disconnectivity graphs. The steepest descent mapping to the local energy minima that we propose here is, on the other hand, a perfect match.
## III ENERGY LANDSCAPES IN 2D SPIN SYSTEMS
We now consider the spin systems. The Hamiltonian is given by $`H=_{<ij>}J_{ij}S_iS_j`$ where $`S_i`$ is $`\pm 1`$, and the exchange couplings, $`J_{ij}`$, connect nearest neighbors on the square lattice. The periodic boundary conditions are adopted. When studying spin systems, a frequent question to ask about the dynamics is what are the relaxation times – characteristic times needed to establish equilibrium. Here, however, we are interested in quantities which are analogous to those asked in studies of protein folding. Specifically, what is the first passage time $`t_0`$? The first passage time is defined as the time needed to come across the ground state during a Monte Carlo evolution that starts from a random spin configuration. A mean value of $`t_0`$ in a set of trajectories (here, we consider 1000 trajectories for each $`T`$) will be denoted by $`t_0`$ and the median value by $`t_g`$. $`t_g`$ is an analogue of the folding time, $`t_f`$ of ref. . At low temperatures, the physics of relaxation and the physics of folding essentially agree . At high temperatures, however, the relaxation is fast but finding a ground state is slow due to a large entropy. Both for heteropolymers and spin systems the $`T`$-dependence of the characteristic first passage time is expected to be $`U`$-shaped. The fastest search for the ground state takes place at a temperature $`T_{min}`$ at which the $`T`$-dependence has its minimum.
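As a minimal illustration of the first-passage measurement, here is a sketch for a small uniform ferromagnet with Metropolis single-spin-flip dynamics and $`J=1`$ (our simplification; the systems studied in the text use Gaussian couplings). Time is counted in MC steps per spin, as in the text.

```python
import math
import random

def lattice_energy(s, L):
    # H = -sum_<ij> S_i S_j on a periodic square lattice, J = 1
    return -sum(
        s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
        for i in range(L) for j in range(L)
    )

def first_passage_time(L=3, T=2.0, seed=0, max_sweeps=20000):
    """Sweeps until a random start first reaches the ground-state energy."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    E_ground = -2 * L * L  # all spins aligned
    for sweep in range(max_sweeps):
        if lattice_energy(s, L) == E_ground:
            return sweep
        for _ in range(L * L):  # one MC step per spin
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    return None  # ground state not reached within max_sweeps
```

Averaging `first_passage_time` over many seeds at each $`T`$ traces out the $`U`$-shaped curve; the median over seeds corresponds to $`t_g`$.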
The $`U`$-shape dependence of $`t_f`$ originates in the idea of a low $`T`$ glassy phase in heteropolymers advocated by Bryngelson and Wolynes within the context of the random energy model. It was subsequently confirmed in numerical simulations of lattice models . This shape is actually expected for most disordered systems, including those involving spins. However, experimentalists measuring spin systems typically would not ask about the first passage time (at high $`T`$).
This overall behavior is illustrated in Figure 3 for two $`5\times 5`$ spin systems. The Gaussian couplings of zero mean and unit dispersion are selected for the spin glassy (SG) 2$`D`$ system. The disordered ferromagnetic system (DFM) is endowed with the exchange couplings which are the absolute values of the couplings considered for SG. Figure 3 shows that $`t_g`$ does depend on $`T`$ in the $`U`$-shaped fashion. $`T_{min}`$ for SG and DFM are comparable in values but the “folding” times for DFM are more than 4 times shorter than for SG. The times are defined in terms of the number of Monte Carlo steps per spin.
Figure 3 establishes some of the analogies between the heteropolymers and the spin systems. We now consider the disconnectivity graphs for selected $`L\times L`$ spin systems with $`L`$=4 and 5. For both system sizes, the list of the local energy minima is obtained through an exact enumeration. Determination of an energy barrier between two minima requires adopting some approximations. Suppose that the two minima differ by $`n`$ spins. There are then $`n!`$ possible trajectories which connect the two minima, assuming that a) no spin is flipped more than once, b) no other spins (or “external” spins) are involved in a pathway. These trajectories can be enumerated for $`L`$=4 but not for $`L`$=5. In the latter case we adopt the following additional approximation. We first identify the $`n(n1)(n2)(n3)`$ list of the first four possible steps in any trajectory together with the highest energy elevation reached during these four steps. We choose $`m`$=1500 trajectories which accomplish the smallest elevation. We then consider the next two-step continuations of the selected trajectories and among the $`m(n4)(n5)`$ continuations again select $`m`$ which result in the lowest elevation, and so on until all $`n`$ spins are inverted. The lowest elevation among the final set of the $`m`$ trajectories is an estimate of the energy threshold used in the disconnectivity diagram. This approximate method, when applied to the $`L`$=4 systems, generates results which agree with the exact enumeration. Figure 4 shows that our method clearly beats determination of barriers based on totally random trajectories (but still restricted to overturning of the $`n`$ differing spins).
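The pruned enumeration just described can be sketched as a beam search over spin-flip orderings. For simplicity this sketch extends trajectories two flips at a time throughout (the text uses a four-flip first stage), keeps no "external" spins, and leaves the energy function and beam width as parameters; it is an illustration, not the code used for the figures.

```python
import itertools

def barrier_beam(E, start, flips, m=1500, chunk=2):
    """Estimate (from above) the lowest elevation needed to invert `flips`.

    E:     energy of a configuration (tuple of +-1 values)
    start: configuration at the first minimum
    flips: indices of the spins differing between the two minima
    Keeps the m partial orderings with the smallest elevation so far.
    """
    beam = [(E(tuple(start)), tuple(start), frozenset(flips))]
    while beam[0][2]:  # while spins remain to be flipped
        extended = []
        for elev, conf, rest in beam:
            for order in itertools.permutations(rest, min(chunk, len(rest))):
                c, e = list(conf), elev
                for k in order:
                    c[k] = -c[k]
                    e = max(e, E(tuple(c)))  # highest energy reached so far
                extended.append((e, tuple(c), rest.difference(order)))
        extended.sort(key=lambda t: t[0])
        beam = extended[:m]  # prune to the m lowest elevations
    return beam[0][0]
```

For $`L=4`$ the full $`n!`$ enumeration is feasible and can validate the beam, as Figure 4 does for the method in the text.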
Flipping of the “external” spins was found to give rise to an occasional reduction in the barrier height. We could not, however, come up with a systematic inclusion of such phenomena in the calculations and the resulting disconnectivity graphs have barriers which are meant to be estimates from above. The topology of the graph is expected to depend little on details of such approximations.
In some cases, the barrier for a direct travel from one minimum to another was found to be higher than when making a similar passage via an intermediate local energy minimum. An example of this situation is shown in Figure 5. However, this lack of transitivity, resulting from the approximate nature of the calculations, does not affect the disconnectivity graph because the states $`\gamma `$ and $`\beta `$ of Figure 5 are mutually accessible at energy $`E_{\beta \gamma }`$. Then, at a higher energy $`E_{\alpha \gamma }`$, state $`\alpha `$ is thus also accessible. If, at this energy level, the system can transfer between the states $`\alpha `$ and $`\gamma `$ then it can also transfer to state $`\beta `$.
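The transitivity argument in this paragraph corresponds to taking a minimax (widest-path) closure of the pairwise barrier estimates: the effective barrier between two minima is the minimum, over all chains of intermediate minima, of the maximum barrier along the chain. A Floyd-Warshall-style sketch (names and toy energies ours):

```python
def minimax_closure(barrier):
    """barrier[a][b]: estimated direct barrier energy between minima a and b.

    Returns B with B[a][b] = min over chains of intermediate minima of the
    maximum barrier along the chain, enforcing the transitivity in the text.
    """
    nodes = list(barrier)
    B = {a: dict(barrier[a]) for a in nodes}
    for k in nodes:
        for a in nodes:
            for b in nodes:
                via = max(B[a][k], B[k][b])  # elevation of the detour through k
                if via < B[a][b]:
                    B[a][b] = via
    return B
```

In the Figure 5 situation, the detour through $`\gamma `$ lowers the effective $`\alpha `$-$`\beta `$ barrier below the direct estimate, so the disconnectivity graph is unaffected by the approximation.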
We now present specific examples of disconnectivity graphs for several distinct spin systems.
Figure 6 shows the case of a $`4\times 4`$ uniform ferromagnet (FM). The energy landscape of the FM is not analogous to that of a protein because uniform exchange couplings generate states with high degrees of degeneracy. These degeneracies can be split by randomizing the couplings. Figure 6 also shows a graph for a $`L=4`$ DFM’ system in which the $`J_{ij}`$’s are random numbers from the \[0.9,1.1\] interval – this is the case of a small perturbation away from the uniform FM. The graph for DFM’ has an overall appearance like the one for FM except for the lack of a high energy linkage to a set of states which cease to be minima. Another difference is the disappearance of all remaining $`U`$-shaped minima and formation of new true minima at somewhat spread out energies. In the uniform $`L=4`$ ferromagnet, there are five $`V`$-shaped energy minima: one is the ground state and the other four higher energy states are degenerate. In addition, there are 346 states which are the $`U`$-shaped energy minima. An example of what happens in a $`U`$-shaped minimum is shown in Figure 7. Here, the system can move between the 3- and 4-spin domains without a change in the energy. The 4-spin domain forms a $`U`$-shaped minimum but the 3-spin state is not a minimum because there is a move to a lower energy state. Only the 4-spin domain states would be shown in the disconnectivity graph.
Figure 8 shows the disconnectivity graphs for two $`L=4`$ spin glassy systems. The right hand panel shows the case of $`J_{ij}=\pm 1`$. The left hand panel shows a spin glass (SG’) with the exchange couplings which are randomly positive or negative and with their magnitudes coming from the interval \[0.9,1.1\] – this is the random sign counterpart of the DFM’ system. In both spin glassy systems of Figure 8 the allocation of signs to the couplings is identical. In the $`\pm 1`$ case, all minima, including the degenerate ground state, are $`U`$-shaped. The SG’ system, on the other hand, has a graph with an overall structure akin to that corresponding to the $`\pm 1`$ system with one important difference: the ground state is not degenerate and thus the ground state basin splits into several competing valleys.
The differences between the good and bad spin “folders” amplify as the system size is increased. As an illustration, Figure 9 shows the disconnectivity graphs for the $`5\times 5`$ DFM and SG – with the Gaussian couplings. The DFM systems has a very stable and well developed valley corresponding to the ground state whereas the SG system has many competing valleys. Thus indeed, DFM is a spin analogue of a protein whereas SG is an analogue of a random sequence of aminoacids.
The disconnectivity graphs can be represented in a form that gives a better impression of an actual landscape, as shown in the bottom panels of Figure 9. The lines shown there connect the local energy minima to their energy barriers and then to the next minimum, and so on, forming an envelope of the original graph. This form is less cluttered and will be used in Sec. V. This envelope representation shows merely the smallest-scale variations in energy and omits passages with large barriers.
## IV DYNAMICAL CONNECTIVITY GRAPHS FOR LATTICE HETEROPOLYMERS
We now construct the supergraphs for the lattice heteropolymers discussed in Sec. II. The strengths of the dynamical linkages have already been determined in ref. at several temperatures. Here, however, we plot the linkages on the graphs that represent the energy landscapes, i.e. we rearrange the labels associated with the local energy minima. We discuss only the case of $`T=T_{min}`$, which is equal to 1.0 for both sequences $`A`$ and $`B`$.
Figures 10 and 11 show the supergraphs for sequences $`A`$ and $`B`$, respectively. The sizes of the circles are proportional to the occupancy of the minimum during the folding time. Similarly, the thicknesses of the lines connecting the circles are proportional to the connectivity (the linking frequency) between them. For clarity, we do not show connectivities which account for less than $`1\%`$ of all combined dynamical connectivities. The disconnectivity graphs themselves are drawn in dotted lines. All relevant dynamics is confined to those portions of the disconnectivity graphs which were marked, in Figures 1 and 2, by the dotted lines and are now magnified in Figures 10 and 11.
An inspection of the supergraphs clearly shows differences between the two sequences. Sequence $`A`$ has many inter-valley linkages but the linkages to the native basin, and the occupancies of conformations within that basin, are substantial. These are manifestations of a fast folding dynamics. For sequence $`B`$, on the other hand, the linkages tend to wither uncooperatively in multiple valleys. In addition, the combined occupancies away from the native valley outweigh the dynamical effects within the valley. On lowering the temperature, linkages in various valleys become disconnected and tend to avoid the native valley more and more, as discussed in ref. .
## V DYNAMICAL CONNECTIVITY GRAPHS FOR SPIN SYSTEMS
We now generate dynamical linkages for two spin systems, $`L=5`$ DFM and SG of Sec. III, and place them on the plots of the energy landscape. The “envelope” form of the representation of the landscape is chosen here, mostly for esthetic reasons. The connectivities are determined from 200 Monte Carlo trajectories of a fixed length of 5000 steps per spin. The duration of these trajectories exceeds the “folding time” many times at the temperatures studied, and thus the connectivities displayed refer to essentially equilibrium situations (the equilibrium dynamics for heteropolymers $`A`$ and $`B`$ is illustrated in ref. ). The connectivity rates were updated whenever (in terms of single-spin events, not steps per spin) there was a transition from one local energy minimum to another, identified after the steepest-descent mapping. Again, the $`1\%`$ display cutoff has been implemented when making the figure.
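A minimal sketch of this bookkeeping is given below. It is an illustration under assumed details (single-spin-flip Metropolis dynamics, periodic boundaries, uniform couplings as a stand-in for the DFM/SG cases), not the code used here: after every accepted flip the configuration is quenched to its local minimum, and a transition counter is updated whenever the minimum changes.

```python
import numpy as np
from collections import Counter

L, T, steps = 4, 1.8, 5000       # T near the specific-heat peak quoted for the DFM
rng = np.random.default_rng(1)
Jh = np.ones((L, L)); Jv = np.ones((L, L))   # stand-in couplings; randomize for DFM/SG

def field(s):
    # local field h_i = sum_j J_ij s_j with periodic boundaries (assumed)
    return (Jh * np.roll(s, -1, 1) + np.roll(Jh, 1, 1) * np.roll(s, 1, 1) +
            Jv * np.roll(s, -1, 0) + np.roll(Jv, 1, 0) * np.roll(s, 1, 0))

def quench(s):
    # steepest-descent mapping to the underlying local energy minimum
    s = s.copy()
    while True:
        dE = 2.0 * s * field(s)
        i = np.argmin(dE)
        if dE.flat[i] >= 0.0:
            return s
        s.flat[i] *= -1

s = rng.choice([-1, 1], size=(L, L))
prev = quench(s).tobytes()       # hashable label of the current basin
links = Counter()
for _ in range(steps):
    i = rng.integers(L * L)
    dE = 2.0 * s.flat[i] * field(s).flat[i]
    if dE <= 0.0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        s.flat[i] *= -1
        cur = quench(s).tobytes()
        if cur != prev:          # minimum-to-minimum transition: update the rate
            links[(prev, cur)] += 1
            prev = cur
# 'links' holds the dynamical connectivities; normalize them and drop entries
# below a 1% cutoff before drawing them on the disconnectivity graph.
```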
The main parts of Figures 12 and 13 show the supergraphs obtained at a temperature which corresponds to the $`T`$-location of the peak in specific heat. These temperatures, 1.8 for DFM and 1.4 for SG, are also close to $`T_{min}`$. The insets show the dynamically relevant parts of the energy landscape at lower temperatures. For the DFM, the dynamics becomes increasingly confined to the ground state basin when the temperature is reduced. On the other hand, for the SG, the dynamics in the ground state basin becomes less and less relevant, with a higher local energy minimum absorbing the majority of moves. This is indeed what happens with bad folding heteropolymers.
If we restrict counting of the transition rates only to the “folding stage,” i.e. till the ground state is encountered, the qualitative look of the supergraph for $`T`$ close to $`T_{min}`$ is as in the equilibrium case. The states involved are mostly the same but there is, by definition, only one link to the ground state per trajectory.
The dynamical connectivity graphs in 3D $`10\times 10\times 10`$ DFM systems are qualitatively similar to the 2D graphs but the underlying disconnectivity graphs are harder to display due to a substantially larger number of the energy minima.
In this paper we have pointed out the existence of many analogies between protein folding and the dynamics of spin systems. These analogies have restrictions. For instance, the simple Ising spin systems in 3$`D`$ have continuous phase transitions, in the thermodynamic limit, and not the first-order-like transitions that are expected to characterize large proteins . This difference, however, is not crucial in the case of small systems. More accurate spin analogs of proteins, with the first order transition, can be constructed but the object of this paper was to discuss the basic types of spin systems.
On the other hand, it should be pointed out that these analogies are also more extensive. Consider, for instance, the Thirumalai criterion for good foldability of proteins. The criterion considers two quantities: the specific heat and the structural susceptibility of a heteropolymer. The latter is a measure of fluctuations in the structural deviations away from the native state. Both quantities have peaks at certain temperatures. The criterion specifies that if the two temperatures coincide a heteropolymer is a good folder. This is quite similar to what happens in uniform and disordered 3D ferromagnets: the peaks (singularities) in magnetic susceptibility and specific heat are located at the same critical temperature. On the other hand, in spin glasses, the broad maximum in the specific heat is located at a temperature which is substantially above the freezing temperature associated with the cusp in the susceptibility. Also in this sense then, spin glasses behave like bad folders.
The coarse-grained supergraphs that analyse dynamics in the context of the system’s energy landscape may become a valuable tool for understanding the complex behavior of many-body systems.
## ACKNOWLEDGMENTS
This work was supported by KBN (Grants No. 2P03B-025-13 and 2P03B-125-16). Fruitful discussions with Jayanth R. Banavar are appreciated.
no-problem/9904/nucl-ex9904011.html | ar5iv | text
# A New Measurement of the ¹𝑆₀ Neutron-Neutron Scattering Length using the Neutron-Proton Scattering Length as a Standard
## Abstract
The present paper reports high-accuracy cross-section data for the $`{}_{}{}^{2}\mathrm{H}(n,nnp)`$ reaction in the neutron-proton (np) and neutron-neutron (nn) final-state-interaction (FSI) regions at an incident mean neutron energy of 13.0 MeV. These data were analyzed with rigorous three-nucleon calculations to determine the $`{}_{}{}^{1}S_{0}^{}`$ np and nn scattering lengths, $`a_{np}`$ and $`a_{nn}`$. Our results are $`a_{nn}`$ = -18.7 $`\pm `$0.6 fm and $`a_{np}`$ = -23.5 $`\pm `$0.8 fm. Since our value for $`a_{np}`$ obtained from neutron-deuteron (nd) breakup agrees with that from free np scattering, we conclude that our investigation of the nn FSI done simultaneously and under identical conditions gives the correct value for $`a_{nn}`$. Our value for $`a_{nn}`$ is in agreement with that obtained in $`\pi ^{-}d`$ measurements but disagrees with values obtained from earlier nd breakup studies.
The difference in the $`{}_{}{}^{1}S_{0}^{}`$ neutron-neutron (nn) and proton-proton (pp) scattering lengths is an explicit measure of charge-symmetry breaking (CSB) of the nuclear force. The high sensitivity of these scattering lengths to the nuclear potential strength makes them a valuable probe for detecting small potential-energy contributions like the isospin-dependent forces that cause CSB. For realistic potentials, a 1% change in the potential strength results in a 30% shift in the scattering length. Since it is not technically viable to measure $`a_{nn}`$ directly by conducting free nn scattering experiments, one uses few-nucleon reactions that emit two neutrons with low relative momentum, i.e., a final-state interaction (FSI) configuration. The two reactions used to measure $`a_{nn}`$ that have the smallest theoretical uncertainties are pion-deuteron capture ($`\pi ^{-}+d\rightarrow n+n+\gamma `$) and neutron-deuteron breakup ($`n+d\rightarrow n+n+p`$). Curiously, measurements using these two reactions give significantly different values of $`a_{nn}`$: from $`\pi ^{-}d`$ capture measurements the average value for $`a_{nn}`$ is $`-18.6\pm 0.4`$ fm and from kinematically-complete nd breakup experiments the average value is $`-16.7\pm 0.5`$ fm . It was suggested that the difference in these values has its origin in the three-nucleon force (3NF), which would act in the nd breakup reaction but not in the other.
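The quoted sensitivity is easy to illustrate with a toy model. The sketch below uses an attractive square well with hypothetical depth and radius chosen to place the well near a virtual state; it is not one of the realistic potentials referred to above, and the exact amplification factor is model dependent, but it shows how a 1% change in potential strength produces a shift in the scattering length that is an order of magnitude larger.

```python
import math

HBAR2_OVER_M = 41.47   # MeV*fm^2, hbar^2/m_N; the nn reduced mass is m_N/2

def scattering_length(V0, R=2.75):
    """s-wave scattering length (fm) of an attractive square well of depth
    V0 (MeV) and radius R (fm): a = R*(1 - tan(kR)/(kR)), k^2 = m_N*V0/hbar^2.
    Both V0 and R are illustrative values, not fitted potential parameters."""
    k = math.sqrt(V0 / HBAR2_OVER_M)
    return R * (1.0 - math.tan(k * R) / (k * R))

V0 = 12.2                                # hypothetical depth giving a near -20 fm
a1 = scattering_length(V0)
a2 = scattering_length(1.01 * V0)        # strengthen the well by 1%
shift = abs((a2 - a1) / a1)              # relative shift in a, order of magnitude > 1%
print(a1, a2, shift)
```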
The goal of the present work was to measure $`a_{nn}`$ with the nd breakup reaction to an accuracy better than $`\pm `$0.7 fm. To this end, $`a_{np}`$ was measured simultaneously and used as a standard for evaluating our methods and for determining the influence of the 3NF in nd breakup. We obtained $`a_{np}`$ and $`a_{nn}`$ from absolute cross-section measurements in the np and nn FSI regions, respectively, in nd breakup at an incident mean neutron energy of $`E_n=13.0`$ MeV. Rigorous nd breakup calculations with the Tucson-Melbourne (TM) 3NF potential show that the percentage changes in np and nn FSI cross sections from the addition of a 3NF are equal. Based on this observation and the fact that $`a_{np}`$ has been determined to high accuracy by np scattering , we used our extracted values of $`a_{np}`$ to set an upper limit on the 3NF influence on the value of $`a_{nn}`$ determined in the present experiment. Because 3NF effects might have a stronger energy and angle dependence than indicated by our calculations with the TM 3NF model, we designed the experiment to obtain nn FSI data at the same energy and angle configurations as those for the np FSI data. Furthermore, guided by our calculations of the angle dependence of 3NF effects on the NN FSI cross-section enhancement, measurements were made at several np and nn emission angles. The measured cross-section distribution at each np (nn) angle was used to determine a value of $`a_{np}`$ ($`a_{nn}`$), and the predicted angle dependence of $`a_{np}`$ ($`a_{nn}`$) due to 3NF effects was investigated.
All measurements were made using the shielded neutron source at the Triangle Universities Nuclear Laboratory (TUNL). The experimental setup is shown in Fig. 1. The momenta of the two emitted neutrons and the energy of the proton in each breakup event were measured, thereby overdetermining the kinematics. The neutrons were detected in liquid organic scintillators. The energy of the emitted proton was measured in the deuterated liquid scintillator $`\mathrm{C}_6\mathrm{D}_{12}`$ (NE-232), which served as the deuteron scatterer and will be referred to as the central detector (CD). The energies of the outgoing neutrons were determined by measuring their flight times from the CD to the neutron detectors. The neutron detectors on the right side of the incident beam axis were used for the nn FSI measurements. At each angle of the nn pair, one neutron was detected in the ring-shaped detector placed 1.5 m from the CD and the other in the coaxial cylindrical detector placed 2.5 m from the CD, which was positioned to fill the solid angle of the opening in the ring detector.
In the np FSI one neutron moves in the same direction as the proton, and the other one is emitted on the opposite side of the incident neutron-beam axis. The neutrons emitted on the right side were detected in either the ring-shaped detector or in the cylindrical detector. The associated neutrons were emitted to the left side and were detected in cylindrical scintillators located as shown in Fig. 1. All neutron detectors were filled with a liquid scintillator fluid with $`n\gamma `$ pulse-shape sensitivity (either NE213 or BC501A). The active volume of each cylindrical detector was 12.6 cm dia. $`\times `$ 5.5 cm thick, and that of each ring detector was 7.6 cm inner dia. $`\times `$ 13.4 cm outer dia. $`\times `$ 4.0 cm thick. The neutron detector efficiencies were determined in a dedicated series of measurements using neutrons from the $`{}_{}{}^{2}\mathrm{H}(d,n)^3\mathrm{He}`$ reaction and from a <sup>252</sup>Cf source. The energy dependence of the relative detection efficiency of each detector was determined to an accuracy of $`\pm 1.0\%`$, and the absolute efficiency to $`\pm 2.5\%`$.
The neutron beam was produced using the $`{}_{}{}^{2}\mathrm{H}(d,n)^3\mathrm{He}`$ reaction. The production target was a 3-cm long cell pressurized with 7.8 atm of deuterium gas. The cell was bombarded with a 10.0-MeV d.c. deuteron beam, which entered the cell through a 6.35-$`\mu `$m thick Havar containment foil and was stopped in a gold end cap. The neutron energy spread was 400 keV. The detector area was shielded from the neutron production target by a 1.7-m thick multiple component wall. The neutron beam at the CD was defined by a rectangular double-truncated collimator, which was designed such that the CD was illuminated almost exclusively by unscattered neutrons produced in the deuterium gas cell. The deuteron beam current on target was about $`2\mu A`$, and the counting rate of the CD was about 400 kHz with a threshold setting of one-tenth of the Compton-scattering edge for $`\gamma `$ rays from a <sup>137</sup>Cs source. The pulse-height thresholds on the neutron detectors were set at one-third of the <sup>137</sup>Cs Compton-scattering edge. Data were collected for a total of 2000 hours.
The integrated beam-target luminosity was determined by measuring the yields for nd elastic scattering concurrently with the data from the breakup reaction. Since the differential cross section for nd elastic scattering can be calculated using realistic NN potentials in the Faddeev method with high numerical precision and the calculations agree well with existing data, we elected to use calculated cross sections rather than experimental data in the luminosity determination. The simultaneous accumulation of the breakup and nd elastic-scattering data reduced the sensitivity of our measurements to system deadtimes and absolute detection efficiencies. Also this technique eliminated the need for absolute measurements of the neutron flux and the thickness of the CD.
The width of the coincidence window in the event-trigger circuit was 400 ns and allowed for the concurrent measurements of true breakup events and events due to the accidental coincidences between signals from the CD and the neutron detectors. All events that satisfied conservation of energy within $`\pm `$2 MeV were projected into 0.5-MeV wide bins along the point-geometry kinematic locus. The distance along the kinematic curve of $`E_{n1}`$ versus $`E_{n2}`$, where $`E_{n1}`$ and $`E_{n2}`$ are the energies of the two emitted neutrons, will be referred to as S. The value of S is set to zero at the point where $`E_{n2}=0`$ and increases as one moves counterclockwise around the locus. The true+accidental and accidental events were projected onto the ideal locus separately. The projection was done using the minimum-distance technique described by Finckh et al. . The true events were obtained by subtracting accidental from true+accidental events.
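The projection can be sketched as follows. The code below is an illustration rather than the analysis code of the experiment: the locus is a hypothetical closed curve standing in for the true three-body kinematic locus, and the 0.3-MeV smearing is an assumed detector resolution.

```python
import numpy as np

# Stand-in point-geometry locus (E_n1, E_n2) as a closed polyline; in the real
# analysis it follows from three-body kinematics at E_n = 13.0 MeV.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
locus = np.column_stack([7.0 + 5.0 * np.cos(t), 7.0 + 5.0 * np.sin(t)])  # MeV, hypothetical

seg = np.linalg.norm(np.diff(locus, axis=0), axis=1)
S = np.concatenate([[0.0], np.cumsum(seg)])      # arclength coordinate along the locus

def project_onto_locus(events, bin_width=0.5):
    """Minimum-distance projection: each (E_n1, E_n2) event is assigned the
    S value of the nearest locus point, then histogrammed in 0.5-MeV bins."""
    d = np.linalg.norm(events[:, None, :] - locus[None, :, :], axis=2)
    s_vals = S[np.argmin(d, axis=1)]
    edges = np.arange(0.0, S[-1] + bin_width, bin_width)
    counts, _ = np.histogram(s_vals, bins=edges)
    return edges, counts

rng = np.random.default_rng(2)
true_idx = rng.integers(len(locus), size=1000)
events = locus[true_idx] + rng.normal(0.0, 0.3, size=(1000, 2))  # assumed resolution
edges, counts = project_onto_locus(events)
# In the analysis, accidental-coincidence spectra are projected the same way
# and subtracted bin by bin to obtain the true breakup yield.
```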
The measured cross sections were compared to Monte-Carlo (MC) simulations that included the energy resolution and the finite geometry of the experimental setup. The basis of these simulations was theoretical point-geometry cross-section libraries generated for a range of $`a_{np}`$ ($`a_{nn}`$) values at incident neutron energies of 12.8, 13.0 and 13.2 MeV. The point-geometry cross sections were obtained from transition matrix elements of the breakup operator $`U_0`$
$`U_0`$ $`=`$ $`(1+P)\stackrel{~}{T},`$ (1)
where the $`\stackrel{~}{T}`$ operator sums up all multiple scattering contributions through the three-nucleon (3N) Faddeev integral equation
$`\stackrel{~}{T}|\varphi >`$ $`=`$ $`tP|\varphi >+(1+tG_0)V_4^{(1)}(1+P)|\varphi >+tPG_0\stackrel{~}{T}|\varphi >`$ (2)
$`+`$ $`(1+tG_0)V_4^{(1)}(1+P)G_0\stackrel{~}{T}|\varphi >.`$ (3)
Here $`G_0`$ is the free 3N propagator, t is the NN t-matrix, and operator P is the sum of a cyclical and anticyclical permutation of three nucleons. In the generation of the libraries, the terms containing $`V_4`$, the 3NF potential, were set to zero.
The cross-section libraries were obtained using the Bonn-B (OBEPQ) NN potential . This potential is fitted in the $`{}_{}{}^{1}S_{0}^{}`$ state to the experimentally-determined value of $`a_{np}`$. The charge-independence breaking in the $`{}_{}{}^{1}S_{0}^{}`$ NN force is imposed by using for the $`{}_{}{}^{1}S_{0}^{}`$ nn force the version of the Bonn-B potential that was fitted to pp data. To account for charge-symmetry breaking in the calculations, the total 3N isospin T=3/2 admixture has been included . For the purpose of this analysis, modifications of the $`{}_{}{}^{1}S_{0}^{}`$ NN interaction were accomplished by adjusting the $`\sigma `$-meson coupling constant $`g_\sigma ^2/4\pi `$ . In this way $`{}_{}{}^{1}S_{0}^{}`$ np (nn) interactions with different np (nn) scattering lengths were generated.
Simulated cross sections in comparison to our data for the 28.0° configuration are shown in Figs. 2 and 3 for several values of $`a_{np}`$ and $`a_{nn}`$, respectively. A value of $`a_{NN}`$ and its statistical uncertainty were determined for each detector-pair configuration using a single-parameter (either $`a_{np}`$ or $`a_{nn}`$) minimum-$`\chi ^2`$ fit to the absolute cross-section data. The results are given in Table I. The uncertainties listed in Table I are statistical only. The systematic uncertainties in our determinations are $`\pm `$0.8 fm for $`a_{np}`$ and $`\pm `$0.6 fm for $`a_{nn}`$. Uncertainties in the neutron detector efficiencies and the integrated target-beam luminosity account for about 80% of the systematic uncertainty.
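The single-parameter minimum-χ² extraction can be sketched in a few lines. The `model` function below is a hypothetical Gaussian stand-in for the Monte-Carlo cross-section libraries (which in the actual analysis come from the Faddeev calculations), so only the fitting logic, not the physics, is represented.

```python
import numpy as np

def chi2(a, S, data, sigma, model):
    # chi-squared of a trial scattering length against the measured S-distribution
    return np.sum(((data - model(S, a)) / sigma) ** 2)

def model(S, a):
    # hypothetical stand-in: an FSI peak whose height scales with |a|
    return abs(a) * np.exp(-0.5 * ((S - 10.0) / 1.5) ** 2)

a_true = -18.7                      # fm, used only to fabricate pseudo-data
S = np.arange(5.0, 15.0, 0.5)       # MeV, 0.5-MeV bins along the locus
rng = np.random.default_rng(3)
sigma = 0.4 * np.ones_like(S)       # assumed statistical errors
data = model(S, a_true) + rng.normal(0.0, sigma)

grid = np.linspace(-25.0, -12.0, 261)
chis = np.array([chi2(a, S, data, sigma, model) for a in grid])
best = grid[np.argmin(chis)]
inside = grid[chis <= chis.min() + 1.0]     # delta-chi2 = 1: 1-sigma interval
err = 0.5 * (inside.max() - inside.min())
```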
We observed no significant angle dependence in $`a_{np}`$, and the consistency in the $`a_{nn}`$ data from one angle to the next is statistically acceptable. Combining the statistical and systematic uncertainties in quadrature, we obtain $`a_{np}`$ = -23.5 $`\pm `$ 0.8 fm and $`a_{nn}`$ = -18.7 $`\pm `$ 0.6 fm. Our result for $`a_{np}`$ is in agreement with the value of $`a_{np}`$ ($`-23.748\pm 0.009`$ fm ) obtained from free np scattering measurements. We use this result to set an upper limit on the influence of 3NF on the value of NN scattering lengths determined from our experiment, i.e., $`\mathrm{\Delta }a_{np}^{3NF}\equiv a_{np}^{nd}-a_{np}^{free}=0.2\pm 0.8`$ fm, where free and nd refer to values obtained from data for free np scattering and from nd breakup, respectively. This result is consistent with zero. Scaling our result for $`a_{np}`$ by the ratio of $`a_{nn}`$ to $`a_{np}`$, we obtain the upper limit due to 3NF effects in the nd breakup reaction to be $`\mathrm{\Delta }a_{nn}^{3NF}=0.2\pm 0.6`$ fm, which is also consistent with zero.
Using the TM 3NF model we estimated possible effects of 3NF on np FSI cross sections. The calculations were produced by solving Eq.(3) using four modern NN potentials: AV18 , CD Bonn , NijmI and NijmII . In each calculation the 2$`\pi `$-exchange TM 3NF potential was included as $`V_4`$, which was split into 3 parts where each one was symmetrical under exchange of two particles. For instance, for the 2$`\pi `$-exchange 3NF , this corresponds to the three possible choices of the nucleon which undergoes the (off-shell) $`\pi N`$ scattering. In our calculations with the 3NF the strong cut-off parameter $`\mathrm{\Lambda }`$ in the TM 3NF model was adjusted separately for each NN potential to reproduce the experimental triton binding energy . For details of the formalism and the numerical treatment refer to refs. . We calculated the percentage deviation caused by effects of the TM 3NF on the np FSI point-geometry cross section as a function of $`\theta _{np}`$ at an incident neutron energy of 13 MeV. At all angles and for all potentials the change in the cross section due to the addition of the TM 3NF never exceeds 6%. For the np FSI production angles of the present experiment it is limited to the range of 1% to 4%, which corresponds to a theoretical range of $`(\mathrm{\Delta }a_{np}^{3NF})_{th}`$ from -0.8 to -0.2 fm. Our experimentally determined $`\mathrm{\Delta }a_{np}^{3NF}`$ is within two standard deviations of the predictions using any of the four NN potentials with the TM 3NF adjusted to fit the triton binding energy.
Summarizing, our measured values are $`a_{np}`$ = -23.5 $`\pm `$0.8 fm and $`a_{nn}`$ = -18.7 $`\pm `$0.6 fm. In our analysis magnetic interactions are not considered; their possible effects remain to be studied. By comparing our results for $`a_{np}`$ to the recommended value from np free scattering and scaling by the ratio of $`a_{nn}`$ to $`a_{np}`$, we set an upper limit of $`\mathrm{\Delta }a_{nn}^{3NF}=0.2\pm 0.6`$ fm on the contribution of 3NF effects on our value of $`a_{nn}`$. Though the experimental results suggest an opposite sign for $`\mathrm{\Delta }a_{np}^{3NF}`$ than predicted using the TM 3NF, our value is consistent with the theoretical predictions within the reported uncertainties. Since our value for $`a_{np}`$ obtained from nd breakup agrees with that from free np scattering, we conclude that our investigation of the nn FSI done under identical conditions should lead to a valid measure of $`a_{nn}`$. Our value for $`a_{nn}`$ is in agreement with the recommended value , which comes from $`\pi ^{-}d`$ capture measurements, and disagrees with values obtained from earlier nd breakup studies .
###### Acknowledgements.
This work was supported in part by the U.S. Department of Energy, Office of High Energy and Nuclear Physics, under grant No. DE-FG02-97ER41033, by the Maria Sklodowska-Curie II Fund under grant No. MEN/NSF-94-161, by the USA-Croatia NSF Grant JF129, and by the European Community Contract No. CI1\*-CT-91-0894. The numerical calculations were performed on the Cray T916 of the North Carolina Supercomputing Center at the Research Triangle Park, North Carolina and on the Cray T90 and T3E of the Höchstleistungsrechenzentrum in Jülich, Germany.
no-problem/9904/cond-mat9904016.html | ar5iv | text
# Superconducting Proximity Effect and Universal Conductance Fluctuations
## Abstract
We examine universal conductance fluctuations (UCFs) in mesoscopic normal-superconducting-normal (N-S-N) structures using a numerical solution of the Bogoliubov-de Gennes equation. We discuss two cases depending on the presence (“open” structure) or absence (“closed” structure) of quasiparticle transmission. In contrast to N-S structures, where the onset of superconductivity increases fluctuations, we find that UCFs are suppressed by superconductivity for N-S-N structures. We demonstrate that the fluctuations in “open” and “closed” structures exhibit distinct responses to an applied magnetic field and to an imposed phase variation of the superconducting order parameter.
Transport properties of mesoscopic structures in contact with superconducting segments have attracted a great deal of interest . In these structures, the superconducting proximity effect induces novel transport phenomena, including sub-gap anomalies in normal-superconducting (N-S) contacts and phase-dependent conductances in Andreev interferometers. Quasiclassical theory, which has been widely applied, has been successful in explaining phenomena associated with ensemble-averaged properties but it cannot describe fluctuations about the mean .
In normal diffusive nanostructures of size smaller than the phase coherence length $`l_\varphi `$ but larger than the elastic mean free path $`l_{\text{el}}`$, the conductance fluctuates by an amount of order $`e^2/h`$. These fluctuations can be observed by varying the magnetic field, gate potential, or impurity configuration. The fluctuations are universal within the diffusive regime: their magnitude is partly determined by symmetry and does not depend on the degree of disorder or system size. One such symmetry-breaking operation is the application of an external magnetic field, which breaks time reversal (TR) symmetry and decreases the rms fluctuations by a factor of $`\sqrt{2}`$.
For normal-superconducting systems, it has been shown that the onset of superconductivity increases the rms magnitude of UCFs by a factor which depends on the underlying order parameter. Beenakker and Brouwer (BB) predict that in the absence of phase gradients for the order parameter, breaking TR symmetry has a negligible effect on the rms conductance fluctuations $`\delta G_{\text{NS}}`$ of a N-S structure. Altland and Zirnbauer (AZ) , on the other hand, find that $`\delta G_{\text{NS}}`$ decreases by a factor of $`\sqrt{2}`$, provided that the average phase change of an Andreev-reflected quasiparticle is zero. When the latter condition is satisfied, AZ predict that in zero magnetic field,
$$\delta G_{\text{NS}}(B=0)=2\sqrt{2}\delta G_{\text{NN}}(B=0),$$
(1)
where $`\delta G_{\text{NN}}`$ is the rms conductance fluctuation when the S contact is in the normal state. Since switching on a magnetic field decreases both $`\delta G_{\text{NS}}`$ and $`\delta G_{\text{NN}}`$ by a factor of $`\sqrt{2}`$, they also find
$$\delta G_{\text{NS}}(B0)=2\sqrt{2}\delta G_{\text{NN}}(B0)=2\delta G_{\text{NN}}(B=0).$$
(2)
In contrast, since BB predict that $`\delta G_{\text{NS}}`$ is almost insensitive to TR symmetry breaking, they obtain
$$\delta G_{\text{NS}}(B=0)\approx 2\delta G_{\text{NN}}(B=0)$$
(3)
$$\delta G_{\text{NS}}(B0)=2\sqrt{2}\delta G_{\text{NN}}(B0)=2\delta G_{\text{NN}}(B=0).$$
(4)
Thus the AZ and BB scenarios yield essentially the same fluctuations for finite $`B`$, a result which has recently been tested experimentally , whereas they yield fluctuations of different magnitudes in zero field.
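Taking the zero-field normal-state value δG_NN(B=0) = 0.358 (in units of 2e²/h, from Fig. 3 of this paper) as an example, the chains of equalities in Eqs. (1)-(4) reduce to simple arithmetic, sketched below.

```python
import math

dG_NN_0 = 0.358                      # rms dG_NN at B = 0, units of 2e^2/h (Fig. 3)
dG_NN_B = dG_NN_0 / math.sqrt(2)     # TR-symmetry breaking: factor 1/sqrt(2)

# Altland-Zirnbauer: dG_NS = 2*sqrt(2)*dG_NN, with or without a field
az_0 = 2 * math.sqrt(2) * dG_NN_0
az_B = 2 * math.sqrt(2) * dG_NN_B
# Beenakker-Brouwer: dG_NS ~ 2*dG_NN at B = 0 and is nearly field-insensitive
bb_0 = 2 * dG_NN_0
bb_B = 2 * math.sqrt(2) * dG_NN_B    # Eq. (4): same finite-field value as AZ

print(az_0, bb_0)    # differ by sqrt(2) at B = 0: the distinguishable case
```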
In this paper we examine for the first time the crossover between the AZ and BB scenarios and predict that this crossover is observable in closed Andreev interferometers. The BB and AZ calculations are applicable to N-S structures with a single N contact, whereas many experiments involve two or more N contacts . We also examine the field and phase dependence of fluctuations in the conductance $`G_{\text{NSN}}`$ of N-S-N structures possessing two normal (N) contacts . For the considered geometries, these have not been explored before in the literature.
We consider the two limiting cases of N-S-N structures shown in Fig. 1, comprising two N-reservoirs attached to a normal diffusive region with superconducting contacts and a current flowing from left to right. In Fig. 1a, the two superconductors do not intersect the diffusive normal region and affect transport only through the proximity effect. In this structure, quasiparticle transmission between the reservoirs is possible and therefore we refer to this as “open”. In the second, “closed”, structure of Fig. 1b, a superconductor of length $`L_\text{S}`$ divides the diffusive region into two normal regions, each of length $`L_\text{N}`$. The length $`L_\text{S}`$ of the superconducting segment is much greater than the superconducting coherence length $`\xi `$ and therefore quasiparticle transmission through the superconductor is suppressed.
FIG. 1. a) An open Andreev interferometer where the superconducting segments are located outside the classical path of the current. The N-S interfaces contain an optional Schottky barrier with transmissivity $`\mathrm{\Gamma }`$ (i.e., transmission probability per channel). The S segments have an order-parameter phase difference of $`\varphi =\varphi _1-\varphi _2`$. b) A closed interferometer where the superconducting segments divide the normal diffusive region into two halves with no transmission from left to right. At the interface between the S segments there is a tunnel barrier of width 1 site and transmissivity $`\mathrm{\Gamma }=0`$.
In the linear-response limit and at zero temperature, the relationship between the current $`I`$ and the reservoir potential difference $`v_1v_2`$ was first obtained in and the corresponding conductance (in units of $`2e^2/h`$) may be written in the form ,
$$G_{\text{NSN}}=T_0+T_a+2\left[\frac{R_aR_a^{}T_aT_a^{}}{R_a+R_a^{}+T_a+T_a^{}}\right],$$
(5)
where unprimed \[primed\] quantities $`T_0`$ and $`R_0`$ ($`T_a`$ and $`R_a`$) are normal (Andreev) transmission and reflection probabilities for quasiparticles from lead 1 \[lead 2\]. To evaluate this expression, we solve the Bogoliubov-de Gennes equation numerically on a two-dimensional tight-binding lattice with diagonal disorder, utilizing the decimation method to evaluate the transport coefficients of a two-probe structure . The magnetic field is introduced via the Peierls substitution and the disorder in the normal diffusive area is introduced by varying the site energies at random within the range $`[w/2,w/2]`$.
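Eq. (5) and its closed-structure limit can be checked directly. The sketch below implements the formula with illustrative (made-up) Andreev-reflection probabilities and verifies that, when all transmission coefficients vanish, it equals two N-S conductances $`2R_a`$ and $`2R_a^{}`$ combined in series.

```python
def G_NSN(T0, Ta, Ra, Ra_p, Ta_p):
    """Two-probe N-S-N conductance of Eq. (5), in units of 2e^2/h.
    Unprimed (primed) coefficients refer to quasiparticles injected from
    lead 1 (lead 2); the subscript a marks Andreev processes."""
    return T0 + Ta + 2.0 * (Ra * Ra_p - Ta * Ta_p) / (Ra + Ra_p + Ta + Ta_p)

# Closed structure: all transmission suppressed, only Andreev reflection left.
Ra, Ra_p = 0.31, 0.47                 # illustrative probabilities, not computed values
g = G_NSN(0.0, 0.0, Ra, Ra_p, 0.0)
g_series = 1.0 / (1.0 / (2 * Ra) + 1.0 / (2 * Ra_p))   # two N-S pieces in series
print(g, g_series)
```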
Conductance fluctuations are universal only if transport through the normal structures is diffusive. Therefore, following , we choose the disorder $`w`$ such that the conductivity of the diffusive normal region is independent of its length $`L`$. For a sample of width $`M=35`$, in the absence of superconductivity, Fig. 2 shows the dimensionless conductivity $`l=GL/M`$ as a function of $`L`$ for four different values of disorder $`w`$. In the quasi-ballistic regime, $`G`$ approaches the Sharvin conductance and $`l`$ increases linearly with $`L`$, whereas at intermediate $`L`$, $`l`$ exhibits a plateau, corresponding to the diffusive regime of interest. In units of the lattice constant, this plateau value of $`l`$ is equal to the elastic mean free path $`l_{\text{e}l}`$. Therefore, Fig. 2 represents a mapping between the model-dependent parameter $`w`$ and the experimentally accessible quantity $`l_{\text{e}l}`$. Finally, for values of $`L`$ larger than the localization length $`\lambda Nl_{\text{el}}`$, where $`N=30`$ is the number of open channels, $`l`$ decreases with increasing $`L`$. In Figs. 4–5 we choose $`w=1.5`$, corresponding to a mean free path of $`l_{\text{el}}8`$. The other length scales (in units of the lattice constant) are $`M=35`$, $`L_\text{N}=75`$, $`L_\text{S}=40`$, $`W=10`$ and $`\xi =10`$, where $`\xi `$ is the superconducting coherence length setting the scale for the decay of zero-energy wavefunctions within the superconductor .
FIG. 2. Quantity $`GL/M`$ plotted as function of $`L`$ for different strengths $`w`$ of disorder. The region where the curve is essentially constant corresponds to diffusive transport. The constant defines the mean free path $`l_{\text{el}}`$. The dotted lines at $`l=l_{\text{el}}`$ are guides to the eye.
Figure 3 shows the rms fluctuations as a function of $`l`$ for a normal structure in the absence of superconductivity and for the N-S-N structures of Fig. 1 (with zero magnetic field and a constant order-parameter phase); it demonstrates that fluctuations are independent of $`l`$. This universal behavior is in agreement with previous numerical and analytical works. In the diffusive regime, the presence of superconductivity for both structures in Fig. 1 suppresses the fluctuations by a factor of order $`\sqrt{2}`$.
For the closed structure in Fig. 1b, all transmission coefficients necessarily vanish, and for structures of type 1a, the presence of disorder and a proximity-induced suppression of the density of states may also cause transmission coefficients to be negligible. In this case, Eq. (5) reduces to a sum of two N-S resistances, $`G_{\text{NSN}}^{-1}=1/(2R_a)+1/(2R_a^{})`$, associated with Andreev reflection of quasiparticles from the two decoupled reservoirs. By adding two statistically independent N-S resistances in series, one finds that in the absence of transmission, the fluctuations $`\delta G_{\text{NSN}}`$ of the total conductance are related to the fluctuations $`\delta G_{\text{NS}}`$ of the N-S-conductance by $`\delta G_{\text{NSN}}=\delta G_{\text{NS}}/2\sqrt{2}`$. Therefore, in the absence of transmission, the $`\sqrt{2}`$ decrease in $`\delta G_{\text{NSN}}`$ due to the onset of superconductivity is a consequence of this relation and Eq. (3) .
FIG. 3. Universal conductance fluctuations (rms deviations) in units of $`2e^2/h`$ as a function of $`l`$ for a normal structure (crosses), an open N-S-N structure (open circles), and a closed N-S-N structure (closed circles), obtained from over 500 realizations of disorder. The mean free path extends over three regimes: localized ($`l<6`$), diffusive ($`6<l<35`$) and quasi-ballistic ($`l>35`$). The solid lines at $`\delta G=\delta G_\text{N}=0.358`$ and $`\delta G=\delta G_\text{S}=0.267`$ show the average of the standard deviations within the diffusive regime in the normal and superconducting cases, respectively. The dashed lines indicate 95% confidence intervals.
Figure 4a shows that in the presence of a large enough magnetic field, the rms fluctuations of both a normal structure and the open structure in Fig. 1a are suppressed by a factor of order $`\sqrt{2}`$. However, in agreement with the prediction of BB, the suppression of $`\delta G_{\text{NSN}}`$ for the closed structure of Fig. 1b is at most of the order of 10%.
To demonstrate that a crossover from the BB to the AZ scenario is possible in such interferometers, we have also investigated the effect of imposing a difference $`\varphi `$ between the order-parameter phases of the two superconductors. The results of these calculations are shown in Fig. 4b. For the closed structure, in the absence of a magnetic field (solid line), $`\delta G_{\text{NSN}}`$ increases monotonically as $`\varphi `$ increases from zero, reaching a maximum at $`\varphi =\pi `$. The enhancement factor at $`\varphi =\pi `$ is of order $`\sqrt{2}`$, which is the ratio of the AZ and BB predictions for UCFs, thereby demonstrating that a crossover is indeed possible in closed interferometers. In the presence of a magnetic field (dashed line), the fluctuations become nearly independent of the phase difference $`\varphi `$, since time reversal symmetry is already broken by the field. Remarkably, the corresponding curve for the open structure shows no phase dependence at all; at present, there exists no analytic derivation of this result. In contrast with the mean conductance $`G`$, which at $`E=0`$ is independent of $`\varphi `$ but exhibits large-scale oscillations at finite $`E`$, we have found no significant phase dependence in the fluctuations, even at energies of the order of the Thouless energy.
Figures 3 and 4 demonstrate that if the N-S interfaces are clean enough, the magnitude of the UCFs is sensitive to the onset of superconductivity in the S contacts. As a final result, we note that this effect is suppressed by the presence of Schottky barriers at the N-S interfaces. This is demonstrated in Fig. 5, which shows $`\delta G_{\text{NSN}}`$ as a function of the barrier conductance $`G_B=N\mathrm{\Gamma }`$ for different numbers of channels $`N`$ in the contact and transmissivities $`\mathrm{\Gamma }`$. For $`G_B\lesssim 10e^2/h`$, the fluctuations approach those of a purely normal structure, and for $`G_B\gtrsim 20e^2/h`$ fluctuations characteristic of an N-S-N structure with no barriers are obtained. From an experimental viewpoint, this suppression could be used as a novel probe of the strength of such barriers.
FIG. 4. a) An external magnetic field breaks time reversal symmetry and decreases the magnitude of the fluctuations by a factor of about 1.3, both in a normal structure (crosses) and in an open interferometer (open circles). However, for the closed structure of Fig. 1b (solid circles), the suppression is only of the order of 10%. The dashed lines at $`\delta G_{\text{NSN}}(B\ne 0)=0.207`$, $`\delta G_{\text{NSN}}(B\ne 0)=0.254`$ and $`\delta G_{\text{NN}}(B\ne 0)=0.275`$ show the average of the fluctuations for fluxes $`\mathrm{\Phi }>\mathrm{\Phi }_0/2=h/2e`$ in the open, closed, and normal structures, respectively. b) Magnitude $`\delta G_{\text{NSN}}`$ of the universal conductance fluctuations as a function of the phase difference $`\varphi =\varphi _1-\varphi _2`$ between the superconducting segments in the absence (solid line and circles) and presence (dashed line and triangles) of a magnetic field. Results are presented for both closed (solid markers) and open (open markers) interferometers.
In summary, we have used the numerical solution of the Bogoliubov-de Gennes equations to demonstrate that conductance fluctuations in normal-superconducting mesoscopic structures depend on (i) the presence or absence of quasiparticle transmission through the structure, (ii) the external magnetic field, and (iii) the order-parameter phase difference between the superconducting contacts. If the phase of the superconducting order parameter is constant and time-reversal symmetry is not broken by a magnetic field, the open and closed structures possess the same UCFs, namely $`\delta G_{\text{NSN}}\simeq 0.26[2e^2/h]`$. In contrast to the predictions of AZ and BB for N-S structures, this is smaller by roughly a factor of $`\sqrt{2}`$ than the normal-state fluctuations $`\delta G_{\text{NN}}`$. Applying a magnetic field to the open structure decreases $`\delta G_{\text{NSN}}`$ by an additional factor of $`\sqrt{2}`$, a result which lies outside current analytic calculations, whereas for the closed structure the suppression is found to be much weaker (roughly 10%), as suggested by BB. We have also examined the phase dependence of the UCFs and shown that for a closed structure, $`\delta G_{\text{NSN}}`$ increases by almost $`\sqrt{2}`$ as the phase difference between the two superconducting segments is varied from zero to $`\pi `$, thereby demonstrating a crossover from the BB scenario to the AZ scenario. In contrast, for the open structure we find that $`\delta G_{\text{NSN}}`$ is essentially independent of the phase. The calculated magnitudes of the UCFs are summarized in Table I.
FIG. 5. A tunnel barrier at the N-S interface of the open structure in Fig. 1a weakens the effect of superconductivity; for barrier conductance $`G_B\lesssim 10e^2/h`$, the fluctuations start to resemble those in the absence of superconductivity. Here the barrier conductance is expressed in units of $`2e^2/h`$, and fluctuations are plotted for 47 (solid squares), 38 (open squares), 35 (solid circles), 23 (open triangles) and 19 (solid triangles) open channels at the interface.
The numerical simulations have been performed on a Cray C94 at the Center for Scientific Computing (CSC, Finland). TH acknowledges the postgraduate scholarship awarded by Helsinki University of Technology and the hospitality of Lancaster University.
# Evidence for a Trivial Ground State Structure in the Two-Dimensional Ising Spin Glass
## Abstract
We study how the ground state of the two-dimensional Ising spin glass with Gaussian interactions in zero magnetic field changes on altering the boundary conditions. The probability that relative spin orientations change in a region far from the boundary goes to zero with the (linear) size of the system $`L`$ like $`L^{-\lambda }`$, where $`\lambda =0.70\pm 0.08`$. We argue that $`\lambda `$ is equal to $`d-d_f`$ where $`d(=2)`$ is the dimension of the system and $`d_f`$ is the fractal dimension of a domain wall induced by changes in the boundary conditions. Our value for $`d_f`$ is consistent with earlier estimates. These results show that, at zero temperature, there is only a single pure state (plus the state with all spins flipped) in agreement with the predictions of the droplet model.
The nature of the ordering in spin glasses below the transition temperature, $`T_c`$, remains rather poorly understood. For the infinite range model, the replica symmetry breaking solution of Parisi is generally believed to be correct. An important aspect of this solution is that the order parameter is a non-trivial distribution, $`P(q)`$, where $`q`$ describes the overlap of the spin configuration between two copies of the system with identical interactions. The distribution is non-trivial because very different spin configurations occur with significant statistical weight. One loosely says that the system can be in many “pure states”. Monte Carlo simulations on (more realistic) short range models on quite small lattices find a non-trivial $`P(q)`$ with a weight at $`q=0`$ which is independent of system size (for the range of sizes studied), as predicted by the Parisi theory.
An alternative approach, the “droplet model”, has been proposed by Fisher and Huse (see also Refs. ). Thermodynamic states and pure states are defined precisely by considering correlation functions of spins in a region small compared with the system size and far from the boundary, and asking whether or not they change upon changing the boundary conditions as the (linear) system size $`L`$ tends to infinity. Each different set of correlation functions corresponds to a different thermodynamic state. The droplet theory, the Parisi theory, and some other scenarios have been studied in detail by Newman and Stein.
By making some plausible and self-consistent assumptions, the droplet theory predicts that the structure of pure states is trivial in short range spin glasses below $`T_c`$. In zero field, trivial pure state structure means that any thermodynamic state is a combination of just two distinct pure states, related by flipping all the spins, which have the same free energy by symmetry. If one looks at the whole system, rather than a relatively small region far from the boundary, one might note that part of the system is in one pure state and the other part in the spin-flipped state, with a domain wall between them. Hence a global quantity like $`P(q)`$ could have a non-trivial form even though the structure of pure states is actually trivial.
To unambiguously distinguish between the droplet and Parisi pictures it is therefore better to study correlation functions, such as the overlap distribution, in a finite region far from the boundary, since the probability that the domain wall goes through this region vanishes as $`L\to \mathrm{\infty }`$. More precisely, one should investigate whether these correlation functions change when the boundary conditions are changed. To our knowledge, however, this has not been done before.
Here, we perform such calculations numerically for the ground states of the Ising spin glass with Gaussian interactions in two dimensions. Although there is no spin glass order at finite temperature in this system, there is (complete) spin glass order in the ground state, so one can investigate the question of the number of pure states at zero temperature. Two dimensions has the additional advantage that there are efficient algorithms for computing exact ground states and so quite large sizes can be investigated. We find that the probability for the spin configuration in the center to change, when the boundary conditions are altered, goes to zero like $`L^{-\lambda }`$ as $`L`$ increases, where $`\lambda `$ can be related to the fractal dimension of a domain wall which is induced by the boundary condition change. This result shows that there is only a single pure state at $`T=0`$ (i.e. a single ground state), plus the state with all spins flipped, in agreement with the droplet theory.
The Hamiltonian is given by
$$\mathcal{H}=-\underset{i,j}{\sum }J_{ij}S_iS_j,$$
(1)
where the sites $`i`$ lie on an $`L\times L`$ square lattice with $`L\le 30`$, $`S_i=\pm 1`$, and the $`J_{ij}`$ are nearest-neighbor interactions chosen according to a Gaussian distribution with zero mean and unit standard deviation. Initially, we impose periodic boundary conditions, denoted by “P”. Since the distribution of the interactions $`J_{ij}`$ is continuous, the ground state is unique (apart from the equivalent state obtained by flipping all the spins). We determine the energy and spin configuration of the ground state for a given set of bonds. Next we impose anti-periodic conditions (“AP”) along one direction, which is completely equivalent to changing the sign of the interactions along this boundary, and recompute the ground state. Finally, we change the sign of half the bonds at random along this boundary, which we denote by “R”. Note that the different boundary conditions correspond to different choices of the interactions which occur with the same probability; hence they are statistically equivalent.
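The three statistically equivalent choices can be sketched as follows (our own minimal construction — the paper gives no code — with an assumed bond layout and an assumed overall sign for the energy):

```python
import numpy as np

def make_bonds(L, rng):
    # Jx[i, j] couples spin (i, j) to (i+1 mod L, j); Jy[i, j] couples it to
    # (i, j+1 mod L). Periodic in both directions: the "P" boundary conditions.
    return rng.normal(size=(L, L)), rng.normal(size=(L, L))

def change_boundary(Jx, mode, rng):
    # "AP": flip the sign of every bond crossing the x-boundary;
    # "R":  flip the sign of a random half of those bonds.
    J = Jx.copy()
    if mode == "AP":
        J[-1, :] *= -1
    elif mode == "R":
        J[-1, rng.random(J.shape[1]) < 0.5] *= -1
    return J

def energy(S, Jx, Jy):
    # E = -sum over nearest-neighbour bonds of J_ij S_i S_j (sign convention assumed)
    return -(np.sum(Jx * S * np.roll(S, -1, axis=0)) +
             np.sum(Jy * S * np.roll(S, -1, axis=1)))
```

Since the Gaussian distribution is symmetric, the flipped bonds are drawn from the same distribution as the originals, which is the statistical-equivalence statement above.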
For the smaller sizes, $`L\le 8`$, we compute the ground state by rapidly quenching from a randomly chosen spin configuration, repeating many times until we are confident that the ground state energy has been found. For the two largest sizes, $`L=16`$ and 30, this is impractical, so instead we use the Cologne spin glass ground state server. We repeat the calculation of the ground state for the three copies with different boundary conditions for a minimum of 2000 samples for each size.
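For the small sizes, the quench-and-restart strategy can be sketched as a greedy single-spin-flip descent from random starts (a minimal stand-in for the authors' procedure, not their actual code):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 6
Jx = rng.normal(size=(L, L))   # bond between (i, j) and (i+1, j), periodic
Jy = rng.normal(size=(L, L))   # bond between (i, j) and (i, j+1), periodic

def local_field(S):
    # h_i = sum over the four neighbours j of J_ij S_j
    return (Jx * np.roll(S, -1, 0) + np.roll(Jx, 1, 0) * np.roll(S, 1, 0) +
            Jy * np.roll(S, -1, 1) + np.roll(Jy, 1, 1) * np.roll(S, 1, 1))

def quench(S):
    # greedy descent: flipping spin i changes E = -sum J S S by 2 S_i h_i,
    # so keep flipping the spin with the most negative S_i h_i
    S = S.copy()
    while True:
        g = S * local_field(S)
        i = np.unravel_index(np.argmin(g), g.shape)
        if g[i] >= 0:
            return S
        S[i] *= -1

def repeated_quench(n_restarts):
    # quench from many random starts and keep the lowest-energy result
    best, e_best = None, np.inf
    for _ in range(n_restarts):
        S = quench(rng.choice([-1.0, 1.0], size=(L, L)))
        e = -0.5 * np.sum(S * local_field(S))
        if e < e_best:
            best, e_best = S, e
    return best, e_best
```

Each restart ends in a state stable against all single-spin flips; only with many restarts (and small $`L`$) can one be reasonably confident that the true ground state has been reached.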
Next we discuss how to study the dependence of the spin configuration on boundary conditions. We consider a central block containing $`N_B=L_B^2`$ spins, and ask if the correlation functions between two spins, $`i`$ and $`j`$ say, in the block depend on the boundary conditions, i.e. whether $`\langle S_i^\alpha S_j^\alpha \rangle _T-\langle S_i^\beta S_j^\beta \rangle _T`$ is nonzero for $`L\to \mathrm{\infty }`$, where $`\alpha `$ and $`\beta `$ refer to two distinct boundary conditions (P, AP, or R here), $`S_i^\alpha `$ refers to a spin in the copy with the $`\alpha `$ boundary condition, and $`\langle \cdots \rangle _T`$ denotes a thermal average. We consider even spin correlation functions because our boundary conditions do not distinguish between states which differ by flipping all the spins. Since the difference can have either sign, it is convenient to consider its square, $`\left(\langle S_i^\alpha S_j^\alpha \rangle _T-\langle S_i^\beta S_j^\beta \rangle _T\right)^2`$. If we sum over all the spins in the block, normalize, and average over disorder, it is easy to see that this becomes
$$\mathrm{\Delta }=\left\langle \left(q_{\alpha \alpha }^B\right)^2\right\rangle +\left\langle \left(q_{\beta \beta }^B\right)^2\right\rangle -2\left\langle \left(q_{\alpha \beta }^B\right)^2\right\rangle ,$$
(2)
where
$$q_{\alpha \beta }^B=\frac{1}{N_B}\underset{i=1}{\overset{N_B}{\sum }}S_i^\alpha S_i^\beta $$
(3)
is the overlap between the block configurations with $`\alpha `$ and $`\beta `$ boundary conditions, and the brackets $`\langle \cdots \rangle `$ refer to both a thermal average and an average over the disorder. Eq. (2) can be written as
$$\mathrm{\Delta }=\int _{-1}^{1}q^2\left[P_{\alpha \alpha }^B(q)+P_{\beta \beta }^B(q)-2P_{\alpha \beta }^B(q)\right]𝑑q,$$
(4)
where
$$P_{\alpha \beta }^B(q)=\left\langle \delta \left(q-q_{\alpha \beta }^B\right)\right\rangle $$
(5)
is the probability distribution for the block overlaps. We have written these expressions in a general form, valid for $`T>0`$ as well as $`T=0`$. Similar arguments can be made for correlations of a larger number of spins, which leads to expressions like Eq. (4) but with higher moments of the overlap distributions. Hence the crucial quantity is the difference in the block spin overlap distributions with different boundary conditions which occurs in Eq. (4), i.e.
$$\mathrm{\Delta }P_{\alpha \beta }^B(q)\equiv P_{\alpha \alpha }^B(q)+P_{\beta \beta }^B(q)-2P_{\alpha \beta }^B(q).$$
(6)
If this difference tends to zero as $`L\to \mathrm{\infty }`$ then the droplet picture is valid. We emphasize that this test does not require the size of the block to also become large.
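At $`T=0`$ the thermal averages are trivial and the disorder average is a mean over samples, so Eqs. (2)–(3) reduce to a few lines of code (function names are ours):

```python
import numpy as np

def block_overlap(Sa, Sb, LB):
    # q^B_{ab}, Eq. (3): overlap restricted to the central LB x LB block
    lo = (Sa.shape[0] - LB) // 2
    blk = slice(lo, lo + LB)
    return np.mean(Sa[blk, blk] * Sb[blk, blk])

def delta(q_aa, q_bb, q_ab):
    # Eq. (2): Delta = <q_aa^2> + <q_bb^2> - 2 <q_ab^2>,
    # with <.> here a plain average over disorder samples
    sq = lambda q: np.mean(np.square(q))
    return sq(q_aa) + sq(q_bb) - 2 * sq(q_ab)
```

Note that flipping all spins of one copy only changes the sign of the block overlap, so $`\mathrm{\Delta }`$ is insensitive to the global spin-flip symmetry, as required.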
Specializing now to $`T=0`$, $`P_{\alpha \alpha }^B(q)`$ is just the sum of two delta functions with equal weight at $`q=\pm 1`$, since the ground state is unique (apart from overall spin reversal). Hence, at $`T=0`$, it is sufficient to investigate the block overlap distribution $`P_{\alpha \beta }^B(q)`$ with $`\alpha \beta `$. We calculate this for $`\alpha =`$ P, and $`\beta =`$ AP and R.
Now we discuss our results, for which we take $`L_B=2`$. First of all, Fig. 1 shows data for the root mean square difference in ground state energy,
$$\mathrm{\Delta }E_{\alpha \beta }\equiv \left\langle \left(E_\alpha ^0-E_\beta ^0\right)^2\right\rangle ^{1/2}$$
(7)
with $`E^0`$ the total ground state energy (not the energy per spin), for $`\alpha =`$ P and $`\beta =`$ AP and R. One sees that $`\mathrm{\Delta }E_{P,AP}`$ goes to zero like $`L^\theta `$ as $`L`$ increases, where $`\theta =-0.285\pm 0.020`$. This is in agreement with earlier work. The negative value means that large domains cost very little energy, so the order in the ground state will spontaneously break up at any finite temperature, showing that $`T_c=0`$.
The results for $`\mathrm{\Delta }E_{P,R}`$ are quite different, however, increasing with $`L`$, roughly as $`L^{1/2}`$ for large $`L`$, rather than decreasing. This difference is easily understood, since the defect (i.e. the region where the energy is locally different for the two boundary conditions) can be locally removed in the P-AP case by changing the sign of the spins to one side of the boundary. The defect will then be a single domain wall somewhere in the sample, not necessarily near the boundary. However, this cannot be done in the P-R case, and a part of the defect, with an energy which one could guess to be $`L^{(d-1)/2}`$ in $`d`$ dimensions, will stay close to the boundary, in addition to a domain wall which could be arbitrarily far away.
We show some of our data for the block overlaps in Fig. 2. The results for the P-AP and P-R overlaps are qualitatively similar to each other, with the weight away from the peaks at $`q=\pm 1`$ dropping as $`L`$ increases.
We characterize this trend by the weight at $`q=0`$ and show the results in Fig. 3. For both anti-periodic and random boundary condition changes, the weight at $`q=0`$ vanishes like $`L^{-\lambda }`$, where $`\lambda =0.70\pm 0.08`$. This value is easy to understand since $`P_{\alpha \beta }^B(0)`$ is just the probability that the domain wall bisects the block. If the fractal dimension of the domain wall is $`d_f`$ then, generalizing to $`d`$ dimensions, the probability that it goes through any small region is proportional to $`L^{-(d-d_f)}`$. This immediately gives $`d_f=1.30\pm 0.08`$, which is consistent with other estimates for $`d_f`$: $`1.26\pm 0.03`$ by Bray and Moore, $`1.34\pm 0.10`$ by Rieger et al., and $`1.31\pm 0.1`$ (for a related model) by Gingras.
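The exponent $`\lambda `$ and the resulting $`d_f`$ follow from a log–log least-squares fit; a sketch on synthetic power-law data (the measured values of $`P^B(0)`$ are not tabulated here, so the numbers below are purely illustrative):

```python
import numpy as np

def power_law_exponent(L, y):
    # least-squares slope of log(y) vs log(L), i.e. y ~ L^slope
    slope, _intercept = np.polyfit(np.log(L), np.log(y), 1)
    return slope

L = np.array([4.0, 8.0, 16.0, 30.0])
P0 = 0.9 * L ** -0.70              # synthetic stand-in for the measured P^B(0)
lam = -power_law_exponent(L, P0)   # recovers lambda = 0.70
d_f = 2.0 - lam                    # d_f = d - lambda with d = 2, giving 1.30
```

With real data the scatter of the fitted slope over bootstrap resamples would supply the quoted error bar.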
Earlier calculations have investigated some effects of changing the boundary conditions from periodic to anti-periodic, usually just the change in the ground state energy, though Bray and Moore and Rieger have also calculated the fractal dimension of the domain wall. They obtain a value less than $`d`$ (as noted above), which implies that $`P_{P,AP}^B(0)`$ vanishes for large $`L`$, as we find explicitly here. However, as noted in our discussion of Fig. 1, anti-periodic boundary conditions are special since the defect can be locally removed by flipping the spins to one side of it. This is why we also investigate random boundary condition changes, for which the defect cannot be locally eliminated. An important result of our work is that the fractal dimension of the domain wall is the same in both cases.
For each configuration of the bonds in the bulk we have only studied a single random change in the boundary conditions. It would be interesting to get statistics on a large number of boundary condition changes to see if the probability for the domain wall to go through the central block obtained by averaging over boundary conditions for a single large sample is the same as we find here by averaging over samples. It would also be interesting to investigate boundary conditions which are optimized to minimize the ground state energy, i.e. which are correlated with the bonds in the bulk.
Recently, we have been able to perform calculations similar to those presented here for the three-dimensional spin glass, which has a finite $`T_c`$. There too we find evidence for a unique ground state.
After this work was virtually complete, we became aware of related work by Middleton. Whereas we start with periodic boundary conditions, Middleton takes free boundary conditions, which allows him to find a ground state (for the models studied) in polynomial time, permitting the study of very large sizes, up to $`L=512`$. In this approach one can only perturb the system far away from the central region by making it grow bigger. Hence there are three relevant lengths: the block size, which we call $`L_B`$, the size inside which the bonds are not changed (which we will call $`L_{mid}`$), and the overall size $`L`$. One needs $`L_{mid}\gg L_B`$ and at least some data which also satisfies $`L\gg L_{mid}`$. Hence the largest sizes $`L`$ that Middleton studies need to be very large. In our work, we only have one inequality to satisfy, $`L\gg L_B`$, rather than two, so the sizes do not need to be as large. Overall, the two approaches are complementary and, in our view, have similar validity for the two-dimensional spin glass. However, we believe that our approach is preferable for the three-dimensional spin glass, for which there are no polynomial algorithms, since it requires smaller sizes.
This work was supported by the National Science Foundation under grant DMR 9713977. M.P. is supported by University of California, EAP Program, and by a fellowship of Fondazione Angelo Della Riccia. We would like to thank D. L. Stein and G. Parisi for their comments on an earlier version of the manuscript. We would also like to thank Prof. M. Jünger and his group at the University of Cologne for putting their spin glass ground state server in the public domain.
# Scattering of Magnetic Solitons in two dimensions
## I Introduction
Localized solutions, often called solitons, play an increasingly important role in nonlinear field theories in two dimensions. Topological structures exist in particular in magnetic systems and have been studied extensively, both theoretically and experimentally.
An easy-plane ferromagnet is described by the Landau-Lifshitz equation
$$\dot{\text{n}}=\text{n}\times \text{f},$$
(1)
$$\text{f}=\mathrm{\Delta }\text{n}-n_3\widehat{\text{e}},\text{n}^2=1.$$
The field n represents the local magnetization of the material. We shall study here the case of a two-dimensional medium, so we assume $`\text{n}=\text{n}(x,y,t)`$. The dot denotes a time derivative, $`n_3`$ is the third component of n, $`\mathrm{\Delta }`$ is the Laplace operator and $`\widehat{\text{e}}=(0,0,1)`$ is the unit vector in the third direction. The normalization condition $`\text{n}^2=1`$ imposed in the initial condition is preserved by the equations of motion.
Static solutions of model (1) are the well-studied vortices. Isolated vortices are spontaneously pinned objects; that is, no vortex in free translational motion can be found in a 2D ferromagnet. The same is true for any isolated object with nontrivial topology in a 2D ferromagnet, the most well-known example being the magnetic bubbles in easy-axis ferromagnetic films.
Coherently traveling solutions of (1) have been found in . They have the form of a vortex-antivortex pair and their velocity may take any value between zero and unity, which is the velocity of magnons in the system.
We now turn to a different class of systems, namely antiferromagnets. The dynamics of the staggered magnetization in the antiferromagnetic continuum is given by the nonlinear $`\sigma `$-model :
$$\text{n}\times [\ddot{\text{n}}-\text{f}]=0,$$
(2)
$$\text{f}=\mathrm{\Delta }\text{n}-n_3\widehat{\text{e}},\text{n}^2=1,$$
where the double dot denotes a second time derivative. The above model has the same static vortex solutions as (1). On the other hand, vortices in model (2) can be found in free translational motion. This is due to the fact that the model is invariant under Lorentz transformations.
Our main purpose is to study collisions of solitons in models (1) and (2). In the ferromagnet no collision between two vortices can take place. Two vortices with the same topological charge will rotate around each other while a vortex and an antivortex undergo Kelvin motion perpendicular to the line connecting them. However, collisions can occur between two vortex-antivortex pairs. We elaborate on the arguments of and argue that head-on collisions between vortex-antivortex pair solitons give a right angle scattering pattern.
We also study collisions between vortices in antiferromagnets in Eq. (2). They scatter at right angles as found in numerical simulations. In fact, the right angle scattering phenomenon seems to be a robust feature in various two-dimensional models which have soliton solutions . However, it is a nontrivial and strange behaviour at least from the point of view of scattering of ordinary particles.
The colliding objects in the two models that we study are essentially different from each other. While vortices within model (2) are topologically nontrivial objects, the colliding vortex-antivortex pairs in (1) have a vanishing topological charge. However, we argue that the underlying Hamiltonian structure allows to study the soliton interaction in the two models in close analogy.
The outline of the paper is as follows. In Section II we simulate head-on collisions of vortex-antivortex pairs in (1) and give a theoretical description which exploits the form of the linear momentum. In Section III the results of head-on collision simulations of vortices in model (2) are given together with arguments for the understanding of this behaviour. The conclusions are given in Section IV.
## II Head-on collisions of vortex-antivortex pairs in planar ferromagnets
A ferromagnet can be described in terms of a magnetization vector which satisfies the Landau-Lifshitz equation (1). The constraint on the field n can be resolved and the theory can be formulated in terms of a complex variable
$$\mathrm{\Omega }=\mathrm{\Omega }(x,y,t)=\frac{n_1+in_2}{1+n_3}$$
(3)
which satisfies the equation
$$i\dot{\mathrm{\Omega }}=-\mathrm{\Delta }\mathrm{\Omega }+\frac{2\overline{\mathrm{\Omega }}}{1+\mathrm{\Omega }\overline{\mathrm{\Omega }}}\partial _\mu \mathrm{\Omega }\partial _\mu \mathrm{\Omega }-\frac{1-\mathrm{\Omega }\overline{\mathrm{\Omega }}}{1+\mathrm{\Omega }\overline{\mathrm{\Omega }}}\mathrm{\Omega }.$$
(4)
$`\overline{\mathrm{\Omega }}`$ denotes the complex conjugate of $`\mathrm{\Omega }`$.
We use the formulation through the complex variable $`\mathrm{\Omega }`$ in all numerical simulations. We avoid the formulation through the vector variable n since the constraint on it makes an accurate computer calculation of the time derivatives of the field rather cumbersome.
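The change of variables is easy to implement and invert; a small helper pair (names ours), using the standard inverse relations $`n_1+in_2=2\mathrm{\Omega }/(1+|\mathrm{\Omega }|^2)`$ and $`n_3=(1-|\mathrm{\Omega }|^2)/(1+|\mathrm{\Omega }|^2)`$:

```python
import numpy as np

def n_to_omega(n1, n2, n3):
    # Omega = (n1 + i n2) / (1 + n3); singular only at the south pole n3 = -1
    return (n1 + 1j * n2) / (1.0 + n3)

def omega_to_n(om):
    # inverse stereographic map; |n| = 1 is automatic for any complex Omega
    a = np.abs(om) ** 2
    return 2 * om.real / (1 + a), 2 * om.imag / (1 + a), (1 - a) / (1 + a)
```

The constraint $`\text{n}^2=1`$ is built into the parametrization, which is precisely why the $`\mathrm{\Omega }`$ formulation is convenient for numerics.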
The model has some interesting static vortex solutions of the form
$$\mathrm{\Omega }^o=f(\rho )e^{i\kappa \varphi },\kappa =\pm 1,$$
(5)
where $`\rho ,\varphi `$ are polar coordinates and $`f(\rho =0)=0`$, $`f(\rho \to \mathrm{\infty })\to 1`$. We call the configuration with $`\kappa =1`$ a vortex and the one with $`\kappa =-1`$ an antivortex. Vortex solutions have infinite energy and it has been argued that they are physically relevant.
In the study of the dynamics of magnetic vortices the central role is played by a scalar quantity called the local vorticity
$$\gamma =\epsilon _{\mu \nu }\partial _\mu \pi \partial _\nu \psi ,$$
(6)
where $`\epsilon _{\mu \nu }`$ is the two-dimensional totally antisymmetric tensor. The two components of the linear momentum are then expressed as
$$p_x=\int y\gamma 𝑑x𝑑y,p_y=-\int x\gamma 𝑑x𝑑y.$$
(7)
Of fundamental importance is the Poisson bracket relation between the two components of the linear momentum. This reads
$$\{p_x,p_y\}=\mathrm{\Gamma },$$
(8)
where
$$\mathrm{\Gamma }=\int \gamma 𝑑x𝑑y$$
(9)
is the total vorticity.
In the present model $`\pi =\mathrm{cos}\mathrm{\Theta }`$ and $`\psi =\mathrm{\Phi }`$ have been used as the canonical fields. They are defined through $`n_1=\mathrm{cos}\mathrm{\Theta }\mathrm{cos}\mathrm{\Phi },n_2=\mathrm{cos}\mathrm{\Theta }\mathrm{sin}\mathrm{\Phi },n_3=\mathrm{sin}\mathrm{\Theta }`$. The explicit form of the vorticity is
$$\gamma =\epsilon _{\mu \nu }\mathrm{sin}\mathrm{\Theta }\partial _\nu \mathrm{\Theta }\partial _\mu \mathrm{\Phi }.$$
(10)
The total vorticity of a vortex is
$$\mathrm{\Gamma }=\int \gamma 𝑑x𝑑y=2\pi \kappa ,$$
(11)
that is, $`\mathrm{\Gamma }=2\pi `$ for vortices and $`\mathrm{\Gamma }=-2\pi `$ for antivortices.
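This quantization can be checked numerically by evaluating the vorticity on a grid for the ansatz (5), with an assumed profile $`f(\rho )=\mathrm{tanh}\rho `$ (the text only fixes the boundary behaviour of $`f`$); accuracy is limited by the $`1/\rho `$ behaviour of the integrand near the vortex core:

```python
import numpy as np

def total_vorticity(kappa, N=800, R=20.0):
    # Numerical check of Gamma = 2*pi*kappa for Omega = f(rho) exp(i*kappa*phi)
    # with the assumed profile f = tanh(rho).
    xs = (np.arange(N) + 0.5) * (2 * R / N) - R   # cell-centered: avoids rho = 0
    x, y = np.meshgrid(xs, xs, indexing="ij")
    rho = np.hypot(x, y)
    phi = np.arctan2(y, x)

    f = np.tanh(rho)
    pi_f = 2 * f / (1 + f**2)          # pi = cos(Theta) = sqrt(n1^2 + n2^2)
    n1 = pi_f * np.cos(kappa * phi)
    n2 = pi_f * np.sin(kappa * phi)

    h = xs[1] - xs[0]
    d = lambda a, ax: np.gradient(a, h, axis=ax)
    # gradient of Phi from the smooth fields n1, n2 (avoids the atan2 branch cut)
    den = n1**2 + n2**2
    dsx = (n1 * d(n2, 0) - n2 * d(n1, 0)) / den
    dsy = (n1 * d(n2, 1) - n2 * d(n1, 1)) / den
    gamma = d(pi_f, 0) * dsy - d(pi_f, 1) * dsx   # gamma from the canonical fields
    return gamma.sum() * h * h
```

The result converges to $`2\pi \kappa `$ as the mesh is refined; the remaining few-percent deficit comes entirely from the core cells.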
The implications of a nonvanishing total vorticity to the dynamics is an issue which has been thoroughly studied in the case of magnetic vortices and bubbles as well as in the case of vortices in other interesting models . The most striking result is that it leads to spontaneous pinning of these topological objects.
Since a nonvanishing total vorticity $`\mathrm{\Gamma }`$ implies pinning of an object, we infer that a solution which moves freely should have a vanishing $`\mathrm{\Gamma }`$. In this respect, the vortex-antivortex ansatz offers the simplest possibility. Fig. 1 gives a schematic representation of it. This consists of two lumps, one having negative and the other one positive sign. We suppose that the vortex is roughly lying in the shaded area with the negative sign and the antivortex in the shaded area with the positive sign. This figure is intended only as a guide for our discussion, and there is no strict way to distinguish the two vortices and define their positions. However, if relation (8) is applied to each vortex separately, then the quantity on the right hand side is nonvanishing. It is then implied that each vortex will propagate in the horizontal direction under the influence of the other vortex. The picture is consistent with linear momentum considerations. That is, an application of Eq. (7) to the full ansatz gives a nonvanishing $`x`$-component of the linear momentum. Fig. 1 will serve in the following discussion as a prototype and will motivate our theoretical arguments.
The Kelvin motion of a vortex-antivortex pair in a ferromagnet has been investigated in . The motion of a bubble-antibubble ansatz has also been studied . The situation is found to be similar in some other systems such as an antiferromagnet immersed in a uniform magnetic field , a model for superconductors , for superfluid helium and the nonlinear Schrödinger equation . It has been pointed out that the gross features of this dynamical behaviour are analogous to the planar motion of charges under the influence of a magnetic field perpendicular to the plane. In particular, two oppositely charged particles undergo Kelvin motion traveling along parallel trajectories. The analogy has been made precise by use of relation (8) and an analogous relation in the charge motion problem .
The calculation of steadily moving coherent structures in a 2D ferromagnet, which have the form of a vortex-antivortex pair has been done in . We have used here the numerical code of to reproduce them since there is no available analytical formula. Fig. 2 is an example contour plot for a soliton with velocity $`v=0.5`$. The upper entry gives contour plots of the quantity 10 $`|\mathrm{\Omega }|`$ and the two-vortex character of the configuration is rather obvious. The lower entry is a contour plot for the local vorticity (10). An important result of the analysis in is that the velocity is collinear with the linear momentum.
We are now sufficiently motivated to explore the possibility of scattering of vortex-antivortex pair solitons. We denote by $`\mathrm{\Lambda }_v(x,y)`$ the solution with velocity $`v`$ along the x-axis (set $`t=0`$). The product ansatz
$$\mathrm{\Omega }(x,y)=\mathrm{\Lambda }_v(x+\frac{\delta }{2},y)\mathrm{\Lambda }_{-v}(x-\frac{\delta }{2},y)$$
(12)
represents two vortex-antivortex pair solitons at a distance $`\delta `$ apart which are in a head-on collision course. The ansatz (12) is used as an initial condition in a straightforward numerical integration of Eq. (4). We typically set $`v=0.5`$, $`\delta =10`$.
We have set up a numerical mesh as large as 600$`\times `$600 with uniform lattice spacing $`h=0.1`$. The space derivatives are calculated by finite differences and the time integration is performed by a fourth order Runge-Kutta method. The results are presented in Figs. 3 and 4. Fig. 3 presents a contour plot for the field 10 $`|\mathrm{\Omega }|`$ at three characteristic snapshots. In the first entry the initial ansatz (12) is shown. The second snapshot is taken when the solitons are more or less at a minimum separation. No vortex-antivortex annihilation process takes place. This behaviour should be expected since the vortex-antivortex pair solitons are stable solutions of the equation. The argument is supported by numerical simulations showing that a vortex-antivortex ansatz preserves its character when traveling, provided that the vortex and antivortex are not very close to each other . The last snapshot shows the system after the collision. A right angle scattering pattern has been produced.
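A minimal version of such an integrator (five-point Laplacian, classical fourth-order Runge-Kutta; periodic boundaries and the sign conventions of Eq. (4) are our simplifying assumptions — the actual runs use a much larger open mesh):

```python
import numpy as np

def lap(F, h):
    # five-point Laplacian, periodic boundaries
    return (np.roll(F, 1, 0) + np.roll(F, -1, 0) +
            np.roll(F, 1, 1) + np.roll(F, -1, 1) - 4 * F) / h**2

def rhs(om, h):
    # i dOmega/dt = -Lap(Omega) + 2*conj(Omega)*(dOmega)^2 / (1 + |Omega|^2)
    #               - (1 - |Omega|^2)/(1 + |Omega|^2) * Omega   (conventions assumed)
    gx = (np.roll(om, -1, 0) - np.roll(om, 1, 0)) / (2 * h)
    gy = (np.roll(om, -1, 1) - np.roll(om, 1, 1)) / (2 * h)
    a = np.abs(om) ** 2
    R = (-lap(om, h) + 2 * np.conj(om) * (gx**2 + gy**2) / (1 + a)
         - (1 - a) / (1 + a) * om)
    return -1j * R   # dOmega/dt

def rk4_step(om, dt, h):
    k1 = rhs(om, h)
    k2 = rhs(om + 0.5 * dt * k1, h)
    k3 = rhs(om + 0.5 * dt * k2, h)
    k4 = rhs(om + dt * k3, h)
    return om + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

A quick consistency check: the uniform easy-plane vacuum $`|\mathrm{\Omega }|=1`$ is stationary, while a uniform state with $`|\mathrm{\Omega }|<1`$ precesses at constant amplitude.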
The situation becomes clearer in Fig. 4, where we represent the solitons in terms of their local vorticity distribution. The soliton in the left half-plane should be compared directly with that in the lower entry of Fig. 2. It obviously has linear momentum and velocity pointing in the positive $`x`$-direction. The other soliton has the opposite linear momentum and velocity.
It is rather clear from the picture that the solitons will not bounce back after the collision: back-scattering would require the vortex and antivortex to interchange positions, which is precluded by the form of the local vorticity distribution. Instead, the possibility appears that, at collision time, the two pairs of vorticity lumps in the upper and lower half-planes will form two new vortex-antivortex pairs.
One has to apply Eq. (8) to each of the vorticity lumps separately. Alternatively, one can consider pairs of lumps which tend to travel parallel to each other, undergoing Kelvin motion. Applying this idea to Fig. 4 determines the time evolution of the system: the two pairs in the upper and lower half-planes tend to travel parallel to each other along the $`y`$-axis and form bound states.
An equivalent point of view is to follow the linear momentum of each soliton separately. The linear momenta of the outgoing solitons clearly lie along the $`y`$-axis and have opposite signs.
A subtle but important question is whether we can apply Eq. (8) separately to each of the vortices which constitute the vortex-antivortex pair. A rigorous answer cannot be given here. On the other hand, the construction of solitary waves in suggests an affirmative answer, whose range of validity would be interesting to study.
The two solitons emerging after the collision are very similar to the initial ones, though not exactly the same. In fact, the drift velocity of the outgoing solitons is somewhat larger. In Fig. 5 we have traced, during the time evolution, the points where the complex field $`\mathrm{\Omega }`$ vanishes and also the points where the vorticity $`\gamma `$ attains its maximum and minimum values. The two kinds of extrema stay close during the whole time evolution. This is because the solitons used in the simulations of this section have a pronounced vortex-antivortex character. In , traveling solutions of the Landau-Lifshitz equation have been studied which are different from the ones used here. Numerical simulations of scattering of these solitons also produce a right-angle pattern.
It is possible to give a picture of the scattering of vortex-antivortex pairs in terms of the 2D motion of charged particles interacting via their electric field and placed in a magnetic field perpendicular to the plane. In fact, we have to consider two electron-positron pairs. Consider the first electron-positron pair located at the points $`A,B`$ of Fig. 5 and the second pair at $`C,D`$. The charges move similarly to the vortices, and their actual orbits resemble those of the solitons shown in Fig. 5.
Our last remark in this section concerns related work in hydrodynamics. There are solutions of the two-dimensional Euler equations which describe vorticity dipoles. Note that, in this context, vorticity has its ordinary hydrodynamic meaning. The best-known such solution seems to be the Lamb dipole . Another one has been found in . A head-on collision between two dipoles produces a pattern analogous to that in Fig. 4 of the present paper . Furthermore, a simple construction is given on page 223 of , to which our Fig. 5 can be compared. Further interesting cases of scattering between pairs of objects in hydrodynamics have been studied. The most complex behaviour has been observed in and includes stochastic and quasiperiodic motion of vortices.
## III Head-on collisions of vortices in antiferromagnets
Our main objective is to study scattering of vortices within model (2) and to show that the process can be studied in close analogy with the corresponding phenomenon in the ferromagnet. Right-angle scattering of solitons has been observed in the isotropic $`\sigma `$-model , that is, model (2) without the anisotropy term.
The examination of the local vorticity $`\gamma `$ has led to a successful approach for the collision of vortex-antivortex pairs in ferromagnets in Section II. We find it instructive to look at the collision process in terms of the vorticity also in the present model. A simple generalization of definition (6) can be used . The vorticity attains a simple form when it is expressed in terms of the vector field n:
$$\gamma =\epsilon _{\mu \nu }\,\partial _\mu \dot{\text{n}}\cdot \partial _\nu \text{n}=\epsilon _{\mu \nu }\,\partial _\mu (\dot{\text{n}}\cdot \partial _\nu \text{n}).$$
(13)
In , the equation for an antiferromagnet in a uniform magnetic field was studied. Vortices in that system are spontaneously pinned, and their dynamics is thus analogous to that of ferromagnetic vortices. This unexpected behaviour is traced to a topological term which enters the vorticity. Such a term is, however, absent in the model studied in this section.
The vorticity (13) has the form of a total divergence and can be integrated over all space to show that the total vorticity vanishes for solutions with reasonable behaviour at infinity:
$$\mathrm{\Gamma }=\int \gamma \,dx\,dy=0.$$
(14)
In particular, it vanishes for the vortex solutions. Relations (7) - (9) apply in the present context without modification and they will be the fundamental relations to be used in the following analysis.
Eq. (13) shows that $`\gamma `$ vanishes identically for a static vortex. On the other hand, we can obtain a steadily traveling vortex by applying a Lorentz transformation to the static vortex solution (5). We denote the traveling vortex by $`\mathrm{\Omega }_v^o`$, where $`0<v<1`$ is its velocity. The distribution of $`\gamma `$ for a Lorentz-boosted vortex is nonvanishing and can be calculated numerically. The vortex with velocity $`v=0.7`$ is represented by a contour plot of the field $`10|\mathrm{\Omega }|`$ in the upper entry of Fig. 6. A corresponding plot for $`\gamma `$ is given in the lower entry of the figure.
The vorticity distribution has the form of two lumps, and thus resembles the sketch of Fig. 1. This is no surprise; in fact, the following two remarks make it plausible. Firstly, the total vorticity vanishes according to relation (14). Secondly, an inspection of the form (7) of the linear momentum makes it clear that a nonvanishing component is furnished by two lumps of vorticity with opposite signs. This is not the only form of local vorticity that furnishes a nonvanishing linear momentum, but it is certainly the simplest. Since the vortex solution with $`\kappa =1`$ is indeed the one with the simplest topological complexity, we expect its vorticity distribution to have the simplest possible form.
We calculate numerically the points where the maximum and minimum of the vorticity lumps are located. It turns out that these points are $`(0,\pm 0.59)`$ for any value of the velocity $`v`$.
A further example of the present ideas is offered by the Belavin-Polyakov solutions . We apply a Lorentz transformation, with velocity $`v`$, to the simplest one: $`\mathrm{\Omega }=(x-vt)/\sqrt{1-v^2}+iy.`$ Its local vorticity is
$$\gamma =\frac{16v}{1-v^2}\frac{y}{\left(1+\frac{(x-vt)^2}{1-v^2}+y^2\right)^3}.$$
(15)
In accordance with the above remarks, it has the shape of two lumps with opposite sign, located on either side of the $`x`$-axis and traveling along the $`x`$-axis.
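The two-lump structure of (15) is easy to verify numerically; the sketch below simply evaluates the formula (the overall normalization is immaterial for the shape):

```python
import numpy as np

def gamma_bp(x, y, t=0.0, v=0.5):
    """Local vorticity of the boosted Belavin-Polyakov solution, Eq. (15)."""
    return (16.0 * v / (1.0 - v**2)) * y / (
        1.0 + (x - v * t) ** 2 / (1.0 - v**2) + y**2) ** 3

xs = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(xs, xs, indexing="ij")
g = gamma_bp(X, Y)

# The distribution is odd in y: two lumps of opposite sign on either side
# of the x-axis, both centered on the line x = vt (here t = 0).
assert np.allclose(g, -gamma_bp(X, -Y))
i, j = np.unravel_index(np.argmax(np.abs(g)), g.shape)
print(X[i, j], abs(Y[i, j]))  # extremum sits on the line x = 0, at finite |y|
```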
We are now ready to present numerical simulations of head-on collisions of vortices. We make an ansatz representing two vortices. The choice is not unique and the simplest one seems to be the product ansatz:
$$\mathrm{\Omega }(x,y)=\mathrm{\Omega }^o(x+\frac{\delta }{2},y)\mathrm{\Omega }^o(x-\frac{\delta }{2},y),$$
(16)
where $`\mathrm{\Omega }^o`$ is the single vortex solution given in Eq. (5). The two vortices are a distance $`\delta `$ apart.
The numerical mesh as well as the details of the algorithm that we use here are similar to those of Section II. We use vortices with $`\kappa =1`$. They are initially at rest but immediately start to drift away from each other due to their mutual repulsion and escape to infinity.
In order to invoke a head-on collision between vortices we consider the product ansatz of two vortices which have opposite velocities:
$$\mathrm{\Omega }(x,y)=\mathrm{\Omega }_v^o(x+\frac{\delta }{2},y)\mathrm{\Omega }_{-v}^o(x-\frac{\delta }{2},y).$$
(17)
$`\mathrm{\Omega }_v^o(x,y)`$ denotes the Lorentz transformed vortex solution with velocity $`v`$, at $`t=0`$.
We typically use $`\delta =6`$. In all simulations the vortices start to move toward each other with velocities close to the value $`v`$, but they immediately begin to decelerate due to their mutual repulsion. The subsequent evolution depends crucially on the magnitude of the velocity. At low velocities the two vortices approach to a minimum distance at which they come to rest, and then turn around and move off in opposite directions. When the velocity exceeds a critical value (which is $`v_c\simeq 0.65`$ for $`\delta =6`$) the two vortices collide and scatter at right angles. This result does not depend on the details of the initial ansatz or on the initial velocity of the vortices, as long as the latter exceeds the critical value. We have tested our algorithm for velocities up to the value $`v=0.9`$ in the ansatz (17). One must keep in mind, however, that the velocity of the vortices at the time of collision is smaller than the velocity in the initial ansatz.
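The text does not say how the critical velocity was bracketed; a standard and cheap way, sketched here with a mock outcome function standing in for the full PDE simulation, is bisection on the qualitative result of each run:

```python
def find_critical_velocity(right_angle_outcome, v_lo=0.0, v_hi=0.9, tol=1e-3):
    """Bisect on the collision outcome to bracket the critical velocity v_c.

    `right_angle_outcome(v)` is expected to run the simulation with the
    initial ansatz (17) at velocity v and report True for right-angle
    scattering, False when the vortices turn around; here it is a stand-in
    for that (expensive) integration.
    """
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if right_angle_outcome(v_mid):
            v_hi = v_mid  # scattering occurred: v_c lies at or below v_mid
        else:
            v_lo = v_mid  # vortices bounced back: v_c lies above v_mid
    return 0.5 * (v_lo + v_hi)

# Mock threshold at v = 0.65, mimicking the value reported for delta = 6.
v_c = find_critical_velocity(lambda v: v > 0.65)
```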
In Fig. 7 we give a contour plot of the norm of the field $`\mathrm{\Omega }`$ at three characteristic snapshots. The first entry presents the initial configuration (17). In the middle snapshot, taken at collision time, it is clear that the two vortices come on top of each other. There is no topological reason, related to the field $`\mathrm{\Omega }`$, that could prevent this double vortex from forming, and there is also no such reason that could prevent the vortices either from continuing to travel in the horizontal direction or from reemerging along the vertical direction. We add that, at the present level of description, we can find no reason that would force them to follow either of the two possibilities. In the last snapshot the two new vortices that emerge after the collision are drifting away from each other along the $`y`$-axis.
We find it instructive to look at the collision process using the vorticity. Our description will closely follow that in Section II in connection with the scattering of vortex-antivortex pairs. The dynamics in both systems is determined by the corresponding vorticity distribution. A comparison of the lower entries of Figs. 2 and 6 gives a hint that the underlying dynamics should be of a similar nature in both models.
In Fig. 8 we give the vorticity at three snapshots which correspond to those of Fig. 7. Fig. 8 should be compared directly with Fig. 4. An examination of these results shows that the arguments of Section II for the soliton scattering which rely upon the linear momentum relations (7), (8) are applicable here, too.
The upper entry in the figure corresponds to the initial ansatz, and each of the two vorticity dipoles should be compared to that given in the lower entry of Fig. 6. The two vortices in the first entry of Fig. 8 approach each other while their dynamical features, as described by the vorticity, are not substantially modified. The repulsion, which could decelerate them and make them turn around, is overcome by the sufficiently large initial velocity. When the two vortices come close to each other (middle entry), the two pairs of vorticity lumps lying in the upper and lower half-planes interact. The subsequent evolution of the vorticity lumps is governed by relation (8); in particular, this relation has to be applied to each of the vorticity lumps separately. As indicated by the simulation, the lumps survive throughout the process, and the simple dynamics implied by (8) is sustained during the scattering. Following the discussion in Section II, we examine the dynamics of the lumps in a pairwise manner. The two pairs in the upper and lower half-planes tend to move along the $`y`$-axis. Consequently, the initial partners separate and two new vortices are formed which travel in opposite directions along the $`y`$-axis, as shown in the lower entry of the figure.
An equivalent approach is to follow the linear momentum of the solitons. The two pairs in the lower and upper half-planes have their linear momenta lying along the $`y`$-axis but with opposite signs; they subsequently tend to go off along the $`y`$-axis. The important numerical result is that the vorticity lumps roughly preserve their shape throughout the process. This is due to the fact that traveling vortices are stable solutions of the model.
We note here that our result is obtained when Eq. (8) is applied to each vorticity lump separately. We have been motivated to follow this approach because of its success with respect to studying the dynamics of vortex-antivortex pairs in a ferromagnet and because of the consistency of the picture with the linear momentum considerations. On the other hand a rigorous proof of its validity is lacking.
In Fig. 9 we track some characteristic points of the vortices throughout the simulation time. The stars denote successive points where the centers of the two vortices lie during the simulation, that is, where the complex function $`\mathrm{\Omega }`$ vanishes. The circles denote successive locations of the maximum, and the diamonds the locations of the minimum, of the vorticity distribution. We plot two circles and two diamonds at every time instant. At the beginning of the simulation the two vortices are centered at the points $`A`$ and $`B`$ respectively. They immediately start to decelerate, but when their centers reach a distance of about $`2.4`$ they seem to accelerate considerably; they eventually merge at the origin and then separate along the $`y`$-axis. The trajectories of the extrema of the vorticity show that, after the collision, it takes some time until the two new vortices are organized again. The velocity at the end of the numerical simulation is somewhat lower than that in the initial ansatz. This should be due to spin waves emitted during the process.
The remarks of the last paragraph on the motion of the vortex centers are in agreement with an analytical solution obtained in , representing scattering of solitons in an integrable chiral model. Close to the collision time, the centers move according to the law $`x\sim \pm \sqrt{-t}`$ $`(t<0)`$. This gives a velocity $`|dx/dt|\to \mathrm{\infty }`$ as $`t\to 0`$. The solitons collide at $`t=0`$, and the centers of the two new solitons, which emerge along the $`y`$-axis, follow $`y\sim \pm \sqrt{t}`$ $`(t>0)`$.
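The divergence of the collision velocity follows directly from this square-root law; writing it out, with the collision at $`t=0`$:

```latex
x(t) = \pm\sqrt{-t} \quad (t<0)
\qquad\Longrightarrow\qquad
\frac{dx}{dt} = \mp\frac{1}{2\sqrt{-t}} \;\longrightarrow\; \mp\infty
\quad \text{as } t\to 0^{-},
```

and symmetrically $`y(t)=\pm \sqrt{t}`$ gives $`dy/dt=\pm 1/(2\sqrt{t})`$ for $`t>0`$, so the speed is singular only at the instant of collision.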
The arguments presented in this section are certainly not sufficient to exclude interesting possibilities of interaction and scattering of solitons other than at right angles. Nevertheless, they imply that the right-angle scattering process is expected to be generic for solitons in two-dimensional Hamiltonian models. In the present work we have found no relation between the topology of the solitons and the right-angle scattering phenomenon; therefore right-angle scattering is also expected to occur among non-topological solitons .
## IV Conclusions
We have given a description of the right-angle scattering of solitons through numerical simulations, as well as arguments which suggest that it should be generic in two dimensions. Two systems have been examined: vortex-antivortex pairs in planar ferromagnets and vortices in antiferromagnets. The peculiar scattering behaviour is mainly attributed to the fact that solitons are extended structures rather than point-like particles. This point is accounted for by the representation of a traveling soliton through a pair of lumps. Furthermore, we find that these two lumps act as independent physical entities at the time of collision.
It is desirable to observe scattering of solitons experimentally in ferromagnetic and antiferromagnetic films. We expect that the present theoretical analysis will be useful in studies of systems of many vortices and, in particular, in studies of the thermodynamics of magnetic systems. Suffice it to say that, in a magnetic material, vortices are expected to appear in pairs. In the study of the thermodynamics of layered antiferromagnets one expects to find the signature of topological excitations. This remark may prove important especially in view of the difficulty of observing isolated antiferromagnetic solitons due to the lack of a significant total net magnetization.
The right-angle scattering pattern also appears, and has been understood from the point of view of the geometry of the moduli space, for BPS monopoles in a three-dimensional Yang-Mills-Higgs theory .
We have also simulated the scattering of a vortex and an antivortex in the $`\sigma `$-model (2), using an initial ansatz similar to that of Eq. (16). We simulate the time evolution of the system numerically and find that the vortices attract each other. They eventually collide at the origin and annihilate. It is quite interesting that the energy is dissipated at right angles. This phenomenon is presumably closely related to the present study. A similar simulation, with corresponding results, has been done in for non-gauged cosmic strings.
The basic dynamical structure which leads to right angle scattering in Hamiltonian models is also present in the complex Ginzburg-Landau equation (CGLE) which describes nonlinear oscillatory media . There is actually a variety of nonconservative systems where scattering behaviour analogous to that studied in the present paper has been observed. An interesting example is a 2D fluid layer subjected to externally imposed oscillations. Coherent structures are formed which are experimentally observed to scatter at an almost right angle .
## V Acknowledgments
I am grateful to N. Papanicolaou and P.N. Spathis for providing me their numerical code which calculates the vortex-antivortex pairs used in the simulations of Section II and for useful discussions. I thank F.G. Mertens for discussion of the issues presented in this paper. I also thank L. Kramer for beneficial remarks on part of the text and L. Perivolaropoulos for his help with the numerical algorithm.
# Comment on Numerical Study on Aging Dynamics in the 3D Ising Spin Glass Model
(April 1999)
## Abstract
We show that the dynamical behavior of the $`3D`$ Ising spin glass with Gaussian couplings is not compatible with droplet dynamics. We show that this is implied by the data of reference , which, when analyzed in an accurate way, give multiple evidences of this fact. Our study is based on the analysis of the overlap-overlap correlation function at different values of the separation $`r`$ and of the time $`t`$.
In a very interesting paper , Komori, Yoshino and Takayama discuss the dynamical behavior of the three-dimensional ($`3D`$) Edwards-Anderson (EA) Ising spin glass with Gaussian couplings. A number of very accurate numerical results are used to try to determine whether the $`3D`$ EA model behaves in analogy with the Replica Symmetry Broken (RSB) solution of the mean-field Sherrington-Kirkpatrick model , or whether its behavior is the one typical of coarsening of domain walls , in agreement with a picture of separated droplets .
The evidence presented in can seem at first sight to have mixed signs. For the sake of completeness, let us try to summarize all the available evidence.
As is well explained in the paper, some of the findings are obviously devastating for the droplet picture. The free energy barriers $`B_L`$ found on a lattice of linear size $`L`$ do not grow with $`L`$ like $`L^\psi `$, but with a logarithmic dependence $`B_L\sim \mathrm{log}(L)`$ (this was already observed by Kisker, Rieger, Santen and Schreckenberg in ). Secondly, at variance with the droplet picture, the width of the distribution of the free energy barriers, instead of increasing with $`L`$, turns out to be remarkably constant. Neither of these two features poses any problem for the RSB picture.
Two more findings are not in contradiction with either of the two exemplar behaviors. The relaxation process has an Arrhenius-type form, with a characteristic time of the form $`\tau =\tau _0\mathrm{exp}\left(\frac{B}{T}\right)`$, and the decay of the energy density to its infinite-time asymptotic value follows a power law. Both the droplet picture and the RSB scheme imply a power-law decay of the energy density.
The crucial point, as rightly pointed out in , is the dynamical behavior of the overlap-overlap ($`qq`$) correlation functions $`G(r,t)`$ (computed starting from two different random spin configurations, considering overlaps at distance $`r`$ and equal time $`t`$). A simple scaling of $`G`$ as a function of $`\frac{r}{\xi (t)}`$ would indeed be a strong argument in favor of droplet behavior, while evidence for a different scaling points to an RSB pattern. The analysis of claims to show a droplet-like behavior of $`G(r,t)`$. Here we show that, on the contrary, the numerical data exclude this simple scaling form: they behave in the non-standard way already discussed in reference . We perform a careful data analysis and show in detail where the analysis of fails. In this way we show that the observed picture is fully compatible with an RSB behavior, and that none of the available numerical evidence supports a droplet-like behavior: one more crucial piece of evidence, the one concerning the behavior of the overlap-overlap correlation functions, combines with the observed behavior of the free energy barriers to exclude a droplet-like dynamical behavior of $`3D`$ Ising spin glasses.
We will try to make clear to the reader how delicate this analysis is, how even a careful treatment like the one of reference can be misled by only apparently straight lines, and why we can obtain completely safe evidence of a non-trivial scaling of $`G(r,t)`$.
As we have already said, the crucial difference between the droplet approach and the RSB theory lies in the behavior of the $`qq`$ correlation function during the approach to equilibrium.
Let us define
$$G(r)\equiv \underset{t\to \mathrm{\infty }}{\mathrm{lim}}G(r,t).$$
(1)
In the droplet model we have
$$\underset{r\to \mathrm{\infty }}{\mathrm{lim}}G(r)=\overline{q}\ne 0,$$
(2)
while in the RSB solution of the Mean Field theory (in the $`q\simeq 0`$ sector of the theory: we are analyzing this case since we start out far from equilibrium on a very large lattice) we find <sup>1</sup><sup>1</sup>1A few details about our runs: we have analyzed four samples of a $`3D`$ Ising spin glass with Gaussian couplings, periodic boundary conditions, on a lattice of linear size $`L=64`$. We have computed the overlap-overlap correlation function for runs of time length $`t_n=100\cdot 2^n`$, where $`n`$ runs from $`1`$ to $`13`$. We average the correlation function over the whole run: this gives a measured $`G`$ that is numerically slightly different from the one of but has the same scaling behavior. We have used random initial configurations.
$$G(r)\sim r^{-\mu },$$
(3)
and therefore
$$\underset{r\to \mathrm{\infty }}{\mathrm{lim}}G(r)=0.$$
(4)
In reference we have studied this problem and we have concluded that $`\mu \simeq 0.5`$, and therefore that a droplet-like behavior is inconsistent with the results of numerical simulations.
Reference argues that the numerical data for $`r\ge 4`$, $`T=0.7`$, scale like
$$G(r,t)=M\left(\frac{r}{\xi (t)}\right),$$
(5)
where $`\xi (t)=t^{\frac{1}{z(T)}}`$, $`z(T=0.7)\simeq 8.71`$, and $`M`$ is a scaling function. As we have already argued, the validity of this analysis would be a good point in favor of a droplet-like behavior of $`3D`$ Ising spin glasses (at variance with the many other evidences of , discussed before, that point against the validity of a droplet picture). Since this conclusion is different from the one we had reached in , we will analyze its derivation here in more detail. In the rest of the paper (in particular in figures 1, 2 and 3) we use the value $`z(T=0.7)=8.71`$.
The conclusion of is based on the plot of the raw data for $`G(r,t)`$ as a function of $`\frac{r}{\xi (t)}`$ for $`r\ge 4`$ (figure $`5`$ of ). So at first, mainly to exclude the possibility of programming errors, we compare our raw data () to those of figure $`5`$ of . We plot in figure 1 our raw data for $`G(r,t)`$ versus $`\frac{r}{\xi }`$: by comparing with one can see that they are not substantially different, so we can feel safe about the quality of the numerical data of both and . In figure 1 we have selected the same $`x`$ and $`y`$ scales as in figure $`5`$ of , to make the comparison of the two sets of data easier. In order to make a visual comparison easier we have also selected time values as close as possible to the ones of . In figure 1 of this note, as in figure $`5`$ of , the data can seem at first view well aligned on a straight line, i.e. decreasing with a simple exponential behavior. We will now show that this is just not true, and that these data clearly follow a non-exponential behavior.
The fact that the decay is not purely exponential can be clearly seen, for example, by plotting only the points with $`r=4`$ on the same semilogarithmic scale: we do that in figure 2. Here it is clear that the points do not lie on a straight line. It is just the dispersion of the many points in figure 1 that makes them look as if they were on a straight line: they are not, and figure 2 shows it clearly.
We continue our analysis by trying to double check the conclusion of leading to the functional dependence
$$G(r,t)=\frac{g}{r^{\frac{1}{2}}}\mathrm{exp}\left[-\left(\frac{r}{\xi (t)}\right)^{\frac{3}{2}}\right].$$
(6)
We start by plotting in figure 3 the same data as in figure 2 versus $`\left(\frac{r}{\xi (t)}\right)^{\frac{3}{2}}`$, together with the (very good) best fit to the form
$$G(r,t)=G(r)\,\mathrm{exp}\left(-\frac{A(r)}{\xi (t)^{\frac{3}{2}}}\right)$$
(7)
(at fixed $`r`$, as a function of $`t`$, with $`G(r)`$ and $`A(r)`$ as the two fit parameters). It is clear from figure 3 and from the quality of the best fit that this is the scale on which the data follow a straight line, not the one of figure 2.
We repeat the procedure of figure 3 for different distances (finding always a very good best fit), and we fit the results to a power law (see figure 4). We fit
$$G(r)\propto r^{-ϵ_1},\qquad A(r)\propto r^{ϵ_2},$$
(8)
and for the best fit $`ϵ_1=0.54`$ and $`ϵ_2=1.47`$, in very good agreement with our initial Ansatz.
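The two-stage fitting procedure can be illustrated on synthetic, noise-free data generated from the form (6); the exponents $`1/2`$ and $`3/2`$ are then recovered exactly. This is a sketch of the method, not the actual data analysis:

```python
import numpy as np

# Synthetic data following form (6): G(r,t) = r^(-1/2) exp[-(r/xi)^(3/2)],
# with xi(t) = t^(1/z) and z = 8.71 as quoted in the text.
z = 8.71
r_vals = np.arange(4, 13, dtype=float)
t_vals = 100.0 * 2.0 ** np.arange(1, 14)
xi = t_vals ** (1.0 / z)

G_of_r, A_of_r = [], []
for r in r_vals:
    G = r ** -0.5 * np.exp(-(r / xi) ** 1.5)
    # Stage 1: at fixed r, ln G is linear in xi^(-3/2);
    # the slope gives -A(r) and the intercept gives ln G(r).
    slope, intercept = np.polyfit(xi ** -1.5, np.log(G), 1)
    G_of_r.append(np.exp(intercept))
    A_of_r.append(-slope)

# Stage 2: power-law fits on log-log scales.
eps1 = -np.polyfit(np.log(r_vals), np.log(G_of_r), 1)[0]
eps2 = np.polyfit(np.log(r_vals), np.log(A_of_r), 1)[0]
print(eps1, eps2)  # recovers 0.5 and 1.5 on this noise-free input
```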
Finally, in figure 5, we show $`G`$ as a function of $`r`$ for our largest available time ($`t=819200`$), and the best fit to the form $`ar^{-\frac{1}{2}}\mathrm{exp}\left(-br^{\frac{3}{2}}\right)`$. The best fit is again very good. It is interesting to note, as a final confirmation of our point, that the behavior of the function plotted in figure 5 is clearly not that of a simple exponential, although in a limited $`r`$ region it can be mistakenly fitted with a simple exponential.
We believe that these data and the data of , when correctly analyzed, give strong evidence in favor of a Replica Symmetry Breaking-like dynamical behavior of $`3D`$ spin glasses, and that they show beyond any doubt that the droplet model cannot describe the numerical data one obtains for the $`qq`$ correlation function $`G(r,t)`$.
# Topologically Massive Abelian Gauge Theory From BFT Hamiltonian Embedding of A First-order Theory
footnotetext: <sup>∗∗</sup> Electronic address: mssp@uohyd.ernet.in
## Abstract
We start with a new first-order gauge non-invariant formulation of massive spin-one theory and map it to a reducible gauge theory, viz., abelian $`BF`$ theory, by the Hamiltonian embedding procedure of Batalin, Fradkin and Tyutin (BFT). This equivalence is shown from the equations of motion of the embedded Hamiltonian. We also demonstrate that the original gauge non-invariant model and the topologically massive gauge theory can both be obtained, by suitable choices of gauge, from the phase space partition function of the embedded Hamiltonian, proving their equivalence. A comparison of the first-order formulation with the other known massive spin-one theories is also discussed.
I. Introduction
The construction and study of massive spin-1 theories which are also gauge invariant have a long history , the well-known example being models with the Higgs mechanism. Since the existence of the Higgs particle has not yet been experimentally verified, it prompts a closer look at other models wherein mass and gauge invariance coexist. One such model which is currently being studied is the topological mechanism for a gauge invariant mass for a spin-one particle without a residual scalar field, wherein vector and tensor (2-form) fields are coupled in a gauge invariant way by a term known as the $`BF`$ term . The Lagrangian for this model (for the Abelian case) is given by
$$\mathcal{L}=-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }+\frac{1}{2\cdot 3!}H_{\mu \nu \lambda }H^{\mu \nu \lambda }+\frac{1}{4}ϵ_{\mu \nu \lambda \sigma }B^{\mu \nu }F^{\lambda \sigma },$$
(1)
where $`H_{\mu \nu \lambda }=\partial _\mu B_{\nu \lambda }+\mathrm{cyclic}\mathrm{terms}.`$ This Lagrangian is invariant under
$`A_\mu \to A_\mu +\partial _\mu \lambda ,`$ (2)
$`B_{\mu \nu }\to B_{\mu \nu }+(\partial _\mu \lambda _\nu -\partial _\nu \lambda _\mu ).`$ (3)
A similar construction has also been made for non-Abelian theory .
The invariance of $`B_{\mu \nu }`$ when $`\lambda _\mu =\partial _\mu \omega `$ in (3) necessitates the introduction of ghost-for-ghost terms in the BRST quantization of the $`BF`$ theory . It can also be seen in the constraint quantization, where this invariance makes the generators of gauge transformations linearly dependent . This theory shows a constraint structure similar to that of the massless Kalb-Ramond theory in the existence of first-stage reducible constraints .
The purpose of this paper is to construct a new first-order formulation of massive spin-one theory involving vector and 2-form fields (see eqn (8) below), which is gauge non-invariant and by following the idea of Hamiltonian embedding due to Batalin, Fradkin and Tyutin (BFT) , we show that the resulting theory is equivalent to $`BF`$ theory (1). Thus we have a novel representation of the abelian $`BF`$ theory in terms of a gauge non-invariant first order formulation.
The motivation for this study is twofold. One is that the $`BF`$ theory, in addition to being a candidate alternative to the Higgs mechanism, also appears in diverse areas like condensed matter physics and black holes . Hence a model which is an equivalent realization of it has potential applications. The other is that the Hamiltonian embedding procedure employed here is by itself of current interest. Several models, like the abelian and non-abelian self-dual models in $`2+1`$ dimensions and the abelian and non-abelian Proca theories, have been studied in detail in recent times applying the BFT procedure . Here we apply it, along the lines of the demonstration in of the equivalence between the self-dual model and Maxwell-Chern-Simons theory, to establish the equivalence of this gauge non-invariant theory (8) to the reducible gauge theory (1).
We work with $`g_{\mu \nu }=\mathrm{diag}(1,-1,-1,-1)`$ and $`ϵ_{0123}=1`$.
In order to compare our formulation with the other known formulations of massive spin-1 theories, we first recall them. The earliest formulation is in the form of a first-order relativistic wave equation due to Duffin, Kemmer and Petiau (DKP) , which is given by
$$(i\beta ^\mu \partial _\mu +m)\psi =0,$$
(4)
where $`\psi `$ is a $`10`$-dimensional column vector and the $`\beta _\mu `$ are $`10\times 10`$ hermitian matrices obeying
$$\beta _\mu \beta _\nu \beta _\lambda +\beta _\lambda \beta _\nu \beta _\mu =g_{\mu \nu }\beta _\lambda +g_{\lambda \nu }\beta _\mu .$$
(5)
The other well-known theory is that of Proca Lagrangian,
$$\mathcal{L}=-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }+\frac{m^2}{2}A_\mu A^\mu $$
(6)
There is another formulation, involving a 2-form field, given by
$$\mathcal{L}=-\frac{1}{12}H_{\mu \nu \rho }H^{\mu \nu \rho }+\frac{m^2}{4}B_{\mu \nu }B^{\mu \nu },$$
(7)
where $`H_{\mu \nu \rho }=\partial _\mu B_{\nu \rho }+\mathrm{cyclic}\mathrm{terms}.`$
Both these formulations (6, 7) can be made gauge invariant by adding suitable Stückelberg fields to compensate for the gauge variations of the mass terms, the compensating fields being a scalar and a vector field for the Lagrangians (6) and (7), respectively . It should be pointed out that the Stückelberg formulations of both these theories are equivalent, by duality transformations, to the $`BF`$ theory .
The new first-order Lagrangian describing massive spin-1 theory, involving a 1-form and a 2-form field, which is proposed and studied in this paper, is given by
$$\mathcal{L}=-\frac{1}{4}H_{\mu \nu }H^{\mu \nu }+\frac{1}{2}G_\mu G^\mu +\frac{1}{2m}ϵ_{\mu \nu \lambda \sigma }H^{\mu \nu }\partial ^\lambda G^\sigma .$$
(8)
This Lagrangian obviously has no gauge-invariance. The field equations following from this Lagrangian are
$`H_{\mu \nu }+{\displaystyle \frac{1}{m}}ϵ_{\mu \nu \lambda \sigma }\partial ^\lambda G^\sigma =0,`$ (9)
$`\mathrm{and}G_\mu +{\displaystyle \frac{1}{2m}}ϵ_{\mu \nu \lambda \sigma }\partial ^\nu H^{\lambda \sigma }=0.`$ (10)
From these equations, it follows that
$`\partial _\mu H^{\mu \nu }=0,`$ (11)
$`\mathrm{and}\partial _\mu G^\mu =0.`$ (12)
The fact that the above Lagrangian describes a massive spin-1 theory can be easily seen by rewriting the coupled equations of motion (using the conditions (11) and (12)) as
$$(\Box +m^2)H_{\mu \nu }=0,$$
(13)
or alternatively
$$(\Box +m^2)G_\mu =0.$$
(14)
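The step from the first-order pair (9), (10) to (13), (14) uses the contraction of two Levi-Civita symbols. With the conventions $`g_{\mu \nu }=\mathrm{diag}(1,-1,-1,-1)`$ and $`ϵ_{0123}=1`$ (assumed here), the identity reads $`ϵ_{\mu \nu \lambda \sigma }ϵ^{\mu \nu \alpha \beta }=-2(\delta _\lambda ^\alpha \delta _\sigma ^\beta -\delta _\lambda ^\beta \delta _\sigma ^\alpha )`$. A brute-force check over all index values (plain Python, a verification sketch, not from the paper):

```python
# Brute-force check of the Levi-Civita contraction used in passing from
# Eqs. (9), (10) to (13), (14). Conventions assumed: g = diag(1,-1,-1,-1),
# eps_{0123} = +1, hence eps^{0123} = -1.
g = [1, -1, -1, -1]

def lc(*p):
    """Totally antisymmetric symbol eps_{p0 p1 p2 p3} via inversion counting."""
    p, sign = list(p), 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] == p[j]:
                return 0
            if p[i] > p[j]:
                sign = -sign
    return sign

def lc_up(*p):
    """eps with all four indices raised: one (diagonal) metric factor per index."""
    sign = 1
    for i in p:
        sign *= g[i]
    return sign*lc(*p)

delta = lambda a, b: 1 if a == b else 0

# eps_{mu nu lam sig} eps^{mu nu al be} = -2 (d_lam^al d_sig^be - d_lam^be d_sig^al)
for lam in range(4):
    for sig in range(4):
        for al in range(4):
            for be in range(4):
                lhs = sum(lc(mu, nu, lam, sig)*lc_up(mu, nu, al, be)
                          for mu in range(4) for nu in range(4))
                rhs = -2*(delta(lam, al)*delta(sig, be) - delta(lam, be)*delta(sig, al))
                assert lhs == rhs
print("contraction identity verified")
```

With this identity, together with $`\partial _\mu H^{\mu \nu }=0`$ and $`\partial _\mu G^\mu =0`$, substituting (9) and (10) into each other collapses to the Klein-Gordon equations (13), (14).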
Since the equation of motion (14), along with the constraint (12), follows from the Proca Lagrangian, we should expect the latter to emerge from the Lagrangian (8). Indeed, by integrating out $`H_{\mu \nu }`$ from the Lagrangian (8), the Proca Lagrangian (6) is obtained. Similarly, by eliminating $`G_\mu `$ from the Lagrangian (8) we arrive at the Lagrangian (7). It is natural to ask how this Lagrangian (8) differs from the first-order formulations of the Lagrangians (6, 7).
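The first of these eliminations can be made explicit by completing the square in $`H_{\mu \nu }`$. The following sketch assumes the mostly-minus metric with $`ϵ_{0123}=1`$ (so $`ϵ^{0123}=-1`$) and a $`-\frac{1}{4}H_{\mu \nu }H^{\mu \nu }`$ kinetic term in (8):

```latex
% K_{\mu\nu} abbreviates the curl of G:  K_{\mu\nu}\equiv\epsilon_{\mu\nu\lambda\sigma}\partial^{\lambda}G^{\sigma}
-\frac{1}{4}H_{\mu\nu}H^{\mu\nu}+\frac{1}{2m}H^{\mu\nu}K_{\mu\nu}
  =-\frac{1}{4}\left(H_{\mu\nu}-\frac{1}{m}K_{\mu\nu}\right)\left(H^{\mu\nu}-\frac{1}{m}K^{\mu\nu}\right)
   +\frac{1}{4m^{2}}K_{\mu\nu}K^{\mu\nu}.
% The squared bracket vanishes on the H equation of motion, while the contraction
%   \epsilon_{\mu\nu\lambda\sigma}\epsilon^{\mu\nu\alpha\beta}
%     =-2\left(\delta_{\lambda}^{\alpha}\delta_{\sigma}^{\beta}
%              -\delta_{\lambda}^{\beta}\delta_{\sigma}^{\alpha}\right)
% gives  K_{\mu\nu}K^{\mu\nu}=-G_{\mu\nu}G^{\mu\nu},
%        G_{\mu\nu}=\partial_{\mu}G_{\nu}-\partial_{\nu}G_{\mu},
% so the on-shell Lagrangian is
%   -\frac{1}{4m^{2}}G_{\mu\nu}G^{\mu\nu}+\frac{1}{2}G_{\mu}G^{\mu},
% i.e. the Proca Lagrangian (6) after the rescaling A_{\mu}=G_{\mu}/m.
```

Eliminating $`G_\mu `$ instead proceeds the same way, with the square completed in $`G`$ and the dual identity applied to the resulting 3-form field strength.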
The standard first-order form of the Proca Lagrangian is given by
$$\mathcal{L}=\frac{1}{4}B_{\mu \nu }B^{\mu \nu }+\frac{1}{2}B_{\mu \nu }F^{\mu \nu }+\frac{m^2}{2}A_\mu A^\mu .$$
(15)
Here, by eliminating the linearizing field $`B_{\mu \nu }`$, we get back the Proca Lagrangian (6). But eliminating $`A_\mu `$ does not lead to the Lagrangian (7) for the 2-form field. Similarly, the standard first-order form corresponding to the Lagrangian (7) is
$$\mathcal{L}=\frac{1}{2\cdot 3!}C_{\mu \nu \lambda }C^{\mu \nu \lambda }-\frac{1}{3!}C_{\mu \nu \lambda }H^{\mu \nu \lambda }+\frac{m^2}{4}B_{\mu \nu }B^{\mu \nu }.$$
(16)
Here, too, eliminating $`B_{\mu \nu }`$ from the above Lagrangian does not lead to the Proca Lagrangian (6) involving the 1-form field.
It should be stressed that (8) differs from the standard first-order formulations (15, 16) in that it is a first-order formulation for both (6) and (7) simultaneously.
The first order field equations (9) and (10) can be rewritten as
$$\left(\beta _\mu \partial ^\mu +m\right)\psi =0,$$
(17)
where $`\psi `$ is a $`10`$ dimensional column vector whose elements are the independent components of $`G_\mu `$ and $`H_{\mu \nu }`$. But here the $`\beta _\mu `$ matrices do not obey the DKP algebra (5).
This paper is organised as follows: In section II, the Hamiltonian embedding of the Lagrangian (8) is constructed along the lines of BFT , and the embedded Hamiltonian is shown to be equivalent to that of the $`BF`$ theory. In section III the equivalence is shown in the phase-space path integral approach using the embedded Hamiltonian. Finally, we conclude.
II. Hamiltonian Embedding
We start with the Lagrangian (8), with the last term expressed in a symmetric form as
$$\mathcal{L}=-\frac{1}{4}H_{\mu \nu }H^{\mu \nu }+\frac{1}{2}G_\mu G^\mu +\frac{1}{4m}ϵ_{\mu \nu \lambda \sigma }H^{\mu \nu }\partial ^\lambda G^\sigma -\frac{1}{4m}ϵ_{\mu \nu \lambda \sigma }\partial ^\mu H^{\nu \lambda }G^\sigma .$$
(18)
This Lagrangian can be rewritten as
$$\mathcal{L}=\frac{1}{4m}ϵ_{0ijk}H^{ij}\partial ^0G^k-\frac{1}{4m}ϵ_{0ijk}\partial ^0H^{ij}G^k-\mathcal{H}_c,$$
(19)
where $`\mathcal{H}_c`$, the Hamiltonian density following from the Lagrangian (19), is
$$\mathcal{H}_c=\frac{1}{4}H_{ij}H^{ij}-\frac{1}{2}G_iG^i+H_{0i}\left(\frac{1}{2}H^{0i}-\frac{1}{m}ϵ^{0ijk}\partial _jG_k\right)-\frac{1}{2}G_0\left(G^0+\frac{1}{m}ϵ^{0ijk}\partial _iH_{jk}\right).$$
(20)
The primary constraints are
$`\mathrm{\Pi }_0\approx 0,`$ (21)
$`\mathrm{\Pi }_{0i}\approx 0,`$ (22)
$`\mathrm{\Omega }_i\equiv \left(\mathrm{\Pi }_i-{\displaystyle \frac{1}{4m}}ϵ_{0ijk}H^{jk}\right)\approx 0,`$ (23)
$`\mathrm{\Lambda }_{ij}\equiv \left(\mathrm{\Pi }_{ij}+{\displaystyle \frac{1}{2m}}ϵ_{0ijk}G^k\right)\approx 0.`$ (24)
The persistence of the primary constraints leads to secondary constraints,
$`\mathrm{\Lambda }\equiv \left(G_0+{\displaystyle \frac{1}{2m}}ϵ_{0ijk}\partial ^iH^{jk}\right)\approx 0,`$ (25)
$`\mathrm{\Lambda }_i\equiv \left(H_{0i}+{\displaystyle \frac{1}{m}}ϵ_{0ijk}\partial ^jG^k\right)\approx 0.`$ (26)
The non-vanishing Poisson brackets between these linearly independent constraints are
$`\{\mathrm{\Pi }_0(\vec{x}),\mathrm{\Lambda }(\vec{y})\}=\delta (\vec{x}-\vec{y}),`$ (27)
$`\{\mathrm{\Pi }_{0i}(\vec{x}),\mathrm{\Lambda }^j(\vec{y})\}=\delta _i^j\delta (\vec{x}-\vec{y}),`$ (28)
$`\{\mathrm{\Omega }_i(\vec{x}),\mathrm{\Lambda }_j(\vec{y})\}={\displaystyle \frac{1}{m}}ϵ_{0ijk}\partial ^k\delta (\vec{x}-\vec{y}),`$ (29)
$`\{\mathrm{\Omega }_i(\vec{x}),\mathrm{\Lambda }_{jk}(\vec{y})\}={\displaystyle \frac{1}{m}}ϵ_{0ijk}\delta (\vec{x}-\vec{y}),`$ (30)
$`\{\mathrm{\Lambda }_{ij}(\vec{x}),\mathrm{\Lambda }(\vec{y})\}={\displaystyle \frac{1}{m}}ϵ_{0ijk}\partial ^k\delta (\vec{x}-\vec{y}).`$ (31)
Thus all the constraints are second class, as expected of a theory without any gauge invariance. Note that the constraints $`\mathrm{\Omega }_i`$ and $`\mathrm{\Lambda }_{ij}`$ are due to the symplectic structure of the Lagrangian (8). Following Faddeev and Jackiw , these symplectic conditions, which are not true constraints, are implemented strongly, leading to the modified bracket
$$\{G_i(\vec{x}),H_{jk}(\vec{y})\}=mϵ_{0ijk}\delta (\vec{x}-\vec{y}).$$
(32)
In this way $`\mathrm{\Omega }_i`$ and $`\mathrm{\Lambda }_{ij}`$ are implemented strongly.
Now we enlarge the phase space by introducing the canonically conjugate auxiliary pairs $`(\alpha ,\mathrm{\Pi }_\alpha )`$ and $`(q_i,p_i)`$ and modify the remaining second class constraints such that they are in strong involution, i.e., have vanishing Poisson brackets. To this end, we define the non-vanishing Poisson brackets among the new phase space variables to be
$`\{\alpha (\vec{x}),\mathrm{\Pi }_\alpha (\vec{y})\}=\delta (\vec{x}-\vec{y}),`$ (33)
$`\{q^j(\vec{x}),p_i(\vec{y})\}=\delta _i^j\delta (\vec{x}-\vec{y}).`$ (34)
The modified constraints which are in strong involution read
$`\omega =\mathrm{\Pi }_0+\alpha ,`$ (35)
$`\mathrm{\Lambda }^{\prime }=\mathrm{\Lambda }+\mathrm{\Pi }_\alpha ,`$ (36)
$`\theta _i=\mathrm{\Pi }_{0i}+q_i,`$ (37)
$`\mathrm{\Lambda }_i^{\prime }=\mathrm{\Lambda }_i-p_i.`$ (38)
Following the general BFT procedure we construct the Hamiltonian, which is weakly gauge invariant and is given by
$$H_{GI}={\displaystyle \int }d^3x\left[\mathcal{H}_c+\frac{1}{2}\mathrm{\Pi }_\alpha ^2+\alpha \partial ^iG_i-\frac{1}{2}(\partial ^i\alpha )(\partial _i\alpha )-\frac{1}{2}p_ip^i-\frac{1}{2}q_{ij}H^{ij}+\frac{1}{4}q_{ij}q^{ij}\right],$$
(39)
where $`q_{ij}=(\partial _iq_j-\partial _jq_i).`$ The Poisson brackets of the modified constraints with $`H_{GI}`$ are
$`\{\omega ,H_{GI}\}=\mathrm{\Lambda }^{\prime },`$ (40)
$`\{\mathrm{\Lambda }^{\prime },H_{GI}\}=0,`$ (41)
$`\{\theta _i,H_{GI}\}=\mathrm{\Lambda }_i^{\prime },`$ (42)
$`\{\mathrm{\Lambda }_i^{\prime },H_{GI}\}=0.`$ (43)
Thus all the modified constraints are in involution with $`H_{GI}`$, as one can easily see from their Poisson brackets. The gauge transformations generated by these first class constraints (35)-(38) are
$`\{G^0,{\displaystyle \int d^3x\,\omega \stackrel{~}{\theta }}\}=\stackrel{~}{\theta },`$ (44)
$`\{\mathrm{\Pi }_\alpha ,{\displaystyle \int d^3x\,\omega \stackrel{~}{\theta }}\}=-\stackrel{~}{\theta },`$ (45)
$`\{H_{0i},{\displaystyle \int d^3x\,\theta _j\psi ^j}\}=\psi _i,`$ (46)
$`\{p_i,{\displaystyle \int d^3x\,\theta _j\psi ^j}\}=-\psi _i,`$ (47)
$`\{G_i,{\displaystyle \int d^3x\,\mathrm{\Lambda }^{\prime }\stackrel{~}{\theta }}\}=\partial _i\stackrel{~}{\theta },`$ (48)
$`\{\alpha ,{\displaystyle \int d^3x\,\mathrm{\Lambda }^{\prime }\stackrel{~}{\theta }}\}=-\stackrel{~}{\theta },`$ (49)
$`\{H_{ij},{\displaystyle \int d^3x\,\mathrm{\Lambda }_k^{\prime }\psi ^k}\}=(\partial _i\psi _j-\partial _j\psi _i),`$ (50)
$`\{q_i,{\displaystyle \int d^3x\,\mathrm{\Lambda }_j^{\prime }\psi ^j}\}=\psi _i.`$ (51)
Thus the combinations
$`\overline{G}_0=G_0+\mathrm{\Pi }_\alpha ,`$ (52)
$`\overline{G}_i=G_i+\partial _i\alpha ,`$ (53)
$`\overline{H}_{0i}=H_{0i}+p_i,`$ (54)
$`\overline{H}_{ij}=H_{ij}-q_{ij}.`$ (55)
are gauge invariant under the transformations generated by the first class constraints (35)-(38). Next we re-express the gauge-invariant Hamiltonian density $`\mathcal{H}_𝒢`$ in terms of these gauge invariant combinations,
$$\mathcal{H}_𝒢=\frac{1}{4}\overline{H}_{ij}\overline{H}^{ij}-\frac{1}{2}\overline{H}_{0i}\overline{H}^{0i}-\frac{1}{2}\overline{G}_i\overline{G}^i+\frac{1}{2}\overline{G}_0\overline{G}^0-G_0\overline{\mathrm{\Lambda }}^{\prime }-H^{0i}\overline{\mathrm{\Lambda }}_i^{\prime },$$
(56)
where $`\overline{\mathrm{\Lambda }}^{\prime }`$ and $`\overline{\mathrm{\Lambda }}_i^{\prime }`$ are the constraints $`\mathrm{\Lambda }^{\prime }`$ and $`\mathrm{\Lambda }_i^{\prime }`$ expressed in terms of the gauge invariant combinations (52)-(55).
The equations of motion following from this Hamiltonian density (56) are
$`\overline{G}_0+{\displaystyle \frac{1}{2m}}ϵ_{0ijk}\partial ^i\overline{H}^{jk}=0,`$ (57)
$`\overline{G}_i+{\displaystyle \frac{1}{2m}}ϵ_{i\mu \nu \lambda }\partial ^\mu \overline{H}^{\nu \lambda }=0,`$ (58)
$`\overline{H}_{0i}+{\displaystyle \frac{1}{m}}ϵ_{0ijk}\partial ^j\overline{G}^k=0,`$ (59)
$`\overline{H}_{ij}+{\displaystyle \frac{1}{m}}ϵ_{ij\mu \nu }\partial ^\mu \overline{G}^\nu =0.`$ (60)
These equations can be expressed in a covariant way, i.e.,
$`\overline{G}_\mu +{\displaystyle \frac{1}{2m}}ϵ_{\mu \nu \lambda \sigma }\partial ^\nu \overline{H}^{\lambda \sigma }=0,`$ (61)
$`\overline{H}_{\mu \nu }+{\displaystyle \frac{1}{m}}ϵ_{\mu \nu \lambda \sigma }\partial ^\lambda \overline{G}^\sigma =0.`$ (62)
From these equations it follows that $`\partial ^\mu \overline{G}_\mu =0`$ and $`\partial ^\mu \overline{H}_{\mu \nu }=0`$. These also follow as the Hamiltonian equations of motion for $`\overline{G}_0`$ and $`\overline{H}_{0i}`$, respectively. The gauge-invariant solutions of these equations which also satisfy the divergenceless conditions for $`\overline{G}_\mu `$ and $`\overline{H}_{\mu \nu }`$ are
$`\overline{G}_\mu =\stackrel{~}{H}_\mu \equiv {\displaystyle \frac{1}{3!}}ϵ_{\mu \nu \lambda \sigma }H^{\nu \lambda \sigma },`$ (63)
$`\overline{H}_{\mu \nu }=\stackrel{~}{F}_{\mu \nu }\equiv {\displaystyle \frac{1}{2}}ϵ_{\mu \nu \lambda \sigma }F^{\lambda \sigma },`$ (64)
where $`H^{\mu \nu \lambda }=\partial ^\mu B^{\nu \lambda }+\mathrm{cyclic}\mathrm{terms}`$ and $`F_{\mu \nu }=(\partial _\mu A_\nu -\partial _\nu A_\mu ).`$
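The statement that the duals (63), (64) automatically satisfy the divergenceless conditions can be verified symbolically for arbitrary $`A_\mu `$ and $`B_{\mu \nu }`$. A sketch using sympy (metric $`\mathrm{diag}(1,-1,-1,-1)`$ and $`ϵ_{0123}=1`$ assumed; the function names are ours):

```python
import sympy as sp

# Symbolic check that the duals in Eqs. (63), (64) are divergence-free for
# arbitrary A_mu and B_munu. Conventions assumed: g = diag(1,-1,-1,-1),
# eps_{0123} = +1.
X = sp.symbols('x0:4')
g = [1, -1, -1, -1]

def lc(*p):
    p, sign = list(p), 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] == p[j]:
                return 0
            if p[i] > p[j]:
                sign = -sign
    return sign

Bc = {(m, n): sp.Function(f'B{m}{n}')(*X) for m in range(4) for n in range(m + 1, 4)}
B = lambda m, n: 0 if m == n else (Bc[(m, n)] if m < n else -Bc[(n, m)])
A = [sp.Function(f'A{m}')(*X) for m in range(4)]
d = lambda mu, e: sp.diff(e, X[mu])
up = lambda mu, e: g[mu]*e                  # raise one index (diagonal metric)

# H_{nu lam sig} = d_nu B_{lam sig} + cyclic; Gbar_mu = (1/3!) eps_{mu nu lam sig} H^{nu lam sig}
H3 = lambda n, l, s: d(n, B(l, s)) + d(l, B(s, n)) + d(s, B(n, l))
Gbar = [sp.Rational(1, 6)*sum(lc(m, n, l, s)*up(n, up(l, up(s, H3(n, l, s))))
                              for n in range(4) for l in range(4) for s in range(4))
        for m in range(4)]
divG = sum(up(m, d(m, Gbar[m])) for m in range(4))        # d^mu Gbar_mu
assert sp.simplify(divG) == 0

# F_{lam sig} = d_lam A_sig - d_sig A_lam; Hbar_{mu nu} = (1/2) eps_{mu nu lam sig} F^{lam sig}
F = lambda l, s: d(l, A[s]) - d(s, A[l])
Hbar = [[sp.Rational(1, 2)*sum(lc(m, n, l, s)*up(l, up(s, F(l, s)))
                               for l in range(4) for s in range(4))
         for n in range(4)] for m in range(4)]
divH = [sum(up(m, d(m, Hbar[m][n])) for m in range(4)) for n in range(4)]
assert all(sp.simplify(e) == 0 for e in divH)
print("duals are divergence-free")
```

The vanishing is purely kinematical: an antisymmetric $`ϵ`$ contracted with symmetric second derivatives, which is exactly why the divergenceless conditions come for free.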
Now, by substituting back the solutions for $`\overline{G}_\mu `$ and $`\overline{H}_{\mu \nu }`$ in $`\mathcal{H}_𝒢`$ (56), the involutive Hamiltonian density becomes
$$\mathcal{H}_𝒢=\frac{1}{4}F_{ij}F^{ij}-\frac{1}{2}F_{0i}F^{0i}+\frac{1}{4}H_{0ij}H^{0ij}-\frac{1}{2\cdot 3!}H_{ijk}H^{ijk}+G_0\stackrel{~}{\mathrm{\Lambda }}-H_{0i}\stackrel{~}{\mathrm{\Lambda }}^i,$$
(65)
where $`\stackrel{~}{\mathrm{\Lambda }}=(\frac{1}{3!}ϵ_{0ijk}H^{ijk}-\frac{1}{m}\partial ^iF_{0i})`$ and $`\stackrel{~}{\mathrm{\Lambda }}^i=(\frac{1}{2}ϵ_{0ijk}F^{jk}+\frac{1}{4}\partial ^jH_{0ij}).`$
With
$`{\displaystyle \frac{1}{2}}ϵ_{0ijk}F^{jk}=B_i,F_{0i}=E_i,`$ (66)
$`{\displaystyle \frac{1}{3!}}ϵ_{0ijk}H^{ijk}=\overline{B},\mathrm{and}{\displaystyle \frac{1}{2}}ϵ_{0ijk}H^{0jk}=\overline{E}_i,`$ (67)
$`\mathcal{H}_𝒢`$ becomes,
$$\mathcal{H}_𝒢=\frac{1}{2}\left(E^2+B^2\right)+\frac{1}{2}\left(\overline{E}^2+\overline{B}^2\right)+G_0\stackrel{~}{\mathrm{\Lambda }}-H_{0i}\stackrel{~}{\mathrm{\Lambda }}^i.$$
(68)
This is the Hamiltonian density following from the $`BF`$ Lagrangian (1). Note that $`\stackrel{~}{\mathrm{\Lambda }}`$ and $`\stackrel{~}{\mathrm{\Lambda }}^i`$ are the Gauss law constraints of the $`BF`$ theory. The latter, which was an irreducible constraint in terms of the gauge invariant combinations, becomes a reducible constraint, obeying $`\partial ^i\stackrel{~}{\mathrm{\Lambda }}_i=0`$, when expressed in terms of the solutions (63, 64). By substituting back the solutions for $`\overline{G}_\mu `$ and $`\overline{H}_{\mu \nu }`$ into the equations of motion (61, 62) following from $`H_{GI}`$, they become
$`\partial ^\nu F_{\mu \nu }+{\displaystyle \frac{m}{3!}}ϵ_{\mu \nu \lambda \sigma }H^{\nu \lambda \sigma }=0,`$ (69)
$`\partial ^\lambda H_{\mu \nu \lambda }-{\displaystyle \frac{m}{2}}ϵ_{\mu \nu \lambda \sigma }F^{\lambda \sigma }=0,`$ (70)
which are the same equations as the ones following from the $`BF`$ theory. Thus, under the BFT embedding, the fields appearing in the original Hamiltonian (20) get mapped to the gauge invariant combinations (52)-(55) of the embedded Hamiltonian $`H_{GI}`$ (39). The solutions to the equations of motion following from $`H_{GI}`$ uniquely map the embedded Hamiltonian to that of the $`BF`$ theory. Also, the irreducible constraints $`\mathrm{\Lambda }_i`$ and $`\mathrm{\Lambda }`$ in the original Hamiltonian (20) get mapped to the reducible constraint $`\stackrel{~}{\mathrm{\Lambda }}_i`$ and to $`\stackrel{~}{\mathrm{\Lambda }}`$, respectively.
III. Phase Space Path Integral Approach
The equivalence of the theory described by the Lagrangian (8) to the $`BF`$ theory can also be established using the phase-space path integral method. By a suitable choice of gauge fixing conditions, the partition function of the embedded model becomes either that of the original gauge non-invariant massive spin-1 theory or that of the $`BF`$ theory, proving their equivalence.
The partition function for the embedded model described by the Hamiltonian (39) is
$$Z_{emb}={\displaystyle \int }D\eta \,\delta (\omega )\delta (\theta _i)\delta (\mathrm{\Lambda }^{\prime })\delta (\mathrm{\Lambda }_i^{\prime })\delta (\psi _i)\mathrm{exp}\left(i{\displaystyle \int }d^4x\,\mathcal{L}\right),$$
(71)
where we have omitted the trivial Faddeev-Popov determinant for the gauges chosen below. The measure is
$$D\eta =D\mathrm{\Pi }_0D\mathrm{\Pi }_{0i}D\mathrm{\Pi }_\alpha D\alpha Dp_iDq_iDG_\mu DH_{\mu \nu },$$
and
$$\mathcal{L}=\mathrm{\Pi }_0\dot{G}^0+\mathrm{\Pi }_{0i}\dot{H}^{0i}+\mathrm{\Pi }_\alpha \dot{\alpha }+p_i\dot{q}^i+\frac{1}{4m}ϵ_{0ijk}H^{ij}\partial ^0G^k-\frac{1}{4m}ϵ_{0ijk}G^i\partial ^0H^{jk}-\mathcal{H}_𝒢,$$
(72)
where $`\mathcal{H}_𝒢`$ is the gauge-invariant Hamiltonian density appearing in (39), and $`\delta (\psi _i)`$ are the gauge fixing conditions corresponding to the first class constraints of the embedded model. Now we have twenty-two independent phase space variables (recall that $`\mathrm{\Pi }_i`$ and $`\mathrm{\Pi }_{ij}`$ are not independent degrees of freedom) along with the eight first class constraints, giving the correct number of degrees of freedom required for a massive spin-one theory.
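The counting just quoted can be spelled out explicitly; the grouping below (a sketch, with our own variable labels) reproduces the twenty-two phase space variables and the three polarizations of a massive spin-one particle:

```python
# Explicit Dirac counting for the embedded model (a sketch; the dictionary
# keys are just labels for the variables listed in the text).
config = {"G_mu": 4, "H_munu": 6, "alpha": 1, "q_i": 3}
momenta = {"Pi_0": 1, "Pi_0i": 3, "Pi_alpha": 1, "p_i": 3}
# Pi_i and Pi_ij are fixed by the (strongly implemented) symplectic
# conditions Omega_i and Lambda_ij, so they are not counted.
phase_space = sum(config.values()) + sum(momenta.values())
first_class = 1 + 1 + 3 + 3        # omega, Lambda', theta_i, Lambda'_i
dof = (phase_space - 2*first_class)//2
print(phase_space, dof)            # 22 phase space variables, 3 polarizations
```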
Choosing the gauge fixing conditions
$$\delta (\psi _i)=\delta (\mathrm{\Pi }_0)\delta (\mathrm{\Pi }_{0i})\delta (\mathrm{\Lambda })\delta (\mathrm{\Lambda }_i),$$
(73)
and integrating out the canonically conjugate variables $`\alpha ,\mathrm{\Pi }_\alpha ,q_i`$ and $`p_i`$ and the momenta $`\mathrm{\Pi }_0`$ and $`\mathrm{\Pi }_{0i}`$ from the partition function reduces $`Z_{emb}`$ to
$$Z={\displaystyle \int }DG_\mu DH_{\mu \nu }\,\delta (\mathrm{\Lambda })\delta (\mathrm{\Lambda }_i)\mathrm{exp}\left(i{\displaystyle \int }d^4x\,\mathcal{L}\right),$$
(74)
where $`\mathcal{L}`$ is the original first order Lagrangian (8), with the Gauss law constraints imposed through $`\delta (\mathrm{\Lambda })`$ and $`\delta (\mathrm{\Lambda }_i).`$ Note that the original second class constraints are now the gauge fixing conditions (73).
Next we choose the gauge fixing conditions as
$$\delta (\psi _i)=\delta (\partial ^iH_{0i})\delta (\partial ^iq_i)\delta (\chi _i)\delta (\chi _{ij}),$$
(75)
where
$`\chi _i=(G_i-{\displaystyle \frac{1}{m}}ϵ_{0ijk}\partial ^jH^{0k}),`$ (76)
$`\chi _{ij}=(H_{ij}-{\displaystyle \frac{1}{m}}ϵ_{0ijk}\partial ^kG^0),`$ (77)
to show the equivalence of the embedded model to the $`BF`$ theory. Owing to the constraints $`\omega `$ and $`\theta _i`$, the $`D\mathrm{\Pi }_0`$ and $`D\mathrm{\Pi }_{0i}`$ integrations are trivial. The $`D\mathrm{\Pi }_\alpha `$ and $`Dp_i`$ integrations, along with the constraints $`\delta (\mathrm{\Lambda }^{\prime })`$ and $`\delta (\mathrm{\Lambda }_i^{\prime })`$, lead to the terms $`-\frac{1}{4m^2}G_{ij}G^{ij}`$ and $`\frac{1}{2\cdot 3!m^2}H_{ijk}H^{ijk}`$ in the exponent. Using the fact that $`\partial ^iG_i=0`$ and $`\partial ^iH_{ij}=0`$ on the constraint surface of $`\chi _i`$ and $`\chi _{ij}`$, and the gauge fixing conditions $`\delta (\partial ^iq_i)`$ and $`\delta (\partial ^iH_{0i})`$, we carry out the integrations over $`Dq_i`$ and $`D\alpha `$, which are just Gaussian. Thus the Lagrangian in the partition function becomes
$`\mathcal{L}={\displaystyle \frac{1}{2m}}ϵ_{0ijk}\partial ^0G^iH^{jk}-{\displaystyle \frac{1}{4m^2}}G_{ij}G^{ij}+{\displaystyle \frac{1}{2\cdot 3!m^2}}H_{ijk}H^{ijk}-{\displaystyle \frac{1}{2m^2}}G_{0i}G^{0i}+{\displaystyle \frac{1}{4m^2}}H_{0ij}H^{0ij}`$ (78)
$`-{\displaystyle \frac{1}{4}}H_{ij}H^{ij}+{\displaystyle \frac{1}{2}}G_iG^i-{\displaystyle \frac{1}{2m^2}}H_{0i}\partial ^2H^{0i}+{\displaystyle \frac{1}{2m^2}}G_0\partial ^2G^0.`$ (79)
After using the constraints (76, 77) and the conditions $`\partial ^2H_{0i}=\frac{m}{2}ϵ_{0ijk}G^{jk}`$ and $`\partial ^2G_0=\frac{m}{2}ϵ_{0ijk}\partial ^iH^{jk}`$ implied by (76, 77), the partition function becomes
$$Z={\displaystyle \int }DA_\mu DB_{\mu \nu }\,\delta (\chi _i)\delta (\chi _{ij})\mathrm{exp}\left(i{\displaystyle \int }d^4x\,\mathcal{L}\right),$$
(80)
where $`\mathcal{L}`$, with the identifications
$`{\displaystyle \frac{1}{m}}G_\mu =A_\mu ,`$ (81)
$`{\displaystyle \frac{1}{m}}H_{\mu \nu }=B_{\mu \nu }`$ (82)
is the Lagrangian of the $`BF`$ theory (1). The constraints $`\chi _i`$ and $`\chi _{ij}`$, in terms of $`A_\mu `$ and $`B_{\mu \nu }`$, become
$`\chi _i=(mA_i-ϵ_{0ijk}\partial ^jB^{0k}),`$ (83)
$`\mathrm{and}\chi _{ij}=(mB_{ij}-ϵ_{0ijk}\partial ^kA^0).`$ (84)
$`\chi _i`$ and $`\chi _{ij}`$, which are the gauge fixing conditions for the linearly independent generators $`\theta _i`$ and $`\mathrm{\Lambda }_i^{\prime }`$, now play the role of the Gauss law constraints of the $`BF`$ theory. Now
$`\partial ^2B_{0i}={\displaystyle \frac{m}{2}}ϵ_{0ijk}F^{jk},`$ (85)
$`\mathrm{and}\partial ^2A_0={\displaystyle \frac{m}{2}}ϵ_{0ijk}\partial ^iB^{jk},`$ (86)
($`F_{jk}=\partial _jA_k-\partial _kA_j`$), are the Gauss law constraints in the gauge $`\chi _i=0`$ and $`\chi _{ij}=0`$. Since $`\theta _i`$ is not a reducible constraint, the corresponding gauge fixing condition is also not reducible; but it implies the reducible Gauss law constraint (85) present in the $`BF`$ theory. It is interesting to note the complementary behavior of the Gauss law constraints, which appear here as gauge fixing conditions in the partition function of the embedded model.
IV. Conclusion
In this paper we have started with a new first-order formulation of massive spin-one theory which is gauge non-invariant, and converted it to a theory with only first class constraints following the BFT Hamiltonian embedding. We showed that the embedded Hamiltonian is equivalent to the Hamiltonian of the $`BF`$ theory. This was shown both from the solutions of the equations of motion following from the embedded Hamiltonian and from the phase-space path integral in a suitable gauge. We also pointed out how an irreducible constraint of the first-order theory gets mapped to the reducible constraint of the $`BF`$ theory. It should be emphasized that the first-order Lagrangian (8) and its equivalence to the topologically massive gauge theory are both new results.
A similar first-order Lagrangian can be formulated for a massive spin-zero particle, now involving a 3-form and a scalar field:
$$\mathcal{L}=\frac{1}{2\cdot 3!}C_{\mu \nu \lambda }C^{\mu \nu \lambda }-\frac{1}{2}\varphi \varphi +\frac{1}{4!m}ϵ_{\mu \nu \lambda \sigma }C^{\mu \nu \lambda \sigma }\varphi ,$$
(87)
where $`C_{\mu \nu \lambda \sigma }=\partial _\mu C_{\nu \lambda \sigma }+\mathrm{cyclic}\mathrm{terms}.`$ Interestingly, the field content here is the same as that of the DKP formulation of the spin-zero theory.
The equivalence demonstrated here is of the same nature as that between the self-dual model and Maxwell-Chern-Simons theory in $`2+1`$ dimensions, shown in . The behavior of the fields of the embedded Hamiltonian here is the same as that of the $`2+1`$ dimensional self-dual model; viz., the gauge variant fields of the embedded model can be mapped to the fundamental fields of the $`BF`$ theory or to those of the original model. Despite this similarity, the model in (8) is different from the self-dual model. The latter describes only half the degrees of freedom of a massive spin-one theory in $`2+1`$ dimensions and consequently is equivalent to the parity violating Maxwell-Chern-Simons theory; also, the self-duality condition is possible only in $`4k-1`$ dimensions. The former, by contrast, describes all three states of polarization needed for a massive spin-one particle, the construction is possible in all dimensions, and it has an even-parity mass term. Owing to this even-parity mass term, the $`2+1`$ dimensional non-abelian generalization of (8) may be related to the recently constructed Jackiw-Pi model . The self-dual model and Maxwell-Chern-Simons correspondence has proved to be useful in bosonization in $`2+1`$ dimensions . It should be interesting to see whether the equivalence proved here plays a similar role in bosonization in $`3+1`$ dimensions. It is also interesting to study the Hamiltonian embedding of the non-abelian version of (8). Work along these lines is in progress.
We have exploited the gauge symmetry arising from the Hamiltonian embedding procedure to prove the equivalence between two different formulations of massive spin-one theory. There is a different procedure which also has the potential to establish equivalence among different formulations . In this method, the new degrees of freedom are added by hand, generating an abelian gauge algebra, which is then gauge fixed suitably to arrive at a different formulation of the original theory, as for example in bosonization . It would also be interesting to investigate whether the quantum equivalence proved here survives coupling to external fields such as gravitation and electromagnetism.
Acknowledgements: We are grateful to Prof. V. Srinivasan for pointing out an error in the earlier manuscript. We acknowledge Dr. R. Banerjee for useful correspondence. We thank the referee for bringing the reference to our notice. EH thanks U.G.C., India for support through the S.R.F. scheme.
# Electronic correlations, electron-phonon interaction, and isotope effect in high-𝑇_𝑐 cuprates
## I Introduction
High-T<sub>c</sub> cuprates show strong electronic correlations but also a non-negligible electron-phonon interaction. The latter causes, for instance, the observed isotope effect in the transition temperature T<sub>c</sub>, especially away from optimal doping . It is thus of interest to calculate superconducting instabilities taking both the electron-electron and the electron-phonon interaction into account. There are several calculations dealing with the case of weak electronic correlations . In these calculations the phonon-mediated part of the effective interaction is unaffected by electronic correlations. The electronic part of the effective interaction is calculated using a Hubbard model and assuming $`U`$ to be small compared to the band width. It is repulsive and strongly peaked near the $`M`$-point in the Brillouin zone. As an alternative, Refs. use the spin-fluctuation term of Ref. . Since the phonon part is attractive throughout the Brillouin zone, it is plausible that the d-wave pairing due to the electronic part will be weakened if the electrons couple strongly to phonons near the $`M`$ point, and only weakly affected if only phonons with small momentum are involved . Refs. , for instance, find that the $`T_c`$ for d-wave superconductivity is always lowered by phonons and that the isotope coefficient $`\beta `$ is negative for a constant electron-phonon coupling function. If only phonons with small momenta play a role, $`\beta `$ becomes positive. Ref. even finds a change of sign in $`\beta `$ as a function of the phonon frequency.
The case of strong electronic correlations has quite different features compared to the weakly correlated case. The purely electronic part of the effective interaction is rather a smooth function of momentum without a sharp peak at the $`M`$-point due to spin fluctuations . Furthermore, electronic correlations modify substantially the phonon-mediated part of the effective interaction . It is the purpose of this communication to present results for this case of strong electronic correlations. A suitable model for such calculations is a generalized $`tJ`$ model which also includes phonons within a Holstein model. Extending the spin degrees from 2 to $`N`$, systematic approximations for the effective interaction in terms of powers of $`1/N`$ can be carried out . In a first step we consider only the phonon-mediated effective interaction which, unlike in the case of weak correlations, is strongly modified by vertex corrections, transforming the original constant electron-phonon coupling of the Holstein model into a strongly momentum- and frequency-dependent function. Explicit results for $`T_c`$ and $`\beta `$ will be given for the leading symmetry channels of the superconducting order parameter as a function of the doping. In a second step these results are extended to the case where also the purely electronic contribution to the effective interaction is taken into account.
## II Linearized equation for the superconductivity gap
Our Hamiltonian for the $`tJ`$ model plus phonons can be written as
$`H={\displaystyle \underset{\genfrac{}{}{0pt}{}{ij}{p=1\mathrm{\dots }N}}{\mathrm{\Sigma }}}{\displaystyle \frac{t_{ij}}{N}}X_i^{p0}X_j^{0p}+{\displaystyle \underset{\genfrac{}{}{0pt}{}{ij}{p,q=1\mathrm{\dots }N}}{\mathrm{\Sigma }}}{\displaystyle \frac{J_{ij}}{4N}}X_i^{pq}X_j^{qp}`$ $`-`$ (1)
$`{\displaystyle \underset{\genfrac{}{}{0pt}{}{ij}{p,q=1\mathrm{\dots }N}}{\mathrm{\Sigma }}}{\displaystyle \frac{J_{ij}}{4N}}X_i^{pp}X_j^{qq}+{\displaystyle \underset{i}{\mathrm{\Sigma }}}\omega _0(a_i^{\dagger }a_i+{\displaystyle \frac{1}{2}})`$ $`+`$ (2)
$`{\displaystyle \underset{\genfrac{}{}{0pt}{}{i}{p=1\mathrm{\dots }N}}{\mathrm{\Sigma }}}{\displaystyle \frac{g}{\sqrt{N}}}[a_i^{\dagger }+a_i](X_i^{pp}-<X_i^{pp}>).`$ (3)
The first three terms correspond for $`N=2`$ to the usual $`tJ`$ model. For $`N=2`$, $`X`$ is identical with Hubbard's projection operator $`X_i^{pq}=|\genfrac{}{}{0pt}{}{p}{i}><\genfrac{}{}{0pt}{}{q}{i}|`$, where $`|\genfrac{}{}{0pt}{}{p}{i}>`$ denotes for $`p=0`$ an empty and for $`p=1,2`$ a singly occupied state with spin up and down, respectively, at the site $`i`$. $`t_{ij}`$ and $`J_{ij}`$ are hopping and Heisenberg interaction matrix elements between the sites $`i,j`$. The fourth term in Eq.(1) represents one branch of dispersionless, harmonic phonons with frequency $`\omega _0`$. The fifth term in Eq.(1) describes a local coupling between the phonon and the change in the electronic density at site $`i`$ with the coupling constant $`g`$. $`<X>`$ denotes the expectation value of $`X`$. The extension from $`N=2`$ in Eq.(1) to a general $`N`$ has been discussed in detail in Ref. . The label $`p`$ runs then not only over the two spin directions but also over $`N/2`$ identical copies of the orbital. The symmetry group of $`H`$ is the symplectic group $`Sp(N/2)`$, which allows one to perform $`1/N`$ expansions for physical observables. In Eq.(1) the electron-phonon coupling is scaled as $`1/\sqrt{N}`$ whereas the free phonon part is independent of $`N`$. As a result the leading contributions to superconductivity from the $`tJ`$ model alone and from the phonons are both of order $`O(1/N)`$, which allows one to treat them on an equal footing.
Instabilities towards superconductivity in the $`tJ`$ model have been studied in the above framework in Refs. . The contributions to the anomalous self-energy from the phonon-mediated effective interaction have been derived in Refs. for the case $`J_{ij}=0`$. Generalizing the latter treatment to a finite $`J`$, similarly to Ref. , the linearized equation for the superconducting gap $`\mathrm{\Sigma }_{an}`$ for the entire Hamiltonian Eq.(1) can be written as
$`\mathrm{\Sigma }_{an}(k)={\displaystyle \frac{T}{NN_c}}{\displaystyle \underset{k^{\prime }}{\mathrm{\Sigma }}}\mathrm{\Theta }(k,k^{\prime }){\displaystyle \frac{1}{\omega _{n^{\prime }}^2+ϵ^2(𝐤^{\prime })}}\mathrm{\Sigma }_{an}(k^{\prime }).`$ (4)
$`N_c`$ is the number of cells and $`k`$ the supervector $`k=(\omega _n,𝐤)`$, where $`\omega _n`$ denotes a fermionic Matsubara frequency and $`𝐤`$ the wave vector. $`ϵ(𝐤)`$ is the one-particle energy which is unchanged by the phonons in O(1) and thus given by Eq.(41) of Ref. . The kernel $`\mathrm{\Theta }`$ in Eq.(2) consists of two parts
$`\mathrm{\Theta }(k,k^{\prime })=\mathrm{\Theta }^{tJ}(k,k^{\prime })+\mathrm{\Theta }^{ep}(k,k^{\prime }).`$ (5)
The first contribution $`\mathrm{\Theta }^{tJ}`$ in Eq.(3) comes from the $`tJ`$ model. Explicit expressions for it have been given in Eqs.(42)-(52) in Ref. . The second term $`\mathrm{\Theta }^{ep}`$ in Eq.(3) is due to phonon-mediated interactions and is given by
$`\mathrm{\Theta }^{ep}(k,k^{\prime })`$ $`=`$ $`-{\displaystyle \frac{2g^2\omega _0}{(\omega _n-\omega _{n^{\prime }})^2+\omega _0^2}}`$ (6)
$`\times `$ $`\gamma _c(k^{\prime },k-k^{\prime })\gamma _c(k,k^{\prime }-k).`$ (7)
$`\gamma _c`$ is the charge vertex, for which an explicit expression has been derived in Ref. . For uncorrelated electrons there are no vertex corrections, i.e., $`\gamma _c=1`$. Since $`g`$ and $`\omega _0`$ are assumed to be independent of $`𝐤`$, $`\mathrm{\Theta }^{ep}`$ is also independent of $`𝐤`$ and only s-wave superconductivity is possible. When correlations are present, the kernel $`\mathrm{\Theta }^{ep}`$ depends on $`𝐤`$ and $`\omega _n`$ through the charge vertex $`\gamma _c`$, and more general symmetries of the order parameter may become possible.
## III Calculation of $`T_c`$ and $`\beta `$ from the phonon-mediated part
Taking a square lattice with point group $`C_{4v}`$, $`\mathrm{\Theta }(k,k^{\prime })`$ and $`ϵ(𝐤)`$ in Eq.(2) are invariant under $`C_{4v}`$, which means that $`\mathrm{\Sigma }_{an}(k)`$ can be classified according to the five irreducible representations $`\mathrm{\Gamma }_i`$ of $`C_{4v}`$. s-wave symmetry corresponds to $`\mathrm{\Gamma }_1`$, d-wave symmetry to $`\mathrm{\Gamma }_3`$, etc. In the weak-coupling case $`\mathrm{\Theta }(k,k^{\prime })`$ can be approximated by its static limit $`\mathrm{\Theta }(𝐤,𝐤^{\prime })`$. Putting all momenta right onto the Fermi line, the sum over $`𝐤^{\prime }`$ in Eq.(2) can be transformed into a line integral along the Fermi line. Assuming a certain irreducible representation $`\mathrm{\Gamma }_i`$ for the order parameter, the line integral can be restricted to the irreducible Brillouin zone (IBZ), introducing a symmetry-projected kernel $`\mathrm{\Theta }_i(𝐤,𝐤^{\prime })`$ with $`𝐤,𝐤^{\prime }\in \mathrm{IBZ}`$. Finally, the line integral is discretized by a set of points $`[𝐤_\alpha ^F]`$ along the Fermi line in the IBZ with line elements $`[s(𝐤_\alpha ^F)]`$. Denoting the smallest eigenvalue of the symmetric matrix
$$\frac{1}{4\pi ^2}\sqrt{\frac{s(𝐤_\alpha ^F)s(𝐤_\beta ^F)}{|\nabla ϵ(𝐤_\alpha ^F)||\nabla ϵ(𝐤_\beta ^F)|}}\mathrm{\Theta }_i(𝐤_\alpha ^F,𝐤_\beta ^F)$$
(8)
by $`\lambda _i`$, Eq.(2) yields for $`\lambda _i<0`$, $`N=2`$, and in the weak-coupling case the BCS formula
$$T_{ci}=1.13\omega _0e^{1/\lambda _i}.$$
(9)
As usual we took $`\omega _0`$ as a suitable cutoff. If $`\lambda _i>0`$ we have, of course, $`T_{ci}=0`$. According to Eq.(6) the absolute value of $`\lambda _i`$ characterizes the strength of the effective interaction in the symmetry channel $`i`$. The overall strength of the electron-phonon coupling is conventionally expressed in terms of the dimensionless coupling $`\lambda `$ defined by
$`\lambda ={\displaystyle \frac{g^2}{8\omega _0}}.`$ (10)
In Eq.(7) we have introduced a factor $`1/2`$ to account for the prefactor $`1/2`$ in Eq.(2) after setting there $`N=2`$. We also used the average density of states $`1/(8t)`$ for the density of states factor in $`\lambda `$ and put $`t`$ equal to 1. For the range of dopings we will be interested in, i.e., $`0.15<\delta <0.8`$, the density of states varies only little with doping so that the definition Eq.(7) of $`\lambda `$ is appropriate for all dopings.
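The weak-coupling pipeline of Eqs. (5) and (6) (discretize the Fermi line, build the symmetric matrix, take its lowest eigenvalue, insert it into the BCS formula) can be exercised on a toy model. The circular Fermi line and the constant attractive kernel below are illustrative assumptions, not the kernel of Eqs. (3), (4):

```python
import numpy as np

# Toy run of Eqs. (5)-(6): discretize a circular "Fermi line", build the
# symmetric matrix, take its lowest eigenvalue, insert it into the BCS formula.
# The constant attractive kernel Theta = -V is illustrative only.
n = 64
kF = 1.0
s = np.full(n, 2*np.pi*kF/n)          # line elements s(k_alpha^F)
vF = np.full(n, 1.0)                  # |grad eps| on the Fermi line
V = 0.5
theta = -V*np.ones((n, n))            # symmetry-projected kernel Theta_i (toy)
w = np.sqrt(s/vF)
M = (1.0/(4*np.pi**2))*np.outer(w, w)*theta
lam = np.linalg.eigvalsh(M).min()     # lambda_i of the text
# rank-1 check: the only nonzero eigenvalue is -(V/4 pi^2) * sum(s/vF)
assert np.isclose(lam, -(V/(4*np.pi**2))*(s/vF).sum())
omega0 = 0.1
Tc = 1.13*omega0*np.exp(1.0/lam) if lam < 0 else 0.0   # Eq. (6)
print(lam, Tc)
```

For a momentum-dependent kernel the same code applies with `theta[a, b]` filled from the symmetry-projected $`\mathrm{\Theta }_i(𝐤_\alpha ^F,𝐤_\beta ^F)`$.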
Fig. 1 shows the eigenvalues $`\lambda _i`$ for the two leading symmetries $`i=1,3`$, using the parameter value $`J=0.3`$ and $`t`$ as the energy unit. $`\delta `$ is the doping measured from half-filling. For $`\lambda `$ we have chosen the value $`0.75`$, which is a typical value obtained in LDA calculations . The lowest eigenvalue always occurs in s-wave symmetry, i.e., for $`i=1`$. It depends only weakly on doping for $`0.3<\delta <0.8`$. In this region the order parameter also varies only slowly along the Fermi line, thus describing rather isotropic s-wave superconductivity. Below $`\delta =0.3`$, $`\lambda _1`$ rapidly decreases with decreasing doping, which is caused by a soft mode that freezes into an incommensurate bond-order wave at $`\delta _{BO}0.14`$ . As a result the s-wave order parameter becomes less and less isotropic with decreasing $`\delta `$, changing, for instance, by about a factor of 3 along the Fermi line at $`\delta =0.2`$, but it does not pass through zero. The d-wave coupling constant $`\lambda _3`$ would be zero in the uncorrelated case. As $`\delta `$ decreases from large values, correlation effects increase and the static vertex function $`\gamma (𝐤,𝐤𝐤^{})`$ develops more and more of a forward scattering peak in the transferred momentum $`𝐤𝐤^{}`$ . In the extreme case where $`\gamma `$ is proportional to $`\delta (𝐤𝐤^{})`$, $`\lambda _1`$ and $`\lambda _3`$ would become degenerate. Fig. 1 shows that this degeneracy is nearly reached at low dopings. The strong localization of the effective interaction is caused by the forward scattering peak in $`\gamma `$ and, in addition, by the incipient instability at $`\delta _{BO}`$, which is also very localized in $`𝐤`$-space.
The neglect of retardation effects, which leads to the BCS formula Eq.(6), is doubtful for several reasons. First, $`\lambda =0.75`$ no longer really corresponds to a weak-coupling case. Secondly, $`T_c`$ is no longer small compared to the frequency of the soft mode near $`\delta _{BO}`$, which means that the frequency dependence of $`\mathrm{\Theta }`$ cannot be neglected for these dopings. Thirdly, and most severely, the formation of the forward scattering peak in the static vertex function $`\gamma `$ is accompanied by a strong frequency dependence, which should be taken into account on the same footing as its momentum dependence. We thus have solved the gap equation Eq.(2) assuming only that the momenta can be put right onto the Fermi line but keeping the full frequency and momentum dependence along the Fermi line.
Fig. 2 shows the obtained results for $`T_c`$ for an s-wave (dashed line) and a d-wave (solid line) order parameter, together with the $`T_c`$ curve for the uncorrelated case (dash-dotted line). The first observation is that correlation effects always suppress s-wave superconductivity. The suppression is large for $`\delta >0.25`$ and becomes less pronounced at smaller dopings, in agreement with previous results based on the static approximation . The order parameter associated with the dashed line in Fig. 2 varies only slowly along the Fermi line, i.e., we have a usual s-wave order parameter without nodes. In a Gutzwiller description this order parameter would be exactly zero if retardation effects can be neglected and $`N=2`$ is taken: integrating out the phonons, the effective interaction becomes in the static limit proportional to the double occupancy operator, so any matrix element formed with Gutzwiller wave functions would be zero. In contrast to that, the enforcement of our constraint at large $`N`$’s, namely, that only $`N/2`$ out of the total $`N`$ states at a given site can be occupied at the same time, gives rise to another effect, which is absent in the Gutzwiller treatment: the effective interaction becomes more and more long-ranged with decreasing doping. This makes isotropic s-wave superconductivity possible even in the presence of an $`N=2`$ constraint. Fig. 2 nevertheless shows that the reduction of $`T_c`$ due to the constraint is substantial except at very small dopings, where the effective interaction becomes extremely long-ranged and the suppression by the constraint is small.
In the absence of correlations and in the static limit, the kernel $`\mathrm{\Theta }^{ep}`$ is negative for all arguments $`𝐤`$ and $`𝐤^{}`$. This implies that s-wave symmetry always has the highest $`T_c`$. According to our numerical studies the same is also true in the correlated case, which explains why the solid line in Fig. 2, describing d-wave superconductivity, always lies below the dashed line. As indicated previously, the very existence of finite values of $`T_c`$ with d-wave symmetry is in our model due to electronic correlation effects. Fig. 1 showed that the momentum dependence of $`\gamma _c`$ causes in the static limit a finite coupling strength in the d-wave channel. The solid line in Fig. 2 proves that a finite $`T_c`$ in the d-wave channel results from this even if the strong frequency dependence of $`\gamma _c`$ is also taken into account.
Fig. 3 shows results for the isotope coefficient $`\beta `$ in the case of s-wave (dashed line) and d-wave (solid line) superconductivity. We assumed hereby that $`\omega _0`$ is inversely and $`g`$ directly proportional to the square root of the mass, rendering $`\lambda `$ independent of the mass. Neglecting the frequency dependence of $`\gamma _c`$, there is only one energy scale involved in the gap equation, which yields $`\beta =1/2`$ independent of doping, $`\lambda `$, etc.
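The single-energy-scale statement can be made concrete with a short numerical check (a sketch, assuming the BCS form $`T_c=1.13\omega _0e^{1/\lambda }`$ of Eq.(9) with a mass-independent $`\lambda `$; the mass values are arbitrary):

```python
import math

# If lambda is independent of the ionic mass M and the only M-dependence
# enters through omega0 ~ M**(-1/2), then T_c = 1.13*omega0*exp(1/lambda)
# gives beta = -d ln(T_c) / d ln(M) = 1/2 exactly, for any doping or lambda.

def tc(omega0, lam=-0.75):
    return 1.13 * omega0 * math.exp(1.0 / lam)

M1, M2 = 1.0, 1.02                       # two isotope masses (arbitrary)
o1, o2 = M1 ** -0.5, M2 ** -0.5          # omega0 ~ M**(-1/2)
beta = -(math.log(tc(o2)) - math.log(tc(o1))) / (math.log(M2) - math.log(M1))
```

Any second energy scale entering $`T_c`$, such as the frequency dependence of $`\gamma _c`$, breaks this exact cancellation and shifts $`\beta `$ away from $`1/2`$.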
In contrast to that, the curves in Fig. 3 show that $`\beta `$ increases monotonically with decreasing doping and is always larger than 1/2. The deviation of $`\beta `$ from the value 1/2 is due to the frequency dependence of $`\gamma _c`$, which increases with decreasing doping and acts as a second energy scale. The non-adiabatic effects produced by electronic correlations thus always increase $`\beta `$ for our range of parameter values. This should be contrasted with the case where vertex corrections due to the electron-phonon interaction have been studied , and where $`\beta `$ may be larger or smaller than $`1/2`$. We can conclude from our results that models based on phonon-induced effective interactions in the presence of correlations do not seem to be very attractive models for high-T<sub>c</sub> superconductivity because a) the largest transition temperatures are found for s-wave symmetry for all dopings and b) the isotope coefficient is larger than $`0.5`$, especially at low dopings. Experimentally, high-$`T_c`$ cuprates exhibit d-wave superconductivity and, at least near optimal doping, small isotope effects.
## IV Calculation of $`T_c`$ and $`\beta `$ from the full effective interaction
Superconducting instabilities of the pure $`tJ`$ model have been studied in . The smallest eigenvalue of $`\mathrm{\Theta }^{tJ}`$ and, correspondingly, the largest $`T_c`$ always occur in the d-wave symmetry channel. On the other hand, the phonon-induced effective interaction described by $`\mathrm{\Theta }^{ep}`$ favors mainly s-wave superconductivity in the $`tJ`$ model. In this section we will consider the case where both effective interactions are present and study the symmetry of the resulting superconducting state, its transition temperature, and its isotope coefficient.
Fig. 4 shows the two lowest eigenvalues of the matrix Eq.(5) which occur in the s- and d-wave symmetry channel. For $`\mathrm{\Theta }`$ both terms in Eq.(3) have been included. For $`\delta <0.6`$ the lowest eigenvalue has d-wave symmetry and decreases steadily and strongly with decreasing doping diverging finally at $`\delta _{BO}`$. Comparing the solid lines in Figs. 1 and 4 of this work and Fig. 2 in one sees that the lowest eigenvalue is roughly additive in the $`tJ`$ and the phonon contributions though, of course, $`\mathrm{\Theta }^{tJ}`$ and $`\mathrm{\Theta }^{ep}`$ do not commute with each other. Note also the considerable contribution of the phonon part to $`\lambda _3`$ though the overall dimensionless coupling constant has the rather moderate value $`\lambda =0.75`$. The eigenvalue $`\lambda _1`$ (dashed line in Fig. 4) is rather constant for $`\delta >0.3`$ and decreases much less towards lower dopings than $`\lambda _3`$. There is a crossover from d-wave to s-wave symmetry in the lowest eigenvalue at $`\delta 0.6`$ suggesting that the s-wave order parameter is more stable than the d-wave order parameter at high dopings. Such a crossover is expected for general reasons: For large dopings correlation effects play a minor role. As a result the d-wave part in the phonon-induced effective interaction as well as the whole $`tJ`$ part becomes small whereas the s-wave part of the Holstein model will dominate. A comparison of the solid lines in Figs. 1 and 4 nevertheless shows that the latter is very effectively suppressed by the $`tJ`$ part at practically all considered dopings. For instance, $`|\lambda _1|`$ in Fig. 1 is for $`\delta >0.25`$ about a factor 4 larger than in Fig. 4.
The solution of the gap equation Eq.(2) is not straightforward because $`\mathrm{\Theta }`$ contains both instantaneous and retarded contributions and because the various retarded contributions have different effective cutoff energies. We developed in a method to solve Eq.(2) directly, avoiding the use of pseudopotentials. The only simplification is that we put the momenta in retarded (but not in instantaneous) contributions onto the Fermi line. This approximation has been checked numerically and found to be very well satisfied in our case. Using the fact that the instantaneous kernel consists of only a few separable contributions, Eq.(2) can, for a given Matsubara frequency, be reduced to a linear matrix problem of order 1 and 3 for d- and s-wave symmetry, respectively.
In Fig. 5 we have plotted the calculated values for $`T_c`$ for d-wave symmetry $`\mathrm{\Gamma }_3`$ as a function of the doping $`\delta `$. The dash-dotted line corresponds to $`\lambda =0`$, the solid line to $`\lambda =0.75`$. The Figure shows that phonons always increase the $`T_c`$ for d-wave superconductivity. The increase is especially large at low dopings. This is quite in contrast to calculations in the weak-coupling case where the $`T_c`$ of d-wave superconductivity is lowered by phonons. The physical picture in the latter case is that the effective interaction of the electronic part is repulsive and strongly peaked near the $`M`$-point, whereas the phononic d-wave part is attractive and diminishes the repulsion especially at the $`M`$-point, causing a lowering of $`T_c`$. In our case the dependence of the static effective interaction of the pure $`tJ`$ model on the transferred momentum can be inferred from Figs. 1a) and 1b) in Ref. . For small dopings the phonon part is restricted to small momenta, which means that it would contribute only near the $`X`$-point in Fig. 1b) of Ref. . As a result the total effective interaction would become even more attractive around the $`X`$-point and not be changed elsewhere. This clearly would enhance the d-wave part of the effective interaction, which is also in agreement with Fig. 4. From Figs. 1, 4 and 5 it is evident that the large lowering of the eigenvalue $`\lambda _3`$ due to phonons corresponds only to a rather moderate increase in $`T_c`$. This means that the use of a BCS formula with a fixed effective cutoff would grossly overestimate the increase in $`T_c`$ for d-wave superconductivity due to phonons. What is overlooked in such an approach is that we deal with at least three energy scales, namely $`t`$, $`J`$, and $`\omega _0`$. For instance, the instantaneous contribution of the $`tJ`$ part is characterized by the energy scale $`t`$, whereas the phononic one by $`\omega _0`$. Since $`t>>\omega _0`$ it is evident that the phononic part in $`\lambda _3`$ will contribute much less to $`T_c`$ than the $`tJ`$ part.
We were unable to find any finite transition temperatures $`T_{ci}`$ for $`i\ne 3`$, i.e., for symmetries different from the d-wave symmetry. Taking the accuracy of our calculation into account, this means that $`T_{ci}<0.002`$ for $`i\ne 3`$. As shown in Fig. 2, the phonon-induced effective interaction leads to a considerable $`T_c`$ for s-wave superconductivity. The $`tJ`$ part of the effective interaction, however, is very repulsive in the s-wave channel, prohibiting s-wave superconductivity. We cannot exclude that the crossover at $`\delta 0.6`$ in Fig. 4 in the lowest eigenvalue from d-wave to s-wave could stabilize an s-wave order parameter at large dopings. Another possibility is that symmetries other than d- or s-wave become stable at large dopings, in view of the approximate degeneracy of the eigenvalues of practically all symmetries in the pure $`tJ`$ model . In any case, the corresponding $`T_c`$’s would be smaller than $`0.002`$ for all dopings and thus be rather irrelevant.
The behavior of the isotope coefficient $`\beta `$ as a function of doping is another interesting test for any theory of high-$`T_c`$ superconductivity. Fig. 6 shows the calculated $`\beta `$ for $`\lambda =0.75`$, $`J=0.3`$, $`\omega _0=0.06`$ in the case of $`\mathrm{\Gamma }_3`$ symmetry. It is always positive, starting from small values at small dopings. With increasing doping it increases monotonically, approaching values near $`1/2`$ at large dopings. Our calculated curve shows the typical behavior of the experimental data, namely, a small value for $`\beta `$ at optimal doping and substantial values far away from optimal doping . The curve in Fig. 6 is the result of several competing effects. We saw in Fig. 3 that correlation effects in the phonon-induced part cause a doping dependence of $`\beta `$ which is opposite to that in Fig. 6, namely, a $`\beta `$ that increases monotonically with decreasing doping, starting at the classical value of $`1/2`$ at large dopings. Including also the $`tJ`$ part, the large values of $`\beta `$ at low dopings are nearly completely quenched, and those at larger dopings partially quenched. Part of this effect may be understood from Fig. 5. It shows that the relative importance of the phonon contribution increases steadily with increasing $`\delta `$ except at very small dopings. In a simple picture one thus may argue that at small dopings $`T_c`$ is mainly due to the $`tJ`$ part. This part decays faster with doping than the phononic part, so that $`T_c`$ at larger dopings is mainly due to the phonons. A more realistic interpretation of Fig. 6 should, however, also take into account the complex competition between the three energy scales $`t`$, $`J`$, and $`\omega _0`$, pair-breaking effects, etc.
In order to get more insight into the behavior of $`\beta `$ we have calculated $`T_c`$ and $`\beta `$ in the $`\mathrm{\Gamma }_3`$ symmetry channel for the fixed doping value $`\delta =0.20`$ as a function of the phonon frequency $`\omega _0`$. Fig. 7 shows the result for $`J=0.3`$ and $`\lambda =0.75`$. The solid line in Fig. 7 represents the value for $`T_c`$, multiplied by 10. Since we keep $`\lambda `$ fixed, the coupling constant $`g`$ approaches zero in the adiabatic limit $`\omega _00`$, which means that $`T_c`$ in this limit is solely due to the $`tJ`$ part. With increasing phonon frequency $`T_c`$ increases monotonically, passing over from an initial sublinear to a linear behavior. The solid line shows that phonons always increase $`T_c`$ for d-wave superconductivity. The absolute values in the Figure are quite remarkable: though the employed coupling constant $`\lambda `$ is rather moderate, and though the phonon contribution to $`T_c`$ in the d-wave channel is entirely due to correlation effects, $`T_c`$ increases by nearly a factor of 3 for $`\omega _0=0.2`$, corresponding roughly to the largest phonon frequencies in the cuprates. The dashed line in Fig. 7 represents the dependence of $`\beta `$ on $`\omega _0`$. For small $`\omega _0`$, $`\beta `$ is practically zero. $`\beta `$ increases monotonically with increasing $`\omega _0`$ and tends to the classical value $`1/2`$ at large phonon frequencies. The dashed line in Fig. 7 may be understood in physical terms roughly as follows. In our model the static effective coupling is independent of the phonon mass, i.e., of $`\omega _0`$. $`\beta `$ is thus determined by an effective energy cutoff. For small $`\omega _0`$’s this cutoff is mainly given by electronic parameters in the $`tJ`$ part, leading to a small value for $`\beta `$. At large $`\omega _0`$’s the phonon contribution to $`T_c`$, which is due solely to electronic correlations, mainly determines the total effective energy cutoff, causing a value of $`\beta `$ near $`1/2`$.
In conclusion, we have treated the electron-electron and the electron-phonon interactions in a generalized $`tJ`$ model by means of a systematic $`1/N`$ expansion and have solved the resulting linearized equation for the superconducting gap by reliable numerical methods. We found that electronic correlations affect phonon-induced superconductivity in several ways: instabilities towards d-wave or other symmetries different from s-wave become possible. The corresponding transition temperatures are, however, always smaller than that of s-wave superconductivity. Moreover, the $`T_c`$ for s-wave superconductivity is heavily suppressed by correlations at larger dopings and somewhat suppressed at small dopings. The isotope coefficient $`\beta `$ is $`1/2`$ at large dopings but increases with increasing correlations, i.e., decreasing dopings, both in the s- and d-wave channel. Including also the $`tJ`$ part in the effective interaction, we found that within our numerical accuracy only d-wave superconductivity is stable for dopings $`0.15<\delta <0.8`$ and that the phononic part always increases $`T_c`$ for the above dopings. The $`tJ`$ part in the effective interaction changes the dependence of $`\beta `$ on doping: $`\beta `$ now assumes small values at low dopings and increases monotonically with doping towards the classical value $`1/2`$ at large dopings. Keeping $`\lambda `$ fixed and varying the phonon mass, i.e., $`\omega _0`$, we find that $`T_c`$ and $`\beta `$ increase monotonically with $`\omega _0`$ and that $`\beta `$ varies between zero at small and roughly $`1/2`$ at large phonon frequencies within the interval $`0<\omega _0<0.2`$.
Our findings differ in many respects from the corresponding results based on weak-coupling calculations . In these calculations the $`T_c`$ for d-wave superconductivity, induced either by a $`U`$ or a spin-fluctuation term, is always suppressed by phonons. For instance, in the fluctuation-exchange approximation $`T_c`$ drops to zero if the phonon-mediated on-site attraction $`U_p`$ becomes comparable to the Hubbard term $`U`$. We treated the opposite case, where $`U>>U_p`$, and found a different behavior, namely, that phonons enhance the $`T_c`$ for d-wave superconductivity. Our different result is mainly caused by corrections to the bare electron-phonon vertex due to the strong electron-electron interaction. This vertex develops for not too large dopings a forward scattering peak, so that only phonons with small momenta can couple to the electrons. As a result the electron-phonon and the $`tJ`$ contributions to the gap decouple in $`𝐤`$ space. Our calculations show that the two contributions no longer cancel each other to a large extent but, on the contrary, enhance each other. For a momentum-independent bare electron-phonon coupling, weak-coupling calculations yield a small, often negative value for the isotope coefficient $`\beta `$ which is rather independent of doping. In our case the strong momentum dependence of the effective electron-phonon coupling, induced by electronic correlations, causes a strong dependence of $`\beta `$ on doping: being always positive, $`\beta `$ is small at optimal doping and assumes values of roughly $`1/2`$ at large dopings, in agreement with the basic features of the experimental data in the cuprates.
The presented results are accurate at large $`N`$’s because we have taken only the leading terms of a $`1/N`$ expansion. Phonon renormalizations and vertex corrections due to the electron-phonon interaction are of $`O(1/N)`$ and thus have been omitted. On the other hand it is known that in the physical case $`N=2`$ anharmonic effects in the atomic potentials and the formation of polarons occur if $`\lambda `$ is about 1 or larger . This suggests that keeping only the leading order of the $`1/N`$ expansion cannot describe adequately the case $`N=2`$ at large coupling strengths or if the Migdal ratio $`\omega _0/t`$ is no longer small. Correspondingly, we have shown numerical results for rather moderate values for $`\lambda `$ and small Migdal ratios.
Acknowledgement: The authors are grateful to Secyt and the International Bureau of the Federal Ministry for Education, Science, Research and Technology of Germany for financial support (Scientific-technological cooperation between Argentina and Germany, Project No. ARG AD 3P). The first and second author thank the MPI-FKF, Stuttgart, Germany, and the Departamento de Física, Fac. Cs. Ex. e Ingeniería, U.N. Rosario, Argentina, respectively, for hospitality. The authors also thank P. Horsch for a critical reading of the manuscript.
# Geometrical model for a particle on a rough inclined surface
## I Introduction
Several experimental studies have recently been conducted on the problem of a single ball falling under gravity on a surface of controlled roughness. These works have revealed interesting new aspects of granular dynamics that are not yet fully understood. Three distinct dynamical regimes have been identified as the tilting angle increases. For small inclinations there is (i) a decelerated regime where the ball always stops, then comes (ii) an intermediate regime where the ball reaches a steady state with constant mean velocity, and for larger inclinations the ball enters (iii) a jumping regime. Computer simulations have confirmed these results, particularly those concerning regimes (i) and (ii). A theoretical model has also been proposed in which steady-state solutions (but no detailed dynamics) can be obtained analytically. More recently, a one-dimensional map has been introduced to study the jumping regime. This map in its simplest version is linear, and to obtain non-linear behavior one has to vary the properties of the rough surface spatially , in which case the model becomes inaccessible analytically.
In this Paper we present a model for a single particle moving under the action of gravity on a rough surface of specified shape. Within this setting we will give a detailed analytical description of all possible dynamical regimes. Although the model we study is simplified, its predictions are in good qualitative agreement with the experimental findings.
Roughly speaking, our conclusions are as follows. There is (i) a sharp transition (as the surface inclination increases) from a regime of bounded velocity to one of accelerated motion. Within the region of bounded velocity various dynamical regimes are possible. First there is (ii) a range of inclinations for which the dynamics always has a unique attractor. For higher inclinations two other phases exist: (iii) a region where we have co-existing attractors for the dynamics and (iv) a region where instabilities give rise to chaotic behavior. For a fixed (sufficiently large) inclination a transition to the chaotic region will take place as the nature of the collisions between the particle and the surface becomes highly inelastic. Although our results are derived here in the context of a simple collision rule, it can be shown that they remain valid for a wide class of restitution laws.
The paper is organized as follows. In Sec. II we describe the model and study in detail its dynamical properties. The main results of this Section are then summarized in the phase diagram shown in Fig. 4. In Sec. III we carry out a comparison between the model predictions and the experimental findings. In particular, we argue that the jumping regime seen in the experiments might correspond to a true chaotic motion, as predicted by the model. Finally, in Sec. IV we collect our main conclusions and present further discussions.
## II the model
In our model, which is shown in Fig. 1, the rough surface is considered to have a simple staircase shape whose steps have height $`a`$ and length $`b`$. For convenience, we choose a system of coordinates in such a way that the step plateaus are aligned with the $`x`$ axis and the direction of the acceleration of gravity g makes an angle $`\varphi `$ with the $`y`$ axis. A grain is then imagined to be launched on the top of the ‘staircase’ with a given initial velocity. In what follows, we will be concerned with the problem of a point particle falling down this ‘staircase’ and will thus not take into account any effect due to the finite size of the grain. Upon reaching the end of a step plateau, the particle will undergo a ballistic flight until it collides with another plateau located a certain number $`n`$ of steps below the departure step (e.g., $`n=3`$ in Fig. 1). Accordingly, we will refer to the integer $`n`$ as the jump number associated with this flight.
We will assume, for simplicity, that the momentum loss due to collisions is determined by two coefficients of restitution $`e_t`$ and $`e_n`$, corresponding to the tangential and normal directions, respectively. More precisely, if $`𝐯=(v_x,v_y)`$ denote, respectively, the components of the particle velocity parallel and perpendicular to the surface before a collision, then we will take the velocity $`𝐯^{}=(v_x^{},v_y^{})`$ after the collision to be given by
$`v_x^{}`$ $`=`$ $`e_tv_x,`$ (2)
$`v_y^{}`$ $`=`$ $`e_nv_y,`$ (3)
where $`0e_t<1`$ and $`0e_n<1`$.
In the present paper we will for simplicity discuss only the case $`e_n=0`$; the advantage being that the model can then be described by a one-dimensional map. When $`e_n>0`$ the dynamics is governed by a three-dimensional map, the analysis of which is more complicated and will be left for forthcoming publications .
We now derive the equations governing the dynamics of the model presented above. Let us denote by $`E`$ the kinetic energy of the particle at the moment of departure for a given flight. We write $`E=\frac{1}{2}mV^2`$, where $`m`$ is the particle mass and $`V`$ is the launching velocity at the start of the flight (see Fig. 1). After this flight the particle will first collide with a step below, then slide along this step (recall $`e_n=0`$), and finally take off again on another flight with initial kinetic energy $`E^{}`$. We suppose that the main energy loss is due to collisions and so we neglect the energy dissipation as the particle slides along a step, where it then moves with a constant acceleration $`g\mathrm{sin}\varphi `$. Using simple arguments of energy conservation together with the collision conditions (1) and (2), one can write $`E^{}`$ in terms of $`E`$. The result is
$$E^{}=\frac{1}{2}me_t^2v_x^2+mg\mathrm{sin}\varphi (nbx),$$
(4)
where $`n`$ is the corresponding jump number for the flight and $`x`$ is the $`x`$-coordinate of the landing point. It takes a simple algebra to show that at the landing point $`(x,y)`$ we have the following identities:
$`x={\displaystyle \frac{g\mathrm{sin}\varphi }{2}}T^2+\sqrt{{\displaystyle \frac{2E}{m}}}T,`$ (6)
$`y={\displaystyle \frac{g\mathrm{cos}\varphi }{2}}T^2=na,`$ (7)
$`v_x=g\mathrm{sin}\varphi T+\sqrt{{\displaystyle \frac{2E}{m}}},`$ (8)
$`v_y=g\mathrm{cos}\varphi T,`$ (9)
where $`T`$ is the flight time.
It is convenient to introduce a dimensionless energy-like variable:
$$=\frac{E}{mga\mathrm{cos}\varphi }.$$
(10)
Eliminating $`T`$ from (4) and inserting the result into (4), we obtain that the dynamics of the model in terms of the variable $``$ is given by the following map:
$$^{}=f(,n)=n\left[e_t^2\left(\sqrt{/n}+t\right)^2+t\left(\tau t2\sqrt{/n}\right)\right].$$
(11)
where we have for conciseness introduced the notation
$`t=\mathrm{tan}\varphi ,`$ (12)
$`\tau =b/a.`$ (13)
The parameter $`\tau `$ above can be viewed as a measure of the surface roughness, with $`\tau ^1=0`$ corresponding to a perfectly smooth surface. As for the inclination parameter $`t`$, we need to consider only the interval $`0<t<\tau `$ for which non-trivial motion occurs. (Clearly, for $`t<0`$ the particle will always come to a rest, whereas for $`t>\tau `$ the particle undergoes a free fall without ever colliding again with the ramp.)
The flight jump number $`n`$ appearing in Eq. (11) is determined from the energy $``$ according to the following condition: $`n`$ is equal to the smallest integer such that $`nbx0`$ or, alternatively,
$$n(\tau t)2\sqrt{n}0.$$
(14)
This means that $``$ falls within the interval $`I_n`$:
$$I_n(t)(\frac{1}{4}(n1)(\tau t)^2,\frac{1}{4}n(\tau t)^2].$$
(15)
Thus the function $`f(,n)`$, as defined by Eqs. (11) and (15), exhibits jump discontinuities at the energy values $`=\frac{1}{4}n\left(\tau t\right)^2`$, but each of its branches is smooth. This is illustrated in Fig. 2, where we graph the function (11) for $`e_t=0.7`$, $`\tau =3.7`$, and several values of the inclination $`t`$.
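For readers who wish to experiment with the map, Eqs. (11), (14), and (15) translate directly into code. The sketch below is an illustration, not the authors' program; parameter names follow the text ($`t=\mathrm{tan}\varphi `$, $`\tau =b/a`$, $`e_t`$ the tangential restitution coefficient), and `E` denotes the dimensionless energy.

```python
import math

# The jump number n is the smallest integer with
# n*(tau - t) - 2*sqrt(n*E) >= 0, Eq. (14); equivalently E lies in the
# interval I_n of Eq. (15), so n = ceil(4E/(tau - t)**2), with n >= 1.
# f(E, n) is the map of Eq. (11).

def jump_number(E, t, tau):
    return max(1, math.ceil(4.0 * E / (tau - t) ** 2))

def next_energy(E, t, tau, e_t):
    n = jump_number(E, t, tau)
    z = math.sqrt(E / n)                  # the sqrt(E/n) of Eq. (11)
    return n * (e_t ** 2 * (z + t) ** 2 + t * (tau - t - 2.0 * z))

# Iterate from an arbitrary initial energy; for this small inclination
# the energy settles onto a stable fixed point of the map.
t, tau, e_t = 0.2, 3.7, 0.7
E = 1.0
for _ in range(200):
    E = next_energy(E, t, tau, e_t)
```

With these parameters the iteration converges to the $`n=1`$ fixed point whose closed form is derived later in this Section.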
For later use, we note here that the average velocity $`\overline{V}`$ between two consecutive flights is given by
$$\overline{V}=\frac{nL}{T+(\sqrt{2E^{}/m}e_tv_x)/g\mathrm{sin}\varphi },$$
(16)
where $`L=\sqrt{a^2+b^2}`$ and the second term in the denominator corresponds to the time during which the particle moves on the ramp (see Fig. 1). If we now introduce a dimensionless mean velocity
$$\overline{𝒱}=\frac{\overline{V}}{\sqrt{ag\mathrm{cos}\varphi }},$$
(17)
then Eq. (16) becomes
$$\overline{𝒱}=\frac{t\sqrt{n(1+\tau ^2)/2}}{(1e_t)t+\sqrt{^{}/n}e_t\sqrt{/n}}.$$
(18)
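Eq. (18) is equally easy to evaluate. The sketch below (an illustration, with `E` and `Ep` denoting the dimensionless take-off energies before and after the flight) also checks a useful identity: when both energies equal a fixed-point value of the form $`nz_0^2`$ introduced below, the denominator reduces to $`(1e_t)(t+z_0)`$.

```python
import math

# Dimensionless mean velocity of Eq. (18) between two consecutive
# flights, given the take-off energy E before the flight, the take-off
# energy Ep after it, and the jump number n of the flight.

def mean_velocity(E, Ep, n, t, tau, e_t):
    num = t * math.sqrt(n * (1.0 + tau ** 2) / 2.0)
    den = (1.0 - e_t) * t + math.sqrt(Ep / n) - e_t * math.sqrt(E / n)
    return num / den

# At a fixed point E = Ep = n*z0**2 the denominator collapses to
# (1 - e_t)*(t + z0), giving a constant mean velocity.
t, tau, e_t = 0.2, 3.7, 0.7
z0 = -t + math.sqrt(tau * t / (1.0 - e_t ** 2))
n = 1
v = mean_velocity(n * z0 ** 2, n * z0 ** 2, n, t, tau, e_t)
```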
In order to study the dynamical properties of the map above, we must first investigate the existence of fixed points. If we denote by $`_n`$ a fixed point with a jump number $`n`$, then $`_n`$ will be the solution to the equation
$$_n=f(_n,n).$$
(19)
In view of the homogeneity of the function $`f(,n)`$ \[see Eq. (11)\] we write
$$_n=n[z_0(t)]^2,$$
(20)
where the quantity $`z_0(t)`$ no longer bears any dependence on $`n`$. Using Eqs. (11) and (20), Eq. (19) becomes
$$(z_0+t)^2=e_t^2(t+z_0)^2+\tau t,$$
(21)
whose positive solution is
$$z_0(t)=t+\sqrt{\frac{\tau t}{1e_t^2}}.$$
(22)
Now a fixed point $`_n`$, as given in Eqs. (20) and (22), will exist if and only if $`_nI_n(t)`$, where the interval $`I_n(t)`$ is defined in (15). Thus, as $`t`$ increases, a fixed point with jump number $`n`$ will be born when $`_n`$ equals the left endpoint of $`I_n`$. Comparing Eqs. (15), (20) and (22), we see that this happens at an inclination $`t_n`$ such that
$$z_0(t_n)=t_n+\sqrt{\frac{\tau t_n}{1e_t^2}}=\frac{1}{2}\sqrt{1\frac{1}{n}}\left(\tau t_n\right).$$
(23)
This equation is quadratic in $`\sqrt{t_n}`$ and can thus be easily solved. However, we shall not bother to give the result here and will simply mention a few important facts that follow from Eq. (23). First, we note that $`t_1=0`$ so that a fixed point with jump number $`n=1`$ is always born at $`t=0`$. Then, as $`t`$ increases, fixed points with successively higher $`n`$ will appear in an increasing sequence of inclinations $`\{t_n\}_{n=1}^{\mathrm{}}`$. Finally, we have that for $`t>t_{\mathrm{}}`$, where $`t_{\mathrm{}}=lim_n\mathrm{}t_n`$, all fixed points cease to exist. Setting $`n=\mathrm{}`$ in Eq. (23) we obtain for the limit point $`t_{\mathrm{}}`$:
$$t_{\mathrm{}}=\tau \frac{1e_t}{1+e_t}.$$
(24)
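A quick numerical confirmation of this fixed-point structure may be useful (a sketch with arbitrary parameter values, not a substitute for the analysis): $`_n=nz_0^2`$ of Eqs. (20) and (22) solves the fixed-point equation for every jump number $`n`$ (membership in $`I_n`$, Eq. (15), must still be checked separately), and at $`t=t_{\mathrm{}}`$ of Eq. (24) the $`n\mathrm{}`$ limit of the existence condition Eq. (23) holds.

```python
import math

# Checks of Eqs. (20)-(24): E_n = n*z0(t)**2 solves E = f(E, n) for every
# jump number n, and z0(t_inf) = (tau - t_inf)/2, i.e., Eq. (23) with
# n -> infinity is satisfied exactly at the threshold of Eq. (24).

def f(E, n, t, tau, e_t):                 # Eq. (11), with n prescribed
    z = math.sqrt(E / n)
    return n * (e_t ** 2 * (z + t) ** 2 + t * (tau - t - 2.0 * z))

def z0(t, tau, e_t):                      # Eq. (22)
    return -t + math.sqrt(tau * t / (1.0 - e_t ** 2))

tau, e_t = 3.7, 0.7
t = 0.5
for n in range(1, 6):
    E_n = n * z0(t, tau, e_t) ** 2        # Eq. (20)
    assert abs(f(E_n, n, t, tau, e_t) - E_n) < 1e-12

t_inf = tau * (1.0 - e_t) / (1.0 + e_t)   # Eq. (24)
assert abs(z0(t_inf, tau, e_t) - 0.5 * (tau - t_inf)) < 1e-12
```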
The appearance of this sequence of fixed points can perhaps be best visualized by referring to Fig. 2, where we plot the function $`f(,n)`$ at increasing values of $`t`$, with $`e_t`$ and $`\tau `$ kept fixed. For small $`t`$ (lower-most curve in Fig. 2) there is only one intersection with the $`45^{}`$ line, corresponding to the fixed point with $`n=1`$. As $`t`$ increases fixed points with successively higher $`n`$ appear (second curve from the bottom). At $`t=t_{\mathrm{}}`$ there are infinitely many such fixed points (second curve from the top) and after this all of them cease to exist (uppermost curve).
One can also show that for $`t>t_{\infty }`$ we always have $`f(\mathcal{E},n)>\mathcal{E}`$, whereas for $`0<t<t_{\infty }`$ there exists an energy $`\mathcal{E}^{*}`$ such that $`f(\mathcal{E},n)<\mathcal{E}`$ for $`\mathcal{E}>\mathcal{E}^{*}`$ (see, e.g., Fig. 2). We thus conclude that for $`t>t_{\infty }`$ the particle velocity will become unbounded for any initial condition, whereas for $`0<t<t_{\infty }`$ the velocity remains always bounded. In other words, at the critical inclination $`t=t_{\infty }`$ there is a sharp transition (independent of initial conditions) from a regime of bounded velocity to accelerated motion. In the region of bounded velocity, several dynamical regimes are possible, depending on the stability of the fixed points, as discussed below.
The stability of a fixed point $`\mathcal{E}_n`$ is determined by the parameter $`\lambda =f^{\prime }(\mathcal{E}_n,n)`$, where the prime denotes the derivative with respect to $`\mathcal{E}`$, so that if $`|\lambda |<1`$ ($`|\lambda |>1`$) the fixed point is stable (unstable). Using Eqs. (11), (20) and (22), we obtain for the derivative $`\lambda `$ at the fixed point:
$$\lambda (t)=1-\frac{1-e_t^2}{1-\sqrt{(1-e_t^2)t/\tau }}.$$
(25)
Notice that $`\lambda `$ does not depend on $`n`$, thus implying that all existing fixed points $`\mathcal{E}_n`$ (for given values of the model parameters) have the same stability properties. Moreover, since $`\lambda `$ is always smaller than unity, we see that instability can occur only if $`\lambda (t)<-1`$. Let us then denote by $`t_{\mathrm{inst}}`$ the inclination such that $`\lambda (t_{\mathrm{inst}})=-1`$. From Eq. (25) we obtain that
$$t_{\mathrm{inst}}=\tau \frac{(1+e_t^2)^2}{4(1-e_t^2)}.$$
(26)
Thus the fixed points are stable for $`t<t_{\mathrm{inst}}`$ and unstable for $`t>t_{\mathrm{inst}}`$.
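A quick numerical check of this stability analysis can be sketched as follows (illustrative parameters, assuming the minus signs restored in Eqs. (25) and (26)); it confirms that $`\lambda `$ crosses $`-1`$ exactly at $`t_{\mathrm{inst}}`$, and that $`t_{\mathrm{inst}}`$ meets $`t_{\infty }`$ at $`e_t=\sqrt{2}-1`$:

```python
import math

def multiplier(t, tau, e_t):
    """lambda(t) of Eq. (25): the common stability multiplier of the fixed points."""
    return 1.0 - (1.0 - e_t**2) / (1.0 - math.sqrt((1.0 - e_t**2) * t / tau))

def t_instability(tau, e_t):
    """Eq. (26): inclination at which lambda(t) reaches -1."""
    return tau * (1.0 + e_t**2)**2 / (4.0 * (1.0 - e_t**2))

def t_limit(tau, e_t):
    """Eq. (24): inclination beyond which no fixed point exists."""
    return tau * (1.0 - e_t) / (1.0 + e_t)

tau, e_t = 1.0, 0.2          # strongly inelastic case, so t_inst < t_infinity
t_i = t_instability(tau, e_t)
```

Below $`t_{\mathrm{inst}}`$ the multiplier satisfies $`|\lambda |<1`$ (stable), above it $`|\lambda |>1`$ (unstable), in agreement with the statement above.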
If the fixed points are stable, the dynamics of the map will in general be attracted to one of the existing fixed points. For example, in the region of parameters such that $`0<t<t_2<t_{\mathrm{inst}}`$ the particle will almost always reach a periodic motion in which it falls one step at a time, since in this case only the fixed point with $`n=1`$ exists and is stable. On the other hand, for $`t_2<t<t_{\mathrm{inst}}`$ there are co-existing stable fixed points, in which case the final state (i.e., the fixed point to which the dynamics is attracted) will depend on the initial condition. Once the system has reached a given fixed point $`\mathcal{E}_n`$ the particle will accordingly be moving with a constant mean velocity $`\overline{𝒱}_n`$ whose value can be readily obtained by inserting Eqs. (20) and (22) into Eq. (18):
$$\overline{𝒱}_n=\left[\frac{n(1+\tau ^2)t}{2t_{\infty }}\right]^{1/2}.$$
(27)
When the fixed points are unstable ($`t_{\mathrm{inst}}<t<t_{\infty }`$), the particle motion becomes very irregular and no stationary (periodic) regime is ever reached. This is illustrated in Fig. 3, where we plot the jump number $`n`$ as a function of time (iteration step) for two orbits in the region where the fixed points are unstable. In this figure we clearly see that the jump number fluctuates erratically around a mean value. We have computed the Lyapunov exponent for several values of parameters in the region of unstable fixed points and have found it to be positive for all cases studied, thus indicating that the motion is indeed chaotic in this region.
The different dynamical regimes displayed by the model above can be conveniently summarized in terms of a “phase diagram” in the parameter space $`(e_t,t/\tau )`$, as shown in Fig. 4. In this figure we plot the curves corresponding to $`t_{\infty }`$ (solid line) and $`t_{\mathrm{inst}}`$ (dashed line) given by Eqs. (24) and (26), respectively. Also plotted is the curve representing the inclination $`t_2`$ (dot-dashed line) at which the fixed point with $`n=2`$ first appears. Thus in terms of the existence/stability of the fixed points the model displays the following four regions: (i) for $`0<t<\mathrm{min}(t_2,t_{\mathrm{inst}})`$ there is a unique stable fixed point, namely, that with $`n=1`$; (ii) for $`t_2<t<\mathrm{min}(t_{\mathrm{inst}},t_{\infty })`$ there are multiple stable fixed points (at least those with $`n=1`$ and $`n=2`$); (iii) for $`t_{\mathrm{inst}}<t<t_{\infty }`$ all existing fixed points are unstable and chaotic motion is observed; (iv) for $`t>t_{\infty }`$ no fixed point exists and the motion becomes accelerated.
Another interesting feature in Fig. 4 is the fact that the chaotic regime appears when the collisions are highly inelastic (i.e., small $`e_t`$). In particular, for $`e_t>\sqrt{2}-1`$ (at which point $`t_{\mathrm{inst}}`$ equals $`t_{\infty }`$) the fixed points remain stable over their entire domain of existence. (The results shown in Fig. 4 are qualitatively different from the behavior seen in the model studied in Ref. , where chaotic motion appears as the restitution coefficient increases.)
## III Comparison with experiments
In this section we wish to compare our model with recent experimental studies of a single ball moving under gravity on a rough inclined surface. In these experiments, first performed by Jan et al. and later expanded by Ristow et al. , a rough surface was constructed by gluing steel spheres of radius $`r`$ on an L-shaped flume. Another steel sphere of radius $`R`$ was then launched with a small initial velocity and its subsequent motion analyzed. As the surface inclination increases, the following three regimes are observed: for small inclinations the bead always stops (regime A), then comes a range of inclinations for which the ball reaches a steady state with constant mean velocity (regime B), and beyond this point the ball starts to jump (regime C). In Fig. 5 we show data taken from Ref. for the ball mean velocity $`\overline{V}`$ as a function of $`\mathrm{sin}\theta `$, where $`\theta `$ is the inclination angle with respect to the horizontal direction. As discussed in Ref. , the change in trend observed in the data as $`\theta `$ increases (for a given value of $`R/r`$) marks the beginning of the jumping regime.
The regime B seen in the experiments corresponds in our model to a stable fixed point with $`n=1`$, for in this case the particle reaches a periodic motion where it falls one step at a time (as in the experiments). In order to compare our model more closely with the experiments let us first express the mean velocity $`\overline{V}_1`$ (at the fixed point $`n=1`$) in terms of the angle $`\theta `$, where $`\theta =\varphi +\pi /2-\alpha `$ (see Fig. 1). Setting $`n=1`$ in Eq. (27), returning to dimensionful units via Eq. (17), and expressing the final result in terms of $`\theta `$, we obtain
$$\overline{V}_1=\left[\frac{Lg(1+e_t)}{2(1-e_t)}\right]^{1/2}\sqrt{\mathrm{sin}\theta -\tau ^{-1}\mathrm{cos}\theta }.$$
(28)
(We remark parenthetically that a similar expression can be obtained heuristically if one introduces an effective sliding friction in addition to inelastic collisions; see Refs. . Our formula, however, follows from a pure collision model.)
We have fitted the expression (28) to the experimental data shown in Fig. 5 — the corresponding results being displayed as solid curves in this figure. In our fitting procedure, we took $`L=2r=1`$ cm, $`g=980`$ cm/s<sup>2</sup>, and best-fitted the parameters $`\tau `$ and $`e_t`$ for each data set considering only points in regime B. As we see in Fig. 5, the model prediction for the dependence of $`\overline{V}`$ on $`\theta `$ is in good agreement with the experimental data (in regime B).
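In outline, the fit amounts to minimizing the squared deviation of Eq. (28) from the regime-B points over $`(\tau ,e_t)`$. A minimal sketch of this procedure follows; the data below are synthetic stand-ins generated from Eq. (28) itself (not the measured points of Fig. 5), and the coarse grid search is a stand-in for a proper least-squares routine:

```python
import math

L, g = 1.0, 980.0   # cm and cm/s^2, as in the fit described above

def v_mean(sin_theta, tau, e_t):
    """Eq. (28): steady-state mean velocity at the n = 1 fixed point."""
    cos_theta = math.sqrt(1.0 - sin_theta**2)
    amp = math.sqrt(L * g * (1.0 + e_t) / (2.0 * (1.0 - e_t)))
    return amp * math.sqrt(sin_theta - cos_theta / tau)

# synthetic regime-B "data", generated here with tau = 2.5, e_t = 0.6
sin_th = [0.45 + 0.025 * i for i in range(8)]
v_obs = [v_mean(s, 2.5, 0.6) for s in sin_th]

def sse(tau, e_t):
    """Sum of squared residuals of Eq. (28) against the data."""
    return sum((v_mean(s, tau, e_t) - v)**2 for s, v in zip(sin_th, v_obs))

candidates = [(tau, e) for tau in (2.0, 2.25, 2.5, 2.75) for e in (0.4, 0.5, 0.6, 0.7)]
best = min(candidates, key=lambda p: sse(*p))
```

Because the synthetic data are noise-free, the search recovers the generating parameters exactly; with real data one would replace the grid by a standard nonlinear least-squares routine.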
The jumping regime observed in the experiments, on the other hand, would correspond in our model to the region of unstable fixed points, since in this case the particle jumps erratically, never reaching a steady state (see Fig. 3). This analogy might then provide a possible explanation for the change in trend observed in the experimental data for large inclinations. To see this, consider the region of small $`e_t`$ in the phase diagram shown in Fig. 4. As the inclination $`t`$ increases (for a given $`e_t`$) the system goes from a region of stable periodic motion (with $`n=1`$) to a regime of chaotic jumps, in close resemblance to the experimental transition from steady-state to the jumping regime.
To probe this analogy further, we illustrate in Fig. 6 the behavior predicted by the model for the mean velocity $`\overline{V}`$ as a function of $`\mathrm{sin}\theta `$ in the region of small $`e_t`$. In this figure, the solid curve corresponds to the expression (28) for $`\overline{V}_1`$, up to the point where the fixed point goes unstable, and the crosses are computed values of $`\overline{V}`$ in the ensuing chaotic regime. Comparing Fig. 6 with Fig. 5, we see that the change in behavior predicted by the model at the onset of instability is in qualitative agreement with what is observed in the experiments (for small values of $`R/r`$) as the ball enters the jumping regime. Of course, more detailed experiments are necessary to verify whether chaotic motion does indeed take place in the jumping regime.
## IV Conclusions
We have studied a simple geometrical model for the gravity-driven motion of a single particle on a rough inclined line. In our model the rough line was chosen to have a regular staircase shape and a simple collision law was adopted. With these simplifications the dynamics is described by a one-dimensional map that is quite amenable to analytical treatment. Summarizing our findings, we have seen that our model displays the following four dynamical regimes:
1. for $`0<t<\mathrm{min}(t_2,t_{\mathrm{inst}})`$ there is a unique stable fixed point.
2. for $`t_2<t<\mathrm{min}(t_{\mathrm{inst}},t_{\mathrm{}})`$ the system has multiple stable fixed points.
3. for $`t_{\mathrm{inst}}<t<t_{\infty }`$ the fixed points are unstable and the dynamics is chaotic.
4. for $`t>t_{\infty }`$ no fixed point exists and the motion becomes accelerated.
Here the parameter $`t`$ measures the surface inclination and the quantities $`t_2`$, $`t_{\mathrm{inst}}`$, and $`t_{\infty }`$ separating the different regimes are given in terms of the other two model parameters, namely, the restitution coefficient $`e_t`$ and the roughness parameter $`\tau `$. These regimes are indicated in the phase diagram shown in Fig. 4. Furthermore, it can be shown that the above conclusions, which were derived in the context of a simple collision rule, remain valid for a wide class of tangential restitution laws.
Despite its simplicity, our model does provide a theoretical framework within which the generic behavior seen in experiments on a ball moving on a rough surface can be qualitatively understood. For example, the model successfully predicts the existence of several dynamical regimes that are also observed in the experiments. In particular, the predicted functional dependence of the mean velocity with the inclination angle $`\theta `$ (in the steady-state regime) is in good agreement with the experiments. Moreover, the model provides a possible explanation for the change in trend seen in the experimental data as the ball enters the jumping regime. We have suggested that this jumping regime might correspond to a chaotic motion, as happens in the model. Clearly, more experimental studies are required to investigate this interesting possibility.
This work was supported in part by FINEP and CNPq.
# Plasmon excitation by charged particles interacting with metal surfaces
## Abstract
Recent experiments (R. A. Baragiola and C. A. Dukes, Phys. Rev. Lett. 76, 2547 (1996)) with slow ions incident at grazing angle on metal surfaces have shown that bulk plasmons are excited under conditions where the ions do not penetrate the surface, contrary to the usual statement that probes exterior to an electron gas do not couple to the bulk plasmon. We here use the quantized hydrodynamic model of the bounded electron gas to derive an explicit expression for the probability of bulk plasmon excitation by external charged particles moving parallel to the surface. Our results indicate that for each $`𝐪`$ (the surface plasmon wave vector) there exists a continuum of bulk plasmon excitations, which we also observe within the semi-classical infinite-barrier (SCIB) model of the surface.
It is well known that charged particles interacting with solid surfaces can create electronic collective excitations in the solid. These are bulk and surface plasmons. In the absence of electron-gas dispersion, the scalar electric potential due to bulk plasmons vanishes outside the surface; hence, in this case probes exterior to the solid can only generate surface excitations. That electron-gas dispersion allows external probes to interact with bulk plasmons was discussed by Barton and Eguiluz, and more recently by Nazarov et al. Nevertheless, the fact that within a non-local description of screening bulk plasmons do give rise to a potential outside the solid has been ignored over the years, even when electron-gas dispersion has been included; also, current non-local theories of plasmon excitation by external charges have not shown evidence for bulk plasmon excitation outside the solid. Recently, Baragiola and Dukes have studied the emission spectra produced by slow ions that were incident at grazing angle; their data indicate that the bulk plasmon is importantly involved in the emission process, though the projectiles are not expected to have penetrated into the solid. Bulk plasmon excitation in electron emission spectra produced by slow multiply charged ions has also been investigated, with projectiles that may enter the solid.
In this letter we derive, within the quantized hydrodynamic model of the bounded electron gas, an explicit expression for the probability of bulk plasmon excitation by external charged particles moving parallel to a jellium surface. Our model, which assumes a sharp electron density profile at the surface, neatly displays the role of bulk plasmon excitations in the interaction of charged particles moving near a metal surface. We also demonstrate that our results for the total energy-loss probability agree with standard calculations derived either by solving the linearized Bloch hydrodynamic equations or within the semi-classical infinite-barrier (SCIB) model of the surface with the hydrodynamic approximation for the bulk dielectric function. Though it has been generally believed that when charged particles move outside the solid the energy loss predicted in these models is fully described by the excitation of surface plasmons, we demonstrate that for each $`𝐪`$ (the surface plasmon wave vector) both a discrete surface plasmon excitation and a continuum of bulk plasmon excitations contribute to the total energy-loss probability, in agreement with the prediction of our quantized hydrodynamic scheme.
Take an inhomogeneous electron system capable of self-oscillations about a ground state described by density-functional theory (DFT). In the hydrodynamic limit the system is characterized by the electron density $`n(𝐫,t)`$ and a velocity field $`𝐮(𝐫,t)`$. The total energy of the system can then be expressed as (we use atomic units throughout, i.e., $`e^2=\mathrm{\hbar }=m_e=1`$)
$`H=`$ $`G\left[n(𝐫,t)\right]+{\displaystyle \int d𝐫\,n(𝐫,t)\left[\frac{1}{2}|\mathbf{\nabla }\psi |^2-V_0-V_1\right]}`$ (3)
$`+{\displaystyle \frac{1}{2}}{\displaystyle \int d𝐫\,d𝐫^{}\frac{n(𝐫,t)n(𝐫^{},t)}{|𝐫-𝐫^{}|}},`$
where irrotational flow has been assumed, i.e., $`𝐮(𝐫,t)=\mathbf{\nabla }\psi `$, and retardation effects have been neglected. $`G\left[n(𝐫,t)\right]`$ represents the exchange, correlation and internal kinetic energies of the electron system. $`V_0`$ is the electrostatic potential due to the neutralizing ionic background of density $`n_0`$, and $`V_1`$ represents the external perturbation. From Eq. (1) the basic hydrodynamic equations can be derived, following Bloch’s approach, and they can be linearized in the deviation $`n-n_0`$ from the equilibrium value to find the existence of self-sustaining normal modes of oscillation.
We consider a charged particle moving with velocity $`𝐯`$ outside of a metallic surface, along a trajectory that is parallel to the surface, with a classical charge distribution given at $`𝐫=(𝐫_{\parallel },z)`$ by $`\rho _{\mathrm{ext}}(𝐫,t)=Z_1\delta (𝐫_{\parallel }-𝐯t)\delta (z-z_0)`$. We represent the ionic background by a jellium model (the jellium occupying the half-space $`z<0`$), and assume a sharp electron density profile at the surface. We also neglect exchange-correlation contributions to $`G[n]`$, and approximate it by the Thomas-Fermi functional. In this approximation, for each value of $`𝐪`$ (the wave vector parallel to the surface) there exist both bulk and surface normal modes of oscillation with frequencies given by the following dispersion relations:
$$(\omega _{q,p}^B)^2=\omega _\mathrm{p}^2+\beta ^2(q^2+p^2)$$
(4)
and
$$(\omega _q^S)^2=\frac{1}{2}\left[\omega _\mathrm{p}^2+\beta ^2q^2+\beta q(2\omega _\mathrm{p}^2+\beta ^2q^2)^{1/2}\right],$$
(5)
respectively, where $`\omega _\mathrm{p}=(4\pi n_0)^{1/2}`$ is the so-called plasma frequency and $`\beta `$ represents the speed of propagation of hydrodynamic disturbances in the electron system. We choose $`\beta =\sqrt{3/5}q_F`$, $`q_F`$ being the Fermi momentum.
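Both dispersion relations are straightforward to evaluate; a small sketch for the aluminum parameters used later in the text ($`r_s=2.07`$), which also checks the familiar long-wavelength limit $`\omega _q^S\to \omega _\mathrm{p}/\sqrt{2}`$ and the fact that the surface mode lies below the bottom of the bulk band:

```python
import math

r_s = 2.07                                      # Al density parameter
q_F = (9.0 * math.pi / 4.0)**(1.0 / 3.0) / r_s  # Fermi momentum, a.u.
beta = math.sqrt(3.0 / 5.0) * q_F               # hydrodynamic speed, a.u.
w_p = math.sqrt(3.0 / r_s**3)                   # sqrt(4*pi*n0), a.u. (~15.8 eV)

def omega_B(q, p):
    """Bulk-plasmon dispersion, Eq. (2)."""
    return math.sqrt(w_p**2 + beta**2 * (q**2 + p**2))

def omega_S(q):
    """Surface-plasmon dispersion, Eq. (3)."""
    return math.sqrt(0.5 * (w_p**2 + beta**2 * q**2
                            + beta * q * math.sqrt(2.0 * w_p**2 + beta**2 * q**2)))
```

With these values $`\omega _\mathrm{p}`$ indeed comes out at about 15.8 eV, the aluminum value quoted below, and $`\omega _q^S<\omega _{q,0}^B`$ for every $`q`$.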
Now we follow Ref. to quantize, after linearization, the hamiltonian of Eq. (1) on the basis of the normal modes corresponding to Eqs. (2) and (3), which after quantization we shall refer to as bulk and surface plasmons, respectively. We find
$$H=H_G+H_0^B+H_0^S+H_1^B+H_1^S,$$
(6)
where $`H_G`$ represents the Thomas-Fermi ground state of the static unperturbed electron system. $`H_0^B`$ and $`H_0^S`$ are free bulk and surface plasmon hamiltonians:
$$H_0^B=\frac{1}{\mathrm{\Omega }}\sum _{𝐪,p>0}\omega _{q,p}^B\left[1/2+a_{𝐪,p}^{\dagger }(t)a_{𝐪,p}(t)\right]$$
(7)
and
$$H_0^S=\frac{1}{A}\sum _𝐪\omega _q^S\left[1/2+b_𝐪^{\dagger }(t)b_𝐪(t)\right],$$
(8)
where $`\mathrm{\Omega }`$ and $`A`$ represent the normalization volume and the normalization area of the surface, respectively, and where $`a_{𝐪,p}(t)`$ and $`b_𝐪(t)`$ are Bose-Einstein operators that annihilate bulk and surface plasmons with wave vectors $`(𝐪,p)`$ and $`𝐪`$, respectively. $`H_1^{B/S}`$ are contributions to the hamiltonian coming from the coupling between the external particle and bulk/surface plasmon fields:
$$H_1^{B/S}=\int d𝐫\,\rho _{\mathrm{ext}}(𝐫,t)\varphi ^{B/S}(𝐫,t),$$
(9)
$`\varphi ^{B/S}(𝐫,t)`$ representing operators corresponding to the scalar electric potential due to bulk/surface plasmons. Outside the metal ($`z>0`$),
$$\varphi ^B(𝐫,t)=\frac{1}{\mathrm{\Omega }}\sum _{𝐪,p>0}f_{q,p}^B(z)\mathrm{e}^{\mathrm{i}𝐪𝐫_{\parallel }}\left[a_{𝐪,p}^{\dagger }(t)+a_{𝐪,p}(t)\right]$$
(10)
and
$$\varphi ^S(𝐫,t)=\frac{1}{A}\sum _𝐪f_q^S(z)\mathrm{e}^{\mathrm{i}𝐪𝐫_{\parallel }}\left[b_𝐪^{\dagger }(t)+b_𝐪(t)\right],$$
(11)
where $`f_{q,p}^B(z)`$ and $`f_q^S(z)`$ are bulk and surface coupling functions, respectively:
$$f_{q,p}^B(z)=\frac{\sqrt{2\pi /\omega _{q,p}^B}\omega _\mathrm{p}p\mathrm{e}^{-qz}}{\left[p^4+p^2(q^2+\omega _\mathrm{p}^2/\beta ^2)+\omega _\mathrm{p}^4/(4\beta ^4)\right]^{1/2}}$$
(12)
and
$$f_𝐪^S(z)=\frac{\sqrt{\pi \gamma _q/\omega _q^S}\omega _\mathrm{p}}{\left[q(q+2\gamma _q)\right]^{1/2}}\mathrm{e}^{-qz},$$
(13)
and where $`\gamma _q`$ represents the so-called inverse decay length of surface plasmon charge fluctuations:
$$\gamma _q=\frac{1}{2\beta }\left[\beta q+\sqrt{2\omega _\mathrm{p}^2+\beta ^2q^2}\right].$$
(14)
We derive the potential induced by the presence of the external perturbing charge as the expectation value of the total scalar potential operator:
$$V^{ind}(𝐫,t)=\frac{<\mathrm{\Psi }_0|\varphi _H^B(𝐫,t)+\varphi _H^S(𝐫,t)|\mathrm{\Psi }_0>}{<\mathrm{\Psi }_0|\mathrm{\Psi }_0>},$$
(15)
where $`|\mathrm{\Psi }_0>`$ is the Heisenberg ground state of the interacting system and where $`\varphi _H^B(𝐫,t)`$ and $`\varphi _H^S(𝐫,t)`$ are the operators of Eqs. (8) and (9) in the Heisenberg picture. Our results reproduce previous calculations for the image potential \[defined as half the induced potential at the position of the charged particle that creates it\] of a static ($`v=0`$) external charged particle, which in the case of a non-dispersive electron gas ($`\beta =0`$) coincides with the classical image potential $`V_{\mathrm{im}}(z)=-(4z)^{-1}`$.
The energy loss per unit path length of a moving charged particle can be obtained as the retarding force that the polarization charge distribution in the electron gas exerts on the projectile itself:
$$\frac{dE}{dx}=\frac{1}{v}\int d𝐫\,\rho _{\mathrm{ext}}(𝐫,t)\mathbf{\nabla }V^{ind}(𝐫,t)\cdot 𝐯.$$
(16)
By introducing here the induced potential, which we evaluate from Eq. (13) up to first order in $`Z_1`$, we find
$$\frac{dE}{dx}=\frac{1}{v}\int _0^{q_c}𝑑q\int _0^{qv}𝑑\omega \omega \left[P_{q,\omega }^B+P_{q,\omega }^S\right],$$
(17)
where $`P_{q,\omega }^{B/S}`$ represent probabilities per unit time, unit wave number and unit frequency for the excitation of bulk/surface plasmons with wave number $`q`$ and frequency $`\omega `$:
$$P_{q,\omega }^B=Z_1^2\frac{q}{2\pi ^2}\int _0^{\infty }𝑑p\frac{\omega _{q,p}^B\left[f_{q,p}^B(z_0)\right]^2}{\sqrt{q^2v^2-\omega ^2}}\delta (\omega -\omega _{q,p}^B)$$
(18)
and
$$P_{q,\omega }^S=Z_1^2\frac{q\omega _q^S\left[f_q^S(z_0)\right]^2}{\pi \sqrt{q^2v^2-\omega ^2}}\delta (\omega -\omega _q^S).$$
(19)
For comparison, we note that by solving the linearized Bloch hydrodynamic equations the total probability per unit time, unit wave number and unit frequency for the external particle to transfer momentum $`q`$ and energy $`\omega `$ to the electron gas is found as:
$$P_{q,\omega }=2Z_1^2\omega _\mathrm{p}^2\frac{\mathrm{Im}\left\{1/\left[\omega _\mathrm{p}^2-2\beta ^2\mathrm{\Lambda }_q(\mathrm{\Lambda }_q+q)\right]\right\}}{\pi \sqrt{q^2v^2-\omega ^2}}\mathrm{e}^{-2qz_0},$$
(20)
where
$$\mathrm{\Lambda }_q=\frac{1}{\beta }\sqrt{\omega _\mathrm{p}^2+\beta ^2q^2-\omega (\omega +\mathrm{i}\delta )},$$
(21)
and where $`\delta `$ is a positive infinitesimal. The same result is obtained within either the specular reflection (SR) or the SCIB model of the surface, as long as the hydrodynamic approximation for the bulk dielectric function is considered.
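The spectral structure discussed below (a sharp surface peak below the bulk edge $`\omega _{q,0}^B`$ and a continuum above it) can be reproduced directly from this expression. A minimal sketch in atomic units, with the overall constants dropped, the Fig. 1 parameter values, and the minus signs restored as above:

```python
import cmath, math

w_p, beta = 0.5815, 0.718   # Al plasma frequency and sqrt(3/5)*q_F, a.u.
q, v, z0 = 0.4, 2.0, 1.0    # parameters of Fig. 1
delta = 1e-6                # small damping

def loss(w):
    """Energy-loss probability of Eq. (18), up to an overall constant."""
    Lam = cmath.sqrt(w_p**2 + (beta * q)**2 - w * (w + 1j * delta)) / beta
    D = w_p**2 - 2.0 * beta**2 * Lam * (Lam + q)
    return abs((1.0 / D).imag) * math.exp(-2.0 * q * z0) \
        / (math.pi * math.sqrt((q * v)**2 - w**2))

w_edge = math.sqrt(w_p**2 + (beta * q)**2)   # bulk edge omega_{q,0}^B
```

Just below $`\omega _{q,0}^B`$ the loss is only of order $`\delta `$ (the tail of the broadened surface pole), while just above the edge the bulk-plasmon continuum opens up and the loss grows by several orders of magnitude.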
Within the various semiclassical approaches leading to Eq. (18) the role played by bulk and surface plasmons goes unnoticed. Furthermore, Eq. (18) shows no evidence of losses at the bulk plasmon frequency, and it has been generally believed that the energy loss predicted by this equation originates entirely in the excitation of surface plasmons. That this is not the case is clearly shown in Fig. 1. This figure shows $`P_{q,\omega }`$, as computed from Eq. (18) with $`\omega _\mathrm{p}=15.8\mathrm{eV}`$ corresponding to the bulk plasma frequency of aluminum metal and with a finite damping parameter $`\delta `$ accounting for the finite lifetime of plasmon fields. The parallel momentum transfer, the velocity and the distance of the particle trajectory above the surface have been taken to be $`q=0.4\mathrm{a}.\mathrm{u}.`$, $`v=2\mathrm{a}.\mathrm{u}.`$ and $`z_0=1.0\mathrm{a}.\mathrm{u}.`$, respectively, and different values of $`\delta `$ have been considered. One sees that loss occurs at the surface plasmon energy $`\omega _q^S`$ given by Eq. (3), while a continuum of bulk plasmon excitations occurs at energies $`\omega _{q,p}^B`$ (see Eq. (2)), all of which lie above $`\omega _{q,0}^B=(\omega _\mathrm{p}^2+\beta ^2q^2)^{1/2}`$, as predicted by Eqs. (16) and (17). In the limit as $`\delta \to 0^+`$, both bulk and surface contributions to the total energy-loss probability of Eq. (18) exactly coincide with the predictions of Eqs. (16) and (17), thus demonstrating the full equivalence between our quantized hydrodynamic scheme and the more standard semiclassical approaches. Also, either introduction of both Eqs. (16) and (17) into Eq. (15) or replacement of the integrand in Eq. (15) by the energy-loss probability of Eq. (18) with $`\delta \to 0`$ results in exactly the same total energy loss.
As the damping parameter increases, the surface plasmon excitation broadens to energies above $`\omega _{q,0}^B`$, and for $`\delta \gtrsim \omega _\mathrm{p}/10`$ bulk plasmon contributions to the energy loss go unnoticed.
Figure 2 exhibits, by a solid line, the energy loss per unit path length versus the distance $`z_0`$ from the surface, as obtained from Eq. (15). In this case a proton ($`Z_1=1`$) moves with velocity $`v=2\mathrm{a}.\mathrm{u}.`$ parallel to the surface of a bounded electron gas of density equivalent to that of aluminum ($`r_s=2.07`$). Separate contributions from the excitation of bulk and surface plasmons are represented by dashed and dotted lines, as obtained with the use of Eqs. (16) and (17), respectively. One sees that for $`z_0<2\mathrm{a}.\mathrm{u}.`$ the contribution to the energy loss from the bulk channel is important, while for larger values of $`z_0`$ the surface channel alone gives a sufficiently accurate description of the total energy loss.
Figure 3 shows, by a solid line, results for the energy loss of Eq. (15), as a function of the velocity of the projectile, with $`Z_1=1`$, $`r_s=2.07`$ and $`z_0=1\mathrm{a}.\mathrm{u}.`$. Dashed and dotted lines represent separate contributions to the total energy loss from the excitation of bulk and surface plasmons, respectively. We note that for a projectile moving at $`z_0=1\mathrm{a}.\mathrm{u}.`$ the contribution to the energy loss from the bulk channel is observable for all velocities, and becomes relatively more important at velocities around the plasmon threshold velocity, when the projectile first has enough energy to excite plasmons. For comparison, the result one obtains either from Eq. (17) or Eq. (18) in the case of a non-dispersive electron gas ($`\beta =0`$), which coincides with the classical formula of Echenique and Pendry, is represented by a dashed-dotted line.
In conclusion, we have used the quantized hydrodynamic model of the bounded electron gas to demonstrate that bulk plasmons undergo real excitations, even in the case of charged particles that do not penetrate into the solid. We have derived explicit expressions for the probability of both bulk and surface plasmon excitation by external charged particles moving parallel to the surface, which neatly display the role that bulk plasmon excitation plays in the interaction of charged particles moving near a metal surface. The full equivalence between our quantized hydrodynamic scheme and the more standard semiclassical approaches has been demonstrated. It has been generally believed that energy loss predicted by these approaches originates entirely in the excitation of surface plasmons. However, we have shown that for each value of the wave vector parallel to the surface both a discrete surface plasmon excitation and a continuum of bulk plasmon excitations contribute to the total energy-loss probability. We have also presented explicit calculations of the energy loss per unit path length of protons moving outside of a metal along a trajectory that is parallel to the surface, and our results indicate that the contribution from the bulk channel is important for all projectile velocities, as long as the distance from the surface is smaller than a few atomic units.
We acknowledge partial support by the Basque Unibertsitate eta Ikerketa Saila and the Spanish Ministerio de Educación y Cultura.
# Percolation-type description of the metal-insulator transition in two dimensions
## Abstract
A simple non-interacting-electron model, combining local quantum tunneling and global classical percolation (due to a finite dephasing time at low temperatures), is introduced to describe a metal-insulator transition in two dimensions. It is shown that many features of the experiments, such as the exponential dependence of the resistance on temperature on the metallic side, the linear dependence of the exponent on density, the $`e^2/h`$ scale of the critical resistance, the quenching of the metallic phase by a parallel magnetic field and the non-monotonic dependence of the critical density on a perpendicular magnetic field, can be naturally explained by the model.
PACS numbers: 71.30.+h, 73.40.Qv,73.50.Jt
The experimental observation of a metal-insulator transition in two dimensions has been a subject of extensive investigation, since it is in disagreement with the predictions of single-parameter scaling theory for noninteracting electrons. Several theories, based on the treatment of disorder and electron-electron interactions by Finkelstein, have been put forward. Other approaches considered spin-orbit scattering, percolation of electron-hole liquid, or scattering by impurities. To date there is no acceptable microscopic theory that describes the observed data quantitatively.
Here we present a simple non-interacting electron model, combining local quantum tunneling and global classical percolation, to explain several features of the experimental observations. The main observations we want to understand are the following:
* As the system is cooled down the resistance of samples with density higher than some critical density extrapolates to a finite value at zero temperature, while that of samples with lower density diverges.
* The resistance of the sample with the critical density does not depend on temperature (at least for a limited range of low temperatures).
* The conductance of the critical-density sample is of the order of $`e^2/h`$.
* On the metallic side the functional dependence of the resistance is of the form $`R(T)=R_0+R_1\mathrm{exp}(-A/T)`$. The parameter $`A`$ varies linearly with the density and vanishes at the transition.
* In perpendicular magnetic fields this transition is continuously connected with the quantum Hall–Insulator transition . The critical density varies nonmonotonically with magnetic field, with a minimum around $`\nu =1`$.
* Parallel magnetic fields destroy the metallic phase, at least for densities near the transition .
Before introducing the model let us mention three other experimental observations: (a) Strong disorder is crucial to see the transition. In GaAs the transition is seen only in samples with low mobility (even with the same density). In fact, Ribeiro et al. have recently observed a zero-field metal-insulator transition in a high-density n-type GaAs sample with strong enough disorder (which was introduced by a matrix of randomly distributed quantum dots). (b) There are additional experimental indications that the transition is not driven by interactions – Yaish and Sivan studied a system of two parallel gases, one of electrons and one of holes. The observed metal-insulator transition in the hole gas depended only slightly on the electron density, even though one expects that increasing electron density will screen the interactions between holes and suppress the metallic phase in the hole gas. On the contrary, increasing electron density led to increasing conductance in the hole gas, indicating that its main role is to screen the impurity potentials in the hole gas. (c) There is growing experimental evidence that even at the lowest available temperatures, the dephasing length, $`L_\varphi `$, is finite.
Based on all these observations we now suggest the following scenario to explain the experimental observations – the potential fluctuations due to the disorder define density puddles of size $`L_\varphi `$ or larger in which the electron wavefunction totally dephases. (Density separation into puddles in gated GaAs was indeed observed experimentally by Eytan et al. , using near-field spectroscopy.) Locally, between these puddles, transport is via quantum tunneling through saddle points, or quantum point-contacts (QPCs). Since between such tunneling events dephasing takes place, the conductance of the system will be determined by adding classically these quantum resistors. A related model was introduced by Shimshoni et al. to describe successfully transport in the quantum Hall (QH) regime. In fact, it was later concluded that the observation of a finite, quantized Hall resistance in the Hall insulator phase can only occur when the dephasing length is smaller than the size of these puddles – otherwise the Hall resistance diverges . In addition, this model also accounted for the observed current-voltage duality around the transition , a duality which was also observed in the zero-field transition . The percolative nature of the system in the QH regime was indeed verified experimentally . Finite dephasing length can also explain the observed non-critical behavior of the resistance near the QH-insulator transition . We will return to the QH regime below.
We characterize each saddle point by its critical energy $`ϵ_c`$, such that the transmission through it is given by $`T(ϵ)=\mathrm{\Theta }(ϵ-ϵ_c)`$. Thus the conductance through each QPC is given by the Landauer formula,
$`G(\mu ,T)`$ $`=`$ $`{\displaystyle \frac{2e^2}{h}}{\displaystyle \int 𝑑ϵ\left(-\frac{\partial f_{FD}(ϵ)}{\partial ϵ}\right)T(ϵ)}`$ (1)
$`=`$ $`{\displaystyle \frac{2e^2}{h}}{\displaystyle \frac{1}{1+\mathrm{exp}[(ϵ_c-\mu )/kT]}},`$ (2)
where $`\mu `$ is the chemical potential, and $`f_{FD}`$ is the Fermi-Dirac distribution function.
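As a quick numerical illustration of Eq. (2), the Fermi-smeared step conductance of a single QPC can be written down directly. The sketch below is ours (Python), with the conductance quoted in units of $`2e^2/h`$; the SI prefactor shown is the CODATA value and is not taken from the text.

```python
import math

G0_SIEMENS = 7.748091729e-5   # conductance quantum 2e^2/h (CODATA), in siemens

def qpc_conductance(eps_c, mu, kT, g0=1.0):
    """Eq. (2): conductance of a saddle point with step transmission
    T(eps) = Theta(eps - eps_c), smeared by the Fermi-Dirac distribution.
    Returned in units of g0 (set g0=G0_SIEMENS for SI units)."""
    if kT == 0.0:
        # zero-temperature limit: the QPC is either fully open or closed
        return g0 if mu > eps_c else 0.0
    return g0 / (1.0 + math.exp((eps_c - mu) / kT))
```

At the critical QPC ($`ϵ_c=\mu `$) the conductance is exactly $`g_0/2`$ at any finite temperature, which is one way to see why $`e^2/h`$ sets the conductance scale at the transition.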
The system is now composed of classical resistors, the resistance of each given by (2), with random QPC energies. In the numerical data presented below, we solved a 20×20 system of QPCs (which, for simplicity, has the topology of a square lattice), averaged over 1000 realizations of disorder, with the QPC energies drawn from a box distribution of width $`W`$. At zero temperature each conductor has either zero conductance or a conductance equal to $`2e^2/h`$, and one finds the usual second-order percolation transition. The critical conductance exponent $`t`$ is known in two dimensions and is equal to $`1.3`$ . In Fig. 1 we fit the experimental data of and of to the expected critical dependence. Clearly, the agreement with the classical percolation prediction is excellent. Moreover, the fact that the conductance scale is identical in the two experiments, while their density scales are very different, clearly demonstrates that $`e^2/h`$ is the only conductance scale in the system.
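The network calculation just described can be sketched as follows. This is our minimal version (Python/NumPy): a small lattice solved by dense linear algebra, with bus-bar contacts on the left and right columns as an illustrative boundary condition; the paper's 20×20 ensemble-averaged computation is analogous but not identical.

```python
import numpy as np

def effective_conductance(N, W, mu, kT, rng):
    """Two-terminal conductance (in units of 2e^2/h) of an N x N square
    lattice of QPCs, each with the Fermi-smeared conductance of Eq. (2)
    and a threshold energy drawn from a box distribution of width W.
    kT > 0 keeps every bond conductance finite and the system solvable."""
    def g(eps_c):
        return 1.0 / (1.0 + np.exp((eps_c - mu) / kT))

    idx = lambda i, j: i * N + j
    n = N * N
    L = np.zeros((n, n))                    # Laplacian of bond conductances
    bonds = []
    for i in range(N):
        for j in range(N):
            if j + 1 < N:
                bonds.append((idx(i, j), idx(i, j + 1)))
            if i + 1 < N:
                bonds.append((idx(i, j), idx(i + 1, j)))
    for a, b in bonds:
        gab = g(rng.uniform(-W / 2, W / 2))
        L[a, a] += gab; L[b, b] += gab
        L[a, b] -= gab; L[b, a] -= gab

    left = [idx(i, 0) for i in range(N)]
    right = [idx(i, N - 1) for i in range(N)]
    V = np.zeros(n)
    V[left] = 1.0                           # unit voltage across the sample
    free = [k for k in range(n) if k not in left + right]
    # Kirchhoff's laws: solve L_ff V_f = -L_fb V_b for interior potentials.
    V[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, left)] @ V[left])
    return -(L @ V)[right].sum()            # current into drain = G, since dV = 1
```

Averaging such conductances over disorder realizations and sweeping $`\mu `$ reproduces the percolation-style transition discussed in the text.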
Fig. 1. Comparison of the lowest temperature data of (two sets of data, triangles and squares, 330mK, density given by the lower axis) and of (circles, 57mK, density given by the upper axis) to the prediction of percolation theory (solid line). Inset: Logarithmic derivative of the data which gives a line whose slope is the critical exponent. The percolation prediction $`(t=1.3)`$ is given by the solid line. For comparison a $`t=1`$ slope is also shown (broken line).
As temperature increases, the Fermi-Dirac distribution is broadened. Consequently the conductance of the insulating QPCs ($`ϵ_c>\mu `$) increases exponentially, while that of the transparent ones ($`ϵ_c<\mu `$) decreases exponentially. Thus we expect rather dramatic effects as a function of temperature. This is indeed depicted in Fig. 2. As temperature is lowered, the resistances of systems that differ only slightly at high temperature diverge exponentially from one another. The resistance of systems on the metallic side ($`n>n_c`$) saturates at zero temperature, while that of insulating samples diverges. Note that there is an upward turn even on the metallic side of the transition. We will come back to this point below. The high-temperature resistance of the critical-density network is naturally around $`h/e^2`$, the only resistance scale in this model.
Fig. 2. Temperature dependence of the resistance for systems of different densities. Below the critical line (bold curve) all curves saturate at zero temperature, while above it the resistance diverges. The resistance of the more metallic samples decreases exponentially (inset).
For systems of exponentially distributed resistors the resistance of the whole circuit is determined by the critical resistor – the worst resistor in the minimal percolating network . The resistance of the network is then the inverse of the Fermi-Dirac function, $`\{1+\mathrm{exp}[(ϵ_c-\mu )/kT]\}h/e^2`$, where $`ϵ_c`$ is its threshold energy. Clearly the overall resistance will then be of the form observed experimentally, $`R=R_0+R_1\mathrm{exp}(A/T)`$, with $`A`$ varying linearly with the density and vanishing at the transition. In our case, the resistors on the metallic side have a bounded distribution, so there will be other resistors, in parallel and in series, that contribute to the overall resistance of the circuit. This does not change the functional form, but renormalizes the parameters $`R_0`$, $`R_1`$ and $`A`$. Such a functional dependence on the metallic side is indeed found numerically and displayed in the inset of Fig. 2. In fact, close to the transition on the metallic side, as temperature increases, some QPCs that previously had zero conductance start to conduct and add to the overall conductance. Since the critical percolation cluster is very ramified (in fact of fractal dimension), there are many such resistors in parallel to the main conducting network, and the effect of improving these resistors overcomes the fact that resistors on the conducting network itself become worse. This leads to a downward turn of the resistance with increasing temperature even on the metallic side, the details of which may depend sensitively on the geometry. Only deeper in the metallic regime, as seen in Fig. 2, does the overall resistance increase with increasing temperature. This also suggests that the density at which the resistance is approximately temperature independent is not the true critical point, but rather lies deeper on the metallic side. This is clearly seen in Fig. 3, where one can see a point where all the low-temperature curves nearly cross, well inside the metallic regime. The above discussion suggests that one should be cautious in associating the critical point with the “temperature-independent” point, as is routinely done in the interpretation of experiments.
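The activated form quoted above follows directly from inverting the Fermi function of the critical QPC; a minimal check (our sketch, using the von Klitzing value for $`h/e^2`$ and the spin convention written in the text):

```python
import math

H_OVER_E2 = 25812.807   # von Klitzing constant h/e^2, in ohms

def critical_resistor(eps_c, mu, kT):
    """Resistance of the critical QPC: the inverse Fermi-Dirac factor
    {1 + exp[(eps_c - mu)/kT]} h/e^2, as written in the text."""
    return H_OVER_E2 * (1.0 + math.exp((eps_c - mu) / kT))

# On the insulating side (eps_c > mu) this is exactly R0 + R1*exp(A/T)
# with R0 = R1 = h/e^2 and an activation A = (eps_c - mu)/k that vanishes
# linearly as the chemical potential reaches the critical threshold.
```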
Fig. 3. Conductance vs. density (Fermi energy) for several temperatures. There is a density, well above the true critical point, where the curves seem to cross each other.
We turn next to the effects of a magnetic field. The effect of a parallel field is straightforward to understand, as there have been several experimental and theoretical studies of transport through a QPC in parallel fields . These studies demonstrated that the threshold density at which the QPC opens up increases parabolically with the in-plane magnetic field. This effect was attributed to coupling of the in-plane motion to the strong confinement in the vertical direction, leading to an increase in the energy levels. Since this increase occurs for all QPCs in the sample, such a field will strongly inhibit the metallic behavior in our case: QPCs which were conducting at zero field will have exponentially small conductance with increasing field. Once the fraction of conducting QPCs falls below the percolation threshold, the system becomes an insulator, in agreement with experimental observations.
The situation in perpendicular magnetic fields is more interesting, as QH states are formed. Transport through a single QPC in perpendicular field and the crossover between the zero-field limit and the QH limit have been studied in detail . As expected, one finds that the critical energy oscillates with magnetic field due to the depopulation of Landau levels. In our case, we expect the oscillations to be smoothed out by the disorder and by the averaging over many QPCs. Thus only the strongest oscillation, near $`\nu =1`$, may survive, leading to a single dip in the critical density vs. magnetic field plot, as was observed experimentally. This is indeed in agreement with our numerical calculation. We studied the energy levels of one puddle of electrons, which we modeled by a circular disk, in the presence of disorder . In Fig. 4 we plot the “critical density” – the number of electrons that need to occupy the puddle so that the energy of the highest-energy electron will be enough to traverse the QPC , equivalent in the bulk system to the critical density – as a function of magnetic field. Indeed we see a dip near $`\nu =1`$ with all other oscillations smoothed out by the disorder. This curve bears a strong resemblance to the experimental data (inset). In addition, we expect that as the magnetic field is lowered below the $`\nu =1`$ minimum, more than one channel will traverse some QPCs, leading to an increase in the critical conductance, as indeed reported experimentally.
Fig. 4. The critical density – the number of electrons in the puddle such that the topmost energy allows transport through the point contact – as a function of magnetic field, in the presence of finite disorder. The continuous curve is an averaged fit through the (necessarily integer) data points. Inset: the corresponding experimental data .
All the above results and discussion demonstrate that many of the experimental observations can be explained in the context of the simple model introduced here. Nevertheless there are clearly other physical effects that need to be included in order to have a full picture of the experiments. In particular, electron-electron interactions are expected to play an important role at these low densities. As we can regard the metallic puddles described above as quantum dots, one can use the abundant information about the role of interactions in such structures to gain additional understanding of the characteristics of the puddles and the phase separation. Other effects, including the energy dependence of the transmission coefficient, the possibility of more than one channel through the QPCs, the temperature dependence of the dephasing length, and the role of interband scattering and temperature-dependent impurities, may also be important for understanding quantitative aspects of the data. Nevertheless, the fact that several important aspects of the experimental data can be explained in the context of a simple model is quite encouraging. We expect the model presented here to apply mostly to the GaAs samples.
The picture described above can be checked experimentally. The experiments verifying the percolative structure in the QH regime can be extended to the zero-field systems. (Experimental evidence for phase separation at zero field was reported in .) Even more direct evidence of the percolative nature of the system would come from local probes . In fact, an enhancement in the fluctuations of the local chemical potential has already been observed as the system enters the “insulating” phase, in a similar fashion to the enhancement of chemical-potential fluctuations upon closing the barriers forming a quantum dot . Indeed, a ‘smoking gun’ verification of the picture presented here would be periodic oscillations of the local chemical potential on the insulating side, due to depopulation of the Landau levels, as was observed in quantum dots .
I thank many of my colleagues for fruitful discussions: A. Auerbach, Y. Gefen, Y. Hanein, D. Shahar, E. Shimshoni, U. Sivan, A. Stern, A. Yacoby and Y. Yaish. In particular, I would like to thank Y. Hanein & D. Shahar and U. Sivan & Y. Yaish for making their data available to me. This work was supported by THE ISRAEL SCIENCE FOUNDATION - Centers of Excellence Program, and by the German Ministry of Science.
# Twofold-broken rational tori and the uniform semiclassical approximation
## 1 Introduction
Periodic orbits provide the skeleton of the dynamics of classical Hamiltonian systems. Generic dynamical systems display a mixed phase space, consisting of islands of stability residing in chaotic seas. The periodic orbits are neither grouped in families, as in integrable systems, nor are they all unstable and well isolated, as in chaotic systems. Rather, one also finds stable periodic orbits surrounded by islands of regular behaviour. These islands of stability locally resemble an almost integrable system, with a KAM structure of invariant tori and chains of periodic orbits, remnants of the rational tori of a supposedly contiguous integrable situation .
In a semiclassical treatment of the corresponding quantum system, clusters of proximate orbits display a collective behaviour. The bifurcations at the centre of the island and their semiclassical treatment have been addressed in a number of recent works, both for the generic variants as well as for classically non-generic, but semiclassically still relevant cases .
The class of near-integrable systems has been addressed in , where a uniform semiclassical approximation for the most frequently encountered broken rational tori (consisting of one stable and one unstable periodic orbit) was presented. We will call these tori the ‘simple’ tori. Interestingly, the semiclassical approximation works reasonably well even beyond the point where the stable orbit becomes unstable . The same configuration of a stable and an unstable orbit is also typical close (in parameter space) to most types of period-$`n`$-tupling bifurcations at the centre of a stability island. For bifurcation number $`n\ge 5`$, for the island-chain scenario with $`n=4`$, and also (as a consequence of a more complicated bifurcation scenario) for $`n=3`$, two satellite orbits are expelled from the centre. At a certain distance from the bifurcation the satellite orbits can be treated as isolated from the central orbit; however, a collective semiclassical treatment of the two satellites is often still necessary, and this can be achieved by using the abovementioned approximation for the simple torus.
Although encountered less frequently, there are situations where a broken torus consists not of two, but of a higher number of periodic orbits. In islands of stability, tori of this type appear especially at larger distances from a bifurcation. In this work we study the twofold-broken rational torus, consisting of two stable and two unstable periodic orbits. It is described by a normal form which is obtained from the normal form of the simple torus by including the second harmonic in an angular coordinate. From the normal form we construct a uniform approximation that can be used to improve semiclassical trace formulae. Indeed, the relevance of this configuration is much enhanced in the semiclassical context: Here one must also consider ‘ghost’ orbits with complex coordinates , and even a simple broken torus can be affected by nearby ghosts, making a treatment as a ‘pre-formed’ twofold-broken torus advisable. This is illustrated in a model system, a periodically driven angular momentum vector (the kicked top), where we find a configuration of four period-three orbits which can be regarded as a twofold-broken rational torus. A reduction of the error of the trace formula by a factor of about $`2`$–$`3`$ is found even when two of these satellites are ghosts. An even higher accuracy gain is attained for so-called ‘inverse-$`\hbar `$ spectroscopy’.
## 2 Normal forms and uniform approximations
We restrict the analysis to two-dimensional area-preserving maps. (The results are also applicable to autonomous Hamiltonian systems with two degrees of freedom.) The quantum version of the map is generated by the unitary Floquet operator $`F`$ which acts on the vectors of a Hilbert space, mapping the space onto itself. The semiclassical trace formula relates the traces $`\text{tr}F^n`$ to the classical periodic orbits. Isolated orbits of primitive period $`n_0`$ give an additive contribution
$$C=A\mathrm{exp}\left[\mathrm{i}\frac{S}{\hbar }-\mathrm{i}\frac{\pi }{2}\mu \right]$$
(1)
with amplitude
$$A=n_0|2-\mathrm{tr}M|^{-1/2}$$
(2)
to all traces $`\text{tr}F^n`$ with $`n=n_0r`$ and integer repetition number $`r`$. Besides the primitive period, three classical quantities of the ($`r`$-th return of the) periodic orbit enter, the action $`S`$, the trace of the linearized $`n`$-step map $`M`$, and the Maslov index $`\mu `$.
The expression (1) is derived by a stationary-phase approximation and becomes inaccurate when orbits lie close together. Then a collective treatment of the region $`\mathrm{\Omega }`$ inhabited by the proximate orbits becomes necessary. This can be achieved by introducing normal forms for a phase function $`\mathrm{\Phi }`$ and an amplitude function $`\mathrm{\Psi }`$ into the more general expression
$$C_\mathrm{\Omega }=\frac{1}{2\pi \hbar }\int _\mathrm{\Omega }d\phi ^{}dI\,\mathrm{\Psi }(\phi ^{},I)\,\mathrm{exp}\left[\frac{\mathrm{i}}{\hbar }\mathrm{\Phi }(\phi ^{},I)-\mathrm{i}\frac{\pi }{2}\nu \right].$$
(3)
Here $`I`$, $`\phi `$ are canonical polar (or cylinder) coordinates, and $`\nu `$ is the Morse index.
The famous Poincaré–Birkhoff theorem states that a perturbation of an integrable system causes tori with rational winding number to break into chains of alternating stable and unstable periodic points. However, it does not give a quantitative criterion for the number of distinct orbits that lie on this chain. The simple broken torus is described by the normal form
$$S(I,\phi ^{})=S_0+I\phi ^{}-aI^2-b\mathrm{cos}\phi ^{}$$
(4)
for the generating function $`S`$, with constants $`S_0`$, $`a`$, and $`b`$. The corresponding map $`(I,\phi )(I^{},\phi ^{})`$, implicitly given by
$$\phi =\frac{\partial S}{\partial I},\qquad I^{}=\frac{\partial S}{\partial \phi ^{}},$$
(5)
is the well-known standard map. This normal form has been used in to obtain a uniform semiclassical approximation for the simple broken torus, smoothly interpolating between the two non-commuting classical and integrable limits ($`\hbar \to 0`$ and $`b\to 0`$, respectively).
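Solving the implicit relations (5) for the normal form (4) gives the explicit one-step map $`\phi ^{}=\phi +2aI`$, $`I^{}=I+b\mathrm{sin}\phi ^{}`$. A minimal iteration routine (our sketch; the parameter conventions follow the normal form above):

```python
import math

def normal_form_map(I, phi, a, b):
    """One step of the map generated by Eq. (4) through Eq. (5):
        phi' = phi + 2*a*I       (from phi = dS/dI = phi' - 2*a*I),
        I'   = I + b*sin(phi')   (from I'  = dS/dphi').
    This is the standard map, up to the conventions chosen for a and b."""
    phi_new = (phi + 2.0 * a * I) % (2.0 * math.pi)
    I_new = I + b * math.sin(phi_new)
    return I_new, phi_new
```

The fixed points sit at $`I=0`$, $`\mathrm{sin}\phi ^{}=0`$: one stable and one unstable orbit of the simple broken torus. For $`b=0`$ the map is integrable and $`I`$ is conserved.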
A more complete picture can be obtained when one includes the second harmonic in the angular variable $`\phi ^{}`$ and works with the extended normal form
$$S(I,\phi ^{})=S_0+I\phi ^{}-aI^2-b\mathrm{cos}\phi ^{}-c\mathrm{cos}(2\phi ^{}+2\phi _0).$$
(6)
The strategy of including higher-order terms in normal forms has been pursued before, with two different incentives. Firstly, the inclusion of higher orders is a tool to equip a normal form with a sufficient number of independent parameters. This can be necessary to account for all classical properties (stabilities and actions) of the periodic orbits described by the normal form. Secondly, the higher orders describe additional periodic orbits, and hence more complicated configurations than the usual normal forms . A semiclassical description often succeeds only when all orbits of an extended normal form are treated collectively. Presently we aim at the inclusion of additional periodic orbits.
The periodic orbits satisfy the fixed point conditions
$`I={\displaystyle \frac{\partial S}{\partial \phi ^{}}}=I+b\mathrm{sin}\phi ^{}+2c\mathrm{sin}(2\phi ^{}+2\phi _0),`$ (7)
$`\phi ^{}={\displaystyle \frac{\partial S}{\partial I}}=\phi ^{}-2aI,`$ (8)
resulting in $`I=0`$ and
$`b\mathrm{sin}\phi ^{}+2c\mathrm{sin}(2\phi ^{}+2\phi _0)=0.`$ (9)
This condition amounts to finding the roots of a fourth-order polynomial in $`\mathrm{sin}\phi ^{}`$. Depending on the parameters $`b`$, $`c`$, and $`\phi _0`$ there are either four real solutions or two real and two complex solutions. For $`|b|<2|c|`$ there are always four real solutions, while for $`|b|>4|c|`$ only two real solutions are found. When $`|b/c|`$ is fixed in the range $`(2,4)`$ and $`\phi _0`$ is varied one finds tangent bifurcations, with two real solutions on one side of the bifurcation and four real solutions on the other side. The real solutions correspond to conventional periodic orbits while the complex solutions are ‘ghosts’. They are of no consequence for the classical dynamics but can be important for the semiclassical description of the quantum system, as has been shown in and as we shall see once more below.
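The real periodic points can be located numerically without manipulating the quartic, simply by scanning the left-hand side of (9) for sign changes. The routine below is our sketch (grid scan plus bisection); locating the complex ‘ghost’ roots would instead require a polynomial solver.

```python
import math

def real_fixed_points(b, c, phi0, n_grid=20000):
    """Real roots on [0, 2*pi) of b*sin(phi) + 2c*sin(2*phi + 2*phi0) = 0,
    found by scanning a fine grid for sign changes and refining by bisection.
    Each real root is a periodic orbit of the extended normal form (6)."""
    f = lambda p: b * math.sin(p) + 2.0 * c * math.sin(2.0 * p + 2.0 * phi0)
    roots = []
    for k in range(n_grid):
        lo = 2.0 * math.pi * k / n_grid
        hi = 2.0 * math.pi * (k + 1) / n_grid
        if f(lo) == 0.0:
            roots.append(lo)
        elif f(lo) * f(hi) < 0.0:
            for _ in range(60):            # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots
```

At fixed $`b/c`$ the number of real roots can change with $`\phi _0`$ through tangent bifurcations, as described in the text.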
The joint contribution of the orbits on the twofold-broken torus is found by introducing into Eq. (3) the normal form
$$\mathrm{\Phi }=S_0-aI^2-b\mathrm{cos}\phi ^{}-c\mathrm{cos}(2\phi ^{}+2\phi _0)$$
(10)
\[cf. Eq. (6)\] for the phase function and
$$\mathrm{\Psi }=1+d\mathrm{cos}(\phi ^{}+\phi _1)+e\mathrm{cos}(2\phi ^{}+2\phi _0)$$
(11)
for the amplitude function. The stationary-phase limit of Eq. (3) is a sum of four additive contributions of form (1), each representing one orbit. The four parameters $`S_0`$, $`b`$, $`c`$, $`\phi _0`$ are determined by matching the phases of each contribution to the actions $`S`$ of the periodic orbits. The parameters $`a`$, $`d`$, $`e`$, and $`\phi _1`$ are fixed by the stability amplitudes $`A`$. This strategy works also when two of the orbits are ghosts: Phases and amplitudes become then complex, but are related by complex conjugation, and the number of real independent parameters remains unchanged.
Eqs. (3,10,11) represent a uniform approximation of the joint contribution of the twofold-broken torus. The integration over $`I`$ is readily carried out, which leaves us with a one-dimensional, strongly oscillating integral over the coordinate $`\phi ^{}`$. Numerically it is most conveniently evaluated by the method of steepest descent, for which the integration contour is deformed into the complex plane. On the new contour the integrand decreases exponentially. The new contour can also visit stationary points with complex coordinates, i.e., ghost orbits.
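A minimal numerical version of this evaluation (ours): for $`a>0`$ the Gaussian-Fresnel $`I`$ integral can be done analytically after extending its range to the whole real line, and the remaining $`2\pi `$-periodic $`\phi ^{}`$ integral converges rapidly under the plain rectangle rule, so no contour deformation is needed at moderate $`1/\hbar `$. The function name and argument layout are our own.

```python
import cmath, math

def uniform_contribution(hbar, a, b, c, phi0, d, e, phi1, S0=0.0, nu=0, n=4096):
    """Eqs. (3),(10),(11): the Fresnel integral over I is done analytically
    (a > 0 assumed); the phi' integral is a rectangle-rule sum, which is
    spectrally accurate for smooth periodic integrands."""
    fresnel = math.sqrt(math.pi * hbar / a) * cmath.exp(-1j * math.pi / 4)
    total = 0j
    for k in range(n):
        p = 2.0 * math.pi * k / n
        psi = 1.0 + d * math.cos(p + phi1) + e * math.cos(2 * p + 2 * phi0)
        phase = S0 - b * math.cos(p) - c * math.cos(2 * p + 2 * phi0)
        total += psi * cmath.exp(1j * phase / hbar)
    integral = total * 2.0 * math.pi / n
    return fresnel * integral * cmath.exp(-1j * math.pi * nu / 2) / (2.0 * math.pi * hbar)
```

In the integrable limit $`b=c=d=e=0`$ the result reduces to the Fresnel prefactor times the full $`2\pi `$ angular volume, as it should.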
## 3 Numerical results
We shall illustrate our findings, and especially the relevance of ghosts, for a situation encountered in a model system, the dynamics of a periodically driven angular momentum vector $`𝐉`$ (the kicked top ). The components of $`𝐉`$ obey the usual commutation rules $`[J_x,J_y]=iJ_z`$ (and cyclic permutations). The total angular momentum $`𝐉^2=j(j+1)`$ is conserved, restricting the dynamics to the irreducible representations of the angular-momentum algebra. The Hilbert space dimension is $`2j+1`$. The effective Planck’s constant is $`1/(j+1/2)`$ and the classical limit is attained for $`j\mathrm{}`$. We work here with a Floquet operator of the explicit form
$`F`$ $`=`$ $`\mathrm{exp}\left[-\mathrm{i}k_z{\displaystyle \frac{J_z^2}{2j+1}}-\mathrm{i}p_zJ_z\right]\mathrm{exp}\left[-\mathrm{i}p_yJ_y\right]`$ (12)
$`\times \mathrm{exp}\left[-\mathrm{i}k_x{\displaystyle \frac{J_x^2}{2j+1}}-\mathrm{i}p_xJ_x\right].`$
The dynamics consists of a sequence of linear rotations by angles $`p_i`$ alternating with torsions of strength $`k_i`$. We hold the $`p_i`$ fixed ($`p_x=0.3`$, $`p_y=1.0`$, $`p_z=0.8`$) while varying the control parameter $`kk_z=10k_x`$. Complete semiclassical spectra of this system throughout the full transition from integrable ($`k=0`$) to well-developed chaotic behaviour ($`k10`$) have been presented in . The system has also been used to illustrate uniform semiclassical approximations for various kinds of bifurcations .
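For modest $`j`$ the Floquet matrix of Eq. (12) can be built explicitly and its traces computed by brute force. The sketch below is ours (Python/NumPy); the helper names and the overall sign convention of the exponentials are assumptions, not code from the paper.

```python
import numpy as np

def angular_momentum(j):
    """Spin-j matrices J_z, J_y, J_x in the |j, m> basis, m = -j, ..., j."""
    m = np.arange(-j, j + 1)
    Jz = np.diag(m).astype(complex)
    # <m+1|J_+|m> = sqrt(j(j+1) - m(m+1)); the basis index grows with m.
    cp = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))
    Jp = np.diag(cp, -1).astype(complex)
    Jm = Jp.conj().T
    return Jz, (Jp - Jm) / (2 * 1j), (Jp + Jm) / 2

def expmh(H):
    """exp(-i H) for Hermitian H, via eigendecomposition (avoids scipy)."""
    w, U = np.linalg.eigh(H)
    return (U * np.exp(-1j * w)) @ U.conj().T

def floquet(j, k, px=0.3, py=1.0, pz=0.8):
    """Floquet matrix of Eq. (12) with the text's rotation angles and
    the parameter choice k = k_z = 10 k_x."""
    Jz, Jy, Jx = angular_momentum(j)
    Fz = expmh(k * Jz @ Jz / (2 * j + 1) + pz * Jz)
    Fy = expmh(py * Jy)
    Fx = expmh((k / 10) * Jx @ Jx / (2 * j + 1) + px * Jx)
    return Fz @ Fy @ Fx

# the traces tr F^n enter the semiclassical comparison, e.g. n = 3:
tr3 = np.trace(np.linalg.matrix_power(floquet(10, 3.0), 3))
```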
We concentrate on a particular configuration of period-three orbits which suggests a treatment as a twofold-broken torus. The configuration comes about in a sequence of three bifurcations: At $`k=1.9715\ldots `$ a pair of period-three satellites is born in close vicinity to a period-one orbit in the centre of a stability island. At $`k=1.9753`$ a period-tripling bifurcation takes place where the stable satellite collides with the central orbit. On the other side of the bifurcation the satellites form a simple broken torus around the centre. This sequence of bifurcations can be described by extended normal forms , and the close neighbourhood of the two bifurcations is not exceptional (see e.g. ). As the control parameter $`k`$ is increased further the satellites move away from the centre. At $`k=3.7856`$ another pair of period-three satellites appears in a tangent bifurcation. For $`k<5`$ the four satellites form a configuration that resembles a twofold-broken torus. Figure 1 displays a phase space portrait for $`k=4`$, and Figure 2 shows steepest-descent contours in the complex $`\phi ^{}`$-plane on both sides of the final bifurcation.
We evaluated $`\text{tr}F^3`$ at $`k=3.0`$ with the quantum number $`j`$ ranging from $`1`$ to $`50`$. At the given value of the control parameter only one of the pairs of satellites mentioned above has real coordinates, and its distance to the centre of the stability island is already quite large. The other two satellites are still ghosts, but their bifurcation is not far away. This leaves us with the choice between two semiclassical approximations: i) We can group the two real satellites together with the central orbit and treat the ghosts separately, or ii) we can group the four satellites as a twofold-broken torus and treat the central orbit separately. (In principle, we could complicate matters even more and group all the orbits together, but this is rather impractical and beyond the scope of the present work.) The semiclassical evaluation of the trace involves also five other orbits. The error of the two approximations is shown in Figure 3. The error of approximation ii) is about a factor of $`2`$–$`3`$ smaller than that of approximation i). Although the accuracy gain is not dramatic, this result clearly favours a treatment of the satellites as a pre-formed twofold-broken torus.
A somewhat more demanding test is aided by ‘inverse-$`\hbar `$ spectroscopy’ . We consider the discrete truncated Fourier analysis
$$T(S)=\frac{1}{32}\underset{j=1}{\overset{32}{\sum }}\text{tr}F^3(j)\,\mathrm{exp}[\mathrm{i}(j+{\textstyle \frac{1}{2}})S]$$
(13)
of the trace with respect to the quantum number $`j`$. The ‘action spectrum’ $`|T(S)|^2`$ displays peaks at the actions of the periodic orbits that contribute to $`\text{tr}F^3`$. Since accidental action degeneracies do not occur in the present example, the quality of the semiclassical approximations i) and ii) can now be assessed directly, without interference from the remaining orbits. Figure 4 shows the collective peak of the four satellites and the central orbit. Approximation ii) agrees almost perfectly with the exact result, while approximation i) distinctly overestimates the peak height.
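Eq. (13) is a short sum; a direct transcription (ours) that can be scanned over $`S`$ to produce the action spectrum $`|T(S)|^2`$:

```python
import cmath

def action_spectrum(traces, S):
    """Eq. (13): truncated Fourier sum over quantum numbers j = 1..len(traces);
    traces[j-1] holds tr F^3 at quantum number j.  Peaks of |T(S)|^2 sit at
    the classical actions of the contributing periodic orbits."""
    n = len(traces)
    return sum(t * cmath.exp(1j * (j + 0.5) * S)
               for j, t in enumerate(traces, start=1)) / n
```

A synthetic single-orbit input, $`\text{tr}F^3(j)=A\,\mathrm{exp}[-\mathrm{i}(j+1/2)S_0]`$, produces a peak of height $`|A|^2`$ in $`|T(S)|^2`$ at $`S=S_0`$.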
Of course, the uniform approximation presented here is not restricted to the situation where two of the orbits are ghosts, but is valid for four real orbits on the broken torus as well. Indeed we find an improvement in the semiclassical accuracy of $`\text{tr}F^3`$ over the full range $`2.8<k<5`$.
## 4 Conclusion
In this paper a uniform approximation for a broken rational torus consisting of four periodic orbits has been presented. The approximation was tested in a model system where the phase-space is mixed, and the tori are grouped around (but not too close to) a central orbit. It can be expected that the approximation will be even more useful in studies of globally near-integrable systems.
## Acknowledgments
This work was supported by the DFG (Sonderforschungsbereich 237) and the European Community (Program for the Training and Mobility of Researchers).
# Cartography for Martian Trojans
## 1 INTRODUCTION
The Lagrange points $`L_4`$ and $`L_5`$ are stable in the restricted three body problem (e.g., Danby 1988). However, the long-term survival of Trojans around the Lagrange points of the planets in the presence of perturbations from the remainder of the Solar System is a difficult and still unsolved problem (e.g., Érdi 1997). Jovian Trojan asteroids have been known since the early years of this century, while a number of Saturnian moons (e.g., Dione and Helene, Tethys and Calypso, Tethys and Telesto) also form Trojan configurations with their parent planet. However, whether there exist Trojan-like bodies associated with the other planets has been the matter of both observational activity (e.g., Tombaugh 1961, Kowal 1971) and intense theoretical speculation (e.g., Weissman & Wetherill 1974, Mikkola & Innanen 1990, 1992). The answer to this problem came in 1990, with the discovery of 5261 Eureka, the first Trojan around Mars (see Mikkola et al. 1994 for details). The last few months of 1998 have seen further remarkable progress with the discovery of one certain Martian Trojan, namely 1998 VF31, as well as two further candidates, namely 1998 SD4 and 1998 QH56 (see the Minor Planet Electronic Circulars 1998-W04, 1998-R02, 1998-S20 and the Minor Planet Circular 33085). The suggestion that 1998 QH56 and 1998 VF31 might be Martian Trojans was first made by G.V. Williams.
These recent discoveries raise very directly the following questions. Are there any more Martian Trojans? If so, where should the observational effort be concentrated? Of course, the first question can only be answered at the telescope, but the second is resolved in this Letter. By integrating numerically an ensemble of inclined and in-plane orbits in the vicinity of the Martian Lagrange points for 25 and 60 million years respectively, the stable régimes are mapped out. On re-simulating and sampling the ensemble of stable orbits, the probability density of Martian Trojans as a function of longitude and inclination can be readily obtained. If a comparatively puny body such as Mars possesses Trojans, the existence of such objects around the larger terrestrial planets also merits very serious attention. There are Trojan orbits associated with Venus and the Earth that survive for tens of millions of years (e.g., Tabachnik & Evans 1998). If objects populating such orbits exist, they must be small, else they would have been found by now.
## 2 MARTIAN TROJANS
Saha & Tremaine (1992, 1994) have taken the symplectic integrators developed by Wisdom & Holman (1991) and added individual planetary timesteps to provide a fast code that is tailor-made for long numerical integrations of low-eccentricity orbits in a nearly Keplerian force field. In our simulations, the model of the Solar System consists of the eight planets from Mercury to Neptune, together with test particles starting near the Lagrange points. The effect of Pluto on the evolution of Martian Trojans is quite negligible. Of course, the Trojan test particles are perturbed by the Sun and planets but do not themselves exert any gravitational forces. The initial positions and velocities of the planets, as well as their masses, are provided by the JPL Planetary and Lunar Ephemerides DE405 and the starting epoch is JD 2440400.5 (28 June 1969). All our simulations include the most important post-Newtonian corrections, as well as the effects of the Moon. Individual timesteps are invaluable for this work, as orbital periods are much smaller in the Inner Solar System than in the Outer. For all the computations described in this Letter, the timestep for Mercury is $`14.27`$ days. The timesteps of the planets are in the ratio $`1:2:2:4:8:8:64:64`$ for Mercury moving outwards, so that Neptune has a timestep of $`2.5`$ years. The Trojan particles all have the same timestep as Mercury. These values were chosen after some experimentation to ensure that the relative energy error has a peak amplitude of $`10^{-6}`$ over the tens of million year integration timespans. After each timestep, the Trojan test particles are examined to see whether their orbits have become hyperbolic or whether they have entered the planet’s sphere of influence (defined as $`r_\mathrm{s}=a_\mathrm{p}M_\mathrm{p}^{2/5}`$, where $`a_\mathrm{p}`$ and $`M_\mathrm{p}`$ are the semimajor axis and the mass, in solar masses, of the planet). If so, they are terminated.
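The per-step termination test described above can be sketched as follows (our illustration, in units with the Sun's $`GM=1`$ and the planet mass in solar masses; the paper's actual code and unit conventions may differ):

```python
import math

def should_terminate(r_helio, v_helio, r_planet, a_p, m_p, gm_sun=1.0):
    """Termination test from the text: drop a test particle once its
    heliocentric orbit turns hyperbolic, or once it enters the planet's
    sphere of influence r_s = a_p * m_p**(2/5)."""
    r = math.dist(r_helio, (0.0, 0.0, 0.0))
    v2 = sum(comp * comp for comp in v_helio)
    energy = 0.5 * v2 - gm_sun / r          # specific orbital energy
    r_s = a_p * m_p ** 0.4                  # sphere-of-influence radius
    return energy >= 0.0 or math.dist(r_helio, r_planet) < r_s
```

For Mars ($`a_\mathrm{p}1.52`$ AU, $`M_\mathrm{p}3.2\times 10^7M_{}`$ in these units) the sphere of influence is a few thousandths of an AU.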
In methodology, our calculations are very similar to the magisterial work on the Trojan problem for the four giant planets by Holman & Wisdom (1993). The earlier calculations of Mikkola & Innanen (1994, 1995) on the Trojans of Mars for timespans of between tens of thousands and 6 million years have also proved influential. Our integrations of Trojan orbits are pursued for durations ranging from 25 to 60 million years, the longest integration periods currently available. Nonetheless, the orbits have been followed for only a tiny fraction of the age of the Solar System ($`4.5`$ Gigayears), so it is wise to remain a little cautious about our results.
Figure 1 shows the results of our first experiment. Here, the orbits of 1080 Trojan test particles around Mars are integrated for 25 million years. The initial inclinations of the test particles (with respect to the plane of Mars’ orbit) are spaced every $`2^{\circ }`$ from $`0^{\circ }`$ to $`90^{\circ }`$ and the initial longitudes (again with respect to Mars) are spaced every $`15^{\circ }`$ from $`0^{\circ }`$ to $`360^{\circ }`$. The starting semimajor axes and the eccentricities of the Trojans are the same as those of the parent planet. Only the test particles surviving until the end of the 25 million year integration are marked on the Figure. The survivors occupy a band of inclinations between $`10^{\circ }`$ and $`40^{\circ }`$ and longitudes between $`30^{\circ }`$ and $`120^{\circ }`$ (the $`L_4`$ Lagrange point) or $`240^{\circ }`$ and $`330^{\circ }`$ (the $`L_5`$ point). On the basis of 4 million year timespan integrations, Mikkola & Innanen (1994) claim that stable Martian Trojans have inclinations between $`15^{\circ }`$ and $`30^{\circ }`$ and between $`32^{\circ }`$ and $`44^{\circ }`$ with respect to Jupiter’s orbit. Our longer integrations suggest a more complex picture. Mikkola & Innanen’s instability strip between $`30^{\circ }`$ and $`32^{\circ }`$ can be detected in Figure 1, but only for objects near $`L_4`$ with initial longitudes $`<60^{\circ }`$. In particular, this instability strip does not exist around $`L_5`$, where Trojans with starting inclinations $`30^{\circ }<i<32^{\circ }`$ seem to be stable – as is also evidenced by the recent discovery of 1998 VF31. Marked on the figure are the instantaneous positions of the two certain Martian Trojans, namely 5261 Eureka (marked as a red circle) and 1998 VF31 (a green circle), as well as the two candidates 1998 QH56 (a blue circle) and 1998 SD4 (a yellow circle). It is delightful to see that the two securely established Trojans lie within the stable zone, which was computed by Tabachnik & Evans (1998) before the discovery of 1998 VF31.
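The initial-condition grid just described can be reproduced as follows (our sketch; the half-open endpoint convention is inferred from the quoted total of 1080 particles and is not stated explicitly in the text):

```python
import itertools

# Grid of Trojan initial conditions: inclinations every 2 degrees and
# longitudes (relative to Mars) every 15 degrees.  Reproducing the quoted
# total of 1080 particles fixes the endpoint handling at 45 x 24 half-open
# ranges; that convention is our inference.
inclinations_deg = [2 * k for k in range(45)]    # 0, 2, ..., 88
longitudes_deg = [15 * k for k in range(24)]     # 0, 15, ..., 345
grid = list(itertools.product(inclinations_deg, longitudes_deg))
```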
In fact, they live deep within the heart of the zone, suggesting that they may even be primordial. The two candidates (1998 QH56 and 1998 SD4) lie closer to the rim. Let us finally note that Trojans starting off in or near the plane of Mars’ orbit are unstable. This has been confirmed by an extensive survey of in-plane Martian Trojans. On integrating 792 test particles with vanishing inclination but with a range of longitudes and semimajor axes, we found that all are unstable on timescales of $`60`$ million years. Martian Trojans with low inclinations are not expected.
It is useful to an observer hoping to discover further Trojans to provide plots of the probability density. Accordingly, let us re-simulate the stable zones with much greater resolution. This is accomplished by placing a total of 746 test particles every $`1^{}`$ in initial inclination and every $`5^{}`$ in initial longitude so as to span completely the stable regions. This ensemble of orbits is then integrated and the orbital elements are sampled every 2.5 years to provide the plots displayed in Figure 2. The upper panel shows the meshed surface of the probability density as a function of both inclination to the invariable plane and longitude with respect to the planet. The asymmetry between the two Lagrange points is evident. The lower panels show the projections of the meshed surface onto the principal planes – in particular, for the inclination plot, we have shown the contribution at each Lagrange point separately. There are a number of interesting conclusions to be drawn from the plots. First, as shown by the dotted line, the probability density is bimodal at $`L_4`$. It possesses a flattish maximum at inclinations between $`15^{}`$ and $`30^{}`$ and then falls sharply, before rising to a second maximum at $`36^{}`$. At $`L_5`$, all inclinations between $`15^{}`$ and $`40^{}`$ carry a significant probability, though the smaller inclinations in this band are most favored. It is within these inclination windows that the observational effort should be most concentrated. Second, the probability density is peaked at longitudes of $`60^{}`$ ($`L_4`$) and $`300^{}`$ ($`L_5`$). The most likely place to observe one of these Trojans is indeed at the classical locations of the Lagrange points. This is not intuitively obvious, as any individual Trojan is most likely to be seen at the turning points of its longitudinal libration. There are two reasons why this effect is not evident in our probability density plots. 
First, our figures refer to an ensemble of Trojans uniformly populating the stable zone. So, the shape of the stable zone also plays an important role in controlling the position of the maximum of the probability density. Second, the positions of the Lagrange points themselves are oscillating and so the turning points of the longitudinal libration do not occur at the same locations, thus smearing out the enhancement effect.
Table 1 lists the orbital elements of the two secure Martian Trojans and the two candidates, as recorded by the Minor Planet Center. From the instantaneous elements, it is straightforward to simulate the trajectories of the objects. Figure 3 shows the orbits plotted in the plane of longitude (with respect to Mars) versus semimajor axis. As the figures illustrate, both 5261 Eureka and 1998 VF31 are stable and maintain their tadpole character (see e.g., Garfinkel 1977) for durations of 50 million years. Based on preliminary orbital elements, Mikkola et al. (1994) integrated the orbit of 5261 Eureka and found that its longitudinal libration was large, quoting $`297^{}\pm 26^{}`$ as the typical range in the longitudinal angle. Our orbit of 5261 Eureka, based on the latest orbital elements, seems to show a smaller libration of $`285^{}`$–$`314^{}`$. The remaining two objects that have been suggested as Martian Trojans, 1998 QH56 and 1998 SD4, both enter the sphere of influence of Mars – in the former case after $`\mathrm{500\hspace{0.17em}000}`$ years, in the latter case after $`\mathrm{100\hspace{0.17em}000}`$ years. Although the orbits are Mars crossing, their eccentricities remain low and their inclinations oscillate tightly about mean values until Mars’ sphere of influence is entered. It is possible that these objects were once Trojans and have been ejected from the stable zones, a possibility that receives some support from their locations in Figure 1 at the fringes of the stable zones. Of course, another possibility is that they are asteroids ejected from the Main Belt.
The fact that both confirmed Martian Trojans lie deep within the stable zones in Figure 1 suggests that these objects may be primordial. If so, we can get a crude estimate of possible numbers by extrapolation from the number of Main Belt asteroids (cf. Holman 1997; Evans & Tabachnik 1999). The number of Main Belt asteroids $`N_{\mathrm{MB}}`$ is $`N_{\mathrm{MB}}<\mathrm{\Sigma }_{\mathrm{MB}}A_{\mathrm{MB}}f`$, where $`A_{\mathrm{MB}}`$ is the area of the Main Belt, $`\mathrm{\Sigma }_{\mathrm{MB}}`$ is the surface density of the proto-planetary disk and $`f`$ is the fraction of primordial objects that survive ejection (which we assume to be a universal constant). Let us take the Main Belt to be centered on $`2.75`$ AU with a width of $`1.5`$ AU. The belt of Martian Trojans is centered on $`1.52`$ AU and has a width of $`<0.0025`$ AU. If the primordial surface density falls off in inverse proportion to distance, then the number of Martian Trojans $`N_{\mathrm{MT}}`$ is
$$N_{\mathrm{MT}}<\left(\frac{2.75}{1.52}\right)\left(\frac{1.52\times 0.0025}{2.75\times 1.5}\right)N_{\mathrm{MB}}0.0016N_{\mathrm{MB}}$$
(1)
The number of known Main Belt asteroids with diameters $`>1`$ km is $`>40000`$, which suggests that the number of Martian Trojans is $`>50`$.
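The arithmetic of this extrapolation is easy to restate numerically. The sketch below is our own illustration (not from the paper), with the quantities taken from the text; the survival fraction $`f`$ cancels in the ratio:

```python
# Crude extrapolation of Eq. (1); values from the text.
a_mb, w_mb = 2.75, 1.5        # Main Belt: center (AU) and width (AU)
a_mt, w_mt = 1.52, 0.0025     # Martian Trojan belt: center (AU) and width (AU)

ratio = (a_mb / a_mt) * (a_mt * w_mt) / (a_mb * w_mb)   # N_MT / N_MB ~ 0.00167
n_mt = ratio * 40000          # with ~40000 Main Belt asteroids > 1 km: ~67
```

The ratio evaluates to about 0.0017 (quoted as 0.0016 in the text), so 40 000 Main Belt asteroids translate into several tens of Martian Trojans, consistent with the estimate above.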
## 3 CONCLUSIONS
Motivated by the recent discovery of a new Martian Trojan (1998 VF31) as well as further possible candidates (1998 QH56, 1998 SD4), this paper has provided maps of the stable zones for Martian Trojans and estimates of the numbers of undiscovered objects. For Mars, the observational effort should be concentrated at inclinations satisfying $`15^{}<i<30^{}`$ and $`32^{}<i<40^{}`$ for the $`L_4`$ Lagrange point and between $`15^{}`$ and $`40^{}`$ for $`L_5`$. These are the spots where the probability density is significant (see Figure 2), though the lower inclinations in these bands are slightly more favored than the higher. Trojans in or close to the orbital plane of Mars are unstable. Crude estimates suggest there may be as many as $`50`$ undiscovered Martian Trojans with sizes $`>1`$ km. The orbits of 5261 Eureka and 1998 VF31 remain Trojan-like for durations of at least 50 million years. The other candidates, 1998 QH56 and 1998 SD4, are not currently Trojans, though it is conceivable that they may once have been. Both objects will probably enter the sphere of influence of Mars after $`<0.5`$ million years.
NWE is supported by the Royal Society, while ST acknowledges financial help from the European Community. We wish to thank John Chambers, Luke Dones, Seppo Mikkola, Prasenjit Saha and Scott Tremaine for many helpful comments and suggestions. We are also grateful for the remarkable service to the academic community provided by the Minor Planet Center. The anonymous referee helpfully provided improved orbital elements for the Trojan candidates for our integrations.
# Reduction formula for fermion loops and density correlations of the 1D Fermi gas
## Abstract
Fermion $`N`$-loops with an arbitrary number of density vertices $`N>d+1`$ in $`d`$ spatial dimensions can be expressed as a linear combination of $`(d+1)`$-loops with coefficients that are rational functions of external momentum and energy variables. A theorem on symmetrized products then implies that divergencies of single loops for low energy and small momenta cancel each other when loops with permuted external variables are summed. We apply these results to the one-dimensional Fermi gas, where an explicit formula for arbitrary $`N`$-loops can be derived. The symmetrized $`N`$-loop, which describes the dynamical $`N`$-point density correlations of the 1D Fermi gas, does not diverge for low energies and small momenta. We derive the precise scaling behavior of the symmetrized $`N`$-loop in various important infrared limits.
KEYWORDS: Fermi systems, Feynman amplitudes, density correlations, surface fluctuations
1. Introduction
The properties of fermion loops with density vertices (see Fig. 1) play a role in the theory of Fermi systems and various other problems in statistical mechanics. Symmetrized loops, obtained by summing all permutations of the $`N`$ external energy-momentum variables of a single $`N`$-loop, describe dynamical $`N`$-point density correlations of a (non-interacting) Fermi gas. Single loops have no direct physical meaning (for $`N>2`$), but contribute as subdiagrams of Feynman diagrams in the perturbation expansion of interacting Fermi systems. Symmetrized loops appear as integral kernels in effective actions for interacting Fermi systems, where fermionic degrees of freedom have been eliminated in favor of collective density fluctuations . The behavior of symmetrized loops for small energy and momentum variables is particularly important for Fermi systems with long-range interactions, whose Fourier transform is singular for small energy and momentum transfers .
Besides their relevance for interacting electron systems and other fermionic systems in nature, the theory of Fermi systems also has a bearing on various problems in classical statistical mechanics, which can be mapped to an effective Fermi system (gas or interacting). For example, the statistical mechanics of directed lines in two dimensions can be mapped to the quantum mechanics of fermions in one spatial dimension. This mapping has been exploited extensively to study fluctuations of crystal surfaces.
The 2-loop, corresponding to the 2-point density correlation function has been computed long ago in one, two, and three dimensions . Recently, Feldman et al. have obtained an exact expression for the $`N`$-loop with arbitrary energy and momentum variables in two dimensions. We have evaluated that expression explicitly and analyzed the small energy-momentum limit of the symmetrized loops, showing in particular that infrared divergencies of single loops cancel completely in the sum over permutations .
Most recently, Wagner has published a reduction formula for fermion loops in the static case, where all energy variables are set to zero. This formula reduces the $`N`$-loop for a $`d`$-dimensional Fermi system to a linear combination of $`(d+1)`$-loops, with coefficients that are rational functions of the momenta. In this work we point out that Wagner’s formula and derivation can be easily extended to the case of finite energy variables (Sec. 3). In the two-dimensional case, the possibility of such an extension is evident from the exact expression for $`N`$-loops. The small energy-momentum behavior of symmetrized $`N`$-loops can be analyzed by applying a theorem on symmetrized products derived in our work on two-dimensional systems, which we formulate for the general $`d`$-dimensional case in Sec. 4. We apply the reduction formula to a one-dimensional system, where the $`N`$-loop can be expressed in terms of the 2-loop, which is very easy to compute (Sec. 5). We finally compute the infrared scaling behavior of symmetrized $`N`$-loops in a one-dimensional Fermi system.
2. Loops
The amplitude of the $`N`$-loop with density vertices, represented by the Feynman diagram in Fig. 1, is given by
$$\mathrm{\Pi }_N(q_1,\mathrm{},q_N)=I_N(p_1,\mathrm{},p_N)=\int \frac{d^dk}{(2\pi )^d}\int \frac{dk_0}{2\pi }\prod _{j=1}^{N}G_0(k-p_j)$$
(1)
at temperature zero. Here $`k=(k_0,𝐤)`$, $`q_j=(q_{j0},𝐪_j)`$, and $`p_j=(p_{j0},𝐩_j)`$ are $`(d+1)`$-dimensional energy-momentum vectors. We use natural units, i.e. $`\mathrm{\hbar }=1`$. The variables $`q_j`$ and $`p_j`$ are related by the linear transformation
$$q_j=p_{j+1}-p_j,j=1,\mathrm{},N$$
(2)
where $`p_{N+1}\equiv p_1`$. Energy and momentum conservation at all vertices yields the restriction $`q_1+\mathrm{}+q_N=0`$. The variables $`q_1,\mathrm{},q_N`$ fix $`p_1,\mathrm{},p_N`$ only up to a constant shift $`p_j\to p_j+p`$. Setting $`p_1=0`$, one gets
$`p_2`$ $`=`$ $`q_1`$
$`p_3`$ $`=`$ $`q_1+q_2`$
$`\mathrm{}`$
$`p_N`$ $`=`$ $`q_1+q_2+\mathrm{}+q_{N-1}.`$ (3)
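The convention of Eqs. (2) and (3) can be made concrete with a minimal sketch (our own illustration; scalars stand in for the $`(d+1)`$-dimensional vectors, since the map acts component-wise):

```python
def momenta_from_transfers(qs):
    """Eq. (3): p_1 = 0 and p_j = q_1 + ... + q_(j-1); assumes sum(qs) == 0."""
    ps = [0.0]
    for q in qs[:-1]:
        ps.append(ps[-1] + q)
    return ps

qs = [1.5, -0.5, 2.0, -3.0]      # transfers obeying q_1 + ... + q_N = 0
ps = momenta_from_transfers(qs)  # [0.0, 1.5, 1.0, 3.0]
# Eq. (2) with the cyclic convention p_(N+1) = p_1:
cyclic_ok = all(qs[j] == ps[(j + 1) % len(ps)] - ps[j] for j in range(len(qs)))
```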
We use the imaginary time representation, with a non-interacting propagator
$$G_0(k)=\frac{1}{ik_0-(ϵ_𝐤-\mu )}$$
(4)
where $`ϵ_𝐤`$ is the dispersion relation and $`\mu `$ the chemical potential of the system. For a continuum (not lattice) Fermi system the dispersion relation is $`ϵ_𝐤=𝐤^2/2m`$, where $`m`$ is the fermion mass. The $`k_0`$-integral in Eq. (1) can be easily carried out using the residue theorem; one obtains
$$I_N(p_1,\mathrm{},p_N)=\sum _{i=1}^{N}\int _{|𝐤-𝐩_i|<k_F}\frac{d^dk}{(2\pi )^d}\left(\prod _{\genfrac{}{}{0pt}{}{j=1}{j\ne i}}^{N}f_{ij}(𝐤)\right)^{-1}$$
(5)
where $`f_{ij}(𝐤)=i(p_{i0}-p_{j0})+ϵ_{𝐤-𝐩_i}-ϵ_{𝐤-𝐩_j}`$.
The 2-loop $`\mathrm{\Pi }_2(q,-q)\equiv \mathrm{\Pi }(q)`$ is known as polarization insertion or particle-hole bubble, and has a direct physical meaning: $`\mathrm{\Pi }(q)`$ is the dynamical density-density correlation function of a non-interacting Fermi system. For $`N>2`$, the $`N`$-loop is not a physical quantity, but the symmetrized $`N`$-loop
$$\mathrm{\Pi }_N^S(q_1,\mathrm{},q_N)=𝒮\mathrm{\Pi }_N(q_1,\mathrm{},q_N)=\frac{1}{N!}\sum _P\mathrm{\Pi }_N(q_{P1},\mathrm{},q_{PN}),$$
(6)
where the symmetrization operator $`𝒮`$ imposes summation over all permutations of $`q_1,\mathrm{},`$ $`q_N`$, is proportional to the (connected) dynamical $`N`$-point density correlation function:
$$\langle \rho (q_1)\mathrm{}\rho (q_N)\rangle _{con}=(-1)^{N-1}(N-1)!\mathrm{\Pi }_N^S(q_1,\mathrm{},q_N)$$
(7)
Here $`\rho (q)`$ is the Fourier transform of the particle density operator. Eq. (7) is easily verified by applying Wick’s theorem. Note that Wick’s theorem yields a sum of $`(N-1)!`$ distinct loops with non-equivalent permutations of $`q_1,\mathrm{},q_N`$, while the sum in Eq. (6) includes cyclic permutations which produce $`N`$ equivalent copies of each loop.
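The symmetrization operator in Eq. (6) is simply an average over all $`N!`$ orderings of the arguments. A small generic sketch (our own helper; any function of $`N`$ variables will do):

```python
from itertools import permutations

def symmetrize(f, qs):
    """Eq. (6): (1/N!) times the sum of f over all permutations of its arguments."""
    perms = list(permutations(qs))
    return sum(f(*p) for p in perms) / len(perms)

# A deliberately asymmetric test function; its symmetrization is permutation invariant.
g = lambda a, b, c: a * b**2 + c
value = symmetrize(g, (1, 2, 3))   # equals symmetrize(g, (3, 1, 2)), etc.
```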
3. Reduction formula
We now state the reduction formula that reduces the $`N`$-loop for a $`d`$-dimensional system with $`N>d+1`$ to a linear combination of $`(d+1)`$-loops with coefficients that are explicitly computable rational functions of momentum and energy variables. This formula is a straightforward generalization of a result derived recently by Wagner for the static case $`p_{j0}=0`$.
Let $`p_1,\mathrm{},p_N`$ be such that for each tuple of integers $`𝐣=(j_1,\mathrm{},j_{d+1})`$ with $`1\le j_1<\mathrm{}<j_{d+1}\le N`$, the complex d-dimensional vectors $`𝐝^𝐣`$ determined by the linear equations
$$f_{j_1j_r}(𝐝^𝐣)=i(p_{j_10}-p_{j_r0})+\frac{1}{2m}(𝐩_{j_1}^2-𝐩_{j_r}^2)+\frac{1}{m}(𝐩_{j_r}-𝐩_{j_1})\cdot 𝐝^𝐣=0$$
(8)
for $`r=2,\mathrm{},d+1`$ are well-defined and unique. Suppose that for $`n=1,\mathrm{},N`$ with $`n\ne j_1,\mathrm{},j_{d+1}`$ the numbers
$$f_n^𝐣:=f_{j_rn}(𝐝^𝐣)=i(p_{j_r0}-p_{n0})+\frac{1}{2m}(𝐩_{j_r}^2-𝐩_n^2)+\frac{1}{m}(𝐩_n-𝐩_{j_r})\cdot 𝐝^𝐣$$
(9)
are non-zero. Then
$$I_N(p_1,\mathrm{},p_N)=\sum _{\genfrac{}{}{0pt}{}{j_1,\mathrm{},j_{d+1}}{1\le j_1<\mathrm{}<j_{d+1}\le N}}\left(\prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}^{N}\frac{1}{f_n^𝐣}\right)I_{d+1}(p_{j_1},\mathrm{},p_{j_{d+1}})$$
(10)
Note that the numbers $`f_{j_rn}(𝐝^𝐣)`$ with $`r=1,\mathrm{},d+1`$ are all equal, as a consequence of Eq. (8). The vector $`𝐝^𝐣`$ is uniquely defined if the vectors $`𝐩_{j_r}-𝐩_{j_1}`$, where $`r=2,\mathrm{},d+1`$, are linearly independent. The real part of $`𝐝^𝐣`$ is the center of the uniquely defined circumscribing sphere through the points $`𝐩_{j_1},\mathrm{},𝐩_{j_{d+1}}`$ in $`d`$-dimensional euclidean space. In contrast to $`𝐝^𝐣`$, the numbers $`f_n^𝐣`$ are invariant under a shift $`p_j\to p_j+p`$ and can thus be expressed in terms of the variables $`q_1,\mathrm{},q_N`$.
The proof of the above reduction formula, a simple generalization of the proof given by Wagner for the static case, is presented in the Appendix.
4. Symmetrized products
Symmetrized loops are obtained by summing over all permutations of external energy-momentum variables $`q_1,\mathrm{},q_N`$ as in Eq. (6). Up to a trivial constant, symmetrized $`N`$-loops are the connected $`N`$-point density correlation functions of the Fermi gas. The behavior of these functions in the infrared limit $`q_j\to 0`$ determines the long-distance (in space and time) density correlations, and is a crucial ingredient for power-counting of contributions to effective actions for collective density fluctuations. We will consider two important scaling limits:
i) small energy-momentum limit $`lim_{\lambda \to 0}\mathrm{\Pi }_N^S(\lambda q_1,\mathrm{},\lambda q_N)`$,
ii) dynamical limit $`lim_{\lambda \to 0}\mathrm{\Pi }_N^S[(q_{10},\lambda 𝐪_1),\mathrm{},(q_{N0},\lambda 𝐪_N)]`$.
Single $`N`$-loops generally diverge (for almost all choices of $`q_1,\mathrm{},q_N`$) as $`\lambda ^{2-N}`$ in the small energy-momentum limit, which is what one would expect from simple power-counting applied to the integral (1). A notable exception is the so-called static limit, where the momenta $`𝐪_j`$ tend to zero after all energy variables $`q_{j0}`$ have vanished. In that case one obtains a unique finite limit $`\mathrm{\Pi }_N\to \frac{(-1)^{N-1}}{(N-1)!}\frac{d^{N-2}}{dϵ^{N-2}}D(ϵ)|_{ϵ=\mu }`$, where $`D(ϵ)`$ is the density of states. In the following we will show that systematic cancellations occur in the sum over permutations in the general small energy-momentum limit and also in the dynamical limit.
The factor multiplying the $`(d+1)`$-loops in the reduction formula can be written as
$$F^𝐣:=\prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}^{N}\frac{1}{f_n^𝐣}=\prod _{r=1}^{d+1}F_r^𝐣$$
(11)
where
$$F_r^𝐣=\{\begin{array}{ccc}F_r^𝐣(q_{j_r},q_{j_r+1},\mathrm{},q_{j_{r+1}-1})=\prod _{n=j_r+1}^{j_{r+1}-1}\frac{1}{f_n^𝐣}\hfill & \text{for}& \hfill j_{r+1}>j_r+1\\ 1\hfill & \text{for}& \hfill j_{r+1}=j_r+1\end{array}$$
(12)
Here $`j_{d+2}\equiv j_1`$, i.e. for $`r=d+1`$ the index $`n`$ runs from $`j_{d+1}+1`$ to $`N`$ and then from $`1`$ to $`j_1-1`$. Note that $`F_r^𝐣`$ depends also on differences of the energy-momentum variables $`p_{j_1},\mathrm{},p_{j_{d+1}}`$, besides the explicitly written arguments. As a product of $`M_r=j_{r+1}-j_r-1`$ factors $`(f_n^𝐣)^{-1}`$, $`F_r^𝐣`$ diverges as $`\lambda ^{-M_r}`$ in the small energy-momentum limit, since each $`f_n^𝐣`$ vanishes linearly. We define a symmetrized product
$$S_r^𝐣(k_1,\mathrm{},k_{M_r+1})=\frac{1}{(M_r+1)!}\sum _PF_r^𝐣(k_{P1},\mathrm{},k_{P(M_r+1)})$$
(13)
where all permutations of $`k_1,\mathrm{},k_{M_r+1}`$ are summed. According to the following theorem, the symmetrized product $`S_r^𝐣`$ can be expressed such that the cancellations of singularities in the infrared limit become obvious.
Factorization theorem: The symmetrized product $`S_r^𝐣`$ can be written as $`\frac{m^{M_r}}{(M_r+1)!}`$ times a sum over fractions with numerators
$$(𝐤_{\sigma _1}\cdot 𝐤_{\sigma _1^{\prime }})(𝐤_{\sigma _2}\cdot 𝐤_{\sigma _2^{\prime }})\mathrm{}(𝐤_{\sigma _{M_r}}\cdot 𝐤_{\sigma _{M_r}^{\prime }})$$
(14)
where $`\sigma _i\ne \sigma _i^{\prime }`$ and $`M_r=j_{r+1}-j_r-1`$, and products of $`2M_r`$ functions $`f^𝐣`$ as denominators. The functions $`f^𝐣`$ have the form
$$f^𝐣(p,p^{\prime })=i(p_0-p_0^{\prime })+\frac{1}{2m}(𝐩^2-𝐩^{\prime 2})+(𝐩^{\prime }-𝐩)\cdot 𝐝^𝐣$$
(15)
where $`p=p_{j_r}`$ and $`p^{\prime }=p_{j_r}+(\text{partial sum of }k_1,\mathrm{},k_{M_r+1})`$. In each numerator, each momentum variable $`k_1,\mathrm{},k_{M_r+1}`$ appears at least once as a factor in one of the scalar products.
For example, in the simplest case $`M_r=1`$ one obtains
$$F_r^𝐣(k_1,k_2)+F_r^𝐣(k_2,k_1)=\frac{m(𝐤_1\cdot 𝐤_2)}{f^𝐣(p_{j_r},p_{j_r}+k_1)f^𝐣(p_{j_r},p_{j_r}+k_2)}$$
(16)
The factorization theorem has been derived recently in the context of two-dimensional systems. The proof provides a concrete algorithm leading to the factorized expression. Since the algorithm is actually independent of the dimensionality of the system, we will not repeat the derivation here.
The infrared scaling behavior of $`S_r^𝐣`$ follows directly:
i) $`S_r^𝐣`$ is finite (of order one) and real in the small energy-momentum limit.
ii) $`S_r^𝐣`$ vanishes as $`\lambda ^{2M_r}`$ in the dynamical limit.
To see this, note that the functions $`f^𝐣(p,p^{\prime })`$ vanish linearly in the small energy-momentum limit, and are purely imaginary to leading order in $`\lambda `$, while they remain finite in the dynamical limit.
The symmetrized product is thus much smaller for small energy and momentum variables than each single term, namely by a factor $`\lambda ^{M_r}`$ in the small energy-momentum limit, and even by a factor $`\lambda ^{2M_r}`$ in the dynamical limit. This result holds in any dimension $`d`$.
5. One-dimensional systems
We now apply the general results from Secs. 3 and 4 to one-dimensional systems , where particularly simple expressions can be obtained. We consider first single, then symmetrized loops.
A) Single Loops
In one dimension, the reduction formula (10) reduces $`N`$-loops to linear combinations of 2-loops:
$$I_N(p_1,\mathrm{},p_N)=\sum _{\genfrac{}{}{0pt}{}{j_1,j_2}{1\le j_1<j_2\le N}}\left[\prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,j_2}}^{N}\frac{1}{f_n^𝐣}\right]I_2(p_{j_1},p_{j_2})$$
(17)
where $`d^𝐣`$ is given explicitly by
$$d^𝐣=\frac{1}{2}(p_{j_11}+p_{j_21})+im\frac{p_{j_10}-p_{j_20}}{p_{j_11}-p_{j_21}}$$
(18)
and
$$f_n^𝐣=-\frac{1}{2m}(p_{n1}-p_{j_11})(p_{n1}-p_{j_21})+i(p_{j_10}-p_{n0})+i(p_{n1}-p_{j_11})\frac{p_{j_10}-p_{j_20}}{p_{j_11}-p_{j_21}}$$
(19)
Here $`p_{n1}`$ and $`p_{j_r1}`$ are the one-dimensional momentum components of the energy-momentum vectors $`p_n=(p_{n0},p_{n1})`$ and $`p_{j_r}=(p_{j_r0},p_{j_r1})`$, respectively. The 2-loop can be computed very easily, the result being
$$I_2(p_{j_1},p_{j_2})=\frac{m}{\pi }\frac{1}{p_{j_11}-p_{j_21}}\mathrm{log}\left|\frac{k_F-\alpha _{j_1j_2}}{k_F+\alpha _{j_1j_2}}\right|$$
(20)
where
$$\alpha _{j_1j_2}=\frac{1}{2}(p_{j_11}-p_{j_21})+im\frac{p_{j_10}-p_{j_20}}{p_{j_11}-p_{j_21}}$$
(21)
We have thus obtained an explicit expression in terms of elementary functions for $`N`$-loops in one dimension. One may easily perform an analytic continuation to real (instead of imaginary) energy variables, $`ip_{j0}\to ϵ_j`$, in the above expressions to analyze, for example, the non-linear dynamical density response of the Fermi gas.
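As a sanity check, the closed form of Eqs. (20)-(21) can be compared against a direct numerical evaluation of Eq. (5) for $`N=2`$. The sketch below is our own (units with $`m=k_F=1`$, and an arbitrarily chosen test point with nonzero energy transfer, so the integrand is nonsingular):

```python
import math

def I2_closed(p1, p2, m=1.0, kF=1.0):
    """2-loop from Eqs. (20)-(21); p = (p0, p1) is an (energy, momentum) pair."""
    dp = p1[1] - p2[1]
    alpha = 0.5 * dp + 1j * m * (p1[0] - p2[0]) / dp
    return (m / math.pi) / dp * math.log(abs((kF - alpha) / (kF + alpha)))

def I2_direct(p1, p2, m=1.0, kF=1.0, n=20000):
    """Eq. (5) for N = 2: midpoint rule over the two Fermi seas |k - p_i| < kF."""
    def f(pi, pj, k):  # f_ij(k) of Sec. 2, specialized to one dimension
        return 1j * (pi[0] - pj[0]) + (k - pi[1])**2 / (2*m) - (k - pj[1])**2 / (2*m)
    total, h = 0.0, 2.0 * kF / n
    for pi, pj in ((p1, p2), (p2, p1)):
        for s in range(n):
            k = pi[1] - kF + (s + 0.5) * h
            total += (h / (2.0 * math.pi) / f(pi, pj, k)).real
    return total

p1, p2 = (0.0, 0.0), (0.3, 0.7)   # test point with nonzero energy transfer
# I2_closed(p1, p2) and I2_direct(p1, p2) agree to better than 1e-5
```

The agreement of the two routines confirms the sign and normalization conventions used above.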
In the zero energy limit $`p_{j0}\to 0`$ one obtains the simple result
$`\underset{\genfrac{}{}{0pt}{}{p_{j0}\to 0}{j=1,\mathrm{},N}}{lim}I_N(p_1,\mathrm{},p_N)=`$
$`{\displaystyle \sum _{\genfrac{}{}{0pt}{}{j_1,j_2}{1\le j_1<j_2\le N}}}\left[{\displaystyle \prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,j_2}}^{N}}{\displaystyle \frac{-2m}{(p_{n1}-p_{j_11})(p_{n1}-p_{j_21})}}\right]{\displaystyle \frac{m}{\pi (p_{j_11}-p_{j_21})}}\mathrm{log}\left|{\displaystyle \frac{2k_F-(p_{j_11}-p_{j_21})}{2k_F+(p_{j_11}-p_{j_21})}}\right|`$ (22)
Note that the above expression has a finite limit for $`p_{j1}\to 0`$, although each contribution to the sum diverges.
B) Symmetrized Loops
It is well known that for a linearized dispersion relation $`ϵ_k=v_F(|k|-k_F)`$, as in the one-dimensional Luttinger model, the symmetrized $`N`$-loop $`\mathrm{\Pi }_N^S(q_1,\mathrm{},q_N)`$ vanishes identically for $`N>2`$ even for finite $`q_j`$ with sufficiently small momenta $`q_{j1}`$. We now analyze the infrared behavior of symmetrized $`N`$-loops in a one-dimensional system with the usual quadratic dispersion relation. Symmetrizing the reduction formula, we can write symmetrized loops as
$$\mathrm{\Pi }_N^S(q_1,\mathrm{},q_N)=𝒮\sum _{\genfrac{}{}{0pt}{}{j_1,j_2}{1\le j_1<j_2\le N}}S_1^𝐣S_2^𝐣I_2(p_{j_1},p_{j_2})$$
(23)
where $`𝒮`$ is the symmetrization operator introduced in Sec. 2 and $`S_1^𝐣`$ and $`S_2^𝐣`$ are the symmetrized products defined in Sec. 4. Note that first symmetrizing partially (with respect to a subset of variables, as in the products $`S_r^𝐣`$) and then completely (by applying $`𝒮`$) yields the same result as symmetrizing everything just once.
We can now easily derive the scaling behavior of $`\mathrm{\Pi }_N^S`$ in the small energy-momentum and dynamical limit, respectively. The 2-loop $`\mathrm{\Pi }(q_1)\equiv \mathrm{\Pi }_2(q_1,-q_1)=I_2(0,q_1)`$ tends to the finite value
$$\mathrm{\Pi }(\lambda q_1)\to -\frac{1}{\pi v_F}\frac{1}{1+[q_{10}/(v_Fq_{11})]^2}$$
(24)
in the small energy-momentum limit and vanishes quadratically as
$$\mathrm{\Pi }(q_{10},\lambda q_{11})\to -\frac{v_F}{\pi }\frac{q_{11}^2}{q_{10}^2}\lambda ^2$$
(25)
in the dynamical limit, where $`v_F=k_F/m`$ is the Fermi velocity. The same behavior is found for the 2-loop with a linearized $`ϵ_k`$. Since $`S_1^𝐣`$ and $`S_2^𝐣`$ are both finite in the small energy-momentum limit, the symmetrized $`N`$-loop remains finite, too:
$$\mathrm{\Pi }_N^S(\lambda q_1,\mathrm{},\lambda q_N)=O(1)\text{ for }\lambda \to 0.$$
(26)
Only in the static case $`q_{j0}=0`$ does each single loop $`\mathrm{\Pi }_N`$ have a finite limit for $`q_{j1}\to 0`$, while in general the above result is due to systematic cancellations of infrared divergencies. In the dynamical limit the product $`S_1^𝐣S_2^𝐣`$ vanishes as $`\lambda ^{2M_1+2M_2}`$ where $`M_1+M_2=N-2`$, such that
$$\mathrm{\Pi }_N^S[(q_{10},\lambda q_{11}),\mathrm{},(q_{N0},\lambda q_{N1})]=O(\lambda ^{2N-2})\text{ for }\lambda \to 0.$$
(27)
The same scaling behavior has been found previously for two-dimensional systems .
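The 2-loop limits of Eqs. (24) and (25) can be verified numerically from the closed form. The following small sketch is our own (units $`m=k_F=1`$, so $`v_F=1`$), evaluating $`\mathrm{\Pi }(q)=I_2(0,q)`$ via Eqs. (20)-(21):

```python
import math

def bubble(q0, q1, m=1.0, kF=1.0):
    """Pi(q) = I_2(0, q): the 2-loop with p_1 = 0, p_2 = q (Eqs. (20)-(21))."""
    alpha = -0.5 * q1 + 1j * m * q0 / q1
    return -(m / math.pi) / q1 * math.log(abs((kF - alpha) / (kF + alpha)))

lam = 1e-4
small = bubble(0.3 * lam, lam)   # Eq. (24): -> -1/(pi*(1 + 0.3**2))
dyn = bubble(0.3, 1e-3)          # Eq. (25): -> -(1/pi)*(1e-3)**2/0.3**2
```

Both numbers reproduce the stated limits: the bubble approaches a finite negative constant when energy and momentum scale together, and vanishes quadratically in the momentum when the energy is held fixed.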
6. Conclusion
We have derived a formula that reduces the evaluation of fermion loops with $`N`$ density vertices in $`d`$ dimensions to the computation of loops with only $`d+1`$ vertices. This was obtained by a straightforward extension of a recent result by Wagner for the zero energy limit to arbitrary energy variables. Using a theorem about symmetrized products, we have shown that infrared divergencies of single loops cancel to a large extent when permutations of external energy-momentum variables are summed. The symmetrized $`N`$-loop, which is proportional to the $`N`$-point density correlation function of the Fermi gas, is thus generally much smaller in the infrared limit than unsymmetrized loops. For one-dimensional systems, we have obtained an explicit expression for arbitrary $`N`$-loops in terms of elementary functions of the energy-momentum variables. We have shown that symmetrized loops do not diverge for low energies and small momenta. In the dynamical limit, where momenta scale to zero at fixed energy variables, the symmetrized $`N`$-loop vanishes as the $`(2N-2)`$th power of the scale parameter.
We finally outline some applications of our results.
Evaluation of Feynman diagrams: Analytical results for loops are of course useful for computing Feynman diagrams containing fermion loops as subdiagrams. The number of energy-momentum variables that remain to be integrated (analytically or numerically) is thus reduced. In particular, the mutual cancellation of contributions associated with different permutations of energy-momentum transfers entering a loop can be treated analytically, avoiding numerical “minus-sign” problems.
Effective actions: Effective actions for interacting Fermi systems, where the fermionic degrees of freedom have been eliminated in favor of collective density fluctuations, contain symmetrized $`N`$-loops as kernels . A good control of the infrared behavior of these kernels is essential for assessing the relevance of non-Gaussian terms in the effective action, especially in the presence of long-range interactions. In one-dimensional systems one can use our results to compute the scaling dimensions of corrections to the leading low-energy behavior of Luttinger liquids by analyzing the non-quadratic corrections in the bosonized action.
Surface fluctuations: Some models of surface fluctuations lead to the statistical mechanics of directed lines in two dimensions, which can be mapped to the quantum mechanics of fermions in one spatial dimension . Most recently, Prähofer and Spohn have shown that the probability distribution of height fluctuations in such models is Gaussian at long distances on the surface. For this result it was enough to establish that symmetrized $`N`$-loops in the associated Fermi system are less singular than the naive power-counting estimate. Our result Eq. (26) yields the precise scaling dimension of non-Gaussian terms, and implies in particular that high order corrections vanish very rapidly at long distances.
Acknowledgments:
We are grateful to H. Knörrer, H. Spohn, and E. Trubowitz for valuable discussions.
Appendix A: Proof of reduction formula
Following Wagner’s derivation for the static case, we prove the reduction formula (10) by applying the following many-dimensional version of Lagrange’s interpolation formula:
Lemma: Suppose that $`1\le d+1<N`$ and the $`(d+1)`$-dimensional complex vectors $`𝐚_1,\mathrm{},𝐚_N`$ are such that $`𝐚_{j_1},\mathrm{},𝐚_{j_{d+1}}`$ as well as $`(𝐚_{j_1}-𝐚_n),\mathrm{},(𝐚_{j_{d+1}}-𝐚_n)`$ are linearly independent for pairwise different indices $`j_1,\mathrm{},j_{d+1},n\in \{1,\mathrm{},N\}`$. For $`𝐣=(j_1,\mathrm{},j_{d+1})`$ with $`1\le j_1<\mathrm{}<j_{d+1}\le N`$ determine the complex $`(d+1)`$-dimensional vector $`𝐳^𝐣`$ by the system of linear equations $`𝐚_{j_r}\cdot 𝐳^𝐣=1`$ for $`r=1,\mathrm{},d+1`$. Then each complex homogeneous polynomial $`P(z_0,z)`$ of degree $`N-(d+1)`$ in the $`d+2`$ variables $`z_0,𝐳=(z_1,\mathrm{},z_{d+1})`$ can be written as
$$P(z_0,𝐳)=\sum _{\genfrac{}{}{0pt}{}{j_1,\mathrm{},j_{d+1}}{1\le j_1<\mathrm{}<j_{d+1}\le N}}P(1,𝐳^𝐣)\prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}^{N}\frac{(z_0-𝐚_n\cdot 𝐳)\mathrm{det}(𝐚_{j_1},\mathrm{},𝐚_{j_{d+1}})}{\mathrm{det}\left(\begin{array}{cccc}1& 1& \mathrm{}& 1\\ 𝐚_n& 𝐚_{j_1}& \mathrm{}& 𝐚_{j_{d+1}}\end{array}\right)}$$
(28)
where the vectors $`𝐚_1,\mathrm{},𝐚_N`$ enter the determinants as column vectors.
For a proof, see Ref. .
We apply the above lemma to the polynomial $`P(z_0,𝐳)=z_1^{N-(d+1)}`$ and
$$𝐚_n=\left(\begin{array}{c}-i(k_0-p_{n0})+\xi _{𝐩_n}\\ (𝐤-2𝐩_n)/\sqrt{2m}\end{array}\right)$$
(29)
where $`\xi _𝐩=𝐩^2/(2m)-\mu `$. Since $`P(1,𝐳^𝐣)=(z_1^𝐣)^{N-(d+1)}`$ and $`𝐚_{j_r}\cdot 𝐳^𝐣=1`$ for $`r=1,\mathrm{},d+1`$, Cramer’s rule yields
$`P(1,𝐳^𝐣){\displaystyle \prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}^{N}}\mathrm{det}(𝐚_{j_1},\mathrm{},𝐚_{j_{d+1}})={\displaystyle \prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}^{N}}z_1^𝐣\mathrm{det}(𝐚_{j_1},\mathrm{},𝐚_{j_{d+1}})=`$
$`{\displaystyle \prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}^{N}}\mathrm{det}\left(\begin{array}{ccc}1& \mathrm{}& 1\\ \frac{𝐤-2𝐩_{j_1}}{\sqrt{2m}}& \mathrm{}& \frac{𝐤-2𝐩_{j_{d+1}}}{\sqrt{2m}}\end{array}\right)=`$ (32)
$`{\displaystyle \prod _{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}^{N}}\left(\sqrt{2/m}\right)^d\mathrm{det}(𝐩_{j_2}-𝐩_{j_1},\mathrm{},𝐩_{j_{d+1}}-𝐩_{j_1})`$ (33)
In the last step we have subtracted the first column of the determinant from all the others and then applied Laplace’s theorem. We now evaluate the denominator in (28),
$$D=\mathrm{det}\left(\begin{array}{cccc}1& 1& \mathrm{}& 1\\ -i(k_0-p_{n0})+\xi _{𝐩_n}& -i(k_0-p_{j_10})+\xi _{𝐩_{j_1}}& \mathrm{}& -i(k_0-p_{j_{d+1}0})+\xi _{𝐩_{j_{d+1}}}\\ (𝐤-2𝐩_n)/\sqrt{2m}& (𝐤-2𝐩_{j_1})/\sqrt{2m}& \mathrm{}& (𝐤-2𝐩_{j_{d+1}})/\sqrt{2m}\end{array}\right)$$
(34)
Subtracting the first column from all the others and applying Laplace’s theorem yields
$$D=\mathrm{det}\left(\begin{array}{ccc}f_{j_1n}(\mathrm{𝟎})& \mathrm{}& f_{j_{d+1}n}(\mathrm{𝟎})\\ \sqrt{\frac{2}{m}}(𝐩_n-𝐩_{j_1})& \mathrm{}& \sqrt{\frac{2}{m}}(𝐩_n-𝐩_{j_{d+1}})\end{array}\right)$$
(35)
Adding $`𝐝^𝐣\cdot (𝐩_n-𝐩_{j_r})/m`$ to the $`r`$-th matrix element in the first row (adding thus multiples of the other rows to the first one) one obtains
$`D`$ $`=`$ $`\mathrm{det}\left(\begin{array}{ccc}f_{j_1n}(𝐝^𝐣)& \mathrm{}& f_{j_{d+1}n}(𝐝^𝐣)\\ \sqrt{\frac{2}{m}}(𝐩_n-𝐩_{j_1})& \mathrm{}& \sqrt{\frac{2}{m}}(𝐩_n-𝐩_{j_{d+1}})\end{array}\right)`$ (38)
$`=`$ $`\left(\sqrt{2/m}\right)^df_n^𝐣\mathrm{det}\left(\begin{array}{ccc}1& \mathrm{}& 1\\ 𝐩_n-𝐩_{j_1}& \mathrm{}& 𝐩_n-𝐩_{j_{d+1}}\end{array}\right)`$ (41)
Subtracting the first column from all others and applying Laplace’s theorem once again one obtains
$$D=\left(\sqrt{2/m}\right)^df_n^𝐣\mathrm{det}(𝐩_{j_2}-𝐩_{j_1},\mathrm{},𝐩_{j_{d+1}}-𝐩_{j_1})$$
(42)
Eqs. (33) and (42) yield
$$P(1,𝐳^𝐣)\underset{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}{\overset{N}{\prod }}\frac{\mathrm{det}(𝐚_{j_1},\mathrm{},𝐚_{j_{d+1}})}{\mathrm{det}\left(\begin{array}{cccc}1& 1& \mathrm{}& 1\\ 𝐚_n& 𝐚_{j_1}& \mathrm{}& 𝐚_{j_{d+1}}\end{array}\right)}=\underset{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}{\overset{N}{\prod }}\frac{1}{f_n^𝐣}$$
(43)
We now set $`z_0=0`$ and $`𝐳=(1,𝐤/\sqrt{2m})`$, such that
$$z_0-𝐚_n\cdot 𝐳=i(k_0-p_{n0})-\xi _{𝐤-𝐩_n}=G_0^{-1}(k-p_n)$$
(44)
With this choice of variables the above lemma thus yields the algebraic identity
$$1=\underset{\genfrac{}{}{0pt}{}{j_1,\mathrm{},j_{d+1}}{1\le j_1<\mathrm{}<j_{d+1}\le N}}{\sum }\ \underset{\genfrac{}{}{0pt}{}{n=1}{n\ne j_1,\mathrm{},j_{d+1}}}{\overset{N}{\prod }}\frac{1}{f_n^𝐣}G_0^{-1}(k-p_n)$$
(45)
Multiplying this equation by $`\prod _{j=1}^NG_0(k-p_j)`$ and integrating over $`k`$, one finally obtains the reduction formula, Eq. (10).
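The identity (45) has the structure of a partial-fraction decomposition of a product of propagators. Its scalar prototype, $`1=\sum _jc_j\prod _{n\ne j}(x-a_n)`$ with $`c_j=1/\prod _{n\ne j}(a_j-a_n)`$, can be checked numerically; the sketch below verifies only this prototype, not Eq. (45) itself, whose vector structure requires the full $`f_n^𝐣`$:

```python
def check_partial_fraction(a, x):
    """Verify 1 == sum_j c_j * prod_{n != j} (x - a_n), where
    c_j = 1 / prod_{n != j} (a_j - a_n).  Each summand is a Lagrange
    basis polynomial, so the sum interpolates the constant function 1."""
    total = 0.0
    for j, aj in enumerate(a):
        term = 1.0
        for n, an in enumerate(a):
            if n != j:
                term *= (x - an) / (aj - an)
        total += term
    return total

# distinct "poles" and an arbitrary evaluation point (illustrative numbers)
val = check_partial_fraction([0.3, 1.7, -2.2, 4.1], 0.77)
```

The check succeeds for any set of distinct poles and any evaluation point, which is the content of the algebraic lemma used above.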
no-problem/9904/cond-mat9904404.html | ar5iv | text
# Structure and properties of a novel fulleride 𝑆𝑚₆𝐶₆₀
## Acknowledgments
This work is supported by a grant from the Natural Science Foundation of China.
FIGURE CAPTIONS
Figure 1:
X-ray diffraction pattern of the sample $`Sm_6C_{60}`$ collected with synchrotron radiation. The synchrotron beam was monochromatized to 0.8500 Å. The crosses are experimental points and the solid line is a Rietveld fit to the model $`Sm_6C_{60}`$ in the space group $`Im\overline{3}`$. The allowed reflection positions are denoted by ticks.
Figure 2:
Room temperature Raman spectrum of $`Sm_6C_{60}`$. The results of a line-shape analysis for the $`H_g(2)`$ mode are shown (inset). The dashed lines are computer fits for the individual components, which add up to the full line on top of the experimental results.
Figure 3:
The temperature dependence of resistivity for polycrystalline $`Sm_6C_{60}`$. The inset plots the same data as $`\mathrm{ln}\rho `$ vs. $`T^{-1/2}`$; a linear relation would be expected if the charge transport followed the granular-metal theory.
|
no-problem/9904/cond-mat9904075.html
|
ar5iv
|
text
|
# On the Scaling Behavior of the Abelian Sandpile Model
## Abstract
The abelian sandpile model in two dimensions does not show the type of critical behavior familiar from equilibrium systems. Rather, the properties of the stationary state follow from the condition that an avalanche started at a distance $`r`$ from the system boundary has a probability proportional to $`1/\sqrt{r}`$ of reaching the boundary. As a consequence, the scaling behavior of the model can be obtained by evaluating dissipative avalanches alone, which not only allows the values of all exponents to be determined, but also reveals the breakdown of finite-size scaling.
PACS numbers: 64.60.Lx
Since its introduction in 1987, the sandpile model has been considered as the prototype of a self-organized critical (SOC) system . Computer simulations suggest that irrespective of the initial conditions and of details of the model rules, the system self-organizes into a “critical” state with a power-law size distribution of avalanches. The concept of SOC is thought to explain the frequent occurrence of power laws in nature.
Considerable effort has been made to determine the scaling behavior of the two-dimensional abelian sandpile model. The advantage of this model is that several of its properties can be calculated analytically . However, the exponent $`\tau _r`$ characterizing the distribution of avalanche radii, and related exponents for the distribution of the avalanche duration $`t`$ and the number of topplings $`s`$ during an avalanche, have resisted any attempt at an analytical calculation. The numerical determination of these exponents is hampered by the fact that the double-logarithmic plots show slopes that increase with increasing system size, until finite-size effects set in, indicating that the asymptotic scaling behavior does not yet occur for the system sizes accessible to computer simulations. Thus, the predictions for the value of $`\tau _s`$ vary between 1.22 and 1.27 or even 1.29 . The latter two results were obtained under the assumption that the system displays finite-size scaling (FSS).
Recently, evidence was found that the abelian sandpile model does not display FSS . This result was obtained from an investigation of multifractal spectra. Indeed, there is no a priori reason why the abelian sandpile model should display FSS. The concept of FSS has its origin in equilibrium critical phenomena where a small finite system cannot be distinguished from a small part of a large system. However, this is not the case for the abelian sandpile model in two dimensions. In a small system, the sites that participate in an avalanche may topple a few times during the duration of the avalanche. In a larger system, the number of topplings per site during a large avalanche is larger. Thus, locally collected data (i.e. the number of topplings of a given site) contains information about the size of the system. Also, finite-size scaling is based on the assumption that boundaries play no special role in the system. However, the boundaries of the abelian sandpile model play an essential role as they are the only place where sand can leave the system.
It is the purpose of this letter to elucidate the scaling behavior of the abelian sandpile model. Since boundaries play a special role in the system, it is essential to consider avalanches that reach the boundaries of the system separately from those that don’t. It turns out that the dissipative avalanches display beautiful power laws. These power laws imply that each avalanche started at a distance $`r`$ from the system boundary has a probability proportional to $`1/\sqrt{r}`$ to reach the boundary and, if it does so, dissipates on average a number of sand grains of the order of $`\sqrt{r}`$. From the scaling behavior of dissipative avalanches, we obtain the values of the critical exponents characterizing the system, and we see compelling evidence for the violation of FSS.
The two-dimensional abelian sandpile model is defined on a square lattice with $`L^2`$ sites. At each site $`i`$ an integer variable $`z_i`$ represents the number of grains. Grains are added individually to randomly chosen sites of the system. When the number of grains at a site $`i`$ exceeds $`z_c=3`$, site $`i`$ is unstable and topples, its height being reduced to $`z_i-4`$, and the heights $`\{z_j\}`$ of all four nearest neighbors being increased by one. If $`i`$ is a boundary site with $`l<4`$ neighbors, $`4-l`$ grains leave the system. If a neighbor $`j`$ becomes unstable due to the addition of a grain, it also topples, and the avalanche stops when a new stable configuration is reached. During an avalanche, no new grains are added to the system. The size $`s`$ of an avalanche is defined to be the total number of topplings. The radius of an avalanche is in this paper taken to be the maximum distance of toppled sites from the starting site of the avalanche. It has proven useful to decompose an avalanche into “waves of toppling” . The $`n`$th wave of toppling begins when the starting point of an avalanche topples for the $`n`$th time, and all those sites belong to it that topple immediately after a nearest neighbor that belongs to the same wave has toppled.
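The toppling rule just described fits in a few lines of code. The sketch below (a minimal illustration; the lattice size, number of grains, and random seed are arbitrary choices, not values from this letter) drives a small open-boundary lattice toward its stationary state and records avalanche sizes:

```python
import random

def topple(z, L):
    """Relax the lattice until every site is stable (z <= z_c = 3).
    Grains pushed across the open boundary leave the system.
    Returns the total number of topplings, i.e. the avalanche size s."""
    s = 0
    stack = [(i, j) for i in range(L) for j in range(L) if z[i][j] > 3]
    while stack:
        i, j = stack.pop()
        if z[i][j] <= 3:           # may have been relaxed already
            continue
        z[i][j] -= 4
        s += 1
        if z[i][j] > 3:            # still unstable after one toppling
            stack.append((i, j))
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < L and 0 <= nj < L:
                z[ni][nj] += 1
                if z[ni][nj] > 3:
                    stack.append((ni, nj))
            # else: the grain is dissipated across the boundary
    return s

def drive(z, L, n_grains, rng):
    """Add grains at random sites, one at a time, relaxing after each."""
    sizes = []
    for _ in range(n_grains):
        i, j = rng.randrange(L), rng.randrange(L)
        z[i][j] += 1
        sizes.append(topple(z, L))
    return sizes

rng = random.Random(1)
L = 16
z = [[0] * L for _ in range(L)]
sizes = drive(z, L, 4000, rng)
```

Histogramming `sizes` in the stationary state, and tagging each avalanche by whether any grain left the lattice, would separate the dissipative and nondissipative distributions discussed in the text.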
Let us first discuss the radius distribution of avalanches, $`n_r(r,L)`$. Since its asymptotic (i.e., $`L\to \infty `$) behavior is difficult to extract from simulation data, recent results for the exponent $`\tau _r`$ vary between $`7/5`$ and $`5/3`$ . In , a version of the abelian sandpile model was studied where sand grains were only dropped at the center of the system. It was found that the fraction $`1/\sqrt{L}`$ of all avalanches reach the boundaries. This finding would imply that $`\tau _r=3/2`$, just as predicted in . This is indeed the correct exponent, as a separate evaluation of dissipative and nondissipative avalanches shows. We denote by $`n_r^{(1)}(r,L)`$ the radius distribution of nondissipative avalanches, i.e., those avalanches that do not reach the boundaries. The distribution of dissipative avalanches is denoted $`n_r^{(2)}(r,L)`$. Normalization of the radius distribution requires
$$\int _1^{\infty }[n_r^{(1)}(r,L)+n_r^{(2)}(r,L)]\,dr=1.$$
Fig. 1 shows the result of computer simulations for $`n_r^{(2)}(r,L)`$.
The distribution $`n_r^{(2)}(r,L)`$ has the form
$$n_r^{(2)}(r,L)=L^{-3/2}g(r/L).$$
(1)
The prefactor is such that the total weight of $`n_r^{(2)}(r,L)`$ is $`\int _1^Ln_r^{(2)}(r,L)\,dr\propto 1/\sqrt{L}`$, in agreement with the result in that the fraction $`1/\sqrt{L}`$ of all avalanches reach the boundaries. Since grains are dropped randomly into the system, the probability that an avalanche is triggered at a fixed distance $`r\ll L`$ from the boundary is proportional to $`1/L`$, leading to $`n_r^{(2)}(r,L)=(1/L)f(r)`$ for $`r\ll L`$. Together with Eq. (1), this gives $`g(r/L)\propto (r/L)^{-1/2}`$ for small $`r/L`$. From Fig. 1, it can be seen that this $`1/\sqrt{r}`$-behavior extends almost over the entire range of $`r`$ values. An avalanche triggered at a distance $`r`$ from the boundary consequently has a probability proportional to $`1/\sqrt{r}`$ to reach the boundary, and if it reaches the boundary it dissipates on average a number of sand grains of the order of $`\sqrt{r}`$.
Fig. 2 shows the radius distribution of avalanches that do not reach the boundaries. For the system sizes studied in the simulations, no power law is visible. The slope seems to become steeper with increasing $`r`$. (The last part of each curve, which begins at the point where it splits from the curves for larger system sizes, is due to finite-size effects and it must be ignored in the subsequent discussion which refers to the thermodynamic limit $`L\to \infty `$.) However, the curves $`n_r^{(1)}(r,L)`$ must approach a power law $`n_r^{(1)}(r,\infty )\propto r^{-\tau _r}`$ for sufficiently large $`r`$. The reason is that the last wave of topplings of an avalanche has been proved to be distributed according to a power law $`r^{-7/4}`$ , and $`n_r^{(1)}(r,\infty )`$ cannot be steeper than this. From the above results for $`n_r^{(2)}(r,L)`$, we can even deduce the value of $`\tau _r`$: The probability that an avalanche triggered at distance $`r`$ from the boundary reaches the boundary is proportional to $`Ln_r^{(2)}(r,L)`$, and it is identical to the probability that an avalanche triggered in the interior of the system reaches at least a radius $`r`$, which is given by $`\int _r^{\infty }n_r^{(1)}(r^{\prime },\infty )\,dr^{\prime }`$. The reason is that the landscape of height values in the abelian sandpile model does not show long-range correlations. Rather, correlations decay as fast as $`r^{-4}`$ , implying that avalanches spread in the same way everywhere in the system, as long as they do not encounter the boundaries. Consequently, $`n_r^{(1)}(r,\infty )`$ must fall off as $`r^{-3/2}`$ for sufficiently large $`r`$, fixing the value of $`\tau _r=3/2`$. Fig. 2 shows that this asymptotic value is barely reached for the system sizes accessible to computer simulations. This explains why evaluations based on the complete radius distribution $`n_r^{(1)}(r,L)+n_r^{(2)}(r,L)`$ do not reveal the correct exponent.
The distribution of the number of topplings $`n_s(s,L)=n_s^{(1)}(s,L)+n_s^{(2)}(s,L)`$ in avalanches can be analyzed in a similar way. The superscript $`(1)`$ refers again to nondissipative, the superscript $`(2)`$ to dissipative avalanches. If the number of topplings $`s`$ in an avalanche of radius $`r`$ were always of the order $`r^D`$, simple scaling would hold, resulting in a FSS form $`n_s(s,L)=s^{-\tau _s}𝒞(s/L^D)`$ and a scaling relation $`D=(\tau _r-1)/(\tau _s-1)`$. The breakdown of simple scaling was pointed out for the first time in . It can best be visualized by looking at the distribution of the number of topplings $`n_s^{(\mathrm{spanning})}(s,L)`$ of those avalanches that span the entire system, i.e. that touch all four edges of the boundary. These avalanches have a radius of the order $`L`$.
Evaluation of the curves represented in Fig. 3 shows that the maximum scales as $`L^D`$ with $`D=9/4`$, the lower cutoff as $`L^2`$, and the upper cutoff as $`L^{11/4}`$. These exponents are obtained by scaling $`s`$ in such a way that the maxima, the left tails, or the right tails collapse. The mean scales roughly as $`L^{5/2}`$. Clearly, a complete scaling collapse would only be possible if the four quantities were characterized by the same exponent. The scaling behavior of the mean implies
$$\overline{s}=\int _1^{\infty }s\,n_s(s,L)\,ds\propto L^2,$$
in agreement with the analytical result given in .
Fig. 4 shows the distributions $`n_s^{(2)}(s,L)`$ of the number of topplings in dissipative avalanches.
The curves have the form $`n_s^{(2)}(s,L)\propto L^{-1}s^{-7/9}`$ for sufficiently small $`s`$, and then have a cutoff that does not display FSS but is similar to the one for spanning avalanches. The total weight of dissipative avalanches, $`1/\sqrt{L}`$, is proportional to $`\int _1^{L^{9/4}}n_s^{(2)}(s,L)\,ds`$. The exponent $`\tau _s`$ can be derived from the condition
$$s^{1-1/D}Ln_s^{(2)}(s,L)\propto \int _s^{\infty }n_s^{(1)}(s^{\prime },\infty )\,ds^{\prime },$$
giving $`\tau _s=11/9`$, which agrees very well with the value obtained by Manna . The asymptotic slope $`\tau _s`$ is barely visible in $`n_s^{(1)}(s,L)`$ for the system sizes accessible to computer simulations. For the value of the exponent $`\tau _s`$, the multiscaling features are irrelevant, since only the exponent $`D=9/4`$ enters the evaluation of $`\tau _s`$. However, the multiscaling behavior is relevant for the result $`\overline{s}\propto L^2`$, which stands in no relation to the value of $`\tau _s`$.
The failure of FSS in the system is due to a broad distribution of the number of waves of toppling in avalanches. Single waves of toppling display FSS, as shown in Fig. 5 for the first wave of toppling.
The scaling with $`L^2`$ follows directly from the fact that waves are compact and that each site topples once in a wave. Repeating the evaluation performed already twice, we find an exponent $`\tau _s=11/8`$ for the first wave, which is identical to the analytically derived exponent for the last wave , and different from the exponent $`5/4`$ for boundary avalanches in a sector of $`360^{\circ }`$ . From the condition that waves are compact, it follows immediately that $`\tau _r=7/4`$ for the first wave, which means that a fraction proportional to $`1/L^{3/4}`$ of all first waves reach the boundary. The result $`\tau _r=7/4`$ follows also directly from a scaling collapse of the radius distribution of dissipative first waves.
For the distribution of duration times $`t`$ of avalanches, exactly the same analysis can be performed as for the number of topplings $`s`$. One finds again a violation of FSS, the typical duration time of an avalanche of radius $`r`$ being of the order $`r^z`$ with $`z=4/3`$, the upper cutoff of the duration time being larger than $`r^{3/2}`$, and the mean duration time of avalanches, averaged over all avalanche sizes, being $`\overline{t}\propto L`$.
The following table summarizes the results for the exponents:
| | $`\tau _r`$ | $`\tau _t`$ | $`\tau _s`$ | $`z`$ | $`D`$ |
| --- | --- | --- | --- | --- | --- |
| $`1^{st}`$ wave | 7/4 | 8/5 | 11/8 | 5/4 | 2 |
| avalanche | 3/2 | 11/8 | 11/9 | 4/3 | 9/4 |
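Although FSS is violated, the typical-size relation $`D=(\tau _r-1)/(\tau _s-1)`$ quoted above is satisfied by the tabulated values, as is the analogous relation $`z=(\tau _r-1)/(\tau _t-1)`$ for the duration (the latter is written here by analogy, as an assumption). A quick consistency check with exact rational arithmetic:

```python
from fractions import Fraction as F

# (tau_r, tau_t, tau_s, z, D) for each row of the table
rows = {
    "1st wave":  (F(7, 4), F(8, 5), F(11, 8), F(5, 4), F(2, 1)),
    "avalanche": (F(3, 2), F(11, 8), F(11, 9), F(4, 3), F(9, 4)),
}

for name, (tau_r, tau_t, tau_s, z, D) in rows.items():
    # D = (tau_r - 1)/(tau_s - 1): radius-size scaling relation
    assert (tau_r - 1) / (tau_s - 1) == D, name
    # analogous relation for the duration: z = (tau_r - 1)/(tau_t - 1)
    assert (tau_r - 1) / (tau_t - 1) == z, name
```

Both rows pass exactly, e.g. for the avalanche row $`(3/2-1)/(11/9-1)=9/4=D`$.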
In summary, we have shown that the scaling behavior of the abelian sandpile model in two dimensions is tied to the condition that a fraction proportional to $`1/\sqrt{r}`$ of all avalanches started at a distance $`r`$ from the boundary reach the boundary, rather than to a self-similarity of large and small avalanches. As a consequence, the scaling properties are seen most clearly from the dissipative avalanches, and a violation of FSS can occur.
The condition that a fraction proportional to $`1/\sqrt{L}`$ of all avalanches reach the boundaries, together with the two conditions $`\overline{s}\propto L^2`$ and $`\overline{t}\propto L`$, seems so simple that it should be derivable from simple arguments. In fact, $`\overline{s}\propto L^2`$ follows from the diffusive motion of sand grains , and appears to be also valid in higher dimensions . The first condition is equivalent to the statement that the probability of triggering a system-spanning avalanche, $`p`$, is proportional to the density of surface sites, $`\rho `$, that topple during a system-spanning avalanche. The reason is that the product $`p\rho L^{d-1}`$ is the mean number of grains dissipated during an avalanche, which must be identical to 1, resulting in $`p\propto \rho \propto 1/L^{(d-1)/2}`$ ($`d`$ denotes the dimension of the system). A condition $`p\propto \rho `$ holds for example for percolation clusters in systems above the percolation threshold. It also holds at the percolation threshold, if there are only a finite number of spanning clusters in the system. In three dimensions, this condition would lead to $`\tau _r=2`$, which agrees with the simulation result given in . If avalanches are compact, as suggested by computer simulations, the exponent $`\tau _a`$ characterizing the area distribution in three dimensions is then $`\tau _a=4/3`$, giving a mean avalanche area $`\propto L^2`$. Together with the condition $`\overline{s}\propto L^2`$, it follows that the mean number of topplings is of the same order as the mean area, leading to $`\tau _s=4/3`$ and to the conclusion that the number of waves of topplings remains finite in the thermodynamic limit, and that FSS is not violated in three dimensions. Finally, a compact avalanche of $`\overline{s}\propto L^2`$ sites has a radius $`\propto L^{2/d}`$. In two dimensions, this corresponds to the mean avalanche duration $`\overline{t}`$.
If we assume the same in three dimensions, we have $`\overline{t}\propto L^{2/3}`$, and we find $`\tau _t=8/5`$, again in agreement with simulation results . In dimensions above the upper critical dimension 4 , avalanches are no longer compact, the system may contain many spanning clusters, and the above arguments can therefore not be applied.
It thus seems that the scaling behavior of the abelian sandpile model follows from a few simple principles, a connection which still has to be explored in greater depth.
###### Acknowledgements.
This work was supported by EPSRC Grant No. GR/K79307.
no-problem/9904/hep-ex9904022.html | ar5iv | text
# Implication of 𝑊-boson Charge Asymmetry Measurements in 𝑝𝑝̄ Collisions for Models of Charge Symmetry Violations in Parton Distributions
## Abstract
A surprisingly large charge symmetry violation of the sea quarks in the nucleon has been proposed in a recent article by Boros et al. as an explanation of the discrepancy between neutrino (CCFR) and muon (NMC) nucleon structure function data at low $`x`$. We show that these models are ruled out by the published CDF $`W`$ charge asymmetry measurements, which strongly constrain the ratio of $`d`$ and $`u`$ quark momentum distributions in the proton over the $`x`$ range of $`0.006`$ to $`0.34`$. This constraint also limits the systematic error from possible charge symmetry violation in the determinations of $`\mathrm{sin}^2\theta _W`$ from $`\nu N`$ scattering experiments.
In a recent Physical Review Letter , Boros et al. proposed a model in which a substantial charge symmetry violation (CSV) for parton distributions in the nucleon accounts for the experimental discrepancy between neutrino (CCFR) and muon (NMC) nucleon structure function data at low $`x`$. Charge symmetry (sometimes also referred to as isospin symmetry) is a symmetry which interchanges protons and neutrons, thus simultaneously interchanging up and down quarks; it implies the equivalence between the up (down) quark distribution in the proton and the down (up) quark distribution in the neutron. Currently, all fits to Parton Distribution Functions (PDFs) are performed under the assumption of charge symmetry between neutrons and protons.
Boros et al. have proposed that charge symmetry is broken such that the $`d`$ sea quark distribution in the nucleon is larger than the $`u`$ sea quark distribution for $`x<0.1`$, which also results in a violation of flavor symmetry. Their paper notes that structure functions extracted in neutrino deep inelastic scattering experiments are dominated by the higher-statistics data taken with neutrino (versus antineutrino) beams. They note that neutrino-induced charged current interactions couple to $`d`$ quarks and not to $`u`$ quarks, while the muon coupling to the 2/3-charged $`u`$ quark is much larger than the coupling to the 1/3-charged $`d`$ quark. Therefore, if the $`d`$ sea quark distribution is significantly larger than the $`u`$ sea quark distribution in the nucleon, there would be a significant difference between the nucleon structure functions as measured in neutrino and muon scattering experiments. However, both neutrino and muon scattering data have been taken on approximately isoscalar targets, such as iron or deuterium. Isoscalar targets have an equal number of neutrons and protons. A larger number of $`d`$ sea quarks than $`u`$ sea quarks in an isoscalar target implies a violation of charge symmetry. Therefore, Boros et al. proposed that a large charge symmetry violation of the sea quarks in the nucleon might explain the observed discrepancy (10–15%) between neutrino and muon structure function data.
Boros et al. define the following charge symmetry violations in the nucleon sea.
$`\delta \overline{u}(x)`$ $`=\overline{u}^p(x)-\overline{d}^n(x),`$ (1)
$`\delta \overline{d}(x)`$ $`=\overline{d}^p(x)-\overline{u}^n(x),`$ (2)
where $`\overline{u}^p(x)`$ and $`\overline{d}^p(x)`$ are the distributions of the $`u`$ and $`d`$ sea anti-quarks in the proton, respectively. Similarly, $`\overline{u}^n(x)`$ and $`\overline{d}^n(x)`$ are the distributions of the $`u`$ and $`d`$ sea anti-quarks in the neutron, respectively. The distributions of the quarks and antiquarks in the sea are assumed to be the same. The relations for CSV in the sea quark distributions are analogous to equations (1) and (2) for the sea anti-quarks. Charge symmetry in the valence quarks is assumed to be conserved, since there is good agreement between the neutrino and muon scattering data for $`x>0.1`$.
Within this model, Boros et al. extract a large CSV from the difference in structure functions as measured in neutrino and muon scattering experiments. Theoretically, such a large charge symmetry violation (of order of 25% to 50%) is very unexpected. Therefore, the article has generated a significant amount of interest both within and outside the high energy physics community . If the proposed model is valid, all parametrizations of PDFs would have to be modified. In addition, physics analyses which rely on the knowledge of PDFs (e.g. the extraction of the electro-weak mixing angle from the ratio of neutral current and charged current cross sections) would be significantly affected.
In this communication we show that the CSV models proposed by Boros et al. are ruled out by the $`W`$ charge asymmetry measurements made by the CDF experiment at the Fermilab Tevatron collider . These $`W`$ data provide a very strong constraint on the ratio of $`d`$ and $`u`$ quark momentum distributions in the proton over the $`x`$ range of $`0.006`$ to $`0.34`$.
Figure 1 shows the quantity $`x\mathrm{\Delta }(x)=x[\delta \overline{d}(x)-\delta \overline{u}(x)]/2`$ required to explain the difference between neutrino and muon data, as given in Fig. 3 of Boros et al. . The average $`Q^2`$ of these data is about 4 (GeV$`/c`$)<sup>2</sup>. The dashed line is the strange sea quark distribution \[$`xs(x)`$\] in the nucleon as measured by the CCFR collaboration using dimuon events produced in neutrino–nucleon interactions. Boros et al. state that the magnitude of the implied charge symmetry violation is somewhere between the full magnitude of the strange sea and half the magnitude of the strange sea. Since the strange sea itself has been measured to be about half of the average of the $`d`$ and $`u`$ sea, this implies a charge symmetry violation of order 25% (at $`x=0.05`$) and 50% (at $`x=0.01`$).
However, as can be seen in Fig. 1, the shape of the strange sea does not provide a good parametrization of the charge symmetry violation; therefore, we have parametrized $`x\mathrm{\Delta }(x)`$ (at $`Q^2=4`$ (GeV$`/c`$)<sup>2</sup>) as follows. For $`x>0.1`$, $`x\mathrm{\Delta }(x)=0`$. For $`x<0.01`$, $`x\mathrm{\Delta }(x)=0.15`$, and for $`0.01<x<0.1`$, $`x\mathrm{\Delta }(x)=0.15[\mathrm{log}(x)-\mathrm{log}(0.1)]/[\mathrm{log}(0.01)-\mathrm{log}(0.1)]`$. This parametrization is shown as the solid line in Fig. 1. The dot-dashed line shows the value of our parametrization when evolved to $`Q^2=M_W^2`$. Boros et al. suggest that it is theoretically expected that $`\mathrm{\Delta }(x)=\delta \overline{d}(x)=-\delta \overline{u}(x)`$, which means that the sum of the $`u`$ and $`d`$ sea distributions is the same for protons and neutrons. Within the assumption that $`\mathrm{\Delta }(x)=\delta \overline{d}(x)=-\delta \overline{u}(x)`$, we use two models to parametrize the range of allowed changes in PDFs that introduce the proposed charge symmetry violations.
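The piecewise parametrization is simple enough to transcribe directly into code (behavior exactly at the break points is immaterial, since the form is continuous there):

```python
import math

def x_delta(x):
    """Piecewise parametrization of x*Delta(x) at Q^2 = 4 (GeV/c)^2:
    zero above x = 0.1, a plateau of 0.15 below x = 0.01, and a
    log-linear interpolation in between."""
    if x >= 0.1:
        return 0.0
    if x <= 0.01:
        return 0.15
    return 0.15 * (math.log(x) - math.log(0.1)) / (math.log(0.01) - math.log(0.1))
```

At the logarithmic midpoint of the interpolation region, $`x=\sqrt{10^{-3}}\approx 0.032`$, the function takes half its plateau value, 0.075.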
In Model 1, it is assumed that the standard PDF parametrizations are dominated by neutrino data and therefore represent the average of $`d`$ and $`u`$ sea quark distributions. Therefore, half of the CSV is introduced into the $`u`$ sea quark distribution and half of the effect is introduced into the $`d`$ sea quark distribution such that the average of $`d`$ and $`u`$ sea quark distributions is unchanged.
$`\overline{u}^p(CSV)`$ $`=\overline{u}^p-\mathrm{\Delta }(x)/2,`$ (3)
$`\overline{d}^p(CSV)`$ $`=\overline{d}^p+\mathrm{\Delta }(x)/2,`$ (4)
$`\overline{u}^n(CSV)`$ $`=\overline{u}^n-\mathrm{\Delta }(x)/2,`$ (5)
$`\overline{d}^n(CSV)`$ $`=\overline{d}^n+\mathrm{\Delta }(x)/2.`$ (6)
In Model 2, it is assumed that standard PDFs are dominated by muon scattering data, and therefore are good representation of the 2/3 charge $`u`$ quark distribution. In this model, the entire effect is introduced into the $`d`$ sea quark distribution as follows;
$`\overline{u}^p(CSV)`$ $`=`$ $`\overline{u}^p,`$ (7)
$`\overline{d}^p(CSV)`$ $`=`$ $`\overline{d}^p+\mathrm{\Delta }(x),`$ (8)
$`\overline{u}^n(CSV)`$ $`=`$ $`\overline{u}^n,`$ (9)
$`\overline{d}^n(CSV)`$ $`=`$ $`\overline{d}^n+\mathrm{\Delta }(x).`$ (10)
Model 2 would change the total quark sea.
In order to have a precise test of the CSV effect, all PDFs would have to be refitted based on the above two models. However, the ratio of the $`d`$ and $`u`$ distributions will be almost the same whether or not we refit the PDFs. The $`d/u`$ ratio which has been extracted from $`F_2^n/F_2^p`$ measurements (assuming charge symmetry) is in fact the quantity $`u^n/u^p`$, which does not have any sensitivity to the proposed CSV effect. In order to test for CSV effects, measurements of $`d^p/u^p`$ or $`d^n/u^n`$ are required. Therefore, the CDF measurements of the $`W`$ charge asymmetry in $`p\overline{p}`$ collisions provide a unique test of CSV effects, because of the direct sensitivity of these data to the $`d/u`$ ratio in the proton (note that the $`d`$ and $`u`$ quark distributions at small $`x`$ are dominated by the quark–antiquark sea).
At Tevatron energies, $`W^+`$ ($`W^{-}`$) bosons are produced in $`p\overline{p}`$ collisions primarily by the annihilation of $`u`$ ($`d`$) quarks in the proton and $`\overline{d}`$ ($`\overline{u}`$) quarks from the antiproton. Because $`u`$ quarks carry on average more momentum than $`d`$ quarks , the $`W^+`$ bosons tend to follow the direction of the incoming proton and the $`W^{-}`$ bosons that of the antiproton. The charge asymmetry in the production of $`W`$ bosons as a function of rapidity ($`y_W`$) is therefore related to the difference in the $`u`$ and $`d`$ quark distributions, and is roughly proportional to the ratio of the difference and the sum of the quantities $`d(x_1)/u(x_1)`$ and $`d(x_2)/u(x_2)`$, where $`x_1`$ and $`x_2`$ are the fractions of the proton momentum carried by the $`u`$ and $`d`$ quarks, respectively. (Note that the quark distributions in the proton are equal to the antiquark distributions in the antiproton.) At large rapidity, $`x_1`$ is larger than 0.1, which is a region where CSV does not exist. On the other hand, $`x_2`$ is in general less than 0.1, and a 25% to 50% CSV effect would imply a very large effect on the $`W`$ asymmetry. Since the $`W`$ charge asymmetry is sensitive to the $`d/u`$ ratio, it does not matter whether the CSV effect at small $`x`$ is placed in the $`d`$ or the $`u`$ sea quark distribution; all of these models would result in a similar change in the $`W`$ asymmetry.
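The leading-order mechanism can be made concrete with a toy calculation. In the sketch below, the ratio-of-ratios form of the production asymmetry and the kinematic relation $`x_{1,2}=(M_W/\sqrt{s})e^{\pm y}`$ are standard leading-order ingredients, while the functional form of $`d/u`$ and the size of the CSV-like shift `eps` are invented for illustration (they are neither CTEQ4M nor the actual Boros et al. model):

```python
import math

M_W, SQRT_S = 80.4, 1800.0          # GeV; Tevatron Run I energy
X0 = M_W / SQRT_S

def du_ratio(x, eps=0.0):
    """Toy d/u ratio: near 1 at small x, falling toward 0 at large x.
    `eps` adds a CSV-like excess of d over u below x = 0.1 (illustrative)."""
    r = (1.0 - x) ** 2
    if x < 0.1:
        r += eps
    return r

def w_asymmetry(y, eps=0.0):
    """Leading-order W production charge asymmetry,
    A(y_W) ~ [R(x2) - R(x1)] / [R(x2) + R(x1)] with R = d/u and
    x1 = X0*exp(+y) (the larger, proton-side momentum fraction)."""
    x1, x2 = X0 * math.exp(y), X0 * math.exp(-y)
    r1, r2 = du_ratio(x1, eps), du_ratio(x2, eps)
    return (r2 - r1) / (r2 + r1)

# the asymmetry vanishes at y = 0, is odd in y, and a small-x excess of
# d quarks (eps > 0) makes it larger at positive rapidity
base = w_asymmetry(1.0)
csv = w_asymmetry(1.0, eps=0.3)
```

With these toy inputs the CSV-shifted curve lies above the unshifted one at positive rapidity, the same direction in which the Model 1 and Model 2 predictions in Fig. 2 overshoot the data.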
Experimentally, the $`W`$ rapidity is not determined because of the unknown longitudinal momentum of the neutrino from the $`W`$ decay. What is actually measured by the CDF collaboration is the lepton charge asymmetry which is a convolution of the $`W`$ production charge asymmetry and the well known asymmetry from the $`V`$-$`A`$ $`W`$ decay. The two asymmetries are in opposite directions and tend to cancel at large values of rapidity. However, since the $`V`$-$`A`$ asymmetry is well understood, the lepton asymmetry is still sensitive to the parton distributions. The lepton charge asymmetry is defined as:
$$A(y_l)=\frac{d\sigma ^+/dy_l-d\sigma ^{-}/dy_l}{d\sigma ^+/dy_l+d\sigma ^{-}/dy_l},$$
(11)
where $`d\sigma ^+`$ ($`d\sigma ^{-}`$) is the cross section for $`W^+`$ ($`W^{-}`$) decay leptons as a function of lepton rapidity, with positive rapidity being defined in the proton beam direction. The CDF data shown in Fig. 2 span the broad range of lepton rapidity ($`0.0<|y_l|<2.2`$), and provide information about the $`d/u`$ ratio in the proton over the wide $`x`$ range ($`0.006<x<0.34`$). Therefore, the CDF $`W`$ asymmetry data provide a strong tool to test the CSV models over a broad range of $`x`$, and not just in part of the range proposed in the Boros et al. model.
Also shown in Fig. 2 (solid line) are the predictions for the $`W`$ asymmetry from QCD calculated to Next-to-Leading-Order (NLO) using the program DYRAD , with the CTEQ4M PDF parametrization for the $`d`$ and $`u`$ quark distributions in the proton (we have used CTEQ4 because it is the PDF set that has been used by Boros et al. in their paper ). As pointed out by Yang and Bodek , the small difference between the data and the prediction of the CTEQ4M PDF at high rapidity arises because the $`d`$ quark distribution is somewhat underestimated at high $`x`$ in the standard PDF parametrizations. The predictions of the CTEQ4M PDF with the modifications proposed by Yang and Bodek are shown as the dashed-dotted line in the figure.
The two dotted lines in Fig. 2 show the predicted $`W`$ asymmetry for the CTEQ4M PDF with the proposed Boros et al. charge symmetry violation in the sea for Model 1 and Model 2, respectively. The CDF $`W`$ data clearly rule out these models.
Most striking in this analysis is the broad range of lepton rapidity over which this disagreement with the CSV models occurs. This suggests that models of this class would be ruled out over a broad range of $`x`$, and not just in the part of the range proposed in the Boros et al. model.
In the direct measurement of the $`W`$ mass at the Tevatron, the CDF $`W`$ asymmetry data have been used to limit the error on $`M_W`$ from PDFs to about 15 MeV. This has been done by calculating the deviation between the error weighted average measured asymmetry over the rapidity range of the data, and the predictions from various PDFs. This measured average asymmetry for the data is $`0.087\pm 0.003`$. The predicted average asymmetries (weighted by the same errors as the data) are 0.094, 0.125, and 0.141 for the CTEQ4M PDF, and for Model 1 and Model 2, respectively. If we only accept PDFs which are within two standard deviations of the CTEQ4M PDF, the $`W`$ asymmetry data rule out CSV effects at the level of more than 10 standard deviations for the two models with CSV effects.
Another precision measurement which is sensitive to CSV effects is the measurement of neutral-current scattering in neutrino-nucleon collisions. Just as the couplings to $`u`$ and $`d`$ quarks differ in magnitude in neutral-current $`\mu `$$`q`$ scattering at NMC, the couplings to $`u`$ and $`d`$ quarks also differ in neutral-current $`\nu `$$`q`$ scattering. In this case, the left-handed and right-handed couplings of the neutral current to quarks are given by $`g_L=I_3-Q\mathrm{sin}^2\theta _W`$ and $`g_R=-Q\mathrm{sin}^2\theta _W`$, where $`Q`$ is the quark charge and $`I_3`$ is the third component of the weak isospin in the quark doublet, $`+1/2`$ for $`u`$-type quarks and $`-1/2`$ for $`d`$-type quarks. Therefore the CSV-inspired enhancement in the $`d`$ quark distributions will change the cross-section for neutral-current scattering, even for an isoscalar target. Because these cross-section measurements are used to extract electroweak parameters, a CSV effect could then affect the precision measurements of $`\mathrm{sin}^2\theta _W`$.
The most precise measurements of neutral-current neutrino-quark scattering come from the CCFR and NuTeV experiments. As noted above, CCFR had a beam of mixed neutrinos and anti-neutrinos, dominated by neutrinos. The NuTeV experiment uses separate neutrino and anti-neutrino beams in its measurements to allow separation of neutral-current neutrino and anti-neutrino interactions. The NuTeV and CCFR experiments measure combinations of the ratios $`R^\nu `$ and $`R^{\overline{\nu }}`$ (above a fixed hadron energy $`\nu `$ threshold of $`20`$ GeV or $`30`$ GeV in NuTeV and CCFR, respectively), where
$$R^{\nu (\overline{\nu })}\equiv \frac{\sigma _{\nu (\overline{\nu })}^{\mathrm{NC}}}{\sigma _{\nu (\overline{\nu })}^{\mathrm{CC}}}=\frac{1}{2}-\mathrm{sin}^2\theta _W+\frac{5}{9}(1+r^{\pm 1})\mathrm{sin}^4\theta _W,$$
(12)
and $`r\equiv \sigma _{\overline{\nu }}^{\mathrm{CC}}/\sigma _\nu ^{\mathrm{CC}}\approx 0.4`$. The NuTeV experiment has extracted $`\mathrm{sin}^2\theta _W`$ using the combination $`R^\nu -rR^{\overline{\nu }}`$, which is insensitive to the effects of sea quarks and thus not changed by CSV effects in the sea, as in the Boros et al. model. However, the CCFR measurement with a mixed beam is equivalent to $`R^\nu +0.13R^{\overline{\nu }}`$, in which the sea quark contributions do not cancel.
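The cancellation claimed for the NuTeV combination can be verified directly from Eq. (12): in $`R^\nu -rR^{\overline{\nu }}`$ the $`\mathrm{sin}^4\theta _W`$ terms, which carry the sea-quark sensitivity, drop out. A minimal check, with an illustrative value of $`\mathrm{sin}^2\theta _W`$:

```python
def R(s2, r_pm, r=0.4):
    """Eq. (12); r_pm = r for neutrinos, 1/r for anti-neutrinos."""
    return 0.5 - s2 + (5.0 / 9.0) * (1.0 + r_pm) * s2 ** 2

s2, r = 0.2253, 0.4                     # illustrative sin^2(theta_W); r ~ 0.4
R_nu, R_nubar = R(s2, r), R(s2, 1.0 / r)

# The NuTeV (Paschos-Wolfenstein-type) combination reduces to (1-r)(1/2 - s2):
combo = R_nu - r * R_nubar
print(combo, (1.0 - r) * (0.5 - s2))    # the sin^4 terms cancel exactly
```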
Within the framework of Model 1, the modified PDFs leave the charged current neutrino data unchanged, but affect the level of the neutral current cross section. The effect of the Model 1 implementation of the Boros et al. model on the CCFR result has been calculated using the CTEQ4L PDF in the cross-section model. The CCFR experiment extracts a $`\mathrm{sin}^2\theta _W`$ which is equivalent to $`M_W=80.35\pm 0.21`$ GeV , which can be compared to the current average of all direct $`M_W`$ measurements, $`80.39\pm 0.06`$ GeV. Model 1 would increase the CCFR measured $`M_W`$ by $`0.26`$ GeV. Since the CDF $`W`$ asymmetry data rule out a CSV effect at the level of 1/5 of the magnitude of Model 1, the error from possible CSV effects in PDFs is less than 50 MeV. This illustrates the value of the CDF $`W`$ asymmetry data in limiting the systematic error from PDF uncertainties not only in the direct measurement of the $`W`$ mass in hadron colliders, but also in the indirect measurement of the $`W`$ mass in neutrino experiments.
In conclusion, the CDF $`W`$ asymmetry data rule out the Boros et al. model for charge symmetry violation in parton distributions as the source of the difference between neutrino (CCFR) and muon (NMC) deep inelastic scattering data. Sources such as a possible difference in nuclear effects between neutrino and muon scattering, or a possible underestimate of the strange quark sea in the nucleon have been ruled out . The experimental systematic errors between the two experiments, and improved theoretical analyses of massive charm production in both neutrino and muon scattering are both presently being investigated as possible reasons for this discrepancy.
# GALACTIC ROTATION AND LARGE SCALE STRUCTURES
## 1 Introduction
In a previous communication we described a cosmological scheme which is consistent with observations and yet does not invoke dark matter. It is of course well known that the puzzle of galactic rotational velocities can be explained by the dark matter hypothesis. Briefly put, using the well-known relation for rotation under gravitation,
$$\frac{mV^2}{r}=\frac{GMm}{r^2}$$
(1)
from (1) we would expect the rotational velocities $`V`$ at the edges of galaxies to obey the relation
$$V^2=\frac{GM}{r}$$
(2)
where $`M`$ is the galactic mass, $`r`$ the radius of the galaxy and $`G`$ the gravitational constant. That is, the velocities would fall off with increasing distance from the centre of the galaxy. However, it is observed that these velocities tend to a constant,
$$V\approx 300\mathrm{km}/\mathrm{sec}$$
(3)
Alternatively, it is observed that the mass of the universe obeys the law,
$$M\propto R^n,\quad n\approx 1$$
(4)
These discrepancies can be explained if there is unobserved or unaccounted-for matter, that is, missing or dark matter, whose gravitational influence is nevertheless present. This would also close the universe, that is, the expansion would halt and a collapse would ensue. While no dark matter has yet been discovered, it must be mentioned that one candidate is a massive neutrino. Recently, the Superkamiokande experiments have yielded the first evidence for this, but it is recognized that this mass, roughly of the order of a billionth that of the electron, is far too small to be the missing or dark matter.
On the other hand, the latest observations of distant supernovae by different teams of observers show that the universe will continue to expand forever.
The cosmological scheme considered in reference (cf. also ref.) predicts precisely such a behaviour and moreover, explains (4) without invoking dark matter. In this scheme, particles, typically pions are fluctuationally created out of a background ZPF. (Other mysterious, hitherto empirical relations, like Dirac’s large number equations or the inexplicable Weinberg pion-Hubble constant relation are deduced in this theory).
We will now show that in this cosmological scheme, not only is the puzzling galactic rotation relation (3) explained, but also that structures like galaxies and superclusters would naturally arise.
## 2 Galactic Rotation
We first observe that for a typical galaxy, the mass $`M`$, which is about $`10^{11}`$ solar masses, is given by
$$M=Nm=10^{70}.m$$
(5)
where $`m`$ is the mass of a typical elementary particle, which in the literature has been taken to be a pion, and $`N`$ their number.
We next observe that the size $`r`$ of a typical galaxy (about $`100,000`$ light years) is given by,
$$r=\sqrt{N}l$$
(6)
where $`l`$ is the pion Compton wavelength and $`N`$ is given in (5).
Finally in the cosmological scheme referred to (cf.ref.), we have,
$$G=a/\sqrt{\overline{N}}$$
(7)
where $`\overline{N}\approx 10^{80}`$ is the number of pions in the universe and $`a\approx 10^{32}`$.
Introducing (5), (6) and (7) into (2), and noting that at the edge of the galaxy the number of enclosed particles is $`N`$, we get for the rotational velocity the relation (3), consistent with observations.
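As a plausibility check (not part of the original argument), one can insert rough physical numbers into (2) via (5) and (6). The pion mass and Compton wavelength below are standard values; $`N`$ is the estimate in (5):

```python
import math

G    = 6.674e-11   # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s
c    = 2.998e8     # m/s
m_pi = 2.488e-28   # kg (pion mass, ~139.6 MeV/c^2)
N    = 1e70        # pions in a typical galaxy, from (5)

l = hbar / (m_pi * c)      # pion Compton wavelength, ~1.4e-15 m
M = N * m_pi               # galactic mass, relation (5)
r = math.sqrt(N) * l       # galactic radius, relation (6)

V = math.sqrt(G * M / r)   # edge rotational velocity, from (2)
print(f"V ~ {V / 1e3:.0f} km/s")
```

This gives roughly $`10^3`$ km/sec, i.e. the order of magnitude of (3), though a factor of a few above the quoted 300 km/sec.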
## 3 Large Scale Structures
It is quite remarkable that equation (6) is true for the universe itself, as originally pointed out by Eddington, and also for superclusters, as can be verified. Moreover, (6) is a very general relation in the theory of Brownian motion: it shows that the system under consideration could be thought of as a collection of these elementary particles in random motion. Further, (6) also shows that these structures have an overall flat or two-dimensional character, which is indeed true. In particular, galaxies have vast flat discs and superclusters have a cellular character. It must also be pointed out that recent observations do indeed suggest such an anisotropy. Finally, the recently discovered neutrino masses are small, in which case the particles have relativistic speeds, and this is known to imply the above type of flat structures.
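The Brownian character of (6) is just the root-mean-square law for a random walk: after $`N`$ steps of length $`l`$, the rms displacement is $`\sqrt{N}l`$. A small Monte Carlo with arbitrary illustrative parameters recovers this:

```python
import math
import random

random.seed(0)
N, l, trials = 400, 1.0, 4000       # steps per walk, step length, samples

total_sq = 0.0
for _ in range(trials):
    x = y = 0.0
    for _ in range(N):
        phi = random.uniform(0.0, 2.0 * math.pi)   # isotropic 2D step
        x += l * math.cos(phi)
        y += l * math.sin(phi)
    total_sq += x * x + y * y

rms = math.sqrt(total_sq / trials)
print(rms, math.sqrt(N) * l)        # rms displacement is close to sqrt(N) * l
```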
On the other hand, (6) is not valid for stars. At these distance scales, gravitation is strong enough so that the Brownian approximation (6) is no longer valid.
## 4 Conclusion
We have thus explained without invoking dark matter, the galactic rotational relation (3) and also have obtained a rationale for the existence of structures like galaxies and superclusters.
## Introduction
As we reconstruct the past history of the universe, we learn that the universe as we see it today must have evolved from a special state that was extremely flat and homogeneous. Such a state is highly unstable toward both the formation of inhomogeneities and evolution away from flatness, because the influence of gravity tends to drive matter away from a flat homogeneous state. Until the advent of inflation, cosmologists had no way of explaining how the universe could have started out so precisely balanced in a state of such high instability.
Inflation addresses this issue by changing the story at very early times. During an early period of inflation, the matter is placed in a peculiar potential-dominated state where the effects of gravity are very different. During inflation, flatness is an attractor, and the deviations (or perturbations) from homogeneity destined for our observable universe can be calculated. With a suitable tuning of model parameters, the perturbations can be given a sufficiently small amplitude and even naturally acquire a spectrum which gives good agreement with observations.
The theory of cosmic perturbations has for some time now benefited from the existence of alternative models. The presence of alternatives has made it possible to systematically evaluate each alternative, and has even helped us to discover the most fundamental way in which inflationary theory could be falsified. So far, things are looking very good for an inflationary origin for the cosmic perturbations.
But inflation offers us much more than a theory of perturbations. It also is supposed to explain the origin of the flatness and overall smoothness of the universe. In this role, inflation theory has faced no serious competition. While this fact in itself might be seen as a success, it would certainly be much more gratifying if the significant place in the theoretical landscape currently occupied by inflation could be earned by doing better than some serious contenders. After all, inflation is just the first idea we have had to explain these cosmic puzzles.
The above reasoning has motivated me to search for alternatives for some time now. So far everything that I have tried has ended up being just another version of standard inflation, once it was forced into a workable form. This experience encourages the view that inflation might be the unique mechanism by which the initial condition problems can be addressed, and also explains the extreme nature of the idea I outline below.
The idea I present here starts with a very simple observation. A statement of the unusual nature of the initial conditions in standard cosmology usually carries with it a description of the “horizon problem”. Basically, in the standard big bang model any mechanism which operates in the early universe and attempts to “set up” the correct initial conditions would have to act acausally, because what we observe today is composed of many causally disconnected regions in the early universe (see Figure 1). Inflation solves this problem because a period of superluminal expansion radically changes the causality structure of the universe. Another way of changing the causality structure is to have light travel faster in the early universe (Figure 2).
Joao Magueijo and I have pursued this idea, to the extent of setting up a phenomenological model of how physics might look with a time varying speed of light (VSL). Interestingly, we have found that our model exhibits energy non-conservation of just the sort that can fill in energy deficits and pull down energy peaks to produce a flat homogeneous universe from a wide range of initial conditions.
Of course our model also breaks Lorentz invariance, which may seem unreasonable to many physicists. In defense of choosing this radical direction, let me comment that many theorists are prejudiced against the idea that the spacetime continuum is really a continuum down to arbitrarily small scales. Any deviation from a true continuum would necessarily break Lorentz invariance. In particular many ideas about our 3+1 dimensional world that are coming from superstring theory (and its offspring) suggest that properties of this (3+1D) manifold are emergent as a low-energy limit of something quite different. If the VSL picture really takes root, the picture I describe below could well be a phenomenology of the dynamics of the universe as the 3+1 manifold we inhabit emerges from very different physics governing high energies. In this picture the speed of light varies in the early universe and then holds constant, so that the standard cosmology can proceed. This much is in the same spirit as inflation, where a period of unusual physics is placed in the early universe, without changing the standard physical picture which does an excellent job of explaining many aspects of the universe.
I should mention that the idea of using a varying speed of light to explain initial conditions first appears in print in a paper by Moffat. His paper takes the idea in a somewhat different direction than we have. Also, subsequent work by Barrow and Magueijo has taken VSL in a variety of different directions. I will mention this briefly in the final section.
## Our prescription
To pursue the idea of VSL, Magueijo and I have used the following simple prescription: VSL models necessarily have a preferred frame, because Lorentz invariance is broken. We assume that in that special frame, the Lagrangian is the same as usual, with the substitution $`c\rightarrow c(t)`$. We also assume that the dynamics of $`c`$ do not affect the curvature, so that the Riemann tensor and the Ricci scalar are to be computed (in the preferred frame) with $`c`$ held fixed.
This scheme is spelled out in . There we carefully discuss the question of why this scheme is not simply ordinary physics under a strange reparameterization, and thus why what I describe below can not be viewed as simply an unusual way of describing inflation.
## Cosmological Equations
Under our VSL scheme energy is not conserved when $`c`$ is varying. In a cosmology which is Robertson-Walker in the preferred frame we get
$$\dot{\rho }+3\frac{\dot{a}}{a}\left(\rho +\frac{p}{c^2}\right)=-\rho \frac{\dot{G}}{G}+\frac{3Kc^2}{4\pi Ga^2}\frac{\dot{c}}{c}.$$
(1)
This equation is the usual equation for energy conservation when $`c`$ and $`G`$ are constant, and I have included the term in $`\dot{G}`$ for future reference. To observe the effect on the flatness of the universe, we look at the evolution of $`ϵ\equiv \mathrm{\Omega }-1`$ ($`\mathrm{\Omega }\equiv \rho /\rho _c`$):
$$\dot{ϵ}=(1+ϵ)ϵ\frac{\dot{a}}{a}\left(1+3w\right)+2\frac{\dot{c}}{c}ϵ$$
(2)
where we have taken $`p=w\rho c^2`$ with constant $`w`$. Here we can see how in standard big bang cosmology ($`w>-1/3`$, $`\dot{c}=0`$) $`ϵ=0`$ is an unstable fixed point, leading to the need to tune $`ϵ`$ to extremely small values at the beginning, in order to match a value of $`ϵ`$ which is not large even today. Eqn. 2 also shows how inflation makes $`ϵ=0`$ an attractor, and how $`\dot{G}`$ drops out of the equation, making at least this version of a varying $`G`$ ineffective at producing the desired effect. One can also see how negative values of $`\dot{c}/c`$ will make $`ϵ=0`$ an attractor.
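The fixed-point behaviour of Eq. (2) is easy to see numerically. The toy integration below (my own illustration, not from the paper) takes radiation, $`w=1/3`$, so that $`\dot{a}/a=1/2t`$, together with a hypothetical power law $`c\propto t^n`$ giving $`\dot{c}/c=n/t`$: a sufficiently negative $`n`$ drives $`ϵ`$ to zero, while for constant $`c`$ it grows:

```python
def evolve_eps(eps, n, t0=1.0, t1=100.0, ratio=1.001):
    """Euler-integrate eq. (2) with w = 1/3 (so 1 + 3w = 2),
    adot/a = 1/(2t) and cdot/c = n/t, using logarithmic time steps."""
    dlnt = ratio - 1.0                 # ~ dt/t per step
    t = t0
    while t < t1:
        eps += ((1.0 + eps) * eps * 0.5 * 2.0 + 2.0 * n * eps) * dlnt
        t *= ratio
    return eps

eps_vsl = evolve_eps(0.5, n=-2.0)          # decreasing c: flatness is an attractor
eps_std = evolve_eps(0.5, n=0.0, t1=2.0)   # constant c: eps = 0 is unstable
print(eps_vsl, eps_std)                    # eps_vsl is tiny, eps_std has grown
```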
There is also an interesting effect on the cosmological constant. In order to discuss this, we must be careful about which constant we are talking about.
$$S=\int dx^4\sqrt{-g}\left(\frac{c^4(R+2\mathrm{\Lambda }_1)}{16\pi G}+\mathcal{L}_M+\mathcal{L}_{\mathrm{\Lambda }_2}\right)$$
(3)
Equation 3 shows the action in the preferred frame, where $`\mathcal{L}_M`$ is the matter fields Lagrangian. The term in $`\mathrm{\Lambda }_1`$ is a geometrical cosmological constant, as first introduced by Einstein. The term in $`\mathrm{\Lambda }_2`$ represents the vacuum energy density of the quantum fields. VSL is only able to affect $`\mathrm{\Lambda }_1`$, and we simply call this $`\mathrm{\Lambda }`$ in what follows. If we define $`ϵ_\mathrm{\Lambda }=\rho _\mathrm{\Lambda }/\rho _m`$ where $`\rho _\mathrm{\Lambda }=\frac{\mathrm{\Lambda }c^2}{8\pi G}`$ we find
$$\dot{ϵ}_\mathrm{\Lambda }=ϵ_\mathrm{\Lambda }\left(3\frac{\dot{a}}{a}(1+w)+2\frac{\dot{c}}{c}\frac{1+ϵ_\mathrm{\Lambda }}{1+ϵ}\right).$$
(4)
Ordinary cosmology has a $`\mathrm{\Lambda }`$ problem in the sense that $`\mathrm{\Lambda }`$ rapidly comes to dominate, and must be tuned initially in order not to be super-dominant today. Here we can see that inflation, with $`-1\le w<-1/3`$, cannot provide the necessary tuning, while VSL ($`\dot{c}<0`$) can have that effect. This, by the way, also helps illustrate how VSL is not physically equivalent to inflation.
We have also considered the evolution of perturbations. We have found that for a sudden change in $`c`$, the density contrast $`\mathrm{\Delta }`$ obeys
$$\frac{\mathrm{\Delta }^+}{\mathrm{\Delta }^-}=\frac{c^+}{c^-}.$$
(5)
For the large variation in $`c`$ required to produce a flat universe, we have found that the universe has all perturbations reduced to unobservable levels. This leaves a blank slate which requires something like the defect models of cosmic structure formation to provide perturbations.
## Scenario Building
So far I have discussed the machinery of VSL, but how can this be turned into a cosmological scenario? One can start the discussion by considering a sudden transition between $`c^-`$ and $`c^+`$. Let us assume for a moment that before the transition we have a flat FRW universe. Under these circumstances we find that the temperature obeys
$$T^+/T^-=c^-/c^+.$$
(6)
If we want $`T^+\approx T_{Planck}`$ and require $`c^-/c^+>10^{60}`$ to solve the flatness problem, one is starting with a very cold $`T^-`$. Immediately, one can see that fine tuning is required to have a cold flat universe before the transition, and this scenario does not make much sense. Interestingly, unlike inflation, Equation 2 shows that VSL can create energy even in an empty open ($`ϵ=-1`$) universe. It might be more interesting to build scenarios based on that starting point. Just as inflation has seen the scenario-building change radically over the years, we feel there is a lot to be learned about how to implement VSL before we understand the best scheme to use. What we have so far is a very interesting mechanism that can move the universe toward a flat homogeneous state.
## Discussion and Conclusions
Since our paper there have been a number of other publications on VSL. One new direction pursued by Barrow and Magueijo looks at a possible power-law $`c(t)`$ that could have $`c`$ varying even today. They have found interesting attractor solutions which keeps $`\mathrm{\Omega }_\mathrm{\Lambda }`$ at a constant fraction of the total $`\mathrm{\Omega }`$, but they have their work cut out for them understanding primordial nucleosynthesis in that model. Also, Moffat has further explored the idea of spontaneous breaking of Lorentz symmetry.
Probably the greatest problem with the VSL idea is that we have no fundamental picture of what makes $`c`$ vary. The phenomenological treatment in does not address this question. What we can say is our work shows that there are interesting cosmological rewards for considering a time varying speed of light, and with that motivation it may be possible to make some interesting discoveries.
Despite this problem, it is already possible to falsify at least the fast-transition version of VSL. Since the perturbations turn out to be infinitesimal after the transition, structure formation must be left to active models which have their own characteristic signatures that differentiate them from inflation. If the microwave background comes out with characteristic inflationary features, these VSL models will be ruled out.
This work was supported in part by UC Davis. I would like to thank Joao Magueijo for a fruitful collaboration, and Richard Garavuso for his comments on this manuscript.
Eur. Phys. J. B
Sandpiles on fractal bases:
pile shape and phase segregation
N. Vandewalle<sup>(a)</sup><sup>1</sup>, R. D’hulst<sup>(b)</sup>
<sup>1</sup> corresponding author, e-mail: nvandewalle@ulg.ac.be
<sup>(a)</sup> GRASP, Institut de Physique B5, Université de Liège,
B-4000 Liège, Belgium.
<sup>(b)</sup> Dept. of Mathematics and Statistics, Brunel University, Uxbridge, Middlesex UB8 3PH, London, UK.
PACS: 64.75.+g — 46.10.+z — 05.40.+j
Abstract
Sandpiles have become paradigmatic systems for granular flow studies in statistical physics. New directions of investigation are discussed here. Rather than varying the nature of the pile (sand, salt, rice, …), we have investigated changes in the boundary conditions. We have investigated experimentally and numerically sandpile structures on bases having a fractal perimeter. This type of perimeter induces the formation of a quite complex set of ridges and valleys. A screening effect of the valleys is observed and depends on the angle of repose. Binary granular systems have also been investigated on such bases: a spectacular demixing is obtained along the valleys. This type of phase segregation is discussed with respect to numerical studies.
1. Introduction
Despite their everyday familiarity, sandpiles and granular flows have become paradigmatic systems of new complexity problems in physics. Granular matter shows behaviors that are intermediate between those of solids and liquids: powders pack like solids but flow like liquids. However, granular flows are non-Newtonian. Thus, dry and wet granular frictions are a practical and theoretical challenge.
The most basic property of sandpiles is certainly the angle of repose, i.e. the angle $`\theta `$ made between the horizontal and the apparent surface of the pile. This angle can take values between $`\theta _r`$ (the angle below which the pile is stationary) and $`\theta _d`$ (the angle above which avalanches flow down the surface). In between $`\theta _r`$ and $`\theta _d`$, the sandpile manifests some bistability: it can be either stationary or in a state of avalanches.
The symmetry of the base on which the sandpile is constructed is a relevant parameter which can sometimes lead to exotic pile shapes. When the base is a disk (Figure 1a), the pile has a conic shape; every point of the surface being characterized by the angle of repose $`\theta `$. When the base is a square (see Figure 1b), a pyramidal pile is usually obtained. Again, every point of the surface is characterized by a local angle $`\theta `$. However, ridges emerge within a four-fold structure (see the grey lines in Figure 1b). It should be noticed that the ridges are characterized by an angle less than $`\theta `$. For more complex bases having a constant convexity, it has been proven that the pile is a so-called tectohedron geometrical object for which all facets are inclined with similar $`\theta `$ angles. The facets meet on a network of ridges which can describe the pile shape. When the base has a low symmetry, various distinct states exist for the pile shape (see the grey lines in Figure 1c). Up to now, the case of non-convex bases has not received much attention.
Intuitively, one can imagine that for non-convex bases, the pile exhibits numerous valleys in addition to the network of ridges. The situation is then more complex.
In order to introduce other basic symmetry conditions, it seems of interest to consider fractal-like systems. This should lead to a wide variety of length scales as well as simple power laws for characterizing physical properties.
In section 2, we focus our experimental investigations on the effects of such bases with a fractal perimeter, like the one illustrated in Figure 1d. This thereby underlines the interest of studying such sandpiles and the effect of the boundary conditions on the pile structure. We also present some experiments with binary mixtures of sand. In section 3, some simulation work is presented, allowing some qualitative interpretation of our findings.
2. Experiments
Figure 2 presents a picture of a sandpile built on the base illustrated in Figure 1d. The perimeter of such a base results from 3 iterations of a generator and has a fractal dimension close to $`D_f=\frac{\mathrm{ln}5}{\mathrm{ln}3}`$. The arrows point to a large valley and inner sub-valleys. The fractal perimeter implies that a hierarchy of valleys emerges on the sides of the pile.
One could first ask whether well-placed holes in the inner part of the base, instead of a fractal perimeter, could produce the same phenomenon. The answer is that valleys can indeed be created using holes in the base, but the valleys and ridges are then not hierarchically distributed. A fractal perimeter allows for the study of “natural” structures. These structures look like natural erosion patterns, which have also been recognized as being fractal objects.
Other sandpiles on various bases with different fractal dimensions $`D_f`$ have been numerically and experimentally investigated. The pictures are not shown here because in all cases, a hierarchical structure of valleys and ridges is formed. However, depending on the repose angle $`\theta _r`$ of the pile, different shapes are observed on the same fractal base. It can be understood that if the sand has a high repose angle, the pile extension is small and avalanches are mainly dissipated in the valleys close to the center of the base. Thus, only a limited part of the fractal perimeter participates in the dissipation dynamics or ejection of grains when $`\theta >\theta _r`$. However, with a low repose angle, the pile is large and the whole fractal perimeter participates in the dissipation of avalanches.
Consider further the case of a binary sand mixture. It is known that the difference in the repose angles of two kinds of particles can produce phase segregation inside the sandpile. Demixing is often observed, depending on various parameters such as (i) the input flow, (ii) the grain size and (iii) the morphology of each species. A spectacular self-stratification can sometimes be produced. This has also been proven experimentally, numerically and theoretically. However, this self-stratification observed in a vertical Hele-Shaw cell cannot be visualized from the exterior of a three-dimensional sandpile.
However, on bases with a fractal perimeter, the situation is quite different. As an example, a mixture of dark and white sand grains has been used: (i) white grains, with diameters between 0.2 and 0.3 mm, and (ii) brown grains, with diameters between 0.07 and 0.1 mm. The two species have different angles of repose: the angles $`\theta _r`$ of the brown and white grains have been estimated to be $`38^o`$ and $`32^o`$ respectively. We have checked that this mixture leads to an internal self-stratification as observed in a vertical Hele-Shaw cell. When this mixture is poured on a classical regular base (a disk), no specific structure is observed on the conic surface. However, on the base of Figure 1d, a phase segregation is clearly observed along the valleys (see Figure 3). Indeed, valleys remain dark while ridges are white. The phase segregation is also observed in the subvalleys. Due to the fractality of the perimeter, the phase segregation results in alternating vertical strata. Thus, the phase segregation can be visualized from the pile exterior itself for sandpiles having a fractal-like perimeter. This effect is relevant in e.g. industry, where granular piles do not necessarily have a conic shape.
3. Simulations
In order to find out some information on the geometrical structure made of valleys and ridges, we have performed numerical simulations of pile shapes. These simulations are based on the common cellular automaton which has been extensively discussed in the relevant literature. In this model, the sandpile is built on a two-dimensional lattice, where the grains occupy only a single lattice site. Here, $`h_{i,j}`$ denotes the height of the sandpile at coordinate $`i,j`$. At each time step, an arbitrary number $`N`$ of grains is dropped at the top of the central column of the lattice. Then, the pile is relaxed as follows. The dynamics of each grain at position $`i,j`$ on the sandpile surface is governed by the four local angles of repose: $`\mathrm{tan}^{-1}|h_{i,j}-h_{i+1,j}|`$, $`\mathrm{tan}^{-1}|h_{i,j}-h_{i-1,j}|`$, $`\mathrm{tan}^{-1}|h_{i,j}-h_{i,j+1}|`$, and $`\mathrm{tan}^{-1}|h_{i,j}-h_{i,j-1}|`$. When one or more of these angles is larger than the repose angle $`\mathrm{tan}^{-1}(z_c)`$, some grains are assumed to roll on the surface until they reach a stable configuration where the local slope is less than the repose angle. The “rolling grain” follows downward and/or sideways random paths on the pile surface. Figure 4 presents a tri-dimensional sketch of the top of the pile where a single grain is deposited and rolls down until it reaches a stable local configuration. The simulation stops when the sandpile reaches one site on the border of the base and when the pile shape is then nearly invariant.
The model is thus based on the classical sandpile models encountered in the scientific literature since the introduction of the Bak-Tang-Wiesenfeld model. The new ingredient we introduce, in view of the observations in section 2, is the shape of the base, which is herein considered to have a fractal perimeter. In addition to the perimeter geometry, only one parameter controls the pile shape: $`z_c`$. For convenience, only integer values of $`z_c`$ have been considered in this work. Lattice sizes up to $`200\times 200`$ have been used for the larger bases.
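A minimal, single-species version of this automaton can be sketched as follows (my own illustrative implementation on a plain square base; the fractal-perimeter bases of the paper would simply replace the square boundary, and the random downhill choice is one concrete realization of the rolling rule):

```python
import random

def inside(i, j, size):
    """True if (i, j) lies on the base lattice."""
    return 0 <= i < size and 0 <= j < size

def relax(h, i, j, z_c, size):
    """Let grains roll downhill until every local slope is <= z_c.
    Sites off the base are treated as height 0; a grain that rolls
    off the lattice is dissipated, as at the pile perimeter."""
    stack = [(i, j)]
    while stack:
        i, j = stack.pop()
        if not inside(i, j, size):
            continue
        nbrs = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        steep = [(a, b) for a, b in nbrs
                 if h[i][j] - (h[a][b] if inside(a, b, size) else 0) > z_c]
        if steep:
            a, b = random.choice(steep)
            h[i][j] -= 1
            if inside(a, b, size):
                h[a][b] += 1
            # the move can destabilise this site and its neighbours
            stack.extend([(i, j)] + nbrs)

def build_pile(size=21, z_c=2, grains=2000, seed=1):
    """Drop grains one by one on the central column, relaxing after each."""
    random.seed(seed)
    h = [[0] * size for _ in range(size)]
    c = size // 2
    for _ in range(grains):
        h[c][c] += 1
        relax(h, c, c, z_c, size)
    return h

h = build_pile()
print("peak height:", h[10][10])
```

Every move strictly lowers the pile's potential energy, so the relaxation always terminates in a configuration with all local slopes at or below $`z_c`$.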
A typical example of the network of ridges that is numerically obtained is illustrated in Figure 5. The base is the one of Figure 1d. The structure is markedly branched and reaches a high level of complexity. The main features of the simulated structures are recognized to be those of the experimental sandpiles: both valleys and ridges exist and are hierarchically distributed.
One should note that the network of ridges of Figure 5 is calculated with a low $`\theta _r`$ value ($`z_c=2`$) in order to reach all parts of the perimeter. The partial screening effect has deep analogies with the diffusion of entities through fractal interfaces studied by Sapoval and coworkers.
The amazing phase segregation described in section 2 can also be simulated. The above numerical model can be generalized to the case of two distinct species. Following the Head and Rodgers model, four parameters should then be considered: $`z_c^{\alpha \beta }`$ corresponds to the maximum slope on which a particle of type $`\alpha =1,2`$ can remain on top of a particle $`\beta =1,2`$ without starting to roll down. A typical set of parameter values giving self-stratification is: $`z_c^{11}=5`$, $`z_c^{12}=4`$, $`z_c^{21}=3`$ and $`z_c^{22}=3`$. Figure 6 presents the top view of a pile obtained with this set of parameter values. The binary sandpile was built on a base with a fractal perimeter. In Figure 6, a phase segregation is clearly observed in the proximity of the holes, i.e. near the valleys. This is consistent with the experimental results.
However, it is not yet clear how to choose the relevant parameters $`z_c^{\alpha \beta }`$. Indeed, the diagonal elements $`z_{\alpha \alpha }`$ of the $`z_c`$-matrix are obviously the angles of repose of each pure species $`\alpha `$. These parameters can be measured in the macroscopic world. However, the non-diagonal elements of $`z_c`$ do not have a macroscopic counterpart. It should be noted that some sets of parameter values lead to “exotic” patterns which are not encountered in the experiments.
In the light of our numerical results, the phase segregation is interpreted as follows. The species having the larger angle $`\theta _r`$, i.e. the brown species, cannot reach the whole perimeter before the other type does, since its avalanches are dissipated in the primary valleys. As a consequence, the complete network of ridges of Figure 5 cannot form. The white species, however, has a low angle of repose and reaches the borders of the base. Thus, primary valleys are controlled by the larger angle of repose while secondary valleys and ridges are controlled by the second angle of repose, leading approximately to the picture of dark valleys and white ridges seen in Figure 3.
4. Conclusion
In summary, we have experimentally investigated sandpiles on bases with a fractal perimeter. The shapes of the sandpiles exhibit a hierarchical (fractal) structure of valleys and ridges. Phase segregation has been observed in binary mixtures. The angle of repose $`\theta _r`$ appears to be the fundamental parameter, since it determines whether or not the pile can adopt the whole fractal shape. Moreover, we have shown that lattice models provide useful numerical tools for describing qualitatively the phenomenology that we discovered herein. More precise experimental studies should be made in the future.
Our work also suggests new directions of investigation. Instead of varying the nature of the pile (sand, rice,…) , we have suggested above changing the conditions on the boundaries where avalanches are dissipated. Theoretical investigation of such boundary effects is a fascinating goal.
Acknowledgements
NV is financially supported by the FNRS. A grant from FNRS/LOTTO allowed us to perform specific numerical work. Fruitful discussions with E. Bouchaud, S. Galam, E. Clement and J. Rajchenbach were appreciated.
Figure Captions
Figure 1 — Illustration of bases on which sandpiles are built: (a) a circular base; (b) the four-fold network of ridges of the pile built on a square base; (c) an irregular convex polygon and its network of ridges; (d) a base having a fractal perimeter ($`D_f=\frac{\mathrm{ln}5}{\mathrm{ln}3}`$). The perimeter of the bases is drawn in black while the network of ridges is drawn in grey.
Figure 2 — Sandpile made on the fractal base as in Figure 1d. The diameter of the sand grains is in the range 0.2 – 0.3 mm. Valleys and subvalleys are indicated.
Figure 3 — Sandpile on the fractal base as in Figure 1d. A mixture of two kinds of sand grains has been used: (i) white grains of diameter 0.2 – 0.3 mm and (ii) brown grains of diameter 0.07 – 0.1 mm. Phase segregation (demixing) is clearly observed along the valleys.
Figure 4 — Three-dimensional sketch of the rule of the present cellular automaton model. Each grain is deposited at the top of the pile and rolls down following the relaxation rule. Here the parameter is $`z_c=3`$.
Figure 5 — Top view of a numerical simulation of the network of ridges obtained for a granular pile on the base as in Figure 1d.
Figure 6 — Top view of a typical configuration of the binary pile using the parameters $`z_c^{11}=5`$, $`z_c^{12}=4`$, $`z_c^{21}=3`$ and $`z_c^{22}=3`$.
# A Medium Survey of the Hard X-Ray Sky with ASCA. II.: The Source’s Broad Band X-Ray Spectral Properties
## 1 Introduction
In the last few years many efforts have been made to understand the origin of the cosmic X-ray background (CXB), discovered more than 35 years ago by Giacconi et al. (1962). One of the competing hypotheses - the truly diffuse emission origin (see e.g. Guilbert and Fabian, 1986) - has been ruled out by the very small deviation of the cosmic microwave background spectrum from a blackbody shape (Mather et al., 1994). Therefore only the alternative interpretation - the discrete sources origin - is left.
Indeed, ROSAT deep surveys, reaching a source density of $`1000`$ deg<sup>-2</sup> at a limiting flux of $`10^{-15}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> (Hasinger et al., 1998; McHardy et al., 1998), have already resolved the majority (70–80%) of the soft ($`E<2`$ keV) CXB into discrete sources. Spectroscopic observations (Shanks et al., 1991; Boyle et al., 1993, 1994; McHardy et al., 1998; Schmidt et al., 1998) of the sources with fluxes greater than $`5\times 10^{-15}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> have shown that the majority (50–80%) of these objects are broad line AGN at a mean redshift of about 1.5. An important minority (10–20%) of ROSAT sources are spectroscopically identified with X-ray luminous Narrow Emission Line Galaxies (Griffiths et al., 1995, 1996; McHardy et al., 1998), whose real physical nature (obscured AGN, starburst) is at the moment debated in the literature (see e.g. Schmidt et al., 1998). Since their average X-ray spectrum is harder than that of the broad line AGNs (Almaini et al., 1996; Romero-Colmenero et al., 1996) and similar to that of the remaining unresolved CXB, these objects could also be substantial contributors to the CXB at higher energies. About 10% of the ROSAT sources are identified with clusters of galaxies (see e.g. Rosati et al., 1998). Thus, it is clear that the ROSAT satellite has been successful in resolving almost all the soft CXB into discrete sources. Furthermore, optical observations of the faint ROSAT sources have led to an understanding of the physical nature of the objects contributing to it.
On the contrary, at harder energies, closer to where the bulk of the CXB resides, the origin of the CXB is still a matter of debate. Before the ASCA and Beppo-SAX satellites, which carry the first long lived imaging instruments in the 2-10 keV energy band, surveys in this energy range were made using passively collimated X-ray detectors that, because of their limited spatial resolution, allowed the identification of the brightest X-ray sources only, which represent a very small fraction ($`<5\%`$) of the CXB (Piccinotti et al., 1982). The so called “spectral paradox” further complicates the situation: none of the single classes of X-ray emitters in the Piccinotti et al. (1982) sample is characterized by an energy spectral distribution similar to that of the CXB. Due to the lack of faint, large and complete samples of X-ray sources selected in this energy range, the contribution of the different classes of sources to the hard CXB was evaluated through population synthesis models, and different classes of X-ray sources were proposed as the major contributors by a number of authors (e.g. starburst galaxies, absorbed AGN, reflection dominated AGN; see e.g. Griffiths and Padovani, 1990; Madau, Ghisellini and Fabian, 1994; Comastri et al., 1995; Zdziarski et al., 1993). Recent results from ASCA and Beppo-SAX observations of individual objects and/or medium-deep survey programs (Bassani et al., 1999; Maiolino et al., 1998a; Turner et al., 1998; Boyle et al., 1998a; 1998b; Akiyama et al., 1998) seem to favor the strongly absorbed AGN hypothesis, but deeper investigations are still needed to confirm this scenario.
At the Osservatorio Astronomico di Brera, a serendipitous search for hard (2-10 keV band) X-ray sources using data from the GIS2 instrument onboard the ASCA satellite is in progress (Cagnoni, Della Ceca and Maccacaro 1998, hereafter Paper I; Della Ceca et al. 1999, in preparation) with the aim of extending to faint fluxes the census of the X-ray sources shining in the hard X-ray sky. The strategy of the survey, the images and sources selection criteria and the definition of the sky coverage are discussed in Cagnoni, Della Ceca and Maccacaro (1998).
In Paper I, a first sample of 60 serendipitous X-ray sources, detected in 87 GIS2 images at high galactic latitude ($`|b|>20^o`$) covering $`21`$ square deg., was presented. This sample has allowed the authors to extend the description of the number-counts relationship down to a flux limit of $`6\times 10^{-14}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> (the faintest detectable flux), resolving directly about 27% of the (2-10 keV) CXB.
Here we study the spectral properties of the 60 ASCA sources listed in Paper I, combining GIS2 and GIS3 data. We have carried out both an analysis of the “stacked” spectra of the sources, in order to investigate the variation of the sources' average spectral properties as a function of flux, and a “hardness-ratio” (HR) analysis of the single sources. This latter method, which is equivalent to the “color-color” analysis widely used at optical wavelengths, is particularly appropriate when dealing with sources detected at a low signal-to-noise ratio (e.g. Maccacaro et al., 1988; Netzer, Turner and George, 1994). We have defined two independent HRs and we have compared the position of the sources in the HR diagram with a grid of theoretical spectral models which are found to describe the X-ray properties of known classes of X-ray emitters.
The paper is organized as follows. In section 2 we present the sample and we define the “Faint” and the “Bright” subsamples. In section 3 we present the data, we discuss the data analysis and we define the two HRs used. In section 4 we report the results of the stacked spectra and of the HR analysis and we compare them with those expected from simple spectral models and with the CXB spectrum. Summary and conclusions are presented in section 5.
## 2 Definition of the “Faint” and “Bright” Subsamples
The basic data on the 60 X-ray sources used in this paper are reported in Table 2 of Paper I.
To investigate if the spectral properties of the sources depend on their brightness we have defined two subsamples according to the “corrected” 2-10 keV count rate (hereafter CCR) <sup>1</sup><sup>1</sup>1Due to the vignetting of the XRT and the PSF, a source with a given flux will yield an observed count rate that depends on the position of the source in the field of view. By “corrected count rate” we mean the count rate that the source would have had if observed at some reference position in the field of view and within a given extraction region. The definition of the source extraction region and of the reference position in the field of view, used to determine the CCR, are discussed in section 3.2. The CCR used here can be obtained by dividing the unabsorbed 2-10 keV flux reported in Paper I by the count rate to flux conversion factor of 1 cnt/s (2-10) = $`11.46\times 10^{-11}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>. This value is appropriate for a power law model with energy spectral index of 0.7, filtered by a Galactic absorbing column density of $`3\times 10^{20}`$ cm<sup>-2</sup>. .
The 20 brightest sources (CCR $`3.9\times 10^{-3}`$ cts s<sup>-1</sup>) define the “Bright” subsample, while the remaining 40 sources define the “Faint” subsample. The dividing line of $`3.9\times 10^{-3}`$ cts s<sup>-1</sup> corresponds to an unabsorbed 2 - 10 keV flux of $`5.4\times 10^{-13}`$ or $`3.1\times 10^{-13}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> for a source described by a power law model with energy spectral index of 0.0 and 2.0, respectively, absorbed by a Galactic column density of $`3\times 10^{20}`$ cm<sup>-2</sup>. We note that the numbers reported above are a very weak function of the Galactic absorbing column density along the line of sight (which ranges from $`10^{20}`$ cm<sup>-2</sup> to $`9\times 10^{20}`$ cm<sup>-2</sup> for the present sample).
We prefer to use the CCR instead of the flux because the CCR is (once the corrections due to the vignetting and the PSF have been applied) an observed quantity and is independent of the spectral properties of the source; on the contrary, the flux is model dependent. To first approximation, a fainter CCR corresponds to a fainter source.
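For concreteness, the CCR-to-flux conversion quoted in the footnote above can be wrapped in a pair of helper functions. The factor is valid only for the reference spectrum (energy index 0.7, Galactic $`N_H=3\times 10^{20}`$ cm<sup>-2</sup>); the helper names are ours:

```python
# Conversion used in the survey: 1 cnt/s in the 2-10 keV band corresponds
# to 11.46e-11 ergs cm^-2 s^-1 for a power law with energy index 0.7
# absorbed by a Galactic column of 3e20 cm^-2 (value from the footnote).
CONV_2_10 = 11.46e-11  # ergs cm^-2 s^-1 per cnt/s

def ccr_to_flux(ccr):
    """Unabsorbed 2-10 keV flux from the corrected count rate (cnt/s)."""
    return ccr * CONV_2_10

def flux_to_ccr(flux):
    """Corrected count rate from the unabsorbed 2-10 keV flux."""
    return flux / CONV_2_10

# The Bright/Faint dividing line of 3.9e-3 cts/s maps to ~4.5e-13
# ergs cm^-2 s^-1 for the reference spectrum, consistent with falling
# between the alpha_E = 0.0 and alpha_E = 2.0 fluxes quoted in the text.
print(ccr_to_flux(3.9e-3))
```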
## 3 Data analysis
All the ASCA images used in Paper I are now in “REV 2” Processing status (see http://heasarc.gsfc.nasa.gov/docs/asca/ascarev2.html), thus in this paper we have used this new revision of the data. Furthermore, in order to improve the statistics, we have combined <sup>2</sup><sup>2</sup>2This combination was possible for all the sources but a0447-0627, a0506-3726, a0506-3742 and a0721+7111. For these 4 sources we have used only the GIS2 data. data from the GIS2 and GIS3 instruments as explained below.
Data preparation has been done using version 1.3 of the XSELECT software package and version 4.0 of FTOOLS (supplied by the HEASARC at the Goddard Space Flight Center). Good time intervals were selected applying the “Standard REV 2 Screening” criteria (as reported in chapter 5 of the ASCA Data Reduction Guide, rev 2.0), with the only exception of having used a magnetic cutoff rigidity threshold of 6 GeV c<sup>-1</sup> (as done in Paper I). HIGH, MEDIUM and LOW bit rate data were combined together. Spectral analysis (see below) has been performed using version 9.0 of the XSPEC software package. We use the detector Redistribution Matrix Files (RMF) gis2v4\_0.rmf and gis3v4\_0.rmf.
### 3.1 “Stacked” spectra
For each source and for the GIS2 and GIS3 data sets, total counts (source + background) were extracted from a circular extraction region of 2 arcmin radius around the source centroid. Background counts were taken from two circular uncontaminated regions of 3.5 arcmin radius, close to the source, or symmetrically located with respect of the center of the image. Source and background data were extracted in the “Pulse Invariant” (PI) energy channels, which have been corrected for spatial and temporal variations of the detector gain. The Ancillary Response File (ARF) relative to each source was created with version 2.72 of the FTOOLS task ASCAARF <sup>3</sup><sup>3</sup>3 The task ASCAARF (which is part of the FTOOLS software package) is able to produce a position-dependent PSF-corrected effective area, A(E,x,y,d), of the (XRT + GIS2 or XRT + GIS3) combination. The inputs of this task are the position x,y (in detector coordinates) and the dimension d of the source’s extraction region. at the location of the individual sources in the detectors.
For each source we have then produced a combined GIS spectrum (adding GIS2 and GIS3 data) and the corresponding background and response matrix files, following the recipe given in the ASCA Data Reduction (rev 2.0) Guide (see section 8.9.2 and 8.9.3 and reference therein). Finally, we have produced the combined spectrum of a) the 20 sources belonging to the Bright Sample and b) the 40 sources of the Faint Sample. We note that each object contributes at most 6% of the stacked counts in the case of the Faint Sample and at most 25% in the case of the Bright Sample.
In the spectral analysis, being interested in comparing these stacked spectra with that of the hard CXB, we have considered only the counts in the 2.0 - 10.0 keV energy range. The total net counts in the Bright and Faint sample are about 3400 and 2900 respectively. The stacked spectra were rebinned to give at least 50 total counts per bin.
### 3.2 Hardness Ratios
For each source and for the GIS2 and GIS3 data set, source plus background counts were extracted in three energy bands: 0.7-2.0 keV (S band), 2.0-4.0 keV (M band) and 4.0-10.0 keV (H band). The S, M and H spectral regions were selected so as to have similar statistics in each band for the majority of the sources. The background counts have been evaluated by using the two background regions considered above; first, we have normalized the background counts to the source extraction region and then we have averaged them. Net counts in S, M and H have been then obtained for each source by subtracting the corresponding S,M and H normalized background counts from the total ones.
To combine the GIS2 and GIS3 data for each source and to compare our results with those expected from simple models, the net counts obtained must be corrected for the position dependent sensitivity of the GIS detectors (see the discussion in section 2.3 of Paper I). In particular we must: a) define a source extraction region and a reference position in the GIS2 or GIS3 field; b) re-normalize the S,M,H net counts from each source to this region (since sources are detected in different locations of the GIS2 or GIS3 field of view); c) perform the simulations for simple models by using the effective area of this region. We will now discuss these points in turn.
As reference region we have used a source extraction region of 2 arcmin radius at the position x=137, y=116 for the (XRT + GIS2) combination. Using ASCAARF we have produced the effective area values at the position of each source detected in the GIS2 (GIS3) detector. Using these effective area values and through spectral simulations with XSPEC we have derived the correction factors to be applied to each source in the S, M and H band. For each source we have then applied these correction factors to the net counts of GIS2 and GIS3 separately. Finally, we have combined the GIS2 and GIS3 corrected net counts for each source. In summary, using this procedure we have first re-normalized the GIS2 and GIS3 data for each source to the reference region separately and then combined them. We note that the applied method is very similar to the “flat field” procedure normally used in the analysis of optical imaging and spectroscopic data.
Using the corrected net counts in the S, M and H band we have then computed for each source two “hardness ratios” defined in the following way:
$$HR1=\frac{M-S}{M+S}\qquad HR2=\frac{H-M}{H+M}$$
The 68% error bars on HR1 and HR2 have been obtained via Monte Carlo simulations using the total counts, the background counts and the correction factors relative to each source.
Similarly, HR1 and HR2 values expected from simple spectral models (see section 4) have been obtained with XSPEC using the effective area of the reference region.
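The hardness-ratio computation and its Monte Carlo error estimate can be sketched as follows. The resampling scheme (independent Poisson draws for the total and background counts in each band) is our reading of the procedure described above, since the paper does not spell out the simulation details:

```python
import numpy as np

def hardness_ratios(net):
    """HR1 and HR2 from net counts in the S, M and H bands."""
    s, m, h = net
    return (m - s) / (m + s), (h - m) / (h + m)

def hr_68_intervals(total, bkg, corr, n_trials=2000, seed=0):
    """Monte Carlo 68% intervals on HR1/HR2: resample the total and
    background counts as Poisson variates, apply the per-band correction
    factors, recompute the ratios and take the 16th/84th percentiles."""
    rng = np.random.default_rng(seed)
    total, bkg, corr = (np.asarray(a, dtype=float)
                        for a in (total, bkg, corr))
    hr1s, hr2s = np.empty(n_trials), np.empty(n_trials)
    for i in range(n_trials):
        net = (rng.poisson(total) - rng.poisson(bkg)) * corr
        hr1s[i], hr2s[i] = hardness_ratios(net)
    return (tuple(np.percentile(hr1s, [16, 84])),
            tuple(np.percentile(hr2s, [16, 84])))
```

With realistic count levels the intervals bracket the central values; for very faint sources the denominators can fluctuate towards zero, which is why percentile intervals are preferable to Gaussian error propagation here.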
## 4 Results
### 4.1 The Hard Energy Range and the CXB
In this section we discuss the hard (2-10 keV) X-ray average spectrum of the present sample and we compare it with that of the CXB; to this end we use the HR2 values and the “stacked” spectra introduced in section 3.
In figure 1, for all sources, we plot the HR2 value versus the GIS2 CCR; the filled squares represent the sources detected with a signal to noise ratio greater than 4.0 while the open squares represent the sources detected with a signal to noise ratio between 3.5 and 4.0. The HR2 values are then compared with those expected from a non absorbed power-law model with energy spectral index $`\alpha _E`$ ($`f_X\propto E^{-\alpha _E}`$) ranging from $`-1.0`$ to 2.0.
Figure 1 clearly shows a broadening of the HR2 distribution going to fainter CCR; furthermore a flattening of the mean spectrum with decreasing fluxes is evident. A similar broadening and flattening of the HR2 distribution is still detected if only the 40 sources (18 Bright and 22 Faint) detected with a signal to noise ratio greater than 4.0 are used, showing that this result is not due to the sources near the detection threshold limit.
It is worth noting the presence of many sources which seem to be characterized by a very flat 2-10 keV spectrum with $`\alpha _E\lesssim 0.5`$ and of a number of sources with “inverted” spectra (i.e. $`\alpha _E\lesssim 0.0`$). This is particularly evident in the Faint Sample, where about half of the sources seem to be described by $`\alpha _E\lesssim 0.5`$ and about 30% by “inverted” spectra. These latter objects could represent a new population of very hard serendipitous sources or, alternatively, a population of very absorbed sources as expected from the CXB synthesis models based on the AGN Unification Scheme.
We have checked whether the observed hardening of the mean spectral index can be attributed to a spectral bias in the source selection. Since sources with the same flux but different spectra will deposit a different number of counts in the detector, it is evident that, as one approaches the flux limit of a survey, sources with a favourable spectrum will be detected (and thus included in the sample), while sources with an unfavourable spectrum will become increasingly under-represented (see Zamorani et al., 1988 for a discussion of this effect). However, in the case of the ASCA/GIS, this selection effect favours the detection of steep sources, thus giving further support to the reality of our findings <sup>4</sup><sup>4</sup>4As an example, in a 50,000 s observation, a source with 2-10 keV flux = $`5\times 10^{-13}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> characterized by an absorbed power-law spectrum with $`\alpha _E=2.0`$ and $`N_H=3\times 10^{20}`$ cm<sup>-2</sup> will deposit (at the reference position and inside a region of 2 arcmin radius) about 300 (2-10 keV) counts, while a source with the same flux but a power-law spectrum with $`\alpha _E=0.0`$ and the same $`N_H`$ will deposit about 180 counts..
We note that a population of very hard X-ray sources has been recently suggested also by Giommi et al., 1998 in order to explain the spectral properties of the faint BeppoSAX sources and to reconcile the 2-10 keV LogN($`>`$S)–LogS with the 5-10 keV LogN($`>`$S)–LogS obtained from BeppoSAX data.
To further investigate the flattening of the source’s mean spectral index we have used the “stacked” spectra introduced in section 3.1.
In Table 1 we report the results of the power-law fits to the stacked spectra of the Faint and the Bright sample; the unfolded spectra of the two samples are shown in figure 2 (Bright sample: open squares; Faint sample: filled squares).
As can be seen from the $`\chi _\nu ^2`$ values reported in Table 1 and from the spectra shown in figure 2, a simple power-law spectral model represents a good description of the stacked spectra of the two samples in the 2-10 keV energy range. The $`N_H`$ values corresponding to the mean line of sight Galactic absorption for the two independent samples have been used in the fits. However, given the energy range of interest (2.0 - 10.0 keV), the spectra are not significantly affected by the Galactic $`N_H`$ value. Consistent results are obtained if we use either the lowest or highest ($`0.7\times 10^{20}`$ cm<sup>-2</sup> or $`9.06\times 10^{20}`$ cm<sup>-2</sup> respectively) Galactic $`N_H`$ value sampled in the present survey.
It is worth noting that the unfolded spectra of the Faint and the Bright sample reported in figure 2 do not show any compelling evidence of emission lines. A strong emission line would be expected if, for example, sources with a strong Iron emission line at 6.4 keV, contributing in a substantial way to the CXB, were strongly clustered at some particular redshift. This subject has already been discussed by Gilli et al., 1999, reaching the conclusion that the maximum contribution of the Iron line to the CXB is less than a few percent. The unfolded spectra shown in figure 2 confirm their results.
In Figure 3 the results obtained here are compared with those obtained using data from other satellites or from other ASCA medium-deep survey programs. The best-fit energy index of the Bright sample ($`<\alpha _E>=0.87\pm 0.08`$) is in good agreement with the mean spectral properties of the objects in the Ginga and HEAO1 A-2 samples and with the mean spectral properties of the broad line AGNs detected by ROSAT (Almaini et al., 1996; Romero-Colmenero et al., 1996). The Faint sample is best described by $`<\alpha _E>=0.36\pm 0.14`$; this is consistent with other ASCA results (Ueda et al., 1998) and with the spectrum of the CXB in the 2-10 keV energy range (Marshall et al., 1980; Gendreau et al., 1995). For the purpose of figure 3 we have used a count rate to flux conversion factor adequate for a power law spectral model with $`\alpha _E=0.36`$ (Faint sample) or $`\alpha _E=0.87`$ (Bright sample).
We have evaluated the influence that the sources with the hardest energy distribution have on the combined spectra. If we exclude the 6 sources with HR2$`>`$0.2 (5 from the Faint sample and 1 from the Bright sample) we find that the combined spectra are still significantly different, being described by a power-law model with $`<\alpha _E>=0.53\pm 0.14`$ and $`<\alpha _E>=1.04\pm 0.10`$ respectively.
Finally, in the case of the Bright sample two sources contribute about 35% of the total counts; if we exclude these two objects the remaining stacked spectrum is described by a power-law model with $`<\alpha _E>=1.00\pm 0.16`$, showing that the inclusion or exclusion of these two objects does not change any of our results. Note that in the case of the Faint sample each object contributes at most 6% of the stacked counts.
These results clearly show that: a) we have detected a flattening of the sources' mean spectral properties toward fainter fluxes and b) we are beginning to detect those X-ray sources whose combined X-ray spectrum is consistent with that of the 2 - 10 keV CXB.
### 4.2 The Broad Band Spectral Properties and the AGN Unification Scheme
In this section we do not intend to derive specific spectral properties and/or parameters for each source; the limited statistics and the complexity of AGN broad band X-ray spectra (see Mushotzky, Done and Pounds, 1993 for a review of the subject) prevent us from doing so. Rather, we regard this sample as representative of the hard X-ray sky and we investigate whether the currently popular CXB synthesis models based on the AGN Unification Scheme can describe the overall spectral properties of the ASCA sample as inferred from the hardness ratios. According to the AGN Unification model for the synthesis of the CXB, a population of unabsorbed (Type 1: $`N_H<10^{22}`$ cm<sup>-2</sup>) and absorbed (Type 2: $`N_H>10^{22}`$ cm<sup>-2</sup>) AGNs can reproduce the shape and intensity of the CXB from several keV to about 100 keV (see Madau, Ghisellini and Fabian, 1994; Comastri et al., 1995; Maiolino et al., 1998b). Because about $`90\%`$ of the ASCA sources in this sample <sup>5</sup><sup>5</sup>5Up to now 12 sources have been spectroscopically identified. The optical breakdown is the following: 1 star, 2 clusters of galaxies, 7 Broad Line AGNs and 2 Narrow Line AGNs. However we stress that this small sample of identified objects is probably not representative of the whole population. are expected to be AGNs (see Paper I), in the following we will consider this sample as being well approximated by a population of (Type 1 + Type 2) AGNs with flux above $`1\times 10^{-13}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>.
In figure 4 we show (open squares) the position of the sources in the “Hardness Ratio” diagram for the Faint (panel a) and Bright (panel b) sample; in figure 4c we have reported the 12 sources spectroscopically identified so far (see footnote 8). Also reported in figure 4a,b,c (solid lines) are the loci expected from X-ray spectra described by an absorbed power law model. The energy index of the power law ranges from $`-1.0`$ up to 2.0, while the absorbing column density ranges from $`10^{20}`$ cm<sup>-2</sup> up to $`10^{24}`$ cm<sup>-2</sup> (see figure 4d); the absorption has been assumed to be at zero redshift (i.e. the sources are supposed to be in the local universe).
Figure 4 clearly shows how the “Hardness Ratio” diagram, combined with spectral simulations, can be used to obtain information on the sources' X-ray spectral properties, as well as the power of using two hardness ratios to investigate the broad band spectral properties of the sources. If only one ratio is known, say HR2 for example, a source with HR2 in the range 0 – 0.1 could be described by an absorbed power law with either an energy index of 0.0 and $`N_H`$ = 10<sup>20</sup> cm<sup>-2</sup> or an energy index of 2.0 and $`N_H`$ = 10<sup>23</sup> cm<sup>-2</sup>. The use of both HR1 and HR2 allows the ambiguity to be resolved.
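The grid of model hardness ratios can be reproduced qualitatively even without the GIS response. The sketch below assumes a flat effective area and a crude single-power-law photoabsorption cross-section; both are purely illustrative stand-ins for the XSPEC simulations with the true reference-region effective area:

```python
import numpy as np

BANDS = ((0.7, 2.0), (2.0, 4.0), (4.0, 10.0))  # S, M, H in keV

def model_hr(alpha, log_nh, n_e=2000):
    """Expected (HR1, HR2) for a power law with energy index alpha,
    photoabsorbed by a column N_H = 10**log_nh at z = 0.  A flat
    effective area and the rough scaling sigma(E) ~ 2e-22 (E/keV)^-2.6
    cm^2 are illustrative assumptions replacing the real GIS response."""
    nh = 10.0 ** log_nh
    counts = []
    for e_lo, e_hi in BANDS:
        e = np.linspace(e_lo, e_hi, n_e)
        sigma = 2e-22 * e ** -2.6       # crude cross-section [cm^2]
        photons = e ** -(alpha + 1.0)   # photon index = energy index + 1
        counts.append(np.sum(photons * np.exp(-sigma * nh)) * (e[1] - e[0]))
    s, m, h = counts
    return (m - s) / (m + s), (h - m) / (h + m)
```

Scanning alpha from -1 to 2 and log N_H from 20 to 24 traces a grid qualitatively similar to the solid lines of figure 4; in particular, no (alpha, N_H) pair can reach the soft side of the unabsorbed locus, which is the behaviour exploited in the discussion below.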
We note that the hardness ratios computed as described in section 3.2 have not been corrected for the different Galactic absorbing column density along the line of sight of each source. For the present sample this ranges from $`10^{20}`$ cm<sup>-2</sup> to $`9\times 10^{20}`$ cm<sup>-2</sup>; figure 4 clearly shows that this correction is insignificant.
One of the most striking features of figure 4 is the large spread in HR1 and HR2 displayed by the ASCA sources and the departure from the loci of absorbed, single power law spectra. This implies that the broad band (0.7 - 10 keV) spectra of the sources are more complex than a simple absorbed power law. In particular, the ASCA sources located on the left side of the line representing power laws with $`N_H`$ = 10<sup>20</sup> cm<sup>-2</sup> (see figure 4d) are not explained within the absorbed power-law model. A similar result is obtained even if the absorbing material is assumed to be at higher redshift, e.g. z = 1 or 2.
We note that the object in figure 4b (or 4c) characterized by HR1 ≃ 0.0 and HR2 ≃ 0.71 (a1800+6638; its X-ray flux is $`6.5\times 10^{-13}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>) is spectroscopically identified with the Seyfert 2 galaxy NGC 6552. The X-ray spectrum of this object (Reynolds et al., 1994 and Fukazawa et al., 1994) is consistent with a model composed of a narrow Gaussian line plus an empirical “leaky-absorber” continuum; the latter is composed of an absorbed power-law and a non absorbed power law having a common photon index. Fukazawa et al. (1994) found that the NGC 6552 spectrum requires a photon index of about 1.4, an absorbing column density of $`6\times 10^{23}`$ cm<sup>-2</sup>, an uncovered fraction of about $`2\%`$ and a narrow Gaussian line (consistent with the $`K_\alpha `$ iron emission line at 6.4 keV) having an equivalent width of about 0.9 keV.
That the simple absorbed power law model is unable to explain the scatter in the hardness ratios is not surprising, given the observational evidence that the broad band X-ray spectra of Type 1 and, even more so, of Type 2 AGN are complex and affected by several parameters such as the viewing angle and the torus thickness (see e.g. Turner et al., 1997; Turner et al., 1998; Maiolino et al., 1998a; Bassani et al., 1999). In particular, for Type 2 AGNs we could have, as a function of the absorbing column density, one of the following three cases:
1) $`10^{22}<N_H<5\times 10^{23}`$ cm<sup>-2</sup>. The hard X-ray continuum above a few keV is dominated by the directly-viewed component, making the source nucleus visible to the observer and the column density measurable; the reflected/scattered component is starting to become relevant in the soft energy range. In this case the observed spectrum is that of an absorbed Type 1 AGN with some extra flux at low energies;
2) $`5\times 10^{23}<N_H<10^{25}`$ cm<sup>-2</sup>. In this case both the directly-viewed component and the reflected/scattered component are observed and the resulting spectrum becomes very complex.
3) $`N_H>10^{25}`$ cm<sup>-2</sup>. The torus is very thick. The continuum source is blocked from direct view up to several tens of keV and the observed spectrum is dominated by the scattered/reflected component. In this case the observed continuum is that of a Type 1 AGN (in the case of scattering by warm material near the nucleus) or that of a Compton reflected spectrum (in the case of cold reflection from the torus).
Given this spectral complexity we have tested the Hardness Ratios plot against the following simplified AGNs spectral model composed of:
a) an absorbed power-law spectrum. This component represents the continuum source, i.e. for an absorbing column density of about $`10^{20}`$ cm<sup>-2</sup> this represents the “zero” order continuum of a Type 1 object;
b) a non absorbed power-law component, characterized by the same energy spectral index of the absorbed power-law and by a normalization in the range 1-10% of that of the absorbed power-law component. This component represents the scattered fraction (by warm material) of the nuclear emission along the line of sight; theoretical and observational evidence (see Turner et al., 1997 and reference therein) suggest that this scattered fraction is in the range 1-10%;
c) a narrow emission line at 6.4 keV, having an equivalent width of 230 eV for low values of $`N_H`$ ($`\lesssim 10^{20}`$ cm<sup>-2</sup>). This component represents the mean equivalent width of the Fe K<sub>α</sub> emission line in Seyfert 1 galaxies as determined by Nandra et al., 1997 using ASCA data.
In summary, the AGN spectral model used is:
$$S(E)=E^{-\alpha _E}e^{-\sigma _E\times N_H}+KE^{-\alpha _E}+Fe_{6.4keV}$$
where E is the photon energy, $`\alpha _E`$ is the energy spectral index, $`\sigma _E`$ is the energy-dependent absorbing cross-section, $`N_H`$ is the absorbing column density along the line of sight to the nucleus, K is the scattered fraction and $`Fe_{6.4keV}`$ is the narrow iron emission line at 6.4 keV. This simplified AGN spectral model is equivalent to the model used by Fukazawa et al., 1994 in the case of NGC 6552, and by several authors to describe the “first order” X-ray spectra of Seyfert 2 galaxies (see e.g. Turner et al., 1997).
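To make the interplay of the two continuum components concrete, here is a minimal numerical sketch of the model (an illustration added for this text, not the code used for the simulations): the Fe line is omitted, and the photoelectric cross-section is replaced by a rough $`E^{-8/3}`$ power law normalized to about $`2\times 10^{-22}`$ cm<sup>2</sup> at 1 keV, an assumed stand-in for a realistic absorption model.

```python
import numpy as np

def sigma_abs(E_keV):
    """Crude photoelectric cross-section per H atom (cm^2): an assumed
    ~E**(-8/3) scaling normalized to ~2e-22 cm^2 at 1 keV."""
    return 2e-22 * E_keV ** (-8.0 / 3.0)

def agn_continuum(E_keV, alpha_E=1.0, N_H=1e22, K=0.01):
    """Absorbed direct power law plus the unabsorbed scattered fraction K;
    the narrow 6.4 keV Fe line of the full model is omitted here."""
    direct = E_keV ** (-alpha_E) * np.exp(-sigma_abs(E_keV) * N_H)
    scattered = K * E_keV ** (-alpha_E)
    return direct + scattered
```

Even with these crude assumptions the sketch reproduces the behaviour described above: at $`N_H=10^{23}`$ cm<sup>-2</sup> the 1 keV flux is essentially all scattered light, while at 10 keV the directly-viewed component still dominates.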
We have tested this model as a function of $`\alpha _E`$, the absorbing column density $`N_H`$ and the scattered fraction K. Figure 5 shows, as an example, the results of the spectral simulations in the case of local sources ($`z\simeq 0.0`$) with $`\alpha _E=1`$ and K = 0.01 (filled triangles) or K = 0.1 (filled circles). The input spectral models for the case K = 0.01 and Log$`N_H`$ = 20, 22, 23, 24 and 24.5 are also shown.
We would like to stress that with this simplified AGN spectral model we can go, in a continuous way, from a “first order” Type 1 spectrum to a “first order” Type 2 spectrum simply by changing the $`N_H`$ parameter. In this respect we note that the equivalent width of the narrow iron emission line at 6.4 keV (which, as we said, is fixed to 230 eV for $`N_H\simeq 10^{20}`$ cm<sup>-2</sup>) is $`\sim 2`$ keV when $`N_H\simeq 3\times 10^{24}`$ cm<sup>-2</sup>; this latter value is very similar to that measured in many Seyfert 2 galaxies characterized by a similar absorbing column density (Bassani et al., 1999).
As anticipated at the beginning of this section, some qualitative conclusions can be drawn from the results reported in Figure 5. First of all, it appears that, in the context of this simplified AGN spectral model and for very high $`N_H`$ values, we are able to explain the hardness ratios of the objects located on the left side of the dashed line representing unabsorbed power laws. Second, the results reported in Figure 5 seem to indicate that many of the latter objects could be characterized by a very high absorbing column density; if this indication is confirmed by deeper investigation with XMM and AXAF, then the number of Compton “thick” systems could be significantly higher than previously estimated. In the meantime we note that this result is consistent with recent findings obtained by Maiolino et al., 1998a, Bassani et al., 1999 and Risaliti, Maiolino and Salvati, 1999 by studying the fraction of Compton “thick” systems in a sample of Seyfert 2 galaxies observed with BeppoSAX and/or with ASCA.
Finally, to investigate the Hardness Ratio plot in a model-independent way, we have used a comparison sample of nearby (z$`<`$1) Seyfert 1 and Seyfert 2 galaxies observed with ASCA. This sample was selected from a data set of about 300 ASCA pointings that were treated in the same manner (see section 2) and that are used to extend the survey presented in Cagnoni, Della Ceca and Maccacaro (1998). Among this restricted ASCA data set we have considered those observations pointed at nearby Seyfert 1 and Seyfert 2 galaxies; furthermore, we have considered only the Seyfert 2 galaxies also reported in Table 1 of Bassani et al., 1999 in order to have a uniform data set on their Compton thickness. The comparison sample is composed of 13 Seyfert 1, 15 Compton thin Seyfert 2 and 7 Compton thick Seyfert 2 galaxies<sup>6</sup><sup>6</sup>6The Seyfert 1 are: NGC 1097, MKN 1040, RE 1034+39, MKN 231, MKN 205, 3C 445, I ZW 1, 2E1615+0611, MKN 478, EXO055620-3820.2, MKN 507, NGC 985, TON 180. The Compton thin Seyfert 2 are: NGC 3079, NGC 4258, NGC 3147, NGC 5252, MKN 463e, NGC 1808, NGC 2992, NGC 1365, NGC 6251, MKN 273, IRAS05189-2524, NGC 1672, PKS B1319-164, MKN 348, NGC 7172. The Compton thick Seyfert 2 are: MKN 3, NGC 6240, NGC 1667, NGC 4968, NGC 1386, MKN 477, NGC 5135.. The targets were analyzed in the same way as the serendipitous sources, combining the GIS2 and GIS3 data (see section 3.2).
The results are shown in figure 6. The Seyfert 1 galaxies are strongly clustered around the loci representing low $`N_H`$ values; the Seyfert 2 galaxies, on the contrary, are characterized by a very large spread in their HR1, HR2 values. In particular, many of the Seyfert 2 are located on the left side of the line representing power laws with $`N_H\simeq 10^{20}`$ cm<sup>-2</sup>; about half of these sources have been classified as Compton thick systems by Bassani et al., 1999. We are tempted to suggest that the other half are also Compton thick objects, whose nature is still unrevealed because of the presently poor data quality. XMM and/or AXAF observations are needed to confirm this suggestion. However, whatever their properties (Compton thin or thick) are, the comparison of figures 6 and 4a,b strongly suggests that at least some of the serendipitous ASCA sources located on the left side of the dashed line connecting the $`N_H=10^{20}`$ values could be Type 2 AGNs.
## 5 Summary and Conclusion
In this paper we have used ASCA GIS2 and GIS3 data for a complete and well defined sample of 60 hard (2-10 keV) selected sources to study the spectral properties of X-ray sources down to a flux limit of $`10^{-13}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>. To investigate whether the spectral properties of the sources depend on their brightness, we have defined two subsamples according to the “corrected” 2-10 keV count rate; the Bright sample is defined by the 20 sources with CCR $`\geq 3.9\times 10^{-3}`$ cts s<sup>-1</sup>, while the Faint sample is defined by the 40 fainter sources. The dividing line of $`3.9\times 10^{-3}`$ cts s<sup>-1</sup> corresponds to unabsorbed 2-10 keV fluxes in the range $`5.4\times 10^{-13}`$ to $`3.1\times 10^{-13}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> for a source described by a power-law model with energy spectral index between 0.0 and 2.0, respectively.
The main results of this investigation are the following:
a) the average (2-10 keV) source spectrum hardens towards fainter fluxes. The “stacked” spectrum of the sources in the Bright sample is well described by a power-law model with an energy spectral index of $`<\alpha _E>=0.87\pm 0.08`$, while the “stacked” spectrum of the sources in the Faint sample requires $`<\alpha _E>=0.36\pm 0.14`$; this means that we are beginning to detect those sources having a combined X-ray spectrum consistent with that of the 2-10 keV CXB.
b) the hardness ratio analysis shows that this flattening is due to the appearance of sources with very hard spectra in the Faint sample. About half of the sources in the latter sample require $`\alpha _E<0.5`$, while only 10% of the brighter sources are consistent with such a flat energy spectral index. Furthermore, about 30% of the sources in the Faint sample seem to be characterized by “inverted” ($`\alpha _E<0.0`$) 2-10 keV X-ray spectra. These latter objects could represent a new population of very hard serendipitous sources or, alternatively, a population of very absorbed sources as expected from the CXB synthesis models based on the AGN Unification Scheme (see below).
c) a simple absorbed power-law model is unable to explain the broad band (0.7 - 10 keV) spectral properties of the sources, as inferred from the Hardness Ratio diagram. X-ray spectral models based on the AGN Unification Scheme seem able to explain the overall spectral properties of this sample. This is also suggested by the comparison of our results with those obtained using a sample of nearby and well known Seyfert 1 and Seyfert 2 galaxies observed with ASCA.
Acknowledgments
We are grateful to L. Bassani for stimulating discussions on the comparison sample of Seyfert galaxies, to A. Wolter for a careful reading of the manuscript, and to the anonymous referee for useful comments. G.C. acknowledges financial support from the “Fondazione CARIPLO”. This work received partial financial support from the Italian Ministry for University and Research (MURST) under grant Cofin98-02-32. This research has made use of the NASA/IPAC extragalactic database (NED), which is operated by the Jet Propulsion Laboratory, Caltech, under contract with the National Aeronautics and Space Administration. We thank all the members of the ASCA team who operate the satellite and maintain the data analysis software and the archive.
Table 1
| Sample | Objects | Net counts | $`\alpha _E`$ | $`N_{HGal}`$ | $`\chi _\nu ^2`$/$`\nu `$ |
| --- | --- | --- | --- | --- | --- |
| | | $`(2-10)`$ keV | | $`(10^{20}cm^2)`$ | |
| Faint | 40 | $`2900`$ | $`0.36\pm 0.14`$ | 2.75 | 1.09/105 |
| Bright | 20 | $`3400`$ | $`0.87\pm 0.08`$ | 3.66 | 1.06/82 |
## 6 Figure Captions:
Figure 1: HR2 value versus the “corrected” 2 - 10 keV count rate for all the serendipitous ASCA sources belonging to the Cagnoni, Della Ceca and Maccacaro, 1998 sample. The filled squares represent the sources detected with a signal-to-noise ratio greater than 4.0, while the open squares represent the sources detected with a signal-to-noise ratio between 3.5 and 4.0. The dotted lines represent the expected HR2 values for an unabsorbed power-law model with an energy spectral index as indicated in the figure. The dashed line represents the dividing line between the Faint and the Bright sample.
Figure 2: The unfolded stacked ASCA spectra of the Faint (filled squares) and the Bright (open squares) samples. Note that these spectra are shown in arbitrary units.
Figure 3: Comparison of the results obtained from the stacked spectra of the Faint and the Bright sample with those obtained using data from other satellites or from other ASCA medium-deep survey programs (adapted from figure 3 of Ueda et al., 1998).
Figure 4: a) “Hardness Ratio” distribution for the sources in the Faint sample. The grid represents the loci expected from absorbed power-law spectra (values are shown in panel d); b) As panel a but for the Bright sample; c) As panel a but for the identified sources. Symbols are as follows: star: star; filled triangles: clusters of galaxies; filled squares: Narrow Line AGNs; open squares: Broad Line AGNs; d) Expected HR1 and HR2 values as a function of the absorbing column density and of the power-law energy index of the spectrum. The absorption is assumed to be at zero redshift.
Figure 5: The open squares represent the position of the sources in the “Hardness Ratio” diagram (Faint + Bright Sample). The solid lines are the loci expected from the simplified AGN spectral model discussed in the text (see section 4.2). We have highlighted with filled dots or filled triangles the absorbing column densities of Log $`N_H`$ = 20, 21.5, 22, 22.5, 23, 23.5, 24, 24.5 (from left to right); the absorption has been assumed to be at zero redshift (i.e. the sources are supposed to be in the local universe). The filled triangles represent the model with energy spectral index equal to 1.0 and scattered fraction of 1%, while the filled dots represent the model with energy spectral index equal to 1.0 and scattered fraction of 10%. The dashed line represents the locus expected from unabsorbed power laws ($`N_H\simeq 10^{20}`$ cm<sup>-2</sup>) with energy spectral index ranging from $`-1.0`$ up to 2.0. We also show the input spectral models for the case of K=10% and LogN<sub>H</sub> = 20, 22, 23, 24, and 24.5.
Figure 6: As figure 4a but for the comparison sample of Seyfert 1 and Seyfert 2 galaxies. Filled triangles: Seyfert 1; Open squares: Compton Thick Seyfert 2; Filled Squares: Compton Thin Seyfert 2 (see section 4.2).
# Liquid-Gas phase transition in Bose-Einstein Condensates
## Abstract
We study the effects of a repulsive three-body interaction on a system of trapped ultra-cold atoms in a Bose-Einstein condensed state. The corresponding $`s`$wave non-linear Schrödinger equation is solved numerically and also by a variational approach. A first-order liquid-gas phase transition is observed for the condensed state up to a critical strength of the effective three-body force.
PACS 03.75.Fi, 36.40.Ei, 05.30.Jp, 34.10.+x
The experimental evidence of Bose-Einstein condensation (BEC) in magnetically trapped weakly interacting atoms brought considerable support to the theoretical research on bosonic condensation. The nature of the effective atom-atom interaction determines the stability of the condensed state: the two-body pseudopotential is repulsive for a positive $`s`$wave atom-atom scattering length and attractive for a negative scattering length. Ultra-cold trapped atoms with a repulsive two-body interaction undergo a Bose-Einstein phase transition to a stable condensed state, as found experimentally in a number of cases, for <sup>87</sup>Rb, for <sup>23</sup>Na and for <sup>7</sup>Li. However, a condensed state of atoms with negative $`s`$wave atom-atom scattering length would be unstable for a large number of atoms.
It was indeed observed in the <sup>7</sup>Li gas, for which the $`s`$wave scattering length is $`a=-(14.5\pm 0.4)`$ Å, that the number of allowed atoms in the condensed state was limited to a maximum value between 650 and 1300, which is consistent with the mean-field prediction. An earlier experiment suggested that the number of atoms in the condensed state was significantly larger than the theoretical predictions with a two-body pseudopotential. This is consistent with the addition of a repulsive three-body interaction, which can extend considerably the region of stability of the condensate even for a very weak three-body force.
It was reported in Ref. that a sufficiently dilute and cold bosonic gas exhibits similar three-body dynamics for both signs of the $`s`$wave atom-atom scattering length and the long-range three-body interaction between neutral atoms is effectively repulsive for either sign of the scattering length. It was suggested that, for a large number of bosons the three-body repulsion can overcome the two-body attraction, and a stable condensate will appear in the trap . Singh and Rokhsar have also observed that above the critical value $`n`$ (which is proportional to their $`\gamma _c`$) the only local minimum is a dense gas state, where the neglect of three-body collisions fails.
In this work, using the mean-field approximation, we investigate the competition between the leading term of an attractive two-body interaction, which originates from a negative two-atom $`s`$wave scattering length, and a repulsive three-body interaction, which can occur in the Efimov limit ($`|a|\to \infty `$) as discussed in Ref. <sup>*</sup><sup>*</sup>*The physics of three atoms in the Efimov limit is discussed in Ref. . This reference extends a previous study of universal aspects of the Efimov effect . The relevance of three-body effects in BEC was also previously reported in Refs. , where the stability of the numerical solutions is discussed. We show that a kind of liquid-gas phase transition appears inside the Bose condensate.
The Ginzburg-Pitaevskii-Gross (GPG) nonlinear Schrödinger equation (NLSE) is extended to include the effective potential coming from the three-body interaction and then solved numerically in the $`s`$wave channel. The dimensionless parameters are related to the two-body scattering length, the strength of the three-body interaction and the number of atoms in the condensed state. As observed in Ref. , to incorporate all two-body scattering processes in such a many-particle system, the two-body potential should be replaced by the many-body $`T`$matrix. Usually, at very low energies, this is approximated by the two-body scattering matrix, which is directly proportional to the scattering length $`a`$ . To obtain the desired equation, we first consider the effective Lagrangian density, which describes the condensed wave-function in the Hartree approximation, yielding the GPG energy functional for the trial wave function $`\mathrm{\Psi }`$:
$`\mathcal{L}`$ $`=`$ $`{\displaystyle \frac{i\hbar }{2}}\left(\mathrm{\Psi }^{}{\displaystyle \frac{\partial \mathrm{\Psi }}{\partial t}}-{\displaystyle \frac{\partial \mathrm{\Psi }^{}}{\partial t}}\mathrm{\Psi }\right)+{\displaystyle \frac{\hbar ^2}{2m}}\mathrm{\Psi }^{}\nabla ^2\mathrm{\Psi }`$ (2)
$`-{\displaystyle \frac{m}{2}}\omega ^2r^2|\mathrm{\Psi }|^2+\mathcal{L}_\mathrm{I}.`$ (2)
In our description, the atomic trap is given by a rotationally symmetric harmonic potential, with angular frequency $`\omega `$, and $`\mathcal{L}_\mathrm{I}`$ gives the effective atom interactions up to three particles.
The effective interaction Lagrangian density for ultra-low temperature bosonic atoms, including two and three-body effective interaction at zero energy, is written as:
$$\mathcal{L}_\mathrm{I}=-\frac{2\pi \hbar ^2a}{m}\left|\mathrm{\Psi }\right|^4-\frac{2\lambda _3}{3!}\left|\mathrm{\Psi }\right|^6,$$
(3)
where $`\lambda _3`$ is the strength of the three-body effective interaction and $`a`$ the scattering length.
The NLSE, which describes the condensed wave-function in the mean-field approximation, is variationally obtained from the effective Lagrangian given in Eq. (2). By considering a stationary solution, $`\mathrm{\Psi }(\stackrel{}{r},t)=e^{-i\mu t/\hbar }\psi (\stackrel{}{r})`$, where $`\mu `$ is the chemical potential and $`\psi (\stackrel{}{r})`$ is normalized to 1, and by rescaling the NLSE for the $`s`$wave solution, we obtain
$$\left[-\frac{d^2}{dx^2}+\frac{1}{4}x^2-\frac{|\mathrm{\Phi }(x)|^2}{x^2}+g_3\frac{|\mathrm{\Phi }(x)|^4}{x^4}\right]\mathrm{\Phi }(x)=\beta \mathrm{\Phi }(x)$$
(4)
for $`a<0`$, where $`x\equiv \sqrt{2m\omega /\hbar }\,r`$ and $`\mathrm{\Phi }(x)\equiv N^{1/2}\sqrt{8\pi |a|}\,r\,\psi (\stackrel{}{r})`$. The dimensionless parameters, related to the chemical potential and the three-body strength, are respectively given by $`\beta \equiv \mu /\hbar \omega `$ and $`g_3\equiv \lambda _3\hbar \omega m^2/(4\pi \hbar ^2a)^2`$. The normalization for $`\mathrm{\Phi }(x)`$ reads $`\int _0^{\infty }dx|\mathrm{\Phi }(x)|^2=n`$, where the reduced number $`n`$ is related to the number of atoms $`N`$ by $`n\equiv 2N|a|\sqrt{2m\omega /\hbar }`$. The boundary conditions in Eq. (4) are given by $`\mathrm{\Phi }(0)=0`$ and $`\mathrm{\Phi }(x)\to C\mathrm{exp}(-x^2/4+[\beta -\frac{1}{2}]\mathrm{ln}(x))`$ as $`x\to \infty `$.
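A useful consistency check on a shooting treatment of Eq. (4): in the non-interacting limit $`n\to 0`$ the nonlinear terms drop out and the equation reduces to the $`s`$wave harmonic oscillator, whose lowest eigenvalue is $`\beta =3/2`$. The sketch below (step size, matching point and bracketing interval are arbitrary choices, not taken from the paper) recovers this value by bisecting on $`\beta `$:

```python
def shoot(beta, x_max=8.0, h=0.005):
    """Integrate u'' = (x**2/4 - beta)*u with u(0)=0, u'(0)=1 by RK4;
    return u(x_max). This is the n -> 0 (linear) limit of Eq. (4)."""
    def f(x, u, v):
        return v, (0.25 * x * x - beta) * u
    u, v, x = 0.0, 1.0, 0.0
    for _ in range(int(round(x_max / h))):
        k1u, k1v = f(x, u, v)
        k2u, k2v = f(x + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = f(x + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = f(x + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return u

def ground_state_beta(lo=1.0, hi=2.0, tol=1e-10):
    """Bisect on beta: the sign of u(x_max) flips at the lowest eigenvalue."""
    s_lo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) * s_lo > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The full problem adds the $`-|\mathrm{\Phi }|^2/x^2`$ and $`g_3|\mathrm{\Phi }|^4/x^4`$ terms, with an outer iteration adjusting the amplitude so that the normalization integral equals $`n`$.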
The above equation (4) will be treated by numerical procedures for non-linear differential equations, employing the Runge-Kutta (RK) and shooting methods. However, it will be helpful first to consider a variational procedure, using a trial Gaussian wave-function for $`\psi (\stackrel{}{r})`$
$$\psi _{var}(\stackrel{}{r})=\left(\frac{1}{\pi \alpha ^2}\frac{m\omega }{\hbar }\right)^{\frac{3}{4}}\mathrm{exp}\left[-\frac{r^2}{2\alpha ^2}\left(\frac{m\omega }{\hbar }\right)\right],$$
(5)
where $`\alpha `$ is a dimensionless variational parameter. The corresponding root-mean-square radius is proportional to the variational parameter $`\alpha `$, as $`\langle r^2\rangle _{var}=3\alpha ^2\hbar /(2m\omega )`$, while the central density is given by $`\rho _{c,var}(\alpha )=\alpha ^{-3}\left(m\omega /\pi \hbar \right)^{3/2}`$. The expression for the total variational energy is given by
$$E_{var}(\alpha )=\hbar \omega N\left[\frac{3}{4}\left(\alpha ^2+\frac{1}{\alpha ^2}\right)-\frac{n}{4\sqrt{\pi }\alpha ^3}+\frac{2n^2g_3}{9\sqrt{3}\pi \alpha ^6}\right].$$
(6)
In the same way, we can obtain the corresponding chemical potential, Eq. (4):
$$\mu _{var}(\alpha )=\hbar \omega \left[\frac{3}{4}\left(\alpha ^2+\frac{1}{\alpha ^2}\right)-\frac{n}{2\sqrt{\pi }\alpha ^3}+\frac{2n^2g_3}{3\sqrt{3}\pi \alpha ^6}\right].$$
(7)
The variational solutions of $`E_{var}(\alpha )`$ are given, as a function of $`n`$ and $`g_3`$ (where $`a<0`$ and $`g_3>0`$), by finding the extrema of Eq. (6) with respect to $`\alpha `$.
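The extremum structure of Eq. (6) is simple enough to explore directly. The following sketch (the grid range and resolution are arbitrary choices added here for illustration) counts the local minima of the dimensionless energy per atom and distinguishes the single-minimum from the two-minimum regimes:

```python
import numpy as np

def e_var(alpha, n, g3):
    """Dimensionless variational energy per atom, E_var/(N*hbar*omega),
    from Eq. (6): trap + kinetic, attractive two-body, repulsive three-body."""
    return (0.75 * (alpha ** 2 + alpha ** -2)
            - n / (4.0 * np.sqrt(np.pi) * alpha ** 3)
            + 2.0 * n ** 2 * g3 / (9.0 * np.sqrt(3.0) * np.pi * alpha ** 6))

def count_minima(n, g3, a_min=0.1, a_max=2.0, steps=2000):
    """Count interior local minima of e_var(alpha) on a uniform grid."""
    a = np.linspace(a_min, a_max, steps)
    f = e_var(a, n, g3)
    return int(np.sum((f[1:-1] < f[:-2]) & (f[1:-1] < f[2:])))
```

For $`g_3=0.005`$ this gives a single minimum at small $`n`$ and two minima (separated by the unstable maximum) near the transition region, matching the structure shown in Fig. 1.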
In Fig. 1, we first illustrate the variational procedure considering an arbitrarily small three-body interaction, chosen as $`g_3=0.005`$. In the upper part of the figure, we show five small plots for the total variational energy $`E`$, in terms of the variational width $`\alpha `$. Each one of the small plots corresponds to a particular value of $`n`$. For each number $`n`$ we report the energy of the variational extrema in the lower part of figure 1. In region (I), where the number of atoms is still small, the attractive two-body force dominates over the repulsive three-body force and just one minimum of the energy as a function of the variational parameter $`\alpha `$ is found. That is also the case for $`g_3=0`$. When the number of atoms is further increased (region (II)), two minima appear in the energy $`E\left(\alpha \right)`$. An unstable maximum is also found between the two minima. The lower energy minimum is stable, while the solution corresponding to the smaller $`\alpha `$ is metastable. This solution has a higher density and, consequently, its metastability is justified by the repulsive three-body force acting at higher densities. The minimum number $`n`$ for the appearance of the metastable state is characterized by an inflection point in the energy as a function of $`\alpha `$. The value of $`n`$ at the inflection point corresponds to the beak in the plot of extremum energy versus $`n`$, because for larger $`n`$ three variational solutions are found, as depicted in the lower part of figure 1. The attractive two-body and trap potentials dominate the condensed state in the low-density stable phase up to the crossing point (C). At this point, the denser metastable solution becomes degenerate in energy with the lower-density stable solution and a first order phase transition takes place.
Since the two solutions differ by their density, this transition is analogous to a gas-liquid phase transition for which the density difference between the liquid and the gas is the order parameter. In the variational calculation this occurs at the transition number $`n\simeq 1.3`$, while the numerical solution of the NLSE gives $`\simeq 1.2`$. In region (III), we observe two local minima with different energies, a higher-density stable point and a lower-density metastable point. The metastable solution disappears in the beak at the boundary between regions (III) and (IV). In regions (III) and (IV) the three-body repulsion stabilizes a dense solution against the collapse induced by the two-body attraction. The qualitative features of the variational solution are clearly confirmed by the numerical solution of the NLSE, as shown by the dashed curve.
In Fig. 2, considering several values of $`g_3`$ (0, 0.012, 0.015, 0.0183 and 0.02) and using exact numerical calculations, we present the evolution of some relevant physical quantities, $`E`$, $`\mu `$, $`\rho _c`$ and $`\langle r^2\rangle `$, as functions of the reduced number of atoms $`n`$. For $`g_3=0`$, our calculation reproduces the result presented in Ref. , with the maximum number of atoms limited by $`n_{max}\simeq 1.62`$ ($`n`$ is equal to $`|C_{nl}^{3D}|`$ of Ref. ). The plot of the energy as a function of $`n`$ shows that for values of $`g_3>0.0183`$ the phase transition is absent. At $`g_3\simeq 0.0183`$ and $`n\simeq 1.8`$, the stable, metastable and unstable solutions merge. This corresponds to a critical point associated with a second order phase transition. At this point the derivatives of $`\mu `$, $`\rho _c`$ and $`\langle r^2\rangle `$ with respect to $`n`$ all diverge.
As shown in the figure, for $`0<g_3<0.0183`$ the density $`\rho _c`$, the chemical potential $`\mu `$ and the root-mean-square radius $`\langle r^2\rangle `$ present back bendings typical of a first order phase transition. For each $`g_3`$, the transition point given by the crossing point in the $`E`$ versus $`n`$ plot corresponds to a Maxwell construction in the diagram of $`\mu `$ versus $`n`$. At this point an equilibrated condensate should undergo a phase transition from the branch extending to small $`n`$ to the branch extending to large $`n`$. The system should never explore the back-bending part of the diagram because, as we have seen in figure 1, it is an unstable extremum of the energy. From this figure it is clear that the first branch is associated with large radii, small densities and positive chemical potentials, while the second branch presents a more compact configuration with a smaller radius, a larger density and a negative chemical potential. This justifies the term gas for the first one and liquid for the second one. However, we want to stress that both solutions are quantum fluids. With $`g_3=0.012`$ the gas phase occurs for $`n<1.64`$ and the liquid phase for $`n>1.64`$. For $`g_3>0.0183`$ all the presented curves are well behaved and a single fluid phase is observed. We also checked that calculations with the variational expressions for $`\langle r^2\rangle `$, $`\rho _c`$ and $`\mu `$ are in good agreement with the ones depicted in Fig. 2, following the same trend shown in Fig. 1 for the energy.
Finally, in the lower frame of Fig. 3, we show the phase boundary separating the two phases in the plane defined by $`n`$ and $`g_3`$, and the critical point at $`n\simeq 1.8`$ and $`g_3\simeq 0.0183`$. In the upper frame, we show the boundary of the forbidden region in the central density versus $`g_3`$ diagram.
To summarize, our calculation presents, at the mean-field level, the consequences of a repulsive three-body effective interaction for the Bose condensed wave-function, together with an attractive two-body interaction. A first-order liquid-gas phase transition is observed for the condensed state as soon as a small repulsive effective three-body force is introduced. In dimensionless units the critical point is obtained at $`g_3\simeq 0.0183`$ and $`n\simeq 1.8`$. The characterization of the two phases through their energies, chemical potentials, central densities and radii was also given for several values of the three-body parameter $`g_3`$.
The results presented in this paper can be relevant for determining a possible clear signature of the presence of repulsive three-body interactions in Bose condensed atoms. They point to a new type of phase transition between two Bose fluids. Because of the condensation of the atoms in a single wave-function, this transition may present very peculiar fluctuation and correlation properties. As a consequence, it may fall into a different universality class than the standard liquid-gas phase transition, which is strongly affected by many-body correlations. This question certainly deserves further study.
Acknowledgments
We thank the organizers of the “International Workshop on Collective Excitations in Fermi and Bose Systems” (São Paulo, Brazil), C. Bertulani, L.F. Canto and M. Hussein, for providing the conditions for stimulating discussions and for the start of this collaboration. AG, TF and LT also thank N. Akhmediev and M.P. Das for correspondence related to Ref. , received after the conclusion of this work, which pointed out their previous suggestion of the relevance of three-body effects in BEC . This work was partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo and Conselho Nacional de Desenvolvimento Científico e Tecnológico.
# Eigenvalue Repulsion and Matrix Black Holes
## 1 Introduction
Since Matrix theory is conjectured to be the discrete light-cone quantization of M-theory, it is natural to use it as a way of investigating the quantum properties of black holes.
Banks, Fischler, Klebanov and Susskind (BFKS) and Martinec and Li have studied black holes whose entropy, $`S`$, is roughly the same as their light-like momentum, $`N`$. They can be formed by reducing the energy of a highly excited cluster of D0 branes until the momentum of the individual D0 branes is the inverse of the cluster’s transverse size (saturating the Heisenberg uncertainty bound). The transverse size is found by setting the potential equal to the kinetic energy (in accord with the virial theorem). The choice of potential is crucial. The obvious choice, $`v^4/r^7`$, gives the Schwarzschild radius, $`R_s`$. However, at this point the expansion parameter for the potential is 1, so the expansion is on the verge of breaking down.
Nonetheless, once the correct radius is found, Matrix theory makes many correct predictions about black holes: the right mass for a given radius and the correct long-range gravitational potential between equal-mass black holes. The constituents have the correct properties to be Hawking radiation. If the particles are treated with Boltzmann statistics, the entropy is even correct.
The problem becomes more pronounced when $`N\gg S`$. Simply replacing the D0 branes with bound states and using the $`v^4/r^7`$ potential fails to give correct results. Li identified other terms in the Matrix theory effective potential that could give the correct radius, but it is not clear why those terms would dominate. Li and Martinec get the correct results using $`v^4/r^7`$ to approximate processes that exchange longitudinal momentum.
Eigenvalue repulsion can be used to find the size of black holes without any knowledge of the potential. Using this size, all of the previous black hole results follow. Eigenvalue repulsion allows the results to be extended to $`N\gg S`$. Because eigenvalue repulsion involves the off-diagonal elements of the matrices, it will be natural to treat the system with Boltzmann statistics.
First, I describe eigenvalue repulsion in Matrix theory. Then I find the radius and energy at $`R=R_s`$ and discuss entropy in this case. Finally, I discuss $`R\gg R_s`$. All factors of order one are ignored throughout.
## 2 Low energies and eigenvalue repulsion
To study black holes we need to understand Matrix theory in the regime where the energy per D0 brane is very low (much less than the Planck energy). This will mean two things: first, remote D0 branes are not connected by strings (i.e. the off-diagonal elements between well separated D0 branes are in their ground state).
As the D0 branes get within $`l_\text{p}`$ of each other, the quantum fluctuations in the off-diagonal elements connecting the D0 branes also become of order $`l_\text{p}`$, so the D0 branes are able to explore their full matrix degrees of freedom. This region is sometimes called the stadium. The second consequence of the low energy is that the wave-function must be almost constant across the width of the stadium, since variations on this scale would require very large kinetic energies.
The flatness of the wave function in the stadium means that we can look to the theory of random matrices for intuition about how the system will explore the full matrix degrees of freedom. It is known that the statistics of random matrices strongly favors matrices with well separated eigenvalues, pushing the D0 branes to the edge of the stadium.
The role of eigenvalue repulsion can be made a bit more quantitative. In the stadium, the distance between two D0 branes is $`r^2=TrX^iX^i`$. For fixed $`r`$, this is the equation for a sphere in 27 dimensions (nine directions times the three independent generators of traceless, antihermitian, $`2\times 2`$ matrices). Since the wave function is relatively flat in this region, the probability that the D0 branes are a distance $`r`$ apart grows like $`r^{26}`$. That is 18 powers of $`r`$ faster than the $`r^8`$ growth that one would expect for the nine spatial directions alone. This means that the wave function is strongly dominated by configurations which have the D0 branes near the edge of the stadium. There is no actual potential pushing the D0-branes apart, but this statistical effect is the Matrix Theory manifestation of eigenvalue repulsion.<sup>1</sup><sup>1</sup>1 The gauge symmetry does not affect this result. The gauge field degrees of freedom are removed by going to $`A=0`$ gauge. The assumption that the wave function is flat satisfies the requirement that the state be annihilated by the gauge generators.
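The same point can be phrased as a two-line calculation (an illustration added here, not part of the original argument): for a flat wave function, the probability of finding the separation inside radius $`r`$ of a unit-radius stadium with $`d`$ flat directions is $`r^d`$, so almost all of the probability sits in a thin outer shell when $`d=27`$.

```python
def outer_shell_prob(d, inner=0.9):
    """Probability that a uniformly drawn point in a unit d-ball lies
    outside radius `inner`: P(r > inner) = 1 - inner**d."""
    return 1.0 - inner ** d

def median_radius(d):
    """Median separation for a flat wave function: solves R**d = 1/2."""
    return 0.5 ** (1.0 / d)
```

With $`d=27`$ about 94% of the probability lies outside nine-tenths of the stadium radius (the median separation is about 0.97 of the full radius), versus about 61% for the nine spatial directions alone.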
The situation is similar when $`N`$ D0 branes are present. When the separation between *any two* D0 branes falls below $`l_\text{p}`$, new matrix degrees of freedom open up and eigenvalue repulsion turns on. This causes the wave function to strongly favor configurations with size at least $`N^{1/9}l_\text{p}`$, one D0 brane per Planck volume.
It has been shown that composite systems of D0 branes grow like $`N^{1/3}`$, much faster than the $`N^{1/9}`$ growth predicted by eigenvalue repulsion . However, this rapid growth is caused by fluctuations in very high energy, off-diagonal elements of the matrix connecting widely separated D0 branes. Only very high energy probes will be able to see these high frequency quantum fluctuations. Eigenvalue repulsion is a zero energy effect, so it should be seen by any probe. Similarly, a low energy probe will be captured by a black hole if its impact parameter is less than $`R_s`$, but a high energy probe (with energy much greater than the mass of the black hole) will be captured at a much greater distance.
## 3 Black holes at the BFKS point
A very highly excited clump of D0 branes has enough mass to create a black hole whose radius is larger than the light-like compactification radius. Lowering the system’s energy will slow the constituent D0 branes until the momentum saturates the Heisenberg uncertainty bound, i.e. the momentum of individual D0 branes is one over the size of the clump. This is the BFKS point . Using the size, $`N^{1/9}l_\text{p}`$, given by eigenvalue repulsion one can find the mean kinetic energy.
$`E_k`$ $`={\displaystyle \frac{R}{\mathrm{}}}Tr\mathrm{\Pi }^i\mathrm{\Pi }^i`$ (1)
$`{\displaystyle \frac{R}{\mathrm{}}}{\displaystyle \underset{a=1}{\overset{N}{}}}{\displaystyle \frac{\mathrm{}^2}{R_s^2}}`$ (2)
$`{\displaystyle \frac{RR_s^7}{G_N}}`$ (3)
We know from the virial theorem that the potential energy will be of the same order as the kinetic energy, so we do not need to know anything about its form.
Other than the form of the potential, this is exactly the system studied by previous authors. As they have observed, this system has the right relationship between mass and radius ,
$$M^2=E\frac{N\mathrm{}}{R}=\frac{R_s^{16}}{G_N^2},$$
(4)
and gives the correct long range gravitational interaction for equal mass black holes . The constituent D0 branes have the right energy and momentum to be Hawking radiation, should they escape .
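These relations can be closed symbolically. The one ingredient used implicitly in passing from (2) to (3) is the BFKS identification of the D0-brane number with the entropy, $`N\mathrm{}\sim R_s^9/G_N`$; granting that, and setting all order-one factors to 1, a short check confirms the chain (1)-(4):

```python
import sympy as sp

R, Rs, GN, hbar = sp.symbols('R R_s G_N hbar', positive=True)

# BFKS identification N = S, i.e. N*hbar ~ R_s**9/G_N, implicit in going
# from eq. (2) to eq. (3).
N = Rs**9 / (GN * hbar)

# Eq. (1)-(3): N constituents, each saturating the uncertainty bound p ~ hbar/R_s.
Ek = (R / hbar) * N * (hbar / Rs)**2
assert sp.simplify(Ek - R * Rs**7 / GN) == 0

# Eq. (4): M**2 = E*N*hbar/R.
M2 = Ek * N * hbar / R
assert sp.simplify(M2 - Rs**16 / GN**2) == 0

# Equivalently M ~ R_s**8/G_N, i.e. the eleven-dimensional Schwarzschild
# scaling R_s ~ (G_N*M)**(1/8).
print(sp.sqrt(M2) * GN)   # R_s**8
```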
## 4 Entropy
To get the correct entropy the D0 branes must be treated with Boltzmann statistics . This occurs because the wave function is dominated by states that break all of the gauge symmetry mixing remote D0 branes, even the permutation symmetry.
The large separation between the majority of D0 branes breaks the continuous symmetry down to the permutation symmetry. If all the D0 branes were far apart one could gauge transform all of the matrices into diagonal form simultaneously. Since this diagonalization would only be unique up to permutations, the permutation symmetries would be unbroken.
However, since neighboring D0 branes are near enough to explore the full matrix degrees of freedom, the off diagonal elements connecting them cannot be integrated out . The matrices cannot be simultaneously diagonalized, only put in band diagonal form.<sup>2</sup><sup>2</sup>2 Here I am using “band diagonal” in a very general sense. The D0 branes will be connected to their nearest neighbors in all nine dimensions. There will be no way to write these matrices as two dimensional arrays with zeros away from the diagonal. This breaks the permutation symmetry because a gauge transformation that permutes remote D0 branes would take the matrices out of band diagonal form.
For example, consider a situation where D0 branes $`a,b`$ and $`c`$ are close together and $`e,f`$ and $`g`$ are close together, but the two sets are far apart. This will be represented by matrices like this (All the 1’s represent terms of order $`l_\text{p}`$, not literally 1):
$$X^i=\left[\begin{array}{ccccccccccc}\mathrm{}& 1& & & & & & & & & \\ 1& x_a^i& 1& & & & & & & & \\ & 1& x_b^i& 1& & & & & & & \\ & & 1& x_c^i& 1& & & & & & \\ & & & 1& \mathrm{}& \mathrm{}& & & & & \\ & & & & \mathrm{}& \mathrm{}& 1& & & & \\ & & & & & 1& x_e^i& 1& & & \\ & & & & & & 1& x_f^i& 1& & \\ & & & & & & & 1& x_g^i& 1& \\ & & & & & & & & 1& \mathrm{}& \end{array}\right]$$
(5)
Permuting the $`b`$ and $`f`$ D0 branes takes the matrices out of band diagonal form:
$$U^{}X^iU=\left[\begin{array}{ccccccccccc}\mathrm{}& 1& & & & & & & & & \\ 1& x_a^i& & & & & & 1& & & \\ & & x_f^i& & & & 1& & 1& & \\ & & & x_c^i& 1& & & 1& & & \\ & & & 1& \mathrm{}& \mathrm{}& & & & & \\ & & & & \mathrm{}& \mathrm{}& 1& & & & \\ & & 1& & & 1& x_e^i& & & & \\ & 1& & 1& & & & x_b^i& & & \\ & & 1& & & & & & x_g^i& 1& \\ & & & & & & & & 1& \mathrm{}& \end{array}\right]$$
(6)
where $`U`$ is the matrix permuting $`b`$ and $`f`$.
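A small numerical sketch (illustrative only) makes the point concrete: build a band-diagonal matrix of the form (5), conjugate it by the permutation swapping two remote D0 branes, and verify that entries appear far from the diagonal.

```python
import numpy as np

n = 10
# Band-diagonal X as in eq. (5): positions on the diagonal, order-l_p
# couplings between nearest neighbours only.
X = np.diag(np.arange(n, dtype=float)) + np.eye(n, k=1) + np.eye(n, k=-1)

# Permutation swapping two remote D0 branes (indices 2 and 7, like b <-> f).
U = np.eye(n)
U[[2, 7]] = U[[7, 2]]

Xp = U.T @ X @ U

def bandwidth(A):
    """Largest |i - j| with a nonzero entry."""
    i, j = np.nonzero(A)
    return int(np.max(np.abs(i - j)))

print(bandwidth(X), bandwidth(Xp))   # 1 6: the permuted matrix is no longer band diagonal
```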
The symmetries mixing neighboring D0 branes are not broken at all. However, the number of near neighbors is of order one, so in doing statistical mechanics at large $`N`$ the D0 branes should be treated as distinguishable.
This description is similar to the proposal of BFKS in which the D0 branes are tethered to a background configuration . Here, however, there is no firm distinction between the background and the excitations.
## 5 Beyond the BFKS point
At the BFKS point the constituent D0 branes have already saturated their uncertainty bound, so the energy of the system cannot be lowered by further reducing their momentum. They will have to band together into bound states. These bound states have a larger Matrix theory mass (they are of course massless in M-theory) and will be able to live in a smaller volume with less energy .
The arguments of the previous section would appear to apply to the bound states of Matrix theory, causing them to be seen by low energy probes as large objects. However, even in the large $`N`$ limit, these objects should not have hard interactions for impact parameters greater than $`l_\text{p}`$. Some miracle of the bound state wave function and supersymmetry must cause cancellations that allow the objects to pass through each other without interaction.
These miracles will not occur when the wave functions are excited. In eleven dimensions this corresponds to changing the supergravitons into black holes. Excitations will cause the graviton to break into a metastable collection of $`n`$ smaller gravitons. Each of these is unexcited, and is therefore still able to pass through the others without interaction. The distance at which the eigenvalue repulsion plays a role is still $`l_\text{p}`$.
Consider a system of two well separated bound states, consisting of $`m_1`$ and $`m_2`$ D0 branes respectively. They will break the $`U(m_1+m_2)`$ symmetry down to $`U(m_1)\times U(m_2)`$. There are $`m_1m_2`$ strings connecting them, each with 8 transverse, complex polarizations. The remaining $`U(m_1)\times U(m_2)`$ gauge symmetry and the $`O(8)`$ rotation symmetry about the axis of separation are enough to relate all of these off diagonal modes, so they must all have the same mass. The mass is given by the separation of the bound states exactly as in the case of two separated D0 branes. When the centers of the two bound states get within a distance $`l_\text{p}`$ of each other, all $`16m_1m_2`$ of these degrees of freedom open up, causing tremendous eigenvalue repulsion.
This bound state cluster picture closely mimics the description at the BFKS point, and has been studied by several previous authors. Once the radius is found, many of the successes of the $`N=S`$ case can be carried over in a straightforward manner to $`N>S`$. In particular, the mass to radius relationship is correct, the constituent bound states have the right properties to become Hawking radiation, and Boltzmann statistics give the correct entropy .
The breaking of the statistics symmetries by off diagonal fluctuations should work for bound states exactly as it did for individual D0 branes, explaining the use of Boltzmann statistics.
### 5.1 Newtonian Potential
As a demonstration, consider the potential between two static black holes of different masses.<sup>3</sup><sup>3</sup>3 This analysis is similar to the equal mass case studied by Banks et al. , but differs from Gao and Zhang’s treatment . We will assume that their separation is greater than $`R`$, where $`v^4/r^7`$ dominates the two body interactions.
It will be much more convenient to use the velocities of the bound states rather than their momenta. The mass of a bound state is $`\mathrm{}m/R`$. Since the bound states that make up a black hole have momenta $`\mathrm{}/R_s`$, the velocities are $`R/(R_sm)`$.
First, observe that the velocity of the constituent bound states is the boost parameter required to bring the black hole to the rest frame.
$`{\displaystyle \frac{E}{P_{-}}}`$ $`={\displaystyle \frac{ER}{N\mathrm{}}}`$ (7)
$`={\displaystyle \frac{R^2R_s^7}{G_NmN\mathrm{}}}`$ (8)
$`={\displaystyle \frac{R^2}{m^2R_s^2}}`$ (9)
$`=v^2`$ (10)
Black holes that are at rest relative to each other will be made of bound states with the same velocity, though not of the same momentum.
It will be useful to note:
$`M^2`$ $`=EP_{-}=v^2P_{-}^2`$ (11)
$`M`$ $`={\displaystyle \frac{vN\mathrm{}}{R}}`$ (12)
Next, we find the energy shift due to the $`v^4/r^7`$ interaction. Since the velocities of the constituents are roughly the same in the two black holes, $`(v_1v_2)^4=v^4`$.
$`\mathrm{\Delta }E`$ $`={\displaystyle \frac{G_N\mathrm{}^2}{R^3}}{\displaystyle \underset{n_1,n_2}{}}m_1m_2{\displaystyle \frac{(v_1v_2)^4}{r^7}}`$ (13)
$`={\displaystyle \frac{G_N\mathrm{}^2}{R^3}}N_1N_2{\displaystyle \frac{v^4}{r^7}}`$ (14)
$`=G_N{\displaystyle \frac{M_1M_2v^2}{Rr^7}}`$ (15)
Finally, this can be used to find the potential. The potential is much smaller than the masses of the black holes.
$`E{\displaystyle \frac{N\mathrm{}}{R}}`$ $`=M^2=(M_1+M_2+V)^2`$ (16)
$`V`$ $`={\displaystyle \frac{\mathrm{\Delta }E(N_1+N_2)\mathrm{}}{R(M_1+M_2)}}`$ (17)
$`={\displaystyle \frac{\mathrm{\Delta }E}{v}}`$ (18)
$`=G_N{\displaystyle \frac{M_1M_2}{R^{}r^7}},`$ (19)
where $`R^{}=R/v`$ is the compactification radius in the rest frame of the black holes.
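Since order-one factors are dropped throughout, the content of (7)-(19) is pure algebra, and it can be verified mechanically. The sketch below treats $`v`$ as the common constituent velocity and checks each step:

```python
import sympy as sp

R, GN, hbar, r, v = sp.symbols('R G_N hbar r v', positive=True)
N1, N2 = sp.symbols('N_1 N_2', positive=True)

# Eq. (12): a hole built from N constituents of common velocity v has M = v*N*hbar/R.
M1 = v * N1 * hbar / R
M2 = v * N2 * hbar / R

# Eq. (13)-(14): N1*N2 two-body v**4/r**7 interactions, with (v1 - v2)**4 ~ v**4.
dE = (GN * hbar**2 / R**3) * N1 * N2 * v**4 / r**7

# Eq. (15): the same expression in terms of the masses.
assert sp.simplify(dE - GN * M1 * M2 * v**2 / (R * r**7)) == 0

# Eq. (16)-(18): linearizing M**2 = E*N*hbar/R around M1 + M2 gives V = dE/v
# (dropping the order-one factor of 2 from the linearization).
V = dE * (N1 + N2) * hbar / (R * (M1 + M2))
assert sp.simplify(V - dE / v) == 0

# Eq. (19): an 11d Newtonian potential with rest-frame radius R' = R/v.
Rp = R / v
assert sp.simplify(V - GN * M1 * M2 / (Rp * r**7)) == 0
print("potential algebra checks out")
```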
## 6 Conclusions
Eigenvalue repulsion predicts the size of black holes without requiring knowledge of the effective potential between the constituent D0 branes, and suggests a mechanism for breaking the statistics symmetry of the constituents. However, this understanding is still rather primitive. Much work will have to be done even to turn the sketch presented here into a derivation of black hole thermodynamics.
Little has been said about the fermion degrees of freedom in this theory. Might supersymmetry cause terms that cancel the eigenvalue repulsion? This does not appear to happen. Supersymmetry requires that fermionic ground state energy cancels bosonic ground state energy. However, eigenvalue repulsion is purely statistical and is not affected by the fermionic states.
I would like to thank Jeff Harvey, Emil Martinec and Miao Li for helpful conversations about these matters.
WIS-99/17/Apr-DPP, hep-ph/9904473
Lepton Parity in Supersymmetric Flavor Models
Galit Eyal and Yosef Nir
Department of Particle Physics, Weizmann Institute of Science, Rehovot 76100, Israel
We investigate supersymmetric models where neither $`R`$ parity nor lepton number nor baryon number is imposed. The full high energy theory has an exact horizontal $`U(1)`$ symmetry that is spontaneously broken. Quarks and Higgs fields carry integer horizontal charges but leptons carry half integer charges. Consequently, the effective low energy theory has two special features: a $`U(1)`$ symmetry that is explicitly broken by a small parameter, leading to selection rules, and an exact residual $`Z_2`$ symmetry, that is lepton parity. As concerns neutrino parameters, the $`Z_2`$ symmetry forbids contributions from $`R_p`$-violating couplings and the $`U(1)`$ symmetry induces the required hierarchy. As concerns baryon number violation, the $`Z_2`$ symmetry forbids proton decay and the $`U(1)`$ symmetry provides sufficient suppression of double nucleon decay and of neutron-antineutron oscillations.
1. Introduction
In contrast to the Standard Model (SM), the Supersymmetric Standard Model (SSM) does not have accidental lepton- ($`L`$) and baryon-number ($`B`$) symmetries. This situation leads to severe phenomenological problems, e.g. fast proton decay and large neutrino masses. The standard solution to this problem is to impose a discrete symmetry, $`R_p`$. The SSM with $`R_p`$ does have $`L`$ and $`B`$ as accidental symmetries.
Both the SM and the SSM provide no explanation for the smallness and hierarchy in the Yukawa couplings. One way of explaining the flavor puzzle is to impose an approximate horizontal symmetry. Such symmetries suppress not only the Yukawa couplings but also the $`B`$ and $`L`$ violating terms of the SSM \[1--16\]. Consequently, it is possible to construct viable supersymmetric models with horizontal symmetries and without $`R_p`$. The phenomenology of these models is very different from that of $`R_p`$-conserving models. In particular, the LSP is unstable, various $`L`$ and $`B`$ violating processes may occur at an observable level, and an interesting pattern of neutrino masses is predicted.
It is not simple, however, to solve all problems of $`L`$ and $`B`$ violation by means of horizontal symmetries:
(a) Constraints from proton decay require uncomfortably large horizontal charges for quarks to sufficiently suppress the $`B`$ violating terms ;
(b) In models where the $`\mu `$ terms are not aligned with the $`B`$ terms, constraints from neutrino masses require uncomfortably large horizontal charges for leptons to sufficiently suppress the $`L`$ violating terms \[3,8\].
Therefore, most models with horizontal symmetries and without $`R_p`$ still impose baryon number symmetry and assume $`\mu B`$ alignment at some high energy scale. In this work we show that it is not necessary to make these assumptions: one can construct viable supersymmetric models without $`R`$ parity, without lepton number, without baryon number and with horizontal charges that are not very large. The crucial point is that the horizontal $`U(1)_H`$ symmetry is not completely broken: a residual discrete symmetry, lepton parity, forbids proton decay and aligns the $`\mu `$ and $`B`$ terms. The constraints on the baryon number violating terms from double nucleon decay and neutron-antineutron oscillations are easily satisfied and interesting neutrino parameters can be accommodated naturally in these models.
The idea that lepton parity could arise from the spontaneous breaking of a horizontal symmetry was first suggested, to the best of our knowledge, in . Explicit models, with a horizontal $`U(1)`$ symmetry and a residual $`Z_2`$ (different from lepton parity), were presented in refs. \[10,16\].
The plan of this paper is as follows. In section 2, we define our notations. In section 3 we present an explicit model and its predictions for the Yukawa parameters in the quark and in the lepton sectors. We emphasize that it is not easy to accommodate the parameters of the MSW solution to the solar neutrino problem with a small mixing angle. In section 4 we investigate the consequences of the residual lepton parity on $`R`$-parity violating couplings. A summary is given and various comments are made in section 5.
2. Notations
The matter supermultiplets are denoted in the following way:
$$\begin{array}{cc}& Q_i(3,2)_{+1/6},\overline{u}_i(\overline{3},1)_{-2/3},\overline{d}_i(\overline{3},1)_{+1/3},\hfill \\ & L_i(1,2)_{-1/2},\overline{\mathrm{}}_i(1,1)_{+1},N_i(1,1)_0,\hfill \\ & \varphi _u(1,2)_{+1/2},\varphi _d(1,2)_{-1/2}.\hfill \end{array}$$
The $`N_i`$ supermultiplets are Standard Model singlets. Their masses are assumed to be much heavier than the electroweak breaking scale but lighter than the scale of $`U(1)_H`$ breaking. We denote this intermediate mass scale by $`M`$. Lepton number is violated by bilinear terms in the superpotential,
$$\mu _iL_i\varphi _u,$$
and by trilinear terms in the superpotential,
$$\lambda _{ijk}L_iL_j\overline{\mathrm{}}_k+\lambda _{ijk}^{}L_iQ_j\overline{d}_k.$$
Baryon number is violated by trilinear terms in the superpotential,
$$\lambda _{ijk}^{\prime \prime }\overline{u}_i\overline{d}_j\overline{d}_k.$$
There are also $`L`$ breaking supersymmetry breaking bilinear terms in the scalar potential:
$$B_iL_i\varphi _u,$$
and
$$\stackrel{~}{m}_{i0}^2L_i^{\dagger }\varphi _d,$$
where here $`L_i`$, $`\varphi _d`$ and $`\varphi _u`$ denote scalar fields.
3. The Yukawa Hierarchy
3.1. A Simple Model
Consider a model with a horizontal symmetry $`U(1)_H`$. The symmetry is broken by two small parameters, $`\lambda `$ and $`\overline{\lambda }`$, to which we attribute $`H`$-charges of $`+1`$ and $`-1`$, respectively. We give them equal values (so that the corresponding $`D`$ terms do not lead to supersymmetry breaking at a high scale). For concreteness we take $`\lambda =\overline{\lambda }=0.2`$.
a. Terms in the superpotential and in the Kahler potential that carry an integer $`H`$-charge $`n`$ are suppressed by $`𝒪(\lambda ^{|n|})`$.
b. Terms in the superpotential and in the Kahler potential that carry a non-integer charge vanish.
We set the $`H`$ charges of the matter fields as follows:
$$\begin{array}{cc}\varphi _u& \varphi _d\\ (0)& (0)\end{array}$$
$$\begin{array}{ccccccccccc}Q_1& Q_2& Q_3& & \overline{u}_1& \overline{u}_2& \overline{u}_3& & \overline{d}_1& \overline{d}_2& \overline{d}_3\\ (3)& (2)& (0)& & (4)& (2)& (0)& & (4)& (3)& (3)\end{array}$$
$$\begin{array}{ccccccccccc}L_1& L_2& L_3& & \overline{\mathrm{}}_1& \overline{\mathrm{}}_2& \overline{\mathrm{}}_3& & N_1& N_2& N_3\\ (7/2)& (-1/2)& (-1/2)& & (11/2)& (11/2)& (7/2)& & (-1/2)& (1/2)& (1/2)\end{array}$$
The selection rules dictate then the following form for the quark mass matrices:
$$M_u\varphi _u\left(\begin{array}{ccc}\lambda ^7& \lambda ^5& \lambda ^3\\ \lambda ^6& \lambda ^4& \lambda ^2\\ \lambda ^4& \lambda ^2& 1\end{array}\right),M_d\varphi _d\left(\begin{array}{ccc}\lambda ^7& \lambda ^6& \lambda ^6\\ \lambda ^6& \lambda ^5& \lambda ^5\\ \lambda ^4& \lambda ^3& \lambda ^3\end{array}\right).$$
In eq. (3.1) and below, unknown coefficients of $`𝒪(1)`$ are not explicitly written. These mass matrices give order of magnitude estimates for the physical parameters (masses and mixing angles) that are consistent with the experimental data (extrapolated to a high energy scale):
$$\begin{array}{cc}& m_t/\varphi _u1,m_c/m_t\lambda ^4,m_u/m_c\lambda ^3,\hfill \\ & m_b/m_t\lambda ^3,m_s/m_b\lambda ^2,m_d/m_s\lambda ^2,\hfill \\ & |V_{us}|\lambda ,|V_{cb}|\lambda ^2,|V_{ub}|\lambda ^3.\hfill \end{array}$$
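The selection rules are mechanical: each Yukawa entry is suppressed by $`\lambda `$ raised to the absolute value of the total $`H`$-charge of the corresponding superpotential term (the Higgs charges vanish here). A few lines of code (illustrative; the $`𝒪(1)`$ coefficients are of course not predicted) reproduce the exponents of the quark textures and the CKM estimates:

```python
# Horizontal charges of the model (the Higgs charges are zero).
Q = [3, 2, 0]
ubar = [4, 2, 0]
dbar = [4, 3, 3]

def texture(left, right):
    """Exponent of lambda for each Yukawa entry: |H(left_i) + H(right_j)|."""
    return [[abs(li + rj) for rj in right] for li in left]

Mu = texture(Q, ubar)
Md = texture(Q, dbar)
print(Mu)  # [[7, 5, 3], [6, 4, 2], [4, 2, 0]]
print(Md)  # [[7, 6, 6], [6, 5, 5], [4, 3, 3]]

# Order-of-magnitude CKM angles: |V_ij| ~ lambda**|H(Q_i) - H(Q_j)|,
# giving |V_us| ~ lambda, |V_cb| ~ lambda**2, |V_ub| ~ lambda**3.
print(abs(Q[0] - Q[1]), abs(Q[1] - Q[2]), abs(Q[0] - Q[2]))  # 1 2 3
```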
For the charged leptons mass matrix $`M_{\mathrm{}}`$, the neutrino Dirac mass matrix $`M_\nu ^{\mathrm{Dirac}}`$, and the Majorana mass matrix for the singlet neutrinos $`M_N`$, we have
$$M_{\mathrm{}}\varphi _d\left(\begin{array}{ccc}\lambda ^9& \lambda ^9& \lambda ^7\\ \lambda ^5& \lambda ^5& \lambda ^3\\ \lambda ^5& \lambda ^5& \lambda ^3\end{array}\right),$$
$$M_\nu ^{\mathrm{Dirac}}\varphi _u\left(\begin{array}{ccc}\lambda ^3& \lambda ^4& \lambda ^4\\ \lambda & 1& 1\\ \lambda & 1& 1\end{array}\right),M_NM\left(\begin{array}{ccc}\lambda & 1& 1\\ 1& \lambda & \lambda \\ 1& \lambda & \lambda \end{array}\right).$$
These matrices give the following order of magnitude estimates:
$$\begin{array}{cc}& m_\tau /\varphi _d\lambda ^3,m_\mu /m_\tau \lambda ^2,m_e/m_\mu \lambda ^4,\hfill \\ & m_{\nu _3}/(\varphi _u^2/M)1/\lambda ,m_{\nu _2}/m_{\nu _3}\lambda ^2,m_{\nu _1}/m_{\nu _2}\lambda ^4,\hfill \\ & |V_{e\nu _2}|\lambda ^2,|V_{\mu \nu _3}|1,|V_{e\nu _3}|\lambda ^4.\hfill \end{array}$$
The neutrino parameters fit the atmospheric neutrino data and the small mixing angle (SMA) MSW solution of the solar neutrino problem .
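The same bookkeeping covers the lepton sector. The sketch below takes the charge magnitudes quoted above with the sign assignment $`H(L_i)=(7/2,-1/2,-1/2)`$, $`H(\overline{\mathrm{}}_i)=(11/2,11/2,7/2)`$, $`H(N_i)=(-1/2,1/2,1/2)`$, the choice (unique up to an overall sign flip) that reproduces the displayed textures; only the absolute values of the charge sums enter the estimates:

```python
from fractions import Fraction as F

# Sign assignment for the half-integer lepton charges, chosen so that the
# charged-lepton, Dirac and Majorana textures above come out.
L = [F(7, 2), F(-1, 2), F(-1, 2)]
lbar = [F(11, 2), F(11, 2), F(7, 2)]
N = [F(-1, 2), F(1, 2), F(1, 2)]

def texture(left, right):
    return [[abs(li + rj) for rj in right] for li in left]

Ml = texture(L, lbar)   # charged lepton mass matrix exponents
MD = texture(L, N)      # neutrino Dirac mass matrix exponents
MN = texture(N, N)      # singlet Majorana mass matrix exponents

assert Ml == [[9, 9, 7], [5, 5, 3], [5, 5, 3]]
assert MD == [[3, 4, 4], [1, 0, 0], [1, 0, 0]]
assert MN == [[1, 0, 0], [0, 1, 1], [0, 1, 1]]
print("lepton textures reproduced")
```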
3.2. The Neutrino Mass Hierarchy
As concerns neutrino parameters, the most predictive class of models is the one where $`s_{23}`$ and $`\mathrm{\Delta }m_{\mathrm{SN}}^2/\mathrm{\Delta }m_{\mathrm{AN}}^2`$ depend only on the horizontal charges of $`L_2`$, $`L_3`$, $`\overline{\mathrm{}}_2`$ and $`\overline{\mathrm{}}_3`$ \[19,20\]. We call such models, where the horizontal charges of neither the first generation nor sterile neutrinos affect the above parameters, (2,0) models. Models with $`n_a`$ active and $`n_s`$ sterile neutrinos are denoted by ($`n_a,n_s`$). It was proven in that in ($`2,n_s\le 2`$) models, for neutrinos with large mixing, $`s_{23}\sim 1`$, we have $`m_2/m_3\sim \lambda ^{4n}`$ ($`\mathrm{\Delta }m_{\mathrm{SN}}^2/\mathrm{\Delta }m_{\mathrm{AN}}^2\sim \lambda ^{8n}`$) with integer $`n`$. Therefore, the MSW solutions, which require $`\mathrm{\Delta }m_{\mathrm{SN}}^2/\mathrm{\Delta }m_{\mathrm{AN}}^2\sim \lambda ^2`$–$`\lambda ^4`$, cannot be accommodated in this framework. The LMA solution can be achieved in $`n_a=3`$ models (for any $`n_s`$) but the SMA solution requires $`n_s\ge 3`$. (For an $`n_s=3`$ model, see, for example, .) This means a loss of predictive power, particularly in comparison with $`n_s=0`$ models.
The proof in ref. referred to models with only integer horizontal charges (in units of the charge of the breaking parameters). The question arises then whether one can have a hierarchy for $`\mathrm{\Delta }m_{\mathrm{SN}}^2/\mathrm{\Delta }m_{\mathrm{AN}}^2`$ that is milder than $`\lambda ^{8n}`$ in models where leptons carry half-integer charges even for $`n_s\le 2`$. We now show that the answer to this question is negative.
Consider (2,0) models with $`H(L_2)\ne H(L_3)`$. The large mixing can be obtained from the charged lepton mass matrix if the following condition is fulfilled :
$$H(L_2)+H(L_3)=2H(\overline{\mathrm{}}_3).$$
The hierarchy is given by
$$\frac{m(\nu _2)}{m(\nu _3)}\sim \lambda ^{2|H(L_2)+H(L_3)|-4|H(L_3)|}.$$
From (3.1) and (3.1) we find
$$\frac{\mathrm{\Delta }m_{\mathrm{SN}}^2}{\mathrm{\Delta }m_{\mathrm{AN}}^2}\lambda ^{8(|H(\overline{\mathrm{}}_3)||H(L_3)|)}.$$
Since $`H(\overline{\mathrm{}}_3)`$ and $`H(L_3)`$ are both half-integers, the difference $`|H(\overline{\mathrm{}}_3)||H(L_3)|`$ is an integer and the hierarchy is $`\lambda ^{8n}`$.
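The last step rests on a simple arithmetic fact: the difference of the absolute values of two half-integers is always an integer, so the exponent $`8(|H(\overline{\mathrm{}}_3)|-|H(L_3)|)`$ is a multiple of 8. A brute-force scan (illustrative) confirms it:

```python
from fractions import Fraction as F

# Half-integers: odd numerator over 2.
half_ints = [F(k, 2) for k in range(-19, 20, 2)]

for a in half_ints:          # candidate H(lbar_3)
    for b in half_ints:      # candidate H(L_3)
        n = abs(a) - abs(b)
        assert n.denominator == 1   # so 8*(|a| - |b|) is a multiple of 8
print("hierarchy is always lambda**(8n) with integer n")
```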
The same statement ($`\mathrm{\Delta }m_{\mathrm{SN}}^2/\mathrm{\Delta }m_{\mathrm{AN}}^2\lambda ^{8n}`$) holds also in (2,2) models. (The proof for that is quite lengthy; it follows lines similar to Appendix A in and we do not present it here.) We conclude then that models where leptons carry half-integer charges do not provide new ways to achieve a mild hierarchy between $`\mathrm{\Delta }m_{\mathrm{SN}}^2`$ and $`\mathrm{\Delta }m_{\mathrm{AN}}^2`$. For the MSW solutions we have either the LMA solution with $`\nu _e\nu _\mu `$ forming a pseudo-Dirac neutrino or at least three sterile neutrinos playing a role in the light neutrino flavor parameters.
4. $`L`$ and $`B`$ Violation
The model described above has an exact $`Z_2`$ symmetry, that is lepton parity. This symmetry follows from the selection rules. But it can be understood in a more intuitive way from the full high energy theory. We assume here that our low energy effective theory given in the previous section comes from a supersymmetric version of the Froggatt-Nielsen mechanism . The full high energy theory has an exact $`U(1)_H`$ symmetry that is spontaneously broken by the VEVs of two scalar fields, $`\varphi `$ and $`\overline{\varphi }`$, of $`H`$-charges $`+1`$ and $`-1`$, respectively. Quarks and leptons in vector representations of the SM gauge group and of $`U(1)_H`$, with very heavy masses $`M_{\mathrm{FN}}`$, communicate the information about the breaking to the SSM fields ($`\lambda =\langle \varphi \rangle /M_{\mathrm{FN}}`$ and $`\overline{\lambda }=\langle \overline{\varphi }\rangle /M_{\mathrm{FN}}`$).
The $`U(1)_H`$ symmetry has a $`Z_2`$ subgroup where all fields that carry half-integer $`H`$-charges are odd, while all those that carry integer $`H`$-charges are even. This symmetry is not broken by $`\varphi `$ and $`\overline{\varphi }`$ since $`\varphi `$ and $`\overline{\varphi }`$ are $`Z_2`$ even. Our choice of $`H`$-charges is such that all leptons ($`L_i`$, $`\overline{\mathrm{}}_i`$ and $`N_i`$) carry half-integer charges and therefore are $`Z_2`$-odd. All other fields (quarks and Higgs fields) carry integer charges and therefore are $`Z_2`$-even. We can identify the exact residual symmetry then as lepton parity.
Lepton parity is very powerful in relaxing the phenomenological problems that arise in supersymmetric models without $`R_p`$. In particular, it forbids the bilinear $`\mu `$ terms of eq. (2.1), the $`B`$ terms of eq. (2.1), the $`\stackrel{~}{m}^2`$ terms of eq. (2.1), and the trilinear terms of eq. (2.1). The only allowed renormalizable $`R_p`$ violating terms are the baryon number violating couplings of eq. (2.1).
This situation has two interesting consequences:
(i) Similarly to $`R_p`$ conserving models, the only allowed $`\mu `$ term is $`\mu \varphi _u\varphi _d`$ and the only allowed $`B`$ term is $`B\varphi _u\varphi _d`$. The $`\mu `$ and $`B`$ terms are then aligned. Furthermore, the mass-squared matrix for the scalar $`(1,2)_{-1/2}`$ fields can be separated into two blocks, a $`3\times 3`$ block for the three slepton fields and a single term for $`\varphi _d`$. Therefore there will be no renormalizable tree-level contribution to neutrino masses \[25,3\]. Consequently, the very large $`H`$ charges that are needed to achieve precise $`\mu `$-$`B`$ alignment are not necessary here.
Since the $`\lambda _{ijk}`$ and $`\lambda _{ijk}^{}`$ couplings vanish, there will also be no $`R_p`$ breaking loop contributions to neutrino masses. On the other hand, the usual see-saw contributions which break lepton number by two units are allowed. This justifies why we considered (3.1) as the only source for neutrino masses.
(ii) Since processes that violate lepton number by one unit are forbidden, the proton is stable. (We assume here that there is no fermion that is lighter than the proton and does not carry lepton number.) The most severe constraints on baryon number violating couplings are then easily satisfied.
On the other hand, the $`\lambda ^{\prime \prime }`$ couplings of eq. (2.1) contribute to double nucleon decay, to neutron-antineutron oscillations and to other rare processes \[26--32\].
The non-observation of baryon number violating processes gives strong constraints on all the $`\lambda ^{\prime \prime }`$ couplings, e.g.
$$\lambda _{112}^{\prime \prime }\lesssim 10^{-6},\lambda _{113}^{\prime \prime }\lesssim 5\times 10^{-3}.$$
The first bound comes from double nucleon decay and the second from neutron-antineutron oscillations, and they correspond to a typical supersymmetric mass $`\stackrel{~}{m}300GeV`$. In our models, all the relevant constraints are satisfied since the $`\lambda ^{\prime \prime }`$ couplings are suppressed by the selection rules related to the broken $`U(1)`$. Explicitly, our choice of $`H`$-charges in eq. (3.1) leads to the following order of magnitude estimates:
$$\begin{array}{cc}\hfill \lambda _{112}^{\prime \prime }& \lambda _{113}^{\prime \prime }\lambda ^{11},\lambda _{123}^{\prime \prime }\lambda ^{10},\hfill \\ \hfill \lambda _{212}^{\prime \prime }& \lambda _{213}^{\prime \prime }\lambda ^9,\lambda _{223}^{\prime \prime }\lambda ^8,\hfill \\ \hfill \lambda _{312}^{\prime \prime }& \lambda _{313}^{\prime \prime }\lambda ^7,\lambda _{323}^{\prime \prime }\lambda ^6.\hfill \end{array}$$
The value that is closest to the bound is that of $`\lambda _{112}^{\prime \prime }`$, predicting double nucleon decay at a rate that, for $`\stackrel{~}{m}100GeV`$, is four orders of magnitude below the present bound.
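These estimates again follow from the charge sums, here $`H(\overline{u}_i)+H(\overline{d}_j)+H(\overline{d}_k)`$. The sketch below (order-one coefficients suppressed) reproduces (4.1) and evaluates the most dangerous coupling numerically:

```python
lam = 0.2
ubar = [4, 2, 0]
dbar = [4, 3, 3]

# lambda''_{ijk} ~ lam**(H(ubar_i) + H(dbar_j) + H(dbar_k)); j < k by
# antisymmetry of the dbar indices.
lpp = {(i + 1, j + 1, k + 1): ubar[i] + dbar[j] + dbar[k]
       for i in range(3) for j in range(3) for k in range(j + 1, 3)}

assert lpp[(1, 1, 2)] == 11 and lpp[(1, 2, 3)] == 10
assert lpp[(2, 2, 3)] == 8 and lpp[(3, 2, 3)] == 6

# lambda''_112 ~ 0.2**11 ~ 2e-8, well below the quoted double-nucleon-decay
# bound of about 1e-6 (for mtilde ~ 300 GeV).
print(lam ** lpp[(1, 1, 2)])
```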
Note, however, that reasonable variations on our model can easily give larger $`\lambda ^{\prime \prime }`$ couplings and allow the upper bound on double nucleon decay to be saturated. For example, replacing the $`H`$ charges in eq. (3.1) with a linear combination of $`H`$ and baryon number $`B`$ ($`H^{}=a_1H+a_2B`$) does not affect the $`B`$ conserving quantities and, in particular, the mass matrices (3.1), (3.1) and (3.1), but does affect (and, in particular, can enhance) the $`\lambda ^{\prime \prime }`$ couplings in (4.1). The couplings could also be affected by $`\mathrm{tan}\beta `$. Our choice of charges corresponds to $`\mathrm{tan}\beta 1`$. But for large $`\mathrm{tan}\beta `$ and the same choice of $`H`$-charges for $`\varphi _d`$ and $`Q_i`$, the $`\lambda ^{\prime \prime }`$ couplings are enhanced by $`\mathrm{tan}^2\beta `$. We conclude that, within our framework, baryon number violating processes could occur at observable rates.
5. Summary and Comments
In the framework of supersymmetric models, horizontal $`U(1)`$ symmetries can lead to many interesting consequences, the most important being a natural explanation of the smallness and hierarchy in the Yukawa parameters. We have investigated a particular class of models, where the horizontal $`U(1)`$ is the only symmetry imposed on the model beyond supersymmetry and the Standard Model gauge symmetry. In particular, we have imposed neither $`R`$-parity, nor lepton number nor baryon number. Usually, such models can be made viable only at the price of assigning uncomfortably large horizontal charges to various matter fields. It is possible, however, that the horizontal symmetry leads to exact lepton parity at low energy. The constraints that usually require the large charges are irrelevant because proton decay is forbidden and because mixing between neutrinos and neutralinos is forbidden. The remaining constraints from double nucleon decay and from neutrino masses are easily satisfied by the selection rules of the broken $`U(1)`$.
Our emphasis here has been put on lepton and baryon number violation. Therefore, we have ignored two other aspects of our framework. First, we did not insist that the horizontal symmetry solves the supersymmetric flavor problem. It is actually impossible to sufficiently suppress the supersymmetric contributions to flavor changing neutral currents by means of a single horizontal $`U(1)`$ symmetry. It is possible that this problem is solved by a different mechanism. For example, squarks and sleptons could be degenerate as a result of either dilaton dominance in Supersymmetry breaking or a universal gaugino contribution in the RGE (for a recent discussion, see ). Alternatively, one could complicate the model by employing a $`U(1)\times U(1)`$ symmetry to achieve alignment \[34,35\]. In either case, the implications for the issues discussed here do not change.
We note, however, that we cannot embed our models in the framework of gauge mediated Supersymmetry breaking (GMSB) \[36--39\] with a low breaking scale. The reason is that such models predict that the gravitino is lighter than the proton. If baryon number is not conserved, the proton decays via $`p\to G+K^+`$, with $`G`$ the gravitino. The $`\lambda _{112}^{\prime \prime }`$ coupling contributes to this decay at tree level and is therefore very strongly constrained \[40,32\]:
$$\lambda _{112}^{\prime \prime }\lesssim 5\times 10^{-16}\left(\frac{\stackrel{~}{m}}{300GeV}\right)^2\left(\frac{m_{3/2}}{1eV}\right).$$
All other $`\lambda _{ijk}^{\prime \prime }`$ couplings contribute at the loop level and are constrained as well . For $`m_{3/2}\sim 1eV`$, the bound (5.1) would be violated (with $`\lambda _{112}^{\prime \prime }\sim \lambda ^{11}`$) by about eight orders of magnitude. Therefore, our models of horizontal $`U(1)`$ symmetry broken to lepton parity can be embedded in the GMSB framework only for $`m_{3/2}>10^8eV`$, that is, a Supersymmetry breaking scale that is higher than $`𝒪(10^8GeV)`$.
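The eight-orders-of-magnitude statement is quick arithmetic: with $`\lambda _{112}^{\prime \prime }\sim \lambda ^{11}`$ from (4.1) and the bound (5.1) evaluated at $`\stackrel{~}{m}\sim 300GeV`$, $`m_{3/2}\sim 1eV`$ (taken here as $`5\times 10^{-16}`$), the mismatch comes out as follows:

```python
import math

lam = 0.2
coupling = lam ** 11            # lambda''_112 ~ lambda**11, from eq. (4.1)
bound = 5e-16                   # eq. (5.1) at mtilde ~ 300 GeV, m_3/2 ~ 1 eV

# The model over-shoots the GMSB bound by roughly 10**8; since m_3/2 enters
# the bound linearly, it must be raised by about eight orders of magnitude.
print(round(math.log10(coupling / bound)))   # 8
```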
Second, we have not worried about anomaly constraints . The reason is that these could be satisfied by extending the matter content of the model and this, again, would have no effect on the problems of interest to us here.
In the single explicit model that we presented in section 3, our choice of lepton charges has been dictated by the implications from the atmospheric neutrino anomaly and from the MSW solution of the solar neutrino problem with a small mixing angle. We emphasize that it is actually simpler to accommodate the large angle solutions (MSW or vacuum oscillations) of the solar neutrino problem. We used the small angle option because we wanted to demonstrate that, first, it can be accommodated in our framework but that, second, the model does not offer a simplification in this regard compared to models with integer charges. The use of half-integer charges in the lepton sector also does not make significant changes for models using holomorphic zeros to achieve simultaneously large mixing and large hierarchy . Finally, we note that models where some of the $`L_i`$ and $`\overline{\mathrm{}}_i`$ carry half-integer charges and other integer charges do not yield acceptable phenomenology.
Acknowledgements
We thank Yuval Grossman and Yael Shadmi for useful discussions. Y.N. is supported in part by the United States – Israel Binational Science Foundation (BSF) and by the Minerva Foundation (Munich).
References
[1] I. Hinchliffe and T. Kaeding, Phys. Rev. D47 (1993) 279.
[2] V. Ben-Hamo and Y. Nir, Phys. Lett. B339 (1994) 77, hep-ph/9408315.
[3] T. Banks, Y. Grossman, E. Nardi and Y. Nir, Phys. Rev. D52 (1995) 5319, hep-ph/9505248.
[4] A.Y. Smirnov and F. Vissani, Nucl. Phys. B460 (1996) 37, hep-ph/9506416.
[5] C.D. Carone, L.J. Hall and H. Murayama, Phys. Rev. D53 (1996) 6282, hep-ph/9512399; Phys. Rev. D54 (1996) 2328, hep-ph/9602364.
[6] P. Binetruy, S. Lavignac and P. Ramond, Nucl. Phys. B477 (1996) 353, hep-ph/9601243.
[7] E.J. Chun and A. Lukas, Phys. Lett. B387 (1996) 99, hep-ph/9605377.
[8] F.M. Borzumati, Y. Grossman, E. Nardi and Y. Nir, Phys. Lett. B384 (1996) 123, hep-ph/9606251.
[9] E. Nardi, Phys. Rev. D55 (1997) 5772, hep-ph/9610540.
[10] K. Choi, E.J. Chun and H. Kim, Phys. Lett. B394 (1997) 89, hep-ph/9611293.
[11] P. Binetruy, N. Irges, S. Lavignac and P. Ramond, Phys. Lett. B403 (1997) 38, hep-ph/9612442.
[12] G. Bhattacharyya, Phys. Rev. D57 (1998) 3944, hep-ph/9707297.
[13] P. Binetruy, E. Dudas, S. Lavignac and C.A. Savoy, Phys. Lett. B422 (1998) 171, hep-ph/9711517.
[14] N. Irges, S. Lavignac and P. Ramond, Phys. Rev. D58 (1998) 035003, hep-ph/9802334.
[15] J. Ellis, S. Lola and G.G. Ross, Nucl. Phys. B526 (1998) 115, hep-ph/9803308.
[16] K. Choi, E.J. Chun and K. Hwang, hep-ph/9811363.
[17] L.E. Ibanez and G.G. Ross, Nucl. Phys. B368 (1992) 3.
[18] R. Barbieri, L.J. Hall, D. Smith, A. Strumia and N. Weiner, JHEP 12 (1998) 017, hep-ph/9807235.
[19] Y. Grossman and Y. Nir, Nucl. Phys. B448 (1995) 30, hep-ph/9502418.
[20] Y. Grossman, Y. Nir and Y. Shadmi, JHEP 10 (1998) 007, hep-ph/9808355.
[21] Y. Nir and Y. Shadmi, hep-ph/9902293.
[22] G. Altarelli and F. Feruglio, JHEP 11 (1998) 021, hep-ph/9809596; Phys. Lett. B451 (1999) 388, hep-ph/9812475.
[23] M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B398 (1993) 319, hep-ph/9212278.
[24] C.D. Froggatt and H.B. Nielsen, Nucl. Phys. B147 (1979) 277.
[25] L.J. Hall and M. Suzuki, Nucl. Phys. B231 (1984) 419.
[26] F. Zwirner, Phys. Lett. B132 (1983) 103.
[27] R. Barbieri and A. Masiero, Nucl. Phys. B267 (1986) 679.
[28] V. Barger, G.F. Giudice and T. Han, Phys. Rev. D40 (1989) 2987.
[29] J.L. Goity and M. Sher, Phys. Lett. B346 (1995) 69, hep-ph/9412208; Erratum-ibid. B385 (1996) 500.
[30] G. Bhattacharyya, D. Choudhury and K. Sridhar, Phys. Lett. B355 (1995) 193, hep-ph/9504314.
[31] C.E. Carlson, P. Roy and M. Sher, Phys. Lett. B357 (1995) 99, hep-ph/9506328.
[32] D. Chang and W.Y. Keung, Phys. Lett. B389 (1996) 294, hep-ph/9608313.
[33] G. Eyal, hep-ph/9903423.
[34] Y. Nir and N. Seiberg, Phys. Lett. B309 (1993) 337, hep-ph/9304307.
[35] M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B420 (1994) 468, hep-ph/9310320.
[36] M. Dine and A.E. Nelson, Phys. Rev. D48 (1993) 1277, hep-ph/9303230.
[37] M. Dine, A.E. Nelson and Y. Shirman, Phys. Rev. D51 (1994) 1362, hep-ph/9408384.
[38] M. Dine, A.E. Nelson, Y. Nir and Y. Shirman, Phys. Rev. D53 (1996) 2658, hep-ph/9507378.
[39] G.F. Giudice and R. Rattazzi, hep-ph/9801271.
[40] K. Choi, E.J. Chun and J.S. Lee, Phys. Rev. D55 (1997) 3924, hep-ph/9611285.
[41] K. Choi, K. Hwang and J.S. Lee, Phys. Lett. B428 (1998) 129, hep-ph/9802323.
# Revisiting the shape of pulsar beams
## 1 Introduction
Most widely accepted emission models assume that pulsar radiation is emitted over a (hollow) cone centered around the magnetic dipole axis. The observed emission is generally highly linearly polarized, with a systematic rotation of the position angle across the pulse profile. This behaviour, following Radhakrishnan & Cooke (1969), is interpreted in terms of the radiation being emitted along the cone of the dipolar open field-lines emerging from the polar cap, the plane of the linear polarization being that containing the field line associated with the emission received at a given instant. During each rotation of the star, the emission beam crosses the observer's line-of-sight, resulting in a pulse of emission. The observed pulse profile thus corresponds to a thin cut across the beam at a fixed rotational latitude. The information on the beam shape as a function of latitude, although generally not measurable directly, may be forthcoming from observations at widely separated frequencies, as emission at different frequencies is believed to originate at different heights from the star, leading to changes in beam size. For this, the dependence of the radiation frequency on the height, the so-called radius-to-frequency mapping, should be known a priori. Alternatively, it is possible to use the data on an ensemble of pulsars sampling a range of impact parameters. However, it is important that all the pulsars in the sample form a homogeneous set in terms of the profile types etc. Several attempts to model the pulsar beam have used the latter approach. Based on their study, Narayan & Vivekanand (1983) concluded that the beam is elongated in latitude. Lyne & Manchester (1988), on the other hand, have argued that the beam is essentially circular (see also Gil & Han 1996, Arendt & Eilek 1998). Based on the dipole geometry of the cone of open field-lines, Biggs (1990) found that the beam shape is a function of the angle ($`\alpha `$) between the rotation and the magnetic axes.
The reasons that all these analyses predict different results could be manifold. For example, Narayan & Vivekanand used a data set consisting of only 16 pulsars and assessed the beam axial ratio on the basis of the total change in the position angle of the linear polarization across the pulse profile. Apart from poor statistics, their analysis suffered from the large uncertainties in the polarization measurements available then. Lyne & Manchester (1988) used a much larger data set in comparison and examined the distribution of the normalized impact parameter $`\beta _n\equiv \beta _{90}/\rho _{90}`$, where $`\beta _{90}`$ & $`\rho _{90}`$ are the impact angle and the beam radius computed for $`\alpha =90^{\circ}`$. Based on their observation that the distribution of $`\beta _n`$ is ‘essentially uniform’, they concluded that the beams are circular in shape. The apparent deficit at large $`\beta _n`$ is attributed to a luminosity bias. It is worth noting that the deficit is seen despite the fact that $`\beta _n`$ overestimates the true $`\beta /\rho `$ (given that they disregarded the sign of $`\beta `$); this is particularly so at large $`\beta `$ values.
Biggs (1990) used the same data set and the same $`\beta _n`$ distribution as Lyne & Manchester (1988), but drew attention to a ‘peak’ in the distribution at low $`\beta _n`$. The shapes of the polar cap defined by the region of open field lines, as derived by Biggs, show that the beam is circular for an aligned rotator, but undergoes compression along the latitudinal direction with increasing inclination $`\alpha `$.
In this paper, we address this question within the basic framework advanced by Rankin (1993a) which, at the least, is qualitatively different from that of Lyne & Manchester (1988). The classification scheme (Rankin, 1983a), based on the phenomenology of pulse profiles, polarization and other fluctuation properties, provides a sound basis for an explicit distinction between the core and the conal components, with each of them following a predictable geometry (see also Oster & Sieber 1976; Gil & Krawczyk 1996 for conal beams). Lyne & Manchester (1988), on the other hand, prefer to interpret the observed variety in pulse shape and other properties as a result of patchy illumination, rather than any particular pattern within the radiation cone. The observed differences in the properties of pulse components are then to be understood as gradual changes as a function of the distance from the center of the basic emission cone. Their analysis thus naturally disregards the possible existence of conal features.
Assuming the possibly confined ‘conal-component’ geometry and accounting for all the relevant geometrical effects, we re-examine the shape of pulsar beams and its frequency dependence. Recently published multifrequency polarization data, at six frequencies in the range 234–1642 MHz (Gould & Lyne, 1998), have made this investigation possible.
## 2 Data set
For the present investigation requiring reliable estimates of $`\alpha `$ & $`\beta `$, we use the data set comprised of only those pulsars whose pulse profiles are identified as ‘triple’ ($`𝐓`$) or ‘multiple’ ($`𝐌`$), as classified by Rankin (1993a, 1993b). The reason for this choice is that the $`𝐓`$ and $`𝐌`$ pulsars show a core component in addition to the conal components, so that a reliable estimation of the angle ($`\alpha `$) between the rotation axis and the magnetic axis is possible, using Rankin’s (1990) method. In this method, the ratio of the observed core-width to the limiting width ($`2.45^{\circ}P^{-0.5}`$) is interpreted as the geometric factor $`1/\mathrm{sin}(\alpha )`$, providing by far the most reliable estimates of $`\alpha `$. For the conal doubles and conal singles, devoid of any core component, the estimates of $`\alpha `$ are less reliable. The core singles are naturally excluded from this analysis of the conal emission geometry. For each pulsar in our selected sample, we define the conal width as the separation between the peaks of the outermost conal components. It is important to note that the nominally ‘central’ core component, which is argued to originate closer to the stellar surface, may not necessarily be along the cone axis. Such a possibility is clearly reflected in many pulse profiles where the core component is displaced from the ‘center’ definable from the conal components. Hence, the location of the core component is disregarded in our estimation of the conal separation. Columns 1 and 2 of table 1 list the name and profile type of these pulsars. Columns 3 to 8 list the calculated widths of the pulsars at frequencies 234, 408, 610, 925, 1400, and 1642 MHz respectively. Column 9 gives the pulsar period in seconds. Columns 10 and 11 list the $`\alpha `$ and $`\beta `$ values of the pulsars taken from Rankin (1993b).
Rankin (1990) has estimated the inclination angle $`\alpha `$ using the relation $`\mathrm{sin}(\alpha )=2.45^{\circ}P^{-0.5}/W_{\mathrm{core}}`$, where $`W_{\mathrm{core}}`$ is the half-power width of the core component (at a reference frequency of 1 GHz) and the period $`P`$ is in seconds. The impact angle $`\beta `$ has been estimated based on the rotating vector model of Radhakrishnan & Cooke (1969), using the relation $`\mathrm{sin}(\beta )=\mathrm{sin}(\alpha )/(d\chi /d\varphi )_{\mathrm{max}}`$, where $`(d\chi /d\varphi )_{\mathrm{max}}`$ is the maximum rate of change of the polarization angle $`\chi `$ with respect to the longitude $`\varphi `$.
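The two geometrical estimators just described can be written as a short numerical sketch; the core width and sweep rate used in the example are hypothetical values chosen only to illustrate the procedure:

```python
import math

def alpha_from_core_width(w_core_deg, period_s):
    """Rankin (1990): sin(alpha) = 2.45 deg * P^(-0.5) / W_core (W_core at 1 GHz)."""
    s = 2.45 * period_s ** -0.5 / w_core_deg
    return math.degrees(math.asin(min(s, 1.0)))  # clip against measurement noise

def beta_from_sweep(alpha_deg, sweep_rate_max):
    """Rotating vector model: sin(beta) = sin(alpha) / (d chi / d phi)_max."""
    return math.degrees(math.asin(math.sin(math.radians(alpha_deg)) / sweep_rate_max))

# Hypothetical pulsar: 1 s period, 3 deg core width, maximum PA sweep of 10 deg/deg
alpha = alpha_from_core_width(3.0, 1.0)   # ~54.8 deg
beta = beta_from_sweep(alpha, 10.0)       # ~4.7 deg
```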
In the following analysis, we treat the different frequency measurements on a given pulsar as ‘independent’ inputs much the same way as the data on different pulsars, since the pulsar beam size is expected to evolve with frequency. Thus, at different frequencies one obtains independent cuts (at different $`\beta /\rho `$) across the beam, though $`\beta `$ remains constant for a given pulsar. This increases the number of independent constraints by a usefully large factor. In fact, we would like to contrast this approach with the one where, for each pulsar, one obtains a best fit frequency dependence of the observed widths and then uses the data to obtain the width at a chosen reference frequency. The latter approach fails to take into account the dependence of the observed widths on $`\beta /\rho `$ that is inherent for any non-rectangular shape of the beam.
## 3 A direct test for the shape of beams
Fig. 1 is a schematic diagram illustrating the geometry of the pulsar emission cone. The emission cone, with half-opening angle $`\rho _\nu `$, sweeps across the observer's line-of-sight with an impact parameter (distance of closest approach to the magnetic axis) $`\beta `$. The spherical triangle PQS (refer to Fig. 1) relates the angles $`\alpha `$, $`\beta `$ and the profile half-width $`\varphi _\nu `$ to the beam radius $`\rho _\nu `$ by the following relation (Gil, Gronkowski & Rudnicki 1984),
$$\mathrm{sin}^2(\rho _\nu /2)=\mathrm{sin}^2(\varphi _\nu /2)\mathrm{sin}(\alpha )\mathrm{sin}(\alpha +\beta )+\mathrm{sin}^2(\beta /2)$$
(1)
The subscript $`\nu `$ in $`\rho _\nu `$ and $`\varphi _\nu `$ denotes that these quantities depend on frequency $`\nu `$. This equation assumes that the cone is circular, in which case $`\rho _\nu `$ becomes independent of $`\beta `$.
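Eq. (1) is straightforward to invert for $`\rho _\nu `$; a minimal sketch (angles in degrees):

```python
import math

def rho_from_width(phi_deg, alpha_deg, beta_deg):
    """Circular-cone beam radius rho_nu from the profile half-width phi_nu (eq. 1)."""
    a, b, p = (math.radians(x) for x in (alpha_deg, beta_deg, phi_deg))
    s2 = math.sin(p / 2) ** 2 * math.sin(a) * math.sin(a + b) + math.sin(b / 2) ** 2
    return math.degrees(2.0 * math.asin(math.sqrt(s2)))

# Consistency checks: for an orthogonal rotator seen centrally (beta = 0) the
# beam radius equals the profile half-width, and for phi = 0 it equals beta.
print(round(rho_from_width(10.0, 90.0, 0.0), 6))  # -> 10.0
print(round(rho_from_width(0.0, 90.0, 5.0), 6))   # -> 5.0
```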
In reality, the beam may not be circular, but rather elliptical with, say, $`R`$ the axial ratio and $`b`$ the longitudinal semi-axis of the ellipse, as shown in Fig. 2. It is easy to see that the length of the radius vector $`r`$ depends on the angle $`\theta `$ (with the longitudinal axis) when $`R`$ is not equal to 1. The variation of $`r`$ as a function of $`\theta `$ for three different $`R`$ values (namely 1, 1.5 and 0.5) is shown as an example in Fig. 3. The $`\rho _\nu `$ determined assuming that the cone shape is circular (as in Rankin 1993b) is indeed a measure of the radius vector $`r`$, once the period and frequency dependences are corrected for. Such data on ($`r`$,$`\theta `$) spanning a wide enough range in $`\theta `$ can therefore be examined to seek a consistent value of the axial ratio $`R`$. However, if $`R`$ is a function of $`\alpha `$, as suggested by Biggs (1990), then the ($`r`$,$`\theta `$) samples would show a spread bounded by the curves corresponding to the maximum and minimum values of $`R`$.
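The $`r`$($`\theta `$) curves of Fig. 3 follow from elementary ellipse geometry; a sketch, assuming the latitudinal semi-axis is $`Rb`$ (so $`R<1`$ compresses the beam in latitude):

```python
import math

def radius_vector(theta_deg, b=1.0, R=1.0):
    """Radius vector of an ellipse with longitudinal semi-axis b and axial ratio R,
    at angle theta from the longitudinal axis: r = b / sqrt(cos^2 + sin^2 / R^2)."""
    t = math.radians(theta_deg)
    return b / math.sqrt(math.cos(t) ** 2 + (math.sin(t) / R) ** 2)

# R = 1 gives a circle of radius b; R = 0.5 halves the radius at theta = 90 deg
print(round(radius_vector(90.0, R=0.5), 3))  # -> 0.5
```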
Such an examination of the present data suggests a spread below the line for $`R=1`$, indicating that the beam deviates from circularity and that the spread could be due to the $`\alpha `$ dependence of $`R`$. However, this deviation from circularity is not very significant. We discuss this in detail later in section 5.
We have also examined the $`\rho _\nu `$ values obtained by Rankin (1993b) through such a test. However, no significant deviation from circular beams was evident. We became aware of a similar study by C.-I. Björnsson (1998), also with a similar conclusion. We note that the only difference between our estimates of $`\rho _\nu `$ and those of Rankin is in the definition of the conal widths. Rankin defines the width as the distance between the outer half-power points (rather than the peaks) of the two conal outriders, and the widths were then ‘interpolated’ to a reference frequency of 1 GHz. Such estimates are prone to errors due to mode changes, differing component shapes etc., and to the effects of dispersion & scattering (some of which she attempted to accommodate). We measure the widths as the peak-to-peak separations of the outer conal components, which are less sensitive to the sources of error mentioned above. We have also confirmed (in the PSRs 0301+19, 0525+21, 0751+32, 1133+16, 1737+13, 2122+13 and 2210+29, using the data from Blaskiewicz et al. 1991) that the ‘peaks’ of the conal components are symmetrically placed with respect to the “zero-longitude” (associated with the maximum rate of change of the position angle), which is not always true for the outer half-power points.
## 4 The model of the pulsar beam
We model the pulsar beam shape as elliptical in general and express it analytically as,
$$\frac{\mathrm{sin}^2(\varphi _\nu /2)\mathrm{sin}(\alpha )\mathrm{sin}(\alpha +\beta )}{\mathrm{sin}^2(\rho _\nu /2)}+\frac{\mathrm{sin}^2(\beta /2)}{\mathrm{sin}^2(R\rho _\nu /2)}=1$$
(2)
While $`\alpha `$, $`\beta `$ and $`\varphi _\nu `$ can be estimated, directly or indirectly, from observations, $`R`$ and $`\rho _\nu `$ are the two parameters which in turn define the beam shape and size. The available data set of $`𝐓`$ and $`𝐌`$ profiles is expected to sample most of the $`\beta /\rho _\nu `$ range (0–1) with reasonable uniformity. The implicit assumption in this statistical approach is that a common description for $`R`$ & $`\rho _\nu `$ is valid for all pulsars. The common description should, however, properly account for the relevant dependences on quantities such as frequency, period, $`\alpha `$, etc.
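For $`R\ne 1`$, eq. (2) no longer inverts in closed form for $`\rho _\nu `$, but since its left-hand side decreases monotonically with $`\rho _\nu `$ it can be solved by bisection; a sketch:

```python
import math

def rho_elliptical(phi_deg, alpha_deg, beta_deg, R=1.0):
    """Solve eq. (2) for the beam radius rho_nu (deg) by bisection; R = axial ratio."""
    a, b, p = (math.radians(x) for x in (alpha_deg, beta_deg, phi_deg))
    A = math.sin(p / 2) ** 2 * math.sin(a) * math.sin(a + b)
    B = math.sin(b / 2) ** 2

    def f(rho):  # LHS of eq. (2) minus 1; monotonically decreasing in rho
        return A / math.sin(rho / 2) ** 2 + B / math.sin(R * rho / 2) ** 2 - 1.0

    lo, hi = 1e-6, math.pi / 2
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return math.degrees(0.5 * (lo + hi))
```

With `R = 1` this reproduces the circular-cone result of eq. (1).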
### 4.1 Frequency dependence of $`\rho _\nu `$
The radio emission at different frequencies is expected to originate at different altitudes above the stellar surface, with the higher frequency radiation associated with regions of lower altitude. This phenomenon, known as radius-to-frequency mapping, finds overwhelming support from observations. Thorsett (1991) has suggested an empirical relation for the observed pulse width as a function of frequency, which seems to provide an adequate description of the observed behaviour. We adopt a similar relation for the frequency evolution of the beam radius $`\rho _\nu `$ as follows
$$\rho _\nu =\widehat{\rho }(1+K\nu ^{-\zeta }),$$
(3)
where $`\widehat{\rho }`$ is the value of $`\rho _\nu `$ at infinite frequency, $`\zeta `$ the spectral index, and $`K`$ a constant. Note that both $`\zeta `$ & $`K`$ are expected to have positive values, so that the minimum value of $`\rho _\nu `$ is $`\widehat{\rho }`$, which should correspond to the angular size of the polar cap.
### 4.2 Period dependence of $`\rho _\nu `$
Rankin (1993a) has demonstrated (see also Gil, Kijak & Seiradakis 1993; Kramer et al. 1994) that the beam radius $`\widehat{\rho }`$ varies as $`P^{-0.5}`$ (where $`P`$ is the period of the pulsar), a result which is in excellent agreement with that expected from a dipole geometry (Gil 1981). Eq. 3 thus takes the form
$$\rho _\nu =\rho _{\circ}(1+K\nu ^{-\zeta })P^{-0.5},$$
(4)
where $`\rho _{\circ}`$ is the minimum beam radius for $`P=1`$ sec.
### 4.3 Functional dependence of $`R`$ on $`\alpha `$
Biggs (1990) has suggested that $`R`$ should be a function of $`\alpha `$, such that the beam shape is circular for $`\alpha =0`$ and is increasingly compressed in the latitudinal direction as $`\alpha `$ increases to $`90^{\circ}`$. We therefore model the functional dependence of $`R`$ on $`\alpha `$ as $`R=R_{\circ}\tau `$, where $`R_{\circ}`$ is the axial ratio of the beam at $`\alpha =0`$, and $`\tau `$ is a function of $`\alpha `$. According to Biggs (1990), $`R_{\circ}=1`$ and $`\tau `$ is given by
$$\tau (\alpha )=1-K_1\times 10^{-4}\alpha -K_2\times 10^{-5}\alpha ^2,$$
(5)
where $`K_1`$, $`K_2`$ are constants and $`\alpha `$ is in degrees. Biggs finds that $`K_1`$ and $`K_2`$ are 3.3 and 4.4, respectively. We, however, treat $`K_{1,2}`$ as free parameters in our model.
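With Biggs's values $`K_1=3.3`$ and $`K_2=4.4`$, eq. (5) implies only a modest compression even for an orthogonal rotator:

```python
def tau(alpha_deg, K1=3.3, K2=4.4):
    """Latitudinal compression factor of eq. (5); alpha in degrees."""
    return 1.0 - K1 * 1e-4 * alpha_deg - K2 * 1e-5 * alpha_deg ** 2

print(round(tau(0.0), 2), round(tau(90.0), 2))  # -> 1.0 0.61
```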
### 4.4 The number of hollow cones
Based on the study of conal components, Rankin (1993a) has argued for two nested hollow cones of emission– namely, the inner and the outer cone. Assuming the beams to be circular in shape, the opening half-angles of the two cones at 1 GHz were found to be $`4.3^{\circ}`$ and $`5.7^{\circ}`$, respectively.
During our preliminary examination of the present sample, we noticed a need to allow for three cones of emission. To incorporate this feature in our model, we introduce two ratios, $`r_1<1`$ and $`r_2>1`$, to define the size scaling of the innermost and the outermost cone, respectively, with reference to a ‘middle’ cone, for which the detailed shape is defined.
Using the model defined here, we need to solve for $`R_{\circ}`$, $`\zeta `$, $`K`$, $`\rho _{\circ}`$, $`K_1`$, $`K_2`$, $`r_1`$ and $`r_2`$ in this three-conal-ring model. The parameter set thus represents an ‘average’ description of the beam.
## 5 Results and Discussion
An optimized grid search was performed over suitable ranges of the parameter values, in fine enough steps. For $`\zeta `$, the search range allowed for both positive and negative values. By definition, $`r_1\le 1`$ and $`r_2\ge 1`$. The best fit was obtained by minimizing the standard deviation $`\sigma _{\circ}`$ defined by
$$\sigma _{\circ}=\sqrt{\frac{\sum _{i=1}^{n}D_i^2}{N_{dof}}}\times \frac{180^{\circ}}{\pi },$$
(6)
where $`D_i`$ is the deviation of the $`i^{th}`$ data point from the nearest conal ring in the model and $`N_{dof}`$ denotes the degrees of freedom.
The factor $`180^{\circ}/\pi `$ gives $`\sigma _{\circ}`$ in units of degrees under the small-angle approximation. Table 2 lists the parameter values which correspond to the best fit for the entire sample set. With these values, eq. 4 can now be rewritten as
$$\rho _\nu =4.8^{\circ}(1+66\nu _{\mathrm{MHz}}^{-1})P^{-0.5},$$
(7)
where $`\rho _\nu `$ is in degrees. This average description for the ‘middle’ cone applies also to the other two cones when $`\rho _\nu `$ is scaled by the ratio $`r_1=0.8`$ or $`r_2=1.3`$ (for the innermost and the outermost cone, respectively). Fig. 4 shows the data (plotted to a common scale) for one quadrant of the beam, and the three solid curves correspond to the best-fit cones. The points in the figure, though corresponding to different pulsars and frequencies, are translated to a common reference scale appropriate for $`P=1`$ sec, $`\alpha =0`$ and $`\nu =\infty `$.
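Eq. (7), together with the fitted cone ratios $`r_1=0.8`$ and $`r_2=1.3`$, gives the full three-cone description; a sketch evaluating it at 1 GHz for a 1 s pulsar:

```python
def rho_nu(freq_mhz, period_s, cone="middle"):
    """Best-fit beam radius (deg) from eq. (7), scaled by the fitted cone ratios."""
    scale = {"inner": 0.8, "middle": 1.0, "outer": 1.3}[cone]
    return scale * 4.8 * (1.0 + 66.0 / freq_mhz) * period_s ** -0.5

for cone in ("inner", "middle", "outer"):
    print(cone, round(rho_nu(1000.0, 1.0, cone), 2))  # -> 4.09, 5.12, 6.65 deg
```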
We have assumed the period dependence of $`\rho _\nu `$ to be $`P^{-0.5}`$, whereas Lyne & Manchester (1988) found a dependence of $`P^{-1/3}`$. We have examined the latter possibility and found that the difference in the standard deviation is at the level of 2.5–3 $`\sigma `$, so we cannot rule out the $`P^{-1/3}`$ law with confidence. We have also checked for the dependence of $`R`$ on $`\alpha `$ by using 3 sub-sets, each of range $`30^{\circ}`$ in $`\alpha `$. The best fit values for $`R`$ in the different $`\alpha `$ segments are $`1_{-0.2}^{+0.4}`$, $`0.8_{-0.2}^{+0.4}`$ & $`0.5_{-0.2}^{+0.4}`$ for the $`\alpha `$ ranges $`0^{\circ}-30^{\circ}`$, $`30^{\circ}-60^{\circ}`$ & $`60^{\circ}-90^{\circ}`$, respectively. This dependence of $`R`$ on $`\alpha `$, even if it were significant, is quite consistent with our values of $`K_1`$, $`K_2`$ (Table 2) as well as with the results of Biggs (1990). However, given the uncertainties in the $`R`$ estimates for the three ranges, it is not possible presently to rule out a dependence of $`R`$ on $`\alpha `$. Indeed, this difference in the goodness-of-fit is negligible: $`\sigma _{\circ}`$ (the standard deviation) is $`0.18^{\circ}`$ when $`K_1,K_2\ne 0`$ and $`0.2^{\circ}`$ when $`K_1=K_2=0`$. Earlier, Narayan & Vivekanand (1983) had argued that $`R`$ is a function of the pulsar period. To assess this claim, our sample was divided into three period ranges and the corresponding $`R`$ estimates compared. However, no period dependence was evident and it was possible to rule out such a dependence with high confidence.
The number and thickness of conal rings: As already noted and as can be seen in Fig. 4, we do see evidence for a possible cone outside the two cones discussed by Rankin (1993a). Also, the presence of a ‘further inner’ cone has been suggested by Rankin & Rathnasree (1997) in the case of PSR 1929+10. The pulsars suggestive of this outer cone (refer to Fig. 4) are PSRs 0656+14, 1821+05, 1944+17 and 1952+29 (at frequencies 234 MHz and higher). We have examined the possibility that these cases really belong to the central cone, but lie well outside of it due to an error in the assumed values of $`\alpha `$. We rule out this possibility, as the implied error in $`\alpha `$ turns out to be too high to be likely. It is important to point out that a noisy sample like the present one would appear increasingly consistent, judging by the best-fit criterion, with models that include more cones. The question, therefore, is whether we can constrain the number of cones by some independent method. In this context, we wish to discuss the noticeable deficit of points at high $`\beta /\rho _{\circ}`$. Since the deficit reflects the absence of conal singles and conal doubles in our data set, the size of the related ‘gap’ at large $`\theta `$ values can be used to estimate the possible thickness of the conal rings. The absence of points at $`\theta \gtrsim 60^{\circ}`$ (Fig. 4) suggests that the conal rings are rather thin, since a radial thickness $`\delta r`$ comparable to the ring radius would imply a wider gap in $`\theta `$. To quantify this, we write the following relation,
$$\delta r=2r\frac{(1\mathrm{sin}\theta _g)}{(1+\mathrm{sin}\theta _g)},$$
(8)
where $`\theta _g`$ is the $`\theta `$ at the start of the gap (as illustrated in Fig. 2). With $`\theta _g\sim 60^{\circ}`$, $`\delta r/r`$ would be about 20%. The presence of more than one distinguishable peak in the distribution of beam radii (shown in the bottom panel of Fig. 4) clearly indicates that the conal separation is larger than the cone width. This, combined with our cone-width estimate, suggests that the number of cones is 3 (for the present range of radii), providing independent support for our model. This picture is consistent with the estimates by Gil & Krawczyk (1997) and Gil & Cheng (1999).
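Eq. (8) is easy to evaluate: for gap angles $`\theta _g`$ in the 55°–60° range it gives $`\delta r/r`$ between roughly 0.14 and 0.20, i.e. ring widths of order 15–20% of the cone radius:

```python
import math

def ring_thickness_ratio(theta_g_deg):
    """delta_r / r implied by a theta-gap starting at theta_g (eq. 8)."""
    s = math.sin(math.radians(theta_g_deg))
    return 2.0 * (1.0 - s) / (1.0 + s)

for tg in (55.0, 60.0):
    print(tg, round(ring_thickness_ratio(tg), 2))  # -> 55.0 0.2, then 60.0 0.14
```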
Component separation vs. frequency: It is interesting to note that for certain pulsars the cone associated with the emission seems to change with frequency. For example, the conal emission in PSR 1920+21 appears to have ‘switched’ at 610 MHz to the innermost cone, while being associated with the central cone at other frequencies. Rankin (1983b), in a comprehensive study of the dependence of component separation on frequency, invokes deep ‘absorption’ features to explain the apparent anomalous reduction in the component separation in certain frequency ranges. We suggest that such anomalous reductions in the separations could be due to switching of the emission to an inner cone at some frequencies. Observations at finely spaced frequencies in the relevant ranges would be helpful to study this effect in detail. The other pulsars which show similar trends are PSRs 1804-08, 2003-08, 1944+17 and 1831-04. It should be noted that such switching is possibly also reflected in mode changes.
The deficit at low $`\beta /\rho _{\circ}`$: The absence of points near $`\beta =0`$ is clearly noticeable in Fig. 4. Such a ‘gap’ is also apparent in the distribution of $`\beta /\rho _{\circ}`$ plotted in Fig. 5. The gap was already noted by Lyne & Manchester (1988). They argued that it arises because the rapid position-angle swings (expected at small $`\beta `$’s) are difficult to resolve due to intrinsic or instrumental smearing, leading to underestimation of the sweep rates. With the improved quality of data now available, the intrinsic smearing is likely to be the dominant cause of this circumstance. There are a number of clear instances among the general population of pulsars where the polarization-angle traverse near the central core component is distorted. PSR 1237+25 provides an extreme example of such distortion, and Ramachandran & Deshpande (1997) report promising initial efforts to model its polarization-angle track as distorted by a low-$`\gamma `$ core-beam. Another possibility for the low-$`\beta /\rho _{\circ}`$ gap is that it could simply be a selection effect caused by less intense emission in the cone center than at intermediate traverses. If so, the low-frequency turn-overs in the energy spectra of pulsars may at least be partly due to this, since at lower radio frequencies the $`\beta /\rho _{\circ}`$ is relatively smaller.
The sources of uncertainty in the present analysis: The standard deviation $`\sigma _{\circ}`$ corresponding to the best-fit model amounts to about 15% of the conal radius. This fractional deviation (comparable to the thickness of the cone) is too large to allow any more detailed description of the beam shape (such as its dependence on $`\alpha `$, for example). We find it useful to assess and quantify the sources of error, partly to help possible refinement in future investigations. The three data inputs to our analysis are $`\alpha `$, $`\beta `$ and $`\varphi _\nu `$, while the basic observables are the maximum polarization-angle sweep rate and the core width, apart from the measured conal separation. It is easy to see that errors in the core-widths will directly affect both the $`\alpha `$ and $`\beta `$ estimates. Over the range of $`\theta `$ values spanned by the present data set, the errors in $`\alpha `$ are likely to dominate, since $`x`$ & $`y`$ (in Fig. 4) are almost linearly proportional to $`\mathrm{sin}(\alpha )`$. Hence, the fractional deviation may be nearly equal to (or define the upper limit of) the fractional error in $`\mathrm{sin}(\alpha )`$ and therefore in the core-width estimates.
Rankin (1990, 1993b) notes that in several cases the apparent core-widths might suffer from ‘absorption’, and the widths might be underestimated if the effect is not properly accounted for. Also, in some cases, the widths were extrapolated to a reference frequency of 1 GHz using a $`\nu ^{-0.25}`$ dependence. There have been several suggestions regarding the ‘appropriate’ frequency dependence, which would give significantly different answers when used for width extrapolation. For example, if our best-fit dependence for the conal width is used for the core-width extrapolation, the values would differ from Rankin’s estimates (through extrapolation) by as much as 15%, enough to explain the present deviation in some cases. Another possible source of error is the uncertainty in the sign of $`\beta `$ (important only for the $`\mathrm{sin}(\alpha +\beta )`$ term in equation 2, and hence for small $`\alpha `$). As Rankin points out, it is difficult to determine the sign unambiguously in most cases, and hence the information is available only for a handful of pulsars.
Evidence in favour of ‘conal’ emission: The significant implication of the gap at $`\theta \gtrsim 60^{\circ}`$ (referred to earlier) deserves further discussion. If the ‘conal’ components were the result of merely patchy (random) illumination across the beam area (as argued by Lyne & Manchester, 1988), then such a gap should not exist. If a single thick hollow cone were responsible for the conal components, a gap (corresponding to the conal-single types) would still be apparent, but then it should be above a cut-off $`y`$ value (refer to Fig. 4) and not in an angular sector like that observed. On the other hand, if the conal emission indeed exists in the form of nested cones (as distinct from the core emission), then the shape of the gap is a natural consequence of our not including conal-single profiles in this analysis. This gap, therefore, should be treated as important evidence for a pulsar beam comprised, in general, of nested cones of emission.
## 6 Summary
Using the multifrequency pulse profiles of a large number of conal-triple and multiple pulsars, we have modelled the pulsar beam shape in an improved way. Our analysis benefits from the different frequency measurements being treated as independent samples, thus increasing the number of independent constraints. The main results are summarized below.
1) Our profile sample is consistent with a beam shape that is a function of $`\alpha `$, circular at $`\alpha =0`$ and increasingly compressed in the latitudinal direction as $`\alpha `$ increases, as suggested by Biggs (1990). However, the data is equally consistent with the possibility that the beam is circular for all values of $`\alpha `$.
2) We identify three nested cones of emission based on a normalized distribution of the outer components. The observed gap ($`\theta \gtrsim 60^{\circ}`$) in the distribution independently suggests three cones in the form of annular rings whose widths are typically about 20% of the cone radii. We consider this circumstance as important evidence for the nested-cone structure.
Any further significant progress in such modelling would necessarily need refined estimates of the observables, particularly the core-widths.
## Acknowledgement
We thank V. Radhakrishnan, Rajaram Nityananda and Joanna Rankin for fruitful discussions and for several suggestions that have helped in improving the manuscript. We acknowledge Ashish Asgekar, D. Bhattacharya and R. Ramachandran for useful discussions and thank our referee, J. A. Gil, for critical comments and suggestions.
# Sonoluminescence: Two-photon correlations as a test of thermality
## I Introduction
In this Letter we propose a fundamental test for experimentally discriminating between various classes of theoretical models for sonoluminescence. It is well known that the optical photons measured in sonoluminescence are characterized by a broadband spectrum, often described as approximately thermal with a “temperature” of several tens of thousands of Kelvin . Whether or not this “temperature” represents an actual thermal ensemble is less than clear. For instance, according to the “shock wave approach” of Barber, Putterman et al., or the “adiabatic heating hypothesis”, thermality of the spectrum is due to a high physical temperature caused by compression of the gases contained in the bubble. On the other hand, in models based on variants of Schwinger’s “dynamical Casimir approach” , it is possible to avoid reaching high physical temperatures and yet to obtain a thermal spectrum (or at least pseudo-thermal characteristics for the emitted photons) because of the peculiar statistical properties of the two-photon squeezed-states produced by this class of mechanism.
We stress that thermal characteristics in single photon measurements can be associated with at least two hypotheses: (a) real physical thermalization of the photon environment; (b) pseudo-thermal single photon statistics due to tracing over the unobserved member of a photon pair that is actually produced in a two-mode squeezed state. We shall call case (a) real thermality, while case (b) will be called effective thermality. Of course, case (b) bears no relation to any concept of thermodynamic temperature, though to any such squeezed state one may assign a (possibly mode-dependent) effective temperature.
Our aim is to find a class of measurements able to discriminate between cases (a) and (b), and to understand the origin of the roughly thermal spectrum for sonoluminescence in the visible frequency range. In principle, the thermal character of the experimental spectrum could disappear at higher frequencies, but for such frequencies the water medium is opaque, and it is not clear how we could detect them. (Except through heating effects.) Our key remark is that it is not necessary to try to measure higher than visible frequencies in order to get a definitive answer regarding thermality. It is sufficient, at least in principle, to measure photon pair correlations in the visible portion of the sonoluminescence spectrum. Thus regardless of the underlying mechanism, two-photon correlation measurements are a very useful tool for discriminating between broad classes of theory and thereby investigating the nature of sonoluminescence. We note that two-photon correlations have already been proposed, for the first time in and subsequently in , as an efficient tool for measuring the shape and the size of the emission region. It was proposed in that precise Hanbury–Brown–Twiss interferometry measurements could in principle distinguish between chaotic (thermal) light emerging from a hot bubble and the possible production of coherent light via the dynamical Casimir effect. Unfortunately in the dynamical Casimir effect photons are always pair-produced from the vacuum in two–mode squeezed states, not in coherent states. Pair-production via the dynamical Casimir effect appears to imply that all the photon pairs form two–mode squeezed states, which are very different from the coherent states analyzed in .
## II Real thermal light versus two-mode squeezed states
The quantum optics mechanism that simulates a thermal spectrum \[case (b)\] is based on two-mode squeezed-states defined by
$$|\zeta _{ab}\rangle =\text{e}^{\zeta (a^{\dagger }b^{\dagger }-ba)}|0_a,0_b\rangle ,$$
(1)
where $`\zeta `$ is (for our purposes) a real parameter, though more generally it can be chosen to be complex . In quantum optics a two-mode squeezed state is typically associated with a so-called non-degenerate parametric amplifier (one of the two photons is called the “signal” and the other the “idler” ). Consider the operator algebra
$$[a,a^{\dagger }]=1=[b,b^{\dagger }],\qquad [a,b]=0=[a^{\dagger },b^{\dagger }],$$
(2)
and the corresponding vacua
$$|0_a\rangle :a|0_a\rangle =0,\qquad |0_b\rangle :b|0_b\rangle =0.$$
(3)
The two-mode vacuum is the state $`|\zeta \rangle \equiv |0(\zeta )\rangle `$ annihilated by the operators
$$A(\zeta )=\mathrm{cosh}(\zeta )\,a-\mathrm{sinh}(\zeta )\,b^{\dagger },$$
(4)
$$B(\zeta )=\mathrm{cosh}(\zeta )\,b-\mathrm{sinh}(\zeta )\,a^{\dagger }.$$
(5)
A characteristic of two-mode squeezed states is that if we measure only one photon and “trace away” the second, a thermal density matrix is obtained . Indeed, if $`O_a`$ represents an observable relative to one mode (say mode “a”), its expectation value in the squeezed vacuum is given by
$$\langle \zeta _{ab}|O_a|\zeta _{ab}\rangle =\frac{1}{\mathrm{cosh}^2(\zeta )}\sum _{n=0}^{\infty }[\mathrm{tanh}(\zeta )]^{2n}\langle n_a|O_a|n_a\rangle .$$
(6)
In particular, if we consider $`O_a=N_a`$, the number operator in mode $`a`$, the above reduces to
$$\langle \zeta _{ab}|N_a|\zeta _{ab}\rangle =\mathrm{sinh}^2(\zeta ).$$
(7)
These formulae have a strong formal analogy with thermofield dynamics (TFD) , where a doubling of the physical Hilbert space of states is invoked in order to be able to rewrite the usual Gibbs (mixed state) thermal average of an observable as an expectation value with respect to a temperature-dependent “vacuum” state (the thermofield vacuum, a pure state in the doubled Hilbert space). In the TFD approach, a trace over the unphysical (fictitious) states of the fictitious Hilbert space gives rise to thermal averages for physical observables, completely analogous to the averages in equation (6) except that we must make the following identification
$$\mathrm{tanh}(\zeta )=\mathrm{exp}\left(-\frac{1}{2}\frac{\hbar \omega }{k_BT}\right),$$
(8)
where $`\omega `$ is the mode frequency and $`T`$ is the temperature. We note that the above identification implies that the squeezing parameter $`\zeta `$ in TFD is $`\omega `$-dependent in a very special way.
The formal analogy with TFD allows us to conclude that, provided we measure only one photon mode, the two-mode squeezed state acts as a thermofield vacuum and the single-mode expectation values acquire a pseudo-thermal character corresponding to a “temperature” $`T_{\mathrm{squeezing}}`$ related to the squeezing parameter $`\zeta `$ by
$$k_BT_{\mathrm{squeezing}}=\frac{\hbar \omega _i}{2\mathrm{log}(\mathrm{coth}(\zeta ))},$$
(9)
where the index $`i=a,b`$ indicates the signal mode or the idler mode respectively; note that the “signal” and “idler” modes can have different effective temperatures (in general $`\omega _{\mathrm{signal}}\neq \omega _{\mathrm{idler}}`$).
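The reduced single-mode statistics described above are easy to verify numerically. The following sketch (plain Python; the value of $`\zeta `$ is purely illustrative) builds the photon-number distribution $`p_n=[\mathrm{tanh}(\zeta )]^{2n}/\mathrm{cosh}^2(\zeta )`$ implied by Eq. (6) and checks the mean occupation of Eq. (7) and the temperature assignment of Eqs. (8)–(9):

```python
import math

def single_mode_stats(zeta, nmax=200):
    """Reduced (single-mode) photon statistics of a two-mode squeezed
    vacuum: p_n = tanh(zeta)^(2n) / cosh(zeta)^2, as implied by Eq. (6)."""
    t2 = math.tanh(zeta) ** 2
    norm = 1.0 / math.cosh(zeta) ** 2
    p = [norm * t2 ** n for n in range(nmax)]
    mean_n = sum(n * pn for n, pn in enumerate(p))
    return p, mean_n

zeta = 0.8
p, mean_n = single_mode_stats(zeta)
assert abs(sum(p) - 1.0) < 1e-9                   # properly normalized
assert abs(mean_n - math.sinh(zeta) ** 2) < 1e-9  # Eq. (7): <N_a> = sinh^2(zeta)

# Bose-Einstein form: p_n ~ exp(-n hbar*omega / k_B T), with tanh^2(zeta)
# playing the role of exp(-hbar*omega / k_B T) as in Eq. (8); the implied
# effective temperature is then exactly that of Eq. (9).
hw_over_kT = -math.log(math.tanh(zeta) ** 2)
assert abs(hw_over_kT - 2 * math.log(1 / math.tanh(zeta))) < 1e-12
```

The check confirms that a single-mode observer cannot distinguish this reduced state from a genuinely thermal one.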
## III A toy model and sonoluminescence
To treat sonoluminescence, we introduce a quantum field theory characterized by an infinite set of bosonic oscillators (as in bosonic TFD; not just two oscillators as in the case of “signal-idler” systems studied in quantum optics). The simple two-mode squeezed vacuum is replaced by
$`|\mathrm{\Omega }[\zeta (k,k^{\prime })]\rangle \equiv \mathrm{exp}\left[{\displaystyle \int d^3k\,d^3k^{\prime }\,\zeta (k,k^{\prime })\left(a_kb_{k^{\prime }}-a_k^{\dagger }b_{k^{\prime }}^{\dagger }\right)}\right]|0\rangle ,`$ (10)
where the function $`\zeta (k,k^{\prime })`$ is peaked near $`k+k^{\prime }=0`$, and becomes proportional to a delta function in the case of infinite volume \[$`\zeta (k,k^{\prime })\to \zeta (k)\,\delta ^3(k+k^{\prime })`$\], when the photons are emitted strictly back-to-back . To be concrete, let us refer to the homogeneous dielectric model presented in . In this limit there is no “mixing” and everything reduces to a sum of two-mode squeezed states, in which each pair of back-to-back modes is decoupled from the others. The frequency $`\omega `$ is the same for each photon of the pair, so that both photons are guaranteed to have the same “temperature”. The two-mode squeezed vacuum then simplifies to
$$|\mathrm{\Omega }(\zeta _k)\rangle \equiv \mathrm{exp}\left[\int d^3k\,\zeta _k\left(a_ka_{-k}-a_k^{\dagger }a_{-k}^{\dagger }\right)\right]|0\rangle .$$
(11)
The key to the present proposal is that, if photons are pair-produced via the dynamical Casimir effect, then they are actually produced in some combination of these two-mode squeezed states . In this case $`T_{\mathrm{squeezing}}`$ is a function of both frequency and squeezing parameter, and in general only a special “fine tuning” would give the same effective temperature for all pairs. If we consider the expectation value in the state $`|\mathrm{\Omega }(\zeta _k)\rangle `$ of $`N_k\equiv a_k^{\dagger }a_k`$ we get
$$\langle \mathrm{\Omega }(\zeta _k)|N_k|\mathrm{\Omega }(\zeta _k)\rangle =\mathrm{sinh}^2(\zeta _k),$$
(12)
so we again find a “thermal” distribution for each value of $`k`$ with temperature
$$k_BT_k\equiv \frac{\hbar \omega _k}{2\mathrm{log}(\mathrm{coth}(\zeta _k))}.$$
(13)
The point is that for $`k\neq \overline{k}`$ we generally get $`T_k\neq T_{\overline{k}}`$ unless a fine-tuning condition holds. This condition is implicitly enforced in the definition of the thermofield vacuum, and it is possible only if we have
$$\mathrm{coth}(\zeta _k)=\text{e}^{\kappa \omega _k},$$
(14)
with $`\kappa `$ some constant, so that the frequency dependence in $`T_k`$ cancels and the same $`T_{\mathrm{squeezing}}`$ is obtained for all pairs.
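This fine-tuning can be made concrete with a short numerical check (a Python sketch in illustrative units with $`\hbar =k_B=1`$; the numerical values are not taken from the text). Squeezing parameters chosen according to Eq. (14), i.e. $`\zeta _k=\mathrm{arctanh}(\text{e}^{\kappa \omega _k})`$, give the same $`T_k`$ of Eq. (13) for every mode, while a generic frequency-independent $`\zeta `$ does not:

```python
import math

def T_k(omega, zeta):
    """Effective temperature of Eq. (13), in units with hbar = k_B = 1."""
    return omega / (2 * math.log(1 / math.tanh(zeta)))

kappa = 0.5
omegas = [0.3, 1.0, 4.0]

# Fine-tuned squeezing, Eq. (14): coth(zeta_k) = exp(kappa * omega_k),
# i.e. zeta_k = arctanh(exp(-kappa * omega_k)).
tuned = [T_k(w, math.atanh(math.exp(-kappa * w))) for w in omegas]
assert all(abs(T - 1 / (2 * kappa)) < 1e-9 for T in tuned)  # one common temperature

# A generic omega-independent zeta gives a different T_k for each mode.
generic = [T_k(w, 0.5) for w in omegas]
assert max(generic) - min(generic) > 0.1
```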
For models of sonoluminescence based on the dynamical Casimir effect (i.e. squeezing the QED vacuum) we cannot rely on a definition to provide the fine tuning, but must perform an actual calculation. Our model is again a useful tool for a quantitative analysis. We have (omitting indices for notational simplicity; our Bogolubov transformation is diagonal) the following relation between the squeezing parameter and the Bogolubov coefficient $`\beta `$
$$\langle N\rangle =\mathrm{sinh}^2(\zeta )=|\beta |^2.$$
(15)
By defining $`\tau \equiv \pi t_0/(n_{\mathrm{in}}^2+n_{\mathrm{out}}^2)`$, where $`t_0`$ is the timescale on which the refractive index changes, one has
$`|\beta (\vec{k}_1,\vec{k}_2)|^2`$ $`=`$ $`{\displaystyle \frac{\mathrm{sinh}^2\left(|n_{\mathrm{in}}^2\omega _{\mathrm{in}}-n_{\mathrm{out}}^2\omega _{\mathrm{out}}|\tau \right)}{\mathrm{sinh}\left(2n_{\mathrm{in}}^2\omega _{\mathrm{in}}\tau \right)\mathrm{sinh}\left(2n_{\mathrm{out}}^2\omega _{\mathrm{out}}\tau \right)}}{\displaystyle \frac{V}{(2\pi )^3}}\delta ^3(\vec{k}_1+\vec{k}_2).`$ (16)
In the adiabatic limit (large frequencies) we get a Boltzmann factor
$$|\beta |^2\approx \mathrm{exp}\left(-4\,\mathrm{min}\{n_{\mathrm{in}},n_{\mathrm{out}}\}\,n_{\mathrm{out}}\,\omega _{\mathrm{out}}\,\tau \right).$$
(17)
Since $`|\beta |`$ is small, $`\mathrm{sinh}(\zeta )\approx \mathrm{tanh}(\zeta )`$, so that in this adiabatic limit
$$|\mathrm{tanh}(\zeta )|^2\approx \mathrm{exp}\left(-4\,\mathrm{min}\{n_{\mathrm{in}},n_{\mathrm{out}}\}\,n_{\mathrm{out}}\,\omega _{\mathrm{out}}\,\tau \right).$$
(18)
Therefore
$$k_BT_{\mathrm{effective}}\approx \frac{\hbar }{8\pi t_0}\frac{n_{\mathrm{in}}^2+n_{\mathrm{out}}^2}{n_{\mathrm{out}}\,\mathrm{min}\{n_{\mathrm{in}},n_{\mathrm{out}}\}}.$$
(19)
Thus for the entire adiabatic region we can assign a single frequency-independent effective temperature, which is really a measure of the speed with which the refractive index changes. Physically, in sonoluminescence this observation applies only to the high-frequency tail of the photon spectrum.
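The adiabatic relations above can be checked numerically. The sketch below evaluates the exact ratio of Eq. (16) (with the delta-function factor dropped) against the Boltzmann factor of Eq. (17); the refractive indices, the timescale $`t_0`$, and the momentum-matching assumption $`n_{\mathrm{in}}\omega _{\mathrm{in}}=n_{\mathrm{out}}\omega _{\mathrm{out}}`$ are illustrative choices for this sketch, not values fixed by the text:

```python
import math

def beta_sq(omega_out, n_in, n_out, t0):
    """Squared Bogolubov coefficient of Eq. (16), with the delta-function
    factor dropped; omega_in is fixed by the (assumed) matching condition
    n_in * omega_in = n_out * omega_out.  Units with hbar = c = 1."""
    tau = math.pi * t0 / (n_in ** 2 + n_out ** 2)
    omega_in = n_out * omega_out / n_in
    a = n_in ** 2 * omega_in
    b = n_out ** 2 * omega_out
    return math.sinh(abs(a - b) * tau) ** 2 / (
        math.sinh(2 * a * tau) * math.sinh(2 * b * tau))

n_in, n_out, t0 = 1.3, 1.0, 1.0
tau = math.pi * t0 / (n_in ** 2 + n_out ** 2)
for omega in (40.0, 80.0):             # adiabatic regime: omega * tau >> 1
    boltzmann = math.exp(-4 * min(n_in, n_out) * n_out * omega * tau)  # Eq. (17)
    assert abs(beta_sq(omega, n_in, n_out, t0) / boltzmann - 1) < 1e-3
```

For these parameters the exact expression agrees with the Boltzmann form to better than 0.1%, and a frequency-independent effective temperature can then be read off as in Eq. (19).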
In contrast, in the low frequency region, where the bulk of the photons emitted in sonoluminescence are to be found, the sudden approximation holds and the spectrum is phase-space-limited (a power law spectrum), not Planckian . It is nevertheless still possible to assign a different effective temperature for each frequency.
Finite volume effects smear the momentum-space delta function, so we no longer get exactly back-to-back photons. This represents a further problem because we have to return to the general squeezed vacuum of equation (10). It is still true that photons are emitted in pairs, pairs that are now approximately back-to-back and of approximately equal frequency. We can again define an effective temperature for each photon of the pair, as in the “signal-idler” systems of quantum optics. This effective temperature is no longer the same for the two photons belonging to the same pair, and no “special condition” exists for getting the same temperature for all the pairs. Hence the analysis of these finite volume distortions is not easy , but the qualitative result that in any dynamical Casimir effect model of sonoluminescence there should be strong correlations between approximately back-to-back photons is robust.
Indeed, if we work with a plane wave approximation for the electromagnetic eigen-modes (this is essentially a version of the Born approximation, modified to deal with Bogolubov coefficients instead of scattering amplitudes) and further modify the infinite-volume model of , both by permitting a more general temporal profile for the refractive index, and by cutting off the space integrations at the surface of the bubble, then the squared Bogolubov coefficient takes the form
$`|\beta (\vec{k}_1,\vec{k}_2)|^2`$ $`=`$ $`F(k_1,k_2;n(t))\left|S\left(|\vec{k}_1+\vec{k}_2|R\right)\right|^2.`$ (20)
Here $`F(k_1,k_2;n(t))`$ is some complicated function of the refractive index temporal profile, which encodes all the dynamics, while $`S\left(|\vec{k}_1+\vec{k}_2|R\right)`$ is a purely kinematical factor arising from the limited spatial integration:
$`S\left(|\vec{k}_1+\vec{k}_2|R\right)\equiv {\displaystyle \int _{r\le R}}d^3\vec{r}\,\mathrm{exp}\left[i(\vec{k}_1+\vec{k}_2)\cdot \vec{r}\right].`$
Indeed, in the infinite volume limit $`|S(|\vec{k}_1+\vec{k}_2|R)|^2\to [V/(2\pi )^3]\,\delta ^3(\vec{k}_1+\vec{k}_2)`$. It is now a standard calculation to show that
$`S\left(|\vec{k}_1+\vec{k}_2|R\right)={\displaystyle \frac{4\pi }{|\vec{k}_1+\vec{k}_2|^3}}\left[\mathrm{sin}(|\vec{k}_1+\vec{k}_2|R)-(|\vec{k}_1+\vec{k}_2|R)\mathrm{cos}(|\vec{k}_1+\vec{k}_2|R)\right].`$
So, independent of the temporal profile, kinematics will provide characteristic angular correlations between the outgoing photons: this result depends only on the existence of a vacuum squeezing effect driven by a time-dependent refractive index (which is what is needed to make the notion of a Bogolubov coefficient meaningful in this context).
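The closed-form kinematical factor can be verified independently of any physics input, since it is just the Fourier transform of a ball of radius $`R`$. The following sketch (with illustrative values of $`q=|\vec{k}_1+\vec{k}_2|`$ and $`R`$) checks it against a brute-force Monte Carlo integration:

```python
import math, random

def S_analytic(q, R):
    """Closed-form kinematical factor: Fourier transform of a ball of
    radius R, evaluated at momentum transfer q = |k1 + k2|."""
    if q == 0.0:
        return 4.0 * math.pi * R ** 3 / 3.0        # the ball's volume
    return 4.0 * math.pi / q ** 3 * (math.sin(q * R) - q * R * math.cos(q * R))

def S_monte_carlo(q, R, n=200_000, seed=1):
    """Brute-force estimate of the integral of exp(i q.r) over the ball;
    q is taken along z, and by symmetry only the real part survives."""
    rng = random.Random(seed)
    volume = 4.0 * math.pi * R ** 3 / 3.0
    acc, kept = 0.0, 0
    while kept < n:
        x, y, z = (rng.uniform(-R, R) for _ in range(3))
        if x * x + y * y + z * z <= R * R:   # rejection-sample inside the ball
            acc += math.cos(q * z)
            kept += 1
    return volume * acc / n

q, R = 3.0, 1.0
assert abs(S_monte_carlo(q, R) - S_analytic(q, R)) < 0.05
```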
The plane-wave approximation used to obtain this formula is valid provided the wavelength of the photons, while they are still inside the bubble, is small compared to the dimensions of the bubble:
$$\lambda _{\mathrm{inside}}\ll R;\qquad \omega \gg \frac{c}{nR}.$$
(21)
While there is still considerable disagreement about the physical size of the bubble when light emission occurs , and almost no data concerning the value of the refractive index of the bubble contents at that time, the scenario developed in is very promising in this regard. In particular, high frequency photons are more likely to exhibit the back-to-back effect, and depending on the values of $`R`$ and $`n`$ this could hold for significant portions of the resulting emission spectrum. Experimentally, one should work at as high a frequency as possible—at the peak close to the cutoff.
These observations lead us to the following proposal.
## IV Two-photon observables
Define the observable
$$N_{ab}\equiv N_a-N_b,$$
(22)
and its variance
$$\mathrm{\Delta }(N_{ab})^2=\mathrm{\Delta }N_a^2+\mathrm{\Delta }N_b^2-2\langle N_aN_b\rangle +2\langle N_a\rangle \langle N_b\rangle .$$
(23)
These number operators $`N_a,N_b`$ refer to photons measured, e.g., back to back. In the case of true thermal light we get
$$\mathrm{\Delta }N_a^2=\langle N_a\rangle (\langle N_a\rangle +1),$$
(24)
$$\langle N_aN_b\rangle =\langle N_a\rangle \langle N_b\rangle ,$$
(25)
so that
$$\mathrm{\Delta }(N_{ab})_{\text{thermal light}}^2=\langle N_a\rangle (\langle N_a\rangle +1)+\langle N_b\rangle (\langle N_b\rangle +1).$$
(26)
For a two-mode squeezed-state
$$\mathrm{\Delta }(N_{ab})_{\text{two-mode squeezed light}}^2=0.$$
(27)
Due to correlations, $`\langle N_aN_b\rangle \neq \langle N_a\rangle \langle N_b\rangle `$. Note also that if only a single photon of the pair is measured, one gets (as expected) a thermal variance $`\mathrm{\Delta }N_a^2=\langle N_a\rangle (\langle N_a\rangle +1)`$. Therefore a measurement of the covariance $`\mathrm{\Delta }(N_{ab})^2`$ can be decisive in discriminating whether the photons are really physically thermal or whether non-classical correlations between the photons occur . If the “thermality” in the sonoluminescence spectrum is of this squeezed-mode type, we will ultimately desire a much more detailed model of the dynamical Casimir effect involving an interaction term that produces pairs of photons in two-mode squeezed states. Apart from our model and its finite volume generalization , the Eberlein model also possesses this property . For this type of squeezed-mode photon pair-production in a linear medium with spacetime-dependent dielectric permittivity and magnetic permeability see ; for nonlinearity effects see .
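As an illustration of this discriminator, the sketch below evaluates $`\mathrm{\Delta }(N_{ab})^2`$ from Eq. (23) for the two cases: the joint photon-number distribution of a two-mode squeezed vacuum (photons occurring strictly in correlated pairs $`|n,n\rangle `$) versus two independent thermal modes with the same mean occupation. The Fock-space truncations are purely numerical conveniences:

```python
import math

def pair_variance(p_joint):
    """Delta(N_a - N_b)^2 computed from a joint photon-number
    distribution {(na, nb): probability}, using Eq. (23)."""
    E = lambda f: sum(p * f(na, nb) for (na, nb), p in p_joint.items())
    mean_a = E(lambda a, b: a)
    mean_b = E(lambda a, b: b)
    var_a = E(lambda a, b: a * a) - mean_a ** 2
    var_b = E(lambda a, b: b * b) - mean_b ** 2
    return var_a + var_b - 2 * E(lambda a, b: a * b) + 2 * mean_a * mean_b

def squeezed_joint(zeta, nmax=300):
    """Two-mode squeezed vacuum: photons occur strictly in pairs |n, n>,
    with p_n = tanh(zeta)^(2n) / cosh(zeta)^2."""
    norm = 1.0 / math.cosh(zeta) ** 2
    t2 = math.tanh(zeta) ** 2
    return {(n, n): norm * t2 ** n for n in range(nmax)}

def thermal_joint(zeta, nmax=60):
    """Two *independent* thermal modes with the same mean occupation."""
    norm = 1.0 / math.cosh(zeta) ** 2
    t2 = math.tanh(zeta) ** 2
    p1 = [norm * t2 ** n for n in range(nmax)]
    return {(a, b): p1[a] * p1[b] for a in range(nmax) for b in range(nmax)}

zeta = 1.0
nbar = math.sinh(zeta) ** 2
assert abs(pair_variance(squeezed_joint(zeta))) < 1e-9                 # Eq. (27)
assert abs(pair_variance(thermal_joint(zeta)) - 2 * nbar * (nbar + 1)) < 1e-3  # Eq. (26)
```

The perfectly correlated pairs give exactly zero variance, while the uncorrelated thermal modes reproduce the thermal result of Eq. (26).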
In summary: The main experimental signature of squeezed-state photons being pair-produced in sonoluminescence is the presence of strong spatial correlations between photons emitted back-to-back with the same frequency. These correlations could be measured, for example, by symmetrically placed back-to-back detectors working in coincidence. Finite-size effects have been shown in to only slightly perturb this back-to-back character of the emitted photons, in the sense that back-to-back emission remains largely dominant. (Additionally, it has been verified that the form of the spectrum is not violently affected.) Of course, a detailed analysis of the many technical experimental problems (such as, e.g., filtering and multi-mode signals in the detectors) also has to be carried out (on these topics see ), but such technical details are beyond the scope of the current work.
## V Discussion
The main aims of the present Letter are to clarify the nature of the photons produced in Casimir-based models of sonoluminescence, and to delineate the available lines of (theoretical as well as experimental) research that should be followed in order to discriminate Casimir-based models from thermal models, preferably without having to understand all of the messy technical details of the condensed matter physics taking place inside the collapsing bubble.
We have shown that “effective thermality” can manifest itself at different levels. What is certainly true is that two-mode squeezed states will exhibit, at a given fixed three-momentum, occupation numbers which in that mode follow Bose–Einstein statistics. This can be called “thermality at fixed wavenumber”. In contrast, it is sometimes possible to assign, at least for a reasonably wide range of wavenumbers, the same temperature to all modes. This “thermality across a range of wavenumbers” gives rise, at least in this range of wavenumbers, to a spectrum which is approximately Planckian.
Our sonoluminescence model exhibits Bose–Einstein thermality but not a truly Planckian spectrum (since the bulk of the photon emission occurs at frequencies where the sudden approximation holds and a common temperature for all the momenta is lacking). The spectrum is generically a power law at low frequencies followed by a cut-off . Although precise measurements in the low frequency tail of the spectrum could also (in principle) allow us to discriminate class “a” models from class “b” models, this possibility has to be considered strongly model-dependent. Furthermore the spectral data available at the present time is in this regard relatively crude: spectral analysis by itself does not seem to be an appropriate tool for discriminating between class “a” and class “b” models.
Despite this limitation we have shown that there is still the possibility of obtaining a clear discrimination between real and effective thermality, without relying on the detailed features of the model, by looking at two-photon correlations. For thermal light one should find thermal variance for photon pairs. On the other hand, thermofield–like photons should show zero variance in appropriate pair correlations. Moreover, our analysis points out that a key point in discriminating, by means of photon measurements alone, between classes of models for sonoluminescence is the mechanism of photon production: Any form of pair-production is associated with two–mode squeezed states and their strong quantum correlations. In contrast, any single-photon production mechanism (thermal, partially thermal, non-thermal) is not. In either case, two-photon correlation measurements are potentially a very useful tool for looking into the nature of sonoluminescence.
## ACKNOWLEDGMENTS
This research was supported by the Italian Ministry of Scientific Research (DWS, SL, and FB), and by the US Department of Energy (MV). MV particularly wishes to thank SISSA (Trieste, Italy) and Victoria University (Te Whare Wananga o te Upoko o te Ika a Maui; Wellington, New Zealand) for hospitality during various stages of this research. FB is indebted to A. Gatti for her very helpful remarks about photon statistics. SL wishes to thank G. Barton, G. Plunien, and R. Schützhold for illuminating discussions.
# Microscopic chaos from Brownian motion?
## Microscopic chaos from Brownian motion?
In a recent Letter in Nature, Gaspard et al. claimed to present empirical evidence for microscopic chaos on a molecular scale from an ingenious experiment using a time series of the positions of a Brownian particle in a liquid. The Letter was preceded by a lead article emphasising the fundamental nature of the experiment. In this note we demonstrate that virtually identical results can be obtained by analysing a corresponding numerical time series of a particle in a manifestly microscopically nonchaotic system.
As in Ref. we analyse the position of a single particle colliding with many others. We use the Ehrenfest wind-tree model where the pointlike (“wind”) particle moves in a plane colliding with randomly placed fixed square scatterers (“trees”, Fig. 1a). We choose this model because collisions with the flat sides of the squares do not lead to exponential separation of corresponding points on initially nearby trajectories. This means there are no positive Lyapunov exponents which are characteristic of microscopic chaos here. In contrast the Lorentz model used in as an example similar to Brownian motion is a wind-tree model where the squares are replaced by hard (circular) disks (cf., Fig. 1) and exhibits exponential separation of nearby trajectories, leading to a positive Lyapunov exponent and hence microscopic chaos.
Nevertheless, we now demonstrate that the nonchaotic Ehrenfest model reproduces all the results of Ref. . A particle trajectory segment shown in Fig. 1b is strikingly similar to that for the Brownian particle (cf., Fig. 2). Our subsequent analysis parallels that of Ref. , where more details may be found. Thus the microscopic chaoticity is determined by estimating the Kolmogorov-Sinai entropy $`h_{KS}`$, using the method of Procaccia and others via the information entropy $`K(n,ϵ,\tau )`$ obtained from the frequency with which the particle retraces part of its (previous) trajectory within a distance $`ϵ`$ for $`n`$ measurements spaced at a time interval $`\tau `$. Since for the systems considered here $`h_{KS}`$ equals the sum of the positive Lyapunov exponents, the determination of a positive $`h_{KS}`$ would imply microscopic chaos. As in we find that $`K`$ grows linearly with time (Fig. 1c and Fig. 3), giving a positive (non-zero) bound on $`h_{KS}`$ (Fig. 1d and Fig. 4). Indeed our Figs. 1b-d for a microscopically nonchaotic model are virtually identical with the corresponding figures 2-4 of . Therefore Gaspard et al. did not prove microscopic chaos for Brownian motion.
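The entropy estimate at issue rests on correlation integrals over trajectory segments. A minimal sketch of the idea (plain Python; not the authors' code, and with a simple one-dimensional random walk standing in for the wind-tree trajectory) shows that the correlation integral decreases as the segment length $`n`$ grows even for this manifestly non-chaotic random process, which is precisely what produces an apparently positive entropy bound:

```python
import random

def correlation_integral(x, n, eps):
    """Fraction of length-n trajectory-segment pairs that stay within
    eps of each other (Chebyshev norm), a la Grassberger-Procaccia."""
    m = len(x) - n
    count = total = 0
    for i in range(m):
        for j in range(i + 1, m):
            total += 1
            if max(abs(x[i + k] - x[j + k]) for k in range(n)) < eps:
                count += 1
    return count / total

rng = random.Random(0)
walk = [0]
for _ in range(400):                  # microscopically random, not chaotic
    walk.append(walk[-1] + rng.choice((-1, 1)))

eps = 3.0
c1 = correlation_integral(walk, 1, eps)
c4 = correlation_integral(walk, 4, eps)
# K(n) ~ -ln C_n grows with segment length n, so the estimated
# "entropy" bound ln(C_1/C_4) comes out positive for pure noise too.
assert 0 < c4 <= c1
```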
The algorithm of as applied here cannot determine the microscopic chaoticity of Brownian motion, since the time interval between measurements, $`1/60\,\mathrm{s}`$ in , is so much larger than the microscopic time scale determined by the inverse collision frequency in a liquid, approximately $`10^{-12}\,\mathrm{s}`$. A decisive determination of microscopic chaos would involve, it seems, at the very least a time interval $`\tau `$ of the same order as the characteristic microscopic time scales.
C. P. Dettmann, E. G. D. Cohen
Center for Studies in Physics and Biology,
Rockefeller University,
New York, NY 10021, USA
H. van Beijeren
Institute for Theoretical Physics,
University of Utrecht,
3584 CC Utrecht, The Netherlands
# Why Cosmologists Believe the Universe is Accelerating
## 1 Introduction
If theoretical cosmologists are the flyboys of astrophysics, they were flying on fumes in the 1990s. Since the early 1980s inflation and cold dark matter (CDM) have been the dominant theoretical ideas in cosmology. However, a key prediction of inflation, a flat Universe (i.e., $`\mathrm{\Omega }_0\equiv \rho _{\mathrm{total}}/\rho _{\mathrm{crit}}=1`$), was beginning to look untenable. By the late 1990s it was becoming increasingly clear that matter only accounted for 30% to 40% of the critical density (see e.g., Turner, 1999). Further, the $`\mathrm{\Omega }_M=1`$, COBE-normalized CDM model was not a very good fit to the data without some embellishment (15% or so of the dark matter in neutrinos, significant deviation from scale invariance – called tilt – or a very low value for the Hubble constant; see e.g., Dodelson et al., 1996).
Because of this and their strong belief in inflation, a number of inflationists (see e.g., Turner, Steigman & Krauss, 1984 and Peebles, 1984) were led to consider seriously the possibility that the missing 60% or so of the critical density exists in the form of vacuum energy (cosmological constant) or something even more interesting with similar properties (see Sec. 3 below). Since determinations of the matter density take advantage of its enhanced gravity when it clumps (in galaxies, clusters or superclusters), vacuum energy, which is by definition spatially smooth, would not have shown up in the matter inventory.
Not only did a cosmological constant solve the “$`\mathrm{\Omega }`$ problem,” but $`\mathrm{\Lambda }`$CDM, the flat CDM model with $`\mathrm{\Omega }_M0.4`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }0.6`$, became the best fit universe model (Turner, 1991 and 1997b; Krauss & Turner, 1995; Ostriker & Steinhardt, 1995; Liddle et al., 1996). In June 1996, at the Critical Dialogues in Cosmology Meeting at Princeton University, the only strike recorded against $`\mathrm{\Lambda }`$CDM was the early SN Ia results of Perlmutter’s group (Perlmutter et al., 1997) which excluded $`\mathrm{\Omega }_\mathrm{\Lambda }>0.5`$ with 95% confidence.
The first indirect experimental hint for something like a cosmological constant came in 1997. Measurements of the anisotropy of the cosmic background radiation (CBR) began to show evidence for the signature of a flat Universe, a peak in the multipole power spectrum at $`l=200`$. Unless the estimates of the matter density were wildly wrong, this was evidence for a smooth, dark energy component. A universe with $`\mathrm{\Omega }_\mathrm{\Lambda }0.6`$ has a smoking gun signature: it is speeding up rather than slowing down. In 1998 came the SN Ia evidence that our Universe is speeding up; for some cosmologists this was a great surprise. For many theoretical cosmologists this was the missing piece of the grand puzzle and the confirmation of a prediction.
## 2 The theoretical case for accelerated expansion
The case for accelerated expansion that existed in January 1998 had three legs: growing evidence that $`\mathrm{\Omega }_M0.4`$ and not 1; the inflationary prediction of a flat Universe and hints from CBR anisotropy that this was indeed true; and the failure of simple $`\mathrm{\Omega }_M=1`$ CDM model and the success of $`\mathrm{\Lambda }`$CDM. The tension between measurements of the Hubble constant and age determinations for the oldest stars was also suggestive, though because of the uncertainties, not as compelling. Taken together, they foreshadowed the presence of a cosmological constant (or something similar) and the discovery of accelerated expansion.
To be more precise, Sandage’s deceleration parameter is given by
$$q_0\equiv -\frac{(\ddot{R}/R)_0}{H_0^2}=\frac{1}{2}\mathrm{\Omega }_0+\frac{3}{2}\sum _i\mathrm{\Omega }_iw_i,$$
(2.1)
where the pressure of component $`i`$ is $`p_i\equiv w_i\rho _i`$; e.g., for baryons $`w_i=0`$, for radiation $`w_i=1/3`$, and for vacuum energy $`w_X=-1`$. For $`\mathrm{\Omega }_0=1`$, $`\mathrm{\Omega }_M=0.4`$ and $`w_X<-\frac{5}{9}`$, the deceleration parameter is negative. The kind of dark component needed to pull cosmology together implies accelerated expansion.
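Equation (2.1) is simple to evaluate. The sketch below (plain Python) reproduces the statement in the text: with $`\mathrm{\Omega }_M=0.4`$ and a smooth component with $`\mathrm{\Omega }_X=0.6`$, the expansion accelerates ($`q_0<0`$) precisely when $`w_X<-5/9`$:

```python
def q0(components):
    """Deceleration parameter, Eq. (2.1):
    q0 = Omega0/2 + (3/2) * sum_i Omega_i * w_i,
    where components is a list of (Omega_i, w_i) pairs."""
    omega0 = sum(om for om, _ in components)
    return 0.5 * omega0 + 1.5 * sum(om * w for om, w in components)

assert q0([(1.0, 0.0)]) == 0.5                      # matter only: decelerating
assert q0([(0.4, 0.0), (0.6, -1.0)]) < 0            # vacuum energy: accelerating
assert abs(q0([(0.4, 0.0), (0.6, -5.0 / 9.0)])) < 1e-12  # w_X = -5/9: dividing line
```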
### 2.1 Matter/energy inventory: $`\mathrm{\Omega }_0=1\pm 0.2`$, $`\mathrm{\Omega }_M=0.4\pm 0.1`$
There is a growing consensus that the anisotropy of the CBR offers the best means of determining the curvature of the Universe and thereby $`\mathrm{\Omega }_0`$. This is because the method is intrinsically geometric – a standard ruler on the last-scattering surface – and involves straightforward physics at a simpler time (see e.g., Kamionkowski et al., 1994). It works like this.
At last scattering baryonic matter (ions and electrons) was still tightly coupled to photons; as the baryons fell into the dark-matter potential wells the pressure of photons acted as a restoring force, and gravity-driven acoustic oscillations resulted. These oscillations can be decomposed into their Fourier modes; Fourier modes with $`k\sim lH_0/2`$ determine the multipole amplitudes $`a_{lm}`$ of CBR anisotropy. Last scattering occurs over a short time, making the CBR a snapshot of the Universe at $`t_{\mathrm{ls}}\simeq 300,000`$ yrs. Each mode is “seen” in a well defined phase of its oscillation. (For the density perturbations predicted by inflation, all modes have the same initial phase because all are growing-mode perturbations.) Modes caught at maximum compression or rarefaction lead to the largest temperature anisotropy; this results in a series of acoustic peaks beginning at $`l\simeq 200`$ (see Fig. 1). The wavelength of the lowest frequency acoustic mode that has reached maximum compression, $`\lambda _{\mathrm{max}}\sim v_st_{\mathrm{ls}}`$, is the standard ruler on the last-scattering surface. Both $`\lambda _{\mathrm{max}}`$ and the distance to the last-scattering surface depend upon $`\mathrm{\Omega }_0`$, and the position of the first peak is $`l\simeq 200/\sqrt{\mathrm{\Omega }_0}`$. This relationship is insensitive to the composition of matter and energy in the Universe.
CBR anisotropy measurements, shown in Fig. 1, now cover three orders of magnitude in multipole and come from more than twenty experiments. COBE is the most precise and covers multipoles $`l=2`$–$`20`$; the other measurements come from balloon-borne, Antarctica-based and ground-based experiments using both low-frequency ($`f<100`$GHz) HEMT receivers and high-frequency ($`f>100`$GHz) bolometers. Taken together, all the measurements are beginning to define the position of the first acoustic peak, at a value that is consistent with a flat Universe. Various analyses of the extant data have been carried out, indicating $`\mathrm{\Omega }_0\simeq 1\pm 0.2`$ (see e.g., Lineweaver, 1998). It is certainly too early to draw definite conclusions or put too much weight in the error estimate. However, a strong case is developing for a flat Universe, and more data are on the way (Python V, Viper, MAT, Maxima, Boomerang, CBI, DASI, and others). Ultimately, the issue will be settled by NASA’s MAP (launch late 2000) and ESA’s Planck (launch 2007) satellites, which will map the entire CBR sky with 30 times the resolution of COBE (around $`0.1^{\circ }`$) (see Page and Wilkinson, 1999).
Since the pioneering work of Fritz Zwicky and Vera Rubin, it has been known that there is far too little material in the form of stars (and related material) to hold galaxies and clusters together, and thus, that most of the matter in the Universe is dark (see e.g. Trimble, 1987). Weighing the dark matter has been the challenge. At present, I believe that clusters provide the most reliable means of estimating the total matter density.
Rich clusters are relatively rare objects – only about 1 in 10 galaxies is found in a rich cluster – which formed from density perturbations of (comoving) size around 10 Mpc. However, because they gather together material from such a large region of space, they can provide a “fair sample” of matter in the Universe. Using clusters as such, the precise BBN baryon density can be used to infer the total matter density (White et al., 1993). (Baryons and dark matter need not be well mixed for this method to work provided that the baryonic and total mass are determined over a large enough portion of the cluster.)
Most of the baryons in clusters reside in the hot, x-ray emitting intracluster gas and not in the galaxies themselves, and so the problem essentially reduces to determining the gas-to-total mass ratio. The gas mass can be determined by two methods: 1) measuring the x-ray flux from the intracluster gas and 2) mapping the Sunyaev–Zel’dovich CBR distortion caused by CBR photons scattering off hot electrons in the intracluster gas. The total cluster mass can be determined three independent ways: 1) using the motions of cluster galaxies and the virial theorem; 2) assuming that the gas is in hydrostatic equilibrium and using it to infer the underlying mass distribution; and 3) mapping the cluster mass directly by gravitational lensing (Tyson, 1999). Within their uncertainties, and where comparisons can be made, the three methods for determining the total mass agree (see e.g., Tyson, 1999); likewise, the two methods for determining the gas mass are consistent.
Mohr et al. (1998) have compiled the gas to total mass ratios determined from x-ray measurements for a sample of 45 clusters; they find $`f_{\mathrm{gas}}=(0.075\pm 0.002)h^{-3/2}`$ (see Fig. 2). Carlstrom (1999), using his S-Z gas measurements and x-ray measurements for the total mass for 27 clusters, finds $`f_{\mathrm{gas}}=(0.06\pm 0.006)h^{-1}`$. (The agreement of these two numbers means that clumping of the gas, which could lead to an overestimate of the gas fraction based upon the x-ray flux, is not a problem.) Invoking the “fair-sample assumption,” the mean matter density in the Universe can be inferred:
$`\mathrm{\Omega }_M=\mathrm{\Omega }_B/f_{\mathrm{gas}}`$ $`=`$ $`(0.3\pm 0.05)h^{-1/2}(\mathrm{Xray})`$ (2.2)
$`=`$ $`(0.25\pm 0.04)h^{-1}(\mathrm{S}\mathrm{Z})`$
$`=`$ $`0.4\pm 0.1(\mathrm{my}\mathrm{summary}).`$
I believe this to be the most reliable and precise determination of the matter density. It involves few assumptions, most of which have now been tested. For example, the agreement of S-Z and x-ray gas masses implies that gas clumping is not significant; the agreement of x-ray and lensing estimates for the total mass implies that hydrostatic equilibrium is a good assumption; the gas fraction does not vary significantly with cluster mass.
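The fair-sample arithmetic is simple enough to check directly. The sketch below is mine, not the author's; it assumes the x-ray gas-fraction scaling $`f_{\mathrm{gas}}=0.075h^{-3/2}`$ quoted above, the BBN value $`\mathrm{\Omega }_Bh^2=0.02`$ quoted later in the text, and an illustrative $`h=0.65`$:

```python
def omega_m_from_cluster(omega_b_h2, fgas_coeff, fgas_h_exp, h):
    """Fair-sample estimate: Omega_M = Omega_B / f_gas,
    with Omega_B = omega_b_h2 / h**2 and f_gas = fgas_coeff * h**fgas_h_exp."""
    omega_b = omega_b_h2 / h ** 2
    f_gas = fgas_coeff * h ** fgas_h_exp
    return omega_b / f_gas

# X-ray gas fraction (Mohr et al. 1998): f_gas ~ 0.075 h^(-3/2)
print(omega_m_from_cluster(0.02, 0.075, -1.5, 0.65))  # ~0.33
```

For $`h\simeq 0.65`$ this lands near $`\mathrm{\Omega }_M\simeq 0.33`$, consistent with the summary value $`0.4\pm 0.1`$.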
### 2.2 Dark energy
The apparently contradictory results, $`\mathrm{\Omega }_0=1\pm 0.2`$ and $`\mathrm{\Omega }_M=0.4\pm 0.1`$, can be reconciled by the presence of a dark-energy component that is nearly smoothly distributed. The cosmological constant is the simplest possibility and it has $`p_X=-\rho _X`$. There are other possibilities for the smooth, dark energy. As I now discuss, other constraints imply that such a component must have very negative pressure ($`w_X\lesssim -\frac{1}{2}`$) leading to the prediction of accelerated expansion.
To begin, parameterize the bulk equation of state of this unknown component: $`w\equiv p_X/\rho _X`$ (Turner & White, 1997). This implies that its energy density evolves as $`\rho _X\propto R^{-n}`$ where $`n=3(1+w)`$. The development of the structure observed today from density perturbations of the size inferred from measurements of the anisotropy of the CBR requires that the Universe be matter dominated from the epoch of matter – radiation equality until very recently. Thus, to avoid interfering with structure formation, the dark-energy component must be less important in the past than it is today. This implies that $`n`$ must be less than $`3`$ or $`w<0`$; the more negative $`w`$ is, the faster this component gets out of the way (see Fig. 3). More careful consideration of the growth of structure implies that $`w`$ must be less than about $`-\frac{1}{3}`$ (Turner & White, 1997).
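The scaling $`\rho _X\propto R^{-3(1+w)}`$ makes the "gets out of the way" statement quantitative; the short sketch below (mine, purely illustrative) compares the dark-energy density to the matter density ($`w=0`$) at a tenth of the present scale factor:

```python
def rho_ratio(w, R):
    """Energy density relative to today for a component with constant w:
    rho(R) / rho(R0=1) = R**(-3*(1+w))."""
    return R ** (-3.0 * (1.0 + w))

for w in (0.0, -1.0 / 3.0, -2.0 / 3.0, -1.0):
    rel = rho_ratio(w, 0.1) / rho_ratio(0.0, 0.1)  # relative to matter at R = 0.1
    print(f"w = {w:+.2f}: (rho_X/rho_M) at R=0.1 is {rel:.3g} x today's value")
```

The more negative $`w`$, the more strongly the component is suppressed in the past, so it interferes less with structure formation.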
Next, consider the constraint provided by the age of the Universe and the Hubble constant. Their product, $`H_0t_0`$, depends upon the equation of state of the Universe; in particular, $`H_0t_0`$ increases with decreasing $`w`$ (see Fig. 4). To be definite, I will take $`t_0=14\pm 1.5`$ Gyr and $`H_0=65\pm 5\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ (see e.g., Chaboyer et al., 1998 and Freedman, 1999); this implies that $`H_0t_0=0.93\pm 0.13`$. Fig. 4 shows that $`w<-\frac{1}{2}`$ is preferred by age/Hubble constant considerations.
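The dimensionless product $`H_0t_0`$ quoted here is just a unit conversion; a quick check (my sketch, with rounded conversion factors):

```python
KM_PER_MPC = 3.086e19   # kilometres in a megaparsec
S_PER_GYR = 3.156e16    # seconds in a gigayear

def hubble_age_product(H0_km_s_Mpc, t0_Gyr):
    """Dimensionless product H0 * t0."""
    return (H0_km_s_Mpc / KM_PER_MPC) * (t0_Gyr * S_PER_GYR)

print(f"H0*t0 = {hubble_age_product(65.0, 14.0):.2f}")  # ~0.93
```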
To summarize, consistency between $`\mathrm{\Omega }_M\simeq 0.4`$ and $`\mathrm{\Omega }_0\simeq 1`$ along with other cosmological considerations implies the existence of a dark-energy component with bulk pressure more negative than about $`-\rho _X/2`$. The simplest example of such is vacuum energy (Einstein’s cosmological constant), for which $`w=-1`$. The smoking-gun signature of a smooth, dark-energy component is accelerated expansion, since $`q_0=0.5+1.5w_X\mathrm{\Omega }_X\simeq 0.5+0.9w<0`$ for $`w<-\frac{5}{9}`$.
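The acceleration condition follows directly from the expression for $`q_0`$; a minimal check (mine), with $`\mathrm{\Omega }_X=0.6`$ as in the text:

```python
def q0_flat(omega_x, w):
    """Deceleration parameter for a flat Universe (radiation neglected):
    q0 = 1/2 + (3/2) * w * Omega_X."""
    return 0.5 + 1.5 * w * omega_x

w_crit = -0.5 / (1.5 * 0.6)  # the q0 = 0 boundary for Omega_X = 0.6
print(f"acceleration requires w < {w_crit:.3f}")  # -5/9 ~ -0.556
print(f"cosmological constant (w = -1): q0 = {q0_flat(0.6, -1.0):.2f}")
```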
### 2.3 $`\mathrm{\Lambda }`$CDM
The cold dark matter scenario for structure formation is the most quantitative and most successful model ever proposed. Two of its key features are inspired by inflation: almost scale invariant, adiabatic density perturbations with Gaussian statistical properties and a critical density Universe. The third, nonbaryonic dark matter, is a logical consequence of the inflationary prediction of a flat universe and the BBN determination of the baryon density at 5% of the critical density.
There is a very large body of data that is consistent with it: the formation epoch of galaxies and distribution of galaxy masses, galaxy correlation function and its evolution, abundance of clusters and its evolution, large-scale structure, and on and on. In the early 1980s attention was focused on a “standard CDM model”: $`\mathrm{\Omega }_0=\mathrm{\Omega }_M=1`$, $`\mathrm{\Omega }_B=0.05`$, $`h=0.50`$, and exactly scale invariant density perturbations (the cosmological equivalent of DOS 1.0). The detection of CBR anisotropy by COBE DMR in 1992 changed everything.
First and most importantly, the COBE DMR detection validated the gravitational instability picture for the growth of large-scale structure: The level of matter inhomogeneity implied at last scattering, after 14 billion years of gravitational amplification, was consistent with the structure seen in the Universe today. Second, the anisotropy, which was detected on the $`10^{}`$ angular scale, permitted an accurate normalization of the CDM power spectrum. For “standard cold dark matter”, this meant that the level of inhomogeneity on all scales could be accurately predicted. It turned out to be about a factor of two too large on galactic scales. Not bad for an ab initio theory.
With the COBE detection came the realization that the quantity and quality of data that bear on CDM was increasing and that the theoretical predictions would have to match their precision. Almost overnight, CDM became a ten (or so) parameter theory. For astrophysicists, and especially cosmologists, this is daunting, as it may seem that a ten-parameter theory can be made to fit any set of observations. This is not the case when one has the quality and quantity of data that will soon be available.
In fact, the ten parameters of CDM + Inflation are an opportunity rather than a curse: Because the parameters depend upon the underlying inflationary model and fundamental aspects of the Universe, we have the very real possibility of learning much about the Universe and inflation. The ten parameters can be organized into two groups: cosmological and dark-matter (Dodelson et al., 1996).
Cosmological Parameters
1. $`h`$, the Hubble constant in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$.
2. $`\mathrm{\Omega }_Bh^2`$, the baryon density. Primeval deuterium measurements together with the theory of BBN imply: $`\mathrm{\Omega }_Bh^2=0.02\pm 0.002`$.
3. $`n`$, the power-law index of the scalar density perturbations. CBR measurements indicate $`n=1.1\pm 0.2`$; $`n=1`$ corresponds to scale-invariant density perturbations. Many inflationary models predict $`n\simeq 0.95`$; the range of predictions runs from $`0.7`$ to $`1.2`$.
4. $`dn/d\mathrm{ln}k`$, “running” of the scalar index with comoving scale ($`k=`$ wavenumber). Inflationary models predict a value of $`𝒪(\pm 10^{-3})`$ or smaller.
5. $`S`$, the overall amplitude squared of density perturbations, quantified by their contribution to the variance of the CBR quadrupole anisotropy.
6. $`T`$, the overall amplitude squared of gravity waves, quantified by their contribution to the variance of the CBR quadrupole anisotropy. Note, the COBE normalization determines $`T+S`$ (see below).
7. $`n_T`$, the power-law index of the gravity wave spectrum. Scale-invariance corresponds to $`n_T=0`$; for inflation, $`n_T`$ is given by $`-\frac{1}{7}\frac{T}{S}`$.
Dark-matter Parameters
1. $`\mathrm{\Omega }_\nu `$, the fraction of critical density in neutrinos ($`=\mathrm{\Sigma }_im_{\nu _i}/90h^2`$, with masses in eV). While the hot dark matter theory of structure formation is not viable, we now know that neutrinos contribute at least 0.3% of the critical density (Fukuda et al., 1998).
2. $`\mathrm{\Omega }_X`$ and $`w_X`$, the fraction of critical density in a smooth dark-energy component and its equation of state. The simplest example is a cosmological constant ($`w_X=1`$).
3. $`g_{}`$, the quantity that counts the number of ultra-relativistic degrees of freedom. The standard cosmology/standard model of particle physics predicts $`g_{}=3.3626`$. The amount of radiation controls when the Universe became matter dominated and thus affects the present spectrum of density inhomogeneity.
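The $`\mathrm{\Omega }_\nu `$ entry above is easy to evaluate. A sketch (mine; the $`0.07`$ eV mass scale suggested by atmospheric oscillations and the choice $`h=0.5`$ are illustrative assumptions):

```python
def omega_nu(sum_masses_ev, h):
    """Neutrino fraction of critical density: sum(m_nu) / (90 h^2), masses in eV."""
    return sum_masses_ev / (90.0 * h ** 2)

# ~5 eV of neutrinos (a nuCDM-style choice) with h = 0.5:
print(omega_nu(5.0, 0.5))    # ~0.22
# ~0.07 eV (atmospheric-oscillation mass scale):
print(omega_nu(0.07, 0.5))   # ~0.003, i.e. ~0.3% of critical
```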
A useful way to organize the different CDM models is by their dark-matter content; within each CDM family, the cosmological parameters vary. One list of models is:
1. sCDM (for simple): Only CDM and baryons; no additional radiation ($`g_{}=3.36`$). The original standard CDM is a member of this family ($`h=0.50`$, $`n=1.00`$, $`\mathrm{\Omega }_B=0.05`$), but is now ruled out (see Fig. 5).
2. $`\tau `$CDM: This model has extra radiation, e.g., produced by the decay of an unstable massive tau neutrino (hence the name); here we take $`g_{}=7.45`$.
3. $`\nu `$CDM (for neutrinos): This model has a dash of hot dark matter; here we take $`\mathrm{\Omega }_\nu =0.2`$ (about 5 eV worth of neutrinos).
4. $`\mathrm{\Lambda }`$CDM (for cosmological constant) or more generally xCDM: This model has a smooth dark-energy component; here, we take $`\mathrm{\Omega }_X=\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$.
Figure 5 summarizes the viability of these different CDM models, based upon CBR measurements and current determinations of the present power spectrum of inhomogeneity (derived from redshift surveys). sCDM is only viable for low values of the Hubble constant (less than $`55\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$) and/or significant tilt (deviation from scale invariance); the region of viability for $`\tau `$CDM is similar to sCDM, but shifted to larger values of the Hubble constant (as large as $`65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). $`\nu `$CDM has an island of viability around $`H_0\simeq 60\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`n\simeq 0.95`$. $`\mathrm{\Lambda }`$CDM can tolerate the largest values of the Hubble constant. While the COBE DMR detection ruled out “standard CDM,” a host of attractive variants were still viable.
However, when other very relevant data are considered too – e.g., age of the Universe, determinations of the cluster baryon fraction, measurements of the Hubble constant, and limits to $`\mathrm{\Omega }_\mathrm{\Lambda }`$$`\mathrm{\Lambda }`$CDM emerges as the hands-down-winner of “best-fit CDM model” (Krauss & Turner, 1995; Ostriker & Steinhardt, 1995; Liddle et al., 1996; Turner, 1997b). At the time of the Critical Dialogues in Cosmology meeting in 1996, the only strike against $`\mathrm{\Lambda }`$CDM was the absence of evidence for its smoking gun signature, accelerated expansion.
### 2.4 Missing energy found!
In 1998 evidence for the accelerated expansion anticipated by theorists was presented in the form of the magnitude – redshift (Hubble) diagram for fifty-some type Ia supernovae (SNe Ia) out to redshifts of nearly 1. Two groups, the Supernova Cosmology Project (Perlmutter et al., 1998) and the High-z Supernova Search Team (Riess et al., 1998), working independently and using different methods of analysis, each found evidence for accelerated expansion. Perlmutter et al. (1998) summarize their results as a constraint to a cosmological constant (see Fig. 7),
$$\mathrm{\Omega }_\mathrm{\Lambda }=\frac{4}{3}\mathrm{\Omega }_M+\frac{1}{3}\pm \frac{1}{6}.$$
(2.3)
For $`\mathrm{\Omega }_M\simeq 0.4\pm 0.1`$, this implies $`\mathrm{\Omega }_\mathrm{\Lambda }=0.85\pm 0.2`$, or just what is needed to account for the missing energy! As I have tried to explain, cosmologists were quicker than most to believe, as accelerated expansion was the missing piece of the puzzle.
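Propagating Eq. (2.3) numerically is a one-liner (my sketch; adding the $`\mathrm{\Omega }_M`$ uncertainty in quadrature with the fit's $`\pm \frac{1}{6}`$ is my assumption):

```python
import math

def omega_lambda_sn(omega_m, d_omega_m):
    """Central value and error of Omega_Lambda from the SN Ia fit
    Omega_L = (4/3) Omega_M + 1/3 (+- 1/6)."""
    central = (4.0 / 3.0) * omega_m + 1.0 / 3.0
    err = math.hypot((4.0 / 3.0) * d_omega_m, 1.0 / 6.0)
    return central, err

ol, err = omega_lambda_sn(0.4, 0.1)
print(f"Omega_Lambda = {ol:.2f} +- {err:.2f}")  # 0.87 +- 0.21
```

This reproduces the quoted $`\mathrm{\Omega }_\mathrm{\Lambda }=0.85\pm 0.2`$ to within rounding.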
Recently, two other studies, one based upon the x-ray properties of rich clusters of galaxies (Mohr et al., 1999) and the other based upon the properties of double-lobe radio galaxies (Guerra et al., 1998), have reported evidence for a cosmological constant (or similar dark-energy component) that is consistent with the SN Ia results (i.e., $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.7`$).
There is another test of an accelerating Universe whose results are more ambiguous. It is based upon the fact that the frequency of multiply lensed QSOs is expected to be significantly higher in an accelerating universe (Turner, 1990). Kochanek (1996) has used gravitational lensing of QSOs to place a 95% cl upper limit, $`\mathrm{\Omega }_\mathrm{\Lambda }<0.66`$; and Waga and Miceli (1998) have generalized it to a dark-energy component with negative pressure: $`\mathrm{\Omega }_X<1.3+0.55w`$ (95% cl), both results for a flat Universe. On the other hand, Chiba and Yoshii (1998) claim evidence for a cosmological constant, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7_{-0.2}^{+0.1}`$, based upon the same data. From this I conclude: 1) Lensing excludes $`\mathrm{\Omega }_\mathrm{\Lambda }`$ larger than 0.8; 2) Because of the modeling uncertainties and lack of sensitivity for $`\mathrm{\Omega }_\mathrm{\Lambda }<0.55`$, lensing has little power to strictly constrain $`\mathrm{\Lambda }`$ or a dark component; and 3) When larger objective surveys of gravitational-lensed QSOs are carried out (e.g., the Sloan Digital Sky Survey), there is the possibility of uncovering another smoking-gun for accelerated expansion.
### 2.5 Cosmic concordance
With the SN Ia results we have for the first time a complete and self-consistent accounting of mass and energy in the Universe. The consistency of the matter/energy accounting is illustrated in Fig. 7. Let me explain this exciting figure. The SN Ia results are sensitive to the acceleration (or deceleration) of the expansion and constrain the combination $`\frac{4}{3}\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda }`$. (Note, $`q_0=\frac{1}{2}\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda }`$; $`\frac{4}{3}\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda }`$ corresponds to the deceleration parameter at redshift $`z\simeq 0.4`$, the median redshift of these samples). The (approximately) orthogonal combination, $`\mathrm{\Omega }_0=\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }`$, is constrained by CBR anisotropy. Together, they define a concordance region around $`\mathrm{\Omega }_0\simeq 1`$, $`\mathrm{\Omega }_M\simeq 1/3`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 2/3`$. The constraint to the matter density alone, $`\mathrm{\Omega }_M=0.4\pm 0.1`$, provides a cross check, and it is consistent with these numbers. Further, these numbers point to $`\mathrm{\Lambda }`$CDM (or something similar) as the cold dark matter model. Another body of observations already supports this as the best fit model. Cosmic concordance indeed!
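The concordance values quoted here can be recovered by intersecting the two constraints directly (a sketch of mine, using the central SN relation from Eq. (2.3) and exact flatness):

```python
# CBR:  Omega_M + Omega_L = 1             (flat Universe)
# SNe:  Omega_L = (4/3) * Omega_M + 1/3   (central value of Eq. 2.3)
# Substituting: Omega_M + (4/3) Omega_M + 1/3 = 1  =>  (7/3) Omega_M = 2/3
omega_m = (2.0 / 3.0) / (7.0 / 3.0)
omega_l = 1.0 - omega_m
print(f"Omega_M ~ {omega_m:.2f}, Omega_Lambda ~ {omega_l:.2f}")  # ~0.29 and ~0.71
```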
## 3 What is the dark energy?
I have often used the term exotic to refer to particle dark matter. That term will now have to be reserved for the dark energy that is causing the accelerated expansion of the Universe – by any standard, it is more exotic and more poorly understood. Here is what we do know: it contributes about 60% of the critical density; it has pressure more negative than about $`-\rho /2`$; and it does not clump (otherwise it would have contributed to estimates of the mass density). The simplest possibility is the energy associated with the virtual particles that populate the quantum vacuum; in this case $`p=-\rho `$ and the dark energy is absolutely spatially and temporally uniform.
This “simple” interpretation has its difficulties. Einstein “invented” the cosmological constant to make a static model of the Universe and then he discarded it; we now know that the concept is not optional. The cosmological constant corresponds to the energy associated with the vacuum. However, there is no sensible calculation of that energy (see e.g., Zel’dovich, 1967; Bludman and Ruderman, 1977; and Weinberg, 1989), with estimates ranging from $`10^{122}`$ to $`10^{55}`$ times the critical density. Some particle physicists believe that when the problem is understood, the answer will be zero. Spurred in part by the possibility that cosmologists may have actually weighed the vacuum (!), particle theorists are taking a fresh look at the problem (see e.g., Harvey, 1998; Sundrum, 1997). Sundrum’s proposal, that the gravitational energy of the vacuum is close to the present critical density because the graviton is a composite particle with size of order 1 cm, is indicative of the profound consequences that a cosmological constant has for fundamental physics.
Because of the theoretical problems mentioned above, as well as the checkered history of the cosmological constant, theorists have explored other possibilities for a smooth component of the dark energy (see e.g., Turner & White, 1997). Wilczek and I pointed out that even if the energy of the true vacuum is zero, as the Universe cooled and went through a series of phase transitions, it could have become hung up in a metastable vacuum with nonzero vacuum energy (Turner & Wilczek, 1982). In the context of string theory, where there are a very large number of energy-equivalent vacua, this becomes a more interesting possibility: perhaps the degeneracy of vacuum states is broken by very small effects, so small that we were not steered into the lowest energy vacuum during the earliest moments.
Vilenkin (1984) has suggested a tangled network of very light cosmic strings (also see, Spergel & Pen, 1997) produced at the electroweak phase transition; networks of other frustrated defects (e.g., walls) are also possible. In general, the bulk equation-of-state of frustrated defects is characterized by $`w=-N/3`$ where $`N`$ is the dimension of the defect ($`N=1`$ for strings, $`=2`$ for walls, etc.). The SN Ia data almost exclude strings, but still allow walls.
An alternative that has received a lot of attention is the idea of a “decaying cosmological constant”, a term coined by the Soviet cosmologist Matvei Petrovich Bronstein in 1933 (Bronstein, 1933). (Bronstein was executed on Stalin’s orders in 1938, presumably for reasons not directly related to the cosmological constant; see Kragh, 1996.) The term is, of course, an oxymoron; what people have in mind is making vacuum energy dynamical. The simplest realization is a dynamical, evolving scalar field. If it is spatially homogeneous, then its energy density and pressure are given by
$`\rho `$ $`=`$ $`{\displaystyle \frac{1}{2}}\dot{\varphi }^2+V(\varphi )`$
$`p`$ $`=`$ $`{\displaystyle \frac{1}{2}}\dot{\varphi }^2-V(\varphi )`$ (3.4)
and its equation of motion by (see e.g., Turner, 1983)
$$\ddot{\varphi }+3H\dot{\varphi }+V^{}(\varphi )=0$$
(3.5)
The basic idea is that energy of the true vacuum is zero, but not all fields have evolved to their state of minimum energy. This is qualitatively different from that of a metastable vacuum, which is a local minimum of the potential and is classically stable. Here, the field is classically unstable and is rolling toward its lowest energy state.
Two features of the “rolling-scalar-field scenario” are worth noting. First, the effective equation of state, $`w=(\frac{1}{2}\dot{\varphi }^2-V)/(\frac{1}{2}\dot{\varphi }^2+V)`$, can take on any value from $`-1`$ to $`1`$. Second, $`w`$ can vary with time. These are key features that may allow it to be distinguished from the other possibilities. The combination of SN Ia, CBR and large-scale structure data are already beginning to significantly constrain models (Perlmutter, Turner & White, 1999), and interestingly enough, the cosmological constant is still the best fit (see Fig. 8).
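The bounds on $`w`$ follow immediately from the expression for the effective equation of state; a minimal sketch (mine) of the two limits and the balanced case:

```python
def w_scalar(phidot, V):
    """Effective equation of state of a homogeneous scalar field:
    w = (kinetic - potential) / (kinetic + potential)."""
    kinetic = 0.5 * phidot ** 2
    return (kinetic - V) / (kinetic + V)

print(w_scalar(0.0, 1.0))  # -1.0: potential-dominated, acts like a cosmological constant
print(w_scalar(2.0, 0.0))  # +1.0: kinetic-dominated ("stiff" behavior)
print(w_scalar(1.0, 0.5))  #  0.0: equal parts, matter-like dilution
```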
The rolling scalar field scenario (aka mini-inflation or quintessence) has received a lot of attention over the past decade (Freese et al., 1987; Ozer & Taha, 1987; Ratra & Peebles, 1988; Frieman et al., 1995; Coble et al., 1996; Turner & White, 1997; Caldwell et al., 1998; Steinhardt, 1999). It is an interesting idea, but not without its own difficulties. First, one must assume that the energy of the true vacuum state ($`\varphi `$ at the minimum of its potential) is zero; i.e., it does not address the cosmological constant problem. Second, as Carroll (1998) has emphasized, the scalar field is very light and can mediate long-range forces. This places severe constraints on it. Finally, with the possible exception of one model (Frieman et al., 1995), none of the scalar-field models address how $`\varphi `$ fits into the grander scheme of things and why it is so light ($`m\simeq 10^{-33}`$ eV).
## 4 Looking ahead
Theorists often require new results to pass Eddington’s test: No experimental result should be believed until confirmed by theory. While provocative (as Eddington had apparently intended it to be), it embodies the wisdom of mature science. Results that bring down the entire conceptual framework are very rare indeed.
Both cosmologists and supernova theorists seem to use Eddington’s test to some degree. It seems to me that the summary of the SN Ia part of the meeting goes like this: We don’t know what SN Ia are; we don’t know how they work; but we believe SN Ia are very good standardizable candles. I think what they mean is they have a general framework for understanding a SN Ia, the thermonuclear detonation of a Chandrasekhar mass white dwarf, and have failed in their models to find a second (significant) parameter that is consistent with the data at hand. Cosmologists are persuaded that the Universe is accelerating both because of the SN Ia results and because this was the missing piece to a grander puzzle.
Not only have SN Ia led us to the acceleration of the Universe, but also I believe they will play a major role in unraveling the mystery of the dark energy. The reason is simple: we can be confident that the dark energy was an insignificant component in the past; it has just recently become important. While the anisotropy of the CBR is indeed a cosmic Rosetta Stone, it is most sensitive to physics around the time of decoupling. (To be very specific, the CBR power spectrum is almost identical for all flat cosmological models with the same conformal age today.) SNe Ia probe the Universe just around the time dark energy was becoming dominant (redshifts of a few). My student Dragan Huterer and I (Huterer & Turner, 1998) have been so bold as to suggest that with 500 or so SN Ia with redshifts between 0 and 1, one might be able to discriminate between the different possibilities and even reconstruct the scalar potential for the quintessence field (see Fig. 9).
###### Acknowledgements.
My work is supported by the US Department of Energy and the US NASA through grants at Chicago and Fermilab.
# Neutrino Magnetic Moments and Atmospheric Neutrinos

Talk presented at WIN99, Cape Town, South Africa, Jan. 25–30, 1999. This work is supported in part by KOSEF, MOE through BSRI 98-2468, and Korea Research Foundation.
## Introduction
For a long time, it was assumed that neutrinos have vanishing quantities: $`m_\nu =0,Q_{\mathrm{em}}=0`$, and $`\mu _\nu =0`$. Among these the neutrino mass problem has attracted the most attention, and finally we might have evidence for nonzero neutrino mass Super . The other important neutrino property to be exploited is the electromagnetic property, in particular the magnetic moment.
The reason for vanishing neutrino mass was very naive in the 50’s and 60’s: the hypothesis of $`\gamma _5`$-invariance. Under the $`\gamma _5`$-invariance, $`\nu =\pm \gamma _5\nu `$, $`\nu `$ appears in only one chirality. In the standard model (SM), this is encoded as no right-handed neutrino.
In gauge theory models, the story changes because one can calculate the properties of the neutrino at high precision. In the SM, one cannot write a mass term for $`\nu `$ with $`d\le 4`$ terms. To write a mass term for $`\nu `$, one has to introduce $`d\ge 5`$ terms, or introduce singlet neutrino(s). The two-component neutrino we consider in the left-handed doublet can be Weyl or Majorana type.
If it is a Weyl neutrino, it satisfies $`\nu _i=a_i\gamma _5\nu _i`$ where $`a_i=+1`$ or $`-1`$. Then the magnetic moment term transforms as $`\overline{\nu }_i\sigma _{\mu \nu }q^\nu \nu _j+\overline{\nu }_j\sigma _{\mu \nu }q^\nu \nu _i\to -a_ia_j[\overline{\nu }_i\sigma _{\mu \nu }q^\nu \nu _j+\overline{\nu }_j\sigma _{\mu \nu }q^\nu \nu _i].`$ Therefore, to have a nonvanishing magnetic moment (or mass), we must require $`a_ia_j=-1`$, i.e. the existence of right-handed singlet neutrino(s).
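The sign argument rests on two gamma-matrix identities: $`\gamma _5`$ anticommutes with each $`\gamma ^\mu `$ but commutes with $`\sigma _{\mu \nu }`$, so under $`\nu \to a\gamma _5\nu `$ the tensor bilinear picks up a factor $`-a_ia_j`$. A quick numerical verification (my own sketch, in the Dirac representation, using numpy):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))
g = [np.block([[I2, Z2], [Z2, -I2]])]                # gamma^0, Dirac representation
g += [np.block([[Z2, si], [-si, Z2]]) for si in s]   # gamma^1..gamma^3
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]

def sigma(mu, nu):
    """sigma_{mu nu} = (i/2) [gamma^mu, gamma^nu]."""
    return 0.5j * (g[mu] @ g[nu] - g[nu] @ g[mu])

for mu in range(4):
    # gamma5 anticommutes with every gamma^mu ...
    assert np.allclose(g5 @ g[mu] + g[mu] @ g5, 0)
    for nu in range(4):
        # ... but commutes with every sigma_{mu nu}
        assert np.allclose(g5 @ sigma(mu, nu) - sigma(mu, nu) @ g5, 0)
print("identities verified: the bilinear transforms with a factor -a_i a_j")
```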
For Majorana neutrinos, $`\psi ^c=C\overline{\psi }^T`$, it is possible to write a mass term without introducing right-handed neutrino(s).
Thus, it is possible to introduce neutrino masses and magnetic moments in the SM, by a slight extension of the model. The question is how large they are.
For the detection of neutrino masses and oscillations, there have been numerous studies from solar-, atmospheric-, reactor-, and accelerator-neutrino experiments. The effect of neutrinos in cosmology was also used to get bounds on neutrino masses. On the other hand, for the neutrino magnetic moment astrophysical constraints gave useful bounds.
Usually, the bound of the neutrino magnetic moment is given in units ($`f`$) of Bohr magneton ($`\mu _B`$),
$$\mu _{\nu _i}=f_i\mu _B.$$
(1)
## History and the known bounds
The first significant bound on the magnetic moment of $`\nu _e`$ was given from astrophysics, $`|f_e|<10^{-10}`$, by Bernstein et al. bernstein . A better bound on $`f_e`$ was obtained from SN1987A, $`|f_e|<10^{-13}`$ SN .
For the muon neutrino, a useful bound was obtained from the neutral current data kmo , $`|f_\mu |<0.8\times 10^{-8}`$. The first bound on the transition magnetic moment was also given from the neutral current experiment kim .
For the tau neutrino, $`|f_\tau |<1.3\times 10^{-7}`$ has been obtained recently sergei .
For the theoretical side, it has been known from early days that it is possible to generate large magnetic moments for neutrinos cal . Note that the see-saw mass for neutrinos appear as in Fig. 1. Here, there does not exist any charged particle and hence there is no contribution to magnetic moment at this level. Thus to have a large neutrino magnetic moments, one needs a Feynman diagram of Fig. 2 type, where we introduced a heavy lepton $`L`$ coupling to $`W`$ via
$$\left(\begin{array}{c}\nu _l\\ l^{-}\mathrm{cos}\alpha +L^{-}\mathrm{sin}\alpha \end{array}\right)_L,\left(\begin{array}{c}L^0\\ -l^{-}\mathrm{sin}\alpha +L^{-}\mathrm{cos}\alpha \end{array}\right)_L,\left(\begin{array}{c}\nu _l\\ L^{-}\end{array}\right),\left(\begin{array}{c}L^0\\ l^{-}\end{array}\right).$$
(2)
Then one can easily estimate the magnetic moment of neutrinos as cal ($`f^{}`$ is the transition moment)
$$f\mathrm{or}f^{}=\frac{G_Fm_Lm_e}{2\sqrt{2}\pi ^2}abI\left(\frac{1}{2}+\frac{1}{2}\delta _{\nu \nu ^{}}\right)$$
(3)
where $`abI`$, which is a function of mixing and Feynman integral, is of order 1. One can also draw Feynman diagrams with charged scalars in the loop with appropriate Yukawa interactions introduced. These kinds of diagrams generally introduce a transition magnetic moment of order $`f^{}\sim m_Lm_e/M^2`$ where $`M`$ is the mass of the intermediate scalar or gauge boson. Note that without extra charged leptons $`m_L`$ would be the neutrino mass, rendering an extremely small $`\mu _\nu `$.
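To get a feel for the magnitude of Eq. (3), one can plug in numbers (my sketch; the 100 GeV heavy-lepton mass and $`abI=1`$ are illustrative assumptions, and the factor of $`\frac{1}{2}`$ corresponds to a transition moment):

```python
import math

G_F = 1.166e-5   # Fermi constant [GeV^-2]
M_E = 0.511e-3   # electron mass [GeV]

def f_moment(m_L_GeV, abI=1.0, same_flavor=False):
    """Eq. (3): f = (G_F m_L m_e / (2 sqrt(2) pi^2)) * abI * (1/2 + delta/2)."""
    delta = 1.0 if same_flavor else 0.0
    return (G_F * m_L_GeV * M_E / (2.0 * math.sqrt(2.0) * math.pi ** 2)
            * abI * (0.5 + 0.5 * delta))

print(f"f' ~ {f_moment(100.0):.1e} (in units of mu_B)")  # ~1.1e-08 for m_L = 100 GeV
```

A heavy lepton near the weak scale thus naturally gives $`f`$ of order $`10^{-8}`$, comparable in size to the neutral-current bound on $`|f_\mu |`$ quoted earlier.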
## NC, $`\mu _\nu ^{}`$, and Single $`\pi ^0`$ Production by $`\nu _\mu ^{\mathrm{atm}}`$
The Super-K collaboration has reported the ratio
$$R_{\pi ^0/e}=\frac{(\pi ^0/e)_{\mathrm{data}}}{(\pi ^0/e)_{\mathrm{MC}}}=0.93\pm 0.07\pm 0.19$$
(4)
which is consistent with 1 at present. However, one may narrow down the experimental errors and observe whether or not it differs from 1. One assumes that $`\nu _e`$ is not oscillated in the atmospheric neutrino data sample, and hence the MC electrons are estimated with the standard CC cross section. The denominator is calculated assuming that the NC is the same for the cases with and without neutrino oscillation. So $`R_{\pi ^0/e}`$ is expected to be 1 if there is no oscillation of SM neutrinos to sterile neutrinos. If there exist oscillations of $`\nu _\mu `$ to sterile neutrinos, then one expects $`R_{\pi ^0/e}<1`$.
However, in our recent work kkl we pointed out that one should be careful to draw a firm conclusion on this matter because if a sizable transition magnetic moment of $`\nu _\mu `$ exists then one expects a different conclusion.
For the study of NC, the single $`\pi ^0`$ production is known to be very useful. The dominant contribution to single $`\pi ^0`$ production at atmospheric neutrino energies is through $`\mathrm{\Delta }`$ production,
$$\nu +N\to \nu +\mathrm{\Delta },\mathrm{\Delta }\to N+\pi ^0.$$
(5)
In this calculation, we used the form factors given in Ref. fogli . For $`E_\nu <10`$ GeV or a recoil-nucleon kinetic energy $`<1`$ GeV, the process $`\nu +N\to \nu ^{}+N`$ is difficult to observe at Super-K. So $`\pi ^0`$ production is the cleanest way to detect NC interactions through the Cherenkov ring (from $`\pi ^0`$ decay) at Super-K.
For a transition magnetic moment parametrized by $`f^{}`$, $`if^{}\mu _B\overline{u}(l^{})\sigma _{\mu \nu }q^\nu u(l)_{\nu _\mu }`$ where $`q=l-l^{}`$, the single $`\pi ^0`$ production cross section through $`\mathrm{\Delta }`$ production is given in Ref. kkl . In Fig. 3, we show the result (the ratio of the neutrino magnetic moment contribution and the NC contribution) as a function of neutrino energy.
Note that the magnetic moment part is more important in the low $`q^2`$ region due to the photon propagator. In principle, one can distinguish neutrino magnetic moment interactions from the NC interactions. From Fig. 3, if we require $`r_{f^{}/NC}\le 0.13`$, then we obtain a bound $`f^{}\le 2.2\times 10^{-9}`$.
In conclusion, the transition magnetic moment $`f^{}`$ can be large. For $`\nu ^{}`$ heavy, it is not restricted by the SN1987A bound. But atmospheric neutrinos of 1–10 GeV can produce $`\nu ^{}`$, and can mimic NC data kim . Before interpreting NC effects from atmospheric neutrino data, one has to separate out the $`\mu _\nu ^{}`$ contribution.
## Models with Large $`\mu _\nu `$
Before closing, we point out the connection between $`\mu _\nu `$ and the solar neutrino problem. One possibility to reduce the solar $`\nu _e`$ flux is to precess $`\nu _{eL}`$ to $`\nu _R`$ with a large $`\mu _\nu `$ in a strong magnetic field okun . But this idea seems to be ruled out by the nucleosynthesis argument morgan , $`\mu _\nu <10^{-11}\mu _B`$, and the SN1987A argument SN , $`\mu _\nu <10^{-13}\mu _B`$. The SN1987A bound comes from the energy loss mechanism: if $`\nu _{eR}`$ is created, it takes energy out of the core. But if it is trapped, then the bound does not apply. (The transition magnetic moments to $`\nu ^{}`$ are not restricted by these bounds for a heavy enough $`\nu ^{}`$, but then the transition magnetic moment cannot account for the solar neutrino deficit.)
The solar neutrino problem requires $`\mu _{\nu _e}\sim 10^{-11}\mu _B`$, which is considered to be large. To use the idea of trapping, the oscillation is
$$\nu _{eL}\nu _{\mu R}^c.$$
(6)
Namely, we use the Konopinski-Mahmoud scheme, in which $`\nu _{\mu R}^c`$ is the weak interaction partner of $`\mu _R^+`$, implying that the oscillated neutrino participates in weak interactions and is hence trapped in the supernova core. This picture is very restrictive, as the models given in Ref. voloshin suggest. These models try to obtain a large neutrino magnetic moment while keeping the neutrino mass small. For this purpose an $`SU(2)_V`$ symmetry is introduced, under which $`(\nu _e,\nu _{\mu R}^c)`$ forms a doublet. In this case, $`\mu `$-number minus electron-number is conserved. The reason for the vanishing mass is that $`\nu ^TC\nu ^c`$ is symmetric under $`SU(2)_V`$, i.e. the mass term is a triplet under $`SU(2)_V`$ and hence is forbidden. On the other hand, the magnetic moment term, $`\nu ^TC\sigma _{\alpha \beta }\nu ^cF^{\alpha \beta }`$, is a singlet and is allowed, as shown in Fig. 4. But why $`SU(2)_V`$? This is akin to the dilemma encountered in any fermion mass ansatz.
# Extruded Plastic Scintillation Detectors
## I Introduction
Plastic scintillation detectors have been used in nuclear and high energy physics for many decades . Their advantages and disadvantages are recognized. Among their benefits are fast response, ease of manufacture and versatility. Their main drawbacks are limited radiation resistance and high cost. Many research projects have concentrated on improving the fundamental properties of plastic scintillators , but little attention has focused on their cost. Currently available plastic scintillating materials are high quality products that are relatively expensive, and because of that, their use in very large detectors has not been a feasible option. For instance, MINOS (Main Injector Neutrino Oscillation Search) will require 400,000 Kg of plastic scintillator for its detector . With the price of cast scintillator at approximately $40 per Kg, such a detector would not be affordable. However, recent studies using commercial polystyrene pellets as the base material for extrudable plastic scintillators have allowed the MINOS collaboration to consider a less expensive alternative for building a plastic scintillation detector. Furthermore, the D0 experiment at Fermilab has been able to use extruded plastic scintillator to build and upgrade its Forward and Central Preshower Detectors. In this case, the driving force was not the cost of the material, since only 2,000 Kg of plastic scintillator were needed, but the opportunity to use a particular shape (triangular bar) that would have been expensive to machine out of cast plastic scintillator sheets .
## II Extruded Plastic Scintillators
Several factors contribute to the high cost of plastic scintillating sheets and wavelength shifting fibers. The main reason is the labor-intensive nature of the manufacturing process. The raw materials, namely styrene, vinyltoluene, and the dopants, need to be highly pure. These purification steps often take place just prior to the material utilization. Cleaning and assembly of the molds for the polymerization process is a detail-oriented operation that adds to the overall timeline. The polymerization cycle lasts several days. It consists of a high temperature treatment to induce full conversion from monomer to polymer, followed by a controlled ramp-down to room temperature to achieve a stress-free material. Finally, there are machining charges for sheets and tiles, and drawing charges for fibers that cannot be overlooked.
In order to significantly lower the cost of plastic scintillators, extruded plastic scintillation materials need to be considered. In an extrusion process, polymer pellets or powder must be used. Commercial polystyrene pellets are readily available, thus eliminating monomer purification and polymerization charges. In addition, extrusion can manufacture nearly any shape, increasing detector geometry options. There are, however, some important disadvantages. The extruded plastic scintillator is known to have poorer optical quality than the cast material. The main cause is high particulate matter content within the polystyrene pellets. General purpose polystyrene pellets are utilized in numerous products but few of them have strict optical requirements. A way to bypass the short attenuation length problem is to extrude a scintillator shape and use a wavelength shifting (WLS) fiber as readout. Our first approach was a two-step process that involved adding dopants to commercial polystyrene pellets to produce scintillating polystyrene pellets, which were then used to extrude a scintillator profile with a hole in the middle for a WLS fiber (Figure 1). The goal in the first step was to prepare scintillating pellets of acceptable optical quality in a factory environment. In addition to careful selection of the raw materials, the manufacturing concerns dealt with possible discoloration of the scintillating pellets because of either residues present in the equipment or degradation of the polymer pellets and the dopants in the processing device. The latter could be induced by the presence of oxygen at the high temperatures and pressures which constitute the typical operating conditions.
After producing the first batch of scintillating pellets, samples were cast to perform light yield and radiation degradation studies. Samples of standard cast scintillators such as BC404 and BC408, and samples prepared through bulk polymerization at Fermilab were also included in the studies (Table I). All the samples had similar dopant composition and were cut as 2-cm cubes. The light yield measurements were performed using a <sup>207</sup>Bi source (1 MeV electrons). The light yield results (Table I) showed no significant difference among them. The Bicron samples are made of poly(vinyltoluene) instead of polystyrene which accounts for the 20% increase in light output . The samples for radiation damage studies were placed in stainless steel cans and connected to a vacuum pump for two weeks to remove dissolved air and moisture. The cans were then back-filled with nitrogen and irradiated with a <sup>60</sup>Co source at the Phoenix Memorial Laboratory of the University of Michigan. The irradiations took place at a rate of approximately 15 KGy/h to a total dose of 1 KGy. After irradiation and annealing, the extruded scintillator cubes showed a 5% decrease in light yield which is similar to the losses observed in regular scintillator of this composition. Based on these tests, there was no sign of degradation in the scintillating pellets. The material was then used to produce extruded scintillator of different profiles with a hole in the middle for a WLS fiber.
TABLE I. Relative light yield of samples with similar compositions but from different manufacturing processes.
| Scintillator | Bicron<sup>a</sup> | extruded | bulk polymerized |
| --- | --- | --- | --- |
| 404 | 1.0 | 0.80 | 0.78 |
| 408 | 1.0 | 0.85 | 0.77 |
<sup>a</sup>Bicron scintillator has a poly(vinyltoluene) matrix which yields 20% more light than a polystyrene one .
### A Selection of Raw Materials
There are many manufacturers and grades of polystyrene pellets. Most of them fall under the category of general purpose polystyrene. Only a few offer optical quality polystyrene pellets. Needless to say, there is a substantial difference in price. Nonetheless, the first plastic scintillating pellets were prepared using an optical grade polystyrene from Dow, labeled XU70251, which was later superseded by XU70262 (Dow 262). The price for Dow 262 is about $4.5 per Kg. After confirmation by the initial tests that high quality extruded plastic scintillators were feasible, the quest began to replace the costly optical grade pellets with general purpose material. Various samples of different polystyrene grades were received from Dow, Fina, Nova, BASF, Huntsman, etc. These samples had been selected based on price, availability and melt flow rate for ease of extrusion. These materials were cast into cylinders up to 3 inches long. Transmittance measurements were performed using a Hewlett-Packard 8452 spectrophotometer. The materials tested were compared to cast samples of Dow 262 pellets. Often polystyrene contained additives that absorbed at long wavelengths such as the Fina pellets illustrated in Figure 2. This absorption would diminish the amount of light produced by the dopants that need to be added to make a particular scintillator. Other features observed were long absorption tails and haziness caused by additives and debris in the pellets. There were a couple of materials that were repeatedly tested and showed high clarity and lack of absorptions at long wavelengths. Dow Styron 663 (Dow 663) was chosen as the general purpose polystyrene grade to conduct our extrusion studies. Its price ranges from $1.3/Kg to $1.7/Kg depending, among other things, on the quantity ordered.
A variety of organic fluorescent compounds can be used as primary and secondary dopants in plastic scintillator applications. The primary dopant is commonly used at a 1–1.5% (by weight) concentration. The secondary dopant or wavelength shifter in scintillator is utilized at a concentration of 0.01–0.03% (by weight). The goal was to prepare a blue-emitting scintillator that could be read out with a green WLS fiber. Most green fibers are doped with K27, and thus the emission of the scintillator would have to match as closely as possible the absorption of K27 in the fiber. The selection of dopants was based on these spectroscopic requirements as well as price and ease of manufacture. para-Terphenyl ($200–225/Kg) and PPO ($100–160/Kg) were considered as primary dopants. POPOP and bis-MSB (both at $0.5–1/g) were tested as secondary dopants. The final choice for the extruded plastic scintillator was PPO and POPOP in Dow 663. Figure 3 plots the transmittance spectrum of an extruded scintillator sample of this type.
### B Manufacturing Techniques
The majority of the extruded scintillator has been prepared using Method 1, a two-step process conducted at two separate facilities. Figure 4 depicts the flow chart for this method. The first step was carried out at a company whose function was to add the dopants to the polystyrene pellets. (In the plastics industry, this trade is typically referred to as a color or compounding business.) Prior to the coloring run, polystyrene pellets were purged for several days with an inert gas, generally argon, to remove dissolved oxygen and moisture. The coloring step was a batch process where polystyrene pellets and dopants were tumble-mixed for 15 min. and then added to the hopper of an extruder. Each batch prepared 45 Kg of mixture. A silicone oil was used as a coating aid to achieve better distribution of the dopants on the pellet surface. An argon flow was also added to the hopper to minimize the presence of oxygen in the extruder. The die at the extruder head generated several strings of material which were cut yielding the scintillating pellets. At the end, the scintillating pellets collected in many containers during the run were blended to homogenize the material. These pellets could now be used to produce plastic scintillators through several procedures — namely extrusion, casting and injection molding. In this case, the scintillating pellets were taken to an extrusion company to extrude the desired scintillator profile.
Using this batch process, there is also the possibility of directly extruding the scintillator profile and thus by-passing the pelletizing step. This is the route that the MINOS collaboration has chosen to investigate. This variation of Method 1 can be less expensive since all the work is done in one facility. It also reduces the heat history of the product by removing its exposure to another high temperature cycle and minimizes the chance of optical degradation. The drawback is in the batch work since the polymer and the dopants still need to be weighed for each mixture, and in the tumble-mixing step which is susceptible to contamination and prone to errors.
An alternative to these operations is given by Method 2 which is summarized in Figure 5. Method 2 is a continuous in-line coloring and extrusion process. It emphasizes the most direct pathway from polystyrene pellets to the scintillator profile with the least handling of raw materials. In this situation, the purged polystyrene pellets and dopants are metered into the extruder at the correct rate for the required composition of the scintillator. An argon flow is still used at the hopper. Coating agents are no longer needed. The appropriate die profile gives rise to the extruded scintillator form of choice. If the die can produce strands, these can also be pelletized and the scintillating pellets used in other processes. Method 2 has been tested and produces plastic scintillator of high quality and homogeneity. Although it is a simple concept, the equipment needed to accurately meter small quantities of powders such as the dopants and to achieve a good distribution of the powders in the molten polymer is not widely available. The difficulty in testing this process was finding a facility with the adequate instrumentation.
## III Light Yield of Extruded Plastic Scintillators
Light yield studies have been performed on many samples of extruded plastic scintillators. Although a variety of shapes and sizes is available, the measurements have mostly been carried out on 11.5-cm long rectangular extrusions (1cm x 2cm) with a hole in the middle for a green WLS fiber. Each extrusion is tightly wrapped in Tyvek for this test. The WLS fiber utilized is BC91A (0.835 mm diameter, 1.5 m long) with one mirrored end. The light yield test setup uses an electron spectrometer with a <sup>106</sup>Ru source whose 3-MeV beam is momentum selected. There is a small trigger counter in front of the extruded sample. The photomultiplier tube used is a Hamamatsu R2165 which has excellent single photo-electron resolution. The fiber is held at a fixed position from the PMT surface to minimize fluctuations among measurements. The light yield is determined from the following calculation:
$`LightYield={\displaystyle \frac{Mean-Pedestal}{Gain}}`$
where the mean and the gain are defined as:
$`Mean={\displaystyle \frac{\sum _i^nv_ix_i}{\sum _i^nv_i}}`$
$`Gain=FirstPeak-Pedestal`$
where v<sub>i</sub> is the number of entries for each ADC value, x<sub>i</sub>. The data are fitted to locate the position of the first and second peaks, and the pedestal. Figure 6 presents the light yield distribution of an extruded sample and the fit for the first and second electron peaks.
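As an illustration of this calculation, the sketch below applies the pedestal-subtracted mean and single-photoelectron gain to a toy ADC spectrum; the channel values and peak positions are hypothetical inputs, not measured data, and in practice the pedestal and first-peak positions would come from the fits described above.

```python
# Sketch of the light-yield calculation described in the text, applied to a
# toy ADC spectrum. Pedestal and single-photoelectron peak positions are
# hypothetical; in the real analysis they are obtained from fits.

def light_yield(adc_values, entries, pedestal, first_peak):
    """Photoelectron yield from an ADC histogram.

    adc_values : ADC channel x_i of each histogram bin
    entries    : number of entries v_i in each bin
    pedestal   : fitted pedestal position (ADC counts)
    first_peak : fitted single-photoelectron peak position (ADC counts)
    """
    mean = sum(v * x for v, x in zip(entries, adc_values)) / sum(entries)
    gain = first_peak - pedestal          # ADC counts per photoelectron
    return (mean - pedestal) / gain

# Toy spectrum: pedestal at channel 50, single-p.e. peak at channel 100,
# i.e. a gain of 50 ADC counts per photoelectron.
adc = [50, 100, 150, 200]
counts = [10, 40, 30, 20]
print(light_yield(adc, counts, pedestal=50.0, first_peak=100.0))  # → 1.6
```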
The results from a series of light yield measurements are listed in Table II. RDN 262 extrusions were prepared by the two-step batch process (Method 1) using Dow 262 optical grade pellets. Leistritz 262 and 262P samples were produced by the continuous procedure (Method 2) using Dow 262 polymer. Leistritz 663 samples were also prepared by Method 2 but used general purpose polystyrene pellets (Dow 663). Although the samples are from different runs, their light output is similar. The Leistritz 262 samples show a slightly lower light yield but their profile is smaller than that of the remaining samples. These samples were collected early in the extrusion run when the profile was not completely to specification. These results indicate that there is no major difference in light yield between Method 1 and Method 2. This test proves that the continuous in-line coloring and extrusion process (Method 2) yields a homogeneous part with the right concentration of dopants. This aspect is less of a concern in Method 1 since the first step includes batch tumble-mixing and post-blending of the scintillating pellets before the scintillating profile is extruded. In addition, these numbers confirm that Dow 663 (general purpose polystyrene pellets) can replace the optical grade pellets initially utilized. More measurements are underway to compare these extrusions to samples of commercial plastic scintillator sheets which have been cut to the same profile.
TABLE II. Light yield of extruded plastic scintillator samples.
| Scintillator | No. of samples | Light Yield | St. Dev. | Characteristics |
| --- | --- | --- | --- | --- |
| RDN 262 | 30 | 2.05 | 0.09 | Dow 262, Method 1 |
| Leistritz 262 | 10 | 1.81 | 0.14 | Dow 262, Method 2 |
| Leistritz 262P | 10 | 2.02 | 0.10 | Dow 262, Method 2 |
| Leistritz 663 | 15 | 2.22 | 0.07 | Dow 663, Method 2 |
## IV Conclusions
Research on extruded plastic scintillator was driven by the high cost of cast plastic scintillator. The goal was to use commercially available polystyrene pellets, in particular from a general purpose grade, and standard extrusion equipment to lower the price of plastic scintillators. Extruded plastic scintillator strips have been manufactured and tested. The estimated price for extruded scintillator ranges from $3.5/Kg to $6/Kg. About 50% of the cost is due to raw materials with the remaining 50% due to processing. The results indicate that the extruded scintillator profile with a WLS fiber as readout is a valid system for scintillation detectors. The MINOS experiment will build a very large detector using this technology. The D0 experiment is assembling the Central and Forward Preshower detectors using extruded scintillating triangular strips.
# Implications of Precision Electroweak Measurements for the Standard Model Higgs Boson<sup>1</sup><sup>1</sup>1Talk presented at the 17th International Workshop on Weak Interactions and Neutrinos (WIN99), Cape Town, South Africa, January 24–30, 1999.
## 1 Introduction
Besides the recent high precision measurements of the $`W`$ mass , $`M_W`$, the most important input into precision tests of electroweak theory continues to come from the $`Z`$ factories LEP 1 and SLC . The vanguard of the physics program at LEP 1 is the analysis of the $`Z`$ lineshape. Its parameters are the $`Z`$ mass, $`M_Z`$, the total $`Z`$ width, $`\mathrm{\Gamma }_Z`$, the hadronic peak cross section, $`\sigma _{\mathrm{had}}`$, and the ratios of hadronic to leptonic decay widths, $`R_{\mathrm{}}=\frac{\mathrm{\Gamma }(\mathrm{had})}{\mathrm{\Gamma }(\mathrm{}^+\mathrm{}^{})}`$, where $`\mathrm{}=e`$, $`\mu `$, or $`\tau `$. They are determined in a common fit with the leptonic forward-backward (FB) asymmetries, $`A_{FB}(\mathrm{})=\frac{3}{4}A_eA_{\mathrm{}}`$. With $`f`$ denoting the fermion index,
$$A_f=\frac{2v_fa_f}{v_f^2+a_f^2}$$
(1)
is defined in terms of the vector ($`v_f=I_{3,f}-2Q_f\mathrm{sin}^2\theta _f^{\mathrm{eff}}`$) and axial-vector ($`a_f=I_{3,f}`$) $`Zf\overline{f}`$ couplings; $`Q_f`$ and $`I_{3,f}`$ are the electric charge and third component of isospin, respectively, and $`\mathrm{sin}^2\theta _f^{\mathrm{eff}}\equiv \overline{s}_f^2`$ is an effective mixing angle.
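As a numerical illustration of Eq. (1), the sketch below evaluates $`A_f`$ for charged leptons and for $`b`$ quarks; the effective mixing angle inputs (0.2315 for leptons, 0.233 for $`b`$ quarks) are assumed, representative values, not fit results.

```python
# Evaluate A_f = 2 v_f a_f / (v_f^2 + a_f^2) from the couplings in Eq. (1).
# The sin^2(theta_eff) inputs are illustrative assumptions, not fit values.

def asymmetry(I3, Q, sin2theta_eff):
    v = I3 - 2.0 * Q * sin2theta_eff  # vector coupling v_f
    a = I3                            # axial-vector coupling a_f
    return 2.0 * v * a / (v * v + a * a)

# Charged leptons: I3 = -1/2, Q = -1; b quark: I3 = -1/2, Q = -1/3
A_lepton = asymmetry(I3=-0.5, Q=-1.0, sin2theta_eff=0.2315)
A_b = asymmetry(I3=-0.5, Q=-1.0 / 3.0, sin2theta_eff=0.233)
print(round(A_lepton, 4), round(A_b, 3))
```

With these inputs $`A_{\mathrm{}}`$ comes out near 0.147 and $`A_b`$ near 0.935, illustrating why the $`b`$-quark asymmetry is far less sensitive to the mixing angle than the leptonic one.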
The polarization of the electron beam at the SLC allows for competitive and complementary measurements with a much smaller number of $`Z`$’s than at LEP. In particular, the left-right (LR) cross section asymmetry, $`A_{LR}=A_e`$, represents the most precise determination of the weak mixing angle by a single experiment (SLD). Mixed FB-LR asymmetries, $`A_{LR}^{FB}(f)=\frac{3}{4}A_f`$, single out the final state coupling of the $`Z`$ boson.
For several years there has been an experimental discrepancy at the $`2\sigma `$ level between $`A_{\mathrm{}}`$ from LEP and the SLC. With the 1997/98 high statistics run at the SLC, and a revised value for the FB asymmetry of the $`\tau `$ polarization, $`𝒫_\tau ^{FB}`$, the two determinations are now consistent with each other,
$$\begin{array}{c}A_{\mathrm{}}(\mathrm{LEP})=0.1470\pm 0.0027,\hfill \\ A_{\mathrm{}}(\mathrm{SLD})=0.1503\pm 0.0023.\hfill \end{array}$$
(2)
The LEP value is from $`A_{FB}(\mathrm{})`$, $`𝒫_\tau `$, and $`𝒫_\tau ^{FB}`$, while the SLD value is from $`A_{LR}`$ and $`A_{LR}^{FB}(\mathrm{})`$. The data is consistent with lepton universality, which is assumed here. There remains a $`2.5\sigma `$ discrepancy between the two most precise determinations of $`\overline{s}_{\mathrm{}}^2`$, i.e. $`A_{LR}`$ and $`A_{FB}(b)`$ (assuming no new physics in $`A_b`$).
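The combined value $`A_{\mathrm{}}=0.1489\pm 0.0018`$ used in the next paragraph is a standard inverse-variance weighted average of the two numbers in Eq. (2); a minimal sketch:

```python
# Inverse-variance weighted average of the two A_l determinations in Eq. (2).
import math

def weighted_average(values, errors):
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    error = math.sqrt(1.0 / sum(weights))
    return mean, error

mean, err = weighted_average([0.1470, 0.1503], [0.0027, 0.0023])
print(round(mean, 4), round(err, 4))  # → 0.1489 0.0018
```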
Of particular interest are the results on the heavy flavor sector including $`R_q=\frac{\mathrm{\Gamma }(q\overline{q})}{\mathrm{\Gamma }(\mathrm{had})}`$, $`A_{FB}(q)`$, and $`A_{LR}^{FB}(q)`$, with $`q=b`$ or $`c`$. At present, there is some discrepancy in $`A_{LR}^{FB}(b)=\frac{3}{4}A_b`$ and $`A_{FB}(b)=\frac{3}{4}A_eA_b`$, both at the $`2\sigma `$ level. Using the average of Eqs. (2), $`A_{\mathrm{}}=0.1489\pm 0.0018`$, both can be interpreted as measurements of $`A_b`$. From $`A_{FB}(b)`$ one would obtain $`A_b=0.887\pm 0.022`$, and the combination with $`A_{LR}^{FB}(b)=\frac{3}{4}(0.867\pm 0.035)`$ would yield $`A_b=0.881\pm 0.019`$, which is almost $`3\sigma `$ below the SM prediction. Alternatively, one could use $`A_{\mathrm{}}(\mathrm{LEP})`$ above (which is closer to the SM prediction) to determine $`A_b(\mathrm{LEP})=0.898\pm 0.025`$, and $`A_b=0.888\pm 0.020`$ after combination with $`A_{LR}^{FB}(b)`$, i.e., still a $`2.3\sigma `$ discrepancy. An explanation of the 5–6% deviation in $`A_b`$ in terms of new physics in loops would need a 25–30% radiative correction to $`\widehat{\kappa }_b`$, defined by $`\overline{s}_b^2\equiv \widehat{\kappa }_b\mathrm{sin}^2\widehat{\theta }_{\overline{\mathrm{MS}}}(M_Z)`$. Only a new type of physics which couples at the tree level preferentially to the third generation , and which does not contradict $`R_b`$ (including the off-peak measurements by DELPHI ), can conceivably account for a low $`A_b`$. Given this and that none of the observables deviates by $`2\sigma `$ or more, we can presently conclude that there is no compelling evidence for new physics in the precision observables, some of which are listed in Table 1.
## 2 Bayesian Higgs mass inference
The data show a strong preference for a low $`M_H\sim 𝒪(M_Z)`$,
$$M_H=107_{-45}^{+67}\text{ GeV},$$
(3)
where the central value (of the global fit to all precision data, including $`m_t`$) maximizes the likelihood, $`Ne^{-\chi ^2(M_H)/2}`$. Correlations with other parameters, $`\xi ^i`$, are accounted for, since minimization w.r.t. these is understood, $`\chi ^2\to \chi _{\mathrm{min}}^2`$.
Bayesian methods, on the other hand, are based on Bayes theorem ,
$$p(M_H|\mathrm{data})=\frac{p(\mathrm{data}|M_H)p(M_H)}{p(\mathrm{data})},$$
(4)
which must be satisfied once the likelihood, $`p(\mathrm{data}|M_H)`$, and prior distribution, $`p(M_H)`$, are specified. $`p(\mathrm{data})=\int p(\mathrm{data}|M_H)p(M_H)𝑑M_H`$ in the denominator provides for the proper normalization of the posterior distribution on the l.h.s. The prior can contain additional information not included in the likelihood model, or be chosen to be non-informative.
Occasionally, the Bayesian method is criticized for the need of a prior, which would introduce unnecessary subjectivity into the analysis. Indeed, care and good judgement is needed, but the same is true for the likelihood model, which has to be specified in both approaches. Moreover, it is appreciated among Bayesian practitioners, that the explicit presence of the prior can be advantageous: it manifests model assumptions and allows for sensitivity checks. From the theorem (4) it is also clear that the maximum likelihood method corresponds, mathematically, to a particular choice of prior. Thus Bayesian methods differ rather in attitude: by their strong emphasis on the entire posterior distribution and by their first principles setup.
Given extra parameters, $`\xi ^i`$, the distribution function of $`M_H`$ is defined as the marginal distribution, $`p(M_H|\mathrm{data})=\int p(M_H,\xi ^i|\mathrm{data})\prod _id\xi ^i`$. If the posterior factorizes, $`p(M_H,\xi ^i)=p(M_H)p(\xi ^i)`$, the $`\xi ^i`$ dependence can be ignored. If not, but $`p(\xi ^i|M_H)`$ is (approximately) multivariate normal, then
$$\chi ^2(M_H,\xi ^i)=\chi _{\mathrm{min}}^2(M_H)+\frac{1}{2}\frac{\partial ^2\chi ^2(M_H)}{\partial \xi _i\partial \xi _j}(\xi ^i-\xi _{\mathrm{min}}^i(M_H))(\xi ^j-\xi _{\mathrm{min}}^j(M_H)).$$
(5)
The latter applies to our case, where $`\xi ^i=(m_t,\alpha _s,\alpha (M_Z))`$. Integration yields,
$$p(M_H|\mathrm{data})\propto \sqrt{detE}\,e^{-\chi _{\mathrm{min}}^2(M_H)/2},$$
(6)
where the $`\xi ^i`$ error matrix, $`E=(\frac{\partial ^2\chi ^2(M_H)}{\partial \xi _i\partial \xi _j})^{-1}`$, introduces a correction factor with a mild $`M_H`$ dependence. It corresponds to a shift relative to the standard likelihood model, $`\chi ^2(M_H)=\chi _{\mathrm{min}}^2(M_H)+\mathrm{\Delta }\chi ^2(M_H)`$, where
$$\mathrm{\Delta }\chi ^2(M_H)=-\mathrm{ln}\frac{detE(M_H)}{detE(M_Z)}.$$
(7)
For example, $`\mathrm{\Delta }\chi ^2(300\text{ GeV})\approx 0.1`$, which would tighten the $`M_H`$ upper limit by at most a few GeV. At present, we neglect this effect.
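The Gaussian marginalization that produces the $`\sqrt{detE}`$ factor in Eq. (6) can be checked numerically: integrating a two-parameter Gaussian over the nuisance directions yields $`2\pi \sqrt{detE}`$. The error matrix in the sketch below is an arbitrary example, not the actual $`(m_t,\alpha _s,\alpha (M_Z))`$ matrix.

```python
# Numerical check of the Gaussian marginalization step leading to Eq. (6):
# integrating exp(-x^T E^{-1} x / 2) over two nuisance parameters gives
# 2*pi*sqrt(det E). The matrix E here is an arbitrary illustrative example.
import math

E = [[1.0, 0.3], [0.3, 0.5]]                       # example error matrix
det_E = E[0][0] * E[1][1] - E[0][1] * E[1][0]
inv = [[E[1][1] / det_E, -E[0][1] / det_E],
       [-E[1][0] / det_E, E[0][0] / det_E]]        # E^{-1}

h, R = 0.02, 6.0                                   # grid step and half-range
n = int(R / h)
total = 0.0
for i in range(-n, n + 1):
    for j in range(-n, n + 1):
        x, y = i * h, j * h
        q = inv[0][0] * x * x + 2 * inv[0][1] * x * y + inv[1][1] * y * y
        total += math.exp(-0.5 * q) * h * h

analytic = 2 * math.pi * math.sqrt(det_E)
print(total, analytic)  # the two agree to high accuracy
```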
We choose $`p(M_H)`$ as the product of $`M_H^{-1}`$, corresponding to a uniform (non-informative) distribution in $`\mathrm{log}M_H`$, times the exclusion curve from LEP 2. This curve is from Higgs searches at center of mass energies up to 183 GeV. We find the 90 (95, 99)% confidence upper limits,
$$M_H<220\text{ (255, 335) GeV}.$$
(8)
Theory uncertainties from uncalculated higher orders increase the 95% CL upper limit by about 5 GeV. These limits are robust within the SM, but we caution that the results on $`M_H`$ are strongly correlated with certain new physics parameters .
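The mechanics of extracting one-sided limits like those in Eq. (8) can be sketched as follows; the Gaussian $`\chi ^2(M_H)`$ curve (width 0.20 in $`\mathrm{log}_{10}M_H`$) and the grid are toy assumptions standing in for the real global-fit likelihood and the LEP 2 exclusion curve.

```python
# Sketch of one-sided credible limits from a posterior tabulated on a grid
# in log(M_H), as in Eq. (4). The chi^2(M_H) curve is a toy stand-in.
import math

grid = [10 ** (1.5 + 0.001 * i) for i in range(2000)]   # ~32 GeV to ~3 TeV

def chi2(mh):                                           # toy likelihood model
    return ((math.log10(mh) - math.log10(107.0)) / 0.20) ** 2

# Posterior weight per grid point: on a log-uniform grid the 1/M_H prior
# cancels against the measure dM_H, leaving just the likelihood.
post = [math.exp(-0.5 * chi2(m)) for m in grid]
norm = sum(post)

cdf, upper = 0.0, {}
for m, p in zip(grid, post):
    cdf += p / norm
    for cl in (0.90, 0.95, 0.99):
        if cl not in upper and cdf >= cl:
            upper[cl] = m                               # one-sided upper limit

print(upper)
```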
The one-sided confidence interval (8) is not an exclusion limit. For example, the 95% upper limit of the standard uniform distribution, $`x[0,1]`$, is at $`x=0.95`$, but all values of $`x`$ are equally likely, and $`x>0.95`$ cannot be excluded. If there is a discrete set of competing hypotheses, $`H_i`$, one can use Bayes factors, $`p(\mathrm{data}|H_i)/p(\mathrm{data}|H_j)`$, for comparison. For example, LEP 2 rejects a standard Higgs boson with $`M_H<90`$ GeV at the 95% CL, because
$$\frac{p(\mathrm{data}|M_H=M_0)}{p(\mathrm{data}|M_H\gg M_0)}<0.05\text{ for all }M_0<90\text{ GeV}.$$
(9)
On the other hand, the probability for $`M_H<90`$ GeV is only $`5\times 10^{-4}`$.
One could similarly note that $`p(M_H=M_0)<0.05p(M_H=107\text{ GeV})`$ for $`M_0>334`$ GeV; but the (arbitrary) choice of the best fit $`M_H`$ value as the reference hypothesis is hardly justifiable. This affirms that variables continuously connecting a set of hypotheses should be treated in a fully Bayesian analysis.
## Acknowledgement
I would like to thank the organizers of WIN 99 for a very pleasant and memorable meeting and Paul Langacker for collaboration.
## References
# Geometric Distortion of the Correlation function of Lyman-break Galaxies
## 1 Introduction
Alcock & Paczyński (1979) suggested the possibility of using the clustering statistics of galaxies in redshift space to constrain the global geometry in the universe. The basic idea is that, since clusters of galaxies should not be preferentially aligned along any direction relative to a fixed observer, their average shape ought to be spherically symmetric. Therefore, if galaxies were following the Hubble expansion of the universe, without any peculiar velocities, the average extent of clusters in radial velocity $`v_r`$ (measured from redshifts) and their angular size $`\psi `$ are related to the physical size of the cluster $`L`$ by $`v_r=H(z)L`$, and $`\psi =L/D(z)`$, respectively. Here, $`H(z)`$ and $`D(z)`$ are the Hubble constant and the angular diameter distance at the redshift $`z`$ where the clusters are observed. The condition that clusters are spherical on average can then yield the value of $`H(z)D(z)`$. Of course, the effect of peculiar velocities must be included in order to apply this method, since any clustering induced by gravity will generally introduce peculiar velocities (Kaiser 1987) that will cause a distortion of similar or greater magnitude than the differences between cosmological models.
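The quantity constrained by this test is the product $`H(z)D(z)`$. A minimal numerical sketch for a flat universe, with illustrative values of $`\mathrm{\Omega }_0`$ (not fits):

```python
# Alcock-Paczynski factor H(z) * D_A(z) / c for a flat universe, computed
# with simple trapezoidal integration; Omega_m values are illustrative.
import math

def ap_factor(omega_m, z, steps=10000):
    """H(z) * D_A(z) / c, flat universe with Omega_Lambda = 1 - Omega_m."""
    E = lambda zz: math.sqrt(omega_m * (1 + zz) ** 3 + (1 - omega_m))
    dz = z / steps
    # comoving distance in units of c/H0 (trapezoid rule)
    dc = sum(0.5 * (1 / E(i * dz) + 1 / E((i + 1) * dz)) * dz
             for i in range(steps))
    da = dc / (1 + z)          # angular diameter distance (flat case)
    return E(z) * da           # the factors of H0 and c cancel

print(ap_factor(1.0, 3.0))    # Einstein-de Sitter: analytic value is 2.0
print(ap_factor(0.3, 3.0))    # lower for a Lambda-dominated universe
```

The difference between the two outputs is exactly the geometric signal the method aims to measure.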
Recently, the rate at which galaxies at high redshift are being identified has dramatically increased thanks to the Lyman limit technique, using the fact that the reddest objects among faint galaxies will often be galaxies at the redshift where the Lyman limit wavelength is between the two bands used to measure the color (Guhathakurta et al. 1990; Steidel & Hamilton 1993; Steidel et al. 1996). For example, very red objects in $`U-B`$ are likely to be galaxies at redshift $`z\sim 3`$.
The galaxy correlation function, $`\xi (𝐫)`$, which measures the probability in excess of a random distribution of finding a galaxy at a real space separation vector $`𝐫`$ from another galaxy, has been measured for the first time for the population of Lyman-break galaxies (Giavalisco et al. 1998). The correlation length, defined to be the separation at which the excess probability is equal to that of a random distribution, has been estimated to be $`2.1h^{-1}`$ Mpc (for an $`\mathrm{\Omega }_0=1`$ universe; the symbol $`\mathrm{\Omega }`$ is used here for the ratio of the density of matter in the universe to the critical density, the subscript $`0`$ indicates redshift zero), about half of the correlation length of galaxies at $`z=0`$. The bias, defined as the ratio of the correlation function of galaxies to that of matter at a fixed separation, is estimated to be large, $`\sim 4`$ for an $`\mathrm{\Omega }_0=1`$ universe and smaller for universes with smaller dark matter content (Giavalisco et al. 1998). Count-in-cells analysis of the Lyman-break sample used in conjunction with a Press-Schechter mass function for the halos also indicates that these galaxies are likely to reside in rare, massive halos that existed at the time (Adelberger et al. 1998; Steidel et al. 1998; see also Coles et al. 1998 and Wechsler et al. 1998 for models of clustering of Lyman-break galaxies). These rare halos are expected to be much more clustered than the underlying matter distribution, as originally suggested by Kaiser (1984) (see also Mo & White 1996 for analytic models of bias as a function of the mass of halos). Both these analyses indicate that the population of Lyman-break galaxies is likely to be highly biased with respect to the underlying matter distribution.
In this paper we investigate the feasibility of using the distortion of the redshift space correlation function of this population of galaxies to measure cosmological parameters. This possibility has been suggested before by Matsubara and Suto (1996), who proposed using the ratio of the value of the correlation function parallel to the line of sight to its value perpendicular to the line of sight at a fixed separation as a measure of the distortion. In this paper we express the angular dependence of the cosmological redshift space distortion of the correlation function as a multipole expansion. We are also specifically interested in applying this method to the highly biased, high redshift population of Lyman-break galaxies. Ballinger et al. (1996) have investigated the use of the full functional form of the redshift space power spectrum to separately measure the peculiar velocity effects and cosmological geometry effects. In essence this reduces to using both the quadrupolar and the octopolar distortion of the redshift space power spectrum to simultaneously constrain the cosmological constant as well as the parameter $`\beta =\mathrm{\Omega }^{0.6}/b`$, where $`b`$ is the linear theory bias. In this paper we fix the bias of the galaxy distribution by using the constraints on the matter power spectrum at redshift zero derived from observations of cluster abundances. On large scales the power spectrum of matter at any redshift is related to the power spectrum at redshift zero through the linear growth factor. We can then use the lowest order quadrupolar distortion of the power spectrum alone to constrain other cosmological parameters such as the cosmological constant. Ballinger et al. (1996) also estimated the errors involved in such a survey, although in Fourier space. We estimate the errors on cosmological parameters directly from the correlation function.
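The linear growth factor mentioned above can be evaluated from the standard growth integral for a flat universe, $`D(a)\propto E(a)\int _0^ada^{}/[a^{}E(a^{})]^3`$; the sketch below, with illustrative values of $`\mathrm{\Omega }_0`$, computes the ratio $`D(z=3)/D(z=0)`$ used to relate the power spectrum at the two epochs.

```python
# Linear growth factor ratio D(z)/D(0) for a flat universe, from the
# standard growth integral (midpoint rule). Omega_m values are illustrative.
import math

def growth_ratio(omega_m, z, steps=20000):
    """D(z)/D(0) with D(a) proportional to E(a) * integral_0^a da'/(a'E(a'))^3."""
    E = lambda a: math.sqrt(omega_m / a**3 + (1 - omega_m))
    def D(a):
        da = a / steps
        s = sum(((i + 0.5) * da * E((i + 0.5) * da)) ** -3 * da
                for i in range(steps))
        return E(a) * s          # overall normalization cancels in the ratio
    return D(1.0 / (1 + z)) / D(1.0)

print(growth_ratio(1.0, 3.0))   # Einstein-de Sitter: D ~ a, so ratio = 0.25
print(growth_ratio(0.3, 3.0))   # larger ratio: late-time growth suppressed
```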
On sufficiently large scales, where density fluctuations are in the linear regime, the angular form of the redshift space correlation function depends only on two parameters: the cosmological term $`H(z)D(z)`$, and the bias of the galaxy population. This paper presents a general method of estimating these two parameters from the basic data of a galaxy redshift survey, and evaluates the size of the survey that is necessary to determine the two parameters (or a combination of them, given other constraints from the galaxy distribution at the present time) with a given accuracy. We shall analyze the sensitivity of the method to a variety of cosmological models, placing special emphasis on models that contain a cosmological constant or a new component of the energy density of the universe with negative pressure christened Quintessence (e.g. Kodama & Sasaki 1984; Peebles & Ratra 1988; Caldwell et al. 1998), given the recent evidence from the luminosity distances to Type Ia supernovae (Garnavich et al. 1998; Perlmutter et al. 1997; Riess et al. 1998) suggesting an accelerating universe. As pointed out by Alcock & Paczyński (1979), the quantity $`H(z)D(z)`$ is more sensitive to this type of component than to space curvature.
The paper is arranged as follows. In §2 we describe the effect of geometric distortion. In §3 we introduce the method for measuring the effects of cosmological geometry and peculiar velocity effects on the redshift space correlation function. In §4 we present predictions for a variety of cosmological models, and in §5 we estimate the errors in the observational determination of the redshift space correlation function contributed by shot noise and by the finite size of the observed volume. Our discussion is given in §6.
## 2 Method
A redshift survey consists of measuring the radial velocity and angular position of every galaxy included in the sample. We denote by $`𝐧`$ the unit vector along the line of sight, which, if the survey does not extend over a very large area, can be considered constant for all galaxies. Given a pair of galaxies, let $`v`$ be the difference between their radial velocities, and $`\psi `$ their angular separation. We define their vector separation in redshift space $`𝐰`$ as (see Figure 1)
$`𝐰\cdot 𝐧`$ $`=`$ $`v,`$
$`|𝐰-(𝐰\cdot 𝐧)𝐧|`$ $`=`$ $`H(z)D(z)\psi ,`$
$`w^2`$ $`=`$ $`v^2+\left[H(z)D(z)\psi \right]^2.`$ (1)
where $`H(z)`$ and $`D(z)`$ are the Hubble constant and the angular diameter distance at the mean redshift of the survey, $`z`$. We also define $`\mu `$, for future use, as the cosine of the angle between the vector separation between two galaxies and the line of sight:
$$\mu =\frac{v}{w}$$
(2)
The quantity $`H(z)D(z)`$ contains the dependence on the cosmological model. If we could measure the correlation function of galaxies directly in real space (measuring distances to galaxies instead of radial velocities), then the simple requirement that the correlation function should be isotropic would yield the value of $`H(z)D(z)`$. However, peculiar velocities should obviously introduce an anisotropy in the correlation function, and their effect needs to be included.
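The construction of equations (1) and (2) maps the pair observables $`(v,\psi )`$ to redshift space coordinates $`(w,\mu )`$; a minimal sketch (the helper name and the numbers are illustrative, not survey data; $`H(z)D(z)`$ is supplied in velocity units so that $`w`$ comes out in km/s):

```python
import math

def redshift_space_separation(v, psi, HD):
    """Map a galaxy pair's observables to redshift-space coordinates.

    v   : radial-velocity difference of the pair (km/s)
    psi : angular separation on the sky (radians)
    HD  : H(z)*D(z) at the survey's mean redshift (km/s per radian)
    Returns (w, mu), the separation and the cosine of the angle
    between the separation vector and the line of sight.
    """
    transverse = HD * psi          # transverse separation in velocity units
    w = math.hypot(v, transverse)  # w^2 = v^2 + [H(z)D(z) psi]^2
    mu = v / w
    return w, mu

# a pair with a 300 km/s velocity difference whose transverse
# separation H(z)D(z)*psi is 400 km/s (illustrative numbers):
w, mu = redshift_space_separation(300.0, 1.0, 400.0)
```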
### 2.1 Model Dependence of $`H(z)D(z)`$
Figures 2 and 3 show the ratio $`H(z)D(z)/H_s(z)D_s(z)`$ for various models, where $`H_s(z)D_s(z)`$ is the value of $`H(z)D(z)`$ for a “fiducial” model, here adopted to be the Einstein-de Sitter model, with $`\mathrm{\Omega }_0=1`$ in the form of pressureless matter. The symbol $`\mathrm{\Omega }_0`$ is used here for the present ratio of the density of matter in the universe to the critical density.
Two of the models shown in Figures 2 and 3 are the open model (with space curvature but no negative pressure components) and the cosmological constant (or $`\mathrm{\Lambda }`$) model (with no space curvature and a component with pressure $`p=-\rho c^2`$). The third model shown is a Quintessence or Q model, with no spatial curvature and a component with equation of state $`p=-\rho c^2/3`$.
The quantity $`H(z)D(z)`$ is much more sensitive to $`\mathrm{\Lambda }`$ than to space curvature, and is also sensitive to the Q model, with a different redshift dependence. In general, a component of the energy density in the universe with negative pressure can have any equation of state, but the case $`p=-\rho c^2/3`$ implies an expansion mimicking exactly that of an open universe. Therefore, $`H(z)`$ in our Q model is exactly the same as in the open model. However, whereas in the open model the negative space curvature increases the angular diameter distance compared to the Einstein-de Sitter model, cancelling almost exactly the decrease in $`H(z)`$, the flat geometry of the Q model results in smaller angular diameter distances, so $`H(z)D(z)`$ is smaller than in the Einstein-de Sitter model owing to the decrease of $`H(z)`$.
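This model dependence is easy to reproduce numerically. The sketch below integrates the comoving distance for each model and forms $`H(z)D(z)`$ in units of $`c`$ at $`z=3`$; the function names and the simple trapezoidal quadrature are our own choices, not taken from the figures:

```python
import math

def E(z, om, ok, oq, w):
    """Dimensionless Hubble rate H(z)/H0 for pressureless matter,
    curvature, and a constant-w component (w = -1: cosmological
    constant; w = -1/3: the Q model discussed in the text)."""
    return math.sqrt(om*(1+z)**3 + ok*(1+z)**2 + oq*(1+z)**(3*(1+w)))

def HD(z, om, ok, oq, w, n=2000):
    """H(z)*D_A(z) in units of c (H0 = 1), via trapezoidal integration
    of the comoving distance and the open/flat curvature mapping."""
    h = z / n
    f = lambda zz: 1.0 / E(zz, om, ok, oq, w)
    dc = h * (0.5*f(0.0) + sum(f(i*h) for i in range(1, n)) + 0.5*f(z))
    chi = math.sinh(math.sqrt(ok)*dc)/math.sqrt(ok) if ok > 0 else dc
    return E(z, om, ok, oq, w) * chi / (1.0 + z)

eds    = HD(3, 1.0, 0.0, 0.0, -1.0)          # Einstein-de Sitter fiducial
r_open = HD(3, 0.3, 0.7, 0.0, -1.0) / eds    # open: stays near unity
r_lam  = HD(3, 0.3, 0.0, 0.7, -1.0) / eds    # Lambda: well below unity
r_q    = HD(3, 0.3, 0.0, 0.7, -1/3) / eds    # Q model: also below unity
```

The open-model ratio stays within a few percent of unity (the near-cancellation noted above), while the $`\mathrm{\Lambda }`$ and Q models fall well below it.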
It is useful to note at this point that in order to obtain useful constraints on cosmological models, $`H(z)D(z)`$ must be measured to an accuracy better than $`10\%`$. In order to distinguish between a cosmological constant and a Q model, $`H(z)D(z)`$ must of course be measured at several redshifts with even higher accuracy. In practice, we can expect that any constraints obtained from measuring $`H(z)D(z)`$ should be combined with other knowledge obtained, for example, from the luminosity distances to Type Ia supernovae.
## 3 Effect of peculiar velocities on the redshift space correlation function
For a given value of $`H(z)D(z)`$ the effect of peculiar velocities on the shape of the redshift space correlation function is well described in the literature (e.g. McGill 1990; Hamilton 1992; Fisher 1995), and the redshift space correlation function, $`\stackrel{~}{\xi }(𝐰)`$, is given by:
$`\stackrel{~}{\xi }(𝐰)`$ $`=`$ $`{\displaystyle \underset{l=0,2,4}{\sum }}D_l(\beta ,w,z)P_l(\mu ),`$ (3)
where,
$`\beta \equiv {\displaystyle \frac{\mathrm{\Omega }(z)^{0.6}}{b(z)}},`$
where $`b(z)`$ is the bias parameter for the class of objects under survey and $`\mathrm{\Omega }(z)`$ is the ratio of the density of matter to the critical density at redshift $`z`$. The coefficients of the expansion in Legendre polynomials, $`D_l`$, can be expressed as:
$`D_l(\beta ,w,z)`$ $`=`$ $`(-1)^{l/2}A_l(\beta )\xi _l(w,z),`$ (4)
where
$`A_0`$ $`=`$ $`\left(1+{\displaystyle \frac{2}{3}}\beta +{\displaystyle \frac{1}{5}}\beta ^2\right),`$
$`A_2`$ $`=`$ $`\left({\displaystyle \frac{4}{3}}\beta +{\displaystyle \frac{4}{7}}\beta ^2\right),`$
$`A_4`$ $`=`$ $`\left({\displaystyle \frac{8}{35}}\beta ^2\right),`$
and
$`\xi _l(w,z)`$ $`=`$ $`{\displaystyle \frac{b(z)^2}{2\pi ^2}}{\displaystyle \int 𝑑kk^2P(k,z)j_l(kw)},`$ (5)
and $`j_l`$ is the $`l`$th-order spherical Bessel function. The function $`P(k,z)`$ is the linear matter power spectrum at redshift $`z`$, expressed in terms of the wavenumber $`k`$ in velocity space.
Note that $`\xi _0(w,z)`$ is proportional to the real space matter correlation function at redshift $`z`$. Hence, $`D_0`$ is equal to the real space two-point correlation function for this class of objects, apart from the factor $`A_0[\beta (z)]`$. We also note that the $`D_2`$ coefficient is negative, implying a squashing of the contours of the correlation function along the line of sight, as expected from the peculiar velocities of large-scale infall.
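The angular moments $`A_l`$ of equation (4) are the standard linear-theory (Kaiser 1987) coefficients and are trivial to evaluate; a minimal sketch, using the $`\mathrm{\Omega }_0=1`$, $`b=4`$ values quoted later in §4 as an illustration:

```python
def beta(omega_z, b):
    """Linear redshift-distortion parameter beta = Omega(z)^0.6 / b(z)."""
    return omega_z**0.6 / b

def kaiser_moments(beta_):
    """Angular moments A_0, A_2, A_4 of the linear-theory distortion
    (the coefficients multiplying xi_l in equation (4))."""
    A0 = 1 + (2/3)*beta_ + (1/5)*beta_**2
    A2 = (4/3)*beta_ + (4/7)*beta_**2
    A4 = (8/35)*beta_**2
    return A0, A2, A4

# Omega = 1 at z = 3 and bias b = 4 (the values quoted in Section 4):
b_ = beta(1.0, 4.0)          # beta = 0.25
A0, A2, A4 = kaiser_moments(b_)
```

For $`\beta =0.25`$ the quadrupole moment $`A_2`$ is already about $`30\%`$ of $`A_0`$, which is why the anisotropy remains measurable despite the large bias.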
Throughout this paper we use the simple model of linear, local, constant biasing of galaxies, i.e. the overdensity in the number of galaxies, $`\delta _g(\stackrel{}{w})`$, is given by $`b\times \delta _m(\stackrel{}{w})`$, where $`\delta _m(\stackrel{}{w})`$ is the overdensity in matter and $`b`$ the bias. For general deterministic local bias models, this is valid in linear theory where $`|\delta _g|<1`$ (for $`b>1`$ and $`\delta _m\to -1`$, $`\delta _g<-1`$ is unphysical) (Gaztañaga & Baugh 1998). Hence, our results are likely to be valid on large scales where the correlation function is smaller than one. In reality, biasing is not easily modeled since it depends on the complex process of galaxy formation, which is poorly understood. Several alternative models of galaxy biasing have been suggested, including non-local biasing mechanisms (Babul & White 1991; Bower et al. 1993) and stochastic biasing (Dekel & Lahav 1998; Tegmark & Peebles 1998). However, for stochastic (local) models, on large scales, the bias (the ratio of the correlation function of galaxies to that of matter) will be independent of scale (Scherrer & Weinberg 1998), as in the case of a linear, local, constant biasing scheme, although the variance in the measured correlation function will be larger for such models. On the other hand, non-local models of galaxy biasing, in which the efficiency of galaxy formation is modulated coherently over large scales, result in a scale-dependent bias. In the absence of a well motivated model for bias, we have assumed the simplest scale-independent model; it is valid only on large scales and is not generally valid for non-local biasing models. We also mention that we have taken only the linear infall velocities into account in calculating the redshift space correlation function (see Equation 3). On small scales non-linear velocity effects (‘Fingers of God’) will also be important (e.g. see Fisher et al. 1994 for the redshift space correlation function of IRAS galaxies).
### 3.1 Effect of geometric distortion
In order to test the magnitude of the geometric distortion, we calculate the anisotropy introduced in the correlation function by varying $`H(z)D(z)`$ about its fiducial value, $`H_s(z)D_s(z)`$. Let the product $`H(z)D(z)`$ for any other model be given by
$$H(z)D(z)=H_s(z)D_s(z)\sqrt{1+\alpha (z)},$$
(6)
where $`\alpha (z)`$ is defined as the geometric distortion parameter. Then, using equations (1) and (2) we have,
$`w^2`$ $`=`$ $`w_s^2\eta ^2(\alpha ,\mu _s),`$
$`\mu ^2`$ $`=`$ $`{\displaystyle \frac{\mu _s^2}{\eta ^2(\alpha ,\mu _s)}},`$ (7)
where
$$\eta ^2(\alpha ,\mu _s)=1+\alpha (1\mu _s^2).$$
(8)
We can now express equation (3) in terms of the variables $`w_s,\mu _s`$ in the fiducial model:
$$\stackrel{~}{\xi }(𝐰)=\underset{l=0,2,4}{\sum }D_l(\beta ,w_s\eta ,z)P_l(\frac{\mu _s}{\eta }).$$
(9)
Rewriting this as a series in $`P_l(\mu _s)`$,
$$\stackrel{~}{\xi }(𝐰)=\underset{l}{\sum }C_l(\beta ,w_s,z)P_l(\mu _s),$$
(10)
one can immediately see from the angular dependence in $`\eta `$ that the expansion is an infinite series in $`P_l(\mu _s)`$, with the new coefficients of the Legendre polynomials, $`C_l`$, being given by,
$$C_n(\beta ,w_s,z)=\left(\frac{2n+1}{2}\right)\int _{-1}^1\underset{l=0,2,4}{\sum }D_l(\beta ,w_s\eta ,z)P_l(\frac{\mu _s}{\eta })P_n(\mu _s)𝑑\mu _s.$$
(11)
Thus, expressing the coefficients $`D_l`$ of a given model in terms of fiducial coordinates introduces angular distortion in the redshift space correlation function.
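The remapping of equations (7)-(11) can be checked numerically. The sketch below assumes toy power-law multipoles $`D_l(w)\propto w^{-1.8}`$ with made-up amplitudes (not the SCDM spectrum used in §4) and evaluates the $`C_n`$ integral by Gauss-Legendre quadrature; for $`\alpha =0`$ the $`C_n`$ reduce to the $`D_n`$, while $`\alpha \ne 0`$ modifies the quadrupole and generates higher multipoles:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def legendre(l, x):
    """Legendre polynomial P_l evaluated at x."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return legval(x, c)

def C_n(n, alpha, D, gamma=1.8, nodes=64):
    """Multipole C_n of the distorted correlation function, equation (11),
    for toy power-law multipoles D_l(w) = D[l] * w**(-gamma), with the
    fiducial separation fixed at w_s = 1."""
    mu, wt = leggauss(nodes)                    # Gauss-Legendre on [-1, 1]
    eta = np.sqrt(1.0 + alpha*(1.0 - mu**2))    # equation (8)
    # true-model correlation evaluated at (w_s*eta, mu_s/eta):
    xi = sum(D[l] * eta**(-gamma) * legendre(l, mu/eta) for l in (0, 2, 4))
    return (2*n + 1)/2.0 * np.sum(wt * xi * legendre(n, mu))

# toy amplitudes for the monopole, quadrupole and octapole terms:
D = {0: 1.0, 2: -0.1, 4: 0.01}
```

With $`\alpha =0`$ one recovers $`C_0=1`$, $`C_2=-0.1`$, $`C_4=0.01`$ and $`C_6=0`$ to machine precision; a negative $`\alpha `$ (as for a positive cosmological constant) makes $`C_2`$ more negative and switches on $`C_6`$, illustrating the infinite series of equation (10).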
## 4 Results for the geometric distortion
In this section we present our results for the sensitivity of the anisotropy of the correlation function to the geometric distortion parameter, $`\alpha `$. We consider here a galaxy survey with a mean redshift of $`3`$, the typical redshift of the current Lyman limit galaxy surveys. Our fiducial model is $`\mathrm{\Omega }_0=1.0`$, with the standard cold dark matter (SCDM) power spectrum. On large scales the power spectrum at redshift $`3`$ is related to the power spectrum at redshift zero by the linear growth factor. We adopt the cluster normalization for the power spectrum at redshift zero, obtained by requiring that the observed density of galaxy clusters with a given X-ray temperature matches the theoretical prediction. The constraint obtained in this way can be expressed in terms of the fluctuation in a sphere of radius $`8h^{-1}`$ Mpc, $`\sigma _8`$, given by (Eke et al. 1996):
$`\sigma _8`$ $`=`$ $`0.52\mathrm{\Omega }_0^{-0.46+0.10\mathrm{\Omega }_0},\mathrm{for}\mathrm{\Lambda }_0=0,`$
and
$`\sigma _8`$ $`=`$ $`0.52\mathrm{\Omega }_0^{-0.52+0.13\mathrm{\Omega }_0},\mathrm{for}\mathrm{\Omega }_0+\mathrm{\Lambda }_0=1.`$ (12)
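As a quick numerical check, the cluster normalization can be evaluated directly (a sketch of the Eke et al. 1996 fitting formulas; the exact exponents are our reading of that paper, so treat them as an assumption):

```python
def sigma8_cluster(omega0, flat=False):
    """Cluster-abundance normalization of the z = 0 power spectrum
    (Eke et al. 1996 fitting formulas)."""
    if flat:  # Omega_0 + Lambda_0 = 1
        return 0.52 * omega0**(-0.52 + 0.13*omega0)
    return 0.52 * omega0**(-0.46 + 0.10*omega0)  # Lambda_0 = 0

s8_eds  = sigma8_cluster(1.0)              # 0.52 for Omega_0 = 1
s8_flat = sigma8_cluster(0.3, flat=True)   # ~0.93 for the Lambda model
```

Both branches reduce to $`\sigma _8=0.52`$ at $`\mathrm{\Omega }_0=1`$, and a low-density flat model requires a higher normalization, as expected from the cluster abundance.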
In Figure 4 we plot the $`C_l(\beta ,w_s)`$ coefficients for $`l=0,2`$ and $`4`$, for our fiducial model (lighter lines) and the $`\mathrm{\Lambda }`$ model with $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Lambda }_0=0.7`$ (bold lines). The horizontal axis has been labeled both in units of velocity ($`w_s`$) and comoving space separation $`s`$, calculated for the fiducial model. The observed correlation function is given by the monopole term in the Legendre polynomial expansion, $`C_0(\beta ,w_s)`$. In order to match the value of the $`C_0`$ coefficient to unity at the observed correlation length of $`2.1h^{-1}\mathrm{Mpc}`$ (comoving) for $`\mathrm{\Omega }_0=1`$ (Giavalisco et al. 1998), which corresponds to a correlation velocity of $`450\mathrm{km}\mathrm{s}^{-1}`$, the bias required is $`b=4`$. For the $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Lambda }_0=0.7`$ model, the bias required to match the computed $`C_0`$ coefficient to $`1`$ at the observed correlation velocity of $`450\mathrm{km}\mathrm{s}^{-1}`$ is $`2.3`$. On account of the large bias in these two models, the linear theory used in §3 for the peculiar velocity distortions of the correlation function should be reliable for $`C_0\lesssim 1`$, or $`w_s\gtrsim 400\mathrm{km}\mathrm{s}^{-1}`$.
Our goal is to measure the multipoles $`C_l(\beta ,w_s)`$ of the correlation function and use them to constrain the geometric distortion parameter, $`\alpha `$. From Figure 4 we see that on scales of approximately $`10^3\mathrm{km}\mathrm{s}^{-1}`$, the $`C_2`$ coefficient is a $`10\%`$ perturbation on the monopole term, whereas the octapolar term $`C_4`$ is a smaller contribution, at $`3\%`$ for the $`\mathrm{\Lambda }_0=0.7`$ model. Once we fix the bias, the $`C_0`$ coefficients for the two models shown are similar to each other, except on very large scales. The quadrupolar coefficients for the two models, on the other hand, are very different. The $`C_2`$ coefficient for the $`\mathrm{\Lambda }`$ model, affected by geometric distortion, is larger by a factor of 2 than in the $`\mathrm{\Omega }_0=1`$ model, and comes to within a factor of 2 of the monopole term on scales $`\sim 10^4\mathrm{km}\mathrm{s}^{-1}`$. As we mentioned in §3, the $`C_2`$ coefficient is less than zero, which implies a squashing of the contours of constant $`\stackrel{~}{\xi }(𝐰)`$ along the line of sight. Thus we see that for our choice of the fiducial model, the primary effect of geometric distortion caused by a model with a positive cosmological constant is to cause a further squashing of the contours of $`\stackrel{~}{\xi }(𝐰)`$. The $`C_4`$ coefficient is even more sensitive to geometric distortion than the $`C_2`$ coefficient: its value is approximately $`10`$ times larger for the $`\mathrm{\Lambda }_0=0.7`$ model than for the fiducial $`\mathrm{\Lambda }=0`$ model. A measurement of the $`C_4`$ coefficient will give us additional information with which to test the bias model that we have used.
It will be interesting to compare the value of the linear bias parameter derived from a simultaneous measurement of the cosmological constant and the $`\beta `$ parameter, using both the $`C_4`$ and $`C_2`$ coefficients, to the value obtained by comparing the galaxy distribution to the matter power spectrum at redshift zero.
We now show that the difference in angular distortion of the correlation function in the two models is primarily due to the change in the distortion parameter, and is not strongly dependent on the choice of the power spectrum. In Figure 5, we plot the coefficients $`C_l`$ for the fixed cosmological model $`\mathrm{\Omega }_0=0.3,\mathrm{\Lambda }_0=0.7`$, but two different correlation functions. The bold lines correspond to the power spectrum of the $`\mathrm{\Lambda }`$ model with these same parameters. The lighter lines are for the same cosmological model, but with the power spectrum of an $`\mathrm{\Omega }_0=1`$ CDM model as a function of $`k/H(z)`$. We see from this figure that at velocity separations $`\gtrsim 10^4\mathrm{km}\mathrm{s}^{-1}`$, the differences in the power spectra dominate the differences in the $`C_l`$ coefficients. On smaller scales, however, the geometric distortion is the dominant effect. In particular, the $`C_2`$ and $`C_4`$ coefficients are similar once the monopoles for both models are normalized to unity at the observed correlation length. Thus the ratios of the coefficients $`C_2/C_0`$ and $`C_4/C_0`$ are only weakly dependent on the shape of the power spectrum. This shows that it should be possible to measure the geometric distortion parameter even if the power spectrum is not known accurately from independent methods.
## 5 Error estimates
In this section we compute the accuracy with which the multipoles of the redshift space correlation function can be measured from a typical survey volume, and test the feasibility of the method described above. Currently, the typical observed fields have a size of 12 arcminutes on each side. The redshift range of each field extends from $`z=2.6`$ to $`z=3.4`$, with a surface density of approximately $`1.25`$ Lyman-break objects per square arcminute within this redshift range (Adelberger et al. 1998). In our fiducial model ($`\mathrm{\Omega }_0=1`$), this corresponds to a width of $`2\times 10^3\mathrm{km}\mathrm{s}^{-1}`$ and a depth of $`6\times 10^4\mathrm{km}\mathrm{s}^{-1}`$. For the purpose of error estimation, we consider a wide field of view of $`3\mathrm{°}`$. We shall later discuss the scaling of the errors with the angular size of the field of view.
Any detailed calculation of the errors in a survey will depend upon the precise geometry of the survey volume and the selection effects involved. Here, we consider two of the sources of error: shot noise and cosmic variance. Shot noise is caused by the discrete nature of the galaxies from which we measure the correlation function. Cosmic variance arises from the finite volume we use to estimate a statistical quantity. We calculate these errors for a single cylindrical survey volume with a radius of $`1.5\mathrm{°}`$ ($`75h^{-1}\mathrm{Mpc}`$ for the $`\mathrm{\Omega }_0=1`$ model), and a depth extending from $`z=2.6`$ to $`z=3.4`$ ($`300h^{-1}\mathrm{Mpc}`$ for the $`\mathrm{\Omega }_0=1`$ model). At the current estimate of the surface density of Lyman-break galaxies of $`1.25`$ per square arcminute (Adelberger et al. 1998), approximately $`30000`$ galaxies would be included in our survey volume.
### 5.1 Shot noise
In order to estimate the redshift space correlation function, we bin pairs of galaxies with respect to their separation velocity $`w_s`$ (computed in the fiducial model) in bins of half-width $`\mathrm{\Delta }w_s`$. The redshift space correlation function is then estimated (estimates are denoted by the subscript E) as,
$$\stackrel{~}{\xi }_E(𝐰_s)=\frac{N_p(w_s,\mu _s)}{\overline{N}_p(w_s,\mu _s)}1,$$
(13)
where $`N_p(w_s,\mu _s)`$ is the number of pairs with separations between $`w_s-\mathrm{\Delta }w_s`$ and $`w_s+\mathrm{\Delta }w_s`$ whose separation vector makes an angle $`\mathrm{cos}^{-1}(\mu _s)`$ with the line of sight, and $`\overline{N}_p(w_s)`$ is the ensemble average of the same quantity for a random distribution. Various estimators for the correlation function are discussed in the literature that minimize the error of the estimator due to the unknown true average density of the galaxies at the redshift of the survey (for a discussion see Hamilton 1993). Our shot noise will be dominated by the small number of pairs of galaxies in each of our bins. Since we are currently only interested in an estimate of this error, we have adopted the simpler estimator for the correlation function. To analyze the data from a real survey one should use a more sophisticated estimator to minimize its variance.
Using equation (13) we obtain the estimate of the $`C_l`$ coefficients as given below for $`l0`$
$$C_{l,E}(w_s)=\frac{2l+1}{2}\frac{1}{\overline{N}_p(w_s)}\underset{i=1}{\overset{N_p}{\sum }}P_l(\mu _{si}),$$
(14)
where $`N_p`$ is the number of pairs with separations between $`w_s-\mathrm{\Delta }w_s`$ and $`w_s+\mathrm{\Delta }w_s`$, and $`\overline{N}_p(w_s,\mu _s)`$ is the average number of pairs for a random distribution of galaxies in the same bin. The summation is performed over the pairs of galaxies (denoted by subscript $`i`$) in the bin centered at $`w_s`$. In order to calculate the statistical average of the estimator, we have to perform two integrals. First, for a given number of pairs separated by $`w_s`$, we average over their possible orientations. The probability that a given pair of galaxies with separation $`w_s`$ is oriented along $`\mu _s`$ is given by $`\psi (w_s,\mu _s)`$, where
$$\psi (w_s,\mu _s)d\mu _s=\frac{1+\stackrel{~}{\xi }(w_s,\mu _s)}{1+C_0(w_s)}d\mu _s.$$
(15)
The $`1+C_0(w_s)`$ factor in the denominator comes from normalizing $`1+\stackrel{~}{\xi }(w_s,\mu _s)`$ over $`\mu _s`$. Here we have assumed that a given pair of galaxies can have any orientation with respect to the line of sight. This is clearly not true, for example, for a pair of galaxies close to the edge of the survey volume. In order to circumvent this difficulty we consider a smaller volume within the total survey volume, which we call the “reduced volume”, hereafter denoted $`V_R`$, such that the edges of $`V_R`$ are a distance $`w_s`$ away from the edges of the total survey volume. We only consider pairs of galaxies such that at least one of the galaxies is within $`V_R`$. For a random distribution of galaxies, a pair chosen in this way is not biased to be aligned along a particular direction. The largest separation at which we can measure the coefficients $`C_l`$ is thus the radius of the survey, at which $`V_R`$ goes to zero.
Secondly, we have to average over the distribution of the number of pairs of galaxies in each bin. Calculating the averages (denoted by brackets) yields
$$<C_{l,E}(w_s)>=\frac{2l+1}{2}\frac{<N_p(w_s)>}{\overline{N}_p(w_s)}\int \psi (w_s,\mu _s)P_l(\mu _s)𝑑\mu _s.$$
(16)
This gives us that $`<C_{l,E}(w_s)>=C_l(w_s)`$. This result can also be shown to hold for the monopole term.
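The unbiasedness of the estimator can be verified on a mock catalog. The sketch below uses a periodic cubic box instead of the cylindrical volume (periodic wrapping plays the role of the reduced-volume construction) and checks that $`C_{2,E}`$ for an unclustered Poisson catalog is consistent with zero within the shot-noise error of equation (18); all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 1.0, 1500                     # periodic box side, galaxy count
pos = rng.random((N, 3)) * L

# all pair separations, with periodic (minimum-image) wrapping
d = pos[None, :, :] - pos[:, None, :]
d -= L * np.round(d / L)
iu = np.triu_indices(N, k=1)         # each pair counted once
dz = d[..., 2][iu]
w = np.sqrt(d[..., 0][iu]**2 + d[..., 1][iu]**2 + dz**2)
mu = dz / w                          # z-axis taken as the line of sight

# a single separation bin; estimator (14) and its Poisson error (18)
lo, hi = 0.10, 0.14
mu_b = mu[(w > lo) & (w < hi)]
Nbar = N*(N - 1)/2 * (4/3)*np.pi*(hi**3 - lo**3) / L**3
C2 = 2.5 / Nbar * np.sum(0.5*(3*mu_b**2 - 1))  # P_2(mu) summed over pairs
sigma2 = np.sqrt(2.5 * 1.0 / Nbar)             # C_0 = 0 for a random catalog
```

For this unclustered catalog $`C_{2,E}`$ comes out within a few times $`\sigma _2`$ of zero; the same machinery run on a clustered mock would recover the $`C_l`$ of §4.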
In a similar way to the calculation of the statistical average of the $`C_l`$ coefficients, we can calculate the mean square variation of $`C_l`$ coefficients. Using equation (14) we have,
$$C_{l,E}^2(w_s)=\left(\frac{2l+1}{2}\right)^2\left(\frac{1}{\overline{N}_p}\right)^2\left(\underset{i=1}{\overset{N_p}{\sum }}P_l(\mu _{si})^2+2\underset{i<j}{\overset{N_p}{\sum }}P_l(\mu _{si})P_l(\mu _{sj})\right).$$
(17)
The statistical average of the above equation gives the mean square variance of the $`C_l`$ coefficients, $`<C_{l,E}^2>-<C_{l,E}>^2`$, denoted by $`\sigma _l^2`$, as,
$$\sigma _l^2(w_s)=\left(\frac{2l+1}{2}\right)\frac{(1+C_0(w_s))}{\overline{N}_p(w_s)}.$$
(18)
We mention here that in deriving the above equation we have assumed a Poisson distribution for the number of pairs in each bin. This assumption is not strictly valid, since the pair separations are not all independent. Hence, one may expect some underestimation in the Poisson errors we have calculated, but this should be small since the second term in equation (17) is proportional to $`C_l^2`$. Figure 6 shows the expected $`1\sigma `$ error for the $`C_l`$ coefficients due to shot noise. Each successive bin is centered at a value of $`w_s`$ equal to $`1.5`$ times that of the previous bin; hence each bin has width $`(2/5)w_s`$. The average number of pairs $`\overline{N}_p(w_s)`$ in the bin centered at $`w_s`$ with width $`2\mathrm{\Delta }w_s`$, for a random distribution of galaxies within the survey volume, is given by
$$\overline{N}_p(w_s)=\frac{\overline{n}_g^2}{2}\times \mathrm{V}_\mathrm{R}\times 4\pi w_s^22\mathrm{\Delta }w_s,$$
(19)
where $`\overline{n}_g`$ is the average density of galaxies within the survey volume. This is an underestimate of the number of pairs in the bin, since it counts only half of the pairs for which one of the galaxies is outside $`V_R`$. The underestimation is largest at larger separations since, in that case, a larger fraction of all the pairs in the bin have one galaxy outside of $`V_R`$. Thus our shot noise is an overestimate by at most a factor of $`\sqrt{2}`$.
We can see from Figure 6 that with shot noise alone, the $`C_l`$ coefficients are best measured in the velocity range $`10^2\lesssim w_s\lesssim 10^4\mathrm{km}\mathrm{s}^{-1}`$ for a survey of the size and geometry that we have assumed. The errors on the multipoles scale as $`(2l+1)^{\frac{1}{2}}`$, and so they are smaller for the $`C_0`$ coefficient and larger for the $`C_4`$ coefficient as compared to the quadrupole. For scales close to the radius of the survey the shot noise error increases rapidly since $`V_R`$ is then very small. For scales $`\sim 10^3\mathrm{km}\mathrm{s}^{-1}`$, with shot noise alone, we can measure the $`C_2`$ coefficient to a few percent accuracy, both for the fiducial model and for the $`\mathrm{\Lambda }`$ model, and hence distinguish a large cosmological constant, as in our model with $`\mathrm{\Lambda }_0=0.7`$, at high statistical significance. The shot noise error on the $`C_4`$ coefficient is small for the $`\mathrm{\Lambda }_0=0.7`$ model we have shown, but larger for models with smaller cosmological constants. Considering shot noise alone, on scales of $`10^3\mathrm{km}\mathrm{s}^{-1}`$, the $`C_4`$ coefficient can be measured if it is present at the level of a few percent of the monopole, which in turn would indicate a large energy density in the form of a cosmological constant or some form of quintessence. We also note that the number of pairs of galaxies at a fixed separation is proportional to $`V_R`$. Hence, for separations small compared to the radius of the survey, the shot noise error scales as the inverse of the angular size of the survey.
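The magnitude of the shot-noise error for the survey of this section follows directly from equations (18) and (19). A sketch, assuming the cylindrical geometry and the $`30000`$-galaxy count quoted above, with $`\mathrm{\Delta }w_s=w_s/5`$ and an assumed $`C_00.2`$ at $`w_s=10^3\mathrm{km}\mathrm{s}^{-1}`$ (an illustrative read-off, not a value quoted in the text):

```python
import math

# survey cylinder in velocity units (Omega_0 = 1 fiducial model)
R_v = 7500.0      # radius: 1.5 deg, ~75 Mpc/h, in km/s
L_v = 6.0e4       # depth: z = 2.6 to 3.4, in km/s
N_gal = 30000
n_bar = N_gal / (math.pi * R_v**2 * L_v)   # mean galaxy density

def sigma_shot(l, w_s, C0=0.2):
    """1-sigma shot-noise error on C_l, equations (18) and (19).
    The reduced volume is the cylinder shrunk by w_s on every side."""
    V_R = math.pi * (R_v - w_s)**2 * (L_v - 2*w_s)
    dw = w_s / 5.0                                   # bin half-width
    N_pairs = 0.5 * n_bar**2 * V_R * 4*math.pi * w_s**2 * 2*dw
    return math.sqrt((2*l + 1)/2.0 * (1 + C0) / N_pairs)

err_C2 = sigma_shot(2, 1.0e3)   # of order 0.004: a few-percent measurement
```

The resulting error on $`C_2`$ at $`10^3\mathrm{km}\mathrm{s}^{-1}`$ is well below the quadrupole amplitudes of §4, consistent with the statement above that a large cosmological constant can be distinguished at high significance.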
### 5.2 Cosmic variance
The cosmic variance results from the sparse sampling of the universe by a finite survey volume. It is present even if the overdensity at each point within the survey volume is accurately known, and is independent of the number of observed galaxies. We estimate the cosmic variance in this section using linear theory.
A finite volume estimate (denoted by subscript E) of the correlation function is given by,
$$\stackrel{~}{\xi }_E(\stackrel{}{w_s},\widehat{n})=\frac{1}{\mathrm{V}_\mathrm{R}}\int _{\mathrm{V}_\mathrm{R}}d^3x\delta (\stackrel{}{x},\widehat{n})\delta (\stackrel{}{x}+\stackrel{}{w_s},\widehat{n}).$$
(20)
In the above equation, $`\stackrel{}{x}`$ is constrained to be within $`V_R`$, whose boundary is a distance $`w_s`$ away from that of the full survey volume, so that $`\stackrel{}{x}+\stackrel{}{w_s}`$ is within the full survey volume. For every point $`\stackrel{}{x}`$ within $`V_R`$, the overdensities at $`\stackrel{}{x}`$ and $`\stackrel{}{x}+\stackrel{}{w_s}`$ are accurately known. The ensemble average of the estimator gives,
$`<\stackrel{~}{\xi }_E(\stackrel{}{s})>={\displaystyle \frac{1}{\mathrm{V}_\mathrm{R}}}{\displaystyle \int _{\mathrm{V}_\mathrm{R}}}d^3x\stackrel{~}{\xi }(\stackrel{}{s}),`$
$`=\stackrel{~}{\xi }(\stackrel{}{s}),`$ (21)
where as previously, quantities without the subscript $`E`$ stand for their true values. Similarly,
$$<C_{l,E}(w_s)>=C_l(w_s).$$
(22)
The variance in $`C_{l,E}(w_s)`$ can be computed using,
$$<C_{l,E}^2(w_s)>=\left(\frac{2l+1}{2}\right)^2\int \int 𝑑\mu _{s1}𝑑\mu _{s2}<\stackrel{~}{\xi }_E(s,\mu _{s1})\stackrel{~}{\xi }_E(s,\mu _{s2})>P_l(\mu _{s1})P_l(\mu _{s2}),$$
(23)
where,
$$<\stackrel{~}{\xi }_E(w_s,\mu _{s1})\stackrel{~}{\xi }_E(w_s,\mu _{s2})>=\frac{1}{(\mathrm{V}_\mathrm{R})^2}\int \int d^3x_1d^3x_2<\delta (\stackrel{}{x_1})\delta (\stackrel{}{x_1}+\stackrel{}{w_{s1}})\delta (\stackrel{}{x_2})\delta (\stackrel{}{x_2}+\stackrel{}{w_{s2}})>,$$
(24)
where $`|\stackrel{}{w}_{s1}|=|\stackrel{}{w}_{s2}|`$ and $`\widehat{w}_{s1}\widehat{n}=\mu _{s1}`$, $`\widehat{w}_{s2}\widehat{n}=\mu _{s2}`$.
In order to simplify the above expression, we approximate the overdensities to be in the linear regime. The linear overdensities are Gaussian distributed, and the four-point expression in the above equation can be expressed in terms of two-point correlation functions:
$`<\delta (\stackrel{}{x_1})\delta (\stackrel{}{x}_1+\stackrel{}{w}_{s1})\delta (\stackrel{}{x}_2)\delta (\stackrel{}{x}_2+\stackrel{}{w}_{s2})>=\stackrel{~}{\xi }(\stackrel{}{w}_{s1})\stackrel{~}{\xi }(\stackrel{}{w}_{s2})`$
$`+\stackrel{~}{\xi }(\stackrel{}{x}_1\stackrel{}{x}_2)\stackrel{~}{\xi }(\stackrel{}{x}_1\stackrel{}{x}_2+\stackrel{}{w}_{s1}\stackrel{}{w}_{s2})`$
$`+\stackrel{~}{\xi }(\stackrel{}{x}_1\stackrel{}{x}_2+\stackrel{}{w}_{s1})\stackrel{~}{\xi }(\stackrel{}{x}_1\stackrel{}{x}_2\stackrel{}{w}_{s2}).`$ (25)
Thus the last two terms in equation (25) contribute to the root mean square variance in $`C_{l,E}(w_s)`$. Since $`\stackrel{~}{\xi }_E(w_s,\mu _s)`$ is independent of the azimuthal angle in equation (23), we can also integrate over this angle. Therefore we can express the root mean square variance as,
$`<\sigma _{C_{l,E}}^2(w_s)>=\left({\displaystyle \frac{2l+1}{2\mathrm{V}_\mathrm{R}}}\right)^2{\displaystyle \int d^3x_1d^3x_2\int \frac{d\mathrm{\Omega }_1}{2\pi }\int \frac{d\mathrm{\Omega }_2}{2\pi }P_l(\mu _{s1})P_l(\mu _{s2})}`$
$`\left\{\stackrel{~}{\xi }(\stackrel{}{x}_1\stackrel{}{x}_2+\stackrel{}{w}_{s1})\stackrel{~}{\xi }(\stackrel{}{x}_1\stackrel{}{x}_2\stackrel{}{w}_{s2})+\stackrel{~}{\xi }(\stackrel{}{x_1}\stackrel{}{x_2})\stackrel{~}{\xi }(\stackrel{}{x}_1\stackrel{}{x}_2+\stackrel{}{w}_{s1}\stackrel{}{w}_{s2})\right\}.`$ (26)
The method we employed in the calculation of the above integrals is detailed in the Appendix. Figure 7 displays the expected cosmic variance errors in the multipole coefficients for our survey volume. For all three coefficients, the cosmic variance dominates the error on large scales, while the shot noise contribution is larger on smaller scales. The error is smallest in the region $`w_s\approx 3000\mathrm{km}\mathrm{s}^{-1}`$, so this is the best scale at which to measure the quadrupole and octapole coefficients and hence estimate the geometric distortion factor.
As mentioned before, the cosmic variance error that we have calculated assumes linear theory; hence we have underestimated the contribution to the errors from non-linear fluctuations and velocity effects on small scales. As mentioned in §3, if we adopt a local but stochastic model for the distribution of galaxy number density as a function of the underlying mass density, then the bias will still be scale independent on large scales, but there will be a larger variance in the measured correlation function. In the absence of a well motivated stochastic biasing model, we have not estimated the variance in the $`C_l`$ coefficients arising from such a model of the bias. Depending on the true nature of bias, we may therefore be underestimating the variance of the measured correlation function. A more precise estimate of the error can only be given by the direct analysis of numerical simulations, a project we plan to return to in a later paper.
We note here that we have not assumed that the mean overdensity within the survey volume is zero. The fluctuation in the mean overdensity is the primary source of the cosmic variance error for the monopole component of the correlation function. This fluctuation of course does not affect the higher multipoles of the correlation function, and hence on small scales the error on the higher multipoles is smaller than on the monopole coefficient. With the combined shot noise and cosmic variance errors, the $`C_2`$ coefficient can be measured to a few percent accuracy, both for the fiducial $`\mathrm{\Omega }_0=1`$ model and for the $`\mathrm{\Lambda }`$ models, which have a larger quadrupole coefficient than the fiducial model. Thus, with our estimate of the errors, we can distinguish a geometric distortion factor of about $`15\%`$, corresponding to a $`\mathrm{\Lambda }`$ model with $`\mathrm{\Lambda }_0=0.7`$, at high statistical significance. From Figures 6 and 7 we also see that for our survey volume, the $`C_4`$ coefficient can be measured to $`20\%`$ accuracy for the $`\mathrm{\Lambda }_0=0.7`$ model. The errors are larger for models with smaller cosmological constants. Therefore, a measurement of the octapolar coefficient is possible if it is present at the level of a few percent of the monopole on scales of $`3\times 10^3\mathrm{km}\mathrm{s}^{-1}`$, as in the case of a large cosmological constant. Since the error on $`C_4`$ is large, a simultaneous measurement of both the $`\beta `$ parameter and the cosmological constant from the anisotropy of the redshift space correlation function alone is difficult, as indicated earlier by Ballinger et al. (1996). If the octapolar coefficient can be measured, and the bias parameter constrained, it will be interesting to compare its value to the one obtained by comparing galaxy clustering to the assumed underlying matter distribution.
We emphasize, however, that when the amplitude of the matter power spectrum is assumed known, so that only one parameter needs to be measured from the redshift space correlation function, the quadrupolar geometric distortion effect of the cosmological constant can be measured to high accuracy.
For scales much smaller than the radius of the survey, the cosmic variance error scales as the inverse square root of the volume of the survey and hence as the inverse of the angular size of the survey. Thus both shot noise and cosmic variance have a similar dependence on the angular size of the survey on small length scales. A large cosmological constant may be distinguished with high statistical significance for smaller angular size surveys, depending upon other sources of error. Considering only the shot noise and the cosmic variance that we have estimated, for a survey of angular size $`1^{}`$, with a factor of three increase in the errors, we can still measure the quadrupolar coefficient affected by a geometric distortion parameter of $`15\%`$ with an accuracy of approximately $`10\%`$ on a scale of $`3\times 10^3\mathrm{km}\mathrm{s}^{-1}`$. The error is larger for smaller distortion factors. Since a variation of the cosmological constant from zero to $`0.7`$ changes the quadrupolar coefficient by a factor of 2, we can use a linear relation between the two to make an approximate estimate of the accuracy with which the value of the cosmological constant can be measured. This implies that a large cosmological constant, for which the error in the difference of the $`C_2`$ coefficient with respect to its value in the fiducial model is small, can be constrained with an error bar of approximately $`20\%`$ with a $`1^{}`$ field of view. Since this linear relation is in fact only approximate, and the geometric distortion parameter is more sensitive to a variation in the cosmological constant when it is large (Ballinger et al. 1996), the error we have quoted will be somewhat smaller for large $`\mathrm{\Lambda }_0`$ ($`\gtrsim 0.5`$).
For a field of view of this size, the $`C_4`$ coefficient can also be measured, although with a large error of $`60\%`$, if it is present at the level of a few percent of the monopole as in the case of geometric distortion with respect to the fiducial $`\mathrm{\Omega }_0=1`$ model by a cosmological constant $`\mathrm{\Lambda }_0=0.7`$.
For a smaller field of view, the monopole coefficients have to be measured on scales smaller than $`3000\mathrm{km}\mathrm{s}^{-1}`$, where shot noise is the dominant source of error. For example, for a field size of $`1/2^{}`$, the quadrupole coefficient corresponding to a $`15\%`$ geometric distortion parameter can still be measured to an accuracy of approximately $`50\%`$ on a scale of $`10^3\mathrm{km}\mathrm{s}^{-1}`$. Hence it can be distinguished from the fiducial $`\mathrm{\Omega }_0=1`$ model at the $`2\sigma `$ level. For smaller scales the error is larger, while to measure the distortion parameter at larger scales a larger field size is required. Thus a field at least $`1/2^{}`$ in diameter, corresponding to an area approximately four times the currently used field size, is required to distinguish a $`\mathrm{\Lambda }_0=0.7`$ model from our fiducial $`\mathrm{\Omega }_0=1`$ model.
## 6 Discussion and Conclusions
In this paper we have investigated the feasibility of using the high redshift population of Lyman-break galaxies to measure the geometric distortion effect and hence constrain cosmological parameters. The method is particularly sensitive to components of energy density with negative pressure and in particular to the cosmological constant. The principal advantage of using this population of galaxies is their high bias with respect to the underlying matter distribution. This tends to suppress the peculiar velocity effects and makes it easier to measure the geometric distortion effect. As pointed out by Ballinger et al. (1996), a simultaneous measurement of the bias and the cosmological constant using the redshift space distortion alone is difficult except in case of a large cosmological constant. In this paper we assumed that the matter power spectrum at redshift $`3`$ is related by the linear growth factor to the matter power spectrum at redshift zero which is constrained by observations of cluster abundances. We fixed the bias of the Lyman-break galaxies by comparing their clustering to the assumed matter power spectrum at redshift $`3`$. Then we only need to measure one parameter, the geometric distortion parameter, from the anisotropy of the correlation function. This permits us to use the lowest order quadrupolar distortion of the redshift space power spectrum to constrain the geometric distortion parameter to high accuracy. In cases of a large energy density in a cosmological constant or quintessence, the octapolar coefficient may also be measured. An interesting test would then be to compare the value of the bias parameter derived from the additional information provided by the octapolar term to that determined by comparing the galaxy clustering to the matter power spectrum.
We estimated that in order to distinguish a flat model with $`\mathrm{\Lambda }_0=0.7`$ from the Einstein-de Sitter case, at least a $`1/2^{}`$-sized circular field of view is required. Currently the observation fields have sizes of approximately $`10^{^{}}`$, which are too small for measurements of geometric distortion, both due to shot noise and cosmic variance. It is preferable to measure the distortion effect on large scales, where the effects of peculiar velocities can be analytically computed using linear theory. For this reason, it is better to use a single large field of view than to combine data from several small fields of view, which provide data only on smaller scales. For a more accurate measurement of the distortion parameter, larger field sizes are required. We estimated that for a field size of $`3^{}`$, the best scale at which to measure the ratio of the quadrupole coefficient to the monopole is approximately $`3000\mathrm{km}\mathrm{s}^{-1}`$, or $`15h^{-1}\mathrm{Mpc}`$ in the $`\mathrm{\Omega }_0=1`$ model, and somewhat smaller for a smaller field. Since the difference in the quadrupolar coefficients for the flat $`\mathrm{\Lambda }_0=0`$ and $`\mathrm{\Lambda }_0=0.7`$ models can be measured to $`20\%`$ accuracy with a circular field of diameter $`1^{}`$, we made a rough estimate that a large cosmological constant ($`\gtrsim 0.5`$) can be measured with this precision.
Our cosmic variance was estimated using the linear correlation function, and we have underestimated the error due to fluctuations and non-linear velocity effects on small scales. We have also used a very simple local, non-stochastic, scale-independent model for the bias. Stochastic bias will lead to variance in the measured correlation function which we have not accounted for. A full calculation of the errors including non-linear effects will require the analysis of numerical simulations, which we will discuss in a future paper.
I wish to thank Jordi Miralda-Escudé, my thesis advisor, for providing the original motivation for this work and for the numerous insightful comments and discussions I have had with him. I also wish to thank Patrick MacDonald, Brian Mason and David Moroz for their comments on the paper. I would also like to thank the anonymous referee for comments and suggestions that have improved the content and presentation of the paper.
## Appendix A Appendix
### Calculation of integrals for the cosmic variance
We perform the integrals required in equation (26) for the case of a cylindrical volume of radius R and length $`L_z`$. Since equation (25) depends only on the difference vector $`\stackrel{}{x_1}\stackrel{}{x_2}`$, the six dimensional integral over $`\stackrel{}{x_1}`$ and $`\stackrel{}{x_2}`$ can be reduced to a two dimensional integral. We define sum and difference vectors $`\stackrel{}{x}_+`$ and $`\stackrel{}{x}_{}`$ respectively as
$`\stackrel{}{x}_+=\stackrel{}{x}_1+\stackrel{}{x}_2`$
$`\stackrel{}{x}_{}=\stackrel{}{x}_1-\stackrel{}{x}_2,`$ (A1)
Denoting our integrand as $`f(\stackrel{}{x}_{})`$ we have the following result.
$$\int d^3x_1\int d^3x_2f(\stackrel{}{x}_{})=\frac{1}{8}\int d^3x_{}V_+(\stackrel{}{x}_{})f(\stackrel{}{x_{}}),$$
(A2)
where $`V_+(\stackrel{}{x}_{})`$ is the volume occupied by the sum vector $`\stackrel{}{x}_+`$ for a fixed difference vector $`\stackrel{}{x}_{}`$. Denoting the components of the $`\stackrel{}{x}_{}`$ in cylindrical coordinates as $`\rho _{}`$ and $`z_{}`$, $`V_+(\stackrel{}{x}_{})`$ is given as
$$V_+(\stackrel{}{x}_{})=\{2\mathrm{R}^2\mathrm{cos}^{-1}(\frac{\rho _{}}{2\mathrm{R}})-\rho _{}\left(\mathrm{R}^2-\frac{\rho _{}^2}{4}\right)^{\frac{1}{2}}\}2(\mathrm{L}_\mathrm{z}-|z_{}|).$$
(A3)
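The bracketed transverse factor in (A3) is the area of intersection of two disks of radius $`\mathrm{R}`$ whose centers are separated by $`\rho _{}`$. As a sanity check, this closed form can be compared against a quick Monte Carlo estimate (a sketch in Python; the function names are ours):

```python
import numpy as np

def lens_area_formula(R, rho):
    # Bracketed term in (A3): intersection area of two disks of
    # radius R whose centers are a distance rho apart (0 <= rho <= 2R).
    return 2*R**2*np.arccos(rho/(2*R)) - rho*np.sqrt(R**2 - rho**2/4)

def lens_area_mc(R, rho, n=400_000, seed=0):
    rng = np.random.default_rng(seed)
    # uniform points in the first disk (centered at the origin)
    r = R*np.sqrt(rng.random(n))
    t = 2*np.pi*rng.random(n)
    x, y = r*np.cos(t), r*np.sin(t)
    # fraction of them that also lies in the disk centered at (rho, 0)
    inside = (x - rho)**2 + y**2 <= R**2
    return np.pi*R**2*inside.mean()

if __name__ == "__main__":
    for rho in (0.5, 1.0, 1.5):
        print(rho, lens_area_formula(1.0, rho), lens_area_mc(1.0, rho))
```

The two estimates agree to the Monte Carlo noise level for any separation between $`0`$ and $`2\mathrm{R}`$.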
Let us first consider the contribution of the first term in equation (26), $`\stackrel{~}{\xi }(\stackrel{}{x}_{}+\stackrel{}{w}_1)\stackrel{~}{\xi }(\stackrel{}{x}_{}+\stackrel{}{w}_2)`$, and denote it by $`<\sigma _{C_{l,E}}^2(w)>_I`$:
$$<\sigma _{C_{l,E}}^2(w)>_I=\left(\frac{2l+1}{2}\right)^2\int d^3x_1\int d^3x_2\left(I_l(\stackrel{}{x}_{},w)\right)^2,$$
(A4)
where,
$$I_l(\stackrel{}{x}_{},w)=\int \frac{d\mathrm{\Omega }_1}{2\pi }P_l(\mu _1)\stackrel{~}{\xi }(\stackrel{}{x}_{}+\stackrel{}{w}_1).$$
(A5)
We now calculate $`I_l(\stackrel{}{x}_{},w)`$. The Fourier transform of $`\stackrel{~}{\xi }`$ is
$$\stackrel{~}{\xi }(\stackrel{}{x}_{}+\stackrel{}{w}_1)=\frac{1}{(2\pi )^3}\int d^3k\stackrel{~}{P}(k,\widehat{k}\widehat{n})e^{i\stackrel{}{k}(\stackrel{}{x}_{}+\stackrel{}{w_1})},$$
(A6)
where $`\stackrel{~}{P}(k,\widehat{k}\widehat{n})`$ is the redshift space power spectrum given by,
$$\stackrel{~}{P}(k,\widehat{k}\widehat{n})=P(k)\sum _{l=0,2,4}(-1)^lA_l(\beta )P_l(\widehat{k}\widehat{n}).$$
(A7)
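Equation (3), which defines the $`A_l(\beta )`$, is not reproduced in this excerpt. For the standard linear-theory redshift distortion (Kaiser 1987; Hamilton 1992) the three nonzero coefficients are $`A_0=1+2\beta /3+\beta ^2/5`$, $`A_2=4\beta /3+4\beta ^2/7`$, $`A_4=8\beta ^2/35`$, and they resum to the familiar $`(1+\beta \mu ^2)^2`$ boost. A short numerical check of that identity, assuming this standard form:

```python
import numpy as np
from scipy.special import eval_legendre

def kaiser_multipoles(beta):
    # Standard linear-theory coefficients (Kaiser 1987; Hamilton 1992);
    # assumed here to coincide with A_l(beta) of equation (3).
    return (1 + 2*beta/3 + beta**2/5,
            4*beta/3 + 4*beta**2/7,
            8*beta**2/35)

def ps_over_p(beta, mu):
    # Reassemble the redshift-space boost from its Legendre multipoles;
    # the result should equal (1 + beta*mu^2)^2 exactly.
    A0, A2, A4 = kaiser_multipoles(beta)
    return (A0*eval_legendre(0, mu) + A2*eval_legendre(2, mu)
            + A4*eval_legendre(4, mu))

if __name__ == "__main__":
    beta, mu = 0.7, 0.4
    print(ps_over_p(beta, mu), (1 + beta*mu**2)**2)  # equal
```

The quadrupole-to-monopole ratio $`A_2/A_0`$ is the quantity whose geometric distortion the main text proposes to measure.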
The coefficients $`A_l(\beta )`$ are defined in equation (3). For convenience of computation, we take the line-of-sight vector $`\widehat{n}`$ to lie along the z axis. We first perform the integrals over the angles $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$ in equation (26). Using,
$$e^{i\stackrel{}{k}\stackrel{}{w}_1}=4\pi \sum _{L,M}i^Lj_L(kw)Y_{LM}(\mathrm{\Omega }_k)Y_{LM}^{}(\mathrm{\Omega }_{w_1}),$$
(A8)
we have,
$$\int \frac{d\mathrm{\Omega }_1}{2\pi }e^{i\stackrel{}{k}\stackrel{}{w_1}}P_l(\mu _1)=2i^lj_l(kw)P_l(\mu _k),$$
(A9)
where $`\mu _k=\widehat{k}\widehat{n}`$ is the cosine of the angle between $`\widehat{k}`$ and $`\widehat{n}`$. Thus,
$$I_l(\stackrel{}{x}_{},w)=\frac{i^l}{4\pi ^3}\int d^3k\stackrel{~}{P}(k,\widehat{k}\widehat{n})e^{i\stackrel{}{k}\stackrel{}{x}_{}}j_l(kw)P_l(\mu _k).$$
(A10)
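With $`\widehat{k}`$ chosen along the polar axis, the angular integral in (A9) reduces to the standard identity $`\int _{-1}^{1}e^{ix\mu }P_l(\mu )d\mu =2i^lj_l(x)`$, which is easy to verify numerically (a sketch):

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def lhs(l, x, n=200):
    # Gauss-Legendre quadrature of  \int_{-1}^{1} e^{i x mu} P_l(mu) dmu
    mu, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * np.exp(1j*x*mu) * eval_legendre(l, mu))

def rhs(l, x):
    # closed form: 2 i^l j_l(x)
    return 2 * (1j)**l * spherical_jn(l, x)

if __name__ == "__main__":
    print(lhs(2, 3.7), rhs(2, 3.7))  # agree to quadrature accuracy
```

The two sides agree to machine precision for modest quadrature orders, since the integrand is entire in $`\mu `$.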
Substituting equation (A7) into the above equation and integrating over $`d^3k`$,
$$I_l(\stackrel{}{x}_{},w)=\frac{i^l}{2\pi ^2}\sum _{l^{\prime }=0}^{8}i^{l^{\prime }}(2l^{\prime }+1)D1(l,l^{\prime })\chi (l,w,l^{\prime },x_{})P_{l^{\prime }}(\mathrm{cos}\theta _{x_{}}),$$
(A11)
where,
$`D1(l,l^{\prime })=\sum _{l^{\prime \prime }=0,2,4}A_{l^{\prime \prime }}(\beta )B1(l,l^{\prime },l^{\prime \prime }),`$
$`B1(l,l^{\prime },l^{\prime \prime })=\int _{-1}^{1}d\mu _kP_l\left(\mu _k\right)P_{l^{\prime }}\left(\mu _k\right)P_{l^{\prime \prime }}\left(\mu _k\right),`$ (A12)
and
$$\chi (l,w,l^{\prime },x_{})=\int dkk^2j_l(kw)j_{l^{\prime }}(kx_{})P(k).$$
(A14)
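The triple products $`B1(l,l^{\prime },l^{\prime \prime })`$ in (A12) are Gaunt-type integrals: they vanish unless $`l+l^{\prime }+l^{\prime \prime }`$ is even and the three indices satisfy the triangle inequality. The handful needed here can be tabulated exactly by Gauss-Legendre quadrature (a sketch):

```python
import numpy as np
from scipy.special import eval_legendre

def B1(l, lp, lpp, n=64):
    # Gauss-Legendre evaluation of (A12):
    #   B1 = \int_{-1}^{1} P_l(mu) P_{l'}(mu) P_{l''}(mu) dmu
    # Exact whenever 2n-1 exceeds the polynomial degree l+l'+l''.
    mu, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * eval_legendre(l, mu) * eval_legendre(lp, mu)
                  * eval_legendre(lpp, mu))

if __name__ == "__main__":
    print(B1(0, 0, 0), B1(2, 2, 0), B1(2, 2, 2))  # 2, 2/5, 4/35
```

For example, $`B1(2,2,0)=2/5`$ and $`B1(2,2,2)=4/35`$, while any odd combination such as $`B1(2,2,1)`$ vanishes by parity.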
The contribution of the second term in equation (26) to $`<\sigma _{C_{l,E}}^2(w)>`$ can be computed in a similar fashion.
# CONVERGENT APPROXIMATION FOR THE 2-BODY CORRELATION FUNCTION IN AN INTERFACE
## I Introduction
Density correlations in the vicinity of equilibrium planar interfaces have been extensively studied by numerous authors . Several methods, including the density functional theory , the capillary wave model , and the eigenstate expansion of the fluctuations were used to calculate the correlation function. Most existing approaches have to confront a problem of divergence, caused by the vanishing energy cost of rigid shifts of the interface in an infinite system without external potential. We will focus in this work on the Hamiltonian second derivative $`\frac{\delta ^2[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)}|_{\rho =\rho _0}`$ eigenstate expansion method (sometimes also called the “field theoretical method”, ). In the framework of this method the divergence manifests itself in the term $`\varphi _0(z_1)\varphi _0(z_2)/\lambda _0`$, where $`\varphi _0(z)`$ and $`\lambda _0`$ are the “zero” eigenstate and eigenvalue; $`\varphi _0(z)\propto d\rho _0(z)/dz`$, where $`\rho _0(z)`$ is the equilibrium density profile. The eigenvalue $`\lambda _0`$ is zero for a vanishing external localizing field. The traditional physical explanation for the divergence is the following: the density fluctuations corresponding to the interface shifts have, to first order in the shift amplitude, a component only along the zero eigenstate, $`\mathrm{\Delta }\rho (z)=\rho _0(z+\mathrm{\Delta }z)-\rho _0(z)=(d\rho _0(z)/dz)\mathrm{\Delta }z+𝒪(\mathrm{\Delta }z)^2`$, and free wandering of the interface as a whole results in an ambiguity in defining the density-density correlation function. In the framework of the capillary wave model () this ambiguity is overcome by renormalizing the equilibrium density profile, taking into account the free wandering mode. As a result, the average width of the renormalized interface diverges for vanishing external field, and the zero-order term contribution to the density-density correlation function tends to zero.
However, after such renormalization, one loses information about instantaneous and local fluctuations of the interface.
In an attempt to resolve this ambiguity, we would like to focus our attention on the question of the divergence of the zero-eigenstate term. The main motivation of our approach lies in the following.
It is true that in an infinite system without a confining external field the energy cost of rigid shifts of the planar two-phase interface is zero, $`[\rho (z_0+\mathrm{\Delta }z)]=[\rho (z_0)]`$. Yet this does not mean that the amplitude $`a_0`$ of the density fluctuation proportional to the zero eigenstate, $`\delta \rho (z)=a_0\rho _0^{}(z)`$, can grow infinitely without a free-energy penalty. Physically, this follows from the fact that, although the position of the interface is undetermined, the actual values of the density near the interface cannot go significantly beyond the density of either of the bulk phases. For all realistic Hamiltonian functionals describing stable systems, this non-divergence is controlled by a positive coefficient in front of the highest power of the density. For the Ginzburg-Landau functional
$$[\rho ]=\int \left(\{\frac{d}{dz}\rho (z)\}^2+\{1-\rho ^2(z)\}^2\right)dz$$
(1)
the latter is $`+\rho ^4(z)`$. The divergence that appears when only the harmonic (second-order) terms of the expansion of the Hamiltonian are used is the direct consequence of neglect of all higher-order terms. Hence, to eliminate the divergence of the zero-eigenstate term in the correlation function, it is natural to try using higher-order terms of the expansion of the Hamiltonian around the equilibrium density profile (see where the equilibrium profile was calculated using higher-order terms). It turns out that fourth-order terms are sufficient to keep the zero-eigenstate contribution finite. We propose an approximate method to consider these fourth order terms; this method is formally introduced in Section II. Concrete examples for the Ginzburg-Landau Hamiltonian for one- and three- dimensional systems are presented in Sections III and IV.
However, after eliminating the divergence of the zero-order eigenstate term, it is natural to ask which eigenstate or combination of eigenstates of the second derivative matrix describes the macroscopic shifts of the interface $`\mathrm{\Delta }\rho (z)=\rho _0(z+\mathrm{\Delta }z)-\rho _0(z)`$. The expansion coefficient of this density fluctuation along the zero eigenstate remains finite even for infinite shifts,
$$\int _{-\infty }^{+\infty }\mathrm{\Delta }\rho (z)\rho _0^{}(z)dz\to \rho _0(+\infty )[\rho _0(+\infty )-\rho _0(-\infty )]$$
(2)
for $`\mathrm{\Delta }z\to +\infty `$. This statement could serve as another argument for a finite average value of the amplitude of fluctuation along $`\rho _0^{}(z)`$. The same is true for all other bound (localized) eigenstates. Yet the projection of the shift $`\rho _0(z+\mathrm{\Delta }z)-\rho _0(z)`$ onto low-lying continuum states diverges as $`\mathrm{\Delta }z\to \infty `$. For the Ginzburg-Landau Hamiltonian the mean-field equilibrium density profile is $`\rho _0(z)=\mathrm{tanh}(z)`$, and the first non-localized eigenstate of the second derivative matrix is proportional to $`(3\mathrm{tanh}^2(z)-1)/2`$. The integral
$$\int _{-\infty }^{+\infty }(\mathrm{tanh}(z+\mathrm{\Delta }z)-\mathrm{tanh}(z))(\frac{3}{2}\mathrm{tanh}^2(z)-\frac{1}{2})dz\sim 2\mathrm{\Delta }z$$
(3)
diverges linearly for $`\mathrm{\Delta }z\to \infty `$. It means that since macroscopic shifts have no energy cost, the appropriate linear combination of low-lying continuum states, with some of the coefficients growing proportionally to the magnitude of the shift, also has no energy cost. The finiteness of the average values of all the continuum spectrum amplitudes is another artifact of the second-order truncation; in particular, it is a result of the neglect of the mixing of different harmonics in the third and higher-order terms. Consequently, our approximation is convergent only because, after going to the higher-order expansion in the zero term, we stopped short of going beyond the harmonic approximation for terms containing the lowest-lying continuum states.
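The linear growth claimed in equation (3) is easy to confirm numerically. Taking the integrand as $`(\mathrm{tanh}(z+\mathrm{\Delta }z)-\mathrm{tanh}(z))(3\mathrm{tanh}^2(z)-1)/2`$ and truncating the integral to a window wide enough for the integrand to have decayed at both ends, the increments settle onto a slope of 2 (a sketch):

```python
import numpy as np
from scipy.integrate import quad

def overlap(dz):
    # \int (tanh(z+dz) - tanh z) * (3 tanh^2 z - 1)/2 dz,
    # truncated to a window wide enough for the integrand to vanish
    f = lambda z: (np.tanh(z + dz) - np.tanh(z))*(3*np.tanh(z)**2 - 1)/2
    val, _ = quad(f, -dz - 40.0, 40.0, limit=400, points=[-dz, 0.0])
    return val

if __name__ == "__main__":
    slope = (overlap(25.0) - overlap(15.0))/10.0
    print(slope)  # close to 2: the integral grows linearly with dz
```

The slope approaches 2 because, for large $`\mathrm{\Delta }z`$, $`d/d\mathrm{\Delta }z`$ of the integral is $`\int \mathrm{sech}^2(z+\mathrm{\Delta }z)dz=2`$ weighted by a factor that tends to 1 far from the interface.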
However, the main contribution to the correlations near the interface comes from the bound states, while continuum states are more relevant for the correlations in the bulk phases. A possible merit of our approximation is the improvement in the accuracy of density correlation calculations in the vicinity of the interface. To compare our results to experimental data, theoretical calculations should be supplemented by a priori knowledge of the macroscopic localization of the interface.
For a three-dimensional system the situation is similar. For square-gradient energy functionals, the perturbation $`\mathrm{\Delta }\rho (x,y,z)=\rho _0(z+f(x,y))-\rho _0(z)`$ of an initially flat interface has the energy cost
$$\mathrm{\Delta }F=\int _{-\infty }^{+\infty }[\rho _0^{}(z)]^2dz\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }|\nabla f(x,y)|^2dxdy$$
(4)
For long-wavelength fluctuations, $`f(x,y)\propto \mathrm{exp}[i(k_xx+k_yy)]`$, $`|\stackrel{}{k}|\ll 1`$, the energy cost of such fluctuations vanishes as $`|k|^2`$. However, considering the expansion of the density fluctuations over the system of eigenstates (now in 3D) of the Hamiltonian second derivative matrix, $`\psi _i(\stackrel{}{r})\propto 1/L\varphi _i(z)\mathrm{exp}(ik_xx)\mathrm{exp}(ik_yy)`$, the divergent contribution comes not from the terms with localized $`\varphi _i`$, but from the bottom of the continuum of $`z`$-coordinate eigenstates. To illustrate that, in section IV we calculate the convergent contribution to the correlation function from the $`\varphi _0(z)\mathrm{exp}(ik_xx)\mathrm{exp}(ik_yy)`$ eigenstate.
## II Formalism
A density-density correlation function $`g(z_1,z_2)`$ is defined as a thermal average of a product of density fluctuations $`\mathrm{\Delta }\rho (z)=\rho (z)-\rho _0(z)`$ around the equilibrium density profile $`\rho _0(z)`$:
$$g(z_1,z_2)\equiv \langle \mathrm{\Delta }\rho (z_1)\mathrm{\Delta }\rho (z_2)\rangle .$$
(5)
For simplicity, in this section we consider a one-dimensional case, $`\rho =\rho (z)`$. Following we express it as a functional integral over all possible density profiles,
$$g(z_1,z_2)=(1/Z)\int 𝒟\rho \mathrm{\Delta }\rho (z_1)\mathrm{\Delta }\rho (z_2)\mathrm{exp}\{-[\rho ]\}$$
(6)
where $`[\rho ]\equiv H[\rho ]/k_BT`$ is the reduced Hamiltonian functional, and the partition function $`Z`$ serves as the normalization constant,
$$Z=\int 𝒟\rho \mathrm{exp}\{-[\rho ]\}$$
(7)
In the mean-field approximation that we will use throughout this work, the equilibrium density profile $`\rho _0(z)`$ is determined as the one that minimizes the Hamiltonian,
$$\frac{\delta [\rho ]}{\delta \rho (z)}|_{\rho =\rho _0}=0.$$
(8)
To evaluate (6), we proceed by calculating the eigenstates of the integral operator with the kernel $`\frac{\delta ^2[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)}|_{\rho =\rho _0}`$,
$$\int \frac{\delta ^2[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)}|_{\rho =\rho _0}\varphi _i(z_2)dz_2=\lambda _i\varphi _i(z_1).$$
(9)
There is always a special eigenstate corresponding to $`\varphi _0(z)=d\rho _0(z)/dz`$ which has zero eigenvalue $`\lambda _0=0`$ since
$$\int \frac{\delta ^2[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)}|_{\rho =\rho _0}\frac{d\rho _0(z_2)}{dz_2}dz_2=\frac{d}{dz_1}\frac{\delta [\rho ]}{\delta \rho (z_1)}|_{\rho =\rho _0}\equiv 0.$$
(10)
The Hamiltonian is usually real and contains only even powers of differential operators; this makes the integral operator in Eq. (9) Hermitian. The system of eigenstates $`\varphi _i`$ is complete and orthogonal; we also assume that it is normalized with unit weight function and that all eigenfunctions $`\varphi _i`$ are made real. An arbitrary density fluctuation $`\mathrm{\Delta }\rho (z)=\rho (z)-\rho _0(z)`$ can be expanded over the complete set of functions $`\{\varphi _i(z)\}`$:
$$\mathrm{\Delta }\rho (z)=\sum _{i=0}^{\infty }a_i\varphi _i(z),a_i=\int \mathrm{\Delta }\rho (z)\varphi _i(z)dz$$
(11)
Using the expansion(11), the functional integral (6) can be expressed as
$$g(z_1,z_2)=\frac{1}{Z}\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }\int \cdots \int a_ia_j\varphi _i(z_1)\varphi _j(z_2)\mathrm{exp}\{-[\rho _0+\sum _{k=0}^{\infty }a_k\varphi _k]\}\prod _{m=0}^{\infty }da_m$$
(12)
where the normalization constant
$$Z=\int \cdots \int \mathrm{exp}\{-[\rho _0+\sum _{k=0}^{\infty }a_k\varphi _k]\}\prod _{m=0}^{\infty }da_m.$$
(13)
We assume here that the integrals in both the numerator and the denominator are convergent. As we mentioned in the Introduction, this is not true for the single specific direction in the space of coefficients $`\{a_i\}`$ corresponding to the rigid macroscopic shifts of the interface. However, as a result of the approximations made below, this divergence will not affect the further calculations.
Traditionally, $`[\rho _0+_{k=0}^{\mathrm{}}a_k\varphi _k]`$ is expanded to the second order around the equilibrium density profile, the orthogonality conditions for the $`\varphi _i`$ are used, and the corresponding Gaussian integrals factorized . As a result, the familiar expression for the density-density correlation function is recovered:
$$g(z_1,z_2)=\sum _{i=0}^{\infty }\langle a_i^2\rangle \varphi _i(z_1)\varphi _i(z_2),$$
(14)
where $`\langle a_i^2\rangle =1/\lambda _i`$. As we already mentioned, the zero term diverges since $`\lambda _0=0`$.
However, as we discussed in the Introduction, from physical considerations $`\langle a_0^2\rangle `$ must have a finite value. It is indeed the case if in Eqs. (12), (13) one goes to higher than second order in the expansion of $`[\rho ]`$.
In our case, the expansion of $`[\rho ]`$ up to the fourth order in the density fluctuation around the equilibrium profile is sufficient:
$`[\rho _0+\sum _{k=0}^{\infty }a_k\varphi _k]\approx [\rho _0]+`$ (15)
$`\sum _{k=0}^{\infty }a_k\int \frac{\delta [\rho ]}{\delta \rho (z)}|_{\rho =\rho _0}\varphi _k(z)dz+`$ (16)
$`\frac{1}{2}\sum _{k,l=0}^{\infty }a_ka_l\int \frac{\delta ^2[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)}|_{\rho =\rho _0}\varphi _k(z_1)\varphi _l(z_2)\prod _{j=1}^{2}dz_j+`$ (17)
$`\frac{1}{3!}\sum _{k,l,m=0}^{\infty }a_ka_la_m\int \frac{\delta ^3[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)\delta \rho (z_3)}|_{\rho =\rho _0}\varphi _k(z_1)\varphi _l(z_2)\varphi _m(z_3)\prod _{j=1}^{3}dz_j+`$ (18)
$`\frac{1}{4!}\sum _{k,l,m,n=0}^{\infty }a_ka_la_ma_n\int \frac{\delta ^4[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)\delta \rho (z_3)\delta \rho (z_4)}|_{\rho =\rho _0}\varphi _k(z_1)\varphi _l(z_2)\varphi _m(z_3)\varphi _n(z_4)\prod _{j=1}^{4}dz_j.`$ (19)
The first-order term is identically zero, the second-order terms are used in the traditional formalism, and the third and fourth-order terms are essential for our treatment. If we substitute this expansion into the expressions for the correlation function (12), (13), then, for non-pathological forms of the Hamiltonian functional $`[\rho ]`$, it will produce a finite value for $`\langle a_0^2\rangle `$. However, in their complete form Eqs. (12), (13), (15) are hardly tractable. Assuming that we are far enough from a critical point, we use the following approximation to evaluate $`\langle a_0^2\rangle `$. We assume that the second-order expansion works well enough for all $`\langle a_i^2\rangle =1/\lambda _i`$ with $`i\ne 0`$ and drop from Eq. (15) all terms that do not contain $`a_0`$. In the remaining terms we replace all combinations of $`a_i`$, $`a_ia_j`$, $`a_ia_ja_k`$, $`\{i,j,k\}\ne 0`$ by their average values obtained with the second-order expansion:
$`\langle a_i\rangle =0,`$ (20)
$`\langle a_ia_j\rangle =\delta _{ij}\frac{1}{\lambda _i},`$ (21)
$`\langle a_ia_ja_k\rangle =0.`$ (22)
These approximations allow us to express $`\langle a_0^2\rangle `$ in the form of the following integral:
$$\langle a_0^2\rangle =\frac{\int x^2\mathrm{exp}\{-\alpha x^4-\beta x^2-\gamma x-\delta x^3\}dx}{\int \mathrm{exp}\{-\alpha x^4-\beta x^2-\gamma x-\delta x^3\}dx},$$
(24)
with the coefficients $`\alpha ,\beta ,\gamma ,\delta `$ given by
$`\alpha =\frac{1}{4!}\int \frac{\delta ^4[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)\delta \rho (z_3)\delta \rho (z_4)}|_{\rho =\rho _0}\varphi _0(z_1)\varphi _0(z_2)\varphi _0(z_3)\varphi _0(z_4)\prod _{j=1}^{4}dz_j,`$ (25)
$`\beta =\frac{1}{4}\sum _{i=1}^{\infty }\frac{1}{\lambda _i}\int \frac{\delta ^4[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)\delta \rho (z_3)\delta \rho (z_4)}|_{\rho =\rho _0}\varphi _i(z_1)\varphi _i(z_2)\varphi _0(z_3)\varphi _0(z_4)\prod _{j=1}^{4}dz_j,`$ (26)
$`\gamma =\frac{1}{2}\sum _{i=1}^{\infty }\frac{1}{\lambda _i}\int \frac{\delta ^3[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)\delta \rho (z_3)}|_{\rho =\rho _0}\varphi _i(z_1)\varphi _i(z_2)\varphi _0(z_3)\prod _{j=1}^{3}dz_j,`$ (27)
$`\delta =\frac{1}{3!}\int \frac{\delta ^3[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)\delta \rho (z_3)}|_{\rho =\rho _0}\varphi _0(z_1)\varphi _0(z_2)\varphi _0(z_3)\prod _{j=1}^{3}dz_j.`$ (28)
## III Ginzburg-Landau Hamiltonian
To illustrate our approach, let us consider a simple 1D system with the Ginzburg-Landau Hamiltonian (1). The equilibrium density profile for the symmetric boundary conditions $`\rho (-l)=-1`$, $`\rho (+l)=1`$, $`l\to \infty `$ is $`\rho _0(z)=\mathrm{tanh}(z)`$. The eigenvalue equation (9) takes the differential form
$$-2\frac{d^2}{dz^2}\varphi _i(z)-4\varphi _i(z)+12\mathrm{tanh}^2(z)\varphi _i(z)=\lambda _i\varphi _i(z),$$
(30)
and the third and fourth-order functional derivatives of $`[\rho ]`$ are:
$$\frac{\delta ^3[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)\delta \rho (z_3)}|_{\rho =\rho _0}=24\mathrm{tanh}(z_1)\delta (z_1-z_2)\delta (z_2-z_3)$$
(31)
$$\frac{\delta ^4[\rho ]}{\delta \rho (z_1)\delta \rho (z_2)\delta \rho (z_3)\delta \rho (z_4)}|_{\rho =\rho _0}=24\delta (z_1-z_2)\delta (z_2-z_3)\delta (z_3-z_4).$$
(32)
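The spectrum of the operator in (30), $`-2d^2/dz^2-4+12\mathrm{tanh}^2(z)`$, can be checked directly: a finite-difference discretization reproduces the zero mode $`\lambda _0=0`$, the single bound state $`\lambda _1=6`$ used below, and a continuum starting at $`\lambda =8`$ (a numerical sketch):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Finite-difference spectrum of -2 phi'' - 4 phi + 12 tanh^2(z) phi = lam phi
# on [-15, 15] with Dirichlet boundary conditions.
L, N = 15.0, 3000
z = np.linspace(-L, L, N)
h = z[1] - z[0]
diag = 4.0/h**2 - 4.0 + 12.0*np.tanh(z)**2   # -2 d^2/dz^2 contributes 4/h^2
off = np.full(N - 1, -2.0/h**2)              # off-diagonal of -2 d^2/dz^2
lam = eigh_tridiagonal(diag, off, select='i', select_range=(0, 3))[0]
print(lam[:3])  # close to 0 (zero mode), 6 (bound state), ~8 (continuum edge)
```

The third eigenvalue sits just above 8 because the Dirichlet box discretizes the continuum into closely spaced levels.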
In fact, the expansion (15) is now exact for $`[\rho ]`$, since all higher-order variational derivatives are identically equal to zero. To proceed further we need to calculate $`\alpha `$, $`\beta `$, $`\gamma `$, and $`\delta `$ as defined in Eqs. (25)-(28). From parity considerations, the coefficients $`\gamma `$ and $`\delta `$ are zero.
The calculation of $`\alpha `$ is straightforward:
$$\alpha =\frac{9}{16}\int _{-\infty }^{+\infty }\frac{dz}{\mathrm{cosh}^8(z)}=\frac{18}{35}.$$
(33)
The coefficient $`\frac{9}{16}`$ appears from the normalization condition:
$$\int _{-\infty }^{+\infty }\varphi _i^2(z)dz=1.$$
(34)
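Since the normalized zero mode is $`\varphi _0(z)=(\sqrt{3}/2)\mathrm{cosh}^{-2}(z)`$, the coefficient $`\alpha `$ reduces to $`(9/16)\int \mathrm{sech}^8(z)dz`$, and a one-line quadrature confirms the value $`18/35`$ quoted in (33):

```python
import numpy as np
from scipy.integrate import quad

# alpha = (9/16) * \int sech^8(z) dz, where 9/16 comes from the fourth
# power of the normalization of phi_0(z) = (sqrt(3)/2) sech^2(z).
val, _ = quad(lambda z: (9.0/16.0)/np.cosh(z)**8, -40, 40)
print(val, 18/35)  # agree to quadrature accuracy
```

The analytic value follows from $`\int \mathrm{sech}^{8}z\,dz=32/35`$.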
To evaluate $`\beta `$ we need to know the eigenstates $`\varphi _i(z)`$ of Eq. (30):
$$\beta =\frac{9}{2}\sum _{i=1}^{\infty }\int _{-\infty }^{+\infty }\frac{\varphi _i^2(z)}{\lambda _i}\frac{dz}{\mathrm{cosh}^4(z)}.$$
(35)
The sum in Eq. (35) contains a contribution from one bound state ($`\varphi _1(z)=\sqrt{3/2}\mathrm{sinh}(z)/\mathrm{cosh}^2(z)`$, $`\lambda _1=6`$) and from the continuum states ($`i\ge 2`$). To evaluate the sum over the continuum, we use expressions obtained in (Eq. (3.15 - 3.20)):
$$\frac{9}{2}\sum _{i=2}^{\infty }\int _{-\infty }^{+\infty }\frac{\varphi _i^2(z)}{\lambda _i}\frac{dz}{\mathrm{cosh}^4(z)}=\frac{9}{4}\int _{-\infty }^{+\infty }\frac{dk}{2\pi }\frac{1}{(k^2+4)^2}\frac{1}{(k^2+1)}\int _{-\infty }^{+\infty }\frac{|\varphi _k(z)|^2}{\mathrm{cosh}^4(z)}dz$$
(36)
where the continuum eigenstates $`\varphi _k(z)`$ are
$$\varphi _k(z)=\mathrm{exp}(ikz)[1+k^2+3ik\mathrm{tanh}(z)-3\mathrm{tanh}^2(z)].$$
(37)
The normalization of $`\varphi _k(z)`$ is taken into account in the $`k`$-dependent factor of the outer integral in (36). Both integrations in (36) are straightforward and finally we obtain for $`\beta `$
$$\beta \approx \frac{6}{35}+0.022\approx 0.193$$
(38)
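The bound-state part of (35) can be verified the same way: with $`\varphi _1(z)=\sqrt{3/2}\,\mathrm{sinh}(z)/\mathrm{cosh}^2(z)`$ and $`\lambda _1=6`$ it evaluates to $`6/35\approx 0.171`$, matching the first term in (38) (a sketch):

```python
import numpy as np
from scipy.integrate import quad

# Bound-state contribution to beta in eq. (35):
#   (9/2) * (1/lambda_1) * \int phi_1(z)^2 / cosh^4(z) dz
# with phi_1(z) = sqrt(3/2) sinh(z)/cosh^2(z) and lambda_1 = 6.
integrand = lambda z: (9.0/2.0/6.0) * (1.5*np.sinh(z)**2/np.cosh(z)**4) \
                      / np.cosh(z)**4
val, _ = quad(integrand, -40, 40)
print(val, 6/35)  # agree to quadrature accuracy
```

The continuum contribution, 0.022, requires the normalized scattering states (37) and is not repeated here.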
The contribution from the continuum to the value of $`\beta `$ is $`\approx 13\%`$. We substitute these values of $`\alpha `$ and $`\beta `$ into Eq. (24), and, to first order in $`\beta /\sqrt{\alpha }`$, obtain for $`\langle a_0^2\rangle `$:
$$\langle a_0^2\rangle \approx \frac{\int x^2\mathrm{exp}\{-\alpha x^4\}(1-\beta x^2)dx}{\int \mathrm{exp}\{-\alpha x^4\}(1-\beta x^2)dx}=\frac{1}{\sqrt{\alpha }}\frac{\mathrm{\Gamma }^2(\frac{3}{4})-\frac{\beta }{\sqrt{\alpha }}\frac{\pi \sqrt{2}}{4}}{\pi \sqrt{2}-\frac{\beta }{\sqrt{\alpha }}\mathrm{\Gamma }^2(\frac{3}{4})}\approx 0.415$$
(39)
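The numbers in (39) are easy to reproduce by direct quadrature; the sketch below also evaluates the un-expanded weight $`\mathrm{exp}(-\alpha x^4-\beta x^2)`$ for comparison:

```python
import numpy as np
from scipy.integrate import quad

alpha, beta = 18/35, 0.193

def avg_x2(weight):
    # <x^2> under the given (not necessarily normalized) weight
    num, _ = quad(lambda x: x**2*weight(x), -np.inf, np.inf)
    den, _ = quad(weight, -np.inf, np.inf)
    return num/den

lin = avg_x2(lambda x: np.exp(-alpha*x**4)*(1 - beta*x**2))   # eq. (39)
full = avg_x2(lambda x: np.exp(-alpha*x**4 - beta*x**2))      # no expansion
bare = avg_x2(lambda x: np.exp(-alpha*x**4))                  # beta ignored
print(lin, full, bare)  # approx 0.415, 0.42, 0.471
```

The closeness of the first two numbers illustrates that the first-order expansion in $`\beta /\sqrt{\alpha }`$ is adequate here.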
It is interesting to note that if one ignores all cross-terms (the contribution from $`\beta `$) in Eq. (24), $`\langle a_0^2\rangle \approx 0.471`$. For the Ginzburg-Landau Hamiltonian, $`\frac{\beta }{\sqrt{\alpha }}\approx 0.27`$ plays the role of a small parameter in the approximations (20) - (25), as well as in (39). Hence there is a certain degree of numerical justification of the heuristic assumptions made in (20) - (25). To obtain the final result for the density-density correlation function (14) we use the following method. When the external potential $`V(z)=cz`$, linear in the coordinate $`z`$ measured from the interface location $`z=0`$, is added to the Hamiltonian, the zero eigenvalue becomes proportional to the coefficient in front of this term: $`\lambda _0\propto c`$. An expression for the correlation function $`G_c(z_1,z_2)`$ in the presence of an external potential $`V(z)`$ is obtained in (Eq. (4.17)), with $`V(z)\to c\mathrm{tanh}(z)`$ when $`c\to 0`$. It is shown that the zero eigenvalue satisfies $`\lambda _0/c\to 1`$ for $`c\to 0`$. Since for $`i\ge 1`$ all $`\lambda _i`$ go to constant limits when the external potential is turned off, the “truncated” correlation function $`\overline{g}`$ (without the zero eigenstate term) can be expressed as:
$$\overline{g}(z_1,z_2)\equiv \sum _{i=1}^{\infty }\frac{\varphi _i(z_1)\varphi _i(z_2)}{\lambda _i}=\lim _{c\to 0}\frac{d}{dc}cG_c(z_1,z_2).$$
(40)
To recover the “non-truncated” correlation function $`g(z_1,z_2)`$, we add to $`\overline{g}(z_1,z_2)`$ from (40) the correct contribution from the zero-order term,
$$g(z_1,z_2)=\overline{g}(z_1,z_2)+\frac{3}{4\mathrm{cosh}^2(z_1)\mathrm{cosh}^2(z_2)}\langle a_0^2\rangle .$$
(41)
with $`\langle a_0^2\rangle `$ given by (39). Sketches of the density-density correlation function (41) are presented in Figs. 1,2. It is straightforward to demonstrate that far from the interface ($`z_1,z_2\gg 1`$), the density-density correlation function (41) goes to the correct bulk phase limit,
$$g_{bulk}(z_1,z_2)=\frac{\mathrm{exp}(-2|z_1-z_2|)}{8}.$$
(42)
## IV 3D Calculation
The next logical step is to generalize this approach to a more realistic 3D system. For simplicity, we consider the same Ginzburg-Landau Hamiltonian as in (1)
$$\mathcal{H}[\rho ]=\int (|\nabla \rho |^2+\{1-\rho ^2(\stackrel{}{r})\}^2)𝑑\stackrel{}{r}$$
(43)
The equilibrium density profile for the symmetric boundary conditions $`\rho (z=-l)=-1`$, $`\rho (z=+l)=1`$, $`l\to \infty `$ is the same as in the 1D case, $`\rho _0(z)=\mathrm{tanh}(z)`$. The three-dimensional analog of the eigenvalue equation (30) reads
$$-2\mathrm{\Delta }\psi _i(\stackrel{}{r})-4\psi _i(\stackrel{}{r})+12\mathrm{tanh}^2(z)\psi _i(\stackrel{}{r})=\lambda _i\psi _i(\stackrel{}{r}),$$
(44)
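At $`\stackrel{}{k}=0`$, (44) reduces to the 1D operator $`-2\varphi ^{\prime \prime }-4\varphi +12\mathrm{tanh}^2(z)\varphi =\lambda \varphi `$, whose $`\lambda =0`$ solution is the translation mode $`\varphi _0(z)\propto \rho _0^{}(z)=1/\mathrm{cosh}^2(z)`$, the same $`\mathrm{cosh}^2`$ profile that appears in (41). A finite-difference check of this zero mode (a pure-Python sketch, not part of the calculation itself):

```python
import math

def residual(h=0.01, half_range=8.0):
    """Apply L = -2 d^2/dz^2 + (12 tanh^2 z - 4) to phi0 = sech^2 z
    with central differences; for the zero mode the result vanishes
    up to O(h^2) truncation error."""
    n = int(2 * half_range / h) + 1
    z = [-half_range + i * h for i in range(n)]
    phi = [1.0 / math.cosh(v) ** 2 for v in z]
    res = []
    for i in range(1, n - 1):
        d2 = (phi[i - 1] - 2.0 * phi[i] + phi[i + 1]) / h ** 2
        res.append(-2.0 * d2 + (12.0 * math.tanh(z[i]) ** 2 - 4.0) * phi[i])
    return max(abs(r) for r in res)
```

Halving the step reduces the residual, as expected for a second-order stencil.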
Since the “potential” term in (44) depends only on $`z`$, the variables can be separated, $`\psi _i(\stackrel{}{r})=\varphi _i(z)\xi _i(x,y)`$. We assume that in the $`xy`$ plane the system is confined to a square box of size $`L`$ with periodic boundary conditions. Then the $`xy`$ components of the eigenstates of (44) are the normalized plane waves $`\xi _\stackrel{}{k}(x,y)=(1/L)\mathrm{exp}(ik_xx)\mathrm{exp}(ik_yy)`$ with $`k_{\{x,y\}}=2\pi n_{\{x,y\}}/L`$, $`n_{\{x,y\}}=0,\pm 1,\pm 2\mathrm{}`$. Expanding density variations over the complete set of functions $`\psi _{i\stackrel{}{k}}(\stackrel{}{r})`$ and taking the thermal average, we obtain for the density-density correlation function (compare to (12)):
$`g(\stackrel{}{r}_1,\stackrel{}{r}_2)={\displaystyle \frac{1}{Z}}{\displaystyle \sum _{i,\stackrel{}{k}_1}}{\displaystyle \sum _{j,\stackrel{}{k}_2}}{\displaystyle \int \cdots \int a_{i\stackrel{}{k}_1}a_{j\stackrel{}{k}_2}\psi _{i\stackrel{}{k}_1}^{\ast }(\stackrel{}{r}_1)\psi _{j\stackrel{}{k}_2}(\stackrel{}{r}_2)}`$ (45)
$`\mathrm{exp}\{-\mathcal{H}[\rho _0+{\displaystyle \sum _{m,\stackrel{}{k}_3}}a_{m\stackrel{}{k}_3}\psi _{m\stackrel{}{k}_3}]\}{\displaystyle \prod _{n,\stackrel{}{k}_4}}da_{n\stackrel{}{k}_4}.`$ (46)
Similarly to (13), $`Z`$ is a normalization constant:
$$Z=\int \cdots \int \mathrm{exp}\{-\mathcal{H}[\rho _0+\sum _{m,\stackrel{}{k}_1}a_{m\stackrel{}{k}_1}\psi _{m\stackrel{}{k}_1}]\}\prod _{n,\stackrel{}{k}_2}da_{n\stackrel{}{k}_2}.$$
(47)
Here and below we use a shorthand notation: in sums, products, and subscripts, the symbol $`\stackrel{}{k}`$ denotes the pair of $`xy`$ eigenstate indexes $`\{n_x,n_y\}`$. After expanding (45) to second order in the density variation, an expression analogous to (14) is recovered:
$$g(\stackrel{}{r}_1,\stackrel{}{r}_2)=\sum _{i,\stackrel{}{k}}a_{i\stackrel{}{k}}^2\psi _{i\stackrel{}{k}}^{\ast }(\stackrel{}{r}_1)\psi _{i\stackrel{}{k}}(\stackrel{}{r}_2),$$
(48)
with $`a_{i\stackrel{}{k}}^2=1/(\lambda _i+2k^2)`$. As in (14), a similar problem arises for the $`\lambda _0=0`$ eigenstate: for $`L\to \infty `$ the sum on $`\stackrel{}{k}`$ diverges at the lower limit.
To avoid the divergence occurring in the $`\lambda _0=0`$ term, we suggest the same recipe as in the one-dimensional case: to continue the expansion of the Hamiltonian to the fourth order. For simplicity, we neglect mixing of the eigenstates with different $`\lambda _i`$ in the fourth-order term. Similarly to the 1D case, where the relative contribution of mixing is given by the ratio of coefficients $`\beta /\sqrt{\alpha }\approx 0.27`$ (33,38,39), the inclusion of this mixing here will not affect convergence but will slightly change the numerical values of the coefficients. For all $`\lambda _i\ne 0`$, the second-order result (48) is sufficient, hence we perform the fourth-order expansion only for the subspace of eigenstates with $`\lambda _0=0`$. The contribution from the $`\lambda _0=0`$ eigenstate to the density-density correlation function can be expressed as:
$`g_0(\stackrel{}{r}_1,\stackrel{}{r}_2)={\displaystyle \frac{1}{Z_0}}{\displaystyle \sum _{\stackrel{}{k}_1}}{\displaystyle \sum _{\stackrel{}{k}_2}}{\displaystyle \int \cdots \int a_{0\stackrel{}{k}_1}a_{0\stackrel{}{k}_2}\psi _{0\stackrel{}{k}_1}^{\ast }(\stackrel{}{r}_1)\psi _{0\stackrel{}{k}_2}(\stackrel{}{r}_2)}`$ (49)
$`\mathrm{exp}\{-\mathcal{H}[\rho _0+{\displaystyle \sum _{\stackrel{}{k}_3}}a_{0\stackrel{}{k}_3}\psi _{0\stackrel{}{k}_3}]\}{\displaystyle \prod _{\stackrel{}{k}_4}}da_{0\stackrel{}{k}_4},`$ (50)
$`Z_0={\displaystyle \int \cdots \int \mathrm{exp}\{-\mathcal{H}[\rho _0+\sum _{\stackrel{}{k}_1}a_{0\stackrel{}{k}_1}\psi _{0\stackrel{}{k}_1}]\}\prod _{\stackrel{}{k}_2}da_{0\stackrel{}{k}_2}}.`$ (51)
Using the orthogonality conditions for $`\xi _\stackrel{}{k}(x,y)`$ and Eq. (25) we obtain
$`g_0(\stackrel{}{r}_1,\stackrel{}{r}_2)={\displaystyle \frac{1}{Z_0}}{\displaystyle \sum _{\stackrel{}{k}_1}}{\displaystyle \int \cdots \int a_{0\stackrel{}{k}_1}^2\psi _{0\stackrel{}{k}_1}^{\ast }(\stackrel{}{r}_1)\psi _{0\stackrel{}{k}_1}(\stackrel{}{r}_2)}`$ (52)
$`\mathrm{exp}\{-2{\displaystyle \sum _{\stackrel{}{k}_2}}a_{0\stackrel{}{k}_2}^2k_2^2-{\displaystyle \frac{\alpha }{L^2}}[{\displaystyle \sum _{\stackrel{}{k}_3}}a_{0\stackrel{}{k}_3}^2]^2\}{\displaystyle \prod _{\stackrel{}{k}_4}}da_{0\stackrel{}{k}_4},`$ (53)
$`Z_0={\displaystyle \int \cdots \int \mathrm{exp}\{-2\sum _{\stackrel{}{k}_1}a_{0\stackrel{}{k}_1}^2k_1^2-\frac{\alpha }{L^2}[\sum _{\stackrel{}{k}_2}a_{0\stackrel{}{k}_2}^2]^2\}\prod _{\stackrel{}{k}_3}da_{0\stackrel{}{k}_3}},`$ (54)
where $`\alpha `$ is defined by (33). Direct evaluation of the functional integrals in (52) is impossible; however, a simple approximation will allow us to obtain a physically reasonable expression for $`g_0(\stackrel{}{r}_1,\stackrel{}{r}_2)`$. The main contribution to the integral in (52) comes from $`a_{0\stackrel{}{k}}`$ with small $`|\stackrel{}{k}|`$, so in the first approximation it is natural to introduce an upper cutoff $`C`$ in the sums on $`\stackrel{}{k}`$. In particular, we replace the infinite limits in all the sums and products on $`n_x`$ and $`n_y`$ in (52) by the finite cutoff $`|n_{x,y}|\le \sqrt{C}L/2`$. This corresponds to a system-size-independent cutoff for the wavevector $`\stackrel{}{k}`$, with $`|k_{x,y}|\le \sqrt{C}\pi `$. We select $`C`$ in such a way that the term $`2\sum _\stackrel{}{k}a_{0\stackrel{}{k}}^2k^2`$, quadratic in $`a_{0\stackrel{}{k}}`$, can be neglected in the exponent, which decouples the integration over $`da_{0\stackrel{}{k}}`$ from the summation on $`n_x,n_y`$. Besides neglecting the quadratic term, we remove the functions $`\psi _{0\stackrel{}{k}}(\stackrel{}{r})`$ from (52), and call the remaining expression $`A`$.
$$A\equiv \frac{{\displaystyle \sum _{k_1}^{N}}{\displaystyle \int \cdots \int a_{k_1}^2\mathrm{exp}\{-\frac{\alpha }{L^2}[\sum _{k_2=1}^{N}a_{k_2}^2]^2\}\prod _{k_3}^{N}da_{k_3}}}{{\displaystyle \int \cdots \int \mathrm{exp}\{-\frac{\alpha }{L^2}[\sum _{k_4=1}^{N}a_{k_4}^2]^2\}\prod _{k_5}^{N}da_{k_5}}},$$
(55)
with $`N=CL^2`$. This expression can be evaluated by using $`N`$-dimensional spherical coordinates, $`\sum _k^Na_k^2\equiv R^2`$:
$$A=\frac{{\displaystyle \int R^2\mathrm{exp}\{-\frac{\alpha }{L^2}R^4\}R^{N-1}\mathrm{\Omega }_N𝑑R}}{{\displaystyle \int \mathrm{exp}\{-\frac{\alpha }{L^2}R^4\}R^{N-1}\mathrm{\Omega }_N𝑑R}}=\frac{L}{\sqrt{\alpha }}\frac{\mathrm{\Gamma }(\frac{N+2}{4})}{\mathrm{\Gamma }(\frac{N}{4})}=\frac{L}{2}[\sqrt{\frac{N}{\alpha }}+𝒪(\frac{1}{\sqrt{N}})]$$
(56)
Here $`\mathrm{\Omega }_N=2\pi ^{N/2}/\mathrm{\Gamma }(N/2)`$ is the area of the $`N`$-dimensional unit sphere. By inspection, one can identify $`A`$ as the sum of the first $`N`$ terms of $`a^2`$, $`A=\sum _{l=1}^{N}a^2`$. Therefore
$$a^2=\frac{A}{N}=\frac{1}{2}\sqrt{\frac{1}{\alpha C}}$$
(57)
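Both the Gamma-function ratio in (56) and the limit (57) can be verified numerically. In the sketch below (pure Python), `lgamma` keeps the evaluation stable at large $`N`$, and the parameter values $`\alpha =0.5`$, $`C=0.2`$, $`L=1000`$ are arbitrary test choices:

```python
import math

def gamma_ratio(n):
    # Gamma((N+2)/4) / Gamma(N/4), as in Eq. (56)
    return math.exp(math.lgamma((n + 2) / 4.0) - math.lgamma(n / 4.0))

def a2_mean(alpha, c, big_l):
    # <a^2> = A / N with A = (L/sqrt(alpha)) * gamma_ratio(N), N = C L^2
    n = c * big_l ** 2
    return big_l / math.sqrt(alpha) * gamma_ratio(n) / n
```

For large $`N`$ the ratio approaches $`\sqrt{N}/2`$, so `a2_mean` reproduces the right-hand side of (57).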
Now we return to Eq. (52) and replace one sum in the fourth-order term in the exponential by $`\sum _{i=1}^{N}a^2=Na^2`$:
$$\frac{\alpha }{L^2}[\sum _\stackrel{}{k}a_{0\stackrel{}{k}}^2]^2\approx \frac{\alpha }{L^2}[\sum _\stackrel{}{k}a_{0\stackrel{}{k}}^2]Na^2=\frac{\sqrt{\alpha C}}{2}[\sum _\stackrel{}{k}a_{0\stackrel{}{k}}^2]$$
(58)
After this substitution, Eq. (52) becomes a product of Gaussian integrals and its evaluation becomes trivial:
$$g_0(\stackrel{}{r}_1,\stackrel{}{r}_2)\approx \varphi _0(z_1)\varphi _0(z_2)\sum _\stackrel{}{k}\stackrel{~}{a}_\stackrel{}{k}^2\frac{\mathrm{exp}[i(k_xx+k_yy)]}{L^2},$$
(59)
where the “improved” average values $`\stackrel{~}{a}_\stackrel{}{k}^2`$ are given by
$$\stackrel{~}{a}_\stackrel{}{k}^2\equiv \frac{\int a_\stackrel{}{k}^2\mathrm{exp}[-a_\stackrel{}{k}^2(2k^2+\sqrt{\alpha C}/2)]da_\stackrel{}{k}}{\int \mathrm{exp}[-a_\stackrel{}{k}^2(2k^2+\sqrt{\alpha C}/2)]da_\stackrel{}{k}}=\frac{1}{4k^2+\sqrt{\alpha C}}$$
(60)
Taking the limit $`L\to \infty `$ and replacing the summation $`\sum _{n_x,n_y}`$ in (59) by the integration $`(L/2\pi )^2\int \int 𝑑k_x𝑑k_y`$, we obtain
$$g_0(\stackrel{}{r}_1,\stackrel{}{r}_2)\approx \varphi _0(z_1)\varphi _0(z_2)\frac{1}{(4\pi )^2}\int _0^{\infty }\frac{kdk}{k^2+\sqrt{\alpha C}/4}J_0(kr_{\perp })=\varphi _0(z_1)\varphi _0(z_2)\frac{1}{(4\pi )^2}K_0[\frac{(\alpha C)^{1/4}r_{\perp }}{2}].$$
(61)
Here $`J_0`$ and $`K_0`$ are the Bessel and modified Hankel functions of zero order, and $`r_{\perp }\equiv \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}`$. For large positive $`q`$ one has $`K_0(q)=\sqrt{\pi /2q}\mathrm{exp}(-q)[1+𝒪(1/q)]`$. Consequently, we can identify $`(\alpha C)^{1/4}/2`$ with the previously introduced upper cutoff for the wavevector $`k`$, i.e., with $`\sqrt{C}\pi `$. This allows us to express the constant $`C`$ through the known parameters of the system, $`C=\alpha /(2\pi )^4`$. Finally, we write for the zero-eigenstate contribution to the correlation function:
$$g_0(\stackrel{}{r}_1,\stackrel{}{r}_2)\approx \varphi _0(z_1)\varphi _0(z_2)\frac{1}{(4\pi )^2}K_0[\frac{\sqrt{\alpha }}{4\pi }\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}].$$
(62)
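The identification of the cutoff above relies on the large-$`q`$ asymptotics of $`K_0`$, which can be checked against the standard integral representation $`K_0(q)=\int _0^{\infty }\mathrm{exp}(-q\mathrm{cosh}t)𝑑t`$. A sketch using only the Python standard library:

```python
import math

def k0(q, tmax=12.0, n=120001):
    """Modified Hankel (Macdonald) function K_0 via its integral
    representation K0(q) = int_0^inf exp(-q cosh t) dt (trapezoid)."""
    h = tmax / (n - 1)
    s = 0.0
    for i in range(n):
        c = 0.5 if i in (0, n - 1) else 1.0
        s += c * math.exp(-q * math.cosh(i * h))
    return s * h

def k0_asymptotic(q):
    # leading large-q behaviour; relative error is O(1/q)
    return math.sqrt(math.pi / (2 * q)) * math.exp(-q)
```

The first correction to the asymptotic form is negative ($`-1/8q`$), so the exact value lies slightly below it.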
The contribution from the other eigenstates, $`\stackrel{~}{g}(\stackrel{}{r}_1,\stackrel{}{r}_2)`$, is obtained by straightforward integration:
$$\stackrel{~}{g}(\stackrel{}{r}_1,\stackrel{}{r}_2)=\sum _{j=1}\varphi _j(z_1)\varphi _j(z_2)\frac{1}{2(2\pi )^2}K_0[\sqrt{\frac{\lambda _j}{2}}\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}]$$
(63)
## V Conclusion
We show that taking into account the fourth-order terms in the expansion of the Hamiltonian in the calculation of the density-density correlation function makes the previously divergent (for $`D\le 3`$) zero-eigenstate term convergent. We also note that the macroscopic shifts of the interfacial profile are described not by the zero and other bound eigenstates, but by a combination of low-lying continuum eigenstates of the Hamiltonian second-derivative matrix. The inclusion of the convergent zero-order term allows us to improve the accuracy of the calculation of the correlation function in the vicinity of the interface. Our approach could be relevant for experimental results obtained under microgravity conditions using a local analytical method, e.g., scattering of a narrow beam focused on the interface.
## VI Acknowledgments
This work was done in the research group of Prof. B. Widom as part of a program supported by the U.S. National Science Foundation and the Cornell Center for Materials Research. I thank Prof. Widom for having suggested this problem, for stimulating discussions during the course of the work, and for comments on the manuscript.
# Unidentified 3EG gamma-ray sources at low galactic latitudes
## 1 Introduction
The existence of a galactic population of $`\gamma `$-ray sources has been known since the days of the COS B experiment (Swanenburg et al. 1981). Montmerle (1979) showed that about 50 % of the unidentified COS B detections lie in regions containing young objects, like supernova remnants (SNRs) and massive OB stars. He suggested that the $`\gamma `$-ray emission could stem from $`\pi ^0`$-decays resulting from hadronic interactions of high-energy protons (or nuclei) with ambient matter. These protons would be locally injected by young stars in the SNR shocks, where they would be diffusively accelerated up to high energies by the Fermi mechanism.
Cassé & Paul (1980) argued that particle acceleration at the terminal shock of strong stellar winds alone could be responsible for the $`\gamma `$-ray sources without the mediation of the SNR shock waves advocated by Montmerle. Gamma-ray production in shocks generated by massive stars has been discussed since then, and from different points of view, by Völk & Forman (1982), White (1985), Chen & White (1991a,b), and White & Chen (1992), among others.
Since 1991, with the advent of the Energetic Gamma Ray Experiment Telescope (EGRET) onboard the Compton satellite, the observational data on galactic $`\gamma `$-ray sources have been dramatically improved. Two of the previously unidentified COS-B sources, Geminga and PSR 1706-44, are now known to be pulsars. The detection of pulsed high-energy emission from other sources (there are seven $`\gamma `$-ray pulsars so far, see Thompson 1996 for a review) and the identification of Geminga as a radio-quiet object have prompted several authors to explore the possibility that all unidentified low-latitude sources in the Second EGRET (2EG) catalog (Thompson et al. 1995, 1996) are pulsars (with the exception of a small extragalactic component which is seen through the Galaxy). In particular, Kaaret & Cottam (1996) have used OB associations as pulsar tracers, finding a significant positional correlation with 2EG unidentified sources. A similar study, including SNRs and HII regions (the latter considered as tracers of star-forming regions and, consequently, of possible pulsar concentrations), has been carried out by Yadigaroglu & Romani (1997), who concluded that the pulsar hypothesis for the 2EG sources is consistent with the available information.
However, recent spectral analyses made by Merck et al. (1996) and Zhang & Cheng (1998) clearly show that several 2EG sources are quite at odds with the pulsar explanation. Time variability in the $`\gamma `$-ray flux of many sources also argues against a unique population behind the unidentified galactic $`\gamma `$-ray detections (McLaughlin et al. 1996, Mukherjee et al. 1997).
Sturner & Dermer (1995) and Sturner et al. (1996) have investigated the possible association of $`\gamma `$-sources with SNRs, finding significant statistical support for the idea that some remnants could be $`\gamma `$-ray emitters. Esposito et al. (1996) have shown that five 2EG sources are coincident with well known SNRs and, more recently, Combi et al. (1998a) have detected a new shell-type SNR at the position of 2EGS J1703-6302, as well as an interacting compact HI cloud, through multiple radio observations, clearly demonstrating, in this way, that at least some EGRET detections are physically related to SNRs.
With the publication of the Third EGRET (3EG) catalog of high-energy gamma-ray sources (Hartman et al. 1999), which includes data from Cycles 1 to 4 of the space mission, new and valuable elements become available to deepen the quest for the nature of the unidentified $`\gamma `$-ray sources. The new catalog lists 271 point sources, including 170 detections with no conclusive counterparts at other wavelengths. Of the unidentified sources, 74 are located at $`|b|<10^o`$ (this number can be extended to 81 if we include sources with their 95 % confidence contours reaching latitudes $`|b|<10^o`$). This means that the number of possible galactic unidentified sources has now nearly doubled with respect to the 2EG catalog.
Can these new sources be associated with pulsars? How many sources could be ascribed to known SNRs? Is there new statistical evidence for the identification of some detections in the 3EG catalog with massive stars that generate very strong winds? In the present paper we investigate these questions in the light of the new $`\gamma `$-ray data of the 3EG catalog. We use numerical simulations (constrained by adequate boundary conditions) of $`\gamma `$-ray source populations to weigh the statistical significance of the different levels of positional coincidences determined for diverse types of candidates such as individual massive stars (Wolf-Rayet and Of stars with strong stellar winds), SNRs, and OB associations.
The contents of the paper are as follows. In the next section we describe the numerical procedure implemented for the analyses. Sections 3, 4, and 5 deal with the possible association of unidentified 3EG sources with stars, SNRs, and star-forming regions considered as pulsar tracers, respectively. In Section 6 we present some further comments and, finally, in Section 7, we draw our conclusions.
## 2 Numerical simulations and statistical results
With the aim of finding the positional coincidences between 3EG unidentified sources at $`|b|<10^o`$ and different populations of galactic objects, we have developed a computer code that determines the angular distance between two points in the sky, taking into account the positional uncertainties in each of them. The code can be used to obtain a list of $`\gamma `$-ray sources with error boxes (here assumed as the 95 % confidence contours given by the 3EG catalog) overlapping different kinds of objects, both extended (like SNRs or OB associations) and point-like (like stars).
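A minimal version of such a positional test can be sketched as follows (Python; the haversine great-circle formula is standard, and treating the 95 % confidence contour as a circle of radius `r95` is a simplification of the actual contours):

```python
import math

def ang_sep(l1, b1, l2, b2):
    """Great-circle separation in degrees between two positions
    given in galactic coordinates (l, b), also in degrees."""
    l1, b1, l2, b2 = (math.radians(x) for x in (l1, b1, l2, b2))
    s = (math.sin((b2 - b1) / 2) ** 2
         + math.cos(b1) * math.cos(b2) * math.sin((l2 - l1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(s)))

def coincident(src, obj):
    """src = (l, b, r95): source position and 95% error radius (deg).
    obj = (l, b, size): counterpart position and angular radius
    (zero for a point-like object such as a star)."""
    return ang_sep(src[0], src[1], obj[0], obj[1]) <= src[2] + obj[2]
```

The haversine form is preferred over the plain spherical cosine law because it stays numerically stable for the small separations relevant here.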
We ran the code with the 81 unidentified EGRET sources at galactic latitudes $`|b|<10^o`$ and complete lists of Wolf-Rayet (WR) stars, Of stars, SNRs, and OB associations. These lists were obtained from van der Hucht et al. (1988), Cruz-González et al. (1974), Green (1998), and Mel’nik & Efremov (1995), respectively. We have found that 6 $`\gamma `$-ray sources of the 3EG catalog are positionally coincident with WR stars, 4 with Of stars, 22 with SNRs, and 26 with OB associations.
In order to estimate the statistical significance of these coincidences, we have simulated a large number of sets of EGRET detections, retaining for each simulated position the original uncertainty in its galactic coordinates. Specifically, in each case we have generated by computer 1500 populations of 81 $`\gamma `$-ray sources through rotations on the celestial sphere, displacing a source with original coordinates $`(l,b)`$ to a new position $`(l^{},b^{})`$. The new pair of coordinates is obtained from the previous one by setting $`l^{}=l+R_1\times 360^o`$. Here, $`R_1`$ is a random number between 0 and 1, which repeats neither from source to source nor from set to set. Since we are simulating a galactic source population and not arbitrary sets at $`|b|<10^o`$, we impose that the new distribution (i.e. each of the simulated sets) retains the form of the actual histogram in latitude of the unidentified 3EG sources, with 1<sup>o</sup> or 2<sup>o</sup>-binning. The histogram, for 1<sup>o</sup>-binning, is shown in Figure 1.
In order to enforce the mentioned constraint, we make $`b^{}=b+R_2\times 1^o`$, and then, if the integer part of $`b^{}`$ is greater than the integer part of $`b`$ or if the signs of $`b^{}`$ and $`b`$ differ, we replace $`b^{}`$ by $`b^{}-1^o`$. Here, again, $`R_2`$ is a random number between 0 and 1. This ensures that the new set of artificial positions preserves the actual histogram in latitude at 1<sup>o</sup>-binning. Similarly, a 2<sup>o</sup>-binning distribution can be maintained. Both sets of simulations provide comparable results.
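The latitude-preserving scrambling step can be sketched compactly (Python; the truncating behaviour of `int()` matches the "integer part" rule described in the text):

```python
import random

def scramble(l, b, rng=random):
    """One simulated position: longitude fully randomized, latitude
    shifted within its original 1-degree bin (same truncated integer
    part and the same sign), following the rule in the text."""
    l_new = (l + rng.random() * 360.0) % 360.0
    b_new = b + rng.random() * 1.0
    if int(b_new) > int(b) or (b_new >= 0) != (b >= 0):
        b_new -= 1.0
    return l_new, b_new
```

Applied to every source of a set, this reproduces the latitude histogram exactly while fully randomizing the longitudes.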
The unidentified 3EG sources have, additionally, a non-uniform distribution in galactic longitude, showing a concentration towards the galactic center. However, when doing the simulations, we imposed no constraints on longitude because we wanted to allow for all kinds of possible galactic populations.
Once we had performed 1500 simulations for each type of counterpart (a larger number of simulations does not significantly modify the results), we estimated the level of positional coincidences between each simulated set and the different galactic populations under consideration. From these results we obtained an average expected value of chance associations and a corresponding standard deviation. The probability that the observed association level had happened by chance was then evaluated assuming a Gaussian distribution of the outputs. The results of this study are shown in Table 1, where we list, from left to right, the type of object under study, the number of actual positional coincidences, the number of expected chance coincidences according to 1<sup>o</sup>-binning simulations, the probability that the actual coincidences are due to chance, and the corresponding results for simulations with 2<sup>o</sup>-binning.
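The final step, converting the simulated counts into a chance probability under the Gaussian assumption, amounts to a one-sided tail integral. A sketch (Python; the mean and deviation used in the example are hypothetical placeholders, not the actual simulation outputs behind Table 1):

```python
import math

def chance_prob(observed, sim_mean, sim_std):
    """One-sided Gaussian tail probability of finding >= `observed`
    coincidences by chance, given the mean and standard deviation
    of the simulated sets; returns (probability, significance)."""
    z = (observed - sim_mean) / sim_std
    return 0.5 * math.erfc(z / math.sqrt(2)), z

# e.g. chance_prob(22, 10.0, 2.0) -> a 6-sigma result (hypothetical inputs)
```

At the $`6\sigma `$ level this gives a probability of order $`10^{-9}`$, comparable in magnitude to the SNR entry discussed below.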
From Table 1, it can be seen that there is a strong statistical correlation between unidentified $`\gamma `$-ray sources of the 3EG catalog and SNRs (at the $`6\sigma `$ level) as well as with OB associations (at the $`4\sigma `$ level). Regarding the stars, we find that there is a marginally significant correlation with WR and Of stars ($`3\sigma `$). Remarkably, the probability of a pure chance association for SNRs is as low as 5.4$`\times 10^{-10}`$ according to the 2<sup>o</sup>-binning simulations ($`1.6\times 10^{-8}`$ for 1<sup>o</sup>-binning). For the stars, we obtain probabilities in the range $`10^{-2}`$ to $`10^{-3}`$, which are suggestive but not overwhelming.
In the next sections we explore these results in more detail.
## 3 Massive stars
The case for possible association of unidentified EGRET sources with WR stars was previously presented –using data from the 2EG catalog– by Raul & Mitra (1997). In the former catalog, there are 37 unidentified sources at $`|b|<10^o`$. Raul and Mitra proposed, on the basis of positional correlation, that 8 of these sources could be produced by WR stars. Their analysis of the possible chance occurrence of these associations, which was purely analytic and assumed equiprobability for each position on the sky, yielded an a priori expectation of $`10^{-4}`$. Their results are notably modified when the 3EG catalog is considered. Changes in position and smaller positional uncertainties reduce the number of positional coincidences despite the remarkable increment in the number of sources. Additionally, a more rigorous treatment in the probability analysis has the effect of significantly enhancing the probability of chance association (see Table 1).
In Tables 2 and 3 we list the 3EG sources positionally coincident with WR and Of stars, respectively. As far as we are aware, this is the first time that a statistical study of the correlation between Of stars and EGRET detections has been carried out, even though the possibility of $`\gamma `$-ray production in this kind of object has been extensively discussed in the literature (e.g. Völk & Forman 1982). In the tables we provide, from left to right, the 3EG source name, the measured (summed over Cycles 1 to 4) $`\gamma `$-ray flux, the photon spectral index $`\mathrm{\Gamma }`$ ($`N(E)\propto E^{-\mathrm{\Gamma }}`$), the star name, the angular distance from the star to the $`\gamma `$-ray source best position, the distance to the star, the terminal wind velocity, the mass-loss rate, the expected intrinsic $`\gamma `$-ray luminosity assuming the star’s distance (the minimum one when more than one star lies in the field) and isotropic emission with average index $`\mathrm{\Gamma }=2`$, and, in the last column, any other positional coincidence revealed in our study.
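The intrinsic luminosities in the tables follow from the measured photon flux under the isotropy and $`\mathrm{\Gamma }=2`$ assumptions stated above, $`L_\gamma =4\pi d^2F_E`$. A sketch of the conversion (Python; the band edges of 100 MeV to 10 GeV and the example flux are illustrative assumptions, not values taken from the tables):

```python
import math

KPC_CM = 3.086e21     # cm per kiloparsec
MEV_ERG = 1.602e-6    # erg per MeV

def isotropic_luminosity(photon_flux, d_kpc, e1=100.0, e2=1.0e4):
    """L_gamma in erg/s for an isotropic source at distance d_kpc.
    photon_flux: integral photon flux above e1 (photons cm^-2 s^-1).
    For N(E) ~ E^-2 the energy flux in the band [e1, e2] (MeV) is
    photon_flux * e1 * ln(e2/e1)."""
    f_e = photon_flux * e1 * math.log(e2 / e1) * MEV_ERG
    return 4 * math.pi * (d_kpc * KPC_CM) ** 2 * f_e
```

For a flux of a few $`\times 10^{-7}`$ photons cm<sup>-2</sup> s<sup>-1</sup> at kiloparsec distances this yields luminosities of order $`10^{34}`$ erg s<sup>-1</sup>, the scale discussed below.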
From Table 2, it can be seen that most of the possible associations claimed by Raul & Mitra (1997) are no longer viable. Only WR stars 37-39, 138, and 142 of their list survive our analysis.
In order to compare Raul and Mitra’s results with ours, it is worth remembering that when testing for positional coincidences they assumed an angular uncertainty of 1<sup>o</sup> for all EGRET sources. Had we made such an assumption, we would have found 20 positional coincidences in the 3EG catalog (i.e. 24.7 % of the unidentified low-latitude sources). However, due to the new reduced EGRET errors, just 7 % of these sources are now positionally consistent with WR stars, with an a priori probability of $`10^{-3}`$ of occurring by chance. In addition, there are 4 sources with Of stars within their error boxes. The probability that these latter associations result just by chance is $`10^{-2}`$.
Several mechanisms have been proposed to generate $`\gamma `$-rays in the vicinity of massive stars with strong winds. A compact $`\gamma `$-ray source could be the result of $`\pi ^0`$-decays which occur as a consequence of hadronic interactions between relativistic protons or nuclei, locally accelerated by shocks arising from line-driven instabilities in the stellar wind, and thermal ions (White & Chen 1992). The same embedded shocks can also accelerate electrons that could provide an additional source of (inverse Compton) $`\gamma `$-ray emission through the upscattering of stellar UV photons (Chen & White 1991a). Synchrotron losses of these energetic electrons can produce observable nonthermal radio emission, as detected in several massive stars (e.g. Abbott et al. 1984).
A different region where the $`\gamma `$-rays might be generated is at the interface between the supersonic wind flow and the interstellar medium. There, the terminal shock can reaccelerate ions up to high energies and, if a sufficient concentration of ambient matter is available (e.g. small clouds or swept-up material), nuclear $`\gamma `$-rays copious enough to be detected could be produced (Cassé & Paul 1980). Völk & Forman (1982) have argued that stellar energetic particles lose too much energy in the expanding wind to be efficiently accelerated at the terminal shock, in such a way that local injection (e.g. from a nearby star) is required. However, White (1985) showed that the shocks embedded in the highly unstable radiatively driven winds can be responsible for much higher initial energies and partial reacceleration of the particles during the adiabatic expansion, so isolated massive stars could also be efficient $`\gamma `$-ray emitters if they have sufficiently strong winds.
The a posteriori analysis of our association results shows that three stars are of special interest as possible counterparts of EGRET sources: WR 140, WR 142, and Cyg OB2 No.5. The first one is a binary system composed of a WC 7 plus an O4-5 star. The region of stellar wind collision seems to be particularly suitable for producing high-energy emission. Eichler & Usov (1993) have studied the particle acceleration in this system, concluding that it should be a strong $`\gamma `$-ray source. Based on observational data on WR 140, they predicted a $`\gamma `$-ray luminosity in the range $`5\times 10^{32}`$ to $`2.5\times 10^{35}`$ erg s<sup>-1</sup>, in good agreement with the measured EGRET flux from 3EG J2022+4317 and the distance to the system (see Table 2).
The second promising star, WR 142, is one of the five WR stars which present strong OVI lines without being associated with planetary nebulae. The large Doppler broadening of all spectral lines reveals the existence of a very high wind velocity of $`\sim 5200`$ km s<sup>-1</sup>, which doubles what is usually observed in WR stars (Polcaro et al. 1991). The identification of WR 142 with the COS-B source 2CG 075+00 was proposed in Polcaro et al.’s (1991) paper, where they considered the $`\gamma `$-ray production in the strong stellar wind. In the 3EG catalog the star position is consistent with the source 3EG J2021+3716. If the star is responsible for the observed $`\gamma `$-ray flux, its intrinsic luminosity would be $`\sim 3\times 10^{34}`$ erg s<sup>-1</sup>, which is of the order of what is expected from White & Chen’s (1992) hadronic model for isolated stars.
Finally, the binary system Cyg OB2 No.5 seems to be another interesting candidate for producing $`\gamma `$-rays. Although usually considered a contact binary formed by two O7 I stars (e.g. Torres-Dodgen et al. 1991), recent observations suggest that the secondary star in this system would be of spectral type B0 V–B2 V (Contreras et al. 1997). Variable radio emission was detected by several authors (e.g. Persi et al. 1990), with timescales of $`\sim 7`$ years. A weak radio component of nonthermal nature has been observed with the VLA at a separation of $`0.8^{\prime \prime }`$ from the main radio source, which is thermal and coincident with the primary optical component (Contreras et al. 1997). The radio variability in Cyg OB2 No.5 has been interpreted in terms of a colliding wind model by Contreras et al. (1997), who suggested that the weaker radio component is not a star but a bow shock produced by the wind collision. In this shock, electrons can be locally accelerated up to relativistic energies, yielding the synchrotron radiation that constitutes the secondary nonthermal source. Additionally, $`\gamma `$-rays are generated through inverse Compton losses in the UV radiation field of the secondary (Eichler & Usov 1993). The same Fermi mechanism that accelerates the electrons should also operate on protons, providing a source of energetic ions that could contribute higher-energy $`\gamma `$-ray emission, as in the case involving WR stars. For strong shocks, the test particle theory predicts that the relativistic protons will have a differential energy spectrum given by $`N(E)\propto E^{-2}`$. The $`\pi ^0`$-decay $`\gamma `$-rays resulting from $`pp`$ collisions should preserve the shape of the original proton spectrum, in such a way that at energies above 100 MeV the photon spectral index would be $`\mathrm{\Gamma }\approx 2`$, as observed by EGRET.
## 4 Supernova remnants
Possible correlation between SNRs and unidentified EGRET sources, on the basis of two-dimensional positional coincidence, has been proposed since the release of the first EGRET (1EG) catalog. Sturner & Dermer (1995) suggested that some of the unidentified sources lying at galactic latitudes $`|b|<10^o`$ might be associated with SNRs: of 37 detections, 13 overlapped SNR positions in the 1EG catalog. However, their own analysis showed that the statistical significance was not high enough to provide strong confidence. Chance association was just 1.8$`\sigma `$ away from the obtained result. Using the 2EG catalog, Sturner et al. (1996) repeated the analysis, and showed that 95% confidence contours of 7 unidentified EGRET sources overlapped SNRs, some of them appearing to be in interaction with molecular clouds. Similar results were independently reported by Esposito et al. (1996), although neither of them assessed the chance probability of these 2EG-catalog findings. Considering the 1EG catalog, 35% of the unidentified sources were positionally related to SNRs. This dropped to 21.8% in the 2EG catalog, and is currently about 27%. One important point to take into account when evaluating these differences is not only to consider the evolution of the EGRET catalog but also that of Green’s supernova remnant catalog. At the time of the first studies by Sturner & Dermer (1995), the supernova catalog contained 182 SNRs. This grew to 194 in 1996, and currently it lists 220 remnants.
In Table 4 we show the 3EG sources that are positionally consistent with SNRs listed in the latest version of Green’s catalog. From left to right we provide the $`\gamma `$-ray source name, the measured flux, the photon spectral index $`\mathrm{\Gamma }`$, the SNR identification, the angular distance between the best $`\gamma `$-ray source position and the center of the remnant, the size of the remnant in arcminutes, the SNR type (S for shell, F for filled-centre, and C for composite), and other positional coincidences found in our study. The table contains 22 possible associations, with a completely negligible a priori probability ($`10^{-8}`$) of being purely by chance. It is important to remember that this list is formed entirely of positional coincidences with currently catalogued SNRs. However, the diffuse galactic disk nonthermal emission, originating in the interaction of the leptonic component of cosmic rays with the galactic magnetic field, is veiling many remnants of low surface brightness. Recent observational studies using filtering techniques in the analysis of radio data have revealed many new SNR candidates that are not included in Green’s catalog (e.g. Duncan et al. 1995, Combi & Romero 1998, Combi et al. 1998b, 1999). If these candidates were included in our analysis, a larger number of associations would result.
The intrinsic $`\gamma `$-ray luminosity of SNRs, stemming from interactions between cosmic rays reaccelerated at the supernova shock front and swept-up material, is expected to be rather low (Drury et al. 1994). However, if a cloud is near the particle acceleration site, the enhanced nuclear cosmic rays from the shock can “illuminate” the cloud through $`\pi ^0`$-decays, yielding a compact $`\gamma `$-ray source (Aharonian et al. 1994). Such a scenario has recently been studied by Combi et al. (1998a) in relation to the source 3EG J1659-6251 (previously 2EGS J1703-6302).
## 5 OB associations
In Table 5 we list the unidentified 3EG sources that are positionally coincident with the OB associations in the catalog by Mel’nik & Efremov (1995). Our results can be compared with the similar work by Kaaret & Cottam (1996). Using the 2EG catalog, they had already found a statistically significant correlation: 9 of the unidentified 2EG sources have position contours overlapping an OB association and another 7 lie within $`1^o`$ angular distance. These results are fully compatible with ours. Here, we find 26 superpositions out of 81 unidentified 3EG sources (32%), 5$`\sigma `$ away from what is expected from pure chance association. The mean angular separation between the centroid of the OB association and the EGRET source is 1.5<sup>o</sup>, although most sources are at angular distances of less than $`1^o`$.
The differences between the two methods of analysis are worth commenting on. In particular, we decided, for completeness, to keep the nearby association Sco 2A despite its proximity. Any pulsar traced by it must have a negligible proper velocity in order to be consistent with its angular size, but its existence cannot be ruled out on a priori grounds alone. To calculate the chance superposition probability, Kaaret and Cottam studied EGRET sources just within \[-5<sup>o</sup>, 5<sup>o</sup>\] in galactic latitude (only 25 sources of the 129 unidentified ones present in the 2EG catalog), and generated sample locations using two Gaussian distributions, in longitude and latitude, with central values and deviations provided by the actual positions of the unidentified sources. They also used a galactic model to map the gas distribution. This procedure yields almost the same results as the method we follow (chance association probability around $`10^{-5}`$). Interestingly, although all EGRET sources changed their positions from the 2EG to the 3EG catalog and a significant number of new detections has been added, the percentage and the confidence level of the positional coincidences remain almost the same in both studies.
All known $`\gamma `$-ray pulsars are young objects ($`10^6`$ yr) with spectral indices smaller than 2.15 (Crab’s) and a trend for spectral hardening with characteristic age (Fierro et al. 1993). From Table 5, if we consider just sources coincident only with OB associations and exclude the three sources with very steep indices (3EG J 1308-6112, 3EG J 1718-3313, and 3EG J 1823-1314), we get $`<\mathrm{\Gamma }>=2.07`$ and $`1\sigma =0.12`$ for the 8 remaining EGRET sources. These are the most promising candidates for pulsar associations. We have marked them with a star symbol in Table 5.
## 6 Further comments
In Figure 2 we show a plot of the $`\gamma `$-ray luminosity (assuming isotropic emission) of the unidentified sources coincident with OB associations against the estimated distance to the associations. Different symbols indicate whether there are additional positionally coincident objects for each $`\gamma `$-ray source. The solid horizontal line represents the luminosity of the Vela pulsar. A similar plot of luminosity versus photon spectral index $`\mathrm{\Gamma }`$ is shown in Figure 3. The first plot shows that the luminosity distribution of this subset of 3EG sources is consistent with the observed distribution for $`\gamma `$-ray pulsars when emission into $`4\pi `$ sr is assumed (see Kaaret & Cottam 1996). Figure 3 shows, however, that not all sources superimposed on OB associations present the spectral signature expected from pulsars: they should concentrate in the upper-left corner of the frame. There, two sources clearly stand out from the rest: 3EG J1027-5817 and 3EG J1048-5840. They have luminosities similar to Vela’s and hard spectra with $`\mathrm{\Gamma }<2`$, which make them good candidates for $`\gamma `$-ray pulsars.
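The isotropic luminosities of Fig. 2 follow from $`L=4\pi d^2F_E`$ once the integral photon flux is converted to an energy flux. A hedged sketch for a power-law photon spectrum $`N(E)\propto E^{-\mathrm{\Gamma }}`$ above 100 MeV (the flux, index, and distance in the example are hypothetical, not values from Table 5):

```python
import math

MEV_TO_ERG = 1.602e-6
KPC_TO_CM = 3.086e21

def isotropic_luminosity(photon_flux, gamma_idx, distance_kpc, e0_mev=100.0):
    """L = 4 pi d^2 F_E for a power-law photon spectrum N(E) ~ E^-gamma_idx.

    photon_flux: integral photon flux above e0_mev, in photons cm^-2 s^-1.
    Only valid for gamma_idx > 2, where the energy integral converges."""
    if gamma_idx <= 2.0:
        raise ValueError("energy flux diverges for gamma_idx <= 2")
    # F_E = F_ph * E0 * (Gamma - 1) / (Gamma - 2), in MeV cm^-2 s^-1
    energy_flux = photon_flux * e0_mev * (gamma_idx - 1.0) / (gamma_idx - 2.0)
    d_cm = distance_kpc * KPC_TO_CM
    return 4.0 * math.pi * d_cm**2 * energy_flux * MEV_TO_ERG   # erg s^-1

# hypothetical example: flux 5e-7 ph/cm^2/s, Gamma = 2.2, d = 3 kpc
L = isotropic_luminosity(5e-7, 2.2, 3.0)
print(f"L ~ {L:.2e} erg/s")
```

For $`\mathrm{\Gamma }2`$ the energy integral must instead be cut off at a maximum energy, which is why the hardest sources require a slightly different treatment.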
The identification of 3EG J1048-5840 (formerly 2EG J1049-5847) with a pulsar (PSR B1046-58) was already proposed by Zhang & Cheng (1998), who showed that its $`\gamma `$-ray spectrum is consistent with the predictions of outer gap models. In addition, these authors also suggested that 3EG J1823-1314 (2EG J1825-1307) could be the pulsar PSR B1823-13. This latter identification must be now rejected in the light of the new determination of the spectral index of the $`\gamma `$-ray source in the 3EG catalog, $`\mathrm{\Gamma }=2.69\pm 0.19`$, which is too steep for a pulsar.
Regarding 3EG J1027-5817, no known radio pulsar is found within its 95 % confidence contour. It could be a Geminga-like object or the effect of the combined emission of a pulsar and a weak SNR in Car 1A-B (the 3EG catalog notes that it is a possible case of multiple or extended source).
Some of the low luminosity sources in Fig. 3 might be yet undetected SNRs, whereas the sources with the steepest indices could be background AGNs. A simple extrapolation of the high latitude population of $`\gamma `$-ray blazars shows that about 10 of these sources should be detected throughout the Galaxy within $`|b|<10^o`$ (Yadigaroglu & Romani 1997). Most of them, however, should belong to the group of 43 3EG sources for which we have found no positional coincidences with any known galactic object. This set of sources has an average galactic latitude $`<|b|>=5.8\pm 3.3`$, which suggests a significant extragalactic contribution.
Finally, we want to mention two additional interesting possibilities to explain some 3EG sources: isolated Kerr-Newman black holes (Punsly 1998a,b) and isolated standard black holes accreting from the diffuse interstellar medium (Dermer 1997). In Punsly’s model, a bipolar magnetically dominated MHD wind is driven by a charged black hole located in a low density region (otherwise it would discharge rapidly). The wind forms two leptonic jets which propagate along the rotation axis in opposite directions, as occurs in AGNs. Self-Compton losses provide $`\gamma `$-ray luminosities in the range $`10^{32}`$–$`10^{33}`$ erg s<sup>-1</sup> for a 7-$`M_{}`$ black hole with a polar magnetic field of $`10^{10}`$ G. If such an object is relatively close ($`\sim 300`$ pc), it could appear as a typical unidentified EGRET source with $`\mathrm{\Gamma }\sim 2.5`$.
In the case of isolated black holes accreting from a diffuse medium, a hole with a mass of 10 $`M_{}`$ and a velocity of 10 km s<sup>-1</sup> can produce a $`\gamma `$-ray luminosity of $`\sim 7\times 10^{33}`$ erg s<sup>-1</sup> in a medium with a density of 0.1 cm<sup>-3</sup> (Dermer 1997). Changes in the particle density can result in $`\gamma `$-ray flux variability, as observed in several unidentified sources. Neither of these black-hole models for $`\gamma `$-ray sources can be ruled out at present, and their observational signatures at other wavelengths seem worthy of a careful search.
## 7 Conclusions
We have studied the level of two-dimensional positional coincidences between unidentified EGRET sources at low galactic latitudes in the 3EG catalog and different populations of galactic objects, finding overwhelming statistical evidence for the association of $`\gamma `$-ray sources with SNRs and OB star forming regions (the latter considered as pulsar tracers). Additionally, there is marginally significant evidence for an association with early-type stars endowed with very strong winds, like Wolf-Rayet stars and Of stars. A posteriori analyses of the stellar candidates show that there are at least three systems (WR 140, WR 142, and Cyg OB2 No. 5) which are likely $`\gamma `$-ray sources. Several sources positionally coincident with OB associations are probably pulsars, like 3EG J1048-5840 and similar sources with hard spectra. Besides, there are 43 3EG sources for which we have not found any positional coincidence with known objects. This set of sources could include undetected low-brightness SNRs in interaction with dense and compact clouds, some Geminga-like pulsars, and, perhaps, a new kind of galactic $`\gamma `$-ray source, like Kerr-Newman black holes or isolated black holes accreting from the interstellar medium.
The main conclusion to be drawn is that there seems to exist more than a single population of galactic $`\gamma `$-ray sources. Pulsars constitute a well established class of sources, and there is no doubt that under certain conditions some SNRs are also responsible for significant $`\gamma `$-ray emission within the EGRET scope. Both isolated and binary early-type stars are likely to present high-energy radiation strong enough to be detected by EGRET in some special cases. We propose that, in addition to the well-known WR stars 140 and 142, the Cyg OB2 No. 5 binary system could be a strong $`\gamma `$-ray source, the first one to be detected involving no WR stars. The large number of unidentified EGRET sources free of any positional coincidence with luminous objects also encourages further studies to determine whether there exists a population of exotic objects yet undetected at lower energies.
###### Acknowledgements.
This work has been partially supported by the Argentine agencies CONICET and ANPCT.
# MASS PROFILES OF THE TYPICAL RELAXED GALAXY CLUSTERS A2199 AND A496
## 1. INTRODUCTION
Under a reasonable, but as yet not directly tested, set of assumptions that the hot intracluster gas is supported by its own thermal pressure and is in hydrostatic equilibrium in the cluster gravitational well, one can determine the total mass of a cluster, including its dominant dark matter component (Bahcall & Sarazin 1977; Mathews 1978). Because clusters are the largest collapsed objects in the Universe, their mass values are of great importance for cosmology. The cluster mass function and its evolution with redshift constrain the spectrum of the cosmological density fluctuations and the density parameter $`\mathrm{\Omega }_0`$ (e.g., Press & Schechter 1974; White & Rees 1978; Bahcall & Cen 1992; Viana & Liddle 1996). If the cluster matter inventory is representative of the Universe as a whole, as is expected, then by measuring the cluster total and baryonic mass and comparing it to the predictions of primordial nucleosynthesis, one can constrain $`\mathrm{\Omega }_0`$ (White et al. 1993). Comparison of independent cluster mass estimates, for example, by X-ray and gravitational lensing (e.g., Bartelmann & Narayan 1995) methods, provide unique insights into cluster structure and physics. A discrepancy between the different estimates may indicate significant turbulence or nonthermal pressure in the intracluster gas (e.g., Loeb & Mao 1994), or the effect of line of sight projections.
For an X-ray measurement of the cluster mass, one needs accurate radial profiles of the gas density and temperature, as well as confidence that the cluster is in hydrostatic equilibrium. The gas density profile for a symmetric cluster can readily be obtained with an imaging instrument, such as Einstein or ROSAT. Obtaining temperature distributions has proven to be more problematic, especially for hotter, more massive clusters. ASCA (Tanaka, Inoue, & Holt 1994) now provides spatially resolved temperature data for nearby hot clusters (e.g., Ikebe et al. 1997; Loewenstein 1997; Donnelly et al. 1998; Markevitch et al. 1998 \[hereafter MFSV\] and references therein), although their accuracy is still limited. Outside the central cooling flow regions, the temperature decreases with radius in most studied clusters. For a few clusters with more accurate temperature profiles, accurate mass profiles were already obtained (e.g., for A2256 by Markevitch & Vikhlinin 1997, hereafter MV).
MFSV found that gas temperature profiles of nearby symmetric clusters outside the cooling flow regions are similar when scaled by the virial radius and average temperature. The gas density profiles also are rather similar (e.g., Jones & Forman 1984; Vikhlinin, Forman, & Jones 1999). This suggests that the underlying dark matter profiles are similar. Indeed, analytical work and cosmological cluster simulations (e.g., Bertschinger 1985; Cole & Lacey 1996; Navarro, Frenk, & White 1995, 1997, hereafter NFW) predict that the dark matter radial profiles of most clusters in equilibrium should be similar in units of the virial radius. It is interesting to see whether their predicted “universal” dark matter profile agrees with the observations.
For all but a few clusters in the MFSV sample, the temperature data have insufficient accuracy for such a test. We therefore selected two additional typical, relaxed, but less distant clusters, A2199 ($`z=0.030`$) and A496 ($`z=0.033`$), for a more accurate temperature profile and mass derivation using ASCA. These clusters are very ordinary in their X-ray luminosities and temperatures ($`T\sim 4.5`$ keV) and, similarly to most clusters, have moderate cooling flows (170 and 95 $`M_{}`$yr<sup>-1</sup>, respectively; Peres et al. 1998). The presence of cooling flows is suggestive of a relaxed cluster, while at the same time these flows are not so strong as to prevent accurate resolved temperature measurements with ASCA (see MFSV). A subset of the data presented here (observations of the central regions) was already analyzed by Mushotzky et al. (1995). We have since obtained offset observations, and include the ASCA PSF correction in our analysis. Below we use ASCA and ROSAT data on these two clusters to derive their total mass profiles. We use $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> ($`h=0.5`$); the error intervals are 90%.
## 2. ROSAT PSPC DATA
To derive the gas density distribution, we use ROSAT PSPC data. The archival observations of A2199 and A496 were analyzed as prescribed by Snowden et al. (1994) and using S. Snowden’s code. To optimize the signal to noise ratio, we used Snowden bands 5–7 that correspond to 0.7–2.0 keV. For A2199, two observations of the same field were combined. The radial brightness profiles were then fit with a $`\beta `$-model $`S_X(r)\propto (1+r^2/a_x^2)^{-3\beta +\frac{1}{2}}`$ plus a uniform X-ray background within a radial range of $`3^{\prime }`$–$`50^{\prime }`$. The inner radius of $`3^{\prime }`$ approximately corresponds to the cooling radius for both clusters (e.g., Peres et al. 1998) and encompasses all of the X-ray brightness excess due to the moderate cooling flows in the cluster centers. The resulting parameters of the gas density profile are given in Table 1 and are typical (e.g., Jones & Forman 1984). The $`\beta `$-model values for A2199 are similar to the results of Siddiqui, Stewart, & Johnstone (1998) using the same data.
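The fit described above can be sketched as follows. The synthetic profile and the parameter values are illustrative stand-ins (the Table 1 values are not reproduced in this section):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of the Section 2 fit: a beta-model plus a uniform
# background fitted to an azimuthally averaged surface-brightness
# profile between 3' and 50'.  The "data" are synthetic and the
# parameters illustrative; the real fit uses the PSPC 0.7-2.0 keV profile.

def beta_model(r, s0, a_x, beta, bkg):
    """S_X(r) = s0 * (1 + (r/a_x)^2)^(-3 beta + 1/2) + bkg."""
    return s0 * (1.0 + (r / a_x)**2)**(-3.0 * beta + 0.5) + bkg

rng = np.random.default_rng(0)
r = np.linspace(3.0, 50.0, 60)                 # arcmin
truth = (100.0, 5.0, 0.63, 1.0)                # s0, a_x ('), beta, bkg
y_obs = beta_model(r, *truth) + rng.normal(0.0, 0.02 * beta_model(r, *truth))

popt, pcov = curve_fit(beta_model, r, y_obs, p0=(50.0, 3.0, 0.5, 0.0))
s0, a_x, beta, bkg = popt
print(f"a_x = {a_x:.2f}', beta = {beta:.3f}, bkg = {bkg:.2f}")
```

Note that $`a_x`$ and $`\beta `$ are partially degenerate when the core is poorly sampled, which is why the fit starts inside the power-law regime here.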
## 3. ASCA DATA
A2199 and A496 were each observed by ASCA with one central pointing and two different 14′–15′ offsets from the cluster centers. Such a configuration was chosen to cover each cluster to a radius where the mean overdensity is 500 ($`r_{500}`$), while at the same time keeping the cluster brightness peak within the ASCA field of view to avoid stray light contamination. Observing the clusters at different positions in the focal plane also reduces the ASCA systematic uncertainties that dominate in the temperature estimates. The offset positions were chosen to avoid bright foreground sources and also to cover representative regions of these slightly elliptical clusters.
After the standard data screening (ABC Guide<sup>1</sup><sup>1</sup>1http://heasarc.gsfc.nasa.gov/docs/asca/abc/abc.html), useful GIS exposures for the A2199 central and offset pointings were 31 ks, 19 ks, and 22 ks, and for A496, they were 37 ks, 24 ks, and 24 ks, respectively (the corresponding SIS exposures were about a factor of 0.8 of the GIS exposures). For the temperature fits, all pointings for both GIS and SIS were used simultaneously; different pointings and instruments fitted separately give consistent results. To derive the spatial temperature distributions, we used the method described in detail in MFSV and references therein. This method accounts for the ASCA PSF and assumes that outside the cooling flow regions, the ROSAT PSPC image provides an accurate description of the relative spatial distribution of the projected gas emission measure, after a correction of the PSPC brightness for any gas temperature variations. It should be mentioned here that a recent discovery of the possibly nonthermal EUV and soft X-ray ($`E<0.2`$ keV) emission in A2199 should not affect the latter assumption in any significant way, since we use a relatively hard ($`0.7`$–$`2.0`$ keV) PSPC band where this excess is absent (Lieu, Bonamente, & Mittaz 1999). The absorption column was assumed uniform at the Galactic values ($`N_H=0.9\times 10^{20}`$ cm<sup>-2</sup> and $`4.6\times 10^{20}`$ cm<sup>-2</sup> for A2199 and A496); for our $`E>1.5`$ keV spectral fitting band, any expected variations are unimportant.
The analysis method propagates all known calibration and other systematic uncertainties, including those of the ASCA PSF, effective area, ROSAT and ASCA backgrounds etc., to the final temperature values. All reported confidence intervals are one-parameter 90% and are estimated by Monte-Carlo simulations.
## 4. RESULTS
### 4.1. Temperature Maps
The resulting two-dimensional projected temperature maps are shown in Fig. 1, overlaid on the ROSAT images. We show only sectors in which the temperature is accurately constrained. The maps show no significant azimuthally asymmetric variations, and together with the brightness contours suggest that these clusters are well relaxed. These maps may be contrasted to the similarly derived, but highly irregular, temperature maps of merging clusters, e.g., A754 (Henriksen & Markevitch 1996) and Cygnus-A and A3667 (Markevitch, Sarazin, & Vikhlinin 1999). In the central regions, the maps clearly show low temperature regions that correspond to the previously known cooling flows (e.g., Stewart et al. 1984; Edge, Stewart, & Fabian 1992).
### 4.2. Radial Temperature Profiles
Figure 2 shows the cluster projected temperature profiles in five annuli. For the central radial bin, we used a model consisting of a thermal component and a cooling flow with the upper temperature tied to that of the thermal component, both with free normalizations. The figure also shows wide-beam, single-temperature fits ($`T_e=4.4\pm 0.2`$ keV and $`4.3\pm 0.2`$ keV for A2199 and A496, respectively) and emission-weighted average temperatures excluding the cooling flow component ($`T_X=4.8\pm 0.2`$ keV and $`4.7\pm 0.2`$ keV). The latter are calculated from these temperature profiles as described in MFSV. The data indicate a higher temperature in the central cluster regions (outside the cooling flows) compared to the average temperature, and a temperature decline with radius. This is similar to other clusters; in fact, when the profiles for A2199 and A496 are plotted in units of $`T_X`$ and virial radius, they lie within the composite profile obtained by MFSV for other nearby, relatively symmetric clusters (Fig. 3). Such typical temperature profiles, together with the typical gas density profiles and the presence of cooling flows, make A2199 and A496 representative examples of relaxed clusters.
Outside the central cooling flow bin, the profiles in Fig. 2 are described remarkably well by a polytrope, $`T_{\mathrm{gas}}\propto \rho _{\mathrm{gas}}^{\gamma -1}`$ (both temperature profiles appear slightly more concave than the polytropic fits, which is probably nothing more than a coincidence; note that they differ in a similar way from the composite profile in Fig. 3). Assuming the ROSAT-derived $`\beta `$-models for $`\rho _{\mathrm{gas}}`$, we find $`\gamma =1.17\pm 0.07`$ and $`\gamma =1.24_{-0.11}^{+0.08}`$ for A2199 and A496, respectively. Regardless of whether this fact has any physical meaning or is purely fortuitous, it simplifies the total mass derivation by providing a convenient functional form for the observed temperature profile. We will use it in the next section, but first note that for the mass derivation, one needs the real (three-dimensional) gas temperature profile, as opposed to the profile projected on the plane of the sky that we have obtained. We show in the Appendix that as long as the gas density follows a $`\beta `$-model and the temperature is proportional to a power of the density, the projected polytropic temperature profile differs from the three-dimensional profile only by a normalization. For the best-fit $`\beta `$ and $`\gamma `$ values for A2199 and A496, the projected temperature profiles are lower than the three-dimensional profiles by factors of 0.94 and 0.92, respectively.
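The Appendix result can be checked numerically. For a $`\beta `$-model with $`T\propto \rho ^{\gamma -1}`$, the emission-measure-weighted line-of-sight average reduces to a constant factor times the three-dimensional profile. The sketch below assumes an illustrative $`\beta \approx 0.63`$ (the Table 1 values are not quoted in this section) together with the A2199 best-fit $`\gamma =1.17`$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

# For n_gas ~ (1+r^2/a^2)^(-3 beta/2) and T ~ n_gas^(gamma-1), the
# emission-measure-weighted projected temperature equals the 3D profile
# times a constant factor
#   I(p)/I(3 beta),  p = 3 beta (gamma+1)/2,  I(q) = Int (1+u^2)^(-q) du.
# beta below is an illustrative assumption; gamma = 1.17 is the A2199 fit.

def proj_factor(beta, gam):
    I = lambda q: np.sqrt(np.pi) * G(q - 0.5) / G(q)
    return I(1.5 * beta * (gam + 1.0)) / I(3.0 * beta)

def proj_factor_numeric(beta, gam, R_over_a):
    """Direct line-of-sight integration at projected radius R (units of a)."""
    n2 = lambda r2: (1.0 + r2)**(-3.0 * beta)              # emission measure
    T3 = lambda r2: (1.0 + r2)**(-1.5 * beta * (gam - 1.0))
    num = quad(lambda l: n2(R_over_a**2 + l**2) * T3(R_over_a**2 + l**2),
               0, np.inf)[0]
    den = quad(lambda l: n2(R_over_a**2 + l**2), 0, np.inf)[0]
    return (num / den) / T3(R_over_a**2)

beta, gam = 0.63, 1.17
f = proj_factor(beta, gam)
print(f"analytic factor: {f:.3f}")
for R in (0.5, 2.0, 5.0):
    print(f"R/a = {R}: {proj_factor_numeric(beta, gam, R):.3f}")
```

The direct integration returns the same number at every projected radius, confirming that the correction is a pure normalization; with these assumed parameters it comes out close to the 0.94 quoted for A2199.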
## 5. TOTAL MASS PROFILES
For the total mass determination, we will take advantage of the fact that the temperature profiles can be described by a polytropic functional form. From the hydrostatic equilibrium equation for a spherically symmetric gas distribution $`\rho _{\mathrm{gas}}(r)\propto (1+r^2/a_x^2)^{-\frac{3}{2}\beta }`$ and a temperature profile $`T\propto \rho _{\mathrm{gas}}^{\gamma -1}`$, the total mass within a radius $`r=xa_x`$ is given by
$$M(r)=3.70\times 10^{13}M_{}\frac{0.60}{\mu }\frac{T(r)}{1\mathrm{keV}}\frac{a_x}{1\mathrm{Mpc}}\frac{3\beta \gamma x^3}{1+x^2}$$
(1)
(see, e.g., Sarazin 1988). A polytropic temperature decline thus corresponds to the following correction to an isothermal mass estimate $`M_{\mathrm{iso}}`$:
$$\frac{M(r)}{M_{\mathrm{iso}}(r)}=\gamma \frac{T(r)}{\overline{T}},$$
(2)
where $`\overline{T}`$ is the average temperature (see also Ettori & Fabian 1999). To calculate the 90% confidence bands on mass profiles (as well as the confidence intervals on the values of $`\gamma `$ above), we have fitted the polytropic model to the same simulated temperature values in those annuli that were used to calculate the temperature error bars (see MFSV). These fitted polytropic models were substituted into equations (1) and (2) above and 90% confidence intervals of the resulting values were calculated at each radius. The resulting correction factor to the isothermal mass estimate is shown in Fig. 4 as a function of radius. The corresponding profiles of the total mass, $`M`$, and the gas mass fraction, $`f_{\mathrm{gas}}\equiv M_{\mathrm{gas}}/M`$, are shown in Fig. 5. The corresponding ratio of the mean total density within a given radius to the critical density at the cluster’s redshift \[$`\rho _c=3H_0^2(1+z)^3/8\pi G`$\] is shown as a function of radius in Fig. 6. For both clusters, our mass profiles correspond to $`r_{1000}\simeq 1.0`$ Mpc and $`r_{500}\simeq 1.3`$–$`1.4`$ Mpc (the latter involves extrapolation to a region not covered by the temperature profile, see Fig. 2). Masses and gas fractions at several interesting radii are also given in Table 1. At $`r=1`$ Mpc, Mushotzky et al. (1995) obtained mass estimates of $`2.55\times 10^{14}`$ $`M_{}`$ for A2199 and $`3.05\times 10^{14}`$ $`M_{}`$ for A496. These estimates are close to ours, even though Mushotzky et al. did not apply the ASCA PSF correction in their analysis. From the galaxy velocity data, Girardi et al. (1998) obtained, at $`r=1`$ Mpc, masses of $`5.4_{-1.9}^{+2.4}\times 10^{14}`$ $`M_{}`$ for A2199 and $`3.5\pm 1.8\times 10^{14}`$ $`M_{}`$ for A496 (their 68% errors are multiplied by 1.65 to obtain 90% intervals). These are consistent (although for A2199, only marginally) with our more accurate values.
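Equations (1) and (2) are straightforward to evaluate. The sketch below uses illustrative stand-ins for the Table 1 parameters ($`a_x`$, $`\beta `$, $`\gamma `$, and the central three-dimensional temperature are assumptions, not the published fits) and solves for an overdensity radius numerically:

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch of equations (1)-(2).  The beta-model and polytropic
# parameters below are illustrative stand-ins for the Table 1 fits.

M_SUN = 1.989e33            # g
MPC = 3.086e24              # cm
H0 = 50e5 / MPC             # s^-1  (h = 0.5, as in the paper)
G_N = 6.674e-8              # cm^3 g^-1 s^-2

def T_keV(x, T0, beta, gamma):
    """Polytropic 3D temperature, T ~ rho^(gamma-1), x = r/a_x."""
    return T0 * (1.0 + x**2)**(-1.5 * beta * (gamma - 1.0))

def mass(r_mpc, a_mpc, T0, beta, gamma, mu=0.60):
    """Equation (1): hydrostatic mass within r, in solar masses."""
    x = r_mpc / a_mpc
    return (3.70e13 * (0.60 / mu) * T_keV(x, T0, beta, gamma) * a_mpc
            * 3.0 * beta * gamma * x**3 / (1.0 + x**2))

def r_overdensity(delta, z, a_mpc, T0, beta, gamma):
    """Radius where the mean enclosed density is delta * rho_c(z)."""
    rho_c = 3.0 * H0**2 * (1.0 + z)**3 / (8.0 * np.pi * G_N)   # g cm^-3
    def f(r):
        m = mass(r, a_mpc, T0, beta, gamma) * M_SUN
        return m / (4.0 / 3.0 * np.pi * (r * MPC)**3) - delta * rho_c
    return brentq(f, 0.1, 10.0)

# illustrative A2199-like numbers: a_x ~ 0.1 Mpc, beta ~ 0.63, gamma = 1.17,
# T0 ~ 6 keV (assumed central 3D normalization)
r1000 = r_overdensity(1000, 0.030, 0.1, 6.0, 0.63, 1.17)
print(f"r_1000 ~ {r1000:.2f} Mpc, "
      f"M(r_1000) ~ {mass(r1000, 0.1, 6.0, 0.63, 1.17):.2e} Msun")
```

With these assumed inputs the result lands near $`r_{1000}\sim 1`$ Mpc and a few $`\times 10^{14}`$ $`M_{}`$, the same order as the values quoted above; the published numbers come from the actual fitted parameters.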
## 6. DISCUSSION
### 6.1. Uncertainty of Our Mass Values
Although the polytropic model is a good representation of the observed temperature profiles, it does not necessarily cover all possibilities that could be consistent with the data. Since our best-fit model adequately represents the temperature and its gradient, the best-fit mass values should be unbiased (as long as the hydrostatic equilibrium assumption is valid). However, at the extremes of the confidence intervals, some other acceptable temperature models may result in slightly different mass profiles. Therefore, our confidence intervals on mass may be underestimated. A more exhaustive method of mass modeling would be to assume a certain functional form of the dark matter profile and find parameter ranges consistent with the data (e.g., Hughes 1989; Henry, Briel, & Nulsen 1993; Loewenstein 1994; MV; Nevalainen et al. 1999a,b). However, for the relatively high-quality data on A2199 and A496, we have chosen a simpler approach with the polytropic models, without giving undue importance to the formal error estimates. Indeed, given the present accuracy of the temperature and density profiles, the formal statistical uncertainties are already too small to be physically meaningful (e.g., MV). Hydrodynamic simulations suggest that systematic uncertainties in the method itself, such as the possible deviations from spherical symmetry and hydrostatic equilibrium (in the form of significant gas bulk motions), can give rise to rms mass errors of about 15–30% (e.g., Evrard, Metzler, & Navarro 1996; Roettiger et al. 1996; see a more detailed discussion in §4.2 of MV). Those simulations included merging clusters in the statistical sample, so the relaxed clusters A2199 and A496 should have mass errors on the lower side of these estimates. Another source of systematic mass uncertainty is the possible deviation of the measured electron temperature from the local mean plasma temperature that enters the hydrostatic equilibrium equation. Markevitch et al. (1996) proposed such a nonequality as a possible explanation for an unusually sharp observed temperature gradient in A2163. Later theoretical work (Fox & Loeb 1997; Ettori & Fabian 1998; Chièze, Alimi, & Teyssier 1998; Takizawa 1998) concluded that for relaxed clusters, this effect should not be significant within the radial distances presently accessible for accurate X-ray temperature measurements (about half the virial radius). Therefore, it is safe to assume that the mass values within $`r_{1000}`$ obtained in this paper are unaffected by this complication. Finally, at these low redshifts, the unknown cluster peculiar velocity may introduce a noticeable distance and mass error (e.g., a 1000 km s<sup>-1</sup> velocity would correspond to a 10% error in the calculated mass). To summarize all of the above, the true uncertainty of the masses of A2199 and A496 is probably greater than our formal $`\pm 10`$% estimates and perhaps closer to 20–25% (90% confidence at $`r_{1000}`$), and is dominated by systematics.
### 6.2. The “Universal” Mass Profile
NFW have found that radial density profiles of equilibrium clusters in their cosmological simulations can be approximated by a functional form $`\rho (r)\propto (r/r_s)^{-1}(1+r/r_s)^{-2}`$. This form is a very good description of our observed total mass profiles in the range of radii covered by the temperature data, as shown in Fig. 5. Normalizations and scale radii $`r_s`$ of the NFW profiles were selected to fit the observed mass profiles. For A2199, $`r_s=0.18`$ Mpc, and for A496, $`r_s=0.36`$ Mpc.<sup>2</sup><sup>2</sup>2NFW included only dark matter in their simulations and it is unclear whether the inclusion of gas would significantly change the shape of the mass profiles. If we subtract the gas mass from our total mass profiles (assuming, for example, the currently favored value of $`h=0.65`$), the resulting dark matter profiles are well described by the same functional form with slightly different parameter values. Extrapolating the best-fit NFW profiles to greater radii, we obtain the NFW’s concentration parameter $`c\equiv r_{200}/r_s`$ of about 10 and 6 for the two clusters, respectively. According to the NFW’s simulations, $`c`$ and the total mass within $`r_{200}`$ are strongly correlated for a given cosmological model. Our $`c`$ and $`M_{200}`$ values agree well with those for several cosmological models considered by NFW, including CDM$`\mathrm{\Lambda }`$ (for that model, our observed masses correspond to $`M_{200}/M_{}56`$ as defined in NFW). An isothermal profile would imply a less concentrated dark matter distribution than that suggested by the NFW simulations (see also Makino, Sasaki, & Suto 1998). The outer regions of other clusters for which relatively accurate mass profiles were derived from the ASCA temperature profiles (e.g., A2256, MV; A3571, Nevalainen et al. 1999b) are consistent with the NFW profiles as well, although the constraints are poorer.
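The NFW comparison can be sketched as follows. The scale radius $`r_s=0.18`$ Mpc is the A2199 value quoted above, but the mass normalization (anchored to an assumed $`M(<1\mathrm{Mpc})\sim 2.5\times 10^{14}`$ $`M_{}`$) and hence the derived $`r_{200}`$ are illustrative:

```python
import numpy as np
from scipy.optimize import brentq

# The NFW density rho ~ (r/r_s)^-1 (1+r/r_s)^-2 encloses
#   M(r) = 4 pi rho_0 r_s^3 mu(r/r_s),  mu(x) = ln(1+x) - x/(1+x).
# r_s is the paper's A2199 value; the normalization is an assumption.

MPC, M_SUN = 3.086e24, 1.989e33
H0, G_N, z = 50e5 / MPC, 6.674e-8, 0.030
rho_c = 3.0 * H0**2 * (1.0 + z)**3 / (8.0 * np.pi * G_N)   # g cm^-3

def mu(x):
    return np.log1p(x) - x / (1.0 + x)

r_s = 0.18                              # Mpc (A2199 best fit)
norm = 2.5e14 / mu(1.0 / r_s)           # 4 pi rho_0 r_s^3, in Msun (assumed)

def m_nfw(r_mpc):
    return norm * mu(r_mpc / r_s)

def mean_overdensity(r_mpc):
    vol = 4.0 / 3.0 * np.pi * (r_mpc * MPC)**3
    return (m_nfw(r_mpc) * M_SUN / vol) / rho_c

r200 = brentq(lambda r: mean_overdensity(r) - 200.0, 0.5, 5.0)
print(f"r200 ~ {r200:.2f} Mpc, concentration c = r200/r_s ~ {r200 / r_s:.1f}")
```

With this assumed anchor, the extrapolated concentration comes out close to the $`c\approx 10`$ quoted above for A2199.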
Thus, the similarity of the gas temperature profiles found by MFSV for nearby relaxed clusters (outside the cooling flow regions) appears to be due to the underlying “universal” dark matter profile of the NFW form. Note that we do not consider cooling flow regions due to their unknown temperature structure. As noted in MV and Nevalainen et al. (1999a), in those few relaxed clusters without cooling flows where the gas density profile exhibits a flat core all the way to the center (e.g., A401), the dark matter cannot have an NFW central cusp because the gas halo would be convectively unstable (see also Suto, Sasaki, & Makino 1998). On the other hand, it is likely that in clusters that do have central dark matter cusps, the corresponding dip of the gravitational potential causes the gas density peak and acts as a focus for a cooling flow.
### 6.3. Mass – Temperature Scaling
Our mass values within $`r_{1000}`$ are a factor of $`1.6`$–$`1.8`$ below the scaling relation between the total mass and emission-weighted average gas temperature derived from the simulations by Evrard et al. (1996). The same is true for other clusters (e.g., A2256, MV; A2029, Sarazin, Wise, & Markevitch 1998; A401, A3571, Nevalainen et al. 1999a,b). Note that isothermal mass estimates are also lower than the Evrard et al. $`M`$–$`T`$ relation predicts. Given the agreement of our observed total (or dark) mass profile with the “universal” profile from NFW as well as from Evrard et al., the main source of this discrepancy apparently lies in the gas density and temperature distributions. Indeed, as noted by MFSV, simulations predict a less steep temperature decline than observed, and steeper gas density profiles than observed (e.g., Vikhlinin et al. 1999). Both these effects have the sign needed to cause the $`M`$–$`T`$ discrepancy. Because most of the cluster X-ray emission originates in the central region, simulations that do not sufficiently resolve the cluster core may predict an incorrect (apparently too low) emission-weighted gas temperature. For example, the Evrard et al. (1996) simulations have a resolution of 0.2–0.3 Mpc, comparable to a typical cluster core radius. On the other hand, the hydrostatic mass measurement can underestimate the true total mass if, for example, there is significant gas turbulence. Indeed, simulations suggest that there may be residual turbulence even in an apparently relaxed cluster (e.g., Evrard et al. 1996; Norman & Bryan 1998) resulting in a 10–15% underestimate of the mass within $`r\sim r_{1000}`$.
### 6.4. Implications for the X-ray–Lensing Mass Discrepancy
A2199 and A496 are representative examples of relaxed clusters, and the difference between their X-ray mass estimates using the measured temperature profiles and the isothermal estimates, shown in Fig. 4, generally applies to other such clusters (see MV and MFSV). The upward revision of the mass estimate in the inner part has one important implication, the convergence of the X-ray and gravitational lensing mass estimates in the cluster central regions. The strong lensing mass values (which usually correspond to $`r<0.2`$ Mpc) often exceed by a factor of 2–3 the X-ray estimates made under the assumptions of isothermality and a typical $`\beta `$-model density profile (e.g., Loeb & Mao 1994; Miralda-Escudé & Babul 1995; Tyson & Fischer 1995). In many cases, the lensing analysis is likely to overestimate the mass as a result of substructure or projection (e.g., Bartelmann 1995). On the X-ray side, some clusters are undergoing mergers and the hydrostatic equilibrium method may give a wrong mass value. For those clusters which are relaxed, the low-resolution X-ray image analysis may underestimate the gas density gradient at the small radii typical of the lensing measurements and may be responsible for part of the disagreement (e.g., Markevitch 1997; Allen 1998). If the cluster has a strong cooling flow, the overall temperature can be significantly underestimated if no allowance for the cool component is made (Allen 1998), although for most clusters this correction is within $`\sim 20`$% (MFSV). Still, in many cases these effects alone are not sufficient to account for the mass discrepancy. It has been suggested, e.g., by Miralda-Escudé & Babul (1995) that a gas temperature gradient could explain the discrepancy for the distant non-cooling-flow cluster A2218. A temperature decline with radius has indeed been observed in A2218 by Loewenstein (1997) and Cannon, Ponman, & Hobbs (1999) using ASCA, while MFSV find that such a declining profile is common among nearby clusters.
The analysis in §5 has shown that within the core radius, the commonly observed temperature gradient implies a mass that is higher than the isothermal estimate by a factor of $`>1.5`$. The reference isothermal estimate uses a cooling flow-corrected temperature, so this effect is in addition to the cooling flow-related mass correction of Allen (1998). If a similar temperature gradient is common in more distant clusters, this effect, together with others mentioned above, effectively resolves the mass discrepancy. This seems to obviate the need for more exotic causes, such as a significant magnetic field pressure or strong turbulence within cluster cores (Loeb & Mao 1994).
### 6.5. Gas Fraction
The lower panels in Fig. 5 show the gas mass fraction as a function of radius for the two clusters. At $`r_{1000}`$, we obtain similar values of $`f_{\mathrm{gas}}=0.161\pm 0.014`$ and $`0.158\pm 0.017`$ for the two clusters, respectively. These values are consistent with those for A2256, $`0.14\pm 0.01`$ at $`r_{1000}`$, obtained by MV using an ASCA temperature profile, and for A401 ($`0.18_{-0.04}^{+0.02}`$) and A3571 ($`0.16_{-0.01}^{+0.03}`$) from Nevalainen et al. (1999a,b). Our values are also similar to the median values for large samples of clusters analyzed using the isothermal assumption: $`f_{\mathrm{gas}}=0.168`$ from Ettori & Fabian (1999, scaled to $`r_{1000}`$), and $`f_{\mathrm{gas}}=0.160`$ from Mohr, Mathiesen, & Evrard (1999, for clusters cooler than 5 keV). The latter similarity is due to the fact that the effect of the radial temperature decline on mass is small at this radius; at greater radii, the isothermal analysis underestimates the gas fraction, as Fig. 5 shows. The values of the cluster gas fraction from X-ray analysis are often used to place constraints on the cosmological density parameter ($`\mathrm{\Omega }_0<0.3`$), under the assumption that $`f_{\mathrm{gas}}`$ in clusters is representative of the Universe as a whole (White et al. 1993 and many later works). However, $`f_{\mathrm{gas}}`$ increases with radius even if one assumes a constant gas temperature (e.g., David, Jones, & Forman 1995; Ettori & Fabian 1999), and the true increase is steeper, as seen in clusters with measured temperature profiles. Fig. 5 shows that $`f_{\mathrm{gas}}`$ increases by a factor of 3 between the X-ray core radius and $`r_{1000}`$ and shows no evidence of flattening at large radii, and hence of asymptotically approaching a universal value.
Although at some radius the cluster must merge continuously into the infalling matter with a cosmic mix of components, that presumably happens at the infall shock radius of 2–3 $`r_{1000}`$, well beyond the region presently accessible to accurate measurements. Note that both the dark matter and the gas mass within a given radius, and thus $`f_{\mathrm{gas}}`$, are dominated by the contribution at large radii. Cosmological simulations suggest that at smaller radii, a deviation from the universal $`f_{\mathrm{gas}}`$ value is not large (e.g., Frenk et al. 1996). However, at the present stage, the simulations do not accurately reproduce the observed gas density, temperature and $`f_{\mathrm{gas}}`$ profiles (e.g., MFSV; Vikhlinin et al. 1999). We conclude that our $`f_{\mathrm{gas}}`$ values for A2199 and A496 are consistent with the constraints on $`\mathrm{\Omega }_0`$ derived in earlier works (e.g., Ettori & Fabian 1999; Mohr et al. 1999), but caution that such estimates at present involve a large extrapolation. The future observatories Chandra and XMM will be capable of studying the outermost cluster regions and possibly determining the asymptotic value of $`f_{\mathrm{gas}}`$.
## 7. SUMMARY
The ASCA gas temperature maps and radial profiles for A2199 and A496 indicate that these systems are representative examples of relaxed, moderately massive clusters. Our high-quality temperature data imply total mass profiles that are in good agreement with the NFW simulated “universal” profile over the range of radii covered by the data ($`0.1\mathrm{Mpc}<r<r_{1000}\approx 1\mathrm{Mpc}`$). Because the temperature profiles of these two clusters are similar to the average profile for a large sample of nearby clusters in MFSV, this agreement indicates that the NFW profile is indeed common in nearby clusters. The upward revision of the total mass at small radii, by a factor of $`>1.5`$ compared to an isothermal analysis, may reconcile X-ray and strong lensing mass estimates in distant clusters. The observed mass profile also implies a gas mass fraction profile steeply rising with radius. While our $`f_{\mathrm{gas}}`$ values at $`r_{1000}`$ support earlier upper limits on $`\mathrm{\Omega }_0`$, the steep increase of $`f_{\mathrm{gas}}`$ with radius, not anticipated by most cluster simulations, suggests that we may not yet have correctly determined the universal baryon fraction, and that caution is needed in such analyses.
This work was supported by NASA contracts and grants NAS8-39073, NAG5-3057, NAG5-4516, NAG5-8390, and by the Smithsonian Institution.
## Appendix A PROJECTION OF THE POLYTROPIC GAS TEMPERATURE PROFILE
Assuming a spherically symmetric gas density distribution of the form $`\rho (r)\propto (1+r^2/a^2)^{-\frac{3}{2}\beta }`$ (and taking $`a=1`$ for clarity) and the polytropic temperature profile
$$T(r)\propto \rho ^{\gamma -1}(r),$$
(A1)
one can calculate a temperature profile that is emission-weighted (by $`\rho ^2`$) along the line of sight $`l`$, as a function of the projected distance from the center $`x`$ (such that $`r^2=x^2+l^2`$), as
$$T_{\mathrm{proj}}(x)=\frac{\int _0^{\mathrm{\infty }}T(r)\rho ^2(r)𝑑l}{\int _0^{\mathrm{\infty }}\rho ^2(r)𝑑l}\propto \frac{\int _0^{\mathrm{\infty }}\rho ^{1+\gamma }(r)𝑑l}{\int _0^{\mathrm{\infty }}\rho ^2(r)𝑑l}\propto \frac{\int _0^{\mathrm{\infty }}(1+x^2+l^2)^{-\frac{3}{2}\beta (1+\gamma )}𝑑l}{\int _0^{\mathrm{\infty }}(1+x^2+l^2)^{-3\beta }𝑑l}\propto \frac{(1+x^2)^{-\frac{3}{2}\beta (1+\gamma )+\frac{1}{2}}}{(1+x^2)^{-3\beta +\frac{1}{2}}}=(1+x^2)^{-\frac{3}{2}\beta (\gamma -1)}\propto T(x).$$
(A2)
That is, the resulting projected temperature profile has the same shape as the real (three-dimensional) profile in eq. (A1). The relative normalization of the projected profile at $`x=0`$ (and, therefore, at all radii) can easily be derived using the above formulae and is found to be:
$$\frac{T_{\mathrm{proj}}}{T}=\frac{\mathrm{\Gamma }\left[\frac{3}{2}\beta (1+\gamma )-\frac{1}{2}\right]\mathrm{\Gamma }(3\beta )}{\mathrm{\Gamma }\left[\frac{3}{2}\beta (1+\gamma )\right]\mathrm{\Gamma }(3\beta -\frac{1}{2})}.$$
(A3)
The normalization of the projected temperature profile is slightly smaller than that of the three-dimensional profile. For $`\beta >0.5`$ and $`\gamma <5/3`$, their difference is less than $`20`$%. Strictly speaking, different temperatures along the line of sight are not simply weighted with $`\rho ^2`$; to obtain an exact projected temperature, a single-temperature fit to a multi-temperature spectrum should be performed in the ASCA energy band. However, the above similarity of the projected and real profiles holds for any weighting that is proportional to $`\rho ^2T^\alpha `$, which approximates a wide range of possibilities. The normalization (A3) changes only weakly if $`\alpha \ne 0`$. For example, taking $`\alpha =0.5`$ (weighting with a bolometric emissivity) instead of $`\alpha =0`$ changes the normalization for our clusters by only 1%, and by less than 5% for any reasonable $`\beta `$ and $`\gamma `$. We have therefore assumed $`\alpha =0`$ in §4.2 for simplicity.
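As a quick cross-check, the closed-form normalization (A3) can be compared with a direct numerical evaluation of the projection integrals at $`x=0`$. The sketch below (Python, standard library only; the values $`\beta =2/3`$ and $`\gamma =1.2`$ are illustrative choices, not fitted cluster parameters) uses a Simpson quadrature after the substitution $`l=\mathrm{tan}t`$:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def norm_closed(beta, gamma_):
    # Eq. (A3): T_proj/T in terms of Gamma functions.
    a = 1.5 * beta * (1.0 + gamma_)
    b = 3.0 * beta
    return math.gamma(a - 0.5) * math.gamma(b) / (math.gamma(a) * math.gamma(b - 0.5))

def norm_numeric(beta, gamma_):
    # Direct evaluation of the line-of-sight integrals at x = 0.
    # The substitution l = tan(t) maps int_0^inf (1+l^2)^(-p) dl
    # onto int_0^(pi/2) cos(t)^(2p-2) dt, a finite smooth integral.
    p_num = 1.5 * beta * (1.0 + gamma_)   # exponent for rho^(1+gamma)
    p_den = 3.0 * beta                    # exponent for rho^2
    num = simpson(lambda t: math.cos(t) ** (2 * p_num - 2), 0.0, math.pi / 2)
    den = simpson(lambda t: math.cos(t) ** (2 * p_den - 2), 0.0, math.pi / 2)
    return num / den

beta, gamma_ = 2.0 / 3.0, 1.2   # illustrative values only
print(norm_closed(beta, gamma_))
print(norm_numeric(beta, gamma_))
```

For these values the normalization comes out near 0.93, consistent with the statement above that the projected profile is suppressed by less than $`20`$%.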
# A Critical Look at 𝜸 Determinations from 𝑩→𝝅𝑲 Decays
## 1 Setting the Scene
In order to obtain direct information on the angle $`𝜸`$ of the unitarity triangle of the CKM matrix in an experimentally feasible way, $`𝑩\mathbf{\to }𝝅𝑲`$ decays appear very promising. Fortunately, experimental data on these modes are now starting to become available. In 1997, the CLEO collaboration reported the first results on the decays $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$ and $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$; last year, the first observation of $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathrm{𝟎}𝑲^\mathbf{\pm }`$ was announced. So far, only results for CP-averaged branching ratios have been reported, with values at the $`\mathrm{𝟏𝟎}^{\mathbf{-}\mathrm{𝟓}}`$ level and large experimental uncertainties. However, already such CP-averaged branching ratios may lead to highly non-trivial constraints on $`𝜸`$. The following three combinations of $`𝑩\mathbf{\to }𝝅𝑲`$ decays were considered in the literature: $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$ and $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$, $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$ and $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathrm{𝟎}𝑲^\mathbf{\pm }`$, as well as the combination of the neutral decays $`𝑩_𝒅\mathbf{\to }𝝅^\mathrm{𝟎}𝑲`$ and $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$.
## 2. Probing $`𝜸`$ with $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$ and $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$
Within the framework of the Standard Model, the most important contributions to these decays originate from QCD penguin topologies. Making use of the $`𝑺𝑼\mathbf{(}\mathrm{𝟐}\mathbf{)}`$ isospin symmetry of strong interactions, we obtain
$$𝑨\mathbf{(}𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝑲^\mathrm{𝟎}\mathbf{)}\mathbf{=}𝑷\mathbf{,}𝑨\mathbf{(}𝑩_𝒅^\mathrm{𝟎}\mathbf{\to }𝝅^{\mathbf{-}}𝑲^\mathbf{+}\mathbf{)}\mathbf{=}\mathbf{-}\mathbf{\left[}𝑷\mathbf{+}𝑻\mathbf{+}𝑷_{\mathrm{𝐞𝐰}}^𝐂\mathbf{\right]}\mathbf{,}$$
(1)
where
$$𝑻\mathbf{\equiv }\mathbf{|}𝑻\mathbf{|}𝒆^{𝒊𝜹_𝑻}𝒆^{𝒊𝜸}\text{and}𝑷_{\mathrm{𝐞𝐰}}^𝐂\mathbf{\equiv }\mathbf{-}\mathbf{\left|}𝑷_{\mathrm{𝐞𝐰}}^𝐂\mathbf{\right|}𝒆^{𝒊𝜹_{\mathrm{𝐞𝐰}}^𝐂}$$
(2)
are due to tree-diagram-like topologies and electroweak (EW) penguins, respectively. The label “C” reminds us that only “colour-suppressed” EW penguin topologies contribute to $`𝑷_{\mathrm{𝐞𝐰}}^𝐂`$. Making use of the unitarity of the CKM matrix and applying the Wolfenstein parametrization yields
$$𝑷\mathbf{\equiv }𝑨\mathbf{(}𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝑲^\mathrm{𝟎}\mathbf{)}\mathbf{=}\mathbf{-}\mathbf{\left(}\mathrm{𝟏}\mathbf{-}\frac{𝝀^\mathrm{𝟐}}{\mathrm{𝟐}}\mathbf{\right)}𝝀^\mathrm{𝟐}𝑨\mathbf{\left[}\mathrm{𝟏}\mathbf{+}𝝆𝒆^{𝒊𝜽}𝒆^{𝒊𝜸}\mathbf{\right]}𝓟_{𝒕𝒄}\mathbf{,}$$
(3)
where
$$𝝆𝒆^{𝒊𝜽}\mathbf{=}\frac{𝝀^\mathrm{𝟐}𝑹_𝒃}{\mathrm{𝟏}\mathbf{-}𝝀^\mathrm{𝟐}\mathbf{/}\mathrm{𝟐}}\mathbf{\left[}\mathrm{𝟏}\mathbf{-}\mathbf{\left(}\frac{𝓟_{𝒖𝒄}\mathbf{+}𝓐}{𝓟_{𝒕𝒄}}\mathbf{\right)}\mathbf{\right]}\mathbf{,}$$
(4)
and $`𝝀\mathbf{\equiv }\mathbf{|}𝑽_{𝒖𝒔}\mathbf{|}`$, $`𝑨\mathbf{\equiv }\mathbf{|}𝑽_{𝒄𝒃}\mathbf{|}\mathbf{/}𝝀^\mathrm{𝟐}`$, $`𝑹_𝒃\mathbf{\equiv }\mathbf{|}𝑽_{𝒖𝒃}\mathbf{/}\mathbf{(}𝝀𝑽_{𝒄𝒃}\mathbf{)}\mathbf{|}`$. Note that $`𝝆`$ is strongly CKM-suppressed by $`𝝀^\mathrm{𝟐}𝑹_𝒃\mathbf{\approx }\mathbf{0.02}`$. In the parametrization of the $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$ and $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$ observables, it turns out to be very useful to introduce
$$𝒓\mathbf{\equiv }\frac{\mathbf{|}𝑻\mathbf{|}}{\sqrt{\mathbf{\langle }\mathbf{|}𝑷\mathbf{|}^\mathrm{𝟐}\mathbf{\rangle }}}\mathbf{,}\mathit{ϵ}_𝐂\mathbf{\equiv }\frac{\mathbf{|}𝑷_{\mathrm{𝐞𝐰}}^𝐂\mathbf{|}}{\sqrt{\mathbf{\langle }\mathbf{|}𝑷\mathbf{|}^\mathrm{𝟐}\mathbf{\rangle }}}\mathbf{,}$$
(5)
with $`\mathbf{\langle }\mathbf{|}𝑷\mathbf{|}^\mathrm{𝟐}\mathbf{\rangle }\mathbf{\equiv }\mathbf{(}\mathbf{|}𝑷\mathbf{|}^\mathrm{𝟐}\mathbf{+}\mathbf{|}\overline{𝑷}\mathbf{|}^\mathrm{𝟐}\mathbf{)}\mathbf{/}\mathrm{𝟐}`$, as well as the strong phase differences
$$𝜹\mathbf{\equiv }𝜹_𝑻\mathbf{-}𝜹_{𝒕𝒄}\mathbf{,}𝚫_𝐂\mathbf{\equiv }𝜹_{\mathrm{𝐞𝐰}}^𝐂\mathbf{-}𝜹_{𝒕𝒄}\mathbf{.}$$
(6)
In addition to the ratio
$$𝑹\mathbf{\equiv }\frac{\text{BR}\mathbf{(}𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }\mathbf{)}}{\text{BR}\mathbf{(}𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲\mathbf{)}}$$
(7)
of CP-averaged $`𝑩\mathbf{\to }𝝅𝑲`$ branching ratios, also the “pseudo-asymmetry”
$$𝑨_\mathrm{𝟎}\mathbf{\equiv }\frac{\text{BR}\mathbf{(}𝑩_𝒅^\mathrm{𝟎}\mathbf{\to }𝝅^{\mathbf{-}}𝑲^\mathbf{+}\mathbf{)}\mathbf{-}\text{BR}\mathbf{(}\overline{𝑩_𝒅^\mathrm{𝟎}}\mathbf{\to }𝝅^\mathbf{+}𝑲^{\mathbf{-}}\mathbf{)}}{\text{BR}\mathbf{(}𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝑲^\mathrm{𝟎}\mathbf{)}\mathbf{+}\text{BR}\mathbf{(}𝑩^{\mathbf{-}}\mathbf{\to }𝝅^{\mathbf{-}}\overline{𝑲^\mathrm{𝟎}}\mathbf{)}}$$
(8)
plays an important role in probing $`𝜸`$. Explicit expressions for $`𝑹`$ and $`𝑨_\mathrm{𝟎}`$ in terms of the parameters specified above are given in Ref. 8. So far, the only available experimental result from the CLEO collaboration is for $`𝑹`$:
$$𝑹\mathbf{=}\mathbf{0.9}\mathbf{\pm }\mathbf{0.4}\mathbf{\pm }\mathbf{0.2}\mathbf{\pm }\mathbf{0.2}\mathbf{,}$$
(9)
and no CP-violating effects have been reported. However, if in addition to $`𝑹`$ also the pseudo-asymmetry $`𝑨_\mathrm{𝟎}`$ can be measured, it is possible to eliminate the strong phase $`𝜹`$ in the expression for $`𝑹`$, and to fix contours in the $`𝜸`$–$`𝒓`$ plane, which correspond to the mathematical implementation of a simple triangle construction. In order to determine $`𝜸`$, the quantity $`𝒓`$, i.e. the magnitude of the “tree” amplitude $`𝑻`$, has to be fixed. At this step, a certain model dependence enters. Since the properly defined amplitude $`𝑻`$ receives contributions not only from colour-allowed “tree” topologies, but also from penguin and annihilation processes, it may be shifted sizeably from its “factorized” value. Consequently, estimates of the uncertainty of $`𝒓`$ using the factorization hypothesis, yielding typically $`𝚫𝒓\mathbf{=}𝓞\mathbf{(}\mathrm{𝟏𝟎}\mathbf{\%}\mathbf{)}`$, may be too optimistic.
Interestingly, it is possible to derive bounds on $`𝜸`$ that do not depend on $`𝒓`$ at all. To this end, we eliminate again $`𝜹`$ in $`𝑹`$ through $`𝑨_\mathrm{𝟎}`$. If we now treat $`𝒓`$ as a “free” variable, we find that $`𝑹`$ takes the following minimal value:
$$𝑹_{\mathrm{𝐦𝐢𝐧}}\mathbf{=}𝜿\mathrm{𝐬𝐢𝐧}^\mathrm{𝟐}𝜸\mathbf{+}\frac{\mathrm{𝟏}}{𝜿}\mathbf{\left(}\frac{𝑨_\mathrm{𝟎}}{\mathrm{𝟐}\mathrm{𝐬𝐢𝐧}𝜸}\mathbf{\right)}^\mathrm{𝟐}\mathbf{\ge }𝜿\mathrm{𝐬𝐢𝐧}^\mathrm{𝟐}𝜸\mathbf{.}$$
(10)
Here, the quantity
$$𝜿\mathbf{=}\frac{\mathrm{𝟏}}{𝒘^\mathrm{𝟐}}\mathbf{\left[}\mathbf{\hspace{0.17em}1}\mathbf{+}\mathrm{𝟐}\mathbf{(}\mathit{ϵ}_𝐂𝒘\mathbf{)}\mathrm{𝐜𝐨𝐬}𝚫\mathbf{+}\mathbf{(}\mathit{ϵ}_𝐂𝒘\mathbf{)}^\mathrm{𝟐}\mathbf{\right]}\mathbf{,}$$
(11)
with $`𝒘\mathbf{=}\sqrt{\mathrm{𝟏}\mathbf{+}\mathrm{𝟐}𝝆\mathrm{𝐜𝐨𝐬}𝜽\mathrm{𝐜𝐨𝐬}𝜸\mathbf{+}𝝆^\mathrm{𝟐}}`$, describes rescattering and EW penguin effects. An allowed range for $`𝜸`$ is related to $`𝑹_{\mathrm{𝐦𝐢𝐧}}`$, since values of $`𝜸`$ implying $`𝑹_{\mathrm{𝐞𝐱𝐩}}\mathbf{<}𝑹_{\mathrm{𝐦𝐢𝐧}}`$ are excluded. In particular, $`𝑨_\mathrm{𝟎}\mathbf{\ne }\mathrm{𝟎}`$ would allow us to exclude a certain range of $`𝜸`$ around $`\mathrm{𝟎}^{\mathbf{\circ }}`$ or $`\mathrm{𝟏𝟖𝟎}^{\mathbf{\circ }}`$, whereas a measured value of $`𝑹\mathbf{<}\mathrm{𝟏}`$ would exclude a certain range around $`\mathrm{𝟗𝟎}^{\mathbf{\circ }}`$, which would be of great phenomenological importance. The first results reported by CLEO in 1997 gave $`𝑹\mathbf{=}\mathbf{0.65}\mathbf{\pm }\mathbf{0.40}`$, whereas the most recent update is that given in (9).
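The way the bound (10) carves out a range of $`𝜸`$ around 90° can be sketched numerically. The snippet below (Python; it assumes the simplest case of vanishing pseudo-asymmetry and unit rescattering/EW factor, and the value 0.8 for the CP-averaged ratio is purely illustrative, not a measured number) lists the values of $`𝜸`$ for which the measured ratio would fall below $`𝑹_{\mathrm{𝐦𝐢𝐧}}`$:

```python
import math

def r_min(gamma_deg, a0=0.0, kappa=1.0):
    # Eq. (10): R_min = kappa*sin^2(gamma) + (1/kappa)*(A_0/(2*sin(gamma)))^2
    s = math.sin(math.radians(gamma_deg))
    return kappa * s ** 2 + (a0 / (2.0 * s)) ** 2 / kappa

R_exp = 0.8  # hypothetical CP-averaged ratio, chosen only for illustration
excluded = [g for g in range(1, 180) if R_exp < r_min(g)]
print(min(excluded), max(excluded))  # contiguous range around 90 degrees
```

With these assumptions the excluded region is roughly 64°–116°, illustrating how a measured value of $`𝑹`$ below unity disfavours $`𝜸`$ near 90°.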
The theoretical accuracy of these constraints on $`𝜸`$ is limited both by rescattering processes of the kind $`𝑩^\mathbf{+}\mathbf{\to }\mathbf{\{}𝝅^\mathrm{𝟎}𝑲^\mathbf{+}\mathbf{,}𝝅^\mathrm{𝟎}𝑲^{\mathbf{\ast }\mathbf{+}}\mathbf{,}\mathbf{\dots }\mathbf{\}}`$, and by EW penguin effects. The rescattering effects, which may lead to values of $`𝝆\mathbf{=}𝓞\mathbf{(}\mathbf{0.1}\mathbf{)}`$, can be controlled in the contours in the $`𝜸`$–$`𝒓`$ plane and the associated constraints on $`𝜸`$ through experimental data on $`𝑩^\mathbf{\pm }\mathbf{\to }𝑲^\mathbf{\pm }𝑲`$ decays, the $`𝑼`$-spin counterparts of $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$. Another important indicator for large rescattering effects is provided by $`𝑩_𝒅\mathbf{\to }𝑲^\mathbf{+}𝑲^{\mathbf{-}}`$ modes, for which there already exist stronger experimental constraints.
An improved description of the EW penguins is possible if we use the general expressions for the corresponding four-quark operators, and perform appropriate Fierz transformations. Following these lines, we arrive at
$$\frac{\mathit{ϵ}_𝐂}{𝒓}𝒆^{𝒊\mathbf{(}𝚫_𝐂\mathbf{-}𝜹\mathbf{)}}\mathbf{=}\mathbf{0.66}\mathbf{\times }\mathbf{\left[}\frac{\mathbf{0.41}}{𝑹_𝒃}\mathbf{\right]}\mathbf{\times }𝒂_𝐂𝒆^{𝒊𝝎_𝐂}\mathbf{,}$$
(12)
where $`𝒂_𝐂𝒆^{𝒊𝝎_𝐂}\mathbf{=}𝒂_\mathrm{𝟐}^{\mathrm{𝐞𝐟𝐟}}\mathbf{/}𝒂_\mathrm{𝟏}^{\mathrm{𝐞𝐟𝐟}}`$ is the ratio of certain generalized “colour factors”. Experimental data on $`𝑩\mathbf{\to }𝑫^{\mathbf{(}\mathbf{\ast }\mathbf{)}}𝝅`$ decays imply $`𝒂_\mathrm{𝟐}\mathbf{/}𝒂_\mathrm{𝟏}\mathbf{=}𝓞\mathbf{(}\mathbf{0.25}\mathbf{)}`$. However, “colour suppression” in $`𝑩\mathbf{\to }𝝅𝑲`$ modes may in principle be different from that in $`𝑩\mathbf{\to }𝑫^{\mathbf{(}\mathbf{\ast }\mathbf{)}}𝝅`$ decays, in particular in the presence of large rescattering effects. A first step to fix the hadronic parameter $`𝒂_𝐂𝒆^{𝒊𝝎_𝐂}`$ experimentally is provided by the mode $`𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝝅^\mathrm{𝟎}`$. Detailed discussions of the impact of rescattering and EW penguin effects on the strategies to probe $`𝜸`$ with $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$ and $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$ decays can be found in Refs. 7, 8 and 12.
## 3 Probing $`𝜸`$ with $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$ and $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathrm{𝟎}𝑲^\mathbf{\pm }`$
Several years ago, Gronau, Rosner and London proposed an interesting $`𝑺𝑼\mathbf{(}\mathrm{𝟑}\mathbf{)}`$ strategy to determine $`𝜸`$ with the help of $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$, $`𝝅^\mathrm{𝟎}𝑲^\mathbf{\pm }`$, $`𝝅^\mathrm{𝟎}𝝅^\mathbf{\pm }`$ decays. However, as was pointed out by Deshpande and He, this elegant approach is unfortunately spoiled by EW penguins, which play an important role in several non-leptonic $`𝑩`$-meson decays because of the large top-quark mass. Recently, this approach was resurrected by Neubert and Rosner, who pointed out that the EW penguin contributions can be controlled in this case by using only the general expressions for the corresponding four-quark operators, appropriate Fierz transformations, and the $`𝑺𝑼\mathbf{(}\mathrm{𝟑}\mathbf{)}`$ flavour symmetry (see also Ref. 3). Since a detailed presentation of these strategies can be found in Ref. 16, we will just have a brief look at their most interesting features.
In the case of $`𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝑲^\mathrm{𝟎}`$, $`𝝅^\mathrm{𝟎}𝑲^\mathbf{+}`$, the $`𝑺𝑼\mathbf{(}\mathrm{𝟐}\mathbf{)}`$ isospin symmetry implies
$$𝑨\mathbf{(}𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝑲^\mathrm{𝟎}\mathbf{)}\mathbf{+}\sqrt{\mathrm{𝟐}}𝑨\mathbf{(}𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathrm{𝟎}𝑲^\mathbf{+}\mathbf{)}\mathbf{=}\mathbf{-}\mathbf{\left[}\mathbf{(}𝑻\mathbf{+}𝑪\mathbf{)}\mathbf{+}𝑷_{\mathrm{𝐞𝐰}}\mathbf{\right]}\mathbf{.}$$
(13)
The phase structure of this relation, which has no $`𝑰\mathbf{=}\mathrm{𝟏}\mathbf{/}\mathrm{𝟐}`$ piece, is completely analogous to the $`𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝑲^\mathrm{𝟎}`$, $`𝑩_𝒅^\mathrm{𝟎}\mathbf{\to }𝝅^{\mathbf{-}}𝑲^\mathbf{+}`$ case (see (1)):
$$𝑻\mathbf{+}𝑪\mathbf{=}\mathbf{|}𝑻\mathbf{+}𝑪\mathbf{|}𝒆^{𝒊𝜹_{𝑻\mathbf{+}𝑪}}𝒆^{𝒊𝜸}\mathbf{,}𝑷_{\mathrm{𝐞𝐰}}\mathbf{=}\mathbf{-}\mathbf{|}𝑷_{\mathrm{𝐞𝐰}}\mathbf{|}𝒆^{𝒊𝜹_{\mathrm{𝐞𝐰}}}\mathbf{.}$$
(14)
In order to probe $`𝜸`$, it is useful to introduce observables $`𝑹_𝐜`$ and $`𝑨_\mathrm{𝟎}^𝐜`$ corresponding to $`𝑹`$ and $`𝑨_\mathrm{𝟎}`$; their general expressions can be obtained from those for $`𝑹`$ and $`𝑨_\mathrm{𝟎}`$ by making the following replacements:
$$𝒓\mathbf{\to }𝒓_𝐜\mathbf{\equiv }\frac{\mathbf{|}𝑻\mathbf{+}𝑪\mathbf{|}}{\sqrt{\mathbf{\langle }\mathbf{|}𝑷\mathbf{|}^\mathrm{𝟐}\mathbf{\rangle }}}\mathbf{,}𝜹\mathbf{\to }𝜹_𝐜\mathbf{\equiv }𝜹_{𝑻\mathbf{+}𝑪}\mathbf{-}𝜹_{𝒕𝒄}\mathbf{,}𝑷_{\mathrm{𝐞𝐰}}^𝐂\mathbf{\to }𝑷_{\mathrm{𝐞𝐰}}\mathbf{.}$$
(15)
The measurement of $`𝑹_𝐜`$ and $`𝑨_\mathrm{𝟎}^𝐜`$ allows us to fix contours in the $`𝜸`$–$`𝒓_𝒄`$ plane in complete analogy to the $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$, $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$ strategy. There are, however, important differences from the theoretical point of view. First, the $`𝑺𝑼\mathbf{(}\mathrm{𝟑}\mathbf{)}`$ symmetry allows us to fix $`𝒓_𝒄\mathbf{\propto }\mathbf{|}𝑻\mathbf{+}𝑪\mathbf{|}`$:
$$𝑻\mathbf{+}𝑪\mathbf{\approx }\mathbf{-}\sqrt{\mathrm{𝟐}}\frac{𝑽_{𝒖𝒔}}{𝑽_{𝒖𝒅}}\frac{𝒇_𝑲}{𝒇_𝝅}𝑨\mathbf{(}𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝝅^\mathrm{𝟎}\mathbf{)}\mathbf{,}$$
(16)
where $`𝒓_𝒄`$ thus determined is – in contrast to $`𝒓`$ – not affected by rescattering effects. Second, in the strict $`𝑺𝑼\mathbf{(}\mathrm{𝟑}\mathbf{)}`$ limit, we have
$$\mathbf{\left|}\frac{𝑷_{\mathrm{𝐞𝐰}}}{𝑻\mathbf{+}𝑪}\mathbf{\right|}𝒆^{𝒊\mathbf{(}𝜹_{\mathrm{𝐞𝐰}}\mathbf{-}𝜹_{𝑻\mathbf{+}𝑪}\mathbf{)}}\mathbf{=}\mathbf{0.66}\mathbf{\times }\mathbf{\left[}\frac{\mathbf{0.41}}{𝑹_𝒃}\mathbf{\right]}\mathbf{.}$$
(17)
In contrast to (12), this expression does not involve a hadronic parameter.
The contours in the $`𝜸`$–$`𝒓_𝒄`$ plane may be affected – in analogy to the $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$, $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$ case – by rescattering effects. They can be taken into account with the help of additional data. The major theoretical advantage of the $`𝑩^\mathbf{+}\mathbf{\to }𝝅^\mathbf{+}𝑲^\mathrm{𝟎}`$, $`𝝅^\mathrm{𝟎}𝑲^\mathbf{+}`$ strategy with respect to $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$, $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$ is that $`𝒓_𝒄`$ and $`𝑷_{\mathrm{𝐞𝐰}}\mathbf{/}\mathbf{(}𝑻\mathbf{+}𝑪\mathbf{)}`$ can be fixed by using only $`𝑺𝑼\mathbf{(}\mathrm{𝟑}\mathbf{)}`$ arguments. Consequently, the theoretical accuracy is mainly limited by non-factorizable $`𝑺𝑼\mathbf{(}\mathrm{𝟑}\mathbf{)}`$-breaking effects.
## 4 Probing $`𝜸`$ with $`𝑩_𝒅\mathbf{\to }𝝅^\mathrm{𝟎}𝑲`$ and $`𝑩_𝒅\mathbf{\to }𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$
The strategies to probe $`𝜸`$ that are allowed by the observables of $`𝑩_𝒅\mathbf{\to }𝝅^\mathrm{𝟎}𝑲`$, $`𝝅^{\mathbf{\mp }}𝑲^\mathbf{\pm }`$ are completely analogous to the $`𝑩^\mathbf{\pm }\mathbf{\to }𝝅^\mathbf{\pm }𝑲`$, $`𝝅^\mathrm{𝟎}𝑲^\mathbf{\pm }`$ case. However, if we require that the neutral kaon be observed as a $`𝑲_𝐒`$, we have an additional observable at our disposal, which is provided by “mixing-induced” CP violation in $`𝑩_𝒅\mathbf{\to }𝝅^\mathrm{𝟎}𝑲_𝐒`$ and allows us to take into account the rescattering effects in the extraction of $`𝜸`$. To this end, time-dependent measurements are required. The theoretical accuracy of the neutral strategy is only limited by non-factorizable $`𝑺𝑼\mathbf{(}\mathrm{𝟑}\mathbf{)}`$-breaking corrections, which affect $`\mathbf{|}𝑻\mathbf{+}𝑪\mathbf{|}`$ and $`𝑷_{\mathrm{𝐞𝐰}}`$.
# Specific heat and magnetic order in LaMnO3+δ
## I Introduction
Hole-doped perovskite-type manganese oxides have attracted considerable interest in recent years, motivated by the observation of colossal magnetoresistance (CMR) in numerous related compounds, and the great variety of magnetic and transport properties in this class of materials. Among the La-based systems, the ground state of the stoichiometric parent compound LaMnO<sub>3</sub> is insulating A-type antiferromagnetic (AF), which is attributed to a cooperative effect of orbital ordering and superexchange interactions. Substitution of a fraction $`x`$ of La<sup>3+</sup> by divalent cations such as Sr<sup>2+</sup>, Ca<sup>2+</sup> or Ba<sup>2+</sup> causes the conversion of a proportional number of Mn<sup>3+</sup> to Mn<sup>4+</sup>. At certain doping ranges ($`0.2\lesssim x\lesssim 0.5`$) this induces a metal-insulator transition and the appearance of a ferromagnetic (FM) state. The simultaneous FM and metallic transitions have been qualitatively explained by the double-exchange (DE) model, which considers the magnetic coupling between Mn<sup>3+</sup> and Mn<sup>4+</sup> resulting from the motion of an electron between the two partially filled $`d`$ shells. Nevertheless, this DE mechanism does not account for several experimental results, and it has been claimed that a Jahn-Teller type electron-phonon coupling plays an important role in explaining the large magnetoresistive effects.
Conversion of Mn<sup>3+</sup> to Mn<sup>4+</sup> can also be achieved by the presence of non-stoichiometric oxygen in undoped LaMnO<sub>3+δ</sub>, with a nominal Mn<sup>4+</sup> content of $`2\delta `$. For simplicity this is the crystallographic representation used in the present work and in most other studies in this system. However, it does not reflect the fact that the system contains randomly distributed La and Mn vacancies rather than oxygen excess, which can not be accommodated interstitially in the lattice. The actual crystallographic formula is better written as La<sub>1-x</sub>Mn<sub>1-y</sub>O<sub>3</sub>. By varying the oxygen stoichiometry the resulting compounds display a wide variety of structural and magnetic phases, previously studied by x-rays and neutron scattering, as well as magnetic and transport measurements. It is well known that the low-temperature magnetic phase of non-stoichiometric LaMnO<sub>3+δ</sub> changes from AF to FM for small values of $`\delta `$ due to the DE interaction caused by the presence of Mn<sup>4+</sup> ions in the sample. However, unlike the cation-doped systems, the material remains insulating at all temperatures, and the FM transition temperature decreases for increasing content of Mn<sup>4+</sup>. The relevant fact to be considered appears to be the competing effect between La vacancies, enhancing the Mn<sup>3+</sup>-Mn<sup>4+</sup> DE interaction, and Mn vacancies which introduce considerable disorder in the lattice. For large values of $`\delta `$ the FM order is suppressed, and the low-temperature phase is better described by a spin-glass-like state.
The competing effect between cation and manganese vacancies makes LaMnO<sub>3+δ</sub> a model system for studying magnetic interactions and disorder effects in mixed-valence manganites. In order to achieve a better understanding of the low temperature properties of this system we have performed magnetic and specific-heat measurements in three different samples of non-stoichiometric LaMnO<sub>3+δ</sub>. Magnetic data show signatures of a double transition: as the temperature is lowered, the system first orders ferromagnetically in small weakly-connected clusters, and then changes to a cluster-glass phase. Results of low-temperature specific-heat measurements show an unexpectedly large linear coefficient and a spin-wave contribution. This is interpreted in terms of the existence of disorder-induced charge-localization in these compounds.
## II Experiments
The bulk samples of LaMnO<sub>3+δ</sub> investigated in the present study were thoroughly characterized in Refs. . They were prepared in polycrystalline form by a citrate technique, as described elsewhere. The products were annealed at 1100 °C in air (Sample 1), 1000 °C in air (Sample 2) and 1000 °C under 200 bar of O<sub>2</sub> (Sample 3). The determination of $`\delta `$ was initially performed by thermogravimetric analysis. The final materials were characterized by x-ray diffraction. Neutron powder diffraction diagrams were also collected in the temperature range 2-250 K. The Rietveld method was used to refine the crystal and magnetic structures.
The neutron-diffraction refinements showed that all investigated samples have stoichiometric oxygen content of $`3.00\pm 0.05`$. The Mn<sup>4+</sup> content was calculated from the vacancy concentration of La and Mn determined from the neutron data, and found to be in good agreement with the thermogravimetric analysis. Sample 1, with $`\delta =0.11`$ and 23% of Mn<sup>4+</sup>, consists of a mixture of a main orthorhombic phase (64%) and a minor rhombohedral phase (36%). Sample 2, with $`\delta =0.15`$ and 33% of Mn<sup>4+</sup>, and Sample 3, with $`\delta =0.26`$ and 52% of Mn<sup>4+</sup>, both have rhombohedral symmetry. Samples 1 and 2 showed a FM ordered structure at low temperatures (with some canting observed in Sample 2), whereas Sample 3 showed spin-glass-like signatures. Transport measurements revealed that all the studied compounds are insulating down to low temperatures, with a typical semiconductor-like behavior. Selected sample parameters are summarized in Table I.
In the present study, specific-heat results were obtained from 4.5 to 200 K with an automated quasi-adiabatic pulse technique. The absolute accuracy of the data, checked against a copper sample, is better than 3%. The measured samples had masses of approximately 50 mg. Detailed AC susceptibility and DC magnetization measurements were performed in a commercial magnetometer (Quantum Design PPMS). The FM transition temperatures of Samples 1 and 2, obtained from AC susceptibility data, are also shown in Table I.
## III Magnetic Measurements
Figure 1 shows the AC susceptibility of LaMnO<sub>3+δ</sub>. Real and imaginary parts, respectively $`\chi ^{\prime }`$ and $`\chi ^{\prime \prime }`$, were measured in zero DC field, with an alternating field $`h_{ac}=1`$ Oe, and frequencies of 25, 125 and 1000 Hz. Results for Sample 3 are multiplied by a factor of 3. Part
of the data was shifted vertically for clarity. The first point to note is a pronounced FM transition, observed at 154 and 142 K for Samples 1 ($`\delta =0.11`$) and 2 ($`\delta =0.15`$), respectively. The values of T<sub>c</sub> were determined from the maximum derivative in $`\chi ^{\prime }`$. For Sample 3, with the higher vacancy content ($`\delta =0.26`$), at 48 K we observe a much lower cusp-like anomaly in $`\chi ^{\prime }`$, typical of spin-glass behavior. As mentioned in the introduction, the evolution from FM to spin-glass features for increasing oxygen content in LaMnO<sub>3+δ</sub> was previously observed in the literature.
Moreover, it is most interesting to note in Fig. 1 that the results for the two FM samples show a double-peak structure and a frequency dependence of the imaginary component, $`\chi ^{\prime \prime }`$. The high-temperature peak is frequency independent, whereas the position of the low-temperature peak strongly depends on the measuring frequency. The maximum in $`\chi ^{\prime \prime }`$ shifts to higher temperatures as the frequency increases. These are clear signatures of a cluster-glass behavior, as previously reported for other manganite and cobaltite systems. The high-temperature peak signals the onset of FM order, whereas the low-temperature frequency-dependent peak is associated with freezing of the cluster magnetic moments. In connection with the low-temperature peak in $`\chi ^{\prime \prime }`$, a frequency-dependent shoulder can be observed in the real component $`\chi ^{\prime }`$. Results for Sample 3 also show a distinct frequency dependence in $`\chi ^{\prime }`$, not visible on the scale of the figure.
In order to probe disorder-induced features in the system, we have measured the field-cooled (FC) and zero-field-cooled (ZFC) magnetization of the studied samples. Figures 2(a) and 2(b) display the results for Samples 1 and 2, respectively. The low-field data were taken with $`H=50`$ Oe. A pronounced irreversibility is observed, again indicative of a disordered state. In our results the irreversibility starts just below T<sub>c</sub>, which is the typical behavior of a cluster-glass phase, whereas in reentrant-spin-glass systems irreversibility occurs far below T<sub>c</sub>.
As the field increases, the irreversible behavior is reduced, and is no longer present at $`H=5000`$ Oe. Measurements of AC susceptibility with an applied DC field (not shown) confirm that the frequency dependence in $`\chi ^{\prime }`$ disappears with increasing fields. These results show that the application of a DC field tends to align the cluster moments, and stabilizes a reversible FM ordered state. In the magnetization results for Sample 3, shown in Fig. 2(c), the behavior is quite different. The magnetization peak is more than two orders of magnitude lower than in the other samples, and the irreversibility persists with higher applied DC field, which confirms the standard spin-glass features in the high-vacancy sample. The difference between the ZFC and FC magnetizations is much higher in the cluster-glass phase (Samples 1 and 2) compared to the spin-glass phase (Sample 3), reflecting the presence of FM order within the clusters.
Isothermal $`M`$ vs. $`H`$ curves measured at 10 K are plotted in Fig. 3. For the FM samples (1 and 2) the magnetization saturates at fields of the order of 1–2 T. The saturation values are $`3.70\mu _B`$ and $`3.57\mu _B`$ for Samples 1 and 2, respectively. The magnetic moment expected from the spin contribution is $`gS\mu _B`$, where $`S`$ is the spin of the ion, 3/2 for Mn<sup>4+</sup> and 2 for Mn<sup>3+</sup>, with the gyromagnetic factor $`g=2`$ in both cases. Taking into account the relative concentrations of Mn<sup>4+</sup> and Mn<sup>3+</sup> in the compounds, we get an effective moment of $`3.77\mu _B`$ for Sample 1 (23% Mn<sup>4+</sup>) and $`3.67\mu _B`$ for Sample 2 (33% Mn<sup>4+</sup>). This prediction virtually coincides with the experimentally observed values, indicating that the applied field fully polarizes the FM clusters. Hysteresis is observed at very low fields, up to about 400 Oe, as shown for Sample 1 in the inset of Fig. 3. This hysteresis is consistent with the $`M`$ vs. $`T`$ data of Fig. 2, and is attributed to the low-field cluster-glass nature of the samples. For Sample 3 (52% Mn<sup>4+</sup>), the low-temperature magnetization does not saturate at our highest field, and a large hysteresis is observed. At 9 T the measured magnetic moment is $`2.15\mu _B`$, much smaller than the predicted value of $`3.48\mu _B`$. This is an additional indication of the spin-glass-like properties of this sample.
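The spin-only estimates quoted above are simple weighted averages of the ionic moments; as a check, they can be reproduced in a few lines (a sketch assuming Python; the Mn<sup>4+</sup> fractions are those quoted in the text):

```python
# Spin-only saturation moment as a weighted average over Mn valences:
# Mn4+ has S = 3/2 (gS = 3 mu_B), Mn3+ has S = 2 (gS = 4 mu_B), with g = 2.
def saturation_moment(mn4_fraction):
    m_mn4 = 2.0 * 1.5   # gS for Mn4+, in Bohr magnetons
    m_mn3 = 2.0 * 2.0   # gS for Mn3+
    return mn4_fraction * m_mn4 + (1.0 - mn4_fraction) * m_mn3

for label, frac in [("Sample 1", 0.23), ("Sample 2", 0.33), ("Sample 3", 0.52)]:
    print(f"{label}: {saturation_moment(frac):.2f} mu_B")
# Sample 1: 3.77 mu_B, Sample 2: 3.67 mu_B, Sample 3: 3.48 mu_B
```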
In order to verify the consistency of our magnetic results, we have performed the same measurements on a second, similar series of LaMnO<sub>3+δ</sub> samples. The cluster-glass features of the intermediate-vacancy FM samples, i.e., the frequency-dependent AC susceptibility and the irreversibility of the low-field magnetization, were confirmed in this second series.
## IV Specific Heat Measurements
Figure 4(a) shows the specific heat of the investigated samples plotted as $`C/T`$ vs. $`T^2`$, in the temperature range of 4.5–15 K. For comparison, measurements on La<sub>0.90</sub>Ca<sub>0.10</sub>MnO<sub>3</sub>, a ferromagnetic insulator, and on La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub>, a ferromagnetic metal, are also
displayed. The latter has the same Mn<sup>4+</sup> content as in LaMnO<sub>3+δ</sub> with $`\delta =0.15`$. However, it is clear from the figure that the heat capacity is considerably higher in LaMnO<sub>3+δ</sub> as compared to the Ca-doped compounds. In order to interpret these results and evaluate the different contributions to the specific heat, the low-temperature data of each studied sample were fitted to the expression
$$C=\gamma T+\beta T^3+BT^{3/2}.$$
(1)
The linear coefficient $`\gamma `$ is usually attributed to charge carriers, and is proportional to the density of states at the Fermi level. However, transport measurements showed that all the investigated samples of LaMnO<sub>3+δ</sub> are insulating, and the appearance of a linear term in the specific heat must be more carefully interpreted. The lattice contribution is given by $`\beta T^3`$. A higher-order lattice term proportional to $`T^5`$ was not needed to fit the data in the temperature range up to 10 K. The term $`BT^{3/2}`$ is associated with FM spin-wave excitations. The coefficient
$`\beta `$ is related to the Debye temperature $`\theta _D`$, and the coefficient $`B`$ to the spin-wave stiffness constant $`D`$.
The fitting parameters obtained for all samples are given in Table II, and the fitted curves can be seen in Fig. 4(b) in a plot of $`C`$ vs. $`T`$. For Samples 1 and 2 ($`\delta `$ = 0.11 and 0.15), although the plot of $`C/T`$ vs. $`T^2`$ gives approximately straight lines, a careful fitting procedure confirms the existence of a magnetic $`BT^{3/2}`$ term. The uncertainty in the coefficients is estimated mostly by varying the fitted temperature range. All fitted curves fall within the experimental data with a maximum dispersion smaller than $`\pm `$0.7% in more than 90% of the points, and no systematic departures from the fitted curves are observed. In Sample 3 ($`\delta =0.26`$) we found no contribution from a linear term $`\gamma T`$. An upper estimate gives $`\gamma <0.8`$ mJ/mol K<sup>2</sup>, obtained using a maximum fitting temperature above 9 K. Below this range, the inclusion of a linear term in the fitted expression yields negative values of $`\gamma `$. By allowing the magnetic contribution to vary as $`BT^n`$
we find a best fit with $`n`$ very close to the assumed value of 3/2. One of the most important and unexpected results obtained from our low-temperature specific-heat data is the observation of a very high linear coefficient $`\gamma `$ in Samples 1 and 2. In this case, by fitting the data only with linear and cubic terms we obtain even higher values of $`\gamma `$. Possible origins of this contribution will be discussed below.
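Since Eq. (1) is linear in the coefficients, such a fit can be carried out by ordinary linear least squares. A minimal sketch (assuming Python with NumPy; the synthetic, noise-free data use illustrative parameter values, not the entries of Table II, and the assumption of $`r=5`$ atoms per formula unit of LaMnO<sub>3</sub> enters the Debye-temperature conversion):

```python
import numpy as np

# Fit C = gamma*T + beta*T^3 + B*T^(3/2), Eq. (1), by linear least squares.
gamma0, beta0, B0 = 20e-3, 0.15e-3, 4e-3   # illustrative values, J/(mol K^n)
T = np.linspace(4.5, 10.0, 60)
C = gamma0*T + beta0*T**3 + B0*T**1.5      # synthetic, noise-free data

A = np.column_stack([T, T**3, T**1.5])     # design matrix, one column per term
(gamma, beta, B), *_ = np.linalg.lstsq(A, C, rcond=None)

# Debye temperature from beta = 12*pi^4*r*R / (5*theta_D^3), with r = 5 atoms
# per formula unit of LaMnO3 and R the gas constant.
R = 8.314
theta_D = (12 * np.pi**4 * 5 * R / (5 * beta))**(1/3)
print(gamma, beta, B, round(theta_D))      # theta_D ~ 400 K for these numbers
```

With real data one would add the experimental scatter and, as described above, vary the fitted temperature range to estimate the uncertainty in the coefficients.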
The Debye temperature $`\theta _D`$ increases significantly with increasing vacancy content in LaMnO<sub>3+δ</sub>. The values of $`\theta _D`$, in the range 370–500 K, are comparable to those previously reported for manganite perovskites. A tendency of $`\theta _D`$ to increase with hole doping has been observed before. It has been argued that the reduction of lattice stiffness at low doping could be related to dynamic Jahn-Teller distortion in the compounds. The large value of $`\theta _D`$ in Sample 3, with the highest content of Mn<sup>4+</sup>, is close to that observed in the AF insulator La<sub>0.38</sub>Ca<sub>0.62</sub>MnO<sub>3</sub>. This suggests that AF interactions, also present in Sample 3, may contribute to a hardening of the lattice vibrations.
The magnitude of the $`BT^{3/2}`$ term is also of relevance, providing information on the spin-wave excitations in the compounds. The value of the spin-wave stiffness constant determined for Sample 1, $`D=75`$ meV Å<sup>2</sup>, is approximately half of that obtained for La<sub>0.7</sub>Ca<sub>0.3</sub>MnO<sub>3</sub> ($`D=170`$ meV Å<sup>2</sup>) and La<sub>0.7</sub>Sr<sub>0.3</sub>MnO<sub>3</sub> ($`D=154`$ meV Å<sup>2</sup>), both in the FM metallic phase. The value $`D=32`$ meV Å<sup>2</sup> in Sample 2 is of the same order as in the FM insulator La<sub>0.9</sub>Ca<sub>0.1</sub>MnO<sub>3</sub> ($`D=40`$ meV Å<sup>2</sup>), whose insulating character is also interpreted as a disorder effect. This is consistent with the expectation that increasing disorder, which weakens the ferromagnetic coupling, should give rise to lower values of $`D`$, i.e., “softer” spin waves. The observation of a magnetic $`BT^{3/2}`$ contribution in Sample 3, for which a spin-glass phase is observed, will be addressed in the next section.
For completeness, Fig. 5 displays the high-temperature (30–200 K) specific heat of the investigated LaMnO<sub>3+δ</sub> samples, plotted as $`C/T`$ vs. $`T`$. Results for Samples 2 and 3 are shifted downward for clarity. Sample 1 shows a small anomaly associated with the FM transition at 152 K, coinciding with the transition temperature obtained from the AC susceptibility. No anomaly is observed in the results for Samples 2 and 3. The inset shows the temperature derivative, $`d(C/T)/dT`$, for Samples 1 and 2. The FM transition in Sample 2 is visible in the derivative plot at 143 K, again coinciding with the susceptibility measurements. Phase transitions with a large temperature width often show no specific-heat anomaly, as reported for the FM insulator La<sub>0.90</sub>Ca<sub>0.10</sub>MnO<sub>3</sub>. For Sample 3, a specific-heat anomaly is not observed even in the derivative plot (not shown), as expected for a spin glass. For Sample 1, the entropy associated with the FM transition, which can be obtained from $`\mathrm{\Delta }S=\int (C/T)𝑑T`$, is $`\mathrm{\Delta }S=0.21\pm 0.02`$ J/mol K. The subtracted lattice contribution is estimated by excluding the peak region from the data and fitting the remaining data with a sum of three Einstein optical modes. The value of $`\mathrm{\Delta }S`$ is about an order of magnitude smaller than reported for Ca-doped samples and for other manganite compounds, where in turn the $`\mathrm{\Delta }S`$ values are also smaller than expected from the ordering of the spin system. A thorough discussion of this “missing” entropy can be found elsewhere.
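The entropy estimate can be illustrated numerically. The sketch below (assuming Python with NumPy) integrates $`C/T`$ over a synthetic Gaussian anomaly with roughly the quoted transition temperature; the peak height and width are invented illustrative numbers, not the measured data:

```python
import numpy as np

# Entropy of a transition anomaly: Delta S = integral of (C_anom / T) dT,
# where C_anom = C_total - C_lattice (lattice background already subtracted).
T = np.linspace(120.0, 180.0, 601)
Tc, width, height = 152.0, 5.0, 2.5          # K, K, J/mol K (illustrative)
C_anom = height * np.exp(-0.5 * ((T - Tc) / width)**2)

y = C_anom / T
dS = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(T))   # trapezoidal rule
print(f"Delta S = {dS:.3f} J/mol K")               # ~0.2 J/mol K here
```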
## V Discussion
From our susceptibility and magnetization data we have established that the FM phase of hole doped LaMnO<sub>3+δ</sub> samples evolves to a cluster-glass-like state. Several other manganite compounds present similar behavior when substitution occurs in the manganese site. If one takes, for instance, the standard CMR compound La<sub>0.7</sub>Ca<sub>0.3</sub>MnO<sub>3</sub>, substitution of Mn by Co (Ref. ) or In (Ref. ) also gives rise to an insulating cluster-glass phase. This suggests that the DE interaction, mostly responsible for the metallic FM state of doped manganites, is inhibited by random disorder in the system. The formation of FM clusters is accompanied by strong charge-localizing effects which yield an insulating state. Nevertheless, the size of the clusters must be large enough for the $`e_g`$ electrons to extend over several sites, and provide the observed FM interaction.
The most striking feature of the specific-heat data for Samples 1 and 2 is the appearance of an unexpectedly large linear term, in excess of 19 mJ/mol K<sup>2</sup>, although the system as a whole is an insulator with respect to transport properties. It is most important to understand the origin of this anomalous contribution. Compared with the increasing number of publications on doped manganite perovskite samples, relatively few reports on heat capacity have been presented. Low-temperature studies of LaMnO<sub>3</sub> doped with Ca, Sr, and Ba, all in the metallic FM phase, found a specific-heat linear term $`\gamma `$ in the range of 5–7 mJ/mol K<sup>2</sup>, associated with conduction electrons. However, a few previous investigations have reported high $`\gamma `$ values in insulating manganite samples: in the electron-doped system La<sub>2.3</sub>Ca<sub>0.7</sub>Mn<sub>2</sub>O<sub>7</sub> the authors found $`\gamma =41`$ mJ/mol K<sup>2</sup>, and in Nd<sub>0.67</sub>Sr<sub>0.33</sub>MnO<sub>3</sub> a value of $`\gamma =25`$ mJ/mol K<sup>2</sup> was observed. A detailed explanation for this contribution has not been put forward. As already mentioned, our magnetic results clearly allow us to infer that ferromagnetic order in LaMnO<sub>3+δ</sub> develops in regions of limited size (clusters), whose magnetic moments undergo a spin-glass-like transition. We will now argue that our heat-capacity results are consistent with this picture.
The stoichiometric compound LaMnO<sub>3</sub> ($`\delta =0`$) has an orthorhombic crystal structure, which is a distorted form of the cubic perovskite structure. The ideal cubic system would have a FM metallic character, with the Fermi energy lying in the middle of the $`e_g`$ band. The splitting of the $`e_g`$ bands, due to the Jahn-Teller distortion, leads to a small gap (1.5 eV) between the Mn $`e_g^1`$ and $`e_g^2`$ bands. This stabilizes the A-type AF order and makes the system a Mott insulator. As we dope with holes, Mn<sup>4+</sup> ions are created, the perovskite distortion decreases, and the Fermi level drops into the lower half of the split band. Thus the system becomes metallic, with the DE mechanism being responsible for charge transfer among Mn ions and for the consequent polarization of the $`t_{2g}`$ spins that yields ferromagnetic order. However, in non-stoichiometric LaMnO<sub>3+δ</sub>, the disorder introduced by random La and Mn vacancies may cause Anderson-like localization of the electron states close to the band edges. In contrast to what happens in the cation-substituted compounds, the disordering effect of Mn vacancies is strong enough for localization to be effective even at high concentrations of Mn<sup>4+</sup>.
Previous theoretical investigations confirmed that disorder leads to charge localization in doped manganites. As is well known since Anderson’s original paper, a distribution of site-dependent diagonal energies produces localization of the electronic states from the edges of the bands to an energy within them which is called the mobility edge. Allub and Alascio have shown that, for La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>, according to the amount of disorder and concentration of carriers, the Fermi level can cross the mobility edge to produce a metal-insulator transition. Disorder is quantified by a distribution width $`\mathrm{\Gamma }`$. If $`\mathrm{\Gamma }`$ is large, charge localization is enhanced, and the system remains insulating, as observed in our LaMnO<sub>3+δ</sub> samples. It is worth mentioning that electron localization also occurs in metallic manganite compounds. It has been argued that the $`e_g`$ electrons may be localized in large wave packets due to potential fluctuations arising from cation substitution, and additionally by spin-dependent fluctuations due to local deviations from FM order. In LaMnO<sub>3+δ</sub> the missing Mn ions enhance these random fluctuations, favoring charge localization.
On the other hand, it is reasonable to assume that at low vacancy concentration the Fermi level does not fall too far above the mobility edge, which implies that the localization length may be fairly large. Charge carriers can thus hop between a number of Mn ions, which actually defines the FM clusters. The electron mobility inside the clusters ensures the effectiveness of the DE interaction, giving rise to the observed FM behavior. Furthermore, if the FM regions are not too small, regular spin waves can be excited inside them, yielding the observed $`T^{3/2}`$ contribution to the specific heat. This is consistent with the values obtained for the spin-wave stiffness in these compounds. The electron levels, although localized, are not widely spaced in energy, allowing for thermal excitations that contribute a linear term to the specific heat as a function of temperature. It remains to be understood why the coefficient of the specific-heat linear term is so large in our results compared with other perovskite systems. A number of mass enhancement mechanisms may be envisaged, such as magnetic polarons, lattice polarons related to the dynamical Jahn-Teller effect, or Coulomb interaction effects. However, it is not straightforward to understand why these effects would not be equally noticeable in most doped manganite compounds. We suggest that the explanation lies in the fact that localization has shifted the Fermi level to a region of high density of states. For instance, it is possible that the disorder yields an enhancement of the two-dimensional character of the bands, giving rise to a high density of states. Indeed, band structure calculations on stoichiometric LaMnO<sub>3</sub> revealed sharp features resembling the typical logarithmic van Hove singularities of two-dimensional tight-binding bands.
Sample 3, with the largest vacancy content, shows qualitatively distinct characteristics. Nevertheless, its behavior can be interpreted with the same arguments discussed above. The localization length is now very small due to the high degree of disorder. Thus, FM clusters are no longer formed, which is consistent with the observed magnetic response of the compound. The absence of a linear term in the specific heat reflects the higher degree of localization of the charge carriers, which effectively prevents the DE mechanism. The system behavior closely resembles that of a regular spin glass, with short range FM interaction competing with AF coupling, the latter arising from the high Mn<sup>4+</sup> content. It is somewhat puzzling, though, that the dominant contribution to the specific heat is a term proportional to $`T^{3/2}`$, which is usually attributed to spin waves in a long range FM system. However, according to computer simulations by Walker and Walstedt on a model spin glass, the low-energy excitations are collective modes, even though the local magnetic moments do not show long range order. Thus, some power-law behavior of the specific heat with temperature can be expected. Linear and quadratic terms are obtained in Ref. for a model metallic spin glass with RKKY interactions, but the actual value of the exponent depends on details of the distribution of low-lying excitations, and a value of 3/2 cannot be ruled out.
## VI Conclusions
In this work we have presented measurements of AC susceptibility, DC magnetization, and specific heat in a series of LaMnO<sub>3+δ</sub> samples with large $`\delta `$ values, and therefore a high degree of disorder. The aim is to provide a better understanding of the role of La and Mn vacancies in the properties of mixed-valence manganites. From our analysis we may draw two main conclusions: (i) magnetic measurements showed that the previously known FM insulating phase of these compounds displays a disorder-induced cluster-glass-like behavior; (ii) the anomalously high specific-heat linear coefficient $`\gamma `$ gives evidence of a high density of localized states around the Fermi level, even though the latter falls in a region of Anderson-localized states. Hence, charge localization is enhanced by the disorder in the system, and the low-temperature FM insulating state consists of randomly oriented FM clusters, which align in small applied fields. The carriers, though localized, may hop between several Mn sites to sustain the DE interaction responsible for the FM order. In the sample with the highest vacancy content, the increased random disorder and the competition between FM and AF interactions give rise to a spin-glass state.
## VII Acknowledgments
We thank Mucio Continentino and Gerardo Martínez for helpful discussions. This research was financed by the Brazilian Ministry of Science and Technology under the contract PRONEX/FINEP/CNPq no 41.96.0907.00. Additional support was also given by FUJB and FAPERJ. J.A.A. thanks the Spanish CICyT for funds to the project PB97-1181. L.F.C. was supported by the EPSRC grant number GR/K 73862 and by the Royal Society, U.K.
# High-quality variational wave functions for small 4He clusters
## ACKNOWLEDGMENTS
We are grateful to S.A. Chin and E. Krotscheck for providing us with useful information about their previous work. M.P. acknowledges CONICET (Argentina) for a fellowship. This work has been partially supported by grant PB97-1139 (Spain).
## 1 Introduction
Integrable one-dimensional models with long ranged interactions have recently attracted a lot of interest due to their close connection with diverse subjects like fractional statistics, random matrix theory, level statistics for disordered systems, Yangian algebra, q-polynomials etc. \[1-20\]. The $`SU(M)`$ Polychronakos spin chain \[6-8\] is a well known example of such integrable models with the Hamiltonian given by
$$H_P=\underset{1\le i<j\le N}{\sum }\frac{\left(1-ϵP_{ij}\right)}{\left(\overline{x}_i-\overline{x}_j\right)^2},$$
(1.1)
where $`ϵ=1(-1)`$ represents the ferromagnetic (anti-ferromagnetic) case and $`P_{ij}`$ is the exchange operator interchanging the ‘spins’ (taking $`M`$ possible values) of the $`i`$-th and $`j`$-th lattice sites. Moreover, the positions of the corresponding lattice sites ($`\overline{x}_i`$), which are inhomogeneously distributed on a line, are given by the zeros of the $`N`$-th order Hermite polynomial and may also be obtained as a solution of the following set of equations:
$$x_i=\underset{k\ne i}{\sum }\frac{2}{\left(x_i-x_k\right)^3},$$
(1.2)
where $`i\in [1,2,\mathrm{\dots },N]`$. Though the Polychronakos spin chain (1.1) does not enjoy translational invariance, one can obtain its exact spectrum as well as its partition function by considering the ‘freezing limit’ of the Calogero Hamiltonian, which possesses both spin and particle degrees of freedom:
$$H_C=\frac{1}{2}\underset{i=1}{\overset{N}{\sum }}\left(-\frac{\partial ^2}{\partial x_i^2}+\omega ^2x_i^2\right)+\underset{1\le i<j\le N}{\sum }\frac{l\left(l-ϵP_{ij}\right)}{\left(x_i-x_j\right)^2},$$
(1.3)
$`\omega `$ and $`l`$ being some positive coupling constants. Thus, it is revealed that the Hamiltonian (1.1) generates an equidistant spectrum, where the energy levels are highly degenerate in general. Such high degeneracy of energy levels can be explained through the ‘motif’ picture originating from the $`Y(gl_M)`$ Yangian symmetry of the Polychronakos spin chain (1.1) and Calogero model (1.3) . Moreover, the partition function of the Hamiltonian (1.1) is found to be closely related to the $`SU(M)`$ Rogers-Szeg$`\ddot{\mathrm{o}}`$ (RS) polynomial, which have appeared earlier in different contexts like the theory of partitions and the character formula for the Heisenberg XXX spin chain .
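The statement that the zeros of the $`N`$-th Hermite polynomial solve eq. (1.2) is easy to verify numerically; a sketch assuming Python with NumPy (whose `numpy.polynomial.hermite` module uses the physicists' Hermite convention):

```python
import numpy as np
from numpy.polynomial.hermite import hermroots

# Check that the zeros of H_N satisfy x_i = sum_{k != i} 2/(x_i - x_k)^3.
N = 8
x = hermroots([0.0] * N + [1.0])      # the N zeros of H_N
diff = x[:, None] - x[None, :]
np.fill_diagonal(diff, np.inf)        # drop the k = i terms
rhs = (2.0 / diff**3).sum(axis=1)
print(np.max(np.abs(x - rhs)))        # tiny residual: eq. (1.2) holds
```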
Now, for the purpose of obtaining a supersymmetric extension of the Polychronakos spin chain, we consider a set of operators $`C_{i\alpha }^{\dagger }`$ ($`C_{i\alpha }`$), which create (annihilate) a particle of species $`\alpha `$ on the $`i`$-th lattice site. Such creation (annihilation) operators are defined to be bosonic if $`\alpha \in [1,2,\mathrm{\dots },m]`$ and fermionic if $`\alpha \in [m+1,m+2,\mathrm{\dots },m+n]`$ (where, according to our notation, $`m+n=M`$). Next, we focus our attention on the subspace of the related Fock space in which the total number of particles per site is always one:
$$\underset{\alpha =1}{\overset{M}{\sum }}C_{i\alpha }^{\dagger }C_{i\alpha }=1,$$
(1.4)
for all $`i`$. On the above mentioned subspace, one can define a supersymmetric exchange operator as
$$\widehat{P}_{ij}^{(m|n)}=\underset{\alpha ,\beta =1}{\overset{M}{\sum }}C_{i\alpha }^{\dagger }C_{j\beta }^{\dagger }C_{i\beta }C_{j\alpha },$$
(1.5)
and show that this $`\widehat{P}_{ij}^{(m|n)}`$ yields a realisation of the permutation algebra given by
$$𝒫_{ij}^2=1,𝒫_{ij}𝒫_{jl}=𝒫_{il}𝒫_{ij}=𝒫_{jl}𝒫_{il},[𝒫_{ij},𝒫_{lm}]=0,$$
(1.6)
($`i,j,l,m`$ being all distinct). This supersymmetric exchange operator (1.5) was used earlier for constructing the $`SU(m|n)`$ supersymmetric Haldane-Shastry (HS) model. So, in analogy with the case of the supersymmetric HS model, we may use the exchange operator (1.5) to construct a Hamiltonian of the form
$$\mathcal{H}_P^{(m|n)}=\underset{1\le i<j\le N}{\sum }\frac{\left(1-\widehat{P}_{ij}^{(m|n)}\right)}{\left(\overline{x}_i-\overline{x}_j\right)^2}.$$
(1.7)
It should be noted that in the special case $`m=M,n=0`$, i.e. when all degrees of freedom are bosonic, $`\widehat{P}_{ij}^{(m|n)}`$ (1.5) becomes equivalent to the spin exchange operator $`P_{ij}`$ which appears in the Hamiltonian (1.1). Therefore, in this pure bosonic case, (1.7) reproduces the ferromagnetic Polychronakos spin chain (1.1) with $`ϵ=1`$. Similarly, for the case $`m=0,n=M`$, i.e., when all degrees of freedom are fermionic, (1.7) reproduces the anti-ferromagnetic Polychronakos spin chain (1.1) with $`ϵ=-1`$. So, when both bosonic and fermionic degrees of freedom are involved (i.e., when both $`m`$ and $`n`$ take nonzero values), we may say that $`\mathcal{H}_P^{(m|n)}`$ (1.7) represents the Hamiltonian of the $`SU(m|n)`$ supersymmetric Polychronakos (SP) model.
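The involution property of the exchange operator, and its reduction to $`\pm P_{ij}`$ in the purely bosonic and purely fermionic cases, can be checked directly on two sites, where the operator simply swaps the two species and supplies a minus sign when both are fermionic. A sketch (assuming Python with NumPy; the explicit matrix below is an illustrative representation, with species $`0,\mathrm{\dots },m-1`$ bosonic and $`m,\mathrm{\dots },m+n-1`$ fermionic):

```python
import numpy as np

# Two-site graded exchange: |a b> -> (-1)^{F(a)F(b)} |b a>, where F = 1
# for fermionic species and 0 for bosonic ones.
def graded_swap(m, n):
    M = m + n
    P = np.zeros((M * M, M * M))
    for a in range(M):
        for b in range(M):
            sign = -1.0 if (a >= m and b >= m) else 1.0
            P[b * M + a, a * M + b] = sign   # column |a b>, row |b a>
    return P

for m, n in [(2, 0), (0, 2), (1, 1)]:
    P = graded_swap(m, n)
    assert np.allclose(P @ P, np.eye((m + n) ** 2))   # involution, cf. (1.6)
# pure fermions: the graded exchange is minus the ordinary spin exchange
assert np.allclose(graded_swap(0, 2), -graded_swap(2, 0))
print("involution and bosonic/fermionic limits verified")
```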
In this article, our aim is to study the spectrum and partition function of the above defined $`SU(m|n)`$ SP model. To this end, in Sec.2, we introduce an appropriate extension of the spin Calogero model (1.3), which generates the SP model (1.7) at the ‘freezing limit’. In this section, we also demonstrate that the spectrum of such an extended spin Calogero model is essentially the same as the spectrum of decoupled $`SU(m|n)`$ supersymmetric harmonic oscillators, where each oscillator has $`m`$ bosonic as well as $`n`$ fermionic spin degrees of freedom. In this way we are able to show that the equidistant spectrum of the $`SU(m|n)`$ SP model (1.7) can be obtained by simply ‘modding out’ the spectrum of $`SU(m|n)`$ harmonic oscillators through the spectrum of spinless bosonic harmonic oscillators. Subsequently, in Sec.3, we derive the partition function of the SP model (1.7) and, interestingly, observe that such partition functions can be expressed through some novel $`q`$-polynomials. Finally, we obtain a duality relation between the partition functions of the $`SU(m|n)`$ and $`SU(n|m)`$ SP models. Sec.4 is the concluding section.
## 2 Spectra of $`SU(m|n)`$ SP model and related spin Calogero model
For obtaining the spectrum of SP model, we wish to follow here the approach of Ref.8 and find out at first a suitable extension of spin Calogero model (1.3), which would reproduce the SP model (1.7) at the ‘freezing limit’. However, one immediate problem which arises at present is that a spin Calogero model like (1.3) is a first quantised system, while the SP model (1.7) is a second quantised system. So, for applying the above mentioned approach, it is convenient to transform the second quantised SP model (1.7) to a first quantised spin system.
To this end, we now consider some special cases of ‘anyon like’ representations of the permutation algebra (1.6), which were used earlier for constructing integrable extensions of the $`SU(M)`$ Calogero-Sutherland model as well as the HS spin chain \[25-28\]. Such special cases of ‘anyon like’ representations are defined by their action on a spin state $`|\alpha _1\alpha _2\mathrm{\cdots }\alpha _N\rangle `$ (with $`\alpha _i\in [1,2,\mathrm{\dots },M]`$) as
$$\stackrel{~}{P}_{ij}^{(m|n)}|\alpha _1\alpha _2\mathrm{\cdots }\alpha _i\mathrm{\cdots }\alpha _j\mathrm{\cdots }\alpha _N\rangle =e^{i\mathrm{\Phi }^{(m|n)}(\alpha _i,\alpha _{i+1},\mathrm{\dots },\alpha _j)}|\alpha _1\alpha _2\mathrm{\cdots }\alpha _j\mathrm{\cdots }\alpha _i\mathrm{\cdots }\alpha _N\rangle ,$$
(2.1)
where $`\mathrm{\Phi }^{(m|n)}(\alpha _i,\alpha _{i+1},\mathrm{\dots },\alpha _j)=0`$ if $`\alpha _i,\alpha _j\in [1,2,\mathrm{\dots },m]`$, $`\mathrm{\Phi }^{(m|n)}(\alpha _i,\alpha _{i+1},\mathrm{\dots },\alpha _j)=\pi `$ if $`\alpha _i,\alpha _j\in [m+1,m+2,\mathrm{\dots },m+n]`$, and $`\mathrm{\Phi }^{(m|n)}(\alpha _i,\alpha _{i+1},\mathrm{\dots },\alpha _j)=\pi \sum _{\tau =m+1}^{m+n}\sum _{p=i+1}^{j-1}`$ $`\delta _{\tau ,\alpha _p}`$ if $`\alpha _i\in [1,2,\mathrm{\dots },m]`$ and $`\alpha _j\in [m+1,m+2,\mathrm{\dots },m+n]`$ or vice versa. It is clear that $`\stackrel{~}{P}_{ij}^{(M|0)}`$ reproduces the original spin exchange operator $`P_{ij}`$ and $`\stackrel{~}{P}_{ij}^{(0|M)}`$ reproduces $`-P_{ij}`$. Next we notice that, due to the constraint (1.4), the Hilbert space associated with the SP Hamiltonian (1.7) can be spanned through the following orthonormal basis vectors: $`C_{1\alpha _1}^{\dagger }C_{2\alpha _2}^{\dagger }\mathrm{\cdots }C_{N\alpha _N}^{\dagger }|0\rangle `$, where $`|0\rangle `$ is the vacuum state and $`\alpha _i\in [1,2,\mathrm{\dots },M]`$. So, it is possible to define a one-to-one correspondence between the state vectors of the above mentioned Hilbert space and the state vectors associated with a spin chain as
$$|\alpha _1\alpha _2\mathrm{\cdots }\alpha _N\rangle \equiv C_{1\alpha _1}^{\dagger }C_{2\alpha _2}^{\dagger }\mathrm{\cdots }C_{N\alpha _N}^{\dagger }|0\rangle .$$
(2.2)
However, it should be noted that the matrix elements of $`\stackrel{~}{P}_{ij}^{(m|n)}`$ (2.1) and $`\widehat{P}_{ij}^{(m|n)}`$ (1.5) are related as
$$\langle \alpha _1^{\prime }\alpha _2^{\prime }\mathrm{\cdots }\alpha _N^{\prime }|\stackrel{~}{P}_{ij}^{(m|n)}|\alpha _1\alpha _2\mathrm{\cdots }\alpha _N\rangle =\langle 0|C_{N\alpha _N^{\prime }}C_{N-1,\alpha _{N-1}^{\prime }}\mathrm{\cdots }C_{1\alpha _1^{\prime }}\widehat{P}_{ij}^{(m|n)}C_{1\alpha _1}^{\dagger }C_{2\alpha _2}^{\dagger }\mathrm{\cdots }C_{N\alpha _N}^{\dagger }|0\rangle ,$$
(2.3)
where the $`\alpha _i`$s and $`\alpha _i^{\prime }`$s may be chosen in all possible ways. Thus one finds that the ‘anyon like’ representation (2.1) is in fact equivalent to the supersymmetric realisation (1.5). Consequently, the first quantised spin Hamiltonian given by
$$H_P^{(m|n)}=\underset{1\le i<j\le N}{\sum }\frac{\left(1-\stackrel{~}{P}_{ij}^{(m|n)}\right)}{\left(\overline{x}_i-\overline{x}_j\right)^2},$$
(2.4)
would also be completely equivalent to the second quantised SP model (1.7). In particular, the Hamiltonians (1.7) and (2.4) would share the same spectrum and their eigenfunctions can be related through the correspondence (2.2).
Next, we define a spin Calogero model as
$$H_C^{(m|n)}=\frac{1}{2}\underset{i=1}{\overset{N}{\sum }}\left(-\frac{\partial ^2}{\partial x_i^2}+\omega ^2x_i^2\right)+\underset{1\le i<j\le N}{\sum }\frac{l\left(l-\stackrel{~}{P}_{ij}^{(m|n)}\right)}{\left(x_i-x_j\right)^2},$$
(2.5)
where $`\stackrel{~}{P}_{ij}^{(m|n)}`$ is the ‘anyon like’ representation (2.1). In the special case $`m=M,n=0`$ ($`m=0,n=M`$), the above Hamiltonian reproduces the $`SU(M)`$ spin Calogero model (1.3) with $`ϵ=1`$ ($`ϵ=-1`$). Notice that the Hamiltonian (2.5) can be rewritten as
$$H_C^{(m|n)}=H_0+lH_1^{(m|n)},$$
(2.6)
where $`H_0`$ is the Hamiltonian for spinless Calogero model:
$$H_0=\frac{1}{2}\underset{i=1}{\overset{N}{\sum }}\left(-\frac{\partial ^2}{\partial x_i^2}+\omega ^2x_i^2\right)+\underset{1\le i<j\le N}{\sum }\frac{l\left(l-1\right)}{\left(x_i-x_j\right)^2},$$
(2.7)
and $`H_1^{(m|n)}`$ is obtained from $`H_P^{(m|n)}`$ (2.4) by replacing $`\overline{x}_i`$ and $`\overline{x}_j`$ with $`x_i`$ and $`x_j`$ respectively. Thus, the operator $`H_1^{(m|n)}`$ possesses both spin and particle degrees of freedom. However, in analogy with the case of the usual spin Calogero model (1.3), we may now consider the ‘freezing limit’ of the extended spin Calogero model (2.5). Such a ‘freezing limit’ is obtained by setting $`l=\omega `$ in the Hamiltonian (2.5) and finally taking its $`\omega \rightarrow \mathrm{\infty }`$ limit. It is evident that, at this ‘freezing limit’, particle and spin degrees of freedom of the Hamiltonian (2.5) decouple and the operator $`H_1^{(m|n)}`$ is transformed to a pure spin Hamiltonian, where the fixed values of the $`x_i`$s are determined through the minima of the potential energy associated with the spinless Calogero Hamiltonian (2.7). Since the solution of eqn.(1.2) leads to the minima of such potential energy, the operator $`H_1^{(m|n)}`$ will exactly reproduce the SP Hamiltonian (2.4) at the ‘freezing limit’. Consequently, the eigenfunctions of the spin Calogero model (2.5) will factorise into eigenfunctions of the spinless Calogero model (2.7) containing only particle degrees of freedom and of the $`SU(m|n)`$ SP model (2.4) containing only spin degrees of freedom. Moreover, at this ‘freezing limit’, the energy eigenvalues (denoted by $`E_{p,s}(\omega )`$, where the subscripts $`p`$ and $`s`$ represent the particle and spin degrees of freedom respectively) of the full system (2.5) can be expressed as
$$E_{p,s}(\omega )=E_p(\omega )+\omega E_s,$$
(2.8)
where $`E_p(\omega )`$ and $`E_s`$ denote the energy eigenvalues of spinless Calogero model (2.7) and SP spin chain (2.4) respectively.
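For small chains the equidistant spectrum mentioned above can also be seen by brute-force diagonalization. The sketch below (assuming Python with NumPy) builds the ferromagnetic $`SU(2)`$ chain of eq. (1.1) on the lattice of Hermite zeros for $`N=3`$; its eigenvalues come out as non-negative integers to numerical accuracy:

```python
import numpy as np
from numpy.polynomial.hermite import hermroots

# Brute-force spectrum of the ferromagnetic SU(M) Polychronakos chain,
# Eq. (1.1) with epsilon = +1, on the lattice of Hermite zeros.
def spectrum(N, M=2):
    x = hermroots([0.0] * N + [1.0])          # zeros of H_N
    dim = M**N
    H = np.zeros((dim, dim))
    for i in range(N):
        for j in range(i + 1, N):
            P = np.zeros((dim, dim))          # spin exchange P_ij
            for s in range(dim):
                d = [(s // M**k) % M for k in range(N)]   # spins of state s
                d[i], d[j] = d[j], d[i]
                P[sum(v * M**k for k, v in enumerate(d)), s] = 1.0
            H += (np.eye(dim) - P) / (x[i] - x[j]) ** 2
    return np.linalg.eigvalsh(H)

print(np.round(spectrum(3), 8))   # non-negative integers, with degeneracies
```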
It is clear from eqn.(2.8) that, by ‘modding out’ the spectrum of spin Calogero model (2.5) through the spectrum of spinless Calogero model (2.7), one can construct the spectrum of SP model (2.4) or (1.7). So, for the purpose of obtaining the spectrum of SP model, it is essential to find out at first the spectrum of spin Calogero model (2.5). To this end, we consider the spinless Calogero model of distinguishable particles, which is described by a Hamiltonian of the form
$$\mathcal{H}_0=\frac{1}{2}\underset{i=1}{\overset{N}{\sum }}\left(-\frac{\partial ^2}{\partial x_i^2}+\omega ^2x_i^2\right)+\underset{1\le i<j\le N}{\sum }\frac{l\left(l-K_{ij}\right)}{\left(x_i-x_j\right)^2},$$
(2.9)
where $`K_{ij}`$ is the coordinate exchange operator: $`K_{ij}\psi (\mathrm{\dots },x_i,\mathrm{\dots },x_j,\mathrm{\dots })=\psi (\mathrm{\dots },x_j,\mathrm{\dots },x_i,\mathrm{\dots })`$. By restricting the action of the above Hamiltonian to completely symmetric wave functions, we may recover the spinless Calogero model (2.7) of indistinguishable particles. It has recently been found that, by using some similarity transformations, one can completely decouple all particle degrees of freedom of the Calogero Hamiltonians (2.7) as well as (2.9) \[29-33\]. Such similarity transformations naturally lead to a very efficient method of calculating the eigenfunctions of spinless Calogero models. In particular, the spectrum and eigenfunctions of $`\mathcal{H}_0`$ (2.9) can be obtained through a similarity transformation which maps this interacting Hamiltonian to a system of decoupled harmonic oscillators of the form
$$H_{free}=\omega \sum _{k=1}^{N}a_k^{\dagger }a_k,$$
(2.10)
where $`a_k^{\dagger }=\frac{1}{\sqrt{2\omega }}\left(\omega x_k-\frac{\partial }{\partial x_k}\right)`$ and $`a_k=\frac{1}{\sqrt{2\omega }}\left(\omega x_k+\frac{\partial }{\partial x_k}\right)`$. The above mentioned similarity transformation is explicitly given by
$$𝒮^1\left(_0E_g\right)𝒮=H_{free},$$
(2.11)
where $`E_g=\frac{1}{2}N\omega \left(Nl+(1-l)\right)`$ and
$$𝒮=\varphi _ge^{\frac{1}{4\omega }𝒪_L}e^{\frac{1}{4\omega }\partial ^2}e^{\frac{1}{2}\omega X^2},$$
(2.12)
with
$`X^2={\displaystyle \sum _{j=1}^{N}}x_j^2,\partial ^2={\displaystyle \sum _{j=1}^{N}}{\displaystyle \frac{\partial ^2}{\partial x_j^2}},\varphi _g={\displaystyle \prod _{1\leq j<k\leq N}}|x_j-x_k|^l\mathrm{exp}\left(-{\displaystyle \frac{\omega }{2}}{\displaystyle \sum _{j=1}^{N}}x_j^2\right),`$
$`𝒪_L={\displaystyle \sum _{j=1}^{N}}{\displaystyle \frac{\partial ^2}{\partial x_j^2}}+l{\displaystyle \sum _{j\neq k}}\left\{{\displaystyle \frac{1}{(x_j-x_k)}}\left({\displaystyle \frac{\partial }{\partial x_j}}-{\displaystyle \frac{\partial }{\partial x_k}}\right)+{\displaystyle \frac{K_{jk}-1}{(x_j-x_k)^2}}\right\}.`$
The operator $`𝒪_L`$ is called the Lassalle operator. As is well known, the eigenfunctions of the decoupled oscillators (2.10) can be written in the form: $`|n_1n_2\mathrm{}n_N\rangle =\prod _{j=1}^N\left(a_j^{\dagger }\right)^{n_j}|0\rangle ,`$ where $`|0\rangle `$ is the corresponding vacuum state. The coordinate representation of such eigenfunctions, having eigenvalues $`\omega \sum _{j=1}^Nn_j`$, is given by
$$\psi _{n_1,n_2,\mathrm{},n_N}(x_1,x_2,\mathrm{},x_N)=e^{-\frac{1}{2}\omega X^2}\prod _{j=1}^{N}H_{n_j}(x_j),$$
(2.13)
where $`H_{n_j}(x_j)`$ is the Hermite polynomial of order $`n_j`$. It should be noted that the above eigenfunctions do not, in general, obey any symmetry property under the exchange of coordinates and are therefore not restricted by any particular statistics. Due to the similarity transformation (2.11), the eigenfunctions of the spinless Calogero model (2.9) may be obtained from the eigenfunctions (2.13) of the free oscillators as
$$\chi _{n_1,n_2,\mathrm{},n_N}(x_1,x_2,\mathrm{},x_N)=𝒮\left(e^{-\frac{1}{2}\omega X^2}\prod _{j=1}^{N}H_{n_j}(x_j)\right),$$
(2.14)
with $`E_{\{n_i\}}=E_g+\omega \sum _{j=1}^Nn_j`$ representing the corresponding eigenvalues. So, apart from a constant energy shift ($`E_g`$) for all levels, the spectrum of the spinless Calogero model (2.9) is exactly the same as the spectrum of the decoupled Hamiltonian (2.10) containing distinguishable particles. A comment might be in order here. The exponential of the Lassalle operator, $`e^{\frac{1}{4\omega }𝒪_L}`$, yields essential singularities at $`x_i=x_j`$, $`i,j=1,2,\mathrm{},N`$, when it operates on general multivariable functions. However, such essential singularities are not generated when it operates on multivariable polynomials. Our use of the Lassalle operator in expression (2.14) is of this safe type. Details on the properties of the Lassalle operator are given in refs. 32, 33.
In the following, we want to show that the eigenfunctions of the spin Calogero model (2.5) can also be obtained from the eigenfunctions of decoupled harmonic oscillators, provided the latter possess nondynamical spin degrees of freedom and obey a definite statistics. To this end, we introduce a projection operator $`\mathrm{\Lambda }_N^{(m|n)}`$ which satisfies the relations
$$K_{ij}\stackrel{~}{P}_{ij}^{(m|n)}\mathrm{\Lambda }_N^{(m|n)}=\mathrm{\Lambda }_N^{(m|n)}K_{ij}\stackrel{~}{P}_{ij}^{(m|n)}=\mathrm{\Lambda }_N^{(m|n)},$$
(2.15)
where $`i,j\in [1,2,\mathrm{},N]`$. Such a projector can be formally written in terms of the ‘transposition’ operator $`\tau _{ij}^{(m|n)}`$ ($`=\stackrel{~}{P}_{ij}^{(m|n)}K_{ij}`$) as
$$\mathrm{\Lambda }_N^{(m|n)}=\sum _{p}\sum _{\{i_k,j_k\}}\tau _{i_1j_1}^{(m|n)}\tau _{i_2j_2}^{(m|n)}\mathrm{}\tau _{i_pj_p}^{(m|n)},$$
(2.16)
where the product of transpositions $`\tau _{i_1j_1}^{(m|n)}\tau _{i_2j_2}^{(m|n)}\mathrm{}\tau _{i_pj_p}^{(m|n)}`$ represents an element of the permutation group ($`P_N`$) of $`N`$ objects and, due to the summations over $`p`$ and $`\{i_k,j_k\}`$, each element of $`P_N`$ appears exactly once on the r.h.s. of the above equation. For example, $`\mathrm{\Lambda }_2^{(m|n)}`$ and $`\mathrm{\Lambda }_3^{(m|n)}`$ are given by: $`\mathrm{\Lambda }_2^{(m|n)}=1+\tau _{12}^{(m|n)},`$ $`\mathrm{\Lambda }_3^{(m|n)}=1+\tau _{12}^{(m|n)}+\tau _{23}^{(m|n)}+\tau _{13}^{(m|n)}+\tau _{23}^{(m|n)}\tau _{12}^{(m|n)}+\tau _{12}^{(m|n)}\tau _{23}^{(m|n)}.`$ It should be noted that in the special case $`m=M,n=0`$ ($`m=0,n=M`$), the projector $`\mathrm{\Lambda }_N^{(m|n)}`$ (2.16) completely symmetrises (antisymmetrises) any wave function under simultaneous interchange of particle and spin degrees of freedom, and thus projects the wave function to the bosonic (fermionic) subspace. Next, we multiply the eigenfunction (2.14) of the Calogero Hamiltonian (2.9) by an arbitrary spin state $`|\alpha _1\alpha _2\mathrm{}\alpha _N\rangle `$ and subsequently apply the projector (2.16) to obtain a wave function of the form
$$\mathrm{\Psi }_{n_1,n_2,\mathrm{},n_N}^{\alpha _1,\alpha _2,\mathrm{},\alpha _N}(x;\gamma )=\langle \gamma _1\gamma _2\mathrm{}\gamma _N|\mathrm{\Lambda }_N^{(m|n)}\left\{𝒮\left(e^{-\frac{1}{2}\omega X^2}\prod _{j=1}^{N}H_{n_j}(x_j)\right)|\alpha _1\alpha _2\mathrm{}\alpha _N\rangle \right\},$$
(2.17)
where $`x\equiv x_1,x_2,\mathrm{},x_N`$ and $`\gamma \equiv \gamma _1,\gamma _2,\mathrm{},\gamma _N`$. Since the operator $`_0`$ (2.9) commutes with $`K_{ij}`$ as well as with $`\mathrm{\Lambda }_N^{(m|n)}`$ (2.16), the expression (2.17) again gives an eigenfunction of the Calogero Hamiltonian (2.9), where each particle possesses some nondynamical spin degrees of freedom. Consequently, with the help of relation (2.15), which allows one to ‘replace’ $`K_{ij}`$ by $`\stackrel{~}{P}_{ij}^{(m|n)}`$, one can show that $`\mathrm{\Psi }_{n_1,n_2,\mathrm{},n_N}^{\alpha _1,\alpha _2,\mathrm{},\alpha _N}(x;\gamma )`$ (2.17) also gives an eigenfunction of the spin Calogero model (2.5) with eigenvalue $`E_{\{n_i\},\{\alpha _i\}}=E_g+\omega \sum _{j=1}^Nn_j`$.
Thus, interestingly, eqn.(2.17) produces all eigenfunctions of the spin Calogero model (2.5) from the known eigenfunctions of the decoupled harmonic oscillators (2.10). However, it is important to notice that, due to the presence of the projector $`\mathrm{\Lambda }_N^{(m|n)}`$ in eqn.(2.17), a definite correlation is now imposed among the eigenfunctions of the decoupled harmonic oscillators. To demonstrate this point explicitly, we first observe that the operator $`𝒮`$ (2.12) commutes with $`K_{ij}`$ and with the projector $`\mathrm{\Lambda }_N^{(m|n)}`$ (2.16). By using these commutation relations, eqn.(2.17) can be rewritten as
$$\mathrm{\Psi }_{n_1,n_2,\mathrm{},n_N}^{\alpha _1,\alpha _2,\mathrm{},\alpha _N}(x;\gamma )=𝒮\stackrel{~}{\mathrm{\Psi }}_{n_1,n_2,\mathrm{},n_N}^{\alpha _1,\alpha _2,\mathrm{},\alpha _N}(x;\gamma ),$$
(2.18)
where
$$\stackrel{~}{\mathrm{\Psi }}_{n_1,n_2,\mathrm{},n_N}^{\alpha _1,\alpha _2,\mathrm{},\alpha _N}(x;\gamma )=\langle \gamma _1\gamma _2\mathrm{}\gamma _N|\mathrm{\Lambda }_N^{(m|n)}\left\{\left(e^{-\frac{1}{2}\omega X^2}\prod _{j=1}^{N}H_{n_j}(x_j)\right)|\alpha _1\alpha _2\mathrm{}\alpha _N\rangle \right\}.$$
(2.19)
This $`\stackrel{~}{\mathrm{\Psi }}_{n_1,n_2,\mathrm{},n_N}^{\alpha _1,\alpha _2,\mathrm{},\alpha _N}(x;\gamma )`$ represents a correlated eigenfunction (with eigenvalue $`\omega \sum _{j=1}^Nn_j`$) of the decoupled harmonic oscillators (2.10), where each oscillator carries $`M`$ nondynamical spin degrees of freedom. To determine the precise nature of the above mentioned correlation, we use the relation (2.15) and replace $`\mathrm{\Lambda }_N^{(m|n)}`$ by $`\mathrm{\Lambda }_N^{(m|n)}K_{ij}\stackrel{~}{P}_{ij}^{(m|n)}`$ on the r.h.s. of eqn.(2.19). Finally, by letting $`\stackrel{~}{P}_{ij}^{(m|n)}`$ act on $`|\alpha _1\alpha _2\mathrm{}\alpha _N\rangle `$ and $`K_{ij}`$ act on $`e^{-\frac{1}{2}\omega X^2}\prod _{j=1}^NH_{n_j}(x_j)`$, we find that the eigenfunctions (2.19) of the free oscillators must satisfy the following symmetry condition under the simultaneous interchange of the corresponding particle and spin quantum numbers:
$$\stackrel{~}{\mathrm{\Psi }}_{n_1,\mathrm{},n_i,\mathrm{},n_j,\mathrm{},n_N}^{\alpha _1,\mathrm{},\alpha _i,\mathrm{},\alpha _j,\mathrm{},\alpha _N}(x;\gamma )=e^{i\mathrm{\Phi }^{(m|n)}(\alpha _i,\alpha _{i+1},\mathrm{},\alpha _j)}\stackrel{~}{\mathrm{\Psi }}_{n_1,\mathrm{},n_j,\mathrm{},n_i,\mathrm{},n_N}^{\alpha _1,\mathrm{},\alpha _j,\mathrm{},\alpha _i,\mathrm{},\alpha _N}(x;\gamma ),$$
(2.20)
$`\mathrm{\Phi }^{(m|n)}(\alpha _i,\alpha _{i+1},\mathrm{},\alpha _j)`$ being the same phase factor which appeared in eqn.(2.1). It is clear from eqn.(2.20) that, for the case $`\alpha _i,\alpha _j\in [1,2,\mathrm{},m]`$, the eigenfunction $`\stackrel{~}{\mathrm{\Psi }}_{n_1,\mathrm{},n_i,\mathrm{},n_j,\mathrm{},n_N}^{\alpha _1,\mathrm{},\alpha _i,\mathrm{},\alpha _j,\mathrm{},\alpha _N}(x;\gamma )`$ remains completely unchanged under the simultaneous interchange of particle and spin quantum numbers. Thus $`\alpha _i`$ may be treated as a ‘bosonic’ quantum number if it takes any value ranging from $`1`$ to $`m`$. Next, by using eqn.(2.20) for the case $`\alpha _i,\alpha _j\in [m+1,m+2,\mathrm{},m+n]`$, it is easy to see that $`\stackrel{~}{\mathrm{\Psi }}_{n_1,\mathrm{},n_i,\mathrm{},n_j,\mathrm{},n_N}^{\alpha _1,\mathrm{},\alpha _i,\mathrm{},\alpha _j,\mathrm{},\alpha _N}(x;\gamma )`$ picks up a minus sign under the simultaneous interchange of particle and spin quantum numbers. Therefore, the eigenfunction $`\stackrel{~}{\mathrm{\Psi }}_{n_1,\mathrm{},n_i,\mathrm{},n_j,\mathrm{},n_N}^{\alpha _1,\mathrm{},\alpha _i,\mathrm{},\alpha _j,\mathrm{},\alpha _N}(x;\gamma )`$ must vanish identically if we choose $`n_i=n_j`$ and $`\alpha _i=\alpha _j\in [m+1,m+2,\mathrm{},m+n]`$. Thus $`\alpha _i`$ may be treated as a ‘fermionic’ quantum number if it takes any value ranging from $`m+1`$ to $`m+n`$. Notice that if we simultaneously interchange a ‘bosonic’ quantum number $`\alpha _i`$ with a ‘fermionic’ quantum number $`\alpha _j`$ and $`n_i`$ with $`n_j`$, then the eigenfunctions satisfying the transformation relation (2.20) either remain invariant or pick up a minus sign, depending on whether an even or an odd number of fermionic spin quantum numbers is present in the configuration $`\alpha _{i+1}\alpha _{i+2}\mathrm{}\alpha _{j-1}`$.
It is obvious that in the special case $`m=M,n=0`$ ($`m=0,n=M`$), $`\stackrel{~}{\mathrm{\Psi }}_{n_1,n_2,\mathrm{},n_N}^{\alpha _1,\alpha _2,\mathrm{},\alpha _N}(x;\gamma )`$ (2.19) represents completely symmetric (antisymmetric) eigenfunctions of $`SU(M)`$ bosonic (fermionic) oscillators. So, for the case $`m,n\neq 0`$, we may say that the correlated state vectors (2.19) represent the eigenfunctions of $`N`$ $`SU(m|n)`$ supersymmetric harmonic oscillators.
Due to the symmetry condition (2.20), we find that all independent eigenfunctions of the $`SU(m|n)`$ supersymmetric harmonic oscillators can be obtained uniquely through the following occupation number representation. Since, at present, the spins behave as nondynamical degrees of freedom, all ‘single particle states’ for this occupation number representation may be constructed by taking $`m+n`$ copies of each energy eigenstate of a spinless harmonic oscillator: the first $`m`$ copies are bosonic in nature and the last $`n`$ copies fermionic. As usual, any bosonic single particle state can be occupied by an arbitrary number of particles, while any fermionic single particle state can hold at most one particle. By filling up such bosonic and fermionic single particle states with $`N`$ particles, we can easily identify all independent eigenfunctions of the form (2.19). This occupation number representation of the states (2.19) thus gives a very convenient way of analysing the spectrum of the $`SU(m|n)`$ supersymmetric harmonic oscillators. In particular, we may verify that, as in the pure bosonic or fermionic case, the spectrum of the $`SU(m|n)`$ supersymmetric harmonic oscillators is equally spaced. It is interesting to note further that the ground state energy of the $`SU(m|n)`$ supersymmetric harmonic oscillators coincides with that of the pure bosonic case, rather than the pure fermionic case. Since the $`M`$ fermionic single particle states with zero energy can hold at most $`M`$ particles, a nonzero ground state energy is obtained for a pure fermionic system when $`N>M`$. However, if at least one bosonic single particle state with zero energy is available, we can fill that state with all available particles. Consequently, irrespective of the values of $`m`$ and $`n`$, the ground state energy of the $`SU(m|n)`$ supersymmetric harmonic oscillators is always zero.
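The greedy lowest-level filling used in this argument is easy to make explicit. The following fragment is purely illustrative (it assumes $`\omega =1`$, so the oscillator levels are $`\epsilon _k=k`$, and the function name is ours):

```python
def ground_energy(N, m, n):
    # Fill N particles into oscillator levels eps_k = k (omega = 1), where each
    # level offers m bosonic copies (unlimited capacity) and n fermionic copies
    # (capacity one each), always choosing the lowest available energies.
    E, k, left = 0, 0, N
    while left > 0:
        if m > 0:              # a bosonic copy at level k absorbs all the rest
            E += k * left
            left = 0
        else:                  # pure fermionic: at most n particles per level
            take = min(n, left)
            E += k * take
            left -= take
        k += 1
    return E
```

Whenever $`m\geq 1`$ the result is zero, whereas in the pure fermionic case ($`m=0`$) the energy is nonzero as soon as $`N>n`$, in accordance with the discussion above.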
Moreover, by using the above mentioned occupation number representation, we obtain the degeneracy ($`D_g`$) of the ground state of the $`SU(m|n)`$ supersymmetric harmonic oscillators as
$$D_g=\sum _{k=0}^{n}\frac{(N+m-k-1)!n!}{(N-k)!(m-1)!k!(n-k)!}.$$
(2.21)
It is worth observing that this degeneracy factor crucially depends on the values of $`m`$ and $`n`$, and reproduces the degeneracy of the ground state of $`SU(M)`$ bosonic oscillators in the special case $`m=M,n=0`$. Similarly, one can demonstrate that the degeneracies of the higher energy levels appearing in the spectrum of the $`SU(m|n)`$ supersymmetric harmonic oscillators also depend on the values of $`m`$ and $`n`$.
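The formula (2.21) can be cross-checked against a direct enumeration of the zero-energy occupations; the following fragment is purely illustrative (function names are ours):

```python
from math import comb
from itertools import product

def D_g(N, m, n):
    # eqn (2.21), rewritten with binomial coefficients:
    # D_g = sum_k C(N+m-k-1, m-1) * C(n, k)
    return sum(comb(N + m - k - 1, m - 1) * comb(n, k)
               for k in range(min(n, N) + 1))

def D_g_brute(N, m, n):
    # direct count: N particles distributed over m bosonic (unlimited) and
    # n fermionic (at most singly occupied) zero-energy single particle states
    return sum(1
               for bos in product(range(N + 1), repeat=m)
               for fer in product((0, 1), repeat=n)
               if sum(bos) + sum(fer) == N)
```

For instance, $`D_g=12`$ for $`N=3`$, $`m=n=2`$, and for $`n=0`$ the sum collapses to the single pure bosonic term.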
Due to the relation (2.18), where $`𝒮`$ acts as a nonsingular operator, there exists a one-to-one correspondence between the independent eigenfunctions of the spin Calogero model (2.5) and those of the $`SU(m|n)`$ supersymmetric oscillators. Consequently, up to a constant energy shift ($`E_g`$) for all energy levels, the spectrum of the spin Calogero model (2.5) exactly coincides with the spectrum of the $`SU(m|n)`$ supersymmetric harmonic oscillators. It is also well known that, up to the same constant energy shift for all energy levels, the spectrum of the spinless Calogero model (2.7) exactly coincides with the spectrum of $`N`$ spinless bosonic harmonic oscillators. As the eigenvalues of both the $`SU(m|n)`$ supersymmetric oscillators and the spinless bosonic oscillators depend linearly on the parameter $`\omega `$, it is clear that eqn.(2.8) can be used even at any finite value of $`\omega `$ (though the eigenfunctions of the spin Calogero model (2.5) factorise only in the ‘freezing limit’). Moreover, since the spectra of both the $`SU(m|n)`$ supersymmetric oscillators and the spinless bosonic oscillators are equally spaced, it automatically follows from eqn.(2.8) that the spectrum of the $`SU(m|n)`$ SP model (1.7) is also equally spaced for any choice of $`m`$ and $`n`$. It is natural to expect, however, that, as for the $`SU(m|n)`$ supersymmetric oscillators, the degeneracies of the energy levels of the $`SU(m|n)`$ SP model (1.7) crucially depend on the values of $`m`$ and $`n`$. So, it should be interesting to derive the partition function of the SP model (1.7), in which all information about these degeneracy factors is encoded.
## 3 Partition function of $`SU(m|n)`$ SP model
In the previous section we have observed that, as in the pure bosonic case, the ground state energy of the $`SU(m|n)`$ supersymmetric harmonic oscillators is always zero. So, in analogy with this pure bosonic (i.e., ferromagnetic) case, we may now put $`\omega =1`$ in eqn.(2.8) and subsequently use this equation to obtain the relation
$$Z_N^{(m|n)}(q)=\frac{\widehat{Z}_N^{(m|n)}(q)}{\widehat{Z}_N^{(1|0)}(q)},$$
(3.1)
where $`q=e^{-\frac{1}{kT}}`$, and $`Z_N^{(m|n)}(q)`$ and $`\widehat{Z}_N^{(m|n)}(q)`$ denote the canonical partition functions of the $`SU(m|n)`$ SP model (1.7) and of the $`SU(m|n)`$ supersymmetric harmonic oscillators (with $`\omega =1`$) respectively. In this notation, $`\widehat{Z}_N^{(1|0)}(q)`$ and $`\widehat{Z}_N^{(0|1)}(q)`$ denote the canonical partition functions of $`N`$ spinless bosonic and fermionic harmonic oscillators respectively. It is well known that the partition functions of such spinless bosonic and fermionic oscillators are given by
$$\widehat{Z}_N^{(1|0)}(q)=\frac{1}{(q)_N},\widehat{Z}_N^{(0|1)}(q)=q^{\frac{N(N-1)}{2}}\frac{1}{(q)_N},$$
(3.2)
where the standard notation $`(q)_N=(1-q)(1-q^2)\mathrm{}(1-q^N)`$ (with $`(q)_0=1`$) is used. So, to calculate $`Z_N^{(m|n)}(q)`$ with the help of eqn.(3.1), we only have to find the partition function of $`N`$ $`SU(m|n)`$ supersymmetric oscillators.
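As a consistency check, the closed forms (3.2) can be verified by direct state counting: the coefficient of $`q^E`$ in $`1/(q)_N`$ counts the partitions of $`E`$ into parts of size at most $`N`$, which equals the number of bosonic $`N`$-oscillator states of energy $`E`$, while the fermionic prefactor $`q^{\frac{N(N-1)}{2}}`$ accounts for the ‘staircase’ ground state $`(0,1,\mathrm{},N-1)`$. The following fragment is purely illustrative (function names are ours):

```python
def inv_qpoch_series(N, K):
    # series coefficients of 1/((1-q)(1-q^2)...(1-q^N)) up to q^K, i.e. the
    # number of partitions of E into parts of size at most N
    c = [1] + [0] * K
    for part in range(1, N + 1):
        for E in range(part, K + 1):
            c[E] += c[E - part]
    return c

def count_states(N, E, fermionic=False):
    # brute-force count of N-oscillator states with total energy E:
    # nondecreasing quanta for bosons, strictly increasing for fermions
    def rec(n_left, e_left, e_min):
        if n_left == 0:
            return 1 if e_left == 0 else 0
        step = 1 if fermionic else 0
        return sum(rec(n_left - 1, e_left - e, e + step)
                   for e in range(e_min, e_left + 1))
    return rec(N, E, 0)
```

For small $`N`$ the brute-force counts agree with the series coefficients, bosonic directly and fermionic after shifting by $`N(N-1)/2`$.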
To this end, however, we first consider the grand canonical partition function of the $`SU(m|n)`$ supersymmetric oscillators. This grand canonical partition function is denoted by $`\widehat{𝒵}^{(m|n)}(q,y)`$, where $`y=e^\mu `$ and $`\mu `$ is the chemical potential of the system. So, according to our notation, $`\widehat{𝒵}^{(1|0)}(q,y)`$ and $`\widehat{𝒵}^{(0|1)}(q,y)`$ denote the grand canonical partition functions of spinless bosonic and fermionic harmonic oscillators respectively. As usual, the grand canonical partition function of the $`SU(m|n)`$ supersymmetric oscillators is related to the corresponding canonical partition functions through the following power series expansion in the variable $`y`$:
$$\widehat{𝒵}^{(m|n)}(q,y)=\sum _{N=0}^{\infty }y^N\widehat{Z}_N^{(m|n)}(q)$$
(3.3)
where it is assumed that $`\widehat{Z}_0^{(m|n)}(q)=1`$. In the previous section we have found that all independent eigenfunctions of the $`SU(m|n)`$ supersymmetric harmonic oscillators can be obtained through an occupation number representation, where the corresponding single particle states are constructed by taking $`m+n`$ copies of each energy eigenstate of a spinless harmonic oscillator: the first $`m`$ copies are bosonic in nature and the last $`n`$ copies fermionic. By exploiting this result, it is easy to prove that the grand canonical partition function of the $`SU(m|n)`$ supersymmetric oscillators can be expressed through those of the spinless bosonic and fermionic oscillators as
$$\widehat{𝒵}^{(m|n)}(q,y)=\left[\widehat{𝒵}^{(1|0)}(q,y)\right]^m\left[\widehat{𝒵}^{(0|1)}(q,y)\right]^n.$$
(3.4)
Next, we substitute the power series expansion (3.3) for each grand canonical partition function appearing in the above equation. Comparing the coefficients of $`y^N`$ on both sides of eqn.(3.4), we readily find that the canonical partition function of the $`SU(m|n)`$ supersymmetric harmonic oscillators may also be related to those of the spinless bosonic and fermionic oscillators:
$$\widehat{Z}_N^{(m|n)}(q)=\sum _{\sum _{i=1}^ma_i+\sum _{j=1}^nb_j=N}\prod _{i=1}^{m}\widehat{Z}_{a_i}^{(1|0)}(q)\prod _{j=1}^{n}\widehat{Z}_{b_j}^{(0|1)}(q),$$
(3.5)
where the $`a_i`$s and $`b_j`$s are nonnegative integers. By substituting the known partition functions (3.2) of the spinless bosonic and fermionic oscillators into the above equation, one now obtains an explicit expression for the partition function of the $`SU(m|n)`$ supersymmetric oscillators as
$$\widehat{Z}_N^{(m|n)}(q)=\sum _{\sum _{i=1}^ma_i+\sum _{j=1}^nb_j=N}\frac{q^{\sum _{j=1}^n\frac{b_j(b_j-1)}{2}}}{(q)_{a_1}(q)_{a_2}\mathrm{}(q)_{a_m}(q)_{b_1}(q)_{b_2}\mathrm{}(q)_{b_n}}.$$
(3.6)
Finally, by using eqns.(3.1), (3.2) and (3.6), we derive the exact canonical partition function of $`SU(m|n)`$ SP model (1.7) as
$$Z_N^{(m|n)}(q)=\sum _{\sum _{i=1}^ma_i+\sum _{j=1}^nb_j=N}\frac{(q)_Nq^{\sum _{j=1}^n\frac{b_j(b_j-1)}{2}}}{(q)_{a_1}(q)_{a_2}\mathrm{}(q)_{a_m}(q)_{b_1}(q)_{b_2}\mathrm{}(q)_{b_n}}.$$
(3.7)
Since there exists an upper bound on the highest energy eigenvalue of the SP model (1.7), the partition function $`Z_N^{(m|n)}(q)`$ (3.7) evidently yields some new $`q`$-polynomials which are characterised by the values of $`m`$ and $`n`$. The coefficients of the various powers of $`q`$ appearing in such a novel $`q`$-polynomial represent the degeneracy factors of the corresponding energy levels of the SP model (1.7). It is interesting to note that, by putting $`m=M,n=0`$ in the expression (3.7), one can exactly reproduce the partition function of the ferromagnetic Polychronakos spin chain. On the other hand, in the limiting case $`m=0,n=M`$, eqn.(3.7) reproduces the partition function of the anti-ferromagnetic Polychronakos spin chain up to an insignificant multiplicative factor. This is because, in the pure fermionic case, a system of harmonic oscillators with $`M`$ spin degrees of freedom acquires a nonzero ground state energy when $`N>M`$. So, the r.h.s. of eqn.(3.1) must be modified by a multiplicative factor to take into account this nonzero ground state energy. Consequently, the expression (3.7) for the partition function, which we have derived for the supersymmetric case (i.e., when both $`m`$ and $`n`$ are nonzero), should also be modified by the same factor in the pure fermionic limit.
With the help of the relation $`\left(q^{-1}\right)_l=(-1)^lq^{-\frac{l(l+1)}{2}}(q)_l`$, we find that the partition function (3.7) of the SP model satisfies a remarkable duality condition given by
$$Z_N^{(m|n)}(q)=q^{\frac{N(N-1)}{2}}Z_N^{(n|m)}(q^{-1}),$$
(3.8)
where $`m`$ and $`n`$ may be chosen as any nonzero integers. Due to this duality condition, one can write down a relation of the form
$$D_N^{(m|n)}(E)=D_N^{(n|m)}\left(\frac{N(N-1)}{2}-E\right),$$
(3.9)
where $`D_N^{(m|n)}(E)`$ denotes the degeneracy factor associated with energy eigenvalue $`E`$ of $`SU(m|n)`$ SP model. Moreover, by using the duality condition (3.8), along with the fact that the ground state energy of Hamiltonian (1.7) is zero, it is easy to show that
$$E_{\mathrm{max}}=\frac{N(N-1)}{2}$$
(3.10)
represents the highest energy eigenvalue of the $`SU(m|n)`$ SP model. In this context one may notice that the highest energy eigenvalue of the ferromagnetic or anti-ferromagnetic $`SU(M)`$ Polychronakos spin chain (1.1) is given by
$$E_{\mathrm{max}}=\frac{M-1}{2M}N^2-\frac{t(M-t)}{2M},$$
(3.11)
where $`t=N\mathrm{mod}M`$. Thus we find, curiously, that in contrast to the case of the $`SU(M)`$ Polychronakos spin chain (1.1), the highest energy eigenvalue of the $`SU(m|n)`$ SP model (1.7) does not depend at all on the values of $`m`$ or $`n`$.
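Both the polynomial nature of (3.7) and the duality (3.8) are easy to spot-check numerically by evaluating $`Z_N^{(m|n)}(q)`$ exactly at rational values of $`q`$. The following fragment is purely illustrative (function names are ours):

```python
from fractions import Fraction

def qpoch(q, n):
    # (q)_n = (1 - q)(1 - q^2) ... (1 - q^n), with (q)_0 = 1
    r = Fraction(1)
    for k in range(1, n + 1):
        r *= 1 - q ** k
    return r

def compositions(total, parts):
    # all tuples of `parts` nonnegative integers summing to `total`
    if parts == 0:
        if total == 0:
            yield ()
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def Z(N, m, n, q):
    # eqn (3.7), evaluated exactly at a rational value of q
    total = Fraction(0)
    for c in compositions(N, m + n):
        b = c[m:]                                  # fermionic numbers b_j
        num = qpoch(q, N) * q ** sum(bj * (bj - 1) // 2 for bj in b)
        den = Fraction(1)
        for x in c:
            den *= qpoch(q, x)
        total += num / den
    return total
```

Evaluation at an integer $`q`$ always returns an integer, as expected for a polynomial with integer coefficients, and the duality $`Z_N^{(m|n)}(q)=q^{N(N-1)/2}Z_N^{(n|m)}(q^{-1})`$ holds in every case tried.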
To gain some insight into the above mentioned difference between the highest energy eigenvalues (3.10) and (3.11), we finally consider the ‘motif’ picture, which was used to analyse the degeneracy of the eigenfunctions of the $`SU(M)`$ HS as well as the Polychronakos spin chain through the corresponding symmetry algebra . As is well known, the motifs of these spin chains are made of binary digits ‘0’ and ‘1’. Moreover, for a spin chain with $`N`$ lattice sites, these binary digits form motifs of length $`N-1`$. So we can write a motif as $`(a_1a_2\mathrm{}a_{N-1})`$, where $`a_i\in [0,1]`$ and each motif represents a class of degenerate eigenfunctions which yield an irreducible representation of the $`Y(gl_M)`$ Yangian algebra . For the case of the $`SU(M)`$ Polychronakos spin chain, the energy eigenvalue corresponding to the $`(a_1a_2\mathrm{}a_{N-1})`$ motif is given by
$$E_{(a_1a_2\mathrm{}a_{N-1})}=\sum _{r=1}^{N-1}ra_r.$$
(3.12)
However, for the $`SU(M)`$ Polychronakos and HS spin chains, there exists a ‘selection rule’ which forbids the occurrence of $`M`$ consecutive ‘1’s in any motif. By combining this selection rule with eqn.(3.12), one can derive the highest energy eigenvalue (3.11) of the $`SU(M)`$ Polychronakos model and also understand why this eigenvalue crucially depends on the value of $`M`$. It is worth noting that the above mentioned motif picture can also be used to analyse the spectrum of the $`SU(m|n)`$ supersymmetric HS spin chain . But, for this supersymmetric case, there exists no ‘selection rule’ and the binary digits ‘0’ and ‘1’ can be chosen freely in constructing a motif of length $`N-1`$. Motivated by the case of the supersymmetric HS spin chain, we may now conjecture that the motifs corresponding to the SP model (1.7) are also free from any ‘selection rule’ and that eqn.(3.12) again gives the corresponding energy eigenvalues. So, the spectrum of the $`SU(m|n)`$ SP model contains $`2^{N-1}`$ motifs, which can be constructed by filling up the $`N-1`$ positions with ‘0’ or ‘1’ in all possible ways. By using eqn.(3.12), it is easy to check that the motif $`(11\mathrm{}1)`$ yields the highest energy eigenvalue (3.10). Thus the absence of any ‘selection rule’ for the motifs corresponding to the SP model turns out to be the main reason behind the remarkably simple expression (3.10).
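For small $`N`$ this conjectured motif description is straightforward to check by brute force: enumerating all $`2^{N-1}`$ binary motifs and applying eqn.(3.12) shows that the resulting energies fill out the equally spaced set $`\{0,1,\mathrm{},N(N-1)/2\}`$. An illustrative fragment (the function name is ours):

```python
from itertools import product

def motif_energies(N):
    # eqn (3.12): E = sum_r r * a_r over all binary motifs (a_1 ... a_{N-1})
    motifs = list(product((0, 1), repeat=N - 1))
    assert len(motifs) == 2 ** (N - 1)
    return sorted({sum(r * a for r, a in zip(range(1, N), motif))
                   for motif in motifs})
```

The maximum energy is attained by the motif $`(11\mathrm{}1)`$, reproducing $`E_{\mathrm{max}}=N(N-1)/2`$ of eqn.(3.10).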
## 4 Concluding Remarks
In this paper we have investigated the spectrum as well as the partition function of the $`SU(m|n)`$ supersymmetric Polychronakos (SP) model (1.7) and the related spin Calogero model (2.5). The similarity transformation (2.11), which maps the spinless Calogero model of distinguishable particles to decoupled oscillators, and the projection operator (2.16) have played a key role in our derivation of the spectrum of the spin Calogero model (2.5). Thus we have found that, up to a constant energy shift for all states, the spectrum of this spin Calogero model is exactly the same as the spectrum of decoupled $`SU(m|n)`$ supersymmetric harmonic oscillators. Furthermore, by using the occupation number representation associated with the $`SU(m|n)`$ supersymmetric harmonic oscillators, we have obtained the exact partition function for this spin Calogero model.
It turned out that the above mentioned spin Calogero model reproduces the SP model (1.7) in the ‘freezing limit’. Consequently, by factoring out the contributions due to the dynamical degrees of freedom from the spectrum as well as the partition function of the spin Calogero model (2.5), one can compute the spectrum and partition function of the $`SU(m|n)`$ SP model. By following this procedure we have found that, as in the non-supersymmetric case, the spectrum of the $`SU(m|n)`$ SP model is equally spaced. However, the degeneracy factors of the corresponding energy levels crucially depend on the values of $`m`$ and $`n`$. As a result, we obtain some novel $`q`$-polynomials which represent the partition functions of the SP models. Moreover, by interchanging the bosonic and fermionic degrees of freedom, we obtain a duality relation among the partition functions of the SP models.
As a future study, it should be interesting to find the Lax operators and conserved quantities for the $`SU(m|n)`$ SP model (1.7). Moreover, in parallel to the case of the $`SU(m|n)`$ Haldane-Shastry model , one might be able to show that the $`SU(m|n)`$ SP model also exhibits the $`Y(gl_{(m|n)})`$ super-Yangian symmetry. Such super-Yangian symmetry of the SP model may turn out to be very helpful in analysing the degeneracy patterns of its spectrum. However, information about these degeneracy patterns is also encoded in our expression for the partition function (3.7). So, there should exist an intriguing connection between the partition function (3.7) and the motif representations of the super-Yangian algebra. In particular, the partition function (3.7) might lead to a supersymmetric generalisation of the well-known Rogers-Szegő (RS) polynomials. The recursion relations among these supersymmetric RS polynomials may then be used to find the motif representations and degenerate multiplets associated with the super-Yangian algebra. We hope to report on such supersymmetric RS polynomials and related motifs in a forthcoming publication .
Acknowledgments
We would like to thank K. Hikami for many illuminating discussions. One of the authors (BBM) acknowledges the Japan Society for the Promotion of Science for a fellowship (JSPS-P97047) which supported this work.
References
1. F. Calogero: J. Math. Phys. 10 (1969) 2191.
2. B. Sutherland: Phys. Rev. A 5 (1972) 1372.
3. F.D.M. Haldane: Phys. Rev. Lett. 60 (1988) 635.
4. B.S. Shastry: Phys. Rev. Lett. 60 (1988) 639.
5. M.A. Olshanetsky and A.M. Perelomov: Phys. Rep. 94 (1983) 313.
6. A.P. Polychronakos: Phys. Rev. Lett. 70 (1993) 2329.
7. H. Frahm: J. Phys. A 26 (1993) L473.
8. A.P. Polychronakos: Nucl. Phys. B 419 (1994) 553.
9. H. Ujino, K. Hikami and M. Wadati: J. Phys. Soc. Jpn. 61 (1992) 3425.
10. K. Hikami and M. Wadati: J. Phys. Soc. Jpn. 62 (1993) 4203; Phys. Rev. Lett. 73 (1994) 1191.
11. F.D.M. Haldane, Z.N.C. Ha, J.C. Talstra, D. Bernard and V. Pasquier: Phys. Rev. Lett. 69 (1992) 2021.
12. D. Bernard, M. Gaudin, F.D.M. Haldane and V. Pasquier: J. Phys. A 26 (1993) 5219.
13. K. Hikami: Nucl. Phys. B 441 \[FS\] (1995) 530.
14. Z.N.C. Ha: Phys. Rev. Lett. 73 (1994) 1574; Nucl. Phys. B 435 \[FS\] (1995) 604.
15. M.V.N. Murthy and R. Shankar: Phys. Rev. Lett. 73 (1994) 3331.
16. F. Lesage, V. Pasquier and D. Serban: Nucl. Phys. B 435 \[FS\] (1995) 585.
17. B. Sutherland and B.S. Shastry: Phys. Rev. Lett. 71 (1993) 5.
18. F.D.M. Haldane: Proc. 16th Taniguchi Symp., Kashikojima, Japan, (1993) eds. A. Okiji and N. Kawakami (Springer, Berlin, 1994).
19. B.D. Simons, P.A. Lee and B.L. Altshuler: Nucl. Phys. B 409 (1993) 487.
20. A.P. Polychronakos: Generalised statistics in one dimension, hep-th/9902157 (Les Houches Lectures, Summer 1998).
21. K. Hikami: J. Phys. A 28 (1995) L131.
22. K. Hikami: J. Phys. Soc. Jpn. 64 (1995) 1047.
23. G.E. Andrews: The Theory of Partitions (Addison-Wesley, London, 1976).
24. E. Melzer: Lett. Math. Phys. 31 (1994) 233.
25. B. Basu-Mallick: Nucl. Phys. B 482 \[FS\] (1996) 713.
26. B. Basu-Mallick and A. Kundu: Nucl. Phys. B 509 \[FS\] (1998) 705.
27. B. Basu-Mallick: J. Phys. Soc. Jpn. 67 (1998) 2227.
28. B. Basu-Mallick: Nucl. Phys. B 540 \[FS\] (1999) 679.
29. K. Sogo: J. Phys. Soc. Jpn. 65 (1996) 3097.
30. T. H. Baker and P.J. Forrester: Nucl. Phys. B 492 (1997) 682.
31. N. Gurappa and P.K. Panigrahi: Equivalence of the Calogero-Sutherland model to free harmonic oscillators, cond-mat/9710035.
32. H. Ujino, A. Nishino and M. Wadati: J. Phys. Soc. Jpn. 67 (1998) 2658.
33. H. Ujino, A. Nishino and M. Wadati: Phys. Lett. A 249 (1998) 459.
34. K. Hikami and B. Basu-Mallick: Supersymmetric Polychronakos spin chain. Motif, distribution function and character (University of Tokyo preprint, in preparation).
# Disk and Bulge Morphology of WFPC2 galaxies: The HST Medium Deep Survey database
## 1 Introduction
WFPC2 pure parallel images from the HST Medium Deep Survey key project (Griffiths et al. 1994b; Griffiths et al. 1994a, hereafter MDS) cover a very wide range of signal-to-noise. For the few brightest galaxies observed, detailed structures such as spiral arms and bright regions of star formation are well exposed, and the morphology can be easily classified by eye and measured by traditional interactive one-dimensional profile fitting procedures. At these brighter magnitudes the two-dimensional light distributions of galaxies are not well fitted by simple parameterized models, which are necessarily crude fits to the broad continuum using smooth image profiles. However, as the images get fainter and smaller (undersampled), the morphology is less apparent and requires a model-based two-dimensional image analysis to derive quantitative estimates. For the extremely faint and small objects there is very little morphological information in the observations. The MDS procedure described in this paper has been optimized for the intermediate (medium deep) galaxies, in the rough magnitude range from V $`\sim 21`$ to 24 mag, as imaged in exposures of about one hour. This has yielded a significantly large catalog of quantitative morphological and structural parameter estimates. This magnitude range is now accessible for spectroscopic determination of redshifts via the new generation of 8-10 meter class ground-based telescopes.
Decomposition of the images into disk and bulge has been a difficult task even at bright magnitudes with well sampled images (Kormendy 1977; Boroson 1981; Kent 1984, 1985). Interactive procedures (Yee 1991) are also impractical for a large survey, and in any case they do not generate a uniform catalog suitable for statistical analysis. The image analysis adopted is similar to that in stellar photometry programs like DAOphot (Stetson 1987). However, unlike stellar photometry, where the image can be characterized by the centroid, magnitude and the Point Spread Function (PSF), there is no simple model which will intrinsically fit all of the galaxy images. We adopt axisymmetric scale-free models which have been shown to fit the image continuum of normal galaxies (de Vaucouleurs 1959; Freeman 1970). The procedure averages over any bright regions, as typically occurs in the data themselves at fainter magnitudes, where the objects are smaller and less resolved. The residuals to these simple galaxy model fits at brighter magnitudes are the subject of a separate study (Naim, Ratnatunga & Griffiths 1997a). To limit the complexity of the analysis, we assume that an image pixel is associated with a single object or with background sky, as is typical of the MDS WFPC2 images. We do not deal with the problems of crowding or image overlap, which are the major issues in programs for stellar photometry.
The number and choice of parameters fitted to an extended image is clearly important. Fitting too few parameters to a well exposed image could significantly bias the estimates of the parameters fitted, through the implicit choice of the parameters that are not fitted. However, fitting too many parameters to faint and/or compact unresolved images could cause the fit to converge to a false local minimum of a likelihood function which is very noisy in that multidimensional space. For practical reasons, and to ensure statistical uniformity of the resulting catalog, we require an automated procedure which will select and fit those (necessary and sufficient) parameters which are constrained by each particular image. We have developed two-dimensional “maximum likelihood” image analysis software that attempts to automatically optimize the model and the number of parameters fitted to each image. We apply Ockham’s razor: non sunt multiplicanda entia praeter necessitatem; i.e., entities are not to be multiplied beyond necessity (Ockham 1285-1348). The model varies from a simultaneous decomposition of the disk and bulge components of galaxy images (hereafter D+B models) at the bright end to circularly symmetric sources at the faint end. However, this choice of parameters creates selection effects which depend on the signal-to-noise of the image and need to be included explicitly in any statistical analysis of the MDS database.
The success of the procedure depends on the ability to efficiently generate smooth subpixelated galaxy images which can be convolved with an adopted Point Spread Function (PSF), such that precise derivatives can be evaluated with respect to all the parameters which need to be estimated. We will outline the procedure here but will avoid giving all the details of the numerical algorithm, since they are probably not of interest to the general reader. The algorithm is documented by comments in the software and the interested reader should contact the first author. A brief outline of the MDS pipeline is given in the Appendix.
This paper is also the primary reference to the Medium Deep Survey database which has been made available on the MDS website in the HST archive <sup>2</sup><sup>2</sup>2at http://archive.stsci.edu/mds/ and also mirrored at the Canadian Astronomy Data Center (CADC) <sup>3</sup><sup>3</sup>3at http://cadcwww.dao.nrc.ca/mds/. We avoid duplicating extensive tables since those can only be a snapshot of the present MDS database and we wish to ensure that users will always refer to the latest version which will be maintained on the Internet. The MDS website has a cgi-interface written in f77 which allows the database to be searched using coordinates or galaxy parameters, or looked at interactively by clicking on objects on an image-map of each stack. Direct access is also provided to the MDS database which is on CDROMs in a ‘jukebox’.
The database contains WFPC2 pure parallel observations taken for the Medium Deep Survey (MDS; HST GO program ids 5369, 5370, 5371, 5372, 5971, 6251, 6802, 7203) and for the GTO observers (HST program ids 5091, 5092, 5201, 6252, 6254, 6609, 6610, 7202), as well as HST archival observations of randomly selected WFPC2 fields such as the Groth-Westphal strip (HST GTO program ids 5090, 5109; hereafter GWS) and the Hubble Deep Field (HST DD program id 6337; hereafter HDF), and selected galaxy cluster fields (HST archival program id 7536). The database will continue to be expanded as more fields are processed (HST archival program id 8384).
## 2 The Observations
The HST MDS and GTO pure parallel observations were taken with the WFPC2 after January 1994, following the SM93 repair mission, and continued for four years until January 1998. Before the SM97 second servicing mission in February 1997, the instruments used for the associated HST primary observations were the FGS, FOC and FOS; after this mission, the primary instruments were FGS, STIS and NICMOS.
We illustrate in Figure 1 the difference in pointing between the parallel and primary observations for all pure parallel fields in the MDS database, using different symbols for each primary instrument. The WFPC2 field is on average 4.5, 8.2, 12.1, 7.1, and 5.3 arc min away from the FOC, FOS, FGS, STIS, and NICMOS primary target respectively.
About 25 hours of pure parallel exposure was obtained each month, giving a steady flow of observations. The database was supplemented using the archival data from primary observations which satisfied the survey criteria.
The observation history is illustrated in Figure 2 as an indication of the quantity of HST data that was available for the survey. There was a significant drop in the number of WFPC2 parallels after SM97 as a result of the dithered observing strategy of NICMOS and STIS primary observations. The pure parallel GTO data was available to REG as a WFPC2 Investigation Definition Team member and Windhorst’s Blue Survey data (WBS) was available from the HST archive 3 months after observation.
The HST MDS (Griffiths et al. 1994a ), with over 400 random WFPC2 fields distributed over the full sky, the GWS (Groth et al. (1994)), with 28 contiguous WFPC2 fields, and the Hubble Deep Field (HDF; Williams et al. (1996)) are datasets which give three very complementary samples of field galaxies at faint magnitudes. The HDF gives depth in a single WFPC2 field, the GWS gives a larger area uniformly observed, and the MDS samples the whole sky, as illustrated in Figure 3. All three sets have been analyzed uniformly through the MDS pipeline analysis software system.
MDS and GTO observations were primarily done with the F814W and F606W broadband filters. When more than 3 exposures could be taken with each of these filters, then F450W observations were taken in addition to those in the first two. In order to achieve a similar signal-to-noise ratio in the images taken in all three filters, the exposure times in F814W and F606W were requested to be about equal while in F450W the requested exposure was about twice as long. However, all WFPC2 observations in the MDS were taken in “non-interference” pure parallel mode (Griffiths et al. 1994b ), with the result that exposure times were of varied duration, with a variable number of exposures in the stack used for cosmic ray removal.
Each field was given a priority based on the number of exposures available, as listed in Table 1. The total hours of exposure and the number of fields at each priority in each of the 3 selected WFPC2 filters are illustrated in Figure 4. Practically all of the higher priority data have been processed through the MDS pipeline and made available on the MDS website. The single exposure, single filter fields were given lowest priority because of the inability to remove cosmic rays and the lack of any color information. After October 1995, the MDS used only pure parallel opportunities in which a minimum of two exposures with a total exposure time longer than 20 min, or one exposure longer than 30 min, could be taken in each of two filters. Special data processing code was developed to perform cosmic ray rejection using exposures taken through different filters. This procedure, although better than attempting to clean cosmic rays from a single exposure, comes at the cost of losing any objects of extreme color.
The heterogeneous nature of the pure parallel observations has therefore been the biggest challenge in the task of building a database suitable for clean statistical analysis. We will use the GWS for many of the illustrated distributions, in order to avoid complicating the discussion with effects due to changes in data quality.
## 3 The signal-to-noise index $`\mathrm{\Xi }`$.
To characterize the ability of our method to extract quantitative parameter estimates we define an information index based on the signal-to-noise in the image. Since we are dealing with mostly extended images, the definition of the index is different from the signal-to-noise ratio generally used for point sources.
We first define a contour around an object by selecting the subset of contiguous pixels which each contain a signal at least $`1\sigma `$ above the estimated local sky (see appendix for details). The signal-to-noise ratio of each of these pixels is computed individually. We define the signal-to-noise index $`\mathrm{\Xi }`$ as the decimal logarithm of the sum of these ratios. We have found this dimensionless quantity to be a good measure of the information content of the image, and we use it to define thresholds within the image analysis procedure. For any particular field, exposure time and WFPC2 filter, $`\mathrm{\Xi }`$ is linearly correlated with the magnitude of an extended image. Furthermore, it has the expected slope of 0.4 dex per magnitude, as shown in Figure 5 for GWS observations through F606W. Point-like stars follow a different sequence at brighter magnitudes.
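The index can be sketched in code as follows; this is an illustrative reconstruction, not the MDS pipeline routine. It assumes a uniform, sky-limited per-pixel noise level and omits the contiguity requirement on the pixels; the function and argument names are ours:

```python
import math

def snr_index(image, sky, sky_sigma):
    # Sum the per-pixel signal-to-noise over pixels more than
    # 1 sigma above the local sky, then take the decimal logarithm.
    # (Sketch: assumes sky-limited noise and skips the contiguity
    # test applied in the actual pipeline.)
    total = 0.0
    for row in image:
        for pixel in row:
            signal = pixel - sky
            if signal > sky_sigma:
                total += signal / sky_sigma
    return math.log10(total) if total > 0.0 else float("-inf")
```

Doubling every pixel's flux raises $`\mathrm{\Xi }`$ by $`\mathrm{log}_{10}20.3`$ dex for a change of about 0.75 mag, consistent with the 0.4 dex per magnitude slope.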
In most of the discussion on image analysis we will refer to $`\mathrm{\Xi }`$ rather than magnitude since it is a measure of image quality which can be used without reference to exposure time, sky background and filter used.
The MDS detection limit is at $`\mathrm{\Xi }1.6`$, but the sample does contain some images with a smaller index, viz. those objects detected in the image of a different filter of the same region of sky. The completeness limit is at $`\mathrm{\Xi }1.8`$, which is half a magnitude brighter than the detection limit. The morphology (disk-like or bulge-like) of galaxies can be determined for $`\mathrm{\Xi }>2.0`$, and D+B decompositions can be made for $`\mathrm{\Xi }>2.4`$, which is 2 magnitudes brighter than the detection limit. The detection limit is set at this conservative level to avoid any contamination by image noise, since it is already much fainter than the image quality needed to estimate morphology.
In Figure 6 we illustrate the limiting magnitude ($`\mathrm{\Xi }1.8`$) as a function of total exposure time for all the MDS fields processed from WFPC2 pure parallel observations from HST Cycles 4 through 6, the GWS, the HDF, and archival cluster fields.
The GWS comprises 27 WFPC2 fields, each observed uniformly with 4 exposures in each of the I (F814W) and V (F606W) filters, with total integration times of 4400 and 2800 seconds respectively, and one deep WFPC2 field with $`25,000`$ seconds in each filter. Our object catalog for the GWS has 12,800 objects in the 27 WFPC2 fields. The percentages of images with $`\mathrm{\Xi }>`$ 4.0, 3.0, 2.0 and 1.5 are 0.3%, 4.6%, 30% and 67% respectively. In these survey images, $`\mathrm{\Xi }1.8`$ corresponds to I=24.5 mag and to V=25.2 mag. From a catalog of $`10,800`$ galaxy images with $`\mathrm{\Xi }>1.8`$ in both F814W and F606W, 11% of the images are fitted with two-component D+B models, 7% are classified as stars, 61% are classified as either disk-like or bulge-like, 20% are classified as generic galaxies (of uncertain disk or bulge nature) and less than 1% remain unclassified.
As illustrated in Figure 7, we find empirically that on average there are $`10^{0.8\mathrm{\Xi }}`$ pixels above the $`1\sigma `$ contour, and $`0.025\times 10^{1.1\mathrm{\Xi }}`$ pixels above $`5\sigma `$. When $`\mathrm{\Xi }2.0`$ we thus typically have an image with 40 pixels above $`1\sigma `$ and 4 pixels above $`5\sigma `$. At the detection limit of our object-finding algorithm ($`\mathrm{\Xi }1.6`$), we typically have an image with 15 pixels above $`1\sigma `$ and 1 pixel above $`5\sigma `$. Most images with $`\mathrm{\Xi }<1.6`$ are of regions corresponding to objects which were detected in another filter, and model fits to them are typically very poor.
In Figure 8a we show empirically that the fraction of images for which we can use the likelihood ratio (see sec 7) to determine whether the galaxy is more disk-like or bulge-like follows the relation $`\mathrm{min}(\mathrm{max}(0,(0.82\mathrm{\Xi }1)),1)`$. Of these galaxies, the fraction of images for which we can fit a significant two-component model follows the relation $`\mathrm{min}(\mathrm{max}(0,(0.46\mathrm{\Xi }1)),1)`$. Both relations were derived by fitting straight lines to the slopes in this figure. We can classify about 60% of the galaxies with $`\mathrm{\Xi }2.0`$, and all of them with $`\mathrm{\Xi }>2.5`$. Hardly any galaxy with $`\mathrm{\Xi }<2.2`$ has sufficient signal to fit a two-component model, while 70% can be fitted at $`\mathrm{\Xi }>3.5`$, of which there are only very few examples in the MDS database. At $`\mathrm{\Xi }3.0`$, about 40% of the galaxies are modeled as D+B. The saturation of the fraction at about 70% probably reflects the intrinsic percentage of galaxies which have a significant component in both disk and bulge.
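These completeness relations are simple piecewise-linear functions of $`\mathrm{\Xi }`$; a minimal sketch using the coefficients quoted above (the function names are ours, for illustration only):

```python
def frac_classified(xi):
    # Fraction of galaxies classifiable as disk-like or bulge-like,
    # min(max(0, 0.82*Xi - 1), 1).
    return min(max(0.0, 0.82 * xi - 1.0), 1.0)

def frac_two_component(xi):
    # Fraction of classifiable galaxies that support a significant
    # two-component D+B fit, min(max(0, 0.46*Xi - 1), 1).
    return min(max(0.0, 0.46 * xi - 1.0), 1.0)
```

For example, `frac_classified(2.0)` gives 0.64, matching the roughly 60% quoted for $`\mathrm{\Xi }2.0`$, and `frac_two_component(3.0)` gives 0.38, matching the roughly 40% at $`\mathrm{\Xi }3.0`$.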
In Figure 8b, we show a plot similar to Figure 8a, as a function of the half-light radius in pixels for images with $`\mathrm{\Xi }>2.0`$. For over 90% of the images with a half-light radius $`\xi >2`$ pixels, the image can be classified statistically using the likelihood ratio (see sec 7) to determine whether the galaxy is more disk-like or bulge-like. We can fit a significant two-component D+B model to 20%, 25%, and 40% of the galaxy images with $`\xi >`$ 2, 5 and 10 pixels respectively. None of the galaxies with $`\xi <2`$ pixels had sufficient sampling to fit a two-component D+B model fit.
The ability to fit models to an observed galaxy successfully depends on both the integrated signal-to-noise index $`\mathrm{\Xi }`$ and the half-light radius of the galaxy in pixels, $`\xi `$. These two quantities are related to each other and to the morphology of the image. Systematic (or evolutionary) changes in the mean size and morphology as a function of apparent magnitude could slightly change Figure 8 if it were drawn for WFPC2 fields at significantly different limiting magnitudes. Figure 8 is applicable to exposures of about 1 hour in F814W and F606W. The difference in zero point magnitude between these filters is about the same as the mean color of galaxies, and therefore we can expect similar $`\mathrm{\Xi }`$ for a typical galaxy. We have excluded from this figure the galaxies imaged on the PC camera in order to keep the spatial resolution constant. Each WFC pixel is $`0\stackrel{}{\mathrm{.}}1`$.
## 4 Maximum likelihood estimation
Estimates of the centroid, magnitude, size, orientation and axis ratio of the observed galaxy image are initially evaluated using simple moments of the flux above the mean estimated sky, using those pixels within the $`1\sigma `$ contour. We next select an elliptical region around the object, ensuring that there are sufficient pixels to define the mean sky background to $``$ 0.5% accuracy (0.005 mag). Any pixels within the elliptical region which are associated with some other object and which are $`1\sigma `$ above the mean sky are cut out from the region analyzed, together with any pixels which have been flagged as “bad” in the calibration procedure (Ratnatunga et al. (1994)).
The procedure for the estimation of parameters via “maximum likelihood” starts with initial estimates of the model parameters derived from the observed moments of the image. For a given set of model parameters, the software creates a model image of the object and compares this image with the observations within the selected region (including the error image). The “likelihood function” is defined as the product of the probabilities of each observed pixel value given the model pixel value and its error distribution; in practice this function is evaluated as the sum of the logarithms of these probabilities. The likelihood function is then maximized using a modified IMSL minimization routine (see Ratnatunga & Casertano (1991)). The 2D image analysis used an improved version of the software developed for pre-refurbishment WF/PC data (Ratnatunga, Griffiths & Casertano (1994)), the catalog of which was presented in Casertano et al. (1995).
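For independent Gaussian pixel errors, the log-likelihood described above reduces to a chi-square-like sum. The following is a minimal sketch with hypothetical names, not the actual MDS code (which uses the full error image and an IMSL minimizer):

```python
import math

def log_likelihood(data, model, sigma):
    # Sum of the logarithms of the per-pixel Gaussian probabilities
    # of the observed values given the model image and its errors.
    ll = 0.0
    for d, m, s in zip(data, model, sigma):
        ll -= 0.5 * ((d - m) / s) ** 2 + math.log(s * math.sqrt(2.0 * math.pi))
    return ll
```

Maximizing this quantity over the model parameters is equivalent, for Gaussian errors, to minimizing chi-square plus a term that accounts for the per-pixel error normalization.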
## 5 Model fitting
There are many types of empirical models that have been suggested over the years to represent galaxy profiles. We have chosen to use scale-free axisymmetric models with an exponential power-law profile, which have been shown to fit the broad continuum of normal galaxies (de Vaucouleurs (1959); Freeman (1970)). This choice has many numerical advantages which are desirable in the development of a practical maximum likelihood fitting algorithm. Elliptical galaxies are assumed to have an $`e^{r^{1/4}}`$ (bulge-like) profile, and disk galaxies an $`e^r`$ (disk-like) profile. Each profile is characterized by a major axis half-light radius and an axis ratio. Some well exposed images need to be modeled as the sum of two elliptical components.
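Both families are special cases of the scale-free form $`I(r)e^{(r/a)^{1/n}}`$, with index $`n=1`$ for disks and $`n=4`$ for bulges. A minimal sketch, with normalization and ellipticity omitted and names of our own choosing:

```python
import math

def surface_brightness(r, scale, n):
    # Scale-free radial profile I(r) = exp(-(r/scale)**(1/n)):
    # n = 1 gives the exponential (disk-like) law, n = 4 the
    # de Vaucouleurs r**(1/4) (bulge-like) law.
    return math.exp(-((r / scale) ** (1.0 / n)))
```

At large radii the bulge-like ($`n=4`$) profile retains a far more extended tail than the disk-like ($`n=1`$) profile, which is the origin of the enclosed-light behavior discussed under the total magnitude parameter.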
For about 4% of the galaxy images, which show no central concentration, the images are better fit by a $`e^{r^2}`$ (Gaussian) or even $`e^{r^4}`$ profile, in which the light distribution is both less centrally peaked and has no extended tail. The isophotes of some ellipticals may be boxy-distorted (Bender et al. (1989)) rather than following the elliptical models which have been adopted here. We will explore these and alternative models for fitting the continuum of irregular galaxies in a future paper.
For a point-like stellar image (star or QSO), we need four parameters: sky background, centroid (x,y), and magnitude. For the extended images of galaxies, we need at least one extra parameter which measures the size of the image. Taking into account the image jitter (see discussion above) and any errors in the PSF, we have found it useful to adopt a Gaussian profile and to estimate a size parameter even for the point-like images, to be used as a star-galaxy separation index. This procedure also takes the stellar image analysis through the same convolutions as those done for galaxy images, enabling the likelihood functions to be compared, with some caveats. Errors in the adopted PSF would appear as an extended residual image following the model fit. This could make a bright stellar image significantly better fitted with a model image which includes an extended component. This is a particularly important issue when attempting to detect underlying galaxies in QSO images (see Bahcall, Kirhakos & Schneider (1995)).
In Figure 9, we show for the GWS dataset a plot of half-light radius in seconds of arc as a function of $`\mathrm{\Xi }`$ for images with $`\mathrm{\Xi }>2.0`$. For most stellar images the estimated half-light radius is $`0\stackrel{}{\mathrm{.}}02`$, or $`\xi 0.2`$ WFPC2 pixels. At brighter magnitudes, with $`\mathrm{\Xi }>3.5`$, we notice some larger objects (but with $`\xi <`$ 1 pixel) which are very well separated from the sizes of galaxies; the PSF approximation adopted in the analysis is insufficient at these bright magnitudes. They could also be just-resolved stellar binaries, which at fainter magnitudes could contaminate the sample of objects classified as galaxies.
## 6 The parameters
We describe here the full list of model parameters in the order in which they are introduced as we increase the number fitted to an image. These are the intrinsic galaxy model parameters which are introduced before any PSF convolution or allowance for other instrumental effects such as (the small amount of) photon scattering in the CCD before detection.
(1) Sky Background.
The sky background is a very important part of the model estimates. A bias in the sky estimate could translate into a bias in the estimated morphology. Unlike, for example, Byun & Freeman (1995) or Schade et al. (1996), we have therefore chosen to derive a maximum likelihood estimate for the mean sky background simultaneously with the other image parameters. We use sufficient pixels to ensure that the mean background sky is determined to an accuracy of 0.5%. Typical fluctuations of order 1% are seen in a single WFC frame. Some of this variation may be caused by the extragalactic background light (EBL) from faint unresolved galaxies. Much larger fluctuations are occasionally caused by the faint halos of nearby images, or by charge transfer problems caused by bright stars. The estimated sky backgrounds are seen to follow these variations very well. The sky background is assumed to be flat over the small region selected for analysis of each object.
In the procedure we have adopted, disk-like or bulge-like model fits could possibly converge with slightly different sky backgrounds within the measurement errors. By allowing the sky to vary, we are not imposing some prior choice of sky background. The error in the sky background is then properly reflected in the error estimates for the galaxy parameters and the likelihood ratio used for morphological classification.
(2,3) Centroid
The $`(x,y)`$ centroid of the model image is in most cases very close to the centroid of the observed image. The mean errors for $`\mathrm{\Xi }`$ 2.0 and 1.6 are 0.1 and 0.2 pixels respectively. The error becomes much larger for images fainter than the detection limit (i.e. $`\mathrm{\Xi }<1.6`$). For the D+B models we assume the same centroid for both components. The software does allow an independent offset for the center of the bulge from that for the disk (parameters (12,13)), but this has not yet been fully investigated. The extra degree of freedom resulted in poor convergence in many more galaxies than in the few which justified it.
(4) Total magnitude.
The adopted magnitude is the analytical total magnitude of the galaxy model. This estimate has the advantage of not needing an aperture correction as is required for a fixed aperture or isophotal magnitude. However, since the magnitude integration is over a smooth galaxy image, small errors could arise from the fact that the model may not average properly over bright regions of star formation, for example. For D+B models the magnitude is the total for both components, a quantity better defined than the magnitudes of the individual components. The magnitudes of the individual disk and bulge components can be derived using the flux ratio ( $`\mathrm{B}/\mathrm{T}`$ see below).
Note that the total magnitude is integrated analytically out to infinity. For disk galaxies practically all (99%) of the light falls within 4 half-light radii. For bulge-like galaxies, however, only 85% of the light is within 4 half-light radii, and the model needs to extend out to 19 half-light radii to contain 99% of it. The total magnitude for an elliptical could therefore be $`15`$% brighter than when calculated by integration out to a typical isophotal detection radius, with a corresponding effect on the $`\mathrm{B}/\mathrm{T}`$ ratio.
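For these profiles the enclosed-light fraction within $`m`$ half-light radii is $`P(2n,b_nm^{1/n})`$, where $`P`$ is the regularized lower incomplete gamma function and $`b_n`$ solves $`P(2n,b_n)=1/2`$. A pure-Python sketch verifying the fractions quoted above, using the closed form of $`P`$ for integer $`2n`$ and assuming the standard values $`b_11.678`$ and $`b_47.669`$:

```python
import math

def reg_lower_gamma(k, x):
    # Regularized lower incomplete gamma P(k, x) for integer k >= 1:
    # P(k, x) = 1 - exp(-x) * sum_{j=0}^{k-1} x^j / j!
    s, term = 0.0, 1.0
    for j in range(k):
        s += term
        term *= x / (j + 1)
    return 1.0 - math.exp(-x) * s

def enclosed_fraction(n, m, b_n):
    # Fraction of the total light within m half-light radii for a
    # profile of index n (n=1 disk-like, n=4 bulge-like).
    return reg_lower_gamma(2 * n, b_n * m ** (1.0 / n))
```

Evaluating this gives about 0.99 for the disk within 4 half-light radii, and about 0.85 and 0.99 for the bulge within 4 and 19 half-light radii respectively, in agreement with the numbers quoted in the text.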
(5) Half-light radius.
This is the radius within which half the light of the unconvolved model would be contained if it were radially symmetric (an axis ratio of unity). For axisymmetric galaxies, this definition is independent of the observed axis ratio of the galaxy, a parameter which depends on the intrinsic axis ratio and its inclination to the line-of-sight.
For point-like sources we fit a Gaussian profile with an exponent of 2.0, and the half-light radius is then 0.69 times the scale length. For disk-like galaxies with a profile exponent of 1.0, it is 1.68 times the exponential scale length. For bulge-like galaxies with a profile exponent of 0.25, it is the effective radius or 7.67 times the scale length. For D+B models it is by definition still the major axis radius within which half the light of the combined profile is contained. Like the total magnitude, this is a quantity better defined than the half-light radii of the individual components.
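Assuming the factors quoted above are the standard $`b_n`$ values defined by $`P(2n,b_n)=1/2`$, where $`P`$ is the regularized lower incomplete gamma function and $`n`$ is the profile index ($`n=1/2`$ Gaussian, $`n=1`$ disk, $`n=4`$ bulge), they can be recovered numerically. A sketch by bisection, valid for integer $`2n`$:

```python
import math

def reg_lower_gamma(k, x):
    # Regularized lower incomplete gamma P(k, x) for integer k >= 1.
    s, term = 0.0, 1.0
    for j in range(k):
        s += term
        term *= x / (j + 1)
    return 1.0 - math.exp(-x) * s

def half_light_b(two_n):
    # Solve P(two_n, b) = 1/2 for b by bisection.
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if reg_lower_gamma(two_n, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

This recovers 0.69 for the Gaussian case ($`2n=1`$, where $`b=\mathrm{ln}2`$ exactly), 1.68 for the exponential disk ($`2n=2`$) and 7.67 for the de Vaucouleurs bulge ($`2n=8`$).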
As a direct consequence of allowing the sky background to be a free parameter, we need to impose a maximum half-light radius in order to prevent this parameter from becoming meaninglessly large when a galaxy with no central concentration is fitted with a disk-like or bulge-like model. This limit has been set conservatively to equal half the maximum radius of the region selected for analysis. For $`4`$% of the galaxy images, the half-light radius converges on this limit; those models need to be rejected and flagged for fitting with a less centrally concentrated model.
From numerical considerations we impose a minimum half-light radius of a tenth of a pixel on both the major and minor axes of a galaxy. For D+B models this minimum is imposed independently for each component. This assumption does not put any significant constraints on the axis ratio distribution of galaxies with a half-light radius larger than one pixel.
The quantity fitted is the logarithm of the half-light radius in seconds of arc. The half-light radius of the individual disk and bulge components can be derived using the bulge/(disk+bulge) flux ratio $`\mathrm{B}/\mathrm{T}`$ and bulge/disk half-light radius ratio (see $`\mathrm{HLF}`$ below).
(6) Orientation.
The adopted position angle is that of the axis of symmetry of the galaxy model. Measured in radians in the range $`[\pi /2,+\pi /2]`$, this is set equal to zero when the source is assumed to be azimuthally symmetric with an axis ratio of unity.
For pre-refurbishment data with a highly asymmetric PSF, the observed orientation of the image could be significantly different from the intrinsic orientation of the fitted model. During the minimization procedure, the angle is measured clockwise from positive Y to positive X of the relevant CCD. It is then translated into a position angle as measured clockwise from North towards East using $`\mathrm{PA}_{\mathrm{V}_3}`$of the HST attitude (pointing) vectors and the WFPC2 CCD plate-scale distortion map.
For D+B models we generally assume that the orientations of the disk and bulge components are the same. Since the bulge axis ratio is expected to be close to unity, any difference in orientation could be expected to be insignificant except in the brightest galaxy images. The software does allow for a difference in the orientation of the bulge from that of the disk (parameter(11)), but this too has not yet been fully investigated.
(7) Axis ratio
This is the ratio of the minor axis half-light radius to that of the major axis. This parameter has no units and is constrained to be smaller than unity to ensure proper definition of the major axis. For D+B models it is defined independently for each component. If the axis ratio cannot be shown to be significantly different from unity then it is held at unity; for the one-component case, the position angle can then also be dropped as a free parameter. The size of individual pixels also limits the ability to usefully constrain an axis ratio. Note that we adopt a minimum minor axis half-light radius of 0.1 pixel; i.e., for a galaxy with a half-light radius of 0$`\stackrel{}{\mathrm{.}}`$5 this imposes a lower limit on the axis ratio of 0.02, since WFPC2 has a pixel size of 0$`\stackrel{}{\mathrm{.}}`$1. In a few rare cases, this limit was useful in preventing the minimization procedure from converging on an unrealistically low axis ratio. This observationally imposed limit could be taken into consideration in an analysis of the axis ratio distribution, but can practically be ignored for galaxies with half-light radii larger than 1 pixel.
(8) bulge/(disk+bulge) flux ratio
This is the fractional flux contribution of the bulge-like component to the (disk+bulge) light ( $`\mathrm{B}/\mathrm{T}`$) in the galaxy image. It has no units and ranges from zero for pure disk-like galaxies to one for pure bulge-like galaxies. The ability to estimate this quantity depends on the integrated signal-to-noise index $`(\mathrm{\Xi })`$ in the image. A larger $`\mathrm{\Xi }`$ is needed to separate out a second component with smaller fractional contribution to the total light (see Figure 17). A second component is only fitted when there is a significant improvement to the likelihood ratio to compensate for the increased number of parameters. The definition has used (disk+bulge) rather than Total to allow for the possible extension of the model parameter set to a third component such as a central point source (see Sarajedini et al. (1996)).
(9) Bulge axis ratio
This is the ratio of the half-light radius of the minor axis to that of the major axis of the bulge-like component. In D+B models it is often a poorly defined quantity when the disk component dominates the galaxy image, and the ratio is then adopted to be unity. We could not determine any meaningful relation between the bulge axis ratio and the disk axis ratio. Such a relation might have been expected if most disks and bulges have a typical axis ratio and were related by the common inclination to the observed line of sight. The latter does not seem to be the case.
(10) The ratio of the half-light radii ($`\mathrm{HLF}`$) bulge/disk
This is the ratio of the half-light radius of the bulge-like component to that of the disk-like component. We observed that the logarithm of this ratio has a weak correlation with the $`\mathrm{B}/\mathrm{T}`$ flux ratio (see Figure 12). This correlation has been reported also by Kent (1985). For disk-like galaxies this ratio is about 0.25 and for bulge-like galaxies the ratio is about 1.6, i.e. on average, disk dominated galaxies have a disk half-light radius which is larger than the bulge half-light radius. Such is the case for our own Galaxy where this ratio is estimated to be about 0.65. However, there is a factor of 2.5 rms (i.e. one magnitude cosmic scatter) about the mean relation. It will be interesting to understand this relation using galaxy structure formation theories like those published by Mao & Mo (1998).
(11) Orientation difference of bulge from disk
See discussion above on Orientation.
(12,13) Centroid difference of bulge from disk
See discussion above on centroid.
## 7 Optimizing the model fitted
In brief outline the procedure is as follows:
The initial guess is typically far removed in parameter space from the final maximum likelihood model fit. At this point it is not useful to make any judgment about the selection of the model or the parameters to be fitted. However, testing has shown us that for the roughly 70% of a typical catalog with $`\mathrm{\Xi }<2`$, we are never able to fit a significant D+B model. These images are analyzed only as stars or as pure disk-like or pure bulge-like galaxies, and the better model is selected. In Figure 8 we show a histogram of the number of galaxies as a function of $`\mathrm{\Xi }`$, highlighting the fraction fitted as D+B and the fraction for which we can classify the object as being significantly disk-like or bulge-like.
We first start with a disk-like model, or if $`\mathrm{\Xi }>2`$ we attempt a 10-parameter D+B model fit. The first fit is a special quick mode of the minimization routine (modified IMSL 9.2 ZXMIN subroutine that uses a Quasi-Newton method). This mode of minimization is fairly fast since it does not attempt to check full convergence. It reaches a point in the multi-dimensional parameter space which is close enough to the final answer to investigate the likelihood function and make some intelligent decisions. These investigations are made after each minimization, and depend on the number of parameters that were fitted.
The quick mode does not use a higher resolution center (see Appendix). If a default resolution image has been used for the models, we investigate whether a high-resolution center would change the likelihood function. In over 75% of the tests in a typical catalog reaching down to the detection limit, the absolute change in the likelihood function is less than three, which is insufficient justification for the introduction of a higher resolution center. Since we are merging parts of two independently convolved images, the high-resolution center option is used only when needed.
If the half-light radius is less than $`10^{2.8+0.5\mathrm{max}(\mathrm{\Xi },3.6)}`$ arc seconds, the program branches to test if the object is point-like. As discussed above, we fit a symmetric 5-parameter Gaussian model to allow for image jitter and any errors in the PSF. For most images the cut is at $`0\stackrel{}{\mathrm{.}}1`$ or one WFC pixel, with a small increase for the brightest images (see Figure 9). This test is done for about 30% of the objects in the sample, although only about 8% of the sample are eventually classified as probable point-like stellar sources, either stars or quasars. The star-galaxy classification is based on both the likelihood ratio for the best-fit galaxy model and the evaluated half-light radius for the object, which is typically 0.2 pixels (equal to the resolution used for the sub-pixel definition of the PSF).
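The branch threshold is a simple function of $`\mathrm{\Xi }`$. A sketch, assuming the exponent is $`2.8+0.5\mathrm{max}(\mathrm{\Xi },3.6)`$ (the form that reproduces the quoted $`0\stackrel{}{\mathrm{.}}1`$ cut for $`\mathrm{\Xi }3.6`$, with a small increase for brighter images):

```python
def point_source_cut_arcsec(xi):
    # Half-light radius below which the point-like (stellar) test
    # is applied: 10 ** (-2.8 + 0.5 * max(xi, 3.6)) arcsec.
    # For xi <= 3.6 this is 0.1 arcsec, i.e. one WFC pixel.
    return 10.0 ** (-2.8 + 0.5 * max(xi, 3.6))
```

For example, the cut is 0.1 arcsec at $`\mathrm{\Xi }=2`$ and rises to about 0.16 arcsec at $`\mathrm{\Xi }=4`$.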
The next check is to see if a two-component D+B model, if being considered, is significantly better than a single-component model with fewer parameters. In 60% of the cases (for $`\mathrm{\Xi }>2.0`$) the numerical difference is less than 6, which is insufficient justification for fitting a D+B model. If the half-light radius is less than two pixels we again select a single-component fit. In Figure 8b we show the fraction of galaxies as a function of half-light radius for which we can fit D+B models, and the fraction for which we can classify the object as being significantly disk-like or bulge-like. The peak of the distribution for which we can fit D+B models is at about 5 pixels, and for obvious reasons we are not able to do so for galaxies with a half-light radius of less than 2 pixels. Even if the minimization gave a significant fit for a few of the latter galaxies, these fits are unlikely to be realistic models of such extremely under-sampled galaxy images.
For single-component galaxies we next check whether the axis ratio is significantly different from unity. If not, it is set equal to unity, and a five-parameter symmetric model is fitted to the data. For all galaxies we fit both a pure disk and a pure bulge model, selecting the better-fitting model. If the absolute value of the likelihood ratio is smaller than four, then the classification as disk or bulge is not significant and these objects are classified generically as “galaxy”. If the object has been classified at a longer wavelength as disk or bulge, then the model output is selected to be that of the nearest wavelength at which the image was definitively classified; otherwise, the model output is based formally on the likelihood ratio, regardless of its significance. Images with a sub-pixel half-light radius for which the likelihood ratio does not give a preference between star and galaxy are classified merely as “object”. The star-galaxy separation at sub-pixel half-light radii needs more detailed investigation, particularly for the purpose of isolating an uncontaminated sample of stars for modeling our own Milky Way Galaxy.
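The decision cascade of the last two paragraphs can be sketched as follows (an illustrative outline, not the pipeline code: the likelihood-ratio threshold of four and the sub-pixel radius test follow the text, while the function and argument names are our own):

```python
def classify(lr_disk, lr_bulge, lr_star, half_light_pix, lr_threshold=4.0):
    """Illustrative single-component classification by likelihood ratio.

    lr_* are log-likelihoods of the best-fit disk, bulge and star
    models; half_light_pix is the fitted half-light radius in pixels.
    """
    best_galaxy = max(lr_disk, lr_bulge)
    # Sub-pixel radius with no star/galaxy preference -> generic "object".
    if half_light_pix < 1.0 and abs(best_galaxy - lr_star) < lr_threshold:
        return "object"
    if lr_star > best_galaxy:
        return "star"
    # Disk vs. bulge not significant -> generic "galaxy".
    if abs(lr_disk - lr_bulge) < lr_threshold:
        return "galaxy"
    return "disk" if lr_disk > lr_bulge else "bulge"
```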
The image in each filter is modeled independently since the parameters in each filter need not be the same. In Figure 10 we compare the classification of images in GWS with $`\mathrm{\Xi }>2.0`$ in both the filters F606W and F814W. Most of the objects received the same classification in the two filters. As expected from Figure 9 there is very little ambiguity in the star-galaxy classification.
In Figure 11 we compare the parameter estimates for about 150 galaxies in GWS for which there is a full 10-parameter D+B fit in both filters and an rms error estimate smaller than 0.5 in $`log_e(\mathrm{HLF})`$. The orientation (PA) is clearly the best defined parameter, and this has proven very useful for studies of weak lensing (Griffiths et al. (1996)). The offset in total magnitude (Mag) between the two filters is the color of the galaxy. The half-light radius (HLR) is equal in the two filters for most galaxies. The axis ratios for the disk components (DAR) and for the bulge components (BAR) show scatter mostly from measurement error. The scatter in the $`\mathrm{B}/\mathrm{T}`$ flux ratio is real, and is caused by the different colors of the bulge and disk components.
For galaxies which demonstrably have two components, i.e. disk and bulge, the least well defined parameter is $`\mathrm{HLF}`$, the ratio of the half-light radii. After considerable effort, we have optimized an automated procedure to identify those cases for which a significant D+B model can be fitted. We are now able to select and converge (with over 90% success) on an unbiased estimate of the ratio of half-light radii for about half of these cases. The program determines whether this quantity is unconstrained by searching for a change in the likelihood as a function of this parameter. If the fainter component contributes less than 10% of the light, or if the axis ratio of both components is unity, then we have generally found this parameter to be poorly constrained. In Figure 12 we show that the logarithm of this parameter is a linear function of the $`\mathrm{B}/\mathrm{T}`$ flux ratio, with a correlation coefficient of about 0.5. Bulge-dominated galaxies have a systematically larger Bulge/Disk half-light radius ratio ($`\mathrm{HLF}`$) than disk-dominated galaxies. However, the surface brightness limit for detection of the fainter component (see Fig. 20) probably contributes most of the observed correlation.
The scatter of 0.4 dex rms about the adopted mean relation (solid line) is equivalent to a cosmic scatter of one magnitude. If, in the preliminary convergence, the flux ratio is $`\mathrm{B}/\mathrm{T}<0.1`$ or $`\mathrm{B}/\mathrm{T}>0.9`$, or if evaluation of the likelihood function at the extremes $`\mathrm{HLF}\pm 0.7`$ dex showed that the half-light radius ratio was not constrained by the data, then it is held fixed at the nominal value derived from the empirical relationship
$$\mathrm{log}_{10}(\mathrm{HLF})=-0.7+\mathrm{B}/\mathrm{T}$$
Such relationships, although needed to facilitate convergence of the model fits at fainter magnitudes, are at best rough approximations. However, when a parameter is unconstrained and its errors become comparable to the expected range of parameter space, this assumption does not significantly change the estimates of the better defined parameters. The justification for applying such a relationship is that it helps the routine converge on a better defined minimum.
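A minimal sketch of this fallback, assuming the relation reads $`\mathrm{log}_{10}(\mathrm{HLF})=-0.7+\mathrm{B}/\mathrm{T}`$ (i.e. with a negative intercept, so that disk-dominated galaxies get a nominal $`\mathrm{HLF}`$ near 0.2 and bulge-dominated ones near 2, consistent with the trend described above):

```python
def nominal_hlf(bt):
    """Nominal Bulge/Disk half-light radius ratio from the B/T flux
    ratio; used when HLF is unconstrained by the data (empirical
    relation with 0.4 dex rms scatter)."""
    return 10.0 ** (-0.7 + bt)

def constrain_hlf(bt, hlf_fit, hlf_unconstrained):
    # Hold HLF fixed at the nominal value when B/T is extreme or the
    # likelihood is flat over HLF +/- 0.7 dex.
    if bt < 0.1 or bt > 0.9 or hlf_unconstrained:
        return nominal_hlf(bt)
    return hlf_fit
```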
The program may also choose to fix the bulge axis ratio, or less frequently the disk axis ratio, at unity if either is determined to be not statistically different from unity.
## 8 Estimated errors of parameters
The covariance matrix is the inverse of the Hessian, i.e. the matrix of second-order derivatives evaluated at the peak of the likelihood function. When it is normalized to have unit diagonal elements, the cross terms give the correlation coefficients between the estimated model parameters. If the cross-correlation terms are not large, we can expect to derive reliable error estimates for the parameters from the diagonal elements. The parameters described above were chosen so as to minimize the covariance between the fitted parameters.
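This standard recipe can be sketched with numpy (the Hessian values below are illustrative only):

```python
import numpy as np

def mle_errors(hessian):
    """rms errors and correlation matrix from the Hessian of the
    negative log-likelihood at its minimum."""
    cov = np.linalg.inv(np.asarray(hessian, dtype=float))
    sigma = np.sqrt(np.diag(cov))            # diagonal -> rms errors
    corr = cov / np.outer(sigma, sigma)      # unit-diagonal cross terms
    return sigma, corr

# Illustrative 2-parameter Hessian with mild covariance:
sig, corr = mle_errors([[4.0, 1.0], [1.0, 2.0]])
```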
In MLE theory, if the image being modeled is the same as the simple model assumed, then the parameter estimates and associated errors will be unbiased. However, real galaxy images which are well resolved are more complex than the simple axisymmetric image models that are assumed for MLE. The effects of spiral arms and bars on the parameter estimates are complicated and difficult to quantify using simulations. In general, we can expect that, given a sufficiently large sample, the cosmic dispersion caused by image peculiarities will be averaged out.
In Figure 13 we illustrate a running mean of the rms errors of the parameters as a function of $`\mathrm{\Xi }`$. To first order, the logarithm of the rms error appears to increase linearly with $`\mathrm{\Xi }`$. The errors for single-component and two-component D+B fits are illustrated independently: in general, the latter errors are larger. There are a few points to notice. Firstly, the sky error, of order 0.005 magnitude, is practically independent of $`\mathrm{\Xi }`$ and is determined by our choice of the number of sky pixels included in the MLE. The orientation and centroid position, which were held the same for both components, show no significant increase in error over a single-component fit at the same $`\mathrm{\Xi }`$. The errors in the bulge axis ratio are much larger for the two-component fits. Since the rms of a random distribution between 0.13 and 1.00 is 0.25, rms errors larger than $`0.1`$ convey little useful information about the axis ratio. This occurs at $`\mathrm{\Xi }`$ values of 1.93 and 2.12 for single-component disk-like and bulge-like galaxies, and at $`\mathrm{\Xi }`$ values of about 2.25 and 2.72 for the disk and bulge components in D+B model fits. The $`\mathrm{B}/\mathrm{T}`$ errors do not become larger than 0.1, since a two-component model would not be significant if they did. The error in half-light radius is given in $`log_{10}`$ units; it reaches 0.1 dex, or 26%, at $`\mathrm{\Xi }`$ values of 2.15 and 2.37 for single-component and two-component models respectively. The $`\mathrm{HLF}`$ ratio, also given in $`log_{10}`$ units, is clearly the worst constrained parameter, requiring $`\mathrm{\Xi }>3.37`$ for the expected error to be less than 0.1 dex, or 26% rms.
The HDF superstack consisted of eleven individual HDF field pointings, and we can therefore use these to test the MLE method. We compare the MLE results for the HDF super-stack with the MLE results of the independent fits to the images of the same galaxies in each of the 11 sub-stacks. We limit the comparison to those galaxy images where the output from the sub-stacks resulted in the same morphology classification as that from the super-stack in the same filter. In the fits to all of the sub-stacks, we used the same object definition mask (see appendix) as that derived from object detection in the super-stack, together with the appropriate shifts.
In Figure 14 we compare the MLE parameters derived from the super-stack with the weighted means from the individual sub-stacks. We notice a small systematic bias: the axis ratios in the super-stack are slightly rounder and the half-light radii slightly larger, with a slightly larger $`\mathrm{HLF}`$ ratio. Our adopted approach of stacking after shifting by the closest integer number of pixels modifies the appearance of the peaked bulges. It will be instructive to see if the process of “drizzling” (Fruchter et al. (1997)) and stacking with sub-pixel shifts helps to remove this effect completely. The errors in the flux values of pixels in drizzled images are not independent, and to use our MLE approach the covariance error matrix for each pixel would need to be included in the evaluation of the likelihood function. Software to do this has yet to be developed. For the brighter galaxy images in the HDF it is probably better to use a weighted mean estimate of the galaxy parameters from the individual HDF sub-stacks rather than the MLE values derived for the super-stack. Although the image bias in our HDF super-stack is disappointing, it does show the power of MLE estimates to be sensitive to the true nature of the images analyzed. Of course, all these problems can be avoided by not stacking the images at all and, instead, by summing the likelihood over the individual images (Ratnatunga, Griffiths & Casertano (1994)). This latter approach, however, is computationally impractical as yet.
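The weighted means used here are the usual inverse-variance combinations of the independent sub-stack estimates; a sketch (the values below are illustrative):

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of independent sub-stack
    parameter estimates, with its formal error."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# e.g. three sub-stack half-light radius estimates (illustrative):
mean, err = weighted_mean([1.02, 0.98, 1.10], [0.05, 0.05, 0.10])
```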
Figure 13 allows us to estimate an expected error for a given $`\mathrm{\Xi }`$. If the error estimate from inverting the Hessian is significantly smaller, then it is unlikely to be real. This could happen for several reasons: there could be sufficient covariance between parameters that the diagonal terms represent only a small part of the error, or the non-axisymmetric features of the galaxy image could have produced a sharp dip in the likelihood function. The expected error could in principle be built into the evaluation of the Hessian at the peak of the likelihood function in order to pass over any such sharp dips; however, these relationships had not been derived at the time of the 1996-98 MDS pipeline processing. We find that a reasonable compromise for the current (October 1998) version of the database is to adopt the nominal expected error corresponding to an object half a magnitude brighter, whenever that is larger than the MLE error estimate from the Hessian. We find this is appropriate for all parameters except the orientation parameter, for which the original error estimates appear to be good: the orientation is not correlated with any of the other image parameters.
In Figure 15 we show a histogram of the normalized deviations of the parameter estimates evaluated in the individual sub-stacks from the values derived from the super-stack, and we compare the results with the expected standard normal distribution. The small bias caused by stacking, discussed above, is clearly visible. We see a tail significantly larger than normal for the $`\mathrm{B}/\mathrm{T}`$ flux ratio and for the $`\mathrm{HLF}`$ ratio, because of the residual covariance in these parameters. The overall accuracy of the MLE parameter error estimates seems reasonable if we recognize that the simple galaxy model fitted does not include the structural detail seen in real galaxy images at brighter magnitudes.
## 9 Selection effects due to Ockham’s razor
Since the adopted procedure fits the minimum number of parameters required to obtain a statistically significant best MLE fit, the parameter estimates reflect that decision.
In Figure 16 we show the distributions of disk and bulge axis ratios as a function of $`\mathrm{\Xi }`$. For illustration, the face-on case (axis ratio = unity) has been distributed randomly in the finite range \[1.00,1.05\], outside the fitted range \[0.01,1.00\]. The disk axis ratio appears to be randomly distributed within the range \[0.10,1.00\] for $`\mathrm{\Xi }`$ brighter than $`3.0`$. As images get fainter, axis ratios close to unity are found to be insignificantly different from unity. For example, at a $`\mathrm{\Xi }`$ of about 2, axis ratios in the range \[0.8,1.0\] get set equal to unity, thus removing two parameters from the MLE fit. The same increase in errors produces a scattering of the observed axis ratios below 0.10. The bulge axis ratios show a similar distribution, except that they are, as expected, larger than the disk axis ratios. We also notice a number of small bulge axis ratios which are spurious, caused by barred galaxies which are not properly represented by the current MLE models.
In Figure 17 we show the distributions of the $`\mathrm{B}/\mathrm{T}`$ flux ratio as a function of $`\mathrm{\Xi }`$. For illustration, single-component fits as pure disks ($`\mathrm{B}/\mathrm{T}=0`$) and pure bulges ($`\mathrm{B}/\mathrm{T}=1`$) have respectively been distributed randomly in the finite ranges \[-0.05,0.00\] and \[1.00,1.05\], outside the fitted range \[0.00,1.00\]. The $`\mathrm{B}/\mathrm{T}`$ flux ratio is distributed within the fitted range \[0.00,1.00\] for $`\mathrm{\Xi }`$ brighter than about 3.0, with the expected excess of disk-like galaxies. As images get fainter, ratios close to zero and unity are not observed, since these galaxies do not show a significant second component. The disk component in ellipticals is ‘lost’ before small bulges are lost in disk-like galaxies. For example, at a $`\mathrm{\Xi }`$ of about 2, the observed $`\mathrm{B}/\mathrm{T}`$ flux ratios are in the approximate range \[0.1,0.6\].
In Figure 18 we show the distributions of the $`\mathrm{B}/\mathrm{T}`$ flux ratio as a function of half-light radius. The distributions for single-component fits are as in Figure 17. As galaxy images get smaller, $`\mathrm{B}/\mathrm{T}`$ flux ratios close to zero and unity are not observed, since for these galaxies a significant second component cannot be resolved. Not unexpectedly, small bulge components in spirals can be inferred to have been lost from the MLE models of galaxies with half-light radii of a few pixels.
In Figure 19 we show the distributions of the $`\mathrm{HLF}`$ ratios as a function of $`\mathrm{\Xi }`$. For $`\mathrm{\Xi }`$ brighter than about 3.0 the ratio is seen to be distributed over a wide range. As images get fainter than a $`\mathrm{\Xi }`$ of about 2.4, the MLE routine does not estimate ratios larger than unity. This is because, as seen in Figure 17, the MLE program does not resolve disk-like components in faint galaxy images dominated by bulges.
We now look at the mean surface brightness within the half-light radius ellipse. This has a constant magnitude offset from the central surface brightness: $`1.12463`$ mag for disks and $`6.18126`$ mag for bulges. The central surface brightness is a commonly quoted quantity, independent of axis ratio and inclination for our simple galaxy models. The advantage of discussing mean surface brightness here is that the limiting mean surface brightness for morphological classification is about the same for the disk-like and bulge-like components of a galaxy.
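The disk offset can be re-derived directly: for an exponential disk $`I(r)=I_0e^{r/h}`$ the half-light radius is $`r_e=bh`$ with $`1e^b(1+b)=1/2`$, and since the mean intensity within $`r_e`$ is $`L/(2\pi r_e^2)`$ while $`I_0=L/(2\pi h^2)`$, the offset is $`2.5\mathrm{log}_{10}(b^2)`$. A stdlib sketch verifying the quoted disk value (the bulge offset depends on the details of the bulge profile and is not re-derived here):

```python
import math

def disk_halflight_b():
    """Solve 1 - exp(-b)*(1 + b) = 1/2 by bisection; for an
    exponential disk I(r) = I0*exp(-r/h) the half-light radius is
    r_e = b*h."""
    lo, hi = 1.0, 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if 1.0 - math.exp(-mid) * (1.0 + mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

b = disk_halflight_b()                 # ~1.6783
offset = 2.5 * math.log10(b * b)       # ~1.124 mag, as quoted for disks
```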
On the left-side of Figure 20 we show the mean surface brightness within the half-light radius ellipse as a function of the estimated major-axis half-light radius. In the case of D+B models, each of the components are considered separately. There is clearly a limiting magnitude for morphological classification which appears to be the same in each case, i.e. independent of whether it was a component of a D+B galaxy model, or a single component. We illustrate this for the GWS 4-stack images of 2800 seconds in F606W. A very similar graph is seen for F814W. There appears to be a slight numerical bias for MLE to converge on integer or half-integer half-light radii at the smaller values. This bias is presumably caused by our attempt to merge in a high-resolution center (see appendix).
On the right-side of Figure 20 we show the same surface brightness estimates within the half-light radius ellipse as a function of the total magnitude of the galaxy. Within the half-light radius ellipse of a galaxy or component, morphological classification can be done to a limit in surface brightness which is independent of the total magnitude of the galaxy up to certain magnitude limits. These two magnitude limits will be very useful as simple selection criteria in future models used to interpret the observed $`\mathrm{B}/\mathrm{T}`$ distribution of galaxies and surface brightness dimming for cosmology.
## 10 Results
Preliminary versions of the MDS catalog have been the source of many scientific investigations: see, for example, the papers on the size - redshift relation (Mutz et al. (1994)); angular size evolution (Im et al. 1995b ; Roche et al. (1996, 1997, 1998)); axis ratio distribution (Im et al. 1995a ); weak gravitational lensing (Griffiths et al. (1996)); luminosity functions of elliptical galaxies (Im et al. (1996)); morphological classification (Owens, Griffiths & Ratnatunga (1996); Naim, Ratnatunga & Griffiths 1997a ; Naim, Ratnatunga, & Griffiths 1997b ; Im et al. (1999)); galaxy interactions and mergers (Neuschaefer et al. (1997)); compact nuclei (Sarajedini et al. (1996)); the HST MDS cluster sample (Ostrander et al. (1998)); and a study of high-redshift clusters (Lubin et al. (1998)).
The catalog used in these analyses was mostly based on the star, disk or bulge model that best fit each object. Most of the previous analyses can be repeated on the new catalog and refined using the D+B models for the brighter sample. We do not, however, expect any significant changes to these previously reported results.
It is especially interesting to look at results for the two observables which have not previously been measured for large numbers of galaxies, especially in the magnitude range observed here, viz. the $`\mathrm{B}/\mathrm{T}`$ flux ratio and the Bulge/Disk half-light radius ratio ($`\mathrm{HLF}`$). We will, in fact, need to apply the same procedure to a large sample of bright nearby galaxies, like those from the Sloan Digital Sky Survey (Gunn et al. (1998)), in order to establish the behavior of these parameters for galaxies in the local universe.
## 11 Surface Brightness
We have made plots similar to Figure 20 for much deeper observations such as the Hubble Deep Field (HDF). In Figure 21 we show a running mean of surface brightness as a function of total magnitude for the GWS galaxies on the left side, and compare it with those estimated for the HDF on the right side. This graph illustrates Freeman’s result for disk galaxies (Freeman (1970)); the mean is the same, indicating that the observed distribution of surface brightness is intrinsic to the galaxies, with a cosmic dispersion of only about 1 magnitude. The expected trend of surface brightness dimming as the mean redshift increases for galaxies with fainter total magnitude is also seen. Correcting by $`1.12`$ mag, we estimate that the mean central surface brightness of disk galaxies is 20.6, 21.4 mag in F814W, F606W for the GWS, and 21.0, 21.8, 22.4 mag in F814W, F606W, F450W for the HDF. It is interesting that the mean surface brightness is the same for galaxies fitted as pure disks as it is for the disk components of D+B galaxies. For bulges the scatter appears to be very much larger, and our observations in GWS do not reach the limiting mean surface brightness for bulges. Consequently, the mean for bulges in the HDF is about 2 magnitudes fainter than for bulges in the GWS fields.
## 12 Galaxy Color
In Figure 22 we look at the color of GWS galaxies as a function of the F606W apparent magnitude, showing all 6 classifications. The dotted vertical line is drawn at the observed completeness magnitude (a $`\mathrm{\Xi }`$ of about 1.8). Most of the objects which were not classified morphologically are fainter than this limit. Furthermore, some of the images fainter than this limit which have been classified as point-like are probably faint galaxies rather than stars. We have chosen not to truncate the MDS catalogs at this limit, in order to avoid additional censorship of the sample in statistical analyses. All parameter estimates for galaxies fainter than this limit (i.e. magnitudes corresponding to $`\mathrm{\Xi }<2.2`$) should be used for statistical analyses only, i.e. studies should not be focused on individual galaxies, particularly any outliers of such a distribution.
In Figure 23 we compare the color of the bulge-components of galaxies with the corresponding disk-components for GWS galaxies which were fitted with D+B models in both F814W and F606W. It appears that the colors of disk and bulge components of many galaxies are similar (the dotted line), although bulges are observed to be systematically redder, as expected, except for a few isolated cases.
In Figure 24 we look at the color of the disk and bulge components of galaxies as a function of the $`\mathrm{B}/\mathrm{T}`$ flux ratio. As may be expected, the disk components of galaxies appear to become about 0.5 mag redder as we follow the plot from disk-like to bulge-like galaxies, with a cosmic scatter of 0.45 mag. The colors of bulges remain practically the same, to within about 0.2 mag, with a larger cosmic scatter of 0.6 magnitudes.
In Figure 25 we look at the color of galaxies as a function of the $`\mathrm{HLF}`$ ratio. It appears that redder galaxies have a smaller ratio. Careful statistical analysis is needed to ensure that this is “real” and is not caused by a selection effect in which the GWS galaxies were sufficiently bright that D+B models could be fitted to images in both F814W and F606W filters.
## 13 Conclusions
An automated maximum likelihood procedure has been developed to calibrate, detect and quantitatively measure objects in the HST WFPC2 fields. The procedure measures the parameters of faint galaxies despite the potential difficulties related to undersampling in WFPC2.
D+B models are now fitted routinely to the brighter galaxy images as part of the MDS pipeline. A D+B galaxy model, a pure disk, a pure bulge or a star model is chosen automatically using likelihood ratio tests, so that each image is classified with significant confidence.
Most HST MDS fields observed in 1994-1997 have been processed, resulting in a catalog of over 200,000 objects which have been put on the MDS website with a searchable browser interface. Clicking on a stack image will pick out and display the maximum likelihood model fit and the parameters for that object.
The statistical properties of the HST-MDS Catalog have resulted in many publications, and comparisons with models of galaxy evolution will continue.
This paper is based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The Medium Deep Survey was funded by STScI grant G02684 et seqq. and by the HST WFPC2 Science Team under JPL subcontract 960772, under NASA contract NAS7-918. Some of the data was also processed under the STScI archival grants GO6951, GO7536, and GO8384. We acknowledge the multiple contributions of Dr. Stefano Casertano, Dr. Myungshin Im, Mr. Adam Knudson and Dr. Lyman Neuschaefer who were associated with the MDS pipeline processing and analysis. We also thank the rest of the original MDS Co-I team, including Dr. Richard Ellis, Dr. Gerard Gilmore, Dr. John Huchra, Dr. Garth Illingworth, Dr. David Koo, Dr. Antony Tyson and Dr. Rogier Windhorst for their contributions to the program.
Appendix - MDS Pipeline
## 14 Association of WFPC data and MDS Field names
The MDS database was maintained and updated using Starview (Fruchter (1994)). Observations are assigned an alphanumeric 5-character name based on Galactic coordinates, as described below, such that fields from the same region of the sky are associated by name. The choice of the individual characters in the name is as follows:
1) The first letter of the name, ’u’, is the HST instrument letter assigned to WFPC2 observations. It was ’w’ for older WF/PC data.
2) Galactic Latitude from ‘a’ in the south to ‘z’ in the north in equal steps of sin(latitude), using numeric index \[6-9,0-5\] within 16$`\stackrel{}{.}`$1 from the Galactic plane.
3) Galactic Longitude using the sequence ‘1-5,a-z,6-9,0’ in steps of 10$`\stackrel{}{.}`$0, such that numerically indexed fields are towards the Galactic Center. The Galactic caps within 3$`\stackrel{}{.}`$9 of the poles are assigned “a-” for the SGP and “z\_” for the NGP, respectively.
4) Chronological sequence of primary target within the 31 degree<sup>2</sup> cells defined above, based on coordinates. Observations within a 0$`\stackrel{}{.}`$5 radius are assumed to be of the same target.
5) Chronological sequence of Association around the same primary target set. These fields may overlap each other.
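Item 2 of the scheme above can be sketched as follows (our reading, for the latitude character alone; the numeric index near the Galactic plane and the longitude sequence are omitted):

```python
import math

def latitude_char(lat_deg):
    """Letter 'a' (south) to 'z' (north) in 26 equal steps of
    sin(latitude). Letters near the plane would be replaced by the
    numeric index in the actual scheme."""
    idx = int((math.sin(math.radians(lat_deg)) + 1.0) / 2.0 * 26.0)
    return chr(ord('a') + min(idx, 25))   # clamp the lat = +90 edge
```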
The program assigns the names using a list of all pure parallel WFPC2 GO and GTO observations. We have not included the STScI UV-survey program (pid=6253) or the current archive program (pid=7909) for all parallel WFPC2 data since February 1998. Every dataset in an associated group is allowed to be a maximum of 8$`\stackrel{}{\mathrm{.}}`$0 (10% of the WFC CCD width) from any other dataset in the same association. This range is sufficient to associate all WFPC data taken in parallel with STIS or NICMOS observations, which are dithered, say, within a 5$`\stackrel{}{\mathrm{.}}`$6 square. For most cases the $`\mathrm{PA}_{\mathrm{V}_3}`$ orientation is identical. If it is not, then we ensure that the difference in rotation is less than 0$`\stackrel{}{.}`$03. This ensures a 1-1 mapping of the pixels, keeping any effect caused by the small rotation or differential distortion under about 0.5 pixels, the maximum error made by adopting integer-truncated pixel shifts between images in a stack.
Around some objects, such as the FOS calibration star BD+28-4211, MDS has many repeat observations, as illustrated in Figure 26.
## 15 Calibration procedure
Briefly, the calibration procedure is as follows.
The WFPC2 images are calibrated using the best available calibration data. We adopt the STScI static mask, super-bias and super-dark and flat field calibration files created for the HDF. Tables of hot pixels from STScI are used to correct fluxes in fluctuating warm pixels for the period of observation. Correction is made to ensure that the noise from any residual warm current is smaller than the read noise. Hot pixels, which cannot be corrected to that accuracy, are rejected. Saturated pixels and pixels with large dark current are flagged as bad and ignored. No attempt is made to interpolate over them. The software has been specifically developed to recognize the existence of missing pixel values.
In general we have more than one exposure in the same filter in the same field, allowing us to reject the numerous cosmic rays by stacking exposures with a $`3\sigma `$ clip. We use a corrected version of the IRAF/STSDAS combine task. See Ratnatunga et al. (1994) for a detailed discussion of various aspects of the stacking procedure and of the statistical errors which are corrected in the “combine” algorithm. An error image is also generated, which gives the rms error from the noise model, taking proper account of pixels rejected as cosmic rays, the dark current, and the flat field.
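The rejection step can be sketched with numpy (a minimal illustration only, not the corrected IRAF/STSDAS combine task; here the clip is taken about the per-pixel median, using a noise-model sigma supplied by the caller, and each pixel is assumed to survive in at least one exposure):

```python
import numpy as np

def sigma_clip_stack(exposures, noise_sigma, nsigma=3.0):
    """Average registered exposures, rejecting pixels deviating from
    the per-pixel median by more than nsigma times the noise-model
    sigma (e.g. cosmic-ray hits)."""
    cube = np.asarray(exposures, dtype=float)
    med = np.median(cube, axis=0)
    good = np.abs(cube - med) <= nsigma * noise_sigma
    stack = np.where(good, cube, 0.0).sum(axis=0) / good.sum(axis=0)
    return stack, good.sum(axis=0)   # stack and pixels-used count
```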
Shifts between images were determined by cross-correlation of the images. The coordinates listed in HST WFPC2 image and/or jitter file headers are often found to be insufficiently precise for the process of image stacking (Ratnatunga, Ostrander & Griffiths (1997)). The shifts are determined to an estimated rms accuracy of 0.1 WFC pixels. To avoid interpolation (which spreads the charge from cosmic rays, charge that is otherwise well confined), exposures are stacked with shifts corresponding to the nearest integer number of pixels, without any rotation or drizzling. Drizzled images (Fruchter et al. (1997)) are most useful for very deep exposures like the HDF, which do not occur in pure parallel observations. Drizzling causes the errors in adjacent pixels in the image to become correlated and significantly complicates a proper statistical analysis of the image.
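Integer-pixel registration by cross-correlation can be sketched with an FFT (a minimal version; the pipeline's quoted 0.1 pixel rms accuracy would require interpolating the correlation peak):

```python
import numpy as np

def integer_shift(ref, img):
    """Integer (dy, dx) to apply to img to align it with ref, from the
    peak of the FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the array midpoint correspond to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```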
A mode offset is employed to allow for changes in the sky background in different exposures due to changes in the fluorescent glow and scattered Sun/Earth light. The calibration accuracy is partly limited by the fluorescent glow. This can contribute as much as 50% of the dark current, and is strongly correlated with the cosmic ray activity during the WFPC2 exposure, which in turn depends on the particular orbit. However, except for very deep stacks like the HDF, the noise created by improper correction of this fluorescent glow results in a term which is small compared to other noise terms.
We next remove any large-scale gradient from the faint outer regions of bright galaxies, for which the nucleus was probably the target of the primary observation. The four CCD images of the WFPC2 are first oriented and merged along the pyramid edge and a single 2nd order 6-parameter polynomial surface is fitted across all four. This surface is then subtracted from each of the individual images and this automated procedure is iterated 2 or 3 times until no gradient is visible. Only about 4% of the processed MDS observations required this gradient removal.
After stacking, the image is multiplied by a selected factor, which is a power of 2, followed by an integer truncation and division by the same factor. This makes the images compressible without any loss of useful information, since the differential values before and after this process are much smaller than either the accuracy of the calibration or the averaged read noise. The selected power depends on NCOMB, the number of images stacked, and we adopt the function $`\mathrm{nint}(\mathrm{log}_2(8\mathrm{NCOMB}))`$, i.e. $`2^3`$ for a single image and $`2^6`$ for a deep 6-stack. The estimated rms error has an expected dynamic range of 0 to 25 ADU, and its accuracy is unlikely to be better than 0.01 ADU. Therefore the rms ADU error image is multiplied by 100 and truncated to the nearest integer, in order to generate a short-integer image which, when uncompressed, is half the size of the corresponding image of real numbers.
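The quantization step follows directly from this description (numpy sketch):

```python
import numpy as np

def compressible_quantize(img, ncomb):
    """Truncate stacked fluxes to steps of 1/factor ADU, with
    factor = 2**nint(log2(8*NCOMB)): 2**3 for a single image,
    2**6 for a deep 6-stack."""
    factor = 2.0 ** round(float(np.log2(8.0 * ncomb)))
    return np.trunc(np.asarray(img, dtype=float) * factor) / factor
```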
## 16 Object detection
The MDS pipeline was used to process only typical fields in which crowding was not a problem. We selected sparse fields in which the number of pixels $`4\sigma `$ or more above sky is typically under 5% of the total pixels in the field. We classified as non-survey, and excluded from the MDS pipeline image analysis, all low Galactic latitude fields with many stars, and those fields close to globular clusters and Local Group galaxies with, say, more than about 1600 objects detected. Non-survey MDS fields were analyzed independently by other members of the team.
In Figure 27 we illustrate the number of objects detected as a function of the number of pixels above $`4\sigma `$. The figure shows that most of the fields selected as part of the MDS survey have a smaller fraction of pixels over $`4\sigma `$ and a smaller number of objects detected than the non-survey fields. In both cases, images with no cosmic-ray split are indicated with crosses and show a systematically smaller number of object detections because of the attempted cosmic-ray cleanup.
Objects are located independently on each image using a ‘find’ algorithm developed for HST-WFPC data. This algorithm does not do any pre-convolution of the data, so that it is specifically designed to be insensitive to hot pixels and missing pixel values. It is based on finding local maxima and mapping nearby pixels to the central object, and then selecting those detections which are significantly above the noise. The detection threshold algorithm originally developed for pre-refurbishment WF/PC data was optimized for WFPC2. This resulted in the location of a practically identical list of objects in the overlapping region of three WFPC2 MDS parallel fields USA0\[1-3\] observed in June 1994. To ensure that we do not break up bright galaxies into small regions of star formation, we adopted an object resolution of $`0\stackrel{}{\mathrm{.}}5`$, and small regions within this radius were allowed to merge with a brighter center. A larger radius of $`1\stackrel{}{\mathrm{.}}0`$ was used for WF/PC data. This algorithm has been observed to locate real objects with efficiency as good as or better than that of the FOCAS algorithm (Tyson & Jarvis (1979)), which was developed mainly for ground-based data.
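The local-maximum core of such a finder can be sketched as follows (an illustrative sketch only; the thresholds, merging radius and mapping logic of the actual MDS ‘find’ algorithm are more elaborate):

```python
import numpy as np

def find_objects(image, sky, sigma, nsigma=4.0):
    # A pixel is flagged as an object peak if it exceeds sky + nsigma*sigma
    # and is not smaller than any of its 8 neighbours.  No pre-convolution
    # of the data is performed.
    thresh = sky + nsigma * sigma
    peaks = []
    ny, nx = image.shape
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            v = image[y, x]
            if v >= thresh and v >= image[y - 1:y + 2, x - 1:x + 2].max():
                peaks.append((y, x))
    return peaks
```

In the full algorithm, nearby above-threshold pixels would then be mapped to their peak, and peaks within the object-resolution radius merged with the brighter center.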
The MDS ‘find’ algorithm generates both a catalog and a ‘mask’ image, which associates each pixel with one object. This is a short integer file, since the MDS pipeline assumes that there are fewer than 10,000 objects in a single WFPC2 field suitable for analysis. The stacking and initial object location procedure is a fully automated first step of the MDS pipeline. After the initial find, we have the only interactive part of the operation. We first look at the exposure, and confirm that it satisfies our requirements for inclusion in the MDS. A typical MDS survey field is uncrowded, with about 400-800 objects detected in the 5 arc-min<sup>2</sup> field. We also exclude from the MDS catalogs those objects with a centroid within 10 pixels ($`1\stackrel{}{\mathrm{.}}0`$) of the pyramid and CCD edge, thus reducing the area surveyed by about 5% from 5.03 arc-min<sup>2</sup> to 4.77 arc-min<sup>2</sup> per WFPC2 field, and causing a $`2\stackrel{}{\mathrm{.}}0`$ wide gap in the shape of a cross in the center of each field. Rapid changes in the image distortion and the PSF (a residual consequence of the original HST spherical aberration) make the edge a very difficult region for reliable quantitative analysis.
The next operation is to fix up the mask for any bright objects which have been over-resolved, or to delete any ghost images or extremities of bright stellar diffraction spikes which have been spuriously detected as objects. The detection algorithm has been optimized to work best at intermediate to faint magnitudes, at the cost of over-resolving a few bright objects. The numbers plotted in Figure 27 are raw counts before cleanup. The spurious detections are flagged with an interactive cursor for rejection or merger with the central image. This interactive operation takes about 30 min per stack and is done with a well-defined set of guidelines which were originally developed for WF/PC data and modified appropriately for WFPC2.
The object detections in the various filters are then matched by software, and a single catalog is created, together with a revised mask for each image, so that corresponding pixels in the different filters are associated with the same object. Looking at a grid of the individual object detections, the final masks are inspected and the procedure is iterated as required to ensure that the object definitions as encoded by the final masks are acceptable. This is the conclusion of the calibration and object detection phase of the MDS pipeline.
The object detection algorithm and search thresholds were kept unchanged over the four years of the MDS. When new calibration data became available we recalibrated the data and created a new stack to obtain slightly lower noise in the image. We do not, however, redetect objects. The masks remain constant, and after the field has been set up in the MDS database, model fits to any objects can be reprocessed with practically no human intervention. When there has been a significant improvement in the calibration or the fitting software, the whole database is reprocessed to obtain an improved version of the catalog, which is uniform over the whole period of observation. This has, however, become practical only after we obtained a SPARC Ultra-1, which on its own can reprocess the current database of over 400 fields in about two months. All of the MDS fields have been reprocessed with the last (July 1996) version of the MDS image analysis software. The shifted stacks were refitted after they were improved in July 1997 using inter-image shifts derived from cross-correlation analysis.
## 17 Definition of the object region for analysis
Most galaxies are analyzed by picking out a 64-pixel square region centered on the galaxy. The very few images (on average about 3 galaxies per WFPC2 field, or 0.67% of the catalog) which are larger are analyzed as 128-pixel square images, and in extreme cases as a 256-pixel square region. The region size in pixels was chosen to be an integral power of two for efficient convolution of models by fast Fourier transforms (FFT).
An initial guess of the local sky background is determined using an adaptation of the iterative, asymmetric clipping procedure described by Ratnatunga & Newell (1984). In the very few cases where the local sky is found to be poorly defined (large rms and skew in the distribution), the global sky is adopted as the initial guess. We next use the mask of detected objects generated by the MDS ‘find’ program to define a $`1\sigma `$ contour around the object. This is done by selecting the subset of pixels which are adjacent to each other and are $`1\sigma `$ above the estimated local sky.
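A minimal sketch of an iterative, asymmetric clipping sky estimator in this spirit is shown below; the specific clipping thresholds and iteration count are our assumptions for illustration, not those of Ratnatunga & Newell (1984):

```python
import numpy as np

def clipped_sky(pixels, hi=2.5, lo=4.0, niter=5):
    # Objects only ever add flux, so the bright tail is clipped harder
    # (hi sigma above the median) than the faint tail (lo sigma below).
    data = np.asarray(pixels, dtype=float).ravel()
    for _ in range(niter):
        med = np.median(data)
        std = data.std()
        if std == 0.0:
            break
        keep = (data > med - lo * std) & (data < med + hi * std)
        if keep.all():
            break
        data = data[keep]
    return np.median(data)
```

Because the clipping is asymmetric, a modest fraction of object-contaminated pixels barely biases the estimate, whereas a plain mean would be pulled strongly bright.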
Despite careful ‘dark’ calibration and correction for suspected hot and warm pixels, fluctuating dark current leaves a few “hot” pixels in the image. Since these could contribute significant flux compared with that of some of the faint images, any isolated pixels in the region outside the $`1\sigma `$ contour which were over $`5\sigma `$ above their immediate neighbors were assumed to be hot pixels and rejected. This algorithm detected hot pixels in only 25% of the images, and in these cases found on average only 5 pixels in a 64-pixel square region (see Figure 7). These values are for the GWS taken with the WFPC2 before it was cooled down from $`78^{}\mathrm{C}`$ in April 1994, and for which warm-pixel corrections are not available. There are far fewer hot pixels in the newer data taken at $`88^{}\mathrm{C}`$.
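The isolated hot-pixel test can be sketched as follows (assumed logic for illustration; the pipeline’s actual implementation and replacement rule may differ):

```python
import numpy as np

def reject_hot_pixels(image, sigma, object_mask, nsig=5.0):
    # A pixel outside the object contour whose value exceeds every one of
    # its 8 immediate neighbours by more than nsig*sigma is treated as a
    # hot pixel and replaced by the neighbour median (our assumed repair).
    out = image.copy()
    hot = []
    ny, nx = image.shape
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            if object_mask[y, x]:
                continue  # pixels inside the 1-sigma contour are left alone
            nb = np.delete(image[y - 1:y + 2, x - 1:x + 2].ravel(), 4)
            if image[y, x] > nb.max() + nsig * sigma:
                out[y, x] = np.median(nb)
                hot.append((y, x))
    return out, hot
```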
The initial guess of the local sky and the choice of pixels within the $`1\sigma `$ contour associated with the object are factors which influence only the region picked out for analysis. The pixels within the $`1\sigma `$ contour get no different treatment when the likelihood function is integrated.
## 18 The observational error distribution.
The presence of cosmic rays makes the observational error distribution of the raw observation non-Gaussian. We have found that the cosmic ray contamination can be represented by a Weibull distribution with index 0.25. In theory, the likelihood function can be defined by taking the model all the way back through the calibration procedure in order to make the comparison by summing over independent raw observations without any stacking. If this is done, one can take proper account of the effect of telescope “breathing” which results in slight changes to the observed PSF. One can also allow for contamination by faint cosmic rays and even any analog to digital conversion errors (a problem mainly for old WF/PC data) on the observational error distribution (Ratnatunga et al. (1994)).
However, after extensive software development and investigation using simulations, we found that this analysis of raw observations and the use of a complex error distribution gained only about 0.15 magnitudes in quality of morphology classification over the very much simpler analysis of calibrating and stacking the image to remove cosmic rays and the assumption of a Gaussian error distribution. With the latter approximation the log likelihood function is equal to
$$-0.5\left(1+\mathrm{log}(2\pi )\right)-\chi ^2=-1.42-\chi ^2.$$
Maximizing the likelihood function is then identical to minimization of $`\chi ^2`$. We have currently chosen to use the simpler analysis since the very slight improvement in results does not justify the very large increase in computation.
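The equivalence is easy to demonstrate numerically: with Gaussian errors the log likelihood differs from $`\chi ^2`$ only by a model-independent constant and a fixed scale factor, so the maximum-likelihood parameter and the $`\chi ^2`$ minimum coincide. A toy amplitude fit (not the MDS fitter; here we use the conventional definition with $`\chi ^2/2`$ in the likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)
profile = np.exp(-0.5 * np.linspace(-3, 3, 50) ** 2)   # fixed model shape
data = 2.0 * profile + rng.normal(0.0, 0.05, 50)       # true amplitude 2.0

amps = np.linspace(1.0, 3.0, 401)
chi2 = np.array([(((data - a * profile) / 0.05) ** 2).sum() for a in amps])
# log L = const - chi^2 / 2: the constant does not depend on the model,
# so the maximum-likelihood amplitude is exactly the chi^2 minimum.
loglike = -0.5 * len(data) * np.log(2.0 * np.pi * 0.05 ** 2) - 0.5 * chi2

best_ml = amps[np.argmax(loglike)]
best_chi2 = amps[np.argmin(chi2)]
```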
## 19 Generation of model images for comparison with observation.
The creation of the model image is the most technical and computer-intensive part of the procedure. On average, of order 700 model images are used by the minimization routine to converge on the best-fit model of a single object. Since our minimization routine uses derivatives, an efficient high-precision algorithm is required. For under-sampled images like those from WFPC2, sub-pixelation is very important, particularly close to the central peak of the galaxy image. We have developed a procedure which is automatically optimized by the algorithm by testing the evaluated likelihood function on the image being analyzed. We find that for many images the convolution of the central pixels of the model image needs to be done in sub-pixel space, and then block-averaged for comparison with observation.
## 20 The creation of the image.
In order to ensure that the evaluated likelihood is a smooth function of all the model parameters, we require computation of the model image at much higher resolution than that observed, particularly in under-sampled regions close to the center. The image models we have adopted are scale free and have an axis of symmetry. In order to minimize computation and make use of this symmetry, we first evaluate the image by adopting an origin at the middle of the central pixel of the array. The outer regions of the image are evaluated without sub-pixelation. Since the models are scale free, the outer regions can be multiplied by a constant factor to obtain sub-pixeled values for the inner pixels.
For example, we generate an 81 pixel square image by first computing pixels outside the inner 27 pixel square. Using the axis of symmetry, only half these pixels need to be computed. Then each pixel outside the inner 9 pixel square and within the 27 pixel square region is integrated with 3x3 subpixelation by integrating the 9 pixels at 3 times the radius and using a scale factor appropriate for the selected model. Following this step, each pixel outside the inner 3 pixel square and within the 9 pixel square can be integrated with an effective 9x9 subpixelation, and the region outside the central pixel and within the inner 3 pixel square can be integrated with an effective 27x27 subpixelation. Finally the central pixel with an 81x81 subpixelation is integrated from the rest of the whole image and the contribution for the very central subpixel. In this way the model image that is created has a very high degree of subpixelation for the inner pixels at practically no extra computation. In this example, the central 64-pixel square region used gets computed as a 280 pixel square image with increasing subpixelation towards the center. The image is effectively computed at 39200 points at the cost of 2917 evaluations, more than an order of magnitude increase in speed. This approach of sub-pixelation can be used on any scale free model, even if not elliptical.
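The key property exploited here is that, for a scale-free profile, the flux integrated over a subpixel of one-third the side at one-third the radius is an exact constant multiple of the flux over the full pixel at the original radius. A small numerical check for a power-law profile (an illustrative sketch; for this profile the scale factor works out to 3 raised to the power gamma minus 2):

```python
import numpy as np

def pixel_flux(cx, cy, side, gamma, nsub=64):
    # Fine midpoint integration of the power-law profile r**(-gamma)
    # over a square pixel of the given side, centred at (cx, cy).
    h = side / nsub
    xs = cx - side / 2 + (np.arange(nsub) + 0.5) * h
    ys = cy - side / 2 + (np.arange(nsub) + 0.5) * h
    X, Y = np.meshgrid(xs, ys)
    return (np.hypot(X, Y) ** (-gamma)).sum() * h * h

gamma = 1.0
# Direct integration of an inner subpixel of side 1/3, centred at one
# third the position of an outer unit pixel centred at (2, 1) ...
direct = pixel_flux(2.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0, gamma)
# ... equals the outer-pixel flux times the constant factor 3**(gamma - 2),
# so outer-region evaluations can simply be rescaled for inner subpixels.
scaled = 3.0 ** (gamma - 2.0) * pixel_flux(2.0, 1.0, 1.0, gamma)
```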
## 21 The Point Spread Function
Selection of the Point Spread Function (PSF) is not easy. The choice is between using observed PSF’s of well-exposed stars and model PSF’s from programs such as tinytim (Krist (1992)). Observed stellar PSF’s are under-sampled, have random observational errors, and are not always available close to the image being analyzed. A compiled grid of stellar PSF’s from various observations has systematic errors comparable to the small systematic errors seen in model PSF’s. tinytim PSF images have the added advantage that they can be generated as sub-sampled images without observational jitter or the scattering in the WFPC2 CCD photon detection (see below for details).
Convolution of the WFPC2 model image is best done in sub-pixel space where it is less under-sampled. Tinytim (Krist (1992)) PSF’s are evaluated with 3 and 5 times sub-sampling for the PC and WFC CCD chips respectively. The 267-pixel square PSF images are stored in the same data file format as the observations, in a 3 by 3 PSF grid for each chip, each centered on the pixel for which it was evaluated. A PSF grid image data file is made for each filter used in the observations. In the image analysis we choose from the grid the PSF whose center is nearest to the location of the object. A 3 by 3 grid is sufficient for the corrected optics of WFPC2. An 11 by 11 grid with no sub-sampling was used for pre-refurbishment WF/PC data, for which under-sampling of the extended PSF due to the spherical aberration was relatively less of a problem than the rapidly changing PSF as a function of location on the chip.
We have so far ignored the changes to the PSF caused by the gradual shift of the mean focus between resets (a maximum of 6 microns) and the telescope breathing, which has an rms of 3 microns. This is in itself a complicated issue when a stacked image is used in the analysis, since the focus of every exposure in the stack cannot be assumed to be the same. Simulations have shown that for typical extended images with a half-light radius larger than, say, 2 pixels, model parameters derived using slightly different PSF’s are well within the parameter error estimates. For extended images, deviation of a galaxy from the simple model assumed could give larger errors in the parameter estimates.
## 22 Convolution
Convolution is done with IMSL 9.2 FFT routines. For most images a 64-pixel square array is used. We correct for any aliasing by generating a 16-pixel square image with a factor of 4 lower resolution and convolving it at the center of a 32-pixel square image in which the region outside the central 16-pixel square is set equal to zero. The flux which gets convolved out to this border region is a sufficient estimate of the alias, and is subtracted appropriately from the convolution of the original 64-pixel square array, which is not surrounded by a zero border to prevent aliasing. This correction procedure takes only 1.25 times longer, rather than the 4 times increase needed to surround the 64-pixel square array with a zero buffer and do a 128-pixel square array convolution.
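The wrap-around aliasing that this procedure corrects for can be demonstrated in one dimension (a toy illustration; the MDS correction estimates the aliased flux from a low-resolution convolution rather than zero-padding):

```python
import numpy as np

n = 64
img = np.zeros(n)
img[1] = 1.0                       # point source right next to the array edge
x = np.arange(n) - n // 2
psf = np.exp(-0.5 * x ** 2 / 4.0)  # Gaussian stand-in for the PSF
psf /= psf.sum()

# Unpadded FFT convolution is circular: flux spilling past the left edge
# wraps around and reappears at the far right edge of the array.
circ = np.real(np.fft.ifft(np.fft.fft(img) *
                           np.fft.fft(np.fft.ifftshift(psf))))

# Doubling the array with a zero buffer removes the wrap, at 4x the 2-D
# cost; the correction described above instead subtracts an estimate of
# the aliased flux, at only ~1.25x the cost.
pad = np.zeros(2 * n)
pad[1] = 1.0
xp = np.arange(2 * n) - n
psf2 = np.exp(-0.5 * xp ** 2 / 4.0)
psf2 /= psf2.sum()
padded = np.real(np.fft.ifft(np.fft.fft(pad) *
                             np.fft.fft(np.fft.ifftshift(psf2))))[:n]
```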
The subpixelation chosen is such that the region of the image selected for analysis will fit within a 64-pixel square array at the subpixel resolution. However, for WFPC2 images this was found to be insufficient for many highly peaked images that also cover most of the 64-pixel square array without any subpixelation. In these cases, we generate a second image for just the central pixels at the sub-pixelation used for the PSF, then convolve this image as a 32-pixel square array and correct it for aliasing effects using a procedure similar to that for the main image.
The high-resolution image replaces the central 5-pixel square region of the model image. Since the center region after convolution is at 5-pixel subpixelation, the image can be shifted to the required center and block-averaged to the observed pixel scale using a simple algorithm that assumes a uniform flux distribution within each subpixel. Including such a higher-resolution center takes a factor of 1.25 longer in CPU time, and is used as required based on changes to the likelihood function.
## 23 Scattering at time of photon detection in WFPC2 CCD.
There is a non-negligible probability that a photon will be counted in a pixel adjacent to that in which it should have been detected. For a highly peaked source such as a stellar image, the location of the centroid within a pixel will govern the spill over to the adjacent pixels. The photon detection scattering of the model image therefore needs to be done at the observed image resolution and cannot be incorporated within a sub-sampled PSF.
After shifting the center and block averaging down to the size of the observed pixels, the image is convolved by a 3-pixel square kernel to allow for the photon detection scattering in WFPC2 data. The symmetric kernel adopted has \[center,side,corner\] values of \[0.75,0.05,0.0125\] and is as recommended in tinytim 4.1. We have compared this kernel with an azimuthally averaged kernel like that which was recommended in tinytim 4.0 with elements \[0.5628,0.0937,0.0156\]. We find that the revised, more centrally peaked kernel yields better model fits to a sample of stars. However, the combination of PSF and scattering is still not perfect and leaves residuals of about 5% to 10% which are significant in the brighter images.
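Applying the scattering kernel is a plain 3-pixel square convolution at the observed resolution; a direct sketch (a shift-and-add version for clarity, not the pipeline code):

```python
import numpy as np

# The tinytim 4.1 charge-diffusion kernel quoted above: 25% of the flux
# of each photon event is redistributed into the 8 neighbouring pixels.
KERNEL = np.array([[0.0125, 0.05, 0.0125],
                   [0.05,   0.75, 0.05  ],
                   [0.0125, 0.05, 0.0125]])

def apply_diffusion(image, kernel=KERNEL):
    # Convolve the block-averaged model with the 3x3 scattering kernel by
    # accumulating shifted, weighted copies of the image (edges truncated).
    ny, nx = image.shape
    out = np.zeros_like(image)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out[max(0, dy):ny + min(0, dy), max(0, dx):nx + min(0, dx)] += (
                kernel[dy + 1, dx + 1]
                * image[max(0, -dy):ny - max(0, dy),
                        max(0, -dx):nx - max(0, dx)])
    return out
```

Note that both the 4.1 kernel and the azimuthally averaged 4.0 kernel sum to unity, so the convolution conserves flux away from the array edges.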
## 24 Image Jitter
Since the tinytim PSF models are generated without including any contribution from telescope jitter, the intrinsic half-light radius estimated for a point source is non-zero. For a WFPC2 primary pointing in fine lock we expect about 10 mas (milli-arc-seconds) of telescope jitter. However, for parallel WFPC2 observations it could be larger, because aberration corrections made for the primary instrument are slightly different from those required for the WFPC2. Since jitter data are still not available for all WFPC2 observations, they have not yet been incorporated into the MDS pipeline (see Ratnatunga et al. (1995) for details).
Shifts between images were determined by cross-correlation of the images. Any small sub-integer shifts between the images that are ignored in the stacking procedure would also increase the effective jitter in the stacked image. Pointing errors of 10 mas are possible at times of target reacquisition between consecutive orbits of the HST, and of 20 mas if the target is reacquired after some other observation.
Any systematic radial errors between the actual PSF for the observation and the tinytim PSF model would also translate into a larger effective half-light radius. We typically estimate about 20 mas for unsaturated stellar images, growing to as large as 60 mas for the very bright saturated stars. The half-light radius computed by the program is not yet corrected for jitter; this correction would not have any significant effect, except on those images with a half-light radius smaller than a pixel.
## 25 The flow chart of the fitting and the output files
The input data for the program are four STSDAS/GEIS images for each filter, the calibrated image, corresponding error image, object definition mask image, the PSF grid image, and an ASCII data file with keyword information about the pointing and global noise characteristics of each calibrated stack. These files are identified in the header of the catalog that then identifies the object to be analyzed by the group and coordinate of the centroid, together with the mask number. In a special mode, it is possible if necessary to identify a small group of adjacent objects resolved with different mask numbers as a single object in the analysis.
The program fits all available images of the object in the different filters and outputs an ASCII data file with the fitted parameters, covariance matrix and other information about the likelihood ratio, and the sequence of intermediate results and tests. Catalogs for all objects in a field or a number of fields can be obtained using a keyword search of these data files. Also created is a FITS data image for each object. This file has the format of a grid with a single row of 7 images for each filter, starting from the longest wavelength at the top and progressively shorter wavelengths below it.
From left to right the images on a single row are
(1) Full image area read from the stack as observed.
(2) Selected region for analysis, with any adjacent objects masked out.
(3) Maximum likelihood model image, following PSF convolution.
(4) Maximum likelihood model image.
(5) Residual image.
(6) Error image.
(7) Object mask image.
To make the FITS files short integer and compressible, the sky-subtracted stack images and the residuals are multiplied by 10 and transformed to short integer. The error image and mask are in the integer formats used for these images (see Sec. 15).
We show in Figure 28 an example of a well-exposed galaxy image from the MDS database as displayed on the MDS website. Note that even the fainter F606W image, which has $`\mathrm{\Xi }\approx 2.5`$, is in the range of the signal-to-noise index which gives a reasonable D+B decomposition.
As of October 1998 similar output for over 200,000 galaxies and stars had been made available on 19 CDROMS installed on a ‘Jukebox’ at STScI.
Analysis of an image as a star, disk-like, or bulge-like galaxy is fairly straightforward. The only parameter that may be dropped is the axis ratio; in that case the orientation parameter is also not needed, and the number of parameters fitted drops from 7 to 5.
The software can also use any profile index, or even attempt to optimize its value as was done for images in the Uppsala galaxy catalog (Lauberts & Valentijn (1989)). However, numerical simulations have shown that the profile index gives a measure of the $`\mathrm{B}/\mathrm{T}`$ flux ratio only if the axis ratios of the two components are very similar. Otherwise, the minimization procedure computes an index which is not within the range between one (for pure disk-like) and a quarter (for pure bulge-like). The value is often larger than two, as seen for the Uppsala galaxy catalog, in which the index seems to have been constrained to be smaller than three for the same reason.
The D+B analysis is much more complicated. Everything from the choice and definition of the fitted parameters to their automated selection to ensure convergence has required a long investigation based on both fits to real data and to realistic simulations. Much like the minimization process itself, getting close to the answer was much faster than checking and ensuring that the algorithm was optimal. An algorithm that works on the majority of the images has been tested and in use for some time. The final optimization became practical with the aid of a SPARC Ultra-1, which is more than an order of magnitude faster than the SPARC-2 on which the programs were developed.
Automation gives uniformity at the cost of a few complicated cases (particularly at bright magnitudes) where a decision made by the human eye would probably be different. The program was improved continually until about July 1996 to reduce the percentage (currently about 2%) of fits which are in error. All of the MDS database has been reprocessed with the improved version of the program logic.
# Interplay between Coherence and Incoherence in Multi-Soliton Complexes
## Abstract
We analyze photo-refractive incoherent soliton beams and their interactions in Kerr-like nonlinear media. The field in each of $`M`$ incoherently interacting components is calculated using an integrable set of coupled nonlinear Schrödinger equations. In particular, we obtain a general $`N`$-soliton solution, describing propagation of multi-soliton complexes and their collisions. The analysis shows that the evolution of such higher-order soliton beams is determined by coherent and incoherent contributions from fundamental solitons. Common features and differences between these internal interactions are revealed and illustrated by numerical examples.
One of the most noted discoveries of modern soliton science is that solitons can be excited by an incandescent light bulb instead of a high power laser source. This produces “incoherent solitons”; they can exist in photorefractive materials which require amazingly low powers to observe highly nonlinear phenomena. It is also remarkable that, in certain conditions, incoherent solitons in photorefractive materials can be studied using coupled nonlinear Schrödinger equations (NLSE).
In general, coupled NLSEs can be applied to various phenomena. These include incoherent solitons in photo-refractive materials, plasma waves in the random phase approximation, multicomponent Bose-Einstein condensates, and the self-confinement of multimode optical pulses in a glass fiber. Therefore their solutions are of great interest to theoretical physicists. In special cases these equations are found to be integrable. Then, in analogy with the single (scalar) NLSE (when the number of equations, $`M`$, is $`1`$) and the Manakov case ($`M=2`$), the total solution consists of a finite number ($`N`$) of solitons and small-amplitude radiation waves. The former are defined by the discrete spectrum of the linear $`(L,A)`$ operators and the latter by the continuous spectrum. Most applications deal with the soliton part of the solution, as it contains the most important features of the problem. Moreover, a localized superposition of fundamental solitons can be called a “multisoliton complex”. An incoherent soliton is a particular example of a multisoliton complex.
The cases $`M=1`$ and $`M=2`$ have been extensively discussed in the literature. On the other hand, results for general $`M`$ are scarce. The linear $`(L,A)`$ operators are important elements of the inverse scattering technique, which can be considered as a basis for the integrability of $`M`$ coupled NLSEs. Moreover, it has been shown that $`N`$-soliton solutions of $`M`$ coupled NLSEs can be found using a simple technique which is an extension of the theory of reflectionless potentials. In recent works, cases where each component has only one fundamental soliton have been considered. It was demonstrated that, in this configuration, the formation of stationary complexes may be observed, and corresponding solutions for $`M=N\le 4`$ were presented in explicit form.
So far, only the case of complete mutual incoherence of the fundamental solitons has been considered. In this case the multisoliton complex can also be viewed as a self-induced multimode waveguide. The general case, where fundamental solitons in the multisoliton complex interact both coherently and incoherently, has not been analyzed. Such interactions may be observed if $`N`$ is larger than $`M`$, so that each component has at least one fundamental soliton. In general, each fundamental soliton can be “spread out” among several components. We will refer to this effect as mixed “polarization” of fundamental solitons. However, in order to capture the distinctive features of coherent and incoherent soliton interactions, we will focus on a special case which is important for incoherent solitons. Specifically, we consider a situation where all the fundamental soliton polarizations are mutually parallel or orthogonal, and thus are conserved in collisions. Due to the symmetry of the NLSE with respect to rotations in functional space, hereafter we assume that each fundamental soliton is polarized in one component only. It is for this case that we present new explicit $`N`$-soliton solutions of $`M`$ coupled NLSEs, and we discuss the new physics which this brings into the theory.
We consider propagation of an incoherent self-trapped beam in a slow Kerr-like medium and write the set of coupled NLSEs in the form :
$$i\frac{\psi _m}{z}+\frac{1}{2}\frac{^2\psi _m}{x^2}+\delta n(I)\psi _m=0,$$
(1)
where $`\psi _m`$ denotes the $`m`$-th component of the beam, $`z`$ is the coordinate along the direction of propagation, $`x`$ is the transverse coordinate, and
$$\delta n(I)=\underset{m=1}{\overset{M}{}}\alpha _m|\psi _m|^2$$
(2)
is the change in refractive index profile created by all incoherent components of the light beam, where the $`\alpha _m`$ $`(>0)`$ are the coefficients representing the strength of the nonlinearity, and $`M`$ is the number of components.
Solutions in the form of multisoliton complexes of Eq. (1) and their collisions can be obtained using the formalism of with some refinements. First, we introduce functions $`u_j(x,z)`$ as solutions of the following set of equations:
$$\underset{m=1}{\overset{N}{}}D_{jm}u_m=e_j.$$
(3)
where $`N`$ is a total number of fundamental solitons, $`e_j=\chi _j\mathrm{exp}\left(k_j\overline{x}_j+ik_j^2\overline{z}_j/2\right)`$, $`\overline{x}_j=xx_j`$ and $`\overline{z}_j=zz_j`$ are shifted coordinates, and $`\chi _j`$ are arbitrary coefficients. The values $`x_j`$ and $`z_j`$ characterize the initial positions of fundamental solitons, but the actual beam trajectories may not follow the specified points due to mutual interactions between fundamental solitons. Each fundamental soliton is characterized by an eigenvalue $`k_j=r_j+i\mu _j`$. Its real part, $`r_j`$, determines the amplitude of the fundamental soliton, while the imaginary part, $`\mu _j=\mathrm{tan}\theta _j`$, accounts for the soliton velocity (i.e. motion in transverse direction). Here $`\theta _j`$ is the angle of the fundamental soliton propagation relative to the $`z`$ axis.
To distinguish coherent and incoherent contributions to the multi-soliton complex, we use variables $`n_j`$, which represent the number of the component where the $`j`$-th soliton is located. Thus, two fundamental solitons with $`n_j=n_m`$ are coherent, and they are incoherent otherwise. Now we can write the expression for the matrix $`D`$:
$$D_{jm}=\frac{e_je_m^{}}{k_j+k_m^{}}+\{\begin{array}{cc}1/\left(k_j+k_m^{}\right),\hfill & n_j=n_m,\hfill \\ 0,\hfill & n_jn_m.\hfill \end{array}$$
(4)
Finally, the $`N`$-soliton solution of the original Eq. (1) can be obtained by adding up of all the $`u_j`$ corresponding to a given component number $`m`$:
$$\psi _m=\underset{j;n_j=m}{}u_j/\sqrt{\alpha _m}.$$
(5)
Note that the number of terms in the sum is exactly the number of fundamental solitons polarized in this component, viz. $`N_m`$, and the total $`N`$ is $`_{m=1}^MN_m`$.
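For the simplest case $`M=N=1`$ this construction can be verified directly: the matrix $`D`$ of Eq. (4) reduces to a scalar, Eq. (3) gives $`u_1=e_1/D`$, and the result is the familiar sech-shaped fundamental soliton of amplitude $`r_1`$. A numerical sketch (illustrative only; $`z=0`$, $`x_1=z_1=0`$, and units with $`\alpha _1=1`$):

```python
import numpy as np

def soliton_field(x, k, chi=1.0):
    # N = M = 1 special case of Eqs. (3)-(4) at z = 0:
    # D is the scalar (e e* + 1)/(k + k*), and u = e / D.
    e = chi * np.exp(k * x)
    D = (e * np.conj(e) + 1.0) / (k + np.conj(k))
    return e / D

r = 1.5                              # real eigenvalue: amplitude r, zero velocity
x = np.linspace(-10.0, 10.0, 2001)
u = soliton_field(x, k=r + 0.0j)
# Algebraically u = r sech(r x), the fundamental soliton; with alpha_1 = 1
# the induced index change of Eq. (2) is simply |u|^2.
```

For larger $`N`$ the same recipe applies, with Eq. (3) solved as a linear system built from the matrix of Eq. (4).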
One of the features of this approach is that coherent fundamental solitons are ”split” among all the $`u_j`$ functions for a given component. However, when obtaining analytical solutions in explicit form, it is possible to separate fundamental solitons by combining terms with corresponding propagation constants. Consequently, we write the exact solutions for a different set of functions $`\stackrel{~}{u}_j`$, with each of them containing one fundamental soliton (at distances where coherent interactions are small). These are combined into the original functions in the following way: $`\psi _m=_{j;n_j=m}\stackrel{~}{u}_j/\sqrt{\alpha _m}`$.
The coefficients $`\chi _j`$ are arbitrary, and we can choose particular values for them:
$$\chi _j=\underset{m;n_mn_j}{}\sqrt{b_{jm}},$$
(6)
where $`b_{jm}=(k_j+k_m^{})/(k_jk_m)`$, and the square root value is taken on the branch with positive real part. This step significantly simplifies further analysis, as the resulting solution will acquire a highly symmetric form.
Finally, the explicit expressions for solutions can be found as sums over specific permutations:
$`\stackrel{~}{u}_j`$ $`=`$ $`{\displaystyle \frac{e^{i\gamma _j}}{U}}{\displaystyle \underset{\{1,\mathrm{},j1,j+1,\mathrm{},N\}L}{}}C_L^jF_L^j(x,z),`$ (7)
$`U`$ $`=`$ $`{\displaystyle \underset{\{1,\mathrm{},N\}L}{}}C_LF_L(x,z).`$ (9)
Here $`L`$ denotes four sets of indices $`(L_1,L_2,L_3,L_4)`$. The summation is performed over all combinations in which the given set of soliton numbers (for example, $`\{1,\mathrm{},N\}`$) can be split among all the $`L_j`$. When performing permutations, $`L_1`$, $`L_2`$ are only filled with numbers of mutually coherent solitons (thus the number of elements in these sets is the same).
The coefficients and functions from (7) are determined for each realization of the permutation $`L`$ as follows:
$$\begin{array}{c}\begin{array}{cc}C_L=\hfill & (1)^{|L_1|}T_{\mathrm{sg}}^{L_1}T_{\mathrm{sg}}^{L_2}T_{\mathrm{mg}}T_{\mathrm{sb}}^{L_3}T_{\mathrm{sb}}^{L_4}T_{\mathrm{mb}}T_{\mathrm{mgb}},\hfill \end{array}\hfill \\ \begin{array}{cc}F_L(x,z)=\hfill & \mathrm{cos}(S_\mathrm{g})\mathrm{cos}(S_\mathrm{f})\mathrm{cosh}(S_\mathrm{b})\hfill \\ & \mathrm{sin}(S_\mathrm{g})\mathrm{sin}(S_\mathrm{f})\mathrm{sinh}(S_\mathrm{b}),\hfill \end{array}\hfill \\ \begin{array}{cc}C_L^j=\hfill & (1)^{|L_1|}T_\mathrm{c}^jT_{\mathrm{sg}}^{L_1}T_{\mathrm{sg}}^{L_2}T_{\mathrm{mg}}T_\mathrm{g}^jT_{\mathrm{sb}}^{L_3}T_{\mathrm{sb}}^{L_4}T_{\mathrm{mb}}T_{\mathrm{mgb}},\hfill \end{array}\hfill \\ \begin{array}{cc}F_L^j(x,z)=\hfill & \mathrm{cos}(S_\mathrm{g}^j)\mathrm{cos}(S_\mathrm{f})\mathrm{cosh}(S_\mathrm{b}^j)\hfill \\ & \mathrm{sin}(S_\mathrm{g}^j)\mathrm{sin}(S_\mathrm{f})\mathrm{sinh}(S_\mathrm{b}^j).\hfill \end{array}\hfill \end{array}$$
(10)
Here we used $`|L_l|`$ to denote the number of elements in the set. Note that the $`F`$ functions are written in the simplest form in terms of trigonometric and hyperbolic functions, due to the specific choice of coefficients in Eq. (6).
The variables introduced above are the following sums and products over the $`L_j`$ sets:
$`T_\mathrm{c}^j=\left[1+{\displaystyle \underset{mL_1;n_m=n_j}{}}1\right]^1,`$
$`T_{\mathrm{sg}}^{L_l}={\displaystyle \underset{\{j,m\}L_l;j<m}{}}\{\begin{array}{cc}|k_jk_m|^2,\hfill & n_j=n_m,\hfill \\ s_{jm}|k_j+k_m^{}|,\hfill & n_jn_m,\hfill \end{array}`$
$`T_{\mathrm{mg}}={\displaystyle \underset{jL_1;mL_2}{}}\{\begin{array}{cc}1/|k_j+k_m^{}|^2,\hfill & n_j=n_m,\hfill \\ s_{jm}/|k_jk_m|,\hfill & n_jn_m,\hfill \end{array}`$
$`T_\mathrm{g}^j={\displaystyle \underset{mL_1L_2L_3L_4}{}}\{\begin{array}{cc}1/c_{jm},\hfill & n_j=n_m,\hfill \\ s_{jm}\sqrt{c_{jm}},\hfill & n_jn_m,\hfill \end{array}`$
$`T_{\mathrm{sb}}^{L_l}={\displaystyle \underset{\{j,m\}L_l;jm}{}}\{\begin{array}{cc}1/(2r_j),\hfill & j=m,\hfill \\ c_{jm}^2,\hfill & n_j=n_m,\hfill \\ 1,\hfill & n_jn_m,\hfill \end{array}`$
$`T_{\mathrm{mb}}={\displaystyle \underset{jL_3;mL_4}{}}\{\begin{array}{cc}1,\hfill & n_j=n_m,\hfill \\ c_{jm},\hfill & n_jn_m,\hfill \end{array}`$
$`T_{\mathrm{mgb}}={\displaystyle \underset{\begin{array}{c}m_1L_1L_2\hfill \\ m_2L_3L_4\hfill \end{array}}{}}\{\begin{array}{cc}1/c_{m_1m_2},\hfill & n_{m_1}=n_{m_2},\hfill \\ s_{m_1m_2}\sqrt{c_{m_1m_2}},\hfill & n_{m_1}n_{m_2},\hfill \end{array}`$
$`S_\mathrm{g}={\displaystyle \underset{jL_1}{}}\gamma _j{\displaystyle \underset{jL_2}{}}\gamma _j,S_\mathrm{b}={\displaystyle \underset{jL_3}{}}\beta _j{\displaystyle \underset{jL_4}{}}\beta _j,`$
$`S_\mathrm{g}^j=S_\mathrm{g}i\left(S_{\mathrm{sg}}^{j,L_1}S_{\mathrm{sg}}^{j,L_2}\right),`$
$`S_{\mathrm{sg}}^{j,L_l}={\displaystyle \underset{mL_l}{}}\{\begin{array}{cc}2\eta _{jm},\hfill & n_j=n_m,\hfill \\ \eta _{jm},\hfill & n_jn_m,\hfill \end{array}`$
$`S_\mathrm{b}^j=S_\mathrm{b}+i\left(S_{\mathrm{sb}}^{j,L_3}S_{\mathrm{sb}}^{j,L_4}\right),`$
$`S_{\mathrm{sb}}^{j,L_l}={\displaystyle \underset{mL_l}{}}\{\begin{array}{cc}2\phi _{jm},\hfill & n_j=n_m,\hfill \\ \phi _{jm},\hfill & n_jn_m,\hfill \end{array}`$
$`S_\mathrm{f}=S_\phi ^{L_1,L_3}+S_\phi ^{L_2,L_4}S_\phi ^{L_1,L_4}S_\phi ^{L_2,L_3},`$
$`S_\phi ^{L_{l1},L_{l2}}={\displaystyle \underset{jL_{l1};mL_{l2}}{}}\{\begin{array}{cc}2\phi _{jm},\hfill & n_j=n_m,\hfill \\ \phi _{jm},\hfill & n_jn_m.\hfill \end{array}`$
Here the ”$``$” operator is used to merge the sets, and the variables $`\beta _j+i\gamma _j=k_j\overline{x}_j+ik_j^2\overline{z}_j/2`$ (with $`\beta _j`$ and $`\gamma _j`$ real), $`\eta _{jm}=\mathrm{log}\left(\left|(k_jk_m)(k_j+k_m^{})\right|\right)/2`$, $`c_{jm}=|b_{jm}|`$, $`\phi _{jm}=\mathrm{arg}\left(1/b_{jm}\right)/2`$, $`s_{jm}=\mathrm{sign}\left\{\pi \mathrm{arg}\left[\sqrt{b_{jm}}\left(\sqrt{b_{mj}}\right)^{}/\left(k_j+k_m^{}\right)\right]\right\}\mathrm{sign}\left(mj\right)`$, $`b_{jm}=(k_j+k_m^{})/(k_jk_m)`$. The function $`\mathrm{arg}`$ is supposed to give values in the interval $`[0,\mathrm{\hspace{0.33em}2}\pi )`$, and
$`\mathrm{sign}=\{\begin{array}{cc}1,\hfill & x0,\hfill \\ 1,\hfill & x<0.\hfill \end{array}`$
Note that only $`\beta _j`$ and $`\gamma _j`$ depend on the coordinates $`(x,z)`$. All the other coefficients are expressed in terms of the wave numbers $`k_j`$ and constant shifts in positions $`(x_j,z_j)`$ of the $`N`$ fundamental solitons. As the total solution has translational symmetry, one of the shifts can be fixed, so that the number of independent parameters controlling the multisoliton complex is $`2N1`$.
If an incoherent soliton consists only of orthogonally polarized fundamental solitons ($`n_jj`$, $`NM`$), and all are propagating in the same direction, then its transverse intensity profile remains stationary . In this particular case, the general expressions (10) are radically simplified, since, due to the above-mentioned restrictions on the permutations, the sets $`L_1`$ and $`L_2`$ are always empty. Hence, we obtain:
$`C_L=T_{\mathrm{mb}},C_L^j=2r_j\chi _jT_{\mathrm{mb}},`$
$`F_L=\mathrm{cosh}(S_\mathrm{b}),F_L^j=\mathrm{cosh}(S_\mathrm{b}^j).`$
Note that here we have neglected a common multiplier in $`C_L`$ and $`C_L^j`$, as these coefficients determine respectively the denominator and numerator in the expression for $`\stackrel{~}{u}_j`$.
Now we present numerical examples to illustrate these results. An example of a stationary incoherent soliton consisting of eight components ($`N=M=8`$) is shown in Fig. 1. The profiles of the constituent fundamental solitons, and their superposition as a whole, are determined by the wave numbers and relative shifts along the $`x`$ axis. In this configuration, the shifts in propagation direction, $`z_j`$, correspond to arbitrary phase changes of different components, but these do not influence the evolution due to the incoherent nature of the inter-component interactions.
On the other hand, if $`N>M`$, two or more of the fundamental solitons are polarized in the same components, and thus interact coherently. If the inclination angles of the fundamental solitons are all the same, the beam will remain localized upon propagation. Such a multi-soliton complex is an incoherent soliton with an intensity profile which evolves periodically or quasi-periodically, as shown in Fig. 2. These oscillations, appearing due to internal coherent intra-component interactions, are a general feature of incoherent solitons, and can be eliminated only in specific cases, as discussed earlier. It follows that spatial "beating" always accompanies the interaction of fundamental solitons of a single NLSE, which agrees with previous studies .
Our explicit solution (7) also describes collisions of incoherent solitons. As mentioned earlier, the polarizations of the fundamental solitons are preserved in collisions (provided they are orthogonal or parallel), and thus the degree of internal coherence does not change. However, the shifts of the fundamental soliton trajectories differ, and this results in the incoherent solitons changing their shapes. These transformations can be seen clearly in Fig. 3.
To calculate the shifts, we use the fact that in the expression for the soliton profiles, $`\stackrel{~}{u}_j`$, given by Eq. (7), the denominator $`U`$ is real, and the numerator does not depend on the coordinates of the corresponding fundamental soliton $`(x_j,z_j)`$. It is then straightforward to take appropriate limits and calculate the shift of the $`j`$-th fundamental soliton along the $`x`$ axis due to collisions:
$`\delta x_j={\displaystyle \frac{1}{r_j}}{\displaystyle \underset{m}{}}\pm \{\begin{array}{cc}2\mathrm{ln}(c_{jm}),\hfill & n_j=n_m,\hfill \\ \mathrm{ln}(c_{jm}),\hfill & n_jn_m.\hfill \end{array}`$
Here the summation runs over the fundamental solitons that take part in the collisions. The "$`+`$" sign corresponds to the case when the colliding soliton number $`m`$ comes from the right (i.e., has a larger $`x`$ coordinate before the impact), and the "$`-`$" sign to the case when it comes from the left. This is a generalization of the expressions found in .
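The shift formula above can be evaluated numerically as in the sketch below; the identification $`r_j=\mathrm{Re}(k_j)`$ and the bookkeeping of approach directions are our illustrative assumptions. For two colliding solitons, the formula gives equal and opposite shifts weighted by the amplitudes.

```python
import numpy as np

def collision_shift(j, k, n, from_right):
    """Shift of the j-th fundamental soliton along x after collisions.

    k: complex wave numbers; n: component (polarization) indices;
    from_right[m]: True if soliton m approaches from larger x.
    Assumes r_j = Re(k_j), an illustrative convention.
    """
    shift = 0.0
    for m in range(len(k)):
        if m == j:
            continue
        b = (k[j] + np.conj(k[m])) / (k[j] - k[m])
        term = (2.0 if n[m] == n[j] else 1.0) * np.log(abs(b))
        shift += term if from_right[m] else -term
    return shift / k[j].real

# two orthogonally polarized solitons; soliton 1 overtakes soliton 0
# coming from the right
k = np.array([1.0 + 0.3j, 1.2 - 0.2j])
n = [0, 1]
print(collision_shift(0, k, n, from_right=[False, True]))
```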
In summary, we have obtained a general $`N`$-soliton solution of $`M`$ coupled nonlinear Schrödinger equations which describes multi-soliton complexes supported by a Kerr-type nonlinearity. A particular example is an incoherent soliton in a photo-refractive medium. We have shown that the properties of multi-soliton complexes, which are superpositions of fundamental solitons with orthogonal or parallel polarizations, are determined by internal interactions, both phase-insensitive inter-component and coherent intra-component, with the latter resulting in spatial "beating". Using our exact result, we also analyzed collisions of incoherent solitons. We showed that the re-shaping of incoherent solitons after collisions is characterized by the relative shifts of the fundamental solitons, and that these shifts can be calculated using a simple analytical formula. These distinctive features of incoherent solitons are illustrated by numerical examples.
The authors are part of the Australian Photonics CRC. We are grateful to Dr. Ankiewicz for a critical reading of this manuscript.
# Andreev reflection in engineered Al/Si/InGaAs(001) junctions
<sup>1</sup><sup>1</sup>footnotetext: Also with Università di Trieste, Trieste, Italy.
## Abstract
Complete suppression of the native n-type Schottky barrier is demonstrated in Al/InGaAs(001) junctions grown by molecular-beam-epitaxy. This result was achieved by the insertion of Si bilayers at the metal-semiconductor interface allowing the realization of truly Ohmic non-alloyed contacts in low-doped and low-In content InGaAs/Si/Al junctions. It is shown that this technique is ideally suited for the fabrication of high-transparency superconductor-semiconductor junctions. To this end magnetotransport characterization of Al/Si/InGaAs low-n-doped single junctions below the Al critical temperature is presented. Our measurements show Andreev-reflection dominated transport corresponding to junction transparency close to the theoretical limit due to Fermi-velocity mismatch.
In the last few years there has been an increasing interest in the study of semiconductor-superconductor (Sm-S) hybrid systems . These allow the investigation of exotic coherent-transport effects and have great potential for device applications. The characteristic physical phenomenon driving electron transport at a S-Sm junction is Andreev reflection . In this process (originally observed in normal metal-superconductor junctions ) an electron incident from the Sm side on the superconductor may be transmitted as a Cooper pair if a hole is retroreflected along the time-reversed path of the incoming particle. High junction transparency is a crucial property for the observation of Andreev-reflection dominated transport. Different techniques have been explored to meet this requirement, including metal deposition immediately after As-decapping , Ar<sup>+</sup> back-sputtering , and in situ metallization in the molecular-beam epitaxy (MBE) chamber . All these tests were performed in InAs-based Sm-S devices, where the main transmittance-limiting factor is interface contamination. By contrast, for semiconductor materials such as those grown on either GaAs or InP, the strongest limitation arises from the presence of a native Schottky barrier. In this case, penetrating contacts and heavily doped surface layers have been used to enhance junction transparency. Recently we have reported on a new technique , alternative to doping, to obtain Schottky-barrier-free Al/n-In<sub>x</sub>Ga<sub>1-x</sub>As(001) junctions ($`x0.3`$) by MBE growth. This is based on the inclusion of an ultrathin Si interface layer under As flux, which changes the pinning position of the Fermi level at the metal-semiconductor junction and leads to the total suppression of the Schottky barrier. In this work we present the behavior of such Ohmic contacts and demonstrate how this method can be successfully exploited to obtain high-transparency Sm-S hybrid junctions.
Notably these are based on low-doped and low-In-content InGaAs alloys that are ideal candidates for the implementation of ballistic-transport structures.
Al/n-In<sub>0.38</sub>Ga<sub>0.62</sub>As junctions incorporating Si interface layers were grown by MBE. Their schematic structure is shown in Fig. 1. The semiconductor portion consists of a 300-nm-thick GaAs buffer layer grown at 600 C on n-type GaAs(001) and Si-doped at $`n10^{18}`$ cm<sup>-3</sup> followed by a 2-$`\mu `$m-thick n-In<sub>0.38</sub>Ga<sub>0.62</sub>As layer grown at 500 C with an inhomogeneous doping profile. The top 1.5-$`\mu `$m-thick region was doped at $`n=6.510^{16}`$ cm<sup>-3</sup>, while the bottom buffer region (0.5 $`\mu `$m thick) was heavily doped at $`n10^{18}`$ cm<sup>-3</sup>. After In<sub>0.38</sub>Ga<sub>0.62</sub>As growth the substrate temperature was lowered to 300 C and a Si atomic bilayer was deposited under As flux . Al deposition ($`150`$ nm) was carried out in situ at room temperature. During Al deposition the pressure in the MBE chamber was below $`510^{10}`$ Torr. Reference Al/n-In<sub>0.38</sub>Ga<sub>0.62</sub>As junctions were also grown with the same semiconductor part but without the Si interface layer.
In order to compare the current-voltage ($`I`$$`V`$) behavior of Si-engineered and reference junctions, circular contacts were defined on the top surface with various diameters in the 75–150 $`\mu `$m range. Standard photolithographic techniques and wet chemical etching were used to this end. Back contacting was provided for electrical characterization by metallizing the whole substrate bottom. $`I`$$`V`$ characterization was performed in the 20–300 K temperature range using a closed-cycle cryostat equipped with microprobes. Typical room-temperature (dashed lines) and low-temperature (solid lines) $`I`$$`V`$ characteristics for both Si-engineered and reference diodes are shown in Fig. 2.
The reference diode exhibits a marked rectifying behavior which is enhanced at low temperatures. We have measured the corresponding barrier height by different techniques: thermionic-emission $`I`$$`V`$ measurements in the 270–300 K temperature range, and linear fit in the forward bias region of log($`I`$)–$`V`$ characteristics measured at $`200`$ K. These two approaches yielded barrier heights of $`0.22\pm 0.05`$ eV and $`0.23\pm 0.02`$ eV respectively. These values include corrections for image-charge and thermionic-field-emission effects . The quoted uncertainties reflect diode to diode fluctuations and uncertainties in the barrier height determination.
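The barrier heights above are extracted from thermionic-emission fits. As a hedged illustration (the Richardson constant and contact geometry below are generic placeholders, not the data of this work), the round trip from a saturation current to $`\varphi _B`$ can be sketched as:

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant [eV/K]
A_STAR = 8.0          # illustrative effective Richardson constant [A cm^-2 K^-2]

def barrier_height(i_sat, area_cm2, temperature):
    # phi_B [eV] from I_sat = area * A** * T^2 * exp(-phi_B / kT)
    j_sat = i_sat / area_cm2
    return K_B * temperature * np.log(A_STAR * temperature**2 / j_sat)

# round trip with an assumed 0.22 eV barrier on a 75 um diameter dot at 300 K
T = 300.0
area = np.pi * (75e-4 / 2.0) ** 2          # cm^2
i_sat = area * A_STAR * T**2 * np.exp(-0.22 / (K_B * T))
print(barrier_height(i_sat, area, T))      # recovers 0.22 eV
```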
The engineered diode shows no rectifying behavior even at low temperatures (20 K in Fig. 2). Its $`I`$$`V`$ characteristics bear no trace of a Schottky barrier (SB) and are linear over the whole 20–300 K temperature range. Their slope is only weakly affected by temperature. To investigate the possible existence of a residual SB whose rectifying effect might be hidden by the series resistance arising from the InGaAs bulk and the back contact, we modeled the low-temperature $`I`$$`V`$ behavior of the engineered diode in terms of a residual barrier height $`\varphi _n`$ and a series resistance $`R`$ . We were able to reproduce the experimental $`I`$$`V`$ curves only with $`\varphi _n<0.03`$ eV. As will be apparent from what follows, this value represents only an upper limit for the barrier height.
Doping effects do not play any significant role in the barrier suppression. In order to verify this, we annealed the engineered diode at 420 C for 5 seconds. Following this we observed a marked rectifying behavior analogous to that of the reference sample. This result is in line with the findings reported in Ref. on the thermal stability of Si-engineered SBs in Al/GaAs junctions and reflects Si redistribution at the interface. Wear-out tests on engineered diodes were also performed in order to verify the persistence of the ohmic behavior against prolonged high-current stress. To this end we monitored the $`I`$$`V`$ characteristics during 24 hours of continuous operation at current densities of 200 A/cm<sup>2</sup>. No changes were detected.
In order to demonstrate the applicability of this technique to the realization of high transparency Sm-S hybrid devices, rectangular 100$`\times `$160 $`\mu `$m<sup>2</sup> Al/n-In<sub>0.38</sub>Ga<sub>0.62</sub>As junctions were patterned on the sample surface using standard photolithographic techniques and wet chemical etching. Two additional 100$`\times `$50 $`\mu `$m<sup>2</sup>-wide and 200-nm-thick Au pads were electron-beam evaporated on top of every Al pattern in order to allow four-wire electrical measurements. Samples were mounted on non-magnetic dual-in-line sample holders, and 25-$`\mu `$m-thick gold wires were bonded to the gold pads. $`I`$$`V`$ characterizations as a function of temperature ($`T`$) and static magnetic field ($`H`$) were performed in a <sup>3</sup>He closed-cycle cryostat.
The critical temperature ($`T_c`$) of the Al film was 1.1 K (which corresponds to a gap $`\mathrm{\Delta }0.16`$ meV). The normal-state resistance $`R_N`$ of our devices was 0.2 $`\mathrm{\Omega }`$, including the series-resistance contribution ($`0.1\mathrm{\Omega }`$) of the semiconductor. At $`H=0`$ and below $`T_c`$, dc $`I`$$`V`$ characteristics exhibited pronounced non-linearities around zero bias that can be visualized by plotting the differential conductance ($`G`$) as a function of the applied bias ($`V`$). In Fig. 3(a) we show a typical set of $`G`$$`V`$ curves obtained at different temperatures in the 0.33–1.03 K range. Notably, even at $`T=0.33`$ K, i.e. well below $`T_c`$, a high value of $`G`$ is observed at zero bias. At low temperature and bias (i.e., when the voltage drop across the junction is lower than $`\mathrm{\Delta }/e`$ ), transport is dominated by Andreev reflection. The observation of such pronounced Andreev reflection demonstrates high junction transparency. The latter can be quantified in terms of a dimensionless parameter $`Z`$ according to the Blonder-Tinkham-Klapwijk (BTK) model . To analyze the data of Fig. 3(a) we followed the model by Chaudhuri and Bagwell , which is the three-dimensional generalization of the BTK model. For our S-Sm junction we found $`Z1`$, corresponding to a $``$50 % normal-state transmission coefficient. We note that without the aid of the Si-interface-layer technique, doping concentrations over two orders of magnitude greater than that employed here would be necessary to achieve comparable transmissivity (see e.g. Refs. ). This drastic reduction in the impurity concentration is a very attractive feature for the fabrication of ballistic structures. It should also be noted that our reported $`Z`$ value is close to the intrinsic transmissivity limit related to the Fermi-velocity mismatch between Al and InGaAs .
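To illustrate how $`Z`$ controls the sub-gap response, the sketch below implements the one-dimensional, zero-temperature BTK differential conductance. This is only an illustrative simplification: the paper itself uses the three-dimensional generalization by Chaudhuri and Bagwell, and thermal smearing is ignored here.

```python
import numpy as np

def btk_conductance(E, delta, Z):
    """Normalized T = 0 BTK differential conductance G_S/G_N at energy E.

    A is the Andreev-reflection probability, B the normal-reflection
    probability (1D BTK model; a sketch, not the 3D version used above).
    """
    E = abs(E)
    if E < delta:
        A = delta**2 / (E**2 + (delta**2 - E**2) * (1.0 + 2.0 * Z**2) ** 2)
        B = 1.0 - A
    else:
        u2 = 0.5 * (1.0 + np.sqrt(E**2 - delta**2) / E)
        v2 = 1.0 - u2
        gamma = u2 + (u2 - v2) * Z**2
        A = u2 * v2 / gamma**2
        B = (u2 - v2) ** 2 * Z**2 * (1.0 + Z**2) / gamma**2
    return (1.0 + A - B) * (1.0 + Z**2)

delta = 0.16e-3  # Al gap [eV], as quoted in the text
print(btk_conductance(0.0, delta, Z=0.0))  # fully transparent: conductance doubling
print(btk_conductance(0.0, delta, Z=1.0))  # Z ~ 1: zero-bias dip instead
```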
We should also like to comment on the homogeneity of our junctions. By applying the BTK formalism, $`Z1`$ leads to a calculated value of the normal-state resistance ($`R_N^{th}`$) much smaller than the experimental value $`R_N^{exp}`$: $`R_N^{th}/R_N^{exp}=0.003`$ . This would indicate that only a small fraction ($`R_N^{th}/R_N^{exp}`$) of the contact area has the high transparency and determines the transport properties of the junction, as already reported for different fabrication techniques . Values of $`R_N^{th}/R_N^{exp}`$ ranging from $`10^{-4}`$ to $`10^{-2}`$ can be found in the literature (see, e.g., Refs. ). Such estimates, however, should be taken with much caution. Experimentally, no inhomogeneities were observed on the lateral length scale of our contacts and we did observe a high uniformity in the transport properties of all junctions studied.
The superconducting nature of the conductance dip for $`|V|<\mathrm{\Delta }/e`$ is proved by its pronounced dependence on temperature and magnetic field. Figure 3(a) shows how the zero-bias differential-conductance dip observed at $`T=0.33`$ K progressively weakens for $`T`$ approaching $`T_c`$. This fact is consistent with the well-known temperature-induced suppression of the superconducting energy gap $`\mathrm{\Delta }`$. Far from $`V=0`$ the conductance is only marginally affected by temperature as expected for a S-Sm junction when $`|V|`$ is significantly larger than $`\mathrm{\Delta }/e`$ . A small depression in the zero-bias conductance is still observed at $`TT_c`$. This, together with the slight asymmetry in the $`G`$$`V`$ curves, can be linked to a residual barrier at the buried InGaAs/GaAs heterojunction.
In Fig. 3(b) we show how the conductance can be strongly modified by very weak magnetic fields ($`H`$). The $`G`$$`V`$ curves shown in Fig. 3(b) were taken at $`T=0.33`$ K for different values of $`H`$ applied perpendicularly to the plane of the junction in the 0–5 mT range. The superconducting gap vanishes for $`H`$ approaching the critical field ($`H_c`$) of the Al film ($`H_c10`$ mT at $`T=0.33`$ K). Consequently, the zero-bias conductance dip is less and less pronounced and at the same time shrinks with increasing magnetic field. The latter effect was not as noticeable in Fig. 3(a) owing to the temperature-induced broadening of the single-particle Fermi distribution function .
In conclusion, we have reported on Ohmic behavior and Andreev-reflection dominated transport in MBE-grown Si-engineered Al/n-In<sub>0.38</sub>Ga<sub>0.62</sub>As junctions. Transport properties were studied as a function of temperature and magnetic field and showed junction transmissivity close to the theoretical limit for the S-Sm combination. The present study demonstrates that the Si-interface-layer technique is a promising tool to obtain high-transparency S-Sm junctions involving InGaAs alloys with low In content and low doping concentration. This technique yields Schottky-barrier-free junctions without using InAs-based heterostructures and can be exploited in the most widespread MBE systems. It is particularly suitable for the realization of low-dimensional S-InGaAs hybrid systems grown on GaAs or InP substrates. We should finally like to stress that its application in principle is not limited to Al metallizations and other superconductors could be equivalently used. In fact, to date the most convincing interpretation of the silicon-assisted Schottky-barrier engineering is based upon the heterovalency-induced IV/III-V local interface dipole . Within this description Schottky-barrier tuning is a metal-independent effect.
The present work was supported by INFM under the PAIS project Eterostrutture Ibride Semiconduttore-Superconduttore and the TUSBAR program. One of us (F. G.) would like to acknowledge Europa Metalli S.p.A. for financial support.
# The gaseous extent of galaxies and the origin of Ly𝛼 absorption systems. IV: Ly𝛼 absorbers arising in a galaxy group. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5–26555.
## 1 Introduction
Ly$`\alpha `$ absorption due to groups or clusters of galaxies has only been detected relatively recently (Lanzetta et al. 1996, Ortiz-Gil et al. 1997, Tripp et al. 1998, Shull et al. 1998). This is because it had been difficult to identify suitable combinations of background QSOs and foreground clusters close enough to the QSO line of sight to produce Ly$`\alpha `$ absorption in the QSO spectrum. The search for suitable groups was poorly motivated since Ly$`\alpha `$ absorption would not be expected due to the high temperature of the intracluster medium. Furthermore, the majority of Ly$`\alpha `$ absorption-line data in the literature are at redshifts z$`>1.5`$ (when the line is shifted into the optical band), where most clusters are still in the process of formation.
Recent observations appear to produce contradictory results: Morris et al. (1993) and Bowen, Blades, & Pettini (1996) failed to identify absorption due to clusters of galaxies, in contrast to the results of Lanzetta, Webb & Barcons (1996), Ortiz-Gil et al. (1997), Tripp et al. (1998), and Shull et al. (1998).
However, other recent work suggests that galaxies may partially retain their gaseous envelopes in a cluster environment (see Cayatte et al. 1990 for an example in the Virgo cluster). Zabludoff & Mulchaey (1998) find that galaxies in poor groups ($`2050`$ members) lie in a common halo which contains most of the mass of the group. Blitz et al. (1998) suggest that the High Velocity Clouds (HVCs) detected in the Local Group might be the counterparts of Lyman Limit systems, as they find similar column densities, internal velocity dispersions, and subsolar metallicities. They also suggest that lower column density HVCs may correspond to Ly$`\alpha `$ clouds. In their models the HVCs trace the distribution of dark matter in and around the group, following its filamentary/sheet-like structure.
Lanzetta et al. (1996) report the identification of a group of galaxies toward QSO 1545$`+`$2101 responsible for a broad absorption feature present in an HST Faint Object Spectrograph (FOS) spectrum of this object. The spectral resolution of these data was insufficient to show whether it was the group as a whole or individual galaxies within it which were responsible for the observed Ly$`\alpha `$ absorption. Individual galaxies would give rise to discrete absorption components associated with particular galaxies. Also, galaxies at a smaller impact parameter should produce higher column densities.
In this paper we present new higher resolution observations of the QSO 1545$`+`$2101 using the Goddard High Resolution Spectrograph on HST.
Throughout this paper we have adopted a deceleration parameter $`q_0=0.5`$ and a dimensionless Hubble constant $`h=H_0/100(\mathrm{km}\mathrm{s}^1\text{Mpc}^1)`$.
## 2 HST/GHRS observations of the QSO 1545$`+`$2101
A high resolution spectrum of the QSO 1545$`+`$2101 was obtained using the GHRS spectrograph with the G160M grating on 1996 August 23 (Fig. 1). The observations consisted of a series of 14 exposures, each of 300 s duration, for a total exposure time of 4200 s. The individual exposures were reduced using standard pipeline techniques and were registered to a common origin and combined using our own reduction programs. The final spectrum was fitted with a smooth continuum using an iterative spline fitting technique. The spectral resolution of the final spectrum was measured to be FWHM=$`0.07`$ Å (or FWHM=$`14\mathrm{km}\mathrm{s}^1`$), and the continuum signal-to-noise ratio was measured to be $`S/N12`$ per resolution element. Previous imaging and spectroscopic observations of the field surrounding QSO 1545$`+`$2101 are described in Lanzetta et al. (1996).
## 3 Detection and analysis of the absorption systems in the field of QSO 1545$`+`$2101
We detected all absorption lines above a significance level of $`3\sigma `$ in the spectrum of QSO 1545$`+`$2101. The parameters characterizing each absorption line were estimated using multi-component Voigt profile fitting (three independently developed software packages were used, all of which gave the same results). The values obtained are shown in Table 1. The $`3\sigma `$ confidence level detection limit was found to be $`0.03`$ Å.
Two absorption lines are found at $`z=0.2504707\pm 0.0000030`$ and $`z=0.2522505\pm 0.0000016`$, with a velocity separation of $`427\mathrm{km}\mathrm{s}^1`$. A group of at least eight lines is also detected, the redshift centroid of this group being $`<z>=0.2648\pm 0.0002`$ with a velocity dispersion of $`163\pm 57\mathrm{km}\mathrm{s}^1`$.
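The quoted separation of $`427\mathrm{km}\mathrm{s}^1`$ follows from the standard small-offset relation $`\mathrm{\Delta }v=c\mathrm{\Delta }z/(1+z)`$; a quick check:

```python
C_KMS = 299792.458  # speed of light [km/s]

def velocity_separation(z1, z2):
    # rest-frame velocity separation of two lines at nearby redshifts
    return C_KMS * (z2 - z1) / (1.0 + 0.5 * (z1 + z2))

print(velocity_separation(0.2504707, 0.2522505))  # ~427 km/s
```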
Galactic heavy element lines in the spectrum (C iv $`\lambda \lambda `$1548,1550 and Si ii$`\lambda `$ 1526) were identified and removed from subsequent analyses (Fig. 1).
## 4 Group of galaxies in the field of QSO 1545+2101
The spectroscopic galaxy sample was selected on the basis of galaxy brightness and proximity to the QSO line of sight. Although this sample cannot be considered complete, the galaxies were selected randomly from an essentially flux-limited sample. The sample of galaxies used in this study is given in Table 2. Figure 2 is an image of the field. The galaxies in the group which were observed spectroscopically are indicated in that figure. The error in the galaxy redshifts is estimated to be $`120\mathrm{km}\mathrm{s}^1`$ (Lanzetta et al. 1995).
We detected a group of seven galaxies with redshift centroid at $`<z>=0.2645\pm 0.0004`$ and velocity dispersion of $`239\pm 90\mathrm{km}\mathrm{s}^1`$. Their individual impact parameters range from $`7.2`$ up to $`456.4h^1`$ kpc.
We searched in the archives for X-ray data on this field as detecting X-ray emission from the group might help to characterize it better. Unfortunately the group is so close to the QSO (which is an X-ray source itself) that only the ROSAT High Resolution Imager data would be of any use, and there are no such data in the archive. QSO 1545$`+`$2101 was observed both with the Einstein Imaging Proportional Counter and with the ROSAT Position Sensitive Proportional Counter, but the extended Point-Spread-Function of both instruments resulted in QSO emission severely contaminating the region where the galaxy group might emit X-rays.
In any case, from our image of this field and the fact that the velocity dispersion that we have measured for this group is quite small, it is clear that it is a loose association of galaxies rather than a galaxy cluster. The maximum value of impact parameter in our sample ($`456.4h^1`$ kpc) is also the typical physical size for poor groups of galaxies (Zabludoff & Mulchaey 1998). In fact, this group is most probably the one hosting the QSO itself, as $`z_{em}=0.264\pm 0.0003`$ for this object (Marziani et al. 1996). One might therefore expect this group to have only between 10-20 members (Bahcall et al. 1997).
## 5 The relationship between the absorption systems and the galaxies
The cluster of absorption lines present in the spectrum of QSO 1545+2101 might arise in different environments. They might be intrinsic absorbers, arising either in the QSO region itself or in its near environment. The similar redshifts of the QSO and the group of absorbers in these data may point to one of these hypotheses as the right one. In addition, QSO 1545+2101 is a radio-loud object and there have been suggestions that intrinsic absorption or absorption arising in the QSO host galaxy would be stronger in radio-loud QSOs than in radio-quiet ones (Foltz et al. 1988; Mathur, Wilkes & Aldcroft 1997).
But there are also some arguments against an intrinsic origin for these lines, as discussed by Lanzetta et al. (1996). Associated absorbers of the kind described above produce rather strong absorption lines, while the lines detected in QSO 1545+2101 are relatively weak. Moreover, no corresponding metal absorption lines have been detected. With the new data from GHRS we have more evidence against the associated nature of the absorption systems: the agreement between the galaxy and absorber redshift centroids, and between their corresponding velocity dispersions, suggests that the galaxy and absorber groups at least share the same physical location. These characteristics lead us to reject the associated hypothesis and to consider another possible scenario: absorption arising in cosmologically intervening objects, within the same group of galaxies that hosts the QSO.
To further explore this third hypothesis, a demonstration of the non-random coincidence between the galaxy and absorber positions in velocity space is necessary. Identifying each galaxy with a single absorption line would also be very interesting. In what follows we use statistical methods to address both questions.
### 5.1 A cluster of absorbers arising in a group of galaxies
The group of galaxies detected toward the QSO 1545$`+`$2101 has a mean redshift of $`<z_g>=0.2645\pm 0.0004`$, compatible with the mean redshift centroid value of the group of absorption lines, $`<z_a>=0.2648\pm 0.0002`$ (to the red of the Galactic Si ii line in Fig. 1). The absorber and galaxy velocity dispersions are similar: $`239\pm 90\mathrm{km}\mathrm{s}^{-1}`$ for the group of galaxies and $`163\pm 57\mathrm{km}\mathrm{s}^{-1}`$ for the group of absorption lines (the error in the galaxy redshifts is $`120\mathrm{km}\mathrm{s}^{-1}`$ ). This strongly suggests a connection between the absorbers and galaxies.
There is also a galaxy in this field whose redshift is $`z=0.2510`$. Two Ly$`\alpha `$ absorption lines are detected near this redshift at $`z=0.2504707\pm 0.0000030`$ and $`z=0.2522505\pm 0.0000020`$ (to the blue of the Galactic Si ii line in Fig. 1). The galaxy-absorber velocity differences are $`\mathrm{\Delta }v=127\pm 120\mathrm{km}\mathrm{s}^{-1}`$ and $`\mathrm{\Delta }v=300\pm 120\mathrm{km}\mathrm{s}^{-1}`$ respectively. This implies that this galaxy, whose impact parameter is $`\rho =306.4h^{-1}`$ kpc, could be responsible for one of the absorption lines, as both $`\mathrm{\Delta }v`$ values are compatible with the velocity dispersions typically found in a galactic halo ($`200\mathrm{km}\mathrm{s}^{-1}`$). The velocity difference between the two absorption systems ($`427\mathrm{km}\mathrm{s}^{-1}`$) is perhaps too large for the same galaxy to be responsible for both of them. It may also be that we have not observed the actual galaxy giving rise to either of these absorption lines, since our galaxy sample is not complete.
#### 5.1.1 Statistical analysis
Two statistical tests were carried out to investigate the relationship between the group of galaxies and absorbers. In the first, we computed the two-point cross-correlation function ($`\xi _{ag}`$) between the absorbers and the galaxies. This function was normalized by computing the $`\xi _{ag}`$ expected if there were no relation between absorbers and galaxies (derived using galaxy redshifts which are randomly distributed over a redshift range around the real group of absorption lines). Errors were computed using a bootstrap method, simulating 1000 samples of 7 galaxy redshifts, each set randomly selected from the real set, deriving error estimates from the distribution of 1000 values of $`\xi _{ag}`$. The final result is shown in Fig. 3 (panel $`a`$).
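The pair-count estimator and bootstrap scheme described above are straightforward to implement. The following is a minimal illustrative sketch (not the code used in the paper; the array names, bin edges in km s<sup>-1</sup>, and normalization convention $`\xi =D/R-1`$ are our assumptions):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def xi_ag(z_gal, z_abs, edges, z_range, n_random=1000, rng=None):
    """Galaxy-absorber cross-correlation in velocity space.

    Pair counts D(v) are normalized by the mean counts R(v) obtained from
    mock galaxy sets drawn uniformly over z_range: xi = D/R - 1.
    """
    rng = np.random.default_rng(rng)
    z_abs = np.asarray(z_abs)

    def counts(zg):
        dv = np.abs(np.asarray(zg)[:, None] - z_abs[None, :])
        dv = dv * C_KMS / (1.0 + z_abs.mean())   # Delta z -> km/s
        return np.histogram(dv.ravel(), bins=edges)[0].astype(float)

    rand = sum(counts(rng.uniform(*z_range, size=len(z_gal)))
               for _ in range(n_random)) / n_random
    return counts(z_gal) / np.where(rand > 0, rand, np.nan) - 1.0

def xi_ag_bootstrap(z_gal, z_abs, edges, z_range, n_boot=1000, rng=None):
    """1-sigma errors from n_boot resamplings (with replacement) of the galaxy set."""
    rng = np.random.default_rng(rng)
    z_gal = np.asarray(z_gal)
    draws = [xi_ag(z_gal[rng.integers(0, len(z_gal), size=len(z_gal))],
                   z_abs, edges, z_range, n_random=200, rng=rng)
             for _ in range(n_boot)]
    return np.nanstd(draws, axis=0)
```

A clustered set of galaxy and absorber redshifts then produces a strong excess ($`\xi 1`$) in the lowest velocity bin, while uniformly distributed mocks give $`\xi 0`$ everywhere.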
In the second statistical test, we applied a similar statistical method, not to the actual set of absorbers but to the individual pixel intensities in the spectrum. In this way we avoid any potential problems introduced by an incorrect determination of the true velocity structure in the profile-fitting process, or by weak lines falling below the detection threshold. Recent analyses by Liske, Webb & Carswell (1999) show that the study of pixel intensities is more sensitive to clustering than the usual line–fitting techniques. The test is as follows: we evaluated for each pixel $`i`$ the value of the function $`g_i=1-f_i`$, where $`f_i`$ is the intensity of pixel $`i`$ normalized to the continuum. Clearly, $`g_i`$ has larger values in pixels belonging to absorption lines. For pixels corresponding to metal lines (including galactic lines) we assigned a value of $`g_i=0`$. Then, for each galaxy we compute the function $`\zeta _v=\sum _{n=1}^N\sum _jg_{nj}`$, where $`g_{nj}`$ refers to all the pixels $`j`$ located at a distance $`v`$ in velocity space from galaxy $`n`$, $`N`$ being the total number of galaxies in the sample. The velocity distances $`v`$ considered range from $`0\mathrm{km}\mathrm{s}^{-1}`$ up to the full span of the spectrum. This function $`\zeta _v`$ is in some sense analogous to the two-point cross-correlation function computed before: high values of $`\zeta _v`$ at low velocity distances reflect the tendency of the pixels corresponding to absorption lines to lie close to the galaxies. The same function $`\zeta _v`$ was computed for mock galaxy samples drawn from a uniform distribution, in a way analogous to the previous one, and the result, after normalizing to this random case, is shown in Fig. 3 (panel $`b`$). The error bars in $`\zeta _v`$ were computed using a bootstrap method as before.
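A minimal sketch of this pixel statistic (again our own illustration, not the original code; array names and the normalization by uniform mocks are assumptions consistent with the description above):

```python
import numpy as np

def zeta(v_pix, flux, v_gal, edges):
    """zeta_v: in each velocity-distance bin, sum g = 1 - f over all
    galaxy-pixel pairs separated by that distance.  Pixels belonging to
    metal lines should already be set to g = 0 (flux = 1) in `flux`."""
    g = 1.0 - np.asarray(flux, dtype=float)
    dv = np.abs(np.asarray(v_gal, dtype=float)[:, None]
                - np.asarray(v_pix, dtype=float)[None, :])
    w = np.broadcast_to(g, dv.shape)
    return np.histogram(dv.ravel(), bins=edges, weights=w.ravel())[0]

def zeta_normalized(v_pix, flux, v_gal, edges, v_range, n_random=300, rng=0):
    """Normalize by the mean zeta of uniformly distributed mock galaxy sets."""
    r = np.random.default_rng(rng)
    rand = sum(zeta(v_pix, flux, r.uniform(*v_range, size=len(v_gal)), edges)
               for _ in range(n_random)) / n_random
    return zeta(v_pix, flux, v_gal, edges) / np.where(rand > 0, rand, np.nan)
```

On a synthetic spectrum with absorption dips placed at the galaxy velocities, the normalized statistic is strongly enhanced in the lowest bins and consistent with (or below) unity at large velocity distances.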
The result obtained is similar to the one obtained by computing the two-point cross-correlation function: most of the pixels belonging to absorption lines (i.e., with larger values of $`g`$) lie less than $`200\mathrm{km}\mathrm{s}^1`$ away from the galaxies.
### 5.2 Are absorption lines related to galaxies on a case-by-case basis?
As there is a clear complex of discrete Ly$`\alpha `$ lines in the spectrum of QSO 1545+2101, we explored the possibility of a one-to-one match between galaxies and absorbers.
A Gaussian model was assumed for the distribution of galaxies in the group. The null hypothesis is that the galaxies are randomly drawn from a Gaussian whose parameters are derived from the real data. The two-point correlation functions $`\xi _{ag}`$ and the function $`\zeta _v`$ were computed in the same way as in §5.1 (see Fig. 3, panels $`c`$ and $`d`$). No evidence of a one-to-one correspondence between galaxies and absorbers is found.
We can estimate the maximum statistical velocity dispersion between individual absorbers and their galaxies of origin that would permit the detection of a one–to–one correspondence, assuming an intrinsic one–to–one correspondence exists. A Kolmogorov-Smirnov Monte Carlo test showed that if the average velocity dispersion between the galaxies and the corresponding absorbers were $`100\mathrm{km}\mathrm{s}^{-1}`$ then the null hypothesis would be rejected ($`>2\sigma `$) in $`90`$% of the cases. As a typical galactic velocity dispersion is $`200\mathrm{km}\mathrm{s}^{-1}`$, this condition is not likely to be satisfied in practice. Another possibility is to improve the statistics. From Monte Carlo simulations we estimate that about 100 groups similar to the ones studied here are needed for a $`2\sigma `$ detection of this one–to–one association, for a velocity dispersion between the absorber and the galaxy of about $`200\mathrm{km}\mathrm{s}^{-1}`$.
## 6 Discussion and conclusions
We have detected a clump of absorption lines along the line-of-sight towards the QSO 1545$`+`$2101. A group of galaxies has also been detected, with impact parameters $`\rho `$ of the individual galaxies to the QSO line–of–sight of less than $`460h^{-1}`$ kpc. The group is probably the one hosting the QSO, so we can expect it to have about 10–20 members (Bahcall et al. 1997).
Several scenarios might give rise to the absorption. Due to the close redshift values of the QSO and the group of absorbers one could easily think that they arise either in the QSO itself or in the corresponding host galaxy. We consider that there are compelling arguments supporting the intervening system hypothesis (see Lanzetta et al. 1996) and contradicting the associated hypothesis. We now summarise those arguments.
The velocity spanned by the group of Ly$`\alpha `$ absorption lines is consistent with the velocity dispersion of the group of galaxies. This implies that the Ly$`\alpha `$ absorbers arising in that group occupy the same region of space as the galaxies themselves. Moreover, the spectrum of QSO 1545$`+`$2101 reveals a group of discrete Ly$`\alpha `$ absorption lines at $`z0.26`$. Multi-component Voigt profile fitting provides a statistically good fit to the data, indicating that the absorption lines arise in overdense gas regions rather than in some smoothly distributed intragroup medium.
The average Doppler dispersion parameter of the absorption lines, b, is measured to be $`19\pm 4\mathrm{km}\mathrm{s}^{-1}`$ with a dispersion of $`10\pm 4\mathrm{km}\mathrm{s}^{-1}`$. This value is in agreement with the values measured in the low redshift Ly$`\alpha `$ forest. Therefore there is no evidence from this case for any physical difference, in terms of the b parameter, between Ly$`\alpha `$ clouds lying within or outside of groups.
Two statistical analyses show that the distribution of galaxies with respect to the absorbers is not random, but it is not possible to confirm a one–to–one match due to the proximity in velocity space of the galaxies in the groups and the uncertainty in their redshifts. A Kolmogorov-Smirnov test showed that a small galaxy-absorber velocity dispersion (less than $`100\mathrm{km}\mathrm{s}^{-1}`$) would be required to establish a one–to–one match. As this is well below the typical values corresponding to a galaxy potential well, another approach is required, such as having a large enough sample of clusters or groups of galaxies related to clusters of Ly$`\alpha `$ absorption lines. All the facts above support the idea of a physical connection between the group of galaxies and the group of Ly$`\alpha `$ absorbers.
There is another piece of circumstantial evidence pointing to a one–to–one relationship between absorbers and galaxies: the number of each type of object detected. As mentioned before, the galaxy group towards QSO 1545$`+`$2101 contains approximately 10–20 members. According to Lanzetta et al. (1995), only a subset of them will be close enough to the QSO line of sight to produce observable Ly$`\alpha `$ absorption ($`\rho 160`$ kpc for a covering factor of $`1`$). We observe eight individual components in the absorption profile, consistent with the expectations from such a naïve model. There is no obvious reason why such an agreement would be found for some other quite different model (be it HVCs, filaments or any other structure). We note that the absorption lines may break up into further components at higher spectral resolution, although these may then be substructure within individual galaxies.
###### Acknowledgements.
A.O.-G., K.M.L. and A.F.-S. were supported by grant NAG-53261; grants AR-0580-30194A, GO-0594-80194, GO-0594-90194A and GO-0661-20195A from STScI; and grant AST-9624216 from NSF. A.O.-G. acknowledges support from a UNSW Honorary Fellowship. A.F.-S. was also supported by an ARC grant. X.B. was partially supported by the DGES under project PB95-0122.
# Modulated Phases in Spin-Peierls Systems
## 0.1 Sinusoidal Modulation
Elastic X-ray scattering confirmed that the distortion in the I phase is incommensurate. The deviation from dimerization $`|q-\pi |`$ increases with increasing magnetic field. This can be illustrated in the unfrustrated XY model, which corresponds via the Jordan-Wigner transformation to free fermions. The susceptibility towards distortion becomes maximum at $`q=2k_\mathrm{F}`$ since $`q=2k_\mathrm{F}`$ allows the creation of particle-hole pairs of vanishing energy. A distortion with $`q=2k_\mathrm{F}=2\pi m+\pi `$ is formed, where $`m`$ is the magnetization. Since this distortion couples the degenerate states at $`k_\mathrm{F}`$ and at $`-k_\mathrm{F}`$, a gap appears there. Let us now gradually increase the interaction corresponding to the $`S_i^zS_{i+1}^z`$ terms at fixed particle number. By continuity no state can cross the gap, so the picture of a distortion at $`2k_\mathrm{F}`$ remains valid . So we have
$$|q-\pi |=2\pi m.$$
(0.2)
This relation is confirmed numerically .
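The free-fermion argument can also be checked directly in a few lines: for the XY chain the susceptibility toward a bond distortion at wavevector $`q`$ is a sum over particle-hole pairs, and its maximum lands at $`|q-\pi |=2\pi m`$. The following is our own illustrative sketch (not code from the literature):

```python
import numpy as np

def distortion_susceptibility(N, m, J=1.0):
    """chi(q) for the XY chain, which maps to free spinless fermions with
    eps_k = -J cos k; magnetization m corresponds to the filling n = 1/2 + m.

    chi(q) sums 1/(eps_{k+q} - eps_k) over pairs with k occupied and k+q
    empty; it peaks at q = 2 k_F, where such pairs cost vanishing energy.
    """
    # anti-periodic grid avoids accidental occupied/empty degeneracies
    k = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    eps = -J * np.cos(k)
    occ = np.zeros(N, dtype=bool)
    occ[np.argsort(eps)[: int(round((0.5 + m) * N))]] = True

    qs = 2.0 * np.pi * np.arange(1, N) / N
    chi = np.empty(N - 1)
    for s in range(1, N):
        # on this grid, k_j + q_s is k_{(j+s) mod N}
        roll = (np.arange(N) + s) % N
        pairs = occ & ~occ[roll]
        de = eps[roll][pairs] - eps[pairs]
        chi[s - 1] = np.sum(1.0 / de[de > 1e-12])
    return qs, chi

qs, chi = distortion_susceptibility(N=64, m=0.125)
q_star = qs[np.argmax(chi)]   # peak satisfies |q - pi| = 2*pi*m
```

By the $`kk`$ symmetry of the dispersion, the two wavevectors $`\pi \pm 2\pi m`$ are equivalent maxima; both obey Eq. (0.2).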
Since experimentally it turned out that higher harmonics of the distortion are considerably suppressed, it is plausible to start with ($`q`$ given by (0.2))
$$H=J\underset{i}{\sum }[1-\delta \mathrm{cos}(qr_i)]𝐒_i𝐒_{i+1}.$$
(0.3)
The distribution of the local magnetizations $`m_i=\langle S_i^z\rangle `$ is found from NMR experiments . With some success, the experimental data were compared to a continuum theory . This theory, however, is based on a Hartree-Fock treatment where all Hartree and Fock terms are spatially constant. In Ref. it is shown that this is too crude an approximation, reducing the physics to that of an XY chain. The antiferromagnetic correlations found in this way are much smaller than those of an isotropic XYZ chain. This conclusion is corroborated by several works . But the spin isotropy of cuprates can hardly be questioned. To account for the smaller amplitudes it is proposed that experimentally only an effective magnetization $`m_i^{\mathrm{eff}}`$ is seen, which is an average
$$m_i^{\mathrm{eff}}=(1-2\gamma )m_i+\gamma (m_{i-1}+m_{i+1}).$$
(0.4)
The results for $`\gamma =0.2`$ agree well with experiment . The microscopic origin of the average is discussed in Sect. 0.4.
The reason for strong local magnetizations around the zeros of the modulation (cf. Fig. 0.2) is found in the localization of a spinon. Each zero binds exactly one spinon . Summing the $`m_i`$ around a magnetization maximum yields 1/2.
The order of the transition D $`\to `$ I can be determined by investigating the ground state energy $`E(m)`$ as a function of the average magnetization $`m`$ . By means of a Legendre transformation $`\stackrel{~}{E}(h)=E(m)-hm`$, one obtains the dependence of the ground state energy $`\stackrel{~}{E}(h)`$ on the magnetic field $`h=g\mu _\mathrm{B}H`$. It is found that a discontinuous jump occurs as $`m0`$. This implies that the transition D $`\to `$ I for fixed sinusoidal modulation is of first order. The mean square of $`\mathrm{cos}(qr_i)`$ jumps discontinuously from 1 to 1/2 if $`q`$ deviates infinitesimally from $`\pi `$ since the $`r_i`$ are summed over integer values only.
Experimentally, however, the observed first order jumps are much lower than those found for fixed sinusoidal modulation .
## 0.2 Adaptive Modulation
Since it was stated above that sinusoidal modulation alone does not account for the weak first order D $`\to `$ I transition, we turn to the full minimization of the ground state energy of (0.1). Differentiation with respect to $`\delta _i`$ yields
$$0=\langle 𝐒_{i+1}𝐒_i\rangle -\langle \langle 𝐒_{j+1}𝐒_j\rangle \rangle +K\delta _i,$$
(0.5)
where $`\langle \mathrm{}\rangle `$ stands for the quantum expectation value and $`\langle \langle \mathrm{}\rangle \rangle `$ for the additional average along the chain. The double-bracketed term accounts for the constraint of the vanishing average of the $`\delta _i`$. The minimization is done iteratively .
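As an illustration of such an iteration, consider a toy version only (our own sketch, not the production code: exact diagonalization of a short open chain without the frustration term, whereas the actual studies treat larger frustrated systems). One alternates between solving for the ground state at fixed $`\delta _i`$ and updating the $`\delta _i`$ from Eq. (0.5):

```python
import numpy as np

def bond_energies(deltas):
    """Ground-state <S_i . S_{i+1}> on every bond of an open S=1/2 Heisenberg
    chain with couplings J_i = 1 + delta_i (exact diagonalization, alpha = 0)."""
    n = len(deltas) + 1            # number of sites; bond b joins sites b, b+1
    dim = 1 << n
    H = np.zeros((dim, dim))
    for b, d in enumerate(deltas):
        J = 1.0 + d
        for s in range(dim):
            up_i, up_j = (s >> b) & 1, (s >> (b + 1)) & 1
            H[s, s] += J * (0.25 if up_i == up_j else -0.25)   # Sz Sz term
            if up_i != up_j:                                    # spin-flip term
                t = s ^ (1 << b) ^ (1 << (b + 1))
                H[t, s] += 0.5 * J
    psi = np.linalg.eigh(H)[1][:, 0]
    e = np.zeros(len(deltas))
    for b in range(len(deltas)):
        for s in range(dim):
            up_i, up_j = (s >> b) & 1, (s >> (b + 1)) & 1
            e[b] += (0.25 if up_i == up_j else -0.25) * psi[s] ** 2
            if up_i != up_j:
                t = s ^ (1 << b) ^ (1 << (b + 1))
                e[b] += 0.5 * psi[t] * psi[s]
    return e

def optimize_distortion(n_sites=8, K=4.0, n_iter=30, mix=0.5):
    """Iterate Eq. (0.5): delta_i = -(<S_i S_{i+1}> - mean)/K, with mixing."""
    deltas = np.zeros(n_sites - 1)
    for _ in range(n_iter):
        e = bond_energies(deltas)
        deltas = (1 - mix) * deltas + mix * (-(e - e.mean()) / K)
    return deltas
```

Starting from $`\delta _i=0`$, the boundary-induced alternation of the bond energies is amplified until the self-consistent distortion alternates in sign from bond to bond with zero mean, a discrete analogue of the dimerized pattern.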
The generic result is depicted in Fig. 0.2.
The local magnetizations do not display major differences from the results for sinusoidal modulation, because the spinon localization is in essence determined only by the slope with which the modulation vanishes. Note that the envelope of $`m_i`$ is proportional to the probability of finding a spinon at that site . Hence, the magnetic part of the soliton displays localization as for sinusoidal modulation. For the distortions, relation (0.5) implies that the deviations from constantly alternating dimerization are also localized.
The distortion belonging to an isolated soliton is a kink, i.e. the distortion between two solitons resembles the one in the D phase. This implies a crucial advantage over the sinusoidal modulation. For kink-like solitons the reduction of the mean square distortion is proportional to the soliton number. Hence, a low soliton concentration leads only to a small change of the energy, such that $`E(m)`$ is continuous (in the sense of Lipschitz) for $`m\ge 0`$ .
The investigations in Ref. of the model (0.1) yielded a continuous phase transition, even though the magnetization grows very quickly above the critical field, $`m\propto 1/\mathrm{ln}(H-H_c)`$. Most of the results of the continuum theories are also consistent with a phase transition of second order . Only Horovitz mentions the possibility of soliton attraction in an early work , implying a first order transition. Buzdin et al. expect a first order transition at $`T>0`$ .
In fact, the details of the models matter. Cross already argued that an elastic energy with a dispersion $`K(q)`$ that is minimal at $`q=\pi `$ leads to a first order transition. The positive curvature of $`K(q)`$ around $`q=\pi `$ suppresses higher harmonics in the distortion. Hence, sinusoidal modulation is favoured. The concomitant concavity of $`E(m)`$ at low magnetization $`m`$ implies phase separation via the Maxwell construction. So the phase transition is first order . This finding is in accordance with the conclusion from a phenomenological Ginzburg-Landau description that the D $`\to `$ I transition is generically of first order.
At the transition the distance between two solitons is rather large, so that the difference between sinusoidal and adaptive modulation matters most. At higher soliton concentrations the adaptive modulation becomes more and more sinusoidal, but with a concentration dependent amplitude. The mean square distortion could be determined from the elastic lattice constants; these experimental results agree well with the predictions based on the model (0.1) .
The continuum theories applying to the isotropic Heisenberg chain provide the following results (details in Ref. ; $`\mathrm{sn},\mathrm{cn},\mathrm{dn}`$: elliptic Jacobi functions)
$`m_i`$ $`=`$ $`{\displaystyle \frac{W}{2}}\left\{{\displaystyle \frac{1}{R}}\mathrm{dn}({\displaystyle \frac{r_i}{k_\mathrm{m}\xi _\mathrm{m}}},k_\mathrm{m})+(-1)^i\mathrm{cn}({\displaystyle \frac{r_i}{k_\mathrm{m}\xi _\mathrm{m}}},k_\mathrm{m})\right\}`$ (0.6)
$`\delta _i`$ $`=`$ $`(-1)^i\delta \mathrm{sn}({\displaystyle \frac{r_i}{k_\mathrm{d}\xi _\mathrm{d}}},k_\mathrm{d})`$ (0.7)
$`\text{with}\xi `$ $`:=`$ $`\xi _\mathrm{m}=\xi _\mathrm{d},\phantom{\rule{1em}{0ex}}k:=k_\mathrm{m}=k_\mathrm{d}`$ (0.8)
$`1`$ $`=`$ $`4mk_{\mathrm{m}/\mathrm{d}}K(k_{\mathrm{m}/\mathrm{d}})\xi _{\mathrm{m}/\mathrm{d}}`$ (0.9)
$`1`$ $`=`$ $`\pi k_\mathrm{m}\xi _\mathrm{m}{\displaystyle \frac{W}{R}}.`$ (0.10)
The fits in Fig. 0.2 are based on Eqs. (0.6,0.7). Identity (0.8) is not satisfied, see Sect. 0.3; otherwise no agreement would be obtained. Relation (0.9) is imposed on the fits whereas Eq. (0.10) serves as a check; it is fulfilled within 4%. The fact that $`\xi _\mathrm{d}/\xi _\mathrm{m}1.33`$ is considerably above unity agrees nicely with the experimental findings. Elastic X-ray scattering found $`\xi _\mathrm{d}=13.6\pm 0.3`$ while the NMR investigations provide $`\xi _\mathrm{m}10`$ close to the transition.
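Equations (0.6)–(0.10) can be checked for internal consistency: averaging the uniform dn part of Eq. (0.6) over one period, with $`\xi `$ fixed by Eq. (0.9) and $`W/R`$ by Eq. (0.10), must reproduce the average magnetization $`m`$. A short check using SciPy's Jacobi elliptic functions (our own sketch; the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import ellipj, ellipk

def soliton_lattice_uniform_average(m, k):
    """Average of the dn (uniform) part of Eq. (0.6) over one period,
    with xi from Eq. (0.9) and W/R from Eq. (0.10); should return m."""
    K = ellipk(k**2)                    # SciPy uses the parameter m = k^2
    xi = 1.0 / (4.0 * m * k * K)        # Eq. (0.9)
    W_over_R = 1.0 / (np.pi * k * xi)   # Eq. (0.10)
    # sample u = r/(k*xi) over one period [0, 2K) of dn(u, k)
    u = np.linspace(0.0, 2.0 * K, 4096, endpoint=False)
    dn = ellipj(u, k**2)[2]             # ellipj returns (sn, cn, dn, ph)
    return 0.5 * W_over_R * np.mean(dn)
```

The check works because the period average of dn is $`\pi /(2K(k))`$, so $`(W/2R)\pi /(2K)=1/(4k\xi K)`$, which equals $`m`$ by Eq. (0.9).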
There is also another way to introduce solitons into a spin-Peierls system besides the application of a magnetic field. Doping non-magnetic impurities into a spin-Peierls system cuts the infinite chains into finite chain segments . In a number of works it has been shown that each impurity frees one spinon, which is situated either before or after the impurity on the chain. Assuming a fixed dimerization, it is easy to see that the spinon is bound to its generating impurity, in accordance with experimental results .
But in a spin-Peierls system the change of the modulation has to be taken into account, too. This is done by introducing
$$H=J\underset{i\ge 0}{\sum }\left[(1+\delta _i)𝐒_i𝐒_{i+1}+\alpha 𝐒_i𝐒_{i+2}+\frac{K}{2}\delta _i^2+f\delta _i(-1)^i\delta _{\mathrm{bulk}}\right],$$
(0.11)
such that the impurity is at site -1. The important amendment compared to $`H`$ in Eq. (0.1) is the last term. If the spinon moves away from the impurity the distortion pattern is changed between impurity and spinon. Due to an elastic interchain interaction (parametrized by $`f`$) a coherent distortion pattern throughout the whole three-dimensional system is preferred. A deviation from this pattern is energetically unfavourable. So one is led to include the last term in Eq. (0.11) assuming that the adjacent chains are dimerized as in the unperturbed D phase. The relation $`K=K_0+f`$ ensures the consistency of the distortion amplitude with its bulk value.
Fig. 0.3 displays the results for various elastic interchain interactions.
The soliton is not localized at the impurity but at a certain distance. For lower $`f`$ values the soliton resembles very much the ones in the I phase (cf. Fig. 0.2). On increasing $`f`$ the soliton is squeezed more and more towards the impurity. Analogous results can be found by QMC , too.
Indeed, the soliton is bound to the impurity, confirming previous ideas . Moreover, the first excitations for the same distortions and in the same spin sector are found below the singlet-triplet gap, at 52% of $`\mathrm{\Delta }_{\mathrm{trip}}`$ for $`f=0.01`$ and at 64% for $`f=1`$. This matters for the spectroscopic analysis.
## 0.3 Local Renormalization
Most remarkable is the difference between the distortive soliton width $`\xi _\mathrm{d}`$ and the magnetic soliton width $`\xi _\mathrm{m}`$ in the numerical results (cf. Fig. 0.2). It amounts to 30% at $`\alpha =0.35`$ and depends mainly on the frustration at low soliton concentrations . The challenge is to extend the existing continuum description to account for this fact. Let us revisit the semiclassical treatment of the bosonized description of the spin-Peierls problem . Minimizing the total energy with respect to the distortion $`\delta (x)`$ leads to
$$\delta (x)\propto e^{-2\sigma }\mathrm{cos}(2\varphi _{\mathrm{class}})$$
(0.12)
where $`\sigma :=\langle \widehat{\varphi }^2\rangle `$ denotes the renormalizing fluctuations of the local bosonic field $`\widehat{\varphi }`$ about the classical field $`\varphi _{\mathrm{class}}`$. A soliton corresponds to a solution where $`\varphi _{\mathrm{class}}`$ increases by $`\pi `$ in a kink-like fashion. If $`\sigma `$ is assumed to take the spatially constant value that it has in the ground state, one obtains
$$\delta (x)/\delta =\mathrm{cos}(2\varphi _{\mathrm{class}})=\mathrm{tanh}(x/\xi ),$$
(0.13)
wherein $`\xi `$ is given by the ratio $`v_\mathrm{S}/\mathrm{\Delta }_{\mathrm{trip}}`$ of the spin wave velocity and the gap. The alternating component of the magnetization $`a(x)`$ is proportional to $`\mathrm{sin}(2\varphi _{\mathrm{class}})`$. Hence, one has $`a(x)\propto \sqrt{1-\mathrm{tanh}^2(x/\xi )}=1/\mathrm{cosh}(x/\xi )`$.
But the presence of the soliton induces a deviation $`\mathrm{\Delta }\sigma `$ from its ground state value. Fig. 0.4 displays a generic result for this deviation. It is calculated on top of the solution of Nakano and Fukuyama .
The alternating component
$$a(x)\propto \sqrt{1-\mathrm{exp}(4\mathrm{\Delta }\sigma )\mathrm{tanh}^2(x/\xi )}$$
(0.14)
is thus indeed narrower than before due to the spatial dependence of the renormalization factor. The result in Fig. 0.4 is only a first step since the influence of the altered magnetic behaviour is not included. Yet it is clear that the renormalization is a local quantity and that this is the origin of the difference between $`\xi _\mathrm{d}`$ and $`\xi _\mathrm{m}`$.
## 0.4 Phasons
In Sect. 0.1 the discrepancy between theoretical and experimental amplitudes of the alternating local magnetizations was resolved by the averaging procedure (0.4). Passing to adaptive modulations does not change the amplitudes much . Hence, one still has to find the microscopic origin of the averaging.
Non-adiabatic effects are in fact responsible. The soliton lattice oscillates about its equilibrium positions. These oscillations are best understood in a continuum description, which is well justified if the typical length $`\xi `$ is noticeably larger than the lattice constant. In such a continuum description the modulation $`\delta (x)`$ can be shifted along the chains without energy cost. This continuous translational invariance is spontaneously broken by the soliton lattice. Hence, there are massless Goldstone modes, the so-called phasons. They are analogous to the phonons of a crystal lattice, except that they do not have three branches but only one. While the atoms of a crystal lattice can be shifted in all three spatial directions, the solitons can be shifted only along the chains.
Although the ideal spin-Peierls system is magnetically one-dimensional, the distortions on different chains are elastically coupled. Thus, the phasons are governed by a 3D, though anisotropic, dispersion. The dispersion parameters are determined from the anisotropy of the correlation lengths assuming a Ginzburg-Landau description . The corresponding $`T^3`$ term in the specific heat has been measured, and theory and experiment agree astonishingly well .
The zero point motion and the excited motion of phasons lead to the averaging (0.4). Let us denote the adiabatic result for the local magnetizations by $`m_i=a(r_i)\mathrm{cos}(\pi r_i)+u(r_i)`$ where $`r_i`$ is the component along the chains, $`a(r_i)`$ the alternating component and $`u(r_i)`$ the uniform component of the magnetizations. A local shift can be implemented by replacing $`\pi r_i\pi r_i+\widehat{\mathrm{\Theta }}(𝐫_i)`$ where $`\widehat{\mathrm{\Theta }}(𝐫_i)`$ denotes a phase shift operator. On the long time scales of a NMR measurement one measures
$$m_i^{\mathrm{exp}}=\langle m_i\rangle =a(r_i)\gamma ^{}\mathrm{cos}(\pi r_i)+u(r_i)$$
(0.15)
with $`\gamma ^{}:=\mathrm{exp}\left(-\frac{1}{2N}\sum _i\langle \widehat{\mathrm{\Theta }}^2(𝐫_i)\rangle \right)<1`$. So there is an amplitude reduction engendered by the local fluctuations. The factor $`\gamma ^{}`$ is comparable to a Debye-Waller factor.
Using the values fixed previously yields $`\gamma ^{}=0.16\mathrm{exp}(-(T/T^{})^2/2)`$ with $`T^{}16.9`$K . Eq. (0.4) is retrieved by estimating $`a(r)`$ and $`u(r)`$ from the discrete values $`m_i`$ by $`a(r_i)=m_i/2-(m_{i-1}+m_{i+1})/4`$ and by $`u(r_i)=m_i/2+(m_{i-1}+m_{i+1})/4`$. Inserting these formulae into Eq. (0.15) yields Eq. (0.4) with $`\gamma =(1-\gamma ^{})/4`$. At $`T=0`$, $`\gamma `$ takes the value $`0.21`$, in accordance with experiment .
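The algebra leading from Eq. (0.15) to Eq. (0.4) with $`\gamma =(1-\gamma ^{})/4`$ is easily verified numerically; the identity holds exactly for any magnetization profile. A sketch with a synthetic soliton-like profile (our own illustration):

```python
import numpy as np

def m_exp_from_eq015(m, gamma_p):
    """Eq. (0.15) with a, u estimated from the discrete m_i as in the text:
    a_i cos(pi r_i) = m_i/2 - (m_{i-1}+m_{i+1})/4,
    u_i              = m_i/2 + (m_{i-1}+m_{i+1})/4."""
    nb = 0.5 * (np.roll(m, 1) + np.roll(m, -1))   # (m_{i-1} + m_{i+1})/2
    a_cos = 0.5 * m - 0.5 * nb
    u = 0.5 * m + 0.5 * nb
    return gamma_p * a_cos + u

def m_eff_from_eq04(m, gamma):
    """Eq. (0.4): m_eff = (1 - 2*gamma) m_i + gamma (m_{i-1} + m_{i+1})."""
    return (1.0 - 2.0 * gamma) * m + gamma * (np.roll(m, 1) + np.roll(m, -1))
```

Collecting terms shows why: Eq. (0.15) gives $`m_i(1+\gamma ^{})/2+(m_{i-1}+m_{i+1})(1-\gamma ^{})/4`$, which matches Eq. (0.4) precisely when $`\gamma =(1-\gamma ^{})/4`$.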
## 0.5 Conclusions
In this report the modulated phases of spin-Peierls systems were discussed. Such modulations are induced either by a magnetic field or by impurities. In both cases the singlet pairing in the D phase is broken and spinons are freed. The lattice distortion adapts to the spinon by forming a zero to which the spinon is bound. This new entity constitutes the spin-Peierls soliton.
The order of the transition from the D phase to the I phase on increasing field depends on model details. Imposed sinusoidal modulation leads to a pronounced first order transition. Allowing the system to choose an optimum modulation makes the transition continuous if the elastic energy is wave vector independent. If the elastic energy itself pins the modulation to $`\pi `$, a weak first order transition is found.
Doping induced solitons are bound to their generating impurity. The distortion pattern between impurity and soliton is not coherent with the bulk pattern. This costs energy, which acts as a confining potential. Binding occurs, for which experimental evidence exists .
The difference between magnetic and distortive soliton width could be traced back to the so far neglected spatial dependence of the renormalizing local fluctuations. A fully self consistent analysis is in progress.
The reduction of the alternating magnetic amplitude due to phasons provides striking evidence for the importance of the lattice dynamics. The inclusion of non-adiabatic effects on top of an otherwise adiabatic calculation might still be unsatisfactory. So other approaches to non-adiabatic behaviour should be extended to the I phase .
I thank C. Berthier, J.P. Boucher, B. Büchner, T. Lorenz, M. Horvatić, Th. Nattermann for fruitful discussions, F. Schönfeld for reliable numerical work, E. Müller-Hartmann for generous support and G. Güntherodt’s group for intense collaboration. Financial support of the DFG by the SFB 341 is acknowledged.