# Spectral Analysis of the Stromlo-APM Survey II. Galaxy luminosity function and clustering by spectral type ## 1 Introduction Important clues to the physics of galaxy formation and evolution may be obtained by studying the global properties, such as the luminosity function and correlation function, of quiescent versus star-forming galaxies. The most reliable tracer of the formation rate of massive, hot stars is the flux of the H$`\alpha `$ emission line, directly related to the stellar UV ($`<912`$ Å) photoionizing flux (Kennicutt 1983). This line is frequently redshifted out of the observed spectral window, and so most deep galaxy surveys have instead used the \[O ii\] 3727Å line as a measure of star-formation rate (Kennicutt 1992). The luminosity function of galaxies subdivided by the presence or absence of the \[O ii\] emission line has been calculated in the local Universe for the Las Campanas Redshift Survey (LCRS, Lin et al. 1996a) and for the ESO Slice Project (ESP, Zucca et al. 1997). In both surveys it was found that the faint-end of the galaxy luminosity function is dominated by \[O ii\] emitters, in other words that presently star-forming galaxies tend to be less luminous than quiescent galaxies in both the $`b_J`$ (ESP) and Gunn-$`r`$ (LCRS) bands. These results from \[O ii\]-selected samples are consistent with the recent luminosity function estimates from local samples of galaxies selected by morphological (eg. Marzke et al. 1998) and spectral (eg. Bromley et al. 1998, Folkes et al. 1999) type: early-type (elliptical and lenticular) galaxies tend to be luminous, and late-type (spiral and irregular) galaxies faint. It is by now also well known (eg. Davis & Geller 1976, Giovanelli et al. 1986, Iovino et al. 1993, Loveday et al. 1995) that galaxies of early morphological type cluster together on small scales more strongly than late-type galaxies. 
Since emission-line galaxies (ELGs) tend to be of late Hubble type, we would expect ELGs to be more weakly clustered than non-ELGs, and indeed this has been observed by numerous authors (eg. Iovino et al. 1988, Salzer 1989, Rosenberg, Salzer & Moody 1994 and Lin et al. 1996b). In this paper we study the luminosity function and clustering for subsamples of the Stromlo-APM survey (Loveday et al. 1996) selected by H$`\alpha `$ and \[O ii\] emission-line equivalent widths. The Stromlo-APM survey is ideal for quantifying the statistical properties of emission-line versus quiescent galaxies in the local universe since it contains a representative sample of different galaxy types and covers a large volume $`V\simeq 1.38\times 10^6h^{-3}\mathrm{Mpc}^3`$. Since the red wavelength coverage of Stromlo-APM spectra extends from 6300–7600 Å we are able to detect the H$`\alpha `$ (6562.82Å) line, when present, to a redshift $`z\simeq 0.16`$, i.e. beyond the maximum distance reached by the survey. Thus for the first time we are able to classify a large, representative sample of galaxies by the primary tracer of massive star formation, viz. the equivalent width of the H$`\alpha `$ emission line. Measurement of the spectral properties of Stromlo-APM galaxies is discussed by Tresse et al. (1999), hereafter referred to as Paper 1. The subsamples selected by their emission-line properties are described in §2. The luminosity functions of the different samples are compared in §3 and in §4 we present clustering measurements. We summarize our results in §5. Throughout, we assume a Hubble constant of $`H_0=100h`$ km s<sup>-1</sup>Mpc<sup>-1</sup> with $`h=1`$ and a deceleration parameter $`q_0=0.5`$. The exact cosmology assumed has little effect at redshifts $`z\lesssim 0.15`$. ## 2 Galaxy Samples Our sample of galaxies is taken from the Stromlo-APM redshift survey which covers 4300 sq-deg of the south galactic cap and consists of 1797 galaxies brighter than $`b_J=17.15`$ mag. 
The galaxies all have redshifts $`z<0.145`$, and the mean is $`z=0.051`$. A detailed description of the spectroscopic observations and the redshift catalog is published by Loveday et al. (1996). Of the 1797 galaxies originally published in the redshift survey, 82 have $`b_J<15`$. These bright galaxies are excluded from our analysis since they tend to be saturated on the Schmidt plates and hence have unreliable magnitudes. Of the remaining 1715 galaxies, 26 have a redshift taken from the literature, and for 7 we could not retrieve the spectra because they were not observed with the Dual-Beam Spectrograph (DBS) of the ANU 2.3-m telescope at Siding Spring. Also excluded were 6 blueshifted spectra, 3 with $`cz<1000`$ km s<sup>-1</sup>, and 2 with too low a signal-to-noise ratio. The remaining 1671 spectra were flux-calibrated and had their spectral properties measured as described in Paper 1. Flux calibration of our spectra is accurate to $`10`$–$`20\%`$, and so in the present paper we have restricted our analysis to galaxy samples selected by the equivalent widths (EWs) of their H$`\alpha `$ and \[O ii\] emission lines, which are insensitive to flux calibration errors. Note that since our spectra have a resolution of FWHM = 5Å, the H$`\alpha `$ line can always be deblended from the \[N ii\] doublet. Of the 1671 measured galaxies, 11 were not part of our core statistical sample, either because they had an uncertain redshift or happened to lie in a part of the sky masked by “holes” around bright stars, etc. Of the remaining 1660 galaxies, 82 could not have EW (H$`\alpha `$) measured as their redshift places the H$`\alpha `$ line in a small gap in the red part of the spectrum from 7000–7020Å (Loveday et al. 1996). For an additional 57 spectra, H$`\alpha `$ was seen in emission but could not be measured due to contamination by a sky line, or some other problem with the spectrum; \[O ii\] lines could not be measured for similar reasons for 5 spectra. 
Note that lack of EW measurement, while correlated with redshift, is uncorrelated with galaxy morphology, and so we can reliably correct for missing EW measurements. We are thus left with a sample of 1521 galaxies which could be analysed by EW (H$`\alpha `$), and 1655 which could be analysed by EW (\[O ii\]). Histograms of log EW (H$`\alpha `$) and log EW (\[O ii\]) are plotted in Figures 1 and 2 respectively. We select galaxy subsamples using measured equivalent widths of the H$`\alpha `$ and \[O ii\] emission lines. The H$`\alpha `$ line is the best tracer of massive star formation (Kennicutt 1983) but we also select samples using the equivalent width of the \[O ii\] line, as this line allows us to compare with other surveys in which H$`\alpha `$ is not always within the wavelength range measured. The H$`\alpha `$ line is detected with EW $`\ge 2`$Å in 61% of galaxies. Of these emission-line galaxies, half have EW (H$`\alpha `$) $`>15`$Å. Thus we form three subsamples of comparable size by dividing the sample at EW (H$`\alpha `$) of 2Å and 15Å. In the case of the \[O ii\] line, 60% of galaxies have EW $`\ge 2`$Å, and of these half have EW (\[O ii\]) $`\ge 9.6`$Å. The galaxy samples selected by H$`\alpha `$ and \[O ii\] equivalent widths are defined in Table 1. Most galaxies in the Stromlo-APM survey have had a morphological type (elliptical, lenticular, spiral or irregular) assigned by visual inspection of the galaxy image (Loveday 1996, Loveday et al. 1996). In Table 1 we give the numbers of galaxies of each morphological type in each spectroscopically selected subsample. In Figures 1 and 2 we also plot the distribution of equivalent widths for these morphologically-selected subsamples. The sample labeled “Unk” consists of galaxies to which no morphological classification was assigned. We see that early-type galaxies dominate when H$`\alpha `$ or \[O ii\] emission is not detected and are underrepresented when emission lines are detected. 
Conversely, the number of irregular galaxies increases significantly in the spectroscopic samples which show the strongest star formation. Strong star formation is known to disrupt the regularity in the shape of a galaxy. In the deeper universe, the apparent increase in number of irregulars is also related to strong star formation (Brinchmann et al. 1998). Thus as expected we find a good correlation between morphological types and emission line equivalent widths. Since they can be measured objectively, spectroscopic properties of galaxies are a more reliable discriminator than visually assigned morphological types. Moreover, a significant fraction of Stromlo-APM galaxies have no morphological type assigned (the column marked “Unk” in Table 1). The low median EW (H$`\alpha `$) and EW (\[O ii\]) for these unclassified galaxies compared with the total sample suggests that many are in fact of early morphological type. The spectral classification described in this section allows these galaxies to be assigned to their appropriate class in a quantitative way. ## 3 The Galaxy Luminosity Function We estimate the $`b_J`$ luminosity function (LF) for each galaxy subsample using maximum-likelihood, density-independent methods, so that our results are unbiased by galaxy clustering. We use the Sandage, Tammann & Yahil (1979) parametric maximum-likelihood estimator to fit a Schechter (1976) function, $$\varphi (L)dL=\varphi ^{*}\left(\frac{L}{L^{*}}\right)^\alpha \mathrm{exp}\left(-\frac{L}{L^{*}}\right)dL.$$ (1) We correct for random errors in our magnitudes by convolving this luminosity function with a Gaussian with zero mean and rms $`\sigma _m=0.30`$ (see Loveday et al. 1992, hereafter L92, for details). We also perform a non-parametric fit to each luminosity function using the stepwise maximum-likelihood estimator of Efstathiou, Ellis & Peterson (1988). 
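For readers who want to experiment with equation (1), the Schechter form is easy to evaluate numerically. This is only an illustrative sketch with placeholder parameter values, not the paper's fitting code (which uses the STY maximum-likelihood estimator):

```python
import numpy as np

def schechter(L, phi_star, L_star, alpha):
    """Schechter (1976) form: phi(L) dL = phi* (L/L*)^alpha exp(-L/L*) dL."""
    x = np.asarray(L, dtype=float) / L_star
    return phi_star * x**alpha * np.exp(-x)

# With alpha < -1 the faint end (L << L*) rises as a power law while the
# bright end (L >> L*) is suppressed exponentially.
faint, bright = schechter([0.01, 10.0], phi_star=0.01, L_star=1.0, alpha=-1.5)
```

A steeper (more negative) `alpha` thus directly translates into a larger space density of sub-$`L^{*}`$ galaxies, which is the quantity compared between the subsamples below.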
This estimator calculates $`\varphi (L)`$ in a series of evenly-spaced magnitude bins and provides a reliable error estimate for each bin by inverting the information matrix. $`K`$-corrections are applied to each galaxy according to its morphological classification as E/S0: $`4.14z`$, Sp: $`2.25z`$, Irr: $`1.59z`$, Unk: $`2.90z`$. Before calculating the LF for each spectroscopic subsample defined in Table 1, we first checked that the galaxies omitted from this analysis, i.e. those galaxies whose H$`\alpha `$ or \[O ii\] emission lines could not be measured, did not bias the LF measurement relative to the full Stromlo-APM survey. The LF estimates using all galaxies except the 194 with no H$`\alpha `$ measurement available and all galaxies except the 60 with no \[O ii\] measurement were indeed both consistent with the full sample. Our estimates of the luminosity function for the EW (H$`\alpha `$) selected samples are shown in Figure 3. The inset to this Figure shows the likelihood contours for the best-fit Schechter parameters $`\alpha `$ and $`M^{*}`$. The Schechter parameters and their $`1\sigma `$ errors (from the bounding box of the $`1\sigma `$ error contours) are also listed in Table 2. Note that the estimates of $`\alpha `$ and $`M^{*}`$ are strongly correlated and so the errors quoted for $`\alpha `$ and $`M^{*}`$ in the Table are conservatively large. We see a trend of faintening $`M^{*}`$ and steepening $`\alpha `$ as EW (H$`\alpha `$) increases. There is a significantly greater contrast between the H-high and H-mid samples than between the H-mid and H-low samples, despite the rather similar distribution of morphological types in the H-high and H-mid samples as compared with the H-low sample. This suggests that either there is not a simple one-to-one correlation between optical morphology and EW (H$`\alpha `$), or that the larger fraction of Irr galaxies in the H-high sample are contributing to the steep faint-end slope for this sample. 
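The linear, type-dependent $`K`$-corrections quoted above can be bundled into a small lookup. The function name and structure here are ours, not the authors'; only the coefficients come from the text:

```python
# K(z) = c * z, with c taken from the text for each morphological class.
K_COEFF = {"E/S0": 4.14, "Sp": 2.25, "Irr": 1.59, "Unk": 2.90}

def k_correction(morph_type, z):
    """Linear K-correction, in magnitudes, for a galaxy of the given type."""
    return K_COEFF[morph_type] * z

# e.g. an elliptical at z = 0.05 is K-corrected by 4.14 * 0.05 = 0.207 mag
```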
Luminosity function estimates of the EW (\[O ii\]) selected samples and errors in the best-fit Schechter parameters are shown in Figure 4. The $`1\sigma `$ error contours for the O-low and O-mid samples overlap and the O-high sample does not show a fainter $`M^{*}`$ than non-emission line galaxies. However, the LF for the O-high sample does have a significantly steeper faint-end slope than that for galaxies with only weak or moderate \[O ii\] emission. The fact that we see a systematic dimming of $`M^{*}`$ with emission-line EW for the H$`\alpha `$-selected sample but not for the \[O ii\]-selected sample is probably due to the fact that EW (H$`\alpha `$) is a measure of the ratio of ionizing photons from OB stars to the flux from the old stellar population emitted in the rest-frame $`R`$ band which forms the continuum at H$`\alpha `$, while EW (\[O ii\]) is normalised by the flux from relatively young stars (mainly type A). Thus EW (H$`\alpha `$) is more sensitive to the current star formation rate and hence blue luminosity enhancement than EW (\[O ii\]). Note that the LF estimate for late-type galaxies presented by L92 does not have such a steep faint-end slope as we find here for strong emission-line galaxies. In L92 we combined galaxies classified as spiral or irregular as “late type”, and so not all of them have strong emission lines. The faint-end slope for early-type galaxies (L92) was much shallower than that measured here for galaxies with no emission lines. At least part of this difference is due to a bias in the morphological type dependent LFs of L92 due to the tendency of unclassified galaxies in the Stromlo-APM survey to be of low luminosity (Marzke et al. 1994, Zucca et al. 1994). We avoid this bias with the spectroscopically selected samples analysed here. 
The normalisation $`\varphi ^{*}`$ of the fitted Schechter functions was estimated using a minimum variance estimate of the space density $`\overline{n}`$ of galaxies in each sample (Davis & Huchra 1982, L92). We corrected our estimates of $`\overline{n}`$, $`\varphi ^{*}`$ and luminosity density $`\rho _L`$ to allow for those galaxies excluded from each subsample. First, all subsamples were scaled by the factor 1715/1660 to account for the 55 galaxies with no EW information available. Second, all H$`\alpha `$ selected subsamples were scaled by 1660/1578 to account for the 82 galaxies whose H$`\alpha `$ line, if present, would have fallen in the “red gap” (§2). Samples H-mid & H-high were scaled by an additional factor 1578/1521 to allow for the 57 galaxies in which H$`\alpha `$ was seen, but was not able to be measured. Finally, samples O-mid & O-high were scaled by 1660/1655 to allow for the five galaxies in which \[O ii\] was seen but not measured. Our final estimates of $`\overline{n}`$, $`\varphi ^{*}`$ and $`\rho _L`$ are given in Table 2. The uncertainty in mean density due to “cosmic variance” (L92 equation 7) is $`\simeq 6\%`$ for each sample. However, the errors in these quantities are dominated by the uncertainty in the shape of the LF, particularly by the value of the estimated characteristic magnitude $`M^{*}`$. Using both H$`\alpha `$ and \[O ii\] equivalent widths as indicators of star formation activity, we find that galaxies currently undergoing significant bursts of star formation dominate the faint-end of the luminosity function, whereas more quiescent galaxies dominate at the bright end. This is in agreement with the results of Lin et al. (1996a) and Zucca et al. (1997), but in disagreement with Salzer (1989), who finds no significant difference in the LF shapes of star-forming and quiescent galaxies. 
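The completeness corrections above chain multiplicatively, and the sample counts come straight from the text, so the combined factor can be checked by simple arithmetic (this sketch is only that check, not the minimum-variance density estimate itself):

```python
from fractions import Fraction

# Completeness corrections described in the text:
no_ew_info    = Fraction(1715, 1660)  # 55 galaxies with no EW information
red_gap       = Fraction(1660, 1578)  # 82 galaxies lost to the 7000-7020 A gap
ha_unmeasured = Fraction(1578, 1521)  # 57 galaxies with H-alpha seen but unmeasured

# For the H-mid and H-high samples all three factors apply and the
# product telescopes to 1715/1521, roughly a 13% upward correction.
h_scale = no_ew_info * red_gap * ha_unmeasured
```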
As pointed out by Schade & Ferguson (1994), Salzer’s sample is biased against weak-lined ELGs at low-luminosity, and their reanalysis of his data correcting for this selection effect does find a steep faint-end slope for the LF of star-forming galaxies. The characteristic magnitude $`M^{*}`$ for the O-high sample is about 0.5 mag brighter than that for the H-high sample. This is probably due to a combination of several factors: 1) A large \[O ii\] EW can come from a small \[O ii\] flux and a very red continuum (i.e. a small star formation rate and an old stellar population). 2) The correlation between estimated values of faint-end slope $`\alpha `$ and characteristic magnitude $`M^{*}`$ means that the steeper $`\alpha `$ of the O-high sample will push the estimated $`M^{*}`$ to brighter magnitudes. 3) The errors on $`M^{*}`$ are large ($`\pm 0.3`$ mag), and so the H-high and O-high $`M^{*}`$ estimates disagree only at the 1–2 $`\sigma `$ level. ## 4 Galaxy Clustering In this section we measure the clustering properties of the galaxy subsamples. We measure the auto-correlation function of each sample in redshift space, and the cross-correlation function of each galaxy sample with all galaxy types in real space. For both estimates, we first verified that the 194 galaxies missing EW (H$`\alpha `$) measurement and the 60 galaxies missing EW (\[O ii\]) did not bias the measured clustering relative to the complete sample. Those galaxies excluded because H$`\alpha `$ fell in the “red gap” lie at redshifts $`z\simeq 0.06`$–0.07. Nevertheless, omitting these galaxies did not significantly affect the measured clustering in real or redshift space. ### 4.1 Redshift-Space Correlation Function We correct for boundary conditions and the survey selection function by populating the survey volume with a catalogue of $`18,000`$ random points whose radial density matches that expected for each subsample. 
The number-distance distributions for the six galaxy subsamples analysed here are shown in Figure 5. These plots also show the expected distributions inferred from the luminosity functions calculated in the previous section. We see that given the tendency for non-ELGs to be luminous and for ELGs to be faint, the ELGs are slightly overdense at large distances ($`x\gtrsim 200h^{-1}\mathrm{Mpc}`$) whereas there is an underdensity of non-ELGs at similar distances. This observation is reflected by the increasing $`V/V_{\mathrm{max}}`$ with EW (H$`\alpha `$) seen in Table 2, and is probably due to evolution in emission line strength with redshift (eg. Broadhurst et al. 1992), occurring at redshifts as low as $`z\simeq 0.15`$. It is unlikely to be due to the changing projected size of the spectrograph slit at different redshifts as we demonstrated in Paper 1. We checked that these discrepancies between observed and expected $`N(x)`$ distributions did not bias our estimates of $`\xi (s)`$ by also generating a random distribution according to a fourth-order polynomial fit to the observed radial density of each subsample. Clustering estimates using this random distribution gave results consistent with a random distribution generated according to the predicted radial density. The auto-correlation function of each sample in redshift space is measured using the estimator of Hamilton (1993), $$1+\xi (s)=\frac{w_{gg}(s)w_{rr}(s)}{[w_{gr}(s)]^2},$$ (2) where $`w_{gg}(s)`$, $`w_{gr}(s)`$ and $`w_{rr}(s)`$ are the summed products of the weights of galaxy-galaxy, galaxy-random and random-random pairs respectively at separation $`s`$. We use the minimum-variance pair weighting given by equation 1 of Loveday et al. (1995), and the reader is referred to that paper for further details. Errors are estimated by dividing the survey into four zones of roughly equal area and calculating the variance in $`\xi (s)`$ from zone-to-zone. Estimates of $`\xi (s)`$ are shown in Figure 6. 
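Equation (2) is straightforward to apply once the weighted pair counts are accumulated in separation bins; a minimal sketch (the binning and the minimum-variance weighting of Loveday et al. 1995 are omitted here):

```python
def hamilton_xi(w_gg, w_gr, w_rr):
    """Hamilton (1993) estimator: 1 + xi(s) = w_gg(s) * w_rr(s) / w_gr(s)^2.

    w_gg, w_gr and w_rr are the summed weights of galaxy-galaxy,
    galaxy-random and random-random pairs in one separation bin.
    """
    return (w_gg * w_rr) / w_gr**2 - 1.0

# For an unclustered sample the pair counts satisfy w_gg * w_rr = w_gr^2,
# so the estimator returns xi = 0.
```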
A power-law $`\xi (s)=(s/s_0)^{-\gamma _s}`$ was fitted over the range 1.5–30 $`h^{-1}\mathrm{Mpc}`$. For each subsample the estimated power-law slope $`\gamma _s`$ was formally consistent with $`\gamma _s=1.47`$, measured for the whole Stromlo-APM sample (Loveday et al. 1995). Since estimates of the index $`\gamma _s`$ and correlation length $`s_0`$ are strongly correlated, we determined the best fit $`s_0`$ to each subsample, keeping the power-law index fixed at $`\gamma _s=1.47`$. The results of these fits are shown by the dashed lines in Figure 6 and the best-fit values of $`s_0`$ with $`1\sigma `$ uncertainties (determined from fitting to each zone separately) are shown in Table 3. We see that the correlation length $`s_0`$ becomes significantly smaller in more actively star-forming galaxies, as traced by both EW (H$`\alpha `$) and EW (\[O ii\]). This result is in agreement with the power-spectrum analysis of the Las Campanas Redshift Survey by Lin et al. (1996b) who find that the clustering amplitude of ELGs is only about 70% that of the full LCRS sample. These results are also consistent with those of Rosenberg et al. (1994), Iovino et al. (1988) and Salzer (1989), all of whom find that ELGs are less strongly clustered than quiescent galaxies. Galaxies with no detected H$`\alpha `$ (H-low) or \[O ii\] (O-low) emission have a correlation length about twice that of the ELG samples (H-high and O-high). This is larger than the difference in clustering amplitude determined by Lin et al. (1996b) from the LCRS, presumably because we have subdivided galaxies into three EW bins compared to their two EW bins. ### 4.2 Real-Space Correlation Function The estimate of $`\xi (s)`$ described above is affected by redshift space distortions. On small scales, random, thermal motions tend to decrease galaxy clustering, whereas on large scales, galaxy streaming motions tend to enhance $`\xi (s)`$. 
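Holding the slope fixed at $`\gamma _s=1.47`$ reduces the power-law fit to a single parameter. A simple least-squares version in log space is sketched below; this is our own illustration and does not reproduce the paper's zone-based error estimation:

```python
import numpy as np

def fit_s0(s, xi, gamma_s=1.47):
    """Best-fit correlation length s0 for xi(s) = (s/s0)^(-gamma_s),
    by least squares in log space with the slope held fixed."""
    s, xi = np.asarray(s, dtype=float), np.asarray(xi, dtype=float)
    # log xi = gamma_s * (log s0 - log s)  =>  solve for log s0
    log_s0 = np.mean(np.log(xi)) / gamma_s + np.mean(np.log(s))
    return float(np.exp(log_s0))
```

Applied to exact power-law data the estimator recovers the input correlation length; with noisy $`\xi (s)`$ values it averages the per-bin estimates of log $`s_0`$.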
In order to avoid the effects of galaxy peculiar velocities, we have calculated the projected cross-correlation function $`\mathrm{\Xi }(\sigma )`$ of each galaxy subsample with all galaxies in the APM survey to a magnitude limit of $`b_J=17.15`$. We then invert this projected correlation function to obtain the real space cross-correlation function $`\xi (r)`$ of each subsample with the full galaxy sample. This method of estimating $`\xi (r)`$ is described by Saunders et al. (1992) and by Loveday et al. (1995). The large number of galaxy pairs used by this estimator allows us to fit a power-law to the measured cross-correlation function over the range of separations 0.2–20 $`h^{-1}\mathrm{Mpc}`$ and to fit both the power-law index $`\gamma _r`$ and the correlation length $`r_0`$. Our estimates of $`\xi (r)`$ are plotted in Figure 7 and our best-fit power-laws are tabulated in Table 3. As in redshift-space, we see that strong emission-line galaxies are more weakly clustered than their quiescent counterparts by a factor of about two. The real space clustering measured for non-ELGs is very close to that measured for early-type (E + S0) galaxies, and the clustering of late-type (Sp + Irr) galaxies lies between that of the moderate and high EW galaxies (cf. Loveday et al. 1995). Given the strong correlation between morphological type and presence of emission lines (Table 1) this result is not unexpected. The power-law slopes are consistent ($`\gamma _r=1.8\pm 0.1`$) between the H-low, H-high, O-low and O-high samples. For the moderate EW galaxies (H-mid and O-mid samples) we find shallower slopes ($`\gamma _r=1.6\pm 0.1`$). This is only a marginally significant (1–2 $`\sigma `$) effect, but may indicate a deficit of moderately star-forming galaxies principally in the cores of high density regions, whereas strongly star forming galaxies appear to more generally avoid overdense regions. 
## 5 Conclusions We have presented the first analysis of the luminosity function and spatial clustering for representative and well-defined local samples of galaxies selected by EW (H$`\alpha `$), the most direct tracer of star formation. We have also selected galaxies by EW (\[O ii\]), and find broadly consistent results between the two tracers of star formation, which is expected from their close relation (Kennicutt 1992, Paper 1). The observed trend for $`M^{*}`$ to fainten systematically with increasing EW (H$`\alpha `$), contrasted with the roughly constant $`M^{*}`$ with varying EW (\[O ii\]), is probably due to EW (H$`\alpha `$) being a more reliable indicator of star formation rate than EW (\[O ii\]). Star-forming galaxies are likely to be significantly fainter than their quiescent counterparts. The faint-end ($`M\gg M^{*}`$) of the luminosity function is dominated by ELGs and thus the majority of local dwarf galaxies are currently undergoing star formation. Star-forming galaxies are more weakly clustered, both amongst themselves, and with the general galaxy population, than quiescent galaxies. This weaker clustering is observable on scales from 0.1–10 $`h^{-1}\mathrm{Mpc}`$. We thus confirm that star-forming galaxies are preferentially found today in low-density environments. A possible explanation for these observations is that luminous galaxies in high-density regions have already formed all their stars by today, while less luminous galaxies in low-density regions are still undergoing star formation. It is not clear what might be triggering the star formation in these galaxies today. While interactions certainly enhance the rate of star formation in some disk galaxies, interactions with luminous companions can only account for a small fraction of the total star formation in disk galaxies today (Kennicutt et al. 1987). 
Telles & Maddox (1999) have investigated the environments of H ii galaxies by cross-correlating a sample of H ii galaxies with APM galaxies as faint as $`b_J=20.5`$. They find no excess of companions with H i mass $`\gtrsim 10^8M_{\odot }`$ near H ii galaxies, thus arguing that star formation in most H ii galaxies is unlikely to be induced by even a low-mass companion. Our results are entirely consistent with the hierarchical picture of galaxy formation. In this picture, today’s luminous spheroidal galaxies formed from past mergers of galactic sub-units in high density regions, and produced all of their stars in a merger induced burst, or series of bursts, over a relatively short timescale. The majority of present-day dwarf, star-forming galaxies in lower density regions may correspond to unmerged systems formed at lower peaks in the primordial density field (eg. Bardeen et al. 1986) and whose star formation is still taking place. Of course, the full picture of galaxy formation is likely to be significantly more complicated than this simple sketch, and numerous physical effects such as depletion of star-forming material and other feedback mechanisms are likely to play an important role. ## Acknowledgments We thank George Efstathiou and Bruce Peterson for their contributions to the Stromlo-APM survey.
# Metric curvature of infinite branched covers

Daniel Allcock<sup>*</sup> (<sup>*</sup>Supported in part by an NSF Postdoctoral Fellowship)

25 May 1999

MSC: 53C23 (14J28, 57N65)

Keywords: branched cover, ramified cover, Alexandrov space, cubic surface, Enriques surface

Abstract: We study branched covering spaces in several contexts, proving that under suitable circumstances the cover satisfies the same upper curvature bounds as the base space. The first context is of a branched cover of an arbitrary metric space that satisfies Alexandrov’s curvature condition CAT($`\kappa `$), over an arbitrary complete convex subset. The second context is of a certain sort of branched cover of a Riemannian manifold over a family of mutually orthogonal submanifolds. In neither setting do we require that the branching be locally finite. We apply our results to hyperplane complements in several complex manifolds of nonpositive sectional curvature. This implies that two moduli spaces arising in algebraic geometry are aspherical, namely that of the smooth cubic surfaces in $`CP^3`$ and that of the smooth complex Enriques surfaces.

1. Introduction

The purpose of this paper is to establish a basic result in the theory of metric space curvature in the sense of Alexandrov, together with several applications in algebraic geometry. A commonly observed phenomenon is that “taking a branched cover of almost anything can only introduce negative curvature”. One can see this phenomenon in elementary examples using Riemann surfaces, and the idea also plays a role in the construction of exotic manifolds with negative sectional curvature. In this paper we work in the maximal generality in which sectional curvature bounds make sense, namely in the comparison geometry of Alexandrov. In this setting we will establish a very strong theorem concerning the persistence of upper curvature bounds in branched covers. 
We include examples showing that an important completeness hypothesis cannot be dropped; our examples also disprove several claims in the literature. A simple way to build a cover $`\widehat{Y}`$ of a space $`\widehat{X}`$ branched over $`\mathrm{\Delta }\subset \widehat{X}`$ is to take any covering space $`Y`$ of $`\widehat{X}-\mathrm{\Delta }`$ and define $`\widehat{Y}=Y\cup \mathrm{\Delta }`$. We call $`\widehat{Y}`$ a simple branched cover of $`\widehat{X}`$ over $`\mathrm{\Delta }`$. Our main result (theorem 3.1) states that if $`\widehat{X}`$ satisfies Alexandrov’s CAT($`\kappa `$) condition and $`\mathrm{\Delta }`$ is complete and convex then the natural metric on $`\widehat{Y}`$ also satisfies CAT($`\kappa `$). (When $`\kappa >0`$ we impose a minor hypothesis on the diameters of $`\widehat{X}`$ and $`\widehat{Y}`$.) See section 2 for a discussion of Alexandrov’s criterion and other background; we follow the conventions of the book by Bridson and Haefliger. Most of section 3 is devoted to establishing this theorem. Only partial results can be obtained without the completeness hypothesis, and we give these results together with counterexamples when completeness is not assumed. We also give a local version, theorem 3.6, which allows one to work with branched covers more complicated than the simple sort introduced above, and also avoids any diameter constraints on $`\widehat{X}`$ and $`\widehat{Y}`$. One interesting twist is that one must take $`\mathrm{\Delta }`$ to be locally complete in order to obtain even local results. The question which motivated this investigation is whether the moduli space of smooth cubic surfaces in $`CP^3`$ is aspherical (i.e., has contractible universal cover). The answer is yes, and our argument also establishes the analogous result for the moduli space of smooth complex Enriques surfaces. 
To prove these claims, we use the fact that each of these moduli spaces is known to be covered by a Hermitian symmetric space with nonpositive sectional curvature, minus an arrangement of complex hyperplanes. In each case the hyperplanes have the property that any two of them are orthogonal wherever they meet. In section 5 we show that such a hyperplane complement is aspherical. We actually prove a more general result, in the setting of a complete simply connected Riemannian manifold $`\widehat{M}`$ of non-positive sectional curvature, minus the union $`H`$ of suitable submanifolds which are mutually orthogonal, complete, and totally geodesic. The basic idea is to try to apply standard nonpositive curvature techniques like the Cartan-Hadamard theorem to the universal cover $`N`$ of $`M=\widehat{M}-H`$. The fundamental obstruction is that $`N`$ is not metrically complete. This problem can be circumvented by passing to its metric completion $`\widehat{N}`$, but this introduces problems of its own. First there is the issue of how $`N`$ and $`\widehat{N}`$ are related. We resolve this by a simple trick that shows that the inclusion $`N\hookrightarrow \widehat{N}`$ is a homotopy equivalence. The second and more central problem is that $`\widehat{N}`$ is not a manifold and not even locally compact. In particular, one cannot use the techniques of Riemannian geometry. But it is still a metric space and it turns out to have curvature $`\le 0`$, in the sense that it satisfies Alexandrov’s CAT(0) condition locally. It is then an easy matter to show that $`N`$ and $`\widehat{N}`$ are contractible. In summary, to study the topology of the manifold $`N`$ it turns out to be natural and useful to study the non-manifold $`\widehat{N}`$ and use metric-space curvature rather than Riemannian curvature. The connection between the very general treatment of metric curvature and the applications lies in our study of the curvature of $`\widehat{N}`$. 
For this it suffices to work locally; the reader should imagine a closed ball $`B`$ in $`C^n`$, equipped with some Riemannian metric, minus the coordinate hyperplanes. The metric completion of the universal cover of the hyperplane complement can be obtained by first taking a simple branched cover of $`B`$ over one hyperplane, then taking a simple branched cover of this branched cover over (the preimage of) the second hyperplane, and so on. If the hyperplanes are mutually orthogonal and totally geodesic then our main theorem may be used inductively to study the curvature of the iterated branched cover. There are some minor technical issues, which we chase down in section 4. Note that the base space at each step of the sequence of branched covers fails to be locally compact (except at the first step). This means that the inductive argument actually requires a theorem treating branched covers of spaces considerably more general than manifolds. I would like to thank Jim Carlson and Domingo Toledo for their interest in this work, and for the collaboration that suggested these problems. I would also like to thank Richard Borcherds, Misha Kapovich and Bruce Kleiner for useful conversations. Finally, I am grateful to Brian Bowditch for pointing out an error in an early version.
2. Background
Let $`(X,d)`$ be a metric space. A path in $`X`$ is a continuous map from a nonempty compact interval to $`X`$; its initial (resp. final) endpoint is the image of the least (resp. greatest) element of this interval. We sometimes describe a path as being from its initial endpoint to its final endpoint. When we wish to mention its endpoints but not worry about which is which, we describe the path as joining one endpoint and (or with) the other. When we speak of a point of a path we mean a point in its image. 
If $`\gamma `$ is a path in $`X`$ with domain $`[a,b]`$ then we define its length to be $$\ell (\gamma )=\sup \left\{\sum _{i=1}^{N}d(\gamma (t_{i-1}),\gamma (t_i))\;\middle|\;a=t_0\le t_1\le \cdots \le t_N=b,\ N\ge 1\right\}.$$ This is an element of $`[0,\infty ]`$. We call $`X`$ a length space and $`d`$ a path metric if for all $`x,y\in X`$ and all $`\epsilon >0`$ there is a path of length $`<d(x,y)+\epsilon `$ joining $`x`$ and $`y`$. All of the spaces in this paper are length spaces. An important class of length spaces is that of connected Riemannian manifolds. Given such a manifold $`M`$, one defines the ‘length’ of each piecewise differentiable path in $`M`$ as a certain integral. Then one defines the distance between two points of $`M`$ to be the infimum of the ‘lengths’ of such paths joining them. By the machinery above, this metric assigns a length to every path in $`M`$. Happily for the terminology this agrees with the ‘length’ when the latter is defined. See \[4, I.3.15\] for details. A path $`\gamma `$ is called a geodesic parameterized proportionally to arclength if there exists $`k\ge 0`$ such that $`d(\gamma (s),\gamma (t))=k|s-t|`$ for all $`s`$ and $`t`$ in the domain of $`\gamma `$. We call $`\gamma `$ a geodesic if $`k=1`$. We sometimes regard two geodesics as being the same if they differ only by an isometry of their domains. For example, we use this convention in assertions about uniqueness of geodesics in $`X`$. Similarly, we will sometimes refer to the image of $`\gamma `$, rather than $`\gamma `$ itself, as a geodesic. Sometimes we will even refer to a path as a geodesic when it is only a geodesic parameterized proportionally to arclength. We say that $`X`$ is a geodesic space if any two of its points are joined by a geodesic. Most of the spaces in this paper are geodesic. A subset $`Y`$ of $`X`$ is called convex (in $`X`$) if any two points of $`Y`$ are joined by a geodesic of $`X`$ and every such geodesic actually lies in $`Y`$. 
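The supremum defining $`\ell (\gamma )`$ can be approximated from below by chord sums over finer and finer partitions. A minimal numeric sketch (our illustration, not from the text; the particular path and partition sizes are arbitrary choices):

```python
import math

def polygonal_length(gamma, a, b, n):
    # Chord sum over the partition a = t_0 <= ... <= t_n = b; refining a
    # partition can only increase the sum, by the triangle inequality.
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    pts = [gamma(t) for t in ts]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# A quarter circle in R^2 has length pi/2; the chord sums over nested
# partitions increase toward that supremum.
gamma = lambda t: (math.cos(t), math.sin(t))
approx = [polygonal_length(gamma, 0.0, math.pi / 2, n) for n in (1, 4, 64)]
```

For `n = 1, 4, 64` the chord sums increase toward $`\pi /2`$, as the supremum definition predicts.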
A triangle $`T`$ in $`X`$ is a triple $`(\gamma _1,\gamma _2,\gamma _3)`$ of geodesics of $`X`$, called the edges of $`T`$, such that for each $`i`$, the final endpoint of $`\gamma _i`$ is the initial endpoint of $`\gamma _{i+1}`$; the vertex of $`T`$ opposite $`\gamma _i`$ is defined to be the common final endpoint of $`\gamma _{i+1}`$ and initial endpoint of $`\gamma _{i-1}`$. Here, subscripts should be read modulo 3. It is possible for a vertex to be opposite more than one edge; this occurs when an edge of $`T`$ has length $`0`$. An altitude of $`T`$ is a geodesic of $`X`$ joining a vertex of $`T`$ and a point of an edge opposite it. This terminology does not reduce to the usual notion of an altitude of a triangle in the Euclidean plane when $`X=R^2`$. Since we will not use the classical meaning of the term this should cause no confusion. Now we define the notion of a metric space satisfying a bound on its curvature. This elegant idea of Alexandrov captures much of the flavor of an upper bound on the sectional curvature of a Riemannian manifold, in the setting of much more general metric spaces. The idea is that triangles should be thinner than comparable triangles in some standard space like the Euclidean plane. For each $`\kappa \in R`$, let $`M_\kappa ^2`$ be the (unique up to isometry) complete simply connected Riemannian 2-manifold with constant curvature $`\kappa `$. For $`\kappa =0`$ or $`\kappa >0`$ this space is $`R^2`$ or the sphere of radius $`1/\sqrt{\kappa }`$. For $`\kappa <0`$ it is the hyperbolic plane equipped with a suitable multiple of its standard metric. If $`T`$ is a triangle in $`X`$ then a comparison triangle $`T^{\prime }`$ for $`T`$ in $`M_\kappa ^2`$ is a triangle $`(\gamma _1^{\prime },\gamma _2^{\prime },\gamma _3^{\prime })`$ in $`M_\kappa ^2`$ such that the domains of $`\gamma _i`$ and $`\gamma _i^{\prime }`$ coincide for each $`i`$. In particular we have $`\ell (\gamma _i)=\ell (\gamma _i^{\prime })`$. 
Comparison triangles exist unless $`\kappa >0`$ and $`T`$ has perimeter $`>2\pi /\sqrt{\kappa }`$. When they exist they are unique up to isometry unless $`\kappa >0`$ and $`T`$ has an edge of length $`\pi /\sqrt{\kappa }`$. We will arrange things later so that we will not need to worry about the existence or uniqueness of comparison triangles. We will follow the usual convention of taking $`2\pi /\sqrt{\kappa }`$ and similar expressions to represent $`\infty `$ when $`\kappa \le 0`$. This allows many assertions to be phrased more uniformly. For each $`i`$, we say that $`\gamma _i^{\prime }`$ is the edge of $`T^{\prime }`$ corresponding to $`\gamma _i`$. If $`p`$ is a point of $`\gamma _i`$ then the point $`p^{\prime }`$ associated to $`p`$ on the edge $`\gamma _i^{\prime }`$ is $`\gamma _i^{\prime }(t)`$, where $`t`$ is such that $`\gamma _i(t)=p`$. Note that a choice of edge containing $`p`$ is essential for this construction, since $`p`$ may lie on more than one edge of $`T`$. We say that $`T`$ satisfies CAT($`\kappa `$) if $`T`$ has perimeter $`<2\pi /\sqrt{\kappa }`$ and for any two edges $`\alpha `$ and $`\beta `$ of $`T`$ and points $`p`$ on $`\alpha `$ and $`q`$ on $`\beta `$, we have $`d(p,q)\le d(p^{\prime },q^{\prime })`$. Here $`p^{\prime }`$ and $`q^{\prime }`$ are the points of $`\alpha ^{\prime }`$ and $`\beta ^{\prime }`$ corresponding to $`p`$ and $`q`$ and $`\alpha ^{\prime }`$ and $`\beta ^{\prime }`$ are the edges corresponding to $`\alpha `$ and $`\beta `$ in a comparison triangle for $`T`$. We say that $`X`$ satisfies (or is) CAT($`\kappa `$) if $`X`$ is geodesic and every triangle in $`X`$ of perimeter $`<2\pi /\sqrt{\kappa }`$ satisfies CAT($`\kappa `$). The intuitive meaning of this condition is that $`X`$ is “at least as negatively curved” as $`M_\kappa ^2`$. We say that $`X`$ is locally CAT($`\kappa `$), or has curvature $`\le \kappa `$, if each point of $`X`$ has a convex CAT($`\kappa `$) neighborhood. 
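The comparison inequality $`d(p,q)\le d(p^{\prime },q^{\prime })`$ can be checked concretely for a metric tree, which satisfies CAT($`\kappa `$) for every $`\kappa `$. A minimal numeric sketch with $`\kappa =0`$ (our illustration, not from the text; the tripod — three unit segments glued at a point — and the sampling grid are our own choices):

```python
import math

# Tripod: points are (leg, s) with leg in {0, 1, 2} and 0 <= s <= 1;
# any point (leg, 0) is the common center.
def d_tripod(p, q):
    (i, s), (j, t) = p, q
    return abs(s - t) if i == j else s + t

# Triangle with vertices at the three tips; each edge runs tip -> center -> tip.
def edge(i, j, u):                  # u in [0, 2] is arclength along the edge
    return (i, 1.0 - u) if u <= 1.0 else (j, u - 1.0)

# Comparison triangle in M_0^2 = R^2: equilateral with side length 2.
V = [(0.0, 0.0), (2.0, 0.0), (1.0, math.sqrt(3.0))]
def edge_cmp(i, j, u):              # point at arclength u on the comparison edge
    (ax, ay), (bx, by) = V[i], V[j]
    return (ax + (bx - ax) * u / 2.0, ay + (by - ay) * u / 2.0)

# CAT(0) inequality d(p, q) <= d(p', q') for points on two edges of the triangle.
grid = [k / 10.0 for k in range(21)]
thin = all(
    d_tripod(edge(0, 1, u), edge(1, 2, v))
    <= math.dist(edge_cmp(0, 1, u), edge_cmp(1, 2, v)) + 1e-12
    for u in grid for v in grid
)
```

At the edge midpoints the inequality is strict: both midpoints are the center of the tripod, at distance 0, while the corresponding comparison points are at distance 1.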
Our interest in spaces with curvature bounded above stems from the following very general version of the Cartan-Hadamard theorem, proven by Bridson and Haefliger \[4, II.5.1\]. Theorem 2.1. A complete simply connected length space of curvature $`\le \kappa \le 0`$ is CAT($`\kappa `$). Note that part of the conclusion of theorem 2.1 is that the space is geodesic, which is very important and not at all obvious. This statement of the theorem implies the version quoted in the introduction: contractibility follows from the CAT($`\kappa `$) condition for $`\kappa \le 0`$, and if the given metric isn’t a path metric then it induces one and the two metrics define the same topology. Another very important theorem is Alexandrov’s subdivision lemma, a proof of which appears in . Theorem 2.2 (Alexandrov). Let $`T`$ be a triangle in a metric space, with an altitude $`\alpha `$. If $`T`$ has perimeter $`<2\pi /\sqrt{\kappa }`$ and both of the triangles into which $`\alpha `$ subdivides $`T`$ satisfy CAT($`\kappa `$), then $`T`$ also satisfies CAT($`\kappa `$). Sometimes this is stated with the condition that the two subtriangles have perimeters $`<2\pi /\sqrt{\kappa }`$, but we have made this condition a part of the definition of CAT($`\kappa `$).
3. Simple branched covers
The purpose of this section is to show that under very general conditions a branched cover satisfies the same upper bounds on curvature as its base space. Our precise formulation of this idea is theorem 3.1. The statement is slightly stronger than the version given in the introduction because it turns out that the completeness of the branch locus is needed for the existence of geodesics but not for the fact that all triangles in the cover satisfy CAT($`\kappa `$). The other result of this section is a local version of this result, theorem 3.6. Since we do not need this result we will merely state it and give the idea of its proof. The branched covering spaces we treat here are what we call simple branched covers. 
The basic idea is very simple: one removes a closed subset $`\mathrm{\Delta }`$ from a length space $`\widehat{X}`$, takes a cover of what is left, and then attaches a copy of $`\mathrm{\Delta }`$ in the obvious way. Formally, if $`\widehat{X}`$ is a length space and $`\mathrm{\Delta }`$ is a closed subset of $`\widehat{X}`$ then we say that $`\pi :\widehat{Y}\to \widehat{X}`$ is a simple branched cover of $`\widehat{X}`$ over $`\mathrm{\Delta }`$ if $`\widehat{Y}`$ is a length space and $`\pi `$ satisfies the following two conditions. First, the restriction of $`\pi `$ to $`\widehat{Y}\setminus \pi ^{-1}(\mathrm{\Delta })`$ must be a locally isometric covering map. Second, we require that $`d(y,z)=d(\pi y,\pi z)`$ if at least one of $`y,z\in \widehat{Y}`$ lies in $`\pi ^{-1}(\mathrm{\Delta })`$. It follows from the second condition that the restriction of $`\pi `$ to $`\pi ^{-1}(\mathrm{\Delta })`$ is an isometry. We will identify $`\mathrm{\Delta }`$ with its preimage under $`\pi `$ and write $`X`$ and $`Y`$ for $`\widehat{X}\setminus \mathrm{\Delta }`$ and $`\widehat{Y}\setminus \mathrm{\Delta }`$, respectively. It is easy to see that any simple branched cover is distance non-increasing. If we are given $`\widehat{X}`$ and $`\mathrm{\Delta }`$ as above, and $`Y`$ is any covering space of $`X=\widehat{X}\setminus \mathrm{\Delta }`$, then there is a unique metric on $`\widehat{Y}=Y\sqcup \mathrm{\Delta }`$ such that the obvious map $`\pi :\widehat{Y}\to \widehat{X}`$ is a simple branched cover of $`\widehat{X}`$ over $`\mathrm{\Delta }`$. This may be constructed as follows. First, each component of $`X`$ carries a unique path metric under which its inclusion into $`\widehat{X}`$ is a local isometry. (This uses the fact that $`\mathrm{\Delta }`$ is closed.) Second, each component of $`Y`$ carries a natural path metric, the unique such metric under which the covering map is a local isometry. Third, for $`y,z\in \widehat{Y}`$ with at least one of them in $`\mathrm{\Delta }`$ we define $`d(y,z)=d(\pi y,\pi z)`$. 
Finally, if $`x,z\in Y`$ then we define $`d(x,z)`$ as $$\inf \left(\left\{d(x,y)+d(y,z)\;\middle|\;y\in \mathrm{\Delta }\right\}\cup \left\{\ell (\gamma )\;\middle|\;\gamma \text{ is a path in a component of }Y\text{ joining }x\text{ and }z\right\}\right).$$ One can check that $`d`$ is a path metric on $`\widehat{Y}`$ and that $`\pi `$ is a simple branched covering. Our main theorem is a sufficient condition for $`\widehat{Y}`$ to be CAT($`\kappa `$): Theorem 3.1. Suppose $`\mathrm{\Delta }`$ is a closed convex subset of a CAT($`\kappa `$) space $`\widehat{X}`$ and let $`\pi :\widehat{Y}\to \widehat{X}`$ be a simple branched cover of $`\widehat{X}`$ over $`\mathrm{\Delta }`$. If $`\kappa >0`$ then assume also that $`Diam(\widehat{X})<\pi /2\sqrt{\kappa }`$ and $`Diam(\widehat{Y})<2\pi /3\sqrt{\kappa }`$. Then (i) Every triangle in $`\widehat{Y}`$ satisfies CAT($`\kappa `$). (ii) If $`\mathrm{\Delta }`$ is complete then $`\widehat{Y}`$ is geodesic and hence CAT($`\kappa `$). Example: The completeness condition in (ii) cannot be dropped, because of the following example. Take $`\widehat{X}`$ to be the set of points $`(x,y)\in R^2`$ with $`x\ge 0`$ and $`y>0`$, together with the point $`(1,0)`$. Let $`\mathrm{\Delta }`$ be the positive $`y`$-axis. Then $`\widehat{X}`$ is a convex subset of $`R^2`$, hence CAT(0), and $`\mathrm{\Delta }`$ is a closed convex subset of $`\widehat{X}`$. The set $`X=\widehat{X}\setminus \mathrm{\Delta }`$ is contractible, so any cover of it is a union of disjoint copies of it. Taking $`Y`$ to be the cover with 2 sheets, $`\widehat{Y}`$ is isometric to the upper half plane in $`R^2`$ together with the points $`(\pm 1,0)`$. There is no geodesic joining these two points, so $`\widehat{Y}`$ is not a geodesic space. This provides a counterexample to several assertions in the literature, such as \[8, 4.3–4.4\], \[10, Lemma 1.1\] and \[6, Lemma 2.4\]. Example: Although the space $`\widehat{Y}`$ of the previous example is not geodesic, it still has curvature $`\le 0`$. 
The following example shows that even this may fail if $`\mathrm{\Delta }`$ is not complete. We take $`\widehat{X}`$ to be the set of points $`(x,y,z)`$ of $`R^3`$ whose first nonzero coordinate is positive, together with the origin. That is, $`\widehat{X}`$ is the union of an open half-space together with an open half-plane in its boundary, together with a ray in its boundary. We take $`\mathrm{\Delta }`$ to be the set of points of $`\widehat{X}`$ with vanishing $`x`$-coordinate, which is the union of the open half-plane and the ray. Then $`\widehat{X}`$ is a convex subset of $`R^3`$ and $`\mathrm{\Delta }`$ is closed and convex in $`\widehat{X}`$. As before, any cover of $`X=\widehat{X}\setminus \mathrm{\Delta }`$ is a union of copies of $`X`$, and we take $`Y`$ to be the cover with 2 sheets. Then $`\widehat{Y}`$ is isometric to the subset of $`R^3`$ given by $$\widehat{Y}=\mathrm{\Delta }\cup \left\{(x,y,z)\in R^3\;\middle|\;x\ne 0\right\},$$ equipped with the path metric induced by the Euclidean metric. It is easy to see that for each $`n\ge 1`$ the points $`(\pm 1/n,1/n,1/n)`$ are joined by no geodesic of $`\widehat{Y}`$. Since every neighborhood of $`0`$ contains such a pair of points, $`0`$ has no geodesic neighborhood. In the proofs below we will use the following two facts about $`\widehat{X}`$. First, geodesics are characterized by their endpoints. Second, geodesics vary continuously with respect to their endpoints, by which we mean that for each $`\epsilon >0`$ there is a $`\delta >0`$ such that if $`d(x,x^{\prime })<\delta `$ and $`d(y,y^{\prime })<\delta `$ for $`x,y,x^{\prime },y^{\prime }\in \widehat{X}`$ then the geodesic from $`x`$ to $`y`$ is uniformly within $`\epsilon `$ of the geodesic from $`x^{\prime }`$ to $`y^{\prime }`$. These facts follow from the CAT($`\kappa `$) inequalities and the fact that $`Diam(\widehat{X})<\pi /\sqrt{\kappa }`$. 
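The failure of geodesics in the first example above can be seen numerically: every path from $`(-1,0)`$ to $`(1,0)`$ in that $`\widehat{Y}`$ must detour through the open upper half plane, so its length strictly exceeds 2, while lengths arbitrarily close to 2 occur. A minimal sketch (our illustration, not from the text; the waypoint heights are arbitrary):

```python
import math

# Length of the two-segment path (-1,0) -> (0,eps) -> (1,0), which stays in
# the open upper half plane except at its endpoints; it equals 2*sqrt(1+eps^2).
def detour_length(eps):
    return math.dist((-1.0, 0.0), (0.0, eps)) + math.dist((0.0, eps), (1.0, 0.0))

lengths = [detour_length(10.0 ** -k) for k in range(1, 7)]
# The lengths decrease strictly toward the infimum d((-1,0),(1,0)) = 2, but no
# admissible path attains it, so no geodesic joins the two points.
```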
We will ignore all conditions about perimeters of triangles in $`\widehat{Y}`$ being less than $`2\pi /\sqrt{\kappa }`$ because we have bounded $`Diam(\widehat{Y})`$ in order to guarantee that all triangles in $`\widehat{Y}`$ satisfy this condition. We chose the bounds on $`Diam(\widehat{X})`$ and $`Diam(\widehat{Y})`$ out of convenience; one could probably weaken them, although most theorems about CAT($`\kappa `$) spaces require some sort of extra condition when $`\kappa >0`$. Of course, if $`Diam(\widehat{X})<\pi /3\sqrt{\kappa }`$ then the condition on $`Diam(\widehat{Y})`$ follows automatically. We begin with some elementary properties of geodesics in $`\widehat{Y}`$, and then show that under special circumstances they vary continuously with respect to their endpoints. Lemma 3.2. Under the hypotheses of theorem 3.1, we have the following: (i) If $`\gamma `$ is a geodesic of $`\widehat{X}`$ meeting $`\mathrm{\Delta }`$ and $`w,z\in \widehat{Y}`$ lie over the endpoints of $`\gamma `$, then there is a unique path in $`\widehat{Y}`$ joining $`w`$ with $`z`$ and projecting to $`\gamma `$. (ii) A path in $`\widehat{Y}`$ projecting to a geodesic of $`\widehat{X}`$ is the unique geodesic with its endpoints. (iii) If $`x\in \widehat{Y}`$ and $`y\in \mathrm{\Delta }`$ then there is a unique geodesic of $`\widehat{Y}`$ joining them, and it projects to a geodesic of $`\widehat{X}`$. (iv) A geodesic of $`\widehat{Y}`$ that misses $`\mathrm{\Delta }`$ projects to a geodesic of $`\widehat{X}`$. (v) If $`x\in Y`$ then the set of points of $`\widehat{Y}`$ that may be joined with $`x`$ by a geodesic of $`\widehat{Y}`$ that misses $`\mathrm{\Delta }`$ is open. Proof: (i) Because $`\mathrm{\Delta }`$ is convex in $`\widehat{X}`$, $`\gamma `$ meets $`\mathrm{\Delta }`$ in an interval, with endpoints say $`x`$ and $`y`$. There is clearly a unique lift of this interval. 
If $`\pi w\ne x`$ then by covering space theory the half-open segment of $`\gamma `$ from $`\pi w`$ to $`x`$ has a unique lift to $`Y`$ beginning at $`w`$. Similarly, there is a unique lift of the segment from $`y`$ to $`\pi z`$. It is obvious that these lifts fit together to form a lift of $`\gamma `$. (ii) This follows from the uniqueness of geodesics in $`\widehat{X}`$ and the uniqueness of their lifts with specified endpoints in $`\widehat{Y}`$, which in turn follows from (i) for geodesics that meet $`\mathrm{\Delta }`$ and from covering space theory for those that do not. (iii) This follows from (i) and (ii) by lifting a geodesic of $`\widehat{X}`$ joining $`\pi x`$ and $`\pi y`$. (iv) Suppose $`\gamma `$ is a geodesic of $`\widehat{Y}`$ from $`x`$ to $`z`$ that misses $`\mathrm{\Delta }`$, and that $`\pi \gamma `$ is not a geodesic of $`\widehat{X}`$. Consider the homotopy $`\mathrm{\Gamma }`$ from $`\pi \gamma `$ to the constant path at $`\pi x`$ given by retraction along geodesics. Suppose first that $`\mathrm{\Gamma }`$ meets $`\mathrm{\Delta }`$. Then there is a point $`y`$ of $`\pi \gamma `$ that is joined with $`\pi x`$ by a geodesic $`\beta `$ of $`\widehat{X}`$ that meets $`\mathrm{\Delta }`$. By (i), there is a lift $`\stackrel{~}{\beta }`$ of $`\beta `$ from $`x`$ to the point $`\stackrel{~}{y}`$ of $`\gamma `$ lying over $`y`$. By (ii), $`\stackrel{~}{\beta }`$ is the unique geodesic of $`\widehat{Y}`$ with these endpoints. But then the subsegment of $`\gamma `$ from $`x`$ to $`\stackrel{~}{y}`$ must coincide with $`\stackrel{~}{\beta }`$, contradicting the fact that $`\gamma `$ misses $`\mathrm{\Delta }`$. Now suppose $`\mathrm{\Gamma }`$ misses $`\mathrm{\Delta }`$. Consider the geodesic $`\delta `$ of $`\widehat{X}`$ from $`\pi x`$ to $`\pi z`$, which is shorter than $`\gamma `$. Since $`\delta `$ is the track of $`\pi z`$ under $`\mathrm{\Gamma }`$, we may regard $`\mathrm{\Gamma }`$ as a homotopy rel endpoints between $`\pi \gamma `$ and $`\delta `$. 
This lifts to a homotopy between $`\gamma `$ and a lift $`\stackrel{~}{\delta }`$ of $`\delta `$ that joins $`x`$ and $`z`$. Then $`\ell (\stackrel{~}{\delta })<\ell (\gamma )`$, contradicting the hypothesis that $`\gamma `$ is a geodesic. (v) Suppose $`\gamma `$ is a geodesic of $`\widehat{Y}`$ from $`x`$ to some point $`y`$ of $`\widehat{Y}`$, and that $`\gamma `$ misses $`\mathrm{\Delta }`$. By (iv), $`\pi \gamma `$ is a geodesic of $`\widehat{X}`$. Since geodesics of $`\widehat{X}`$ depend continuously upon their endpoints, there is an open ball $`U`$ of radius $`r>0`$ about $`\pi y`$ such that the geodesics of $`\widehat{X}`$ from $`\pi x`$ to the various points of $`U`$ are all uniformly within $`d(image(\gamma ),\mathrm{\Delta })`$ of $`\pi \gamma `$. By replacing $`r`$ by a smaller number if necessary, we may also suppose that the open $`r`$-ball $`\stackrel{~}{U}`$ about $`y`$ maps isometrically onto its image. Now, if $`y^{\prime }`$ lies in $`\stackrel{~}{U}`$ then consider the homotopy along geodesics from $`\pi \gamma `$ to the geodesic $`\beta `$ from $`\pi x`$ to $`\pi y^{\prime }`$. This misses $`\mathrm{\Delta }`$, so it lifts to a homotopy from $`\gamma `$ to a lift $`\stackrel{~}{\beta }`$ of $`\beta `$. Considering the length of the track of $`y`$, we see that the final endpoint of $`\stackrel{~}{\beta }`$ lies within $`r`$ of $`y`$, so that it must coincide with $`y^{\prime }`$. Finally, $`\stackrel{~}{\beta }`$ is a geodesic by (ii). This shows that every point of $`\stackrel{~}{U}`$ is joined with $`x`$ by a geodesic that misses $`\mathrm{\Delta }`$. Lemma 3.3. Under the hypotheses of theorem 3.1, suppose $`x_n`$ and $`y_n`$ are sequences in $`Y`$ converging to points $`x`$ and $`y`$ of $`\widehat{Y}`$, respectively. Suppose also that for each $`n`$ there is a geodesic of $`\widehat{Y}`$ from $`x_n`$ to $`y_n`$ that misses $`\mathrm{\Delta }`$, and let $`\gamma _n:[0,1]\to \widehat{Y}`$ be a parameterization of this geodesic proportional to arclength. 
Then there is a geodesic of $`\widehat{Y}`$ from $`x`$ to $`y`$, and the $`\gamma _n`$ converge uniformly to (the obvious reparameterization of) it. Proof: By lemma 3.2(iv), each $`\pi \gamma _n`$ is a geodesic of $`\widehat{X}`$. By the continuous dependence of geodesics in $`\widehat{X}`$ on their endpoints, the $`\pi \gamma _n`$ converge uniformly to a (suitably parameterized) geodesic $`\beta `$ from $`\pi x`$ to $`\pi y`$. We distinguish two cases. First, suppose that $`\beta `$ misses $`\mathrm{\Delta }`$. Then there is a unique lift $`\stackrel{~}{\beta }`$ of $`\beta `$ with $`\stackrel{~}{\beta }(0)=x`$, and this is a (reparameterized) geodesic by 3.2(ii). We claim that the $`\gamma _n`$ converge uniformly to $`\stackrel{~}{\beta }`$ and that $`\stackrel{~}{\beta }(1)=y`$, so that $`\stackrel{~}{\beta }`$ is the required geodesic. The second claim follows from the first. To see convergence, first choose a constant $`\delta >0`$ such that $`\delta <d(image(\beta ),\mathrm{\Delta })`$ and such that the open ball of radius $`\delta `$ about $`x`$ maps isometrically onto its image. By discarding finitely many terms of the sequence, we may suppose that all the $`x_n`$ are within $`\delta `$ of $`x`$ and that all the $`\pi \gamma _n`$ are uniformly within $`\delta `$ of $`\beta `$. We claim that for each $`n`$, the uniform distance between $`\gamma _n`$ and $`\stackrel{~}{\beta }`$ equals the uniform distance between $`\pi \gamma _n`$ and $`\beta `$. This clearly implies that the $`\gamma _n`$ converge uniformly to $`\stackrel{~}{\beta }`$. To see the claim, simply construct the homotopy along geodesics in $`\widehat{X}`$ from $`\pi \gamma _n`$ to $`\beta `$ and lift this to a homotopy from $`\gamma _n`$ to some lift of $`\beta `$, and then argue as in lemma 3.2(v) that this lift coincides with $`\stackrel{~}{\beta }`$. On the other hand, suppose that $`\beta `$ meets $`\mathrm{\Delta }`$. 
Then parts (i) and (ii) of lemma 3.2 show that there is a unique lift $`\stackrel{~}{\beta }`$ of $`\beta `$ with $`\stackrel{~}{\beta }(0)=x`$ and $`\stackrel{~}{\beta }(1)=y`$, and that this is a (reparameterized) geodesic of $`\widehat{Y}`$. We must show that the $`\gamma _n`$ converge uniformly to $`\stackrel{~}{\beta }`$. Let $`u`$ (resp. $`w`$) be the least (resp. greatest) element of $`[0,1]`$ whose image under $`\beta `$ lies in $`\mathrm{\Delta }`$. It is easy to see that the $`\gamma _n`$ converge uniformly to $`\stackrel{~}{\beta }`$ on $`[u,w]`$. (One just uses the fact that the distance between an element of $`\widehat{Y}`$ and an element of $`\mathrm{\Delta }\subset \widehat{Y}`$ coincides with the distance between their projections.) From this and the fact that the $`\gamma _n`$ are all parameterized proportionally to arclength, one obtains the following slightly stronger statement: for each $`\epsilon >0`$ there exist an $`N`$ and a $`\delta >0`$ such that for all $`n>N`$ and all $`v\in [u-\delta ,w+\delta ]`$, $`d(\gamma _n(v),\stackrel{~}{\beta }(v))<\epsilon `$. Therefore it suffices to prove that for all $`\delta >0`$, the $`\gamma _n`$ converge uniformly on $`[0,u-\delta ]`$ and on $`[w+\delta ,1]`$. Since the images of these intervals under $`\beta `$ are disjoint from $`\mathrm{\Delta }`$ we can use the same argument as in the case that $`\beta `$ missed $`\mathrm{\Delta }`$. Lemma 3.4. A triangle of $`\widehat{Y}`$ with an edge in $`\mathrm{\Delta }`$ satisfies CAT($`\kappa `$). Proof: We write $`T`$ for the triangle, and represent the edge in $`\mathrm{\Delta }`$ as $`A:[0,1]\to \mathrm{\Delta }`$, a geodesic parameterized proportionally to arclength. Let $`a`$ be the vertex of $`T`$ opposite $`A`$, and for $`t\in [0,1]`$ let $`B^t`$ be the geodesic from $`a`$ to $`A(t)`$, parameterized proportionally to arclength. In particular, the edges of $`T`$ are $`A`$, $`B^0`$ and $`B^1`$. 
Let $`T^{\prime }`$ be a comparison triangle in $`M_\kappa ^2`$ for $`T`$, and suppose that $`p`$ and $`q`$ are points of given edges of $`T`$. Let $`p^{\prime }`$ and $`q^{\prime }`$ be the corresponding points of the corresponding edges of $`T^{\prime }`$, and let $`k=d_{T^{\prime }}(p^{\prime },q^{\prime })`$. We must show $`d_{\widehat{Y}}(p,q)\le k`$. If one of $`p`$ and $`q`$ lies on $`A`$ then we are done, because $$d_{\widehat{Y}}(p,q)=d_{\widehat{X}}(\pi p,\pi q)\le k,$$ where we have used the definition of a simple branched cover, the fact that $`\widehat{X}`$ is CAT($`\kappa `$), and the fact that $`T^{\prime }`$ is a comparison triangle for $`\pi T`$ as well as for $`T`$. Now we consider the case in which neither $`p`$ nor $`q`$ is given as lying on $`A`$. To avoid trivialities we suppose that they lie on different edges of $`T`$, so we may suppose that $`p`$ lies on $`B^0`$ and $`q`$ on $`B^1`$. The obvious idea is to construct the geodesic in $`\widehat{X}`$ joining $`x_0=\pi p`$ with $`x_1=\pi q`$, and then lift it to a geodesic of $`\widehat{Y}`$. The problem is that while we may always lift the geodesic, there is no guarantee that the lift will join $`p`$ and $`q`$. We will circumvent this problem by joining $`x_0`$ to $`x_1`$ by a path $`\alpha `$ that may fail to be a geodesic, but will have length $`\le k`$. Our path will have the virtue of lying in the ‘surface’ $`S`$ swept out by the geodesics $`\pi B^t`$, which will allow us to lift it to a path from $`p`$ to $`q`$. By the continuous dependence of geodesics on their endpoints, $`(t,z)\mapsto \pi B^t(z)`$ is continuous on $`[0,1]\times [0,1]`$, and so $`S`$ is compact. We will need some “comparison complexes” $`\overline{K}_n`$ as well as the comparison triangle $`T^{\prime }`$. For $`0\le t\le u\le 1`$ we define $`T(t,u)`$ to be the triangle with edges $`B^t`$, $`B^u`$ and $`A|_{[t,u]}`$. For each $`n=0,1,2,\ldots `$ we let $`D_n`$ be the set of dyadic rational numbers in $`[0,1]`$ of the form $`t_{n,i}=i/2^n`$. 
For each $`i=1,\ldots ,2^n`$ we define $`\overline{T}_{n,i}`$ to be the comparison triangle in $`M_\kappa ^2`$ for $`T(t_{n,i-1},t_{n,i})`$. We write $`\overline{B}_{n,i}^{-}`$ and $`\overline{B}_{n,i}^+`$ for the edges of $`\overline{T}_{n,i}`$ corresponding to $`B^{t_{n,i-1}}`$ and $`B^{t_{n,i}}`$, $`\overline{A}_{n,i}`$ for the edge corresponding to $`A|_{[t_{n,i-1},t_{n,i}]}`$, and $`\overline{a}_{n,i}`$ for the vertex corresponding to $`a`$. We take $`\overline{U}_{n,i}`$ to be the convex hull of $`\overline{T}_{n,i}`$ in $`M_\kappa ^2`$. Finally, we define $`\overline{K}_n`$ as the union of disjoint copies of the $`\overline{U}_{n,i}`$, subject to the identification of the segment $`\overline{B}_{n,i}^+`$ in $`\overline{U}_{n,i}`$ with $`\overline{B}_{n,i+1}^{-}`$ in $`\overline{U}_{n,i+1}`$, in such a way that $`\overline{a}_{n,i}`$ is identified with $`\overline{a}_{n,i+1}`$, for each $`i=1,\ldots ,2^n-1`$. In short, $`\overline{K}_n`$ is a ‘fan’ of $`2^n`$ triangular pieces cut from $`M_\kappa ^2`$, although some of these pieces may degenerate to segments. We equip $`\overline{K}_n`$ with its natural path metric. The paths $`\overline{A}_{n,i}`$ in the $`\overline{U}_{n,i}`$ fit together to form a path $`\overline{A}_n:[0,1]\to \overline{K}_n`$. The vertices $`\overline{a}_{n,i}`$ are identified with each other, resulting in a single point $`\overline{a}_n`$ of $`\overline{K}_n`$. If $`t\in D_n`$ then we let $`\overline{B}_n^t:[0,1]\to \overline{K}_n`$ be the geodesic from $`\overline{a}_n`$ to $`\overline{A}_n(t)`$, parameterized proportionally to arclength. These paths, together with $`\overline{A}_n`$, form the ‘1-skeleton’ of $`\overline{K}_n`$. We claim that $`\overline{A}_n`$ is a geodesic of $`\overline{K}_n`$ for all $`n`$. Otherwise, a simple application of the CAT($`\kappa `$) property of $`\widehat{X}`$ would show that $`\pi A`$ failed to be a geodesic. 
We use this to deduce that points of the ‘1-skeleton’ of $`\overline{K}_n`$ are at least as far apart as the corresponding points of $`\overline{K}_{n+1}`$. To make this precise, observe that if $`t\in D_n`$ then $`\pi B^t`$ and $`\overline{B}_n^t`$ have the same length. Therefore to each point $`x`$ of $`\pi B^t`$ we may associate a point $`\overline{x}`$ of $`\overline{B}_n^t`$ and vice-versa. The relationship is $`d_{\widehat{X}}(\pi a,x)=d_{\overline{K}_n}(\overline{a}_n,\overline{x})`$. If $`t`$ also lies in $`D_m`$ then we can identify $`\overline{B}_n^t`$ with $`\overline{B}_m^t`$ in a similar way. We claim that if $`u`$ and $`w`$ lie in $`D_{n-1}`$ and $`b`$ and $`c`$ are points on $`\overline{B}_{n-1}^u`$ and $`\overline{B}_{n-1}^w`$, with corresponding points $`\beta `$ and $`\gamma `$ on $`\overline{B}_n^u`$ and $`\overline{B}_n^w`$, then $$d_{\overline{K}_n}(\beta ,\gamma )\le d_{\overline{K}_{n-1}}(b,c).$$ $`(3.1)`$ To prove this it suffices to treat the case in which $`u`$ and $`w`$ are consecutive elements of $`D_{n-1}`$. Since $`\overline{A}_n|_{[u,w]}`$ is a geodesic, we may consider the geodesic triangle in $`\overline{K}_n`$ with this edge together with $`\overline{B}_n^u`$ and $`\overline{B}_n^w`$. This satisfies CAT($`\kappa `$) because we may subdivide it along the altitude $`\overline{B}_n^v`$ (where $`v=(u+w)/2`$), into two triangles which are pieces of $`M_\kappa ^2`$ and therefore obviously CAT($`\kappa `$). Furthermore, as a comparison triangle we may take the triangle in $`\overline{K}_{n-1}`$ bounded by $`\overline{A}_{n-1}|_{[u,w]}`$, $`\overline{B}_{n-1}^u`$ and $`\overline{B}_{n-1}^w`$, since this triangle is also a piece of $`M_\kappa ^2`$. Then (3.1) follows immediately. We write $`D=\bigcup _nD_n`$ for the set of all dyadic rational numbers in $`[0,1]`$. We will use the $`\overline{K}_n`$ to construct a point $`x_u`$ on $`\pi B^u`$ for each $`u\in D`$. Then we will string the $`x_u`$ together to build the path $`\alpha `$. 
We have already defined $`x_0=\pi p`$ and $`x_1=\pi q`$. We will sometimes write $`x_{n,i}`$ for $`x_{i/2^n}`$, so we have just defined $`x_{0,0}=x_0`$ and $`x_{0,1}=x_1`$. Supposing that all the $`x_{n-1,i}`$ have been defined, we define the $`x_{n,j}`$ as follows. We have already defined the $`x_{n,j}`$ for even $`j`$, namely $`x_{n,j}=x_{n-1,j/2}`$. If $`j`$ is odd then take $`u=(j-1)/2^n`$, $`v=j/2^n`$ and $`w=(j+1)/2^n`$, and consider the points $`\overline{x}_u`$ and $`\overline{x}_w`$ of $`\overline{K}_n`$ that lie on $`\overline{B}_n^u`$ and $`\overline{B}_n^w`$ and correspond to $`x_u`$ and $`x_w`$. We construct the geodesic of $`\overline{K}_n`$ joining $`\overline{x}_u`$ and $`\overline{x}_w`$, and let $`\overline{x}_v`$ be any point of $`\overline{B}_n^v`$ that it meets. Such an intersection point exists by the construction of $`\overline{K}_n`$. (Typically, there will be a unique intersection point, but this can fail if one of $`\overline{T}_{n,j}`$ and $`\overline{T}_{n,j+1}`$ degenerates to a segment.) We take $`x_v`$ to be the point of $`\pi B^v`$ corresponding to $`\overline{x}_v`$. For each $`n`$ we write $`\overline{x}_{n,i}`$ for the point of $`\overline{B}_n^u`$ corresponding to $`x_u`$, where $`u=i/2^n`$. (In particular, $`\overline{x}_{n,i}`$ and $`\overline{x}_{n+1,2i}`$ are points of different spaces, but both correspond to $`x_u`$.) Consider the sum $$k_n=\sum _{i=1}^{2^n}d_{\overline{K}_n}(\overline{x}_{n,i-1},\overline{x}_{n,i}).$$ $`(3.2)`$ We claim that $`k_n\le k_{n-1}`$. To see this, observe that $$k_n=\sum _{i=2,4,\ldots ,2^n}d_{\overline{K}_n}(\overline{x}_{n,i-2},\overline{x}_{n,i})$$ because of the construction of the $`x_{n,i}`$ for odd $`i`$. By (3.1), when $`i`$ is even we have $$d_{\overline{K}_n}(\overline{x}_{n,i-2},\overline{x}_{n,i})\le d_{\overline{K}_{n-1}}(\overline{x}_{n-1,(i-2)/2},\overline{x}_{n-1,i/2}),$$ and $`k_n\le k_{n-1}`$ follows. We immediately obtain $`k_n\le k_0=k`$ for all $`n`$. 
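In the flat model case $`\widehat{X}=R^2`$ (so $`\kappa =0`$ and every comparison piece $`\overline{U}_{n,i}`$ develops onto the plane itself) the construction of the $`x_u`$ can be carried out explicitly: each new point $`x_v`$ is the intersection of the segment $`x_ux_w`$ with the line through $`a`$ and $`A(v)`$, and the sums $`k_n`$ are non-increasing. A minimal numeric sketch (our illustration, not from the text; the particular vertex $`a`$, segment $`A`$, and endpoints $`x_0`$, $`x_1`$ are arbitrary choices):

```python
import math

def intersect(p, q, a, b):
    # Intersection of the segment p-q with the line through a and b, by Cramer's rule.
    (px, py), (qx, qy), (ax, ay), (bx, by) = p, q, a, b
    dx, dy, ex, ey = qx - px, qy - py, bx - ax, by - ay
    det = -dx * ey + ex * dy
    s = (-(ax - px) * ey + ex * (ay - py)) / det
    return (px + s * dx, py + s * dy)

a = (0.0, 0.0)                                        # the vertex of T opposite A
A = lambda t: (1.0 + 2.0 * t, 2.0)                    # the edge A, a segment in Delta
B = lambda t, f: (f * A(t)[0], f * A(t)[1])           # point on the geodesic a -> A(t)

x = {0.0: B(0.0, 0.6), 1.0: B(1.0, 0.8)}              # x_0 = pi(p), x_1 = pi(q)
ks = []
for n in range(6):
    for j in range(1, 2 ** n, 2):                     # new dyadic points at level n
        u, v, w = (j - 1) / 2 ** n, j / 2 ** n, (j + 1) / 2 ** n
        x[v] = intersect(x[u], x[w], a, A(v))
    pts = [x[i / 2 ** n] for i in range(2 ** n + 1)]
    ks.append(sum(math.dist(p, q) for p, q in zip(pts, pts[1:])))
```

Here the fan is flat, so every $`x_v`$ lands on the segment from $`x_0`$ to $`x_1`$ and all the $`k_n`$ equal $`k_0`$; in a genuinely curved $`\widehat{X}`$ one only gets $`k_n\le k_{n-1}`$.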
By applying the CAT($`\kappa `$) inequality for $`\widehat{X}`$ we see that for all $`n`$, $$\sum _{i=1}^{2^n}d(x_{n,i-1},x_{n,i})\le k.$$ It follows immediately that if $`u_0,\dots ,u_j`$ is any increasing sequence of dyadic rationals, then $$d(x_{u_0},x_{u_1})+\dots +d(x_{u_{j-1}},x_{u_j})\le k.$$ $`(3.3)`$ We are now ready to string the $`x_u`$’s together into a path. There is a technical complication, which we will work around in the next few paragraphs. Specifically, the map $`u\mapsto x_u`$ might not be continuous on $`D`$. This can happen if some of the comparison triangles $`\overline{T}_{n,i}`$ are degenerate. However, if $`t\in (0,1]`$ then as the $`u\in D`$ approach $`t`$ from the left, the $`x_u`$ do converge to a limit $`L(t)`$. Similarly, if $`t\in [0,1)`$ then as the $`u\in D`$ approach $`t`$ from the right, the $`x_u`$ converge to a limit $`R(t)`$. We will treat $`L(t)`$; the discussion of $`R(t)`$ is similar. Certainly there is some sequence $`u_i`$ of dyadic rationals approaching $`t`$ from below, such that the $`x_{u_i}`$ converge, since the $`x_u`$ all lie in the compact set $`S`$. We will call this limit $`L(t)`$. Now we show that if $`u_j'`$ is any sequence in $`D`$ approaching $`t`$ from below then the $`x_{u_j'}`$ converge to $`L(t)`$. For otherwise we could suppose (by passing to a subsequence) that the $`x_{u_j'}`$ converge to some other limit. Then by interleaving terms of the sequences $`u_i`$ and $`u_j'`$ we could violate (3.3). This establishes the existence of the left and right limits $`L(t)`$ and $`R(t)`$. It is obvious that $`L(t)`$ and $`R(t)`$ lie on $`\pi B^t`$. For completeness we define $`L(0)=x_0`$ and $`R(1)=x_1`$. Next, we claim that if $`t_1,t_2,\dots `$ is an increasing sequence in $`[0,1]`$ with limit $`t`$, then $`L(t)=\lim _{n\to \infty }R(t_n)`$. If this failed then there would be such a sequence that converged to some point other than $`L(t)`$. 
But then for each $`n`$ we could choose $`u_n\in D`$ such that $`t_n<u_n<t_{n+1}`$ and $`d(x_{u_n},R(t_n))<1/n`$. Then the $`x_{u_n}`$ would converge to a point other than $`L(t)`$, while the $`u_n`$ approach $`t`$ from below, a contradiction. A symmetric argument shows that if $`t_1,t_2,\dots `$ is a decreasing sequence with limit $`t`$ then $`R(t)=\lim _{n\to \infty }L(t_n)`$. Our path $`\alpha `$ will pass through all the points $`L(t)`$ and $`R(t)`$ in order. To accomplish this, we define for $`t\in (0,1]`$ the quantity $$l^{-}(t)=\sup \left\{\sum _{i=1}^{n}d(L(t_{i-1}),R(t_{i-1}))+d(R(t_{i-1}),L(t_i))\right\},$$ $`(3.4)`$ where the supremum is over all increasing sequences $`0=t_0<\dots <t_n=t`$. For completeness we define $`l^{-}(0)=0`$. Then for $`t\in [0,1]`$ we define $`l^+(t)=l^{-}(t)+d(L(t),R(t))`$. To motivate these definitions, we mention that the length of the subpath of $`\alpha `$ from $`x_0`$ to $`L(t)`$ (resp. $`R(t)`$) will be $`l^{-}(t)`$ (resp. $`l^+(t)`$). It is obvious that if $`t<t'`$ then $`l^{-}(t)\le l^+(t)\le l^{-}(t')\le l^+(t')`$. Furthermore, $`l=l^+(1)`$ satisfies $`l\le k`$. To see this, consider an increasing sequence $`0=t_0<\dots <t_n=1`$ such that $$d(L(t_0),R(t_0))+d(R(t_0),L(t_1))+\dots +d(L(t_n),R(t_n))$$ approximates $`l`$. We may approximate each $`t_i`$ (except $`t_0`$) by a dyadic rational $`u_i`$ smaller than $`t_i`$, and each $`t_i`$ (except $`t_n`$) by a dyadic rational $`v_i`$ larger than $`t_i`$. We may do this in such a way that the sequence $`0,v_0,u_1,v_1,\dots ,u_{n-1},v_{n-1},u_n,1`$ is increasing. Then the $`x_{u_i}`$ approximate the $`L(t_i)`$ and the $`x_{v_i}`$ approximate the $`R(t_i)`$. It follows that $$d(x_0,x_{v_0})+d(x_{v_0},x_{u_1})+d(x_{u_1},x_{v_1})+\dots +d(x_{u_n},x_1)$$ approximates $`l`$, and then $`l\le k`$ follows from (3.3). Finally, we claim that if $`t\in (0,1]`$ then $`l^{-}(t)=\sup _{t'<t}l^+(t')`$ and if $`t\in [0,1)`$ then $`l^+(t)=\inf _{t'>t}l^{-}(t')`$. 
This follows from the relationship between the $`L(t)`$ and the $`R(t)`$, together with the fact that $`l`$ is finite. Now we build $`\alpha `$. One can check that there is a unique function $`\alpha :[0,l]\to \widehat{X}`$ such that for each $`t\in [0,1]`$ the restriction of $`\alpha `$ to $`[l^{-}(t),l^+(t)]`$ is the geodesic from $`L(t)`$ to $`R(t)`$. It follows from the relations between the $`L(t)`$ and the $`R(t)`$ that $`\alpha `$ is continuous, and from the definitions of $`l^\pm (t)`$ that $`\alpha `$ is parameterized by arclength. In particular, $`\ell (\alpha )=l\le k`$. Finally, $`\alpha `$ lies in $`S`$ since $`L(t)`$ and $`R(t)`$ lie on $`\pi B^t`$ for each $`t`$. Now we will lift $`\alpha `$ to $`\widehat{Y}`$. Suppose first that $`\alpha `$ misses $`\mathrm{\Delta }`$. If a point $`x`$ of $`\alpha `$ lies on $`\pi B^t`$, then the subsegment of $`\pi B^t`$ from $`\pi a`$ to $`x`$ misses $`\mathrm{\Delta }`$, for otherwise the convexity of $`\mathrm{\Delta }`$ would force $`x\in \mathrm{\Delta }`$. We may regard the retraction of $`\alpha `$ along geodesics to $`\pi a`$ as a homotopy rel endpoints between $`\alpha `$ and the path $`\beta `$ which travels along $`\pi B^0`$ from $`x_0`$ to $`\pi a`$ and then along $`\pi B^1`$ from $`\pi a`$ to $`x_1`$. Of course $`\beta `$ lifts to a path $`\stackrel{~}{\beta }`$ from $`p`$ to $`q`$, and since the homotopy misses $`\mathrm{\Delta }`$ it may also be lifted. Therefore there is a lift $`\stackrel{~}{\alpha }`$ of $`\alpha `$ from $`p`$ to $`q`$, with length $`l\le k`$, as desired. On the other hand, if $`\alpha `$ meets $`\mathrm{\Delta }`$ then the lifting is even easier. One defines $`\stackrel{~}{\alpha }`$ on $`\alpha ^{-1}(\mathrm{\Delta })`$ in the obvious way, and then one defines the rest of $`\stackrel{~}{\alpha }`$ by lifting each component of $`\alpha ^{-1}(\widehat{X}\setminus \mathrm{\Delta })`$ however one desires, subject to the conditions $`\stackrel{~}{\alpha }(0)=p`$ and $`\stackrel{~}{\alpha }(l)=q`$. 
Lemma 3.5. At most one geodesic joins any two given points of $`\widehat{Y}`$. Proof: Suppose $`x,y\in \widehat{Y}`$; we claim that there is at most one geodesic joining them. If either $`x`$ or $`y`$ lies in $`\mathrm{\Delta }`$ then lemma 3.2(iii) applies. If they are joined by a geodesic missing $`\mathrm{\Delta }`$ then its uniqueness follows from lemma 3.2(iv) and (ii). So it suffices to consider the case with $`x,y\in Y`$ such that every geodesic joining them meets $`\mathrm{\Delta }`$. Let $`\gamma `$ and $`\delta `$ be two such geodesics, meeting $`\mathrm{\Delta }`$ in points $`c`$ and $`d`$ respectively; we will prove $`\gamma =\delta `$. The (unique) geodesic triangle with vertices $`x`$, $`c`$ and $`y`$ satisfies CAT($`\kappa `$) by lemma 3.4. So does the geodesic triangle with vertices $`y`$, $`c`$ and $`d`$. We now apply Alexandrov’s subdivision lemma to the ‘bigon’ formed by $`\gamma `$ and $`\delta `$. Taking $`T`$ to be the triangle with edges $`\delta `$ and the subsegments of $`\gamma `$ joining $`c`$ to each of $`x`$ and $`y`$, and the altitude to be the geodesic joining $`c`$ and $`d`$, we see that $`T`$ satisfies CAT($`\kappa `$). Since $`\ell (\gamma )=\ell (\delta )`$, the comparison triangle degenerates to a segment, and the CAT($`\kappa `$) inequality immediately implies $`\gamma =\delta `$. The proofs of the two parts of theorem 3.1 are independent of each other. Proof of theorem 3.1(i): Cases (A)–(G) below show that various sorts of triangles in $`\widehat{Y}`$ satisfy CAT($`\kappa `$). These constitute a proof because every triangle is treated either by case (A) or by case (G). In each case $`T`$ is a triangle with vertices $`A`$, $`B`$ and $`C`$. For two points $`P`$ and $`Q`$ of $`\widehat{Y}`$ that are joined by a geodesic we write $`\overline{PQ}`$ for the geodesic joining them. 
For three points $`P`$, $`Q`$ and $`R`$ of $`\widehat{Y}`$, any two of which are joined by a geodesic, we write $`\triangle PQR`$ for the geodesic triangle with edges $`\overline{PQ}`$, $`\overline{QR}`$ and $`\overline{RP}`$. Figure 3.1 illustrates the arguments for a few of the cases. Most of the cases use Alexandrov’s lemma. Because $`\widehat{Y}`$ might not be a geodesic space we have to prove the existence of all the geodesics we introduce, which complicates the argument. At a first reading one should simply assume that all needed geodesics exist. Figure 3.1. Three of the cases in the proof of theorem 3.1(i). Each picture represents a triangle in $`\widehat{Y}`$. The bold letters in case (E) indicate the earlier cases to which the problem is reduced. (A) Suppose no altitude from $`A`$ meets $`\mathrm{\Delta }`$. The set of points of $`\overline{BC}`$ to which there is a geodesic from $`A`$ is nonempty because it contains $`C`$. Because no altitude from $`A`$ meets $`\mathrm{\Delta }`$, this set is open by lemma 3.2(v). It is also closed (lemma 3.3), hence all of $`\overline{BC}`$. For each $`P`$ in $`\overline{BC}`$, let $`\gamma _P:[0,1]\to \widehat{Y}`$ be a parameterization proportional to arclength of the geodesic $`\overline{AP}`$ from $`A`$ to $`P`$. For each fixed $`s\in [0,1]`$ the map $`\mathrm{\Gamma }:\overline{BC}\times [0,1]\to \widehat{Y}`$ given by $`(P,s)\mapsto \gamma _P(s)`$ is continuous in $`P`$, by lemma 3.3. For each fixed $`P`$ the map is lipschitz as a function of $`s`$, with lipschitz constant $`Diam(T)`$. It follows that $`\mathrm{\Gamma }`$ is jointly continuous in $`P`$ and $`s`$. The fact that $`T`$ satisfies CAT($`\kappa `$) now follows from a standard subdivision argument, like that of \[12, p. 328\]. (B) Suppose that the only altitude from $`A`$ meeting $`\mathrm{\Delta }`$ is $`\overline{AB}`$. 
If $`B=C`$ then $`T`$ degenerates to a segment (by the uniqueness of geodesics) and therefore automatically satisfies CAT($`\kappa `$). If $`B\ne C`$ then arguing as in the previous case we see that each point $`P`$ of $`\overline{BC}`$ is joined with $`A`$ by a geodesic. We choose a sequence of points $`B_n`$ of $`\overline{BC}\setminus \{B\}`$ approaching $`B`$. For each $`n`$, $`\triangle AB_nC`$ satisfies CAT($`\kappa `$) by case (A). By lemma 3.3, the geodesics $`\overline{AB_n}`$ converge uniformly to $`\overline{AB}`$. As a uniform limit of triangles that satisfy CAT($`\kappa `$), $`\triangle ABC`$ does also. (C) Suppose $`\mathrm{\Delta }`$ contains two vertices of $`T`$. This is lemma 3.4. (D) Suppose $`\mathrm{\Delta }`$ contains a vertex of $`T`$ and also a point of an opposite side. Then there is a geodesic joining these points, by lemma 3.2(iii). Subdivide $`T`$ along this altitude and apply case (C) to each of the resulting triangles. (E) Suppose $`\mathrm{\Delta }`$ contains a vertex (say $`B`$) of $`T`$. If $`\overline{AC}`$ meets $`\mathrm{\Delta }`$ then apply the previous case. So suppose $`\overline{AC}`$ misses $`\mathrm{\Delta }`$ and consider the set of points $`P`$ of $`\overline{BC}`$ that are not joined to $`A`$ by a geodesic that misses $`\mathrm{\Delta }`$. This set is closed by lemma 3.2(v) and nonempty because it contains $`B`$; we let $`B'`$ be a point of this set closest to $`C`$. Since $`B'\ne C`$ there is a sequence of points in the interior of $`\overline{B'C}`$ approaching $`B'`$, each of which is joined with $`A`$ by a geodesic missing $`\mathrm{\Delta }`$. By lemma 3.3, there is a geodesic $`\overline{AB'}`$. By the construction of $`B'`$, $`\overline{AB'}`$ meets $`\mathrm{\Delta }`$. Subdivide $`T`$ along this altitude and apply case (B) to $`\triangle AB'C`$ and case (D) to $`\triangle ABB'`$, which of course reduces in turn to two applications of case (C). (F) Suppose $`\mathrm{\Delta }`$ contains a point of $`T`$. 
By lemma 3.2(iii) there is a geodesic joining this point with a vertex opposite it. Subdivide $`T`$ along this altitude and apply case (E) to each of the resulting triangles. (G) Suppose an altitude of $`T`$ meets $`\mathrm{\Delta }`$. Subdivide $`T`$ along this altitude and apply case (F) to each of the resulting triangles. Proof of theorem 3.1(ii): Suppose that $`\mathrm{\Delta }`$ is complete; we must show that $`\widehat{Y}`$ is geodesic. In light of lemma 3.2(iii) it suffices to show that any two points $`x,z`$ of $`Y`$ are joined by a geodesic. We write $`D`$ for $`d(x,z)`$. Suppose first that there exists a sequence $`y_i`$ in $`\mathrm{\Delta }`$ such that the sequence $`d(x,y_i)+d(y_i,z)`$ converges to $`D`$. Then for each $`i`$ there is a geodesic $`\alpha _i`$ (resp. $`\beta _i`$) from $`x`$ (resp. $`z`$) to $`y_i`$ and we write $`a_i`$ (resp. $`b_i`$) for its length. By passing to a subsequence we may suppose that the $`a_i`$ converge to a limit, say $`a`$. Then the $`b_i`$ converge to $`b=D-a`$. Since neither $`x`$ nor $`z`$ lies in the closed set $`\mathrm{\Delta }`$, the $`a_i`$ and $`b_i`$ are bounded away from $`0`$, so $`a>0`$ and $`b>0`$. We will show that the $`y_i`$ form a Cauchy sequence. We let $`\delta >0`$ be sufficiently small, by which we mean that $`\delta <a`$, $`\delta <b`$, $`a+\delta <\pi /(2\sqrt{\kappa })`$ and $`b+\delta <\pi /(2\sqrt{\kappa })`$. This is possible because each $`a_i`$ and $`b_i`$, hence each of $`a`$ and $`b`$, is bounded above by $`Diam(\widehat{X})<\pi /(2\sqrt{\kappa })`$. For such $`\delta `$, let $`Y_\delta =\{y_i:|a_i-a|<\delta \text{ and }|b_i-b|<\delta \}`$. Now suppose $`y_i,y_j\in Y_\delta `$ and let $`\gamma `$ be the geodesic joining them. Since $`\gamma `$ lies in $`\mathrm{\Delta }`$, the geodesic triangle formed by $`\alpha _i`$, $`\alpha _j`$ and $`\gamma `$ satisfies CAT($`\kappa `$), by lemma 3.4. 
Similarly, the triangle formed by $`\beta _i`$, $`\beta _j`$ and $`\gamma `$ also satisfies CAT($`\kappa `$). We will show that if $`\gamma `$ were very long then its midpoint would be problematic. Let $`A`$ be a closed annulus in $`M_\kappa ^2`$ with inner radius $`a-\delta `$ and outer radius $`a+\delta `$. The conditions we have imposed on $`\delta `$ guarantee that the inner radius is positive and (if $`\kappa >0`$) that $`A`$ lies in an open hemisphere. Let $`f(\delta )`$ be the maximum of the lengths of geodesic segments of $`M_\kappa ^2`$ that lie entirely in $`A`$. We observe that $`f(\delta )`$ tends to $`0`$ as $`\delta `$ does. Also, given any geodesic of $`M_\kappa ^2`$ with length $`>2f(\delta )`$ and endpoints in $`A`$, its midpoint does not lie in $`A`$ and hence lies at distance $`<a`$ from the center of $`A`$. (By ‘the’ center of $`A`$ when $`\kappa >0`$ we mean the center closer to $`A`$.) We define $`g(\delta )`$ in a similar way, with $`b`$ in place of $`a`$. Now, if $`\gamma `$ were longer than $`\mathrm{max}(2f(\delta ),2g(\delta ))`$, then by the CAT($`\kappa `$) inequalities its midpoint $`y'`$ would satisfy $`d(x,y')<a`$ and $`d(y',z)<b`$, a contradiction of the fact that $`d(x,z)=a+b`$. We have shown that any two elements of $`Y_\delta `$ have distance bounded by $`\mathrm{max}(2f(\delta ),2g(\delta ))`$. Since this bound tends to $`0`$ as $`\delta `$ does, the $`y_i`$ form a Cauchy sequence and hence converge. Since the limit lies in $`\mathrm{\Delta }`$, there are geodesics joining it with $`x`$ and with $`z`$. Concatenating these yields the required geodesic. Now suppose that there is no such sequence $`y_i`$. Then there is a positive number $`k`$ such that $`d(x,y)+d(y,z)>D+k`$ for all $`y`$ in $`\mathrm{\Delta }`$, and there is also a sequence of paths in $`Y`$ from $`x`$ to $`z`$ with lengths tending to $`D`$. From the first of these facts we deduce that no path from $`x`$ to $`z`$ of length $`<D+k`$ meets $`\mathrm{\Delta }`$. 
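In the Euclidean case $`\kappa =0`$ one can make $`f(\delta )`$ explicit: the longest segment lying in the annulus is a chord of the outer circle tangent to the inner circle, of length $`2\sqrt{(a+\delta )^2-(a-\delta )^2}=4\sqrt{a\delta }`$, which indeed tends to $`0`$ with $`\delta `$. The following numerical check is our own illustration, not part of the proof:

```python
import math, random

def f(a, delta):
    # Length of the longest Euclidean segment contained in the annulus
    # a - delta <= |p| <= a + delta: a chord of the outer circle that is
    # tangent to the inner circle.
    return 2.0 * math.sqrt((a + delta) ** 2 - (a - delta) ** 2)

a = 1.0
assert f(a, 1e-2) < f(a, 1e-1)   # f is increasing in delta
assert f(a, 1e-8) < 1e-3         # and tends to 0 with delta

# Any chord with endpoints in the annulus and length > f(delta) has its
# midpoint strictly inside the inner circle, hence at distance < a from
# the center, exactly as used in the proof.  Check on random chords:
random.seed(0)
delta = 0.05
for _ in range(10000):
    p, q = [(r * math.cos(t), r * math.sin(t))
            for r, t in [(random.uniform(a - delta, a + delta),
                          random.uniform(0.0, 2 * math.pi)) for _ in range(2)]]
    if math.dist(p, q) > f(a, delta):
        mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
        assert math.hypot(*mid) < a - delta + 1e-9
print("annulus checks passed")
```

The midpoint bound follows from the parallelogram law: $`|m|^2=(|p|^2+|q|^2)/2-L^2/4`$, so $`L>f(\delta )`$ forces $`|m|<a-\delta `$.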
So let $`\gamma :[0,1]\to Y`$ be a path from $`x`$ to $`z`$ of length $`<D+k`$ and let $`U`$ be the set of $`t\in [0,1]`$ for which there is not only a geodesic of $`\widehat{Y}`$ from $`x`$ to $`\gamma (t)`$ but even one that misses $`\mathrm{\Delta }`$. The theorem follows because $`U`$ contains $`0`$ and is open (lemma 3.2(v)) and closed. To see that it is closed, let a sequence $`t_i`$ in $`U`$ converge to a point $`t`$ of $`[0,1]`$, and let $`\beta _i`$ be a geodesic from $`x`$ to $`\gamma (t_i)`$ that misses $`\mathrm{\Delta }`$. By lemma 3.3, the $`\beta _i`$ converge to a geodesic $`\beta `$ from $`x`$ to $`\gamma (t)`$. The concatenation of $`\beta `$ and $`\gamma |_{[t,1]}`$ has length bounded by that of $`\gamma `$, which is less than $`D+k`$. It follows that the concatenation does not meet $`\mathrm{\Delta }`$. In particular, $`\beta `$ misses $`\mathrm{\Delta }`$ and hence $`t\in U`$. Example: In some circumstances one can show that a branched cover is CAT($`\kappa `$), even when the branch locus is not complete. To illustrate this, we take $`\widehat{X}`$ to be the open unit ball in $`R^3`$ and $`\mathrm{\Delta }`$ to be a diameter. With $`Y`$ as the universal cover of $`X=\widehat{X}\setminus \mathrm{\Delta }`$, we can deduce that $`\widehat{Y}=Y\cup \mathrm{\Delta }`$ is geodesic, even though $`\mathrm{\Delta }`$ is not complete. One simply takes $`\overline{\mathrm{\Delta }}`$ to be the metric completion of $`\mathrm{\Delta }`$, $`\overline{X}=X\cup \overline{\mathrm{\Delta }}`$, and considers the branched cover of $`\overline{X}`$ over $`\overline{\mathrm{\Delta }}`$, where $`X=\overline{X}\setminus \overline{\mathrm{\Delta }}`$ is the same as before, $`Y`$ is the universal cover of $`X`$, and $`\overline{Y}=Y\cup \overline{\mathrm{\Delta }}`$. That is, we add the endpoints of the diameter, then remove them along with the diameter, take the cover as before, and then glue the diameter and its endpoints back in. Theorem 3.1 shows that $`\overline{Y}`$ is CAT(0). 
Then, as an open ball in the CAT(0) space $`\overline{Y}`$, $`\widehat{Y}`$ is convex and hence also CAT(0). We now present a local form of theorem 3.1. The main new feature is that the projection map $`\widehat{Y}\to \widehat{X}`$ is no longer required to be 1-1 on the branch locus. It also allows us to dispense with the explicit diameter bounds for $`\widehat{X}`$ and $`\widehat{Y}`$. We say that a metric space $`\mathrm{\Delta }`$ is locally complete if each of its points has a neighborhood whose closure is metrically complete. This is equivalent to $`\mathrm{\Delta }`$ being an open subset of its completion. The pathological local properties of the second example following theorem 3.1 stem from the fact that the branch locus used there is not locally complete. If $`\widehat{X}`$ is a metric space and $`\mathrm{\Delta }\subset \widehat{X}`$ then we say that $`\mathrm{\Delta }`$ is locally convex (in $`\widehat{X}`$) if each point has a neighborhood $`V`$ such that $`V\cap \mathrm{\Delta }`$ is convex in $`\widehat{X}`$. Theorem 3.6. Suppose $`\widehat{X}`$ is a metric space of curvature $`\le \kappa `$ for some $`\kappa \in R`$, and that $`\mathrm{\Delta }`$ is a locally convex, locally complete subset of $`\widehat{X}`$. Suppose $`\widehat{Y}`$ is a metric space and that $`\pi :\widehat{Y}\to \widehat{X}`$ has the following properties. First, each element of $`\stackrel{~}{\mathrm{\Delta }}=\pi ^{-1}(\mathrm{\Delta })`$ has a neighborhood $`V`$ such that $`\pi |_V`$ is a simple branched cover of its image $`\pi (V)`$, over $`\pi (V)\cap \mathrm{\Delta }`$. Second, $`\pi `$ is a local isometry on $`Y=\widehat{Y}\setminus \stackrel{~}{\mathrm{\Delta }}`$. Then $`\widehat{Y}`$ has curvature $`\le \kappa `$. We omit the proof because we do not need the result and the argument is a straightforward application of theorem 3.1 and the idea used in the example above. 4. 
Iterated branched covers of Riemannian manifolds In this section we define precisely what we mean by a branched cover which is locally an iterated branched cover of a manifold over a family of mutually orthogonal totally geodesic submanifolds. Then we show that such a branched cover satisfies the same upper bounds on local curvature as the base manifold. We prove this only in the case of nonpositive curvature, but we indicate what else is needed in the general case. We say that a collection $`S_0`$ of subspaces of a real vector space $`A`$ is normal if the intersection of any $`k\ge 1`$ members of $`S_0`$ has codimension $`2k`$. This means that each subspace has codimension 2 and that they are as transverse as possible to each other. The basic example is a subset of the coordinate hyperplanes in $`C^n`$, with $`A`$ being the underlying real vector space. This is essentially the only example, in the following sense. If $`S_1,\dots ,S_n`$ are the elements of $`S_0`$ then we may introduce a basis $`w_1,\dots ,w_m,x_1,y_1,\dots ,x_n,y_n`$ of $`A`$ such that each $`S_i`$ is the span of $`w_1,\dots ,w_m`$ and those $`x_j`$ and $`y_j`$ with $`j\ne i`$. We will write $`S`$ for the union of the elements of $`S_0`$. Now suppose $`H_0`$ is a family of immersed submanifolds of a Riemannian manifold $`\widehat{M}`$ with union $`H`$. We say that $`H_0`$ is normal at $`x\in \widehat{M}`$ if there is a family $`S_0`$ of orthogonal subspaces of $`T_x\widehat{M}`$ that are normal in the sense above and have the following property. We require that there be an open ball $`U`$ about $`0`$ in $`T_x\widehat{M}`$ which the exponential map $`\mathrm{exp}_x`$ carries diffeomorphically onto its image $`V`$, such that $`V\cap H=\mathrm{exp}_x(U\cap S)`$, and such that $`\mathrm{exp}_x(S\cap U)`$ is a convex subset of $`V`$ for each $`S\in S_0`$. We say that $`H_0`$ is normal if it is normal at each $`x\in \widehat{M}`$. 
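For the basic example of coordinate hyperplanes in $`C^n`$, the codimension condition can be verified mechanically by viewing $`C^n`$ as $`R^{2n}`$: each hyperplane $`z_i=0`$ imposes two real linear constraints, and the constraints for any $`k`$ of them are jointly independent. A quick sketch (our own illustration, not from the paper):

```python
def rank(rows):
    # Rank of a real matrix, given as a list of rows, by Gaussian elimination.
    rows = [list(r) for r in rows]
    rk, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rk, len(rows))
                      if abs(rows[i][col]) > 1e-12), None)
        if pivot is None:
            continue
        rows[rk], rows[pivot] = rows[pivot], rows[rk]
        for i in range(len(rows)):
            if i != rk and abs(rows[i][col]) > 1e-12:
                factor = rows[i][col] / rows[rk][col]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def intersection_dim(indices, n):
    # Real dimension of the intersection of the complex hyperplanes
    # { z_i = 0 } for i in `indices`, inside C^n = R^(2n) with real
    # coordinates (x_1, y_1, ..., x_n, y_n): 2n minus the rank of the
    # stacked constraints x_i = 0, y_i = 0.
    constraints = []
    for i in indices:
        for offset in (0, 1):  # the x_i and y_i constraints
            v = [0.0] * (2 * n)
            v[2 * i + offset] = 1.0
            constraints.append(v)
    return 2 * n - rank(constraints)

# Any k >= 1 of the hyperplanes intersect in real codimension exactly 2k,
# which is the normality condition for this family.
n = 4
for k in range(1, n + 1):
    assert 2 * n - intersection_dim(list(range(k)), n) == 2 * k
print("normality checks passed")
```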
In this case, each element of $`H_0`$ is totally geodesic, distinct elements of $`H_0`$ meet orthogonally everywhere along their intersection, and each self-intersection of an element of $`H_0`$ is also orthogonal. Let $`x\in \widehat{M}`$, $`U`$, $`V`$, $`S`$ and $`H`$ be as above, and write $`S_1,\dots ,S_n`$ for the elements of $`S_0`$. Then $$\pi _1(V\setminus H)\cong \pi _1(U\setminus S)\cong \pi _1(T_x\widehat{M}\setminus S)\cong Z^n;$$ the first two isomorphisms are obvious and canonical, and the last follows from the explicit description of $`S_0`$ given above. That is, $`T_x\widehat{M}\setminus S`$ is a product of $`n`$ punctured planes and a Euclidean space. We choose generators $`\sigma _1,\dots ,\sigma _n`$ for $`\pi _1(T_x\widehat{M}\setminus S)`$ by taking a representative for $`\sigma _i`$ to be a simple circular loop that links $`S_i`$ but none of the other $`S_j`$. We say that a connected covering space of $`T_x\widehat{M}\setminus S`$ is standard if the subgroup of $`Z^n`$ to which it corresponds is generated by $`\sigma _1^{d_1},\dots ,\sigma _n^{d_n}`$ for some $`d_1,\dots ,d_n\in Z`$. We apply the same terminology to the corresponding cover of $`V\setminus H`$. In particular, the universal cover is standard. An arbitrary covering space of $`V\setminus H`$ is called standard if each of its components is. Now suppose $`\widehat{M}`$ is a Riemannian manifold and $`H_0`$ is a normal family of immersed submanifolds. We write $`M`$ for $`\widehat{M}\setminus H`$. If $`\pi :N\to M`$ is a covering space then we say that $`N`$ is a standard cover of $`M`$ if for each $`x\in \widehat{M}`$ with $`V`$ as above, $`\pi :\pi ^{-1}(V\setminus H)\to V\setminus H`$ is a standard covering in the above sense. In this case, we take $`\widehat{N}`$ to be a certain subset of the metric completion of $`N`$: those points of the completion which map to points of $`\widehat{M}`$ under the completion of $`\pi `$. In particular, if $`\widehat{M}`$ is complete then $`\widehat{N}`$ is the completion of $`N`$. 
We write $`\widehat{\pi }`$ for the natural extension $`\widehat{N}\to \widehat{M}`$ of $`\pi `$, and we say that this map is a standard branched covering of $`\widehat{M}`$ over $`H_0`$. The simplest example of a standard branched cover is when $`\widehat{M}=C^n`$, $`H_0`$ is the set of coordinate hyperplanes, $`M`$ is their complement and $`\pi :N\to M`$ is the covering space with $`N=C^n\setminus H`$ and $`\pi :(z_1,\dots ,z_n)\mapsto (z_1^{d_1},\dots ,z_n^{d_n})`$. The generalization to the case of locally infinite branching requires the more complicated discussion in terms of metric completions of covering spaces. It is also possible that different components of the preimage of $`V\setminus H`$ are inequivalent covering spaces of $`V\setminus H`$. This can happen when $`N\to M`$ is an irregular cover. We will need the following two general lemmas, whose proofs should be skipped on a first reading. The first simplifies the task of establishing local curvature conditions and the second says that one may often ignore the added points when taking a metric completion of a length space. Lemma 4.1. Let $`X`$ be a length space with metric $`d`$. Let $`Y`$ be a path-connected subset of $`X`$ with the property that $$\delta (y,z)=\mathrm{inf}\{\ell (\gamma ):\gamma \text{ a path in }Y\text{ joining }y\text{ and }z\}$$ is finite for all $`y,z\in Y`$, so that $`(Y,\delta )`$ is a length space. Suppose also that $`(Y,\delta )`$ is CAT($`\kappa `$) for some $`\kappa \in R`$. Then any point of the interior of $`Y`$ admits a neighborhood which is convex in $`X`$ and also CAT($`\kappa `$). Proof: We will define three open balls; all are balls with respect to the metric $`d`$, rather than $`\delta `$. Suppose $`x`$ lies in the interior of $`Y`$ and that $`U`$ is an open ball centered at $`x`$ and lying in $`Y`$. Let $`r`$ be the radius of $`U`$, and let $`V`$ be the open ball with center $`x`$ and radius $`r/2`$. A simple argument shows that $`d(y,z)=\delta (y,z)`$ for all $`y,z\in V`$. 
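In this model example the subgroups $`\sigma _1^{d_1},\dots ,\sigma _n^{d_n}`$ can be seen concretely in one variable: under $`z\mapsto z^d`$, a loop winding once around the branch locus $`z=0`$ lifts to a path that closes up only after $`d`$ circuits. A sketch of this (illustrative only; the function names are ours):

```python
import cmath

d = 3  # branching index over z = 0

def sigma(t):
    # A representative of the generator of pi_1: the unit circle,
    # traversed t times around as t runs over [0, 1].
    return cmath.exp(2j * cmath.pi * t)

def lift(t):
    # The continuous lift of sigma under z -> z^d with lift(0) = 1.
    return cmath.exp(2j * cmath.pi * t / d)

# lift really is a lift: applying the covering map recovers sigma.
assert abs(lift(0.37) ** d - sigma(0.37)) < 1e-12

# One circuit of sigma does not lift to a loop ...
assert abs(lift(1.0) - lift(0.0)) > 0.1
# ... but d circuits do, matching the subgroup generated by sigma^d.
assert abs(lift(float(d)) - lift(0.0)) < 1e-12
print("lifting checks passed")
```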
Let $`W`$ be the open ball with center $`x`$ and radius $`r'=\mathrm{min}(r/4,\pi /\sqrt{\kappa })`$. Any two points $`y`$, $`z`$ of $`W`$ are joined by a path $`\gamma `$ in $`Y`$ that is a geodesic with respect to $`\delta `$. Now, $`\gamma `$ lies in $`V`$ by the triangle inequality, so $`\gamma `$ is also a geodesic with respect to $`d`$, and $`d(x,t)=\delta (x,t)`$ for all points $`t`$ of $`\gamma `$. By the CAT($`\kappa `$) inequality in $`(Y,\delta )`$, applied to a triangle obtained by joining $`y`$ and $`z`$ to $`x`$ with geodesics, we see that $`\delta (x,t)<r'`$ for all $`t`$. This shows that $`\gamma `$ lies in $`W`$. Since the same argument applies to every geodesic of $`X`$ joining $`y`$ and $`z`$, we see that $`W`$ is convex in $`X`$. Since $`d`$ and $`\delta `$ coincide on $`W`$, $`W`$ is CAT($`\kappa `$). We say that the interior of a path $`\gamma `$ with domain $`[a,b]`$ lies in a subset $`Z`$ of $`\widehat{X}`$ if $`\gamma ((a,b))\subset Z`$. Lemma 4.2. Let $`X`$ be a length space with metric $`d`$ and let $`\widehat{X}`$ be its metric completion. For any $`x,y\in \widehat{X}`$ there are paths joining $`x`$ and $`y`$ with interiors in $`X`$ and lengths arbitrarily close to $`d(x,y)`$. Furthermore, the intersection of $`X`$ with any open ball in $`\widehat{X}`$ is path-connected. Proof: Choose a sequence of points $`x_i\in X`$ that tend to $`x`$, such that $`\sum _id(x_i,x_{i+1})<\infty `$. By choosing short paths in $`X`$ joining each $`x_i`$ to $`x_{i+1}`$ and concatenating them, we obtain an open path $`\gamma :[0,1)\to X`$ of finite length, which can be extended to $`[0,1]`$ by defining $`\gamma (1)=x`$. The extension is continuous because $`\gamma `$ has finite length. By taking subpaths of $`\gamma `$ we see that for all $`\epsilon >0`$ there is a path of length $`<\epsilon `$ from some point $`x'`$ of $`X`$ to $`x`$, with interior in $`X`$. The same result holds with $`y`$ in place of $`x`$. 
Given $`\epsilon >0`$, choose such paths of lengths $`<\epsilon /4`$. Then $`x'`$ and $`y'`$ may be joined by a path in $`X`$ of length $`<d(x,y)+2\epsilon /4`$. Putting the three paths together establishes the first claim. Now suppose $`U\subset \widehat{X}`$ is the open ball of radius $`r`$ and center $`x`$, and that $`y,z\in U\cap X`$. By the above, there are paths from $`y`$ and $`z`$ to $`x`$ with lengths $`<r`$ and interiors in $`X`$. These paths obviously lie in $`U`$. If $`y'`$ and $`z'`$ are points in $`U\cap X`$ on these paths at distance $`<r/2`$ from $`x`$, then they may be joined by a path in $`X`$ of length $`<r`$. Such a path lies in $`U`$ by the triangle inequality. We have joined $`y`$ to $`y'`$, $`y'`$ to $`z'`$ and $`z'`$ to $`z`$ by paths in $`U\cap X`$, establishing the second claim. Theorem 4.3. If a Riemannian manifold $`\widehat{M}`$ has sectional curvature bounded above by $`\kappa \le 0`$ and $`\widehat{\pi }:\widehat{N}\to \widehat{M}`$ is a standard branched cover over a normal family $`H_0`$ of immersed submanifolds of $`\widehat{M}`$, then $`\widehat{N}`$ is locally CAT($`\kappa `$). Remark: For a global version of this result see theorem 5.1. Proof: We will write $`\stackrel{~}{H}`$ for $`\widehat{\pi }^{-1}(H)`$. Let $`\stackrel{~}{x}\in \widehat{N}`$, $`x=\widehat{\pi }(\stackrel{~}{x})`$, and let $`U`$, $`V`$, $`S_0`$ and $`S`$ be as in the definition of the normality of $`H_0`$ at $`x`$. We may suppose without loss of generality that $`S_0\ne \mathrm{\varnothing }`$. Let $`r`$ be the common radius of $`U`$ and $`V`$. Without loss of generality we may take $`r`$ small enough so that $`V`$ and all smaller balls centered at $`x`$ are convex in $`\widehat{M}`$. We write $`S_1,\dots ,S_n`$ for the elements of $`S_0`$, and $`T_i`$ for $`\mathrm{exp}_x(U\cap S_i)\subset V`$. We choose $`0<r'<r`$ such that the orthogonal projection maps from the closed $`r'`$-ball $`B`$ about $`x`$ to the $`T_i`$ are well-behaved. 
By this we mean that for each $`i`$, there is a fiberwise starshaped (about $`0`$) set in the restriction to $`T_i\cap B`$ of the normal bundle of $`T_i`$, which is carried diffeomorphically onto $`B`$ by the exponential map. The orthogonal projection maps $`B\to B\cap T_i`$ are then obtained by applying the inverse of this diffeomorphism followed by the natural projection of the normal bundle to $`T_i`$. These maps will not be used until late in the proof. For $`t\in [0,1]`$ and $`p\in V`$ let $`t.p`$ denote the point in $`V`$ on the radial segment from $`x`$ to $`p`$ at distance $`td(x,p)`$ from $`x`$. Then the radial homotopy $`\mathrm{\Gamma }:[0,1]\times V\to V`$ given by $`(t,p)\mapsto (1-t).p`$ is a deformation retraction of $`V`$ to $`\{x\}`$. Observe that if $`p\in V\setminus H`$ then $`t.p`$ also lies in $`V\setminus H`$ for all $`t\ne 0`$. We may therefore lift $`\mathrm{\Gamma }|_{[0,1)\times (V\setminus H)}`$ to an ‘open homotopy’ $`[0,1)\times \pi ^{-1}(V\setminus H)\to \pi ^{-1}(V\setminus H)`$ in the obvious way. This is a lipschitz map and therefore extends to a homotopy $`\stackrel{~}{\mathrm{\Gamma }}:[0,1]\times \widehat{\pi }^{-1}(V)\to \widehat{\pi }^{-1}(V)`$. We call this the radial homotopy; its tracks are geodesics of $`\widehat{N}`$ and project to radial segments of $`V`$. The first consequence of this analysis is that $`\widehat{\pi }^{-1}(V)`$ is the union of the open $`r`$-balls about the points of $`\widehat{\pi }^{-1}(x)`$. The second consequence is that distinct preimages of $`x`$ lie at distance $`\ge 2r`$ from each other. For a path of length $`<2r`$ between two preimages would lie in $`\widehat{\pi }^{-1}(V)`$ and then the deformation retraction of $`\widehat{\pi }^{-1}(V)`$ to $`\widehat{\pi }^{-1}(x)`$ shows that the endpoints of the path coincide. This implies that the open $`r`$-ball $`\stackrel{~}{V}`$ about $`\stackrel{~}{x}`$ is a component of $`\widehat{\pi }^{-1}(V)`$, and therefore the restriction of $`\pi `$ to $`\stackrel{~}{V}\setminus \stackrel{~}{H}`$ is a covering map. We note that $`\stackrel{~}{V}\setminus \stackrel{~}{H}`$ is connected, by lemma 4.2. 
The radial homotopy also shows that the closed $`r'`$-ball $`\stackrel{~}{B}`$ about $`\stackrel{~}{x}`$ is the preimage in $`\stackrel{~}{V}`$ of $`B`$, that $`\stackrel{~}{B}`$ is path-connected, and that for all $`\stackrel{~}{y},\stackrel{~}{z}\in \stackrel{~}{B}`$, $$\delta (\stackrel{~}{y},\stackrel{~}{z})=\mathrm{inf}\{\ell (\gamma ):\gamma \text{ a path in }\stackrel{~}{B}\text{ joining }\stackrel{~}{y}\text{ and }\stackrel{~}{z}\}$$ is finite. Finally, since $`\stackrel{~}{V}\setminus \stackrel{~}{H}`$ is connected, the radial homotopy shows that $`\stackrel{~}{B}\setminus \stackrel{~}{H}`$ is also connected. To show that $`\stackrel{~}{x}`$ admits a convex CAT($`\kappa `$) neighborhood it suffices by lemma 4.1 to show that $`(\stackrel{~}{B},\delta )`$ is CAT($`\kappa `$). We will prove this by realizing $`\stackrel{~}{B}`$ as an iterated simple branched cover of $`B`$. We let $`\sigma _1,\dots ,\sigma _n`$ denote generators for $`G=\pi _1(B\setminus H)=\pi _1(T_x\widehat{M}\setminus S)\cong Z^n`$ of the sort discussed above. Since $`\pi :N\to M`$ is a standard covering, there are $`d_1,\dots ,d_n\in Z`$ such that the subgroup of $`G`$ associated to the covering $`\stackrel{~}{B}\setminus \stackrel{~}{H}\to B\setminus H`$ is generated by $`\sigma _1^{d_1},\dots ,\sigma _n^{d_n}`$. For each $`k=0,\dots ,n`$, let $`G_k`$ be the subgroup generated by $`\sigma _1^{d_1},\dots ,\sigma _k^{d_k},\sigma _{k+1},\dots ,\sigma _n`$. We let $`B_k`$ be the metric completion of the cover of $`B\setminus H`$ associated to $`G_k`$, equipped with the natural path metric. Then $`B_k`$ is the standard branched cover of $`B`$, branched over the $`T_i\cap B`$, with branching indices $`d_1,\dots ,d_k,1,\dots ,1`$. In particular, $`B_0=B`$ and $`B_n=(\stackrel{~}{B},\delta )`$. We write $`p_k`$ for the natural projection $`B_k\to B`$ obtained by extending the covering map to a map of metric completions. 
Because $`G_{k+1}G_k`$, there is a covering map $`B_{k+1}p_{k+1}^1(H)B_kp_k^1(H)`$ whose completion $`q_{k+1}:B_{k+1}B_k`$ satisfies $`p_kq_{k+1}=p_{k+1}`$. For each $`k=0,\mathrm{},n1`$ we let $`\mathrm{\Delta }_k=p_k^1(T_{k+1})`$. We will show that $`q_{k+1}`$ is a simple branched covering map with branch locus $`\mathrm{\Delta }_k`$, for each $`k=0,\mathrm{},n1`$. First we claim that $`q_{k+1}`$ carries $`q_{k+1}^1(\mathrm{\Delta }_k)`$ bijectively to $`\mathrm{\Delta }_k`$ and is a covering map on the complement of $`q_{k+1}^1(\mathrm{\Delta }_k)`$. To see this, observe that $`B`$ is bilipschitz to the metric product $`A_1\times \mathrm{}\times A_n\times D`$ of $`n`$ closed Euclidean disks and a closed Euclidean ball, such that $`T_iB`$ is identified with the set of points in the product whose $`i`$th coordinate is the center (say $`0`$) of $`A_i`$. This identifies each $`B_k`$ with $`\stackrel{~}{A}_1\times \mathrm{}\times \stackrel{~}{A}_k\times A_{k+1}\times \mathrm{}\times A_n\times D`$, where $`\stackrel{~}{A}_j`$ is the metric completion of the $`d_j`$-fold cover of $`A_j\{0\}`$ (or the universal cover if $`d_j=0`$). The metric completion of this cover of $`A_j\{0\}`$ is obtained from the cover by adjoining a single point, which lies over $`0`$. Then $`q_{k+1}:B_{k+1}B_k`$ is given by the branched cover $`\stackrel{~}{A}_{k+1}A_{k+1}`$ and the identity maps on $`\stackrel{~}{A}_1,\mathrm{},\stackrel{~}{A}_k`$, $`A_{k+2},\mathrm{},A_n`$, and $`D`$. The claim is now obvious. Now we show that $`q_{k+1}`$ is a local isometry away from $`q_{k+1}^1(\mathrm{\Delta }_k)`$. If $`\stackrel{~}{y}B_{k+1}q_{k+1}^1(\mathrm{\Delta }_k)`$ then by the previous paragraph there is an $`s>0`$ and a neighborhood $`E`$ of $`\stackrel{~}{y}`$ such that $`q_{k+1}`$ carries $`E`$ homeomorphically onto its image, which is the $`s`$-ball about $`q_{k+1}(\stackrel{~}{y})`$.
It is then easy to see that $`q_{k+1}`$ carries the open $`(s/2)`$-ball about $`\stackrel{~}{y}`$ isometrically onto its image. To complete the proof that $`q_{k+1}`$ is a simple branched cover, we need only show that $`d(\stackrel{~}{y},\stackrel{~}{z})=d(y,z)`$, when $`\stackrel{~}{y},\stackrel{~}{z}B_{k+1}`$ have images $`y`$ and $`z`$ under $`q_{k+1}`$ and at least one of $`y`$ and $`z`$ lies in $`\mathrm{\Delta }_k`$. Without loss of generality we may take $`z\mathrm{\Delta }_k`$, and by continuity it suffices to treat the case $`y\mathrm{\Delta }_k`$. It is obvious that $`d(y,z)d(\stackrel{~}{y},\stackrel{~}{z})`$. To see the converse, let $`\gamma _i`$ be a sequence of paths in $`B_k`$ from $`y`$ to $`z`$, with interiors in $`B_k\mathrm{\Delta }_k`$ and lengths approaching $`d(y,z)`$. This is possible by lemma 4.2. By lifting each path except for its final endpoint, and extending using completeness, we obtain paths $`\stackrel{~}{\gamma }_i`$ from $`\stackrel{~}{y}`$ to points of $`B_{k+1}`$ lying over $`z`$, with the same lengths as the $`\gamma _i`$. By the injectivity of $`q_{k+1}`$ on $`q_{k+1}^1(\mathrm{\Delta }_k)`$, all of these are paths from $`\stackrel{~}{y}`$ to $`\stackrel{~}{z}`$, establishing the claim. In order to use theorem 3.1 inductively, we will need to know that $`\mathrm{\Delta }_k`$ is a convex subset of $`B_k`$. We will use the orthogonal projections introduced earlier. Each of these projections $`BBT_j`$ may be realized by a deformation retraction along geodesics. The retraction is distance non-increasing, since each $`T_j`$ is totally geodesic and $`\widehat{M}`$ has sectional curvature $`0`$. Because $`T_1,\mathrm{},T_{k1}`$ are totally geodesic and orthogonal to $`T_k`$, the track of the deformation retraction to $`T_k`$ starting at a point outside $`T_1\mathrm{}T_{k1}`$ misses $`T_1\mathrm{}T_{k1}`$ entirely. 
Therefore the deformation lifts to a deformation retraction of $`B_kp_k^1(T_1\mathrm{}T_{k1})`$ to $`\mathrm{\Delta }_kp_k^1(T_1\mathrm{}T_{k1})`$. This extends to a distance nonincreasing retraction $`B_k\mathrm{\Delta }_k`$, which we will also call an orthogonal projection. Now we prove by simultaneous induction that $`B_k`$ is CAT($`\kappa `$) and that $`\mathrm{\Delta }_k`$ is convex in $`B_k`$. The fact that $`B_0=B`$ is CAT($`\kappa `$) follows from its convexity in $`V`$ and the fact that $`V`$ is CAT($`\kappa `$), which in turn follows from the proof of \[15, theorem 12\]. That theorem asserts that simply connected complete Riemannian manifolds of sectional curvature $`\kappa `$ are CAT($`\kappa `$), but the proof shows that the completeness condition may be replaced by the weaker condition that the manifold be geodesic. The convexity of $`\mathrm{\Delta }_0=T_1B`$ in $`B`$ follows from the convexity of $`T_1`$ in $`V`$. Now the inductive step is easy. If $`B_k`$ is CAT($`\kappa `$) and $`\mathrm{\Delta }_k`$ is convex in $`B_k`$ then $`B_{k+1}`$ is CAT($`\kappa `$) by theorem 3.1. In particular, geodesics in $`B_{k+1}`$ are unique. Then if $`\gamma `$ is a geodesic of $`B_{k+1}`$ with endpoints in $`\mathrm{\Delta }_{k+1}`$, the orthogonal projection to $`\mathrm{\Delta }_{k+1}`$ carries $`\gamma `$ to a path of length $`\mathrm{}(\gamma )`$ with the same endpoints. By the uniqueness of geodesics, $`\gamma `$ lies in $`\mathrm{\Delta }_{k+1}`$, so we have proven that $`\mathrm{\Delta }_{k+1}`$ is convex in $`B_{k+1}`$. The theorem follows by induction. Remark: We indicate here the additional work required to prove the theorem when $`\kappa >0`$. A minor point is that one should take $`r<\pi /4\sqrt{\kappa }`$, so that all of the $`B_k`$ have diameters $`<\pi /2\sqrt{\kappa }`$. 
A more substantial change is required where we used the fact that the orthogonal projection maps $`BBT_j`$ are distance non-increasing, because this fails in the presence of positive curvature. All that is important here is that the length of a path in $`B`$ with endpoints in $`T_k`$ does not increase under projection to $`T_k`$. Even this is not true, but we only need the result for paths of length $`<2r`$. One should choose $`r^{}`$ small enough so that any path in $`B`$ of length $`<2r`$ with endpoints in $`T_k`$ grows no longer under the projection to $`T_k`$. Presumably this can be done but I have not checked the details. Theorem 4.3 has been widely believed, but ours seems to be the first proof. The theorem overlaps partly with theorem 5.3 of , which considers locally finite branched covers of Riemannian manifolds over subsets considerably more complicated than mutually orthogonal submanifolds. Unfortunately there is a gap in the proof of that theorem which I do not know how to bridge (lemma 5.7 does not seem to follow from lemma 5.6). Nevertheless I regard the ‘infinitesimal’ CAT($`\kappa `$) condition (condition 3 of theorem 5.3) as very natural, and expect that the theorem not only holds but extends to the case of locally infinite branching.

5. Applications

In this section we solve the problems which motivated our investigation, concerning the asphericity of certain moduli spaces. By using known models for the moduli spaces of cubic surfaces in $`CP^3`$ and Enriques surfaces we will show that both of these spaces have contractible universal covers. In both cases a key ingredient is the following theorem, which is a sort of global version of theorem 4.3. Theorem 5.1. Let $`\widehat{M}`$ be a complete simply connected Riemannian manifold with sectional curvature bounded above by $`\kappa 0`$. Let $`H_0`$ be a family of complete submanifolds which are normal in the sense of section 4, and let $`H`$ be the union of the members of $`H_0`$.
Then the metric completion $`\widehat{N}`$ of the universal cover $`N`$ of $`\widehat{M}H`$ is CAT($`\kappa `$), and $`N`$ and $`\widehat{N}`$ are contractible. Corollary 5.2. The moduli space $`M`$ of smooth cubic surfaces in $`CP^3`$ is aspherical. Proof: We begin by recalling the main result of . We let $`\omega `$ be a primitive cube root of unity and set $`E=Z[\omega ]`$, a discrete subring of $`C`$. Let $`L`$ be the lattice $`E^5`$ equipped with the Hermitian inner product $$h(x,y)=-x_0\overline{y}_0+x_1\overline{y}_1+\mathrm{}+x_4\overline{y}_4.$$ Then the complex hyperbolic space $`CH^4`$ may be taken to be the set of lines in $`C^5`$ on which $`h`$ is negative-definite, so that $`CH^4`$ is a subset of $`CP^4`$. Let $`H_0`$ be the set of (complex) hyperplanes in $`CH^4`$ which are the orthogonal complements of those $`rL`$ with $`h(r,r)=1`$. Let $`\mathrm{\Gamma }`$ be the unitary group of $`L`$, which is obviously discrete in $`\mathrm{U}(4,1)`$. By , $`(CH^4H)/\mathrm{\Gamma }`$ is isomorphic as an orbifold to $`M`$. To see that $`H_0`$ is locally finite, observe that $`\mathrm{\Gamma }`$ contains a complex reflection in each element $`H`$ of $`H_0`$. That is, if $`H`$ corresponds to $`rL`$ with $`h(r,r)=1`$, then $`\mathrm{\Gamma }`$ contains an element fixing $`r^{}`$ (and hence $`H`$) pointwise and multiplying $`r`$ by $`-1`$. If $`H_0`$ failed to be locally finite then the existence of these reflections would contradict the discreteness of $`\mathrm{\Gamma }`$. Now consider two elements of $`H_0`$ that meet in $`CH^4`$, and suppose that they are associated to $`r,r^{}L`$. Since they meet, $`h`$ is positive-definite on the span of $`r`$ and $`r^{}`$. Since $`h(r,r)=h(r^{},r^{})=1`$, positive-definiteness requires $`|h(r,r^{})|<1`$. Since $`h(r,r^{})E`$ we must have $`h(r,r^{})=0`$. This shows that any two elements of $`H_0`$ that meet do so orthogonally.
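The final step — that $`h(r,r^{})E`$ and $`|h(r,r^{})|<1`$ force $`h(r,r^{})=0`$ — rests on the fact that $`0`$ is the only Eisenstein integer of modulus less than one, since $`|a+b\omega |^2=a^2ab+b^2`$ is a nonnegative ordinary integer. A quick numerical sketch of this fact (the helper name is ours, not from the paper):

```python
def eisenstein_norm_sq(a, b):
    # |a + b*omega|^2 with omega = exp(2*pi*i/3) equals a^2 - a*b + b^2,
    # a nonnegative integer whenever a and b are integers
    return a * a - a * b + b * b

# enumerate Eisenstein integers a + b*omega in a window and keep those with |z| < 1
small = [(a, b) for a in range(-5, 6) for b in range(-5, 6)
         if eisenstein_norm_sq(a, b) < 1]
print(small)  # [(0, 0)] -- the only element of Z[omega] of modulus < 1
```

Any nonzero Eisenstein integer therefore has modulus at least $`1`$, which is exactly what the orthogonality argument needs.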
Since $`CH^4`$ has negative sectional curvature, theorem 5.1 implies that $`CH^4H`$ has contractible universal cover. This is also the orbifold universal cover of $`(CH^4H)/\mathrm{\Gamma }`$, so the result follows. Corollary 5.3. The period space for smooth complex Enriques surfaces (defined below) is aspherical. Proof: This is similar to the previous proof. By the Torelli theorem for Enriques surfaces ( and ), the isomorphism classes of smooth complex Enriques surfaces are in 1-1 correspondence with the points of the period space $`(DH)/\mathrm{\Gamma }`$. Here $`D`$ is the symmetric space for $`O(2,10)`$, $`\mathrm{\Gamma }`$ is a certain discrete subgroup, and $`H_0`$ is a certain $`\mathrm{\Gamma }`$-invariant arrangement of complex hyperplanes. By , $`\mathrm{\Gamma }`$ may be taken to be the isometry group of the lattice $`L`$ which is $`Z^{12}`$ equipped with the inner product $$xy=x_1y_1+x_2y_2-x_3y_3-\mathrm{}-x_{12}y_{12}.$$ A concrete model for $`D`$ is the set of $`vLC`$ satisfying $`vv=0`$ and $`v\overline{v}>0`$. $`H_0`$ may be taken to be the set of (complex) hyperplanes in $`D`$ which are the orthogonal complements of the norm $`-1`$ vectors of $`L`$. The arguments of the previous proof show that $`H_0`$ is normal. As a symmetric space of noncompact type, $`D`$ has sectional curvature $`0`$, so that theorem 5.1 applies and $`DH`$ is aspherical. It follows that the period space, the orbifold $`(DH)/\mathrm{\Gamma }`$, is also aspherical. Remark: We have referred to the period space of Enriques surfaces rather than to a moduli space. This is because it is highly nontrivial to assemble the isomorphism classes of Enriques surfaces into a moduli space $`M`$. One way to do this is to equip the surfaces with suitable extra structure and then use geometric invariant theory, as in . Then $`M`$ has a natural topology and is identified with some finite cover $`C`$ of $`(DH)/\mathrm{\Gamma }`$.
Strictly speaking, one should also impose additional structure to make sure that $`C`$ is a manifold and not just an orbifold. The reason is that the orbifold structure on $`C`$ is not terribly relevant to $`M`$. (There are Enriques surfaces with infinitely many automorphisms, as well as those with only finitely many, so there is not much hope of a reasonable orbifold structure on any $`M`$.) But if $`C`$ is a manifold then the homeomorphism of $`C`$ with $`M`$ shows that $`M`$ is aspherical. Now we will prove theorem 5.1. In the proofs we will conform to the notation of section 4 by writing $`M`$ for $`\widehat{M}H`$, $`\pi `$ for the covering map $`NM`$, $`\widehat{\pi }`$ for its completion, and $`\stackrel{~}{H}`$ for $`\widehat{\pi }^1(H)\widehat{N}`$. Lemma 5.4. The map $`\pi `$ is a standard covering and the map $`\widehat{\pi }`$ is a standard branched cover over $`H`$. More precisely, suppose $`\stackrel{~}{x}\widehat{N}`$, $`x=\widehat{\pi }(\stackrel{~}{x})`$, $`V`$ is an open ball about $`x`$ that meets no element of $`H_0`$ except those passing through $`x`$, and $`\stackrel{~}{V}`$ is the ball of the same radius about $`\stackrel{~}{x}`$. Then $`\stackrel{~}{V}\stackrel{~}{H}`$ is a copy of the universal cover of $`VH`$. Proof: We prove only the last assertion. Let $`I_0`$ be the set of elements of $`H_0`$ which contain $`x`$, and let $`I`$ be their union. Then there is a natural sequence of group homomorphisms $$\pi _1(VI)\pi _1(\widehat{M}H)\pi _1(\widehat{M}I)\pi _1(VI).$$ The first and second maps are induced by inclusions of the indicated spaces and the third is induced by a retraction of $`\widehat{M}I`$ to $`VI`$ along geodesic rays based at $`x`$. The composition is obviously the identity map, which shows that $`\pi _1(VI)\pi _1(\widehat{M}H)`$ is injective. Therefore each component of the preimage in $`N`$ of $`VH`$ is a copy of the universal cover of $`VH`$.
By lemma 4.2, $`\stackrel{~}{V}\stackrel{~}{H}`$ is connected, and as in the proof of theorem 4.3 it is a component of $`\pi ^1(VH)`$. This completes the proof. Lemma 5.5. The inclusion $`N\widehat{N}`$ is a weak homotopy equivalence. Proof: First we show that for each $`\stackrel{~}{x}\widehat{N}`$ there is a homotopy of $`\widehat{N}`$ to itself that carries $`N`$ into itself and also some neighborhood of $`\stackrel{~}{x}`$ into $`N`$. We write $`n`$ for the number of hyperplanes passing through $`x=\pi (\stackrel{~}{x})`$. By the previous lemma and the ideas of the proof of theorem 4.3, there is a closed neighborhood $`\stackrel{~}{V}`$ of $`\stackrel{~}{x}`$ which is homeomorphic to the metric product $`(\stackrel{~}{A})^n\times D`$, where $`\stackrel{~}{A}`$ is the metric completion of the universal cover of a closed Euclidean disk minus its center and $`D`$ is a closed Euclidean ball. It is easy to see that $`\stackrel{~}{A}`$ is homeomorphic to a ‘wedge’ in the plane, by which we mean $$\stackrel{~}{A}\{(0,0)\}\left\{(x,y)R^2\text{ }\right|0<x,|y|<x,x^2+y^21\text{ }\}.$$ There is obviously a homotopy of $`\stackrel{~}{A}`$ into $`\stackrel{~}{A}\{(0,0)\}`$ which is supported on a small neighborhood of $`(0,0)`$. We obtain the desired homotopy of $`\widehat{N}`$ by applying this homotopy to each factor $`\stackrel{~}{A}`$ of $`\stackrel{~}{V}`$ and fixing each point of $`\widehat{N}\stackrel{~}{V}`$. Now, if $`f:S^k\widehat{N}`$ represents any element of the homotopy group $`\pi _k(\widehat{N})`$ then we may cover $`f(S^k)`$ with finitely many open sets, each of which is carried into $`N`$ by some homotopy of $`\widehat{N}`$ that also carries all of $`N`$ into $`N`$. Applying these homotopies one after another shows that $`f`$ is homotopic to a map $`S^kN`$. This shows that $`\pi _k(N)`$ surjects onto $`\pi _k(\widehat{N})`$ for all $`k`$. The same argument applied to balls rather than spheres shows that $`\pi _k(N)`$ also injects, completing the proof. 
Proof of theorem 5.1: By lemma 5.4, $`\widehat{N}`$ is a standard branched cover of $`\widehat{M}`$ over the normal family $`H_0`$. Since $`\widehat{M}`$ has sectional curvature $`\kappa 0`$, theorem 4.3 shows that $`\widehat{N}`$ is locally CAT($`\kappa `$). Since $`N`$ is simply connected, lemma 5.5 implies that $`\widehat{N}`$ is also. Theorem 2.1 now implies that $`\widehat{N}`$ is CAT($`\kappa `$) and hence contractible. In particular, all of its homotopy groups vanish, and by another application of lemma 5.5 the same is true of $`N`$. As a manifold all of whose homotopy groups vanish, $`N`$ is contractible. In the introduction we promised to show that the inclusion $`N\widehat{N}`$ is a homotopy equivalence, but all we have established so far is a weak homotopy equivalence. The stronger result follows immediately from theorem 5.1, since any inclusion of one contractible space into another is a homotopy equivalence.

References

A. D. Alexandrov. A theorem on triangles in a metric space and some of its applications. Trudy Math. Inst. Steklov, 38:5–23, 1951.
D. Allcock, J. Carlson, and D. Toledo. The complex hyperbolic geometry of the moduli space of cubic surfaces. Preprint, 1998.
D. J. Allcock. The period lattice for Enriques surfaces. Preprint, 1999.
M. Bridson and A. Haefliger. Metric spaces of non-positive curvature. Book preprint, 1995.
R. Charney and M. Davis. Singular metrics of nonpositive curvature on branched covers of Riemannian manifolds. Am. J. Math., 115:929–1009, 1993.
M. Davis and T. Januszkiewicz. Hyperbolization of polyhedra. J. Diff. Geom., 34:347–388, 1991.
E. Ghys et al., editors. Group Theory from a Geometrical Viewpoint. World Scientific, 1991.
M. Gromov. Hyperbolic groups. In S. M. Gersten, editor, Essays in Group Theory, volume 8 of MSRI Publications, pages 75–263. Springer-Verlag, 1987.
E. Horikawa. On the periods of Enriques surfaces. I. Math. Ann., 234:73–88, 1978.
T. Januszkiewicz. Hyperbolizations. In Ghys et al., pages 464–490.
Y. Namikawa. Periods of Enriques surfaces. Math. Ann., 270:201–222, 1985.
F. Paulin. Construction of hyperbolic groups via hyperbolizations of polyhedra. In Ghys et al., pages 313–372.
J. Shah. Projective degenerations of Enriques’ surfaces. Math. Ann., 256(4):475–495, 1981.
W. Thurston and M. Gromov. Pinching constants for hyperbolic manifolds. Invent. Math., 89(1):1–12, 1987.
M. Troyanov. Espaces à courbure négative et groupes hyperboliques. In E. Ghys and P. de la Harpe, editors, Sur les Groupes Hyperboliques d’après Mikhael Gromov, volume 83 of Progress in Mathematics, pages 47–66. Birkhäuser, 1990.

Department of Mathematics, Harvard University, One Oxford Street, Cambridge, MA 02138.
email: allcock@math.harvard.edu
web page: http://www.math.harvard.edu/~allcock
# BFKL and CCFM final states

Bicocca–FT–99–12, hep-ph/9905308. Talk presented at 7th International Workshop on Deep Inelastic Scattering, Zeuthen, Germany, April 1999.

## 1 Introduction

There are two basic approaches to the study of final states at small-$`x`$. The BFKL equation is derived in the ‘Multi-Regge’ limit, i.e. assuming large rapidity intervals between successive emissions. This guarantees the leading logarithms $`(\alpha _\text{s}\mathrm{ln}x)^n`$ (LL) for sufficiently inclusive quantities (such as the total cross section, forward-jet rates) but not necessarily for more exclusive final-state properties, such as multiplicities or correlations. A different approach, the CCFM equation , goes beyond the ‘Multi-Regge’ approximation, and explicitly treats coherence (angular ordering) and soft emissions (the $`1/(1-z)`$ part of the splitting function, where $`z`$ is the fraction of energy remaining after a parton splitting). This guarantees the leading logarithms for any small-$`x`$ observable, regardless of how exclusive it is. The main practical disadvantage of the CCFM equation compared to the BFKL equation is that it is much more difficult to solve, both numerically and analytically. It is therefore of some importance to understand precisely in which situations the BFKL equation will give the correct answer. A few years ago it was shown by Marchesini that for quantities such as multiplicities, the two equations differed at the level of double logarithms (DL) of $`x`$, $`(\alpha _\text{s}\mathrm{ln}^2x)^n`$ . More recently Forshaw and Sabio Vera introduced a resolvability parameter $`\mu _R`$ and showed (at fixed order, subsequently extended to all orders by Webber ) that $`n`$-resolved-particle ($`n`$-jet) rates are the same in BFKL and CCFM at leading DL order (all terms $`(\alpha _\text{s}\mathrm{ln}^2x)^m(\alpha _\text{s}\mathrm{ln}x\mathrm{ln}Q)^n`$).
Since multiplicities are just a weighted sum of $`n`$-particle rates, but with $`\mu _R=0`$, one is led to ask how these two, apparently contradictory, results are related. A second issue is that the above results were obtained without any consideration of the soft emissions. What effect do they have? These questions were discussed in and the basic ideas are presented in the next section. Essentially, the inclusion of soft emissions leads to all BFKL and CCFM predictions being identical at LL level. The phenomenological implications of this result are discussed in sections 3 and 4.

## 2 Theoretical properties of final states

The fundamental property used in for the study of final-state properties is that in both the BFKL and the CCFM equations it is possible to separate emissions which change the exchanged transverse momentum $`k`$ (i.e. have the largest transverse momentum of all emissions so far — $`k`$-changing emissions) from those which don’t ($`k`$-conserving). The $`k`$-changing emissions are responsible for determining the cross section, and can be shown, quite simply, to have the same structure in BFKL and CCFM. It is the $`k`$-conserving emissions which are organised differently between BFKL and CCFM, so I will consider just those. For BFKL, fig. 1 shows as the shaded region the distribution of $`k`$-conserving emissions in $`x`$ (longitudinal momentum fraction of the emitted gluon) and $`q_t`$ (transverse momentum of the gluon) space. Emissions are independent with mean density $$\frac{dn}{d\mathrm{ln}q_td\mathrm{ln}1/x}=2\overline{\alpha }_\text{s},$$ (1) where $`\overline{\alpha }_\text{s}=\alpha _\text{s}N_C/\pi `$. The CCFM equation has the extra constraint that emissions be ordered in rapidity. This is illustrated in fig.
2 — the diagonal lines are constant rapidity ($`\mathrm{ln}q_t/x`$), and the requirement of a given emission having a larger rapidity than the previous one (being to the right of the diagonal line from the previous one) eliminates the small-$`q_t`$ emissions. Mathematically this translates into a mean density of emissions of $$\frac{dn}{d\mathrm{ln}q_td\mathrm{ln}1/x}\approx 2\overline{\alpha }_\text{s}e^{-\overline{\alpha }_\text{s}\mathrm{ln}^2q_t/k},$$ (2) which differs from the BFKL result by a subleading factor (containing no logarithms of $`x`$). So for finite $`q_t`$ one obtains the result that BFKL and CCFM emission rates are the same at LL, in accord with the results of Forshaw and Sabio Vera, and Webber . But integrating over all $`q_t`$ and $`x`$ to get the total multiplicity gives $$n\sim \sqrt{\pi \overline{\alpha }_\text{s}\mathrm{ln}^2x},$$ (3) which is a double logarithm of $`x`$ just as found by Marchesini in . The message is that formally subleading transverse DLs $`\alpha _\text{s}\mathrm{ln}^2q_t`$ play a fundamental role and can be thrown away only in specific circumstances. So far in the CCFM equation we have considered results including just hard emissions, those from the $`1/z`$ part of the splitting function. The inclusion of the soft emissions, those from the $`1/(1-z)`$ part of the splitting function, changes the results radically, filling in the regions between the dashed lines of fig. 2 in such a way that the combination of soft and hard emissions turns out to correspond to independent emissions with the density given by (1), identical to the result from the BFKL equation. This equivalence between BFKL and CCFM predictions holds at LL accuracy (all terms $`(\alpha _\text{s}\mathrm{ln}x)^n`$) . There are actually still some differences at subleading transverse DL level $`\alpha _\text{s}\mathrm{ln}^2q_t`$ but they are confined to the end of the chain of gluons and do not resum in such a way as to affect the LL results.
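The square root in (3) can be traced directly to the Gaussian suppression in (2): integrating the density over $`t=\mathrm{ln}q_t/k`$ from $`0`$ to $`\mathrm{}`$ gives $`\sqrt{\pi \overline{\alpha }_\text{s}}`$ emissions per unit $`\mathrm{ln}1/x`$, which multiplied by the evolution length $`\mathrm{ln}1/x`$ reproduces (3). A numerical sketch of this integral, under the simplifying assumption of a fixed $`k`$ and an arbitrary illustrative value of $`\overline{\alpha }_\text{s}`$:

```python
import math

abar = 0.2  # illustrative value of alpha_s * N_C / pi (an arbitrary choice)

# midpoint-rule integral of 2*abar*exp(-abar*t^2) over t = ln(q_t/k) in [0, inf)
dt = 1e-3
n_steps = int(20.0 / math.sqrt(abar) / dt)  # cut off deep in the Gaussian tail
integral = dt * sum(2.0 * abar * math.exp(-abar * ((i + 0.5) * dt) ** 2)
                    for i in range(n_steps))

print(integral, math.sqrt(math.pi * abar))  # both close to 0.7927
```

The agreement with $`\sqrt{\pi \overline{\alpha }_\text{s}}`$ confirms that the CCFM suppression of small-$`q_t`$ emissions is exactly what converts the BFKL single logarithm into the double-logarithmic multiplicity (3).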
## 3 Implications for phenomenology

The above result on the LL equivalence of BFKL and CCFM final states is a formal statement. It has relevance for analytical calculations of the LL properties of final states, e.g. . But the BFKL and CCFM equations have fundamentally different physical origins, and this is reflected in differences at subleading order: the BFKL equation (formulated as an evolution in $`x`$, for DIS) has a value of $`z`$ (energy fraction remaining after a parton splitting) which is determined essentially by an arbitrary collinear cutoff $`\mu `$, present for regularisation purposes: $$\mathrm{ln}\frac{1}{z}\approx \frac{1}{2\overline{\alpha }_\text{s}\mathrm{ln}k/\mu }.$$ (4) In the formal limit of small $`\overline{\alpha }_\text{s}`$, typical $`z`$ values are small and there are no problems. But with $`\mu /k\to 0`$, $`z`$ becomes arbitrarily close to $`1`$. Since rapidities go as $`\mathrm{ln}zq_t/(1-z)`$, the rapidities of the emitted gluons (even the hardest, i.e. jets) depend, at subleading level, on the collinear cutoff. (If one instead formulates the BFKL equation as an evolution in rapidity, as done for Monte Carlo implementations for the Tevatron , the problem disappears, but other serious difficulties arise for DIS, such as structure functions containing spurious transverse DLs.) In contrast, because of the explicit treatment of coherence and the separation of soft emissions, the CCFM equation never shows such pathological behaviour ($`z`$ is well-defined), and so is a much better candidate for detailed phenomenology, for example in the form of a Monte Carlo program such as smallx (for an application to HERA data see ). But the CCFM equation is not entirely free of problems: there are subleading ambiguities in its implementation which can have large effects on its predictions .
Additionally the CCFM equation lacks an important symmetry: it works well evolving from a low transverse scale to a high one (DIS), but not in the opposite direction. The symmetry issue is actually resolved in the Linked Dipole Chain approach (LDC) , which like the CCFM equation has a separation of hard and soft emissions. But the LDC does not reproduce the BFKL cross section at LL level. It is not currently clear whether this might be related to its problems in describing the data.

## 4 Outlook

The present phenomenological situation is that none of the approaches contains all of the physics that might be considered mandatory. Though the formal LL equivalence of BFKL and CCFM final states is of limited immediate relevance for phenomenology, it is an important step in our general understanding of small-$`x`$ final states: together with information from the NLL corrections it gives us a picture of the features required in future phenomenological approaches.

## Acknowledgements

I am grateful to Yu. Dokshitzer, M. Ciafaloni, G. Marchesini and S. Munier for discussions.
# Stationary state in a two-temperature model with competing dynamics

Nonequilibrium phase transitions have been studied extensively in the past decade. One of the important questions to address is how nonequilibrium constraints influence the order-disorder phase transition and the stationary state. A widely studied example is the celebrated kinetic Ising model where the nonequilibrium stationary states are produced by competing dynamics . The competing dynamics can be combined Glauber (spin-flip) processes at different temperatures , competition of the Glauber and the Kawasaki (spin-exchange) dynamics , or spin exchanges in different directions with different probabilities. The latter case (anisotropic Kawasaki dynamics) may be interpreted as a driven diffusive , or a two-temperature lattice gas model depending on whether the spin exchanges in different directions are governed by an external field or two different temperatures. One can introduce the isotropic version of the two-temperature lattice gas model in which the hopping of particles (spin exchange) is governed by randomly applied heat baths at different temperatures independently from hopping directions. Although it is one of the simplest models with competing dynamics, it has not been studied yet. One can suspect that this kind of mixture of Kawasaki dynamics does not result in relevant nonequilibrium behavior and that the system can be described by introducing the concept of an effective temperature. In fact, an earlier study of the Ising model with competing Glauber dynamics has come to a similar conclusion. In the present paper we study a two-dimensional lattice gas model where particles are coupled to two thermal baths at different temperatures independently of the hopping direction (isotropic Kawasaki dynamics). The Monte Carlo simulations demonstrate that the stationary state differs completely from that of the corresponding equilibrium model.
In spite of the mentioned expectations the model shows relevant nonequilibrium behavior. We consider a two-dimensional lattice gas on a square lattice with $`L\times L=N`$ sites under periodic boundary conditions. The occupation variable $`n_i`$ at site $`i`$ takes the values $`0`$ (empty) or $`1`$ (occupied) and half-filled occupation ($`_in_i=N/2`$) is assumed. The energy of the system is given by $$E=-J\underset{(i,j)}{}n_in_j,$$ (1) where the summation is over the nearest-neighbor pairs and $`J>0`$. The particles can jump to one of the empty nearest-neighbor sites with the hopping rate $$W=pw(\mathrm{\Delta }E,T_1)+(1-p)w(\mathrm{\Delta }E,T_2),$$ (2) where $`\mathrm{\Delta }E`$ is the energy difference between the final and initial configurations. The probability $`w(\mathrm{\Delta }E,T_\alpha )=min[1,\mathrm{exp}(-\mathrm{\Delta }E/T_\alpha )]`$ is the familiar Metropolis rate ($`\alpha =1,2`$) where the lattice constant and the Boltzmann constant are chosen to be unity. The hopping rate defined in Eq. (2) may be interpreted as a randomly chosen contact to a thermal bath at temperature $`T_1`$ with probability $`p`$ and another thermal bath at temperature $`T_2`$ with probability $`1-p`$. In the case of $`T_1=T_2`$, evidently, the above defined model is equivalent to the standard kinetic Ising model which undergoes an order-disorder phase transition at $`T_c=0.567`$. In this half-filled system the particles condense into a strip below $`T_c`$ to minimize the interfacial energy. It should be noted that in the ordered state the interface can be oriented either horizontally or vertically, thus this ordered phase violates the $`xy`$ symmetry. A suitable order parameter for characterizing the transition to strip-like order is the anisotropic squared magnetization $$m=\sqrt{|M_y^2-M_x^2|},$$ (3) where $$M_y^2=\frac{1}{L}\underset{x}{}[\frac{1}{L}\underset{y}{}(2n_{xy}-1)]^2.$$ (4) Henceforth we will restrict ourselves to the case of $`T_1=0`$ and $`T_2=\mathrm{\infty }`$.
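In the $`T_1=0`$, $`T_2=\mathrm{\infty }`$ case the competing rate (2) takes a particularly simple form: the infinite-temperature bath accepts every attempted hop, while the zero-temperature bath accepts only hops with $`\mathrm{\Delta }E0`$. A minimal sketch of the acceptance step (function names are illustrative, not from the paper):

```python
import math

def metropolis(d_e, temp):
    # Metropolis rate w(dE, T) = min[1, exp(-dE/T)], with the T = 0 and
    # T = infinity limits handled explicitly
    if temp == float("inf"):
        return 1.0                          # hot bath: every move accepted
    if temp == 0.0:
        return 1.0 if d_e <= 0 else 0.0     # cold bath: only downhill/neutral
    return min(1.0, math.exp(-d_e / temp))

def hop_rate(d_e, p, t1=0.0, t2=float("inf")):
    # Eq. (2): bath T1 is contacted with probability p, bath T2 with 1 - p
    return p * metropolis(d_e, t1) + (1.0 - p) * metropolis(d_e, t2)

# at T1 = 0, T2 = infinity an energetically costly hop succeeds only through
# the hot bath, so its rate is 1 - p
print(hop_rate(2.0, 0.9))  # ~0.1
```

In a full simulation this rate would be compared against a uniform random number at each attempted particle-hole exchange; the sketch only isolates the acceptance probability itself.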
Obviously, for small values of $`p`$ the hopping of particles is mostly governed by the heat bath at temperature $`T_2`$, therefore the stationary state is expected to be disordered. In the opposite case, the stationary state becomes ordered in the $`p\to 1`$ limit. To calculate the critical value $`p_c`$ of the order-disorder transition point, we employ the dynamical mean-field approach suggested by Dickman . This method has been applied successfully in a number of other nonequilibrium models . The value of $`p_c`$ can be obtained by the linear stability analysis of the spatially homogeneous disordered phase. In this approach the first step is to set up the master equations which describe the time evolution of probabilities of clusters, where the size of clusters characterizes the level of approximation. Next, we determine the stationary solution of the master equations by assuming a disordered phase. In the following a small density gradient is applied and the current generated in response to the density gradient is calculated. Decreasing the parameter $`p`$, the sign of the current changes at a given value, which can be identified as the critical point. The results of these approximations are $`p_c^{(2p)}=0.893`$ at the two-point and $`p_c^{(4p)}=0.907`$ at the four-point levels. Monte Carlo simulation has been carried out to check the validity of the above predictions. We have used independent random numbers for choosing which heat bath to couple the particle to and for comparing with the corresponding hopping probability during an elementary Monte Carlo step. However, the qualitative behavior remained unchanged if the same random number was used for the above mentioned two steps. The simulations were started from a perfectly ordered strip in the presumed ordered region (at $`p=0.97`$). During the simulation we have monitored the relaxation of the order parameter defined in Eq. (3). Comparing the results of different system sizes, a puzzling behavior is observed.
Namely, the stationary value of the order parameter decreases and tends to zero as the system size is increased. To clarify this feature we have written a computer program displaying the time evolution of the configuration. This visualization of the particle configurations has indicated that the nonequilibrium condition influences the stability of the interfaces. Namely, the interfaces in the (01) and (10) directions became unstable. At the same time, the interfaces in the (11) and the three other symmetrically equivalent directions proved to be stable. Consequently, the particles condense into a tilted square, in contrast to the strips observed in the equilibrium system. In Fig. 1 some typical configurations are shown at different values of the control parameter $`p`$. The tilted square is the true stationary state, because the system evolves into this state from either a vertically or a horizontally oriented strip; the opposite evolution has never been observed. However, the time ($`\tau _E`$) necessary to evolve from the "strip configuration" to the "tilted square" may be rather long. As an example, $`\tau _E\approx 4\times 10^6`$ Monte Carlo steps for a system size $`L\times L=100\times 100`$ at $`p=0.97`$. We have concluded that the order parameter defined by Eq. (3) cannot describe this novel type of ordering process, which becomes especially striking for large systems. An adequate order parameter for the new nonequilibrium state can be defined as $$\rho =m_x\times m_y,$$ (5) where $$m_x=\frac{4}{L^2}\sum _y\left|\sum _x(n_{xy}-\frac{1}{2})\right|.$$ (6) Using this definition we can describe the ordering process, as demonstrated in Fig. 2. As the new ordered state differs from the corresponding equilibrium ordered phase, our system cannot be described by an equilibrium model with an effective temperature. The instability of the horizontal (vertical) interfaces can be explained by the material transport along the domain interface. 
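A direct transcription of Eqs. (5)-(6) makes the distinction between the two morphologies concrete: $`\rho `$ vanishes for any horizontal or vertical strip (one of the two factors is zero), while it is finite for a tilted square. A minimal sketch (the example configurations below are illustrative, not taken from the simulations):

```python
def rho(n, L):
    """Order parameter of Eqs. (5)-(6): rho = m_x * m_y."""
    m_x = 4.0 / L**2 * sum(abs(sum(n[x][y] - 0.5 for x in range(L)))
                           for y in range(L))
    m_y = 4.0 / L**2 * sum(abs(sum(n[x][y] - 0.5 for y in range(L)))
                           for x in range(L))
    return m_x * m_y

L = 8
# half-filled strip: equilibrium-like order, one factor vanishes, rho = 0
strip = [[1 if y < L // 2 else 0 for y in range(L)] for x in range(L)]
# a tilted ("diamond") cluster: both factors are nonzero, rho > 0
diamond = [[1 if abs(x - 4) + abs(y - 4) <= 2 else 0 for y in range(L)]
           for x in range(L)]
```

This is why $`\rho `$ distinguishes the nonequilibrium tilted-square state from the equilibrium strip, which the anisotropic magnetization of Eq. (3) cannot do for large systems.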
To understand the microscopic mechanism behind this effect, it is instructive to compare a horizontally and a diagonally oriented interface. Suppose a particle jumps out of a horizontal interface as a result of a fluctuation and leaves a hole at the initial site. This particle can easily move along the horizontal interface, since there is no energy difference between the initial and final sites. If the system size is large enough, the particle (hole) may meet another particle (hole), and this initiates the break-up of the interface. A significantly different behavior is found for the movement of particles along a diagonally oriented interface. Here, jumps are blocked and the material transport is reduced, leaving the interface unchanged. It is an interesting question how a modification of the dynamics influences the stability of the diagonal interface. The movement along the diagonal interface can be reduced to a single jump by allowing next-nearest-neighbor jumps as well. Now the move along the interface occurs with probability one (since $`\mathrm{\Delta }E=0`$), similarly to the case of the horizontal interface. As a consequence, the diagonal orientation is no longer selected by the interfacial mobility, and the horizontal (or vertical) direction, which has lower interfacial energy, may be preferred. To test this argument, we have performed MC simulations of the modified model, and the equilibrium strip-like state is indeed found to be stable. We should mention that a diagonally oriented interface, which ensures minimum excess interfacial energy on a square lattice, has been obtained in phase separation in chemically reactive mixtures. A new type of stationary state, arising as a consequence of nonequilibrium conditions, has already been observed in other systems. For example, in a ferromagnetic Ising system with competing Glauber and Kawasaki dynamics the stationary state is identified with the antiferromagnetic state in a special parameter regime. 
Returning to our model, we can define the derivative of the energy with respect to the control parameter $`p`$, in analogy with the specific heat of equilibrium models. The quantity $`C_p=\partial E/\partial p`$ behaves like the equilibrium specific heat, and the location of its maximum can be identified as the transition point for a finite lattice. Plotting the location of the $`C_p`$ peak against $`L^{-1}`$, a linear fit yields $`p_c=0.947(5)`$ in the thermodynamic limit. This numerical result agrees very well with the prediction of the dynamical mean-field approximation at the four-point level (the difference is only $`4\%`$). Finally, we turn briefly to the problem of critical behavior. A standard method to determine the critical exponents of a continuous phase transition is finite-size scaling, which has often yielded useful results for nonequilibrium models. In the following we assume that the order parameter depends on the system size and the distance from the critical point as $$\rho \sim L^{-\beta /\nu }f((p-p_c)L^{1/\nu }),$$ (7) where $`\beta `$ and $`\nu `$ are the exponents of the order parameter and the correlation length. Monte Carlo data for the order parameter were fitted to the scaling form (7) with the Ising exponents, and we have found good data collapse. This result is in agreement with the conjecture of Grinstein et al. for nonequilibrium ferromagnetic spin models with up-down symmetry. In summary, we have shown that the isotropic combination of Kawasaki dynamics at two temperatures on a square lattice can result in a nontrivial nonequilibrium stationary state. At a critical value $`p_c`$ of the control parameter the system segregates into a high-density "liquid" and a low-density "gas" phase. However, in the stationary state the energy of the interface is higher than that of the corresponding equilibrium model, because the diagonal interfaces become preferred to the horizontal and vertical ones. 
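The finite-size scaling analysis based on Eq. (7) amounts to a mechanical rescaling of the raw data: with the two-dimensional Ising exponents $`\beta =1/8`$, $`\nu =1`$, curves for different $`L`$ should collapse onto a single function $`f`$. A sketch of the rescaling step ($`p_c`$ taken from the text; the scaling function used to exercise it is purely illustrative):

```python
import math

BETA, NU, P_C = 0.125, 1.0, 0.947   # 2D Ising exponents; p_c from the text

def collapse(p, rho_val, L):
    """Scaling variables of Eq. (7): x = (p - p_c) L^(1/nu), y = rho L^(beta/nu).

    If the ansatz holds, (x, y) points from different system sizes L fall on
    one curve y = f(x)."""
    return (p - P_C) * L ** (1.0 / NU), rho_val * L ** (BETA / NU)
```

In practice one plots the rescaled points for several lattice sizes and judges the quality of the collapse, possibly tuning the exponents if the Ising values were not assumed from the start.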
The phase transition, described by a suitably defined new order parameter, belongs to the Ising universality class. The stability of the interfaces is related to the mobility of particles along them, and the diagonal orientation minimizes the influence of the energy flow between the two heat baths. Although the stability of the interfaces may be tied to the type of lattice, a study of the corresponding coarse-grained macroscopic model would be useful. However, there is no straightforward way to find the macroscopic counterpart of a microscopic model, and there are examples where a microscopic model and its supposed macroscopic counterpart yield different morphologies. Nevertheless, we believe that the behavior of our model is part of the general phenomenon whereby external energy input results in interfacial effects that modify the morphology of the stationary state. Further work is required to clarify the connection between the suggested model and the above-mentioned driven nonequilibrium models. The author thanks György Szabó for his critical reading of the manuscript and Ole G. Mouritsen, who inspired this study. This research was supported by the Hungarian National Research Fund (OTKA) under Grant Nos. F-19560 and F-30449.
# On the 𝑞-deformation of the NJL model ## Abstract Using a $`q`$-deformed fermionic algebra we perform explicitly a deformation of the Nambu–Jona-Lasinio (NJL) Hamiltonian. In the Bogoliubov-Valatin approach we obtain the deformed version of the functional for the total energy, which is minimized to obtain the corresponding gap equation. The breaking of chiral symmetry and its restoration in the limit $`q\to 0`$ are then discussed. PACS numbers: 11.30.Rd, 03.65.Fd, 12.40.-y Keywords: Deformed Algebras, Hadronic Physics, Effective Models One of the most beautiful aspects of physics is the appearance of concepts which are universal in the sense that they are common to many different branches of physics. A very important phenomenon in this context is dynamical symmetry breaking, which appears in areas as different as statistical mechanics and nuclear and particle physics. In the last few years, results obtained in the treatment of many-body systems with a deformed underlying algebra suggest that the appearance of symmetry breaking in this new framework might be a universal aspect as well. The chiral symmetry breaking in Quantum ChromoDynamics (QCD) due to the appearance of the quark condensates, and its restoration at high temperature, are of fundamental importance in medium- and high-energy physics. These mechanisms are responsible for the mass generation of the most fundamental constituents of matter. On the other hand, the $`q`$-deformed algebras have provided an alternative and elegant way to investigate the symmetry breaking process in different areas of physics. It has been shown that the $`q`$-deformation of the fermionic algebra produces changes in the creation and annihilation operators. Frequently, physical quantities depend on the action of these operators on some physical state and are therefore sensitive to changes in the operators’ definition. 
The aim of this work is to apply this prescription to hadronic systems by performing the $`q`$-deformation directly in the Hamiltonian that describes a particular system. For this purpose we have chosen the NJL model, which has become a popular effective model for QCD due to its simplicity and, at the same time, its richness in describing features which are very difficult to reproduce directly from the more fundamental quantum chromodynamics. For instance, the dynamical breaking of chiral symmetry and its restoration at large temperature and density are very well described using such an effective model. In a recent work, the effect of a $`q`$-deformation on the NJL gap equation was studied through a $`q`$-deformed calculation of the quark condensates, leading to an enhancement of the dynamical mass. This can be understood as a result of a larger effective coupling between the quarks in the deformed case. There, the deformed gap equation is given by $$m=2G\langle \overline{\psi }\psi \rangle _q,$$ (1) where $`\langle \overline{\psi }\psi \rangle _q`$ stands for the $`q`$-deformed calculation of the quark condensates. In this work we use the same deformation procedure as in the previous paper but, instead of deforming the gap equation, we apply the $`q`$-deformation in a more fundamental way by deforming the NJL Hamiltonian directly, which is the starting point for obtaining the gap equation in the variational Bogoliubov-Valatin approach. Here it is important to note that this approach is completely different from simply deforming the condensate in the gap equation, as will be seen in our results. 
The Hamiltonian of the Nambu–Jona-Lasinio model is given by $$ℋ_{NJL}=-i\overline{\psi }\gamma \nabla \psi -G\left[\left(\overline{\psi }\psi \right)^2+\left(\overline{\psi }i\gamma _5\tau \psi \right)^2\right],$$ (2) and corresponds to the following Lagrangian $$ℒ_{NJL}=\overline{\psi }i\gamma ^\mu \partial _\mu \psi +G\left[\left(\overline{\psi }\psi \right)^2+\left(\overline{\psi }i\gamma _5\tau \psi \right)^2\right].$$ (3) This Lagrangian is constructed from contact interactions in such a way that it has the main symmetries of QCD. As is well known, one of the most important features of the QCD Lagrangian is its chiral symmetry, which is the most important symmetry concerning the dynamics of the lightest hadrons. The $`q`$-deformed fermionic algebra that we shall use is based on the work of Ubriaco, where the thermodynamic properties of a many-fermion system were studied. An extension of this procedure was used in the construction of a $`q`$-covariant form of the BCS approximation, and further applied to the NJL gap equation. As a consequence, this deformation procedure only modifies negative-helicity quark (anti-quark) operators. Making use of the $`q`$-deformed creation and annihilation operators we write the modified quark fields as $$\psi _q(x,0)=\sum _s\int \frac{d^3p}{\left(2\pi \right)^3}\left[B(𝐩,s)u(𝐩,s)e^{i𝐩𝐱}+D^{\dagger }(𝐩,s)v(𝐩,s)e^{-i𝐩𝐱}\right].$$ (4) The $`q`$-deformed quark and anti-quark creation and annihilation operators $`B`$, $`B^{\dagger }`$, $`D`$, and $`D^{\dagger }`$ are expressed in terms of the non-deformed ones as $`B_{-}`$ $`=`$ $`b_{-}\left(1+Qb_+^{\dagger }b_+\right),B_{-}^{\dagger }=b_{-}^{\dagger }\left(1+Qb_+^{\dagger }b_+\right),`$ (5) $`D_{-}`$ $`=`$ $`d_{-}\left(1+Qd_+^{\dagger }d_+\right),D_{-}^{\dagger }=d_{-}^{\dagger }\left(1+Qd_+^{\dagger }d_+\right),`$ (6) where $`Q=q^{-1}-1`$. The positive-helicity operators are not modified. 
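The deformed operators (5)-(6) can be realized concretely on a small Fock space. The sketch below (our illustration, not part of the original derivation) uses a Jordan-Wigner matrix representation of two fermionic modes to check that $`B_{-}`$ reduces to $`b_{-}`$ at $`q=1`$ and that, as follows directly from Eq. (5), the canonical anticommutator is deformed to $`\{B_{-},B_{-}^{\dagger }\}=1+(q^{-2}-1)b_+^{\dagger }b_+`$:

```python
import numpy as np

# Two fermionic modes (helicities + and -) on a 4-dimensional Fock space,
# built with a Jordan-Wigner construction (our illustration).
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator
Z = np.diag([1.0, -1.0])                 # JW string
I2 = np.eye(2)

b_minus = np.kron(a, I2)                 # b_-
b_plus = np.kron(Z, a)                   # b_+ , so that {b_-, b_+} = 0
n_plus = b_plus.T @ b_plus               # number operator b_+^dag b_+

def B_minus(q):
    """Deformed annihilator of Eq. (5): B_- = b_-(1 + Q n_+), Q = 1/q - 1."""
    Q = 1.0 / q - 1.0
    return b_minus @ (np.eye(4) + Q * n_plus)
```

Since $`n_+`$ commutes with $`b_{-}`$, one finds $`\{B_{-},B_{-}^{\dagger }\}=(1+Qn_+)^2=1+(q^{-2}-1)n_+`$, which the representation above verifies numerically.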
The variational approach to obtaining the gap equation consists of the following procedure: a) define a variational vacuum, b) calculate the vacuum expectation value of the Hamiltonian, obtaining the functional for the total energy, and c) minimize the functional, obtaining the variational parameters and the gap equation. Following the above steps, we now define our variational BCS-like vacuum $$|NJL\rangle =\prod _{𝐩,s=\pm 1}\left[\mathrm{cos}\theta (p)+s\mathrm{sin}\theta (p)b^{\dagger }(𝐩,s)d^{\dagger }(𝐩,s)\right]|0\rangle ,$$ (7) which, for a given momentum $`𝐩`$, is expanded as $`|NJL\rangle `$ $`=`$ $`\mathrm{cos}^2\theta (p)|0\rangle `$ (11) $`+\mathrm{sin}\theta (p)\mathrm{cos}\theta (p)b^{\dagger }(𝐩,+)d^{\dagger }(𝐩,+)|0\rangle `$ $`-\mathrm{sin}\theta (p)\mathrm{cos}\theta (p)b^{\dagger }(𝐩,-)d^{\dagger }(𝐩,-)|0\rangle `$ $`-\mathrm{sin}^2\theta (p)b^{\dagger }(𝐩,-)d^{\dagger }(𝐩,-)b^{\dagger }(𝐩,+)d^{\dagger }(𝐩,+)|0\rangle .`$ Here it is important to note that the deformed version of this vacuum differs from the non-deformed one only by a phase; therefore, the effects of the deformation come solely from the deformed component of the Hamiltonian. The deformed functional for the total energy will be obtained from the vacuum expectation value of the $`q`$-deformed NJL Hamiltonian $$𝒲^q\left[\theta (p)\right]=\langle NJL\left|ℋ_{NJL}^q\right|NJL\rangle ,$$ (12) where $$ℋ_{NJL}^q=-i\overline{\psi }_q\gamma \nabla \psi _q-G\left(\overline{\psi }_q\psi _q\right)^2,$$ (13) and $`\psi _q`$ is given by Eq. (4). Due to the additive structure of the $`q`$-deformation in Eq. (5) and Eq. (6), the deformed Hamiltonian can be written as $$ℋ_{NJL}^q=ℋ_{NJL}+H(Q),$$ (14) and the functional reads $$𝒲^q\left[\theta (p)\right]=𝒲\left[\theta (p)\right]+W[Q,\theta (p)].$$ (15) The last terms of Eqs. (14) and (15), namely $`H(Q)`$ and $`W[Q,\theta (p)]`$, stand for the new terms of first order in $`Q`$ generated when the algebra is deformed, and therefore they must vanish for $`q=1\left(Q=0\right)`$. 
Table I shows the increase in the number of operators in the NJL Hamiltonian, as well as in its matrix elements, due to the deformation of the fermionic algebra. The hard task is then to find the non-vanishing matrix elements of the $`q`$-deformed Hamiltonian. In the non-deformed case, which corresponds to $`q=1\left(Q=0\right)`$, the total energy is given by $$𝒲\left[\theta (p)\right]=-2N_cN_f\int \frac{d^3p}{\left(2\pi \right)^3}p\mathrm{cos}2\theta (p)-4G\left(N_cN_f\right)^2\left[\int \frac{d^3p}{\left(2\pi \right)^3}\mathrm{sin}2\theta (p)\right]^2.$$ (16) The minimization of this functional, $$\frac{\delta 𝒲\left[\theta (p)\right]}{\delta \left[2\theta (p)\right]}=0,$$ (17) leads to the NJL gap equation $$p\mathrm{tan}2\theta (p)=4GN_cN_f\int \frac{d^3p^{\prime }}{\left(2\pi \right)^3}\mathrm{sin}2\theta (p^{\prime }),$$ (18) which takes its more familiar form $$m=4GN_cN_f\int \frac{d^3p}{\left(2\pi \right)^3}\frac{m}{\sqrt{𝐩^2+m^2}},$$ (19) provided the variational angles acquire the following structure $$\mathrm{tan}2\theta (p)=\frac{m}{p},\mathrm{sin}2\theta (p)=\frac{m}{\sqrt{𝐩^2+m^2}},\mathrm{cos}2\theta (p)=\frac{p}{\sqrt{𝐩^2+m^2}}.$$ (20) Calculating the new matrix elements arising from the $`q`$-deformation of the NJL Hamiltonian, and adding them to the non-deformed functional, we obtain the full $`q`$-deformed functional for the total energy $$𝒲^q\left[\theta (p)\right]=-2N_cN_f\int \frac{d^3p}{\left(2\pi \right)^3}P_q\mathrm{cos}2\theta (p)-4G^{\prime }\left(N_cN_f\right)^2\left[\int \frac{d^3p}{\left(2\pi \right)^3}\mathrm{sin}2\theta (p)\right]^2.$$ (21) As in the non-deformed case, the same minimization procedure yields $$P_q\mathrm{tan}2\theta (p)=4G^{\prime }N_cN_f\int \frac{d^3p^{\prime }}{\left(2\pi \right)^3}\mathrm{sin}2\theta (p^{\prime }),$$ (22) which becomes $$M=4G^{\prime }N_cN_f\int \frac{d^3p}{\left(2\pi \right)^3}\frac{M}{\sqrt{𝐏_q^2+M^2}}.$$ (23) The variational angles have the same structure as before but are now $`q`$-dependent $$\mathrm{tan}2\theta _q(p)=\frac{M}{P_q},\mathrm{sin}2\theta 
_q(p)=\frac{M}{\sqrt{𝐏_q^2+M^2}},\mathrm{cos}2\theta _q(p)=\frac{P_q}{\sqrt{𝐏_q^2+M^2}},$$ (24) where the new variables appearing in the deformed equations are defined as $`P`$ $`=`$ $`\left(1+{\displaystyle \frac{Q}{2}}\right)p,`$ (25) $`P_0`$ $`=`$ $`{\displaystyle \frac{N_cN_f}{3\pi ^2}}{\displaystyle \frac{Q}{2}}G\mathrm{\Lambda }^3,`$ (26) $`P_q`$ $`=`$ $`P-P_0,`$ (27) $`G^{\prime }`$ $`=`$ $`G\left(1+{\displaystyle \frac{Q}{4}}\right).`$ (28) It is easy to see that, when $`q\to 1\left(Q\to 0\right)`$, Eqs. (22), (23), and (24) reduce to their non-deformed versions, Eqs. (18), (19), and (20), since $`P\to p`$, $`P_0\to 0`$, $`P_q\to p`$, and $`G^{\prime }\to G`$. In analogy with the non-deformed case we can write the gap equation in terms of the quark condensates as $$M=2G^{\prime }\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle .$$ (29) Comparing the two forms of the gap equation, Eqs. (23) and (29), we find a new deformed condensate given by $$\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle =\frac{N_c}{\pi ^2}\int _0^\mathrm{\Lambda }dp\,p^2\frac{M}{\sqrt{𝐏_q^2+M^2}}$$ (30) for each quark flavour. This condensate is different from the one obtained in our previous work, where the condensate itself was explicitly deformed. It also has exactly the same form as the non-deformed one, but is written in terms of the new variables. It is worth mentioning that the new condensate is not obtained by calculating the vacuum expectation value of a deformed scalar density; it corresponds to the gap equation which arises from the variational procedure starting from the $`q`$-deformed Hamiltonian. We can also obtain a new pion decay constant in analogy with the non-deformed case <sup>*</sup><sup>*</sup>*In Ref. the factor $`N_cm^2`$ is missing in Eq. (24). $$F_\pi ^2=N_cM^2\int _0^\mathrm{\Lambda }\frac{d^3p}{\left(2\pi \right)^3}\frac{1}{\left(𝐏_q^2+M^2\right)^{3/2}}.$$ (31) In the non-deformed case, a coupling constant equal to $`G/G_c=0.75`$ leads to a pion decay constant $`F_\pi =88`$ MeV. 
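As a numerical aside, the non-deformed gap equation (19) is easy to solve: after the angular integration the nontrivial branch reads $`1=(2GN_cN_f/\pi ^2)\int _0^\mathrm{\Lambda }dp\,p^2/\sqrt{p^2+M^2}`$, and the remaining integral has a closed form. A minimal sketch; the parameter values ($`\mathrm{\Lambda }=0.65`$ GeV, $`G/G_c=1.2`$) are illustrative assumptions, not values taken from the text:

```python
import math

NC, NF = 3, 2
LAM = 0.65                              # 3-momentum cutoff in GeV (assumed)
G_C = math.pi**2 / (NC * NF * LAM**2)   # critical coupling for this cutoff
G = 1.2 * G_C                           # assumed super-critical coupling

def gap_residual(M):
    """1 - (2 G Nc Nf / pi^2) * Integral_0^Lam dp p^2 / sqrt(p^2 + M^2),
    which vanishes on the nontrivial branch of gap equation (19).
    The p-integral is done in closed form."""
    J = 0.5 * (LAM * math.sqrt(LAM**2 + M**2) - M**2 * math.asinh(LAM / M))
    return 1.0 - 2.0 * G * NC * NF / math.pi**2 * J

def solve_gap(lo=1e-9, hi=5.0):
    """Bisection: the residual is negative as M -> 0 when G > G_C and
    increases monotonically with M."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap_residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With these assumed inputs the dynamical mass comes out at a few hundred MeV, the typical constituent-quark scale of NJL-type models.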
By setting the deformation parameter to $`q=1.2`$, the pion decay constant is shifted to $`F_\pi =92`$ MeV, which is very close to the experimental value $`f_\pi =93`$ MeV. This is in agreement with the results obtained in the context of chiral perturbation theory by Gasser and Leutwyler, where the calculated pion decay constant is reduced by $`6\%`$, becoming $`F_\pi =88`$ MeV, when the current quark masses $`m_{u,d}`$ are set to zero. The variational angle plays an important role in the chiral symmetry breaking process. When $`2\theta =0`$, there is no breaking of chiral symmetry and therefore no dynamical mass is generated. The generation of the dynamical mass is associated with a chiral rotation, and the presence of a current quark mass can be associated with a non-vanishing angle, namely with a permanent chiral rotation. The study of the behavior of the variational angle provides an interesting way to observe the dynamical chiral symmetry breaking in the Bogoliubov-Valatin approach to the NJL model. By fixing the dynamical mass we can use Eqs. (24) to obtain the $`q`$-dependence of the variational angle, which is shown in Fig. (1) for different values of the momentum. Then, starting from the fixed value of the dynamical mass, we use the $`q`$-dependence of $`2\theta `$ to obtain the mass generated when the fermionic algebra is deformed. In Fig. (2) we show the difference between the fixed value of the dynamical mass and the $`q`$-dependent one, compared to typical values of the current $`u`$ and $`d`$ masses. The NJL phase transition can be studied by solving the gap equation Eq. (23) and calculating the quark condensate $`\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\rangle `$, which is the chiral order parameter. If we solve this $`q`$-deformed self-consistent equation in terms of the new variables defined above, $`(P_q,E_q)`$, the phase transition will look exactly like in the non-deformed case, since Eq. (23) is identical to the usual gap equation Eq. (19). 
However, if we solve it in terms of the old variables $`(p,E)`$ we can see the effect of the $`q`$-deformation on the phase transition. The curves shown in Fig. (3) for different values of $`q`$ are compared to the previous approach, where the $`q`$-deformation was performed only in the quark condensates. This feature can be understood as follows. We have two separate scenarios: the non-deformed and the $`q`$-deformed one. In both situations the gap equation has exactly the same form and yields the same results. The effects of the deformation are observed when we express the gap equation of the deformed case in terms of the original physical quantities of the non-deformed one. For $`q>1`$ the values of the quark condensate and the dynamical mass increase with the deformation and are larger than in the case where only the quark condensates were deformed. For $`q<1`$ we have the opposite effect and the value of the condensate decreases. It is therefore tempting to explore the behavior of the condensate for smaller values of $`q`$, even considering that the truncation at order $`Q`$ may no longer be justified. In this case, we can see that the chiral symmetry is restored in the limit $`q\to 0`$, since the condensate vanishes. The value of the condensate for $`q<1`$ is shown in Fig. (4), and in Fig. (5) we can see the chiral symmetry restoration at small values of the deformation parameter $`q`$ at a fixed value of the coupling constant. The chiral symmetry restoration here is different from that obtained at finite temperature. An investigation of the combined effect of temperature and $`q`$-deformation on the chiral symmetry restoration process therefore seems important. This study is in progress and will be left for a future publication. So far, we have performed the $`q`$-deformation of the NJL Hamiltonian and used the variational Bogoliubov-Valatin approach to obtain the $`q`$-deformed functional that leads to a new gap equation. 
As far as the effects of the deformation are concerned, our main conclusions can be summarized as follows. In this approach the variational angles become $`q`$-dependent, meaning that the dynamical mass generation is affected by the $`q`$-deformation. The effect of the deformation is to enhance the condensate and the dynamical mass for $`q>1`$, and to restore chiral symmetry when $`q\to 0`$. Here the effect is stronger than in the case where the deformation is performed directly in the gap equation. In terms of the new $`q`$-deformed variables, the functional for the total energy, the gap equation, the variational angles, and the quark condensates have the same form as in the non-deformed case, which is a consequence of the quantum group invariance of the NJL Lagrangian. Acknowledgments The authors are grateful to U. Meißner and M. R. Robilotta for the suggestion which motivated our study of the $`f_\pi `$ behaviour. C. L. L. is grateful to D. Galetti and B. M. Pimentel for very helpful discussions, and V. S. T. would like to acknowledge FAPESP for financial support. This work was supported by FAPESP Grant Nos. 98/6590-2 and 98/2249-4.
# On finite-density QCD at large 𝑁_c ## I Introduction In contrast with finite-temperature QCD, QCD at high baryonic densities remains remarkably poorly understood. One of the main reasons is the lack of lattice simulations, due to the complex fermion determinant in finite-density QCD. Meanwhile, the physics in the core of neutron stars, and possibly of heavy-ion collisions, depends crucially on the structure and properties of the ground state of QCD at finite densities. It was suggested that at sufficiently high densities, the ground state of QCD is a color superconductor. Such a state arises from the instability of the Fermi surface under the formation of Cooper pairs of quarks. The superconducting phase of quark matter is the subject of many recent studies, and we will not discuss its properties in this paper. We will only note that a reliable treatment is currently available only in the perturbative regime of asymptotically high densities; in the physically most interesting regime of moderate densities, QCD is strongly coupled and one has to resort to various toy models, e.g. those with four-fermion interactions. To shed light on possible new phases that may occur in the non-perturbative regime of moderate baryonic densities, one might hope to make use of alternative limits, such as the large $`N_\text{c}`$ limit, where one takes the number of colors $`N_\text{c}`$ to infinity, keeping $`g^2N_\text{c}`$ fixed ($`g`$ is the gauge coupling). This limit has proved to be a convenient framework for understanding many properties of QCD (for example, the Zweig rule), although QCD at infinite $`N_\text{c}`$ is still not analytically treatable. In the context of finite-density QCD, the first work that discussed the implications of the large $`N_\text{c}`$ limit was that of Deryagin, Grigoriev, and Rubakov (DGR). 
DGR noticed that color superconductivity is suppressed at large $`N_\text{c}`$ due to the fact that the Cooper pair is not a color singlet (the diagram responsible for color superconductivity is non-planar). At arbitrary $`N_\text{c}`$, using the technique of Ref. , the asymptotic behavior of the BCS gap can be found to be $`\mathrm{\Delta }\sim \mu \mathrm{exp}\left(-\sqrt{\frac{6N_\text{c}}{N_\text{c}+1}}\frac{\pi ^2}{g}\right)`$. This tends to 0 as $`N_\text{c}\to \mathrm{\infty }`$, provided one keeps $`g^2N_\text{c}`$ fixed. Working in the perturbative regime $`g^2N_\text{c}\ll 1`$, DGR noticed another instability of the Fermi surface, this time with respect to the formation of chiral waves with wavenumber $`2p_\text{F}`$, where $`p_\text{F}`$ is the Fermi momentum. As shown by DGR, this instability is not suppressed in the limit $`N_\text{c}\to \mathrm{\infty }`$. The purpose of this paper is to see what happens to the DGR instability at large but finite $`N_\text{c}`$. Our motivation is to see whether the limit $`N_\text{c}\to \mathrm{\infty }`$ is relevant for the physics of high-density QCD at $`N_\text{c}=3`$. In this paper we find that, at any fixed value of the chemical potential $`\mu `$, the DGR instability requires the number of colors $`N_\text{c}`$ to be larger than some minimum value $`N_\text{c}(\mu )`$, which grows with $`\mu `$. What is surprising is that even for moderate values of $`\mu `$, the minimum value $`N_\text{c}(\mu )`$ is very large (of order a few thousand for a modest chemical potential $`\mu =3\mathrm{\Lambda }_{\text{QCD}}`$). Therefore one should not expect the large $`N_\text{c}`$ limit to be of direct relevance for physics with $`N_\text{c}=3`$ at finite densities. The paper is organized as follows. Section II reviews the results of DGR. A convenient technical approach to the DGR instability, based on the renormalization group, is developed in Sec. III and applied to the case of finite $`N_\text{c}`$ in Sec. IV. 
Section V contains concluding remarks. ## II Review of DGR results Let us review the key results of Ref. . Throughout this paper, we assume all quarks are massless, and make no distinction between the Fermi momentum and the Fermi energy: $`p_\text{F}=\mu `$. In the $`N_\text{c}\to \mathrm{\infty }`$ limit, the DGR result states that the Fermi surface is unstable under the development of chiral waves with wavenumber $`2\mu `$, $$\langle \overline{\psi }(x)\psi (y)\rangle =e^{i𝐏(𝐱+𝐲)}\int d^4q\,e^{iq(x-y)}f(q)$$ (1) where $`𝐏`$ is a vector with modulus $`|𝐏|=\mu `$ whose direction is fixed arbitrarily. Since $`\overline{\psi }\psi `$ is a color singlet, this condensate survives the limit $`N_\text{c}\to \mathrm{\infty }`$. The condensate (1) can be interpreted as the formation of particle-hole pairs with total momentum $`2𝐏`$ (Fig. 1). In such a pair, both the particle and the hole are near the Fermi surface, and the momenta of the particle and the hole are both near $`𝐏`$. In this sense the condensate (1) is different from the usual chiral condensate $`\langle \overline{\psi }\psi \rangle =\text{const}`$, which corresponds to the pairing of a particle and an antiparticle moving in opposite directions. Since the particle and the hole move in the same direction, the scattering between them is nearly in the forward direction, and since the amplitude of forward scattering is singular, one could expect the formation of the pair to be energetically favorable. In fact, this is the reason why the total momentum $`2\mu `$ is special. The function $`f(q)`$ has the physical meaning of the wave function of the pair in the center-of-mass frame, so $`𝐏+𝐪`$ is the momentum of the particle and $`𝐏-𝐪`$ is that of the hole. DGR found that the wave function is localized in an exponentially small region of momenta $`q<\mathrm{\Delta }_{\perp }`$, where $$\mathrm{\Delta }_{\perp }\sim \mu e^{-\pi /2h},h^2=\frac{g^2N_\text{c}}{4\pi ^2}.$$ (2) Recall that $`h`$ is kept constant in the limit $`N_\text{c}\to \mathrm{\infty }`$. 
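The footnoted formula for the BCS gap makes the large-$`N_\text{c}`$ suppression explicit: at fixed $`g^2N_\text{c}`$ (i.e. fixed $`h`$) one has $`g\propto 1/\sqrt{N_\text{c}}`$, so the exponent grows like $`\sqrt{N_\text{c}}`$ and the BCS gap tends to zero, while the DGR scale $`\mathrm{\Delta }_{\perp }/\mu =e^{-\pi /2h}`$ of Eq. (2) stays fixed. A quick numerical check (the value $`h=1`$ is an illustrative choice):

```python
import math

def bcs_gap(Nc, h):
    """Delta_BCS/mu from the footnoted formula, with g expressed through
    h^2 = g^2 Nc / (4 pi^2), i.e. g = 2 pi h / sqrt(Nc)."""
    g = 2.0 * math.pi * h / math.sqrt(Nc)
    return math.exp(-math.sqrt(6.0 * Nc / (Nc + 1.0)) * math.pi**2 / g)

def dgr_gap(h):
    """Delta_perp/mu of Eq. (2), independent of Nc at fixed h."""
    return math.exp(-math.pi / (2.0 * h))
```

The comparison shows concretely why, in the 't Hooft limit, the chiral-wave channel wins over Cooper pairing.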
The binding energy of the pair is found to be at an even smaller scale, $$E_{\text{bind}}\sim \mu e^{-\pi /h}.$$ (3) Both scales $`\mathrm{\Delta }_{\perp }`$ and $`E_{\text{bind}}`$ are parametrically larger than the non-perturbative scale $`\mathrm{\Lambda }_{\text{QCD}}\sim \mu e^{-6/11h^2}`$. For more details, see Ref. . It may seem surprising that the DGR instability occurs in the perturbative regime. Indeed, the analogs of (1) in non-relativistic fermion systems are the charge-density wave (CDW) and the spin-density wave (SDW). Since it is known that in three dimensions CDW and SDW do not develop at weak four-fermion interaction, one could ask how such an instability could occur at small $`g^2N_\text{c}`$. The key observation is that in our case the effective four-fermion interaction is singular due to the $`1/q^2`$ behavior of the gluon propagator at small $`q`$. In general, this singularity is cut off by screening, but because the diagrams responsible for screening involve fermion loops, the screening effects at large $`N_\text{c}`$ are of order $`g^2\mu ^2=O(\mu ^2/N_\text{c})`$ and therefore suppressed. This singular nature of the interaction explains why the DGR instability can occur perturbatively at large $`N_\text{c}`$. The argument presented above also implies that at each value of the coupling $`h`$ there must be a lower limit on $`N_\text{c}`$, below which the interaction is not singular enough due to the screening, and the DGR instability disappears. This limit grows as one decreases $`h`$, or, equivalently, as the chemical potential increases. Finding this lower bound on $`N_\text{c}`$ as a function of $`\mu `$ is the purpose of this paper. ## III Renormalization group approach to DGR instability Before tackling our main problem, let us formulate an efficient RG technique that reproduces the results of DGR in the $`N_\text{c}\to \mathrm{\infty }`$ limit. 
While in the limit $`N_\text{c}\to \mathrm{\infty }`$ this technique does not give us anything new beyond what has already been found by DGR, it has the advantage that it can be applied to the case of finite $`N_\text{c}`$, where the effects of screening make the generalization of the original method of Ref. very difficult, if at all possible. We will not try to rigorously justify the RG in this paper. Let us stay in the Fermi liquid phase, where quarks are deconfined, and consider the scattering between a particle and a hole with momenta $`𝐏+𝐪`$ and $`𝐏-𝐪`$. The total momentum of the pair is $`2\mu `$. A singularity of this scattering amplitude in the upper half of the complex energy plane would signify an exponentially growing mode, i.e. an instability. In terms of diagrams, the most important contribution to the scattering amplitude comes from the ladder graphs (Fig. 2). Adding a rung to the ladder brings two more logarithms: one comes from the collinear divergence, i.e. the singular gluon propagator, and the other from the fact that the two new fermion propagators are near the mass shell. We will design the RG to resum these double logs.<sup>§</sup><sup>§</sup>§A similar but not identical RG procedure has been developed to resum the double logs in the BCS channel . Our first step is to derive a 1+1 dimensional effective theory capable of describing the DGR instability. On the most naive level, such a description exists due to the fact that the modes of interest move in directions close to $`\pm 𝐏`$. Technically, the (1+1)D effective theory arises from integrating, in each Feynman graph, over the momentum components perpendicular to $`𝐏`$. Let us consider a ladder graph and ask what happens if one adds one more rung. 
The diagram now contains an extra loop integral, $$\int \frac{d^4q}{(2\pi )^4}G(P+q)G(-P+q)D(q)$$ (4) where $`G`$ and $`D`$ are the fermion and gluon propagators, respectively, and the Dirac structure of the fermion propagators is ignored for the purposes of the discussion presented below. Consider first the fermion line with momentum $`𝐏+𝐪`$. The component of the fermion momentum parallel to $`𝐏`$ will be denoted as $`\mu +q_{\parallel }`$, and those perpendicular to $`𝐏`$ will be denoted as $`q_{\perp }`$. Note that $`q_{\perp }`$ is a two-dimensional vector. We will assume that for all fermion lines in the Feynman diagram $`q_{\perp }\lesssim \mathrm{\Delta }`$, where $`\mathrm{\Delta }`$ is an arbitrary momentum scale much less than $`\mu `$. In other words, we will be interested only in the modes located inside two small “patches” on the Fermi sphere, each having a size of order $`\mathrm{\Delta }`$ in the directions perpendicular to $`𝐏`$ (eventually, $`\mathrm{\Delta }`$ will be identified with $`\Delta _{\perp }`$ in Eq. (2)). When $`q_{\parallel }`$ is also small compared to $`\mu `$, the fermion propagator has the form $$G(q)\sim \frac{1}{iq_0+|𝐏+𝐪|-\mu }\simeq \frac{1}{iq_0+q_{\parallel }+\frac{q_{\perp }^2}{2\mu }}.$$ (5) If $`q_{\parallel }\gg q_{\perp }^2/\mu \sim \mathrm{\Delta }^2/\mu `$, the $`q_{\perp }`$ dependence drops out and the propagator is simply $`(iq_0+q_{\parallel })^{-1}`$. Therefore, in the regime $`q_{\parallel }\gg \mathrm{\Delta }^2/\mu `$, the fermion propagator does not depend on the perpendicular (with respect to $`𝐏`$) momenta. In this regime, in Eq. (4) only the gluon propagator $`D(q)`$ depends on $`q_{\perp }`$. Hence, the integration over $`q_{\perp }`$ has the form $$\int \frac{d^2q_{\perp }}{(2\pi )^2}\frac{1}{q_0^2+q_{\parallel }^2+q_{\perp }^2}.$$ (6) If $`q_0`$ and $`q_{\parallel }`$ are not only small compared to $`\mu `$, but also much smaller than $`\mathrm{\Delta }`$, then the integral over $`q_{\perp }`$ in Eq. (6) is a logarithmic one, $`\int d^2q_{\perp }/q_{\perp }^2`$.
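As a quick numerical sanity check on the logarithmic behavior claimed for the perpendicular-momentum integral in Eq. (6), the radial form of the integral can be evaluated directly. This is only an illustrative sketch (coupling and vertex factors are omitted), not part of the derivation:

```python
import math

def perp_integral(a, delta, steps=200000):
    # Radial form of the integral in Eq. (6):
    #   integral of d²q⊥/(2π)² * 1/(q⊥² + a²) over |q⊥| < delta,
    # with a² = q0² + q∥², evaluated by the midpoint rule in |q⊥|.
    h = delta / steps
    total = 0.0
    for i in range(steps):
        q = (i + 0.5) * h
        total += q / (q * q + a * a)
    return total * h / (2.0 * math.pi)

delta, a = 1.0, 1.0e-3              # q0, q∥ chosen far below the UV scale Δ
numeric = perp_integral(a, delta)
closed = math.log(1.0 + (delta / a) ** 2) / (4.0 * math.pi)
log_form = math.log(delta / a) / (2.0 * math.pi)
print(numeric, closed, log_form)    # all agree: the integral is a pure log of Δ/a
```

The agreement of the three numbers confirms that the integral is cut off by the small parallel scales in the IR and by $`\mathrm{\Delta }`$ in the UV, producing a single logarithm.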
The integral is cut off in the IR by $`q_{\parallel }`$ and in the UV by $`\mathrm{\Delta }`$, and yields $`\frac{g^2}{4\pi }\mathrm{ln}\frac{\mathrm{\Delta }}{q_{\parallel }}`$. Effectively, this integration replaces the internal gluon line by a four-fermion vertex $`\frac{g^2}{4\pi }\mathrm{ln}\frac{\mathrm{\Delta }}{q_{\parallel }}`$, where $`q_{\parallel }`$ is determined by the momenta of the fermions coming in and out of the vertex (Fig. 3). Recall that this simplification takes place only in the region $`\mathrm{\Delta }\gg q_{\parallel }\gg \mathrm{\Delta }^2/\mu `$, since only in this region does the integration over $`q_{\perp }`$ decouple from that over $`q_{\parallel }`$. At the end of this section we argue why the restriction of $`q_{\parallel }`$ to the region $`\mathrm{\Delta }^2/\mu \lesssim q_{\parallel }\lesssim \mathrm{\Delta }`$ is well justified. We have performed the integration over the perpendicular components of one particular gluon momentum, but nothing prevents us from integrating over the perpendicular components of all the gluon momenta. By doing this integration, we resum one set of logarithms (the one related to the collinear divergence) in the series of double logs. Now, as the only remaining integrals are over $`q_0`$ and $`q_{\parallel }`$, all Feynman diagrams are identical to those of some 1+1 dimensional model with a four-fermion interaction. Our task is to find the precise form of the Lagrangian of this model. First, we note that the kinetic term for the fermions in the effective theory can be obtained from the original Lagrangian by omitting the spatial derivatives in directions other than $`z`$, $$L_{\text{kin}}=i\overline{\psi }\gamma ^0\partial _0\psi +i\overline{\psi }\gamma ^3\partial _3\psi +\mu \overline{\psi }\gamma ^0\psi .$$ (7) It is more convenient, however, to recast the Lagrangian (7) into the form of a (1+1)D theory of a doublet of Dirac fermions (which are two-component in (1+1)D) at zero chemical potential.
This is indeed possible, since spinless fermions at finite chemical potential can be rewritten as one Dirac fermion at zero chemical potential (the modes near the two points of the “Fermi surface” serve as its two components ). It is then not surprising that in our case the spin-$`\frac{1}{2}`$ fermions can be rewritten as a doublet of (1+1)D Dirac fermions. Let us do it explicitly when $`𝐏`$ is directed along the $`z`$-axis, $`𝐏=(0,0,\mu )`$. Denote the four components of the Dirac spinor $`\psi `$ (in the chiral basis) as $`\psi ^\mathrm{T}=(\psi _{\mathrm{L1}},\psi _{\mathrm{L2}},\psi _{\mathrm{R1}},\psi _{\mathrm{R2}})`$. The antiparticles have energy of order $`2\mu `$ and decouple from the low-energy effective theory that is being derived. This allows us to consider only the components of $`\psi `$ corresponding to particles, which are $`\psi _{\mathrm{L2}}`$ and $`\psi _{\mathrm{R1}}`$ when the particle’s momentum is near $`𝐏`$, and $`\psi _{\mathrm{L1}}`$ and $`\psi _{\mathrm{R2}}`$ when it is near $`-𝐏`$. Although these fields are slowly varying in time, they still vary rapidly in space. To compensate for this spatial variation, we introduce new fields, $$\phi =\left(\begin{array}{c}e^{-i\mu z}\psi _{\mathrm{L2}}\\ e^{i\mu z}\psi _{\mathrm{R2}}\end{array}\right),\chi =\left(\begin{array}{c}e^{-i\mu z}\psi _{\mathrm{R1}}\\ e^{i\mu z}\psi _{\mathrm{L1}}\end{array}\right)$$ (8) which are soft in both space and time. We can now translate from the (3+1)D language of $`\psi `$ to the (1+1)D language of $`\phi `$ and $`\chi `$. The kinetic part of the Lagrangian (7) becomes $$L_{\text{kin}}=i\overline{\psi }\gamma ^0\partial _0\psi +i\overline{\psi }\gamma ^3\partial _3\psi +\mu \overline{\psi }\gamma ^0\psi \to i\overline{\phi }\gamma _{\text{2D}}^\mu \partial _\mu \phi +i\overline{\chi }\gamma _{\text{2D}}^\mu \partial _\mu \chi .$$ (9) What is the interaction term in the effective theory? A look at the Feynman diagram in Fig. 3 tells us that this interaction is of the current-current type.
The current operator can also be translated into its (1+1)D counterpart, $$\overline{\psi }\gamma ^\mu \psi \to \overline{\phi }\gamma _{\text{2D}}^\mu \phi +\overline{\chi }\gamma _{\text{2D}}^\mu \chi $$ (10) where $`\gamma _{\text{2D}}^\mu `$ are two (1+1)D Dirac matrices, $`\gamma _{\text{2D}}^0=\sigma ^1`$, $`\gamma _{\text{2D}}^1=i\sigma ^2`$. Below we will write these matrices simply as $`\gamma ^\mu `$ in all expressions belonging to the (1+1)D effective theory. Noting that each vertex in Fig. 3 corresponds to a factor of $`\frac{g^2}{4\pi }\mathrm{ln}\frac{\mathrm{\Delta }}{q_{\parallel }}`$, where $`q_{\parallel }`$ is the parallel momentum transfer, we find that the Lagrangian of the (1+1)D effective theory is similar to that of the non-Abelian Thirring model $$L_{\text{eff}}=i\overline{\mathrm{\Psi }}\gamma ^\mu \partial _\mu \mathrm{\Psi }-\frac{g^2}{4\pi }\mathrm{ln}\frac{\mathrm{\Delta }}{q_{\parallel }}\left(\overline{\mathrm{\Psi }}\gamma ^\mu \frac{T^a}{2}\mathrm{\Psi }\right)^2.$$ (11) where we have combined the two fields $`\phi `$ and $`\chi `$ into a doublet $`\mathrm{\Psi }`$. The only difference between (11) and the non-Abelian Thirring model is the dependence of the four-fermion coupling on the scale of the parallel momentum exchange $`q_{\parallel }`$. The theory (11) describes the interaction between fermions with perpendicular momenta of order $`\mathrm{\Delta }`$ and parallel momenta between $`\mathrm{\Delta }^2/\mu `$ and $`\mathrm{\Delta }`$. To understand the properties of the model (11), let us recall what is known about the conventional Thirring model, where the interaction term is $`-\lambda (\overline{\mathrm{\Psi }}\gamma ^\mu \frac{T^a}{2}\mathrm{\Psi })^2`$. The Thirring model is asymptotically free. The only diagram contributing to the $`\beta `$ function at large $`N_\text{c}`$ is the “zero-sound” diagram, Fig. 4.
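As a small consistency check, the stated matrices can be verified to satisfy the two-dimensional Clifford algebra with metric $`(+,-)`$. A minimal sketch using real $`2\times 2`$ matrices (the entries of $`i\sigma ^2`$ are purely real, so no complex arithmetic is needed):

```python
def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g0 = [[0, 1], [1, 0]]    # gamma^0 = sigma^1
g1 = [[0, 1], [-1, 0]]   # gamma^1 = i*sigma^2

sq0 = mat_mul(g0, g0)    # expect +identity: (gamma^0)^2 = +1
sq1 = mat_mul(g1, g1)    # expect -identity: (gamma^1)^2 = -1
ac = [[mat_mul(g0, g1)[i][j] + mat_mul(g1, g0)[i][j] for j in range(2)]
      for i in range(2)]  # anticommutator, expect 0
print(sq0, sq1, ac)
```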
The running of the coupling $`\lambda `$ is governed by the RG equation, $`{\displaystyle \frac{\partial \lambda (s)}{\partial s}}={\displaystyle \frac{N_\text{c}}{\pi }}\lambda ^2(s),`$ where $`s`$ is the RG parameter, and $`\lambda (s)`$ is the coupling at the energy scale $`\mathrm{\Delta }e^{-s}`$. The coupling $`\lambda `$ hits a Landau pole at $`p\sim \mathrm{\Delta }e^{-\pi /\lambda N_\text{c}}`$. The physics in the IR is characterized by the formation of the chiral condensate $`\overline{\mathrm{\Psi }}\mathrm{\Psi }`$, which gives mass to the fermions. Using Eq. (8), one can see that $`\overline{\mathrm{\Psi }}\mathrm{\Psi }=\mathrm{cos}2\mu z\overline{\psi }\psi -i\mathrm{sin}2\mu z\overline{\psi }\gamma ^0\gamma ^3\psi `$, so a constant $`\overline{\mathrm{\Psi }}\mathrm{\Psi }`$ translates into space-dependent condensates $`\overline{\psi }\psi `$ and $`\overline{\psi }\gamma ^0\gamma ^3\psi `$. These basic properties hold for the model (11) as well, but the estimate for the scale of the Landau pole is different. The latter can be found using the RG. Now the RG equation needs to be written for a coupling which is a function of the parallel momentum transfer $`q_{\parallel }`$. At $`s=0`$, $$\lambda (q_{\parallel })=\frac{g^2}{4\pi }\mathrm{ln}\frac{\mathrm{\Delta }}{q_{\parallel }}.$$ (12) The RG equation is found from the diagram drawn in Fig. 4. The internal fermion lines have momenta of order $`\mathrm{\Delta }e^{-s}`$, which is much larger than the momenta of the external lines; therefore the momentum transfer at each vertex is $`\mathrm{\Delta }e^{-s}`$.
The RG equation, therefore, is $`{\displaystyle \frac{\partial }{\partial s}}\lambda (s,q_{\parallel })={\displaystyle \frac{N_\text{c}}{\pi }}\lambda ^2(s,\mathrm{\Delta }e^{-s}).`$ It is convenient to use the logarithmic parameter $`u`$, defined by $`q_{\parallel }=\mathrm{\Delta }e^{-u}`$, and rewrite the RG equation as $$\frac{\partial }{\partial s}\lambda (s,u)=\frac{N_\text{c}}{\pi }\lambda ^2(s,s).$$ (13) The initial condition (12) becomes $$\lambda (0,u)=\frac{g^2}{4\pi }u.$$ (14) One should note that at the moment $`s`$ of the RG evolution, all fermion modes with energy larger than $`\mathrm{\Delta }e^{-s}`$ have been integrated out; therefore the function $`\lambda (s,u)`$ is defined only for $`u>s`$. The solution to Eq. (13) with the initial condition Eq. (14) is $$\lambda (s,u)=\frac{\pi }{N_\text{c}}f(s)+\frac{g^2}{4\pi }(u-s)$$ (15) where $`f(s)`$ satisfies the equation $$\frac{\partial }{\partial s}f(s)=h^2+f^2(s)$$ (16) and $`h^2=g^2N_\text{c}/4\pi ^2`$. Solving Eq. (16) one finds $`f(s)=h\mathrm{tan}hs`$, which hits a Landau pole at $`s=s_\text{L}=\pi /2h`$. The corresponding scale is $`E_\text{L}=\mathrm{\Delta }e^{-\pi /2h}`$. Recall now that the RG evolution occurs for $`\mathrm{\Delta }^2/\mu \lesssim q_{\parallel }\lesssim \mathrm{\Delta }`$. From this condition one finds that the Landau pole can only be reached if $`\mathrm{\Delta }^2/\mu \lesssim E_\text{L}`$, or $`\mathrm{\Delta }\lesssim \mu e^{-\pi /2h}`$. Under this constraint, the maximal value of $`E_\text{L}`$ is achieved when $`\mathrm{\Delta }\sim \mu e^{-\pi /2h}`$, at which $`E_\text{L}=\mathrm{\Delta }^2/\mu \sim \mu e^{-\pi /h}`$. Thus, the estimates for the Landau pole scale $`E_\text{L}`$ and for $`\mathrm{\Delta }`$ coincide with the results found by DGR for the binding energy of the particle-hole pair and the size of the pair wave function, Eqs. (2,3). Now it is easy to demonstrate why we were justified in considering only the region $`\mathrm{\Delta }^2/\mu \lesssim q_{\parallel }\lesssim \mathrm{\Delta }`$ in the argument presented above.
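The solution of Eq. (16) quoted here is easy to confirm numerically. A minimal sketch integrating the flow with fixed-step Runge-Kutta and locating the blow-up (the cutoff value $`10^3`$ used to detect the pole is an arbitrary illustrative choice):

```python
import math

def evolve(h, s_stop, ds=1e-5, f_cut=1e3):
    # 4th-order Runge-Kutta for Eq. (16): f'(s) = h^2 + f^2(s), f(0) = 0.
    # Stops when s reaches s_stop or f exceeds f_cut (Landau blow-up).
    rhs = lambda f: h * h + f * f
    s, f = 0.0, 0.0
    while s < s_stop and f < f_cut:
        k1 = rhs(f)
        k2 = rhs(f + 0.5 * ds * k1)
        k3 = rhs(f + 0.5 * ds * k2)
        k4 = rhs(f + ds * k3)
        f += ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        s += ds
    return s, f

h = 0.5
_, f_mid = evolve(h, 2.0, f_cut=float("inf"))   # compare with h tan(hs) at s = 2
s_pole, _ = evolve(h, 10.0)                      # blow-up near s_L = pi/(2h)
print(f_mid, h * math.tan(h * 2.0))
print(s_pole, math.pi / (2.0 * h))
```

The numerical flow tracks $`h\mathrm{tan}hs`$ and diverges at $`s\approx \pi /2h`$, as stated.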
On one hand, when $`q_{\parallel }`$ drops below the scale $`\mathrm{\Delta }^2/\mu `$, we cannot neglect the dependence of the fermion propagator on $`q_{\perp }`$, which now acts as a cutoff for the RG flow. Hence, for $`q_{\parallel }\lesssim \mathrm{\Delta }^2/\mu `$, there is no RG flow in the effective (1+1)D theory and the Landau pole is never reached. On the other hand, when $`q_{\parallel }`$ becomes comparable with $`q_{\perp }`$ (i.e., $`\mathrm{\Delta }`$), we cannot neglect the $`q_{\parallel }`$ dependence in the gluon propagator. One can estimate the effect of this dependence by noticing that the four-fermion coupling in the effective (1+1)D theory (11) now reads $`\lambda (q_{\parallel })={\displaystyle \frac{g^2}{8\pi }}\mathrm{ln}\left(1+{\displaystyle \frac{\mathrm{\Delta }^2}{q_{\parallel }^2}}\right)`$ and the RG equation (16) becomes $$\frac{\partial }{\partial s}f(s)=\frac{h^2}{1+e^{-2s}}+f^2(s).$$ (17) One can see that for $`q_{\parallel }\gtrsim \mathrm{\Delta }`$, the RG flow in the effective (1+1)D theory is completely negligible. Therefore, to find the DGR instability we can restrict the values of $`q_{\parallel }`$ to lie between $`\mathrm{\Delta }^2/\mu `$ and $`\mathrm{\Delta }`$. Having reproduced the DGR results by our RG procedure, let us turn to the case of large but finite $`N_\text{c}`$. ## IV DGR instability at finite $`N_\text{c}`$ The RG technique described above can be easily extended to the case of large but finite $`N_\text{c}`$. The effect of finite $`N_\text{c}`$ is to cut off the IR singularity of the gluon propagator at small momentum exchange by Thomas-Fermi screening and Landau damping. The electric propagator becomes $`(q^2+m^2)^{-1}`$, and the magnetic propagator becomes $`(q^2+im^2|q_0|/q)^{-1}`$ , where $`m`$ is the Thomas-Fermi screening scale, of order $`g\mu `$. If the screening mass $`m`$ is smaller than the scale of the Landau pole found in Sec. III, i.e., $`\mu e^{-\pi /h}`$, then our previous calculations are not affected. However, if $`m>\mu e^{-\pi /h}`$, we need to modify the RG to take the screening into account.
The screening affects the integration over the perpendicular components of the gluon propagators: before, these integrals were cut off by the parallel exchanged momentum $`q_{\parallel }`$; now they are cut off by the largest scale among $`q_{\parallel }`$ and $`m`$ in the case of electric gluons, and among $`q_{\parallel }`$ and $`m^{2/3}q_{\parallel }^{1/3}`$ in the case of magnetic gluons. The effective (1+1)D theory is now a Thirring-like model with different scale-dependent couplings for the electric and magnetic interactions, $$L_{\text{eff}}=i\overline{\mathrm{\Psi }}\gamma ^\mu \partial _\mu \mathrm{\Psi }-\lambda _0(q_{\parallel })\left(\overline{\mathrm{\Psi }}\gamma ^0\frac{T^a}{2}\mathrm{\Psi }\right)^2+\lambda _1(q_{\parallel })\left(\overline{\mathrm{\Psi }}\gamma ^1\frac{T^a}{2}\mathrm{\Psi }\right)^2,$$ (18) where $`\lambda _0(q_{\parallel })`$ $`=`$ $`{\displaystyle \frac{g^2}{4\pi }}\mathrm{ln}{\displaystyle \frac{\mathrm{\Delta }}{\text{max}(q_{\parallel },m)}}`$ $`\lambda _1(q_{\parallel })`$ $`=`$ $`{\displaystyle \frac{g^2}{4\pi }}\mathrm{ln}{\displaystyle \frac{\mathrm{\Delta }}{\text{max}(q_{\parallel },m^{2/3}q_{\parallel }^{1/3})}}.`$ The RG equations for $`\lambda _+=(\lambda _0+\lambda _1)/2`$ and $`\lambda _{-}=(\lambda _0-\lambda _1)/2`$ decouple: $`{\displaystyle \frac{\partial }{\partial s}}\lambda _+(s,u)`$ $`=`$ $`{\displaystyle \frac{N_\text{c}}{\pi }}\lambda _+^2(s,s)`$ (19) $`{\displaystyle \frac{\partial }{\partial s}}\lambda _{-}(s,u)`$ $`=`$ $`0,`$ (20) where again $`u=\mathrm{ln}\frac{\mathrm{\Delta }}{q_{\parallel }}`$. Therefore, only $`\lambda _+`$ changes during the RG evolution. The initial condition for $`\lambda _+`$ can be read off from Eq. (18), $$\lambda _+(0,u)=\{\begin{array}{cc}\frac{g^2}{4\pi }u\hfill & \text{if }u<s_m\hfill \\ \frac{g^2}{4\pi }\left(\frac{5}{6}s_m+\frac{1}{6}u\right)\hfill & \text{if }u>s_m\hfill \end{array}$$ (21) where $`s_m=\mathrm{ln}\frac{\mathrm{\Delta }}{m}`$. The solution to Eq. (19) with the initial condition (21) can be written in the form of Eq.
(15), where $`f(s)`$ now satisfies the equation $`{\displaystyle \frac{\partial }{\partial s}}f(s)=\{\begin{array}{cc}f^2+h^2\hfill & \text{if }s<s_m\hfill \\ f^2+{\displaystyle \frac{h^2}{6}}\hfill & \text{if }s>s_m\hfill \end{array}.`$ The solution to this equation is $`f(s)=\{\begin{array}{cc}h\mathrm{tan}hs\hfill & \text{if }s<s_m\hfill \\ {\displaystyle \frac{h}{\sqrt{6}}}\mathrm{tan}{\displaystyle \frac{h}{\sqrt{6}}}(s+c_0)\hfill & \text{if }s>s_m\hfill \end{array}`$ where the constant $`c_0`$ can be found by matching the solution at $`s=s_m`$: $`c_0={\displaystyle \frac{\sqrt{6}}{h}}\mathrm{arctan}(\sqrt{6}\mathrm{tan}hs_m)-s_m.`$ The Landau pole occurs at $`s_\text{L}={\displaystyle \frac{\sqrt{6}\pi }{2h}}-c_0={\displaystyle \frac{\sqrt{6}}{h}}\mathrm{arctan}({\displaystyle \frac{1}{\sqrt{6}}}\mathrm{cot}hs_m)+s_m.`$ Recalling that for the instability to really occur, the scale of the Landau pole should be larger than the scale $`\mathrm{\Delta }^2/\mu `$, one finds a condition on $`m`$, $`m=\mathrm{\Delta }e^{-s_m}<\mu e^{-(s_\text{L}+s_m)}=\mu \mathrm{exp}\left[-{\displaystyle \frac{\sqrt{6}}{h}}\mathrm{arctan}\left({\displaystyle \frac{1}{\sqrt{6}}}\mathrm{cot}hs_m\right)-2s_m\right].`$ One can maximize the right-hand side (RHS) of this inequality to find the maximum value of $`m`$ for which the Landau pole can still be reached. One finds that for the Landau pole to be reached, $`m`$ should be smaller than $`m_{\text{max}}=\mu e^{-c/h}`$, where $`c=\sqrt{6}\mathrm{arctan}{\displaystyle \frac{1}{2}}+2\mathrm{arctan}\sqrt{{\displaystyle \frac{2}{3}}}\approx 2.5051.`$ This restriction on $`m`$ leads to a condition on $`N_\text{c}`$ and $`\mu `$ for the DGR instability to occur. Recalling that the Thomas-Fermi mass is $`m=\sqrt{{\displaystyle \frac{N_\text{f}}{2\pi ^2}}}g\mu `$ (which is of order $`N_\text{c}^{-1/2}`$), we see that at a fixed coupling $`g^2N_\text{c}`$ (or, equivalently, $`\mu `$), there exists a lower bound on $`N_\text{c}`$ above which the condition $`m<\mu e^{-c/h}`$ is satisfied.
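Both the averaged initial condition (21) and the constant $`c\approx 2.5051`$ can be checked numerically. The sketch below (standard-library Python, in units where $`g^2/4\pi =1`$, with illustrative values of $`s_m`$ and $`u`$) verifies the averaging of the electric and magnetic couplings and reproduces the maximization by brute-force grid search:

```python
import math

def lam_plus(u, s_m, delta=1.0):
    # (lambda_0 + lambda_1)/2 built from the couplings below Eq. (18)
    m = delta * math.exp(-s_m)               # screening mass, s_m = ln(Delta/m)
    q = delta * math.exp(-u)                 # parallel momentum, u = ln(Delta/q)
    lam0 = math.log(delta / max(q, m))
    lam1 = math.log(delta / max(q, m ** (2.0 / 3.0) * q ** (1.0 / 3.0)))
    return 0.5 * (lam0 + lam1)

s_m = 3.0
print([lam_plus(u, s_m) for u in (1.0, 2.0)])   # u < s_m: equals u
print([lam_plus(u, s_m) for u in (4.0, 7.5)])   # u > s_m: (5/6) s_m + u/6

def exponent(x):
    # h times the exponent of the bound on m, as a function of x = h*s_m:
    #   -sqrt(6)*arctan(cot(x)/sqrt(6)) - 2x
    return -math.sqrt(6.0) * math.atan(1.0 / (math.sqrt(6.0) * math.tan(x))) - 2.0 * x

# brute-force maximum on (0, pi/2)
max_val, x_star = max((exponent(i * 1e-5), i * 1e-5) for i in range(1, 157080))
c_closed = math.sqrt(6.0) * math.atan(0.5) + 2.0 * math.atan(math.sqrt(2.0 / 3.0))
print(c_closed)             # about 2.5051
print(-max_val, x_star)     # grid maximum is -c, at x* = arctan sqrt(2/3)
```

The grid maximum of the exponent equals $`-c`$, attained at $`hs_m=\mathrm{arctan}\sqrt{2/3}`$, in agreement with the closed form.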
The lower bound can easily be found to be $$N_\text{c}\gtrsim 2N_\text{f}h^2e^{2c/h}.$$ (22) Since our arguments rely on the comparison of scales, Eq. (22) contains an extra unknown coefficient of order 1 on the RHS. As the chemical potential $`\mu `$ increases, the effective coupling $`h`$ decreases; using the one-loop beta function, $$h^2=\frac{6}{11\mathrm{ln}\frac{\mu }{\mathrm{\Lambda }_{\text{QCD}}}},$$ (23) we see from Eq. (22) that the lower bound on $`N_\text{c}`$ increases. In reality, the numerical constant $`2c`$ in the exponent on the RHS of Eq. (22) is relatively large ($`2c\approx 5`$), so the lower bound is already large at moderate values of $`\mu `$. For example, if one uses the value of $`h`$ corresponding to $`\mu =3\mathrm{\Lambda }_{\text{QCD}}`$, the RHS of Eq. (22) is of order $`1000N_\text{f}`$! Barring the possibility of a very small numerical constant on the RHS of Eq. (22), which seems unlikely, this lower bound is always much larger than 3. From Eqs. (22,23), one can construct the phase diagram of QCD in the $`(N_\text{c},\mu )`$ plane. The result is shown in Fig. 5. In the shaded region, $`N_\text{c}`$ satisfies the inequality (22), which means that the DGR instability occurs. We bound this region by the line $`\mu =3\mathrm{\Lambda }_{\text{QCD}}`$, for below this line QCD is certainly strongly coupled and not much can be said from our calculation. Above the curved line, the inequality (22) is not satisfied, and the Fermi surface is stable in the DGR channel. However, the BCS instability is still present (though suppressed at large $`N_\text{c}`$), implying that the ground state of QCD is a color superconductor in that region. At any given (large) $`N_\text{c}`$, the DGR instability occurs only in a finite window of values of the chemical potential. The maximal value of $`\mu `$ at which the DGR instability still occurs, $`\mu _{\text{crit}}`$, can be found by solving (22) with respect to $`\mu `$.
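The order-of-magnitude estimate quoted here is easy to reproduce; a minimal sketch (taking the bound (22) with unit prefactor):

```python
import math

c = math.sqrt(6.0) * math.atan(0.5) + 2.0 * math.atan(math.sqrt(2.0 / 3.0))  # 2.5051
h = math.sqrt(6.0 / (11.0 * math.log(3.0)))        # Eq. (23) at mu = 3 Lambda_QCD
bound_per_flavor = 2.0 * h * h * math.exp(2.0 * c / h)   # RHS of Eq. (22) per N_f
print(h, bound_per_flavor)    # h is about 0.70, the bound about 1.2e3 per flavor
```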
Asymptotically, $`\mu _{\text{crit}}\sim \mathrm{exp}(\gamma \mathrm{ln}^2N_\text{c}+O(\mathrm{ln}N_\text{c}\mathrm{ln}\mathrm{ln}N_\text{c}))\mathrm{\Lambda }_{\text{QCD}}\sim N_\text{c}^{\gamma \mathrm{ln}N_\text{c}}\mathrm{\Lambda }_{\text{QCD}},`$ where $`\gamma ={\displaystyle \frac{3}{22c^2}}=0.02173\mathrm{\dots }`$ The smallness of the numerical constant $`\gamma `$ and the logarithmic dependence of $`\mu _{\text{crit}}`$ on $`N_\text{c}`$ are the reasons why a numerically large $`N_\text{c}`$ is required for $`\mu _{\text{crit}}`$ to be as large as $`3\mathrm{\Lambda }_{\text{QCD}}`$. However, asymptotically $`\mu _{\text{crit}}`$ grows faster than any power of $`N_\text{c}`$. ## V Conclusion In this paper we have seen that in finite-density QCD the Fermi surface is unstable under the DGR instability in a finite range of chemical potential. We have also found that the number of colors $`N_\text{c}`$ needs to be numerically very large for the DGR instability to occur in perturbation theory. This indicates that at low $`N_\text{c}`$ (like $`N_\text{c}=3`$), the DGR instability might not have a chance to realize itself at any value of the chemical potential, and the only instability of the Fermi surface is the BCS one, which leads to color superconductivity. Returning to the case of very large $`N_\text{c}`$, the next logical step is to ask what the ground state is once the Fermi liquid is unstable under the DGR particle-hole pairing. This may seem a purely academic exercise due to the large $`N_\text{c}`$ required, but it might still be interesting because of the possibility, at least in principle, of a new phase, distinct from the Fermi liquid and BCS superconducting phases, in 3D fermionic systems. In the original paper , DGR constructed a “standing chiral wave state”, in which $`\overline{\psi }\psi `$ varies periodically in space with wavenumber $`2\mu `$.
This state is periodic only along one spatial direction and does not break translational symmetry along the other two directions. Since translational symmetry cannot be broken in only one direction, such a state cannot be the ground state of QCD. One notices that a chiral wave with a particular wavevector utilizes only the fermion modes in a small region of size $`\Delta _{\perp }`$ (Eq. (2)) near two opposite points on the Fermi sphere. It is then clear how to make a state with energy smaller than the original DGR standing wave state. Indeed, one can pair up particles and holes in different pairs of opposite patches on the Fermi sphere. Since the size of each patch is exponentially small compared to the total area of the Fermi surface, one can have a large number of patches that do not overlap with each other. From the size of the patches one deduces that one can place a maximum of $`e^{\pi /h}`$ patches on the sphere. The condensate has the form of a linear combination of $`e^{i𝐤_i𝐱}`$, where all $`𝐤_i`$ have modulus equal to $`2\mu `$ but point in different directions. It is easy to estimate the energy gain from forming such a state. Indeed, the pairing affects fermions in a thin shell near the Fermi surface; the thickness of the shell is the scale at which we have found the Landau pole, i.e., $`\mu e^{-\pi /h}`$. Therefore, the fraction of fermions affected is $`e^{-\pi /h}`$, and each pair lowers the energy by $`\mu e^{-\pi /h}`$. Thus, the gain in energy density is $$\mu ^4e^{-2\pi /h}.$$ (24) For comparison, the DGR standing wave state has an energy gain of $`\mu ^4e^{-3\pi /h}`$. The factor of $`e^{-\pi /h}`$ difference is explained by the fact that the DGR state involves only two patches on the Fermi surface, with a relative area of $`e^{-\pi /h}`$. Alternatively, it might be energetically more favorable for the patches on the Fermi sphere to overlap. In this case, a given particle (or hole) near the Fermi sphere participates in many pairings simultaneously.
One might expect that the binding energy of each individual pair is lower than the value it would have in the non-overlapping case, but this alone says nothing about the total energy of the system. Indeed, our preliminary estimate shows that the energy gain is still parametrically given by Eq. (24). Further investigation is required to find the true ground state of QCD at very large $`N_\text{c}`$. Finally, let us note an interesting possibility: the ground state of finite-density QCD at very large $`N_\text{c}`$ might be similar to the “tomographic Luttinger liquid” in 2D, advocated by Anderson as the normal state of the high-$`T_c`$ cuprates . Such a similarity could stem from the singular interaction between fermions moving in the same direction, which is also characteristic of tomographic Luttinger liquids. As in the case of the latter, one could expect the chiral symmetry to be unbroken, but the chiral response to be singular at wavenumber $`2\mu `$. ###### Acknowledgements. The authors thank M. Alford and K. Rajagopal for stimulating discussions. DTS thanks P.A. Lee for helpful conversations. This work is supported in part by funds provided by the U.S. Department of Energy (DOE) under cooperative research agreement #DE-FC02-94ER40818. The work of ES is also supported in part by funds provided by the National Science Foundation (NSF) through the NSF Graduate Fellowship.
## 1 Introduction Spherical field theory is a non-perturbative method which uses the spherical partial-wave expansion to reduce a general $`d`$-dimensional Euclidean field theory to a set of coupled radial systems (, ). High-spin partial waves correspond to large tangential momenta and can be neglected if the theory is properly renormalized. The remaining system can then be converted into differential equations and solved using standard numerical methods. $`\varphi ^4`$ theory in two dimensions was considered in . In that case there was only one divergent diagram, and it could be completely removed by normal ordering. In general, any super-renormalizable theory can be renormalized by removing the divergent parts of the divergent diagrams. Using a high-spin cutoff $`J_{\mathrm{max}}`$ and discarding partial waves with spin greater than $`J_{\mathrm{max}}`$, we simply compute the relevant counterterms using spherical Feynman rules. The $`J_{\mathrm{max}}`$ cutoff scheme, however, is not translationally invariant. It preserves rotational invariance but regulates ultraviolet processes differently depending on radial distance. In the two-dimensional $`\varphi ^4`$ example it was found that the mass counterterm had the form $$\mathcal{L}_{c.t.}\propto \varphi ^2(\stackrel{}{t})\left[K_0(\mu t)I_0(\mu t)+2\sum _{n=1}^{J_{\mathrm{max}}}K_n(\mu t)I_n(\mu t)\right],$$ (1) where $`I_n`$, $`K_n`$ are $`n^{\text{th}}`$-order modified Bessel functions of the first and second kinds, $`\mu `$ is the bare mass, and $`t`$ is the magnitude of $`\stackrel{}{t}`$. As $`J_{\mathrm{max}}\to \infty `$, we find $$\mathcal{L}_{c.t.}\propto \varphi ^2(\stackrel{}{t})\left[\mathrm{log}(\frac{2J_{\mathrm{max}}}{\mu t})+O(J_{\mathrm{max}}^{-1})\right].$$ (2) Our regularization scheme varies with $`t`$, and we see that the counterterm also depends on $`t`$. The physically relevant issue, however, is whether or not the renormalized theory is independent of $`t`$. In this case the answer is yes.
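The large-$`J_{\mathrm{max}}`$ behavior quoted in Eq. (2) can be checked numerically with standard-library tools only. The sketch below builds $`I_n`$ by Miller's downward recurrence and $`K_n`$ by the upward recurrence; the choices $`\mu t=1`$ and $`J_{\mathrm{max}}=100`$ are illustrative, and $`J_{\mathrm{max}}\lesssim 140`$ keeps $`K_n(1)`$ inside double-precision range:

```python
import math

def bessel_k(n, x, t_max=20.0, steps=20000):
    # K_n(x) = integral of exp(-x cosh t) cosh(nt) over t in [0, inf),
    # trapezoidal rule (used only for n = 0, 1; higher orders via recurrence).
    h = t_max / steps
    total = 0.5 * math.exp(-x)       # t = 0 endpoint; upper endpoint underflows
    for i in range(1, steps):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return total * h

def bessel_i_all(n_max, x):
    # I_0 .. I_{n_max}(x) by Miller's downward recurrence,
    # normalized with exp(x) = I_0 + 2 * sum of I_k for k >= 1.
    m = n_max + 25
    vals = [0.0] * (m + 2)
    vals[m] = 1e-30                  # arbitrary small seed
    for n in range(m, 0, -1):
        vals[n - 1] = vals[n + 1] + (2.0 * n / x) * vals[n]
        if vals[n - 1] > 1e250:      # rescale to avoid float overflow
            vals = [v / 1e250 for v in vals]
    norm = math.exp(x) / (vals[0] + 2.0 * sum(vals[1:]))
    return [v * norm for v in vals[: n_max + 1]]

def counterterm_bracket(mu_t, j_max):
    # The bracket of Eq. (1): K0*I0 + 2 * sum K_n*I_n up to J_max, at argument mu*t.
    iv = bessel_i_all(j_max, mu_t)
    kv = [bessel_k(0, mu_t), bessel_k(1, mu_t)]
    for n in range(1, j_max):        # K_{n+1} = K_{n-1} + (2n/x) K_n
        kv.append(kv[n - 1] + (2.0 * n / mu_t) * kv[n])
    return iv[0] * kv[0] + 2.0 * sum(iv[n] * kv[n] for n in range(1, j_max + 1))

mu_t, j_max = 1.0, 100
bracket = counterterm_bracket(mu_t, j_max)
print(bracket, math.log(2.0 * j_max / mu_t))   # close agreement, per Eq. (2)
```

The partial sum tracks $`\mathrm{log}(2J_{\mathrm{max}}/\mu t)`$ to well within the stated $`O(J_{\mathrm{max}}^{-1})`$ corrections.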
Any $`t`$ dependence in renormalized amplitudes is suppressed by powers of $`J_{\mathrm{max}}^{-1}`$, and translational invariance becomes exact as $`J_{\mathrm{max}}\to \infty `$. We now consider general renormalizable theories, in particular those which are not super-renormalizable. In this case the number of divergent diagrams is infinite. Since we are primarily interested in non-perturbative phenomena, a diagram-by-diagram subtraction method is not useful. In the same manner, strictly perturbative methods such as dimensional regularization are not relevant either. Our interest is in non-perturbative renormalization, where the coefficients of the renormalization counterterms are determined by non-perturbative computations.<sup>4</sup><sup>4</sup>4We should mention that Pauli-Villars regularization is compatible with non-perturbative renormalization. However, this introduces additional unphysical degrees of freedom and tends to be computationally inefficient. In this paper we analyze the general theory of non-perturbative renormalization in the spherical field formalism. In the course of our analysis we answer the following three questions: (i) Can ultraviolet divergences be cancelled by a finite number of local counterterms? (ii) Can the renormalized theory be made translationally invariant? (iii) What is the general form of the counterterms? The organization of the paper is as follows. We begin with a discussion of differential renormalization, a regularization-independent method which will allow us to construct local counterterms. Next we describe a regularization procedure which is convenient for spherical field theory. In the large radius limit $`t\to \infty `$ our regularization procedure (which we call angle smearing) is anisotropic but locally invariant under translations. For general $`t`$ we expand in powers of $`t^{-1}`$ to generate the general form of the counterterms. We conclude with two examples of one-loop divergent diagrams.
We show by direct calculation that the predicted counterterms render these processes finite and translationally invariant. ## 2 Differential renormalization Differential renormalization is the coordinate-space version of the BPHZ method.<sup>5</sup><sup>5</sup>5Paraphrase of private communication with Jose Latorre. It is framed entirely in coordinate space, and renormalized amplitudes can be defined as distributions without reference to any specific regularization procedure. Differential renormalization was introduced in , and a systematic analysis of differential renormalization to all orders in perturbation theory using Bogoliubov’s recursion formula was first described in . The usual implementation of differential renormalization is carried out using singular Poisson equations and their explicit solutions. In our discussion, however, we find it more convenient to operate directly on the distributions.<sup>6</sup><sup>6</sup>6Our approach is similar to the natural renormalization scheme described in . In contrast with , however, we do not a priori specify the finite parts of amplitudes. We describe the details of our approach in the following. We should stress that the two approaches are equivalent, differing only at the level of formalism. We assume that we are working with a renormalizable theory. For indices $`i_1,\mathrm{\dots }i_j`$ let us define $`t^{i_1,\mathrm{\dots }i_j}`$ $`=t^{i_1}t^{i_2}\mathrm{\dots }t^{i_j},`$ (3) $`\partial _{i_1,\mathrm{\dots }i_j}`$ $`=\partial _{i_1}\partial _{i_2}\mathrm{\dots }\partial _{i_j}.`$ (4) Let $`f(\stackrel{}{t})`$ be a smooth test function, and let $`I(\stackrel{}{t}-\stackrel{}{t}^{\prime };\mu ^2)`$ be a smooth function with support on a region of scale $`\mu ^{-1}`$. We define $`S_{\stackrel{}{t}^{\prime }}^j\left[f\right](\stackrel{}{t})`$ as $`I(\stackrel{}{t}-\stackrel{}{t}^{\prime };\mu ^2)`$ multiplied by the $`j^{\text{th}}`$-order term in the Taylor series of $`f(\stackrel{}{t})`$ about the point $`\stackrel{}{t}^{\prime }`$.
Inserting delta functions, we have $`S_{\stackrel{}{t}^{\prime }}^jf(\stackrel{}{t})`$ $`=I(\stackrel{}{t}-\stackrel{}{t}^{\prime };\mu ^2)\sum _{i_1,\mathrm{\dots }i_j}\left[\frac{(t-t^{\prime })^{i_1,\mathrm{\dots }i_j}}{j!}\partial _{i_1,\mathrm{\dots }i_j}f(\stackrel{}{t}^{\prime })\right]`$ (5) $`=I(\stackrel{}{t}-\stackrel{}{t}^{\prime };\mu ^2)\sum _{i_1,\mathrm{\dots }i_j}\frac{(t-t^{\prime })^{i_1,\mathrm{\dots }i_j}}{j!}\int d^4\stackrel{}{z}\partial _{i_1,\mathrm{\dots }i_j}^{\stackrel{}{t}^{\prime }}\delta ^4(\stackrel{}{t}^{\prime }-\stackrel{}{z})f(\stackrel{}{z}).`$ For the purposes of this discussion we will require $$I(\stackrel{}{t}-\stackrel{}{t}^{\prime };\mu ^2)=1+O^N(\stackrel{}{t}-\stackrel{}{t}^{\prime })\text{ as }\stackrel{}{t}^{\prime }\to \stackrel{}{t}\text{,}$$ (6) where $`N`$ is some positive integer greater than the superficial degree of divergence of any subdiagram<sup>7</sup><sup>7</sup>7In our discussion a subdiagram is a subset of vertices together with all lines contained in those vertices. in the theory we are considering. For any renormalizable theory $`N>2`$ will suffice. In our formalism, $`I(\stackrel{}{t}-\stackrel{}{t}^{\prime };\mu ^2)`$ determines how the finite parts of renormalized amplitudes are assigned, and $`\mu `$ is the renormalization mass scale. We now consider a particular diagram, $`G`$, with $`n`$ vertices. We define $`K(\stackrel{}{t}_1,\mathrm{\dots }\stackrel{}{t}_n)`$ to be the kernel of the amputated diagram, i.e., the value of the diagram with vertices fixed at the points $`\stackrel{}{t}_1,\mathrm{\dots }\stackrel{}{t}_n`$. The amplitude is obtained by integrating $`K(\stackrel{}{t}_1,\mathrm{\dots }\stackrel{}{t}_n)`$ with respect to all internal vertices. We will regard $`K`$ as a distribution acting on $`n`$ smooth test functions $`f_1,\mathrm{\dots }f_n`$. (For external vertices containing more than one external line and/or derivatives, $`f_{ext}(\stackrel{}{t}_{ext})`$ should be regarded as a product of test functions, with possible derivatives, at $`\stackrel{}{t}_{ext}`$.)
$$K:f_1,\mathrm{\dots }f_n\mapsto \int d^4\stackrel{}{t}_1\mathrm{\dots }d^4\stackrel{}{t}_nK(\stackrel{}{t}_1,\mathrm{\dots }\stackrel{}{t}_n)f_1(\stackrel{}{t}_1)\mathrm{\dots }f_n(\stackrel{}{t}_n).$$ (7) Let us assume that our diagram is primitively divergent with superficial degree of divergence $`j`$. We now define another distribution, $`T_GK`$, which extracts the divergent part of $`K`$. We start with the case when $`G`$ has more than one vertex. Let us define $`T_GK:f_1,\mathrm{\dots }f_n\mapsto `$ $$\sum _{j_1+\mathrm{\dots }+j_n\le j}\int d^4\stackrel{}{t}_1\mathrm{\dots }d^4\stackrel{}{t}_nK(\stackrel{}{t}_1,\mathrm{\dots }\stackrel{}{t}_n)S_{\stackrel{}{t}_{ave}}^{j_1}f_1(\stackrel{}{t}_1)\mathrm{\dots }S_{\stackrel{}{t}_{ave}}^{j_n}f_n(\stackrel{}{t}_n),$$ (8) where $`\stackrel{}{t}_{ave}=\frac{1}{n}(\stackrel{}{t}_1+\mathrm{\dots }+\stackrel{}{t}_n)`$. We note that the subtracted distribution $`K-T_GK`$ is finite and well-defined for all $`f_1,\mathrm{\dots }f_n`$. Let us define $`F_K^{i_{1,1},i_{2,1}\mathrm{\dots }i_{j_n,n}}(\stackrel{}{t})`$ (9) $`=\int d^4\stackrel{}{t}_1\mathrm{\dots }d^4\stackrel{}{t}_n\delta ^4(\frac{\stackrel{}{t}_1+\mathrm{\dots }+\stackrel{}{t}_n}{n}-\stackrel{}{t})K(\stackrel{}{t}_1,\mathrm{\dots }\stackrel{}{t}_n)\left[\prod _{k=1,\mathrm{\dots }n}\frac{I(\stackrel{}{t}_k-\stackrel{}{t};\mu ^2)(t_k-t)^{i_{1,k},\mathrm{\dots }i_{j_k,k}}}{j_k!}\right].`$ We can then rewrite $`T_GK:f_1,\mathrm{\dots }f_n\mapsto `$ $$\sum _{j_1+\mathrm{\dots }+j_n\le j}\sum _{\begin{array}{c}i_{1,1},i_{2,1}\mathrm{\dots }\\ i_{1,n}\mathrm{\dots }i_{j_n,n}\end{array}}\left[\begin{array}{c}\int d^4\stackrel{}{t}F_K^{i_{1,1},i_{2,1}\mathrm{\dots }i_{j_n,n}}(\stackrel{}{t})\int d^4\stackrel{}{z}_1\mathrm{\dots }d^4\stackrel{}{z}_n\\ \left(\prod _{k=1,\mathrm{\dots }n}\partial _{i_{1,k},\mathrm{\dots }i_{j_k,k}}^{\stackrel{}{t}}\delta ^4(\stackrel{}{t}-\stackrel{}{z}_k)\right)f_1(\stackrel{}{z}_1)\mathrm{\dots }f_n(\stackrel{}{z}_n)\end{array}\right].$$ (10) The delta functions make this kernel completely local.
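The effect of the Taylor subtractions in (8) can be illustrated on a one-dimensional toy model: a kernel $`1/t^2`$ paired with a smooth test function diverges at $`t=0`$, but subtracting the zeroth- and first-order Taylor terms of the test function renders the pairing finite. A minimal sketch (the kernel and test function here are hypothetical, chosen only for illustration):

```python
import math

def subtracted_pairing(f, f0, f1, eps, steps=200000):
    # integral over [eps, 1] of [f(t) - f(0) - t f'(0)] / t^2, midpoint rule.
    # The two subtractions play the role of S^0 and S^1 in Eq. (8) for a
    # toy kernel 1/t^2 of superficial degree of divergence 1.
    h = (1.0 - eps) / steps
    total = 0.0
    for i in range(steps):
        t = eps + (i + 0.5) * h
        total += (f(t) - f0 - t * f1) / (t * t)
    return total * h

# f = cos, with f(0) = 1 and f'(0) = 0 supplied explicitly
vals = [subtracted_pairing(math.cos, 1.0, 0.0, eps) for eps in (1e-2, 1e-4, 1e-6)]
print(vals)   # converges as the cutoff eps is removed
```

Without the subtractions the pairing grows like $`1/\epsilon `$; with them the cutoff can be removed and the limit exists.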
We can read off the corresponding counterterm interaction by functional differentiation with respect to each of the component functions of $`f_{ext}(\stackrel{}{t}_{ext})`$ for the external vertices and setting $`f_{int}(\stackrel{}{t}_{int})=1`$ for the internal vertices. We now turn to the case when $`G`$ has only one vertex. For this case we set $`T_GK=K`$, which is equivalent to normal ordering the interactions in our theory. In this case $`K`$ is itself local and therefore $`T_GK`$ and our counterterm interaction are again local. We now extend the definition of $`T_G`$ in (10) to include the case of subdiagrams. Let $`G`$ be a general 1PI diagram, and let $`G^{}`$ be a renormalization part<sup>8</sup><sup>8</sup>8A renormalization part is a 1PI subdiagram with degree of divergence $`0`$. of $`G`$ with superficial degree of divergence $`j^{}`$. For notational convenience we will label the vertices of $`G`$ so that the first $`n^{}`$ vertices lie in $`G^{}`$. If $`G^{}`$ has only one vertex then again we normal order the interaction. Otherwise we define $`T_G^{}K:f_1,\mathrm{}f_n`$ $$\underset{j_1^{}+\mathrm{}+j_n^{}^{}j^{}}{}d^4\stackrel{}{t}_1\mathrm{}d^4\stackrel{}{t}_nK(\stackrel{}{t}_1,\mathrm{}\stackrel{}{t}_n)\left[\begin{array}{c}S_{\stackrel{}{t}_{ave}}^{j_1^{}}f_1(\stackrel{}{t}_1)\mathrm{}S_{\stackrel{}{t}_{ave}}^{j_n^{}}f_n^{}(\stackrel{}{t}_n^{})\\ f_{n^{}+1}(\stackrel{}{t}_{n^{}+1})\mathrm{}f_n(\stackrel{}{t}_n)\end{array}\right],$$ (11) where $`\stackrel{}{t}_{ave}=\frac{1}{n^{}}(\stackrel{}{t}_1+\mathrm{}+\stackrel{}{t}_n^{})`$.<sup>9</sup><sup>9</sup>9After applying $`T_G^{}`$, we regard $`G^{}`$ as being contracted to single vertex at $`\stackrel{}{t}_{ave}`$. This definition can be used recursively to define products of $`T_{G_1^{}}T_{G_2^{}}`$ for disjoint subdiagrams $`G_1^{}G_2^{}=\mathrm{}`$ or nested subdiagrams $`G_1^{}G_2^{}.`$ For the case of nested subdiagrams we always order the product so that larger diagrams are on the left. 
It is straightforward to show that the $`T`$ operation acts as the identity on local interactions and thus treats overlapping divergences in the same manner as BPHZ. Following the standard BPHZ procedure ($``$), we can write Bogoliubov’s $`\overline{R}`$ operation using Zimmerman’s forest formula, $$\overline{R}=\underset{F}{}\underset{\gamma F}{}(T_\gamma ),$$ (12) where $`F`$ ranges over all forests<sup>10</sup><sup>10</sup>10A forest is any set of non-overlapping renormalization parts. of $`G`$, and $`\gamma `$ ranges over all renormalization parts of $`F.`$ In the product we have again ordered nested subdiagrams so that larger diagrams are on the left. Let $`j`$ be the superficial degree of divergence of $`G`$. The renormalized kernel, $`RK`$, is given by $$\begin{array}{c}RK=\overline{R}K\\ RK=(1T_G)\overline{R}K\end{array}\begin{array}{c}\text{for }j<0\\ \text{for }j0.\end{array}$$ (13) Our final result is that all required counterterms are local, and the form of the counterterms is $$_{c.t.}=_{A(\varphi ,_i\varphi )\text{ }}F_A(\stackrel{}{t})A(\varphi (\stackrel{}{t}),_i\varphi (\stackrel{}{t})),$$ (14) where the sum is over operators of renormalizable type. For the case of gauge theories, our renormalization procedure is supplemented by the additional requirement that the renormalized amplitudes satisfy Ward identities.<sup>11</sup><sup>11</sup>11See , for a discussion of gauge theories using the method of differential renormalization. If our regularization procedure breaks gauge invariance these identities are not automatic and the required local counterterms will in general be any operators of renormalizable type (not merely gauge-invariant operators). This is, however, a separate discussion, and the details of implementing Ward identity constraints will be discussed in future work. ## 3 <br>Regularization by angle smearing In this section we determine the functional form of the coefficients $`F_A(\stackrel{}{t})`$ in (14). 
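Two limiting cases of (12)-(13) make the bookkeeping concrete (this illustration is ours; $\gamma$ denotes a proper renormalization part of $G$, and the forests in (12) range over sets of such proper parts):

```latex
% Primitively divergent G (no divergent proper subdiagrams):
% only the empty forest contributes,
\bar{R}K = K, \qquad RK = (1 - T_G)\,K .

% G containing a single proper renormalization part \gamma:
% the forests are \emptyset and \{\gamma\},
\bar{R}K = (1 - T_\gamma)\,K, \qquad RK = (1 - T_G)(1 - T_\gamma)\,K .
```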
To make the discussion concrete, we will illustrate using the example of massless $`\varphi ^4`$ theory in four dimensions $$=\frac{1}{2}\varphi ^2\varphi \frac{\lambda }{4!}\varphi ^4+_{c.t.}.$$ (15) From (14) $`_{c.t.}`$ is given by $$F_{\varphi ^2}(\stackrel{}{t})\varphi ^2(\stackrel{}{t})+\text{ }_{i,j}F_{\varphi \varphi }^{ij}(\stackrel{}{t})_i\varphi (\stackrel{}{t})_j\varphi (\stackrel{}{t})+F_{\varphi ^4}(\stackrel{}{t})\varphi ^4(\stackrel{}{t}).\text{ }$$ (16) Let $`G(\stackrel{}{t},\stackrel{}{t}^{})`$ be the free two-point correlator. We will use a regularization scheme which preserves rotational invariance and is convenient for spherical field theory, but one which breaks translational invariance. We regulate the short distance behavior of $`G`$ by smearing the endpoints over a radius $`t`$ spherical shell within a conical region $`R_{M^2}(\stackrel{}{t})`$, where $`R_{M^2}(\stackrel{}{t})`$ is the set of vectors $`\stackrel{}{u}`$ such that the angle between $`\stackrel{}{t}`$ and $`\stackrel{}{u}`$ is no greater than $`\frac{1}{Mt}`$ (see Figure 1). The result is a regulated correlator $$G_{M^2}(\stackrel{}{t},\stackrel{}{t}^{})=\frac{1}{_{\widehat{u}R_{M^2}(\stackrel{}{t})}d^3\widehat{u}_{\widehat{u}^{}R_{M^2}(\stackrel{}{t}^{})}d^3\widehat{u}^{}}_{\begin{array}{c}\widehat{u}R_{M^2}(\stackrel{}{t})\\ \widehat{u}^{}R_{M^2}(\stackrel{}{t}^{})\end{array}}d^3\widehat{u}d^3\widehat{u}^{}G(t\widehat{u},t^{}\widehat{u}^{}).$$ (17) We recall that our renormalized theory is determined by the translationally invariant function $`I(\stackrel{}{t}\stackrel{}{t}_{ave};\mu ^2)`$ described in the previous section. Even though our regularization scheme breaks translational invariance, the renormalized theory nevertheless remains invariant. As the radius $`t`$ increases the curvature of the angle-smearing region becomes negligible.
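For reference (our addition; the normalization is the standard one and is not fixed by the text above), the object being smeared in (17) is the free massless Euclidean two-point function in four dimensions, whose short-distance singularity the angle smearing regulates:

```latex
G(\vec{t},\vec{t}\,') = \frac{1}{4\pi^2\,(\vec{t}-\vec{t}\,')^{2}},
\qquad
G_{M^2}(\vec{t},\vec{t}\,') \;\longrightarrow\; G(\vec{t},\vec{t}\,')
\quad \text{as } M\to\infty,\ \ \vec{t}\neq\vec{t}\,'.
```

Only the $(\vec{t}-\vec{t}\,')^{-2}$ behavior matters for the power counting that follows.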
In the limit $`t\mathrm{}`$ the region becomes a flat three-dimensional ball with radius $`\frac{1}{M}`$ lying in the plane perpendicular to the radial vector. In this limit our regularization is invariant under local transformations and the counterterms converge to constants independent of $`\stackrel{}{t}`$, $`\underset{t\mathrm{}}{lim}F_{\varphi \varphi }^{ij}(\stackrel{}{t})`$ $`=c_{\varphi \varphi }^{ij,(0)}(\frac{\mu ^2}{M^2})`$ (18) $`\underset{t\mathrm{}}{lim}F_{\varphi ^2}(\stackrel{}{t})`$ $`=M^2c_{\varphi ^2}^{(0)}(\frac{\mu ^2}{M^2})`$ (19) $`\underset{t\mathrm{}}{lim}F_{\varphi ^4}(\stackrel{}{t})`$ $`=c_{\varphi ^4}^{(0)}(\frac{\mu ^2}{M^2}).`$ (20) We have chosen our coefficients $`c_A^{(0)}`$ to be dimensionless. Although our regularization scheme is invariant under rotations about the origin, the radial vector has a special orientation which is normal to our three-dimensional ball. Our regularization scheme is therefore not isotropic. The result (as should be familiar from studies of anisotropic lattices) is that the coefficient of the kinetic term has two independent components $$c_{\varphi \varphi }^{ij,(0)}(\frac{\mu ^2}{M^2})=c_{\varphi \varphi }^{\widehat{t}\widehat{t},(0)}(\frac{\mu ^2}{M^2})+\delta ^{ij}c_{\varphi \varphi }^{(0)}(\frac{\mu ^2}{M^2}).$$ (21) Starting with the $`t\mathrm{}`$ result at lowest order, we now expand our coefficient functions in powers of $`\frac{1}{Mt}`$, $`F_{\varphi \varphi }^{ij}(\stackrel{}{t})`$ $`=c_{\varphi \varphi }^{ij,(0)}(\frac{\mu ^2}{M^2})+\frac{1}{Mt}c_{\varphi \varphi }^{ij,(1)}(\frac{\mu ^2}{M^2})+\frac{1}{M^2t^2}c_{\varphi \varphi }^{ij,(2)}(\frac{\mu ^2}{M^2})+\mathrm{}`$ (22) $`F_{\varphi ^2}(\stackrel{}{t})`$ $`=M^2c_{\varphi ^2}^{(0)}(\frac{\mu ^2}{M^2})+\frac{M}{t}c_{\varphi ^2}^{(1)}(\frac{\mu ^2}{M^2})+\frac{1}{t^2}c_{\varphi ^2}^{(2)}(\frac{\mu ^2}{M^2})+\mathrm{}`$ (23) $`F_{\varphi ^4}(\stackrel{}{t})`$ $`=c_{\varphi ^4}^{(0)}(\frac{\mu ^2}{M^2})+\frac{1}{Mt}c_{\varphi ^4}^{(1)}(\frac{\mu ^2}{M^2})+\frac{1}{M^2t^2}c_{\varphi ^4}^{(2)}(\frac{\mu ^2}{M^2})+\mathrm{}.`$ (24)
For the moment let us assume $`t\mathrm{\Lambda }^1`$ for $$\mathrm{\Lambda }=m_0^zM^{1z},$$ (25) for some fixed mass $`m_0`$ and constant $`z`$ such that $`0<z<\frac{1}{2}`$. In this region our dimensionless expansion parameter $`\frac{1}{Mt}`$ is bounded by $`\frac{m_0^z}{M^z}`$ and therefore diminishes uniformly as $`M\mathrm{}`$. In general the $`\frac{\mu ^2}{M^2}`$ dependence in the functions $`c_A^{(k)}`$ will contain analytic terms as $`\mu ^20`$ as well as logarithmically divergent terms. There are, however, no inverse powers of $`\frac{\mu ^2}{M^2}`$. These would indicate severe infrared divergences not present in the processes we are considering, as can be deduced from the long distance behavior of the integral in (9).<sup>12</sup><sup>12</sup>12If our theory contained bare masses $`m_i`$, similar arguments would apply for the infrared limit $`\mu ^2,m_i^20,`$ with $`\frac{m_i^2}{\mu ^2}`$ fixed. With this we can neglect terms which vanish as $`M\mathrm{},`$ $`F_{\varphi ^2}(\stackrel{}{t})`$ $`=M^2c_{\varphi ^2}^{(0)}(\frac{\mu ^2}{M^2})+\frac{1}{t^2}c_{\varphi ^2}^{(2)}(\frac{\mu ^2}{M^2})`$ (26) $`F_{\varphi \varphi }^{ij}(\stackrel{}{t})`$ $`=c_{\varphi \varphi }^{ij,(0)}(\frac{\mu ^2}{M^2})`$ (27) $`F_{\varphi ^4}(\stackrel{}{t})`$ $`=c_{\varphi ^4}^{(0)}(\frac{\mu ^2}{M^2}).`$ (28) Since our regularization scheme is invariant under $`MM`$, we have also omitted the term proportional to $`c_{\varphi ^2}^{(1)}`$ which is odd in $`M`$. We now consider what occurs in the small region near the origin, $`t\mathrm{\Lambda }^1`$. For the theory we are considering (and in fact for any renormalizable theory) the highest ultraviolet divergence possible is quadratic.<sup>13</sup><sup>13</sup>13There may be additional logarithmic factors but this does not matter for our purposes here.
In the limit $`M\mathrm{}`$ we deduce that each $`F_A`$ scales no greater than $`O(M^2).`$ On the other hand the volume of the region $`t\mathrm{\Lambda }^1`$ diminishes as $`O(M^{4z4}).`$ Thus the total contribution from the region $`t\mathrm{\Lambda }^1`$ scales as $`O(M^{4z2})`$ and can be entirely neglected. To summarize our results, the counterterm Lagrange density has the form $$c_{\varphi \varphi }^{(0)}(\stackrel{}{}\varphi (\stackrel{}{t}))^2+c_{\varphi \varphi }^{\widehat{t}\widehat{t},(0)}(\widehat{t}\stackrel{}{}\varphi (\stackrel{}{t}))^2+(M^2c_{\varphi ^2}^{(0)}+\frac{1}{t^2}c_{\varphi ^2}^{(2)})\varphi ^2(\stackrel{}{t})+c_{\varphi ^4}^{(0)}\varphi ^4(\stackrel{}{t}).$$ (29) ## 4 <br>Spherical fields We now examine the results of the previous section in the context of spherical field theory. We start with the spherical partial wave expansion, $$\varphi =_{l=0,1,\mathrm{}}_{n=0,\mathrm{}l}_{m=n,\mathrm{}n}\varphi _{l,n,m}(t)Y_{l,n,m}(\theta ,\psi ,\phi ),$$ (30) where $`Y_{l,m,n}`$ are four-dimensional spherical harmonics satisfying $$d^3\mathrm{\Omega }Y_{l^{},n^{},m^{}}^{}(\theta ,\psi ,\phi )Y_{l,n,m}(\theta ,\psi ,\phi )=\delta _{l^{},l}\delta _{n^{},n}\delta _{m^{},m},$$ (31) $$Y_{l,n,m}^{}(\theta ,\psi ,\phi )=(1)^mY_{l,n,m}(\theta ,\psi ,\phi ).$$ (32) The explicit form of $`Y_{l,m,n}`$ can be found in .<sup>14</sup><sup>14</sup>14 deserves credit as the first discussion of radial (or covariant Euclidean) quantization, an important part of the spherical field formalism. 
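As a quick consistency check on the index ranges in the expansion (30): at fixed $`l`$, the label $`n`$ runs from 0 to $`l`$ and $`m`$ runs from $`n`$ to $`n`$, supplying $`(l+1)^2`$ harmonics, which is indeed the dimension of the space of degree-$`l`$ harmonic polynomials in four dimensions. A throwaway script (the function name is ours) verifies the count:

```python
def multiplicity(l):
    """Number of (n, m) pairs at fixed l in the expansion (30):
    n = 0..l, m = -n..n, so each n contributes 2n + 1 values of m."""
    return sum(2 * n + 1 for n in range(l + 1))

for l in range(10):
    assert multiplicity(l) == (l + 1) ** 2

print([multiplicity(l) for l in range(5)])  # [1, 4, 9, 16, 25]
```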
The integral of the free massless Lagrange density in terms of spherical fields is $$d^4\stackrel{}{t}=_0^{\mathrm{}}𝑑t\left\{\underset{l,m,n}{}\left[(1)^m\varphi _{l,n,m}\left[\frac{}{t}\frac{t^3}{2}\frac{}{t}\frac{t}{2}l(l+2)\right]\varphi _{l,n,m}\right]\right\}.$$ (33) It can be shown that the process of angle smearing the field $`\varphi (\stackrel{}{t})`$ is equivalent to multiplying $`\varphi _{l,n,m}(t)`$ by an extra factor $`s_l^M(t)`$ where $$s_l^M(t)=\frac{2Mt\left[(l+2)\mathrm{sin}(\frac{l}{Mt})l\mathrm{sin}(\frac{l+2}{Mt})\right]}{l(l+1)(l+2)\left[2Mt\mathrm{sin}(\frac{2}{Mt})\right]}.$$ (34) For large $`l`$, $`s_l^M(t)`$ diminishes as $`l^2`$, and so the correlator receives an extra suppression of $`l^4`$. We will later use this result to estimate the contribution of high spin partial waves. The regularization of our correlator can be implemented in our Lagrange density by dividing factors of $`s_l^M(t),`$ $`\varphi _{l,n,m}\left[\frac{}{t}\frac{t^3}{2}\frac{}{t}\frac{t}{2}l(l+2)\right]\varphi _{l,n,m}`$ (35) $`\left[(s_l^M(t))^1\varphi _{l,n,m}\right]\left[\frac{}{t}\frac{t^3}{2}\frac{}{t}\frac{t}{2}l(l+2)\right]\left[(s_l^M(t))^1\varphi _{l,n,m}\right].`$ We now include the interaction and counterterms. 
We first define $`\left[\genfrac{}{}{0.0pt}{}{l_1,n_1,m_1;l_2,n_2,m_2}{l_3,n_3,m_3;l_4,n_4,m_4}\right]`$ (36) $`=d^3\mathrm{\Omega }Y_{l_1,n_1,m_1}(\theta ,\psi ,\phi )Y_{l_2,n_2,m_2}(\theta ,\psi ,\phi )Y_{l_3,n_3,m_3}(\theta ,\psi ,\phi )Y_{l_4,n_4,m_4}(\theta ,\psi ,\phi ).`$ We can write the full functional integral as $$𝒟\varphi \mathrm{exp}\left[d^4\stackrel{}{t}\right]\left(_{l,n,m}𝒟\varphi _{l,n,m}^{}\right)\mathrm{exp}\left[_0^{\mathrm{}}𝑑t(L_1+L_2+L_3)\right],$$ (37) where $$L_1=\underset{l,m,n}{}\left[(1)^m\left[(s_l^M(t))^1\varphi _{l,n,m}^{}\right]\left[\frac{}{t}\frac{t^3}{2}\frac{}{t}\frac{t}{2}l(l+2)\right]\left[(s_l^M(t))^1\varphi _{l,n,m}^{}\right]\right],$$ (38) $$L_2=\underset{l,m,n}{}\left[(1)^m\varphi _{l,n,m}^{}\left[\begin{array}{c}\left[c_{\varphi \varphi }^{(0)}c_{\varphi \varphi }^{\widehat{t}\widehat{t},(0)}\right]\frac{}{t}\frac{t^3}{2}\frac{}{t}\\ +c_{\varphi \varphi }^{(0)}\frac{t}{2}l(l+2)+t^3(M^2c_{\varphi ^2}^{(0)}+\frac{1}{t^2}c_{\varphi ^2}^{(2)})\end{array}\right]\varphi _{l,n,m}^{}\right],$$ (39) $$L_3=t^3(\frac{\lambda }{4!}c_{\varphi ^4}^{(0)})\underset{l_i,m_i,n_i}{}\left[\genfrac{}{}{0.0pt}{}{l_1,m_1,n_1;l_2,m_2,n_2}{l_3,m_3,n_3;l_4,m_4,n_4}\right]\varphi _{l_1,m_1,n_1}^{}\varphi _{l_2,m_2,n_2}^{}\varphi _{l_3,m_3,n_3}^{}\varphi _{l_4,m_4,n_4}^{}.$$ (40) We have used primes in preparation for redefining the fields, $$(s_l^M(t))^1\varphi _{l,n,m}^{}=\varphi _{l,n,m}.$$ (41) The Jacobian of this transformation is a constant (although infinite) and can be absorbed into the normalization of the functional integral. Now the Lagrangian $`L_1`$ has the usual free-field form in terms of $`\varphi _{l,n,m}`$ while $`L_2`$ and $`L_3`$ are now functions of $`s_l^M(t)\varphi _{l,n,m}`$. With $`M`$ serving as our ultraviolet regulator, the contribution of high-spin partial waves decouples for sufficiently large spin $`l`$. We can estimate the order of magnitude of this contribution in the following manner. 
We first identify $`t^1l`$ (where $`t`$ is the characteristic radius we are considering) as an estimate of the magnitude of the tangential momentum, $`p_T`$. For $`p_TMt^1`$ our correlator scales as $`\frac{M^4}{p_T^6}.`$ By dimensional analysis, a diagram with $`N_L`$ loops and $`N_I`$ internal lines will receive a contribution from partial waves with spin $`l`$ of order $$\left(\frac{M^4}{p_T^6}\right)^{N_I}\left(p_T\right)^{4N_L}=\left(\frac{M^4}{(t^1l)^6}\right)^{N_I}\left(t^1l\right)^{4N_L}.$$ (42) ## 5 One-loop examples We will devote the remainder of our discussion to computing one-loop spherical Feynman diagrams as a check of our results. Our calculations are done both numerically and analytically. The diagrams we will consider are shown in Figures 2 and 3. We start with the two-point function in Figure 2. The amplitude can be written as $`t^3B(t)`$ where $$B(t)_{l,n,m}\frac{1}{t^2(l+1)}(s_l^M(t))^2.$$ (43) Constants of proportionality are not important here and so we will define $`B(t)`$ to be equal to the right side of (43). Our results tell us that if we choose our mass counterterms appropriately, the combination $$B(t)+M^2c_{\varphi ^2}^{(0)}+\frac{1}{t^2}c_{\varphi ^2}^{(2)}$$ (44) should be independent of $`t`$, or more succinctly, $$B(t)+\frac{1}{t^2}c_{\varphi ^2}^{(2)}$$ (45) is independent of $`t`$. Let us first check this analytically. In the absence of a high-spin cutoff, we can explicitly calculate the sum in (43): $$B(t)=\frac{1}{t^2}+b(t)$$ (46) where $$b(t)=\frac{4M^2\mathrm{sin}^4(\frac{1}{Mt})}{(2Mt\mathrm{sin}(\frac{2}{Mt}))^2}.$$ (47) In the limit $`M\mathrm{},`$ $$B(t)\frac{1}{t^2}+\frac{9}{4}M^2.$$ (48) We conclude that $`c_{\varphi ^2}^{(2)}=1`$ and $`B(t)+\frac{1}{t^2}c_{\varphi ^2}^{(2)}`$ is in fact translationally invariant. In Figure 4 we have plotted $`B(t)\frac{1}{t^2}`$, computed numerically for various values of the high-spin cutoff $`J_{\mathrm{max}}`$. 
We have also plotted the limiting values $`b(t)`$ and $`\frac{9}{4}M^2`$. In our plot $`t`$ is measured in units of $`m^1`$ and $`B(t)\frac{1}{t^2}`$ is in units of $`m^2`$, where $`m`$ is an arbitrary mass scale such that $`M=3m`$. As expected, the errors are of size $`\frac{M^4t^2}{J_{\mathrm{max}}^2}`$. There is clearly a deviation from $`\frac{9}{4}M^2`$ for $`tM^1`$ but the integral of the deviation is negligible as $`M\mathrm{}`$. We now turn to the four-point function in Figure 3. The amplitude can be written as $`t_1^3t_2^3C(t_1,t_2)`$ where $$C(t_1,t_2)_{l,n,m}\frac{(s_l^M(t_1))^2(s_l^M(t_2))^2}{(l+1)^2}\left[\frac{t_1^l}{t_2^{l+2}}\theta (t_2t_1)+\frac{t_2^l}{t_1^{l+2}}\theta (t_1t_2)\right]^2.$$ (49) Again constants of proportionality are not important and so we will define $`C(t_1,t_2)`$ to be equal to the right side of (49). We can write $`C(t_1,t_2)`$ in terms of the regulated correlator $`G_{M^2}(\stackrel{}{t}_1,\stackrel{}{t}_2),`$<sup>15</sup><sup>15</sup>15We recall that the regulated correlator goes with $`\varphi _{l,n,m}^{}`$ rather than $`\varphi _{l,n,m}`$. But this is not important here since $`\varphi _{0,0,0}^{}`$ $`=\varphi _{0,0,0}`$. $$C(t_1,t_2)d^3\widehat{t}_1d^3\widehat{t}_2\left[G_{M^2}(\stackrel{}{t}_1,\stackrel{}{t}_2)\right]^2d^3\widehat{t}_1\left[G_{M^2}(\stackrel{}{t}_1,\stackrel{}{t}_2)\right]^2.$$ (50) Since the coupling constant counterterm $$c_{\varphi ^4}^{(0)}\delta ^4(\stackrel{}{t}_1\stackrel{}{t}_2)$$ (51) is translationally invariant, the amplitude by itself should be translationally invariant.
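Both cutoff-error estimates used in this section follow from the power counting in (42): with N_I internal lines and N_L loops, the tail of the partial-wave sum above spin l scales as M^(4 N_I) t^(6 N_I - 4 N_L) l^(4 N_L - 6 N_I). A few lines of arithmetic (the helper name is ours) confirm the two cases: the two-point diagram of Figure 2 gives M^4 t^2 / J_max^2, and the four-point diagram of Figure 3 gives M^8 t^8 / J_max^8:

```python
def tail_scaling(n_internal, n_loops):
    """Exponents (of M, t, l) in the eq. (42) estimate for the high-spin tail:
    (M^4 / (t^-1 l)^6)^N_I * (t^-1 l)^(4 N_L)."""
    return (4 * n_internal,
            6 * n_internal - 4 * n_loops,
            4 * n_loops - 6 * n_internal)

# Two-point diagram (Figure 2): N_I = 1, N_L = 1  ->  M^4 t^2 / l^2
assert tail_scaling(1, 1) == (4, 2, -2)
# Four-point diagram (Figure 3): N_I = 2, N_L = 1  ->  M^8 t^8 / l^8
assert tail_scaling(2, 1) == (8, 8, -8)
print("tail exponents match the quoted J_max error estimates")
```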
Let us define $$d^4\stackrel{}{t}_2e^{i\stackrel{}{p}(\stackrel{}{t}_1\stackrel{}{t}_2)}\left[G_{M^2}(\stackrel{}{t}_1,\stackrel{}{t}_2)\right]^2=f(\stackrel{}{p}^2),$$ (52) so that $$d^4\stackrel{}{t}_2e^{i\stackrel{}{p}\stackrel{}{t}_2}\left[G_{M^2}(\stackrel{}{t}_1,\stackrel{}{t}_2)\right]^2=e^{i\stackrel{}{p}\stackrel{}{t}_1}f(\stackrel{}{p}^2).$$ (53) Integrating over $`\widehat{t}_1`$, we find $$𝑑t_2t_2^2J_1(pt_2)C(t_1,t_2)\frac{1}{t_1}J_1(pt_1)f(\stackrel{}{p}^2).$$ (54) Let us define $$C(t)=𝑑t_2t_2^2J_1(pt_2)C(t,t_2).$$ (55) We now check that in fact $$C(t)\frac{1}{t_1}J_1(pt_1)\text{.}$$ (56) In the absence of a high-spin cutoff, we find that $`C(t)`$ is given by<sup>16</sup><sup>16</sup>16This calculation is somewhat lengthy. Details can be obtained upon request from the authors. $$C(t)=\frac{1}{t_1}J_1(pt_1)\left[\frac{1}{2}\mathrm{log}\frac{M^2}{p^2}+c\right]+\mathrm{},$$ (57) where the ellipsis represents terms which vanish as $`M\mathrm{}`$ and $$c=324\left[_0^{1/2}𝑑k\left(\frac{(\mathrm{sin}kk\mathrm{cos}k)^4}{4k^{13}}\frac{1}{324k}\right)+_{1/2}^{\mathrm{}}𝑑k\frac{(\mathrm{sin}kk\mathrm{cos}k)^4}{4k^{13}}\right].$$ (58) In Figure 5 we plot $`C(t)`$ for different values of the high-spin cutoff $`J_{\mathrm{max}}`$ as well as the large-$`M`$ limit value $$C_1(t)=\frac{1}{t_1}J_1(pt_1)\left[\frac{1}{2}\mathrm{log}\frac{M^2}{p^2}+c\right].$$ (59) In our plot $`t`$ is measured in units of $`p^1`$ and $`M=3p`$. From (42) the expected error is of size $`\frac{M^8t^8}{J_{\mathrm{max}}^8}.`$ We see that the data is consistent with the results expected. Again the deviation for $`tM^1`$ integrates to a negligible contribution as $`M\mathrm{}`$. ## 6 Summary We have examined several important features of non-perturbative renormalization in the spherical field formalism and answered the three questions posed in the introduction. 
Ultraviolet divergences can be cancelled by a finite number of local counterterms in a manner such that the renormalized theory is translationally invariant. Using angle-smearing regularization we find that the counterterms for $`\varphi ^4`$ theory in four dimensions can be parameterized by five unknown constants as shown in (29). Aside from our remarks about Ward identity constraints in gauge theories, the extension to other field theories is straightforward. We hope that these results will be useful for future studies of general renormalizable theories by spherical field techniques. Acknowledgments We gratefully acknowledge useful correspondence with Daniel Freedman, Roman Jackiw, and Jose Latorre. Figures Figure 1. Sketch of the angle-smearing region (three-dimensional rendering). Figure 2. One-loop two-point correlator for $`\varphi _{0,0,0}`$. Figure 3. One-loop four-point correlator for $`\varphi _{0,0,0}`$. Figure 4. Plot of $`B(t)\frac{1}{t^2}`$. Figure 5. Plot of $`C(t)`$.
# The Age of Beta Pic<sup>1</sup><sup>1</sup>1Based on observations collected by the Hipparcos satellite ## 1 Introduction This is the fourth paper in a series devoted to the ages of Vega-like stars. This is achieved by finding late type stars physically associated with them. Then, several time-dependent properties are analyzed and an age is derived. Stauffer et al. (Stauffer1995 (1995)) studied the secondary of HR4796, a conspicuous Vega-like star discovered by Jura (Jura1991 (1991)). They concluded that the binary is remarkably young (8$`\pm `$2 Myr). More recently, Barrado y Navascués et al. (ByN1997 (1997)) focused on Fomalhaut. The physical association with the late type star GL 879 served to constrain the age to 200$`\pm `$100 Myr. The realization of the fact that Fomalhaut shares its Galactic movement with other stars, including Castor and Vega, produced another determination of the age, 200$`\pm `$100 Myr (Barrado y Navascués ByN1998 (1998)). Vega-like stars show large IR excesses originating in dusty circumstellar disks, which are thought to be the remnants of the T Tauri disks or a consequence of the formation of planets (see Backman & Paresce Backman1993 (1993)). Until the discovery of the first extra-solar planet by Mayor & Queloz (Mayor1995 (1995)), they provided some of the best evidence for the presence of planetary systems outside of our own. There has been much recent progress on these systems in terms of understanding the structure of their disks (Jura et al. 1998; Jayawardhana et al. 1998; Koerner et al. 1998; Greaves et al. 1998; Schneider et al. 1999), spectral distribution (Zuckerman & Becklin 1993; Holland et al. 1998; Fajardo-Acosta et al. 1998) and evolution (Zuckerman et al. 1995; Thakur et al. 1997; Song et al. 1998). However, accurate ages for these systems are still in short supply. In this paper, we provide what we believe to be an accurate age for $`\beta `$ Pic.
## 2 A common origin based on the kinematic properties Following Barrado y Navascués (ByN1998 (1998)), we selected an initial list of possible $`\beta `$ Pic companions from Agekyan & Orlov (Agekyan1984 (1984)), which provides an extensive search for kinematic groups. We also included stars from Soderblom (Soderblom1990 (1990)), Poveda et al. (Poveda1994 (1994)) and Tokovinin (Tokovinin1997 (1997)). Then, we computed the Galactic components of these stars using equatorial coordinates, parallaxes, proper motions –the Hipparcos (ESA, ESA1997 (1997)) and PPM (Bastian & Roeser Bastian1993 (1993); Roeser & Bastian Roeser1994 (1994)) catalogs– and the radial velocities (Duflot et al. Duflot1995 (1995)). For $`\beta `$ Pic itself, we used the values derived by Lagrange et al. (1995) and Jolly et al. (1998), based on HST/GHRS spectra of narrow absorption lines of Fe and CO (thought to be due to stationary circumstellar gas but external to its disk). Of the initial stars selected, only six have space motions close enough to that of $`\beta `$ Pic (within 2 sigma) to be at all plausible companions. Tables 1 and 2 provide various data for these stars and the dynamics. The UVW components of the Galactic velocity were computed following Johnson & Soderblom (1987), using PPM proper motions. Similar results can be computed with Hipparcos. We have used these data for two different purposes: First, we have tried to verify whether these stars are indeed physically associated. Second, using several properties of the late type stars, we have estimated the age of the moving group. The V component imposes the strongest constraint to determine whether a group of stars is associated (Soderblom & Mayor Soderblom1993 (1993)). V should correspond to a drift rate which would lead to a secular increase in separation between two given stars (as opposed to U or W components, where a difference in current velocity may not matter, because stars oscillate in those directions).
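The drift rate implied by a V-velocity mismatch is easy to quantify: a difference of 2 km/s corresponds to roughly 2 pc/Myr, so two such stars separate by about 40 pc over 20 Myr. A throwaway unit check (our script; the constants are the standard values of the parsec and the megayear):

```python
KM_PER_PC = 3.0857e13    # kilometres in one parsec
SEC_PER_MYR = 3.1557e13  # seconds in one megayear

# 1 km/s expressed in pc/Myr: close to unity, hence "2 km/s ~ 2 pc/Myr"
pc_per_myr_per_kms = SEC_PER_MYR / KM_PER_PC

# separation built up by a 2 km/s drift over 20 Myr
separation_pc = 2.0 * pc_per_myr_per_kms * 20.0
print(round(separation_pc, 1))  # ~40.9 pc
```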
Since 2 km/s is about 2 pc/Myr, two stars whose space motion differed by that amount would separate by 40 pc in 20 Myr (our final estimation of the age of $`\beta `$ Pic). Therefore, they could not both have been born in the same place 20 Myr ago and now both be within 20 pc of the Sun. Given the accuracies to which we can estimate the space motions of our selected stars, we believe that GL 97 can be rejected as a possible companion to $`\beta `$ Pic. If we use the PPM proper motions, it has too large a difference in the V component; if instead, we adopt the Hipparcos proper motions, then the U velocity differs by an amount ($`>`$8 km/s) that is too large. There are other spectroscopic reasons to also believe that GL 97 is too old to be a possible companion to $`\beta `$ Pic (Pasquini et al. Pasquini1994 (1994)). We also choose to exclude GL 601 from further consideration because we have no observational data that allows us to usefully estimate its age. In the next section, we will examine age estimates for Beta Pic and for the remaining four stars in order to try to establish whether any of them appear to be coeval. ## 3 The age of $`\beta `$ Pic ### 3.1 Isochrone Fitting for $`\beta `$ Pic Itself Isochrone fitting for $`\beta `$ Pic has been previously attempted, yielding an age of 100 Myr (Backman & Paresce Backman1993 (1993)). Lanz et al. (Lanz1995 (1995)), using the same procedure, concluded that an age around 12 Myr or larger than 300 Myr would be possible, although they judged the first value as the most likely. From Figure 2 of Brunini & Benvenuto (Brunini1996 (1996)), which represents evolutionary tracks, an age between 20 and 40 Myr could be inferred. Finally, Crifo et al. (Crifo1997 (1997)), using Hipparcos data, confirmed that the star is very close to or on the ZAMS, and it is older than 8 Myr. All these studies show that this technique is not very restrictive and the age of $`\beta `$ Pic remains uncertain. 
### 3.2 Isochrone Fitting for the Possible Companions of $`\beta `$ Pic The photometry of late type stars can provide accurate ages, if they are cool and young enough to be in the PMS phase, by comparison with isochrones. We have compared the four candidate $`\beta `$ Pic companions to PMS isochrones from D’Antona & Mazzitelli (1997, priv. comm.=DM97), where we have used a color-temperature conversion based on requiring the DM97 125 Myr isochrone to coincide with the main-sequence locus of stars in the Pleiades (c.f. Stauffer et al. Stauffer1995 (1995); Stauffer Stauffer1998 (1998)). In order to place our four candidate $`\beta `$ Pic companions in context, we also include in this figure all of the nearby M dwarfs for which Leggett (1992) has compiled accurate photometry and which have parallaxes from Hipparcos with $`\sigma (\pi )`$/$`\pi `$ $`<`$ 0.10. For binary stars where it is known that the two components are nearly equal in brightness (including GL 799), we have added 0.75 mag to the M<sub>V</sub> in order to correct for the binarity effect; for known binaries with unknown mass ratios, we plot the star as an open diamond symbol but do not correct the M<sub>V</sub>. Clearly, GL 799 and GL 803 are among the brightest stars, compared with stars of the same color, in the solar neighborhood. That is, they are very young. In fact, the three youngest objects in Figure 1 are GL 182, GL 799 and GL 803. GL 182 is known to be a very young, dMe star (Favata et al. Favata1998 (1998)); its kinematics indicates that it is not, however, moving with the same space motion as GL 799 and 803, so we do not consider it further. Within the errors, the locations of GL 799 and 803 in Figure 1 are consistent with their having the same age (20 Myr). GL 781.2 and GL 824 appear to be older, with ages formally consistent with being 40 Myr.
However, because they are higher mass objects and their displacement above the ZAMS is less, their locations in Figure 1 are actually consistent with any age up to several hundred Myr given uncertainties in their photometry and metallicity and the placement of the isochrones into the observational plane. We provide evidence in the next section that GL 824, at least, is quite old ($`>`$ 600 Myr), and unlikely to be physically connected to $`\beta `$ Pic. ### 3.3 Stellar activity Stellar activity, a consequence of the presence of magnetic fields in late-type stars (due to the combination of the rotation and convection, or dynamo effect), is a well studied phenomenon. Because of main sequence angular momentum loss, the rotation rates of low mass stars decline with age - and hence activity levels also decline with age (e.g., Stauffer Stauffer1988 (1988)). We can use measures of stellar activity, therefore, as proxies for age in an attempt to identify which of our candidate stars might be coeval with $`\beta `$ Pic. Figure 2 depicts the X-ray luminosity against (B–V). In panel a, crosses represent ROSAT All Sky data (Hünsch et al. Hunsch1999 (1999)) for the Gliese stars. In panel b, Pleiades and Hyades stars appear as open and solid symbols, respectively. Clearly, the X-ray luminosity of GL 824 is relatively low, even compared to that of the stars in the Hyades (age $`\sim `$ 600 Myr); we infer from this that GL 824 is older than the Hyades. Based on its location in a CM diagram, $`\beta `$ Pic cannot be as old as 600 Myr, and therefore the X-ray data provide strong evidence that GL 824 is older than $`\beta `$ Pic. On the other hand, the activity of GL 803 and GL 799 is very high, consistent with the young ages deduced from their position in the CM diagram. Unfortunately, there are no published X-ray data for GL 781.2, and we cannot further constrain its age at this time.
### 3.4 Summary of Age Constraints Using the best available data, analysis of the location of $`\beta `$ Pic in a CM diagram only allows one to conclude that its age is somewhere between a few million and a few hundred million years. Of the four late type stars whose kinematics match that of $`\beta `$ Pic, GL 824 is removed from contention because its activity indicates it is older than the maximum age for $`\beta `$ Pic. GL 781.2 is essentially unconstrained in its age due to a lack of activity data and due to its early spectral type (precluding an accurate HR diagram age). However, the other two candidates have a well-constrained age from their location in a CM diagram, have activity levels consistent with that age, and share the motion of $`\beta `$ Pic to within 1 km/s. We believe, therefore, that the age derived for these stars from PMS isochrones - 20$`\pm `$10 Myr - is the best estimate for the age of $`\beta `$ Pic. The spatial location of the three stars is compatible with this age and the derived relative space motions. We note that Poveda et al. (1994) have previously identified GL 799 and GL 803 as being likely siblings - we are now simply adding $`\beta `$ Pic as their bigger brother. ## 4 The Correlation of IR Excess and Age for $`\beta `$ Pic Stars In their comprehensive review of the Vega phenomenon, Backman & Paresce (Backman1993 (1993)) described specifically the evolutionary status of the three prototypes, $`\beta `$ Pic, Vega and Fomalhaut, estimating their ages as 100, 200 and 400 Myr, respectively. Several studies have tried to relate these ages with different properties which appear as a consequence of the presence of circumstellar disks, in order to see if there is an evolutionary sequence. For instance, Fig. 2 of Holland et al. (1998) suggests a dependence of the total amount of dust in the disk on age. Our results support this type of dependence (Figure 3).
The inferred rapid decline in dust mass supports the hypothesis that the Vega phenomenon is a normal stage in the early life of intermediate mass and solar-like stars. DBN thanks the IAC (Spain) and the DFG (Germany) for their fellowships. JRS acknowledges support from NASA Grants NAGW-2698 and 3690. We thank X. Delfosse and D. Fischer for providing data prior to publication. We have used the Simbad database.
no-problem/9905/astro-ph9905067.html
# Possible Long-Lived Asteroid Belts in the Inner Solar System Recent years have witnessed a carnival of discoveries in the Outer Solar System \[1-3\]. Here we provide evidence from numerical simulations of orbital stability to suggest the possible existence of two long-lived belts of primordial planetesimals in the Solar System. The first is the domain of the Vulcanoids ($`\mathbf{0.09}`$–$`\mathbf{0.21}`$ AU) between the Sun and Mercury, where remnant planetesimals may survive on dynamically stable orbits provided they possess a characteristic radius greater than $`\mathbf{0.1}`$ km. The second is a belt between the Earth and Mars ($`\mathbf{1.08}`$–$`\mathbf{1.28}`$ AU) in which an initial population of particles on circular orbits may survive for the age of the Solar System. A search through the catalogues of Near-Earth Objects reveals an excess of asteroids with low eccentricities and inclinations occupying this belt, such as the recently discovered objects 1996 XB27, 1998 HG49 and 1998 KG3. Symplectic integrators with individual timesteps provide a fast algorithm that is perfectly suited to long numerical integrations of low eccentricity orbits in a nearly Keplerian force field. Individual timesteps are a great boon for our work, as orbital clocks tick much faster in the Inner Solar System than the Outer. Over a thousand test particles are distributed on concentric rings with values of the semimajor axis between $`0.1`$ AU and $`2.2`$ AU. Each of these rings is located in the invariable plane and hosts five test particles with starting longitudes $`n\times 72^{\circ }`$ with $`n=0,\dots ,4`$. Initially, the inclinations and eccentricities vanish for the whole sample of test particles. The test particles are perturbed by the Sun and planets but do not themselves exert any gravitational forces. The full gravitational effects of all the planets (except Pluto) are included. 
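The generation of such a grid of initial conditions can be sketched as follows. This is a minimal illustration, assuming circular heliocentric orbits in a common plane; the ring spacing (0.01 AU) is chosen here only so that the particle count comes out near the paper's, and is not taken from the text.

```python
import math

GM_SUN = 2.959122082855911e-4  # Sun's gravitational parameter in AU^3/day^2

def ring_initial_conditions(a, n_particles=5):
    """Place n_particles on a circular, planar heliocentric orbit of
    semimajor axis a (AU), equally spaced 72 degrees apart in longitude."""
    v = math.sqrt(GM_SUN / a)  # circular speed for e = 0
    states = []
    for n in range(n_particles):
        lon = 2.0 * math.pi * n / n_particles
        # position on the ring; velocity perpendicular to the radius vector
        x, y = a * math.cos(lon), a * math.sin(lon)
        vx, vy = -v * math.sin(lon), v * math.cos(lon)
        states.append((x, y, 0.0, vx, vy, 0.0))
    return states

# rings spanning 0.1-2.2 AU; the 0.01 AU spacing is illustrative only
rings = [0.1 + 0.01 * i for i in range(210)]
particles = [s for a in rings for s in ring_initial_conditions(a)]
print(len(particles))  # 1050 test particles, five per ring
```

Each state would then be handed to the symplectic integrator alongside the planetary initial conditions.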
The initial positions and velocities of the planets, as well as their masses, come from the JPL Planetary and Lunar Ephemerides DE405. For all the computations, the timestep for Mercury is $`14.27`$ days. The timesteps of the planets are in the ratio $`1:2:2:4:8:8:64:64`$ for Mercury moving outward through to Neptune. The relative energy error is oscillatory and has a peak amplitude of $`10^{-6}`$ over the 100 million year integration timespans. After each timestep, the test particles are examined. If their orbits have become hyperbolic, or have entered the sphere of influence of any planet, or have approached closer than ten solar radii to the Sun, they are removed from the simulation. This general procedure is familiar from a number of recent studies on the stability of test particles in the Solar System. For example, Holman uncovered evidence for a possible belt between Uranus and Neptune by a similar integration of test particles in the gravitational field of the Sun and the four giant planets. His integrations reached the impressive timescale of 4.5 Gyrs – of the order of the age of the Solar System. Simulations of the inner Solar System are much more laborious, as the orbital period of Mercury is $`88`$ days (as compared to $`4332`$ days for Jupiter, the giant planet with the shortest orbital period). This forces us to use a much smaller timestep, and roundoff error becomes a menacing obstacle to believable results. So, we adopt the strategy of running on a fleet of nearly twenty personal computers of varying processor speeds, so that the calculations are performed in long double precision implemented in hardware. The integration of the orbits of $`1050`$ test particles for $`100`$ Myrs occupied this fleet of computers for over four months. Fig. 1 shows the results of this calculation. The survival times of the test particles are plotted against starting semimajor axis. 
There are five test particles at each starting position, so the vertical lines in the figure join five filled circles which mark their ejection times. The locations of particles that survive for the entire $`100`$ Myr timespan are marked by diamonds on the upper horizontal axis. Around each of the terrestrial planets, there is a swathe of test particles that are ejected rapidly on a precession timescale. This band is much broader around Mars than the Earth or Venus, perhaps because of the higher eccentricity of Mars’ orbit. There are also narrow belts of test particles that survive for the full integration. So, for example, all 50 of the test particles with starting semimajor axes between $`0.1`$ and $`0.19`$ AU are still present at the end of the 100 Myr integration. The existence of a population of small asteroid-like bodies – known as the Vulcanoids – wandering in intra-Mercurial orbits has been hypothesised before \[11-13\]. There are 16 surviving test particles with starting semimajor axes between $`0.6`$ and $`0.66`$ AU, suggesting the possible existence of a narrow belt between Mercury and Venus. A somewhat larger third belt of 33 surviving test particles occupies a belt between Venus and the Earth. Their starting semimajor axes range from $`0.79`$ to $`0.91`$ AU. Finally, there is a broad belt between the Earth and Mars from $`1.08`$ to $`1.28`$ AU in which a further 26 test particles survive. The possibility of the existence of belts between Venus and the Earth and between the Earth and Mars was raised by Mikkola & Innanen on the basis of 3 Myr integrations. Of course, these results must be treated with considerable reserve, as 100 Myrs is just $`2\%`$ of the age of the Solar System since the assembly of the terrestrial planets ($`\sim 5\mathrm{Gyr}`$). It is straightforward to estimate that if this simulation in long double precision were to be continued until the integration time reached even $`1\mathrm{Gyr}`$, then it would consume $`3.5`$ years of computing time. 
Accordingly, we use the standard, albeit approximate, device of re-simulating with greater resolution and extrapolating the results. At semimajor axes separated by $`0.002`$ AU, five test particles are again launched on initially circular orbits with starting longitudes $`n\times 72^{\circ }`$ with $`n=0,\dots ,4`$. The number of test particles $`N(t)`$ remaining after time $`t`$ is monitored for each of the four belts. Table 1 gives the results of fitting the data between $`1\mathrm{Myr}`$ and $`100\mathrm{Myr}`$ to the following logarithmic and power-law decays: $$N(t)=a+b\mathrm{log}_{10}\left(t[\mathrm{yrs}]\right),N(t)=\frac{10^c}{\left(t[\mathrm{yrs}]\right)^d}$$ (1) In the last two columns, the expected number of test particles remaining after $`1\mathrm{Gyr}`$ and $`5\mathrm{Gyr}`$ is computed. The uncertainties in the fitted parameters suggest that the logarithmic fall-off is a better – and more pessimistic – fit to the asymptotic behaviour than a power-law. Our extrapolations suggest that two of the belts – those lying between Mercury and Venus, and between Venus and the Earth – will become almost entirely depleted after $`5\mathrm{Gyr}`$. However, even taking a staunchly pessimistic outlook, it seems likely that some of the Vulcanoids and some of the test particles with starting semimajor axes between $`1.08`$–$`1.28`$ AU in the Earth-Mars belt will survive for the full age of the Solar System. We can make a crude estimate of present-day numbers by extrapolation from the Main Belt asteroids. Assuming that the primordial surface density falls inversely like distance, we find that the Earth-Mars belt may be occupied by perhaps a thousand or so remnant objects. Of course, these objects are now outnumbered by the more recent arrivals, the asteroids ejected via resonances from the Main Belt, which may number a few thousand in total. A systematic search for Vulcanoids has already been conducted by Leake et al. 
, who exploited the fact that bodies so close to the Sun are identifiable from their substantial infrared excess. No candidate objects were found. However, the survey was limited to a small area of just 6 square degrees and was estimated to be $`75\%`$ efficient at detecting bodies brighter than 5th magnitude in the L band. This result places constraints on the existence of a population of objects with radii greater than $`50`$ km, but minor bodies with the typical sizes of small asteroids ($`\sim 10`$ km) will have evaded detection. On theoretical grounds, Vulcanoids have been proposed to resolve apparent contradictions between the geological and the geophysical evidence on the history of the surface features on Mercury. The robustness of the Vulcanoid orbits partly stems from the fact that there is only one neighbouring planet and so may be compared to the stability of the Kuiper-Edgeworth belt. Even after $`100\mathrm{Myr}`$, some $`80\%`$ of our Vulcanoid orbits still have eccentricities $`e<0.2`$ and inclinations $`i<10^{\circ }`$. It is this evidence, together with the low rate of attrition of their numbers, that suggests that they can continue for times of the order of the age of the Solar System. The outer edge of the Vulcanoid belt is at $`0.21`$ AU. Objects beyond this are dynamically unstable and are excited into Mercury-crossing orbits on $`100\mathrm{Myr}`$ timescales. The inner edge of the belt is not so sharply defined. Small objects close to the Sun may be susceptible to destruction both by Poynting-Robertson drag and by evaporation. Taking the mean density and Bond Albedo of a typical Vulcanoid to be the same as that of Mercury, we find that objects with radii satisfying $`0.1<z<50`$ km can evade both drag and evaporation in the Vulcanoid belt. This is one of the most dynamically stable regimes in the entire Solar System. 
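The two decay laws of equation (1), and their extrapolation to 1 and 5 Gyr, can be fitted by simple linear regression in the appropriate logarithmic variables. The sketch below illustrates the procedure; the survival counts used here are invented for the example and are not the data of Table 1.

```python
import numpy as np

# illustrative survival counts N(t) between 1 Myr and 100 Myr
# (made-up numbers -- the real data are in Table 1 of the text)
t = np.array([1e6, 3e6, 1e7, 3e7, 1e8])   # years
N = np.array([120.0, 110.0, 100.0, 92.0, 85.0])

# logarithmic decay N(t) = a + b log10(t): linear in log10(t)
b, a = np.polyfit(np.log10(t), N, 1)

# power-law decay N(t) = 10^c / t^d: linear in log-log space
minus_d, c = np.polyfit(np.log10(t), np.log10(N), 1)
d = -minus_d

for t_ex in (1e9, 5e9):
    n_log = a + b * np.log10(t_ex)
    n_pow = 10.0 ** c / t_ex ** d
    print(f"t = {t_ex:.0e} yr: log fit {n_log:.1f}, power law {n_pow:.1f}")
```

As in the text, the logarithmic fit declines faster at late times than the shallow power law, and so gives the more pessimistic extrapolation.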
If further searches do not detect any intra-Mercurial objects, this is a strong indicator that other processes – such as planetary migrations – may have disrupted the population. Although there are no known intra-Mercurial bodies, we can find candidate objects for the Earth-Mars belt. Suppose we search an asteroidal database for objects with inclinations $`i<10^{\circ }`$ and eccentricities $`e<0.2`$ between the semimajor axes of Earth and Mars; then we find that there are ten objects. Of these, seven lie within our suggested Earth-Mars belt ($`1.08`$–$`1.28`$ AU), which is evidence for an enhancement of nearly circular orbits in this region. An even more striking test is to search through the objects between the Earth and Mars for low eccentricity and inclination asteroids that are not planet-crossing. Then, there are only three objects (1996 XB27, 1998 HG49 and 1998 KG3) among all the asteroids in the database, and all three lie between 1.08 and 1.28 AU. Most of the $`\sim 50`$ asteroids with semimajor axes presently located in our Earth-Mars belt are moving on orbits with large eccentricities and inclinations. They are not dynamically stable and will evolve on timescales of the order of a few Myrs. Most of these objects are believed to be asteroids ejected from resonance locations in the Main Belt, although a handful may even be comets whose surfaces have become denuded of volatiles. However, the seeming enhancement of circular orbits in this region hints at a primordial population whose orbits are very mildly eccentric and mildly inclined. Ejection from the Main Belt will tend to increase the eccentricity of an asteroid. So, the mildly eccentric objects may well be remnant planetesimals, the original denizens of the region before it was colonized by asteroids from the resonance locations in the Main Belt. Acknowledgments We thank the Royal Society for the money to purchase dedicated computers, as well as the Oxford Supercomputing Centre (OSCAR). 
Above all, we wish to thank Prasenjit Saha and Scott Tremaine for their stimulating and insightful suggestions and comments, as well as their advice on computational matters. Helpful criticism from John Chambers, Luke Dones and the two referees is also gratefully acknowledged.
no-problem/9905/astro-ph9905192.html
# Decorrelating the Power Spectrum of Galaxies (Published in Monthly Notices) ## 1 Introduction Accurate measurements of the principal cosmological parameters now appear to be within reach (Turner 1999). Large redshift surveys of galaxies, notably the Two-Degree Field Survey (2dF) (Colless 1998; Folkes et al. 1999) and the Sloan Digital Sky Survey (SDSS) (Gunn & Weinberg 1995; Margon 1998), should be a gold mine of cosmological information over wavenumbers $`k\sim 0.003`$–$`300h\mathrm{Mpc}^{-1}`$. Information from such surveys on large scales ($`k\lesssim 0.1h\mathrm{Mpc}^{-1}`$) should improve greatly the accuracy attainable from upcoming Cosmic Microwave Background (CMB) experiments alone (Eisenstein et al. 1999), but in fact most of the information in these surveys is on small scales, in the nonlinear regime (Tegmark 1997b). While the power spectrum may be the ideal carrier of information at the largest, linear scales where density fluctuations may well be Gaussian, it proves less satisfactory at moderate and smaller scales, since nonlinear evolution induces a broad covariance between estimates of power at different wavenumbers, as emphasized by Meiksin & White (1999) and Scoccimarro, Zaldarriaga & Hui (1999). Ultimately, one can imagine that there exists some kind of mapping that translates each independent piece of information in the (Gaussian) linear power spectrum into some corresponding independent piece of information in the nonlinear power spectrum, or some related quantity. That such a mapping exists at least at some level is evidenced by the success of the analytic linear $`\rightarrow `$ nonlinear mapping formulae of Hamilton et al. (1991, hereafter HKLM) and Peacock & Dodds (1994, 1996). 
However, if such a mapping really existed, that was not only invertible but also, as in the HKLM-Peacock-Dodds formalism, mapped delta-functions of linear power at each linear wavenumber into delta-functions of nonlinear power (or some transformation thereof) at some possibly different nonlinear wavenumber, then that mapping ought to translate uncorrelated quantities in the linear regime – powers at different wavenumbers – into uncorrelated quantities in the nonlinear regime. The fact that the nonlinear power spectrum is broadly correlated over different wavenumbers shows that the HKLM-Peacock-Dodds formalism cannot be entirely correct. In the preceding paper (Hamilton 2000, hereafter Paper 3), it was shown that prewhitening the nonlinear power spectrum – transforming the power spectrum in such a way that the noise covariance becomes proportional to the unit matrix – substantially narrows the covariance of power. Moreover, this narrowing of the covariance of power occurs for all power spectra tested, including both realistic power spectra and power law spectra over the full range of indices permitted by the hierarchical model. It should be emphasized that these conclusions are premised on the hierarchical model for the higher order correlations, with constant hierarchical amplitudes, and need to be tested with $`N`$-body simulations. In the meantime, if indeed there exists an invertible linear $`\rightarrow `$ nonlinear mapping of cosmological power spectra, then it would appear that the prewhitened nonlinear power spectrum should offer a closer approximation to the right hand side of this mapping than does the nonlinear power spectrum itself. Whatever the case, the prewhitened nonlinear power spectrum has the practical benefit of enabling the agenda of the present paper – decorrelating the galaxy power spectrum – to succeed over the full range of linear to nonlinear wavenumbers. 
That is to say, if one attempted to decorrelate the nonlinear power spectrum itself into a set of uncorrelated band-powers, then the band-power windows would be so broad, with almost canceling positive and negative parts, that it would be hard to interpret the band-powers as representing the power spectrum in any meaningful way. By contrast, the covariance of the prewhitened nonlinear power is already narrow enough that decorrelation into band-powers works quite satisfactorily. Paper 3 showed how to construct a near approximation to the minimum variance estimator and Fisher information matrix of the prewhitened nonlinear power spectrum in the Feldman, Kaiser & Peacock (1994, hereafter FKP) approximation, valid at wavelengths short compared to the scale of the survey. In the present paper we describe how to complete the processing of the prewhitened nonlinear power spectrum into a set of decorrelated band-powers that come close to fulfilling the ideals of (a) being uncorrelated, (b) having the highest possible resolution, and (c) having the smallest possible error bars. The idea of decorrelating the power spectrum was proposed by Hamilton (1997; hereafter Paper 2), and was further discussed and successfully applied to the CMB by Tegmark & Hamilton (1998). Like Paper 3, the present paper ignores redshift distortions and other complicating factors, such as light-to-mass bias, on the Alfa-Romeo (1969 P159 Shop Manual) principle that it is best to adjust one thing at a time. ## 2 Data This paper presents examples mainly for two redshift surveys of galaxies, the Sloan Digital Sky Survey (SDSS; Gunn & Weinberg 1995; Margon 1998) and the Las Campanas Redshift Survey (LCRS; Shectman et al. 1996; Lin et al. 1996). The information content of the prewhitened power spectrum of the IRAS 1.2 Jy survey (Fisher et al. 1995) is also shown for comparison in Figure 7. 
Calculating the Fisher matrix of the prewhitened power spectrum, as described in §6 of Paper 3, involves evaluating the FKP-weighted pair integrals $`R(r;\mu )`$ (Paper 3, eq. ), commonly denoted $`RR`$, of the survey. For SDSS and IRAS 1.2 Jy, the pair integrals $`R(r;\mu )`$, were computed in the manner described by Hamilton (1993), over a logarithmic grid spaced at $`0.02\mathrm{dex}`$ in pair separation $`r`$ and $`0.1\mathrm{dex}`$ in FKP constant $`\mu `$. For LCRS, we thank Huan Lin (1996, private communication) for providing FKP-weighted pair integrals $`R(r;\mu )`$ on a grid of separations $`r`$ and FKP constants $`\mu `$. The geometry of SDSS is described by Knapp, Lupton & Strauss (1997, ‘Survey Strategy’ link). We consider only the main galaxy sample, not the Bright Red Galaxy sample. We thank D. H. Weinberg (1997, private communication) for providing the anticipated radial selection function of the survey, and M. A. Strauss (1997, private communication) for a computer file and detailed explanation of the angular mask. The angular mask of the northern survey consists of 45 overlapping stripes each $`2\stackrel{\circ }{.}5`$ wide in declination, forming a roughly elliptical region extending $`110^{\circ }`$ in declination by $`130^{\circ }`$ in the orthogonal direction. The angular mask of the southern survey consists of 3 separate $`2\stackrel{\circ }{.}5`$ wide stripes centred at declinations $`15^{\circ }`$, $`0^{\circ }`$, and $`10^{\circ }`$. The survey covers a total area, north plus south, of $`3.357\mathrm{sr}`$. The LCRS (Shectman et al. 1996) covers six narrow slices each approximately $`1\stackrel{\circ }{.}5`$ in declination by $`80^{\circ }`$ in right ascension, three each in the north and south galactic caps. The angular mask of the IRAS 1.2 Jy survey (Fisher et al. 1995) is the same as that of the 2 Jy survey (Strauss et al. 
1990, 1992), and consists of the sky at galactic latitudes $`|b|>5^{\circ }`$, less 1465 approximately $`1^{\circ }\times 1^{\circ }`$ squares, an area of $`11.0258\mathrm{sr}`$ altogether. We thank M. A. Strauss (1992, private communication) for a computer listing of the excluded squares. The radial selection function was computed by the method of Turner (1979). Distances were computed from redshifts assuming a flat matter-dominated cosmology, and galaxy luminosities $`L`$ were corrected for evolution with redshift $`z`$ according to $`L\propto 1+z`$. ## 3 Decorrelation ### 3.1 Decorrelation matrices Let $`𝖥`$ be the Fisher information matrix (see Tegmark, Taylor & Heavens 1997 for a review) of a set of estimators $`\widehat{\theta }`$ of parameters $`\theta `$ to be measured from observations. Below we will specialize to the case where the parameters are the prewhitened power spectrum, but for the moment the parameters $`\theta `$ could be anything. Assume, thanks to the central limit theorem or otherwise, that the covariance matrix of the estimators $`\widehat{\theta }`$ is adequately approximated by the inverse of the Fisher matrix $$\langle \mathrm{\Delta }\widehat{\theta }\mathrm{\Delta }\widehat{\theta }^{\top }\rangle =𝖥^{-1}.$$ (1) A decorrelation matrix $`𝖶`$ is any real square matrix, not necessarily orthogonal, satisfying $$𝖥=𝖶^{\top }\mathsf{\Lambda }𝖶$$ (2) where $`\mathsf{\Lambda }`$ is diagonal. The quantities $`𝖶\widehat{\theta }`$ are uncorrelated because their covariance matrix is diagonal $$𝖶\langle \mathrm{\Delta }\widehat{\theta }\mathrm{\Delta }\widehat{\theta }^{\top }\rangle 𝖶^{\top }=\mathsf{\Lambda }^{-1}.$$ (3) Being inverse variances of $`𝖶\widehat{\theta }`$, the diagonal elements of the diagonal matrix $`\mathsf{\Lambda }`$ are necessarily positive. 
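As a concrete numerical illustration of equation (2) and the unit-variance scaling discussed below, here is a small sketch with an invented 3×3 Fisher matrix: both the Cholesky factor of F and its symmetric square root are valid decorrelation matrices, and they differ only by an orthogonal rotation.

```python
import numpy as np

# an invented, symmetric positive definite "Fisher matrix"
F = np.array([[4.0, 1.0, 0.2],
              [1.0, 3.0, 0.5],
              [0.2, 0.5, 2.0]])

# Cholesky: F = W^T W with W upper triangular (one valid decorrelation matrix)
W = np.linalg.cholesky(F).T

# symmetric square root: F = V^T V with V = F^{1/2} (another valid choice)
lam, O = np.linalg.eigh(F)
V = O @ np.diag(np.sqrt(lam)) @ O.T

# both factorizations reconstruct F
assert np.allclose(W.T @ W, F)
assert np.allclose(V.T @ V, F)

# the two choices are related by an orthogonal rotation R = V W^{-1}
R = V @ np.linalg.inv(W)
assert np.allclose(R @ R.T, np.eye(3))

# decorrelated quantities W theta have unit covariance: W F^{-1} W^T = 1
assert np.allclose(W @ np.linalg.inv(F) @ W.T, np.eye(3))
```

The last assertion is the decorrelation property itself: with the estimator covariance approximated by F⁻¹, the combinations W θ̂ are uncorrelated with unit variance.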
The Fisher matrix of the decorrelated quantities $`𝖶\widehat{\theta }`$ is the diagonal matrix $`\mathsf{\Lambda }`$, since $$\left(𝖶\langle \mathrm{\Delta }\widehat{\theta }\mathrm{\Delta }\widehat{\theta }^{\top }\rangle 𝖶^{\top }\right)^{-1}=\mathsf{\Lambda }.$$ (4) Without loss of generality, the decorrelated quantities $`𝖶\widehat{\theta }`$ can be scaled to unit variance by multiplying them by the square root of the corresponding diagonal element of $`\mathsf{\Lambda }`$. Scaled to unit variance, the decorrelation matrices satisfy $$𝖥=𝖶^{\top }𝖶.$$ (5) There are infinitely many distinct decorrelation matrices satisfying equation (5). Any orthogonal rotation $`𝖮`$ of a decorrelation matrix $`𝖶`$ yields another decorrelation matrix $`𝖵=\mathrm{𝖮𝖶}`$, since $`𝖵^{\top }𝖵=𝖶^{\top }𝖮^{\top }\mathrm{𝖮𝖶}=𝖶^{\top }𝖶=𝖥`$. Conversely, if $`𝖵`$ and $`𝖶`$ are two decorrelation matrices satisfying $`𝖥=𝖵^{\top }𝖵=𝖶^{\top }𝖶`$, then $`𝖵=\mathrm{𝖮𝖶}`$ is an orthogonal rotation of $`𝖶`$, since $`(𝖶^{\top })^{-1}𝖵^{\top }\mathrm{𝖵𝖶}^{-1}=(\mathrm{𝖵𝖶}^{-1})^{\top }\mathrm{𝖵𝖶}^{-1}=\mathrm{𝟣}`$ shows that $`𝖮=\mathrm{𝖵𝖶}^{-1}`$ is an orthogonal matrix. ### 3.2 Prewhitened power spectrum The prewhitened nonlinear power spectrum $`X(k)`$ is the Fourier transform of the prewhitened nonlinear correlation function $`X(r)`$ defined in terms of the nonlinear correlation function $`\xi (r)`$ by (Paper 3 §5) $$X(r)\equiv \frac{2\xi (r)}{1+[1+\xi (r)]^{1/2}}.$$ (6) In this paper, as in Paper 3, hats are used to denote estimators, so that $`\widehat{X}(k)`$ with a hat on denotes an estimate of the prewhitened power spectrum measured from a galaxy survey (Paper 3 §7). The quantity $`X(k)`$ without a hat denotes the prior prewhitened power spectrum, the model power spectrum whose viability is tested in a likelihood analysis. 
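The prewhitening transform of equation (6) is easy to apply numerically. The sketch below prewhitens an illustrative power-law correlation function; the particular form of ξ(r) here is invented for the example, not taken from the paper.

```python
import numpy as np

def prewhiten(xi):
    """Prewhitened correlation function X(r) = 2 xi / (1 + sqrt(1 + xi)),
    following equation (6); requires xi > -1."""
    return 2.0 * xi / (1.0 + np.sqrt(1.0 + xi))

r = np.logspace(-1, 2, 50)      # separations in h^-1 Mpc (illustrative)
xi = (r / 5.0) ** -1.8          # invented power-law correlation function

X = prewhiten(xi)

# in the weakly clustered regime (xi << 1) prewhitening leaves xi nearly
# unchanged, while for xi >> 1 it grows only as ~2 sqrt(xi), compressing
# the dynamic range of the nonlinear signal
assert np.allclose(X[-1], xi[-1], rtol=1e-2)   # xi is small at large r
assert X[0] < xi[0]                            # strongly compressed at small r
```

The prewhitened power spectrum X(k) would then follow from a Fourier transform of X(r).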
The covariance of the prewhitened nonlinear power spectrum is approximately equal to the inverse of its Fisher matrix, denoted $`E^{\alpha \beta }`$ (Paper 3 §6), $$\langle \mathrm{\Delta }X_\alpha \mathrm{\Delta }X_\beta \rangle ^{-1}=E^{\alpha \beta }.$$ (7) The weighting of data that yields the best (minimum variance) estimate of power depends on the choice of prior power, so that the estimated prewhitened power $`\widehat{X}(k)`$ depends (weakly) on the prior prewhitened power $`X(k)`$. An incorrect guess for the prior will not bias the estimator high or low, but merely makes it slightly noisier than the minimum allowed by the Fisher matrix (Tegmark et al. 1997). The Maximum Likelihood solution to the prewhitened power spectrum of a galaxy survey can be obtained by folding the estimate $`\widehat{X}(k)`$ back into the prior $`X(k)`$ and iterating to convergence (Tegmark et al. 1998). In this paper, the prior power $`X(k)`$ is simply treated as some fiducial prewhitened power spectrum. In all examples illustrated, the prior power spectrum is taken to be an observationally concordant $`\mathrm{\Lambda }`$CDM model from Eisenstein & Hu (1998). ### 3.3 Band-powers It is desirable to define estimated band-powers $`\widehat{B}(k_\alpha )`$ and band-power windows $`W(k_\alpha ,k_\beta )`$ for the estimated prewhitened power spectrum $`\widehat{X}(k)`$ in a physically sensible way. One of the problems is that the prewhitened power spectrum is liable to vary by orders of magnitude, so that, carelessly defined, a windowed power could be dominated by power leaking in from the wings of the band, rather than being a sensible average of power in the band. Let band-powers $`\widehat{B}(k_\alpha )`$ be defined by $$\frac{\widehat{B}(k_\alpha )}{\chi (k_\alpha )}\equiv \int _0^{\infty }W(k_\alpha ,k_\beta )\frac{\widehat{X}(k_\beta )}{\chi (k_\beta )}\frac{4\pi k_\beta ^2\mathrm{d}k_\beta }{(2\pi )^3}$$ (8) where $`\chi (k)`$ is some scaling function, to be chosen below in equation (13). 
In manipulations, it can be convenient to treat $`\chi `$ as a matrix that is diagonal in Fourier space with diagonal entries $`\chi (k)`$. The band-power windows $`W(k_\alpha ,k_\beta )`$ in equation (8) are normalized to unit integral $$\int W(k_\alpha ,k_\beta )\frac{4\pi (k_\alpha k_\beta )^{3/2}}{(2\pi )^3}\mathrm{d}\mathrm{ln}k_\beta =1,$$ (9) the slightly complicated form of which is chosen so as to make its discretized equivalent, equation (11), look simple. Continuous matrices must be discretized in order to manipulate them numerically. As described in §2.3 of Paper 3, discretization should be done in such a way as to preserve the inner product in Hilbert space. Discretized on a logarithmic grid of wavenumbers, a continuous vector such as $`\widehat{X}(k_\alpha )`$ becomes the discrete vector $`\widehat{𝖷}_{k_\alpha }\equiv \widehat{X}(k_\alpha )[4\pi k_\alpha ^3\mathrm{\Delta }\mathrm{ln}k/(2\pi )^3]^{1/2}`$, while a continuous matrix such as $`W(k_\alpha ,k_\beta )`$ becomes the discrete matrix $`𝖶_{k_\alpha k_\beta }\equiv W(k_\alpha ,k_\beta )4\pi (k_\alpha k_\beta )^{3/2}\mathrm{\Delta }\mathrm{ln}k/(2\pi )^3`$. Thus, discretized on a logarithmic grid of wavenumbers, equation (8) translates into an equation for the discrete band-powers $`\widehat{𝖡}_{k_\alpha }\equiv \widehat{B}(k_\alpha )[4\pi k_\alpha ^3\mathrm{\Delta }\mathrm{ln}k/(2\pi )^3]^{1/2}`$, $$\frac{\widehat{𝖡}_{k_\alpha }}{\chi (k_\alpha )}=\sum _{k_\beta }𝖶_{k_\alpha k_\beta }\frac{\widehat{𝖷}_{k_\beta }}{\chi (k_\beta )}.$$ (10) Each row of the discrete matrix $`𝖶_{k_\alpha k_\beta }`$ represents a band-power window for the discrete band-power $`\widehat{𝖡}_{k_\alpha }`$, and it is these discrete band-power windows $`𝖶_{k_\alpha k_\beta }`$ that are plotted in Figures 2–6. 
Normalizing each discrete band-power window $`𝖶_{k_\alpha k_\beta }`$ to unit sum $$\sum _{k_\beta }𝖶_{k_\alpha k_\beta }=1$$ (11) ensures that the band-power $`\widehat{𝖡}_{k_\alpha }`$ represents an average of prewhitened power in the band. The scaling function $`\chi (k)`$ in equations (8) or (10) is introduced to ensure a physically sensible definition of the band-power windows. Rewriting equation (10) as $$\widehat{𝖡}_{k_\alpha }=\sum _{k_\beta }\chi (k_\alpha )𝖶_{k_\alpha k_\beta }\chi (k_\beta )^{-1}\widehat{𝖷}_{k_\beta }$$ (12) makes it plain that choosing different scaling functions $`\chi `$ is equivalent to rescaling the band-power windows as $`𝖶\rightarrow \chi 𝖶\chi ^{-1}`$. Suppose for example that the scaling function in equation (10) were chosen to be one, $`\chi (k)=1`$, so that $`\widehat{𝖡}=𝖶\widehat{𝖷}`$. Then a band-power $`\widehat{𝖡}_{k_\alpha }`$ could be dominated by power leaking in from wavenumbers $`k_\beta `$ where $`\widehat{𝖷}_{k_\beta }`$ is large, even though the window $`𝖶_{k_\alpha k_\beta }`$ were small there, which is not good. A better choice, the one adopted in this paper, is to set the scaling function $`\chi (k)`$ equal to the discretized prior prewhitened power $`𝖷_k`$, $$\chi (k)=𝖷_k,$$ (13) so that the discrete band-powers $`\widehat{𝖡}_{k_\alpha }`$, equation (10), are defined by $$\frac{\widehat{𝖡}_{k_\alpha }}{𝖷_{k_\alpha }}=\sum _{k_\beta }𝖶_{k_\alpha k_\beta }\frac{\widehat{𝖷}_{k_\beta }}{𝖷_{k_\beta }}.$$ (14) Since the expectation is that $`\widehat{𝖷}_{k_\beta }/𝖷_{k_\beta }\approx 1`$, the definition (14) ensures that the contribution from power $`\widehat{𝖷}_{k_\beta }`$ at wavenumber $`k_\beta `$ to a band-power $`\widehat{𝖡}_{k_\alpha }`$ is large where the window $`𝖶_{k_\alpha k_\beta }`$ is large, and small where the window is small, as is desirable. ### 3.4 Decorrelated band-powers Equation (14) expresses the estimated band-powers $`\widehat{𝖡}`$ as linear combinations of scaled prewhitened powers $`\widehat{𝖷}/𝖷`$. 
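Numerically, equation (14) together with the unit-sum normalization of equation (11) amounts to the following operation. The window matrix, prior, and "measured" powers in this sketch are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

nk = 8
X_prior = np.logspace(4, 2, nk)   # invented prior prewhitened powers X_k
# invented "measured" powers scattered about the prior
X_hat = X_prior * (1.0 + 0.05 * rng.standard_normal(nk))

# an invented band of Gaussian windows, one row per band-power
W_raw = np.exp(-0.5 * (np.subtract.outer(np.arange(nk),
                                         np.arange(nk)) / 1.5) ** 2)
W = W_raw / W_raw.sum(axis=1, keepdims=True)   # unit-sum rows, equation (11)

# band-powers, equation (14): B_hat / X = W (X_hat / X)
B_hat = X_prior * (W @ (X_hat / X_prior))

assert np.allclose(W.sum(axis=1), 1.0)
# because each window averages a quantity close to unity, B/X stays near 1
# even though X itself spans two orders of magnitude
assert np.all(np.abs(B_hat / X_prior - 1.0) < 0.2)
```

This is exactly the leakage protection described in the text: scaling by the prior before windowing prevents a band-power from being dominated by wavenumbers where the power spectrum happens to be large.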
The discrete Fisher matrix $`𝖦`$ of the scaled prewhitened power is (again, it is convenient to treat the scaling function $`𝖷_k`$ as a matrix that is diagonal in Fourier space, with diagonal entries $`𝖷_k`$) $$𝖦=𝖷\langle \mathrm{\Delta }\widehat{𝖷}\mathrm{\Delta }\widehat{𝖷}^{\top }\rangle ^{-1}𝖷=\mathrm{𝖷𝖤𝖷}$$ (15) where $`𝖤_{k_\alpha k_\beta }\equiv E(k_\alpha ,k_\beta )4\pi (k_\alpha k_\beta )^{3/2}\mathrm{\Delta }\mathrm{ln}k/(2\pi )^3`$ is the discretized Fisher matrix of the prewhitened power (Paper 3 §6). All the band-power windows $`𝖶`$ constructed in this paper are decorrelation matrices, satisfying $$𝖦=𝖶^{\top }\mathsf{\Lambda }𝖶$$ (16) where $`\mathsf{\Lambda }`$ is diagonal in Fourier space. By construction, the estimated scaled band-powers $`\widehat{𝖡}/𝖷=𝖶\widehat{𝖷}/𝖷`$ are uncorrelated, i.e. their covariance matrix is diagonal in Fourier space $$𝖷^{-1}\langle \mathrm{\Delta }\widehat{𝖡}\mathrm{\Delta }\widehat{𝖡}^{\top }\rangle 𝖷^{-1}=\mathsf{\Lambda }^{-1}.$$ (17) The Fisher matrix of the decorrelated scaled band-powers $`\widehat{𝖡}/𝖷`$ is the diagonal matrix $$𝖷\langle \mathrm{\Delta }\widehat{𝖡}\mathrm{\Delta }\widehat{𝖡}^{\top }\rangle ^{-1}𝖷=\mathsf{\Lambda }.$$ (18) The decorrelation process decorrelates not only the scaled prewhitened power $`\widehat{𝖷}/𝖷`$, but also the prewhitened power spectrum $`\widehat{𝖷}`$ itself, as is to be expected since the scaling factors $`1/𝖷`$ are just constants. While the band-power windows for the scaled prewhitened power are $`𝖶`$, the band-power windows for the prewhitened power itself are $`\mathrm{𝖷𝖶𝖷}^{-1}`$, since $`\widehat{𝖡}=(\mathrm{𝖷𝖶𝖷}^{-1})\widehat{𝖷}`$. 
The band-power windows $`\mathrm{𝖷𝖶𝖷}^{-1}`$ are themselves decorrelation matrices, equation (2), for the prewhitened power $`\widehat{𝖷}`$, satisfying $$𝖤=(\mathrm{𝖷𝖶𝖷}^{-1})^{\top }(𝖷^{-1}\mathsf{\Lambda }𝖷^{-1})(\mathrm{𝖷𝖶𝖷}^{-1}).$$ (19) The decorrelated band-powers $`\widehat{𝖡}=(\mathrm{𝖷𝖶𝖷}^{-1})\widehat{𝖷}`$, are uncorrelated because their covariance matrix is diagonal in Fourier space (recall again that the scaling function $`𝖷`$ is effectively a diagonal matrix in Fourier space) $$\langle \mathrm{\Delta }\widehat{𝖡}\mathrm{\Delta }\widehat{𝖡}^{\top }\rangle =(\mathrm{𝖷𝖶𝖷}^{-1})\langle \mathrm{\Delta }\widehat{𝖷}\mathrm{\Delta }\widehat{𝖷}^{\top }\rangle (\mathrm{𝖷𝖶𝖷}^{-1})^{\top }=𝖷\mathsf{\Lambda }^{-1}𝖷.$$ (20) The Fisher matrix of the decorrelated band-powers $`\widehat{𝖡}`$ is the diagonal matrix $$\langle \mathrm{\Delta }\widehat{𝖡}\mathrm{\Delta }\widehat{𝖡}^{\top }\rangle ^{-1}=𝖷^{-1}\mathsf{\Lambda }𝖷^{-1}.$$ (21) ### 3.5 Interpretation of scaled powers as log powers It is interesting, although peripheral to the central thread of this paper, to note that the scaled powers $`\widehat{𝖡}/𝖷`$ and $`\widehat{𝖷}/𝖷`$ can be interpreted in terms of log powers, at least in the limit of a large quantity of data, where $`\mathrm{\Delta }\widehat{𝖷}\equiv \widehat{𝖷}-𝖷\ll 𝖷`$. In this limit $`(\widehat{𝖡}/𝖷)-1\approx \mathrm{ln}(\widehat{𝖡}/𝖷)`$ and $`(\widehat{𝖷}/𝖷)-1\approx \mathrm{ln}(\widehat{𝖷}/𝖷)`$. 
Thus equation (14), with one subtracted from both sides, can be rewritten $$\mathrm{ln}\left(\frac{\widehat{𝖡}_{k_\alpha }}{𝖷_{k_\alpha }}\right)=\sum _{k_\beta }𝖶_{k_\alpha k_\beta }\mathrm{ln}\left(\frac{\widehat{𝖷}_{k_\beta }}{𝖷_{k_\beta }}\right)$$ (22) or equivalently $$\mathrm{ln}\widehat{𝖡}_{k_\alpha }=C_{k_\alpha }+\sum _{k_\beta }𝖶_{k_\alpha k_\beta }\mathrm{ln}\widehat{𝖷}_{k_\beta }$$ (23) where $`C_{k_\alpha }\equiv \sum _{k_\beta }𝖶_{k_\alpha k_\beta }\mathrm{ln}(𝖷_{k_\alpha }/𝖷_{k_\beta })`$ are constants that are zero if the prior prewhitened power $`𝖷_k`$ happens to vary as a power law with wavenumber $`k`$, and in practice should be close to zero as long as the band-power windows are narrow. Equation (23) shows that, modulo the small constant offsets $`C`$, the log band-powers $`\mathrm{ln}\widehat{𝖡}`$ can be regarded as windowed averages of the log prewhitened powers $`\mathrm{ln}\widehat{𝖷}`$. Irrespective of the offsets $`C`$, the Fisher matrix of the log prewhitened powers approximates the Fisher matrix $`𝖦`$, equation (15), $$\langle \mathrm{\Delta }\mathrm{ln}\widehat{𝖷}\mathrm{\Delta }\mathrm{ln}\widehat{𝖷}^{\top }\rangle ^{-1}\approx 𝖷\langle \mathrm{\Delta }\widehat{𝖷}\mathrm{\Delta }\widehat{𝖷}^{\top }\rangle ^{-1}𝖷=𝖦$$ (24) and the decorrelation matrices $`𝖶`$, equation (16), can be regarded as decorrelation matrices for the log prewhitened power $`\mathrm{ln}\widehat{𝖷}`$. ### 3.6 Fisher matrices of power in SDSS and LCRS Constructing decorrelated band-powers requires knowing the Fisher matrix. Figure 1 shows the discrete Fisher matrices $`𝖦`$, equation (15), of the scaled prewhitened nonlinear power spectra $`\widehat{𝖷}/𝖷`$ of SDSS and LCRS. The Fisher matrix was computed in the FKP approximation, as described in §6 of Paper 3. 
The prior power spectrum $`X(k)`$ is taken to be a $`\mathrm{\Lambda }`$CDM model of Eisenstein & Hu (1998), with observationally concordant parameters $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`\mathrm{\Omega }_m=0.3`$, $`\mathrm{\Omega }_bh^2=0.02`$, and $`h\equiv H_0/(100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1})=0.65`$, nonlinearly evolved according to the Peacock & Dodds (1996) formula. $`N`$-body simulations by Meiksin, White & Peacock (1999) indicate that nonlinear evolution tends to suppress baryonic wiggles in the (unprewhitened) power spectrum, whereas the Peacock & Dodds transformation preserves the wiggles. For simplicity we retain the Peacock & Dodds formalism, notwithstanding its possible defects. The FKP approximation is valid at wavelengths small compared to the scale of the survey, so should work better at smaller scales in surveys with broad contiguous sky coverage, and worse at larger scales in slice or pencil beam surveys. Thus the FKP approximation should work reasonably well at all but the largest scales in SDSS, but probably fails in LCRS at intermediate and large scales. We suspect that the FKP approximation is liable to underestimate the Fisher information (overestimate the error bars) in LCRS, since the density in a thin slice is correlated with (and hence contains information about) the density outside the survey volume. A reliable assessment of the extent to which the FKP approximation under- or over-estimates information awaits future explicit calculations, such as described in Tegmark et al. (1998). At the median depth $`300h^{-1}\mathrm{Mpc}`$ of the LCRS survey, each $`1\stackrel{\circ }{.}5`$ slice is $`7.5h^{-1}\mathrm{Mpc}`$ thick, so the FKP approximation might be expected to break down in LCRS at wavenumbers $`k\gtrsim \pi /7.5h\mathrm{Mpc}^{-1}\approx 0.4h\mathrm{Mpc}^{-1}`$. 
The Fisher matrix shown in Figure 1 contains negative elements, notably at intermediate wavenumbers $`k\sim 1h\mathrm{Mpc}^{-1}`$, whereas for Gaussian fluctuations the Fisher matrix would be everywhere positive. According to equation (32) of Paper 3, the Fisher matrix of the power spectrum for Gaussian fluctuations would be $$F^{\alpha \beta }=\frac{1}{2}D_{ij}^\alpha C^{-1ik}C^{-1jl}D_{kl}^\beta .$$ (25) Thus in Fourier space the elements of the Fisher matrix of the power spectrum $$F(k_\alpha ,k_\beta )=\frac{1}{2}\int \left|C^{-1}(k_\alpha 𝒏_i,k_\beta 𝒏_j)\right|^2\frac{\mathrm{d}\mathrm{\Omega }_i\mathrm{d}\mathrm{\Omega }_j}{(4\pi )^2}$$ (26) would all be necessarily positive. The quantities $`\mathrm{d}\mathrm{\Omega }`$ in equation (26) are intervals of solid angle, and the integration is over all unit directions $`𝒏`$ of the wavevectors. This positivity of the Fisher matrix for Gaussian fluctuations is separate from and in addition to the fact that the Fisher matrix is positive definite, i.e. has all positive eigenvalues. The fact that some elements of the Fisher matrix in Figure 1 are negative is a consequence of nonlinearity. Nonlinear evolution induces broad positive correlations in the covariance matrix of estimates of the power spectrum (Paper 3 Fig. 2), leading to negative elements in the Fisher matrix, the inverse of the covariance of power. Prewhitening the nonlinear power spectrum substantially narrows the covariance of power, and diminishes the amount of negativity in the Fisher matrix, but negative elements remain. ## 4 Example Band-Powers ### 4.1 Principal Component Decomposition Let $`𝖮`$ be an orthogonal matrix that diagonalizes the Fisher matrix $`𝖦`$ of the scaled prewhitened power $`\widehat{𝖷}/𝖷`$ $$𝖦=𝖮^{\top }\mathsf{\Lambda }𝖮.$$ (27) Then $`𝖮`$ is a decorrelation matrix. The decorrelated band-powers $`𝖮\widehat{𝖷}/𝖷`$ constitute the principal component decomposition of the scaled prewhitened power spectrum. 
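To make the construction concrete, the following is a minimal numpy sketch of the principal component decomposition. The smooth kernel used as a stand-in for the Fisher matrix $`𝖦`$ is an illustrative assumption (the real matrix requires the survey geometry of Paper 3); the rows of the orthogonal matrix play the role of band-power windows.

```python
import numpy as np

n = 24
lnk = np.linspace(np.log(0.01), np.log(1.0), n)      # toy log-spaced wavenumbers
# Toy symmetric positive-definite stand-in for the Fisher matrix G
G = np.exp(-((lnk[:, None] - lnk[None, :]) / 0.2) ** 2) + 1e-3 * np.eye(n)

# Diagonalize: G = O^T Lambda O with O orthogonal (rows of O are the windows)
lam, V = np.linalg.eigh(G)     # eigenvalues ascending, eigenvectors as columns
O = V.T

# The band-powers b = O (Xhat/X) are uncorrelated: their covariance
# O G^{-1} O^T is the diagonal matrix Lambda^{-1}
cov_b = O @ np.linalg.inv(G) @ O.T
assert np.allclose(cov_b, np.diag(1.0 / lam), atol=1e-6)
```

As the text goes on to note, for a realistic survey Fisher matrix the eigenmodes (rows of `O`) come out broad and wiggly, which is what limits the practical value of this decomposition.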
Figure 2 shows the band-power windows, the rows of $`𝖮`$, for the principal component decomposition of the scaled prewhitened power spectrum of SDSS. Since the Fisher matrix $`𝖦`$ is already fairly narrow in Fourier space, Figure 1, one might have thought that its eigenmodes would be similarly narrow in Fourier space. But this is not so. In fact the eigenmodes, the band-power windows, are generally broad and, aside from the first, fundamental mode, generally wiggly. This makes the principal component decomposition of the power spectrum of little practical use, as previously concluded in Paper 2. This should of course not be confused with a principal component decomposition of the density field itself, which can be of great utility (Vogeley & Szalay 1996; Tegmark et al. 1998). Elsewhere in this paper plotted band-power windows are scaled to unit sum over the window, equation (11), but in Figure 2 the band-power windows $`𝖮`$ are scaled instead to unit sum of squares $$\sum _{k_\beta }𝖮_{k_\alpha k_\beta }^2=1.$$ (28) In other words, the band-power windows plotted in Figure 2 are just the orthonormal eigenfunctions of the Fisher matrix $`𝖦`$. The problem with scaling the band-power windows to unit sum is that it makes the wigglier windows oscillate more fiercely, obscuring the less wiggly windows. However, the labelling of the band-power windows in Figure 2 is in order of the expected variances of the band-powers normalized to unit sum over the window, $`\sum _{k_\beta }𝖮_{k_\alpha k_\beta }=1`$, not to unit sum of squares. ### 4.2 Cholesky Decomposition Clearly it is desirable to use the infinite freedom of choice (§3.1) of decorrelation matrices to engineer band-power windows that are narrow and not wiggly. 
To understand why principal component decomposition works badly, consider the mathematical theorem that the eigenmodes of a real symmetric matrix are unique up to arbitrary orthogonal rotations amongst degenerate eigenmodes, that is, amongst eigenmodes with the same eigenvalue. In the present case the eigenvalues of the Fisher matrix, even if not degenerate, are nevertheless many and finely spaced, and there is much potential for nearly degenerate eigenmodes to mix in random unpleasant ways. This suggests that mixing can be reduced in Fourier space by lifting the degeneracy of eigenvalues in Fourier space. One way to lift the degeneracy is to multiply the rows and columns of the Fisher matrix by a strongly varying function $`\gamma (k)`$ of wavenumber $`k`$ before diagonalizing. That is, consider diagonalizing the scaled Fisher matrix $`\gamma ^{\top }𝖦\gamma `$ $$\gamma ^{\top }𝖦\gamma =𝖮^{\top }\mathsf{\Lambda }𝖮$$ (29) where $`\gamma `$ is some scaling matrix, $`𝖮`$ is orthogonal, and $`\mathsf{\Lambda }`$ is diagonal. Then $`𝖶=𝖮\gamma ^{-1}`$ is a decorrelation matrix, with $`𝖦=𝖶^{\top }\mathsf{\Lambda }𝖶`$. Now take the scaling matrix $`\gamma `$ to be a diagonal matrix in Fourier space with diagonal entries $`\gamma (k)`$, and let $`\gamma (k)`$ be strongly varying with $`k`$. In the limit of an infinitely steeply varying scaling function, $`\gamma (k_1)/\gamma (k_2)\rightarrow \mathrm{\infty }`$, the resulting decorrelation matrix $`𝖶`$ becomes upper triangular, as argued in §5 of Paper 2. As pointed out by Tegmark & Hamilton (1998) and Tegmark (1997a), this choice of decorrelation matrix is just a Cholesky decomposition of the Fisher matrix $$𝖦=𝖴^{\top }𝖴$$ (30) where $`𝖴`$ is upper triangular. If the scaling function $`\gamma `$ is chosen to be infinitely steep in the opposite direction, $`\gamma (k_1)/\gamma (k_2)\rightarrow 0`$, then the resulting decorrelation matrix is lower triangular. This is equivalent to a Cholesky decomposition of the Fisher matrix $$𝖦=𝖫^{\top }𝖫$$ (31) with $`𝖫`$ lower triangular. 
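The two triangular decorrelations can be sketched in a few lines of numpy, again with a toy positive-definite kernel standing in for $`𝖦`$ (the grid and kernel are illustrative assumptions). numpy's Cholesky routine returns the lower-triangular factor $`𝖢`$ with $`𝖦=𝖢𝖢^{\top }`$, so its transpose gives the upper-triangular decorrelation of equation (30), while factoring the index-reversed matrix gives the lower-triangular one of equation (31).

```python
import numpy as np

n = 16
lnk = np.linspace(np.log(0.01), np.log(1.0), n)
# Toy positive-definite stand-in for the Fisher matrix G
G = np.exp(-((lnk[:, None] - lnk[None, :]) / 0.25) ** 2) + 1e-3 * np.eye(n)

# Upper-triangular decorrelation G = U^T U (equation 30)
U = np.linalg.cholesky(G).T           # numpy gives lower C with G = C C^T
assert np.allclose(U.T @ U, G)
assert np.allclose(U, np.triu(U))     # rows vanish to the left of the diagonal

# Lower-triangular decorrelation G = L^T L (equation 31), via the flipped matrix
J = np.eye(n)[::-1]                   # index-reversal matrix
L = (J @ np.linalg.cholesky(J @ G @ J) @ J).T
assert np.allclose(L.T @ L, G)
assert np.allclose(L, np.tril(L))     # rows vanish to the right of the diagonal
```

The one-sidedness visible in these triangular factors is exactly the skewness of the Cholesky band-power windows discussed next.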
Such a Cholesky decomposition has been successfully employed to construct a decorrelated power spectrum of the CMB from both the COBE DMR data (Tegmark & Hamilton 1998) and the 3 year Saskatoon data (Knox, Bond & Jaffe 1998). Figure 3 shows band-power windows from the upper triangular Cholesky decomposition of the Fisher matrices $`𝖦`$ of the scaled prewhitened power of SDSS and LCRS. These band-powers are a considerable improvement over the principal component decomposition of Figure 2, in the sense that they are narrower and less wiggly. However, the Cholesky decomposition has two defects. The first defect is that the band-power windows are skewed. By construction, the upper triangular Cholesky windows vanish to the left of the diagonal element. Figure 3 shows that the Cholesky windows have a tail to the right that, although small, is nevertheless large enough to start to become worrying, especially in LCRS. A lower triangular Cholesky decomposition leads to band-power windows skewed in the opposite direction (not plotted). The second defect of the Cholesky decomposition is that it does not tolerate negative eigenvalues in the Fisher matrix well, a point previously discussed in Paper 2. In Figure 3, the Fisher matrices have been truncated to a minimum wavenumber of $`k_{\mathrm{min}}=0.015h\mathrm{Mpc}^{-1}`$ in SDSS, and $`k_{\mathrm{min}}=0.034h\mathrm{Mpc}^{-1}`$ in LCRS, to ensure that the computed Fisher matrix has no negative eigenvalues. Although the Fisher matrix should in theory be positive definite, meaning that all its eigenvalues should be positive, numerically it acquires negative eigenvalues when the resolution in $`k`$ is increased to the point where the spacing $`\mathrm{\Delta }k`$ between adjacent wavenumbers is less than of the order of the inverse scale size of the survey. Presumably negative eigenvalues appear thanks to a combination of numerical noise and the various approximations that go into evaluating the Fisher matrix. 
The band-power windows associated with negative eigenvalues are ill-behaved, unlike those shown in Figure 3. The ill-behaved band-power windows tend to infect their neighbours, and as more and more eigenvalues become negative with increasing resolution, the whole system of Cholesky windows devolves into chaos. Replacing the Fisher matrix with a version of it in which negative eigenvalues are replaced by zero does not solve the difficulty. It is possible to extend the band-powers to smaller wavenumbers by using a linear instead of logarithmic binning in wavenumber. With linear binning, the minimum wavenumber is $`k_{\mathrm{min}}=\pi /1024h\mathrm{Mpc}^{-1}`$ in SDSS, $`k_{\mathrm{min}}=\pi /640h\mathrm{Mpc}^{-1}`$ in LCRS. ### 4.3 Square Root of the Fisher Matrix While the Cholesky band-powers are a definite improvement over the principal component decomposition, evidently one would like the band-power windows to be more symmetric about their peaks. One way to achieve this is to choose the decorrelation matrix to be symmetric, which corresponds to choosing it to be the square root of the Fisher matrix $$𝖦=𝖦^{1/2}𝖦^{1/2}.$$ (32) The band-powers $`𝖦^{1/2}\widehat{𝖷}/𝖷`$ are uncorrelated, with unit variance. Tegmark & Hamilton (1998) previously used the square root of the Fisher matrix to construct the decorrelated power spectrum of CMB fluctuations from the COBE data, and found the resulting window functions to be narrow, non-negative and approximately symmetric. One might therefore hope that these properties would hold also for the case of galaxy surveys. Figure 4 shows the band-power windows constructed from the square root of the Fisher matrix of the scaled prewhitened power of SDSS and LCRS. The plotted windows, the rows of $`𝖦^{1/2}`$, are normalized to unit sum as in equation (11). The band-power windows are all nicely narrow and symmetrical. 
The Fisher matrix $`𝖦`$ itself is already fairly narrow about the diagonal, Figure 1, and taking its square root narrows it even further, in much the same way that taking the square root of a Gaussian matrix $`M_{xx^{\prime }}=e^{-(x-x^{\prime })^2}`$ narrows it by $`2^{1/2}`$, to $`M_{xx^{\prime }}^{1/2}=e^{-2(x-x^{\prime })^2}`$. In practice, the square root of the Fisher matrix is constructed by diagonalizing the Fisher matrix, $`𝖦=𝖮^{\top }\mathsf{\Lambda }𝖮`$, and setting $$𝖦^{1/2}=𝖮^{\top }\mathsf{\Lambda }^{1/2}𝖮$$ (33) the positive square root of the eigenvalues being taken. Negative eigenvalues, which as remarked in §4.2 presumably arise from a combination of numerical noise and the approximations that go into evaluating the Fisher matrix, must be replaced by zero in equation (33). The negative eigenvalues are invariably small in absolute value compared to the largest positive eigenvalue. Replacing negative eigenvalues in equation (33) by zeros causes the band-power windows, the rows of $`𝖦^{1/2}`$, to become linearly dependent, and the resulting band-powers to become correlated. Remarkably enough, however, it is still possible to regard the band-powers $`𝖦^{1/2}\widehat{𝖷}/𝖷`$ as remaining uncorrelated with unit variance. The negative eigenvalues correspond to noisy eigenmodes of the Fisher matrix. Replacing the negative eigenvalues by zero in equation (33) corresponds to eliminating the contribution of these noisy modes from the band-power windows $`𝖦^{1/2}`$. Now if the Fisher matrix were calculated perfectly, then it would have no negative eigenvalues, so correctly the noisy modes would make some positive contribution to $`𝖦^{1/2}`$. The contribution would however be small because the modes are noisy and the corresponding eigenvalues are small. Thus one should imagine that the $`𝖦^{1/2}`$ that results from setting negative eigenvalues to zero differs only slightly from the true $`𝖦^{1/2}`$ that yields band-powers $`𝖦^{1/2}\widehat{𝖷}/𝖷`$ with unit covariance matrix. 
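A minimal numpy version of this construction, including the zeroing of any numerically negative eigenvalues, might look as follows; the toy kernel is an illustrative stand-in for the survey Fisher matrix, not the paper's pipeline.

```python
import numpy as np

def fisher_sqrt(G):
    """Symmetric square root G^{1/2}, with negative eigenvalues clipped to zero."""
    lam, V = np.linalg.eigh(G)
    return V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T

n = 16
lnk = np.linspace(np.log(0.01), np.log(1.0), n)
G = np.exp(-((lnk[:, None] - lnk[None, :]) / 0.25) ** 2) + 1e-3 * np.eye(n)

W = fisher_sqrt(G)
assert np.allclose(W, W.T)        # symmetric decorrelation matrix
assert np.allclose(W @ W, G)      # G = G^{1/2} G^{1/2}; the toy G is positive
                                  # definite, so clipping changes nothing here

# Band-power windows: rows of G^{1/2}, normalized to unit sum for plotting
windows = W / W.sum(axis=1, keepdims=True)
assert np.allclose(windows.sum(axis=1), 1.0)
```

For a genuinely indefinite numerical Fisher matrix the clipping step discards the noisy modes, at the price of making the rows of the returned matrix slightly linearly dependent, as discussed above.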
As the resolution of the Fisher matrix is increased, more and more eigenvalues become negative. The effect on the band-power windows is intriguing and enlightening. If for example the resolution is doubled, then there are twice as many band-powers. The old band-power windows retain essentially the same shapes as before, except that they are computed with twice as much resolution. The new band-power windows interleave with the old ones, and have shapes that vary smoothly between the shapes of the adjacent band-powers. Normalized to unit sum over the window, the variance of each band-power doubles, in just such a fashion that the net information content, the summed inverse variance, remains constant. This behaviour persists to the highest resolution that we have tested it, $`\mathrm{\Delta }\mathrm{log}k=1/1024`$. It is worth emphasizing just how remarkable this behaviour of the band-powers is. It seems as though the decorrelated band-power windows are converging, as the resolution goes to infinity, towards a well-defined continuous shape. Yet, in spite of the asymptotic constancy of shape, the variance associated with each window seems to continue increasing inversely with resolution, i.e. inversely with the number of windows, tending to infinity in the continuous limit in such a fashion that the net variance over any fixed interval of wavenumber remains constant. How can this be? How can there be infinitely many uncorrelated band-power estimates? It would seem that as the resolution increases, the band-power windows manage to remain decorrelated by incorporating into themselves small contributions of noisy modes that increase the variance of the band-power while changing the windows only slightly. In the continuum limit, the variance of each band-power would continue to increase indefinitely while the shape of the window changes only infinitesimally. 
In practice, the band-power windows at the largest scales, nominal wavenumbers less than the natural resolution of the survey, become jittery at high resolution. The ‘natural resolution’ here is defined empirically, as the highest resolution for which all the eigenvalues of the Fisher matrix remain numerically positive, $`\mathrm{\Delta }k=\pi /1024h\mathrm{Mpc}^{-1}`$ in SDSS, and $`\mathrm{\Delta }k=\pi /640h\mathrm{Mpc}^{-1}`$ in LCRS. We attribute the jitter in part to the fact that the pair integrals $`R(r;\mu )`$ used to compute the Fisher matrix are themselves only measured with finite resolution and accuracy, and in part to the fact that there is practically no signal at the largest scales, so all the relevant eigenvalues are small, and the numerics have a harder time distinguishing signal from noise. Perhaps there is a better algorithm than the simple one adopted here of setting negative eigenvalues to zero, but the simple algorithm does seem to work well enough. The situation is illustrated in Figure 4, which shows the first band-power window in the sequence for SDSS, the one nominally corresponding to $`k=0.001h\mathrm{Mpc}^{-1}`$, multiplied by a factor of 10 to show it more clearly. This nominal wavenumber is smaller by a factor of $`\sim 3`$ than the smallest measurable wavenumber $`k\approx 0.003h\mathrm{Mpc}^{-1}`$ in SDSS, and the computed band-power accordingly retreats to a larger effective wavenumber, where there is signal. At scales comparable to the natural resolution of the survey, $`k\sim \mathrm{\Delta }k\approx 0.003h\mathrm{Mpc}^{-1}`$, the resolution $`\mathrm{\Delta }\mathrm{log}k=1/32`$ used in Figure 4 is 10 times higher than the natural resolution. For comparison, Figure 4 also shows (shaded) the same band-power computed at a resolution of $`\mathrm{\Delta }\mathrm{log}k=1/1024`$, several hundred times the natural resolution. At this high resolution, the band-power oscillates finely, but overall remains under control. 
Such robust behaviour contrasts with the chaotic behaviour of the Cholesky windows when similarly pushed beyond the natural resolution of the survey. The methods of Paper 3 permit the Fisher matrix to be discretized over any arbitrary grid of wavenumbers (although all the explicit examples in Paper 3 employ a logarithmic grid). Figure 5 shows the band-power windows constructed from the square root of the Fisher matrix of the scaled prewhitened power of SDSS for a linearly spaced rather than logarithmically spaced grid of wavenumbers. These band-power windows have essentially the same shapes as those computed on the logarithmic grid, Figure 4. The difference is in the number of band-power windows and in the resolution with which they are defined. Figure 5 shows, more clearly than Figure 4, that the band-powers are broader for LCRS than for SDSS, reflecting the smaller effective volume, hence lower resolution in Fourier space, of LCRS. The slice geometry of LCRS leads to wings on the band-powers. The wings are fairly mild here, but in pencil-beam surveys the wings can become quite broad, potentially leading to significant aliasing between power at small and large wavenumbers (Kaiser & Peacock 1991). For LCRS, the first band-power plotted in Figure 5, at a nominal wavenumber of $`k=\pi /1024h\mathrm{Mpc}^{-1}`$, is jittery, illustrating again that band-powers at nominal wavenumbers less than the natural resolution of the survey, $`k=\pi /640h\mathrm{Mpc}^{-1}`$ for LCRS, tend to become jittery at high resolution. Again, we attribute this in part to the fact that the pair integrals $`R(r;\mu )`$ used to compute the Fisher matrix are themselves computed with finite accuracy (in LCRS, the pair integrals were evaluated by the Monte Carlo method), and in part to the fact that the numerics have a harder time distinguishing signal from noise when the signal is weak. 
### 4.4 Scaled Square Root of the Fisher Matrix Notwithstanding the success of the square root of the Fisher matrix in many cases, it does not work perfectly in all cases. The problem is that the symmetry of $`𝖦^{1/2}`$ does not imply symmetry of the band-power windows, the rows of $`𝖦^{1/2}`$, about their diagonals. If the Fisher matrix $`𝖦`$ varies steeply along the diagonal, then the band-power windows can turn out quite skewed. In practice, this happens for example in the case of a perfect, shot-noiseless survey, where the information contained in the power spectrum increases without limit as $`k\rightarrow \mathrm{\infty }`$. The case of a noiseless survey was applied in §9.3 of Paper 3 to compute the effective FKP constants $`\mu (k)`$ to be used in an FKP pair-weighting when measuring the prewhitened nonlinear power spectrum of a survey. The difficulty can be remedied by scaling the Fisher matrix before taking its square root. Let $`\gamma `$ be any scaling matrix. Then $$𝖶=(\gamma ^{\top }𝖦\gamma )^{1/2}\gamma ^{-1}$$ (34) is a decorrelation matrix, satisfying $`𝖦=𝖶^{\top }𝖶`$. In the case of the perfect, noiseless survey, a diagonal scaling matrix $`\gamma `$ with diagonal elements $$\gamma (k)=k^{3/2}$$ (35) proves empirically to work well. Figure 6 compares band-power windows obtained from the scaled versus unscaled square root of the Fisher matrix, for the perfect, noiseless survey. For clarity only a single band-power window, the one centred at $`k=1h\mathrm{Mpc}^{-1}`$, is shown, but this window is representative. Scaling the Fisher matrix with $`\gamma (k)=k^{3/2}`$ before taking its square root in this case helps to rectify the skew in the band-power windows from the unscaled square root $`𝖦^{1/2}`$. The results shown in Figure 11 of Paper 3 were obtained using the decorrelation matrix of equation (34) scaled by the scaling function $`\gamma (k)=k^{3/2}`$, equation (35). 
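The scaled square root can be sketched directly from equations (34) and (35). The toy Fisher matrix below is again an illustrative stand-in, and the final check verifies that the result is a valid decorrelation matrix, i.e. that the Fisher matrix equals the transpose of the decorrelation matrix times itself.

```python
import numpy as np

n = 16
k = np.logspace(-2, 0, n)              # toy wavenumber grid in h/Mpc
lnk = np.log(k)
G = np.exp(-((lnk[:, None] - lnk[None, :]) / 0.25) ** 2) + 1e-3 * np.eye(n)

gamma = np.diag(k ** 1.5)              # scaling gamma(k) = k^{3/2}, equation (35)
S = gamma @ G @ gamma                  # scaled Fisher matrix (gamma is symmetric)

# Symmetric square root of the scaled matrix, negative eigenvalues clipped
lam, V = np.linalg.eigh(S)
S_half = V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T

W = S_half @ np.diag(1.0 / k ** 1.5)   # W = (gamma^T G gamma)^{1/2} gamma^{-1}
assert np.allclose(W.T @ W, G, atol=1e-6)   # a valid decorrelation of G
```

Unlike the unscaled square root, `W` here is no longer symmetric; the scaling trades symmetry of the matrix for better-centred band-power windows when the Fisher matrix varies steeply along its diagonal.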
## 5 Decorrelated Power Spectrum The goal of this paper has been to show how to obtain uncorrelated band-powers $`\widehat{𝖡}`$ that can be plotted, with error bars, on a graph. In §4.3 it was found that, if the band-powers are decorrelated with the square root of the Fisher matrix, then the band-powers can be computed at as high (or low) a resolution as one cares, and the band-powers will remain effectively uncorrelated. The higher the resolution, the larger the error bars on the band-powers. The same result holds if the band-powers are decorrelated with the scaled square root of the Fisher matrix, equation (34). If the band-power windows $`𝖶`$ are scaled to unit sum, equation (11), then the Fisher matrix of the scaled band-powers $`\widehat{𝖡}/𝖷`$ is the diagonal matrix $`\mathsf{\Lambda }`$ in $`𝖦=𝖶^{\top }\mathsf{\Lambda }𝖶`$, equation (18). The quantity that remains invariant with respect to resolution (and linear or logarithmic binning) is the inverse variance, also called the information, per unit log wavenumber $`\mathrm{d}I/\mathrm{d}\mathrm{ln}k`$ $$\frac{\mathrm{d}I}{\mathrm{d}\mathrm{ln}k}=\frac{\mathrm{\Lambda }_k}{\mathrm{\Delta }\mathrm{ln}k}$$ (36) where $`\mathrm{\Lambda }_k`$ is the diagonal element of $`\mathsf{\Lambda }`$ associated with the scaled band-power $`\widehat{𝖡}_k/𝖷_k`$, and $`\mathrm{\Delta }\mathrm{ln}k`$ is the resolution of the matrix at wavenumber $`k`$. The association of a band-power $`\widehat{𝖡}_k`$ with wavenumber $`k`$ relies on the band-power window being narrow about that wavenumber. In practice the band-power windows have a finite width, and the correspondence with wavenumber is not exact, a reflection of the uncertainty principle. As discussed in §4.3, the decorrelated band-power windows would remain of finite width even if they were resolved at infinite resolution. 
Figure 7 shows the information per unit log wavenumber $`\mathrm{d}I/\mathrm{d}\mathrm{ln}k`$ in the scaled prewhitened power spectra of SDSS, LCRS, and also the IRAS 1.2 Jy survey, for the $`\mathrm{\Lambda }`$CDM prior power spectrum. These information curves are similar to those presented by Tegmark (1997b), but are more accurate, and valid also in the nonlinear regime. It should be cautioned that the information plotted in Figure 7 comes from a Fisher matrix computed in the FKP approximation, which is correct only for scales small compared to the scale of the survey. The FKP approximation tends to misestimate, probably underestimate, the information contributed by regions near (within a wavelength of) survey boundaries, since it assumes that those regions are accompanied by more correlated neighbours than is actually the case. The problem is most severe in surveys like LCRS, where everyone lives near the coast. Thus the information plotted in Figure 7 probably underestimates the true information in the LCRS, especially at larger scales. For a particular choice of resolution in wavenumber, the information per unit log wavenumber translates into an uncertainty in the corresponding uncorrelated band-powers. The amount of information $`\mathrm{\Delta }I_k`$ in a scaled band-power $`\widehat{𝖡}_k/𝖷_k`$ of width $`\mathrm{\Delta }\mathrm{ln}k`$ is equal to the corresponding diagonal element $`\mathsf{\Lambda }_k`$ of the Fisher matrix of the scaled band-powers: $$\mathrm{\Delta }I_k=\frac{\mathrm{d}I}{\mathrm{d}\mathrm{ln}k}\mathrm{\Delta }\mathrm{ln}k=\mathsf{\Lambda }_k.$$ (37) The inverse square root of this is the expected error in the scaled band-power $$\langle (\mathrm{\Delta }\widehat{𝖡}_k/𝖷_k)^2\rangle ^{1/2}=(\mathrm{\Delta }I_k)^{-1/2}=\mathsf{\Lambda }_k^{-1/2}.$$ (38) Figure 8 illustrates an example of the error bars expected on the decorrelated prewhitened nonlinear power spectrum of SDSS, for the $`\mathrm{\Lambda }`$CDM prior power spectrum. 
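The bookkeeping from band-power windows to error bars can be sketched as follows, again with a toy stand-in for the Fisher matrix. With unit-sum windows the Fisher matrix of the scaled band-powers is diagonal, and the expected error on each band-power is the inverse square root of its diagonal element.

```python
import numpy as np

n = 16
k = np.logspace(-2, 0, n)
dlnk = np.log(k[1] / k[0])                 # logarithmic bin width
lnk = np.log(k)
G = np.exp(-((lnk[:, None] - lnk[None, :]) / 0.25) ** 2) + 1e-3 * np.eye(n)

# Decorrelate with the symmetric square root of the Fisher matrix
lam, V = np.linalg.eigh(G)
W = V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T

rowsum = W.sum(axis=1)
Lam = rowsum ** 2            # diagonal Fisher matrix of the unit-sum band-powers
err = 1.0 / np.sqrt(Lam)     # expected error on each scaled band-power, eq. (38)
dI_dlnk = Lam / dlnk         # information per unit log wavenumber, eq. (36)

# Consistency: G = W_norm^T diag(Lam) W_norm with unit-sum windows W_norm
W_norm = W / rowsum[:, None]
assert np.allclose(W_norm.T @ np.diag(Lam) @ W_norm, G, atol=1e-8)
```

Doubling the resolution of such a grid halves each diagonal element `Lam` per window while the information per unit log wavenumber stays fixed, which is the resolution-invariance described in the text.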
As always in likelihood analysis, ‘the errors are attached to the model, not to the data’. Figure 8 demonstrates that, if baryonic wiggles are present at the expected level, and if nonlinear evolution leaves at least the first wiggle intact, as suggested by $`N`$-body simulations (Meiksin, White & Peacock 1999), then SDSS should be able to recover them. According to the prognostications of Eisenstein, Hu & Tegmark (1999), the detection of baryonic features in the galaxy power spectrum should assist greatly in the business of inferring cosmological parameters from a combination of CMB and large scale structure data. ## 6 Conclusions Amongst the infinity of possible ways to resolve the galaxy power spectrum into decorrelated band-powers, the square root of the Fisher matrix, or a scaled version thereof, offers a particularly good choice. The resulting band-powers are narrow, approximately symmetric, and well-behaved in the presence of noise. By contrast, a principal component decomposition yields band-powers that are broad and wiggly, which renders them of little practical utility. A Cholesky decomposition of the Fisher matrix works better than principal component decomposition, but not as well as the square root of the Fisher matrix. On the good side, Cholesky band-power windows are narrow and not wiggly; on the bad side, Cholesky band-power windows are skewed to one side, and they respond poorly to the presence of small negative eigenvalues in the Fisher matrix, as can occur because of a combination of numerical noise and the various approximations that go into computing the Fisher matrix. In summary, the square root of the Fisher matrix is a useful tool for decorrelating the power spectrum not only of CMB fluctuations (Tegmark & Hamilton 1998), but also of galaxy redshift surveys. We conclude with the caveat that this paper, like the preceding Paper 3, has ignored redshift distortions, and other complicating factors such as light-to-mass bias. 
In any realistic analysis of real galaxy surveys, such complications must be taken into account. ## Acknowledgements We thank Huan Lin for providing FKP-weighted pair integrals for LCRS, Michael Strauss for details of the angular masks of SDSS and IRAS 1.2 Jy, and David Weinberg for the radial selection function of SDSS. This work was supported by NASA grants NAG5-6034 and NAG5-7128 and by Hubble Fellowship HF-01084.01-96A from STScI, operated by AURA, Inc. under NASA contract NAS5-26555.
# Scanning tunneling microscopy and spectroscopy at low temperatures of the (110) surface of Te doped GaAs single crystals ## I Introduction Since its introduction, scanning tunneling spectroscopy (STS) was expected to reveal the electronic structure of surfaces with a spatial resolution of the order of the interatomic distance. When combined with scanning tunneling microscopy (STM) imaging, STS is a powerful tool for studying the influence of lattice defects and impurities on the local electronic structure. Magnetic impurities in normal metals as well as superconductors or defects in semiconducting materials have been studied. The surface of the III-V compound semiconductor GaAs has been investigated extensively. STM and STS measurements have provided detailed information on the local electronic structure of dopant atoms and other atomic scale defects. Johnson et al. showed by voltage dependent STM imaging at room temperature that substitutional Zn and Be dopant atoms (acceptors) at the surface and in the upper subsurface layers influence the electronic structure of the GaAs (110) surface. Zheng et al. and Domke et al. obtained similar results for substitutional Si dopant atoms (donors). At low temperatures, Van der Wielen et al. observed Friedel charge density oscillations around Si dopant atoms near the GaAs surface. Other groups have reported on the presence of atomic scale defects at the cleaved GaAs (110) surface by STM at room temperature. The geometric and electronic structure of As vacancies, dopant-vacancy complexes and As antisite defects has been identified. The difficulties experienced with reproducibility indicate, however, that the application of combined STS and STM measurements requires a detailed knowledge of the relevant physical processes governing the behavior of nanoscale tunnel junctions. For nanoscale junctions the local density of states in the contact area is strongly altered by tip-sample interactions. 
These interactions result in considerable shifts of the allowed energy levels, where deep lying levels may be driven through the Fermi level. Localized states can not only be connected with the sample surface, but also appear at the tip apex. Because the radii of the localized states are of the order of the size of the contact area, the tunneling current in STM and STS experiments can be dominated by electron transport through one single localized state. Therefore, we have to take into account the finite relaxation rate of the electrons which occupy the localized states. At low temperatures the relaxation rate may become comparable to the tunneling rate for the electrons which will be driven out of equilibrium. The low relaxation rate will give rise to the appearance of localized charges in the contact area which strongly influence the position of the localized states with respect to the Fermi energy. In the present paper, we present the results of our low temperature STM/STS investigation of Te doped GaAs crystals which are cleaved along the (110) plane. By STM imaging at different voltages and polarities, we are able to distinguish different types of defects which can be located in different subsurface layers. The main type of defect is identified as an n-type substitutional dopant atom, $`\mathrm{Te}_{\mathrm{As}}`$, i.e., a Te atom occupying an As lattice site. The STM images of the dopant atoms depend on the applied sample voltage. We have also obtained spatially resolved spectroscopy curves for different positions of the tip in the vicinity of a Te impurity. The differential conductance curves show the presence of peaks inside the semiconductor band gap. The position and height of the peaks depend on the position of the tip with respect to the impurity. We will argue that both the topographic STM images and the conductivity curves reflect the presence of charges which are localized in the tunnel junction area. 
As already indicated above, the localized charges are the combined result of the presence of localized states within the nanoscale tunnel junction area and the non-equilibrium electron distribution which is caused by the low relaxation rate at low temperatures. Differential conductance curves taken above a defect free surface area with different STM tips confirm that the tip also contains localized states which can be charged. The appearance and position of the conductance peaks within the band gap can be linked to the complicated voltage dependence of the charging effects. Finally, at negative sample voltages we clearly observe Friedel charge density oscillations around the ionized Te dopant atoms. ## II Experiment The STM and STS data have been obtained with a home built low temperature STM with an in situ cleavage mechanism. The samples are n-type GaAs single crystals which are doped with Te. The nominal concentration of Te atoms is $`5\times 10^{17}\mathrm{cm}^{-3}`$. It is well known that Te acts as a donor dopant atom which occupies an As lattice position. At our relatively low doping level, compensation effects may be neglected. The crystals are cleaved along the (110) plane after cooling down to liquid helium temperature. The partial vapor pressure of oxygen is extremely low at this temperature, implying that surfaces like the GaAs (110) surface will stay atomically clean for many days. All the STM and STS measurements are done with $`\mathrm{Pt}_{80}`$$`\mathrm{Ir}_{20}`$ tips, cut ex situ with scissors. Our samples contain ohmic contacts obtained by thermodiffusion which allow us to perform electrical transport measurements. From Hall measurements we have determined that the density of electrical carriers at $`5\mathrm{K}`$ is $`8.9\times 10^{17}\mathrm{cm}^{-3}`$, which is sufficient to result in metallic conductivity at low temperatures. 
Te is a shallow impurity for GaAs, i.e., the Te atom occupying an As lattice site provides a 5s electron which is weakly bound to a positively charged Te ion. The localization radius for such a 5s valence electron is about $`7\mathrm{nm}`$. At doping levels exceeding $`4.5\times 10^{17}\mathrm{cm}^{-3}`$ the orbitals of neighboring doping atoms will overlap, providing metallic conductivity with a Fermi level close to the edge of the conduction band. The metallic behavior of our samples is confirmed by the temperature dependence of the conductivity, $`\sigma (T)-\sigma (T\to 0)\propto \sqrt{T}`$, with an extrapolated conductivity $`\sigma (T\to 0)\simeq 1200\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$. The high quality of our GaAs crystals also allows the observation of Shubnikov-de Haas oscillations in the magnetic field dependence of the low temperature conductivity. ## III Experimental results The (110) GaAs surface has a simple (1×1) structural relaxation which leads to surface states located outside of the semiconductor band gap. Unoccupied Ga and occupied As surface states can be found above and below the band gap, respectively. Therefore, the STM image of the clean (110) GaAs surface taken at positive and negative sample voltages is, to first order, determined by the Ga and As sublattices, respectively. This is illustrated in the inset of Fig. 1 which shows two STM pictures of the GaAs(110) surface at opposite polarity of the tunneling voltage. A high doping level and/or voltage dependent band bending can result in the presence of occupied states in the conduction band. In that case, depending on the applied voltage, both the Ga and the As sublattice will contribute to the STM images. Figure 1 shows the current versus voltage characteristic above an atomically flat area of the GaAs (110) surface.
Due to the voltage dependent band bending, the measured band gap tends to be larger than the bulk value (for GaAs, $`E_g\simeq 1.52\mathrm{eV}`$ at $`5\mathrm{K}`$ and $`E_g\simeq 1.43\mathrm{eV}`$ at $`300\mathrm{K}`$), and at low temperatures this difference can become quite large. Voltage dependent band bending is the result of the space charge region at the surface of the sample which compensates the electric field between the tip and the sample. We will show that at low temperatures this band bending is very sensitive to localized charges which are induced in the STM contact area. Figure 2 shows an STM topographic image of the cleaved GaAs (110) surface at a sample voltage of $`-1.5\mathrm{V}`$. We clearly distinguish different types of defects superimposed on the atomic lattice. We will restrict ourselves to the investigation of one type of defect, referred to as the A-type defect. Figures 3(a) and 3(b) show one A-type defect at different values of the sample voltage. This defect is observed as a round hillock feature which at negative polarity of the sample voltage is surrounded by a darker ring. As will be discussed in more detail in Section V, the ring-like features can be interpreted in terms of Friedel charge density oscillations which result from the screening of charged defects. The statistical distribution of the charged A-type defects within the different subsurface layers (see below) indicates that these defects correspond to the Te dopant atoms. On the other hand, we note that the two STM images shown in Figs. 3(a) and 3(b) look very similar to voltage dependent STM images of substitutional Si<sub>Ga</sub> dopant atoms for the GaAs (110) surface. We conclude that the A-type defects appearing in Fig. 2 are substitutional Te<sub>As</sub> dopant atoms which occupy As lattice sites. Figure 3(c) is an STM image at a sample voltage of $`+0.5\mathrm{V}`$ of the same $`\mathrm{Te}_{\mathrm{As}}`$ dopant atom shown in Figs. 3(a) and 3(b).
While scanning the area surrounding the dopant atom, the image of the dopant atom suddenly switched from a hillock feature to a depression (the scanning direction is downwards). The different topography cannot simply be the result of a double tip effect or any other mechanical instability, since we clearly observe continuous atomic rows on the flat surface at the left and right hand side of the dopant atom. When increasing the sample voltage to $`+1.5\mathrm{V}`$, the dopant image continues to appear as a depression (see Fig. 3(d)). We note that the switching of the contrast in Fig. 3(c) only affects the imaging at positive sample voltages, i.e., the Friedel charge density oscillations at negative sample voltages continue to appear as shown in Fig. 3(a). The theoretical model, which will be introduced in Section IV, allows us to link the switching of the contrast in Fig. 3(c) to a change of the localized charges residing on the tip apex and/or the dopant atom. The different intensities of the A-type defects in Fig. 2 are caused by the fact that the Te<sub>As</sub> dopant atoms can be observed in different layers below the surface. According to Zheng et al., dopant atoms at the surface are expected to behave differently from dopant atoms in the subsurface layers. Based on the statistical distribution, the A-type dopant atoms can be identified as subsurface dopant atoms, while the B-type defects in Fig. 2 probably correspond to dopant atoms residing at the GaAs surface. The inset of Fig. 4 shows the height profiles across three A-type dopant atoms located in different layers. The observed corrugations can be grouped in five categories corresponding to the top five subsurface layers. In Fig. 4 we have plotted on a logarithmic scale the average corrugation for the five layers. The average corrugation depends exponentially on the depth below the surface.
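An exponential depth dependence of the corrugation amounts to a straight line on the logarithmic scale of Fig. 4. As an illustration of how such a decay length can be extracted, the sketch below fits synthetic corrugation values (invented placeholders, not the measured data; only the fitting procedure is the point):

```python
import numpy as np

# Illustrative fit of corrugation height vs. subsurface layer number.
# The values are synthetic stand-ins, generated to follow
# h(n) = h0 * exp(-n / n_decay); h0 and n_decay are assumed parameters.
layers = np.arange(1, 6)                 # five subsurface layers
h0, n_decay = 0.12, 1.4                  # nm, layers (illustrative)
corrugation = h0 * np.exp(-layers / n_decay)

# An exponential law is a straight line in ln(h) vs. layer number:
slope, intercept = np.polyfit(layers, np.log(corrugation), 1)
print(f"decay length = {-1.0 / slope:.2f} layers")  # recovers n_decay
```

With real profiles one would fit the measured averages per layer in the same way; a systematic deviation from the straight line would signal a departure from purely exponential decay.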
The number of visible layers corresponds to what other authors have reported for measurements at room temperature. On the other hand, we note that Zheng et al. have observed at room temperature a linear dependence of the corrugation height on the layer number for Si dopant atoms. The exponential dependence at low temperatures may be caused by a spatial localization of the probed electron states which becomes more pronounced with decreasing temperature. Figure 5(a) shows the normalized conductance curves $`(dI/dV)/(I/V)`$ which have been obtained in the vicinity of a Te<sub>As</sub> dopant atom. The curves have been averaged within the indicated three square areas around a Te<sub>As</sub> dopant atom located in the first subsurface layer. After averaging about 70 measured $`I(V)`$ curves within one area, the differential conductance $`dI/dV`$ is obtained numerically and normalized. Several larger and smaller peaks can be observed inside the semiconductor band gap. The position as well as the intensity of these peaks strongly depends on the surface area which is being probed. We will show that the peaks reflect the presence of localized states which can be associated with defects and/or the tip apex. Compared to room temperature, localized states play a more dominant role in the tunneling process at low temperatures because of the low relaxation rate of the electrons. The relevance of states localized at the tip apex is illustrated by the two normalized conductance curves in Fig. 5(b). The data have been obtained above an atomically flat, defect free area on the GaAs surface with two different STM tips. The presence of a peak in the tunneling conductance near the band gap edge is obvious for one tip, while this peak is absent for the other tip used in the experiment. The fact that two different STM tips can result in different conductance curves is consistent with a change of the charge localized at the tip apex.
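The averaging-and-normalization procedure just described can be sketched numerically. The following is a minimal illustration; the smooth model $`I(V)`$ curve, its amplitude, the bias range and the noise level are all arbitrary stand-ins for the measured data:

```python
import numpy as np

# Sketch of the (dI/dV)/(I/V) normalization applied to averaged I(V) curves.
# A smooth odd model curve stands in for the ~70 measured curves per area.
rng = np.random.default_rng(0)
V = np.arange(-200, 201) * 0.01                 # bias from -2 V to +2 V
curves = [1e-9 * np.sinh(1.5 * V) + rng.normal(0.0, 1e-11, V.size)
          for _ in range(70)]

I = np.mean(curves, axis=0)                     # average the raw I(V) data
dIdV = np.gradient(I, V)                        # numerical differentiation
with np.errstate(divide="ignore", invalid="ignore"):
    norm = dIdV / (I / V)                       # normalized conductance
norm[V == 0] = np.nan                           # I/V is undefined at V = 0
```

Dividing $`dI/dV`$ by $`I/V`$ suppresses the overall exponential voltage dependence of the tunneling transmission, which is why normalized conductance curves are conventionally used in STS.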
A theoretical model which takes into account such charging effects is presented in Section IV below. As discussed in Section II, the doping with Te atoms provides a metallic conductivity in our GaAs samples even at low temperatures. The ionized Te atoms correspond to a localized charge which is screened by the conduction electrons. This gives rise to the presence of Friedel charge density oscillations, which were already visible in Fig. 2 and in Fig. 3. Figure 6 shows a detailed STM image of a $`\mathrm{Te}_{\mathrm{As}}`$ dopant atom on the GaAs (110) surface taken at a sample voltage of $`-1.5\mathrm{V}`$. In order to highlight the presence of the Friedel charge density oscillations, the atomic corrugation of the image has been filtered out digitally. The ring-like structures are very similar to the Friedel oscillations appearing on the GaAs (110) surface around Si dopant atoms. As discussed in more detail in Section V, the image shown in Fig. 6 cannot simply be related to the standard screening model for the bulk material. According to this model, the oscillation period should be given by half the bulk Fermi wavelength, $`\lambda _\mathrm{F}/2\simeq 10.5\mathrm{nm}`$ for the carrier density obtained from the Hall effect (see Section II), while the oscillation period inferred from Fig. 6 is only about $`3.3\mathrm{nm}`$. ## IV Theoretical model The conductivity of a tunnel junction between a semiconductor and a metal tip is usually calculated by relying on the concept of band bending. The position of the energy bands is determined by the electrostatic potential which is related to the charge density through Poisson’s equation. Integration of Poisson’s equation gives the band bending in the semiconductor. An essential assumption is that Fermi-Dirac statistics can be used to determine the equilibrium electron occupation numbers from which the charge density distribution is obtained.
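As an aside, the value of $`10.5\mathrm{nm}`$ quoted above for half the bulk Fermi wavelength follows from the free-electron relation $`k_\mathrm{F}=(3\pi ^2n)^{1/3}`$ applied to the Hall density of Section II. A quick numerical check (assuming that bulk free-electron expression):

```python
import math

# Free-electron estimate of the bulk Friedel period lambda_F / 2 for Te:GaAs,
# using the Hall carrier density from Section II.
n = 8.9e17 * 1e6                               # carrier density: cm^-3 -> m^-3
k_F = (3.0 * math.pi**2 * n) ** (1.0 / 3.0)    # Fermi wave vector, in m^-1
lambda_F = 2.0 * math.pi / k_F                 # Fermi wavelength, in m

half_period_nm = lambda_F / 2.0 * 1e9
print(f"lambda_F / 2 = {half_period_nm:.1f} nm")  # about 10.5 nm
```

The mismatch with the observed period of about $`3.3\mathrm{nm}`$ is therefore not a matter of the input density; it points to physics beyond the bulk screening model, as discussed in Section V.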
In a nanoscale tunnel junction, the current can be influenced and even be dominated by tunneling through localized states which can be present at the semiconductor surface as well as at the tip apex. At low temperatures we have to take into account the finite relaxation rate of the electrons occupying the localized states. The electron distribution will be out of equilibrium and an additional charge can appear in the tunnel contact area due to the presence of the localized states. This additional charge will influence the local electronic spectrum. In order to describe this influence we propose a theoretical model which is a generalisation of previously used approaches and includes both non-equilibrium effects and the influence of localized states in the tunnel contact area. The influence of an additional localized charge on the tunneling characteristics has to be treated self-consistently. It is necessary to take into account both a Hubbard type repulsion between localized electrons as well as the electrostatic potential at the semiconductor surface due to the localized charge. This is imposed by the fact that both the typical tip-sample separation and the typical radius of a localized state are comparable to the inter-atomic distance. The Coulomb interaction of the Hubbard type shifts the energy of the localized state by an amount $`U\simeq e^2/a_0`$, where $`a_0`$ is the radius of the localized state. Next, a potential $`W`$ has to be introduced to describe the interaction of the electrons at the semiconductor surface with the additional charge present on the localized state. In general, the exact calculation of the Coulomb potential at a semiconductor surface is very complicated because one needs to know both the geometry of the tunnel contact and the distribution of the electric field in the contact area.
There are two limiting cases for which the calculations can be simplified, but which still reproduce the main characteristic features of the tunneling conductivity and the STM imaging. In the limit of strong screening, when the effective radius of the potential is of the order of the inter-atomic distance, one can treat $`W`$ as a point like potential. In the limit of weak screening, when the effective radius of the potential is much larger than the inter-atomic distance and the tunnel contact size, the potential $`W`$ at the semiconductor surface stays approximately constant in the vicinity of the contact. Here, we will restrict ourselves to the case of strong screening. The extra charge residing on the localized states and the tunneling conductivity can be obtained from a self-consistent approach based on the diagram technique for non-equilibrium processes. In order to describe the tunneling processes for an STM junction in the presence of a localized state, we use a model which includes three subsystems: an ideal semiconductor, a localized electronic level (connected with a surface defect or with a tip apex state) and a normal metal (the STM tip). The subsystems are connected by tunneling matrix elements. We add an interaction of the semiconductor electrode with a thermal bath in order to take into account a finite relaxation rate for the electrons. We consider such a finite relaxation rate only for the semiconductor electrons: the electrons in the metallic electrode (STM tip) are assumed to be in thermal equilibrium. Figure 7 gives a schematic view of the tunnel junction we are considering. The thermal bath is connected to the semiconductor via a relaxation rate $`\gamma `$. The expression for the tunneling conductivity turns out to be rather insensitive to the details of this connection (a point-like connection or a distribution of scattering centers).
The initial position of the localized state in the absence of any tip-sample interaction corresponds to an energy $`\epsilon _d^0`$. The STM contact induces a shift of the localized state towards a voltage dependent energy $`\epsilon _d`$. We want to stress that in our model the localized state can in principle be any state which is localized within the tunnel junction area. In our STM/STS measurements the localized state can, e.g., be a surface impurity state or a state localized at the tip apex. In the example illustrated in Fig. 7 the localized state has been assigned to the tip apex and is connected to the bulk of the metallic tip via a relaxation rate $`\gamma _0`$. The localized state is connected to the semiconductor electrode via the tunneling rate $`\mathrm{\Gamma }`$. Figure 8 shows some typical results of our numerical evaluation of the analytical expression for the tunneling conductivity. Each curve corresponds to a set of typical values for the relaxation rates $`\gamma `$ and $`\gamma _0`$, the tunneling rate $`\mathrm{\Gamma }`$ and the initial position $`\epsilon _d^0`$ of the localized state. All the relevant parameters are expressed in units of the energy $`\mathrm{\Delta }`$ which corresponds to half the semiconductor band gap, i.e., $`E_g/2`$ (see Fig. 7). The initial model density of states of the semiconductor is shown by the dotted line in Fig. 8. As indicated in Fig. 7, the bands are assumed to have a width $`4\mathrm{\Delta }`$. The on-site Hubbard repulsion $`U\simeq e^2/a_0`$ is about $`0.5`$–$`1\mathrm{eV}`$ for a localization radius $`a_0\simeq 0.5`$–$`1\mathrm{nm}`$. For our calculations we have taken $`U=\mathrm{\Delta }`$. A similar choice $`W=\mathrm{\Delta }`$ has been made for the semiconductor surface Coulomb potential. The qualitative features of the tunneling conductivity are insensitive to variations of the Coulomb parameters $`U`$ and $`W`$ for reasonable choices of these parameters.
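The self-consistency at the heart of the model — the occupation of the localized state shifts its energy through $`U`$, which in turn changes the occupation — can be illustrated with a drastically simplified rate-balance toy model. This is not the full non-equilibrium diagram-technique calculation of the paper; the sharp level, the simple two-reservoir balance and all parameter values (in units of $`\mathrm{\Delta }`$) are illustrative assumptions:

```python
import math

def fermi(e, mu, kT=0.005):
    """Equilibrium Fermi factor (all energies in units of Delta)."""
    return 1.0 / (1.0 + math.exp((e - mu) / kT))

def occupation(eps_d, mu_tip, mu_sc, gamma0, Gamma):
    """Rate balance of a level coupled to tip (gamma0) and sample (Gamma)."""
    return (gamma0 * fermi(eps_d, mu_tip)
            + Gamma * fermi(eps_d, mu_sc)) / (gamma0 + Gamma)

def self_consistent_level(eps_d0, U, mu_tip, mu_sc, gamma0=0.05, Gamma=0.2):
    """Iterate eps_d = eps_d0 + U * n_d to convergence (damped mixing)."""
    eps_d = eps_d0
    for _ in range(200):
        n_d = occupation(eps_d, mu_tip, mu_sc, gamma0, Gamma)
        eps_d = 0.5 * eps_d + 0.5 * (eps_d0 + U * n_d)  # damped update
    return eps_d, n_d

# A level starting deep in the valence band is pushed up by the accumulated
# charge; with U = Delta the shift can be a sizable fraction of the gap.
eps_d, n_d = self_consistent_level(eps_d0=-1.2, U=1.0, mu_tip=-0.5, mu_sc=0.0)
```

With these illustrative numbers the level is pushed up by about $`0.8\mathrm{\Delta }`$, i.e., a shift comparable to the gap, in line with the behavior described in the text.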
For the numerical evaluation two different situations have been investigated: (i) the initial position $`\epsilon _d^0`$ of the localized state is inside the band gap (Fig. 8, curve 1), and (ii) the initial position $`\epsilon _d^0`$ of the localized state is in the conduction or the valence band (Fig. 8, curves 2 and 3). The calculated conductivity curves clearly differ from the standard tunneling conductivity curves which are expected for STS measurements. Our calculations reveal a shift of the band gap edges which becomes more pronounced when decreasing the relaxation rates (Fig. 8, curves 2 and 3). The non-equilibrium electron distribution leads to a charge accumulation on the localized state with initial energy $`\epsilon _d^0`$. Due to the Coulomb repulsion this results in a shift of the level to a position $`\epsilon _d`$, where the shift $`\epsilon _d-\epsilon _d^0`$ is comparable to the value of the band gap $`E_g=2\mathrm{\Delta }`$. Regardless of its initial energy, the localized state can emerge as a peak near the band gap edge (Fig. 8, curve 1). Near the band gap edge, the tunneling current rapidly grows with increasing tunneling voltage, implying major changes in the charge residing on the localized state. The exact location of the conductance peak is sensitive to variations of the parameters $`\mathrm{\Gamma }`$, $`\gamma `$, $`\gamma _0`$ and $`\epsilon _d^0`$, which determine the value of the induced charge. According to our calculations, the peak is not influenced by the position of the Fermi level relative to the band gap edges. Our theoretical model can be generalized to the case where several localized states are connected with the STM tip apex and/or with a defect in the tunnel junction area. Taking into account the induced charges connected with all the localized states, one expects to observe several peaks in the tunneling conductivity.
Finally, from our theoretical analysis we can draw the important conclusion that a peak in the tunneling conductance can also appear above an atomically flat surface area, provided a localized state is present at the apex of the STM tip. ## V Discussion Our low temperature STM/STS study of the GaAs (110) surface reproduces several of the basic features which have been reported for room temperature experiments. These features include the imaging of the Ga and As sublattices as well as the possibility to identify dopant atoms and other atomic scale defects. On the other hand, our low temperature experiments reveal specific features which cannot be observed or are less pronounced at room temperature. As discussed in Section IV, an unusual behavior of the tunneling conductivity in low temperature STM/STS experiments can be associated with the presence of localized states in the nanoscale junction area. While our model cannot provide a complete, quantitative understanding of the results, it does capture qualitatively the specific features which appear at low temperatures. The charge residing on localized states associated with surface impurities causes a local change of band bending, which leads to the observed contrast when imaging a dopant impurity (see Fig. 3). Our theoretical model also indicates that a charge associated with a localized state at the tip apex can have a strong influence on the image contrast. Due to the Coulomb interaction, the energy of a localized state in the tunnel junction area will be shifted when the charge on the localized state changes. Therefore, the STM image contrast also depends on the modification of the initial electronic spectrum by the extra localized charge. This implies that a self-consistent treatment of the tunneling process (see Section IV) is required to understand the voltage dependence of the contrast when imaging an impurity (see Fig. 3).
The details of this voltage dependence can obviously be different for different experiments. The sudden change of the contrast in Fig. 3(c) can be explained in terms of a change in the charge which is localized at the tip apex or on the impurity atom. On the other hand, in the absence of any sudden variation in the localized charges, the voltage dependence of the contrast is completely reproducible. As illustrated in Fig. 5, the experimental results for the tunneling conductivity strongly depend on the investigated area as well as on the tip which is being used. In Section IV we have indicated that the presence of several peaks in the tunneling conductivity can be associated with the influence of several localized states. These localized states may be associated with a dopant atom or with the tip apex, but the GaAs surface states can also result in a set of additional states with initial energies lying in the conduction or the valence band. Charging effects are able to shift these surface states into the band gap and give rise to peaks in the tunneling conductivity. Because the charge accumulated on a localized state is determined by the relaxation and tunneling rates, it will also depend on the tip-sample and the tip-defect distance. By changing the STM tip position, one can obtain tunneling conductance curves in which the position and the height of the peaks are very different. The influence of the tip position on the conductance peaks is illustrated in Fig. 5(a). As shown in Fig. 5(b), a conductance peak can also be present above an atomically flat surface. According to our theoretical model such a peak can be directly related to the charge residing on a localized state at the tip apex. We note that the experimental conductance curves shown in Fig. 5 also allow us to verify the shift of the semiconductor band gap edges which is predicted by our model. In our experiments Friedel charge density oscillations can only be observed at negative sample voltages.
The absence of the oscillations at positive voltages is caused by band bending which results in a depletion of the surface area and a shift of the Fermi level away from the conduction band. Electrons, which are able to screen the positive charge of the ionized Te dopant atoms, are only present at negative sample voltages. As mentioned in Section III, the Friedel oscillations shown in Fig. 6 cannot simply be explained in terms of the standard screening model. This is not surprising, since the image shown in Fig. 6 is obtained at one particular sample voltage, whereas the standard model, which predicts that the oscillation period is given by $`\lambda _F/2`$, takes into account conduction electrons with all possible energies. A detailed fitting of the observed Friedel oscillations requires an exact knowledge of the two-dimensional band structure of the GaAs surface. Additional complications arise because the tip-sample separation (about $`0.5\mathrm{nm}`$) does not exceed the width of the ring-like structures around the Te impurity (more than $`2\mathrm{nm}`$ for GaAs). Therefore, charged states at the tip apex are likely to modify the distribution of the electron density and the corresponding STM image of the Friedel oscillations. On the other hand, the number of screening electrons below the Fermi level in the surface region depends on the local band bending which is determined both by the applied voltage and the presence of localized charges in the tunnel junction area (see Section IV). Finally, the distance between neighboring dopant atoms is comparable to the screening length. This will result in a superposition of the Friedel oscillations caused by different dopant atoms. The fact that the Friedel oscillations shown in Fig. 6(a) tend to be deformed at larger distances is probably related to this superposition of oscillations. ## VI Conclusion We have studied n-type GaAs single crystals which are doped with Te atoms.
The electrical transport properties reveal a metallic behavior of the GaAs crystals down to liquid helium temperatures. We have performed voltage dependent imaging and spatially resolved spectroscopy on the (110) surface of the in situ cleaved crystals by means of a low temperature scanning tunneling microscope. The majority of the atomic scale defects are identified as substitutional $`\mathrm{Te}_{\mathrm{As}}`$ dopant atoms. These dopant atoms can be observed in the surface layer as well as in the next four subsurface layers, and become surrounded by Friedel charge density oscillations at negative sample voltages. We have developed a theoretical model which qualitatively accounts for the voltage dependent contrast of the STM topographic images. The model also provides an explanation for the conductance peaks which appear in the semiconductor band gap and whose appearance changes strongly with the tip position. Our model is based on the presence of charges residing on localized states in the tunnel junction area. The charges appear because of the non-equilibrium electron distribution in the STM contact area which results from a finite relaxation rate for the electrons at low temperatures. The localized charges are not only associated with the Te dopant atoms or with other atomic scale defects, but also appear on states which are localized at the tip apex. The presence of localized states at the tip apex allows us to understand tip dependent anomalies which can even be observed on atomically flat surface areas. ## ACKNOWLEDGMENTS The work at the K.U.Leuven has been supported by the Fund for Scientific Research - Flanders (FWO) as well as by the Flemish Concerted Action (GOA) and the Belgian Inter-University Attraction Poles (IUAP) research programs. The collaboration between Moscow and Leuven has been funded by the European Commission (INTAS, project 94-3562).
The work in Moscow has been supported by the Russian Ministry of Research (Surface atomic Structures, grant 95-1.22; Nanostructures, grant 1-032) and the Russian Foundation of Basic Research (RFBR, grants 96-0219640a and 96-15-96420). We are much indebted to I. Gordon for performing the electrical transport measurements.
# Entropy Revisited: The Plausible Role of Gravitation

Eshel Ben-Jacob<sup>(a)</sup>, Ziv Hermon<sup>(a)</sup>, Alexander Shnirman<sup>(a),(b)</sup> <sup>(a)</sup>School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel <sup>(b)</sup>Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, Illinois 61801-3080, U.S.A.

## Abstract

We first present open questions related to the foundations of thermodynamics and statistical physics. We then argue that in principle one cannot have “closed systems”, and that a universal background should exist. We propose that the gravitational field plays this role, due to its vanishing energy-momentum tensor. This leads to a new possible picture, in which entropy and irreversibility in macroscopic systems emerge from their coupling to the background gravitational field.

Thermodynamics and statistical physics are the scientific disciplines devoted to the description of macroscopic systems at equilibrium. Quoting Callen: “whether we are physicists, chemists, biologists, or engineers, our primary interface with nature is through the properties of macroscopic matter”. Both disciplines are considered to be well established. Yet they pose open fundamental questions, or even paradoxes. The origin of time irreversibility of macroscopic systems is one example. So are the origin of entropy as a real macroscopic variable, the fundamental relation $`S=S(U,V,N)`$ ($`U`$ being the system’s internal energy) and the second law. However, at present many physicists think that these are not “real” questions. The reductionist approach that dominates physical thought regards the need for the additional laws of thermodynamics (on top of the microscopic laws) and the additional fundamental assumptions of statistical physics as a reflection of our intellectual limitations and not as an ontological reality. In principle, they are believed to be derivable from the microscopic laws.
The first goal of this paper is to convince the reader that there are open questions in the foundations of thermodynamics and statistical physics. The second is to propose that the answers to the above questions might have to do with the special nature of gravitation, i.e., the vanishing of the total energy-momentum tensor of gravitation and matter. Although this property by itself seems to be in contradiction with both statistical physics and quantum mechanics, we present here a new picture synergizing the above disciplines. Over the years there has been considerable effort to reconcile classical thermodynamics with classical and quantum gravity. In this paper we discuss the idea that, as every system is coupled to the gravitational metric, no system can be truly isolated. We will argue that this is the source of irreversibility in Nature. This can be shown by tracing over the degrees of freedom of the gravitational metric. The system alone is then described by a reduced density matrix, and this reduced density matrix evolves in the usual manner towards its final equilibrium state. Thermodynamics is actually a summary of experimental observations of properties and of quasistatic processes in macroscopic systems. They all fit within the same framework, if we assume that an additional real variable related to heat does exist (real in Einstein’s sense, i.e., it can be measured). For closed systems (systems that do not exchange energy, volume, or matter with the surroundings) the new variable, the entropy $`(S)`$, is a homogeneous, first order function of the extensive controlled variables. For these systems, the relation $`S=S(U,V,N)`$ is referred to as the fundamental relation or the fundamental equation. It is also assumed that $`S`$ is continuous and differentiable and is a monotonically increasing function of $`U`$.
The assumption of the existence of the fundamental relation goes hand in hand with the second law of thermodynamics: the entropy reaches a maximum (as a function of the uncontrolled variables) at equilibrium. The formulation of the second law is relatively simple, yet it is perhaps the most mysticism-clad law of physics. Phrases like “macroscopic systems have a tendency to reach equilibrium” or “the natural tendency of closed systems is to maximize their entropy” are used freely. For example, to quote Callen: ”… in all systems there is a tendency to evolve towards states in which the properties are determined by intrinsic factors and not by previously applied external influences. Such simple terminal states are, by definition, time independent. They are called equilibrium states.” The above state of affairs reminds one of Aristotelian times. Then it was said that the ”natural state” of bodies is to be at rest, and that bodies have an internal tendency to reach their natural state. The ”natural state” has also been reflected in the terms used to describe the state of bodies. Moving objects were referred to as bodies not at rest. We now understand that it is not an internal tendency of bodies to be at rest. On the contrary, we need dissipation to force the bodies to reach the minimum of a potential well and stay there at rest. Below, we argue that in a metaphorically similar manner, it is the gravitational background that forces the system to reach equilibrium. Since its energy is controlled and its Hamiltonian includes no terms coupling it to the environment, a theoretically defined closed system must remain forever in one of its many-body quantum states. Note that this state can be either an energy eigenstate or a coherent superposition of energy states. In both cases, it is a specific microstate of the system. As entropy is a measure of the number of microstates corresponding to a macrostate of a system, it vanishes.
This implies that for an ideal closed system there is no sense in defining and talking about the system’s entropy. Usually this difficulty is “solved” by the argument that one should consider not a fixed value of the system’s total energy $`U`$, but rather include some uncertainty, so that $`U`$ is controlled only up to some $`\delta U`$. We prefer the solution put forward by Callen: “The apparent paradox is seated in the assumption of isolation of a physical system. No (finite) physical system is, or ever can be, truly isolated”. He mentions the electromagnetic background, gravitational fields and the vacuum itself; all can exchange energy and matter with the system. A similar argument has been raised by Landau and Lifshitz: “In consequence of the extremely high density of levels, a macroscopic body in practice can never be in a strictly stationary state. First of all, it is clear that the value of the energy of the system will always be “broadened” by an amount of the order of the energy of interaction between the system and the surrounding bodies. The latter is very large in comparison with the separations between levels, not only for quasi-closed subsystems but also for systems which from any other aspect could be regarded as strictly closed. In Nature, of course, there are no completely closed systems, whose interaction with any other body is exactly zero; and whatever interaction does exist, even if it is so small that it does not affect other properties of the system, will still be very large in comparison with the infinitesimal intervals in the energy spectrum.” In other words, Landau and Lifshitz proposed that in practice we cannot have ideal closed systems. We would like to argue that *in principle* there cannot be ideal closed systems. Consider a “closed” system, which according to the argument of Landau and Lifshitz must be in a mixed state with some uncertainty of energy, $`\delta U`$.
Except for postulating its density matrix, there is only one way to describe the system, which is to assume that the system and its environment constitute one big physical system which is prepared in a pure quantum state. Due to the interaction between the small system and the environment, the exact eigenstates of the big system are entangled. This means that by tracing over the states of the environment one obtains a reduced density matrix corresponding to the small system in a mixed state. The entropy of this mixed state may be calculated in the usual manner, and it simply reflects the measure of the entanglement with the environment. Within this approach one can mimic the growth of the entropy of the small system by initially preparing the big system in a “less” entangled state. Indeed, since all the eigenstates of the big system are entangled, an unentangled state must be a very unique superposition of many eigenstates with different energies. The time evolution, then, will always increase the entanglement, at least for some initial period of time. This approach may be very useful for “all practical purposes”, since the larger the environment the longer one can mimic the irreversibility. However, we are left with the big system, which was prepared in a pure state. The entropy of this system vanishes, and none of the fundamental questions is really resolved. We come to the conclusion that there should exist a universal environment (background) to which any “closed” system is coupled. This background may not be united with any physical system to form a bigger system in a pure state. As a result, the closed system is in a mixed state, entropy can be assigned to it, and it is subjected to the second law. The above is valid provided that the interaction with the background is larger than the energy spacing between the many-body quantum states. 
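The entanglement mechanism described above is easy to see in a toy calculation. The sketch below prepares an unentangled pure state of a "big system" (a one-qubit system plus a three-qubit environment) and shows that the reduced entropy of the small system grows from zero under the joint unitary evolution. The random Hermitian Hamiltonian is an illustrative stand-in, not a model taken from the text.

```python
import numpy as np
from scipy.linalg import expm

# Toy illustration: a one-qubit "system" and a three-qubit "environment"
# start in an unentangled pure product state of the big system.  Unitary
# evolution under a generic coupling Hamiltonian entangles them, so the
# reduced density matrix of the small system acquires a nonzero
# von Neumann entropy.  The random Hermitian H is an illustrative
# stand-in, not a model from the text.
rng = np.random.default_rng(1)
dS, dE = 2, 8                                # system / environment dimensions
D = dS * dE

A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = (A + A.conj().T) / 2                     # generic Hermitian "big system" Hamiltonian

psi0 = np.zeros(D, complex)
psi0[0] = 1.0                                # product state |0>_S |0>_E

def system_entropy(psi):
    """Von Neumann entropy of the system's reduced density matrix."""
    rho = psi.reshape(dS, dE)                # amplitudes psi[s, e]
    rhoS = rho @ rho.conj().T                # partial trace over the environment
    p = np.linalg.eigvalsh(rhoS)
    p = p[p > 1e-12]
    return max(0.0, float(-(p * np.log(p)).sum()))

psi1 = expm(-1j * 0.5 * H) @ psi0            # evolve the pure state
print(system_entropy(psi0))                  # 0.0: unentangled initial state
print(system_entropy(psi1))                  # positive: entanglement entropy
```

Preparing a less entangled initial state and watching the entropy rise mimics irreversibility for a while, but the big system as a whole remains in a pure state of vanishing entropy, exactly as argued above.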
In other words, since a closed system is actually open, the background induces the transitions which lead to the existence of entropy. Entropy can be viewed as the interaction of the system with its background under natural constraints (minimal interaction with the background). The system does not have a tendency to reach maximum entropy; it is rather the background which forces the system towards equilibrium. We propose that the universal background with which every macroscopic system has minimal interaction is the gravitational field. The other fields, in principle, can be either screened or included as part of the Hamiltonian of the system. If the microscopic states of a system are to be equally probable, so should be the transitions between these states. Thus the coupling with the background has to lead to induced transitions with equal probability. It might be due to the “central limit theorem” of random variables when applied to the background. However, one may think about another possibility. The background may couple a given state of a system only to a small number of states (as is the case for other known interactions). Then a tree-like structure would be induced in the space of all states of the system. Starting at a given state, one may go only to those states which are connected to the initial one by the background. In the next step, another subset of states becomes accessible. The “transport” on such a tree may be very nontrivial. In a recent work , the Cayley tree structure of states was used, and the localization on such a tree was interpreted as a transition from the Fermi liquid picture for high energy states to a more refined one for low energy states. It may happen that the localization on a background induced tree of states corresponds to ergodicity-nonergodicity transition. 
The essence of thermodynamics and statistical physics is that we can, in principle, define energy and mass (numbers of particles and their masses) for any enclosed finite volume, and decouple it from the environment. This implicit assumption is in contradiction with Einstein’s theory of gravitation. According to the latter we cannot co-define the energy of the gravitational field with matter in any enclosed finite volume. To quote Dirac : “It is not possible to obtain an expression for the energy of the gravitational field satisfying both the conditions: (i) when added to other forms of energy the total energy is conserved, and (ii) the energy within a definite (three-dimensional) region at a certain time is independent of the coordinate system”. Or as Landau and Lifshitz formulate : “ … the gravitational field cannot itself be included in a closed system, since the conservation laws which are, as we have seen, the foundation of statistical physics would then reduce to identities”. These strange properties of the gravitational field follow from the dual role of the metric tensor $`g_{ij}`$. On one hand, it generates the symmetry of general coordinate transformations, i.e., the variational derivative of the action with respect to $`g_{ij}`$ is the energy-momentum tensor. On the other hand, as it is also a physical field (the gravitational field), the same derivative gives the corresponding Lagrange equation of motion. The energy-momentum tensor thus vanishes. We see that the gravitational field is the only one that cannot be screened or be included as a part of the Hamiltonian of the system. The above leads us to propose a new possible interpretation of the entropy, which can also be viewed as a postulation of a new law of Nature. When enclosing a volume to construct a “closed system”, we impose the constraints that the system does not exchange matter, heat, or energy other than gravitational energy with its surroundings. 
Thus a “closed system” is “open” with respect to interaction with the background gravitational field. The latter should be viewed as an entropy bath (in analogy to a heat bath, particle bath, etc.), as it causes transitions between the system’s microstates. Moreover, when the strength of the interaction with the gravitational field is much larger than the energy spacing between the many-body quantum states of the system, the latter becomes irrelevant, and the microstates are determined by the single-particle states . In the new interpretation, the entropy represents the effect of the uncontrolled background on the enclosed system, and is not an inherent property of the system itself. In the same manner, the second law reflects the effect of the background on the system rather than being a “tendency” of the system. At present we lack a theory of quantum gravity which, in our new picture, is necessary for the complete establishment of the foundation of thermodynamics and statistical physics. (It might be that knowing the behaviour of macroscopic systems will actually provide hints on the principles of quantum gravity.) Therefore, we also lack a quantitative evaluation of the strength of the interaction between the macroscopic system and the background gravitational field. Yet, the naive assumption is that, due to the smallness of the Planck scale, the interaction of systems with the background gravitational field should be neglected. Gravity is assumed to play a role either at and below the Planck scale, or on cosmological scales. The relevant background field for non-cosmological thermodynamic systems is assumed to be the background electromagnetic field which has a much stronger interaction with the system. Moreover, the energy of the background radiation ($`3`$ K) is considered to be much higher than the yet unknown energy of the background gravitational field, as they departed from mutual equilibrium at an early stage of the universe . 
Nevertheless, some estimations suggest that the two energies are not that different . Now comes into play the fundamental difference between the two fields, namely, the fact that the electromagnetic field can be screened. Indeed we can perform experiments lowering the temperature of thermodynamic systems well below $`3`$ K. Recently, the strength of the interaction with the background field has been estimated. Ellis et al. have proposed that the correction to the time evolution of the density matrix is proportional to $$\delta E=\frac{E^2}{M_{Pl}},$$ (1) where $`E`$ is the energy of the system and $`M_{Pl}=10^{19}`$GeV is the Planck mass (in units of energy). In their case, the “system” is an elementary or a composite particle. Adaptation of this estimate to thermodynamic systems can be done in two ways: 1. from the point of view of the individual particles composing the system; 2. from the point of view of the whole system. Consider a thermodynamic system of $`1cm^3`$ composed of an Avogadro number ($`N_A`$) of non-interacting particles at temperature $`T=1`$ K. Taking the individual-particle view, the energy of each particle is approximately $`k_BT`$ ($`k_B`$ is the Boltzmann constant), hence the correction per particle, $`\delta E_1`$, is given by $`\delta E_1=\frac{(k_BT)^2}{M_{Pl}},`$ and the correction for the system is $$\delta E=N_A\delta E_1=\frac{N_A(k_BT)^2}{M_{Pl}}.$$ (2) Taking the alternative interpretation, the system’s total energy is $`N_Ak_BT`$, hence the correction is $$\delta E=\frac{(N_Ak_BT)^2}{M_{Pl}}.$$ (3) Inserting the parameters indicated above we obtain $`\delta E\sim 10^{-32}`$ J and $`\delta E\sim 10^{-9}`$ J for the first and second interpretation, respectively. We would like to emphasize that the spacing between two many-body energy levels of the system under consideration is of the order of $`10^{-40}`$ J. 
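The two order-of-magnitude estimates can be checked with a few lines of arithmetic; the sketch below uses standard SI values and reproduces the quoted magnitudes to within a factor of a few.

```python
# Sanity check of the two adaptations of Eq. (1); standard SI constants.
k_B = 1.381e-23          # Boltzmann constant, J/K
N_A = 6.022e23           # Avogadro number
GeV = 1.602e-10          # 1 GeV in joules
M_Pl = 1e19 * GeV        # Planck mass expressed as an energy, ~1.6e9 J
T = 1.0                  # temperature, K

# Interpretation 1: correction per particle, summed over N_A particles, Eq. (2)
dE1 = N_A * (k_B * T)**2 / M_Pl
# Interpretation 2: correction for the total energy N_A * k_B * T, Eq. (3)
dE2 = (N_A * k_B * T)**2 / M_Pl

print(f"per-particle view:  {dE1:.1e} J")    # ~7e-32 J
print(f"whole-system view:  {dE2:.1e} J")    # ~4e-08 J
```

Both numbers are large compared with the many-body level spacing quoted in the text, which is the point of the comparison.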
Thus, even if we take the first interpretation, the energy correction is sufficient to mix the energy states and lead to the emergence of entropy. As we have proposed before, it also leads to the breakdown of the many-body states into a distribution over the single-particle states (the Fermi-Dirac and Bose-Einstein distributions for fermions and bosons, respectively). The situation is different when the system is in a coherent macroscopic quantum state (e.g. superfluidity, superconductivity, Hall state, etc.) with a large energy gap separating the state from the continuum. In this case, when the energy gap is larger than the effect of the background gravitational field, the latter can be ignored. We expect that the energy correction is given by Eq. (1) as long as the system’s energy is sufficiently high. Otherwise, there is a minimal correction which, in small systems, is proportional to the energy of the background gravitational field times the system’s mass, and inversely proportional to the size of the system. In large systems, it is proportional to the time of interaction times the speed of light. The effect of the gravitational field should therefore saturate at a minimal level as the temperature of the system is lowered. This differs from the effect of the electromagnetic field (within the system), which decreases with the system’s temperature and saturates only at its zero-point fluctuations. Thus, the coupling to the background gravitational field might explain the phase transitions in ³He and the observations that the dephasing time in mesoscopic systems saturates as the temperature is lowered . To explain the latter according to the new picture, one requires additional assumptions: 1. The dephasing due to the gravitational field is carried out not only through direct coupling to the moving particle. 
The main effect is through the coupling of the field to the whole system which, in turn, is coupled to the particle via the mechanism of Stern et al . 2. The metric of the gravitational field is not quantized, i.e., the dephasing process does not require emission or absorption of gravitons. Accepting the above, we predict that the dephasing saturation temperature depends on the mass density of the system. In the experiments of Mohanty, Jariwala and Webb, the saturation temperature of GaAs is found to be higher, and the dephasing time is found to be shorter, relative to those of Si. Clearly, this can also result from the different electronic structure of the two materials. We suggest distinguishing between the two possible mechanisms by using systems of equal dimensions made of different isotopes of the same material. The picture above has immediate implications with respect to quantum measurements and the collapse of the wave function. In this picture, the collapse is a consequence of the interaction of the particle with the background gravitational field, mediated via the measuring apparatus. We will discuss this issue elsewhere, together with other issues related to the new picture (e.g., the fact that the universe as a whole seems to evolve towards lower entropy, the evolution of complexity and entropy production of open systems, etc.). To conclude, we propose that the origin of irreversibility in time is the interaction of energy and matter with the metric of space-time (which can be viewed as a generalized Mach-like principle), and that the fundamental relation of thermodynamics originates from the minimal interaction of any enclosed system with its gravitational background. If, indeed, entropy reflects the coupling of macroscopic systems to the background gravitational field, the macroscopic behaviour is not simply derivable from the isolated microscopic dynamics of the system, and we may have to re-examine our reductionist view of Nature. 
This article required knowledge of thermodynamics, statistical physics, the foundations of quantum mechanics, and issues related to classical and quantum gravity. We were lucky to be able to learn from and consult with Y. Aharonov, Y. Bekenstein, D. J. Bergman and B. Reznik, each with his own expertise. We also thank the referee for his constructive comments on the first version of the manuscript and for the valuable references he pointed out to us, and D. Halbing for critical reading of the manuscript. Note added in proof: After the submission of our manuscript we have come across a manuscript entitled “Entropy Defined, Entropy Increase and Decoherence Understood, and Some Black-Hole Puzzles Solved” by Bernard S. Kay , which presents a similar picture from a quantum gravity starting point.
# Ionization of a Model Atom: Exact Results and Connection with Experiment

## Abstract

We prove that a model atom having one bound state will be fully ionized by a time periodic potential of arbitrary strength $`r`$ and frequency $`\omega `$. The survival probability is for small $`r`$ given by $`e^{-\mathrm{\Gamma }t}`$ for times of order $`\mathrm{\Gamma }^{-1}\sim r^{-2n}`$, where $`n`$ is the number of “photons” required for ionization, with enhanced stability at resonances. For late times the decay is like $`t^{-3}`$. Results are for a 1d system with a delta function potential of strength $`g(1+\eta (t))`$, but comparison with experiments on the microwave ionization of excited hydrogen atoms and with recent analytical work indicates that many features are universal.

PACS: 32.80.Rm, 03.65.Db, 32.80.Wr.

Transitions between bound and free states of a system are of great importance in many areas of science and “much of the practical business of quantum mechanics is calculating exponential decay rates” . There are, however, still many unresolved questions when one goes beyond perturbation theory . Unfortunately, approaches going beyond perturbation theory, such as Floquet theory, semi-classical analysis and numerical solution of the time dependent Schrödinger equation, are both complicated and also involve, when calculating transitions to the continuum, uncontrolled approximations . It is only recently that some general results going beyond perturbation theory have been rigorously established for models with spatial structure . We still don’t know, however, many basic facts about the ionization process, e.g. the conditions for a time dependent external field to fully dissociate a molecule or ionize an atom, much less the ionization probability as a function of time and of the form of such a field . 
Granted that the problem is intrinsically complicated, it would be very valuable to have some simple solvable models which contain the spatial structure of the bound state and the continuum and can thus serve as a guide to the essential features of the process. In this note we describe new exact results relating to ionization of a very simple model atom by an oscillating field (potential) of arbitrary strength and frequency. While our results hold for arbitrary strength perturbations, the predictions are particularly explicit and sharp in the case where the strength of the oscillating field is small relative to the binding potential—a situation commonly encountered in practice. Going beyond perturbation theory, we rigorously prove the existence of a well defined exponential decay regime which is followed, for late times when the survival probability is already very low, by a power law decay. This is true no matter how small the frequency. The times required for ionization are however very dependent on the perturbing frequency. For a harmonic perturbation with frequency $`\omega `$ the ionization time grows like $`r^{-2n}`$, where $`r`$ is the normalized strength of the perturbation and $`n`$ is the number of “photons” required for ionization. This is consistent with conclusions drawn from perturbation theory and other methods (the approach in being the closest to ours), but is, as far as we know, the first exact result in this direction. We also obtain, via controlled schemes, such as continued fractions and convergent series expansions, results for strong perturbing potentials. Quite surprisingly our results reproduce many features of the experimental curves for the multiphoton ionization of excited hydrogen atoms by a microwave field . 
These features include both the general dependence of the ionization probabilities on field strength as well as the increase in the lifetime of the bound state when $`n\hbar \omega `$, $`n`$ integer, is very close to the binding energy. Such “resonance stabilization” is a striking feature of the Rydberg level ionization curves . These successes and comparisons with analytical results - suggest that the simple model we shall now describe contains many of the essential ingredients of the ionization process in real systems. The model we consider is the much studied one-dimensional system with Hamiltonian , , , $$H_0=-\frac{\hbar ^2}{2m}\frac{d^2}{dy^2}-g\delta (y),\hspace{1em}g>0,\hspace{1em}-\mathrm{\infty }<y<\mathrm{\infty }.$$ (1) $`H_0`$ has a single bound state $`u_b(y)=\sqrt{p_0}e^{-p_0|y|}`$, $`p_0=\frac{m}{\hbar ^2}g`$, with energy $`-\hbar \omega _0=-\hbar ^2p_0^2/2m`$ and a continuous uniform spectrum on the positive real line, with generalized eigenfunctions $$u(k,y)=\frac{1}{\sqrt{2\pi }}\left(e^{iky}-\frac{p_0}{p_0+i|k|}e^{i|ky|}\right),\hspace{1em}-\mathrm{\infty }<k<\mathrm{\infty }$$ and energies $`\hbar ^2k^2/2m`$. Beginning at some initial time, say $`t=0`$, we apply a perturbing potential $`-g\eta (t)\delta (y)`$, i.e. we change the parameter $`g`$ in $`H_0`$ to $`g(1+\eta (t))`$ and solve the time dependent Schrödinger equation for $`\psi (y,t)`$, $$\psi (y,t)=\theta (t)u_b(y)e^{i\omega _0t}+\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\mathrm{\Theta }(k,t)u(k,y)e^{-i\frac{\hbar k^2}{2m}t}dk\hspace{1em}(t\ge 0)$$ (2) with initial values $`\theta (0)=1,\mathrm{\Theta }(k,0)=0`$. This gives the survival probability $`|\theta (t)|^2`$, as well as the fraction of ejected electrons $`|\mathrm{\Theta }(k,t)|^2dk`$ with (quasi-) momentum in the interval $`dk`$. In a previous work we found that this problem can be reduced to the solution of a single integral equation. 
Using units in which $`p_0,\omega _0,\hbar ,2m`$ and $`\frac{g}{2}`$ equal $`1`$ we get $$\theta (t)=1+2i\int _0^tY(s)ds$$ (4) $$\mathrm{\Theta }(k,t)=\frac{2|k|}{\sqrt{2\pi }(1-i|k|)}\int _0^tY(s)e^{i(1+k^2)s}ds$$ (5) where $`Y(t)`$ satisfies the integral equation $$Y(t)=\eta (t)\left\{1+\int _0^t[2i+M(t-t^{\prime })]Y(t^{\prime })dt^{\prime }\right\}$$ (6) with $$M(s)=\frac{2i}{\pi }\int _0^{\mathrm{\infty }}\frac{u^2e^{is(1+u^2)}}{1+u^2}du=\frac{1}{2}\sqrt{\frac{i}{\pi }}\int _s^{\mathrm{\infty }}\frac{e^{iu}}{u^{3/2}}du.$$ An important result of the present work is that when $`\eta (t)`$ is a trigonometric polynomial with real coefficients $$\eta (t)=\sum _{j=1}^nA_j\mathrm{sin}(j\omega t)+\sum _{j=1}^mB_j\mathrm{cos}(j\omega t)$$ (7) the survival probability $`|\theta (t)|^2`$ tends to zero as $`t\to \mathrm{\infty }`$, for all $`\omega >0`$. This result follows from (4) and (6) once we establish that $`2|Y(t)|=|\theta ^{\prime }(t)|\to 0`$ in an integrable way, and this represents the difficult part of the proof. Since the main features of the behavior of $`y(p)`$ are already present in the simplest case $`\eta =r\mathrm{sin}(\omega t)`$ we now specialize to this case. The asymptotic characterization of $`Y`$ is obtained from its Laplace transform $`y(p)=\int _0^{\mathrm{\infty }}e^{-pt}Y(t)dt`$, which satisfies the functional equation (cf. (6)) $$y(p)=\frac{ir}{2}\left\{\frac{y(p+i\omega )}{\sqrt{1-ip+\omega }-1}-\frac{y(p-i\omega )}{\sqrt{1-ip-\omega }-1}\right\}+\frac{r\omega }{\omega ^2+p^2}$$ (8) with the boundary condition $`y(p)\to 0`$ as $`\mathrm{Im}(p)\to \pm \mathrm{\infty }`$ (the relevant branch of the square root is $`(1-ip-\omega )^{1/2}=i(\omega -1+ip)^{1/2}`$ for $`\omega >1`$). We show that the solution of (8) with the given boundary conditions is unique and analytic for $`\mathrm{Re}(p)>0`$, and its only singularities on the imaginary axis are square-root branch points (see below). 
This in turn implies that $`|Y(t)|`$ does indeed decay in an integrable way. The proof depends in a crucial way on the behavior of the solutions of the homogeneous equation associated to (8): $`y(p)`$ has poles on a vertical line if the homogeneous equation has a solution that is uniformly bounded along that line. The absence of such solutions in the closed right half plane is shown by exploiting the symmetry with respect to complex conjugation of the underlying physical problem and carries through directly to the more general periodic potential (6). To understand the ionization processes as a function of $`t`$, $`\omega `$, and $`r`$ requires a detailed study of the singularities of $`y(p)`$ in the whole complex $`p`$-plane. This yields the following results: For small $`r`$, $`y(p)`$ has square root branch points at $`p=\{i(n\omega +1)+O(r^2):n\in 𝐙\}`$, is analytic in the right half plane and also in an open neighborhood $`𝒩`$ of the imaginary axis with cuts through the branch points. As $`|q|\to \mathrm{\infty }`$ in $`𝒩`$ we have $`|y(q)|=O(r\omega |q|^{-2})`$. If $`|\omega -\frac{1}{n}|>\mathrm{const}.r^2`$, $`n`$ a positive integer, then for small $`r`$ the function $`y`$ is meromorphic in the strips $`m\omega +1-O(r^2)>\mathrm{Im}(p)>m\omega -\omega +1+O(r^2)`$, $`m\in 𝐙`$, and has a unique pole in each of these strips, at a point $`p`$ with $`0>\mathrm{Re}(p)=-O(r^{2n})`$ for small $`r`$. It then follows that $`\theta (t)`$ can be decomposed as $$\theta (t)=e^{-\gamma (r;\omega )t}e^{it}F_\omega (t)+\sum _{m=-\mathrm{\infty }}^{\mathrm{\infty }}e^{i(1+m\omega )t}h_m(t)$$ (10) where $`F_\omega `$ is periodic of period $`2\pi \omega ^{-1}`$ and its Fourier coefficients decay faster than $`r^nn^{-n/2}`$, and $`|h_m(t)|\le \mathrm{const}.r^{|m|}t^{-3/2}`$ for large $`t`$ uniformly in $`m`$. Furthermore, $`h_m(t)\sim \sum _{j=0}^{\mathrm{\infty }}c_{m,j}t^{-3/2-j}`$ for large $`t`$. 
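The decomposition (10) implies a survival probability that decays exponentially until the power-law tail takes over. A toy numerical illustration of this crossover follows; the rate and the tail amplitude are made-up numbers chosen for illustration, not values computed from the model.

```python
import numpy as np

# Toy model of |theta(t)|^2 from Eq. (10): an exponentially decaying
# piece plus a t^{-3/2} tail in the amplitude.  Gamma and c are
# illustrative numbers, not quantities derived from the integral equation.
Gamma, c = 0.05, 1e-6
t = np.logspace(0, 4, 400)
survival = np.abs(np.exp(-Gamma * t / 2) + c * t**-1.5)**2

# Crossover where the two contributions balance: exp(-Gamma t) = c^2 t^{-3}
i = int(np.argmin(np.abs(-Gamma * t - np.log(c**2 * t**-3.0))))
print("crossover near t =", t[i])            # roughly t ~ 1e3 for these numbers
```

By that crossover time the surviving fraction is already of order $`10^{-21}`$ in this toy example, matching the statement that the power law only matters when very little of the bound state is left.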
Consequently, for times of order $`1/\mathrm{Re}(\gamma )`$ the survival probability decays as $`\mathrm{exp}(-\mathrm{\Gamma }t)`$, $`\mathrm{\Gamma }=2\mathrm{Re}(\gamma )`$, after which its long time behavior is $`|\theta (t)|^2=O(t^{-3})`$. This is illustrated in Figure 1 where it is seen that for small $`r`$ exponential decay holds up to times at which the survival probability is extremely small, after which $`|\theta (t)|^2`$ decays polynomially with many oscillations. Note that even for $`r`$ as large as $`0.3`$ the decay is essentially purely exponential for all practical purposes. Thus, for $`\omega >1`$ Fermi’s golden rule works magnificently . Using a continued fraction representation of the solutions of the homogeneous equation associated to (8) we obtain as $`r\to 0`$, $$\mathrm{\Gamma }=\{\begin{array}{cc}\sqrt{\omega -1}\frac{r^2}{\omega },& \text{if }\omega >1+O(r^2)\\ \frac{\sqrt{2\omega -1}}{(1-\sqrt{1-\omega })^2}\frac{r^4}{8\omega },& \text{if }\omega \in (\frac{1}{2},1)^+\\ \mathrm{\cdots }& \mathrm{\cdots }\\ \frac{2^{-2n+2}\sqrt{n\omega -1}}{\prod _{m<n}(1-\sqrt{1-m\omega })^2}\frac{r^{2n}}{n\omega },& \text{if }\omega \in (\frac{1}{n},\frac{1}{n-1})^+\end{array}$$ (11) where $`\omega \in (a,b)^+`$ means $`a+O(r^2)<\omega <b-O(r^2)`$. The result for $`\omega >1`$ agrees with perturbation theory since the transition matrix element is $$\left|\langle u_b(y)|\delta (y)|u(k,y)\rangle \right|^2=\frac{1}{2\pi }\frac{k^2}{1+k^2}.$$ (12) In Figure 2 we plot the behavior of $`\mathrm{\Gamma }^{-1}`$, which is just the time needed for $`|\theta (t)|^2`$ to decay significantly, as a function of $`\omega `$. The curve is made up of smooth (roughly self-similar) pieces for $`\omega `$ in the intervals $`(n^{-1},(n-1)^{-1})`$ corresponding to ionization by $`n`$ photons. Note that at resonances, when $`\omega ^{-1}`$ is an integer (i.e. 
multiple of $`\omega _0^{-1}`$, here set equal to unity), the coefficient of $`r^{2n}`$, the leading term in $`\mathrm{\Gamma }`$, goes to zero. At such values of $`\omega `$ one has to go to higher order in $`r`$, corresponding to letting $`\omega `$ approach the resonance from below. This yields an enhanced stability of the bound state against ionization by perturbations with such frequencies. The origin of this behavior is, in our model, the vanishing of the matrix element in (12) at $`k=0`$. This behavior should hold quite generally since the quasi-free wavefunction $`u(k,y)`$ may be expected to vanish pointwise as $`k\to 0`$. For $`d\ne 1`$ there is an additional factor $`k^{d-2}`$ coming from the energy density of states near $`k=0`$. As $`r`$ increases these resonances shift in the direction of increased frequency. For small $`r`$ and $`\omega =1`$ the shift in the position of the resonance, sometimes called the dynamic Stark effect , is about $`\frac{r^2}{\sqrt{2}}`$. In Figure 3 we plot the strength of the perturbation $`r`$ required to make $`|\theta (t)|^2=\frac{1}{2}`$ for a fixed number of oscillations of the perturbing field (time measured in units of $`\omega ^{-1}`$) as a function of $`\omega `$. Also included in this figure are experimental results for the ionization of a hydrogen atom by a microwave field. In this still ongoing beautiful series of experiments, carried out by several groups and reviewed in , the atom is initially in an excited state with principal quantum number $`n_0`$ ranging from 32 to 90. The experimental results in Fig. 3 are taken from Table 1 in , see also Figures 13 and 18 there. The “natural frequency” $`\omega _0`$ is there taken to be that of a transition from $`n_0`$ to $`n_0+1`$, $`\omega _0\propto n_0^{-3}`$. The strength of the microwave field $`F`$ is then normalized to the strength of the nuclear field in the initial state, which scales like $`n_0^{-4}`$. The plot there is thus of $`n_0^4F`$ vs. $`n_0^3\omega `$. 
To compare the results of our model with the experimental ones we had to relate $`r`$ to $`n_0^4F`$. Given the difference between the hydrogen atom Hamiltonian with potential $`V_0(R)=-1/R`$ perturbed by a polarized electric field $`V_1=xF\mathrm{sin}(\omega t)`$, and our model with $`V_1=rV_0`$, this is clearly not something that can be done in any unique way. We therefore simply tried to find a correspondence between $`n_0^4F`$ and $`r`$ which would give the best visual fit. Somewhat to our surprise these fits for different values of $`\omega /\omega _0`$ all turned out to have values of $`r`$ close to $`3n_0^4F`$. A correspondence of the same order of magnitude is obtained by comparing the perturbation-induced shifts of bound state energies in our model and in hydrogen. The shift in the position of the resonances from the integer fractional values seen in Fig. 2, due to the finite value of $`r`$, was approximated in Fig. 3 using the average value of $`r`$ over the range, $`r\approx 0.195`$. In Figure 4 we plot $`|\theta (t)|^2`$ vs. $`r`$ for a fixed $`t`$ and two different values of $`\omega `$. These frequencies are chosen to correspond to the values of $`\omega /\omega _0`$ in the experimental curves, Figure 1 in and Figure 1b in . The agreement is very good for $`\omega /\omega _0=0.1116`$ and reasonable for the larger ratio. Our model essentially predicts that when the fields are not too strong, the experimental survival curves for a fixed $`n_0^3\omega `$ (away from the resonances) should behave like $`\mathrm{exp}\left(-C[n_0^4F]^{\frac{2}{n_0^3\omega }}t\omega \right)`$ with $`C`$ depending on $`n_0^3\omega `$ but, to first approximation, independent of $`n_0^4F`$. The degree of agreement between the behavior of what might be considered the absolutely simplest quantum mechanical model of a bound state coupled to the continuum and experiments on hydrogen atoms is truly surprising. 
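The leading-order rates in Eq. (11) (in the units of the model, $`\omega _0=1`$) are easy to evaluate numerically. The sketch below implements the one- and two-photon frequency windows (the general $`n`$-photon coefficient follows the same pattern) and also checks the matrix element (12) at one value of $`k`$:

```python
import math

def gamma_rate(omega, r):
    """Leading small-r ionization rate from Eq. (11), in units omega_0 = 1.
    Only the one- and two-photon windows are implemented here."""
    if omega > 1.0:                          # one photon suffices
        return math.sqrt(omega - 1.0) * r**2 / omega
    if 0.5 < omega < 1.0:                    # two photons needed
        return (math.sqrt(2.0*omega - 1.0)
                / (1.0 - math.sqrt(1.0 - omega))**2
                * r**4 / (8.0 * omega))
    raise ValueError("omega outside the implemented windows")

# The ionization time 1/Gamma jumps by orders of magnitude across omega = 1:
for omega in (2.0, 1.2, 0.8, 0.6):
    print(omega, 1.0 / gamma_rate(omega, r=0.1))

# Check of Eq. (12), using u_b(0) = 1 and
# u(k, 0) = (i|k| / (1 + i|k|)) / sqrt(2*pi) in these units:
k = 0.7
u_k0 = (1j*k / (1 + 1j*k)) / math.sqrt(2*math.pi)
assert abs(abs(u_k0)**2 - k**2 / (2*math.pi*(1 + k**2))) < 1e-12
```

The rapid growth of the ionization time below $`\omega =1`$ is the same effect plotted in Figure 2 of the text.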
The experimental results and in particular the resonances have often been interpreted in terms of classical phase space orbits in which resonance stabilization is due to KAM–like stability islands . Such classical analogs are absent in our model as in fact are “photons”. On the other hand, the special nature of the edge of the continuum seems to be quite general, cf. . We note that for $`\omega >\omega _0`$, in the limit of small amplitudes $`r`$, a predominantly exponential decay of the survival probability followed by a power-law decay was proven in for three dimensional models with quite general local binding potentials having one bound state, perturbed by a local potential of the form $`r\mathrm{cos}(\omega t)V_1(y)`$. It seems likely that our results for general $`\omega `$ and $`r`$, including general periodic (perhaps also quasi-periodic) perturbations would extend to a similarly general setting. We are currently investigating various extensions of our model to understand the effect of the restriction to one bound state. This will hopefully lead to a more detailed understanding, and some control over the ionization process. Because $`\mathrm{\Gamma }`$ relates to the position of the poles of the solution of (8), a convenient way to determine $`\mathrm{\Gamma }`$ (mathematical rigor aside), if $`r`$ is not too large, is the following, see also . One iterates $`n`$ times the functional equation (8), $`n`$ appropriately large, to express $`y(p)`$ only in terms of $`y(p\pm mi\omega )`$ with $`|m|>n`$. After neglecting the small contributions of the $`y(p\pm mi\omega )`$, the poles of $`y(p)`$ can be obtained by a rapidly converging power series in $`r`$, whose coefficients are relatively easy to find using a symbolic language program, although a careful monitoring of the square-root branches is required. 
A complete study of the poles and branch-points of $`y`$ leads to (10), which is effectively the Borel summation of the formal (exponential) asymptotic expansion of $`Y`$ for $`t\to \mathrm{\infty }`$. Acknowledgments. We thank A. Soffer, M. Weinstein and P. M. Koch for valuable discussions and for providing us with their papers. We also thank R. Barker, S. Guerin and H. Jauslin for introducing us to the subject. Work of O. C. was supported by NSF Grant 9704968, that of J. L. L. and A. R. by AFOSR Grant F49620-98-1-0207. * Also Department of Physics. costin@math.rutgers.edu, lebowitz@sakharov.rutgers.edu, rokhlenk@math.rutgers.edu.
# Quantum-limited linewidth of a chaotic laser cavity

## Abstract

A random-matrix theory is presented for the linewidth of a laser cavity in which the radiation is scattered chaotically. The linewidth is enhanced above the Schawlow-Townes value by the Petermann factor $`K`$, due to the non-orthogonality of the cavity modes. The factor $`K`$ is expressed in terms of a non-Hermitian random matrix and its distribution is calculated exactly for the case that the cavity is coupled to the outside via a small opening. The average of $`K`$ is found to depend non-analytically on the area of the opening, and to greatly exceed the most probable value.

It has been known since the conception of the laser that vacuum fluctuations of the electromagnetic field ultimately limit the narrowing of the emission spectrum by laser action. This quantum-limited linewidth, or Schawlow-Townes linewidth, $$\delta \omega =\frac{1}{2}\mathrm{\Gamma }^2/I,$$ (1) is proportional to the square of the decay rate $`\mathrm{\Gamma }`$ of the lasing cavity mode and inversely proportional to the output power $`I`$ (in units of photons/s). Many years later it was realised that the fundamental limit is larger than Eq. (1) by a factor $`K`$ that characterises the non-orthogonality of the cavity modes. This excess noise factor, or Petermann factor, has generated an extensive literature (see the recent papers and references therein), both because of its fundamental significance and because of its practical importance. Theories of the enhanced linewidth usually factorise $`K=K_lK_r`$ into a longitudinal and a transverse factor, assuming that the cavity mode is separable into longitudinal and transverse modes. Since a longitudinal or transverse mode is essentially one-dimensional, that is a major simplification. Separability breaks down if the cavity has an irregular shape or contains randomly placed scatterers. In the language of dynamical systems, one crosses over from integrable to chaotic dynamics . 
Chaotic laser cavities have attracted much interest recently, but not in connection with the quantum-limited linewidth. In this paper we present a general theory for the Petermann factor in a system with chaotic dynamics, and apply it to the simplest case of a chaotic cavity radiating through a small opening. Chaotic systems require a statistical treatment, so we compute the probability distribution of $`K`$ in an ensemble of cavities with small variations in shape and size. We find that the average of $`K-1`$ depends non-analytically ($`T\mathrm{ln}T^{-1}`$) on the transmission probability $`T`$ through the opening, so that it is beyond the reach of simple perturbation theory. The most probable value of $`K-1`$ is of order $`T`$, hence it is parametrically smaller than the average. The spectral statistics of chaotic systems is described by random-matrix theory . We begin by reformulating the existing theories for the Petermann factor in the framework of random-matrix theory. Modes of a closed cavity, in the absence of absorption or amplification, are eigenvalues of a Hermitian operator $`H_0`$. For a chaotic cavity, $`H_0`$ can be modelled by an $`M\times M`$ Hermitian matrix with independent Gaussian distributed elements. (The limit $`M\to \mathrm{\infty }`$ at fixed spacing $`\mathrm{\Delta }`$ of the modes is taken at the end of the calculation.) The matrix elements are real because of time-reversal symmetry. (This is the Gaussian orthogonal ensemble .) A small opening in the cavity is described by a real, non-random $`M\times N`$ coupling matrix $`W`$, with $`N`$ the number of wave channels transmitted through the opening. (For an opening of area $`𝒜`$, $`N\simeq 2\pi 𝒜/\lambda ^2`$ at wavelength $`\lambda `$.) Modes of the open cavity are complex eigenvalues (with negative imaginary part) of the non-Hermitian matrix $`H=H_0-i\pi WW^T`$.
The scattering matrix $`S`$ at frequency $`\omega `$ is related to $`H`$ by $$S=1-2\pi iW^T(\omega -H)^{-1}W.$$ (2) It is a unitary and symmetric, random $`N\times N`$ matrix, with poles at the eigenvalues of $`H`$. We now assume that the cavity is filled with a homogeneous amplifying medium (amplification rate $`1/\tau _a`$). This adds a term $`i/2\tau _a`$ to the eigenvalues, shifting them upwards towards the real axis. The lasing mode is the eigenvalue $`\mathrm{\Omega }-i\mathrm{\Gamma }/2`$ closest to the real axis, and the laser threshold is reached when the decay rate $`\mathrm{\Gamma }`$ of this mode equals the amplification rate $`1/\tau _a`$ . Near the laser threshold we need to retain only the contribution from the lasing mode (say mode number $`l`$) to the scattering matrix (2), $`S_{nm}`$ $`=`$ $`-2\pi i(W^TU)_{nl}(\omega -\mathrm{\Omega }+i\mathrm{\Gamma }/2-i/2\tau _a)^{-1}`$ (4) $`(U^{-1}W)_{lm},`$ where $`U`$ is the matrix of eigenvectors of $`H`$. Because $`H`$ is a symmetric matrix, we can choose $`U`$ such that $`U^{-1}=U^T`$ and write Eq. (4) in the form $$S_{nm}=\sigma _n\sigma _m(\omega -\mathrm{\Omega }+i\mathrm{\Gamma }/2-i/2\tau _a)^{-1},$$ (5) where $`\sigma _n=(-2\pi i)^{1/2}(W^TU)_{nl}`$ is the complex coupling constant of the lasing mode $`l`$ to the $`n`$-th wave channel. The Petermann factor $`K`$ is given by $$\sqrt{K}=\frac{1}{\mathrm{\Gamma }}\sum _{n=1}^{N}|\sigma _n|^2=(U^{\dagger }U)_{ll}.$$ (6) The second equality follows from the definition of $`\sigma _n`$, and is the matrix analogue of Siegman’s non-orthogonal mode expression . The first equality follows from the definition of $`K`$ as the factor multiplying the Schawlow-Townes linewidth . One verifies that $`K\ge 1`$ because $`(U^{\dagger }U)_{ll}\ge |(U^TU)_{ll}|=1`$. The relation (6) serves as the starting point for a calculation of the statistics of the Petermann factor in an ensemble of chaotic cavities.
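The random-matrix prescription above is easy to put on a computer. The sketch below is our illustration (not the authors' simulation; the matrix size and coupling strength are arbitrary choices): it draws a GOE matrix $`H_0`$, opens the cavity through a coupling vector $`W`$, and evaluates Eq. (6) for the mode closest to the real axis. Since numpy returns unit-norm eigenvectors $`v`$ (so $`v^{\dagger }v=1`$), renormalising to $`U^TU=1`$ turns Eq. (6) into $`K=|v^Tv|^{-2}`$.

```python
import numpy as np

rng = np.random.default_rng(42)
M, N = 200, 1              # matrix size and number of wave channels
coupling = 0.2             # opening strength (illustrative value)

# GOE Hamiltonian H0 (real symmetric, Gaussian entries)
G = rng.normal(size=(M, M))
H0 = (G + G.T) / 2.0

# Coupling vector and non-Hermitian Hamiltonian H = H0 - i*pi*W W^T
W = coupling * rng.normal(size=(M, N))
H = H0 - 1j * np.pi * (W @ W.T)

vals, vecs = np.linalg.eig(H)
l = np.argmax(vals.imag)   # lasing mode: eigenvalue closest to the real axis
v = vecs[:, l]             # numpy columns are unit norm: v^dag v = 1

# Eq. (6) with the normalisation U^T U = 1 gives K = |v^T v|^{-2} >= 1
K = 1.0 / abs(v @ v) ** 2
print(K)
```

By the Cauchy-Schwarz inequality $`|v^Tv|\le v^{\dagger }v=1`$, so the computed $`K`$ can never drop below one, which is a useful sanity check on any implementation.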
Here we restrict ourselves to the case $`N=1`$ of a single wave channel, leaving the multi-channel case for future investigation. For $`N=1`$ the coupling matrix $`W`$ reduces to a vector $`\stackrel{}{\alpha }=(W_{11},W_{21},\mathrm{},W_{M1})`$. Its magnitude $`|\stackrel{}{\alpha }|^2=(M\mathrm{\Delta }/\pi ^2)w`$, where $`w\in [0,1]`$ is related to the transmission probability $`T`$ of the single wave channel by $`T=4w(1+w)^{-2}`$. We assume a basis in which $`H_0`$ is diagonal (eigenvalues $`\omega _q`$). If the opening is much smaller than a wavelength, then a perturbation theory in $`\stackrel{}{\alpha }`$ seems a natural starting point. To leading order one finds $$K=1+(2\pi \alpha _l)^2\sum _{q\ne l}\frac{\alpha _q^2}{(\omega _l-\omega _q)^2}.$$ (7) The frequency $`\mathrm{\Omega }`$ and decay rate $`\mathrm{\Gamma }`$ of the lasing mode are given by $`\omega _l`$ and $`2\pi \alpha _l^2`$, respectively, to leading order in $`\stackrel{}{\alpha }`$. We seek the average $`\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}`$ of $`K`$ for a given value of $`\mathrm{\Omega }`$ and $`\mathrm{\Gamma }`$. The probability to find an eigenvalue at $`\omega _q`$ given that there is an eigenvalue at $`\omega _l`$ vanishes linearly for small $`|\omega _q-\omega _l|`$, as a consequence of eigenvalue repulsion constrained by time-reversal symmetry. Since the expression (7) for $`K`$ diverges quadratically for small $`|\omega _q-\omega _l|`$, we conclude that $`\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}`$ does not exist in perturbation theory. This severely complicates the problem. We have succeeded in obtaining a finite answer for the average Petermann factor by starting from the exact relation $$U_{ql}z_l=\omega _qU_{ql}-i\pi \alpha _q\sum _p\alpha _pU_{pl}$$ (8) between the complex eigenvalues $`z_q`$ of $`H`$ and the real eigenvalues $`\omega _q`$ of $`H_0`$.
Distinguishing between $`q=l`$ and $`q\ne l`$, and defining $`d_q=U_{ql}/U_{ll}`$, we obtain two recursion relations, $`z_l`$ $`=`$ $`\omega _l-i\pi \alpha _l^2-i\pi \alpha _l{\displaystyle \sum _{q\ne l}}\alpha _qd_q,`$ (10) $`id_q`$ $`=`$ $`{\displaystyle \frac{\pi \alpha _q}{z_l-\omega _q}}\left(\alpha _l+{\displaystyle \sum _{p\ne l}}\alpha _pd_p\right).`$ (11) The Petermann factor of the lasing mode $`l`$ follows from $$\sqrt{K}=\left(1+\sum _{q\ne l}|d_q|^2\right)\left|1+\sum _{q\ne l}d_q^2\right|^{-1}.$$ (12) We now use the fact that $`z_l`$ is the eigenvalue closest to the real axis. We may therefore assume that $`z_l`$ is close to the unperturbed value $`\omega _l`$ and replace the denominator $`z_l-\omega _q`$ in Eq. (11) by $`\omega _l-\omega _q`$. That decouples the two recursion relations, which may then be solved in closed form, $`z_l`$ $`=`$ $`\omega _l-i\pi \alpha _l^2\left(1+i\pi A\right)^{-1},`$ (14) $`id_q`$ $`=`$ $`{\displaystyle \frac{\pi \alpha _q\alpha _l}{\omega _l-\omega _q}}\left(1+i\pi A\right)^{-1}.`$ (15) We have defined $`A=\sum _{q\ne l}\alpha _q^2(\omega _l-\omega _q)^{-1}`$. The decay rate of the lasing mode is $$\mathrm{\Gamma }=-2\text{Im}z_l=2\pi \alpha _l^2(1+\pi ^2A^2)^{-1}.$$ (16) Since the lasing mode is close to the real axis, we may linearise the expression (12) for $`K`$ with respect to $`\mathrm{\Gamma }`$, $$K=1+4\sum _{q\ne l}(\text{Im}d_q)^2=1+\frac{(2\pi \mathrm{\Gamma }/\mathrm{\Delta })B}{1+\pi ^2A^2},$$ (17) with $`B=\mathrm{\Delta }\sum _{q\ne l}\alpha _q^2(\omega _l-\omega _q)^{-2}`$.
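The closed-form results (14)–(17) can be checked directly against numerical diagonalisation in the weak-coupling regime. The sketch below is ours (arbitrary illustrative numbers, unit spacing $`\mathrm{\Delta }=1`$); for this algebraic check any mode may be tracked, not only the one closest to the real axis.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 8
omega = np.arange(M, dtype=float)       # unperturbed modes, spacing Delta = 1
alpha = 1e-2 * rng.normal(size=M)       # weak coupling vector

H = np.diag(omega) - 1j * np.pi * np.outer(alpha, alpha)
vals, vecs = np.linalg.eig(H)

l = 0                                   # follow the mode that starts at omega[0]
idx = np.argmin(np.abs(vals - omega[l]))
z_exact = vals[idx]
v = vecs[:, idx]
K_exact = 1.0 / abs(v @ v) ** 2         # Petermann factor from Eq. (6)

mask = np.arange(M) != l
A = np.sum(alpha[mask] ** 2 / (omega[l] - omega[mask]))
B = np.sum(alpha[mask] ** 2 / (omega[l] - omega[mask]) ** 2)   # Delta = 1

z_formula = omega[l] - 1j * np.pi * alpha[l] ** 2 / (1 + 1j * np.pi * A)  # Eq. (14)
Gamma = 2 * np.pi * alpha[l] ** 2 / (1 + np.pi ** 2 * A ** 2)             # Eq. (16)
K_formula = 1 + 2 * np.pi * Gamma * B / (1 + np.pi ** 2 * A ** 2)         # Eq. (17)
```

With couplings this weak, replacing $`z_l-\omega _q`$ by $`\omega _l-\omega _q`$ costs only terms of higher order in $`\alpha `$, so the exact and closed-form values agree to well below a percent.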
The conditional average of $`K`$ at given $`\mathrm{\Gamma }`$ and $`\mathrm{\Omega }`$ can be written as the ratio of two unconditional averages, $`\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}`$ $`=`$ $`1+(2\pi \mathrm{\Gamma }/\mathrm{\Delta })\langle B(1+\pi ^2A^2)^{-1}Z\rangle /\langle Z\rangle ,`$ (19) $`Z`$ $`=`$ $`\delta (\mathrm{\Omega }-\omega _l)\delta \left(\mathrm{\Gamma }-2\pi \alpha _l^2(1+\pi ^2A^2)^{-1}\right).`$ (20) In principle one should also require that the decay rates of modes $`q\ne l`$ are bigger than $`\mathrm{\Gamma }`$, but this extra condition becomes irrelevant for $`\mathrm{\Gamma }\to 0`$. For $`M\to \mathrm{\infty }`$ the distribution of $`\alpha _q`$ is Gaussian, proportional to $`\mathrm{exp}(-\frac{1}{2}\alpha _q^2\pi ^2/w\mathrm{\Delta })`$ . The average of $`Z`$ over $`\alpha _l`$ yields a factor $`(1+\pi ^2A^2)^{1/2}`$, $$\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}=1+(2\pi \mathrm{\Gamma }/\mathrm{\Delta })\frac{\langle B(1+\pi ^2A^2)^{-1/2}\rangle }{\langle (1+\pi ^2A^2)^{1/2}\rangle },$$ (21) where only the averages over $`\alpha _q`$ and $`\omega _q`$ ($`q\ne l`$) remain, at fixed $`\omega _l=\mathrm{\Omega }`$. The problem is now reduced to a calculation of the joint probability distribution $`P(A,B)`$. This is a technical challenge, similar to the level curvature problem of random-matrix theory . The calculation will be presented elsewhere; here we only give the result: $$P(A,B)=\frac{1}{6}\sqrt{\frac{\pi }{2w}}\frac{\pi ^2A^2+w^2}{B^{7/2}}\mathrm{exp}\left[-\frac{w}{2B}\left(\frac{\pi ^2A^2}{w^2}+1\right)\right].$$ (22) Together with Eq. (21) this gives the mean Petermann factor $$\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}=1+\frac{\mathrm{\Gamma }}{\mathrm{\Delta }}\frac{2\pi }{3}\frac{G_{22}^{22}\left(w^2|\begin{array}{cc}0& 0\\ \frac{1}{2}& \frac{1}{2}\end{array}\right)}{G_{22}^{22}\left(w^2|\begin{array}{cc}\frac{1}{2}& \frac{1}{2}\\ 1& 0\end{array}\right)},$$ (23) in terms of the ratio of two Meijer $`G`$-functions. We have plotted the result in Fig. 1, as a function of $`T=4w(1+w)^{-2}`$.
The non-analytic dependence of the average $`K`$ on $`T`$ (and hence on the area of the opening) is a striking feature of our result. For $`T\ll 1`$, the average reduces to $$\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}=1+\frac{\pi }{6}\frac{T\mathrm{\Gamma }}{\mathrm{\Delta }}\mathrm{ln}\frac{16}{T}.$$ (24) The non-analyticity results from the relatively weak eigenvalue repulsion in the presence of time-reversal symmetry. If time-reversal symmetry is broken by a magneto-optical effect (as in Refs. ), then the stronger quadratic repulsion is sufficient to overcome the $`\omega ^{-2}`$ divergence of perturbation theory and the average $`K`$ becomes an analytic function of $`T`$. For this case, we find instead of Eq. (21) the simpler expression $$\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}=1+(2\pi \mathrm{\Gamma }/\mathrm{\Delta })\frac{\langle B\rangle }{\langle 1+\pi ^2A^2\rangle }.$$ (25) Using the joint probability distribution $$P(A,B)=\frac{\left(\pi ^2A^2+w^2\right)^2}{3wB^5}\mathrm{exp}\left[-\frac{w}{B}\left(\frac{\pi ^2A^2}{w^2}+1\right)\right],$$ (26) we find the mean $`K`$, $$\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}=1+\frac{\mathrm{\Gamma }}{\mathrm{\Delta }}\frac{4\pi w}{3(1+w^2)},$$ (27) shown dashed in Fig. 1. It is equal to $`\langle K\rangle _{\mathrm{\Omega },\mathrm{\Gamma }}=1+\frac{1}{3}\pi T\mathrm{\Gamma }/\mathrm{\Delta }`$ for $`T\ll 1`$. So far we have concentrated on the average Petermann factor, but from Eqs. (16), (17), and (22) we can compute the entire probability distribution of $`K`$ at fixed $`\mathrm{\Gamma }`$. We define $`\kappa =(K-1)\mathrm{\Delta }/\mathrm{\Gamma }T`$. A simple result for $`P(\kappa )`$ follows for $`T=1`$, $$P(\kappa )=\frac{4\pi ^2}{3}\kappa ^{-7/2}\mathrm{exp}(-\pi /\kappa ),$$ (28) and for $`T\ll 1`$, $$P(\kappa )=\frac{\pi }{12\kappa ^2}\left(1+\frac{\pi }{2\kappa }\right)\mathrm{exp}\left(-\frac{1}{4}\pi /\kappa \right),\kappa T\ll 1.$$ (29) As shown in Fig. 2, both distributions are very broad and asymmetric, with a long tail towards large $`\kappa `$.
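The distributions (28) and (29) integrate to unity, and direct integration of Eq. (28) gives the mean $`\langle \kappa \rangle =2\pi /3`$ at $`T=1`$; the logarithmically divergent mean of Eq. (29) is consistent with the $`T\mathrm{ln}T^{-1}`$ average found above. A small numerical check (ours):

```python
import numpy as np
from scipy.integrate import quad

# P(kappa) at T = 1, Eq. (28), and for T << 1, Eq. (29)
p28 = lambda k: (4 * np.pi ** 2 / 3) * k ** -3.5 * np.exp(-np.pi / k)
p29 = lambda k: (np.pi / (12 * k ** 2)) * (1 + np.pi / (2 * k)) * np.exp(-np.pi / (4 * k))

# Integrate from a tiny cutoff; the essential singularity kills the integrand at 0.
norm28, _ = quad(p28, 1e-6, np.inf)
norm29, _ = quad(p29, 1e-6, np.inf)
mean28, _ = quad(lambda k: k * p28(k), 1e-6, np.inf)
print(norm28, norm29, mean28)   # both norms ~ 1, mean ~ 2*pi/3
```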
The most probable (or modal) value of $`K-1`$ is of order $`T\mathrm{\Gamma }/\mathrm{\Delta }`$, parametrically smaller than the mean value (24) for $`T\ll 1`$. To check our analytical results we have also done a numerical simulation of the random-matrix model, generating a large number of random matrices $`H_0`$ and computing $`K`$ from Eq. (6). As one can see from Fig. 2, the agreement with Eqs. (28) and (29) is flawless. In conclusion, we have shown that chaotic scattering causes large statistical fluctuations in the quantum-limited linewidth of a laser cavity. We have examined in detail the case that the coupling to the cavity is via a single wave channel, but our random-matrix model applies more generally to coupling via an arbitrary number $`N`$ of wave channels. We have computed exactly the distribution of the Petermann factor for $`N=1`$. It remains an open problem to do the same for $`N>1`$. This problem is related to several recent studies of the statistics of eigenfunctions of non-Hermitian Hamiltonians , but is complicated by the constraint that the corresponding eigenvalue is the closest to the real axis. Our study of a system with a fully chaotic phase space complements previous theoretical work on systems with an integrable dynamics. Chaotic laser cavities of recent experimental interest have a phase space that includes both integrable and chaotic regions. The study of the quantum-limited linewidth of such mixed systems is a challenging problem for future research. We have benefitted from discussions with P. W. Brouwer, K. M. Frahm, Y. V. Fyodorov, and F. von Oppen. This work was supported by the Dutch Science Foundation NWO/FOM and by the TMR program of the European Union.
# NEAR-INFRARED PHOTOMETRY OF BLAZARS ## 1 Introduction The discovery that blazars (i.e., optically violently variable quasars and BL Lac objects) and flat radio-spectrum quasars emit most of their power in high-energy gamma rays (Fichtel et al. 1994) probably represents one of the most surprising results from the Compton Gamma-Ray Observatory (CGRO). Their luminosity above 100 MeV in some cases exceeds 10<sup>48</sup> ergs s<sup>-1</sup> (assuming isotropic emission) and can be larger (by a factor of 10-100) than the luminosity in the rest of the electromagnetic spectrum. Blazars have smooth, rapidly variable, polarized continuum emission from radio through UV/X-ray wavelengths. All have compact flat-spectrum radio cores, and many exhibit superluminal motions. A strong correlation between gamma-ray and near-infrared luminosities was recently discovered for a sample of blazars, and it was suggested that this relation might be a common property of these objects (Xie et al. 1997). On this basis, Xie et al. conclude that hot dust is likely to be the main source of the soft (near-infrared) photons, which are continuously injected within the knot and then produce $`\gamma `$-ray flares by inverse Compton scattering on relativistic electrons. The aim of the observations described here was to search for intraday or day-to-day NIR variability in a sample of blazars. We present the results of an observing run on two consecutive nights. ## 2 Observations We observed eight blazars with the 5-meter Hale telescope on Mt. Palomar during the nights of 25 and 26 February 1997, using the Cassegrain Infrared Camera, a $`256\times 256`$-pixel InSb array with the J (1.25 $`\mu `$m), H (1.65 $`\mu `$m) and K<sub>s</sub> (2.15 $`\mu `$m) filters and a field-of-view of 32 arcsec. The reduction of the data was done under IRAF and included subtraction of the dark noise, flat-field corrections, and combination of images to remove bad pixels, cosmic rays, and the sky.
Then aperture photometry for each object was performed using nearby faint standards for calibration. The apparent magnitudes at Earth are summarized in Table 1. With the possible exception of PKS 1156+295, no intraday or day-to-day variability was observed during these two nights. Owing to the steadiness of the sources, it was possible to fit the data to a power law ($`f(E)=CE^{-\alpha }`$) by a $`\chi ^2`$ minimization. We give the spectral index $`\alpha `$ for each source in Table 1. Further discussion of the results can be found in Chapuis et al. (1998a,b). ###### Acknowledgements. Observations at the Palomar Observatory were made as part of a continuing collaborative agreement between Palomar Observatory and the Jet Propulsion Laboratory. The research described in this paper was carried out in part by the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
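For a handful of photometric points, the power-law fit described above reduces to linear least squares in log space. A sketch (ours; the band frequencies and flux values are illustrative, not the paper's data):

```python
import numpy as np

# Fit f = C * nu**(-alpha) to synthetic three-band fluxes.
nu = np.array([2.40e14, 1.82e14, 1.40e14])   # approx. J, H, Ks frequencies [Hz]
alpha_true, C_true = 1.3, 5.0e-8
flux = C_true * nu ** (-alpha_true)          # noise-free synthetic fluxes

# Equal-weight chi-squared minimisation of log f = log C - alpha * log nu
X = np.column_stack([np.ones_like(nu), -np.log10(nu)])
sol, *_ = np.linalg.lstsq(X, np.log10(flux), rcond=None)
logC, alpha_fit = sol
print(alpha_fit)   # recovers the input spectral index
```

With measurement errors one would weight each point by its inverse variance, but the structure of the fit is the same.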
# Rubidium in Metal-Deficient Disk and Halo Stars ## 1 INTRODUCTION In the solar neighborhood reside stars of differing metallicity. Stars with a metallicity of approximately the solar value belong overwhelmingly to the Galactic disk. Stars of lower metallicity are on orbits that identify them as members of the Galactic halo. The metallicity \[Fe/H\] $`\simeq -1`$ may be taken as a rough boundary between disk and halo stars. One tool for unravelling the evolution of the Galaxy is the measurement of the chemical compositions of stars as a function of metallicity. In this paper, we present the first extensive series of measurements of the rubidium abundance in disk and halo stars. Rubidium is potentially a special diagnostic of the neutron capture $`s`$-process. Rubidium is present in two isotopic forms: <sup>85</sup>Rb, which is stable, and <sup>87</sup>Rb, which, with a half-life of 5 $`\times 10^{10}`$ yr, may be deemed effectively stable from the astrophysical point of view. As we remark later, astronomical detection of Rb must rely on the Rb i lines, which do not permit measurement of the relative isotopic Rb abundances from stellar spectra. Hence, we discuss the elemental Rb abundance. Analysis of the solar system abundances of Rb and adjoining elements shows that the neutron capture $`s`$- and $`r`$-processes are about equally responsible for the synthesis of Rb. Scrutiny of the $`s`$-process abundances shows that the ‘main’ $`s`$-process, not the ‘weak’ $`s`$-process, is the principal source of Rb’s $`s`$-process component. The main $`s`$-process, which manufactured elements heavier than about Rb, is identified with the He-burning shell of intermediate- and low-mass AGB stars. The weak $`s`$-process is identified with He-core and possibly C-core burning of massive stars. Evolution of the Galaxy’s $`s`$- and $`r`$-process products is rather directly observed from the stellar abundances of elements that are predominantly attributable to either the $`s`$- or to the $`r`$-process.
Traditional tracers include Ba for the $`s`$-process and Eu for the $`r`$-process. Elucidation of the operation of the neutron capture processes requires observations of more than a single element per process. As an example, we note that the abundance ratio of a ‘light’ to a ‘heavy’ $`s`$-process element, say Zr to Ba, provides information on the integrated exposure of material to neutrons. Rubidium, with a roughly equal mix of $`s`$- and $`r`$-process contributions, and an unfavorable electronic structure for ready detection in stellar spectra, would seem to be an element of little interest. Closer inspection of the working of the $`s`$-process shows, however, that Rb offers a unique insight into the process: Rb’s role as a monitor of the neutron density at the $`s`$-process site. Along the $`s`$-process path, Rb is preceded by krypton, with the path entering Kr at <sup>80</sup>Kr and exiting at either <sup>84</sup>Kr or <sup>86</sup>Kr. Unstable <sup>85</sup>Kr controls the exit. At low neutron density at the $`s`$-process site, stable <sup>84</sup>Kr is converted by neutron capture to <sup>85</sup>Kr, which decays to stable <sup>85</sup>Rb, with the path continuing to <sup>86</sup>Sr. At high neutron densities, <sup>85</sup>Kr does not $`\beta `$-decay but experiences a neutron capture and is converted to stable <sup>86</sup>Kr. Subsequent neutron capture by <sup>86</sup>Kr leads by $`\beta `$-decay of <sup>87</sup>Kr to (effectively) stable <sup>87</sup>Rb. When a steady flow along the $`s`$-process path is attained, the density of a nuclide is given approximately by the condition that $`\sigma _iN_i`$ is constant, where $`\sigma _i`$ is the neutron capture cross-section of nuclide $`i`$ and $`N_i`$ is the abundance of that nuclide.
Since $`\sigma _{87}\simeq \sigma _{85}/10`$ for the Rb isotopes, the switch of the <sup>85</sup>Kr branch from its low neutron density routing through <sup>85</sup>Rb to its high neutron density routing through <sup>87</sup>Rb increases the total Rb abundance by about an order of magnitude relative to the abundance of other elements in this section of the $`s`$-process path, such as Sr, Y, and Zr. The isotopic mix of Rb is obviously altered as a function of neutron density, but this is not measurable for cool stars. (Krypton is undetectable spectroscopically in cool stars.) Operation of the <sup>85</sup>Kr branch is more complicated than sketched; for example, neutron capture from <sup>84</sup>Kr feeds not only the <sup>85</sup>Kr ground state but a short-lived isomeric state that at all reasonable neutron densities provides some leakage to <sup>85</sup>Rb. A thorough discussion of the <sup>85</sup>Kr branch is provided by Beer & Macklin (1989) and its use in determining the effective neutron density of the $`s`$-process in stars is discussed by Tomkin & Lambert (1983) and Lambert et al. (1995). When detailed abundance measurements are available, as in the case of the carbonaceous chondrites, several branches along the $`s`$-process path serve as neutron densitometers, but Rb is the only low neutron density branch available to stellar spectroscopists. (At higher neutron densities, a branch controlled by <sup>95</sup>Zr is exploitable in cool stars showing ZrO bands \[Lambert et al. 1995\].) A primary reason for the near neglect of Rb in reports on quantitative spectroscopy of stars is that it is a trace alkali element. The Rb atom’s low ionization potential (4.177 eV) ensures that Rb is primarily ionized, but the rare-gas electronic structure of Rb<sup>+</sup> provides resonance lines in the far ultraviolet.
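The order-of-magnitude Rb enhancement quoted above follows directly from the steady-flow condition that $`\sigma N`$ is constant. A toy calculation (ours; the cross sections are in arbitrary units, chosen only to satisfy $`\sigma _{87}\simeq \sigma _{85}/10`$):

```python
# Steady-flow (sigma*N ~ const) toy model of the 85Kr branching.
sigma85, sigma87 = 1.0, 0.1   # relative neutron-capture cross sections

def rb_abundance(f_beta):
    """Total Rb abundance when a fraction f_beta of 85Kr beta-decays.

    f_beta -> 1 at low neutron density (flow through 85Rb),
    f_beta -> 0 at high neutron density (flow through 87Rb);
    each branch contributes (flow fraction)/sigma.
    """
    return f_beta / sigma85 + (1.0 - f_beta) / sigma87

enhancement = rb_abundance(0.0) / rb_abundance(1.0)
print(enhancement)   # -> 10.0: high-density routing boosts Rb ~10x
```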
Detection of Rb via the Rb i resonance lines at 7800 and 7947 Å at the expected low Rb abundances is possible for cool dwarfs and giants, as our exploratory synthetic spectra indicated. Stars for observation at high spectral resolution were selected from Schuster & Nissen’s (1988) catalog of dwarfs, and from Pilachowski, Sneden, & Kraft’s (1996) list of giants. Emphasis was placed on metal-poor stars such that the metallicity range $`-2<`$ \[Fe/H\] $`<-0.5`$ is well represented but metallicities \[Fe/H\] $`>-0.5`$ are poorly represented. The following sections describe the observations, the method of analysis, the results, and present an interpretation of the Rb abundances relative to the abundances of other elements (Fe, Y, Zr, and Ba) obtained here. ## 2 OBSERVATIONS AND DATA REDUCTION The program stars are listed in Table 1. They comprise 32 G and K dwarfs and subgiants with metallicities of $`-1.8<`$ \[Fe/H\] $`<0.0`$ and 12 G and K giants with metallicities of $`-2.0<`$ \[Fe/H\] $`<-0.6`$. The observations were made at the McDonald Observatory with the 2.7-m telescope and 2dcoudé echelle spectrometer (Tull et al. 1995). All the program stars were observed at the F3 focus at a resolving power of R = 60,000 and, in addition, eight of the brighter stars were also observed at the F1 focus at a resolving power of 200,000. In order to minimise the influence of cosmic rays, two observations in succession, rather than one longer observation, were generally made of each star. Four different detectors were used for the observations with resolving power 60,000: a Texas Instruments CCD with 15 $`\mu `$m<sup>2</sup> pixels in an 800$`\times `$800 format, a Tektronix CCD with 27 $`\mu `$m<sup>2</sup> pixels in a 512$`\times `$512 format, the Goddard Advanced Imaging System CCD with 21 $`\mu `$m<sup>2</sup> pixels in a 2048$`\times `$2048 format, and a Tektronix CCD with 24 $`\mu `$m<sup>2</sup> pixels in a 2048$`\times `$2048 format.
The first two of these CCDs provided partial coverage of the wavelength interval ∼ 5500 – ∼ 8000 Å with large gaps between the end of one spectral order and the beginning of the next. The last two CCDs, which are much larger, provided nearly complete coverage from ∼ 4000 – ∼ 9000 Å; coverage was complete from the start of this interval to 5600 Å and substantial, but incomplete, from 5600 Å to the end of the interval. The Tektronix CCD with 24 $`\mu `$m<sup>2</sup> pixels in a 2048$`\times `$2048 format was used for the 200,000 resolving-power observations, which provided partial coverage of the region from ∼ 5500 – ∼ 8000 Å. The typical signal-to-noise ratio of the extracted one-dimensional spectra is between 100 and 300 at red and near-infrared wavelengths for the 60,000 resolving-power observations, while it is typically between 100 and 250 at the same wavelengths for the 200,000 resolving-power observations. The only accessible Rb lines in stellar spectra are the two Rb i resonance lines at 7800.3 and 7947.6 Å. Since these lines are typically weak we concentrated our attention on the stronger 7800.3 Å line and did not pursue the 7947.6 Å line, which is half as strong as the 7800.3 Å line. Figure 1 shows examples of the 7800.3 Å Rb i line in the program stars; as may be seen, the line is partially blended with a stronger Si i line at 7800.0 Å. The data were processed and wavelength calibrated in a conventional manner with the IRAF<sup>1</sup> package of programs on a SPARC5 workstation. (<sup>1</sup>IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract to the National Science Foundation.) The 7800.3 Å Rb i line was analyzed by spectrum synthesis because of the presence of the Si i line.
Equivalent widths were measured for lines of other elements used in the investigation; these were lines of Fe i and Fe ii and the available lines (Y ii, Zr i, Ba ii, and Nd ii) of other heavy elements besides Rb. Lines suitable for measurement were chosen to have clean profiles, as judged by inspection of the solar spectrum at high resolution and signal-to-noise ratio (Kurucz et al. 1984), and to be reliably measurable in all, or most of, the program stars. Moore, Minnaert, & Houtgast (1966) was our primary source of line identification. The equivalent width of each line was measured with the IRAF measurement option most suited to the situation of the line; usually this was the fitting of a single, or multiple, Gaussian profile to the line profile. These equivalent widths were measured from the spectra with 60,000 resolving power; the 200,000 resolving-power spectra were not used because their much more limited wavelength coverage excluded most lines of interest. Table 2 gives basic information for the Rb i line and the lines of the other elements. The equivalent widths of the lines are available at JT’s World Wide Web site (http://anchor.as.utexas.edu/tomkin.html). The spectrum of an asteroid (Iris), observed with the same instrumental setup as that used for the 60,000 resolving-power observations of the program stars and reduced and measured in the same manner, provided solar equivalent widths for these lines. ## 3 ANALYSIS An LTE model-atmosphere abundance analysis was made relative to the Sun. Here we briefly discuss the selection of atomic data for the lines, the model atmospheres, and the abundance determinations for Rb and the other heavy elements. ### 3.1 Line Data The lines used for the abundance determinations are given in Table 2. Our first choice of $`gf`$-values for the lines was modern laboratory $`gf`$-values; basic data for the lines, including the $`gf`$-values and their sources, are given in Table 2.
In particular we note that the $`gf`$-value of the 7800.3 Å Rb i line, log $`gf`$ = +0.13$`\pm `$0.04 (Wiese & Martin 1980), is reliably determined. For some lines, for which reliable $`gf`$-values are not available, we used solar $`gf`$-values instead. These were calculated with the solar equivalent widths given in Table 2, the solar atmosphere of Holweger & Müller (1974), a microturbulence of 1.15 km s<sup>-1</sup> (Tomkin et al. 1997), and the abundances given in Table 2. The solar equivalent widths were measured from the spectrum of the asteroid Iris. The adoption of the laboratory $`gf`$-values for the Fe i lines follows the prescription of Lambert et al. (1996). In particular, we make a small correction (see Table 2) to the $`gf`$-values of May, Richter, & Wichelmann (1974) to normalize them to those of O’Brian et al. (1991) and Bard, Kock, & Kock (1991). No correction is needed to put the solar Fe i line $`gf`$-values on the same scale as the laboratory $`gf`$-values; we find: log $`gf`$(solar) – log $`gf`$(lab) = +0.03$`\pm `$0.05 (4 lines, 6 $`gf`$-values). Accurate experimental $`gf`$-values are not available for our Fe ii lines; therefore we have used solar $`gf`$-values for these lines. Although the three Zr i lines all have modern laboratory $`gf`$-values (Biémont et al. 1981), we find their solar $`gf`$-values are significantly larger than the laboratory ones: log $`gf`$(solar) – log $`gf`$(lab) = +0.41$`\pm `$0.04. Our adopted solar Zr abundance — log $`ϵ`$(Zr) = 2.60 (Anders & Grevesse 1989) — is not a factor in this discrepancy because it is based on Biémont et al.’s $`gf`$-values and is very similar to the value — log $`ϵ`$(Zr) = 2.56 — Biémont et al. derived in their own investigation of the solar Zr abundance. Some of the discrepancy is attributable to the use of different ionization potentials for Zr i; we have used the accurate value of 6.634 eV (Hackett et al. 1986) while Biémont et al.
must have used the significantly higher old value of 6.84 eV (Allen 1973). This accounts for 0.20 dex of the discrepancy. Line-to-line variation of individual line abundances may also account for some of the discrepancy; the individual line abundances Biémont et al. derive for these three lines are on average 0.06 dex larger than the average Zr abundance they determine from the Zr i lines. Although the remaining 0.15 dex discrepancy is not readily accounted for, the manner in which these three lines strengthen in the cooler program stars leaves no doubt that they are low-excitation lines of a neutral species and is thus consistent with their identification as Zr i lines. In order to minimize the influence of whatever is causing the residual discrepancy between the solar and laboratory $`gf`$-values for these lines, we have chosen to adopt the solar $`gf`$-values for them in our analysis. Two Y ii lines (5200.42 and 5402.78 Å) were rejected when it became evident, during an initial abundance analysis of all the stars, that they gave significantly larger Y abundances than the other Y ii lines in many of the stars. Atomic data and solar equivalent widths for the lines are given in Table 2. ### 3.2 Model Atmospheres Plane-parallel, line-blanketed, flux-constant, LTE MARCS model atmospheres, which derive from a development of the programs of Gustafsson et al. (1975), were used for the abundance analysis. The determination of parameters (effective temperature, surface gravity, metallicity, and microturbulence) for the model atmospheres was done in two steps. First, we chose preliminary parameters for each star, which were used to calculate an initial set of model atmospheres. We then used the initial model atmospheres and the equivalent widths of the Fe i and Fe ii lines to iteratively adjust the atmospheric parameters to determine an adopted set of parameters and atmospheres that are consistent with the Fe i and Fe ii line data. We now briefly describe these two steps.
#### 3.2.1 Choice of Preliminary Atmospheric Parameters Strömgren photometry provided the primary means of determining preliminary parameters for the dwarfs and subgiants, which are mostly from the catalogue of Schuster & Nissen (1988) supplemented by three stars from Carney et al. (1994) and four stars from the Bright Star Catalogue (Hoffleit & Jaschek 1982). The stars are intrinsically faint and nearby; an examination of their parallaxes (see below) shows they are all within 100 pc, except for two stars which are at 111$`\pm `$17 and 137$`\pm `$18 pc. The interstellar reddening of the stars is therefore negligible (Schuster & Nissen 1989a), so we have not corrected their indices. Our chief source of preliminary effective temperatures for the dwarfs and subgiants was the color index $`b-y`$ coupled with the T<sub>eff</sub> vs. $`b-y`$ calibration of Alonso, Arribas, & Martínez-Roger (1996a, equation 9), who used the infrared flux method to determine effective temperatures for metal-deficient stars. For the components of the visual binary HD 23439, which do not have their own $`b-y`$, and for HD 150281, which also does not have $`b-y`$, we took the effective temperatures from Carney et al. (1994); for 61 Cyg A and B, which are too cool for the applicable range of Alonso et al.’s calibration, we took the effective temperatures from Alonso, Arribas, & Martínez-Roger (1996b). Initial surface gravities for the dwarfs and subgiants were determined by the relations $`g\propto M/R^2`$ and $`L\propto R^2\text{T}_{\text{eff}}^4`$ where $`M`$ is the stellar mass, $`R`$ the radius, and $`L`$ the luminosity, with the luminosities being set by the Hipparcos parallaxes. We followed the prescriptions of Nissen, Høg, & Schuster (1997), who have successfully applied this method to determine surface gravities for 54 metal-poor stars.
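The gravity recipe just described — $`g\propto M/R^2`$ and $`L\propto R^2\text{T}_{\text{eff}}^4`$, with $`L`$ set by the parallax — combines into log g = log g<sub>☉</sub> + log(M/M<sub>☉</sub>) + 4 log(T<sub>eff</sub>/T<sub>eff,☉</sub>) + 0.4(M<sub>bol</sub> − M<sub>bol,☉</sub>). A sketch (ours; the solar constants are standard reference values, not taken from the text, and the example inputs are hypothetical):

```python
import numpy as np

# Standard solar reference values (assumed here, not from the text)
LOGG_SUN, TEFF_SUN, MBOL_SUN = 4.44, 5777.0, 4.74

def logg_from_parallax(V, plx_mas, BC, teff, mass_msun):
    """Surface gravity from g ~ M/R^2 and L ~ R^2 Teff^4, parallax-based."""
    M_V = V + 5.0 + 5.0 * np.log10(plx_mas / 1000.0)  # absolute magnitude
    M_bol = M_V + BC                                  # apply bolometric correction
    return (LOGG_SUN + np.log10(mass_msun)
            + 4.0 * np.log10(teff / TEFF_SUN)
            + 0.4 * (M_bol - MBOL_SUN))

# Sanity check: the Sun placed at 10 pc (plx = 100 mas, V = 4.81, BC = -0.07)
# must return the solar gravity.
logg_sun = logg_from_parallax(4.81, 100.0, -0.07, 5777.0, 1.0)
print(logg_sun)   # ~ 4.44
```

Halving the parallax (doubling the distance) brightens M<sub>bol</sub> by 5 log 2 and so lowers log g by 2 log 2 ≈ 0.60 dex, which shows why accurate parallaxes matter for this method.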
Most of the stars have Hipparcos parallaxes and all of these parallaxes are of sufficient accuracy; the largest uncertainty ($`\sigma /\pi `$) is 0.15 with most uncertainties being much smaller than this. For the small number (five) of stars without Hipparcos parallaxes we took trigonometric parallaxes from Gliese & Jahreiss (1979) or determined photometric parallaxes from $`V`$ and $`M_V`$, with $`M_V`$ estimated from the $`uvby`$ photometry and the recipes of Nissen & Schuster (1991). Preliminary metallicities for the dwarfs and subgiants were estimated from the $`uvby\beta `$ photometry and the calibration of Schuster & Nissen (1989b, equation 3). For a small number (five) of stars the results of Carney et al. (1994) or Alonso et al. (1996b) were used instead. An initial microturbulence of 1.0 km s<sup>-1</sup>, which is representative of dwarfs in this temperature range (Feltzing & Gustafsson 1998), was used for the dwarfs and subgiants. For the giants, which are taken from Pilachowski et al.’s (1996) medium-resolution spectroscopic investigation of Na abundances in metal-poor giants, we adopted Pilachowski et al.’s atmospheric parameters as initial parameters. Pilachowski et al.’s effective temperatures and gravities are photometrically based as modified by their spectroscopic results, while their metallicities and microturbulences are from their spectroscopic analysis. #### 3.2.2 Determination of Adopted Atmospheric Parameters Model atmospheres were computed using the MARCS code (Gustafsson et al. 1975). Those for the dwarfs and subgiants were calculated by interpolation in a grid of MARCS models, which spanned the range of dwarf and subgiant parameters and was provided by B. Edvardsson. The models of Pilachowski et al. (1996), who also used MARCS models for their abundance analysis, were used as preliminary models for the giants and were provided by C. 
Sneden; iterations of the giant models with modified atmospheric parameters were calculated directly using the MARCS code. The models with the preliminary parameters, together with the line analysis code MOOG (Sneden 1973), were then applied to the equivalent width data for the Fe i and Fe ii lines. Trends of the Fe i abundances with line excitation potential were used to check the preliminary effective temperatures, and trends of the Fe i abundances with equivalent width were used to check the preliminary microturbulences. Where necessary, revised parameters were determined, new models calculated, and a new round of abundance calculations done for the Fe i and Fe ii lines. Next, the Fe abundances from this round of calculations were used to calculate a new set of models and do another round of abundance calculations for the Fe lines. In a final round of calculations the surface gravities were adjusted, and new models were calculated, so that the Fe i and Fe ii lines gave the same Fe abundance. These final adopted parameters for the program stars are given in Table 1. The adopted parameters are generally only moderately different from the initial parameters. For the dwarfs and subgiants the analysis of the Fe lines led to revised effective temperatures for 13 stars and an average temperature increase for these stars of 140$`\pm `$75 K ($`\sigma `$ of the individual differences). The revised gravities of the dwarfs and subgiants tend to be lower than the preliminary gravities, but the inconsistency is small; for the 28 dwarfs and subgiants with measurements of both Fe i and Fe ii lines, for which spectroscopic gravities can thus be determined, the average downward revision of log $`g`$ is 0.12$`\pm `$0.12 ($`\sigma `$ of the individual differences). This suggests that there is no serious inconsistency between the gravities of these stars and the four remaining dwarfs and subgiants for which we adopt preliminary gravities because they have no Fe ii lines.
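The excitation part of this iteration can be sketched as a loop that drives the slope of Fe i abundance against excitation potential to zero. Here an invented linear response stands in for the full MOOG line analysis, so all coefficients are illustrative only:

```python
# Toy response: if the adopted Teff is too low, low-excitation Fe i lines
# return abundances that are too high, giving a negative slope of abundance
# versus excitation potential (EP).  TEFF_TRUE and RESPONSE are invented.
TEFF_TRUE, RESPONSE = 5100.0, 4.0e-4    # dex per K per eV, hypothetical

def fe_abundances(teff_guess, eps):
    # Abundance each line would return for the guessed Teff (toy model).
    return [7.50 - RESPONSE * (TEFF_TRUE - teff_guess) * (ep - 2.5)
            for ep in eps]

def slope(xs, ys):
    # Least-squares slope of y against x.
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return num / sum((x - xbar) ** 2 for x in xs)

eps = [0.0, 1.0, 2.0, 3.0, 4.0]   # line excitation potentials (eV)
teff = 4800.0                     # deliberately poor starting guess
for _ in range(25):               # damped correction loop
    s = slope(eps, fe_abundances(teff, eps))
    if abs(s) < 1e-6:             # excitation equilibrium reached
        break
    teff -= 0.5 * s / RESPONSE    # negative slope -> raise Teff

print(round(teff))
```

An analogous loop on the slope of abundance against equivalent width adjusts the microturbulence.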
It is of interest to see how the differences that we find between our spectroscopic gravities and the preliminary Hipparcos-based gravities compare with what Allende Prieto et al. (1999) found in a thorough examination of spectroscopic gravities for nearby stars, taken from the literature, versus Hipparcos-based gravities. We confine the comparison of log $`g_{\mathrm{spec}}`$ – log $`g_{\mathrm{Hipp}}`$ for our results and theirs to the temperature range (4900 – 5500 K) of our dwarfs and subgiants. (This range excludes the four coolest dwarfs and subgiants because they do not have any measured Fe ii lines.) As mentioned earlier, we find an average difference log $`g_{\mathrm{spec}}`$ – log $`g_{\mathrm{Hipp}}`$ = –0.12$`\pm `$0.12 (28 stars, $`\sigma `$ of individual differences) for our dwarfs and subgiants, while Allende Prieto et al. find an average difference of –0.26$`\pm `$0.29 (9 stars) for their sample of stars. Our results and Allende Prieto et al.’s thus both show that the spectroscopic gravities tend to be smaller than the Hipparcos gravities. Also, the consistency between the spectroscopic and Hipparcos gravities of our stars is somewhat better than it is for Allende Prieto et al.’s sample of stars. (We have not determined Hipparcos-based gravities for the giants because, with the exception of Arcturus, they are much more remote than the dwarfs and subgiants; although they all have Hipparcos parallaxes, the errors in the parallaxes are comparable to the parallaxes for most of them.) Revisions of effective temperature were required for five of the giants; the average temperature increase for these stars was 100$`\pm `$140 K. All the giants have measurements of both Fe i and Fe ii lines, thus allowing revision of their preliminary gravities for all 12 stars; the average change of log $`g`$ was –0.04$`\pm `$0.37.
The adopted effective temperatures and gravities of the giants are thus in good agreement with the preliminary values (Pilachowski et al. 1996). Our results, which use higher resolution spectra and more numerous Fe lines than those of Pilachowski et al., confirm their results. Our Fe abundances are also in good agreement; the average difference between our \[Fe/H\] determinations and theirs is –0.07$`\pm `$0.10. We conclude this section with a brief discussion of the potential influence of non-LTE on the effective temperatures derived from the Fe i line excitation. As remarked earlier, the primary source of our initial effective temperatures (for the dwarfs and subgiants) was $`by`$ and the T<sub>eff</sub> vs. $`by`$ calibration of Alonso, Arribas, & Martínez-Roger (1996a), where the effective temperatures in their calibration were determined by the infrared flux method. The $`by`$-based effective temperatures are thus free of non-LTE effects. One way to estimate the possible influence of non-LTE on the effective temperatures derived from the excitation of Fe i lines, therefore, is to consider the difference between the excitation-based temperatures and the $`by`$-based temperatures. Of course non-LTE effects are not the only possible source of such a difference, so this check is indicative, rather than conclusive. The average difference T<sub>eff</sub> (Fe i) – T<sub>eff</sub> ($`by`$) = +45$`\pm `$68 K ($`\sigma `$ of the individual differences, 20 stars), where the calculation includes not only stars whose initial effective temperatures were revised, but also stars for which no revision was necessary, as long as they had enough Fe i lines to define the line excitation. This difference is small and indicates that any non-LTE influence on the determination of temperatures from the Fe i lines is minor.
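This consistency check amounts to a mean-and-scatter computation over the stars with both temperature estimates. A minimal sketch with hypothetical per-star differences (the actual sample of 20 stars gives +45$`\pm `$68 K):

```python
import statistics

# Hypothetical per-star differences Teff(Fe i) - Teff(by) in K, for
# illustration only; they are not the measured values from this study.
diffs = [120, -30, 60, 0, 90, -40, 75, 20, -10, 55]

mean = statistics.mean(diffs)
sigma = statistics.stdev(diffs)   # sigma of the individual differences
print(round(mean, 1), round(sigma, 1))
```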
### 3.3 Abundance Determinations for Rb and the Other Heavy Elements Abundances were determined by matching the observed line strengths and theoretical line strengths calculated by MOOG with the adopted model atmospheres. As remarked earlier, the Rb i line was treated by means of spectrum synthesis, while the lines (Y ii, Zr i, Ba ii, and Nd ii) of the other heavy elements were treated by means of equivalent widths. The spectrum synthesis of the 7800.29 Å Rb i line includes the hyperfine structure of the <sup>85</sup>Rb and <sup>87</sup>Rb isotopes, each of which is split into two components, and the blending Si i line at 7800.00 Å. The accurately known wavelengths and relative line strengths of the hyperfine structure components were taken from Lambert & Luck’s (1976) analysis of Rb in Arcturus. We adopted a terrestrial abundance ratio (<sup>85</sup>Rb/<sup>87</sup>Rb = 3) for the Rb isotopes. Although in principle it would be desirable to make direct measurements of the stellar isotopic Rb abundances from the exact shape of the Rb i line profile, in practice extreme departures from the terrestrial isotope ratio are required before there is appreciable distortion of the line profile. Lambert & Luck, for example, found that for their spectra, which had a resolving power of 195,000, similar to that of the high resolution spectra of the present investigation, the <sup>85</sup>Rb/<sup>87</sup>Rb ratio had to be as low as 1 or as high as 10 to cause even a small variation of the line profile; they concluded that the isotope ratio in Arcturus is terrestrial with a large uncertainty. Direct determination of Rb isotopic abundances, therefore, is not the thrust of our investigation, although we note that comparison of the observed and synthesised spectra of the Rb line in our program stars does not show any variations attributable to non-terrestrial isotope ratios.
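The isotope-dependent part of such a synthesis reduces to distributing the total line strength over the hyperfine components in proportion to the isotope fractions. A schematic sketch (the wavelength offsets, relative strengths, and total gf below are placeholders, not the Lambert & Luck (1976) line list):

```python
# Build gf-weighted hyperfine components of the Rb i 7800.29 A line for a
# chosen 85Rb/87Rb ratio.  All atomic inputs here are schematic placeholders.
LOG_GF_TOTAL = 0.13    # assumed total line strength, illustrative only

# Each isotope is split into two hfs components: (offset in A, rel. strength).
HFS = {
    "Rb85": [(-0.010, 0.4375), (+0.006, 0.5625)],
    "Rb87": [(-0.025, 0.4375), (+0.015, 0.5625)],
}

def component_list(ratio_85_87=3.0):
    """Return (wavelength, gf) pairs for the given isotope ratio."""
    frac = {"Rb85": ratio_85_87 / (1.0 + ratio_85_87),
            "Rb87": 1.0 / (1.0 + ratio_85_87)}
    gf_total = 10.0 ** LOG_GF_TOTAL
    return [(7800.29 + offset, gf_total * frac[iso] * strength)
            for iso, parts in HFS.items() for offset, strength in parts]

comps = component_list(3.0)             # terrestrial ratio adopted in the text
total_gf = sum(gf for _, gf in comps)   # component gf's recover the total gf
print(len(comps), round(total_gf, 4))
```

Changing the ratio redistributes strength between the two isotopes’ components while leaving the summed gf, and hence the weak-line elemental abundance, unchanged.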
We also note that the indeterminacy of the Rb isotope ratios does not interfere with the measurement of the elemental Rb abundances; the Rb i line is weak in all the program stars, so the Rb abundances it provides are not affected by changes of the isotopic mixture. The synthesis of the Rb i line also includes the Si i line to the blue. No reliable experimental oscillator strength is available for the Si i line, so a solar oscillator strength (log $`gf`$ = –0.65) was adopted. The instrumental and macroturbulent broadening, as well as thermal and microturbulent broadening, were included in the synthesis. The macroturbulent broadening was set by matching the profile of the clean nearby Ni i line at 7797.6 Å. Spectrum synthesis of the solar Rb i line, using the solar model of Holweger & Müller (1974) and the Kurucz et al. (1984) solar atlas, provides a solar Rb abundance log $`ϵ`$(Rb) = 2.60$`\pm `$0.07, which is the same as the Rb abundance in Anders & Grevesse’s (1989) compilation of solar abundances. We note, however, the discrepancy between the photospheric abundance and the somewhat lower meteoritic abundance of 2.40$`\pm `$0.03 (Anders & Grevesse 1989). Although it might be speculated that the relatively low melting and boiling points of Rb (39 and 688 °C, respectively) may make it behave like a volatile element and so explain the low meteoritic abundance, this does not appear to be the case. Potassium, which is isoelectronic with Rb and has only slightly higher melting and boiling points (63 and 759 °C, respectively), shows no discrepancy of its photospheric and meteoritic abundances, which are log $`ϵ`$(K) = 5.12$`\pm `$0.13 and 5.13$`\pm `$0.03, respectively.
Although the discrepancy is a potential source of concern, we note that there is no evidence of the Rb i line being affected by an unknown blend; in particular, in our program stars the strengthening of the line with decreasing effective temperature is consistent with the behavior of a resonance line of a heavy-element neutral species. The observed and synthesised Rb i line profiles for a sample of stars are shown in Figure Rubidium in Metal-Deficient Disk and Halo Stars. Table 3 gives the abundances of Rb and the other heavy elements. The Rb abundances derived from observations made at the F1 focus (resolving power = 200,000) and the F3 focus (resolving power = 60,000) are highly consistent; for stars observed at both foci the average difference between the F1- and F3-based \[Rb/H\] is –0.03$`\pm `$0.01 (s.e., five stars). For bright stars, such as Arcturus and $`\mu `$ Cas, the greater detail provided by the higher resolution F1 observations allows for more precise determination of the Rb abundance. In fainter stars there is not much to choose between the F1 and F3 observations because the greater spectral detail of the F1 observations tends to be counterbalanced by their lower signal-to-noise ratio. The two main sources of errors in the abundances are measurement error and analysis error caused by errors in the adopted model atmosphere parameters. The scatter of the abundances provided by individual lines of the same species, which is caused by measurement errors of the equivalent widths and, to a lesser extent, by errors in the line oscillator strengths, is a good guide to measurement error. This scatter, as measured by the standard deviation of the individual line abundances, is given in Table 3.
(Although the standard deviations of the abundances from individual lines are larger than the standard deviations of the mean abundances, the heavy element abundances are based on only a few lines for each element — see Table 2 — so we prefer to consider the standard deviations of the individual line abundances.) For Rb, whose abundance is based on spectrum synthesis of the 7800 Å Rb i line, Table 3 gives errors estimated from the fit of the observed and synthesised spectra. Inspection of Table 3 shows that the measurement-related abundance errors range up to $`\pm `$0.15 dex, with larger errors in a few cases; a representative error in the \[X/H\] abundances is $`\pm `$0.07 dex, while a representative error in the \[X/Fe\] abundances is $`\pm `$0.1 dex. Estimated errors in the adopted effective temperatures are between $`\pm `$50 K, for stars with a good selection of Fe i lines providing a well-determined excitation temperature, and $`\pm `$100 K, for stars for which we adopted color-based effective temperatures. Representative errors in the adopted log $`g`$ and metallicities are $`\pm `$0.2 and $`\pm `$0.1 dex, respectively. A representative uncertainty in the microturbulence is $`\pm `$0.5 km s<sup>-1</sup>, although we note that the Rb abundances provided by the weak Rb i line have little or no microturbulence dependence and that, because of the metal deficiency of most of the program stars, the abundances of most other elements also have only a small or negligible microturbulence dependence. Adopting a representative effective temperature error of $`\pm `$100 K and the stated errors of the other parameters, we find that for a typical dwarf the combined effects of these errors change the Fe abundance (from Fe i lines) by $`\pm `$0.11 dex. The corresponding figure for a typical giant is $`\pm `$0.17 dex. For the heavy elements, the abundance of the element relative to Fe, \[El/Fe\], holds the most interest.
This ratio is less dependent on the atmospheric parameters than the absolute abundance. In the typical dwarf the combined effects of the errors in the atmospheric parameters change \[El/Fe\] by $`\pm `$0.04 (Rb), $`\pm `$0.05 (Y), $`\pm `$0.05 (Zr), $`\pm `$0.11 (Ba), and $`\pm `$0.06 dex (Nd). The corresponding figures for a typical giant are: $`\pm `$0.07 (Rb), $`\pm `$0.06 (Y), $`\pm `$0.00 (Zr), $`\pm `$0.14 (Ba), and $`\pm `$0.07 dex (Nd). We estimate representative total errors, caused by measurement error and errors in the model atmosphere parameters together, to be 0.1 – 0.2 dex in \[Fe/H\] and 0.1 – 0.2 dex for \[El/Fe\]. We now briefly consider how the Fe and Rb abundances of the dwarfs and subgiants would change if the preliminary effective temperatures, which are mostly based on the infrared flux method (Alonso et al. 1996a), and preliminary gravities, which are mostly Hipparcos-based, were used instead of the adopted effective temperatures and gravities. As discussed earlier, the preliminary effective temperatures were revised upward by an average of 140$`\pm `$75 K ($`\sigma `$ of the individual differences) for 13 of the dwarfs and subgiants, while no revisions of the preliminary effective temperatures were made for the other 19 dwarfs and subgiants. Use of the lower preliminary effective temperatures for these 13 stars would decrease their Fe abundances (from Fe i lines) by an average of 0.14 dex, while their \[Rb/Fe\], as set by Rb i and Fe i lines, would increase by an average of 0.04 dex. The adopted gravities for 28 of the dwarfs and subgiants are spectroscopically determined, while those of the other four dwarfs and subgiants, for which we could not determine spectroscopic gravities, are the preliminary gravities. 
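Parameter-driven errors of this kind combine in quadrature. The per-parameter sensitivities below are invented stand-ins, chosen only to illustrate the arithmetic behind a combined value near the $`\pm `$0.11 dex quoted for a typical dwarf:

```python
import math

# Illustrative abundance responses (dex) to the stated parameter errors
# (+/-100 K, +/-0.2 dex in log g, +/-0.1 dex in [M/H], +/-0.5 km/s in xi).
# The individual numbers are assumed; only the combined ~0.11 dex for a
# typical dwarf comes from the text.
sensitivities = {"Teff": 0.08, "log g": 0.02, "[M/H]": 0.02, "xi": 0.07}

# Quadrature sum of the individual contributions.
total = math.sqrt(sum(d ** 2 for d in sensitivities.values()))
print(round(total, 2))
```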
Use of the preliminary gravities, instead of the adopted spectroscopic gravities, for these 28 stars would not change their Fe or Rb abundances significantly; the adoption of the preliminary log $`g`$, which are 0.12$`\pm `$0.12 dex ($`\sigma `$ of the individual differences) higher on average than the spectroscopic log $`g`$, in place of the spectroscopic log $`g`$ would change the \[Fe/H\] and \[Rb/Fe\] from neutral lines by +0.01 and 0.00 dex, respectively, on average. ## 4 RESULTS ### 4.1 Comparison with the Literature Before we consider the Rb and other heavy-element abundances, we compare our results with those published in the literature. First we consider the results for \[Fe/H\]. Figure Rubidium in Metal-Deficient Disk and Halo Stars compares the \[Fe/H\] determinations for stars in common between this study and earlier high signal-to-noise ratio, high resolution studies. The comparison is not exhaustive, but does include all recent studies (since 1990) which have two or more stars in common with the present study. The agreement of the \[Fe/H\] determinations is good over most of the metallicity range and, although the \[Fe/H\] of this study tend to be slightly more negative than the literature \[Fe/H\] in the most metal-deficient stars, the overall agreement is not unsatisfactory. Previous studies which, to our knowledge, have determined Rb abundances for stars in common with those of the present investigation are Mäckle et al.’s (1975a) study of Arcturus and Gratton & Sneden’s (1994) study of heavy-element abundances in metal-poor stars. In Table 4 we compare our \[Rb/Fe\] with those of the two earlier investigations. Because our Rb abundances are based on the 7800 Å Rb i line, while Mäckle et al.’s (1975a) abundance is based on both the 7800 and 7947 Å lines, we also include in the Table their abundance for the 7800 Å Rb i line alone (Mäckle et al. 1975b). (Gratton & Sneden’s results are based only on the 7800 Å line.)
In order to make the Rb abundances of our and the earlier studies directly comparable, we have adjusted the Rb abundances of the earlier studies to reflect the values they would have if the stellar parameters (T<sub>eff</sub>, log $`g`$, \[M/H\], and $`\xi `$) used in the earlier studies had been the same as those used here. For Arcturus, the difference between our \[Rb/Fe\] and Mäckle et al.’s is only –0.03 dex — pleasingly small and not unexpected for the case of such a bright star. For the three stars that we have in common with Gratton & Sneden, we note that both investigations determined Rb abundances for two of the stars (HD 64606 and 187111), but were only able to determine upper limits for the third star (HD 122956). The differences between our \[Rb/Fe\] and Gratton & Sneden’s are –0.25 (HD 64606), –0.09 (HD 122956), and +0.17 (HD 187111). That these differences are much larger than in the case of Arcturus can be ascribed to the fact that all three stars are quite metal-deficient and have only a weak or undetectable Rb line. We estimate that for these three stars the uncertainty in the Rb abundance associated with fitting our observed and synthetic spectra of the Rb i line is 0.1 dex for HD 64606 and 187111 and 0.2 dex for HD 122956; Gratton & Sneden’s Rb abundances must be subject to similar uncertainties also. We conclude, therefore, that our \[Rb/Fe\] and Gratton & Sneden’s are probably the same to within the errors of measurement for these three stars. ### 4.2 The Abundances of Rb, Y, Zr, Ba, and Nd As is customary, the abundances in Table 3 are plotted as \[el/Fe\] against \[Fe/H\] in Fig. Rubidium in Metal-Deficient Disk and Halo Stars, Rubidium in Metal-Deficient Disk and Halo Stars, and Rubidium in Metal-Deficient Disk and Halo Stars to reveal trends in the relative abundance of element el and iron.
Two points are immediately apparent: (i) three stars are unusually rich in the heavy elements — note especially the \[Y/Fe\] ratios of HD 23439A and B, and BD +5<sup>o</sup> 3640, which we shall dub CH stars, and (ii) the distinctive behavior of \[Rb/Fe\] in the metal-poor stars — \[Rb/Fe\] $`>0`$ when the other heavy elements show \[el/Fe\] $`\lesssim 0`$. Before commenting on these striking results, we compare our results for the Y, Zr, Ba, and Nd abundances with results in the literature. Previous extensive abundance determinations of heavy (and other) elements in metal-poor stars have shown that the run of \[el/Fe\] against \[Fe/H\] is smooth down to about \[Fe/H\] = –2 with a ‘cosmic’ scatter less than the scatter that results from the errors of measurement. Cosmic scatter is present for more metal-poor stars, but our sample is devoid of such stars. Therefore, our results are expected to agree well with previous studies despite the lack of a complete overlap in stellar samples. Key papers reporting results on heavy elements are Zhao & Magain (1991) and Gratton & Sneden (1994) with reviews by Wheeler, Sneden, & Truran (1989), Lambert (1989), and McWilliam (1997) amongst others. Our results for \[Y/Fe\], \[Ba/Fe\], and \[Nd/Fe\] are in excellent agreement with previous results; for example, Zhao & Magain (1991) and Gratton & Sneden (1994) find \[Y/Fe\] $`\simeq -0.1`$ at \[Fe/H\] = –1 with the relative underabundance of Y increasing to about 0.25 at \[Fe/H\] = –2, which agrees well with Figure Rubidium in Metal-Deficient Disk and Halo Stars. A discrepancy appears when comparing results for \[Zr/Fe\]. Zhao & Magain (1991) and Gratton & Sneden (1994) report \[Zr/Fe\] $`\simeq +0.2`$ for \[Fe/H\] in the range of –1 to –2, but our results (Figure Rubidium in Metal-Deficient Disk and Halo Stars) show \[Zr/Fe\] to be consistently less than zero: a difference in \[Zr/Fe\] of about 0.3 to 0.4 dex relative to the previous studies.
This difference is most probably due to our exclusive use of Zr i lines. Brown, Tomkin, & Lambert (1983) found that Zr i lines in mildly metal-poor giants gave a clear Zr underabundance, which was plausibly attributed to non-LTE effects such as over-ionization of Zr atoms to Zr<sup>+</sup> ions. For metal-poor stars, Gratton & Sneden remark that their selection of Zr i lines gives a systematically lower Zr abundance than the Zr ii lines: the difference of –0.16$`\pm `$0.05 dex would account in part for our largely negative values of \[Zr/Fe\]. Our results clearly show a relative overabundance of Rb in metal-poor stars: the mean value \[Rb/Fe\] = +0.23$`\pm `$0.02 (s.e.) is found from nine stars with \[Fe/H\] $`<-1`$, excluding the three CH stars. This is consistent with the four measurements reported by Gratton & Sneden (1994) and with their six upper limits to \[Rb/Fe\]. Non-LTE effects such as overionization warrant consideration. As a guide to the non-LTE effects on Rb, we consider those calculated for lithium, another alkali. Carlsson et al. (1994) predict that the LTE abundances from the Li i 6707 Å resonance doublet require correction by not more than 0.03 dex for non-LTE effects, a negligible correction in the present and almost all contexts. The abundances of Li and Rb are similar: lithium has the abundance log $`ϵ`$(Li) $`\simeq 2.2`$ in the warmer dwarf stars comprising the Spite plateau, and Rb declines from log $`ϵ`$(Rb) = 2.6 at solar metallicity to 1.9 at \[Fe/H\] = –1 and to 0.8 at \[Fe/H\] = –2. The key point is that optical depth effects in lines and continua are slight for both elements. Rb is probably more affected by photoionization because the atom’s ionization potential is 4.18 eV versus 5.39 eV for the lithium atom. Photoionization of Rb will be enhanced relative to the rate for Li, but collisional ionization rates will also be enhanced.
The different wavelengths of the resonance (and excited) atomic lines for Rb and Li will introduce no more than slight differences in the non-LTE corrections. If non-LTE effects were large for Rb, we would anticipate that dwarfs and giants of the same metallicity would yield systematically different Rb abundances. This is not the case: four dwarfs with \[Fe/H\] in the range –1 to –2 give a mean \[Rb/Fe\] = +0.28$`\pm `$0.04 (s.e.) and five giants in the same \[Fe/H\] range give the mean \[Rb/Fe\] = +0.18$`\pm `$0.02 (s.e.). (The mean for the dwarfs excludes the three CH stars.) Although a non-LTE analysis for Rb would be of interest, we suggest that our results derived from LTE analyses are not substantially different from non-LTE results. Scatter of the \[el/Fe\] results at a given \[Fe/H\] is not significantly different from that expected from the measurement errors. Obviously, the three stars over-abundant in the heavy elements and dubbed CH stars are set aside as special cases. The scatter for \[Rb/Fe\] between \[Fe/H\] of –0.5 and –1.0 is small and consistent with the measurement errors. There is an apparent moderate increase in scatter of \[Rb/Fe\] below \[Fe/H\] = –1, but this is probably again due to the measurement errors because the Rb i line is very weak in these metal-poor stars. The results are roughly consistent with a constant \[Rb/Fe\] in stars with \[Fe/H\] $`<-1`$. ### 4.3 The New CH Stars HD 23439A and B and BD +5<sup>o</sup> 3640 These three stars, which are all dwarfs, are consistently overabundant in all of the five heavy elements investigated in this study. Figure Rubidium in Metal-Deficient Disk and Halo Stars shows a Zr i line and a V i line in HD 23439A and B and HD 103095, a non-CH star with otherwise similar properties. The much greater strength of the Zr i line relative to the V i line in HD 23439A and B as compared with HD 103095 is evident. As may be seen in Fig.
Rubidium in Metal-Deficient Disk and Halo Stars, Rubidium in Metal-Deficient Disk and Halo Stars, and Rubidium in Metal-Deficient Disk and Halo Stars the heavy-element abundance enhancements are similar in the three stars. The average enhancements for the three stars are: \[Rb/Fe\] = +0.41, \[Y/Fe\] = +0.34, \[Zr/Fe\] = +0.51, \[Ba/Fe\] = +0.27, and \[Nd/Fe\] = +0.32 (HD 23439A and BD +5<sup>o</sup> 3640); actual values of \[X/Fe\] for the individual stars are given in Table 5. HD 23439 is a nearby visual binary composed of a K1V primary and a K2V secondary; the Hipparcos Catalogue gives 7.<sup>′′</sup>307 and 40.83 mas for the separation of its components and parallax, respectively. These numbers set a lower limit on the A–B linear separation of 179 AU. To the best of our knowledge, HD 23439 is the first case of a binary in which both components have been found to be CH stars. We note that HD 23439B is a single-lined spectroscopic binary with a period of 48.7 d and a mass function of 0.0022 (Latham et al. 1988). Could the unseen companion of HD 23439B be the white dwarf descendant of the AGB star responsible for the mass transfer that changed HD 23439A and B into CH stars? Perhaps, but in this scheme it is hard to explain the very similar heavy-element enhancements of HD 23439A and B (see Table 5). Just as today the unseen companion is much closer to component B than to component A, so in the past the putative AGB predecessor of the unseen companion must also have been much closer to B than A. How did the AGB star manage to give the A and B components the same heavy-element enhancements? This difficulty with the AGB-star scenario suggests that the heavy-element enhancements must be primordial. ## 5 RUBIDIUM AND STELLAR NUCLEOSYNTHESIS Heavy elements are synthesised by the neutron capture $`s`$\- and $`r`$-processes. (In considering elemental abundances, the small contribution from $`p`$-processes may be neglected.)
Detailed dissection of the isotopic abundances measured for carbonaceous chondrites has provided an isotope-by-isotope resolution of the abundances into $`s`$\- and $`r`$-process contributions (cf. Käppeler, Beer, & Wisshak 1989). As is well known, Ba and Eu are primarily $`s`$\- and $`r`$-process products, respectively: Cowan (1998) estimates that Ba is 85% an $`s`$-process product, and Eu is 97% an $`r`$-process product — see Gratton & Sneden (1994) for similar estimates. Rubidium is of mixed parentage; Cowan gives the $`s`$\- and $`r`$-process fractions as 50% each — Gratton & Sneden provide quite similar estimates (48% for $`s`$\- and 52% for the $`r`$-process). As noted in the Introduction, the $`s`$-process contribution may be broken into a ‘weak’ and a ‘main’ component. Gratton & Sneden divide the 48% $`s`$-process Rb contribution into 5% from the weak and 43% from the main $`s`$-process. Our goal is to use the Ba and Eu abundances as monitors of the $`s`$\- and $`r`$-processes, respectively, to predict the Rb abundances, and then to comment on the consistency between the predicted and observed Rb abundances. The other heavy elements considered here are also a mix of $`s`$\- and $`r`$-processes: Cowan gives the following $`(s,r)`$ %: (72, 28) for Y, (81, 19) for Zr, and (47, 53) for Nd. Gratton & Sneden (1994) put the weak contribution to the total $`s`$-process as 16% for Y, 10% for Zr, and less than 1% for Ba, Nd, and Eu. Especially interesting is the roughly 50-50 split for Nd that matches the split for Rb. Then, the simplest possible scenario of unvarying yields of $`s`$\- and $`r`$-process products over the life of the Galaxy would predict that \[Rb/Fe\] and \[Nd/Fe\] would vary identically with \[Fe/H\]. Inspection of Fig. Rubidium in Metal-Deficient Disk and Halo Stars and Rubidium in Metal-Deficient Disk and Halo Stars shows that this is not the case.
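Using Ba and Eu as monitors of the $`s`$\- and $`r`$-processes amounts to a two-component mixing formula, \[el/Fe\] = log($`f_s`$ 10<sup>\[s/Fe\]</sup> + $`f_r`$ 10<sup>\[r/Fe\]</sup>), where ($`f_s`$, $`f_r`$) are the solar-system fractions. A minimal sketch with Cowan’s fractions; the \[s/Fe\] and \[r/Fe\] inputs here are illustrative placeholders, whereas the paper’s adopted curves use the observed Ba and Eu runs:

```python
import math

# Solar-system s- and r-process fractions quoted from Cowan (1998).
FRACTIONS = {"Rb": (0.50, 0.50), "Y": (0.72, 0.28),
             "Zr": (0.81, 0.19), "Ba": (0.85, 0.15), "Nd": (0.47, 0.53)}

def el_fe(element, s_fe, r_fe):
    """[el/Fe] from mixing the s and r components, each scaled from solar."""
    f_s, f_r = FRACTIONS[element]
    return math.log10(f_s * 10.0 ** s_fe + f_r * 10.0 ** r_fe)

# Illustrative metal-poor inputs: r-process enhanced ([r/Fe] = +0.3, as for
# Eu near [Fe/H] = -1) and s-process tracking Fe ([s/Fe] = 0).
print(round(el_fe("Nd", 0.0, 0.3), 2))   # Nd sits between the two components
```

At solar composition both inputs are zero and the formula returns \[el/Fe\] = 0 for every element, as it must, since the fractions sum to one.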
There are several factors pertinent to the understanding of the run of Rb and other heavy elements with \[Fe/H\]. * It is now well known that the distribution of heavy elements at low \[Fe/H\] resembles an $`r`$-process pattern and is not the mix of $`s`$\- and $`r`$-processes that prevails at solar metallicities (Truran 1981; Sneden & Parthasarathy 1983; Wheeler et al. 1989; Lambert 1989). There is evidence that the abundance distribution of the $`r`$-process was largely invariant from low metallicities to the present \[Fe/H\] $`\simeq 0`$. Then, it should suffice in modelling the \[el/Fe\] vs \[Fe/H\] relations to adopt the relative $`r`$-process abundances that are obtained from dissection of the measurements on carbonaceous chondrites. * Europium is assigned to the $`r`$-process: Cowan’s (1998) resolution of the meteoritic abundances is 97% $`r`$-process and a mere 3% $`s`$-process. With declining metallicity the $`s`$/$`r`$ ratio declines. Therefore, we may assume that Eu is an $`r`$-process product throughout the evolution of the Galaxy. The run of \[Eu/Fe\] against \[Fe/H\] is taken from McWilliam’s (1997) review: \[Eu/Fe\] = 0 at \[Fe/H\] = 0 with a smooth transition to \[Eu/Fe\] $`\simeq 0.3`$ at \[Fe/H\] = –1.0 and to the metallicity limit \[Fe/H\] = –2.5 of interest to us. * Red giants enriched in $`s`$-process heavy elements are likely the major donors of these elements to the Galaxy’s interstellar medium. Analyses of such red giants of differing \[Fe/H\] show that the pattern of $`s`$-process products has evolved with \[Fe/H\]. Smith (1997), who has collated published results, defines Y and Zr as ‘light’ $`s`$-process elements (here, ls) and Ba, La, and Ce as ‘heavy’ $`s`$-process elements (here, hs). He finds that \[hs/ls\], which by definition is 0 at \[Fe/H\] = 0, increases to \[hs/ls\] $`\simeq `$ 0.6 at \[Fe/H\] = –1.5.
This evolution of \[hs/ls\] is attributed to an increase in the average exposure to neutrons in the He-burning shell of the AGB stars that are the site of the main $`s`$-process, as expected on theoretical grounds. * There is limited evidence also from abundance analyses of red giants that the Rb abundance relative to other ls elements increases with decreasing \[Fe/H\]; Smith’s (1997) collection of results implies \[Rb/Zr\] increases by about 0.7 dex from \[Fe/H\] = 0 to –1.5. This increase is attributed to a higher mean neutron density in the He-burning shell of the metal-poor AGB stars. As noted by Smith, this increase is expected on theoretical grounds. * If the magnitude of the weak $`s`$-process contributions to the abundances in carbonaceous chondrites were representative of the $`s`$-process at all relevant \[Fe/H\], the weak $`s`$-process could be safely dropped from our search for an explanation of the run of \[el/Fe\]. A thorough direct check on the weak $`s`$-process is not possible because the majority of the elements between Rb and the Fe-group are inaccessible spectroscopically. Zinc, which offers at least a hint of the behavior of the weak $`s`$-process, is assigned 34% to the $`s`$-process and 66% to the $`r`$-process in Cowan’s breakdown of solar system abundances. The $`s`$-process component is essentially entirely due to the weak $`s`$-process. This breakdown neglects a possible contribution to Zn from the sources that contribute the Fe-group elements. In the metallicity range of interest, \[Zn/Fe\] = 0.0$`\pm `$0.15 (Sneden & Crocker 1988). This result, which is barely compatible with the increase of \[Eu/Fe\] with declining metallicity, implies a drop in \[$`s`$/Fe\] with metallicity and justifies our neglect of the weak $`s`$-process contribution to Rb and other elements. Guided by these facts, it is possible to predict relative abundances of the heavy elements including Rb. We begin by considering Ba, Nd, and Eu. 
Eu defines the evolution of the $`r`$-process products. The observed run of \[Ba/Fe\] against \[Fe/H\] provides the evolution of the hs component of the $`s`$-process, after a small correction for this element’s $`r`$-process component based on the Eu abundances and the meteoritic $`r`$-process Ba/Eu ratio. The adopted runs of \[Eu/Fe\] and \[Ba/Fe\] against \[Fe/H\] are shown in panel a of our summary figure. It is then a simple matter to predict the run of \[Nd/Fe\] using the meteoritic 50–50 split into $`s`$\- and $`r`$-process contributions. This prediction, also shown in panel a, is slightly inconsistent with the observations (see the observational figure), which show \[Nd/Fe\] $`\simeq `$ 0 at all metallicities rather than the predicted \[Nd/Fe\] = 0.14 at \[Fe/H\] $`<`$ –1. Earlier, we noted that our Nd abundances are quite consistent with previously published results. This small inconsistency appears not to have been noted previously. It is likely that when the measurement errors are included the prediction and observations will overlap. Note that the prediction uses the Eu and Ba abundances as well as the meteoritic $`s`$ to $`r`$ ratios for Ba, Nd, and Eu. A change in the Nd ratio from 50% $`s`$ and 50% $`r`$ to 75% $`s`$ and 25% $`r`$ reduces \[Nd/Fe\] to 0.0 for metal-poor stars. Yttrium abundances may be predicted from the \[Ba/Fe\] observations and Smith’s estimates of \[hs/ls\] from heavy-element enriched red giants such as the S and Barium stars. This prediction is shown in panel b. The relative underabundance of Y (i.e., \[Y/Fe\] $`<0`$) results largely from the steep increase in \[hs/ls\] with decreasing metallicity, which offsets the increase in the $`r`$-process contribution. 
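The arithmetic behind the Nd prediction above can be sketched in a few lines. The numbers below for the Ba split (81% $`s`$, 19% $`r`$) are commonly quoted solar-system values and are an assumption here, as is taking \[Ba/Fe\] = 0 and \[Eu/Fe\] = 0.3 for the metal-poor regime; only the Nd split is varied as in the text.

```python
import math

def nd_prediction(f_s_nd, f_r_nd,
                  ba_fe=0.0, eu_fe=0.3,       # assumed metal-poor values of the adopted runs
                  f_s_ba=0.81, f_r_ba=0.19):  # assumed solar-system Ba s/r split
    """Predict [Nd/Fe] from [Ba/Fe], [Eu/Fe] and meteoritic s/r fractions."""
    R = 10.0**eu_fe                            # r-process enhancement, traced by (pure-r) Eu
    # solve 10**[Ba/Fe] = f_s_ba*S + f_r_ba*R for the s-process evolution factor S
    S = (10.0**ba_fe - f_r_ba*R) / f_s_ba
    return math.log10(f_s_nd*S + f_r_nd*R)

print(round(nd_prediction(0.50, 0.50), 2))   # 50/50 split -> 0.14
print(round(nd_prediction(0.75, 0.25), 2))   # 75/25 split -> 0.03
```

Under these assumptions the 50–50 split reproduces the predicted \[Nd/Fe\] = 0.14 quoted above, and the 75–25 split brings it down to about 0.03, close to the observed \[Nd/Fe\] $`\simeq `$ 0.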
The Y prediction assumes a meteoritic ratio of 72% $`s`$-process and 28% $`r`$-process and makes no attempt to separate main from weak $`s`$-process contributions. It matches the observations quite well (see the observational figure). The Rb prediction analogous to the Y prediction, also shown in panel b, does not match the observed \[Rb/Fe\] ratios in metal-poor stars. The limited evidence gathered by Smith suggests that Rb in heavy-element enriched red giants is progressively overabundant (relative to Zr) in more metal-poor giants. This increase is attributable to a higher neutron density at the $`s`$-process site in red giants. If a smooth curve is drawn through Smith’s assembled data, the resulting run of \[Rb/Fe\] is that shown in panel b. This prediction is in good agreement with the observations. ## 6 CONCLUDING REMARKS The principal novel result of our survey of Rb abundances in stars is that rubidium relative to iron is systematically overabundant in metal-poor stars. This overabundance reflects in part the growth of the $`r`$-process abundances relative to iron in metal-poor stars; we model this growth using observed Eu abundances and the assumption that the pattern of $`r`$-process abundances is solar-like at all metallicities. A second, major factor accounting for the increase in the Rb to Fe ratio in metal-poor stars is that the $`s`$-process contribution to Rb increases with decreasing metallicity. We model this increase using published abundances of Rb and other heavy elements collated by Smith (1997) for $`s`$-process enriched red giants, which are presumed to be representative of the donors of $`s`$-processed material to the interstellar medium and so to control the chemical evolution of the Galaxy as far as the $`s`$-process is concerned. 
Two factors influence the Rb abundance: (i) the total exposure to neutrons at the $`s`$-process site, the He-burning shell of an AGB star, increases with decreasing metallicity of the red giant, and (ii) the neutron density at the $`s`$-process site increases with decreasing metallicity. That (i) is true follows from the observed increase of the abundance ratio of heavy to light $`s`$-process elements in the $`s`$-process enriched red giants. It is this effect that accounts, for example, for the drop in \[Y/Fe\] in metal-poor stars. That (ii) is true follows from the limited data on Rb abundances in $`s`$-process enriched red giants. As explained above, the neutron density influences the Rb abundance through the branch in the $`s`$-process path at <sup>85</sup>Kr. In this thoroughly empirical way we account for the relative enrichment of Rb in metal-poor stars. In short, the observed \[Rb/Fe\] ratios of metal-poor stars are consistent with the expectation that AGB stars control the input of main $`s`$-process products to the Galaxy’s interstellar medium. A serendipitous discovery is the finding that both members of a visual binary, HD 23439A and B, are mild CH stars, with $`s`$-process overabundances relative to other stars of the same metallicity. Mass transfer across a binary system is now thought to account for the CH stars and the Barium stars, the higher-metallicity counterparts of the CH stars. HD 23439B is a single-lined spectroscopic binary, and the visible star might have been transformed into a CH star by mass transfer from the companion, then an AGB star and now a white dwarf. HD 23439A appears to be a single star that cannot have captured significant amounts of mass from a very distant AGB star orbiting HD 23439B. We suggest, therefore, that these CH stars testify that the halo’s interstellar medium was not entirely chemically homogeneous. 
This is not too surprising given that $`s`$-process products are injected into the interstellar medium at low velocity by red giants, whereas iron and other elements are injected at very high velocity by supernovae. If the timescale for star formation is shorter than the timescale for thorough mixing of supernova and red-giant ejecta, abundance anomalies will result. We thank Bengt Edwardsson for providing the grid of dwarf and subgiant MARCS model atmospheres, Chris Sneden for providing Pilachowski et al.’s (1996) giant MARCS model atmospheres, and M. Busso and Verne Smith for helpful discussions. We also thank Pilachowski et al. for a list of the stars on their observing program, given to us in advance of publication. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. This work has been supported in part by NSF grant AST 9618414 and the Robert A. Welch Foundation of Houston, Texas.
# A measure of conductivity for lattice fermions at finite density. ## 1 Introduction The study of transport properties in a system of charged fermions is an interesting subject in areas as different as plasma physics, quantum chromodynamics, and the physics of metals. In particular, the measurement of the electrical conductivity is a very difficult and yet interesting problem, especially in the presence of nonperturbative effects. In such cases, numerical methods are called for. In this work, we want to show that lattice-regularized Euclidean field theories can be useful in this respect, at least in the limit of vanishing temperature. However, it should be emphasized that the so-called sign problem still needs to be overcome in many cases (see, though, Refs. for some successful simulations at finite density). In this paper, we restrict ourselves to a model consisting of fermions that interact only with an external electromagnetic field. In spite of its simplicity, it shares many properties with more realistic models, and it can be considered a necessary first step to check any numerical method that could later be used in models with dynamical interactions. We consider the standard $`U(1)`$ lattice action, with Wilson fermions and finite chemical potential, but with the gauge variables held fixed. We shall study the residue of the zero-frequency pole of the electrical conductivity, which is purely imaginary. Since a non-vanishing value for this residue unambiguously signals a conducting phase, this is a rather interesting quantity in our opinion. In order to obtain it, we measure the electrical current induced in the system by an external electric field. This technique requires a numerical calculation even in the case of an external spatially homogeneous, time-dependent electromagnetic field. The delicate point, however, is that our electric field varies in Euclidean time. 
One can nevertheless assume that there is a linear relation between the Euclidean current and the Euclidean electric field, at least for small fields. This Euclidean conductivity presents a pole whose residue can be straightforwardly measured. To check that the result so obtained is physical, we follow a very elegant procedure due to Kohn. He showed that the real-time residue can be measured by studying the sensitivity of the ground-state energy to an external Aharonov–Bohm electromagnetic field. We show how this can be done in the lattice formalism and, in this particularly simple case of free fermions, we calculate it (unfortunately, the Kohn recipe seems really hard to use in a Monte Carlo study of a self-interacting problem). Although at present we lack a rigorous proof of the equivalence of the two calculations, their excellent numerical agreement gives strong support to the linear-response method. ## 2 The Model Let us consider a model of Wilson fermions on a lattice of spacing $`a_\mathrm{s}`$ in the three spatial directions and $`a_\mathrm{t}`$ in the temporal one, coupled to an external electromagnetic field. We denote by $`\lambda `$ the ratio $`a_\mathrm{t}/a_\mathrm{s}`$. 
The partition function can be written as (the $`{}^{*}`$ superscript stands for complex conjugation) $$𝒵[U]=\int \prod _z\mathrm{d}\mathrm{\Psi }_z\mathrm{d}\overline{\mathrm{\Psi }}_z\mathrm{exp}\left[\sum _{x,y}\overline{\mathrm{\Psi }}_xM_{xy}(U)\mathrm{\Psi }_y\right],$$ (1) $`M_{xy}(U)`$ $`=`$ $`\mathrm{e}^{\lambda \mu }U_{x,0}(\gamma _0-r_\mathrm{t})\delta _{y,x+\widehat{0}}-\mathrm{e}^{-\lambda \mu }U_{x,0}^{*}(\gamma _0+r_\mathrm{t})\delta _{y+\widehat{0},x}`$ $`+`$ $`\lambda {\displaystyle \sum _{i=1}^3}[U_{x,i}(\gamma _i-r_\mathrm{s})\delta _{y,x+\widehat{ı}}-U_{x,i}^{*}(\gamma _i+r_\mathrm{s})\delta _{y+\widehat{ı},x}]+[(2m+6r_\mathrm{s})\lambda +2r_\mathrm{t}]\delta _{x,y},`$ (2) where $`U_{x,\nu }=\mathrm{e}^{\mathrm{i}A_{x,\nu }}`$, $`A`$ being the gauge field, and $`\mathrm{\Psi }_x`$, $`\overline{\mathrm{\Psi }}_x`$ are the anticommuting Grassmann fermionic fields. The indices $`x,y`$ run over the points of the four-dimensional space-time lattice. We impose periodic boundary conditions for the gauge field, and periodic in space but antiperiodic in time ($`\nu =0`$) for the Grassmann field. The site $`x+\widehat{\nu }`$ is the neighbor of $`x`$ in the $`\nu =0,1,2,3`$ direction. For finite temporal length, $`L_0`$, the system is at finite temperature $`T=(a_\mathrm{t}L_0)^{-1}`$. In this paper we will only consider the zero-temperature ($`L_0\to \mathrm{\infty }`$) limit. We follow the prescription of introducing the chemical potential through an imaginary gauge field $`A=(-\mathrm{i}\lambda \mu ,0,0,0)`$, which is fairly convenient for analytical calculations. The Wilson parameter, $`r`$, can be taken different for the spatial and time directions. In the limit $`\lambda \to 0`$ with $`a_\mathrm{s}`$ fixed the model describes a spatial lattice with continuous time (as for electrons in a metal), while for a continuum field theory both the spatial and temporal continuum limits should be taken. 
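As a concreteness check, the fermion matrix can be built explicitly on a tiny lattice and compared with its momentum-space diagonalization. The sketch below is our own minimal construction, under assumed standard Wilson conventions (forward hop $`\propto (\gamma _\nu -r)`$, backward hop $`\propto -(\gamma _\nu +r)`$, diagonal $`(2m+6r_\mathrm{s})\lambda +2r_\mathrm{t}`$), restricted to $`\lambda =1`$, $`\mu =0`$, $`U=1`$ and antiperiodic temporal boundary conditions; it verifies that $`\mathrm{log}|\mathrm{det}M|`$ in real space agrees with the free-field momentum sum.

```python
import numpy as np

# Euclidean gamma matrices in the representation used below (Eq. 3)
pauli = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
Z2 = np.zeros((2, 2), complex)
gam = [np.block([[Z2, np.eye(2)], [np.eye(2), Z2]])]
gam += [np.block([[Z2, -1j*p], [1j*p, Z2]]) for p in pauli]

L, r, m = 4, 1.0, 0.5          # lambda = 1, mu = 0, free field U = 1
V = L**4

def idx(x):
    return (((x[0] % L)*L + (x[1] % L))*L + (x[2] % L))*L + (x[3] % L)

steps = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
M = np.zeros((4*V, 4*V), complex)
for x in np.ndindex(L, L, L, L):
    i = idx(x)
    M[4*i:4*i+4, 4*i:4*i+4] += (2*m + 6*r + 2*r)*np.eye(4)     # (2m+6r)*lambda + 2r
    for nu, d in enumerate(steps):
        j = idx(tuple(a + b for a, b in zip(x, d)))
        bc = -1.0 if (nu == 0 and x[0] == L - 1) else 1.0      # antiperiodic in time
        M[4*i:4*i+4, 4*j:4*j+4] += bc*(gam[nu] - r*np.eye(4))  # forward hop
        M[4*j:4*j+4, 4*i:4*i+4] -= bc*(gam[nu] + r*np.eye(4))  # backward hop

# Fourier modes give det Mtilde(k) = 16*(W^2 + sum_nu sin^2 k_nu)^2 for these
# conventions, with W = m + sum_nu (1 - cos k_nu) and antiperiodic k_0
sign, logdet = np.linalg.slogdet(M)
k0 = 2*np.pi*(np.arange(L) + 0.5)/L
ks = 2*np.pi*np.arange(L)/L
acc = 0.0
for a in k0:
    for b in ks:
        for c in ks:
            for d in ks:
                W = m + (1-np.cos(a)) + (1-np.cos(b)) + (1-np.cos(c)) + (1-np.cos(d))
                s2 = np.sin(a)**2 + np.sin(b)**2 + np.sin(c)**2 + np.sin(d)**2
                acc += np.log(16*(W**2 + s2)**2)
print(abs(logdet - acc) < 1e-6*abs(acc))   # True
```

The momentum-space determinant is real and positive, so the agreement of the two computations is also a check that the antiperiodic boundary sign has been wired in consistently for forward and backward hops.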
We shall use the following representation for the (Euclidean) gamma matrices: $$\gamma _0=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right),\gamma _i=\left(\begin{array}{cc}0& -\mathrm{i}\sigma _i\\ \mathrm{i}\sigma _i& 0\end{array}\right),$$ (3) where $`\sigma _i`$ are the Pauli matrices. To define the electric four-current on the lattice we recall that in the space-time continuum limit it is defined as $$j_\nu (x)=\overline{\mathrm{\Psi }}(x)\gamma _\nu \mathrm{\Psi }(x),$$ (4) which can be obtained as a logarithmic derivative of the partition function with respect to the gauge field. This calculation can be exactly mimicked on the lattice by noticing that a change in the link variable should be of the form $`U_{x,\nu }\to \mathrm{e}^{\mathrm{i}\alpha _{x,\nu }}U_{x,\nu }`$. In this way one obtains: $$j_{x,\nu }=-\mathrm{i}\frac{\partial \mathrm{log}𝒵}{\partial \alpha _{x,\nu }},$$ (5) where now $`j_{x,0}`$ $`=`$ $`\overline{\mathrm{\Psi }}_x\mathrm{e}^{\lambda \mu }U_{x,0}(\gamma _0-r_\mathrm{t})\mathrm{\Psi }_{x+\widehat{0}}+\overline{\mathrm{\Psi }}_{x+\widehat{0}}\mathrm{e}^{-\lambda \mu }U_{x,0}^{*}(\gamma _0+r_\mathrm{t})\mathrm{\Psi }_x`$ (6) $`j_{x,i}`$ $`=`$ $`\lambda [\overline{\mathrm{\Psi }}_xU_{x,i}(\gamma _i-r_\mathrm{s})\mathrm{\Psi }_{x+\widehat{ı}}+\overline{\mathrm{\Psi }}_{x+\widehat{ı}}U_{x,i}^{*}(\gamma _i+r_\mathrm{s})\mathrm{\Psi }_x],i=1,2,3.`$ (7) The $`j_0`$ component is just the electric charge density that one obtains by differentiating the free-energy density with respect to $`\lambda \mu `$. Moreover, from the gauge invariance of the determinant of the fermionic matrix, $`M`$, it is straightforward to prove the lattice continuity equation for any configuration of the electromagnetic field: $$0=\sum _\nu \left(j_{x,\nu }-j_{x-\widehat{\nu },\nu }\right).$$ (8) Eqs. 
(6) and (7) can be written free of Grassmann variables as $`j_{x,0}`$ $`=`$ $`\mathrm{e}^{\lambda \mu }U_{x,0}\mathrm{Tr}[(\gamma _0-r_\mathrm{t})M_{x+\widehat{0},x}^{-1}]+\mathrm{e}^{-\lambda \mu }U_{x,0}^{*}\mathrm{Tr}[(\gamma _0+r_\mathrm{t})M_{x,x+\widehat{0}}^{-1}],`$ (9) $`j_{x,i}`$ $`=`$ $`\lambda U_{x,i}\mathrm{Tr}[(\gamma _i-r_\mathrm{s})M_{x+\widehat{ı},x}^{-1}]+\lambda U_{x,i}^{*}\mathrm{Tr}[(\gamma _i+r_\mathrm{s})M_{x,x+\widehat{ı}}^{-1}],`$ (10) where Tr stands for the trace over Dirac indices. The above expressions and the relation $$M(U^{*})=\gamma _1\gamma _3\left(M(U)\right)^{*}\gamma _3\gamma _1,$$ (11) allow one to prove that $$\langle j_{x,\nu }\rangle _{U^{*}}=\langle j_{x,\nu }\rangle _U^{*}.$$ (12) In a uniform electric field, the charge density should remain constant under field inversion, while the electric current should change sign. Therefore, from Eq. (12) one expects the former to be real and the latter to be imaginary (Euclidean space-time!). In the absence of external fields ($`U=1`$) the matrix $`M`$ can be diagonalized in Fourier space, which allows one to perform the functional integrals explicitly and to compute the free energy and the propagator. For brevity, we only quote the result for the charge density in the case $`r_\mathrm{t}=r_\mathrm{s}=1`$, $`\mu >0`$, which in the infinite-volume limit reads (see Refs. 
for similar calculations), $$\rho (\lambda ,\mu )=2\int _{-\pi }^\pi \frac{\mathrm{d}^3𝒌}{(2\pi )^3}\theta \left(\mu -\lambda ^{-1}E(𝒌)\right),$$ (13) where $`E(𝒌)`$ $`=`$ $`\mathrm{arccosh}\left[{\displaystyle \frac{1+\lambda ^2\sum _j\mathrm{sin}^2k_j+\left(\lambda \mathrm{\Sigma }(𝒌)+1\right)^2}{2\left(\lambda \mathrm{\Sigma }(𝒌)+1\right)}}\right],`$ (14) $`\mathrm{\Sigma }(𝒌)`$ $`=`$ $`m+{\displaystyle \sum _j}(1-\mathrm{cos}k_j).`$ (15) A useful quantity is the mechanical compressibility which, at zero temperature, coincides with the density of states: $$\kappa (\lambda ,\mu )=\frac{\partial \rho (\lambda ,\mu )}{\partial \mu }=2\int _{-\pi }^\pi \frac{\mathrm{d}^3𝒌}{(2\pi )^3}\delta \left(\mu -\lambda ^{-1}E(𝒌)\right)=2\lambda \int _{E(𝒌)=\lambda \mu }\frac{\mathrm{d}^2S}{(2\pi )^3}\frac{1}{|\mathbf{\nabla }_𝒌E(𝒌)|}.$$ (16) The density of states of the system presents a typical band structure (see the upper part of Fig. 1, dashed line). The upper limit of the band corresponds to the saturation due to Fermi statistics (one particle per lattice site). Since the function $`E(𝒌)`$ is periodic, its gradient has zeroes in the Brillouin zone, producing non-analyticities such as the cusps in Fig. 1 (the so-called Van Hove singularities). ## 3 The Electrical Conductivity In a classical paper, Kohn developed an elegant characterization of a conductor at zero temperature. His method allows the measurement of the following limit of the imaginary part of the electrical conductivity, $`\sigma ^{\prime \prime }`$: $$Z=\underset{\omega \to 0}{lim}\omega \sigma ^{\prime \prime }(\omega ).$$ (17) If this limit turns out to be nonzero, the system is a conductor. The construction is as follows. The system of interest is constrained to verify periodic boundary conditions in the (say) first spatial direction, and immersed in an Aharonov–Bohm-like electromagnetic field $`A=(0,\alpha ,0,0)`$. 
With this choice of boundary conditions the product $`L_1a_\mathrm{s}\alpha `$ is gauge invariant, since it represents the magnetic flux traversing the system. It can be shown that $$Z=\frac{1}{V_\mathrm{s}}\frac{\mathrm{d}^2E_0}{\mathrm{d}\alpha ^2}\bigg |_{\alpha =0},$$ (18) where $`E_0`$ is the ground-state energy and $`V_\mathrm{s}`$ is the spatial volume. It is crucial that the infinite-volume limit is taken after the $`\alpha `$ derivative is performed, since the effect of the Aharonov–Bohm field can be thought of as a change in the boundary conditions (see below). In the infinite-volume limit, the energy no longer depends on $`\alpha `$. In our case, as the free energy and the ground-state energy coincide in the zero-temperature limit, we can study the residue in the following way: $$Z=\underset{V_\mathrm{s}\to \mathrm{\infty }}{lim}\underset{T\to 0}{lim}\frac{\mathrm{d}^2\stackrel{~}{f}}{\mathrm{d}\alpha ^2}\bigg |_{\alpha =0},$$ (19) where $`\stackrel{~}{f}`$ is obtained from the intensive free energy $`f`$ after subtracting the vacuum contribution: $`\stackrel{~}{f}(\mu )=f(\mu )-f(0)`$. Let us sketch the calculation. The free energy should be calculated in a finite volume and at finite temperature. We immerse our system in the Aharonov–Bohm electromagnetic field: $$U_{x,0}=U_{x,2}=U_{x,3}=1,U_{x,1}=\mathrm{e}^{\mathrm{i}\alpha }.$$ (20) This field can be transformed into a boundary effect by performing the following gauge transformation: $$U_{x,\nu }\to U_{x,\nu }^\mathrm{G}=\mathrm{e}^{\mathrm{i}g(x)}U_{x,\nu }\mathrm{e}^{-\mathrm{i}g(x+\widehat{\nu })},g(x)=\alpha x_1,$$ (21) so that $`U^\mathrm{G}=1`$ except for $$U_{(x_1=L_1-1),1}^\mathrm{G}=\mathrm{e}^{\mathrm{i}\alpha L_1}.$$ (22) By direct inspection of the fermion matrix in Eq. 
(2), one can easily recognize that a system verifying periodic boundary conditions in the 1 direction in the field $`U^\mathrm{G}`$ is equivalent to the same system with no field at all, but verifying $$\mathrm{\Psi }(x_0,x_1+L_1,x_2,x_3)=\mathrm{e}^{\mathrm{i}\alpha L_1}\mathrm{\Psi }(x_0,x_1,x_2,x_3).$$ (23) This amounts to substituting $`k_1`$ by $`k_1+\alpha `$ in the momentum quantization on a finite lattice. For a system of free fermions the free energy can now be straightforwardly calculated. Once the $`\alpha `$ derivative is performed, the zero-temperature limit can be taken by transforming the $`k_0`$ sum into an integral. We get, in the simplest case $`r_\mathrm{t}=r_\mathrm{s}=1`$, $`\mu >0`$, $$Z=2\int _{-\pi }^\pi \frac{\mathrm{d}^3𝒌}{(2\pi )^3}\frac{\partial ^2E(𝒌)}{\partial k_1^2}\theta (\mu -\lambda ^{-1}E(𝒌)).$$ (24) Notice that for the empty system, $`\mu <\lambda ^{-1}E_{\mathrm{min}}`$, the integral vanishes, as well as for the full band, $`\mu >\lambda ^{-1}E_{\mathrm{max}}`$, since $`E(𝒌)`$ is a periodic function of $`k_1`$. The three-dimensional integrals in Eq. (24) can be performed using a Monte Carlo method. The results are shown in Fig. 1 (dashed line in the lower part). ## 4 Numerical Calculations In this section, we are going to reproduce the results of Sections 2 and 3 by directly considering the integration of the partition function. This method has the advantage of being generalizable to inhomogeneous external fields, and also to the case when interacting dynamical fields are present. Examples of how to introduce an external field in an interacting lattice-gauge system can be found in Refs. . To compute the partition function it is necessary to work on finite lattices; consequently, an infinite-volume limit should be taken. We have carried out measurements on symmetric lattices of sizes $`L=4,6,8,10,12,14`$ and $`16`$, with $`m=1/2`$ and $`\lambda =1`$. For the hopping term, we have taken $`r_\mathrm{s}=r_\mathrm{t}=1`$. 
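The infinite-volume formulas can also be evaluated by a brute-force sum over the Brillouin zone. The sketch below is our own: it uses a midpoint grid instead of the Monte Carlo integration mentioned above (the grid size and finite-difference step are arbitrary choices), and evaluates the charge density of Eq. (13) and the Kohn residue of Eq. (24) for $`\lambda =1`$, $`m=1/2`$, $`r_\mathrm{s}=r_\mathrm{t}=1`$.

```python
import numpy as np

lam, m = 1.0, 0.5

def E(k1, k2, k3):
    """Free dispersion of Eq. (14), with Sigma(k) from Eq. (15)."""
    Sig = m + (1 - np.cos(k1)) + (1 - np.cos(k2)) + (1 - np.cos(k3))
    num = 1 + lam**2*(np.sin(k1)**2 + np.sin(k2)**2 + np.sin(k3)**2) + (lam*Sig + 1)**2
    return np.arccosh(num/(2*(lam*Sig + 1)))

N = 64
k = -np.pi + 2*np.pi*(np.arange(N) + 0.5)/N          # midpoint grid on (-pi, pi)
K1, K2, K3 = np.meshgrid(k, k, k, indexing='ij')
Ek = E(K1, K2, K3)

def rho(mu):                                          # Eq. (13); saturates at 2
    return 2.0*np.mean(Ek < lam*mu)

def Z_kohn(mu, h=1e-4):                               # Eq. (24), d2E/dk1^2 by central differences
    d2E = (E(K1 + h, K2, K3) - 2*Ek + E(K1 - h, K2, K3))/h**2
    return 2.0*np.mean(d2E*(Ek < lam*mu))

print(rho(0.2), rho(1.0), rho(3.0))   # 0 below the band, partial filling, saturated
print(Z_kohn(0.2), Z_kohn(1.0))       # 0 for the empty system, positive inside the band
```

For these parameters the band edges come out at $`E_{\mathrm{min}}=\mathrm{ln}(3/2)\simeq 0.41`$ and $`E_{\mathrm{max}}=\mathrm{ln}(15/2)\simeq 2.01`$, so $`\mu =1`$ sits inside the band, where the residue is non-zero, while for the empty system and the full band it vanishes as stated above.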
As the integral over the fermionic fields is Gaussian, the computation of the electric current just requires the inversion of a $`4V\times 4V`$ matrix, $`V`$ being the space-time volume. The fermion matrix (2) being sparse, we have used a conjugate-gradient algorithm for the numerical inversion. We first consider the density of states in a vanishing external field. In order to measure $`\partial \rho /\partial \mu `$ we invert the matrix at $`\mu \pm ϵ`$ for $`ϵ`$ small enough. In interacting systems this derivative can be calculated in terms of connected correlation functions. The numerical results are plotted in Fig. 1, upper part, together with the infinite-volume values obtained analytically. Although the finite-size effects are non-negligible even in the larger lattices for most values of $`\mu `$, there is a clear trend towards the asymptotic values. Unfortunately, for an interacting system it is not immediate how to implement Kohn’s method for calculating the residue of the conductivity. In fact, the free energy is rather hard to calculate in a Monte Carlo simulation, where what one directly obtains are mean values. We are now going to present a different way of computing the residue, by directly measuring the system’s response to an external electric field. Notice that the presence of an electric field requires a non-homogeneous vector potential, and consequently the inversion of the fermion matrix can no longer be performed in closed analytical form. This new recipe can be straightforwardly generalized to interacting systems, but its equivalence with Kohn’s method is, at this point, an ansatz. Nevertheless the agreement is excellent, as we will show. By analogy with continuum electrodynamics, we want to study the electric current induced in the system by a weak uniform external electric field in the $`1`$ direction. The conductivity (in the frequency domain) will be the proportionality constant between the electric current and the external field. 
There are some subtleties that need to be considered when putting an external electric field on the lattice. We take the gauge-field configuration ($`t=x_0`$) $$U_{x,0}=\mathrm{e}^{\mathrm{i}\mathcal{E}_tx_1},U_{x,i}=1,$$ (25) $$\mathcal{E}_t=\frac{2\pi }{L_1}n_t,n_t\in \{-\frac{L_1}{2},-\frac{L_1}{2}+1,\mathrm{\ldots },\frac{L_1}{2}-1,\frac{L_1}{2}\}.$$ (26) Notice that the quantization of the electric field is due to the spatial boundary conditions. To preserve translational symmetry, the displaced gauge field $$U_{x,0}=\mathrm{e}^{\mathrm{i}\mathcal{E}_t(x_1-\xi )},\xi \mathrm{integer},$$ (27) should be a gauge transform of the one in Eq. (25). Since the needed gauge transformation is analogous to Eq. (21), it is easy to check that the condition that allows this transformation is the trivialness of the Polyakov loop: $$\prod _{t=0}^{L_0-1}U_{(t,𝒙),0}=1\mathrm{or}\sum _{t=0}^{L_0-1}\mathcal{E}_t=2\pi n,$$ (28) with $`n`$ an integer. This condition also allows one to transform the gauge field to the $`A_0=0`$ gauge. If condition (28) is violated, translational invariance is lost and the electric current is no longer spatially homogeneous, even in a homogeneous electric field. However, with the correct field choice (28), we get a homogeneous electric current aligned with the external electric field, and imaginary, as anticipated in Eq. (12). In order to compare directly with the results obtained from Eq. (24), let us define $$j(t)=\mathrm{i}j_{x,1}.$$ (29) If we want to stay within linear-response theory, we have to postulate a linear relation between the Fourier transform of the electric current $`j(t)`$ and the external electric field $`\mathcal{E}_t`$: $$\widehat{ȷ}(\omega )=\sigma (\omega )\widehat{\mathcal{E}}(\omega ).$$ (30) Notice that, both $`j(t)`$ and $`\mathcal{E}_t`$ being real, $`\sigma (-\omega )=\sigma ^{*}(\omega )`$. 
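The whole measurement can be sketched numerically. The block below is our own minimal implementation, with assumed standard Wilson hopping signs, a dense inversion instead of the conjugate gradient, and a tiny $`4^4`$ lattice; it applies the field of Eqs. (25)–(26) with $`n_0=1`$, $`n_{L_0/2}=-1`$ (which satisfies condition (28)) and measures the spatial current from the propagator. The two checks — spatial homogeneity, guaranteed by condition (28), and a purely imaginary current, as anticipated from Eq. (12) — should hold to machine precision.

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
Z2 = np.zeros((2, 2), complex)
gam = [np.block([[Z2, np.eye(2)], [np.eye(2), Z2]])]
gam += [np.block([[Z2, -1j*p], [1j*p, Z2]]) for p in pauli]

L, lam, r, m, mu = 4, 1.0, 1.0, 0.5, 1.0
V = L**4
n_t = np.zeros(L); n_t[0], n_t[L//2] = 1, -1       # sum_t E_t = 0: condition (28) holds
Efield = 2*np.pi*n_t/L                             # quantized field, Eq. (26) with L_1 = L

def idx(x):
    return (((x[0] % L)*L + (x[1] % L))*L + (x[2] % L))*L + (x[3] % L)

steps = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
M = np.zeros((4*V, 4*V), complex)
for x in np.ndindex(L, L, L, L):
    i = idx(x)
    M[4*i:4*i+4, 4*i:4*i+4] += ((2*m + 6*r)*lam + 2*r)*np.eye(4)
    for nu, d in enumerate(steps):
        j = idx(tuple(a + b for a, b in zip(x, d)))
        bc = -1.0 if (nu == 0 and x[0] == L - 1) else 1.0     # antiperiodic in time
        U = np.exp(1j*Efield[x[0]]*x[1]) if nu == 0 else 1.0  # Eq. (25)
        cp = np.exp(lam*mu)*U if nu == 0 else lam*U
        cm = np.exp(-lam*mu)*np.conj(U) if nu == 0 else lam*np.conj(U)
        M[4*i:4*i+4, 4*j:4*j+4] += bc*cp*(gam[nu] - r*np.eye(4))
        M[4*j:4*j+4, 4*i:4*i+4] -= bc*cm*(gam[nu] + r*np.eye(4))

Minv = np.linalg.inv(M)

# current in the 1 direction from the propagator (spatial links are trivial here)
jarr = np.zeros((L, L, L, L), complex)
for x in np.ndindex(L, L, L, L):
    i = idx(x)
    j = idx((x[0], x[1] + 1, x[2], x[3]))
    A = np.trace((gam[1] - r*np.eye(4)) @ Minv[4*j:4*j+4, 4*i:4*i+4])
    B = np.trace((gam[1] + r*np.eye(4)) @ Minv[4*i:4*i+4, 4*j:4*j+4])
    jarr[x] = lam*(A + B)

homog = np.abs(jarr - jarr.mean(axis=(1, 2, 3), keepdims=True)).max()
print(homog < 1e-9)                       # current depends only on t
print(np.abs(jarr.real).max() < 1e-9)     # current is purely imaginary
```

From the resulting $`j(t)`$, the modified Fourier transform and the ratio to the external field at the lowest frequency would give the Euclidean residue discussed below; here we only verify the exact symmetry properties, which are insensitive to the assumed overall sign conventions.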
However, the results can be more cleanly cast in terms of a modified Fourier transform of the electric field: $$\stackrel{~}{\mathcal{E}}(\omega )=\frac{1}{\sqrt{L_0}}\sum _{t=0}^{L_0-1}\mathcal{E}_t\mathrm{e}^{-\mathrm{i}\omega (t+1/2)}.$$ (31) The rationale for this is that the electric field $`\mathcal{E}_t`$ on the lattice lives mid-way between the sites at times $`t`$ and $`t+1`$. The modified conductivity $`\stackrel{~}{\sigma }(\omega )=\widehat{ȷ}(\omega )/\stackrel{~}{\mathcal{E}}(\omega )`$ is related to the previous one by $$\stackrel{~}{\sigma }(\omega )=\sigma (\omega )\mathrm{e}^{\mathrm{i}\omega /2}.$$ (32) The nice feature of $`\stackrel{~}{\sigma }(\omega )`$ is that it turns out to be purely imaginary. In Fig. 2 we plot the imaginary part of $`\stackrel{~}{\sigma }(\omega )`$ as obtained from a field with $`n_0=1`$ and $`n_1=-1`$ (from now on we shall only indicate the non-vanishing $`n_t`$’s), in a system of Wilson fermions with $`m=1/2`$, $`r=1`$ and $`\mu =1`$, that is, within the band energy range and therefore with a non-vanishing Fermi surface. We see that for large frequencies the thermodynamic limit is reached in rather small lattices. However, at the minimal reachable frequency ($`2\pi /L_0`$) the conductivity grows rapidly, suggesting a singularity at zero frequency. In fact, for a (classical) system of free particles of density $`n`$ we expect $`\sigma (\omega )`$ to behave as $$\sigma ^{\mathrm{free},\mathrm{classical}}(\omega )\simeq \mathrm{i}\frac{e^2n}{m\omega }.$$ (33) Notice that if $`\sigma (\omega )`$ has a pole at $`\omega =0`$ with a purely imaginary residue, the same will hold true for $`\stackrel{~}{\sigma }(\omega )`$, and both residues will be equal. Although the Euclidean conductivity $`\stackrel{~}{\sigma }(\omega )`$ does not match the real-time one (being imaginary, it cannot fulfill the Kramers–Kronig relations), one can formally expect the residues to coincide in the passage from $`\omega `$ to $`\mathrm{i}\omega `$. 
This suggests defining the following quantity, which will be the basic object of our study: $$Z^\mathrm{E}=\frac{1}{\mathrm{i}}\omega ^{\mathrm{min}}\stackrel{~}{\sigma }(\omega ^{\mathrm{min}}),\omega ^{\mathrm{min}}=\frac{2\pi }{L_0}.$$ (34) In the $`L_i,L_0\to \mathrm{\infty }`$ limit, $`Z^\mathrm{E}`$ tends to the residue of the pole. In order to measure this, we have considered the smallest possible external disturbance: $`\{n_0=1,n_{L_0/2}=-1\}`$. Our result is shown in Fig. 1, lower part. We see that the Euclidean residue follows quite closely Kohn’s result, which in fact can be considered as the infinite-volume limit of our calculation. Moreover, the physical picture is rather transparent: when the band is full, the system becomes almost inert, while when the band is empty, it can be excited by the external field only by creating a hole in the Dirac sea. Since the smallest possible excitation has frequency $`2\pi /L_0`$, to be compared with a gap $`2m`$, it is reasonable that at $`\mu =0`$ the larger the space-time lattice, the smaller the system’s response. In fact, notice that in Fig. 1, when $`\mu `$ is below the lower band limit, the curves become horizontal: in this range of $`\mu `$ the system can only be excited by crossing the gap between the Dirac sea and the conduction band. And the gap is, of course, $`\mu `$-independent in a non-interacting system. We remark that our results have been obtained within the linear-response approximation. We can control this approximation in several ways. One is to study the Fourier transform of the current at frequencies at which the Fourier transform of the external field vanishes. In Fig. 3 we show the zero mode of the electric current for the electric field $`\{n_0=1,n_1=-1\}`$. We see that this non-linear effect tends to zero with growing lattice size, which is quite reasonable since the minimum possible electric field is $`2\pi /L_1`$. The non-linear corrections are oscillating, but modulated by a rapidly decaying function. 
Roughly speaking, for the largest lattice the non-linear effects are of the same order as the distance to the thermodynamic limit. A further check can be done by comparing the residue obtained from the data in Fig. 2 with the one in Fig. 1: in the $`L=14`$ lattice the differences are at the $`0.3\%`$ level, while in the $`L=6`$ lattice they are at the $`1.6\%`$ level. Therefore, we believe that non-linear effects are under control for the not-too-small fields that we can deal with. ## 5 Conclusions We have presented a simple way of studying the electrical conductivity of a system of Wilson fermions at finite density and zero temperature in a path-integral formalism. In particular, we have computed the residue of the zero-frequency pole of the conductivity by numerically considering the linear response to an external electric field varying in Euclidean time. The results have been contrasted with an analytical computation based on a method proposed by Kohn, and excellent agreement has been found. As a further cross-check, we have computed the density of states both analytically and numerically on finite lattices, obtaining nice convergence to the thermodynamic limit. It should be emphasized that, in contrast with the analytical calculation, which can only be done for a non-interacting system (or, at most, for simple external fields), the numerical calculations are easily generalizable to more complex models, such as fermions self-coupled through quartic interactions or coupled to a dynamical bosonic field. An open, very interesting question is the possibility of extracting the full real-time electrical conductivity function from its Euclidean counterpart. We have shown that the residue of the zero-frequency pole can indeed be obtained. It would also be very interesting to extend this approach to systems at finite temperature. ## 6 Acknowledgements This work was triggered during a discussion with F. 
Guinea, to whom we are indebted for many suggestions, discussions and bibliographical help. The numerical calculations have been carried out on the RTNN machines at the Universities of Zaragoza and Complutense de Madrid. This work has been partially supported by CICYT, contracts AEN97-1680, 1693, 1708.
# Unitarization of Total Cross Section and Coherent Effect in pQCD ## 1 BASIC IDEA I want to discuss a method to unitarize the leading-log BFKL Pomeron, so that the growth of the total cross section $`\sigma _T(s)`$ with energy $`\sqrt{s}`$ obeys the Froissart bound $`\mathrm{ln}^2s`$. The idea is very simple. The total cross section grows because the Yukawa clouds of the colliding particles overlap even at large impact parameters. At low energy the clouds are so tenuous and transparent that they do not contribute to the cross section. But as the energy increases, the rarefied overlap region may contain sufficient energy to produce there a gluon jet, or a pair. When that happens the cloud becomes opaque and the effective radii increase. This continues until shadowing corrections become important, at which point the rise is dampened and the Froissart bound is attained. This idea is familiar from potential scattering, where the Born approximation, if large, must be supplemented by higher-order corrections to restore unitarity. In the present context, instead of potentials we must talk about interacting Reggeons, but the idea is the same. The technical question is how to do so in QCD, where the basic entities are quarks and gluons carrying non-commuting colours, rather than Reggeons. Clearly a characterization of multiple Reggeons in terms of quarks and gluons is needed. We shall show that this can be derived with the help of an $`s`$-channel factorization property, which also leads to an eikonal form for the amplitude that includes the shadowing correction needed for the Froissart bound to be obeyed. Moreover, such a factorization is almost synonymous with the statement that the state in the peripheral region of the collision is coherent. ## 2 COHERENCE Consider $`n`$ bosons being emitted from an energetic source with vertex factors $`V_i`$, as in Fig. 1(a). 
After Bose-Einstein symmetrization, i.e., summation over the $`n!`$ permuted diagrams, it can be shown that each diagram can be factorized into a product of quasi-particle amplitudes. A quasi-particle is made up of any number $`p`$ of gluons, which couples to the source by the nested commutator $`[V_1,[V_2,[V_3,\mathrm{\dots },[V_{p-1},V_p]\mathrm{\dots }]]]`$. Therefore it is a colour-octet object, just like a single gluon. Indicating factorization by a vertical bar, and a permuted diagram by the gluon lines in the order they appear, here are three examples showing how some $`n=8`$ diagrams factorize: $`[1|2|3|4|5|6|7|8],[5731|2|84|6],[1|2|3|854|76]`$. The general rule is that a vertical bar is put after a number iff no number to its right is smaller than it. Let $`C_m^{\dagger }`$ be the operator creating a quasi-particle with $`m`$ bosons from the vacuum state $`|0\rangle `$; the bosons of these three examples are then in the states $`(C_1^{\dagger })^8|0\rangle `$, $`C_4^{\dagger }C_1^{\dagger }C_2^{\dagger }C_1^{\dagger }|0\rangle `$, and $`(C_1^{\dagger })^3C_3^{\dagger }C_2^{\dagger }|0\rangle `$, respectively. Summing over $`m`$, the bosons emitted by the energetic particle are seen to be in a coherent state $`\mathrm{exp}(C_1^{\dagger }+C_2^{\dagger }+C_3^{\dagger }+C_4^{\dagger }+\mathrm{\dots })|0\rangle `$. Thus factorization and coherence are two sides of the same coin. It is, however, important to emphasize that this factorization occurs in the $`s`$-channel, so it is very different from the usual factorization between short and long distances, which occurs in the $`t`$-channel. Moreover, the collection of all single quasi-particles turns out to be nothing but a Reggeon, so the natural appearance of quasi-particles through the factorization above can be regarded as an algebraic characterization of the Reggeon.
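The cut rule just stated is purely combinatorial, so it is easy to check mechanically. Below is a small illustrative script (my own sketch, not from the original paper) that places a bar after a number iff no number to its right is smaller, and reproduces the three $`n=8`$ examples:

```python
def factorize(perm):
    """Split a permuted gluon diagram into quasi-particle blocks.

    Rule from the text: a vertical bar (cut) is placed after a number
    iff no number to its right is smaller than it.
    """
    blocks, current = [], []
    for i, v in enumerate(perm):
        current.append(v)
        if all(w > v for w in perm[i + 1:]):  # no smaller number to the right
            blocks.append(current)
            current = []
    return blocks

print(factorize([1, 2, 3, 4, 5, 6, 7, 8]))  # eight single-gluon blocks
print(factorize([5, 7, 3, 1, 2, 8, 4, 6]))  # [[5, 7, 3, 1], [2], [8, 4], [6]]
print(factorize([1, 2, 3, 8, 5, 4, 7, 6]))  # [[1], [2], [3], [8, 5, 4], [7, 6]]
```

The block sizes of the last two examples, (4, 1, 2, 1) and (1, 1, 1, 3, 2), match the creation-operator strings quoted above.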
The precise manner in which factorization takes place depends on the numbering of the gluon lines, which can be specified at will once and for all for every set of permuted diagrams. We can use this property to factorize the scattering amplitude as follows . Suppose the central region of Fig. 1(b) falls into $`k`$ disconnected parts once the two energetic particles are removed. For example, $`k=3`$ for Fig. 1(c). Then, by suitably choosing the numbering of gluon lines, we can always produce vertical bars (cuts) between every pair of disconnected components, as shown. Whether cuts occur inside each disconnected component depends on the particular permutation of the lines within that component. Fig. 1(c) illustrates the case when no (one, two) cut occurs in the first (second, third) disconnected component. The gluons between cuts form a single colour-octet quasi-particle, indicated in Fig. 1(c) by a single thick line. Thus the first (second, third) disconnected component is made up of the exchange of one (two mutually interacting, three mutually interacting) quasi-particle(s). One can show in the extended leading-log approximation, which keeps only terms with the lowest power of the fine structure constant $`\alpha _s`$ for fixed $`\xi \equiv \alpha _s\mathrm{ln}s`$ and for amplitudes with a fixed number of quasi-particles exchanged, that (A) the number of quasi-particles emerging from the bottom line of each disconnected component is equal to the number emerging from the top; configurations in which they differ are subleading in the approximation; (B) for fixed $`\xi `$ the amplitude with $`m`$ quasi-particles exchanged is of the form $`\alpha _s^md_m(\xi )`$; thus, for example, the amplitude in Fig. 1(c) is of order $`\alpha _s^6`$; and (C) the non-commuting parts of the colour matrices between different irreducible parts contribute only to subleading terms, so that we may effectively assume them to commute within the extended leading-log approximation.
Summing up all possible diagrams, we get the total amplitude in the extended leading-log approximation to be $`𝒜(s,b)`$ $`=`$ $`1-\mathrm{exp}(-2i\delta (s,b)),`$ (1) $`\delta (s,b)`$ $`=`$ $`{\displaystyle \sum _m}\delta _m(s,b),`$ (2) where $`\delta _m(s,b)`$ is the contribution of a disconnected part with $`m`$ mutually interacting quasi-particles being exchanged; each quasi-particle is a colour octet made up of any number of gluons with arbitrary complexity. We see from this eikonal form that $`\delta (s,b)`$ is simply the phase shift at energy $`\sqrt{s}`$ and impact parameter $`b`$. When the phase shift is small, we may replace the amplitude $`𝒜(s,b)`$ by its Born approximation $`𝒜^{\prime }(s,b)`$. Using a subscript to denote the number of quasi-particles being exchanged, we have $`𝒜_1^{\prime }=2i\delta _1`$ and $`𝒜_2^{\prime }=2i\delta _2+2\delta _1^2`$. The former is of order $`\alpha _s`$ and the exchanged object is a colour octet. This is nothing but the familiar Reggeon amplitude in the leading-log approximation. Thus quasi-particles turn out to be Reggeons in this context, but note that the concept of a quasi-particle is far more general than that of a Reggeon. The latter Born amplitude is of order $`\alpha _s^2`$, and two colour octets are being exchanged. This contains (at least) a colour-singlet and a colour-octet part. The octet amplitude is negligible compared to $`𝒜_1^{\prime }`$ and will be neglected. The singlet amplitude is nothing but the leading-log BFKL Pomeron amplitude. The amplitude $`A(s,\mathrm{\Delta })`$ at momentum transfer $`\mathrm{\Delta }`$ is given by the Fourier transform, and the total cross section $`\sigma _T(s)`$ by the optical theorem: $`A(s,\mathrm{\Delta })`$ $`=`$ $`2is{\displaystyle \int d^2b\,e^{i\mathrm{\Delta }b}𝒜(s,b)},`$ (3) $`\sigma _T(s)`$ $`=`$ $`s^{-1}\mathrm{Im}A(s,0).`$ (4) ## 4 PHENOMENOLOGY These formulas can also be used to compute the energy variation of total cross sections.
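As a numerical illustration of how the eikonal form tames a power-law Born input, the toy model below (my own sketch, not the paper's calculation) substitutes a purely absorptive eikonal exponent, Omega(s, b) = s**eps * exp(-b/b0), for the actual Reggeon/BFKL phase shifts; the resulting total cross section then grows only like ln²s, as the Froissart bound demands:

```python
import numpy as np

def sigma_tot(s, eps=0.3, b0=1.0):
    """Toy eikonal total cross section (arbitrary units).

    Assumed, purely absorptive Born input (a stand-in for the real
    Reggeon/BFKL phase shifts): A(s,b) = 1 - exp(-Omega) with
    Omega = s**eps * exp(-b/b0).  Since A is real here, the optical
    theorem gives sigma_T = 2 * Integral d^2b A(s,b).
    """
    b = np.linspace(0.0, 200.0, 200001)
    profile = 1.0 - np.exp(-s**eps * np.exp(-b / b0))
    db = b[1] - b[0]
    return 2.0 * np.sum(2.0 * np.pi * b * profile) * db

# The ratio sigma_T / ln^2(s) settles toward a constant: Froissart-like growth.
for s in (1e4, 1e8, 1e12):
    print(s, sigma_tot(s) / np.log(s)**2)
```

The black-disk radius grows like eps * b0 * ln(s), so the ratio approaches 2*pi*(eps*b0)**2 from above as s increases.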
Assuming that the functions $`d_m(\xi )`$ for $`m>2`$ are not substantially larger than their counterparts at $`m=1`$ or $`m=2`$, the contribution from $`\delta _m`$ for $`m>2`$ can be ignored and $`\sigma _T(s)`$ can be computed from $`\delta _1`$ and $`\delta _2`$, which in turn can be obtained from the leading-log Reggeon amplitude $`𝒜_1^{\prime }`$ and the BFKL amplitude $`𝒜_2^{\prime }`$. The former is completely known, but unfortunately the latter is only partially known. However, from direct perturbative calculations, scattering amplitudes are known to 8th order . We can extract from such calculations phase shifts to the same order, and use them to calculate the total cross section in QCD. Unitarity is guaranteed, but since we have not used the full Reggeon and BFKL phase shifts to all orders, the calculation may not be numerically accurate. We refer the reader to Ref. for the result of the calculation and its comparison with the experimental data . ## 5 CONCLUSION A unitary formula for the total cross section is derived by making use of the coherence of the peripheral region of the collision. The quasi-particles that emerge turn out to be nothing but Reggeon fragments. In this way, not only is the Froissart bound guaranteed to hold, but an algebraic characterization of the Reggeon is obtained.
no-problem/9905/astro-ph9905018.html
ar5iv
text
# Optical STJ Observations of the Crab Pulsar ## 1 Introduction The possibility of intrinsic determination of individual photon energies in the optical range was first reported by Perryman et al. (1993), who proposed the application of STJ technology to optical photon counting. Incident photons break Cooper pairs responsible for the superconducting state. Since the energy gap between the ground state and excited state is only a few meV (rather than $`\sim 1`$ eV in the case of semiconductors), each individual photon creates a large number of free electrons, in proportion to the photon energy. The first experiments demonstrating single optical photon counting with energy resolution were reported by Peacock et al. (1996) using STJs, and further developments have been described by Peacock et al. (1997). Similar results using superconducting transition-edge sensors (TES) as microcalorimeters have recently been reported (Cabrera et al. (1998)), including first observations of the Crab pulsar by Romani et al. (1998). Technology development within the Astrophysics Division at ESA is ultimately aiming at large arrays of low $`T_\mathrm{c}`$ superconductors capable of $`\sim 10`$ Å intrinsic energy resolution at high count rates. A $`6\times 6`$ array of $`25\times 25`$ $`\mu `$m<sup>2</sup> Tantalum junctions has been developed as a first astronomical prototype (Rando et al. (1998)). The wavelength response is intrinsically very broad (from $`<300`$ nm to $`>1000`$ nm) but is restricted in the present system to about 300–700 nm, as a result of the atmosphere ($`\lesssim 300`$ nm) and the optical elements required for the suppression of infrared photons ($`\gtrsim 700`$ nm). The detector quantum efficiency is around 70% across this wavelength range, limited by the device/substrate geometry rather than by the intrinsic detector response.
Count rate limits are about $`10^3`$ photons s<sup>-1</sup>, determined by the output stage electronics, although the device relaxation time is much faster, being below $`10\mu `$s for the present device. The current wavelength resolution, $`100`$ nm at 500 nm, is driven by system electronics and residual thermal background (IR) radiation, although the intrinsic response of the Ta junctions is some factor of 5 better than the present performance. PSR B0531+21 in the Crab Nebula was first observed as an optical pulsar by Cocke et al. (1969), and provides an excellent target for verification of the system’s astronomical performance. Along with PSR B0833–45 in Vela (Wallace et al. (1977)), PSR B0540–69 in the LMC (Middleditch & Pennypacker (1985)) and more recently PSR B0656+14 (Shearer et al. (1997)) and possibly Geminga (Shearer et al. (1998)) it remains one of the few pulsars observed to emit pulsed optical radiation. While the $`33`$ ms period pulsar has been extensively studied at all wavelengths including the optical (e.g. Percival et al. (1993), Eikenberry et al. (1996), Nasuti et al. (1996), Eikenberry & Fazio (1997), Gull et al. (1998), Martin et al. (1998)) it continues to be important in providing new insights into the nature of the pulsar emission mechanism: the pulse profile shape, the separation of the primary and secondary emission peaks by 0.4 in phase and recent results on the energy dependence of the pulse shape over the infrared to ultraviolet range (Eikenberry et al. (1996), Eikenberry & Fazio (1997)) provide a challenge to theoretical models in which $`\gamma `$-rays created through curvature radiation interact with the pulsar magnetosphere to produce the X-ray, ultraviolet, optical and infrared pulsations through a variety of energy-loss mechanisms. A photon counting detector with energy resolution in the optical offers an important possibility to examine further the energy dependence as a function of pulse phase. 
## 2 Observations Our prototype $`6\times 6`$ Tantalum STJ array covering an area of $`4\times 4`$ arcsec<sup>2</sup> was operated at the Nasmyth focus of the William Herschel Telescope on La Palma in February 1999. Photon arrival time information was recorded with an accuracy of about $`\pm 5`$ $`\mu `$s with respect to GPS timing signals; while the latter is specified to remain within 1 $`\mu `$s of UTC, typical standard deviations are much less (Kusters (1996)). Observations were made on 4–6 Feb, although modest seeing ($`>2`$ arcsec), especially poor on the first two nights, and a significant number of unstable junctions, meant that total intensities could not be determined reliably. Our present analysis is restricted to the signal extracted from just 6 pixels (corresponding to an indeterminate but small ($`\sim 0.1`$) fraction of the overall PSF), and a consideration of the resulting pulse profile and its energy dependence. Data from the STJ are archived in FITS format with the photon records defined by their arrival time, $`x,y`$ pixel coordinate, and energy channel in the range 0–255. Channels 50–100 cover $`\mathrm{\Delta }\lambda \approx 610`$–310 nm, with $`\lambda (\mathrm{nm})\approx 1238.5/(m\times N_{\mathrm{ch}}+c)`$, where $`N_{\mathrm{ch}}`$ is the channel number, $`m\approx 0.04`$, and $`c\approx 0.03`$. Energy calibration was performed using an internal calibration source before and after the target observations, and verified using narrow-band filter observations of a standard star. Photon arrival times were translated to the solar system barycentre using the JPL DE200 ephemeris, taking into account gravitational propagation delay. Our reference timing ephemeris for the Crab pulsar used the 15 Feb 1999 (MJD = 51224) values of $`\nu =\mathrm{29.856\hspace{0.17em}514\hspace{0.17em}436\hspace{0.17em}4}`$ Hz and $`\dot{\nu }=-\mathrm{374\hspace{0.17em}886.90}\times 10^{-15}`$ s<sup>-2</sup> taken from the radio ephemeris of Lyne et al. (1999).
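For reference, the quoted channel-to-wavelength calibration is trivial to apply; a minimal helper (using the $`m`$ and $`c`$ values above) is:

```python
def channel_to_wavelength(n_ch, m=0.04, c=0.03):
    """STJ energy channel -> wavelength in nm, using the calibration
    lambda(nm) ~ 1238.5 / (m * N_ch + c) quoted in the text."""
    return 1238.5 / (m * n_ch + c)

# Channels 50-100 should span roughly 610-310 nm:
print(round(channel_to_wavelength(50)))   # red edge, ~610 nm
print(round(channel_to_wavelength(100)))  # blue/UV edge, ~307 nm
```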
Consistent periods were obtained, with a precision of typically $`5\times 10^{-8}`$ s, from a period search of the timing data. ## 3 Results and Discussion Fig. 1 shows the light curve from the 50 min of data obtained on 6 Feb, acquired with a time resolution of 5 $`\mu `$s, and folded into 128 phase bins ($`\sim 250\mu `$s per bin), with an arbitrary origin of zero phase, and without background subtraction (impractical due to the combination of poor seeing, small field, and fraction of unstable junctions; the overall system response is undetermined for similar reasons, combined with uncalibrated losses at the derotator entrance aperture). Examination of the light curve, including the peaks, at finer temporal resolution down to about 30 $`\mu `$s per phase bin reveals no significant sub-structure persisting over the observation interval. Fig. 2 shows the data divided into two separated energy channels, corresponding to E<sub>1</sub> = 310–410 nm and E<sub>2</sub> = 500–610 nm (energy channels 77–99 and 50–61 respectively). The profiles are normalised to the same relative intensities in the peaks, and are displaced vertically. For this choice of energy bins, the ratio of photons is roughly E<sub>2</sub>:E<sub>1</sub> = 4:1. The resulting colour ratio, constructed as (E<sub>2</sub>–E<sub>1</sub>)/(E<sub>2</sub>+E<sub>1</sub>) and folded at the pulsar period, is shown in the lower part of the figure. Wavelength-dependent variations across the peaks have been noted by Eikenberry et al. (1996), based on observations spanning the UV to the infrared K band, and by Sandoval et al. (1998). Not surprisingly, given the form of these reported variations, no significant variations with pulsar phase are evident in our data.
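The folding procedure itself is standard; a minimal sketch (my own, using a simple second-order phase model rather than the full barycentring pipeline) of the phase folding and the colour-ratio construction is:

```python
import numpy as np

def fold_lightcurve(t, nu, nu_dot=0.0, nbins=128, t0=0.0):
    """Fold barycentred photon arrival times (s) on a pulsar ephemeris.

    Second-order phase model: phi(t) = nu*(t-t0) + 0.5*nu_dot*(t-t0)**2,
    taken modulo 1; returns counts per phase bin (arbitrary phase origin).
    """
    dt = np.asarray(t, dtype=float) - t0
    phase = (nu * dt + 0.5 * nu_dot * dt**2) % 1.0
    counts, _ = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
    return counts

def colour_ratio(e2, e1):
    """Phase-resolved colour ratio (E2 - E1)/(E2 + E1), as in the text."""
    e2, e1 = np.asarray(e2, float), np.asarray(e1, float)
    return (e2 - e1) / (e2 + e1)
```

With the ephemeris values quoted above plugged in for `nu` and `nu_dot`, folding the two energy-selected photon lists and passing the binned profiles to `colour_ratio` reproduces the construction of the lower panel of Fig. 2.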
From the nebular emission-line spectrum reported by Davidson (1979), the \[O ii\] doublet ($`\lambda `$ 3726, 3729) lies within our extracted blue passband, while the red passband selected comprises none of the strong emission lines in the red part of the spectrum (\[N ii\] 6548, H$`\alpha `$ 6563, \[N ii\] 6584, \[S ii\] 6717, 6731), and only marginally collects photons from the \[O iii\] doublet (4959, 5007). Our a posteriori choice of energy channels for this ratio provides sufficient wavelength separation to avoid ‘contamination’ of each bin, given the low energy resolution of the device. Further discrimination of the nebular contributions is limited by our presently modest energy resolution, which is also insufficient to confirm the reality of the broad absorption feature near 5920 Å, so far noted only by Nasuti et al. (1996). These observational results can be compared with predictions from models in which the observed flux in a given phase interval is a combination of emission from effectively disjoint physical regions, with the sections contributing to the emission in a given phase interval depending on the viewing geometry (e.g. Cheng et al. (1986a), Cheng et al. (1986b)). This mixing of physical regions acts to average the total emission, and predicts that observed properties such as the energy ratio remain constant, or change modestly but rapidly close to the emission peak (Romani (1996)). Higher S/N STJ data will be required to probe the small, rapid spectral index variations across the pulse peaks reported by Sandoval et al. (1998). As the first application in astronomy of a superconducting tunnel junction detector capable of providing intrinsic wavelength resolution in the optical, these results are a modest indication of the technology’s capabilities for the future. Significant improvements in performance, and in particular in the wavelength resolution, are expected from the use of lower critical temperature superconductors in the future.
###### Acknowledgements. We acknowledge the contributions of other members of the Astrophysics Division of the European Space Agency at ESTEC involved in the optical STJ development effort, in particular J. Verveer and S. Andersson (who also provided technical and system engineering support at the telescope) and P. Verhoeve for evaluation of device performance. We acknowledge D. Goldie, R. Hart and D. Glowacka of Oxford Instruments Thin Film Group for the fabrication of the array. We are grateful for the assignment of engineering time at the William Herschel Telescope of the ING, and we acknowledge the excellent support given to the instrument’s commissioning, in particular by P. Moore and C.R. Benn. The analysis made use of the up-to-date Jodrell Bank Crab pulsar timing results, maintained on the www by A.G. Lyne, R.S. Pritchard and M.E. Roberts. We thank G. Vacanti and A. Hazell for software updates allowing our data to be processed within the FTOOLS/XRONOS environment. We thank the referee, R.W. Romani, for helpful comments.
no-problem/9905/cond-mat9905153.html
ar5iv
text
# LATERAL TUNNELING THROUGH THE CONTROLLED BARRIER BETWEEN EDGE CHANNELS IN A TWO-DIMENSIONAL ELECTRON SYSTEM ## Abstract We study the lateral tunneling through a gate-voltage-controlled barrier, which arises as a result of partial elimination of the donor layer of a heterostructure along a fine strip using an atomic force microscope, between edge channels at the depletion-induced edges of a gated two-dimensional electron system. For a sufficiently high barrier a typical current-voltage characteristic is found to be strongly asymmetric and to include, apart from a positive tunneling branch, a negative branch that corresponds to the current overflowing the barrier. We establish that the barrier height depends linearly on both gate voltage and magnetic field, and we describe the data in terms of electron tunneling between the outermost edge channels. Recently there has arisen much interest in the lateral tunneling into the edge of a two-dimensional electron system (2DES), which is related not only to the problem of integer and fractional edge states in the 2DES but also to that of resonant tunneling and Coulomb blockade . The tunneling regime was identified by exponential dependences of the measured current on either source-drain voltage or magnetic field . For producing a tunnel barrier a number of methods were used: (i) gate voltage depletion of a narrow region inside the 2DES ; (ii) focused-ion-beam insulation writing ; (iii) cleaved-edge overgrowth technique . Since the tunnel barrier parameters are not well-controlled quantities, it is important that, using the first method, one can tune the barrier on the same sample.
In contrast to vertical tunneling into the bulk of the 2DES at a quantizing magnetic field, when the 2DES spectrum shows up , in lateral tunneling electrons can always tunnel into the Landau levels that bend up at the edge to form edge channels where they intersect the Fermi level, i.e., the spectrum gaps are not seen directly in lateral tunneling. Instead, it reflects the edge channel structure and density of states. For both the integer and fractional quantum Hall effect, a power-law behaviour of the density of states at the 2DES edge is expected. This can be entangled with the distortion of the barrier by an electric field in the nonlinear regime of response and, therefore, results of lateral tunneling experiments obtained from measurements of current-voltage curves should be treated with care. Here, we investigate the lateral tunneling in narrow constrictions in which, along a thin strip across each constriction, the donor layer of a GaAs/AlGaAs heterostructure is partly removed using an atomic force microscope (AFM). A controlled tunnel barrier is created by gate depletion of the whole of the sample. The well-developed tunneling regime is indicated by strongly asymmetric diode-like current-voltage characteristics of the constriction, which are sensitive to both gate voltage $`V_g`$ and normal magnetic field $`B`$. The behaviour of the tunneling part of the current-voltage curves points to electron tunneling between the outermost edge channels. The samples are triangular constrictions of a 2D electron layer with different widths $`W=0.7`$, 0.4, 0.3, and 0.2 $`\mu `$m of the thinnest part, see Fig. 1(a). These are made using standard optical and electron beam lithography from a wafer of GaAs/AlGaAs heterostructure with low-temperature mobility $`\mu =1.6\times 10^6`$ cm<sup>2</sup>/Vs and carrier density $`n_s=4\times 10^{11}`$ cm<sup>-2</sup>.
Within each constriction the donor layer is removed along a fine line by locally oxidizing the heterostructure using AFM-induced oxidation . This technique allows one to define 140 Å wide oxide lines of sufficient depth and oxide quality so as to partly remove the donor layer and, therefore, locally decrease the original electron density. The whole structure is covered with a metallic gate, which enables us to tune the carrier density everywhere in the sample. When the 2D layer is depleted, the oxidized regions become depopulated first, resulting in the creation of tunnel barriers. Potential probes are made to the sample to allow transport measurements. For the measurements we apply a dc voltage, $`V_{sd}`$, between the source (grounded) and drain contacts of one of the constrictions, modulated with a small ac voltage with amplitude $`V_{ac}=40`$ $`\mu `$V and frequency $`f=20`$ Hz. A gate voltage is applied between the source and the gate. We measure the real part of the ac current, which is proportional to the differential conductance $`\mathrm{d}I/\mathrm{d}V`$, as a function of bias voltage $`V_{sd}`$ ($`IV`$ characteristics) using a home-made $`IV`$ converter and a standard lock-in technique. The behaviour of the $`IV`$ characteristics is investigated with changing both gate voltage and magnetic field. The measurements are performed at a temperature of about 30 mK at magnetic fields of up to 14 T. The results obtained on different constrictions are qualitatively similar. To characterize the sample we extract the gate voltage dependence of the electron density from the behaviour of magneto-conductance plateaux in the barrier region and in the rest of the 2DES (Fig. 1(b)). The analysis is made at high fields, where the size-quantization contribution to the conductance plateaux in narrow constrictions is dominated by magnetic field quantization effects . As seen from Fig.
1(b), if the barrier region is depopulated ($`V_g<V_{th}`$), the electron density in the surrounding areas is still high enough to provide good conduction. The slopes of the dependences $`n_s(V_g)`$ in the oxidized region and in the rest of the 2DES turn out to be equal within our accuracy. The distance between the gate and the 2DES is determined to be $`d\approx 570`$ Å; as the corresponding growth parameter is about 400 Å, the 2D layer thickness contributes appreciably to the distance $`d`$. We have found that even in the unoxidized region the electron density at $`V_g=0`$ can be different after different coolings of the sample because of insignificant threshold shifts: it falls within the range $`2.5\times 10^{11}`$ to $`4\times 10^{11}`$ cm<sup>-2</sup> and is always higher compared to the barrier region. A typical $`IV`$ characteristic of the constriction in the well-developed tunneling regime is strongly asymmetric and includes an overflowing branch at $`V_{sd}<0`$ and a tunneling branch at $`V_{sd}>0`$, see Fig. 2(a). The tunneling branch is much smaller and saturates rapidly in zero $`B`$ with increasing bias voltage. The onset voltages $`V_O`$ and $`V_T`$ for these branches are defined in a standard way, as shown in the figure. The tunneling regime can be attained both by decreasing the gate voltage and by increasing the magnetic field, as is evident from Fig. 2(a). We check that the shape of the $`IV`$ characteristics is not influenced by interchanging the source and drain contacts. Hence, the tunnel barrier is symmetric and the asymmetry observed is not related to the constriction geometry. To understand the origin of the asymmetry, let us consider a gated 2DES containing a potential barrier of approximately rectangular shape with width $`L\gg d`$ in zero magnetic field. The 2D band bottom in the barrier region coincides with the Fermi level $`E_F`$ of the 2DES at $`V_g`$ equal to the threshold voltage $`V_{th}`$.
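The quoted gate-to-2DES distance follows from the parallel-plate capacitor relation between $`d`$ and the slope of the density-versus-gate-voltage dependence; the sketch below (with an assumed, illustrative slope value, since the measured slope is not quoted in the text) shows the arithmetic:

```python
# Parallel-plate estimate of the gate-to-2DES distance from the slope of the
# carrier density vs gate voltage, n_s(V_g):  d = eps * eps0 / (e * dn_s/dV_g).
# The slope value used below is an illustrative assumption, not a number from
# the paper; eps = 12.9 is the GaAs dielectric constant.
EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C
EPS_GAAS = 12.9

def gate_distance(dns_dvg_cm2_per_V):
    dns_dvg = dns_dvg_cm2_per_V * 1e4          # cm^-2 V^-1 -> m^-2 V^-1
    return EPS_GAAS * EPS0 / (E_CHARGE * dns_dvg)  # distance in metres

d = gate_distance(1.25e12)                     # assumed slope, cm^-2 per volt
print(d * 1e10)                                # distance in Angstrom, ~570
```

A slope of about 1.25e12 cm⁻² V⁻¹ reproduces the quoted d of roughly 570 Å.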
Since in the barrier region for $`V_g<V_{th}`$ an incremental electric field is not screened, the 2D band bottom follows the gate potential, so that the barrier height is equal to $`e\mathrm{\Delta }V_g=e(V_{th}-V_g)`$, where $`e`$ is the electron charge (Fig. 2(b)). Applying a bias voltage $`V_{sd}`$ leads to a shift of the Fermi level in the drain contact by $`eV_{sd}`$. Because of gate screening, the voltage $`V_{sd}`$ drops on a scale of the order of $`d`$ near the boundary between barrier and drain, and so the barrier height on the source side does not practically change, see Fig. 2(b). If $`V_{sd}`$ reaches the onset voltage $`V_O=\mathrm{\Delta }V_g`$, the barrier on the drain side vanishes and electrons start to overflow from drain into source. In contrast, for $`V_{sd}>0`$ only electron tunneling through the barrier from source into drain is possible. With increasing $`V_{sd}`$ above $`\mathrm{\Delta }V_g`$, the tunneling distance diminishes and the barrier shape becomes close to triangular. Within the triangular barrier approximation, in the quasiclassical limit of small tunneling probabilities, it is easy to deduce that the derivative of the tunneling current with respect to bias voltage is given by the relation $$\frac{\mathrm{d}I}{\mathrm{d}V}=\sigma _0\mathrm{exp}\left(-\frac{4(2m)^{1/2}(e\mathrm{\Delta }V_g)^{3/2}L}{3\hbar eV_{sd}}\right)\ll \sigma _0,$$ (1) where $`\sigma _0\sim (e^2/h)\mathrm{\Delta }V_gW/V_{sd}\lambda _F`$, $`m=0.067m_0`$ ($`m_0`$ is the free electron mass), and $`\lambda _F`$ is the Fermi wavelength in the source. Obviously, the tunneling current is dominated by electrons in the vicinity of the Fermi level, and the tunneling distance $`L_T=\mathrm{\Delta }V_gL/V_{sd}`$ should satisfy the inequality $`d\lesssim L_T<L`$. In accordance with Eq. (1), the expected dependence of the tunneling onset voltage $`V_T`$ on gate voltage is given by $`V_T\propto (\mathrm{\Delta }V_g)^{3/2}`$. As seen from Fig.
3(a), the expected behaviour of both $`V_O`$ and $`V_T`$ with changing $`V_g`$ is indeed observed. The dependences $`V_O(V_g)`$ and $`V_T^{2/3}(V_g)`$ are both linear; the slope of the former is very close to one. Extensions of these straight lines intercept the $`V_g`$-axis at slightly different voltages, which indicates that the triangular barrier approximation is good. The threshold voltage $`V_{th}`$ for the 2DES’s generation in the barrier region, which is defined as the point of vanishing $`V_O`$ (Fig. 3(a)), is coincident, within experimental uncertainty, with the value of $`V_{th}`$ determined from the analysis of magneto-conductance plateaux (Fig. 1(b)). The fit of the set of $`IV`$ characteristics at different $`V_g`$ by Eq. (1), with parameters $`L`$, $`V_{th}`$, and $`\sigma _0`$, is depicted in Fig. 3(b). The dependence of $`\sigma _0`$ on $`\mathrm{\Delta }V_g`$ and $`V_{sd}`$ is ignored against the background of the strong exponential dependence of $`\mathrm{d}I/\mathrm{d}V`$. Although three parameters are varied, the fit is very sensitive, except for $`\sigma _0`$, to their variation because of the exponential behaviour of the $`IV`$ characteristics. One can see from Fig. 3(b) that the above model describes the experiment at zero magnetic field well. As expected, the determined parameter $`L=0.6`$ $`\mu `$m is much larger than $`d`$, i.e. the barrier shape at $`V_{sd}=0`$ is approximately rectangular, and the value of $`V_{th}`$ is close to the point where $`V_O`$ (and $`V_T`$) tends to zero (Fig. 3(a)). Similar results are obtained on the other two constrictions. Besides, we find that the coefficient $`\sigma _0`$ for different constrictions does not scale with the constriction width $`W`$. This probably implies that the tunnel barriers, even with submicron lengths, are still inhomogeneous, which, however, does not seem crucial for the case of exponential $`IV`$ dependences.
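Eq. (1) is straightforward to evaluate numerically. The sketch below (my own, with the exponent written with an explicit minus sign so that the tunneling probability decays, and with an arbitrary prefactor sigma0) also checks the onset-voltage scaling, which follows because rescaling dvg by 2 and v_sd by 2**1.5 leaves the exponent unchanged:

```python
import numpy as np

HBAR = 1.0546e-34            # J s
E = 1.602e-19                # C
M_EFF = 0.067 * 9.109e-31    # GaAs effective mass, as in the text

def didv_triangular(v_sd, dvg, L, sigma0=1.0):
    """Triangular-barrier tunneling conductance, Eq. (1) of the text:
    dI/dV = sigma0 * exp(-4*sqrt(2m)*(e*dvg)**1.5 * L / (3*hbar*e*v_sd)).
    v_sd and dvg in volts, L in metres; sigma0 is an arbitrary prefactor."""
    expo = 4.0 * np.sqrt(2.0 * M_EFF) * (E * dvg)**1.5 * L / (3.0 * HBAR * E * v_sd)
    return sigma0 * np.exp(-expo)

# Scaling check: the onset voltage grows as dvg**1.5, so rescaling
# dvg -> 2*dvg together with v_sd -> 2**1.5 * v_sd leaves dI/dV unchanged.
g1 = didv_triangular(1e-3, 1e-3, 0.6e-6)
g2 = didv_triangular(2**1.5 * 1e-3, 2e-3, 0.6e-6)
print(np.isclose(g1, g2))  # True
```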
Having verified that we deal with a controlled tunnel barrier, we investigate the tunneling in a normal magnetic field, which gives rise to a tunnel barrier in a similar way to gate depletion (Fig. 2(a)). At constant $`V_g>V_{th}`$, where there is no tunnel barrier in zero $`B`$, the magneto-conductance $`\sigma `$ obeys the $`1/B`$ law at weak fields and drops exponentially with $`B`$ in the high-field limit, signaling the tunneling regime. Fig. 4(a) presents the magnetic field dependence of the onset voltage $`V_O`$ that determines the barrier height. As seen from the figure, the change of the barrier height $`eV_O`$ with $`B`$ is very close to $`\hbar \omega _c/2`$, which points to a shift of the 2D band bottom by half of the cyclotron energy. For describing the tunneling branch of the $`IV`$ characteristics we calculate the tunneling probability in the presence of a magnetic field. This is not as trivial as at $`B=0`$, because electrons tunnel through the magnetic parabola between edge channels at the induced edges of the 2DES. In the triangular barrier approximation one has to solve the Schrödinger equation with the barrier potential $$U(x)=\frac{\hbar \omega _c}{2l^2}(x-x_0)^2-eV_{sd}\frac{x}{L}-e\mathrm{\Delta }V_g,\text{ }0<x<L,$$ (2) where $`\omega _c`$ is the cyclotron frequency, $`l`$ is the magnetic length, and $`eV_{sd}`$ is larger than the barrier height in the magnetic field. An electron at the Fermi level in the source tunnels through $`U(x)`$ from the origin to a state with orbit centre $`x_0`$ such that $`0<x_0<L`$. If the barrier potential is dominated by the magnetic parabola (i.e., the magnetic length is the shortest length scale), the problem reduces to the known one of finding the energy levels of the parabolic potential shifted by the linear term in Eq. (2). The value of $`x_0`$ is determined from the condition that a Landau level in the potential $`U(x)`$ coincides with the Fermi level in the source.
If only the lowest Landau level is considered and the spin splitting is ignored, we get for the minimum tunneling distance to the outermost edge channel in the drain $$x_0=L_T=\frac{l}{2}\left(\frac{\hbar \omega _cL}{eV_{sd}l}-\frac{2\mathrm{\Delta }V_gL}{V_{sd}l}-\frac{eV_{sd}l}{\hbar \omega _cL}\right)\gg d.$$ (3) The first term in brackets in Eq. (3), which is dominant, is large compared to one. Knowing the wave function of the lowest Landau level in the potential $`U(x)`$ and neglecting the last term in Eq. (3), we obtain for the shape of the $`IV`$ characteristics near the onset, where the tunneling probability is small, $$\frac{\mathrm{d}I}{\mathrm{d}V}=\sigma _B\mathrm{exp}\left(-\frac{(\hbar \omega _c/2-e\mathrm{\Delta }V_g)^2L^2}{e^2V_{sd}^2l^2}\right)\ll \sigma _B.$$ (4) Here $`\sigma _B`$ is a pre-factor which can tentatively be expected to be of the same order of magnitude as $`\sigma _0`$. From Eq. (4) it follows that at sufficiently strong magnetic fields the tunneling onset voltage $`V_T`$ is related to the barrier height as $`eV_Tl/L\sim \hbar \omega _c/2-e\mathrm{\Delta }V_g`$, which is consistent with the experiment (Fig. 4(a)). The solution (4) includes the case $`e\mathrm{\Delta }V_g>0`$, when a tunnel barrier is absent at zero magnetic field but arises with increasing $`B`$. This occurs apparently because of depopulation of the barrier region in the extreme quantum limit of magnetic field. Fig. 4(b) displays the fit of the $`IV`$ characteristics at different magnetic fields by Eq. (4) with parameters $`L`$, $`V_{th}`$, and $`\sigma _B`$. The optimum values of $`L=0.6`$ $`\mu `$m and $`V_{th}=1.5`$ mV are found to be very close to those for the $`B=0`$ case as determined for the same range of barrier heights, see Fig. 3(b). Although this fact supports our considerations, they are not rigorous enough to address the considerable discrepancy between the pre-exponential factors with and without magnetic field.
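For completeness, Eq. (4) can be evaluated in the same way. The sketch below is my own and assumes the decaying-exponential reading of Eq. (4), GaAs material parameters, and an arbitrary prefactor sigma_B; it illustrates that at fixed bias the conductance falls steeply with field (the exponent grows like B**3 when dvg = 0), and that the barrier closes when e*dvg reaches hbar*w_c/2:

```python
import numpy as np

HBAR = 1.0546e-34            # J s
E = 1.602e-19                # C
M_EFF = 0.067 * 9.109e-31    # GaAs effective mass

def didv_in_field(v_sd, dvg, L, B, sigma_B=1.0):
    """Sketch of Eq. (4):
    dI/dV = sigma_B * exp(-((hbar*w_c/2 - e*dvg) * L)**2 / (e*v_sd*l)**2),
    with w_c = e*B/m the cyclotron frequency and l = sqrt(hbar/(e*B)) the
    magnetic length.  v_sd, dvg in volts, L in metres, B in tesla."""
    w_c = E * B / M_EFF
    l = np.sqrt(HBAR / (E * B))
    expo = ((0.5 * HBAR * w_c - E * dvg) * L / (E * v_sd * l))**2
    return sigma_B * np.exp(-expo)
```

When e*dvg equals hbar*w_c/2 the exponent vanishes and dI/dV returns to sigma_B, i.e. the field-induced barrier disappears.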
The observed behaviour of the $`I`$–$`V`$ characteristics with magnetic field in the transient region, where their asymmetry is not yet strong (Fig. 2(a)), is similar to that of Refs. . This region lies just outside the scope of the exponential $`I`$–$`V`$ dependences reached at higher magnetic-field-induced tunnel barriers; over it, our $`I`$–$`V`$ curves are close to power-law dependences, as was discussed in Ref. . Such $`I`$–$`V`$ curves are very difficult to analyze and interpret without solving the tunneling problem rigorously. We note that the peak structures on the tunneling branch of the $`I`$–$`V`$ characteristics (see Figs. 2(a) and 3(b)) persist at relatively low magnetic fields and are very similar to those studied in Ref. . They may hint at resonant tunneling through impurity states below the 2D band bottom. This work was supported in part by the Russian Foundation for Basic Research under Grants No. 97-02-16829 and No. 98-02-16632, the Programme “Nanostructures” from the Russian Ministry of Sciences under Grant No. 97-1024, and Volkswagen-Stiftung under Grant No. I/68769.
# The Cluster 𝐿_𝑋-𝜎 Relation Has Implications for Scale-Free Cosmologies ## 1. Introduction The dynamical state of a system of galaxies may be characterized by three intimately related physical quantities: the bolometric x-ray luminosity $`L_\mathrm{X}`$, the average emission-weighted plasma temperature $`T`$, and the average dark matter velocity dispersion, $`\sigma _{\mathrm{d}m}`$ (as traced by the projected velocity dispersion of the galaxies, $`\sigma _p`$). There is an observed correlation among these parameters, crudely given by $`T\propto \sigma _p^{\alpha _1}`$, $`L_\mathrm{X}\propto T^{\alpha _2}`$, and $`L_\mathrm{X}\propto \sigma _p^{\alpha _3}`$, and roughly in agreement with the predictions of physical models. My goal in this letter is to show that the value of the slope $`\alpha _3`$ at $`z=0`$ can constrain the spectrum of the primordial density fluctuations, $`P(k)\propto k^n`$. One powerful tool for linking the x-ray properties of galaxy systems with the cosmological parameters already exists. The cluster temperature function, which is directly related to the cluster mass distribution, can be used at zero redshift to measure $`n`$. Henry & Arnaud (1991) analyze data from the Einstein satellite to obtain $`n\simeq -1.7\pm 0.55`$; data from the ROSAT and ASCA missions (Markevitch 1998), corrected for the presence of cooling flows, implies a somewhat steeper $`n\simeq -2\pm 0.3`$. The redshift evolution of the cluster temperature function can break the degeneracy between the density parameter, $`\mathrm{\Omega }`$, and the normalization of the primordial spectrum (Henry 1997). Here I introduce a new zero-redshift method. I suggest that $`\alpha _3`$, the slope of the $`L_X`$–$`\sigma _p`$ relation, is related to $`n`$ in a way that is complementary to the dependence of the temperature function on $`n`$. While it is important to have a flux-limited sample for the temperature function method, using $`\alpha _3`$ to compute $`n`$ requires only a large number of pointed observations of systems of galaxies. 
In §2 I derive the relationship between $`\alpha _3`$ and $`n`$. In §3 I compare the results with available observations, and discuss possible systematic biases. In §4 I summarize. ## 2. Derivation Here I use some of the definitions in the paper by Navarro, Frenk, & White (1997; NFW). My results, however, are largely independent of their N-body simulations of dark matter halos, and apply to density profiles different from the one they introduce. Eke, Navarro, & Frenk (1998) provide a derivation of the $`L_X`$–$`T`$ relation which is analogous to part of what follows, but does not focus on any cosmological use of $`\alpha _2`$. ### 2.1. Halo Parameters and Definitions A spherically symmetric dark matter halo may be characterized by $`r_{200}`$, the radius which encloses 200 times the critical density of the universe, $`\rho _{\mathrm{c}rit}`$. The characteristic mass, $`M_{200}`$, is then $`M_{200}`$ $`=`$ $`{\displaystyle \frac{800\pi }{3}}r_{200}^3\rho _{\mathrm{c}rit}`$ (1) $`\rho _{\mathrm{c}rit}`$ $`=`$ $`{\displaystyle \frac{3H_0^2}{8\pi G}}Z(z),`$ (2) $`Z(z,\mathrm{\Omega })`$ $`=`$ $`(1+z)^3{\displaystyle \frac{\mathrm{\Omega }_0}{\mathrm{\Omega }(z)}}.`$ (3) Here $`H_0`$ is the Hubble constant, $`\mathrm{\Omega }`$ is the density parameter, and $`z`$ is the redshift. The halo’s characteristic circular velocity is $`V_{200}=\sqrt{GM_{200}/r_{200}}`$, or, with the substitution of equation (1), $$V_{200}=10H_0r_{200}Z^{1/2}.$$ (4) Now consider a halo density profile of the form, $$\rho _{\mathrm{d}m}(r)=\rho _{\mathrm{c}rit}\delta _c\stackrel{~}{\rho }(\frac{cr}{r_{200}}),$$ (5) where $`c`$ is the halo concentration, $`\delta _c`$ is the characteristic density, and $`\stackrel{~}{\rho }(y)`$ is a nonnegative, declining function of $`y`$. 
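As a quick sanity check (the radius below is an illustrative value; only $`G`$ and $`H_0`$ are physical constants), equations (1), (2), and (4) are mutually consistent: substituting the Eq. (1) mass into the definition $`V_{200}=\sqrt{GM_{200}/r_{200}}`$ reproduces $`V_{200}=10H_0r_{200}Z^{1/2}`$ identically:

```python
import math

G  = 6.674e-11      # m^3 kg^-1 s^-2
H0 = 2.27e-18       # s^-1 (roughly 70 km/s/Mpc)
Z  = 1.0            # low redshift

rho_crit = 3 * H0**2 / (8 * math.pi * G) * Z        # Eq. (2)
r200 = 3.086e22                                     # 1 Mpc in metres (illustrative)
M200 = 800 * math.pi / 3 * r200**3 * rho_crit       # Eq. (1)

V_direct = math.sqrt(G * M200 / r200)               # definition of V200
V_scaled = math.sqrt(Z) * 10 * H0 * r200            # Eq. (4)
print(V_direct, V_scaled)                           # ~7e5 m/s, i.e. ~700 km/s
```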
Then equation (1) requires the following relation between $`\delta _c`$ and $`c`$, $`\delta _c`$ $`=`$ $`{\displaystyle \frac{200c^3}{3C(c)}},`$ (6) $`C(y)`$ $`=`$ $`{\displaystyle \int _0^y}t^2\stackrel{~}{\rho }(t)𝑑t.`$ (7) In scale-free cosmologies, the characteristic density also depends on $`n`$, the slope of the initial fluctuation spectrum, because $`c`$ should trace the mean density of the universe at the time of the halo’s formation. NFW use Press-Schechter theory to examine the exact relationship among $`\delta _c`$, $`M_{200}`$, and $`n`$. They find that the characteristic density should be proportional to the mean density of the universe at the epoch when the nonlinear mass $`M_{*}`$ is (1) a fixed and (2) a very small fraction of the current halo mass. This implies that $`\delta _c`$ should scale with $`M`$ and $`n`$ the same way as the average background density scales with $`M_{*}`$ and $`n`$: $$\delta _c\propto M_{200}^{-(n+3)/2}.$$ (8) The slope of this scaling relation is better than 5% accurate for all $`\mathrm{\Omega }\le 1`$ cosmologies with either $`\mathrm{\Lambda }=0`$ or $`\mathrm{\Omega }+\mathrm{\Lambda }=1`$, where $`\mathrm{\Lambda }`$ is the cosmological constant in units of $`3H^2`$. With the substitution $`\gamma =(n+3)/2`$ and use of equations (1), (4), and (6), $$\frac{c^3}{C(c)}\propto V_{200}^{-3\gamma }Z^{\gamma /2}.$$ (9) Next, I compute the circular velocity profile of the halo, $`V_{\mathrm{c}irc}=\sqrt{GM(r)/r}`$. Under the transformation $`r\to yr_{200}/c`$, $$V_{\mathrm{c}irc}(y)^2=V_{200}^2\frac{cC(y)}{yC(c)}$$ (10) Note that the dark matter velocity dispersion profile, $`\sigma _{\mathrm{d}m}(y)`$, will not in general have the same shape as $`V_{\mathrm{c}irc}(y)`$, contrary to the assumption in similar derivations (e.g. Eke et al. 1998). 
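For a concrete instance (using the NFW-like profile $`\stackrel{~}{\rho }(y)=1/(y(1+y)^2)`$ as an assumed example), $`C(y)`$ from Eq. (7) has the closed form $`\mathrm{ln}(1+y)-y/(1+y)`$, and Eq. (6) then gives the characteristic density directly; the sketch below checks the closed form against crude numerical quadrature:

```python
import math

# NFW-like profile rho~(y) = 1/(y (1+y)^2):
# the Eq. (7) integrand is y^2 * rho~(y) = y/(1+y)^2, smooth at the origin.
def rho_tilde(y):
    return 1.0 / (y * (1 + y)**2)

def C_numeric(c, n=200000):
    h = c / n
    return sum(((k + 0.5) * h)**2 * rho_tilde((k + 0.5) * h) * h
               for k in range(n))

def C_closed(c):
    return math.log(1 + c) - c / (1 + c)

c = 5.0
delta_c = 200 * c**3 / (3 * C_closed(c))    # Eq. (6)
print(C_numeric(c), C_closed(c), delta_c)   # delta_c is ~8.7e3 for c = 5
```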
Rather, the velocity dispersion of the halo is given by the Jeans equation for a spherical, nonrotating system of collisionless particles (Binney & Tremaine 1987): $$\frac{d\left(\sigma _r^2\rho \right)}{dr}+\frac{2\beta _\mathrm{J}\rho \sigma _r^2}{r}=-\frac{GM\rho }{r^2}.$$ (11) Here $`\sigma _r`$ is the radial velocity dispersion, $`\beta _\mathrm{J}`$ is the velocity anisotropy parameter, and $`M`$ is the mass inside the radius $`r`$. For systems with isotropic velocity dispersion tensors, $`\beta _\mathrm{J}=0`$, and locally $`\sigma _{\mathrm{d}m}^2=3\sigma _r^2`$. In terms of the previously defined quantities, equation (11) has the solution $`\sigma _r^2(y)`$ $`=`$ $`V_{200}^2{\displaystyle \frac{c}{C(c)}}\stackrel{~}{\sigma }^2(y),`$ (12) $`\stackrel{~}{\sigma }^2(y)`$ $`=`$ $`{\displaystyle \frac{1}{\stackrel{~}{\rho }(y)}}{\displaystyle \int _y^{\infty }}{\displaystyle \frac{\stackrel{~}{\rho }(t)C(t)}{t^2}}𝑑t,`$ (13) Finally, consider the average dark matter velocity dispersion, $`\sigma _{\mathrm{d}m}^2`$, within the virial radius $`r_{200}`$: $`\sigma _{\mathrm{d}m}^2`$ $`=`$ $`{\displaystyle \frac{\int _0^c3\sigma _r^2(y)\stackrel{~}{\rho }(y)y^2𝑑y}{\int _0^c\stackrel{~}{\rho }(y)y^2𝑑y}}`$ (14) $`=`$ $`V_{200}^2{\displaystyle \frac{cD(c)}{C(c)^2}},`$ (15) $`D(y)`$ $`=`$ $`{\displaystyle \int _0^y}3\stackrel{~}{\sigma }^2(t)\stackrel{~}{\rho }(t)t^2𝑑t.`$ (16) ### 2.2. X-Ray Luminosity The x-ray luminosity of a ball of plasma is $$L_\mathrm{X}=\int \lambda (T)n_en_i𝑑V.$$ (17) Here $`\lambda (T)`$ is the bolometric emissivity as a function of the local temperature $`T`$, $`n_e`$ is the electron number density, $`n_i`$ is the ion number density, and the integral is over the entire volume of the system. 
Now if (1) $`n_i=n_e`$, (2) the system is spherically symmetric, and (3) the gas density is related to the dark matter distribution by $`\rho _{\mathrm{g}as}(r)=f\rho _{\mathrm{d}m}(r)`$, where $`f`$ is the gas mass fraction, then $$L_\mathrm{X}=4\pi \int _0^{\infty }\lambda (T)\left(\frac{f\rho _{\mathrm{d}m}}{\mu m_p}\right)^2r^2𝑑r,$$ (18) where $`\mu \equiv \rho _{\mathrm{g}as}/(n_im_p)`$ is the mean molecular weight. Substitution of equation (5) yields, $$L_\mathrm{X}=4\pi \left(\frac{f\rho _{\mathrm{c}rit}}{\mu m_p}\right)^2\frac{\delta _c^2}{c^3}r_{200}^3\int _0^{\infty }\lambda (T)\stackrel{~}{\rho }(y)^2y^2𝑑y.$$ (19) Now eliminate $`r_{200}`$, $`\delta _c`$, and $`\rho _{\mathrm{c}rit}`$ using equations (2), (4) and (6): $`L_X`$ $`=`$ $`{\displaystyle \frac{5H_0Z^{1/2}}{2\pi G^2}}\left({\displaystyle \frac{f}{\mu m_p}}\right)^2{\displaystyle \frac{c^3}{C(c)^2}}V_{200}^3`$ $`\times `$ $`{\displaystyle \int _0^{\infty }}\lambda (T)\stackrel{~}{\rho }(y)^2y^2𝑑y.`$ Next, note that thermal bremsstrahlung is the dominant cooling process for rich clusters of galaxies; hence $`\lambda (T)\propto T^{1/2}`$. Then, if the gas is in local hydrostatic equilibrium with the dark matter, $`T(r)\propto \sigma _{\mathrm{d}m}(r)^2`$, and hence $`\lambda (T,r)\propto \sigma _{\mathrm{d}m}(r)`$. Leaving out the constants and substituting equation (12) yields, $$L_\mathrm{X}\propto f^2Z^{1/2}\frac{c^{7/2}}{C(c)^{5/2}}V_{200}^4\int _0^{\infty }3\stackrel{~}{\sigma }(y)\stackrel{~}{\rho }(y)^2y^2𝑑y$$ (21) The integral on the right-hand side is just a number, and may be dropped. Now one must replace $`V_{200}`$, which is not observable, with the projected galaxy velocity dispersion $`\sigma _p`$, which can be determined from optical surveys. In the cores of relaxed clusters $`\sigma _p^2`$ is proportional to $`\sigma _{\mathrm{d}m}^2`$, the average dark matter velocity dispersion (equation 14). 
Then $$L_X\propto f^2Z^{1/2}\frac{c^{3/2}C(c)^{3/2}}{D(c)^2}\sigma _p^4.$$ (22) If $`f`$ and $`c`$ remain constant, the above expression reduces to the traditional $`L_\mathrm{X}\propto \sigma ^4`$ scaling law from simpler, dimensional arguments (e.g., Quintana & Melnick 1982). However, equation (9) tells us that $`c`$ is a function of the velocity dispersion and the slope of the primordial spectrum. In fact, once $`\stackrel{~}{\rho }(y)`$ is specified, equations (7), (9), (15), and (16) may be used to eliminate $`V_{200}`$, $`c`$, $`C(c)`$, and $`D(c)`$, and one may thus obtain the dependence of $`L_X`$ on $`\sigma _p`$ and $`n`$. Specifically, suppose that within the range of interest for $`c`$, $`C(c)`$ and $`D(c)`$ exhibit a power law behavior of the form $`C(c)\propto c^p`$ and $`D(c)\propto c^q`$. Then $$L_X\propto f^2Z^{1/2+\xi /6}\sigma _p^{4-\xi },$$ (23) where $$\xi =\frac{3(3+n)(3+3p-4q)}{3+14p-9q+n(6p-3-3q)}$$ (24) ### 2.3. Model Dependency I consider dark matter density profiles of the form $`\stackrel{~}{\rho }(y)=y^{-a}(1+y^b)^{-d}`$. Thus each profile’s shape may be specified by a set of three numbers, $`(a,b,d)`$. Some common profiles and their properties are listed in Table 1. Once the set $`(a,b,d)`$ is specified, $`C(c)`$ and $`D(c)`$ are readily computable. The relevant range of $`c`$ for systems of galaxies comes from measurements of surface number density profiles and of $`r_{200}`$ in clusters and groups of galaxies (Carlberg et al. 1997; Mahdavi et al. 1999). In these works, halos with masses in the range $`10^{14}`$–$`10^{16}M_{\odot }`$ have concentrations $`c\simeq `$ 2.5–9.5, in good agreement with N-body simulations (e.g., NFW). For all sets $`(a,b,d)`$ in Table 1, the power law approximations $`C(c)\propto c^p`$ and $`D(c)\propto c^q`$ are better than 8% accurate everywhere within $`c=`$2.5–9.5. Figure 3 shows $`\xi (n)`$ from equation (24) for various profiles. In all cases, $`\xi (n)`$ is positive, and approaches zero as $`n\to -3`$. 
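The following sketch makes the procedure concrete for the NFW profile (taken as an assumed example): it evaluates $`C`$ and $`D`$ numerically, extracts effective power-law exponents $`p`$ and $`q`$ over $`c=2.5`$–$`9.5`$ from the endpoints, and then evaluates $`\xi (n)`$ from Eq. (24). The integration cutoffs and grid sizes are arbitrary choices for the illustration:

```python
import math

# NFW-like profile and its cumulative mass integral (closed form)
def rho_tilde(y): return 1.0 / (y * (1 + y)**2)
def C(y): return math.log(1 + y) - y / (1 + y)

def sigma2(y, ymax=2000.0, n=1500):
    # Eq. (13) on a log-spaced midpoint grid (upper limit truncated)
    h = math.log(ymax / y) / n
    s = 0.0
    for k in range(n):
        t = y * math.exp((k + 0.5) * h)
        s += rho_tilde(t) * C(t) / t**2 * t * h   # extra t = Jacobian dt/du
    return s / rho_tilde(y)

def D(c, n=300):
    # Eq. (16), midpoint rule
    h = c / n
    return sum(3 * sigma2((k + 0.5) * h) * rho_tilde((k + 0.5) * h)
               * ((k + 0.5) * h)**2 * h for k in range(n))

def slope(f, a=2.5, b=9.5):
    # effective power-law exponent over the observed concentration range
    return (math.log(f(b)) - math.log(f(a))) / (math.log(b) - math.log(a))

p, q = slope(C), slope(D)

def xi(n):   # Eq. (24)
    return (3 * (3 + n) * (3 + 3 * p - 4 * q)
            / (3 + 14 * p - 9 * q + n * (6 * p - 3 - 3 * q)))

print(p, q, xi(-1.0))   # xi vanishes at n = -3 and grows toward n = 0
```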
As $`n\to 0`$, the models all predict a significant flattening of the $`L_\mathrm{X}`$–$`\sigma _p`$ relation. This is understandable through equation (8): the characteristic density $`\delta _c`$ is highly anticorrelated with halo mass in the $`n\sim 0`$ universes, and hence the emission measure increases quite slowly or not at all with mass. As $`n\to -3`$, $`\delta _c`$ becomes nearly independent of the halo mass, and therefore $`L_\mathrm{X}`$ increases rapidly with the velocity dispersion. ## 3. Application For low-redshift clusters of galaxies, $`Z(z,\mathrm{\Omega })\simeq 1`$ with high accuracy. The dependence of the gas mass fraction $`f`$ on $`T`$ and hence $`\sigma _p`$ is poorly determined. Some, using standard analysis, claim it should increase slightly with $`T`$ (e.g., David, Jones, & Forman 1995); others say that accounting for cooling flows should cause it to decrease slightly with $`T`$ (e.g. Allen & Fabian 1998). Mohr, Mathiesen, & Evrard (1999), in their detailed study of clusters with ROSAT pointings, find that within 1 Mpc $`f`$ is nearly independent of $`T`$. Because of the range of contrasting findings, I adopt $`f\propto T^{0.0\pm 0.2}\propto \sigma _p^{0.0\pm 0.4}`$ at the 68% confidence level. There is more agreement among observers regarding the empirical value of $`\alpha _3`$: Quintana & Melnick (1982) have $`\alpha _3=4.0\pm 0.7`$ for data from the Einstein satellite; theirs are 2–10 keV luminosities, which scale similarly to bolometric luminosities for rich clusters. More recently, Mulchaey & Zabludoff (1998; MZ98) combined ROSAT observations of groups and clusters with deep optical spectroscopy to obtain $`\alpha _3=4.29\pm 0.37`$ for bolometric luminosities. I adopt the average value, $`\alpha _3=4.15\pm 0.4`$. Assuming normally distributed errors, the adopted values of $`f`$ and $`\alpha _3`$ constrain $`\xi `$ to be $`<1.0`$ at the one-sided 90% confidence level, or $`<1.93`$ at the one-sided 99% confidence level. Figure 3 shows the $`\xi =1.0,1.93`$ boundaries. 
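The quoted limits follow from simple Gaussian error propagation through Eq. (23): since $`L_X\propto f^2\sigma _p^{4-\xi }`$ and $`f\propto \sigma _p^s`$ with $`s=0.0\pm 0.4`$, one has $`\xi =4+2s-\alpha _3`$. A sketch, assuming independent, normally distributed errors:

```python
import math

# One-sided upper limits on xi from alpha_3 = 4.15 +/- 0.4 and the
# gas-fraction scaling exponent s = 0.0 +/- 0.4 (both 68% errors).
mu    = 4.0 + 2 * 0.0 - 4.15                 # central value of xi
sigma = math.sqrt((2 * 0.4)**2 + 0.4**2)     # propagated 1-sigma error

z90, z99 = 1.2816, 2.3263                    # one-sided normal quantiles
xi90 = mu + z90 * sigma
xi99 = mu + z99 * sigma
print(round(xi90, 2), round(xi99, 2))        # ~1.0 and ~1.93, as quoted
```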
To accommodate all the models I consider in this scenario, $`n`$ must be $`<-2.0`$ at the 90% confidence level, and $`<-1.1`$ at the 99% confidence level. At least two systematic effects could bias these results. Preheating of the plasma in $`kT<4`$ keV clusters (Ponman, Cannon, & Navarro 1999) might suppress their luminosities, while leaving those of $`kT>4`$ keV clusters unchanged. One can avoid this bias by ignoring all clusters with $`kT<4`$ keV. I find that removing these clusters, which make up $`25\%`$ of the MZ98 sample, does not significantly affect the slope of the observed $`L_X`$–$`\sigma `$ relation. Also, cooling flows might affect the luminosities. While Markevitch (1998) finds that the slope of the $`L_X`$–$`T`$ relation does not change as a result of removing cooling flows, Allen & Fabian (1998) find that $`\alpha _2=3.1\pm 0.6`$ changes to $`\alpha _2=2.3\pm 0.4`$ after including cooling flows. The Markevitch (1998) method, which removes the cooling flow altogether, is more appropriate for probing the gravitational potential than the Allen & Fabian (1998) method, which includes the luminosity of the cooling component. To constrain the effect of cooling flows on $`\alpha _3`$, I conduct the following test. Of the MZ98 clusters, 34% are also contained in Markevitch (1998). I fit $`\alpha _3`$ for just these clusters, using the $`L_X`$ which excludes the cooling component. I find that the slope changes to $`\alpha _3=3.85\pm 0.3`$; with this, the upper limits on $`n`$ become $`n<-1.7`$ and $`n<-0.9`$ at the one-sided 90% and 99% confidence levels. The chief result of this paper—that if $`n`$ were much greater than $`-1`$, $`\alpha _3`$ should be $`\sim 2`$ instead of the observed value, $`\sim 4`$—is therefore not affected by the 10% correction due to cooling flows. ## 4. 
Conclusion If the characteristic densities of clusters of galaxies trace the background density of the universe at the time of each cluster’s formation, there should be a relatively simple relationship between the x-ray luminosity $`L_\mathrm{X}`$, the observed velocity dispersion $`\sigma _p`$, and the slope of the primordial power spectrum, $`n`$. This relationship, given by equations (23) and (24), depends slightly on the density profile of the clustered dark matter, but should not significantly depend on $`\mathrm{\Omega }`$ or $`\mathrm{\Lambda }`$ when applied to low-redshift clusters of galaxies. For a wide range of assumed density profiles, the observations imply $`n<-2.0`$ and $`n<-1.1`$ at the one-sided 90% and 99% confidence levels, respectively. This is consistent with the bounds from the cluster temperature function, which give $`n`$ between $`-2.3`$ and $`-1.15`$. Improving this constraint depends largely on a better understanding of preheating, cooling flows, and the variation of the gas mass fraction with $`\sigma _p`$. I thank the referee, Vincent Eke, for comments which improved the paper. I am grateful to Margaret Geller for our many discussions. Conversations with Saurabh Jha were also useful. This work was supported by the Smithsonian Institution and by the National Science Foundation. ## References Allen, S. W., & Fabian, A. C. 1998, MNRAS, 297, L57 Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton University Press) Carlberg, R. G., et al. 1997, ApJ, 495, L13 David, L. P., Jones, C., & Forman, W. 1995, ApJ, 445, 578 Eke, V., Navarro, J. F., & Frenk, C. S. 1998, ApJ, 503, 569 Henry, J. P., & Arnaud, K. A. 1991, ApJ, 372, 410 Henry, J. P. 1997, ApJ, 489, L1 Hernquist, L. 1990, ApJ, 356, 359 Jaffe, W. 1983, MNRAS, 202, 995 King, I. R. 1962, AJ, 67, 471 Mahdavi, A., Geller, M. J., Böhringer, H., Kurtz, M. J., & Ramella, M. 1999, ApJ, in press (astro-ph/9901095) Markevitch, M. 1998, ApJ, 504, 27 Mohr, J. 
J., Mathiesen, B., & Evrard, A. E. 1999, ApJ, in press (astro-ph/9901281) Mulchaey, J. S., & Zabludoff, A. I. 1998, ApJ, 496, 73 Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493 Ponman, T. J., Cannon, D. B., & Navarro, J. F. 1999, Nature, 397, 135 Quintana, H., & Melnick, J. 1982, AJ, 87, 972
# Nest Representations of TAF Algebras ## 1. Introduction Irreducible $`*`$-representations and their kernels, the primitive ideals, play a fundamental role in the theory of $`\text{C}^{*}`$-algebras. For example, the structure of the lattice of all ideals in a $`\text{C}^{*}`$-algebra is determined by the space of primitive ideals with the hull-kernel topology (there is a bijection between the lattice of ideals and the lattice of closed subsets of the primitive ideal space) and every ideal is the intersection of those primitive ideals which contain it. Recent work by Michael Lamoureux has shown that a similar situation prevails in a number of non-self-adjoint operator algebra settings. One motivation for investigating non-self-adjoint algebras arises from dynamical systems. While $`\text{C}^{*}`$-crossed products constructed from dynamical systems are very useful in the study of dynamical systems, some essential information may be lost. (There are different dynamical systems which give rise to isomorphic $`\text{C}^{*}`$-crossed products.) In the case of free, discrete systems, the remedy is to look instead at the semi-crossed product (a non-self-adjoint algebra) associated with the system (see ). In this context there is a bijection between the isomorphism classes of (free, discrete) dynamical systems and their associated semi-crossed products. This correspondence is established via an analysis of the space of ideals of the operator algebra. In attempting to extend this program to other dynamical systems, Lamoureux drew attention to nest representations and their kernels (which he called n-primitive ideals). A nest representation $`\pi `$ of an operator algebra $`A`$ is simply a continuous algebra homomorphism of $`A`$ into some $`\mathcal{B}(\mathcal{H})`$ with the property that the lattice of projections invariant under $`\pi (A)`$ is totally ordered. Nest representations do not arise naturally as a concept in the $`\text{C}^{*}`$-algebra context. 
For one thing, most representations arising in $`\text{C}^{*}`$-algebra theory are $`*`$-representations. A projection is invariant under a $`*`$-representation if, and only if, it is reducing; consequently, for $`*`$-representations, the family of nest representations reduces to the family of irreducible representations. Even if one looks at representations which are not $`*`$-representations, the n-primitive ideals in a $`\text{C}^{*}`$-algebra are just the primitive ideals. (Nest representations are necessarily topologically cyclic – this is valid for nest representations of any Banach algebra with a bounded approximate identity – and Haagerup has shown that a cyclic representation of a $`\text{C}^{*}`$-algebra is similar to a $`*`$-representation.) In the non-self-adjoint context, the story is quite different. Non-self-adjoint algebras may lack primitive ideals, but n-primitive ideals generally abound. Lamoureux has shown that, at least in the presence of certain hypotheses, n-primitive ideals in semi-crossed products provide information about arc spaces in dynamical systems, so that the semi-crossed product essentially determines the dynamical system. He has also shown that, in a variety of contexts, the n-primitive ideals carry a topology and that the lattice of closed sets in this topology is isomorphic to the lattice of ideals in the original algebra. Also, every ideal is equal to the intersection of the n-primitive ideals which contain it. Thus, for general operator algebras, n-primitive ideals are often a suitable replacement for the primitive ideals of $`\text{C}^{*}`$-algebra theory. The topology which Lamoureux puts on the n-primitive ideals is, of course, the hull-kernel topology. In order to show that the hull-kernel operation yields a topology, Lamoureux uses a technical property for ideals which is related to meet irreducibility and which implies meet irreducibility. (An ideal $`ℐ`$ is meet irreducible if, for any ideals $`𝒥`$ and $`𝒦`$, $`ℐ=𝒥\cap 𝒦`$ implies $`ℐ=𝒥`$ or $`ℐ=𝒦`$.) 
Meet irreducibility arises in in connection with semi-crossed products and dynamical systems. The following four results from indicate that meet irreducibility will be closely related to n-primitivity in diverse operator algebra contexts. * Let $`ℐ`$ be a closed, two-sided ideal in a separable $`\text{C}^{*}`$-algebra. Then $`ℐ`$ is n-primitive $`\iff `$ $`ℐ`$ is primitive $`\iff `$ $`ℐ`$ is prime $`\iff `$ $`ℐ`$ is meet irreducible. (Some of this has, of course, been known for a long while.) * Let $`ℐ`$ be a closed, two-sided ideal in $`\mathrm{Alg}(𝒩)\cap 𝒦`$, where $`𝒩`$ is a nest of closed subspaces in some Hilbert space $`ℋ`$, $`\mathrm{Alg}(𝒩)`$ is the associated nest algebra consisting of all operators which leave invariant each subspace in $`𝒩`$, and $`𝒦`$ is the algebra of all compact operators acting on $`ℋ`$. Then, $`ℐ`$ is n-primitive $`\iff `$ $`ℐ`$ is meet irreducible $`\iff `$ $`ℐ`$ is the kernel of the compression map of $`\mathrm{Alg}(𝒩)\cap 𝒦`$ to some interval from $`𝒩`$. * Let $`ℐ`$ be a closed ideal in the disk algebra, $`A(𝔻)`$, the algebra of continuous functions on the unit disk of $`ℂ`$ which are analytic in the interior. Then, $`ℐ`$ is n-primitive $`\iff `$ $`ℐ`$ is meet irreducible $`\iff `$ $`ℐ`$ is either primary or zero. (In this context, it is not true that every ideal is the intersection of meet irreducible ideals.) * Let $`A=T_{n_1}\oplus \cdots \oplus T_{n_k}`$ be a direct sum of upper triangular $`n_j`$ by $`n_j`$ matrix algebras and let $`ℐ`$ be a two-sided ideal in $`A`$. Then, $`ℐ`$ is n-primitive $`\iff `$ $`ℐ`$ is meet irreducible. Meet irreducible ideals in the context of triangular AF algebras (TAF algebras) were investigated in . In particular, it was proven that every meet irreducible ideal in a strongly maximal TAF algebra is n-primitive \[3, Theorem 2.4\]. Once again, the lattice of all ideals is isomorphic to the lattice of closed sets in the space of meet irreducible ideals with the hull-kernel topology and every ideal is an intersection of meet irreducible ideals. 
The converse, “every n-primitive ideal is meet irreducible,” was left open, however; it is the purpose of this note to investigate this converse in the context of strongly maximal TAF algebras. In section 2 we describe a broad class of algebras (those characterized by “totally ordered spectrum”) for which the converse always holds. In section 3 we show that the converse is valid for any nest representation which is similar to a representation $`\pi `$ which is a $`*`$-representation on the diagonal $`D`$ of the algebra and for which the von Neumann algebra generated by $`\pi (D)`$ contains an atom. These two sections may be read independently of each other. ### 1.1. Notation We now establish notation and terminology. Let $`B`$ be an AF $`\text{C}^{*}`$-algebra and let $`D`$ be a canonical masa in $`B`$. This implies that there is a sequence of finite dimensional $`\text{C}^{*}`$-algebras $`B_i`$ and embeddings $`\varphi _i:B_i\to B_{i+1}`$ such that $`B=\underset{\to }{\mathrm{lim}}(B_i,\varphi _i)`$ and $`D=\underset{\to }{\mathrm{lim}}(D_i,\varphi _i)`$, where each $`D_i=D\cap B_i`$ is a masa in $`B_i`$ and each $`\varphi _i`$ maps the normalizer of $`D_i`$ into the normalizer of $`D_{i+1}`$. In particular, the $`D`$-normalizing partial isometries in $`B`$ have a linear span which is dense in $`B`$ ($`w`$ normalizes $`D`$ if $`wDw^{*}\subseteq D`$ and $`w^{*}Dw\subseteq D`$; the set of $`D`$-normalizing partial isometries in $`B`$ is denoted by $`N_D(B)`$). A TAF algebra $`A`$ with diagonal $`D`$ is a subalgebra of $`B`$ such that $`A\cap A^{*}=D`$. It follows that $`A=\underset{\to }{\mathrm{lim}}(A_i,\varphi _i)`$, where $`A_i=A\cap B_i`$, for all $`i`$. Each $`A_i`$ is necessarily triangular in $`B_i`$ with diagonal $`D_i`$; if, in addition, each $`A_i`$ is maximal triangular then we say that $`A`$ is strongly maximal. This is equivalent to requiring that $`A+A^{*}`$ be dense in $`B`$. 
AF $`\text{C}^{*}`$-algebras are groupoid $`\text{C}^{*}`$-algebras with groupoids which are especially tractable: the groupoids are topological equivalence relations. As such, the $`\text{C}^{*}`$-algebra can be identified with an algebra of functions on the groupoid. Subalgebras such as TAF algebras and ideals determine and are determined by appropriate substructures of the groupoid. This provides a coordinatization for TAF algebras and their ideals. The following is a brief sketch of this coordinatization for strongly maximal TAF algebras. For a more thorough description, see or . Let $`(D,A,B)`$ be a triple in which $`B`$ is an AF $`\text{C}^{*}`$-algebra, $`D`$ is a canonical masa, and $`A`$ is a strongly maximal TAF subalgebra of $`B`$ with diagonal $`D`$. We need to describe the spectral triple, $`(X,P,G)`$, associated with $`(D,A,B)`$. The first ingredient $`X`$ is the usual spectrum of the abelian $`\text{C}^{*}`$-algebra $`D`$ (so that $`D\cong C(X)`$). Note that, in the present context, $`X`$ will be a Cantor space. The algebra $`B`$ is generated by partial isometries which normalize $`D`$. Each normalizing partial isometry induces a partial homeomorphism of $`X`$ into itself (a homeomorphism of a closed subset of $`X`$ onto another closed subset). The union of the graphs of these homeomorphisms is an equivalence relation; when this set is topologized so that the graph of each normalizing partial isometry is open and closed, one obtains the groupoid $`G`$. The spectrum $`P`$ of $`A`$ is the union of the graphs of the normalizing partial isometries which lie in $`A`$. If $`x\in X`$, the equivalence class in $`G`$ which contains $`x`$ is referred to as the orbit of $`x`$ (since it consists of all the images of $`x`$ under homeomorphisms associated with $`D`$-normalizing partial isometries) and is denoted by $`\text{orb}_x`$. When $`(x,y)\in P`$, we shall often write $`x\preceq y`$; when $`A`$ is strongly maximal, $`P`$ induces a total order on each orbit. 
Now suppose that $`\pi `$ is a nest representation of a TAF algebra $`A`$. Since $`D`$ is an abelian $`\text{C}^{*}`$-algebra, results in imply that the restriction of $`\pi `$ to $`D`$ is similar to a $`*`$-representation. (For the limited domain in which we need this theorem, abelian $`\text{C}^{*}`$-algebras, Kadison attributes this fact to unpublished 1952 lecture notes of Mackey.) Thus any nest representation of a TAF algebra is similar to another nest representation whose restriction to the diagonal is a $`*`$-representation. Accordingly, we assume throughout this paper that any nest representation $`\pi `$ of a TAF algebra $`A`$ acts as a $`*`$-representation on the diagonal, $`D`$. ## 2. Totally Ordered Spectrum In this section, $`A`$ will be a strongly maximal TAF algebra whose $`\text{C}^{*}`$-envelope $`B`$ is simple. This implies that the orbit of each element in $`X`$ is dense in $`X`$. The diagonal of $`A`$ is denoted by $`D`$. The spectral triple for $`(D,A,B)`$ is $`(X,P,G)`$. We assume that $`X`$ has a total order $`\preceq `$ which agrees on each equivalence class with the total order induced by $`P`$. $`X`$ has a minimal and a maximal element, which we denote by $`0`$ and $`1`$, respectively. If $`a\in X`$, we say that $`a`$ has a gap above if $`a`$ has an immediate successor and that $`a`$ has a gap below if $`a`$ has an immediate predecessor. Elements of $`B`$ will be viewed as continuous functions in $`C_0(G)`$; elements of $`A`$ are continuous functions whose supports lie in $`P`$. Of course, not all elements of $`C_0(G)`$ are elements of $`B`$, but all those with compact support certainly are in $`B`$. When elements of $`B`$ are viewed as functions, the multiplication is given by a convolution formula with respect to a Haar system consisting of counting measure on each equivalence class from $`G`$. If $`Y`$ is a clopen subset of $`X`$, then $`E_Y`$ denotes the projection in $`D`$ corresponding to $`Y`$ (i.e., the characteristic function of $`Y`$). 
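A standard concrete algebra satisfying all of these hypotheses, recalled here purely for orientation (it is not drawn from the text above), is the refinement limit algebra:

```latex
% T_{2^k}: upper triangular 2^k x 2^k matrices; \rho_k acts on matrix
% units by the refinement embedding.
A \;=\; \varinjlim\,\bigl(T_{2^k},\rho_k\bigr),
\qquad
\rho_k(e_{ij}) \;=\; e_{2i-1,\,2j-1}+e_{2i,\,2j}.
% The C*-envelope is the simple 2^\infty UHF algebra, the diagonal has
% spectrum X \cong \{0,1\}^{\mathbb{N}} (a Cantor space), and the
% lexicographic order on X is a total order restricting to the P-order
% on each orbit (orbits are the tail-equivalence classes).
```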
In the situation where $`X`$ carries a total order compatible with $`P`$, the meet irreducible ideal sets have been completely described in Theorem 3.1 in . For the convenience of the reader, we restate that theorem here. For each pair of elements $`a`$ and $`b`$ in $`X`$, define two subsets of $`P`$: $`\sigma _{a,b}`$ $`=\{(x,y)\in P\mid x\preceq a\text{ or }b\preceq y\},`$ $`\tau _{a,b}`$ $`=\sigma _{a,b}\setminus \{(a,b)\}.`$ The set $`\sigma _{a,b}`$ is an ideal set in $`P`$; the set $`\tau _{a,b}`$ is an ideal set provided that $`(a,b)\in P`$ and $`\tau _{a,b}`$ is an open subset of $`P`$. (We always assume that these two conditions hold.) ###### Theorem 2.1. Assume that $`A`$ is a strongly maximal TAF algebra with simple $`\text{C}^{*}`$-envelope and totally ordered spectrum. The following is a complete list of all the meet irreducible ideal sets in $`P`$: * $`\sigma _{a,b}`$, if $`(a,b)\notin P`$; * $`\sigma _{a,b}`$, if $`(a,b)\in P`$ and either $`a`$ has no gap above or $`b`$ has no gap below; * $`\tau _{a,b}`$, where either $`a`$ has no gap above or $`b`$ has no gap below. ###### Theorem 2.2. Let $`\pi `$ be a continuous nest representation of $`A`$, where $`A`$ is a strongly maximal TAF algebra with simple $`\text{C}^{*}`$-envelope and totally ordered spectrum. Then the kernel of $`\pi `$ is a meet irreducible ideal in $`A`$. ###### Proof. Let $`\pi `$ be a continuous nest representation of $`A`$ acting on the Hilbert space $`ℋ`$. Let $`\sigma `$ be the ideal set in $`P`$ which corresponds to the ideal $`\mathrm{ker}\pi `$. Through a series of facts, we will show that $`\sigma `$ is one of the meet irreducible ideal sets listed in Theorem 2.1; consequently, $`\mathrm{ker}\pi `$ is meet irreducible. ###### Fact 1. Suppose that $`a`$ has a gap above or, equivalently, that $`E_{[0,a]}`$ is a projection in $`A`$ not equal to the identity. Then $`E_{[0,a]}`$ is invariant for $`A`$ and hence $`\pi (E_{[0,a]})\in \mathrm{Lat}\pi (A)`$. ###### Proof. Let $`f\in A`$; view $`f`$ as a $`C_0`$ function on $`G`$. 
For any $`(x,y)\in G`$, $`fE_{[0,a]}(x,y)`$ $`={\displaystyle \underset{z\in \text{orb}_x}{\sum }}f(x,z)E_{[0,a]}(z,y)`$ $`=\{\begin{array}{cc}f(x,y),\hfill & \text{if }y\preceq a\hfill \\ 0,\hfill & \text{otherwise}\hfill \end{array}`$ $`=E_{[0,a]}(x,x)f(x,y)E_{[0,a]}(y,y)`$ $`=E_{[0,a]}fE_{[0,a]}(x,y).`$ We use the fact that $`(x,y)\in \mathrm{supp}f`$ implies that $`x\preceq y`$ in $`X`$. ∎ ###### Fact 2. If $`a`$ has a gap above and if there is a point $`(a,c)\in P\setminus \sigma `$, then $`\pi (E_{[0,a]})\ne 0`$ and $`\pi (E_{[0,a]})`$ is a non-trivial invariant projection for $`\pi (A)`$. ###### Remark. If we were not assuming that $`\pi `$ is normalized to a $`*`$-representation of $`D`$, then $`\pi (E_{[0,a]})`$ would be a non-trivial idempotent whose range is an invariant subspace for $`\pi (A)`$. ###### Proof. Since $`(a,a)(a,c)=(a,c)`$ and $`\sigma `$ is an ideal set, $`(a,a)\in P\setminus \sigma `$. If $`\pi (E_{[0,a]})=0`$, then $`E_{[0,a]}\in \mathrm{ker}\pi `$; hence, $`\mathrm{supp}E_{[0,a]}\subseteq \sigma `$. Thus $`(a,a)\in \sigma `$, a contradiction. ∎ ###### Fact 3. If $`b`$ has a gap below and if there is a point $`(c,b)\in P\setminus \sigma `$, then $`\pi (E_{[b,1]})\ne 0`$. ###### Proof. Essentially the same as for Fact 2: $`(c,b)(b,b)=(c,b)`$, so $`(b,b)\in P\setminus \sigma `$. ∎ ###### Fact 4. Assume $`a`$ has a gap above, $`b`$ has a gap below, $`\pi (E_{[0,a]})\ne 0`$, and $`\pi (E_{[b,1]})\ne 0`$. If $`E_{[0,a]}fE_{[b,1]}\in \mathrm{ker}\pi `$ for all $`f\in N_D(A)`$ (or, for all $`f`$ in a set with linear span dense in $`A`$), then $`\pi `$ is not a nest representation. ###### Proof. Since $`\mathrm{ker}\pi `$ is closed, $`E_{[0,a]}fE_{[b,1]}\in \mathrm{ker}\pi `$, for all $`f\in A`$. As a consequence, we have $`\pi (E_{[0,a]})\pi (f)\pi (E_{[b,1]})=0`$, for all $`f\in A`$. Let $`𝒦_1`$ be the range of $`\pi (E_{[0,a]})`$ and $`𝒦_2`$ be the norm closure of $`\pi (A)\pi (E_{[b,1]})ℋ`$. Then $`𝒦_1`$ and $`𝒦_2`$ are non-zero invariant subspaces for $`\pi (A)`$ and $`𝒦_1\cap 𝒦_2=(0)`$. So $`\mathrm{Lat}\pi (A)`$ is not totally ordered by inclusion; $`\pi `$ is not a nest representation.
∎ In what follows we use the standard identification of $`X`$ with the diagonal of $`P`$, i.e. with $`\{(x,x)\mid x\in X\}`$. ###### Fact 5. If $`\pi `$ is a nest representation, then $`(P\setminus \sigma )\cap X`$ is an interval in $`X`$. ###### Proof. Suppose not. Then there are three points $`a,b,c`$ with $`a\prec b\prec c`$ in $`X`$ such that $`(a,a)\in P\setminus \sigma `$, $`(b,b)\in \sigma `$, and $`(c,c)\in P\setminus \sigma `$. It follows that if $`(x,y)\in P`$ with $`x\preceq b`$ and $`b\preceq y`$, then $`(x,y)\in \sigma `$. (Since $`\sigma `$ is open there is a neighborhood of $`(b,b)`$ which is contained in $`\sigma `$; use the assumption that all orbits are dense and the ideal set property for $`\sigma `$.) Choose $`\alpha `$ and $`\beta `$ so that $`a\preceq \alpha \preceq b`$, $`b\preceq \beta \preceq c`$, $`\alpha `$ has a gap above, and $`\beta `$ has a gap below. Then $`(a,a)\in \mathrm{supp}E_{[0,\alpha ]}`$; hence $`\pi (E_{[0,\alpha ]})\ne 0`$ and the range of $`\pi (E_{[0,\alpha ]})`$ is in $`\mathrm{Lat}\pi (A)`$. Also, $`(c,c)\in \mathrm{supp}E_{[\beta ,1]}`$, so $`\pi (E_{[\beta ,1]})\ne 0`$. For any $`g\in A`$, $`(x,y)\in \mathrm{supp}E_{[0,\alpha ]}gE_{[\beta ,1]}`$ $`\Rightarrow x\preceq \alpha \preceq b\text{ and }b\preceq \beta \preceq y`$ $`\Rightarrow (x,y)\in \sigma .`$ Consequently, $`E_{[0,\alpha ]}gE_{[\beta ,1]}\in \mathrm{ker}\pi `$. Fact 4 now implies that $`\pi `$ is not a nest representation, contradicting the hypothesis. ∎ Assume $`\pi `$ is a nest representation and that $`a`$ is the left endpoint of $`(P\setminus \sigma )\cap X`$ and $`b`$ is the right endpoint of $`(P\setminus \sigma )\cap X`$. Each of $`a`$ and $`b`$ may or may not be elements of $`(P\setminus \sigma )\cap X`$. However, if $`a`$ has a gap above, then without loss of generality, we may assume that $`a\in (P\setminus \sigma )\cap X`$ (simply replace $`a`$ by its immediate successor, if necessary). Similarly, if $`b`$ has a gap below, we may assume that $`b\in (P\setminus \sigma )\cap X`$. ###### Fact 6. If $`a\prec \alpha \preceq \beta \prec b`$ and $`(\alpha ,\beta )\in P`$, then $`(\alpha ,\beta )\in P\setminus \sigma `$. ###### Proof. Suppose that $`a\prec \alpha \preceq \beta \prec b`$, $`(\alpha ,\beta )\in P`$, and $`(\alpha ,\beta )\in \sigma `$.
Choose $`\overline{\alpha }`$ and $`\overline{\beta }`$ so that $`a\preceq \overline{\alpha }\prec \alpha \preceq \beta \prec \overline{\beta }\preceq b`$, $`\overline{\alpha }`$ has a gap above, and $`\overline{\beta }`$ has a gap below. \[Exceptions: if $`\alpha =\mathrm{succ}a`$, choose $`\overline{\alpha }=a`$ and note that this element is not in $`\sigma \cap X`$; if $`\beta =\mathrm{pred}b`$, choose $`\overline{\beta }=b`$ and note that this is not in $`\sigma \cap X`$.\] It follows that $`\pi (E_{[0,\overline{\alpha }]})`$ is non-zero and has range in $`\mathrm{Lat}\pi (A)`$ and that $`\pi (E_{[\overline{\beta },1]})\ne 0`$. Let $`g\in A`$ be arbitrary. If $`(x,y)`$ is in $`\mathrm{supp}E_{[0,\overline{\alpha }]}gE_{[\overline{\beta },1]}`$, then $`x\preceq \overline{\alpha }\prec \alpha `$ and $`\beta \prec \overline{\beta }\preceq y`$; hence $`(x,y)\in \sigma `$. Thus $`E_{[0,\overline{\alpha }]}gE_{[\overline{\beta },1]}\in \mathrm{ker}\pi `$, for all $`g`$. Fact 4 now implies that $`\pi `$ is not a nest representation, a contradiction. ∎ ###### Conclusion A. If $`\pi `$ is a nest representation and $`a`$ and $`b`$ are the endpoints of the interval $`(P\setminus \sigma )\cap X`$, then $$\{(x,y)\in P\mid x\prec a\text{ or }b\prec y\}\subseteq \sigma \subseteq \{(x,y)\in P\mid x\preceq a\text{ or }b\preceq y\}.$$ ###### Proof. The conclusion follows from the following implications for a point $`(x,y)\in P`$: $`x\prec a\Rightarrow (x,x)\in \sigma \Rightarrow (x,y)\in \sigma `$ $`b\prec y\Rightarrow (y,y)\in \sigma \Rightarrow (x,y)\in \sigma `$ $`a\prec x\preceq y\prec b\Rightarrow (x,y)\in P\setminus \sigma `$ Given $`a`$ and $`b`$ in $`X`$, let $`H`$ $`=\{(a,y)\in P\mid a\preceq y\preceq b\},\text{ and}`$ $`V`$ $`=\{(x,b)\in P\mid a\preceq x\preceq b\}.`$ ###### Fact 7. If $`a`$ has no gap above, then $`H\cap \sigma \subseteq \{(a,b)\}`$. If $`b`$ has no gap below, then $`V\cap \sigma \subseteq \{(a,b)\}`$. ###### Proof. This follows immediately from the fact that $`\sigma `$ is an open subset of $`P`$. ∎ ###### Fact 8. If $`a`$ has a gap above, then either $`H\cap \sigma =H`$ or $`H\cap \sigma \subseteq \{(a,b)\}`$. ###### Proof. Assume the contrary. Then $`(a,a)\notin \sigma `$ and there is $`\beta `$ such that $`a\prec \beta \prec b`$ and $`(a,\beta )\in \sigma `$. Consider two cases. First, assume that $`\beta =\mathrm{pred}b`$.
In this case, since $`b`$ has a gap below, we also have available the assumption that $`(b,b)\notin \sigma `$. (See the comments after Fact 5.) Since $`(a,a)\notin \sigma `$ and $`(b,b)\notin \sigma `$, both $`\pi (E_{[0,a]})`$ and $`\pi (E_{[b,1]})`$ are non-zero. Furthermore, for any $`g\in A`$, $`\mathrm{supp}E_{[0,a]}gE_{[b,1]}\subseteq \sigma `$. Indeed, if $`(x,y)\in \mathrm{supp}E_{[0,a]}gE_{[b,1]}`$, then $`x\preceq a`$ and $`b\preceq y`$. If either $`x\prec a`$ or $`b\prec y`$, then $`(x,y)\in \sigma `$. If $`x=a`$ and $`y=b`$, then $`(a,b)=(x,y)\in P`$; since $`(a,\beta )\in \sigma `$, we also have $`(a,b)\in \sigma `$. Fact 4 implies that $`\pi `$ is not a nest representation, a contradiction. In the alternative case, $`\beta `$ is not an immediate predecessor of $`b`$ and there is $`\overline{\beta }`$ such that $`\beta \prec \overline{\beta }\prec b`$ and $`\overline{\beta }`$ has a gap below. Since $`(\overline{\beta },\overline{\beta })\notin \sigma `$, $`\pi (E_{[\overline{\beta },1]})\ne 0`$. As before, $`\pi (E_{[0,a]})\ne 0`$. Let $`(x,y)\in \mathrm{supp}E_{[0,a]}gE_{[\overline{\beta },1]}`$, where $`g`$ is any element of $`A`$. If $`x\prec a`$ then $`(x,y)\in \sigma `$. If $`x=a`$ then $`\beta \prec \overline{\beta }\preceq y`$, whence $`(x,y)=(a,y)\in \sigma `$. Once again, Fact 4 yields a contradiction. ∎ ###### Fact 9. If $`b`$ has a gap below, then either $`V\cap \sigma =V`$ or $`V\cap \sigma \subseteq \{(a,b)\}`$. ###### Proof. The idea behind the proof is essentially the same as in the proof of Fact 8. This time, if the conclusion does not hold, then $`(b,b)\notin \sigma `$ and there is $`\alpha `$ such that $`a\prec \alpha \prec b`$ and $`(\alpha ,b)\in \sigma `$. If $`\alpha =\mathrm{succ}a`$ then take $`\overline{\alpha }=a`$; otherwise, take $`\overline{\alpha }`$ so that $`a\preceq \overline{\alpha }\prec \alpha `$ and $`\overline{\alpha }`$ has a gap above. Now apply Fact 4 to $`E_{[0,\overline{\alpha }]}`$ and $`E_{[b,1]}`$. ∎ ###### Conclusion B.
Conclusion A and Facts 8 and 9 imply that $`\sigma `$ has one of the following two forms: $`\sigma _{a,b}`$ $`=\{(x,y)\in P\mid x\prec a\text{ or }b\prec y\}\text{, or}`$ $`\tau _{a,b}`$ $`=\sigma _{a,b}\cup \{(a,b)\}.`$ Note that the latter is a possibility only if $`(a,b)\in P`$ and $`\tau _{a,b}`$ is open. ###### Fact 10. If $`a`$ has a gap above and $`b`$ has a gap below and either $`\sigma =\sigma _{a,b}`$ with $`(a,b)\in P`$ or $`\sigma =\tau _{a,b}`$, then $`\pi `$ is not a nest representation. ###### Proof. Apply Fact 4 to $`E_{[0,a]}`$ and $`E_{[b,1]}`$. ∎ This effectively ends the proof of Theorem 2.2. If $`\pi `$ is a nest representation, then $`\sigma `$ is one of the ideal sets listed in Theorem 2.1. Since these are all meet irreducible, we have proven that $`\mathrm{ker}\pi `$ is meet irreducible. ∎ ## 3. Nest Representations with Atoms In this section we give a condition on a nest representation which guarantees that $`\mathrm{ker}\pi `$ is meet irreducible. This condition requires that $`\pi `$ be a $`*`$-representation on the diagonal of the strongly maximal TAF algebra. As pointed out in the introduction, any nest representation is similar to one with this property; consequently, we assume throughout this section that the restriction of $`\pi `$ to $`D`$ is a $`*`$-representation. Recall that, in a von Neumann algebra $`𝒟`$, a projection $`E`$ is said to be an atom if $`E`$ majorizes no proper (nonzero) subprojection. $`𝒟`$ is atomic if, for any projection $`P\in 𝒟`$, $`P=\bigvee \{E\in 𝒟:E\text{ is an atom and }E\le P\}`$. If $`\pi `$ is a nest representation of a strongly maximal TAF algebra $`A`$ with diagonal $`D`$ such that the von Neumann algebra $`\pi (D)^{\prime \prime }`$ contains an atom, then $`\mathrm{ker}\pi `$ is meet irreducible. This is established in Theorem 3.9, for which we give two proofs. The first proof depends on Theorem 2.1 in ; the alternative proof is independent of this theorem.
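Two standard examples, not taken from this paper, illustrate the dichotomy between atomic and atomless von Neumann algebras:

```latex
% Atomic: \mathcal{D} = \ell^\infty(\mathbb{N}) acting on \ell^2(\mathbb{N}).
% Each coordinate projection is an atom, and every projection is a
% supremum of the atoms it majorizes:
E_n = \chi_{\{n\}}, \qquad \chi_S = \bigvee_{n \in S} E_n .
% Atomless: \mathcal{D} = L^\infty[0,1] acting on L^2[0,1].  Every nonzero
% projection \chi_S (Lebesgue measure m(S) > 0) splits in half:
\chi_S = \chi_{S_1} + \chi_{S_2}, \qquad m(S_1) = m(S_2) = \tfrac{1}{2}\, m(S),
% so no projection majorizes a minimal (atomic) subprojection.
```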
Both proofs require the fact (established in Proposition 3.5) that if $`\pi (D)^{\prime \prime }`$ contains an atom, then it is an atomic von Neumann algebra. The alternative proof can be read immediately after Proposition 3.5; from this point on it uses the inductivity of ideals rather than the spectrum characterization of meet irreducible ideals from . In fact, a reader willing to assume that $`\pi (D)^{\prime \prime }`$ is atomic can read the alternative proof to Theorem 3.9 immediately after Lemma 3.1. This provides a much shorter route to a somewhat weaker theorem. The alternative proof can be found at the end of this section. If $`\pi (D)^{\prime \prime }`$ is atomic, then Corollary 3.10 implies that $`\mathrm{Lat}\pi (A)`$ is a purely atomic nest. The reverse implication is false: Example I.3 in provides an example of a $`*`$-extendible representation of a standard limit algebra for which $`\mathrm{Lat}\pi (A)=\{0,I\}`$ and $`\pi (D)`$ is weakly dense in a continuous masa. Since $`\mathrm{ker}\pi =\{0\}`$ in this example, $`\mathrm{ker}\pi `$ is meet irreducible; we conjecture that $`\mathrm{ker}\pi `$ is meet irreducible any time that $`\mathrm{Lat}\pi (A)`$ contains an atom. Proposition 3.5 establishes a dichotomy: either $`\pi (D)^{\prime \prime }`$ is atomic or else $`\pi (D)^{\prime \prime }`$ contains no atoms at all. This raises the natural question: is the same dichotomy valid for $`\mathrm{Lat}\pi (A)`$? This dichotomy certainly does not hold for general nest representations (those which are not $`*`$-representations on the diagonal); the failure is a consequence of the similarity theory for nests. The proof of Theorem 2.4 in applied to a refinement algebra provides an example of a nest representation $`\pi `$ of a TAF algebra whose nest is purely atomic, order isomorphic to the Cantor set, and such that the atoms are ordered as the rationals.
There exist nests which are not purely atomic but which are order isomorphic to the Cantor nest and which have (rank one) atoms ordered as the rationals; the similarity theorem for nests gives the existence of an invertible operator which carries the invariant subspace nest for $`\pi `$ onto a nest of the second type. The composition of $`\pi `$ with this similarity yields a nest representation of a strongly maximal TAF algebra (the refinement algebra) which has atoms but is not purely atomic. As before, we let $`X`$ denote the spectrum of the diagonal $`D`$ of a TAF algebra $`A`$. This is a zero dimensional topological space and the clopen sets form a basis for the topology. When $`e`$ is a projection in $`D`$, we let $`\widehat{e}`$ denote the spectrum of $`e`$ in $`X`$ (i.e., the support set of $`e`$ viewed as an element of $`C(X)`$). Now suppose that $`\pi `$ is a $`*`$-representation of the diagonal $`D`$ of a TAF algebra and let $`E`$ be the spectral measure associated with $`\pi `$. $`E`$ is a regular, projection valued measure defined on the Borel sets of $`X`$ which “agrees” with $`\pi `$ on clopen subsets in the sense that $`E(\widehat{e})=\pi (e)`$, where $`e`$ is any projection in $`D`$ and $`\widehat{e}`$ is its support in $`X`$. If $`𝒟`$ is the von Neumann algebra generated by $`\pi (D)`$, then any projection $`P`$ in $`𝒟`$ is of the form $`E(S)`$, where $`S`$ is a Borel subset of $`X`$. When $`S`$ is a singleton $`\{x\}`$ we shall write $`E_x`$ in place of $`E(\{x\})`$. If $`e_n`$ is a decreasing sequence of projections in $`D`$ such that $`\bigcap \widehat{e}_n=\{x\}`$, then, by the regularity of $`E`$, $`E_x=\bigwedge \pi (e_n)`$. In particular, $`\bigwedge \pi (e_n)=\bigwedge \pi (f_n)`$ for any two decreasing sequences of projections in $`D`$ with $`\bigcap \widehat{e}_n=\{x\}`$ and $`\bigcap \widehat{f}_n=\{x\}`$. If there is a projection $`e`$ in $`\mathrm{ker}\pi `$ with $`x\in \widehat{e}`$, then clearly $`E_x=0`$.
In this case, if $`\widehat{e}_n`$ is any decreasing sequence of clopen sets with $`\bigcap \widehat{e}_n=\{x\}`$, then, since $`\{\widehat{e}_n\}`$ is a neighborhood basis for $`x`$, we have $`\pi (e_n)=0`$ for all large $`n`$. The alternative is that $`\pi (e)\ne 0`$ for any projection with $`x\in \widehat{e}`$; in particular, for any decreasing sequence $`e_n`$ with $`\bigcap \widehat{e}_n=\{x\}`$, $`\pi (e_n)\ne 0`$ for all $`n`$. The projection $`E_x=\bigwedge \pi (e_n)`$ may or may not be $`0`$; it is, however, independent of the choice of decreasing clopen sets with intersection $`\{x\}`$. Note also that if $`x,y\in X`$ and $`x\ne y`$, then $`E_xE_y=0`$. ###### Lemma 3.1. Let $`D`$ be the diagonal of a TAF algebra; let $`\pi :D\to \mathcal{B}(ℋ)`$ be a $`*`$-representation; let $`E`$ be the spectral measure for $`\pi `$; and let $`𝒟`$ be the von Neumann algebra generated by $`\pi (D)`$. For any $`x\in X`$, if $`E_x\ne 0`$ then $`E_x`$ is an atom of $`𝒟`$. Conversely, if $`E_0`$ is an atom for $`𝒟`$, then there is a unique element $`x\in X`$ such that $`E_0=E_x`$. ###### Proof. Any projection in $`𝒟`$ has the form $`E(S)`$ for some Borel subset $`S`$ of $`X`$. Given $`x\in X`$, if $`x\in S`$ then $`E_x\le E(S)`$ and if $`x\notin S`$ then $`E_xE(S)=0`$. This shows that when $`E_x\ne 0`$, it is an atom of $`𝒟`$. Now suppose that $`E_0`$ is an atom of $`𝒟`$. Let $`S`$ be such that $`E_0=E(S)`$. It is evident that there is at most one point $`x\in S`$ such that $`E_x\ne 0`$; we need to prove the existence of such a point. Since $`X`$ is a Cantor set, we can find, for each $`n`$, $`2^n`$ disjoint clopen sets $`\widehat{e}_k^n`$, $`k=1,\mathrm{},2^n`$, whose union is $`X`$, with the further property that any decreasing sequence of these sets has one-point intersection. Since $`E_0`$ is an atom of $`𝒟`$, for each $`n`$ there is a unique integer $`k_n`$ in $`\{1,\mathrm{},2^n\}`$ such that $`E_0\le \pi (e_{k_n}^n)`$. Let $`x`$ be such that $`\bigcap _n\widehat{e}_{k_n}^n=\{x\}`$. Clearly, $`E_0\le E_x`$. But $`E_x`$ is an atom when it is non-zero; hence $`E_0=E_x`$.
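For intuition, the dyadic families in the proof of Lemma 3.1 can be realized in the concrete model $`X=\{0,1\}^{\mathbb{N}}`$ (a hypothetical standard model, consistent with $`X`$ being a Cantor set):

```latex
% Cylinder sets indexed by binary words w = (w_1,\dots,w_n):
\widehat{e}^{\,n}_{w} = \{\, x \in X : x_1 = w_1, \dots, x_n = w_n \,\},
% giving 2^n disjoint clopen sets whose union is X.  Any decreasing chain
\widehat{e}^{\,1}_{w_1} \supseteq \widehat{e}^{\,2}_{w_1 w_2} \supseteq \cdots
% has the one-point intersection \{x\} with x = (w_1, w_2, \dots); an atom
% E_0 sits under exactly one cylinder projection at each level, and the
% selected cylinders shrink to the point x with E_0 = E_x.
```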
The uniqueness of $`x`$ follows immediately from the orthogonality of $`E_x`$ and $`E_y`$ when $`x\ne y`$. ∎ ###### Notation. Let $`\pi :A\to \mathcal{B}(ℋ)`$ be a representation of a TAF algebra $`A`$ (which acts as a $`*`$-representation on the diagonal $`D`$). For $`\xi \in ℋ`$ and $`x\in X`$ let $`ℋ_\xi `$ denote the smallest $`\pi `$-invariant subspace which contains $`\xi `$ and $`ℋ_x`$ denote the smallest $`\pi `$-invariant subspace which contains $`\mathrm{Ran}E_x`$. Since the linear span of the $`D`$-normalizing partial isometries is dense in $`A`$, $`ℋ_\xi `$ is the closed linear span of $`\{\pi (v)\xi \mid v\in N_D(A)\}`$ and $`ℋ_x`$ is the closed linear span of $`\{\pi (v)\xi \mid v\in N_D(A),\xi \in \mathrm{Ran}E_x\}`$. We now need to investigate the manner in which $`D`$-normalizing partial isometries act on atoms of $`𝒟`$. Throughout the remainder of this section, $`A`$ will denote a TAF algebra (we will add the hypothesis that $`A`$ is strongly maximal later); $`\pi `$ will denote a nest representation of $`A`$ acting on a Hilbert space $`ℋ`$; $`D`$ will denote the diagonal of $`A`$ (with spectrum $`X`$); and $`𝒟=\pi (D)^{\prime \prime }`$. ###### Lemma 3.2. Let $`v\in N_D(A)`$ and let $`x\in X`$. If $`x\notin \widehat{v^{*}v}`$, then $`\pi (v)E_x=0`$. If $`x\in \widehat{v^{*}v}`$, there exists $`y\in X`$ such that $`(y,x)\in \widehat{v}`$ and $`\pi (v)E_x=E_y\pi (v)`$. In particular, if $`y\ne x`$, $`\mathrm{Ran}\pi (v)E_x\perp \mathrm{Ran}E_x`$. ###### Proof. Let $`\{e_n\}`$ be a decreasing sequence of projections in $`D`$ with $`\bigcap \widehat{e}_n=\{x\}`$. If $`x\notin \widehat{v^{*}v}`$, then $`ve_n=0`$ for large $`n`$, in which case $`\pi (v)E_x=\pi (v)\pi (e_n)E_x=\pi (ve_n)E_x=0`$. Now suppose that $`x\in \widehat{v^{*}v}`$ and let $`y`$ be such that $`(y,x)\in \widehat{v}`$. With $`e_n`$ as above, let $`f_n=ve_nv^{*}`$, so that $`\bigcap \widehat{f}_n=\{y\}`$ and $`\bigwedge \pi (f_n)=E_y`$. Then $`ve_n=f_nv`$ for large $`n`$; hence $`\pi (v)\pi (e_n)=\pi (f_n)\pi (v)`$ and, taking strong limits, $`\pi (v)E_x=E_y\pi (v)`$.
It follows that $`\mathrm{Ran}(\pi (v)E_x)\subseteq \mathrm{Ran}E_y`$ and, hence, that $`E_x`$ and $`\pi (v)E_x`$ have orthogonal ranges when $`y\ne x`$. ###### Remark. If $`v\in N_D(A)`$ and $`(y,x)\in \widehat{v}`$, then $`\pi (v)E_x=E_y\pi (v)E_x`$. It is possible that $`\pi (v)E_x=0`$ even when $`\pi (v)\ne 0`$ and $`E_x\ne 0`$. ###### Lemma 3.3. Let $`x\in X`$. If $`E_x\ne 0`$, then $`E_x`$ is a rank-one atom. ###### Proof. Let $`x\in X`$ and assume that $`\mathrm{Ran}E_x`$ contains two unit vectors $`\xi `$ and $`\zeta `$ such that $`\xi \perp \zeta `$. Let $`v\in N_D(A)`$. Since $`\pi (v)E_x=\pi (v)\pi (e)E_x`$ for any projection $`e`$ for which $`x\in \widehat{e}`$, we may, by a suitable restriction, reduce to considering two cases: when $`\widehat{v}`$ is contained in the diagonal $`\{(z,z)\mid z\in X\}`$ of $`G`$ and when $`\widehat{v}`$ is disjoint from the diagonal. When $`\widehat{v}`$ is contained in the diagonal, $`v`$ is a projection in $`D`$ and $`\pi (v)`$ either dominates $`E_x`$ or is orthogonal to $`E_x`$. In this case, either $`\pi (v)\xi =\xi `$ and $`\pi (v)\zeta =\zeta `$ or $`\pi (v)\xi =0`$ and $`\pi (v)\zeta =0`$. When $`\widehat{v}`$ is disjoint from the diagonal of $`G`$, there is $`y\in X`$ such that $`y\ne x`$ and $`(y,x)\in \widehat{v}`$. In this case, $`\pi (v)\xi \in \mathrm{Ran}E_y`$ and $`\pi (v)\zeta \in \mathrm{Ran}E_y`$. In particular, $`\pi (v)\xi \perp \zeta `$ and $`\pi (v)\zeta \perp \xi `$ for all $`v\in N_D(A)`$. It now follows that $`\xi \in ℋ_\xi `$ and $`\xi \perp ℋ_\zeta `$ while $`\zeta \in ℋ_\zeta `$ and $`\zeta \perp ℋ_\xi `$. But $`\pi `$ is a nest representation and $`ℋ_\xi `$ and $`ℋ_\zeta `$ are $`\pi `$-invariant; hence one must contain the other. Thus, the rank of $`E_x`$ is at most $`1`$. ∎ ###### Corollary 3.4. Let $`u,w\in N_D(A)`$ and assume that $`u`$ and $`w`$ have a common subordinate. Let $`(y,x)`$ be a point in $`P`$ such that $`(y,x)\in \widehat{u}\cap \widehat{w}`$. Then $`\pi (u)E_x=\pi (w)E_x`$ and, if $`E_x`$, $`E_y\ne 0`$, $`\mathrm{Ran}\pi (u)E_x=\mathrm{Ran}E_y`$. ###### Proof.
Let $`e`$ be a (nonzero) projection in $`D`$ such that $`e\le \mathrm{min}\{u^{*}u,w^{*}w\}`$ and $`x\in \widehat{e}`$. (One could just take $`e=u^{*}uw^{*}w`$.) Then $`ue=we`$. Since $`\pi (e)`$ dominates $`E_x`$, $$\pi (u)E_x=\pi (u)\pi (e)E_x=\pi (ue)E_x=\pi (we)E_x=\pi (w)\pi (e)E_x=\pi (w)E_x.$$ For the second assertion, suppose $`E_x`$ and $`E_y\ne 0`$. Now by Lemma 3.2, for any $`v\in N_D(A)`$, either $`\pi (v)E_y`$ is zero or else $`\mathrm{Ran}\pi (v)E_y`$ is contained in $`\mathrm{Ran}E_z`$ for some $`z\in X`$ with $`(z,y)\in \widehat{v}`$. But as $`y\ne x`$, $`\mathrm{Ran}\pi (v)E_y\perp \mathrm{Ran}E_x`$. We have seen that $`ℋ_y`$, the smallest $`\pi `$-invariant subspace containing $`\mathrm{Ran}E_y`$, is orthogonal to $`\mathrm{Ran}E_x`$; since $`\pi `$ is a nest representation, it follows that $`ℋ_y\subseteq ℋ_x`$. Thus, for some $`v`$, $`\mathrm{Ran}\pi (v)E_x\cap \mathrm{Ran}E_y\ne (0)`$. Since the ranges of $`E_x`$ and $`E_y`$ are one-dimensional, $`\mathrm{Ran}\pi (v)E_x=\mathrm{Ran}E_y`$. For such a $`v`$ we have $`(y,x)\in \widehat{v}`$. By the first paragraph, $`v`$ can be replaced by any $`u`$ with $`(y,x)\in \widehat{u}`$. ∎ ###### Proposition 3.5. $`E_0=\bigvee \{E_x\mid x\in X\}`$ is either $`0`$ or $`I`$. ###### Proof. Let $`E_1=I-E_0`$. We shall show that both $`E_0`$ and $`E_1`$ are invariant under $`\pi `$. Since $`\pi `$ is a nest representation, this means that one of them must be zero. If $`E_0`$ is not invariant, then for some $`v\in N_D(A)`$ and $`\xi \in \mathrm{Ran}E_0`$, $`E_1\pi (v)\xi \ne 0`$. Now $`\xi =\sum \{E_x\xi \mid x\in X\}`$, so there exists $`x\in X`$ with $`E_1\pi (v)E_x\xi \ne 0`$. By Lemma 3.2, $`\pi (v)E_x=E_y\pi (v)E_x`$, for some $`y`$. Since $`\pi (v)E_x\ne 0`$ it follows that $`E_y\ne 0`$; hence $`E_y`$ is an atom in $`𝒟`$. As $`E_1`$ majorizes no atoms, $`E_1E_y=0`$. On the other hand, $`0\ne E_1\pi (v)E_x=E_1E_y\pi (v)E_x`$, so that $`E_1E_y\ne 0`$. This contradiction shows that $`E_0`$ is invariant. If $`E_1`$ is not invariant, there exists a vector $`\xi \in \mathrm{Ran}E_1`$ and a $`D`$-normalizing partial isometry $`v`$ in $`A`$ such that $`E_0\pi (v)E_1\xi \ne 0`$.
Since $`E_0`$ is the sum of the atoms it majorizes, $`E_y\pi (v)E_1\xi \ne 0`$ for some $`y\in X`$. By Lemma 3.2, there is an element $`x\in X`$ with $`E_y\pi (v)=E_y\pi (v)E_x`$, whence $`E_y\pi (v)E_xE_1\ne 0`$. In particular, $`E_xE_1\ne 0`$, which contradicts the fact that $`E_1`$ majorizes no atoms. Thus, $`E_1`$ is also invariant. ∎ ###### Remark. Let $`𝒟`$ be the von Neumann algebra generated by $`\pi (D)`$, where $`D`$ is the diagonal of the TAF algebra $`A`$. According to Proposition 3.5, if $`\pi :A\to \mathcal{B}(ℋ)`$ is a nest representation, then either $`𝒟`$ is generated by its atoms (that is, $`𝒟`$ is purely atomic), or else it has no atoms. (This, of course, presumes that the restriction of $`\pi `$ to $`D`$ is a $`*`$-representation.) ###### Lemma 3.6. If $`x`$ and $`y`$ are two points of $`X`$ such that $`E_x`$ and $`E_y`$ are both nonzero, then $`x`$ and $`y`$ belong to the same orbit in $`X`$. ###### Proof. If $`x`$ and $`y`$ are not in the same orbit, it follows that $`\mathrm{Ran}E_x\perp ℋ_y`$ and $`\mathrm{Ran}E_y\perp ℋ_x`$ (Lemma 3.2). Since $`\mathrm{Ran}E_y\subseteq ℋ_y`$ and $`\mathrm{Ran}E_x\subseteq ℋ_x`$, $`ℋ_x`$ and $`ℋ_y`$ are not linearly ordered, a contradiction. ∎ ###### Lemma 3.7. Assume, further, that $`A`$ is a strongly maximal TAF algebra. If, for some $`x\in X`$, $`E_x\ne 0`$, then $`J=\{z\mid E_z\ne 0\}`$ is an interval in the orbit of $`x`$. ###### Proof. If $`E_x\ne 0`$, Lemma 3.6 implies that $`J`$ is contained in the orbit of $`x`$. If $`E_z=0`$ for all $`z\ne x`$, we are done. Suppose then, that $`E_y\ne 0`$ for some $`y\ne x`$ in the orbit of $`x`$. Without loss of generality we may assume that $`y\prec x`$. Let $`v\in N_D(A)`$ be such that $`(y,x)\in \widehat{v}`$. Then $`ℋ_y\subseteq ℋ_x`$ and $`\mathrm{Ran}\pi (v)E_x=\mathrm{Ran}E_y`$ (Corollary 3.4). In particular, $`\pi (v)E_x\ne 0`$. Let $`z`$ be a point in the orbit of $`x`$ with $`y\prec z\prec x`$. Since $`A`$ is strongly maximal, there exist $`u,w\in N_D(A)`$ with $`(y,z)\in \widehat{u}`$ and $`(z,x)\in \widehat{w}`$. Now $`\pi (v)E_x=\pi (uw)E_x=\pi (u)\pi (w)E_x`$; hence $`\pi (w)E_x\ne 0`$.
As $`\mathrm{Ran}\pi (w)E_x\subseteq \mathrm{Ran}E_z`$, it follows that $`E_z\ne 0`$ and, hence, $`z\in J`$. This shows that $`J`$ is an interval. ∎ ###### Corollary 3.8. If $`𝒟`$ has an atom and $`J`$ is the interval obtained in Lemma 3.7, then $`\bigvee \{E_x\mid x\in J\}=I`$ and $`𝒟`$ is a masa in $`\mathcal{B}(ℋ)`$. ###### Proof. The first assertion follows from Lemma 3.1 and Proposition 3.5, since all atoms have the form $`E_x`$, for some $`x\in X`$. Since the set $`\{E_x\mid x\in X\}`$ is a collection of commuting, rank-one atoms whose ranges span $`ℋ`$, the von Neumann algebra which they generate is a masa in $`\mathcal{B}(ℋ)`$. ∎ ###### Theorem 3.9. Let $`A`$ be a strongly maximal TAF algebra and $`\pi :A\to \mathcal{B}(ℋ)`$ be a nest representation for which the von Neumann algebra $`𝒟`$ generated by $`\pi (D)`$ contains an atom. Then the kernel of $`\pi `$ is a meet irreducible ideal in $`A`$. ###### Proof. By hypothesis, $`𝒟`$ contains an atom, necessarily of the form $`E_x`$, for some $`x\in X`$. By Lemma 3.7, there is a nonempty interval $`J`$ in an orbit in $`X`$ with the following property: for any $`D`$-normalizing partial isometry $`v`$, $`\pi (v)\ne 0`$ if, and only if, $`J\times J`$ intersects $`\widehat{v}`$. In other words, the complement of the spectrum of the ideal $`\mathrm{ker}\pi `$ contains $`(J\times J)\cap P`$. But the complement is a closed set, so it contains $`\overline{(J\times J)\cap P}`$. On the other hand, if $`v`$ is a $`D`$-normalizing partial isometry with $`\widehat{v}`$ disjoint from $`\overline{(J\times J)\cap P}`$, then $`\pi (v)=0`$. Thus, $$\widehat{\mathrm{ker}\pi }=P\setminus \overline{(J\times J)\cap P}.$$ By \[3, Theorem 2.1\], $`\mathrm{ker}\pi `$ is a meet-irreducible ideal. ∎ ###### Corollary 3.10. Let $`A`$, $`\pi `$, and $`J`$ be as above and let $`𝒩=\mathrm{Lat}\pi (A)`$. * If $`F`$ is a decreasing subset of $`J`$, then $`P=\bigvee \{E_x\mid x\in F\}\in 𝒩`$. On the other hand, if $`P\in 𝒩`$, then $`F=\{x\mid 0\ne E_x\le P\}`$ is a decreasing subset of $`J`$. This correspondence between decreasing subsets of $`J`$ and projections in $`𝒩`$ is a bijection.
* $`𝒟=\mathrm{Alg}𝒩\cap (\mathrm{Alg}𝒩)^{*}`$ equals the von Neumann algebra generated by $`𝒩`$. ###### Proof. The first assertion is clear. For the second, first note that, since $`𝒟`$ is a masa, $`𝒟=\mathrm{Alg}𝒩\cap (\mathrm{Alg}𝒩)^{*}`$. Let $`x\in J`$, let $`P=\bigvee \{E_y\mid y\in J\text{ and }y\preceq x\}`$, and let $`P_{-}=\bigvee \{E_y\mid y\in J\text{ and }y\prec x\}`$. Then $`P_{-}`$ is the immediate predecessor of $`P`$ in $`𝒩`$ and $`E_x=P-P_{-}`$. Thus, every atom from $`𝒟`$, and hence $`𝒟`$ itself, is contained in the von Neumann algebra generated by $`𝒩`$. The reverse inclusion is obvious. ∎ There is an alternate proof for Theorem 3.9 based on a presentation for $`A`$ rather than on the spectrum of $`A`$ and Theorem 2.1 in . This proof is dependent only on the preliminary results through Proposition 3.5; in fact, if the reader is willing to assume that $`\pi (D)^{\prime \prime }`$ is purely atomic, then only Lemma 3.1 is needed. In the alternate proof, we view $`A`$ as the union of an ascending chain of subalgebras each of which is star extendibly isomorphic to a maximal triangular subalgebra of a finite dimensional $`\text{C}^{*}`$-algebra. Also we may assume that a system of matrix units for each $`A_k`$ has been selected in such a way that each matrix unit in $`A_k`$ is a sum of matrix units in $`A_{k+1}`$; this gives a matrix unit system for $`A`$. Another fact from the lore of direct limit algebras that we need is that ideals are inductive: if $`I`$ is an ideal in $`A`$, then $`I`$ is the closed union of the ideals $`I_k=I\cap A_k`$ in $`A_k`$. ###### Alternate Proof of Theorem 3.9. Assume that $`\pi `$ is a nest representation but that $`\mathrm{ker}\pi `$ is not meet irreducible. Let $`I`$ and $`J`$ be two ideals in $`A`$ such that $`I\cap J=\mathrm{ker}\pi `$ and $`I\cap J`$ differs from both $`I`$ and $`J`$. By the inductivity of ideals, there exist matrix units $`u_I\in I\setminus J`$ and $`u_J\in J\setminus I`$.
These matrix units must lie in some $`A_k`$; since we may replace the sequence $`A_k`$ by a subsequence, we may assume that $`u_I`$ and $`u_J`$ lie in $`A_1`$. Since $`u_I\notin J`$, $`\pi (u_I)\ne 0`$. By Proposition 3.5, $$\pi (u_I)=\underset{x,y}{\sum }E_x\pi (u_I)E_y,$$ where the sum is taken over all pairs of atoms and is convergent in the strong operator topology. Consequently, there exist points $`x_I`$ and $`y_I`$ in $`X`$ such that $`E_{x_I}\pi (u_I)E_{y_I}\ne 0`$. For each $`k`$, let $`e_k^I`$ and $`f_k^I`$ be the unique diagonal matrix units in $`A_k`$ such that $`x_I\in \widehat{e}_k^I`$ and $`y_I\in \widehat{f}_k^I`$. Since $`u_I\in I_k`$, $`e_k^Iu_If_k^I\in I_k`$, for all $`k`$. On the other hand, $`\pi (e_k^Iu_If_k^I)\ne 0`$, so $`e_k^Iu_If_k^I\notin J_k`$, for all $`k`$. Let $`x_J`$ and $`y_J`$ in $`X`$ and $`e_k^J`$ and $`f_k^J`$ in $`A_k`$ be analogously defined for the ideal $`J`$. This time, $`e_k^Ju_Jf_k^J\in J_k`$ and $`e_k^Ju_Jf_k^J\notin I_k`$. We shall show (after possibly reversing the roles of $`I`$ and $`J`$) that for infinitely many $`k`$, $`e_k^IA_kf_k^J\subseteq \mathrm{ker}\pi `$ and $`f_k^JA_ke_k^I\subseteq \mathrm{ker}\pi `$. This leads to a contradiction with the hypothesis that $`\pi `$ is a nest representation and so shows that $`\mathrm{ker}\pi `$ is necessarily meet irreducible. Indeed, from $`\pi (e_k^I)\pi (A_k)\pi (f_k^J)`$ $`=0,\text{all }k,`$ $`\pi (f_k^J)\pi (A_k)\pi (e_k^I)`$ $`=0,\text{all }k,`$ it follows that $`E_{x_I}\pi (A)E_{y_J}`$ $`=0,`$ $`E_{y_J}\pi (A)E_{x_I}`$ $`=0,`$ and hence that $`\mathrm{Ran}E_{x_I}\perp ℋ_{y_J}`$ and $`\mathrm{Ran}E_{y_J}\perp ℋ_{x_I}`$, where $`ℋ_{y_J}`$ and $`ℋ_{x_I}`$ are the smallest $`\pi `$-invariant subspaces containing $`\mathrm{Ran}E_{y_J}`$ and $`\mathrm{Ran}E_{x_I}`$ respectively. But then $`ℋ_{y_J}`$ and $`ℋ_{x_I}`$ are not related by inclusion, a contradiction. Each finite dimensional algebra $`A_k`$ is a direct sum of $`T_n`$’s and the matrix units $`e_k^I`$ and $`f_k^I`$ are in the same summand, as are $`e_k^J`$ and $`f_k^J`$. If these two summands differ, then $`e_k^IA_kf_k^J=0`$ and $`f_k^JA_ke_k^I=0`$.
Should this occur for infinitely many $`k`$, then we are done. So we need consider only the case in which, for all $`k`$, all of $`e_k^I`$, $`e_k^J`$, $`f_k^I`$ and $`f_k^J`$ are in the same summand in $`A_k`$. If $`e`$ and $`f`$ are diagonal matrix units (minimal diagonal projections) in $`A_k`$, let $`\mathrm{m}(e,f)`$ be the matrix unit in $`C^{*}(A_k)`$ with initial projection $`f`$ and final projection $`e`$ (if there is such a matrix unit). If $`\mathrm{m}(e,f)\in A_k`$, then $`e\preceq f`$ in the diagonal order on minimal diagonal projections. We shall need the following property of ideals in $`A_k`$: if $`e_1\preceq e_2\preceq f_2\preceq f_1`$ and if $`\mathrm{m}(e_2,f_2)`$ is in an ideal, then $`\mathrm{m}(e_1,f_1)`$ is also in the ideal. Since $`e_k^I`$ and $`e_k^J`$ are in the same $`T_n`$-summand of $`A_k`$, they are related in the diagonal order. By interchanging $`I`$ and $`J`$ and passing to a subsequence, if necessary, we may assume that $`e_k^I\preceq e_k^J`$, for all $`k`$. The facts concerning the membership of $`e_k^Iu_If_k^I`$ and $`e_k^Ju_Jf_k^J`$ in $`I_k`$ and $`J_k`$ may be rephrased as $`\mathrm{m}(e_k^I,f_k^I)\in I_k`$ $`\text{and }\mathrm{m}(e_k^I,f_k^I)\notin J_k,`$ $`\mathrm{m}(e_k^J,f_k^J)\notin I_k`$ $`\text{and }\mathrm{m}(e_k^J,f_k^J)\in J_k.`$ As a consequence $`f_k^I\preceq f_k^J`$, for all $`k`$. (If $`f_k^J\prec f_k^I`$ for some $`k`$, then $`e_k^I\preceq e_k^J\preceq f_k^J\preceq f_k^I`$. Since $`\mathrm{m}(e_k^J,f_k^J)\in J_k`$, we have $`\mathrm{m}(e_k^I,f_k^I)\in J_k`$, a contradiction.) But now, $`\mathrm{m}(e_k^I,f_k^I)\in I_k\text{ and }f_k^I\preceq f_k^J`$ $`\Rightarrow \mathrm{m}(e_k^I,f_k^J)\in I_k`$ $`\mathrm{m}(e_k^J,f_k^J)\in J_k\text{ and }e_k^I\preceq e_k^J`$ $`\Rightarrow \mathrm{m}(e_k^I,f_k^J)\in J_k.`$ Thus $`\mathrm{m}(e_k^I,f_k^J)\in I_k\cap J_k\subseteq \mathrm{ker}\pi `$; hence $`e_k^IA_kf_k^J\subseteq \mathrm{ker}\pi `$. Also, since $`e_k^I\preceq f_k^I\preceq f_k^J`$, $`f_k^JA_ke_k^I=\{0\}\subseteq \mathrm{ker}\pi `$. As pointed out earlier, this implies that $`\pi `$ is not a nest representation; so, when $`\pi `$ is a nest representation with an atomic lattice, $`\mathrm{ker}\pi `$ is meet irreducible. ∎
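To illustrate the ideal property used above in the smallest nontrivial setting (a toy example, not part of the proof), take the summand $`T_3`$, the upper triangular $`3\times 3`$ matrices, with diagonal matrix units $`e_i=E_{ii}`$ and $`\mathrm{m}(e_i,e_j)=E_{ij}`$ for $`e_i\preceq e_j`$:

```latex
% If m(e_2, e_2) = E_{22} lies in a two-sided ideal \mathcal{I} of T_3,
% then with e_1 \preceq e_2 \preceq e_2 \preceq e_3 the ideal property
% gives m(e_1, e_3) = E_{13} \in \mathcal{I}; concretely,
E_{13} \;=\; E_{12}\, E_{22}\, E_{23} \;\in\; \mathcal{I},
% since an ideal absorbs multiplication on both sides.
```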
# Critical Current Enhancement due to an Electric Field in a Granular 𝑑-Wave Superconductor \[ ## Abstract We study the effects of an electric field on the transport properties of bulk granular superconductors with different kinds of disorder. We find that for a $`d`$-wave granular superconductor with random $`\pi `$-junctions the critical current always increases after applying a strong electric field, regardless of the polarity of the field. This result plus a change in the voltage as a function of the electric field are in good agreement with experimental results in ceramic high $`T_c`$ superconductors. \] Recent experiments in ceramic high-$`T_c`$ superconductors (HTCS) have found an enhancement in the critical current when applying an electric field $`E`$ through an insulating layer . Previous studies of the electric field effects (EFE) in superconducting films have attributed the changes in the critical current to variations in the charge density or to a redistribution of carriers, which appear at the surface layer with depths of the order of the electrostatic screening length $`d_E`$ (in HTCS, $`d_E\sim 5\AA `$) . Several experiments in ultrathin YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub> films ($`5`$–$`10`$ nm thick) have found that $`E`$ can affect $`T_c`$ and the $`I`$–$`V`$ characteristics , in good agreement with this picture . In this case, there is either an enhancement or a depletion of the critical current depending on the polarity of $`E`$ . The surprising observation in of a strong EFE in bulk ceramic HTCS ($`1.5`$ mm thick), however, cannot be explained by a surface effect. Moreover, for high enough electric fields, the critical current always increases regardless of the polarity of the field . Rakhmanov and Rozhkov have shown that an electric field can induce a change in the critical currents of the Josephson junctions present in granular samples. However, in their model the critical current either increases or decreases depending on the sign of $`E`$.
Recently, Sergeenkov and José have proposed that an electric field applied to a granular superconductor can produce a magneto-electric-like effect, which could be indirectly related to the behavior of the critical current in , but no comparison with the experimental results was given.

Granular superconductors are usually described as a random network of superconducting grains coupled by Josephson weak links . In the HTCS ceramics, several experimental groups have found a paramagnetic Meissner effect (PME) at low magnetic fields . Sigrist and Rice proposed that this effect could be a consequence of the intrinsic unconventional pairing symmetry of the HTCS, of $`d_{x^2-y^2}`$ type . Depending on the relative orientation of the superconducting grains, it is possible to have weak links with negative Josephson coupling ($`\pi `$-junctions) which, according to , give rise to the PME . In this paper we will show that the presence of $`\pi `$-junctions in ceramic samples also explains the unusual electric field effects observed in .

We consider a 3-D cubic network of superconducting grains at sites $`𝐧=(n_x,n_y,n_z)`$ with unit vectors $`\widehat{\mu }=\widehat{x},\widehat{y},\widehat{z}`$. The current $`I_\mu (𝐧)`$ between two grains $`𝐧`$ and $`𝐧+\widehat{\mu }`$ is given by the sum of the Josephson supercurrent plus a dissipative Ohmic current:

$$I_\mu (𝐧)=I_{𝐧,\mu }^0\mathrm{sin}\theta _\mu (𝐧)+\frac{\mathrm{\Phi }_0}{2\pi cR}\frac{d\theta _\mu (𝐧)}{dt}.$$ (1)

Here $`\theta _\mu (𝐧)=\theta (𝐧+\widehat{\mu })-\theta (𝐧)-A_\mu (𝐧,t)`$ is the gauge-invariant phase difference, with $`\theta (𝐧)`$ the superconducting phase of each grain, and $`A_\mu (𝐧,t)=\frac{2\pi }{\mathrm{\Phi }_0}_𝐧^{𝐧+\widehat{\mu }}𝐀𝑑𝐥`$ ($`\mathrm{\Phi }_0=h/2e`$). The critical current of each junction is $`I_{𝐧,\mu }^0`$ and $`R`$ is the normal-state tunneling resistance between grains.
Together with the condition of current conservation, $`_\mu [I_\mu (𝐧)-I_\mu (𝐧-\widehat{\mu })]=0`$, this determines the dynamical equations for the Josephson network. We consider periodic boundary conditions (PBC) in a network with $`N\times N\times N`$ grains. When an electric field $`𝐄`$ is applied in the $`\widehat{z}`$ direction, the $`z`$ component of the vector potential is given by:

$$A_z(𝐧,t)=A_z(𝐧,0)-\frac{2\pi cd}{\mathrm{\Phi }_0}Et,$$ (2)

with $`d`$ the intergrain distance or junction thickness. This results in a high-frequency alternating supercurrent in the $`z`$-direction due to the ac Josephson effect .

In addition, we consider that the sample is driven by an external current density $`I_{ext}`$ along the $`\widehat{y}`$ direction. Therefore, the vector potential term is $`A_\mu (𝐧,t)=-\delta _{\mu ,z}\omega _Et-\delta _{\mu ,y}\alpha _y(t)`$, where the electric field frequency is $`\omega _E=2\pi cEd/\mathrm{\Phi }_0`$, and we consider that no external magnetic field is applied (the experiments had zero magnetic field ). The external current density with PBC determines the dynamics of $`\alpha _y(t)`$ as

$$I_{ext}=\frac{1}{N^3}\underset{𝐧}{}I_{𝐧,y}^0\mathrm{sin}\theta _y(𝐧)+\frac{\mathrm{\Phi }_0}{2\pi cR}\frac{d\alpha _y}{dt}.$$ (3)

The average voltage per junction induced by the driving current is then obtained from $`V=\frac{\mathrm{\Phi }_0}{2\pi c}\frac{d\alpha _y}{dt}`$.

Furthermore, we consider that the applied electric field is screened inside the grains and acts only in the insulating intergrain region of the junctions, of typical thickness $`d\sim 10`$–$`20\AA `$. We also neglect the effects of the intergrain capacitance $`C_i`$ and the intragrain capacitance $`C_g`$. In general, the effect of the capacitances is to screen the applied field $`E_{ext}`$ inside the sample within an overall screening length $`\lambda _E\sim (C_i/C_g)^{1/2}`$.
Since $`C_g\ll C_i`$, $`\lambda _E`$ is very large and we can consider that the internal field acting on the intergrain junctions is $`E_{in}=E=\alpha _pE_{ext}`$ with the polarizability $`\alpha _p\simeq 1`$. We also neglect the possible $`E`$ dependence of the critical currents of the junctions. As shown in , this dependence may result in an increase or decrease of $`I_{𝐧,\mu }^0`$ depending on the sign of $`E_{in}`$. We also neglect screening-current effects (i.e. finite self-inductances). The self-induced magnetic fields are very important for the description of the critical state at large magnetic fields, and for the PME at low magnetic fields . The experiments of  were done at zero magnetic field and finite electric fields. Thus the most important approximation we make is to neglect the capacitances (i.e. the screening of $`𝐄`$).

After making all these physical approximations, we will now focus on the collective aspects of the ac Josephson effect induced by the electric field. In this model, the electric field scale is given by $`E_0=RI_0(T=0)/d=\pi \mathrm{\Delta }_s(0)/2ed`$, with $`\mathrm{\Delta }_s(0)`$ the superconducting energy gap; for the YBaCuO ceramics we have $`\mathrm{\Delta }_s(0)\approx 20`$ meV, which gives $`E_0\approx 30`$ MV/m, i.e. in the same range as the fields used in .

We assume that the effect of disorder in a granular superconductor at zero magnetic field is mainly to modify the magnitude of the critical currents. In $`s`$-wave superconductors the sign of the Josephson coupling is always positive. In $`d`$-wave ceramics the sign of $`I_{𝐧,\mu }^0`$ is expected to vary randomly depending on the relative spatial orientation of the grains. Therefore, we consider two models of disorder: (i) Granular $`s`$-wave superconductor (GsS): we take $`I_{𝐧,\mu }^0`$ to be a random variable with a uniform distribution in the interval $`[I_0(1-\mathrm{\Delta }_c),I_0(1+\mathrm{\Delta }_c)]`$, with average $`I_{𝐧,\mu }^0=I_0`$ and $`\mathrm{\Delta }_c<1`$.
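The field scale quoted above can be checked with a few lines of arithmetic. The sketch below assumes $`\mathrm{\Delta }_s(0)=20`$ meV and a junction thickness $`d=10\AA `$ (the lower end of the range given in the text) and also evaluates the ac Josephson frequency $`f=2eEd/h`$ (the SI form of $`\omega _E/2\pi `$) at $`E=E_0`$:

```python
# Order-of-magnitude check of the electric-field scale E_0 = pi*Delta_s(0)/(2 e d)
# and of the field-induced ac Josephson frequency f = 2 e E d / h (SI form of
# omega_E/2pi). Assumed inputs: Delta_s(0) ~ 20 meV, junction thickness d ~ 10 A.
import math

e = 1.602176634e-19      # elementary charge, C
h = 6.62607015e-34       # Planck constant, J s
delta_s = 20e-3 * e      # superconducting gap, J (assumed ~20 meV)
d = 10e-10               # junction thickness, m (assumed 10 Angstrom)

E0 = math.pi * delta_s / (2 * e * d)   # V/m
print(f"E_0 ~ {E0 / 1e6:.0f} MV/m")    # the ~30 MV/m scale quoted in the text

# voltage across one junction at E = E_0, and the resulting Josephson frequency
V = E0 * d
f_J = 2 * e * V / h                    # Hz
print(f"f_J ~ {f_J / 1e12:.1f} THz")
```

So a field of order $`E_0`$ drives the junctions at tens of THz, far above any network relaxation rate, which is why the field enters the dynamics only through the fast ac supercurrent.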
(ii) Granular $`d`$-wave superconductor (GdS): we consider that there is a random concentration $`c`$ of $`\pi `$-junctions with $`I_{𝐧,\mu }^0=-I_0`$, while $`I_{𝐧,\mu }^0=+I_0`$ with concentration $`1-c`$ . The latter model was used to explain the history-dependent paramagnetic effect , the anomalous microwave absorption , and the non-linear susceptibility and glassy behavior observed experimentally in ceramic HTCS .

We have performed numerical simulations of systems of sizes $`N=8,16`$, integrating the dynamical equations numerically with time step $`\mathrm{\Delta }t=0.1\tau _J`$, where $`\tau _J=\mathrm{\Phi }_0/2\pi cRI_0`$, for $`5\times 10^4`$ integration steps. We calculated the current-voltage (IV) characteristics for different amounts of disorder and electric fields, averaging over 20 realizations of disorder in each case.

In the absence of disorder, the IV curves are unaffected by the electric field. In a perfect cubic network, a finite electric field induces an ac supercurrent $`I_0\mathrm{sin}(\omega _Et)`$ along the $`z`$ axis only, and therefore the IV curves in the $`xy`$ plane are not affected by the value of $`E`$. When the disorder is small, the amplitude of the ac supercurrents in the $`z`$ direction is random and, due to current conservation at each node of the network, this induces small ac currents in the $`xy`$ planes with random amplitudes of order $`\mathrm{\Delta }_cI_0\mathrm{sin}(\omega _Et)`$. Adding a small ac current to a Josephson junction reduces its effective dc critical current . This is indeed the effect we observe in the GsS model. In Fig. 1(a) we show the IV curve for $`\mathrm{\Delta }_c=0.6`$. There we see that after applying an electric field $`E`$ the whole IV curve shifts to lower values of the current, the apparent critical current decreases, and therefore the voltage change $`\mathrm{\Delta }V=V(E)-V(0)`$ is positive for a given current $`I>I_c`$.
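The single-junction mechanism invoked here (an ac bias lowering the effective dc critical current) can be demonstrated without the full 3-D network. The following sketch, which is an illustration in reduced units and not the paper's simulation, integrates the overdamped RSJ equation $`d\theta /dt=i_{dc}+i_{ac}\mathrm{sin}(\omega t)-\mathrm{sin}\theta `$ (currents in units of $`I_0`$, time in units of $`\tau _J`$) for a dc bias below the bare critical current $`i_{dc}=0.8<1`$:

```python
# Minimal single-junction illustration (not the paper's 3-D network code): in the
# overdamped RSJ model, d(theta)/dt = i_dc + i_ac*sin(w*t) - sin(theta), a small
# ac drive lets the phase slip even for i_dc < 1, i.e. it lowers the effective
# dc critical current. All quantities are in reduced units (I_0, tau_J).
import math

def mean_voltage(i_dc, i_ac=0.0, w=0.05, dt=0.01, t_max=2500.0, t_skip=500.0):
    """Time-averaged d(theta)/dt, i.e. the dc voltage in reduced units."""
    theta, t = 0.0, 0.0
    theta_start = None
    while t < t_max:
        theta += dt * (i_dc + i_ac * math.sin(w * t) - math.sin(theta))
        t += dt
        if theta_start is None and t >= t_skip:
            theta_start = theta          # skip the initial transient
    return (theta - theta_start) / (t_max - t_skip)

v_static = mean_voltage(0.8)             # dc bias below I_c = 1: phase stays locked
v_with_ac = mean_voltage(0.8, i_ac=0.3)  # same dc bias plus a weak slow ac drive
print(v_static, v_with_ac)               # ~0 versus a finite dc voltage
```

With the ac term the total current exceeds 1 during part of each drive cycle, the phase slips, and a finite dc voltage appears at a bias where the undriven junction is still superconducting, which is exactly the shift of the GsS IV curves described above.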
However, this is in the opposite direction to the experimental results of , where an increase in the critical current was observed. Let us then consider the case of a granular $`d`$-wave superconductor. The presence of randomly distributed negative and positive critical currents changes the previous simplistic picture, since now frustration is introduced in each loop containing an odd number of $`\pi `$-junctions .

In Fig. 1(b) we show the IV curve for the GdS model with a concentration $`c=0.5`$ of $`\pi `$-junctions. We note that after applying the electric field the IV curve is shifted towards higher currents, i.e. the “apparent” critical current increases. If we reverse the polarity, $`E\to -E`$, the IV curves overlap, showing that the effect is independent of the polarity of the electric field. A similar effect was seen in the experiments of : the IV curves shift upwards for large $`E`$ and almost overlap when the polarity of the field is changed . This result is surprising, since in the usual electric field effect observed in films a change in the polarity of the field drastically changes the sign of the shift of the IV curves . We also see in Fig. 1(b) that the EFE is stronger near the critical current; for large currents the change in voltage $`\mathrm{\Delta }V`$ is smaller, which was also seen in experiment .

If the amount of disorder in the GsS model is such that $`\mathrm{\Delta }_c>1`$, then a fraction $`(\mathrm{\Delta }_c-1)/2\mathrm{\Delta }_c`$ of the critical currents will be negative. This case is unrealistic for $`s`$-wave superconductors, but corresponds to a $`d`$-wave granular superconductor with a small concentration of $`\pi `$-junctions. Indeed, we also find an increase of the critical current for $`\mathrm{\Delta }_c>1`$.

In Fig. 2 we study the relative change in voltage $`\mathrm{\Delta }V/V(0)`$ for a fixed bias current (above the critical current) as a function of $`E`$.
We see that for the GsS model $`\mathrm{\Delta }V`$ is positive, and only when negative critical currents are allowed ($`\mathrm{\Delta }_c>1`$) is there a decrease in the voltage, i.e. $`\mathrm{\Delta }V<0`$. The effect is stronger in the GdS model with a concentration $`c=0.5`$ of $`\pi `$-junctions. An effect of the same order, i.e. nearly a $`50\%`$ decrease in voltage, and on the same scale of electric fields, was observed in experiment .

In Fig. 3 we show the dependence of the EFE on the amount of disorder. For a given bias current we calculate $`\mathrm{\Delta }V/V_0`$ after applying a large electric field $`E=0.8E_0`$. Fig. 3(a) shows the voltage change in the GsS model as a function of $`\mathrm{\Delta }_c`$. It is clear that only when $`\mathrm{\Delta }_c>1`$ can $`\mathrm{\Delta }V`$ be negative. In Fig. 3(b) we study the same situation for the GdS model as a function of the concentration of $`\pi `$-junctions. We find that a small concentration of $`\pi `$-junctions is enough to produce a strong decrease in voltage at large electric fields; there is a very rapid change in $`\mathrm{\Delta }V/V_0`$ close to $`c=0.1`$.

It is interesting to note that there is another type of disorder that can give a negative $`\mathrm{\Delta }V/V_0`$. In the presence of external magnetic fields, the vector potential has a component $`A_\mu ^H(𝐧)`$. For strong magnetic fields, $`A_\mu ^H(𝐧)`$ can be taken as a random variable in the interval $`[0,2\pi \mathrm{\Delta }_H]`$, the case $`\mathrm{\Delta }_H=1`$ corresponding to the gauge glass model . We have found that only when $`\mathrm{\Delta }_H>0.5`$ can we obtain a decrease in the relative voltage, $`\mathrm{\Delta }V/V_0<0`$, similar to the results obtained with the GdS model. The case $`\mathrm{\Delta }_H>0.5`$ is precisely when the effective couplings can take negative values.
This case, though interesting in itself, is unrealistic for the present problem, since it corresponds to ceramics at very large magnetic fields, while the experiments had zero magnetic field.

There are other interesting aspects of the experimental results. It was found that for low electric fields the critical current decreases, and only for large fields does it increase. A typical measurement shows that, when applying a current near the critical current, $`\mathrm{\Delta }V/V_0`$ increases as a function of $`E`$ and, after reaching a maximum, falls and then becomes negative for large enough fields. Our random $`d`$-wave model for a granular superconductor also reproduces this effect, in good agreement with experiment, as we show in Fig. 4. We note that the value of $`\mathrm{\Delta }V/V_0`$ at the maximum decreases with increasing bias current, and only for large enough currents is the voltage change $`\mathrm{\Delta }V/V_0`$ always negative and decreasing with $`E`$, as previously shown in Fig. 2. The same dependence on the bias current has been observed in experiment (see for example Figs. 3 and 4 of  and Fig. 3 of ). These results can be understood by looking at the shape of the IV curves at low electric fields. As we show in the inset of Fig. 4, for low $`E`$ there is a crossing of the IV curves, which explains the behavior observed in $`\mathrm{\Delta }V/V_0`$.

The experiments on the electric field effects in ceramic HTCS were surprising for two reasons: (i) a large electric field effect in a bulk ceramic sample was unexpected ; (ii) the increase in the apparent critical current as a function of $`E`$ was found to be independent of the polarity of $`E`$ . Here we have qualitatively explained all these experimental features with a simplified model of a granular superconductor, which has two basic ingredients: the ac Josephson effect induced by the electric field and the collective frustration effects due to $`\pi `$-junctions.
There are many interesting open questions, such as the coexistence of this effect with the paramagnetic Meissner effect , the time-dependent glassy dynamics as a function of electric and magnetic fields , the effects in a chiral glass state , and the effect of finite inductances and capacitances in a more realistic model. We expect that the results presented here will motivate further experimental and theoretical studies of this problem.

This work has been supported by a cooperation grant CONICET Res. 697/96 and NSF INT-9602924. DD and CW acknowledge CONICET and CNEA (Argentina) for local financial support. The work by DD was also partially funded by Fundación Antorchas and ANPCyT, and the work by JVJ by NSF DMR-9521845.
# Another view on the velocity at the Schwarzschild horizon

## Abstract

It is shown that a timelike radial geodesic does not become null at the event horizon.

Recently an attempt was made to demonstrate that in the Schwarzschild geometry the radial geodesics of material particles become null at the event horizon . For this purpose, an expression was derived for the velocity of a material particle following a radial trajectory, as measured by an observer, also on a radial trajectory, at the moment they intersect. The observer maintains its spacelike Kruskal coordinates unchanged, and for this reason we call it a Kruskal observer. For $`r>2m`$, the expression is (eq. (20) of  and eq. (8) of ),

$$v=\frac{1+\mathrm{tanh}(t/4m)\frac{dt}{dr}(1-2m/r)}{\mathrm{tanh}(t/4m)+\frac{dt}{dr}(1-2m/r)},$$ (1)

where $`dt`$ and $`dr`$ refer to the motion $`t(r)`$ of the particle. At the event horizon, where $`r=2m`$ and $`t=+\mathrm{\infty }`$, the value of eq. (1) is apparently indeterminate, and in  it is stated that $`v=1`$, independently of the precise relationship $`t(r)`$. In  some manipulations were made while maintaining the generality of the expression, i.e. without substituting for $`t(r)`$. These made it possible to show (eq. (13) of ) that the velocity is always less than 1 along the way, until it obviously turns into $`0/0`$ at $`r=2m`$. So there is no a priori reason to think it is necessarily $`v=1`$. This procedure was commented on in a somewhat ungracious manner in , without any further explanation.

The best way to avoid confusion and obtain a definitive result seems to be to consider a specific geodesic $`t(r)`$, turning (1) into a function of a single variable. Let us then consider a material particle on an ingoing radial geodesic parametrized by its proper time $`\tau `$.
For this trajectory we can write, in Schwarzschild coordinates,

$$ds^2=-d\tau ^2=-\left(1-\frac{2m}{r}\right)dt^2+\left(1-\frac{2m}{r}\right)^{-1}dr^2.$$ (2)

Inserting the conserved quantity of the motion,

$$E=\frac{dt}{d\tau }\left(1-\frac{2m}{r}\right),$$ (3)

we obtain,

$$\frac{dt}{dr}=-E\left(1-\frac{2m}{r}\right)^{-1}\left[E^2-\left(1-\frac{2m}{r}\right)\right]^{-1/2}.$$ (4)

Each geodesic is characterized by its value of $`E`$, defined by the initial conditions $`r_i`$ and $`v_i`$ ($`v_i`$ is referred to the observer at rest at $`r_i`$):

$$E^2=\left(1-\frac{2m}{r_i}\right)(1-v_i^2)^{-1}.$$ (5)

To simplify the integration of $`dt/dr`$ we introduce the approximation,

$$\left(1-\frac{1-2m/r}{E^2}\right)^{-1/2}\approx 1+\frac{1-2m/r}{2E^2},$$ (6)

valid for small $`r-2m`$. This way we obtain,

$$t(r)=t_0+\underset{r}{\overset{r_0}{}}\frac{s}{s-2m}𝑑s+\frac{1}{2E^2}(r_0-r)=t_0+(r_0-r)\left(1+\frac{1}{2E^2}\right)+2m\mathrm{ln}\left|\frac{r_0-2m}{r-2m}\right|.$$ (7)

Each geodesic $`E`$ begins at a point $`r_i`$ at a time $`t_i`$, which we may take to be $`t_i<0`$. Before it reaches $`r_0`$ we do not know the exact expression for $`t(r)`$. Here $`r_0`$ is the point where the approximation (6) becomes valid, and we set $`t_0=0`$. Now we can insert (7) and (4) in (1) and plot $`v(r)`$ for several geodesics $`E`$.

In figure 1 we plot $`|v|`$ for two values of $`E`$, assuming $`r_0=3m`$. Each curve represents the velocity of one particle measured by a family of Kruskal observers, each one intersecting the particle at a different point between $`3m`$ and $`2m`$. Even though eq. (1) was defined as a velocity only for $`r>2m`$, we plot that mathematical function down to $`r=1.9m`$ to get a clearer view that $`v(r=2m)`$ is less than 1. Using (5) we note that the farthest point $`r_i`$ where a geodesic with $`E<1`$ can begin is $`r_i=2m/(1-E^2)`$. For $`E=1/2`$ we get $`r_i=(8/3)m`$, and that is why that curve in the figure does not reach $`r=3m`$.
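The construction just described is easy to check numerically. The sketch below evaluates eq. (1) with $`dt/dr`$ from eq. (4) and the approximate $`t(r)`$ of eq. (7), taking $`m=1`$, $`r_0=3m`$ and $`t_0=0`$ as in the text; the only assumption is the choice of sample radii at which to evaluate the velocity:

```python
# Numerical evaluation of eq. (1) near the horizon for the E = 1/2 geodesic,
# using dt/dr from eq. (4) and the approximate t(r) of eq. (7),
# with m = 1, r_0 = 3m, t_0 = 0 as in the text.
import math

m = 1.0
r0 = 3.0 * m

def v_kruskal(r, E):
    f = 1.0 - 2.0 * m / r                                    # 1 - 2m/r
    dtdr = -E / (f * math.sqrt(E * E - f))                   # eq. (4), ingoing
    t = (r0 - r) * (1.0 + 1.0 / (2 * E * E)) \
        + 2 * m * math.log(abs((r0 - 2 * m) / (r - 2 * m)))  # eq. (7)
    th = math.tanh(t / (4 * m))
    return (1 + th * dtdr * f) / (th + dtdr * f)             # eq. (1)

# approach the horizon along the E = 1/2 geodesic
for r in (2.1, 2.01, 2.001, 2.0001):
    print(r, v_kruskal(r, 0.5))
```

The printed values remain strictly between 0 and 1 and settle towards a finite limit well below 1 as $`r\to 2m`$, in line with figure 1 and against the claim that the velocity reaches 1 at the horizon.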
We can also get valuable information from the plot of $`v`$ at $`r=2m`$ as a function of $`E`$. In figure 2 we can see that $`|v|`$ tends to 1 only at the limits of the domain of $`E`$: when $`E=+\mathrm{\infty }`$, which is a null geodesic, and when $`E=0`$. From eq. (5) we see the latter corresponds to a geodesic with $`r_i=2m`$, which is at rest relative to the horizon. This corresponds to the well known result that a particle at rest on the horizon must be a photon, and its velocity is 1 relative to a radial observer. For all the other cases the modulus of the velocity plotted in figure 2 is less than 1. For greater values of $`E`$, $`v`$ is negative, which means the particle follows the observer and reaches it from one side. For smaller values of $`E`$, $`v`$ is positive, which means the observer follows the particle and sees it approaching from the other side of the $`r`$ axis.

This result is in agreement with the one presented in . There a new set of coordinates is introduced. The transformations are, essentially,

$$\{\begin{array}{ccc}x_0=(w-r)/\sqrt{2}\hfill & & \\ & & \\ x_1=(w+r)/\sqrt{2}\hfill & & \\ & & \\ x_2=2m\theta \hfill & & \\ & & \\ x_3=2m\phi \hfill & & \end{array}$$ (8)

where $`w`$ is the ingoing Eddington-Finkelstein coordinate,

$$w(t,r)=t+r+2m\mathrm{ln}\left|\frac{r-2m}{2m}\right|.$$ (9)

In these coordinates the metric takes the form,

$$ds^2=\left[\frac{1}{2}\left(1-\frac{2\sqrt{2}m}{x_1-x_0}\right)-1\right]dx_0^2-\left(1-\frac{2\sqrt{2}m}{x_1-x_0}\right)dx_0dx_1+\left[\frac{1}{2}\left(1-\frac{2\sqrt{2}m}{x_1-x_0}\right)+1\right]dx_1^2+$$

$$+\left(\frac{x_1-x_0}{2\sqrt{2}m}\right)^2\left[dx_2^2+\mathrm{sin}^2(\frac{x_2}{2m})dx_3^2\right],$$ (10)

which at the horizon ($`x_1-x_0=2\sqrt{2}m`$, and with $`\theta =\pi /2`$) reduces to the Minkowski form,

$$ds^2=-dx_0^2+dx_1^2+dx_2^2+dx_3^2.$$ (11)

Let us now define the Janis observer as the one who maintains the spacelike coordinates $`x_1,x_2,x_3`$ constant.
Like the Kruskal observer, it follows a radial geodesic. The velocity of a material particle that moves along an ingoing geodesic $`dt/dr`$, as measured by this observer, is

$$v_1=\frac{dx_1}{dx_0}=\left(\frac{1+\frac{dr}{dt}\frac{r}{r-2m}+\frac{dr}{dt}}{1+\frac{dr}{dt}\frac{r}{r-2m}-\frac{dr}{dt}}\right).$$ (12)

This expression is written as a function of only one variable ($`r`$). Inserting $`dr/dt`$ from (4) and approximating the square-root factor in that expression analogously to what was done in eq. (6), we obtain,

$$v_1^2=\left(\frac{\frac{1}{2E}-E+\frac{1-\frac{2m}{r}}{2E}}{\frac{1}{2E}+E-\frac{1-\frac{2m}{r}}{2E}}\right)^2.$$ (13)

For $`r=2m`$ we get,

$$v_1^2(r=2m)=\left(\frac{1-2E^2}{1+2E^2}\right)^2,$$ (14)

which is eq. (10) of . From here we see that $`|v_1|<1`$ unless $`E=0`$ or $`E=+\mathrm{\infty }`$. In fact the graph of this function $`v_1(E)`$ is identical to the one in figure 2. This is the general behaviour of the relative velocity of two moving material particles at the Schwarzschild horizon.
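The reduction of eq. (13) to eq. (14) at the horizon, and the fact that $`|v_1|<1`$ for any finite nonzero $`E`$, can be verified directly:

```python
# Check of eqs. (13)-(14): at r = 2m the Janis-observer velocity reduces to
# (1 - 2E^2)/(1 + 2E^2), whose modulus is < 1 for any finite, nonzero E.

def v1(r, E, m=1.0):
    f = 1.0 - 2.0 * m / r                        # 1 - 2m/r
    num = 1.0 / (2 * E) - E + f / (2 * E)
    den = 1.0 / (2 * E) + E - f / (2 * E)
    return num / den                             # eq. (13) before squaring

def v1_horizon(E):
    return (1 - 2 * E * E) / (1 + 2 * E * E)     # eq. (14)

print(v1(2.0, 0.5), v1_horizon(0.5))             # both equal 1/3 for E = 1/2
print(max(abs(v1_horizon(E)) for E in (0.1, 0.5, 1.0, 3.0, 10.0)))
```

For $`E=1/2`$ both expressions give $`1/3`$, and sampling $`E`$ over two decades keeps $`|v_1|`$ strictly below 1, as stated in the text.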
# Topological Phase Diagram of a Two-Subband Electron System

\[

## Abstract

We present a phase diagram for a two-dimensional electron system with two populated subbands. Using a gated GaAs/AlGaAs single quantum well, we have mapped out the phases of various quantum Hall states in the density-magnetic field plane. The experimental phase diagram shows a very different topology from the conventional Landau fan diagram. We find regions of negative differential Hall resistance, which are interpreted as preliminary evidence of the long-sought reentrant quantum Hall transitions. We discuss the origins of the anomalous topology and the negative differential Hall resistance in terms of Landau level and subband mixing.

\]

Extensive work has been carried out on modulation-doped GaAs/AlGaAs heterostructures containing a two-dimensional electron gas (2DEG) within the framework of the quantum Hall effect. In most of these structures, only one subband is populated. Even though studies of heterostructures with two populated electric subbands have a long history, the inherent additional inter-subband scattering has kept the two-subband system from being a primary candidate for studying various aspects of the quantum Hall effect. Most of the investigations carried out thus far, not surprisingly, have focused on scattering, while others dealt with population effects.

Recently, it has become increasingly apparent that in one-band systems, disorder-induced Landau level mixing can play a critical role in the evolution of the quantum Hall effect, especially in the regime of vanishing magnetic fields. Landau level mixing and its effects have been the subjects of numerous recent experimental and theoretical studies. Similarly, in a two-subband system, crossing of Landau levels of the two different subbands can lead to substantial mixing even in relatively strong magnetic fields.
The consequences of Landau level mixing for the topology of the phase boundaries between different quantum Hall states in the two-band system are expected to be surprising and possibly profound. To explore some of these consequences, we have conducted a systematic magneto-transport study on gated, modulation-doped GaAs/AlGaAs single-quantum-well samples in which there are two populated subbands. We have constructed a topological phase diagram of the two-band system, and found it to be considerably more complex than the conventional Landau fan diagram. One of the spectacular consequences of its unusual topology is that there are multiple reentrant quantum Hall transitions. We have observed negative differential Hall resistance in certain regions of the density-magnetic field plane (the $`n`$-$`B`$ plane). The negative differential Hall resistance, in our opinion, is indicative of the reentrant quantum Hall transition.

The sample used in this study is a symmetrically modulation-doped single quantum well with a width of 250 Å. Two Si $`\delta `$-doped layers ($`n_d=8\times 10^{11}\text{ cm}^{-2}`$) are placed on either side of the well, with a 200 Å spacer between each $`\delta `$-doped layer and the well. Heavy doping creates a very dense 2DEG, resulting in the filling of two subbands in the well. As determined from the Hall resistance data and Shubnikov-de Haas oscillations, the total density is $`n=1.21\times 10^{12}\text{ cm}^{-2}`$. The higher subband has a density $`n_1=3.3\times 10^{11}\text{ cm}^{-2}`$ while the lower subband has a density of $`n_2=8.8\times 10^{11}\text{ cm}^{-2}`$ at $`B`$ = 0. The electron mobility at zero gate voltage is about $`8\times 10^4\text{ cm}^2/\text{V-s}`$. The samples are patterned into Hall bars with a $`3:1`$ aspect ratio using standard lithography techniques. An Al gate was evaporated on top so that the carrier density can be varied continuously by applying a negative gate voltage.
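The quoted sample parameters are internally consistent, which can be seen with a couple of lines of arithmetic: the subband densities add up to the total density, and the total filling factor $`\nu =nh/eB`$ passes through $`\nu =7`$ inside the 6-8 T window where the $`S_{xy}=7`$ features are discussed later in the text. The following sketch only rearranges numbers given in the paper:

```python
# Consistency check on the quoted sample parameters: n1 + n2 should equal the
# total density, and the field at total filling factor nu = 7, B = n*h/(7*e),
# should fall in the 6-8 T window discussed in the text.
e = 1.602176634e-19          # elementary charge, C
h = 6.62607015e-34           # Planck constant, J s

n1, n2 = 3.3e11, 8.8e11      # subband densities, cm^-2 (from the text)
n_total = 1.21e12            # total density, cm^-2 (from the text)
print(n1 + n2, n_total)      # the two should agree

n_SI = n_total * 1e4         # convert cm^-2 -> m^-2
B_nu7 = n_SI * h / (e * 7)   # field at total filling factor nu = 7
print(f"nu = 7 at B = {B_nu7:.2f} T")
```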
A total of nine samples with different lengths (varying from 30 $`\mu `$m to 3 mm) were studied systematically. For consistency, we present the data from only one sample here. During the experiment, the sample was thermally connected to the mixing chamber of a dilution refrigerator. Magnetic fields up to 12 T were applied normal to the plane containing the 2DEG. Standard lock-in techniques with an excitation frequency of about 13 Hz and a current of 10 nA were employed to carry out the magnetoresistance measurements.

A typical trace of the diagonal resistivity $`\rho _{xx}`$ and the Hall resistivity $`\rho _{xy}`$ as a function of $`B`$ at a temperature of 70 mK is shown in Fig. 1. The integer numbers in the figure identify each quantum Hall state by its quantized value in units of $`h/e^2`$ (i.e., $`S_{xy}=(h/e^2)/R_{xy}`$). The peaks in $`\rho _{xx}`$ represent the positions of the delocalized states, and together they mark the phase boundaries between various quantum Hall states. This criterion was used to construct the experimental phase diagram in Fig. 3.

Before presenting the experimental phase diagram, it is useful to discuss what one should expect in the simplest case, in the absence of Landau level mixing. Using the energy separation between the two subbands for the present sample, we plot in Fig. 2a the energy $`E`$ as a function of magnetic field. The corresponding positions of the delocalized states in the $`n`$-$`B`$ plane can be calculated, and the resulting phase diagram is displayed in Fig. 2b. From Fig. 2b, one can see that the electrons fill the Landau levels of the upper and lower subbands in alternating fashion as the magnetic field is increased. In this case, the phase diagram has an ordinary “fan-like” appearance identical to that for a single-band system. The actual phase diagram is, however, very different from this simple picture. We present in Fig. 3 an experimental phase diagram.
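The alternating filling of the two Landau ladders in the mixing-free picture can be sketched directly. In the toy calculation below (an illustration, not the authors' calculation), the level energies are $`E_{i,N}=\mathrm{\Delta }_i+\mathrm{}\omega _c(N+1/2)`$ with spin neglected, and the zero-field subband splitting is *estimated* from the quoted subband densities through the 2D density of states $`m^{}/\pi \mathrm{}^2`$; the GaAs effective mass $`m^{}=0.067m_e`$ is also an assumed input:

```python
# Toy version of the mixing-free picture of Fig. 2: order the Landau levels
# E_{i,N} = Delta_i + hbar*omega_c*(N + 1/2) of the two subbands (spin neglected).
# The subband splitting is *estimated* from the quoted zero-field densities via
# the 2D density of states m*/(pi*hbar^2) -- an assumption, not a fitted value.
import math

hbar = 1.054571817e-34
e = 1.602176634e-19
m_eff = 0.067 * 9.1093837015e-31          # GaAs conduction-band effective mass

n1, n2 = 3.3e15, 8.8e15                   # subband densities in m^-2 (from the text)
Delta = (n2 - n1) * math.pi * hbar**2 / m_eff   # estimated subband splitting, J
print(f"estimated splitting: {Delta / e * 1e3:.1f} meV")

def level_order(B, n_levels=6):
    """First few (subband, N) labels sorted by energy at field B (tesla)."""
    hwc = hbar * e * B / m_eff            # cyclotron energy
    levels = [(hwc * (N + 0.5), ("lower", N)) for N in range(n_levels)]
    levels += [(Delta + hwc * (N + 0.5), ("upper", N)) for N in range(n_levels)]
    return [tag for _, tag in sorted(levels)]

print(level_order(4.0)[:5])               # the ladders of the two subbands interleave
```

At a few tesla the two ladders interleave, so successive delocalized levels come alternately from the lower and upper subbands, which is the alternating filling visible in Fig. 2b.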
The density on the right axis is related to the gate voltage on the left axis by a linear relation determined by the sample geometry. To construct the phase diagram, we have swept both the gate voltage, i.e., the carrier density, at a fixed magnetic field (a “$`V_g`$-scan”), and the field at a fixed gate voltage (a “$`B`$-scan”). Each peak in $`\rho _{xx}`$ corresponds to a single data point in the phase diagram. The data points represent the phase boundaries between various quantum Hall liquid states. Limited by the base temperature of our cryostat, the plateaus in regions very near the intersections of phase-boundary lines are normally not well resolved. In such cases, we follow the evolutionary development of the plateaus away from these places to assign sensible values of $`S_{xy}`$. Moreover, we could not determine the phase boundaries reliably at low magnetic fields ($`B\lesssim 2`$ T), as the peaks become progressively more difficult to resolve in a decreasing magnetic field.

In the low-density regime of the phase diagram (i.e., $`n\lesssim 8.5\times 10^{11}\text{ cm}^{-2}`$), with the upper subband depopulated, the experimental phase diagram is identical to that of a one-subband system . Note the transition of spin-resolved quantum Hall states to spin-degenerate quantum Hall states at around 8 T. This type of level “pinch off” has recently been studied in detail both theoretically and experimentally.

With the upper subband populated, the phase diagram becomes very rich in topology. The most pronounced feature is the sawtooth-like pattern for densities in the range between $`n=9\times 10^{11}\text{ cm}^{-2}`$ and $`11\times 10^{11}\text{ cm}^{-2}`$. A similar pattern can also be seen at higher densities, between $`n=10.5\times 10^{11}\text{ cm}^{-2}`$ and $`12\times 10^{11}\text{ cm}^{-2}`$. There are also apparent “triple” and “quadruple” points which separate the different quantum Hall phases.
Despite the complexity of this phase diagram, we find that the “selection rules” for the quantum Hall phase transitions are never violated. According to the selection rules, the Fermi level must cross one delocalized level at a time. For the level counting, “one” is used in the spin-resolved case and “two” in the spin-unresolved case. Therefore, in either a “$`V_g`$-scan” or a “$`B`$-scan”, the number $`S_{xy}`$ changes either by one for crossing a spin-resolved subband level, by two for crossing a spin-degenerate subband level, or even conceivably by four for crossing a spin-degenerate and subband-mixed level.

One of the striking consequences of this unusual topology is that the differential Hall resistance $`d\rho _{xy}/dB`$ can be negative during a $`B`$-scan in certain regions of the $`n`$-$`B`$ diagram. For example, a negative differential Hall resistance (NDHR) can be seen in Fig. 1 around $`B=7`$ T (in the circled area). This NDHR is certainly unusual. In a one-band system, only positive differential Hall resistance (i.e. the classical Hall resistance, or the region between two plateaus) and zero differential Hall resistance (i.e., in the plateau region) have been observed. For this two-band system we found, in fact, that NDHR can be seen during a $`B`$-scan along a trajectory cutting through the top portion of any sawtooth.

We present, in the left panel of Fig. 4, the evolution of the NDHR for various $`V_g`$ at a fixed temperature of 70 mK for the sawtooth between 6 and 8 T. For the convenience of tracking $`S_{xy}`$, we have plotted $`1/R_{xy}`$ (in units of $`e^2/h`$) on the vertical axis. At $`V_g=-0.34`$ V (at the tip of the tooth), a slight dip is seen at the middle of the well-developed $`h/7e^2`$ Hall plateau. As $`V_g`$ becomes more negative, the dip shows more deviation from $`h/7e^2`$, getting progressively deeper and wider. At $`V_g=-0.38`$ V, the deviation is greatest, while the high-field portion of the $`h/7e^2`$ plateau is still visible.
The high-field side of this dip leads to the unusual NDHR. As the gate voltage becomes even more negative, the dip develops into the $`h/6e^2`$ Hall plateau (see for example $`-0.54`$ V), giving, eventually, the normal $`S_{xy}=7`$ to $`S_{xy}=6`$ (“6-7” for short) quantum Hall transition.

We have also investigated the temperature dependence of $`d\rho _{xy}/dB`$ at $`V_g=-0.41`$ V. $`B`$-scan traces, shown in the right panel of Fig. 4, were taken at $`T=4.2`$ K, 1.2 K, and 70 mK. At the highest temperature, the $`h/7e^2`$ plateau is not resolved and there is no sign of the NDHR. At $`T=1.2`$ K, the plateau starts to form and a small dip becomes visible near the expected positions of the $`h/7e^2`$ and $`h/6e^2`$ plateaus. As the temperature goes further down, both the $`h/7e^2`$ and $`h/6e^2`$ plateaus are well resolved and the dip becomes deeper. The deviation reaches a value of about $`h/6.5e^2`$ at 70 mK. It is apparent that the NDHR is closely associated with the formation of the quantum Hall states of $`S_{xy}=6`$ and $`S_{xy}=7`$ in this case.

In an attempt to understand the topological anomalies of the phase diagram, we have performed a simple numerical calculation to account for the effect of Landau level mixing of the two bands. In this calculation, we have made the simple assumption that the density of states can be modeled as two sequences of Gaussian functions centered around the Landau levels of the lower and upper subbands, respectively. The width of the Gaussian functions is determined from the conductivity of the sample. For each and every maximum in the density of states, the electron density is calculated at a given magnetic field. We assume that the delocalized states lie at the local maxima in the resulting density of states. In this way, we can obtain a theoretical phase diagram. Of course, in reality, the Landau level mixing due to both level repulsion and disorder broadening is far more complicated than this simple assumption.
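A stripped-down version of this Gaussian-broadened DOS model is easy to write down. The sketch below is an illustration in reduced units, not the authors' code: it assumes a cyclotron slope of 1.73 meV/T (GaAs, $`m^{}=0.067m_e`$), a subband splitting of 19.7 meV estimated from the quoted zero-field densities, and an arbitrary broadening $`\gamma =1.5`$ meV. Its point is the mechanism: when a level of one ladder approaches a level of the other, their Gaussians overlap and two DOS maxima merge into one, so the count of delocalized levels below a fixed energy drops near a crossing.

```python
# Toy Gaussian-broadened two-subband DOS (an illustration, not the paper's code):
# the delocalized states are taken to sit at the local maxima of DOS(E). Near a
# crossing of the two Landau ladders, pairs of maxima merge -- the mechanism
# behind the sawtooth pattern described in the text. Energies in meV.
import math

def dos_maxima(B, Delta=19.7, gamma=1.5, n_levels=8):
    """Energies (meV) of local DOS maxima; hwc = 1.73 meV/T assumed for GaAs."""
    hwc = 1.73 * B
    centers = [hwc * (N + 0.5) for N in range(n_levels)]            # lower subband
    centers += [Delta + hwc * (N + 0.5) for N in range(n_levels)]   # upper subband
    def dos(E):
        return sum(math.exp(-((E - c) / gamma) ** 2) for c in centers)
    grid = [0.01 * k for k in range(6000)]    # 0..60 meV
    vals = [dos(E) for E in grid]
    return [grid[i] for i in range(1, len(grid) - 1)
            if vals[i] > vals[i - 1] and vals[i] > vals[i + 1]]

# B = 5.69 T puts a lower-ladder level right on top of an upper-ladder level
# (hwc*2 = Delta); B = 6.5 T is away from any crossing:
print(len([E for E in dos_maxima(5.69) if E < 40]),
      len([E for E in dos_maxima(6.5) if E < 40]))
```

At the crossing field the merged peaks leave fewer distinct maxima below a fixed energy than at a nearby field away from the crossing, so the delocalized-state lines must deviate from the bare fan as a crossing is approached, which is the "floating up" discussed next.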
Our simple calculation is nevertheless able to reproduce the sawtooth-like structure qualitatively, as seen in the experimental data. We therefore believe the sawtooth pattern is a result of the mixing of the Landau levels of the two bands. Every time two levels move towards a crossing, the position of the delocalized states deviates from the normal fan lines and “floats up” in density or, equivalently, in energy. Conversely, when two levels move away from the crossing, the position of the delocalized states “sinks down” back to the normal fan lines. We think this effect has the same origin as the “floating” observed in the one-band system in the limit of vanishing $`B`$. In our numerical calculations, the above criterion results in a general floating up in energy of individual delocalized states with decreasing $`B`$. Therefore, the unusual sawtooth patterns are caused by delocalized states rising above their corresponding host Landau levels. The sawtooth structure at 7.5 T can be identified as due to the mixing of the spin-degenerate third Landau level ($`N=2`$) of the first subband with the spin-up state of the lowest Landau level of the second subband ($`N^{\prime }=0`$). The features at around 5.5 T, 4.4 T, and 3.5 T are due to the mixing of the ($`N^{\prime }=0`$) level with the ($`N=3,4,5`$) levels, respectively. Another portion of the sawtooth pattern at high densities is due to the mixing of the ($`N^{\prime }=1`$) level with the $`N=5,6`$ levels. It is important to point out here that the sawtooth patterns indicate a sequence of reentrant quantum Hall transitions (i.e., 7-6-7, 10-8-10, 12-10-12, etc.). This type of reentrant quantum Hall transition has been proposed theoretically for single-band quantum Hall systems; experimentally, it has not been seen to date. An analogous transition which has been observed is the reentrant insulator-quantum Hall transition, known as the 0-2-0 transition or the 0-1-0 transition.
For the present experiment, we believe the NDHR observed in certain regions provides preliminary evidence of the long-sought reentrant quantum Hall transitions. For example, in the region between 6 T and 8 T, a $`B`$-scan at an appropriate $`V_g`$ through the top portion of the sawtooth is equivalent to traversing the phase diagram horizontally, thus cutting through two sides of the sawtooth. To the left of the sawtooth, we can identify the quantum Hall state as the $`S_{xy}=7`$ state. Inside the sawtooth it is an $`S_{xy}=6`$ state. To the right, within a very narrow range of $`B`$, it is again an $`S_{xy}=7`$ state. For the 7-6-7 transition, the Hall resistance should vary from $`h/7e^2`$ to $`h/6e^2`$ and back to $`h/7e^2`$ with increasing $`B`$. However, the $`S_{xy}=6`$ and the “reentrant” $`S_{xy}=7`$ plateaus in our experiment cannot be well resolved simultaneously at a given $`V_g`$. As a result, the values of the dip and the peak in $`R_{xy}`$ only reach, at best, about $`h/6.5e^2`$ rather than $`h/6e^2`$ and $`h/7e^2`$, respectively. The NDHR can be considered a precursor of the reentrant $`S_{xy}=7`$ state. We believe one should be able to observe the true 7-6-7 transition at lower temperatures. We cannot, however, eliminate the possibility that a true quantum Hall state (i.e., with zero diagonal resistance and a quantized Hall plateau) would be intrinsically prohibited by the Landau-level mixing. The authors would like to thank S. Kivelson and D. Orgad for helpful discussions. This work is supported by NSF under grant #DMR 9705439.
# The Dyadosphere of Black Holes and Gamma-Ray Bursts institutetext: I.C.R.A.-International Center for Relativistic Astrophysics and Physics Department, University of Rome “La Sapienza”, I-00185 Rome, Italy email: ruffini@icra.it ## Abstract Works on the Dyadosphere are reviewed. ###### Key Words.: black holes – gamma ray bursts – Dyadosphere I propose, and give reasons to believe, that with Gamma-Ray Bursts we are witnessing for the first time, in real time, the moment of gravitational collapse to a Black Hole. Even more important, the tremendous energetics of these sources, especially after the discoveries of their afterglows and their cosmological distances (Kulkarni et al. 1998), clearly point to the necessity, and offer the opportunity, of using the extractable energy of Black Holes as the energy source of these objects. That Black Holes can only be characterized by their mass-energy $`E`$, charge $`Q`$ and angular momentum $`L`$ was advanced in a classic article (Ruffini & Wheeler 1971); its proof required twenty-five years of meticulous mathematical work. One of the crucial points in the physics of Black Holes was the realization that energies comparable to their total mass-energy could be extracted from them. The computation of the first specific example of such an energy-extraction process, by a gedanken experiment, was given in Ruffini & Wheeler (1970) and Floyd & Penrose (1971) for the extraction of rotational energy from a Kerr Black Hole; see Figure (1). The reason for showing this figure is not only to recall the first such explicit computation, but to emphasize how contrived and difficult such a mechanism can be: it works only for very special parameters and is in general associated with a reduction of the rest mass of the particle involved in the process.
To slow down the rotation of a Black Hole and increase its horizon by the accretion of counterrotating particles is almost trivial; but to extract the rotational energy from a Black Hole, namely to slow down the Black Hole while keeping its surface area constant, is extremely difficult, as clearly shown by the example in Figure (1). The above gedanken experiments, extended as well to electromagnetic interactions, became of paramount importance not for their direct astrophysical significance but because they provided the tools for testing the physics of Black Holes and identifying their general mass-energy formula (Christodoulou & Ruffini 1971). The crucial point is that a transformation at constant surface area of the Black Hole, or reversible in the sense of Christodoulou & Ruffini (1971), can release an energy up to 29% of the mass-energy of an extremal rotating Black Hole and up to 50% of the mass-energy of an extremely magnetized and charged Black Hole. Various models have been proposed to tap the rotational energy of Black Holes by relativistic magnetohydrodynamic processes. It is likely, however, that these processes are relevant only over the very long time scales characteristic of accretion. In the present case of Gamma-Ray Bursts a prompt mechanism, on time scales shorter than a second, for depositing the entire energy in the fireball at the moment of the triggering of the burst, appears to be at work. For this reason we here consider a more detailed study of the vacuum polarization processes à la Heisenberg-Euler-Schwinger (Heisenberg & Euler 1931, Schwinger 1951) around a Kerr-Newman Black Hole, first introduced by Damour and Ruffini (Damour & Ruffini 1975).
The fundamental points of this process can be simply summarized: * They occur in an extended region around the Black Hole, the Dyadosphere, extending from the horizon radius $`r_+`$ to the Dyadosphere radius $`r_{ds}`$; see (Preparata, Ruffini & Xue 1998a,b). Only Black Holes with a mass larger than the upper limit of a neutron star, and up to a maximum mass of $`6\times 10^5M_{\odot }`$, can have a Dyadosphere; see (Preparata, Ruffini & Xue 1998a,b) for details. * The efficiency of transforming the mass-energy of the Black Hole into particle-antiparticle pairs outside the horizon can approach 100%, for Black Holes in the above mass range; see (Preparata, Ruffini & Xue 1998a,b) for details. * The pairs created are mainly electron-positron pairs, and their number is much larger than the quantity $`Q/e`$ one would naively have expected on the grounds of qualitative considerations. It is actually given by $`N_{\mathrm{pairs}}=\frac{Q}{e}(1+\frac{r_{ds}}{\hbar /mc})`$, where $`m`$ is the electron mass. The energy of the pairs, and consequently the emission of the associated electromagnetic radiation, peaks in the X- and gamma-ray region, as a function of the Black Hole mass. I recall some of the results on the Dyadosphere. We consider the collapse to an almost general Black Hole endowed with an electromagnetic field (EMBH). Following Preparata, Ruffini & Xue (1998a,b), for simplicity we consider the case of a non-rotating Reissner-Nordström EMBH to illustrate the basic gravitational-electrodynamical process.
The number density of pairs created in the Dyadosphere is $$N_{e^+e^{}}\simeq \frac{Q-Q_c}{e}\left[1+\frac{(r_{ds}-r_+)}{\frac{\hbar }{mc}}\right],$$ (1) where $$r_{\mathrm{ds}}=\left(\frac{\hbar }{mc}\right)^{\frac{1}{2}}\left(\frac{GM}{c^2}\right)^{\frac{1}{2}}\left(\frac{m_\mathrm{p}}{m}\right)^{\frac{1}{2}}\left(\frac{e}{q_\mathrm{p}}\right)^{\frac{1}{2}}\left(\frac{Q}{\sqrt{G}M}\right)^{\frac{1}{2}}.$$ (2) Due to the very large pair density and to the sizes of the cross-sections for the process $`e^+e^{}\gamma +\gamma `$, the system is expected to thermalize to a plasma configuration for which $$N_{e^+}=N_{e^{}}=N_\gamma =N_{\mathrm{pair}}$$ (3) and reach an average temperature $$kT_{\circ }=\frac{E_{e^+e^{}}^{\mathrm{tot}}}{3N_{\mathrm{pair}}\cdot 2.7},$$ (4) where $`k`$ is Boltzmann’s constant. The discussion of the relativistic expansion of the Dyadosphere is presented in a separate paper (see e.g. Ruffini, Salmonson, Wilson & Xue 1999 in these proceedings). Before concluding, I would like to return to the suggestion, advanced by Damour and Ruffini, that a discharged EMBH can still be extremely interesting from an energetic point of view and responsible for the acceleration of ultrahigh-energy cosmic rays. I would just like to formalize this point with a few equations: it is clear that no matter what the initial conditions leading to the formation of the EMBH are, the final outcome after the tremendous expulsion of the PEM pulse will be precisely a Kerr-Newman solution with a critical value of the charge. If the background metric has a Killing vector, the scalar product of the Killing vector and the generalized momentum $$P_\alpha =mU_\alpha +eA_\alpha ,$$ (5) is constant along the trajectory of any charged gravitating particle following the relativistic equation of motion in the background metric and electromagnetic field (Jantzen and Ruffini 1999).
Consequently an electron (positron) starting at rest in the Dyadosphere will reach infinity with an energy $`E_{kinetic}\simeq 2mc^2(\frac{GM}{c^2})/(\frac{\hbar }{mc})\simeq 10^{22}`$ eV for $`M=10M_{\odot }`$.
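The closing estimate can be checked directly: $`E_{kinetic}\simeq 2mc^2(GM/c^2)/(\hbar /mc)`$ is just twice the electron rest energy times the ratio of the gravitational radius to the electron Compton wavelength. A quick evaluation in CGS units for $`M=10M_{\odot }`$:

```python
# CGS constants
G = 6.674e-8         # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10         # speed of light, cm s^-1
hbar = 1.055e-27     # reduced Planck constant, erg s
m_e = 9.109e-28      # electron mass, g
M_sun = 1.989e33     # solar mass, g
ERG_TO_EV = 6.242e11

M = 10.0 * M_sun
r_g = G * M / c**2              # gravitational radius GM/c^2 (~1.5e6 cm)
lambda_C = hbar / (m_e * c)     # electron Compton wavelength (~3.9e-11 cm)

E_kin_eV = 2.0 * m_e * c**2 * (r_g / lambda_C) * ERG_TO_EV
print(f"E_kinetic ~ {E_kin_eV:.1e} eV")   # of order 10^22 eV, as quoted
```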
# Finding Gravitational Lenses With X-rays ## 1 INTRODUCTION Gravitational lenses are an increasingly powerful tool for studies of cosmology (Falco, Kochanek & Muñoz 1998, Cooray 1999, Helbig 1999), the Hubble constant (Impey et al. 1998, Barkana et al. 1999, Bernstein & Fischer 1999, Fassnacht et al. 1999), galactic structure (Keeton, Kochanek & Falco 1998) and galactic evolution (Kochanek et al. 1999). Their utility is growing at a fast pace because the number of known lenses is increasing rapidly, having reached $`50`$ systems at present (see http://cfa-www.harvard.edu/castles). Despite the larger samples, we have discovered only a small fraction of the total number of lenses detectable with modern instruments. Confusion is a fundamental problem for existing gravitational lens surveys. Even at high Galactic latitudes, most point sources found near quasars are stars rather than gravitationally lensed images (see Kochanek 1993a). Confusion in radio lens surveys is caused by the range of source structures – flat-spectrum radio lens surveys contain far more compact doubles than two-image lenses (see Helbig et al. 1999), and steep-spectrum surveys must cope with the enormous variety of extended radio-emission morphologies (e.g., Griffith et al. 1991). These problems vanish for high-resolution X-ray imaging observations, where confusing Galactic sources are rare (as at radio wavelengths) and source structure is simple (as for quasars). However, the image separations in lenses are small – of the nearly $`50`$ known lenses, 90% are larger than 0$`\stackrel{\prime \prime }{\mathrm{.}}`$5, the median separation is 1$`\stackrel{\prime \prime }{\mathrm{.}}`$5, and only 10% are larger than 2$`\stackrel{\prime \prime }{\mathrm{.}}`$5 (Keeton et al. 1998) – thus, high angular resolution (of order $`1^{\prime \prime }`$) is required. The resolution problem can be avoided by looking for lensed images produced by rich clusters, where the image separation is much larger (e.g., Luppino et al. 1999).
The High Resolution Camera (HRC) and the AXAF CCD Imaging Spectrometer (ACIS) on the Chandra X-ray Observatory will allow the first direct searches for gravitational lenses at X-ray wavelengths. Both HRC and ACIS combine a relatively wide field of view with high spatial resolution near the center of the field (50% enclosed energy radius $`r_{50}\simeq 0\stackrel{\prime \prime }{\mathrm{.}}5`$). Unfortunately, the resolution worsens with the distance from the field center $`D`$, with $`r_{50}\simeq 0\stackrel{\prime \prime }{\mathrm{.}}5+6\stackrel{\prime \prime }{\mathrm{.}}0(D/10^{\prime })^2`$ (see Kenter et al. 1997), and only the central portion of the detector will be useful for recognizing the typical lensed source. In §2 we estimate the probability of lensing X-ray emitting AGN as a function of flux, including rough estimates of the observational selection effects for the HRC and ACIS detectors. In §3 we estimate the probability of finding lensed X-ray AGN in fields centered on massive clusters at intermediate redshift, where the larger image separations make the worsening resolution at large off-axis angles relatively unimportant. We summarize our conclusions in §4. ## 2 SERENDIPITOUS LENSES The method for calculating the expected number of lenses is well developed; we follow the calculations used for the radio lens surveys by King & Browne (1996), Kochanek (1996), Falco, Kochanek & Muñoz (1998), Cooray (1999) and Helbig et al. (1999). We assume that the lens galaxies are described by singular isothermal spheres (SIS) with parameter normalizations derived from the best fits to the multiply-imaged radio sources and quasars in Kochanek (1996) and Falco et al. (1998). The SIS mass distribution is generally consistent with the available lens data (see, e.g., Kochanek 1995), as well as local stellar dynamical measurements (Rix et al. 1997) and X-ray observations (e.g., Fabbiano 1989) of early-type galaxies.
We ignore spiral galaxy lenses, as they are a small fraction of all lenses (10–20%) and produce $`50\%`$ smaller image separations (see Kochanek 1996). We describe the early-type lens galaxies by a constant comoving Schechter (1976) luminosity function $$\frac{dn}{dL}=\frac{n_{\ast }}{L_{\ast }}\left(\frac{L}{L_{\ast }}\right)^\alpha \mathrm{exp}(-L/L_{\ast })$$ (1) and a Faber-Jackson (1976) relation to convert from luminosity to velocity dispersion, $$\frac{L}{L_{\ast }}=\left(\frac{\sigma }{\sigma _{\ast }}\right)^\gamma .$$ (2) The parameters $`n_{\ast }=0.0061h^3`$ Mpc<sup>-3</sup> , $`\alpha =-1.0`$ and $`\gamma =4`$ are measured for nearby galaxies, and $`\sigma _{\ast }=225\text{ km s}^{-1}`$ is measured by fitting the observed separation distribution of lenses. This parameterization represents the “standard” model of Kochanek (1996) and Falco et al. (1998). Recent revisions to the model suggested by Chiba & Yoshii (1999) and Cheng & Krauss (1999) are generally inconsistent with the observations (see Kochanek et al. 1999). The probability that a source lies within the multiple-imaging region of a lens, also known as the optical depth, has a characteristic scale of $`\tau _{\ast }=16\pi ^3(\sigma _{\ast }/c)^4n_{\ast }r_H^3=0.026`$ given the parameters for the mass and number of lens galaxies. Although the Hubble radius $`r_H=c/H_0`$ enters the expression for the optical depth, the quantity $`r_H^3n_{\ast }`$ is independent of the value of the Hubble constant. In a flat cosmological model, the optical depth is simply $`\tau =(\tau _{\ast }/30)(D_{OS}/r_H)^3`$, where $`D_{OS}`$ is the comoving distance to the source (Turner 1990; see Carroll, Press & Turner 1992, Kochanek 1993b for general expressions). The average optical depth is closely related to the square of the observed image separations, with $`\tau \propto \mathrm{\Delta }\theta ^2n_{\ast }D_{OS}^3`$ for all cosmologies and lens models.
The characteristic image separation is $`\mathrm{\Delta }\theta _{\ast }=8\pi (\sigma _{\ast }/c)^2=2\stackrel{\prime \prime }{\mathrm{.}}92`$, and in a flat cosmological model the average image separation is simply $`\mathrm{\Delta }\theta =\mathrm{\Delta }\theta _{\ast }/2`$. We use the soft X-ray (0.3–3.5 keV) luminosity functions derived by Boyle et al. (1994), particularly their models H (for $`\mathrm{\Omega }_0=1`$) and K (for $`\mathrm{\Omega }_0=0`$). For an X-ray luminosity function $`dN/dLdz`$, the total number of unlensed X-ray sources brighter than flux $`S`$ per unit solid angle is $$\frac{dN}{d\mathrm{\Omega }}(>S)=\int _0^{\mathrm{\infty }}dV_s\int _{L_{min}}^{\mathrm{\infty }}dL\frac{dN}{dLdz}(L)$$ (3) where $`dV_s`$ is the volume element, and $`L_{min}=4\pi D_{lum}^2S(1+z)^{\alpha -1}`$ is determined from the luminosity distance $`D_{lum}`$ and the spectral index $`\alpha `$ defined by $`F_\nu \propto \nu ^{-\alpha }`$. Boyle et al. (1994) use $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, and assume a fixed spectral index of $`\alpha =1`$. In a flat cosmology the volume element is $`dV_s=D_{OS}^2dD_{OS}`$ where $`D_{OS}`$ is the comoving distance to the source, and the luminosity distance is $`D_{lum}=D_{OS}(1+z_s)`$ in all cosmologies.
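The two characteristic scales quoted here, $`\tau _{\ast }=0.026`$ and $`\mathrm{\Delta }\theta _{\ast }=2\stackrel{\prime \prime }{\mathrm{.}}92`$, follow from the quoted normalizations in a few lines; note that the $`h`$-dependence cancels in $`n_{\ast }r_H^3`$ exactly as stated.

```python
import math

sigma_star = 225.0       # km/s, from fitting the observed lens separations
c_kms = 2.998e5          # speed of light, km/s
n_star = 0.0061          # h^3 Mpc^-3
r_H = 2.998e5 / 100.0    # c/H0 in h^-1 Mpc (H0 = 100 h km/s/Mpc)

beta2 = (sigma_star / c_kms) ** 2
tau_star = 16.0 * math.pi**3 * beta2**2 * n_star * r_H**3   # the h's cancel
dtheta_star = 8.0 * math.pi * beta2 * 206264.8              # rad -> arcsec

print(f"tau_star    = {tau_star:.3f}")          # 0.026
print(f"dtheta_star = {dtheta_star:.2f} arcsec")  # 2.92
```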
To find the number of lensed X-ray sources we must include the redshift-dependent optical depth and the magnification bias (see Schneider, Ehlers & Falco 1992), so the number of lensed X-ray sources in a flat cosmology is $$\left(\frac{dN}{d\mathrm{\Omega }}\right)_L(>S)=\int _0^{\mathrm{\infty }}dV_s\tau (z_s)\int _{L_{min}}^{\mathrm{\infty }}dL\int _{M_{min}}^{\mathrm{\infty }}\frac{dM}{M}\frac{dP}{dM}\frac{dN}{dLdz}\left(\frac{L}{M}\right)C\left(\frac{\mathrm{\Delta }\theta _{min}}{\mathrm{\Delta }\theta _{\ast }}\right).$$ (4) The number of lensed sources is related to the number of unlensed sources through the optical depth at a given source redshift $`\tau (z_s)`$, the magnification of the lensed sources relative to the unlensed sources as described by the magnification probability distribution $`dP/dM`$, and selection limits on the detectable image flux ratios and separations. For the SIS lens the probability distribution for the magnification is $`dP/dM=8/M^3`$ and the minimum detectable flux ratio $`f<1`$ sets the lower limit of the magnification integral, $`M_{min}=2(1+f)/(1-f)`$. We must also eliminate the lenses with separations below the resolution limit of Chandra. The angular selection function $$C(x=\mathrm{\Delta }\theta _{min}/\mathrm{\Delta }\theta _{\ast })=30\int _0^1duu^2(1-u)^2\mathrm{exp}(-x^2/u^2)$$ (5) gives the fraction of lenses with separations larger than a minimum value $`\mathrm{\Delta }\theta _{min}`$. The expressions for the optical depth and the volume element change for cosmological models with non-zero curvature (see Carroll et al. 1992 and Kochanek 1993b for general expressions). We present the results for the two cosmologies, $`\mathrm{\Omega }_0=0`$ and $`\mathrm{\Omega }_0=1`$, for which the X-ray LF was derived by Boyle et al. (1994). The results for the $`\mathrm{\Omega }_0=0`$ model should be similar to those for a flat model with $`\mathrm{\Omega }_0=0.5`$ and a cosmological constant (see Carroll et al. 1992).
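The angular selection function of eqn. (5) is normalized so that $`C(0)=1`$: the prefactor 30 is $`1/\int _0^1u^2(1-u)^2du`$. A direct quadrature sketch (the midpoint rule is an arbitrary choice; it conveniently avoids the $`u=0`$ endpoint):

```python
import math

def C(x, n=20000):
    """C(x) = 30 * int_0^1 u^2 (1-u)^2 exp(-x^2/u^2) du  (eqn. 5),
    by the midpoint rule; the integrand -> 0 as u -> 0 for x > 0."""
    du = 1.0 / n
    s = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        s += u * u * (1.0 - u) ** 2 * math.exp(-x * x / (u * u))
    return 30.0 * s * du
```

`C(x)` falls steeply once the minimum detectable separation exceeds the characteristic separation, which is why the effective survey area estimated below is so much smaller than the detector area.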
Figure 1 shows the expected number of X-ray sources and lensed X-ray sources per square degree as a function of flux assuming a perfect detector ($`f>0`$ or $`M_{min}=2`$, and $`C(x)=1`$). Figure 2 shows the redshift distribution of the lensed and unlensed sources for integrations of 1, 10 and 100 ksec assuming an exposure time of $`(S/2.5\times 10^{-13}\text{ergs s}^{-1}\text{cm}^{-2})^{-1}`$ ksec for a 5–$`\sigma `$ point source detection (e.g., Jerius et al. 1997). The lensing probability peaks near $`S=10^{-13}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>, which provides the best balance between magnification bias and source redshift. The magnification bias is highest for the brightest sources (steep number counts, far from the break in number counts), while the lens cross section is highest for faint sources (highest average redshift). For brighter sources the probability drops because of the low average source redshift and for fainter sources it drops because of the flattening of the number counts distribution. The peak lensing probability of 0.2–0.4% (depending on the cosmological model) is lower than for bright quasars (about 1%) but higher than for radio sources (about 0.1–0.2%). The total number of X-ray lenses is enormous, reaching roughly one per square degree for $`S>10^{-15}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>. Particularly for the $`\mathrm{\Omega }_0=1`$ model, the predictions are underestimates because the luminosity function models underpredict the observed number counts of sources (see Boyle et al. 1994). Observational selection effects determine the fraction of these lenses that can be found, so we next estimate the number of observable lenses per HRC or ACIS exposure. The fundamental problem with the Chandra Observatory for conducting a lens survey is the strong variation in the resolution with the distance from the field center. We estimate (from Kenter et al.
1997) that the radius encircling 50% of the energy is approximately $`r_{50}=0\stackrel{\prime \prime }{\mathrm{.}}5+6\stackrel{\prime \prime }{\mathrm{.}}0(D/10^{\prime })^2`$ at a distance $`D`$ from the field center. The minimum separation for recognizing multiple images can be approximated by a small multiple of $`r_{50}`$, $`\mathrm{\Delta }\theta _{min}=\xi r_{50}`$ with $`1<\xi <2`$. Thus, we can define an effective area for the detection of multiply imaged X-ray sources by $$\mathrm{\Delta }\mathrm{\Omega }_{eff}(\xi )=2\pi \int _0^{\mathrm{\infty }}DdDC(\xi r_{50}/\mathrm{\Delta }\theta _{\ast })$$ (6) where $`C(x)`$ is the angular selection function introduced in eqn. (5). We can use an upper limit to the integral of $`\mathrm{\infty }`$ rather than the physical detector size because the exponential cutoff in $`C(x)`$ makes it unimportant. For reasonable count rates the best estimate is $`\mathrm{\Delta }\mathrm{\Omega }_{eff}(\xi =1)=0.012`$ square degrees, but if pessimistic, $`\mathrm{\Delta }\mathrm{\Omega }_{eff}(\xi =2)=0.0035`$ square degrees. Unfortunately, the effective area of the detector is far smaller than its total area. Figure 3 shows the expected number of lenses per telescope pointing (i.e. in an area $`\mathrm{\Delta }\mathrm{\Omega }_{eff}(\xi =1)`$) for limits on the detectable flux ratio of $`f>0`$, $`0.1`$, $`0.25`$ and $`0.5`$. The effect of the flux ratio limit is smallest for bright sources, where the magnification bias leads to a sample dominated by lenses with modest flux ratios, and enormous for faint sources. While the expected number of lenses drops rapidly as we move to brighter sources, the reduced exposure time needed to detect bright sources greatly increases the number of possible exposures.
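Combining the off-axis PSF model $`r_{50}=0\stackrel{\prime \prime }{\mathrm{.}}5+6\stackrel{\prime \prime }{\mathrm{.}}0(D/10^{\prime })^2`$ with $`C(x)`$ gives the effective area of eqn. (6) by direct numerical integration. The quadrature scheme and cutoff below are arbitrary choices; the point is only to reproduce the order of magnitude of the quoted 0.012 and 0.0035 square degrees.

```python
import math

def C(x, n=2000):
    """Angular selection function of eqn. (5), midpoint rule."""
    du = 1.0 / n
    s = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        s += u * u * (1.0 - u) ** 2 * math.exp(-x * x / (u * u))
    return 30.0 * s * du

def effective_area(xi, dtheta_star=2.92, d_max=15.0, n=300):
    """Delta_Omega_eff(xi) = 2 pi int D C(xi r50(D)/dtheta_star) dD (eqn. 6),
    with D in arcmin and r50(D) = 0.5 + 6.0 (D/10)^2 arcsec; the integrand
    is negligible beyond d_max arcmin.  Returns square degrees."""
    dD = d_max / n
    s = 0.0
    for i in range(n):
        D = (i + 0.5) * dD
        r50 = 0.5 + 6.0 * (D / 10.0) ** 2
        s += D * C(xi * r50 / dtheta_star)
    return 2.0 * math.pi * s * dD / 3600.0   # arcmin^2 -> deg^2
```

Evaluating `effective_area(1.0)` and `effective_area(2.0)` should land near the quoted best-estimate and pessimistic values, respectively.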
The number of exposures that can be taken per year is roughly $`N_{exp}=10^4(S/10^{-13}\text{ergs s}^{-1}\text{cm}^{-2})`$, so the number of detectable lenses per year of high resolution imaging ($`\mathrm{\Delta }\mathrm{\Omega }_{eff}N_{exp}dN/d\mathrm{\Omega }(>S)`$) is roughly independent of the flux limit (see Figure 3). If the Chandra Observatory were devoted only to high resolution imaging, then we would expect to find 1–3 lenses per year. ## 3 CLUSTER LENSES It is very unlikely to find a cluster acting as a lens in a randomly selected field (see Kochanek 1995, Wambsganss et al. 1995, Flores & Primack 1996, Maoz et al. 1997) – the high cross sections of clusters compared to galaxies are far outweighed by their rarity. However, many Chandra observations will be centered on intermediate redshift clusters, so they are pre-selected to have a massive, lensing object in the field. The critical radius scale for an SIS lens with velocity dispersion $`\sigma _c`$ is $`b_{\ast }=4\pi (\sigma _c/c)^2`$. For a particular lens and source redshift the image separation is $`\mathrm{\Delta }\theta =2b_{\ast }D_{LS}/D_{OS}`$ for distances from the lens (observer) to the source of $`D_{LS}`$ ($`D_{OS}`$). The multiple-imaging cross section is $`\tau _c(z_s)=\pi \mathrm{\Delta }\theta ^2/4`$, so the expected number of lenses behind a cluster of velocity dispersion $`\sigma _c`$ and redshift $`z_l`$ is $$N_c(>S)=\int _{z_l}^{\mathrm{\infty }}dV_s\tau _c(z_s)\int _{L_{min}}^{\mathrm{\infty }}dL\int _{M_{min}}^{\mathrm{\infty }}\frac{dM}{M}\frac{dP}{dM}\frac{dN}{dLdz}\left(\frac{L}{M}\right),$$ (7) where as before, $`dV_s=D_{OS}^2dD_{OS}`$ for a flat cosmology. The expected number of lenses $`N_c`$ is very weakly dependent on the cosmological model (because the cross section depends only on the distance ratio $`D_{LS}/D_{OS}`$), so we restricted the calculation to $`\mathrm{\Omega }_0=1`$ and luminosity function H. Even so, the number of lenses is underestimated because the LF model underestimates the number of faint X-ray sources.
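For the cluster case the relevant angular scale is easy to evaluate; the distance ratio $`D_{LS}/D_{OS}=0.5`$ used below is an arbitrary illustrative value, not one taken from the paper.

```python
import math

ARCSEC_PER_RAD = 206264.8
C_KMS = 2.998e5   # speed of light, km/s

def sis_critical_radius(sigma_kms):
    """b = 4 pi (sigma_c / c)^2 for a singular isothermal sphere, in arcsec."""
    return 4.0 * math.pi * (sigma_kms / C_KMS) ** 2 * ARCSEC_PER_RAD

def image_separation(sigma_kms, dls_over_dos):
    """Delta_theta = 2 b D_LS / D_OS for a source behind the lens."""
    return 2.0 * sis_critical_radius(sigma_kms) * dls_over_dos

b = sis_critical_radius(1200.0)   # the "giant-arc" cluster considered next
print(f"critical radius = {b:.1f} arcsec")   # tens of arcsec: far above
print(f"separation      = {image_separation(1200.0, 0.5):.1f} arcsec")
```

The tens-of-arcsecond scale is why the off-axis degradation of the Chandra PSF, fatal for galaxy lenses, is irrelevant for cluster lenses.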
The image separations produced by a massive cluster are sufficiently large to allow us to assume that no systems are lost due to limitations in angular resolution, although we must still impose limits on the detectable flux ratios. Figure 4 shows the number of lenses expected behind a typical “giant-arc” cluster (velocity dispersion $`\sigma _c=1200`$ km s<sup>-1</sup>) at redshift $`z_c=0.4`$ as a function of the image flux ratio limit $`f`$. The expected number of lensed sources is roughly equal to the number of X-ray sources expected within solid angle $`\pi b_{\ast }^2`$ – while the average cross section is smaller than $`\pi b_{\ast }^2`$, the magnification bias compensates. Figure 5 shows the expected number of lenses found in $`1`$, $`10`$ and $`100`$ ksec images of clusters as a function of their redshift and velocity dispersion. As in the serendipitous surveys, individual observations are unlikely to detect multiply-imaged systems, but the accumulated results of all imaging programs will find lensed sources. The number of lenses detected is of order 1–10 for each year devoted to imaging clusters, depending on the mass and redshift distributions of the clusters. Whether the SIS is a realistic representation of cluster lenses is an open question (e.g. see Williams, Navarro & Bartelmann 1999), but the cross section estimates should be approximately correct. ## 4 SUMMARY The Chandra X-ray Observatory will discover both serendipitous lenses, where a random background source is found to be lensed by a foreground galaxy, and cluster lenses, where a background source is found to be lensed by a cluster that is the target of a Chandra pointed observation. The number of detectable systems is 1–3 serendipitous lenses and 1–10 cluster lenses per year of imaging time, roughly independent of the flux limit of the observations and including strong limits on the detectability of the lensed images. These are probably underestimates because the Boyle et al.
(1994) luminosity functions we used for our calculations undercount the numbers of X-ray sources at faint flux limits. The X-ray Multi-Mirror Mission (XMM, http://astro.estec.esa.nl/XMM), with its coarser angular resolution (5$`\stackrel{\prime \prime }{\mathrm{.}}`$0 FWHM), will be unable to detect gravitational lenses produced by galaxies. However, its high sensitivity will make it very useful for detecting cluster lenses. The total number of lensed X-ray sources is enormous, roughly $`(10^{-15}\text{ergs s}^{-1}\text{cm}^{-2}/S)`$ lenses per square degree brighter than a soft X-ray flux $`S`$, with none of the confusion problems which interfere with searches for gravitational lenses in the optical or radio. An X-ray telescope with the resolution of the Chandra Observatory over a wide field of view would be an extraordinarily efficient instrument for finding gravitational lenses. Alternatively, deep, high resolution optical images of X-ray sources should be an efficient means of searching for new gravitational lenses. Acknowledgments: We would like to thank Adam Dobrzycki for comments and for producing simulated HRC images of gravitational lenses. We would also like to thank Richard Mushotzky and Xavier Barcons for their comments. CSK is supported by NASA ATP grant NAG5-4062.
# Keplerian Motion of Broad-Line Region Gas as Evidence for Supermassive Black Holes in Active Galactic Nuclei ## 1 Introduction Since the earliest days of quasar research, supermassive black holes (SBHs) have been considered to be a likely, if not the most likely, principal agent of the activity in these sources. Evidence for the existence of SBHs in active galactic nuclei (AGNs), and indeed in non-active nuclei as well, has continued to accumulate (e.g., Kormendy & Richstone 1995). In the specific case of AGNs, probably the strongest evidence to date for SBHs has been the Keplerian motions of megamaser sources in the Seyfert galaxy NGC 4258 (Miyoshi et al. 1995) and asymmetric Fe K$`\alpha `$ emission in the X-ray spectra of AGNs (e.g., Tanaka et al. 1995), though the latter is still somewhat controversial as the origin of the Fe K$`\alpha `$ emission has not been settled definitively. The kinematics of the broad-line region (BLR) potentially provide a means of measuring the central masses of AGNs. A virial estimate of the central mass, $`M\simeq r\sigma ^2/G`$, can be made by using the line velocity width $`\sigma `$, which is typically several thousand kilometers per second, and the size of the emission-line region $`r`$. For this to be meaningful, we must know that the BLR gas motions are dominated by gravity, and we must have some reliable estimate of the BLR size. The size of the BLR can be measured by reverberation mapping (Blandford & McKee 1982), and this has been done for more than two dozen AGNs. Whether or not the broad emission-line widths actually reflect virial motion is still somewhat problematic: while the relative response time scales for the blueshifted and redshifted wings of the lines reveal no strong signature of outflow, there are still viable models with non-gravitationally driven cloud motions.
However, if the kinematics of the BLR can be proven to be gravitationally dominated, then the BLR provides an even more definitive demonstration of the existence of SBHs than megamaser kinematics because the BLR is more than two orders of magnitude closer to the central source than the megamaser sources. Recent investigations of AGN virial mass estimates based on BLR sizes have been quite promising (e.g., Wandel 1997; Laor 1998) and suggest that this method ought to be pursued. In this Letter, we argue that the broad emission-line variability data on one of the best-studied AGNs, the Seyfert 1 galaxy NGC 5548, demonstrate that the BLR kinematics are Keplerian, i.e., that the emission-line cloud velocities are dominated by a central mass of order $`7\times 10^7M_{\odot }`$ within the inner few light days ($`r\lesssim 5\times 10^{15}`$ cm). We believe that this strongly supports the hypothesis that SBHs reside in the nuclei of active galaxies. ## 2 Methodology Measurement of virial masses from emission lines requires (1) determination of the BLR size, (2) measurement of the emission-line velocity dispersion, and (3) a demonstration that the kinematics are gravitationally dominated. A correlation between the BLR size and line width of the form $`r\propto \sigma ^{2}`$ is consistent with a wide variety of gravitationally dominated kinematics. It thus provides good evidence for such a dynamical scenario, although alternative pictures which contrive to produce a similar result cannot be ruled out. Indeed, the absence of such a relationship has been regarded as the missing item in AGN SBH measurements (Richstone et al. 1998). For gravitationally dominated dynamics, the size–line-width relationship must hold for all lines at all times.
To test this, we consider the case of NGC 5548, which has been the subject of extensive UV and optical monitoring campaigns by the International AGN Watch consortium (Alloin et al. 1994) for more than ten years; information about the International AGN Watch and copies of published data can be obtained on the World-Wide Web at URL http://www.astronomy.ohio-state.edu/~agnwatch/. The data are from UV monitoring programs undertaken with the International Ultraviolet Explorer (IUE) in 1989 (Clavel et al. 1991) and with IUE and Hubble Space Telescope (HST) in 1993 (Korista et al. 1995), plus ground-based spectroscopy from 1989 to 1996 (Peterson et al. 1999 and references therein). We consider the response of a variety of lines in two separate observing seasons (1989 and 1993) and the response of H$`\beta `$ over an eight-year period. Cross-correlation of the continuum and emission-line light curves yields a time delay or “lag” that is interpreted as the light-travel time across the BLR. Specifically, the centroid of the cross-correlation function (CCF) $`\tau _{\mathrm{cent}}`$ times the signal propagation speed $`c`$ is the responsivity-weighted mean radius of the BLR for that particular emission line (Koratkar & Gaskell 1991). We have measured $`\tau _{\mathrm{cent}}`$ for various emission lines using light curves of NGC 5548 in the AGN Watch data base and the interpolation cross-correlation method as described by White & Peterson (1994). The UV measurements for 1989 are the GEX values from Clavel et al. (1991). The UV measurements for 1993 are taken from Tables 12–14 and 16–17 of Korista et al. (1995). The optical data for 1989–1993 are from Wanders & Peterson (1996) and from Peterson et al. (1999) for 1994–1996. Uncertainties in these values were determined as described by Peterson et al. (1998b). The results are given in Table 1, in which columns (1) and (2) give the epoch of the observations and the emission line, respectively. 
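The interpolation cross-correlation method can be illustrated with a short numerical sketch. The light curves, grids, and the 80% threshold below are illustrative choices, not the AGN Watch data or pipeline settings; the point is only that the CCF centroid recovers a known echo delay.

```python
import numpy as np

def ccf_centroid(t, cont, line, lags, frac=0.8):
    """Interpolation CCF: correlate the line light curve against the
    continuum shifted by each trial lag, then take the centroid of the
    CCF over the points above frac * peak."""
    r = np.empty(len(lags))
    for i, tau in enumerate(lags):
        c_shift = np.interp(t - tau, t, cont)   # continuum evaluated at t - tau
        r[i] = np.corrcoef(c_shift, line)[0, 1]
    mask = r >= frac * r.max()
    return np.sum(lags[mask] * r[mask]) / np.sum(r[mask])

# Synthetic example: a smooth continuum and a line light curve that
# echoes it with a known 20-day delay.
t = np.linspace(0.0, 200.0, 401)                 # days
cont = np.sin(2 * np.pi * t / 90.0) + 0.3 * np.sin(2 * np.pi * t / 37.0)
true_lag = 20.0
line = np.interp(t - true_lag, t, cont)          # delayed copy of the continuum
lags = np.linspace(0.0, 60.0, 241)
tau_cent = ccf_centroid(t, cont, line, lags)
```

For real, irregularly sampled light curves, the uncertainties on $`\tau _{\mathrm{cent}}`$ are estimated as described by Peterson et al. (1998b) rather than read off a single CCF as in this sketch.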
Column (3) gives the lag $`\tau _{\mathrm{cent}}`$ and its associated uncertainties. Emission-line widths are not simple to measure on account of contamination by emission from the narrow-line region, and in some cases, contamination from other broad lines. We have circumvented this problem by using a large number of individual spectra to compute mean and root-mean-square (rms) spectra, and we measure the width of the emission features in the rms spectrum. The advantage of this approach is that constant or slowly varying components of the spectrum do not appear in the rms spectrum, and the emission features in the rms spectrum accurately represent the parts of the emission line that are varying, and for which the time delays are measured (Peterson et al. 1998a). This technique requires very homogeneous spectra: for the 1989 UV spectrum, we used the GEX-extracted SWP spectra. For the 1993 UV spectrum, we used the HST FOS spectra, excluding those labeled “dropouts” by Korista et al. (1995) which were not optimally centered in the FOS aperture. For the optical spectra through 1993, we used the homogeneous subset analyzed by Wanders & Peterson (1996), and a similar subset for 1994–1996. In each rms spectrum, we determined the full-width at half-maximum (FWHM) of each measurable line, with a range of uncertainty estimated by the highest and lowest plausible settings of the underlying continuum. The line widths are given as line-of-sight Doppler widths in kilometers per second in column (4) of Table 1. Each emission line provides an independent measurement of the virial mass of the AGN in NGC 5548 by combining the emission-line lag with its Doppler width in the rms spectrum. Column (5) of Table 1 gives a virial mass estimate $`M=fr_{\mathrm{BLR}}\sigma _{\mathrm{rms}}^2/G`$ for each line, where $`\sigma _{\mathrm{rms}}=\sqrt{3}V_{\text{FWHM}}/2`$ (Netzer 1990) and $`r_{\mathrm{BLR}}=c\tau _{\mathrm{cent}}`$. 
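The construction of mean and rms spectra is simple array arithmetic; the sketch below uses synthetic spectra (all amplitudes and wavelengths are made up) to show the key property exploited above: a strictly constant narrow component drops out of the rms spectrum, leaving only the variable broad component.

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_pix = 50, 200
wave = np.linspace(4700.0, 5000.0, n_pix)          # wavelength grid, illustrative

# Variable broad component with a fluctuating amplitude,
# plus a strictly constant narrow component.
broad = np.exp(-0.5 * ((wave - 4861.0) / 40.0) ** 2)
narrow = 0.5 * np.exp(-0.5 * ((wave - 4861.0) / 3.0) ** 2)
amps = 1.0 + 0.3 * rng.standard_normal(n_epochs)
spectra = amps[:, None] * broad[None, :] + narrow[None, :]

mean_spec = spectra.mean(axis=0)                   # mean spectrum
rms_spec = np.sqrt(((spectra - mean_spec) ** 2).mean(axis=0))  # rms spectrum
```

By construction the rms spectrum here is proportional to the broad profile alone: the constant narrow component appears only in the mean spectrum.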
The factor $`f`$ depends on the details of the geometry, kinematics, and orientation of the BLR, as well as the emission-line responsivity of the individual clouds, and is expected to be of order unity. Uncertainty in this factor limits the accuracy of our mass estimate to about an order of magnitude (see §3). Neglecting the systematic uncertainty in $`f`$, the unweighted mean of all these mass estimates is $`6.8(\pm 2.1)\times 10^7`$$`M_{}`$. To within the quoted uncertainties, all of the mass measurements are consistent. The large systematic uncertainty should not obscure the key result, namely that the quantity $`r_{\mathrm{BLR}}\sigma _{\mathrm{rms}}^2/G`$ is constant and argues strongly for a central mass of order $`7\times 10^7`$$`M_{}`$. In Fig. 1, we show the measured emission-line lag $`\tau _{\mathrm{cent}}`$, plotted as a function of the width of the line in the rms spectrum for various broad emission lines in NGC 5548. Within the measurement uncertainties, all the lines yield identical values for the central mass. A weighted fit to the relationship $`\mathrm{log}(\tau _{\mathrm{cent}})=a+b\mathrm{log}(V_{\text{FWHM}})`$ yields $`b=-1.96\pm 0.18`$, consistent with the expected value $`b=-2`$, although the somewhat high reduced $`\chi _\nu ^2`$ value of 1.70 (compared with $`\chi _\nu ^2=2.14`$ for a forced $`b=-2`$ fit as shown in the figure) suggests that there may be additional sources of scatter in this relationship beyond random error. If our virial hypothesis is indeed correct, we should measure the same mass using independent data obtained at different times. The H$`\beta `$ emission line in NGC 5548 is the only line for which reverberation measurements have been made for multiple epochs. In Fig. 2a, we show the measured H$`\beta `$ lag as a function of the width of the H$`\beta `$ line in the rms spectrum for the six years listed in Table 1. The relationship is shallower than that seen in the multiple-line data shown in Fig. 
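The mass entries of column (5) follow from the two measured quantities via $`M=fc\tau _{\mathrm{cent}}\sigma _{\mathrm{rms}}^2/G`$ with $`\sigma _{\mathrm{rms}}=\sqrt{3}V_{\text{FWHM}}/2`$. The sketch below uses illustrative round numbers of the right order for H$`\beta `$ (not values from Table 1) and $`f=1`$:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
DAY = 86400.0          # s

def virial_mass(tau_cent_days, v_fwhm_kms, f=1.0):
    """M = f * r_BLR * sigma_rms^2 / G, with r_BLR = c * tau_cent
    and sigma_rms = sqrt(3) * V_FWHM / 2."""
    r_blr = c * tau_cent_days * DAY
    sigma = math.sqrt(3.0) * v_fwhm_kms * 1e3 / 2.0
    return f * r_blr * sigma ** 2 / G

# Illustrative inputs, roughly of the order measured for H-beta:
mass = virial_mass(tau_cent_days=20.0, v_fwhm_kms=5500.0)
mass_in_suns = mass / M_sun
```

With numbers of this order the estimate lands within the $`10^7`$–$`10^8`$$`M_{}`$ range quoted above, as expected.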
1 ($`b=-0.72\pm 0.29`$ with $`\chi _\nu ^2=0.79`$), and indeed is poorly fit with the expected virial slope (for the $`b=-2`$ fit shown in the figure, $`\chi _\nu ^2=3.71`$, although more than 50% of the contribution to $`\chi _\nu ^2`$ is due to the single data point from 1996). Note that data from two years, 1993 and 1995, have not been included in this plot because the rms spectra for these two years have a strong double-peaked structure that we are unable to account for at present. We also note that a rather better relationship between the H$`\beta `$ time lag and rms line width is found if we use the CCF peak rather than the centroid for the BLR size, as shown in Fig. 2b ($`b=-1.47\pm 0.21`$ with $`\chi _\nu ^2=0.59`$, and for the $`b=-2`$ fit shown in the figure, $`\chi _\nu ^2=1.58`$). The CCF centroid represents the responsivity-weighted mean radius of the H$`\beta `$ line-emitting region, but the CCF peak has no similarly obvious interpretation, though in some geometries the cross-correlation peak is a probe of the emission-line gas closest to the central source. In any case, the virial mass we infer from the mean of the H$`\beta `$ data is the same within the uncertainties regardless of whether the CCF centroid ($`7.3(\pm 2.0)\times 10^7`$$`M_{}`$) or peak ($`6.8(\pm 1.0)\times 10^7`$$`M_{}`$) is used to infer the BLR size. There are a number of possible reasons for the large $`\chi ^2`$ values for the virial fits; it is important to remember that both the lag and line width are dynamic quantities that are dependent on the mean continuum flux, which can change significantly over the course of an observing season. We attempted to test this by isolating individual “events” in the light curves and repeating the analysis. Unfortunately, the relatively few spectra in each event significantly degraded the quality of both the lag and line-width measurements and thus proved to be unenlightening. A diagram similar to our Fig. 1 was published by Krolik et al. 
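The weighted fits quoted above are ordinary weighted least squares on the logged quantities. The sketch below generates synthetic points scattered about the virial scaling $`\tau \propto V_{\text{FWHM}}^{-2}`$ and recovers a slope near $`-2`$; the points, normalization, and uncertainties are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (V_FWHM, tau) points following tau ~ V^-2 with log-normal scatter
v = np.array([2000.0, 3000.0, 4500.0, 7000.0, 10000.0])   # km/s
tau_true = 1e9 / v ** 2                                   # days, arbitrary norm
sig_log = 0.05                                            # sigma of log10(tau)
tau_obs = tau_true * 10 ** (sig_log * rng.standard_normal(v.size))

x = np.log10(v)
y = np.log10(tau_obs)
w = np.full_like(y, 1.0 / sig_log ** 2)                   # weights = 1/sigma^2

# Weighted linear fit y = a + b x (normal equations)
W = np.sum(w); Sx = np.sum(w * x); Sy = np.sum(w * y)
Sxx = np.sum(w * x * x); Sxy = np.sum(w * x * y)
b = (W * Sxy - Sx * Sy) / (W * Sxx - Sx ** 2)
a = (Sy - b * Sx) / W

chi2 = np.sum(w * (y - (a + b * x)) ** 2)
chi2_nu = chi2 / (y.size - 2)                             # reduced chi-square
```

With scatter consistent with the quoted uncertainties, the reduced chi-square is of order unity; excess scatter of the kind discussed in the text would inflate it above one.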
(1991) for NGC 5548. We believe that our improved treatment, plus additional data, makes the case more compelling primarily because we measured the broad-line widths from the variable part of the spectrum only (i.e., the rms spectrum) rather than by multiple-component fitting of the broad-line profiles. Also, we included only lines for which we could determine both accurate lags and line widths in the rms spectra, thus excluding Ly$`\alpha `$$`\lambda 1215`$ because of contamination by geocoronal Ly$`\alpha `$ in the rms spectrum, N v$`\lambda 1240`$ because it is weak and badly blended with Ly$`\alpha `$, and O i$`\lambda 1304`$ on account of its low contrast in the rms spectrum. We excluded Mg ii$`\lambda 2798`$ because of its poorly defined time lag — the response of this line is long enough for aliasing to be a problem. Finally, we also included optical lines (H$`\beta `$ and He ii$`\lambda 4686`$) not included by Krolik et al., plus additional UV measurements from the 1993 monitoring campaign. An obvious question to ask is whether or not it is possible to directly determine the BLR kinematics by differential time delays between various parts of emission lines (e.g., in the case of purely radial infall, the redshifted side of an emission line should respond to continuum changes before the blueshifted side). In general, cross-correlations of emission-line fluxes in restricted Doppler-velocity ranges have failed to yield significant time lags in the several AGNs tested to date (e.g., Korista et al. 1995), consistent with, although not proving, the virial hypothesis. ## 3 Discussion We have shown that the emission-line time-lag/velocity-width relationship argues very strongly for an SBH of mass $`7\times 10^7`$$`M_{}`$ in the nucleus of NGC 5548. The accuracy of this determination is limited by unknown systematics involving the geometry, kinematics, and line reprocessing physics of the BLR. 
As a simple illustration, we consider C iv$`\lambda 1549`$ line emission from a BLR consisting of clouds in a Keplerian disk with radial responsivity proportional to $`r^{-2.5}`$ (which is steep enough to make the results fairly insensitive to the outer radius of the disk) and inner radius $`R_{\mathrm{in}}=3`$ lt–days. A relatively low central mass ($`5\times 10^6`$$`M_{}`$) with high inclination ($`i=90`$<sup>o</sup>) disk and asymmetric line emission can fit the 1989 C iv results in Table 1. At the other extreme, a larger mass ($`1.1\times 10^8`$$`M_{}`$) is required for a lower inclination ($`i=20`$<sup>o</sup>) and isotropic line emission. For further comparison, the specific model of Wanders et al. (1995), based on anisotropically illuminated clouds in randomly inclined Keplerian orbits, requires $`M=3.8\times 10^7`$$`M_{}`$, and extrapolation to the BLR of the Fe K$`\alpha `$ disk model of Nandra et al. (1997) requires $`M=3.4\times 10^7`$$`M_{}`$. As shown by Peterson et al. (1999), the H$`\beta `$ emission-line lag varies with continuum flux, though as with the results discussed here, the correlation shows considerable scatter, probably because of the dynamic nature of the quantities being measured. But it seems clear that as the continuum luminosity increases, greater response is seen from gas at larger distances from the central source. We argue here that this also results in a change in the emission-line width; as the response becomes dominated by gas further away from the central source, the Doppler width of the responding line becomes narrower. This shows that the different widths of various emission lines are related to the radial distribution of the line-emitting gas — high-ionization lines arise at small distances and have large widths, and low-ionization lines arise primarily at larger radii and are thus narrower. 
While this accounts for some important characteristics of AGN emission lines and their variability, it is nevertheless clear that this is not the entire story; there is still scatter in the relationships that is unaccounted for by these correlations, and there are other phenomena that are not accounted for in this simple interpretation. For example, for central masses as large as reported here, observable relative shifts in the positions of the emission lines are expected from differential gravitational redshifts. The gravitational redshift for each line in NGC 5548 is given by $$\mathrm{\Delta }V=\frac{GM}{cr_{\mathrm{BLR}}}\approx \frac{1160\text{ km s}^{-1}}{r_{\mathrm{BLR}}\text{(light days)}}.$$ (1) This clearly predicts that high-ionization lines ought to be redshifted relative to the low-ionization lines, when in fact the opposite is observed in higher-luminosity objects (Gaskell 1982; Wilkes 1984). However, the gravitational redshift in NGC 5548 should apply to the variable component of the emission line only and would be sufficiently small to be unobservable in our rms spectra. The occasional appearance of double-peaked rms profiles is yet another complication. As noted earlier, in two of the eight years of optical data on NGC 5548, the H$`\beta `$ profile in the rms spectrum is strongly double-peaked. We do not see an obvious explanation for why the emission line should be single-peaked on some occasions and double-peaked on others. ## 4 Summary We have shown that in the case of the Seyfert 1 galaxy NGC 5548 emission-line variability data yield a consistent virial mass estimate $`M\approx 7\times 10^7`$$`M_{}`$, though systematic uncertainties about the BLR geometry, kinematics, and line-reprocessing physics limit the accuracy of the mass determination to about an order of magnitude. 
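The normalization in Eq. (1) can be checked directly: with the unweighted mean mass $`M\approx 6.8\times 10^7`$$`M_{}`$ from §2, $`GM/c`$ per light day evaluates to roughly 1160 km s<sup>-1</sup>. A quick numerical check (constants rounded):

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
M_sun = 1.989e30         # kg
LT_DAY = c * 86400.0     # one light day in meters

M = 6.8e7 * M_sun        # unweighted mean virial mass from the text

# Gravitational redshift expressed as a velocity:
# dV = G M / (c * r_BLR), here normalized to r_BLR = 1 light day.
coeff_km_s = G * M / (c * LT_DAY) / 1e3

def delta_v_kms(r_lt_days):
    """dV in km/s for a BLR radius given in light days."""
    return coeff_km_s / r_lt_days
```

For H$`\beta `$-like radii of tens of light days the predicted shift drops to tens of km s<sup>-1</sup>, which is why it is unobservable in the rms spectra.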
Data on multiple emission lines spanning a factor of ten or more in distance from the central source show the $`r_{\mathrm{BLR}}\propto V_{\text{FWHM}}^{-2}`$ correlation expected for virialized BLR motions. The time delay of H$`\beta `$ emission is known to vary by at least a factor of two over a decade (Peterson et al. 1999), and we show here that the line-width variations are anticorrelated with the time-delay variations. The central mass is concentrated inside a few light days, which corresponds to about 250 Schwarzschild radii ($`R_\mathrm{S}=2GM/c^2`$) for the mass we infer, which argues very strongly for the existence of an SBH in NGC 5548. BMP is grateful for support of this work by the National Science Foundation and NASA through grants AST–9420080 and NAG5–8397, respectively, to The Ohio State University. AW wishes to acknowledge the hospitality of the Department of Physics and Astronomy at UCLA during this work. We thank K.T. Korista, M.A. Malkan, P.S. Osmer, R.W. Pogge, J.C. Shields, and B.J. Wilkes for critical reading of the draft manuscript and A. Gould for helpful advice.
# Bogoliubov dispersion relation and the possibility of superfluidity for weakly-interacting photons in a 2D photon fluid ## I Introduction The quantum many-body problem, with its many, rich manifestations in condensed matter physics, has had a long and illustrious history. In particular, superconductivity and superfluidity were two major discoveries in this field. Although at present much is well understood (e.g., the BCS theory of superconductivity), the recent experimental discoveries of Bose-Einstein condensation in laser-cooled atoms raise new and interesting questions, such as whether the observed Bose-Einstein condensates are superfluids, or whether persistent currents can exist in these new states of matter. Historically speaking, in the study of the interaction of light with matter, most of the emphasis has been on exploring new states of matter, such as the recently observed atomic Bose-Einstein condensates. However, not as much attention has been focused on exploring new states of light. Of course, the invention of the laser led to the discovery of a new state of light, namely the coherent state, which is a very robust one. Two decades ago, squeezed states were discovered, but these states are not as robust as the coherent state, since they are easily degraded by scattering and absorption. In contrast to the laser, which involves a system far away from equilibrium, we shall explore here states close to the ground state of a photonic system. Hence they should be robust ones. Here we shall study the many-body problem by studying the interacting many-photon system (the “photon fluid”) near its ground state. In this paper we shall explore some theoretical considerations which suggest the possibility of a new state of light, namely, the superfluid state. 
In particular, we shall derive the Bogoliubov dispersion relation for the weakly-interacting photon gas with repulsive photon-photon interactions, starting both from the microscopic (i.e., second-quantized) level, and also from the macroscopic (i.e., classical-field) level. Thereby we shall find an expression for the effective chemical potential of a photon in the photon fluid, and shall relate the velocity of sound in the photon fluid to this nonvanishing chemical potential. In this way, we lay the theoretical foundations for an experiment in progress to measure the sound-wave-like dispersion relation for the photon fluid. We also propose another experiment to measure the critical velocity of this fluid, and thus to test for the possibility of the superfluidity of the resulting state of the light. Although the interaction Hamiltonian used in this paper is equivalent to that used earlier in four-wave squeezing, we emphasize here the many-body, collective aspects of the problem which result from multiple photon-photon interactions. This leads to the idea of the “photon fluid.” Since the microscopic and macroscopic analyses yield the same Bogoliubov dispersion relation for excitations of this fluid, it may be argued that there is nothing fundamentally new in the microscopic analysis given below which is not already contained in the macroscopic, classical nonlinear optical analysis. However, it is the microscopic analysis which leads to the new, heuristic viewpoint of the interacting photon system as a “photon fluid,” a conception which could give rise to new ways of understanding and discovering nonlinear optical phenomena. Furthermore, the interesting question of the quantum optical state of the light inside the cavity resulting from multiple interactions between the photons (i.e., whether it results in a coherent, squeezed, Fock, or some other quantum state), cannot be addressed by classical nonlinear optical methods. 
Thus this paper represents a first attempt to formulate the new concept of a “photon fluid” starting from the microscopic viewpoint, and to lay the foundations for answering the question concerning the resulting quantum optical state of the light. ## II The Bogoliubov problem Here we re-examine one particular many-body problem, the one first solved by Bogoliubov . Suppose that one has a zero-temperature system of bosons which are interacting with each other repulsively, for example, a dilute system of small, bosonic hard spheres. Such a model was intended to describe superfluid helium, but in fact it did not work well there, since the interactions between atoms in superfluid helium were too strong for the theory to be valid. In order to make the problem tractable theoretically, let us assume that these interactions are weak. In the case of light, the interactions between the photons are in fact always weak, so that this assumption is a good one. However, these interactions are nonvanishing, as demonstrated by the fact that photon-photon collisions mediated by atoms excited near, but off, resonance have been experimentally observed . We start with the Bogoliubov Hamiltonian $`H`$ $`=`$ $`H_{free}+H_{int}`$ (1) $`H_{free}`$ $`=`$ $`{\displaystyle \underset{p}{}}ϵ(p)a_p^{}a_p`$ (2) $`H_{int}`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{\kappa pq}{}}V(\kappa )a_{p+\kappa }^{}a_{q-\kappa }^{}a_pa_q,`$ (3) where the operators $`a_p^{}`$ and $`a_p`$ are creation and annihilation operators, respectively, for bosons with momentum $`p`$, which satisfy the Bose commutation relations $$[a_p,a_q^{}]=\delta _{pq}\text{ and }[a_p,a_q]=[a_p^{},a_q^{}]=0.$$ (4) The first term $`H_{free}`$ in the Hamiltonian represents the energy of the free boson system, and the second term $`H_{int}`$ represents the energy of the interactions between the bosons arising from the potential energy $`V(\kappa )`$. 
The interaction term is equivalent to the one responsible for producing squeezed states of light via four-wave mixing . It represents the annihilation of two particles, here photons, of momenta $`p`$ and $`q`$, along with the creation of two particles with momenta $`p+\kappa `$ and $`q-\kappa `$, in other words, a scattering process with a momentum transfer $`\kappa `$ between a pair of particles with initial momenta $`p`$ and $`q`$, along with the assignment of an energy $`V(\kappa )`$ to this scattering process. ## III The free-photon dispersion relation inside a Fabry-Perot resonator Photons with momenta $`p`$ and $`q`$ also obey the above commutation relations, so that the Bogoliubov theory should in principle also apply to the weakly-interacting photon gas. The factor $`ϵ(p)`$ represents the energy as a function of the momentum (the dispersion relation) for the free, i.e., noninteracting, bosons. In the case of photons in a Fabry-Perot resonator, the boundary conditions of the mirrors cause the $`ϵ(p)`$ of a photon trapped inside the resonator to correspond to an energy-momentum relation which is identical to that of a nonrelativistic particle with an effective mass of $`m=\mathrm{}\omega /c^2`$. This can be understood starting from Fig. 1. For high-reflectivity mirrors, the vanishing of the electric field at the reflecting surfaces of the mirrors imposes a quantization condition on the allowed values of the $`z`$-component of the photon wave vector, $`k_z=n\pi /L`$, where $`n`$ is an integer, and $`L`$ is the distance between the mirrors. Thus the usual frequency-wavevector relation $$\omega (k)=c[k_x^2+k_y^2+k_z^2]^{1/2},$$ (5) upon multiplication by $`\mathrm{}`$, becomes the energy-momentum relation for the photon $$E(p)=c[p_x^2+p_y^2+p_z^2]^{1/2}=c[p_x^2+p_y^2+\mathrm{}^2n^2\pi ^2/L^2]^{1/2}=c[p_x^2+p_y^2+m^2c^2]^{1/2},$$ (6) where $`m=\mathrm{}n\pi /Lc`$ is the effective mass of the photon. 
In the limit of small-angle (or paraxial) propagation, where the small transverse momentum of the photon satisfies the inequality $$p_{}=[p_x^2+p_y^2]^{1/2}\ll p_z=\mathrm{}k_z=\mathrm{}n\pi /L,$$ (7) we obtain from a Taylor expansion of the relativistic relation, a nonrelativistic energy-momentum relation for the 2D noninteracting photons inside the Fabry-Perot resonator $$E(p_{})\approx mc^2+p_{}^2/2m,$$ (8) where $`m=\mathrm{}n\pi /Lc\approx \mathrm{}\omega /c^2`$ is the effective mass of the confined photons. It is convenient to redefine the zero of energy, so that only the effective kinetic energy, $$ϵ(p_{})\equiv p_{}^2/2m,$$ (9) remains. To establish the connection with the Bogoliubov Hamiltonian, we identify the two-dimensional momentum $`p_{}`$ as the momentum $`p`$ that appears in this Hamiltonian, and the above $`ϵ(p_{})`$ as the $`ϵ(p)`$ that appears in Eq. (2). ## IV The Bogoliubov dispersion relation for the photon fluid Now we know that in an ideal Bose gas at absolute zero temperature, there exists a Bose condensate consisting of a macroscopic number $`N_0`$ of particles occupying the zero-momentum state. This feature should survive in the case of the weakly-interacting Bose gas, since as the interaction vanishes, one should recover the Bose condensate state. Hence following Bogoliubov, we shall assume here that even in the presence of interactions, $`N_0`$ will remain a macroscopic number in the photon fluid. This macroscopic number will be determined by the intensity of the incident laser beam which excites the Fabry-Perot cavity system, and turns out to be a very large number compared to unity (see below). 
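For a sense of scale (the wavelength below is an arbitrary illustrative choice, not a value from any experiment), the effective mass $`m=\mathrm{}\omega /c^2`$ of a near-infrared cavity photon is of order $`10^{-36}`$ kg, and the paraxial expansion of Eq. (8) reproduces the exact relativistic energy to high accuracy when $`p_{}\ll mc`$:

```python
import math

hbar = 1.0546e-34        # J s
c = 2.998e8              # m/s

lam = 800e-9             # illustrative vacuum wavelength, m
omega = 2 * math.pi * c / lam
m_eff = hbar * omega / c ** 2          # effective photon mass, kg

# Compare exact and paraxial energies for p_perp = 0.01 * m c
p_perp = 0.01 * m_eff * c
E_exact = c * math.sqrt(p_perp ** 2 + (m_eff * c) ** 2)   # Eq. (6)
E_parax = m_eff * c ** 2 + p_perp ** 2 / (2 * m_eff)      # Eq. (8)
rel_err = abs(E_exact - E_parax) / E_exact
```

The relative error of the expansion scales as $`(p_{}/mc)^4`$, so the nonrelativistic description is excellent for paraxial beams.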
For the ground state wave function $`\mathrm{\Psi }_0(N_0)`$ with $`N_0`$ particles in the Bose condensate in the $`p=0`$ state, the zero-momentum operators $`a_0`$ and $`a_0^{}`$ operating on the ground state obey the relations $`a_0|\mathrm{\Psi }_0(N_0)`$ $`=`$ $`\sqrt{N_0}|\mathrm{\Psi }_0(N_0-1)`$ (10) $`a_0^{}|\mathrm{\Psi }_0(N_0)`$ $`=`$ $`\sqrt{N_0+1}|\mathrm{\Psi }_0(N_0+1).`$ (11) Since $`N_0\gg 1`$, we shall neglect the difference between the factors $`\sqrt{N_0+1}`$ and $`\sqrt{N_0}`$. Thus one can replace all occurrences of $`a_0`$ and $`a_0^{}`$ by the $`c`$-number $`\sqrt{N_0}`$, so that to a good approximation $`[a_0,a_0^{}]\approx 0`$. However, the number of particles in the system is then no longer exactly conserved, as can be seen by examination of the term in the Hamiltonian $$\underset{\kappa }{}V(\kappa )a_\kappa ^{}a_{-\kappa }^{}a_0a_0\approx N_0\underset{\kappa }{}V(\kappa )a_\kappa ^{}a_{-\kappa }^{},$$ (12) which represents the creation of a pair of particles, i.e., photons, with momenta $`\kappa `$ and $`-\kappa `$ out of nothing. However, whenever the system is an open one, i.e., whenever it is connected to an external reservoir of particles which allows the total particle number to fluctuate around some constant average value, then the total number of particles need only be conserved on the average. Formally, one standard way to compensate for the lack of exact particle number conservation is to use the Lagrange multiplier method and subtract a chemical potential term $`\mu N_{op}`$ from the Hamiltonian (just as in statistical mechanics when one goes from the canonical ensemble to the grand canonical ensemble) $$H\to H^{}=H-\mu N_{op},$$ (13) where $`N_{op}=\underset{p}{}a_p^{}a_p`$ is the total number operator, and $`\mu `$ represents the chemical potential, i.e., the average energy for adding a particle to the open system described by $`H`$. 
In the present context, we are considering the case of a Fabry-Perot cavity with low, but finite, transmissivity mirrors which allow photons to enter and leave the cavity, due to an input light beam coming in from the left and an output beam leaving from the right. This permits a realistic physical implementation of the external reservoir, since the Fabry-Perot cavity allows the total particle number inside the cavity to fluctuate due to particle exchange with the beams outside the cavity. However, the photons remain trapped inside the cavity long enough so that a thermalized condition is achieved after many photon-photon interactions (i.e., after many collisions), thus allowing the formation of a photon fluid. It will be useful to separate out the zero-momentum components of the interaction Hamiltonian, since it will turn out that there is a macroscopic occupation of the zero-momentum state due to Bose condensation. The prime on the sums $`\underset{p}{}^{}`$, $`\underset{p\kappa }{}^{}`$, and $`\underset{\kappa pq}{}^{}`$ in the following equation denotes sums over momenta explicitly excluding the zero-momentum state, i.e., all the running indices $`p`$, $`\kappa `$, $`q`$, $`p+\kappa `$, $`q-\kappa `$ which are not explicitly set equal to zero, are nonzero: $`H_{int}`$ $`=`$ $`{\displaystyle \frac{1}{2}}V(0)a_0^{}a_0^{}a_0a_0+V(0){\displaystyle \underset{p}{}^{}}a_p^{}a_pa_0^{}a_0+`$ (16) $`{\displaystyle \underset{p}{}^{}}\left(V(p)a_p^{}a_0^{}a_pa_0+{\displaystyle \frac{1}{2}}\left[V(p)a_p^{}a_{-p}^{}a_0a_0+V(p)a_0^{}a_0^{}a_pa_{-p}\right]\right)+`$ $`{\displaystyle \underset{p\kappa }{}^{}}V(\kappa )\left(a_{p+\kappa }^{}a_0^{}a_pa_\kappa +a_{p+\kappa }^{}a_{-\kappa }^{}a_pa_0\right)+{\displaystyle \frac{1}{2}}{\displaystyle \underset{\kappa pq}{}^{}}V(\kappa )\left(a_{p+\kappa }^{}a_{q-\kappa }^{}a_pa_q\right).`$ Here we have also assumed that $`V(p)=V(-p)`$. 
By thus separating out the zero-momentum state from the sums in the Hamiltonian, and replacing all occurrences of $`a_0`$ and $`a_0^{}`$ by $`\sqrt{N_0}`$, we find that the Hamiltonian $`H^{}`$ decomposes into three parts $$H^{}=H_0+H_1+H_2,$$ (17) where $$H_0=\frac{1}{2}V(0)a_0^{}a_0^{}a_0a_0\approx \frac{1}{2}V(0)N_0^2,$$ (18) $$H_1\approx \underset{p}{}^{}ϵ^{}(p)a_p^{}a_p+\frac{1}{2}N_0\underset{p}{}^{}V(p)\left(a_p^{}a_{-p}^{}+a_pa_{-p}\right),$$ (19) $$H_2\approx \sqrt{N_0}\underset{p\kappa }{}^{}V(\kappa )\left(a_{p+\kappa }^{}a_pa_\kappa +a_{p+\kappa }^{}a_{-\kappa }^{}a_p\right)+\frac{1}{2}\underset{\kappa pq}{}^{}V(\kappa )\left(a_{p+\kappa }^{}a_{q-\kappa }^{}a_pa_q\right),$$ (20) where $$ϵ^{}(p)=ϵ(p)+N_0V(0)+N_0V(p)-\mu $$ (21) is a modified photon energy, and where $`N_0`$ and $`\mu `$ are given by $$N_0+<\mathrm{\Psi }_0|\underset{p}{}^{}a_p^{}a_p|\mathrm{\Psi }_0>=N$$ (22) and $$\mu =\frac{\partial E_0}{\partial N}.$$ (23) Here $`E_0=<\mathrm{\Psi }_0|H|\mathrm{\Psi }_0>`$ is the ground state energy of $`H`$. In the approximation that there is little depletion of the Bose condensate due to interactions (i.e., $`N\approx N_0\gg 1`$), the first term of Eq. (16) (i.e., $`H_0`$ in Eq. (18)) dominates, so that $$E_0\approx \frac{1}{2}N_0^2V(0)\approx \frac{1}{2}N^2V(0),$$ (24) and therefore that $$\mu \approx NV(0)\approx N_0V(0).$$ (25) This implies that the effective chemical potential of a photon, i.e., the energy for adding a photon to the photon fluid, is given by the number of photons in the Bose condensate times the repulsive pairwise interaction energy between photons with zero relative momentum. It should be remarked that the fact that the chemical potential is nonvanishing here makes the thermodynamics of this two-dimensional photon system quite different from the usual three-dimensional, Planck blackbody photon system . In the same approximation, Eq. (21) becomes $$ϵ^{}(p)\approx ϵ(p)+N_0V(p).$$ (26) This is the single-particle photon energy in the Hartree approximation. 
In the same approximation, it is also assumed that $`|H_1|\gg |H_2|`$, i.e., that the interactions between the bosons are sufficiently weak, again so as not to deplete the Bose condensate significantly. In the case of the weakly-interacting photon gas inside the Fabry-Perot resonator, since the interactions between the photons are indeed weak, this assumption is a good one. Following Bogoliubov, we now introduce the following canonical transformation in order to diagonalize the quadratic-form Hamiltonian $`H_1`$ in Eq. (19): $`\alpha _\kappa `$ $`=`$ $`u_\kappa a_\kappa +v_\kappa a_{-\kappa }^{}`$ (27) $`\alpha _\kappa ^{}`$ $`=`$ $`u_\kappa a_\kappa ^{}+v_\kappa a_{-\kappa }.`$ (28) Here $`u_\kappa `$ and $`v_\kappa `$ are two real $`c`$-numbers which must satisfy the condition $$u_\kappa ^2-v_\kappa ^2=1,$$ (29) in order to ensure that the Bose commutation relations are preserved for the new creation and annihilation operators for certain quasi-particles, $`\alpha _\kappa ^{}`$ and $`\alpha _\kappa `$, i.e., that $$[\alpha _\kappa ,\alpha _\kappa ^{}^{}]=\delta _{\kappa ,\kappa ^{}}\text{ and }[\alpha _\kappa ,\alpha _\kappa ^{}]=[\alpha _\kappa ^{},\alpha _\kappa ^{}^{}]=0.$$ (30) We seek a diagonal form of $`H_1`$ given by $$H_1=\underset{\kappa }{}^{}\left[\stackrel{~}{\omega }(\kappa )\left(\alpha _\kappa ^{}\alpha _\kappa +\frac{1}{2}\right)+\text{constant}\right],$$ (31) where $`\stackrel{~}{\omega }(\kappa )`$ represents the energy of a quasi-particle of momentum $`\kappa `$. Substituting the new creation and annihilation operators $`\alpha _\kappa ^{}`$ and $`\alpha _\kappa `$ given by Eq. (28) into Eq. (31), and comparing with the original form of the Hamiltonian $`H_1`$ in Eq. 
(19), we arrive at the following necessary conditions for diagonalization: $`\stackrel{~}{\omega }(\kappa )u_\kappa v_\kappa `$ $`=`$ $`{\displaystyle \frac{1}{2}}N_0V(\kappa )`$ (32) $`u_\kappa ^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[ϵ^{}(\kappa )/\stackrel{~}{\omega }(\kappa )+1\right]`$ (33) $`v_\kappa ^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[ϵ^{}(\kappa )/\stackrel{~}{\omega }(\kappa )-1\right].`$ (34) Squaring Eq. (32) and substituting from Eqs. (33) and (34), we obtain $$\stackrel{~}{\omega }(\kappa )^2=ϵ^{}(\kappa )^2-N_0^2V(\kappa )^2=ϵ(\kappa )^2+2ϵ(\kappa )N_0V(\kappa ),$$ (35) where in the last step we have used Eq. (26). Thus the final result is that the Hamiltonian $`H_1`$ in Eq. (31) describes a collection of noninteracting simple harmonic oscillators, i.e., quasi-particles, or elementary excitations of the photon fluid from its ground state. The energy-momentum relation of these quasi-particles is obtained from Eq. (35) upon substitution of $`ϵ(\kappa )=\kappa ^2/2m`$ from Eq. (9) $$\stackrel{~}{\omega }(\kappa )=\left[\frac{\kappa ^2N_0V(\kappa )}{m}+\frac{\kappa ^4}{4m^2}\right]^{1/2},$$ (36) which we shall call the “Bogoliubov dispersion relation.” This dispersion relation is plotted in Fig. 2, in the special case that $`V(\kappa )=V(0)=`$ constant. (Note that Landau’s roton minimum could in principle also be incorporated into this theory by a suitable choice of the functional form of $`V(\kappa )`$.) For small values of $`\kappa `$ this dispersion relation is linear in $`\kappa `$. This feature, together with the fact that the operator $`\alpha _\kappa ^{}\alpha _\kappa `$ in Eq. 
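The consistency of Eqs. (29) and (32)–(35) is easy to verify numerically: with $`v_\kappa ^2=\frac{1}{2}[ϵ^{}(\kappa )/\stackrel{~}{\omega }(\kappa )-1]`$ (as required by the normalization $`u_\kappa ^2-v_\kappa ^2=1`$), the quasi-particle energy of Eq. (35) satisfies the off-diagonal condition of Eq. (32) identically. A sketch in dimensionless units (the parameter values are arbitrary):

```python
import math

# Arbitrary dimensionless parameters for the check (illustrative only)
eps = 2.0                 # free-particle energy epsilon(kappa)
N0V = 1.5                 # N_0 * V(kappa)

eps_prime = eps + N0V                          # Eq. (26), Hartree energy
omega = math.sqrt(eps_prime ** 2 - N0V ** 2)   # Eq. (35)

u2 = 0.5 * (eps_prime / omega + 1.0)           # Eq. (33)
v2 = 0.5 * (eps_prime / omega - 1.0)           # Eq. (34)
uv = math.sqrt(u2 * v2)                        # |u_kappa v_kappa|
```

The same check can be repeated for any positive `eps` and `N0V`; the three relations hold identically, not just for special parameter choices.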
(31) describes a density fluctuation in the fluid, indicates that the nature of the elementary excitations here is that of phonons, which in the classical limit of large phonon number leads to sound-like waves propagating inside the photon fluid at the sound speed $$v_s=\underset{\kappa \to 0}{lim}\frac{\stackrel{~}{\omega }(\kappa )}{\kappa }=\left(\frac{N_0V(0)}{m}\right)^{1/2}=\left(\frac{\mu }{m}\right)^{1/2}.$$ (37) At a transition momentum $`\kappa _c`$ given by $$\kappa _c=2\left(mN_0V(\kappa _c)\right)^{1/2}$$ (38) (i.e., when the two terms of Eq. (36) are equal), the linear relation between energy and momentum turns into a quadratic one, indicating that the quasi-particles at large momenta behave essentially like nonrelativistic free particles with an energy of $`\kappa ^2/2m`$. The reciprocal of $`\kappa _c`$ defines a characteristic length scale $$\lambda _c\equiv 2\pi \mathrm{}/\kappa _c=\pi \mathrm{}/mv_s,$$ (39) which characterizes the distance scale over which collective effects arising from the pairwise interaction between the photons become important. Thus in the above analysis, we have shown that all the approximations involved in the Bogoliubov theory should be valid ones for the case of the 2D photon fluid inside a nonlinear Fabry-Perot cavity. Hence the Bogoliubov dispersion relation should indeed apply to this fluid; in particular, there should exist sound-like modes of propagation in the photon fluid. ## V Classical Picture of Sound Waves in a Nonlinear Optical Fluid A classical nonlinear optical treatment of a Fabry-Perot cavity which is filled with a medium with a self-defocusing Kerr nonlinearity (see Fig. 3), also indicates the existence of modes of sound-like wave propagation in the nonlinearly interacting light. Such a nonlinear medium could consist of an alkali atomic vapor excited by a laser detuned to the red side of resonance.
In fact, it turns out that fluctuations in the light intensity in this medium propagate with a dispersion relation which is identical to that given above in Eq. (36) for the weakly-interacting Bose gas. To derive this dispersion relation classically, we begin by considering the planar Fabry-Perot cavity shown in Fig. 3. Two parallel planar mirrors of reflectivity $`R`$ and transmissivity $`T`$ (with $`R+T=1`$, i.e., with no dissipation) are normal to the $`z`$-axis and separated by a distance $`L`$. A laser beam travelling in the $`+z`$ direction is incident on the cavity, giving rise to five interacting light beams in the problem. The region between the mirrors (inside the cavity) contains a nonlinear polarizable medium. The classical electric field obeys Maxwell’s equations, written in wave-equation form in CGS units as $$\frac{\partial ^2\mathrm{E}}{\partial z^2}+\nabla _{\perp }^2\mathrm{E}-\frac{1}{c^2}\frac{\partial ^2\mathrm{E}}{\partial t^2}=\frac{4\pi }{c^2}\frac{\partial ^2\mathrm{P}}{\partial t^2},$$ (40) where $`\mathrm{E}`$ is the (real) electric field amplitude, $`\mathrm{P}`$ is the polarization introduced in the medium, and $`\nabla _{\perp }^2`$ is the Laplacian in the transverse coordinates $`x`$ and $`y`$. This equation is supplemented by boundary conditions at the two mirrors. Equation (40) simplifies considerably when the following assumptions are made: 1. The slowly-varying envelope approximation is justified, in which case we recast Eq. (40) in terms of the field envelope $`\mathcal{E}`$. 2. The frequency spacing between adjacent longitudinal cavity modes is much greater than both the incident laser linewidth and the nonlinearity bandwidth, allowing us to neglect the $`z`$-dependence of the field envelope (this is sometimes called the uniform field approximation). 3. The atomic response time is much shorter than the cavity lifetime, allowing us to adiabatically eliminate the atomic response (i.e., the nonlinearity is fast).
Under these reasonable assumptions the cavity’s internal field envelope is governed by the Lugiato-Lefever equation , written here as $$\frac{\partial \mathcal{E}}{\partial t}=\frac{ic}{2k}\nabla _{\perp }^2\mathcal{E}+i\omega n_2|\mathcal{E}|^2\mathcal{E}+i(\mathrm{\Delta }\omega )\mathcal{E}-\mathrm{\Gamma }(\mathcal{E}-\mathcal{E}_d),$$ (41) where $`\mathcal{E}(x,y,t)`$ is the internal cavity field envelope amplitude, $`k`$ is the longitudinal wavenumber, $`\omega `$ is the laser angular frequency, $`n_2`$ is the nonlinear index inside the cavity ($`n\approx 1+n_2|\mathcal{E}|^2`$), $`\mathrm{\Delta }\omega =\omega -\omega _{cav}`$ is the detuning of the driving laser from linear cavity resonance, $`\mathrm{\Gamma }=cT/2L`$ is the cavity decay rate, and $`\mathcal{E}_d(x,y)`$ is a driving laser amplitude. In other contexts, Eq. (41) is called the Nonlinear Schrödinger (NLS) equation, or the Ginzburg-Landau equation, or the Gross-Pitaevskii equation. The latter two of these were introduced as descriptions of superfluid and of Bose-Einstein-condensed systems, with a complex order parameter $`\mathrm{\Psi }`$, which here is identified with $`\mathcal{E}`$. Equation (41) has the nonlinear plane-wave solution $$\mathcal{E}=\mathcal{E}_0\mathrm{exp}[i(\omega n_2\mathcal{E}_0^2+\mathrm{\Delta }\omega )t]$$ (42) when $`\mathrm{\Gamma }`$ is negligible , in which case $`\mathcal{E}_0`$ can be assumed real without loss of generality. Linearizing around this solution by substituting the form $$\mathcal{E}=\left(\mathcal{E}_0+a(x,y,t)\right)\mathrm{exp}[i(\omega n_2\mathcal{E}_0^2+\mathrm{\Delta }\omega )t],$$ (43) we get the following linear equation for the fluctuation amplitude (we have assumed that $`|a(x,y,t)|\ll \mathcal{E}_0`$): $$\frac{\partial a}{\partial t}=\frac{ic}{2k}\nabla _{\perp }^2a+i\omega n_2\mathcal{E}_0^2(a+a^{\ast }).$$ (44) Here we look for a cylindrically symmetric solution appropriate for the experimental geometry (see Fig. 4).
Substituting the trial solution $$a(\rho ,t)=\alpha J_0(K\rho )e^{i\mathrm{\Omega }^{\ast }t}+\beta J_0(K\rho )e^{-i\mathrm{\Omega }t},$$ (45) where $`J_0(K\rho )`$ is the zero-order Bessel function, $`\rho =(x^2+y^2)^{1/2}`$ is the transverse radial distance from the origin of a fluctuation, and $`K`$ is the wavenumber of the fluctuation, we obtain the following dispersion relation for small-amplitude intensity fluctuations in the light filling the cavity : $$\mathrm{\Omega }(K)=\left[c^2K^2\left|n_2\right|\mathcal{E}_0^2+\frac{c^4K^4}{4\omega ^2}\right]^{1/2},$$ (46) where $`\mathrm{\Omega }`$ and $`K`$ are the angular frequency and wavenumber respectively of the transverse sound-like mode. For transverse wavelengths much longer than $`\mathrm{\Lambda }_c\equiv \lambda /\left(\mathrm{\Delta }n\right)^{1/2}`$, where $`\lambda `$ is the optical wavelength and $`\mathrm{\Delta }n=\left|n_2\right|\mathcal{E}_0^2`$ is the nonlinear index shift induced by the background beam, the transverse mode propagates with the constant phase velocity $$v_s=c\sqrt{\mathrm{\Delta }n}=c\sqrt{\left|n_2\right|\mathcal{E}_0^2},$$ (47) which we identify as a sound-wave velocity. This velocity is identical to the one found earlier in Eq. (37) for the velocity of phonons in the photon fluid, provided that one identifies the energy density of the light inside the cavity with the number of photons in the Bose condensate as follows: $$\mathcal{E}_0^2=8\pi N_0\mathrm{}\omega /V_{cav},$$ (48) where $`V_{cav}`$, the cavity volume, is also the quantization volume for the electromagnetic field, and provided that one makes use of the known proportionality between $`n_2`$ and $`V(0)`$ . In fact, the entire dispersion relation, Eq. (46), found above classically for sound-like waves associated with fluctuations in the light intensity inside a resonator filled with a self-defocusing Kerr medium, is formally identical to the Bogoliubov dispersion relation, Eq.
(36), obtained quantum mechanically for the elementary excitations of the photon fluid, in the approximation $`V(\kappa )=V(0)=`$ constant. This is a valid approximation, since the pairwise interaction potential between two photons is given by a transverse 2D pairwise spatial Dirac delta function, whose strength is proportional to $`n_2`$ . It should be kept in mind that the phenomena of self-focusing and self-defocusing in nonlinear optics can be viewed as arising from pairwise interactions between photons when the light propagation is paraxial and the Kerr nonlinearity is fast . Since in a quantum description the light inside the resonator is composed of photons, and since these constituent photons interact weakly and repulsively with each other through the self-defocusing Kerr nonlinearity to form a photon fluid, this formal identification is a natural one. ## VI An Experiment in Progress We are in the process of investigating experimentally the existence of the sound-like propagating photon density waves predicted above for a planar Fabry-Perot cavity containing a self-defocusing ($`n_2<0`$) nonlinear medium (see Fig. 4). The sound-like mode is most simply observed by applying two incident optical fields to the nonlinear cavity: a broad plane wave resonant with the cavity to form the nonlinear background fluid on top of which the sound-like mode can propagate, and a weaker amplitude-modulated beam which is modulated at the sound wave frequency in the radio range by an electro-optic modulator, and injected by means of an optical fiber tip at a single point on the entrance face of the Fabry-Perot. The resulting weak time-varying perturbations in the background light induce transversely propagating waves in the photon fluid, which propagate away from the point of injection like ripples on a pond.
This sound-like mode can be phase-sensitively detected by another fiber tip placed at the exit face of the Fabry-Perot some transverse distance away from the injection point, and its sound-like wavelength can be measured by scanning this fiber tip transversely across the exit face. The experiment employs a cavity length $`L`$ of $`2`$ cm and mirrors with intensity reflectivities of $`R=0.997`$, for a cavity finesse of roughly $`1000`$. The optical nonlinearity is provided by rubidium vapor at $`80^\mathrm{o}`$ C, corresponding to a number density of $`10^{12}`$ rubidium atoms per cubic centimeter. We use a circularly-polarized laser beam, detuned by around $`600`$ MHz to the red side of the $`{}_{}{}^{87}\mathrm{Rb}`$, $`F=2\rightarrow F^{\prime }=3`$ transition of the $`D_2`$ line; using this closed transition eliminates optical pumping into the $`F=1`$ ground state. This 600 MHz detuning of the laser from the atomic resonance is considerably larger than the Doppler width of 340 MHz, and the residual absorption arising from the tails of the nearby resonance line gives rise to a loss which is less than or comparable to the loss arising from the mirror transmissions. This extra absorption loss contributes to a slightly larger effective cavity loss coefficient $`\mathrm{\Gamma }`$, but does not otherwise alter the qualitative behavior of the Bogoliubov dispersion relation, nor any of the other main conclusions of this paper. The above criteria (1-3) for the validity of Eq. (41), as well as those for the validity of the microscopic Bogoliubov theory, should be well satisfied by these experimental parameters. An intracavity intensity of $`40\mathrm{W}/\mathrm{cm}^2`$ results in $`\mathrm{\Delta }n=2\times 10^{-6}`$, for a sound speed $`v_s=4.2\times 10^7\mathrm{cm}/\mathrm{s}`$ and transition wavelength $`\mathrm{\Lambda }_c\approx 1\mathrm{mm}`$. For this intensity, $`N_0\approx 8\times 10^{11}`$, so that the condition for the validity of the Bogoliubov theory, $`N_0\gg 1`$, is well satisfied.
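As a numerical sanity check on these figures (illustrative only; the 780 nm D<sub>2</sub> wavelength is our assumed value, since only the detuning from that line is quoted above), the sound speed and transition wavelength follow directly from Eq. (47) and the definition of Λ<sub>c</sub>:

```python
import math

c = 3.0e10        # speed of light (cm/s)
delta_n = 2.0e-6  # quoted nonlinear index shift at 40 W/cm^2
lam = 780e-7      # assumed Rb D2 optical wavelength, 780 nm, in cm

v_s = c * math.sqrt(delta_n)         # sound speed, Eq. (47)
Lambda_c = lam / math.sqrt(delta_n)  # transition wavelength Lambda_c

print(v_s)       # ~4.2e7 cm/s, matching the value quoted in the text
print(Lambda_c)  # ~0.055 cm, i.e. of order 1 mm
```

Both numbers reproduce the values quoted above to within rounding.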
## VII Discussion and Future Directions We suggest here that the Bogoliubov form of dispersion relation, Eq. (36) or (46), implies that the photon fluid formed by repulsive photon-photon interactions in the nonlinear cavity is actually a photon superfluid. This means that a superfluid state of light might actually exist. Although the exact definition of superfluidity is presently still under discussion, especially in light of the question whether the recently discovered atomic Bose-Einstein condensates are superfluids or not , one indication of the existence of a photon superfluid would be a critical transition from a dissipationless state of superflow, i.e., a laminar flow of the photon fluid below a certain critical velocity past an obstacle, into a turbulent state of flow above this critical velocity, accompanied by energy dissipation associated with the shedding of von-Karman-like quantized vortices past this obstacle. (It is the generation of quantized vortices above this critical velocity which distinguishes the onset of superfluid turbulence from the onset of normal hydrodynamic turbulence.) The Bogoliubov dispersion relation (plotted earlier in Fig. 2) consists of two regimes: (1) a linear regime near the origin (i.e., for low-energy excitations), in which the energy of an elementary excitation is proportional to its momentum; this regime corresponds to the sound-like waves, or more precisely the phonons in the photon fluid, produced by the collective oscillations of this fluid, in which the photons are coupled to each other by the mutually repulsive interactions between them; and (2) a quadratic regime at sufficiently large transverse momenta, corresponding to the diffraction of the component photons, which dominates when the pairwise interactions between the photons can be neglected.
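The crossover between the two regimes is easy to exhibit numerically. The sketch below evaluates Eq. (36) in dimensionless units with m = N₀V(0) = 1, a choice made purely for illustration:

```python
import math

m = 1.0     # photon effective mass (arbitrary units)
N0V = 1.0   # N_0 V(0), constant-interaction approximation

def omega(kappa):
    # Bogoliubov dispersion relation, Eq. (36)
    return math.sqrt(kappa**2 * N0V / m + kappa**4 / (4 * m**2))

v_s = math.sqrt(N0V / m)          # phonon (sound) speed, Eq. (37)
kappa_c = 2 * math.sqrt(m * N0V)  # crossover momentum, Eq. (38)

# (1) linear (phonon) regime: omega ~ v_s * kappa for kappa << kappa_c
print(omega(1e-4) / 1e-4)                   # ~1.0, i.e. ~v_s
# (2) quadratic (free-particle) regime: omega ~ kappa^2/2m for kappa >> kappa_c
print(omega(100.0) / (100.0**2 / (2 * m)))  # ~1.0
```

Well below κ<sub>c</sub> the ratio ω̃/κ is the sound speed; well above it the spectrum is the free-particle one.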
A crude one-dimensional model gives an intuitive picture of the origin of the sound-like waves in the photon fluid: Consider a system consisting of identical steel balls placed on a frictionless track. This system of balls is initially motionless. Now set a ball at one end of the track into motion so that it collides with its nearest neighbor. The momentum transfer between adjacent hard spheres on this track, as they collide with one another, sets up a moving pattern of density fluctuations among the balls, which propagates like a sound wave from one end of the track towards the other end. Such a sound-like wave carries energy and momentum with it as it propagates. It may be asked why the classical nonlinear optical calculation gives the same result as the quantum many-body calculation. One answer is that one expects classical sound waves to have the same dispersion relation as phonons in a quantum many-body system: there exists a classical, correspondence-principle limit of the quantum many-body problem, in which the collective excitations (i.e., their dispersion relation) do not change their form in the classical limit of large phonon number. The physical meaning of this dispersion relation is that the lowest energy excitations of the system consist of quantized sound waves or phonon excitations in a superfluid, whose maximum critical velocity is then given by the sound wave velocity. Inspection of this dispersion relation shows that a single quantum of any elementary excitation cannot exist with a velocity below that of the sound wave. Hence no excitation of the superfluid at zero temperature is possible at all for any object moving with a velocity slower than the sound wave velocity, according to an argument by Landau . Hence the flow of the superfluid must be dissipationless below this critical velocity.
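The hard-sphere analogy above can be sketched in a few lines of code (a toy illustration, not part of the original discussion): for equal masses, an elastic head-on collision simply exchanges the two velocities, so the initial momentum hops ball by ball down the line like a sound pulse.

```python
def propagate_pulse(n_balls, v0=1.0):
    # toy chain of identical elastic balls on a frictionless track:
    # an equal-mass head-on elastic collision exchanges the velocities,
    # so the momentum pulse is handed from neighbor to neighbor
    v = [0.0] * n_balls
    v[0] = v0
    for i in range(n_balls - 1):
        v[i], v[i + 1] = v[i + 1], v[i]
    return v

print(propagate_pulse(5))  # [0.0, 0.0, 0.0, 0.0, 1.0]
```

After the chain of collisions the entire initial momentum has been carried to the far end, while every intermediate ball is again at rest.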
Above a certain critical velocity, dissipation due to vortex shedding is expected from computer simulations based on the Gross-Pitaevskii (or Ginzburg-Landau or nonlinear Schrödinger) equation which should give an accurate description of this system at the macroscopic level . We propose a follow-up experiment to demonstrate that the sound wave velocity, typically a few thousandths of the vacuum speed of light, is indeed a maximum critical velocity of a fluid, i.e., that this photon fluid exhibits persistent currents in accordance with the Landau argument based on the Bogoliubov dispersion relation. Suppose we shine light at some nonvanishing incidence angle on a Fabry-Perot resonator (i.e., exciting it on some off-axis mode). This light produces a uniform flow field of the photon fluid, which flows inside the resonator in some transverse direction and at a speed determined by the incidence angle. A cylindrical obstacle placed inside the resonator will induce a laminar flow of the superfluid around the cylinder, as long as the flow velocity remains below a certain critical velocity. However, above this critical velocity a turbulent flow will be induced, with the formation of a von-Karman vortex street associated with quantized vortices shed from the boundary of the cylinder . The typical vortex core size is given by the light wavelength divided by the square root of the nonlinear index change. Typically the vortex core size should therefore be around a few hundred microns, so that this nonlinear optical phenomenon should be readily observable. ## Acknowledgments We thank L.M.A. Bettencourt, D.A.R. Dalvit, I.H. Deutsch, J.C. Garrison, D.H. Lee, M.W. Mitchell, J. Perez-Torres, D.S. Rokhsar, D.J. Thouless, E.M. Wright, and W.H. Zurek for helpful discussions. The work was supported by the ONR and by the NSF.
## 1 Introduction The study of formal (1-differentiable) deformations of the Lie-Poisson algebra of functions on a symplectic manifold was initiated in a paper by M. Flato, A. Lichnerowicz and D. Sternheimer . Shortly afterwards, this program was extended to star-products, i.e., associative deformations of the usual product of functions, on a symplectic manifold giving, among other things, a profound interpretation of Quantum Mechanics as a deformation of Classical Mechanics in the direction of the Poisson bracket. The existence problem of star-products has been solved by successive steps from special classes of symplectic manifolds to general Poisson manifolds. The existence of star-products on any finite dimensional symplectic manifold was first shown by M. De Wilde and P. Lecomte . Since then, more geometric proofs have appeared , and a proof of existence for regular Poisson manifolds was published by M. Masmoudi . For the non-regular Poisson case, first examples of star-products appeared in in relation with the quantization of angular momentum. They were defined on the dual of $`𝔰𝔬(n)`$ endowed with its natural Kirillov-Poisson structure. The case for any Lie algebra follows from the construction given by S. Gutt of a star-product on the cotangent bundle of a Lie group $`G`$. This star-product restricts to a star-product on the dual of the Lie algebra of $`G`$. It translates the associative structure of the enveloping algebra in terms of functions on the dual of the Lie algebra of $`G`$. The problem of existence of star-products on any finite dimensional Poisson manifold was given a solution by M. Kontsevich . The proof is based on an explicit expression of a star-product on $`^d`$ endowed with a general Poisson bracket, which itself follows from more general formulae which allowed him to show his formality conjecture for $`^d`$ and then for any finite dimensional manifold $`M`$. Recently, by using different techniques, D.
Tamarkin has indicated another proof of the formality conjecture. This is one of the ingredients in the most recent fundamental paper by M. Kontsevich . On the dual of a Lie algebra, we have a priori two different star-products: Gutt and Kontsevich star-products. D. Arnal showed that when the Lie algebra is nilpotent, these two star-products do coincide. Here we shall give an elementary proof that in the general case Gutt and Kontsevich star-products are equivalent and explicitly construct the equivalence between them. For that purpose, we use the notion of Weyl star-products on $`^d`$. These are star-products enjoying the following property: $`X\ast _{\mathrm{}}\cdots \ast _{\mathrm{}}X`$ ($`k`$ factors) is equal to $`X^k`$ (usual product) for any linear polynomial $`X`$ on $`^d`$ and $`k\ge 0`$. Any star-product on $`^d`$ is equivalent to a Weyl star-product and, in the case of the dual of a Lie algebra, Gutt star-product is the unique covariant Weyl star-product. From this fact, one immediately obtains that Kontsevich and Gutt star-products are equivalent. The equivalence operator is obtained by applying a method used in in the context of generalized Abelian deformations. It turns out that the equivalence operator is an exponential of constant-coefficient linear operators given by the trace of powers of the adjoint map of the Lie algebra. The formula we have found is closely related to the discussion given in about Lie algebras. The paper is organized as follows. We review Kontsevich construction in Section 2. Then we proceed to the study of Weyl star-products on $`^d`$ and get a characterization of Gutt star-product. The main result about equivalence is proved in Section 4. We end the paper with some remarks on the general Poisson case. Since the first version of this paper was completed, several preprints dealing with Kontsevich star-product on the dual of a Lie algebra have appeared . The equivalence result found here has also been obtained by D. Arnal, N.
Ben Amar, and M. Masmoudi in a completely different approach involving cohomology of Kontsevich graphs. ## 2 Kontsevich star-product The reader is referred to for the general theory on star-products and to the excellent review by D. Sternheimer for further details and recent applications. We shall briefly review the construction of a star-product on $`^d`$ given in . Consider $`^d`$ endowed with a Poisson bracket $`\pi `$. We denote by $`(x^1,\mathrm{},x^d)`$ the coordinate system on $`^d`$; the Poisson bracket of two smooth functions $`f,g`$ is given by $`\pi (f,g)=\sum _{1\le i,j\le d}\pi ^{ij}\partial _if\partial _jg`$, where $`\partial _k`$ denotes the partial derivative with respect to $`x^k`$. What follows remains valid if, instead of the whole of $`^d`$, one considers an open subset of it. We slightly depart from the notations used in . The formula for Kontsevich star-product is conveniently defined by considering, for each $`n\ge 0`$, a family of oriented graphs $`G_n`$. To a graph $`\mathrm{\Gamma }\in G_n`$ is associated a bidifferential operator $`B_\mathrm{\Gamma }`$ and a weight $`w(\mathrm{\Gamma })`$. The sum $`\sum _{\mathrm{\Gamma }\in G_n}w(\mathrm{\Gamma })B_\mathrm{\Gamma }`$ gives us the term at order $`\mathrm{}^n`$, i.e., the cochain $`C_n`$ of the star-product. Here is the formal definition of $`G_n`$. An oriented graph $`\mathrm{\Gamma }`$ belongs to $`G_n`$, $`n\ge 0`$, if: * $`\mathrm{\Gamma }`$ has $`n+2`$ vertices labeled $`\{1,2,\mathrm{},n,L,R\}`$ where $`L`$ and $`R`$ stand for Left and Right, respectively, and $`\mathrm{\Gamma }`$ has $`2n`$ oriented edges labeled $`\{i_1,j_1,i_2,j_2,\mathrm{},i_n,j_n\}`$; * The pair of edges $`\{i_k,j_k\}`$, $`1\le k\le n`$, starts at vertex $`k`$; * $`\mathrm{\Gamma }`$ has no loop (edge starting at some vertex and ending at that vertex) and no parallel multiple edges (edges sharing the same starting and ending vertices). When it is needed to make explicit at which vertex $`v\in \{1,\mathrm{},n,L,R\}`$ some edge, e.g.
$`j_k`$, is ending at, we shall use the notation $`j_k(v)`$. The set of graphs in $`G_n`$ is finite. For $`n\ge 1`$, the first edge $`i_k`$ starting at vertex $`k`$ has $`n+1`$ possible ending vertices (since there is no loop), while the second edge $`j_k`$ has only $`n`$ possible ending vertices, since there are no parallel multiple edges. Thus there are $`n(n+1)`$ ways to draw the pair of edges starting at some vertex and therefore $`G_n`$ has $`(n(n+1))^n`$ elements. For $`n=0`$, $`G_0`$ has only one element: The graph having as set of vertices $`\{L,R\}`$ and no edges. A bidifferential operator $`(f,g)\mapsto B_\mathrm{\Gamma }(f,g)`$, $`f,g\in C^{\mathrm{}}(^d)`$, is associated to each graph $`\mathrm{\Gamma }\in G_n`$, $`n\ge 1`$. To each vertex $`k`$, $`1\le k\le n`$, one associates the components $`\pi ^{i_kj_k}`$ of the Poisson tensor, $`f`$ is associated to the vertex $`L`$ and $`g`$ to the vertex $`R`$. Each edge, e.g. $`i_k(v)`$ acts by partial differentiation with respect to $`x^{i_k}`$ on its ending vertex $`v`$. There is no better way than to draw the graph $`\mathrm{\Gamma }`$ to illustrate the correspondence $`\mathrm{\Gamma }\mapsto B_\mathrm{\Gamma }`$. See for a general formula. The graph in Fig. 1 gives the bidifferential operator $$B_\mathrm{\Gamma }(f,g)=\underset{0\le i_{},j_{}\le d}{\sum }\pi ^{i_1j_1}\partial _{j_1j_3}\pi ^{i_2j_2}\partial _{i_2}\pi ^{i_3j_3}\partial _{i_1j_2}f\partial _{i_3}g.$$ Notice that for $`n=0`$, we simply have the usual product of $`f`$ and $`g`$. Now let us describe how the weight $`w(\mathrm{\Gamma })`$ of a graph $`\mathrm{\Gamma }`$ is defined. Again the reader is referred to for details and a nice geometrical interpretation of what follows. Let $`\mathbb{H}=\{z\in \mathbb{C}|\mathrm{Im}(z)>0\}`$ be the upper half-plane. $`\mathbb{H}_n`$ will denote the configuration space $`\{z_1,\mathrm{},z_n\in \mathbb{H}|z_i\ne z_j\mathrm{for}i\ne j\}`$. $`\mathbb{H}_n`$ is an open submanifold of $`\mathbb{C}^n`$.
Let $`\varphi :\mathbb{H}_2\to \mathbb{R}/2\pi \mathbb{Z}`$ be the function: $$\varphi (z_1,z_2)=\frac{1}{2\sqrt{-1}}\mathrm{Log}\left(\frac{(z_2-z_1)(\overline{z}_2-z_1)}{(z_2-\overline{z}_1)(\overline{z}_2-\overline{z}_1)}\right).$$ (1) $`\varphi (z_1,z_2)`$ is extended by continuity for $`z_1,z_2\in \mathbb{R}`$, $`z_1\ne z_2`$. For a graph $`\mathrm{\Gamma }\in G_n`$, the vertex $`k`$, $`1\le k\le n`$, is associated with the variable $`z_k`$, the vertex $`L`$ with $`0`$, and the vertex $`R`$ with $`1`$. The weight $`w(\mathrm{\Gamma })`$ is defined by integrating a $`2n`$-form over $`\mathbb{H}_n`$: $$w(\mathrm{\Gamma })=\frac{1}{n!(2\pi )^{2n}}\int _{\mathbb{H}_n}\underset{1\le k\le n}{\bigwedge }\left(d\varphi (z_k,I_k)\wedge d\varphi (z_k,J_k)\right),$$ (2) where $`I_k`$ (resp. $`J_k`$) denotes the variable or real number associated with the ending vertex of the edge $`i_k`$ (resp. $`j_k`$). For example, the weight of the graph in Fig. 1 consists in integrating the $`6`$-form $`d\varphi (z_1,0)\wedge d\varphi (z_1,z_2)\wedge d\varphi (z_2,z_3)\wedge d\varphi (z_2,0)\wedge d\varphi (z_3,1)\wedge d\varphi (z_3,z_2)`$ on $`\mathbb{H}_3`$. It is clear from the definition of the weights that they are universal in the sense that they do not depend on the Poisson structure or the dimension $`d`$. The origin of the weights has been elucidated by A. S. Cattaneo and G. Felder . These authors have been able to construct a bosonic topological field theory on a disc whose perturbation series (after a finite renormalization taking care of tadpoles) makes Kontsevich graphs and weights appear explicitly. It is shown in that the integral in Eq. (2) is absolutely convergent. A pillar result in is ###### Theorem 1 (Kontsevich) For any Poisson structure $`\pi `$ on $`^d`$, the map $$(f,g)\mapsto \underset{n\ge 0}{\sum }\mathrm{}^n\underset{\mathrm{\Gamma }\in G_n}{\sum }w(\mathrm{\Gamma })B_\mathrm{\Gamma }(f,g),f,g\in C^{\mathrm{}}(^d),$$ defines an associative product. We call this product the Kontsevich star-product and it will be denoted by $`\ast _{\mathrm{}}^K`$ and the corresponding cochains by $`C_r^K`$.
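As a small numerical aside (not part of the original construction), one can check that the argument of the Log in Eq. (1) always has unit modulus for z₁, z₂ in the upper half-plane, so that φ is indeed real-valued modulo 2π:

```python
import cmath
import math

def phi(z1, z2):
    # angle function of Eq. (1): (1/2i) Log of a unit-modulus cross-ratio
    ratio = ((z2 - z1) * (z2.conjugate() - z1)) / \
            ((z2 - z1.conjugate()) * (z2.conjugate() - z1.conjugate()))
    return (cmath.log(ratio) / 2j).real

z1, z2 = 0.3 + 1.0j, 1.1 + 0.4j
ratio_mod = abs(((z2 - z1) * (z2.conjugate() - z1)) /
                ((z2 - z1.conjugate()) * (z2.conjugate() - z1.conjugate())))
print(ratio_mod)    # ~1.0: the Log argument lies on the unit circle
print(phi(z1, z2))  # a real angle
```

The unit modulus follows because numerator and denominator of the ratio are complex conjugates up to factors of equal modulus, so the principal logarithm is purely imaginary and φ is real.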
Actually the preceding theorem holds if $`\pi `$ is replaced by any formal Poisson bracket $`\pi _{\mathrm{}}=\pi +\sum _{r\ge 1}\mathrm{}^r\pi _r`$. Moreover, equivalence classes of star-products are in one-to-one correspondence with equivalence classes of formal Poisson brackets . At first it may seem that any computation involving the graphs becomes rapidly cumbersome as $`\mathrm{\#}G_n=(n(n+1))^n`$. But the situation is not that bad: there are many isomorphic graphs obtained by permuting the vertices or interchanging the edges $`\{i_k,j_k\}\leftrightarrow \{j_k,i_k\}`$. These operations do not affect $`w(\mathrm{\Gamma })B_\mathrm{\Gamma }`$ as each factor picks up a minus sign. Also in $`G_n`$, $`n\ge 2`$, there are “bad” graphs that can be eliminated right away. These graphs are those for which the vertices $`L`$ or $`R`$ (or both) do not receive any edge. As it should, the weights associated to these graphs vanish. We will illustrate that by giving the explicit form of the second cochain $`C_2`$ which requires at the end the computation of only $`3`$ graphs (notice that $`\mathrm{\#}G_2=36`$). The graphs in Fig. 2 have weights $`w(\mathrm{\Gamma }_1)=1/8`$, $`w(\mathrm{\Gamma }_2)=1/24`$, $`w(\mathrm{\Gamma }_3)=1/48`$. By counting the symmetries, the graph $`\mathrm{\Gamma }_1`$ contributes $`4`$ times, $`\mathrm{\Gamma }_2`$ contributes $`8`$ times, and $`\mathrm{\Gamma }_3`$ contributes $`8`$ times. There is also a sister-graph for $`\mathrm{\Gamma }_2`$ which is obtained by performing $`L\leftrightarrow R`$ which contributes also $`8`$ times. Taking into account that there are $`8`$ “bad” graphs, we have a total of $`36`$ graphs, and we find that: $`C_2^K(f,g)`$ $`=`$ $`{\displaystyle \frac{1}{2}}\pi ^{i_1j_1}\pi ^{i_2j_2}\partial _{i_1i_2}f\partial _{j_1j_2}g`$ $`+{\displaystyle \frac{1}{3}}\pi ^{i_1j_1}\partial _{i_1}\pi ^{i_2j_2}(\partial _{j_1j_2}f\partial _{i_2}g+\partial _{i_2}f\partial _{j_1j_2}g)`$ $`-{\displaystyle \frac{1}{6}}\partial _{j_2}\pi ^{i_1j_1}\partial _{j_1}\pi ^{i_2j_2}\partial _{i_1}f\partial _{i_2}g,`$ where summation over repeated indices is understood.
## 3 Weyl star-products on $`^d`$ Let $`\pi `$ be a general Poisson structure on $`^d`$. Let Pol be the algebra of polynomials in the variables $`x^1,\mathrm{},x^d`$ and let Lin denote the subspace of linear homogeneous polynomials. Let $`\ast _{\mathrm{}}`$ be a star-product on $`(^d,\pi )`$. We shall show that $`\ast _{\mathrm{}}`$ is (differentially) equivalent to a star-product $`\ast _{\mathrm{}}^{\prime }`$ having the following property: $$X^{\ast _{\mathrm{}}^{\prime }k}=X^k,k\ge 0,X\in \text{Lin},$$ (4) where $`X^{\ast _{\mathrm{}}^{\prime }k}=X\ast _{\mathrm{}}^{\prime }\cdots \ast _{\mathrm{}}^{\prime }X`$ ($`k`$ factors). This is reminiscent of Weyl ordering in Quantum Mechanics and we introduce: ###### Definition 1 A star-product on $`(^d,\pi )`$ satisfying Eq. (4) is called a Weyl star-product. The consideration of this kind of star-products amounts to generalized Abelian deformations . We recall the proof of the following: ###### Theorem 2 Any star-product on $`(^d,\pi )`$ is equivalent to a Weyl star-product. Proof. The proof consists in establishing the differentiability of the following $`[[\mathrm{}]]`$-linear map $`\rho :\text{Pol}[[\mathrm{}]]\to C^{\mathrm{}}(^d)[[\mathrm{}]]`$ uniquely defined by: $$\rho (X^k)=X^{\ast _{\mathrm{}}k},k\ge 0,X\in \text{Lin}.$$ (5) The map $`\rho `$ is a formal sum of linear maps $`\rho =\sum _{i\ge 0}\mathrm{}^i\rho _i`$ with $`\rho _0`$ being the identity map on Pol. We will show that the $`\rho _r`$’s are differential operators. By definition $`\rho _r(1)=\rho _r(X)=0`$ for $`r\ge 1`$ and $`X\in \text{Lin}`$. It is easy to see from Eq. (5) that the $`\rho _r`$’s satisfy the following recurrence relation for $`k\ge 1,r\ge 1`$: $$\delta \rho _r(X,X^{k-1})=C_r(X,X^{k-1})+\underset{\genfrac{}{}{0pt}{}{a+b=r}{a,b\ge 1}}{\sum }C_a(X,\rho _b(X^{k-1})),$$ (6) (the $`C_r`$’s are the 2-cochains of the star-product). For $`r=1`$ the sum on the right-hand side is omitted and $`\delta `$ is the Hochschild differential. Before going further we need a lemma.
###### Lemma 1 Let $`\psi :\text{Pol}\to C^{\mathrm{}}(^d)`$ be an $`\mathbb{R}`$-linear map such that $`\psi (1)=\psi (X)=0`$, for $`X\in \text{Lin}`$, and let $`\varphi :C^{\mathrm{}}(^d)\times C^{\mathrm{}}(^d)\to C^{\mathrm{}}(^d)`$ be a bidifferential operator vanishing on constants. If $`\psi `$ satisfies $$\delta \psi (X,X^{k-1})=\varphi (X,X^{k-1}),k\ge 1,X\in \text{Lin},$$ (7) then there exists a differential operator $`\eta `$ on $`^d`$ such that $`\psi =\eta |_{\text{Pol}}`$. Proof. For two functions $`f,g`$, let $`\sum _{I,J}\varphi ^{I,J}\partial _If\partial _Jg`$ be the expression of $`\varphi (f,g)`$ in local coordinates, where $`I`$ and $`J`$ are multi-indices and $`\varphi ^{I,J}`$ is a smooth function vanishing for $`|I|`$ or $`|J|`$ greater than some integer (for $`I=(i_1,\mathrm{},i_n)`$, $`|I|`$ denotes its length $`i_1+\mathrm{}+i_n`$). In Eq. (7), only first derivatives can be applied to the first argument of $`\varphi `$ and one can check the following series of equalities: $$\varphi (X,X^{k-1})=\underset{i,J}{\sum }\varphi ^{i,J}\partial _iX\partial _JX^{k-1}=\frac{1}{k}\underset{i,J}{\sum }\varphi ^{i,J}\partial _{iJ}X^k=\delta \eta (X,X^{k-1}),$$ (8) where $`\eta =\sum _{i,J}\frac{1}{1+|J|}\varphi ^{i,J}\partial _{iJ}`$. Now Eq. (7) can be written as $`\delta (\psi -\eta )(X,X^{k-1})=0`$ and by observing that $`\psi -\eta `$ vanishes on $`1`$ and $`X`$ we get that $`\psi -\eta =0`$ on Pol. This shows the lemma. The term of order $`1`$ in Eq. (6) yields $`(\delta \rho _1+C_1)(X,X^{k-1})=0`$. We have that $`C_1=\pi +\delta \theta _1`$ for some differentiable $`1`$-cochain $`\theta _1`$, which can be chosen such that $`\theta _1(X)=0`$ for $`X\in \text{Lin}`$ by adding an appropriate Hochschild $`1`$-cocycle (i.e., a vector field). Then as before $`\delta (\rho _1+\theta _1)(X,X^{k-1})=0`$ gives us $`\rho _1=-\theta _1`$ on Pol showing that $`\rho _1`$ is a differential operator. With the help of Lemma 1, a simple recurrence on $`r`$ in Eq. (6) shows that for each $`r\ge 1`$, $`\rho _r`$ coincides with the restriction of a differential operator to Pol.
Clearly the map $`\rho `$ can be naturally extended to an $`[[\mathrm{}]]`$-linear map on $`C^{\mathrm{}}(^d)[[\mathrm{}]]`$. We still denote this extension by $`\rho `$. The map $`\rho `$ is invertible as $`\rho _0`$ is the identity map, and we can use it to define a star-product $`_{\mathrm{}}^{}{}_{}{}^{}`$ equivalent to $`_{\mathrm{}}`$ by: $$\rho (f_{\mathrm{}}^{}{}_{}{}^{}g)=\rho (f)_{\mathrm{}}\rho (g),f,g\in C^{\mathrm{}}(^d).$$ (9) Notice that $`X^_{\mathrm{}}^{}{}_{}{}^{}k=\rho ^{-1}(\rho (X)^_{\mathrm{}}k)=\rho ^{-1}(X^_{\mathrm{}}k)=X^k`$ for $`k\ge 0`$ and $`X\in \text{Lin}`$, therefore $`_{\mathrm{}}^{}{}_{}{}^{}`$ is a Weyl star-product. ### 3.1 Gutt star-product on $`𝔤^{}`$ Let $`G`$ be a real finite-dimensional Lie group of dimension $`d`$. The Lie algebra of $`G`$ is denoted by $`𝔤`$ and its dual by $`𝔤^{}`$. The universal enveloping algebra (resp. symmetric algebra) of $`𝔤`$ is denoted by $`𝒰(𝔤)`$ (resp. $`𝒮(𝔤)`$). Also we denote by $`\text{Pol}(𝔤^{})`$ the space of polynomials on $`𝔤^{}`$. It is well known that the space of smooth functions on $`𝔤^{}`$ carries a natural Poisson structure defined by the Kirillov-Poisson bracket, denoted by $`\mathrm{\Pi }`$. Fix a basis for $`𝔤`$, let $`C_{ij}^k`$ be the structure constants in that basis, and let $`(x^1,\mathrm{},x^d)`$ be the coordinates on $`𝔤^{}`$. Then the Kirillov-Poisson bracket is defined by: $$\mathrm{\Pi }(f,g)=\underset{1\le i,j,k\le d}{\sum }x^kC_{ij}^k_if_jg,f,g\in C^{\mathrm{}}(𝔤^{}).$$ (10) Of course this definition is independent of the chosen basis for $`𝔤`$. S. Gutt has defined a star-product on the cotangent bundle $`T^{}G`$ of a Lie group $`G`$. When one restricts this star-product to functions not depending on the base point in $`G`$, one gets a star-product on $`𝔤^{}`$. We shall call the induced product on $`C^{\mathrm{}}(𝔤^{})`$ the Gutt star-product on $`𝔤^{}`$ and denote it by $`_{\mathrm{}}^G`$. 
The Gutt star-product on $`𝔤^{}`$ can also be obtained directly by transporting the algebraic structure of the enveloping algebra $`𝒰(𝔤)`$ of $`𝔤`$ to the space of polynomials on $`𝔤^{}`$. This is achieved through the natural isomorphism between $`\text{Pol}(𝔤^{})`$ and $`𝒮(𝔤)`$ and with the help of the symmetrization map $`\sigma :𝒮(𝔤)\to 𝒰(𝔤)`$. The product between two homogeneous elements $`P`$ and $`Q`$ in $`𝒮(𝔤)\simeq \text{Pol}(𝔤^{})`$ of degrees $`p`$ and $`q`$, respectively, is given by: $$P_{\mathrm{}}^GQ=\underset{0\le r\le p+q-1}{\sum }(2\mathrm{})^r\sigma ^{-1}((\sigma (P)\sigma (Q))_{p+q-r}),$$ (11) where $`\sigma (P)\sigma (Q)`$ is computed with the product in $`𝒰(𝔤)`$ and, for $`v\in 𝒰(𝔤)`$, $`(v)_k`$ means the $`k`$-th component of $`v`$ with respect to the associated grading of $`𝒰(𝔤)`$. Formula (11) defines an associative deformation of the usual product on $`\text{Pol}(𝔤^{})`$ which admits a unique extension to $`C^{\mathrm{}}(𝔤^{})`$. As a direct consequence of Eq. (11) we have that $`_{\mathrm{}}^G`$ is a Weyl star-product on $`(𝔤^{},\mathrm{\Pi })`$. Moreover the following property is easily verified: $$X_{\mathrm{}}Y-Y_{\mathrm{}}X=2\mathrm{}\mathrm{\Pi }(X,Y),X,Y\in \text{Lin}(𝔤^{}),$$ where $`\text{Lin}(𝔤^{})`$ is the subspace of homogeneous polynomials of degree $`1`$ on $`𝔤^{}`$. Star-products on $`(𝔤^{},\mathrm{\Pi })`$ satisfying the preceding relation are called $`𝔤`$-covariant star-products. Actually, there is a characterization of the Gutt star-product: ###### Lemma 2 The Gutt star-product is the unique $`𝔤`$-covariant Weyl star-product on $`(𝔤^{},\mathrm{\Pi })`$. Any $`𝔤`$-covariant star-product on $`(𝔤^{},\mathrm{\Pi })`$ is equivalent to the Gutt star-product. Proof. Any star-product $`_{\mathrm{}}`$ on $`(𝔤^{},\mathrm{\Pi })`$ is determined by the quantities $`\mathrm{exp}(X)_{\mathrm{}}\mathrm{exp}(Y)`$, $`X,Y\in \text{Lin}(𝔤^{})`$. 
Suppose that $`_{\mathrm{}}`$ is a $`𝔤`$-covariant Weyl star-product; then the star-exponential of $`X\in \text{Lin}(𝔤^{})`$, defined by: $$\mathrm{exp}_{_{\mathrm{}}}(X)=\underset{k\ge 0}{\sum }\frac{1}{k!}X^_{\mathrm{}}k,$$ coincides with the usual exponential $`\mathrm{exp}(X)`$. The covariance property of $`_{\mathrm{}}`$ allows us to use the Campbell-Hausdorff formula: $$\mathrm{exp}(X)_{\mathrm{}}\mathrm{exp}(Y)=\mathrm{exp}_{_{\mathrm{}}}(CH_{\mathrm{}}(X,Y))=\mathrm{exp}(CH_{\mathrm{}}(X,Y)),$$ (12) where $`CH_{\mathrm{}}(X,Y)`$ is the usual Campbell-Hausdorff series with respect to the bracket $`[X,Y]=2\mathrm{}\mathrm{\Pi }(X,Y)`$. For $`X,Y\in \text{Lin}(𝔤^{})`$, $`CH_{\mathrm{}}(X,Y)`$ is an element of $`\text{Lin}(𝔤^{})[[\mathrm{}]]`$. We still have $`CH_{\mathrm{}}(X,Y)^_{\mathrm{}}k=CH_{\mathrm{}}(X,Y)^k`$ for $`k\ge 0`$. It follows from Eq. (12) that there is at most one $`𝔤`$-covariant Weyl star-product on $`𝔤^{}`$, i.e., the Gutt star-product. The second statement of the lemma follows from Theorem 2 and from the fact that the equivalence operator $`\rho `$ preserves the covariance property, i.e., $`\rho (X)=X`$ for $`X\in \text{Lin}(𝔤^{})`$ (cf. Eq. (5)). From Eq. (12), one can derive an explicit expression for the cochains of the Gutt star-product. Denote by $`c_i`$, $`i\ge 1`$, the Campbell-Hausdorff coefficients: $`c_1(X,Y)=X+Y`$, $`c_2(X,Y)=\frac{1}{2}[X,Y]`$, etc. The term of order $`\mathrm{}^r`$ in $`\mathrm{exp}(X)_{\mathrm{}}^G\mathrm{exp}(Y)`$ for $`X,Y\in \text{Lin}(𝔤^{})`$ is obtained by expanding the right-hand side of Eq. (12) in powers of $`\mathrm{}`$; it is given by: $`C_r^G(\mathrm{exp}(X),\mathrm{exp}(Y))`$ (13) $`=2^r\mathrm{exp}(X+Y){\displaystyle \underset{1\le k\le r}{\sum }}{\displaystyle \underset{\genfrac{}{}{0pt}{}{m_1>\mathrm{}>m_k\ge 1}{\genfrac{}{}{0pt}{}{n_1,\mathrm{},n_k\ge 1}{m_1n_1+\mathrm{}+m_kn_k=r}}}{\sum }}{\displaystyle \underset{1\le j\le k}{\prod }}{\displaystyle \frac{1}{n_j!}}(c_{m_j+1}(X,Y))^{n_j},`$ where the bracket $`[X,Y]`$ in the $`c_i`$’s is taken to be $`\mathrm{\Pi }(X,Y)`$. 
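As a concrete illustration of formula (11) and the covariance relation (taking, purely for illustration, $`𝔤=so(3)`$ with $`[X_1,X_2]=X_3`$ and cyclic permutations), the product of two coordinate functions can be worked out by hand. Writing $`X_1X_2=\frac{1}{2}(X_1X_2+X_2X_1)+\frac{1}{2}[X_1,X_2]`$ in $`𝒰(𝔤)`$ and applying (11) with $`p=q=1`$:

```latex
% Gutt product of two coordinate functions on so(3)^*,
% with [X_1, X_2] = X_3, so that \Pi(x^1, x^2) = x^3 by Eq. (10).
\begin{align*}
x^1 \star^G_\hbar x^2
  &= \sigma^{-1}\!\big(\tfrac{1}{2}(X_1X_2 + X_2X_1)\big)
     + 2\hbar\,\sigma^{-1}\!\big(\tfrac{1}{2}[X_1,X_2]\big)
   = x^1 x^2 + \hbar\, x^3, \\
x^1 \star^G_\hbar x^2 - x^2 \star^G_\hbar x^1
  &= 2\hbar\, x^3 = 2\hbar\,\Pi(x^1, x^2),
\end{align*}
```

which recovers $`C_1^G=\mathrm{\Pi }`$ on linear elements, as required by $`𝔤`$-covariance.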
For $`r=2`$, we easily get the differential expression for $`C_2^G(f,g)`$, $`f,g\in C^{\mathrm{}}(𝔤^{})`$: $$\frac{1}{2!}\mathrm{\Pi }^{i_1j_1}\mathrm{\Pi }^{i_2j_2}_{i_1i_2}f_{j_1j_2}g+\frac{1}{3}\mathrm{\Pi }^{i_1j_1}_{i_1}\mathrm{\Pi }^{i_2j_2}(_{j_1j_2}f_{i_2}g+_{i_2}f_{j_1j_2}g).$$ Comparing with the general expression of the second cochain of the Kontsevich star-product given by (2), we see that in general the Kontsevich and Gutt star-products differ. Notice that the extra term in (2) is a Hochschild $`2`$-coboundary. It is instructive to derive from Eq. (13) an expression for $`X_{\mathrm{}}^Gg`$, $`X\in \text{Lin}(𝔤^{})`$, $`g\in C^{\mathrm{}}(𝔤^{})`$. Using the standard recurrence formula for the $`c_i`$’s, it is easy to establish that $`c_i(0,X)=c_i(X,0)=0,i\ge 2;`$ (14) $`{\displaystyle \frac{d}{ds}}c_i(sX,Y)|_{s=0}={\displaystyle \frac{B_{i-1}}{(i-1)!}}(ad_Y)^{i-1}(X),i\ge 2;`$ for $`X\in \text{Lin}(𝔤^{})`$ and real $`s`$; $`ad_Y`$ is the adjoint map $`X\mapsto [Y,X]`$, and the $`B_n`$’s are the Bernoulli numbers. The substitution $`X\mapsto sX`$ in Eq. (13) and differentiation with respect to $`s`$ gives: $$C_r^G(X,\mathrm{exp}(Y))=\frac{2^rB_r}{r!}(ad_Y)^r(X)\mathrm{exp}(Y),$$ which leads to $$C_r^G(X,g)=(-1)^r\frac{2^rB_r}{r!}\underset{1\le i_{},j_{}\le d}{\sum }\mathrm{\Pi }^{i_1j_1}_{i_1}\mathrm{\Pi }^{i_2j_2}\mathrm{}_{i_{r-1}}\mathrm{\Pi }^{i_rj_r}_{i_r}X_{j_1\mathrm{}j_r}g.$$ (15) ###### Remark 1 From Eq. (13) or, better, Eq. (15), it is clear that the weights of the graphs appearing in the Gutt star-product are essentially products of Bernoulli numbers. ## 4 Equivalence In this section, as in Sect. 3, we consider a Lie algebra $`𝔤`$ of dimension $`d`$ and use the notations previously introduced. We have seen that in general the Kontsevich and Gutt star-products are not identical. We will show that they are equivalent and explicitly determine the equivalence operator by computing a subfamily of graphs. ###### Lemma 3 The Kontsevich star-product $`_{\mathrm{}}^K`$ on $`(𝔤^{},\mathrm{\Pi })`$ is a $`𝔤`$-covariant star-product. Proof. 
We just need to see what kinds of graphs contribute to $`X_{\mathrm{}}^KY`$, $`X,Y\in \text{Lin}(𝔤^{})`$. The graphs for $`C_r^K(X,Y)`$ must be such that the vertices $`L`$ and $`R`$ each receive only one edge. For $`r=1`$, we simply have the Poisson bracket $`\mathrm{\Pi }`$. If $`r\ge 2`$, we need to draw $`2r-2`$ edges in such a way that each vertex $`k`$, $`1\le k\le r`$, receives at most one edge (since the Poisson bracket $`\mathrm{\Pi }`$ is linear in the coordinates), and this is possible only if $`2r-2\le r`$, i.e., $`r\le 2`$. For $`r=2`$, the only graph contributing (up to symmetry factors) is the graph $`\mathrm{\Gamma }_3`$ in Fig. 2, whose associated bidifferential operator $`_{\mathrm{\Gamma }_3}`$ is symmetric. Thus we have $`X_{\mathrm{}}Y-Y_{\mathrm{}}X=2\mathrm{}\mathrm{\Pi }(X,Y)`$. As a consequence of Lemmas 2 and 3 we have: ###### Corollary 1 On the dual of a Lie algebra, the Kontsevich and Gutt star-products are equivalent. The formal series of differential operators realizing the equivalence between the Kontsevich and Gutt star-products is the map $`\rho `$ defined in the proof of Theorem 2. We have $`\rho (f_{\mathrm{}}^Gg)=\rho (f)_{\mathrm{}}^K\rho (g)`$, $`f,g\in C^{\mathrm{}}(𝔤^{})`$, and in the present situation $`\rho `$ is defined by $`\rho (X^k)=X^{_{\mathrm{}}^Kk}`$, $`X\in \text{Lin}(𝔤^{})`$, $`k\ge 0`$. We will see (cf. Theorem 3) that to solve the recurrence relation (6) satisfied by the $`\rho _r`$’s, it is sufficient to consider graphs contributing to $`C_r^K(X,X^k)`$. The graphs having a non-trivial contribution must have only one edge ending at vertex $`L`$, e.g. $`i_k`$, and the other edge $`j_k`$ must end at some vertex $`k^{}\ne k`$, $`1\le k^{}\le r`$. 
We shall say that a graph $`\mathrm{\Gamma }\in G_r`$ is the union of two subgraphs $`\mathrm{\Gamma }_1\in G_{r_1}`$ and $`\mathrm{\Gamma }_2\in G_{r_2}`$ with $`r_1+r_2=r`$, if the subset $`(1,\mathrm{},r)`$ of the set of vertices of $`\mathrm{\Gamma }`$ can be split into two parts $`(a_1,\mathrm{},a_{r_1})`$ and $`(b_1,\mathrm{},b_{r_2})`$ such that there is no edge between these two subsets of vertices. A graph that is not the union of two subgraphs is called indecomposable. By recalling the definition of the weight of a graph, the following is straightforward: ###### Lemma 4 If a graph $`\mathrm{\Gamma }\in G_r`$ is the union of two subgraphs $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$, respectively, in $`G_{r_1}`$ and in $`G_{r_2}`$ with $`r_1+r_2=r`$, then $`w(\mathrm{\Gamma })=\frac{r_1!r_2!}{r!}w(\mathrm{\Gamma }_1)w(\mathrm{\Gamma }_2)`$. In view of this lemma, we just need to confine ourselves to indecomposable graphs whose unions contribute to $`C_r^K(X,X^k)`$. ###### Lemma 5 For $`r\ge 2`$, up to an isomorphism of graphs, an indecomposable graph in $`G_r`$ contributing to $`C_r^K(X,X^k)`$ falls into one of the two types illustrated in Fig. 4. Proof. As the vertex $`L`$ can receive at most one edge, we distinguish two cases. i) The vertex $`L`$ receives no edge. We will see that the vertex $`R`$ must receive exactly $`r`$ edges. If there are strictly more than $`r`$ edges ending at vertex $`R`$, then there must be a vertex $`k`$, $`1\le k\le r`$, such that the edges $`i_k`$ and $`j_k`$ both end at vertex $`R`$. This is excluded by definition of $`G_r`$. If there are strictly less than $`r`$ edges ending at vertex $`R`$, then at least one of the vertices $`(1,\mathrm{},r)`$ must receive two or more edges, and the bidifferential operator associated to such a graph vanishes since the Poisson bracket is a linear function of the coordinates. We are left with the case where exactly $`r`$ edges end at vertex $`R`$. 
Then every vertex in $`(1,\mathrm{},r)`$ must receive exactly one edge and, up to an isomorphism, there is precisely one such graph, i.e., graph $`\mathrm{\Gamma }_2^{(r)}`$ in Fig. 4. ii) The vertex $`L`$ receives one edge. In this case, the vertex $`R`$ receives $`r-1`$ edges. By relabeling the vertices and the edges, we may suppose that the edge ending at vertex $`L`$ is $`i_1`$. Then the second edge starting at the vertex $`1`$, i.e., $`j_1`$, cannot end at vertex $`R`$ because, by skew-symmetry of the Poisson bracket, the associated bidifferential operator vanishes on $`(X,X^k)`$. Hence we may suppose that the edge $`j_1`$ ends at vertex $`2`$. We still have $`2r-2`$ edges starting from the vertices $`(2,\mathrm{},r)`$ to draw. Let $`a`$ be the number of edges ending at vertex $`R`$ and let $`b`$ be the total number of edges ending at the vertices $`(1,\mathrm{},r)`$. We have $`2r-1=a+b`$. Since each vertex in $`(1,\mathrm{},r)`$ can receive at most one edge, we have that $`b\le r`$ and it follows that $`a\ge r-1`$. If $`a>r-1`$, there would be parallel multiple edges between at least one of the vertices $`(2,\mathrm{},r)`$ and the vertex $`R`$, which is excluded. Hence the vertex $`R`$ must receive $`r-1`$ edges. Clearly every such edge must start at one of the vertices $`(2,\mathrm{},r)`$. The other $`r-1`$ edges must end at the vertices $`(1,3,\mathrm{},r)`$. Thus, up to an isomorphism, we find that there is only the graph $`\mathrm{\Gamma }_1^{(r)}`$ in Fig. 4 for this case. The preceding lemma tells us that graphs contributing to $`C_r^K(X,X^k)`$ must be of the form: $$\mathrm{\Gamma }_1^{(a)}\cup \mathrm{\Gamma }_2^{(b_1)}\cup \mathrm{}\cup \mathrm{\Gamma }_2^{(b_s)},$$ with $`a+b_1+\mathrm{}+b_s=r`$. Notice that there can be only one graph of the type $`\mathrm{\Gamma }_1^{(a)}`$, since the vertex $`L`$ can receive only one edge. Quite a bit of simplification is allowed by ###### Lemma 6 For $`r\ge 2`$, the weight of the graph $`\mathrm{\Gamma }_2^{(r)}`$ in Fig. 4 vanishes. Proof. 
The form $`\bigwedge _{1\le k\le r}d\varphi (z_k,1)\wedge d\varphi (z_k,z_{k+1})`$, where $`z_{r+1}\equiv z_1`$, is $`0`$. This easily follows from a simple recurrence using explicit expressions for the forms $`d\varphi (z_i,z_j)`$. When they appear alone, the graphs $`\mathrm{\Gamma }_2^{(r)}`$ constitute an example of what were called “bad” graphs in Sect. 2. ###### Lemma 7 For $`r\ge 2`$, up to an isomorphism, the only graph contributing to $`C_r^K(X,X^k)`$ is the graph $`\mathrm{\Gamma }_1^{(r)}`$ in Fig. 4. The associated bidifferential operator has constant coefficients and is given by: $$_{\mathrm{\Gamma }_1^{(r)}}(f,g)=\underset{1\le i_{}\le d}{\sum }\text{Tr}(ad_{x^{i_1}}\mathrm{}ad_{x^{i_r}})_{i_1}f_{i_2\mathrm{}i_r}g,r\ge 2.$$ (16) Proof. The first statement follows directly from Lemmas 4, 5, and 6. The bidifferential operator for the graph $`\mathrm{\Gamma }_1^{(r)}`$ is $$_{\mathrm{\Gamma }_1^{(r)}}(f,g)=\underset{1\le i_{},j_{}\le d}{\sum }_{j_r}\mathrm{\Pi }^{i_1j_1}_{j_1}\mathrm{\Pi }^{i_2j_2}\mathrm{}_{j_{r-1}}\mathrm{\Pi }^{i_rj_r}_{i_1}f_{i_2\mathrm{}i_r}g,$$ clearly it has constant coefficients and, using the expression (10) for $`\mathrm{\Pi }`$, we see that the previous equation can be written as a trace of adjoint maps. The computation of the weights of the graphs $`\mathrm{\Gamma }_1^{(r)}`$ is a delicate question. The presence of cycles (wheels) does not allow us to derive a simple recurrence relation among the weights. A direct calculation for $`\mathrm{\Gamma }_1^{(2)}`$ using residues gives a weight equal to $`1/48`$, but this method becomes impractical for $`r\ge 3`$. The graphs isomorphic to $`\mathrm{\Gamma }_1^{(r)}`$ are obtained by permuting the vertices $`(1,\mathrm{},r)`$ and exchanging the edges $`\{i_k,j_k\}\mapsto \{j_k,i_k\}`$ for $`1\le k\le r`$, thus we get a symmetry factor $`r!2^r`$. Hence we have $`C_r^K(X,X^k)=2^rr!w(\mathrm{\Gamma }_1^{(r)})_{\mathrm{\Gamma }_1^{(r)}}(X,X^k)`$. 
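For $`r=2`$ the constant coefficients $`\text{Tr}(ad_{x^{i_1}}ad_{x^{i_2}})`$ appearing in Eq. (16) are just the Killing form of $`𝔤`$. A short numeric sketch (the choice $`𝔤=so(3)`$ below is an assumption made only for illustration) computes these coefficients directly from the structure constants:

```python
import numpy as np

d = 3
# Structure constants of so(3): [e_i, e_j] = sum_k C[i, j, k] e_k,
# with C[i, j, k] the Levi-Civita symbol.
C = np.zeros((d, d, d))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    C[i, j, k] = 1.0
    C[j, i, k] = -1.0

# Adjoint matrices: (ad_{e_i})[k, j] = C[i, j, k].
ad = np.array([C[i].T for i in range(d)])

# r = 2 coefficients of Eq. (16): K[i, j] = Tr(ad_{e_i} ad_{e_j}),
# i.e. the Killing form; for so(3) this is -2 times the identity.
K = np.array([[np.trace(ad[i] @ ad[j]) for j in range(d)]
              for i in range(d)])
print(K)
```

With these coefficients the $`r=2`$ operator of Eq. (16) reduces, for $`so(3)`$, to $`-2`$ times the Euclidean pairing of the gradients of $`f`$ and $`g`$.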
###### Theorem 3 On any finite-dimensional Lie algebra, the equivalence $`\rho `$ between the Kontsevich and Gutt star-products: $`\rho (f_{\mathrm{}}^Gg)=\rho (f)_{\mathrm{}}^K\rho (g)`$, is given by $$\rho =\mathrm{exp}\left(\underset{r\ge 2}{\sum }\mathrm{}^r2^r(r-1)!w(\mathrm{\Gamma }_1^{(r)})D_r\right),$$ (17) where the $`D_r`$’s are differential operators with constant coefficients: $$D_r=\underset{1\le i_{}\le d}{\sum }\text{Tr}(ad_{x^{i_1}}\mathrm{}ad_{x^{i_r}})_{i_1\mathrm{}i_r}.$$ Proof. Recall that $`\rho `$ is defined by $`\rho (X^k)=X^{_{\mathrm{}}^Kk}`$, $`X\in \text{Lin}(𝔤^{})`$, $`k\ge 0`$. It was shown that the $`\rho _r`$’s in $`\rho =I+\sum _{r\ge 1}\mathrm{}^r\rho _r`$ are differential operators. Here we have only to solve the recurrence relation for $`\rho _r`$ appearing in the proof of Theorem 2: $$\delta \rho _r(X,X^{k-1})=C_r^K(X,X^{k-1})+\underset{\genfrac{}{}{0pt}{}{a+b=r}{a,b\ge 1}}{\sum }C_a^K(X,\rho _b(X^{k-1})),$$ (18) where $`X\in \text{Lin}(𝔤^{}),k\ge 1,r\ge 1`$. According to Lemma 1, there exist differential operators $`\eta _r`$ such that $`C_r^K(X,X^k)=\delta \eta _r(X,X^k)`$. From Lemma 7 it follows that $$\eta _r=2^r(r-1)!w(\mathrm{\Gamma }_1^{(r)})\underset{1\le i_{}\le d}{\sum }\text{Tr}(ad_{x^{i_1}}\mathrm{}ad_{x^{i_r}})_{i_1\mathrm{}i_r},r\ge 2.$$ For each $`r\ge 2`$, $`\eta _r`$ is a differential operator with constant coefficients and is homogeneous of degree $`r`$ in the derivatives. To a differential operator $`\eta `$ on $`𝔤^{}`$ with constant coefficients we can associate a polynomial $`\widehat{\eta }`$ on $`𝔤\simeq \text{Lin}(𝔤^{})`$. Here we have $`\widehat{\eta }_r(X)=2^r(r-1)!w(\mathrm{\Gamma }_1^{(r)})\text{Tr}((ad_X)^r)`$ and one can check that $$\delta \eta _r(X,X^{k-1})=\frac{r}{k}\eta _r(X^k)=\frac{r}{k}\frac{k!}{(k-r)!}\widehat{\eta }_r(X)X^{k-r}.$$ (19) The preceding implies that the $`\rho _r`$’s, $`r\ge 1`$, have constant coefficients and are homogeneous of degree $`r`$. We have $`\rho _1=0`$ and a recurrence on $`r`$ in Eq. (18) shows the property for all of the $`\rho _r`$’s. Using Eq. (19) we can express Eq. 
(18) in terms of the polynomials $`\widehat{\rho }_r`$ and $`\widehat{\eta }_r`$ and find that: $$\widehat{\rho }_r(X)=\widehat{\eta }_r(X)+\frac{1}{r}\underset{\genfrac{}{}{0pt}{}{a+b=r}{a,b\ge 1}}{\sum }a\widehat{\eta }_a(X)\widehat{\rho }_b(X).$$ By defining $`\widehat{\eta }_0(X)`$ to be identically equal to zero and $`\widehat{\rho }_0(X)`$ to be $`1`$, we can rewrite the previous equation as $$r\widehat{\rho }_r(X)=\underset{\genfrac{}{}{0pt}{}{a+b=r}{a,b\ge 0}}{\sum }a\widehat{\eta }_a(X)\widehat{\rho }_b(X),$$ (20) then by considering the formal series $`\widehat{\rho }(X)\equiv I+\sum _{r\ge 1}\mathrm{}^r\widehat{\rho }_r(X)`$ and $`\widehat{\eta }(X)\equiv \sum _{r\ge 2}\mathrm{}^r\widehat{\eta }_r(X)`$ (recall that $`\eta _1=0`$), we see that Eq. (20) simply states that $`\widehat{\rho }^{}(X)=\widehat{\eta }^{}(X)\widehat{\rho }(X)`$ where the prime denotes formal derivative with respect to $`\mathrm{}`$. Thus $`\widehat{\rho }(X)=\mathrm{exp}(\widehat{\eta }(X))`$ and Eq. (17) follows. For a nilpotent Lie algebra, all of the operators $`D_r`$ vanish. Hence we deduce the known result: ###### Corollary 2 For a nilpotent Lie algebra, the Kontsevich star-product coincides with the Gutt star-product. ### 4.1 Remarks The equivalence between the Kontsevich and Gutt star-products shows us that, in the linear Poisson case, graphs with cycles play no role with respect to the associativity of the product. Here the contribution of these graphs is completely absorbed into the equivalence operator. In other words: the weights of the graphs $`\mathrm{\Gamma }_1^{(r)}`$ can be chosen arbitrarily and they do not affect the associativity of the star-product. We suspect that the situation described above is the general one, i.e., for $`^d`$ endowed with any Poisson structure $`\pi `$, it would be possible to get a new star-product by removing graphs with cycles in Kontsevich’s construction. 
We conjecture that the Weyl star-product associated with the Kontsevich star-product $`_{\mathrm{}}^K`$ on $`(^d,\pi )`$ contains no cycles, and that it is obtained from $`_{\mathrm{}}^K`$ by ignoring the graphs with cycles. Acknowledgements. Discussions with M. Flato and D. Sternheimer were at the origin of this paper and I am most grateful to both of them for remarks and encouragement. Most of the work presented here was done while the author was visiting RIMS with a JSPS grant, and it is a pleasure to thank Prof. I. Ojima for his warmest hospitality.
# The fall and rise of V854 Centauri: long-term ultraviolet spectroscopy of a highly-active R Coronae Borealis star ## 1 Introduction The UV spectrum of an RCB star at maximum light closely resembles the UV spectra of late F-type supergiants. The most prominent features of the spectra are absorption lines of Mg II $`\lambda `$2800 and Fe II multiplets around 2400 and 2600 Å. The low resolution of most of the International Ultraviolet Explorer (IUE ) spectra make line identifications difficult, particularly in the early-decline spectrum when many blended emission lines are present. Evans et al. (1985) attempted to identify the UV lines in an RY Sgr decline spectrum. Clayton et al. (1992a) identified many lines in the 1991 decline of V854 Cen. Holm et al. (1987) noted the similarity between the solar chromospheric spectrum and emission seen in the 1983 decline of R CrB. Table 3 of Clayton et al. (1992b) contains a line list prepared by comparing the decline emission spectra of RY Sgr, R CrB and V854 Cen. Clayton et al. (1992b) summarized all previous spectroscopic work in the UV and visible on RCB stars, covering 10 declines of 3 RCB stars; R CrB, RY Sgr and V854 Cen. Most studies have involved observations of a small portion of a decline. In the visible spectral region, only the 1967 decline of RY Sgr (Alexander et al. 1972) and the 1988 decline of R CrB (Cottrell et al. 1990) had good coverage from early in the decline until the return to maximum light. Prior to the IUE observations reported in this paper, only the 1982–1983 and 1990–1991 declines of RY Sgr had reasonable coverage in the UV (Clayton et al. 1992b). The general behavior of the emission spectrum in RCB declines is well known. As the photospheric light is extinguished by the forming dust cloud, a rich narrow-line emission spectrum appears. 
In the visible, this spectrum consists of many lines of neutral and singly ionized metals including Mg I, Si I, Ca I, Sc II, Ti II, V II, Cr II, Mn I, Fe I, Fe II, Sr II, Y II, La II, and Ba II (Alexander et al. 1972; Payne-Gaposchkin 1963). Most of the lines in this spectrum, referred to as E1 (Alexander et al. 1972), are short-lived. Within a few days or weeks, most of these lines have faded and are replaced by a simpler broad-line (100–200 km s<sup>-1</sup>) spectrum dominated by Ca II H and K, and Na I D. Some of the early-decline spectral lines remain strong for an extended period of time. These lines, also narrow and referred to as E2, are primarily multiplets of Sc II and Ti II. In particular, the Sc II (7) $`\lambda `$4246 line remains strong. The E2 lines are primarily low excitation. There are many C I absorption lines which fill in but never go into emission (Alexander et al. 1972). The Balmer lines, which are typically very weak due to the hydrogen deficiency in these stars, do not go into emission. The exception is V854 Cen, which is much less hydrogen-deficient than the other RCB stars. V854 Cen shows strong Balmer line absorption at maximum light and emission in decline (Kilkenny & Marang 1989; Whitney et al. 1992). The late-decline spectrum is dominated by 5 strong broad lines, Ca II H and K, the Na I D lines and a line at 3888 Å that may be He I. The broad-line (BL) emission spectrum remains visible until the star begins to return to maximum light and the photospheric continuum regains dominance. The UV spectrum undergoes a very similar evolution to that in the optical (Clayton et al. 1992b). However, because all of the IUE observations during declines were made at low resolution, there is no information on the width of the emission lines. The UV spectral evolution is most clearly seen in the 1991 decline of V854 Cen (Clayton et al. 1992a) but can also be seen in the other UV declines. 
The very early-decline UV spectrum consists of blends of many emission lines, primarily multiplets of Fe II which make up a pseudo-continuum. The Mg II $`\lambda `$2800 doublet is present but not yet strong. The strong apparent absorption at 2650 $`\mathrm{\AA }`$ is probably an absence of emission similar to that seen in the solar chromosphere (Holm et al. 1987). The actual photospheric continuum is the bottom of this apparent absorption feature. The V854 Cen spectrum is characterized by strong C II\] $`\lambda `$2325 emission (Clayton et al. 1992a). With time, the early-decline spectrum begins to fade and be replaced by the late-decline spectrum. In the transition between these two spectral phases, there is still much blended emission but Mg II $`\lambda `$2800, Mg I $`\lambda `$2852 and some of the Fe II lines have started to become relatively stronger. The late-decline spectrum is characterized by blended emission from multiplets of Fe II (2) $`\lambda `$2400, Fe II (1) $`\lambda `$2600, Fe II (62, 63) $`\lambda `$2750, as well as from Mg II and Mg I. In addition, V854 Cen shows C I and C II\] emission which is generally not seen in the other stars. Emission at C II $`\lambda `$1335 is visible at maximum light in R CrB and RY Sgr (Holm et al. 1987; Holm & Wu 1982). Rao, Nandy & Bappu (1981) report that emission is visible in Mg II in R CrB at maximum light. The data presented here represent the most extensive coverage of an RCB star ever obtained in the UV during declines. 54 LWP low-resolution, 2 LWP high-resolution, and 13 SWP low-resolution spectra were obtained from the archive. Nearly half of these spectra were obtained when V854 Cen was 3 or more magnitudes below maximum light. ## 2 Observations and data reduction SWP and LWP spectra of V854 Cen were obtained with the IUE satellite during 1991–1994. These are listed in Table 1, along with the FES and estimated visual magnitude of the star at the time of the IUE observation. 
Most of these data are large-aperture LWP and SWP low-resolution spectra. All these files have been reprocessed with NEWSIPS. Clayton et al. (1993) reported large-scale changes in UV line profiles in V854 Cen that appeared to be phased to the 43.2-d period of the star (Lawson et al. 1992). The NEWSIPS (and INES; Schartel & Skillen 1998) reduction of the IUE spectra shows no such effects. The line profile changes observed were likely an artifact of the previous reduction due to the low signal-to-noise, and nature (the spectra consist of weak emission lines with weak or absent stellar continuum) of these data. The light curve of V854 Cen covering the interval 1987–1998, starting from near the discovery of the RCB-nature of the star (McNaught & Dawes 1986), is shown in Fig. 1. The star has been almost continuously monitored visually by one of us (AFJ) since discovery. Fig. 1 shows over 1600 visual estimates made with a 0.3-m telescope; the estimates obtained on average every 2.5 d. Sterken & Jones (1997) discuss the visual observing procedure. The uncertainty in the visual estimates is, at best, 0.1 mag, but can rise to 0.3–0.5 mag when the star is varying rapidly in brightness or color. Also, the visual estimates have an effective faint limit of $`V13.5`$. Table 2 lists 323 sets of UBVR<sub>c</sub>I<sub>c</sub> photometry of V854 Cen obtained with the 0.5-m telescope at the South African Astronomical Observatory during 1989–1998. The measurements of V854 Cen were tied to observations of E-region photometric standards, and most of the $`V`$ magnitudes and colors have 1-$`\sigma `$ uncertainties of $`<`$ 0.01 mag. Measurements of lower quality (normally when the star is faint) are given to correspondingly lower precision in Table 2. The $`V`$ light curve is shown in Fig. 1 and is discussed in Section 3.1; the color curves are shown in Fig. 2 and are discussed in Section 3.2. Figs 1 and 2 also show decline onset-times used to improve the Lawson et al. 
(1992) decline ephemeris of V854 Cen, which we discuss in detail in Section 4. Fig. 3 shows the 1991–1994 photometry and visual estimates on an expanded scale. Note the generally good correspondence between the photoelectric and visual data, where these overlap. Times of the IUE LWP and SWP observations are indicated in both Figs 1 and 3. ## 3 Description of the observations ### 3.1 Linking the light curve and the IUE spectra The discovery decline (1987; commencing near JD6970; where Julian Dates are given as the difference JD–2440000) of V854 Centauri (then known as NSV 6708) was only observed visually. The 1988 decline (JD7400) was observed by Kilkenny & Marang (1989) and Lawson & Cottrell (1989), who obtained UBVRIJHKL photometry and visual spectroscopy. The 1989–1991 light curve has been discussed in detail by Lawson et al. (1992). The 1991 decline appeared to consist of three separate events (near JD8310, 8350 and 8395) with respective minima of $`V`$ 11, 14 and 15. The first series of IUE spectra (1991; JD8335–8500) was acquired during the rise from the first fade, soon after the second minimum and throughout the deep third minimum. The final 1991 IUE spectra were made at $`V`$ 10.0 as the star began to recover towards maximum light. The star recovered to maximum light ($`V`$ = 7.3) only briefly near JD8610, before fading to $`V`$ = 13.5 by JD8750. Structure in the light curve is apparent during the initial decline (Fig. 3), with two partial recoveries near JD8700 and JD8730, then again at minimum near JD8810. The 1992 series of IUE spectra follow the decline from JD8650 ($`V`$ = 7.7) through JD8868 ($`V`$ 13). The star experienced a prolonged minimum until JD8980 (1993 January). There are no photometric measurements from JD8850 until JD9030, by which time the star was $`V`$ = 9.0. The few visual estimates made between JD8850–8980 indicate that the star was highly variable and briefly reached $`V`$ 10 near JD8960. 
There were no UV spectra during this period. The star regained maximum light near JD9100. The rise to maximum, and the time at maximum light, was well monitored in the UV; the 1993 series of IUE spectra consists of 27 observations made between JD9009–9217. High-resolution LWP spectra were obtained on JD9093 and JD9140 when V854 Cen was at, or near, maximum light (Lawson et al., in preparation). 1993 was the only year in the data set not characterised by the onset of a deep decline; however, the star experienced a relatively long ($``$ 200 d) low-amplitude decline from JD9180–9350 that was unique in the 12-yr light curve. The early-1994 light curve was characterised by the steepest decline in our data set. The $`V`$ magnitude decreased from 7.5 (at JD9430) to 13 in $``$ 20 d. 11 IUE spectra were obtained during this time; from JD9427–9456. V854 Cen remained at minimum for $``$ 30 d before rising to $`V`$ = 9 near JD9530. The star subsequently faded again; slowly to $`V`$ = 10 near JD9600, then rapidly to $`V`$ = 13.5 near JD9620. The final 6 IUE spectra were obtained from JD9508–9539. The remainder of the light curve was characterised by a major decline commencing in 1994 November (JD9860) with the star not fully regaining maximum light until mid-1997 (near JD10600). Short-duration fades were seen near JD10345 and JD10775. The star entered a deep decline on JD10855 (1998 February). ### 3.2 The color curves In Fig. 2 we present the color curves of V854 Cen from 1989–1998 (JD7500–11100). The colors during 1989–91 have been discussed by Lawson et al. (1992). The longer-term trends in the colors are similar to those already reported for this star and other RCB stars. For instance, in the 1990, 1991 and 1992 decline events, while the ($`U-B`$) decreased, the other colors reddened and varied in sympathy with the $`V`$ curve (Figs 1 and 3). 
This effect is understood as the ($`U-B`$) index being ‘driven’ by the appearance of the emission line region, which becomes prominent as the ejected dust cloud obscures the photosphere. Cottrell et al. (1990) observed that the ($`U-B`$) color turned blueward upon the appearance of the E1 emission spectrum during the 1988 decline of R CrB. Sometimes both the ($`U-B`$) and ($`B-V`$) colors decrease during the early stages of the decline before eventually reddening. Cottrell et al. (1990) termed these events ‘blue’ declines. The 1994 and 1998 declines showed significant variations in the ($`U-B`$) color. The former decline is characterized by all the colors displaying blueward trends which correlate with the extreme nature of the event, i.e., a rapid decrease in visual light to $`V`$ 14 in $``$ 15 d. Presumably the photosphere was obscured rapidly without significant obscuration of the emission line region. On JD9440, 10 d after the decline onset when the star was $`V`$ = 13.4, the ($`U-B`$) color was 0.9 mag bluer (at –0.5) than at maximum (0.4) and the ($`B-V`$) color was 0.5 mag bluer (0.0 cf. 0.5 at maximum). The next photometry was obtained at JD9485, by which time the star had recovered to $`V`$ 12. The color behavior during the 1998 minimum differed in that the ($`U-B`$) and ($`B-V`$) colors became bluer during the decline minimum, at a time when the ($`V-R`$) and ($`V-I`$) colors were rapidly reddening. The ($`U-B`$) color peaked at –0.5 on JD10947, when the star was $`V`$ = 13.5; 92 d after the decline onset at JD10855. Extreme color variations at minimum are probably due to the emergence of the optical BL spectrum (lines such as Ca H and K and Na D and broad continuum emission; see, e.g. Cottrell et al. 1990, figs. 2–4 and Clayton et al. 1993, fig. 2) as well as optical depth variations in the dust and coverage of the photosphere. However, the decline is not a ‘blue’ decline following the Cottrell et al. (1990) description. Cottrell et al. 
(1990) also describe ‘red’ declines, where both the photospheric and chromospheric fluxes are simultaneously reduced. This results in the color indices increasing, i.e. reddening. The 1992 decline appeared to show this trend. The initial behavior of the colors during the 1998 decline was also redward.

### 3.3 The UV spectrum evolution

A description of the UV spectrum of V854 Cen at maximum and minimum light was given by Clayton et al. (1992b). Lines of interest in the SWP and LWP spectra include C II $`\lambda `$1335, C III\] $`\lambda `$1909 and C II\] $`\lambda `$2325, Fe II multiplets at $`\lambda `$2400, 2600 and 2750, Mg II $`\lambda `$2800, Mg I $`\lambda `$2852 and C I $`\lambda `$2965. Some of these are known to vary in strength as V854 Cen goes into a decline. Gross variations in the appearance of the SWP and LWP spectra are shown in Figs 4 and 5, where we show these spectral regions at maximum and minimum light. The NEWSIPS reduction has revealed the C IV\] $`\lambda `$1550 transition-region line for the first time in the spectrum of V854 Cen; this is the only detection of the line in an RCB star. IUE spectra of RY Sgr and R CrB of similar signal-to-noise show no feature at this wavelength. Clayton et al. (1999) observed RY Sgr with STIS in the far-UV ($`\lambda \lambda `$1140–1740 Å) and found strong C II $`\lambda `$1335 and Cl I $`\lambda `$1351 emission, and possibly fluoresced CO emission pumped by C II $`\lambda `$1335. There was no indication of C IV\] $`\lambda `$1550 in the STIS spectrum of RY Sgr. The acquisition of UV data over an extended time span such as that covered by these observations may uncover long-term trends in the UV spectral evolution, particularly the quantitative changes in the strengths of the lines during declines. The long series of UV decline spectra of V854 Cen is unique for an RCB star, and is unlikely to be surpassed by current platforms such as HST.
Only a few measurements of ultraviolet emission-line strengths have been published. Herbig (1949) reported that the strength of the Ca II H and K lines of R CrB peaked as the star went into a decline and subsequently faded to one-fifth of their original strength. In contrast, the C II $`\lambda `$1335 line in the same object was found to remain roughly constant during the 1983 decline, at the value measured at maximum light (Holm et al. 1987). Clayton et al. (1992b) provided further data concerning the strength of the Mg II emission line for the 1983 and 1988 declines of R CrB, the 1982 and 1990 declines of RY Sgr, and the early-1991 decline of V854 Cen. They found in R CrB that the line peak flux remained constant for 200 d into a decline, while in RY Sgr and V854 Cen the emission strength appeared to decrease after $`\sim `$ 100 d in a manner similar to that observed by Herbig (1949) for the Ca II H and K lines in R CrB. We have measured the peak line strengths in the IUE spectra of V854 Cen. Over the course of these observations (1991–1994) the C II $`\lambda `$1335, C IV\] $`\lambda `$1550, C III\] $`\lambda `$1909 and C II\] $`\lambda `$2325 lines have average strengths of $`1\times 10^{-14}`$, $`7\times 10^{-15}`$, $`1\times 10^{-14}`$ and $`4\times 10^{-14}`$, respectively, as seen at the IUE resolution (units are erg s<sup>-1</sup> cm<sup>-2</sup> Å<sup>-1</sup>). The Fe II $`\lambda `$2600, 2750 lines were at their maximum strengths during 1991 ($`8\times 10^{-15}`$ and $`5\times 10^{-15}`$, respectively). Subsequently, throughout the data set, they were generally weaker than these values. The Mg I and Mg II lines, which were generally present during all minima, had respective maximum fluxes in 1991 of $`1.5\times 10^{-14}`$ and $`5\times 10^{-15}`$. Finally, C I $`\lambda `$2965 had a similar maximum peak flux during 1991, 1992 and 1994 of $`4\times 10^{-15}`$.
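The peak-strength measurements quoted above can be sketched in a few lines. This is a minimal illustration, not the actual reduction pipeline: the ±10 Å search window and the function name are our assumptions, and real IUE work would also account for the instrumental resolution.

```python
import numpy as np

# Rest wavelengths (Angstroms) of the key emission features discussed
# in the text. The +/- 10 A search window is an assumption.
LINES = {
    "C II 1335": 1335.0,
    "C IV] 1550": 1550.0,
    "C III] 1909": 1909.0,
    "C II] 2325": 2325.0,
}

def peak_line_strength(wave, flux, line_wave, half_window=10.0):
    """Return the peak flux density (erg s^-1 cm^-2 A^-1) within
    +/- half_window Angstroms of the nominal line wavelength."""
    wave = np.asarray(wave, dtype=float)
    flux = np.asarray(flux, dtype=float)
    mask = np.abs(wave - line_wave) <= half_window
    if not mask.any():
        raise ValueError("line lies outside the spectral range")
    return float(flux[mask].max())
```

Applied to an SWP spectrum, such a routine would return values directly comparable to the average strengths quoted above (e.g. $`\sim 10^{-14}`$ for C II $`\lambda `$1335).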
We have also measured the integrated fluxes (line intensities) of 9 key emission features in the SWP and LWP spectra, except where the rising background continuum (at or near maximum light in the SWP spectra, and always present to some extent in the LWP spectra) made continuum-subtracted measurements unreliable. The line intensities of the C II $`\lambda `$1335, C IV\] $`\lambda `$1550 and C III\] $`\lambda `$1909 lines (in the SWP spectra), and the C II\] $`\lambda `$2325, Fe II $`\lambda `$2400, 2600, 2750, Mg II $`\lambda `$2800, Mg I $`\lambda `$2852 and C I $`\lambda `$2965 lines (in the LWP spectra) are shown in Fig. 3 (units are erg s<sup>-1</sup> cm<sup>-2</sup>). The uncertainty of a typical measurement is 10–20 percent, arising from the choice of continuum placement and the quality of the flux calibration. Importantly, the trends seen in Fig. 3 agree with the visual assessment of the temporal evolution of the spectral features. Only large-aperture LWP spectra were measured, because of the uncertain photometric corrections needed for the small-aperture observations. We have compared a number of SWP and LWP spectra extracted using both NEWSIPS and INES (Schartel & Skillen 1998) and find minimal differences in the appearance of the spectra and the emission line fluxes between the two reductions. All the measured features are blends, and the intensities plotted in Fig. 3 represent the integrated flux of the blend, e.g. Fe II $`\lambda `$2600 consists of several weak Fe II lines with wavelengths ranging from 2586 to 2631 Å (see Fig. 5; some of these lines are resolved); Mg II $`\lambda `$2800 consists of a doublet at $`\lambda `$2796, 2803. Wu et al. (1992; see table 1.1) identify many of these blends. Clayton et al. (1992b; see table 3) list the presence of these lines across a number of declines for several RCB stars.
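The continuum-subtracted line intensities described above can be sketched numerically. The sideband width and the simple linear continuum below are illustrative assumptions, not the exact procedure applied to the IUE spectra.

```python
import numpy as np

def integrated_line_flux(wave, flux, line_lo, line_hi, cont_width=15.0):
    """Integrated line flux (erg s^-1 cm^-2) above a linear continuum.

    A straight-line continuum is fitted to sidebands of width cont_width
    on either side of the [line_lo, line_hi] window; the continuum-
    subtracted flux is then integrated over the window.
    """
    wave = np.asarray(wave, dtype=float)
    flux = np.asarray(flux, dtype=float)
    side = (((wave >= line_lo - cont_width) & (wave < line_lo)) |
            ((wave > line_hi) & (wave <= line_hi + cont_width)))
    slope, intercept = np.polyfit(wave[side], flux[side], 1)
    in_line = (wave >= line_lo) & (wave <= line_hi)
    excess = flux[in_line] - (slope * wave[in_line] + intercept)
    w = wave[in_line]
    # Trapezoidal integration of the continuum-subtracted flux.
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(w)))
```

Repeating such a measurement with different sideband choices gives a direct handle on the continuum-placement contribution to the 10–20 percent uncertainty quoted above.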
There is no indication in the IUE spectra that the relative contribution of the lines composing the blends changes as the blend intensity varies with time.

#### 3.3.1 The 1991 decline

The 1991 decline (from JD8310) is characterised by 3 progressively deeper fades, with the first IUE spectrum obtained near the first local minimum. All the LWP emission lines generally weakened during the interval JD8335–8500 (see Fig. 3). Line-intensity half-lives range from $`\sim `$ 50 d for Fe II $`\lambda `$2750 and Mg II $`\lambda `$2800 to $`\sim `$ 200 d for C II\] $`\lambda `$2325 (see Fig. 6 and Table 3). Several features reached minimum strength at JD8436, in agreement with the time of the light curve minimum (Fig. 3). In particular, Fe II $`\lambda `$2600 increased in strength by a factor of $`\sim `$ 4 from minimum light ($`V`$ 15) as the star brightened to $`V`$ = 10. Lines such as Fe II $`\lambda `$2750 continued to decay during this time. In contrast to the general behavior of the other LWP spectral lines, Mg I $`\lambda `$2852 may have strengthened slightly as the star passed through the first 2 secondary minima and after the onset of the third minimum at JD8395. Thereafter, as the star faded to $`V=15`$, the line intensity also rapidly decreased. In the 1991 SWP spectra, obtained after the decline minimum, all three key lines (C II $`\lambda `$1335, C IV\] $`\lambda `$1550 and C III\] $`\lambda `$1909) appeared to remain constant.

#### 3.3.2 The 1992–1993 decline

The 1992–1993 decline was the longest in our data set, lasting $`\sim `$ 400 d. Like many other declines of V854 Cen, this event showed multiple (in this case, 2) local minima between the onset of the decline from maximum light ($`V`$ = 7.3) at JD8610 and minimum ($`V`$ 14) near JD8750. SWP and LWP spectra were obtained during the fade; other LWP spectra were obtained near minimum, when the star brightened briefly to $`V`$ = 12 near JD8810.
SWP and LWP spectra were obtained during the rise to maximum light during 1993, including 2 LWP high-resolution spectra. The C II $`\lambda `$1335 line decayed by a factor of $`\sim `$ 3 during the fade. The behavior of the line is unusual compared to the 1991 observations, which showed the line to be bright (comparable to the intensity of the line at maximum light) even though those observations were made when the star was fainter than at any time during the 1992 decline. The next SWP observations were not made until JD9009, just prior to the star beginning to brighten towards maximum light; thus no SWP spectra were obtained across the 200 d minimum. During the rise, the C II $`\lambda `$1335 line intensity was slightly ($`\sim `$ 50 percent) greater than when measured at the decline minimum. However, it remained a factor of $`\sim `$ 2 weaker than during 1991. C IV\] $`\lambda `$1550 may have been fainter during the decline minimum, based upon a single faint flux measurement at JD8787. C III\] $`\lambda `$1909 was insensitive to the decline and remained at a level similar to that seen in 1991. The LWP spectrum lines showed a variety of responses to the decline. C II\] $`\lambda `$2325 remained relatively constant and was possibly stronger than in 1991. The Fe and Mg lines were weak, but some brightened in sympathy with the 2 magnitude increase in visual flux near JD8820. (The actual rise in flux may have been somewhat greater than 2 magnitudes, since the decline minimum was not well-observed photometrically and the visual estimates are unreliable below $`V`$ 13.5.) C I $`\lambda `$2965 showed somewhat different behavior to that of the other lines. Measurements made during the final fade to minimum light in 1992 showed the line strength decreased from $`5\times 10^{-14}`$ erg s<sup>-1</sup> cm<sup>-2</sup> to unmeasurably low ($`<5\times 10^{-15}`$) between JD8720 and JD8724, and then regained its former strength by JD8792.
At the time of the local maximum near JD8820, the line appeared to decrease in strength by 30–50 percent before recovering. During the rise to maximum light during early-1993 (JD9000–) most LWP spectra are affected by the rising stellar continuum, and the line strengths are unreliable or unmeasurable. However, on JD9009, most LWP lines had brightened above the average level measured near JD8800.

#### 3.3.3 The 1994 decline

The 1994 decline was the most extreme of our data set, taking only $`\sim `$ 16 d for the star to fade from maximum light ($`V`$ 7.5) at JD9430 to minimum ($`V`$ 13.5) at JD9446. As in the previous decline, the absence of photometric data through the decline minimum did not allow an accurate determination of the amplitude of the event. Visual estimates indicated the star remained at or below $`V`$ = 13.5 for $`\sim `$ 30 d. Subsequently, the star partly recovered (to $`V`$ = 9, including a minor fade), which was then followed by another deep decline. Most of the LWP spectra acquired during 1994 were taken during the onset of the first decline, with 3 LWP and 2 SWP spectra monitoring the recovery phase of the light curve. The SWP spectra showed that C IV\] $`\lambda `$1550 and C III\] $`\lambda `$1909 remained at a strength comparable to that during the 1991 and 1992–93 declines. However, the strength of C II $`\lambda `$1335 was similar to that during the 1992–93 decline. Presumably all the LWP line strengths decreased rapidly as the optical flux of V854 Cen faded towards minimum; this behavior was only directly observed in C II\] $`\lambda `$2325. Measurement of the line at JD9437 (7 d after the decline onset) gave an intensity of $`5\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup>. By JD9449 (19 d after the decline onset) the line strength was comparable to the level recorded during the 1991 decline minimum ($`2.5\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup>). By JD9449, all other LWP spectral lines were weak.
With the possible exception of Fe II $`\lambda `$2600, all lines remained weak during the final series of IUE observations, obtained as the star rose in brightness to $`V`$ = 9 by JD9540.

### 3.4 Summary of line behavior

Trends in the line strengths presented in Fig. 3 suggest that it may be possible to group some spectral lines according to their responses to a decline event. Clayton et al. (1992b) attempted to relate changes in the UV spectrum to those seen in the visible spectrum (Alexander et al. 1972, Cottrell et al. 1990). In their scheme, the ultraviolet E1, E2 and BL spectral features roughly correspond to their visible counterparts and were defined as:

- E1 (fade in 10–30 d): many blended lines of Fe II and other ionized metals, the apparent absorption feature at 2650 Å and weak Mg II $`\lambda `$2800.
- E2 (fade in 50–150 d): most of the E1 lines have faded, leaving Fe II multiplets at $`\lambda `$2400, 2600 and 2750.
- BL (fade but never disappear): long-lasting lines of C II\] $`\lambda `$2325, Mg II $`\lambda `$2800, Mg I $`\lambda `$2852 and C I $`\lambda `$2965.

In the 1991 decline, during the JD8300–8500 interval, C II $`\lambda `$1335, C IV\] $`\lambda `$1550 and C III\] $`\lambda `$1909 are clearly BL as they show little change in strength during the decline minimum. C II\] $`\lambda `$2325 also probably originates in the BL emission region, although it does weaken by $`\sim `$ 50 percent (Fig. 6). The Fe II $`\lambda `$2750, Mg II $`\lambda `$2800, Mg I $`\lambda `$2852 and C I $`\lambda `$2965 lines all undergo significant decreases in intensity over the 170-d IUE coverage and thus are classified as E2 (Fig. 6). Mg I $`\lambda `$2852 may be stronger near JD8400, in phase with the local maximum (Fig. 3). This may indicate that its region of origin is closer to the star than is the case for the other E2 lines. The behavior of Fe II $`\lambda `$2600 is unusual as, unlike the other E2 lines, it recovers strongly after only $`\sim `$ 100 d (Fig. 3).
However, the decrease in intensity seen between JD8360–8466 is not rapid enough to classify the line as E1, and hence it is considered to be E2 (Table 3). The only E1 LWP spectrum obtained during 1991 was at JD8335. This spectrum looks similar to the early-decline spectra of RY Sgr and R CrB (Clayton et al. 1992a), as it showed a myriad of blended Fe II lines, apparent absorption at 2650 Å and Mg II $`\lambda `$2800 substantially filled-in by emission. The first two E1 features were absent in the next LWP spectrum (JD8361), with Mg II $`\lambda `$2800 being stronger (in emission) than at any other time in the data set. Clayton et al. (1992b) noted that when there are several local minima, the E1 spectrum does not generally reappear unless there is a long time between the local minima. During the 1991 decline, E1 features are only seen during the first local minimum. No IUE spectra were obtained between JD8500 (after the end of the 1991 minimum) and JD8650 (near the onset of the 1992–93 decline). During this time, the spectrum fully recovered, from one dominated by weak emission to one swamped by deep photospheric absorption. (Figs 4 and 5 show the gross changes in the SWP and LWP spectra from maximum light to the decline minimum.) Mg II $`\lambda `$2800 best demonstrates the transition (Fig. 5). Visual inspection of the spectra (where line strengths could not be reliably measured) further supports the E2-type nature of this line. The behavior of some of the lines during the 1992–93 decline shows the difficulty in trying to uniquely characterize their nature using emission-line decay times; e.g. the correlation between the light curve and the timescale for the decrease in C II $`\lambda `$1335 is classic E2 (Table 3), unlike the BL appearance of this line in 1991. Inspection of Fig. 3 confirms the BL nature of C II\] $`\lambda `$2325 and C III\] $`\lambda `$1909; C IV\] $`\lambda `$1550 is probably BL.
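The timescale arguments used above can be sketched as a simple fit-and-classify step. The exponential form and the class boundaries (taken from the fade ranges quoted in Section 3.4; the exact cut points between ranges are our assumption) are for illustration only.

```python
import numpy as np

def decay_half_life(jd, intensity):
    """Half-life (d) from a log-linear least-squares fit,
    assuming I(t) = I0 * 2**(-t / t_half)."""
    jd = np.asarray(jd, dtype=float)
    log_i = np.log(np.asarray(intensity, dtype=float))
    slope, _ = np.polyfit(jd, log_i, 1)   # ln I = ln I0 + slope * t
    if slope >= 0.0:
        return float("inf")               # not fading
    return float(np.log(2.0) / -slope)

def classify_line(t_half):
    """Rough E1/E2/BL class from the fade timescale."""
    if t_half <= 30.0:        # E1: fades in 10-30 d
        return "E1"
    if t_half <= 150.0:       # E2: fades in 50-150 d
        return "E2"
    return "BL"               # long-lasting
```

For example, the $`\sim `$ 50 d half-life measured for Fe II $`\lambda `$2750 in 1991 maps to E2 under this scheme, while a line fading on a $`\sim `$ 200 d timescale, such as C II\] $`\lambda `$2325, falls into BL.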
The timescales and appearance of the Mg lines suggest that these are E2 lines. Inspection of the LWP spectra shows that, during the multiple fade to minimum, Mg II $`\lambda `$2800 emission progressively fills-in the deep Mg II $`\lambda `$2800 absorption feature, but remains weak or absent, only becoming prominent during the decline minimum. During the rise to maximum light in 1993 it appeared as a weak emission feature on JD9009, whereas it was visible (in emission) at the bottom of a weak photospheric absorption at JD9045. Unlike the 1991 decline, the 2650 Å absorption feature was absent during this event. The behavior of the Fe II $`\lambda `$2600, 2750 and C I $`\lambda `$2965 lines (Fig. 3) was more difficult to interpret. In particular, it was difficult to reconcile their changes in strength (and timescales) with the E1/E2/BL model. The rapid decay of C I $`\lambda `$2965 at JD8720 is compatible with E1-type behavior. No other LWP lines showed this rapid response to the decline minimum. These lines showed rapid and discordant behavior during the local maximum near JD8820: Fe II $`\lambda `$2750 appeared to peak at JD8829, whereas C I $`\lambda `$2965 decreased in strength by a factor of $`\sim `$ 2 near JD8829/8835. Both lines recovered their pre-JD8829 fluxes by JD8868. The timescales of both lines are more suggestive of E1 than E2. Fe II $`\lambda `$2600 showed little or no reaction to the local maximum. Finally, measurements made during the 1994 decline presented a similar classification dilemma to that discussed above. In this event, all SWP lines remained at strengths similar to those observed during previous declines, and thus are BL, but all LWP lines faded rapidly and apparently in phase with the light curve. The decrease in the line intensity was only measured in C II\] $`\lambda `$2325 (Fig.
3 and Section 3.3.3), on a timescale (the line faded by a factor of 2 in 20 d) consistent with E1 behavior, whereas the line showed either long-E2 or BL behavior during 1991–93. By JD9449, 19 d after the decline onset, all LWP lines were faint, at levels similar to those observed during the 1991 and 1992–93 decline minima (Fig. 3). Thus we would conclude from the apparent timescales for the fading of these lines that all these features are E1, were it not for the combined photometric and spectroscopic evidence that the 1994 decline was particularly rapid compared to the other declines. The color curves (Fig. 2) indicate that the ($`U-B`$) color was 0.9 mag bluer than at maximum at JD9440, only 10 d after the decline onset – evidence that the photosphere was rapidly obscured, exposing the emission line region. The LWP spectrum obtained on JD9449 indicated that the emission line region was then obscured on a timescale of $`<`$ 9 d. From inspection of the LWP spectra, we can also estimate that it took $`<`$ 19 d after the decline onset for the 2650 Å feature to disappear and for the Mg II $`\lambda `$2800 absorption line to weaken, fill-in, and then go into emission. In summary, while the emission line behavior across several declines of V854 Cen was generally in agreement with the E1/E2/BL model, the behavior is more complex than the simple model predicts. Temporal coverage of the declines is insufficient to discern the geometry of the emission line regions and that of the eclipsing dust; e.g. the UV Mg lines were expected to behave like optical BL lines (such as Ca II H and K, and Na I D) but instead showed more-rapid activity commensurate with E1 and E2 lines. As more data are obtained on these types of events, we expect that these stars and their individual declines will show large intrinsic variation in the nature of the emitting region and the eclipsing dust cloud.

## 4 A revised pulsation-decline ephemeris for V854 Cen

Lawson et al.
(1992) discovered a link between decline onset times and the probable pulsation period of the star, with 8 decline onset times between JD6970 (1987) and JD8395 (1991) being fitted by the linear solution: JD<sub>n</sub> = 2447400.6 ($`\pm `$ 1.1) + 43.2 ($`\pm `$ 0.1) $`n`$ d, between cycle numbers $`n`$ = –10 and 23. The ephemeris fitted the onset times of the 1987 (JD6970, $`n=-10`$) and 1988 (JD7400, $`n=0`$) declines, two low-amplitude fades and one large-amplitude decline in 1989 (JD7705, 7785 and 7875; $`n`$ = 7, 9 and 11, respectively), and the triple 1991 decline (JD8310, 8350 and 8395; $`n`$ = 21, 22 and 23, respectively). The ephemeris also appeared to fit times of maxima in the light curve during 1989 that may be due to low-amplitude pulsations of the star. If this is the case, V854 Cen is similar to the RCB star RY Sgr, which has pulsation maxima and decline onset times tied to a 37.8-d period (Pugach 1977). A 43-d pulsation period for V854 Cen would be entirely consistent with the pulsation periods of other RCB stars of similar $`T_{\mathrm{eff}}`$, most of which have pulsation periods of $`\sim `$ 40 d (Lawson et al. 1990, Lawson & Cottrell 1997). With the light curve from 1987–1998 available to us (Fig. 1), we have extended the Lawson et al. decline ephemeris across the entire data set. We fit the onset times (15 epochs) of all declines from maximum light ($`V`$ 7.5; 13 epochs) within the 1987–1998 interval, irrespective of decline amplitude, together with the last 2 fades of the triple 1991 decline (the first fade occurred from maximum light and is included in the 13 epochs above; the other two fades occurred from $`V`$ 10), with the linear solution: JD<sub>n</sub> = 2447400.43 ($`\pm `$ 1.33) + 43.23 ($`\pm `$ 0.03) $`n`$ d, between cycle numbers $`n`$ = –10 and 80. The 15 epochs are indicated in Fig. 1, and some are shown in Figs 2 and 3, where the epochs fall within the interval observed photometrically.
The revised solution agrees with the Lawson et al. (1992) ephemeris to within the respective uncertainties of the two solutions. Table 4 lists the observed epochs, the epochs calculated from the ephemeris, and the observed–calculated (O–C) residuals. Figure 7 plots the O–C residuals as a function of period number, and binned as a histogram. The 1-$`\sigma `$ scatter of all 15 residuals is 3.3 d. The earlier ($`n`$ = –10 to 28) residuals, determined mainly from photoelectric measurements, have lower scatter (1-$`\sigma `$ = 2.1 d) than the later residuals ($`n`$ = 41 to 80, 1-$`\sigma `$ = 4.9 d), which are determined mainly from the visual estimates. However, all of these 1-$`\sigma `$ values are similar to the typical uncertainty of $`\pm `$ 5 d in the observed decline onset time. There is currently no evidence for higher-order, e.g. quadratic, terms in the ephemeris, as has been claimed for RY Sgr (Kilkenny 1982, Marraco & Milesi 1982). Most of the declines showed complex structure in the light curve as the star faded from maximum light, with several local maxima giving the appearance of multiple decline events. Only in the 1991 decline (commencing JD8310) did the times of the local maxima seen during the fade (JD8350, 8395) support both the Lawson et al. (1992) and revised ephemerides. Some of the structure seen during declines was approximately fitted by the revised ephemeris. In a number of declines, local maxima seen during the initial fade, and during the decline minima, gave O–C residuals of –10 to –15 d, i.e. the features had ‘maxima’ that occurred 10–15 d before the ephemeris prediction. Such features occurred during the 1992 (near JD8690, 8725, 8815), 1994 (JD9540, 9590) and 1998 (JD10885, 10930) declines. The apparent connection between these times and the ephemeris, offset by only 10–15 d, suggests they are also linked in some way to the 43.23 d periodicity.
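The linear ephemeris fit and O–C residuals described above can be reproduced schematically. The epochs below are synthetic, generated exactly from the published solution; the real fit uses the 15 observed onset epochs of Table 4.

```python
import numpy as np

def fit_ephemeris(n, jd):
    """Least-squares linear ephemeris JD_n = T0 + P * n.
    Returns (T0, P, O-C residuals in days)."""
    n = np.asarray(n, dtype=float)
    jd = np.asarray(jd, dtype=float)
    P, T0 = np.polyfit(n, jd, 1)
    return float(T0), float(P), jd - (T0 + P * n)

# Synthetic onset epochs from JD_n = 2447400.43 + 43.23 n (Section 4).
# The cycle numbers are those quoted for the 1987-1991 declines, plus
# two later cycles for illustration.
cycles = np.array([-10, 0, 7, 9, 11, 21, 22, 23, 41, 80], dtype=float)
epochs = 2447400.43 + 43.23 * cycles
```

With real onset times the standard deviation of the returned O–C array gives the 3.3 d scatter quoted above.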
## 5 Discussion

### 5.1 Dust formation in RCB stars

The traditional model for the declines in RCB stars considers the homogeneous nucleation of carbon particles in thermodynamic equilibrium, at temperatures of $`\sim `$ 1500 K and at a distance of $`20R_{\ast }`$ (see, e.g. Feast 1986). Radiation pressure slowly dissipates the dust cloud. Over the past 10 years, a wealth of observational and theoretical evidence has pointed increasingly towards dust formation in the near-star field, at distances of $`2R_{\ast }`$. For example: (i) Simultaneous optical photometry and spectroscopy indicates the dust cloud can initially obscure only part of the photosphere. The cloud can then be rapidly accelerated away from the star by radiation pressure. The photosphere is obscured on timescales of typically 10–20 d, revealing a rich emission line region of which the inner regions (the E1 spectral region) are obscured in a further 10–20 d (Cottrell et al. 1990, Lawson 1992). The timescale of the fade to minimum light of 30–50 d, followed by recovery on timescales of hundreds of days, can be modelled by radial expansion at 100–200 km s<sup>-1</sup> of an obscuring cloud from a point of close proximity to the star. The expansion velocity is consistent with that of blue-shifted absorption features seen during declines (Alexander et al. 1972, Cottrell et al. 1990, Clayton et al. 1993). (ii) The decay timescales for lines formed in the emission region suggest the E1 region has an extent of 1.5–2 $`R_{\ast }`$, the E2 region is $`\sim `$ 5 times larger, and the BL region is larger still. The available evidence suggests at least 2 temperature regimes in the emission line region: a cool ($`\sim `$ 5000 K; Clayton et al. 1992a) inner region, likely to be the site of the neutral and singly-ionized species composing the E1 and E2 spectrum, and a much hotter outer region indicated by the presence of BL lines such as C III\] $`\lambda `$1909, C IV\] $`\lambda `$1550 and He I $`\lambda `$10830.
The presence of C IV\] $`\lambda `$1550 implies a transition region with an electron temperature $`T_\mathrm{e}\sim 10^5`$ K (Jordan & Linsky 1987). (iii) The linking of decline onset times and pulsation periods in RY Sgr (Pugach 1977) and V854 Cen (Section 4) is only possible if dust formation is intimately associated with the pulsations of the star, with little phase delay between the time of maximum light on the pulsation cycle and the onset of the decline. Clayton et al. (1992b) reviewed the empirical evidence for dust formation near the stellar surface. More recently, (iv) Woitke, Goeres & Sedlmayr (1996) produced models suggesting that (pulsation-induced) shocks in the outer atmosphere of a hydrogen-deficient star might result in conditions far removed from thermodynamic equilibrium, encouraging particle nucleation. Such photospheric shocks are observed in the RCB star RY Sgr (Lawson, Cottrell & Clark 1991, Clayton et al. 1994) and may be present in other RCB stars. (v) Lawson & Cottrell (1997) showed that all well-observed RCB stars are pulsating stars, and (vi) Clayton et al. (1999) reported the probable discovery in RY Sgr of CO, critical to the Woitke et al. (1996) model. Polar molecules such as CO play a major role in gas radiative heating and cooling; in hydrogen-deficient atmospheres, CO is expected to be the most abundant polar molecule by two orders of magnitude. Adding to this evidence, our measurements of the UV emission lines in V854 Cen show some consistency with the E1/E2/BL model developed from the behavior of the optical spectrum during declines. Although we have poor sampling near the times of decline onset, the few E1 spectra obtained suggest that characteristic E1 lines decay on timescales of several tens of days (see Section 3.4). E2-region lines in 1991 decay on timescales of 50–120 d (see Section 3.3.1). BL-region lines throughout the data set generally decay on timescales of hundreds of days (e.g. C II $`\lambda `$1335).
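The timescale-to-size scaling behind points (i) and (ii) can be sketched kinematically. This is a back-of-envelope illustration only: the expansion velocity range comes from the text, but the stellar radius is an assumed typical RCB supergiant value, not a measurement from this paper.

```python
KM_PER_RSUN = 6.957e5      # km per solar radius
SEC_PER_DAY = 86400.0
R_STAR_RSUN = 85.0         # ASSUMPTION: a generic RCB supergiant radius

def region_extent_rstar(v_exp_kms, t_days, r_star_rsun=R_STAR_RSUN):
    """Radial extent ~ v * t swept out by a cloud expanding at
    v_exp_kms over t_days, expressed in stellar radii."""
    extent_km = v_exp_kms * t_days * SEC_PER_DAY
    return extent_km / (r_star_rsun * KM_PER_RSUN)
```

With v = 150 km s<sup>-1</sup>, an E1-like 20 d timescale maps to a few stellar radii and an E2-like $`\sim `$ 100 d timescale to a few tens of stellar radii — reproducing the E1 < E2 < BL ordering of region sizes discussed above, though not the exact extents, which depend on the detailed geometry.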
Some lines may be ‘super-BL’ (coronal and transition-region lines such as C III\] $`\lambda `$1909 and C IV\] $`\lambda `$1550) and remain essentially constant despite the high degree of activity seen in the V854 Cen light curve. Uniquely for an RCB star, we have analysed spectroscopy across several consecutive major declines of V854 Cen. Emission line decay timescales clearly differ between declines. This can be related to probable free parameters such as the initial size and extent of the obscuring cloud, its ejection velocity and subsequent evolution, and the axis of the cloud motion with respect to the line-of-sight (e.g. Pugach 1990). Not surprisingly, the E1/E2/BL scheme should be interpreted as a simple classification scheme for characterising the evolution of the post-decline spectrum, with decay timescales and the behavior of individual lines seen only as indicative.

### 5.2 Dust formation in other types of cool hydrogen-deficient carbon stars

Recent analysis of IUE spectra of other hydrogen-deficient carbon stars has revealed possible differences between the RCB stars and the (optically) spectroscopically-similar HdC stars (Lambert 1986). Brunner, Clayton & Ayres (1998) found no evidence of C II $`\lambda `$1335 in the HdC star HD 182040, whereas the line is present at all times in the UV spectrum of V854 Cen (Fig. 4), R CrB (Holm et al. 1987) and RY Sgr (Clayton et al. 1999). The line may also be absent in XX Cam (Brunner et al. 1998), which has been classified as both an RCB and an HdC star, but which, like HD 182040, has no bright IR excess. Absent or weak C II $`\lambda `$1335 may be an important discriminator between RCB and HdC stars. Lawson & Cottrell (1997) found that the HdC stars are either low-amplitude pulsators, or that they are not pulsating above a 1-$`\sigma `$ radial velocity limit of $`\sim `$ 1.5 km s<sup>-1</sup>.
Only 1 of the 5 HdC stars measured by Lawson & Cottrell, HD 175893, was found to have photometric and radial velocity amplitudes comparable to the RCB stars. HD 175893, along with HD 173409, was suspected of having a weak (compared to RCB stars) infrared excess in IRAS 12-µm photometry (Walker 1986). ISO 12- and 25-µm photometry confirms that only the excess in HD 175893 is real (Lawson et al., in preparation). Pulsations in these types of stars seem responsible for encouraging mass-loss in the form of high-velocity outflows and dust, but only if the radial velocity amplitude exceeds 10–15 km s<sup>-1</sup> peak-to-peak. The dust and gas may be linked by the presence of the 200–400 km s<sup>-1</sup> blue-shifted absorption seen during the declines (Alexander et al. 1972, Cottrell et al. 1990, Clayton et al. 1993); this may be gas being dragged away from the star by the ejected dust cloud. Gas density enhancement in the BL region, due to enhanced mass-loss in RCB stars compared to HdC stars, may be responsible for the strong C II $`\lambda `$1335 emission.

### 5.3 V854 Cen as a pulsating star?

All well-observed RCB stars are pulsating stars. Most RCB stars with $`T_{\mathrm{eff}}`$ similar to V854 Cen (such as RY Sgr and R CrB; $`T_{\mathrm{eff}}`$ 7000 K) have a radial velocity-to-light (RV/$`V`$) amplitude ratio of $`\sim `$ 50 km s<sup>-1</sup> mag<sup>-1</sup>, similar to radially-pulsating Cepheids (Lawson & Cottrell 1997). Typical amplitudes (peak-to-peak) are 10–15 km s<sup>-1</sup> in radial velocity and 0.2–0.3 mag in $`V`$, although RY Sgr is more active (30–40 km s<sup>-1</sup> and 0.5–0.7 mag, respectively). So far, it has not been possible to reliably measure pulsation amplitudes for V854 Cen, due to the extreme nature of the light curve of the star and the likely low amplitude of the pulsations. Some observations at maximum in the 1989–1991 light curve (Lawson et al.
1992) showed semi-regular variations on timescales of $`\sim `$ 40 d and with amplitudes of 0.1–0.2 mag that are probably due to radial pulsations. Lawson & Cottrell (1989) did not detect radial velocity variations in a short series of measurements made in 1988, but the individual measurements had 1-$`\sigma `$ uncertainties of 3–5 km s<sup>-1</sup>. If RV/$`V\sim 50`$ for V854 Cen, as for other RCB stars, then the radial velocity amplitude is expected to be only $`\sim `$ 10 km s<sup>-1</sup>. The onset times of declines in V854 Cen are satisfied by a 43.23 d period, which is probably the pulsation period of the star. Other RCB stars of similar $`T_{\mathrm{eff}}`$ have similar periods (Lawson et al. 1990, Lawson & Cottrell 1997). It remains unresolved why V854 Cen is currently more active than other RCB stars. The greater hydrogen abundance of V854 Cen, compared to any other known RCB star, may encourage dust production. However, other RCB stars are known to have experienced prolonged intervals of dust production in the past.

WAL thanks the University College ADFA Special Research Grant Scheme, and the Department of Physics and Astronomy at LSU, for financial support. WAL and MMM thank the Australian Research Council Small Grant Scheme FY97 for supporting this research. GCC was supported by NASA grant JPL 961526.
no-problem/9905/math-ph9905023.html
# The Atiyah-Singer fixed point theorem and the gauge field copy problem (This paper was published in Bol. Soc. Paran. Matem. vol. 16, 59-62, 1996.) ## 1 Introduction Gauge field theories may be defined as physical interpretations of the theory of connections in a principal fiber bundle . Some gauge fields (curvature forms) admit two or more potentials (connection forms), which are related to the fields by the field equation (structure equation). Such an ambiguity is known as the gauge field copy problem and it was discovered in 1975 by T.T. Wu and C.N. Yang . Gauge copies fall into two cases: ‘true copies’ (potentials that are not locally related by a gauge transformation) and ‘false copies’ (locally gauge equivalent potentials). Our goal, in this paper, is to establish an analytical condition for the existence of false gauge field copies. We use a result due to F.A. Doria , where a topological condition for the existence of false gauge field copies is presented. The Atiyah-Singer Fixed Point Theorem helps to make the connection between the topological condition and the analytical one. We use standard notation. For details see , , and . Let $`P(M,G)`$ be a principal fiber bundle, where $`M`$ is a finite-dimensional smooth real manifold and $`G`$ is a finite-dimensional Lie group. If we denote by $`(P,\alpha )`$ the principal fiber bundle $`P(M,G)`$ endowed with the connection-form $`\alpha `$, and by $`L`$ the field that corresponds to the potential $`A`$ associated to $`\alpha `$, then: ###### Definition 1.1 The field $`L`$ or the potential $`A`$ is reducible if the corresponding bundle $`(P,\alpha )`$ is reducible. ###### Theorem 1.1 Let $`P(M,G)`$ be as previously stated but with the extra condition that $`G`$ is semi-simple. $`L`$ has potentials that are locally related by a gauge transformation if and only if $`L`$ is reducible.
Proof: See .$`\mathrm{\square }`$ ## 2 The Atiyah-Singer Fixed Point Theorem Let $`G`$ be a compact Lie group acting on a smooth manifold $`X`$, and let $`D`$ be a $`G`$-invariant elliptic partial differential operator on $`X`$. We can now state the Atiyah-Singer fixed point theorem : ###### Theorem 2.1 The Lefschetz number $`L(g,D)`$ is related to the fixed point set $`X^g=\{x\in X;gx=x\}`$ by the formula $$L(g,D)=(-1)^m\left(\frac{ch_g(j^{\ast }\sigma (D))}{ch_g(\lambda _{-1}^gN^g\otimes 𝐂)}td(T^g\otimes 𝐂)\right)[TX^g],$$ (1) where $`m`$ is the dimension of $`X^g`$, $`j^{\ast }:K_G(TX)\to K_G(TX^g)`$ is induced by the inclusion mapping $`j:X^g\to X`$, $`\sigma (D)`$ is the symbol of $`D`$ (and so it is an element of the Grothendieck group $`K_G(TX)`$), $`N^g`$ is the normal bundle $`NX^g`$ of $`X^g`$ in $`X`$, $`T^g`$ is the tangent bundle $`TX^g`$ of $`X^g`$ in $`X`$, $`ch_g`$ is the Chern character, $`td`$ is the Todd class, $`\lambda _{-1}^g`$ is the Thom class, and C denotes the topological field of complex numbers. ## 3 An Analytical Condition For The Existence of False Gauge Field Copies Gauge fields and gauge potentials can be defined as cross-sections of vector bundles associated with the principal fiber bundle $`P(M,G)`$. The potential space (or connection space) coincides with the space of all $`C^k`$ cross-sections of the vector bundle $`E`$ of $`l(G)`$-valued 1-forms on $`M`$, where $`l(G)`$ is the group’s Lie algebra, while the field space (or curvature space) coincides with the space of all $`C^k`$ cross-sections of the vector bundle $`𝐄`$ of $`l(G)`$-valued 2-forms on $`M`$. Let $`F`$ and $`𝐅`$ be manifolds on which $`G`$ acts on the left and such that $`E=P\times _GF`$ and $`𝐄=P\times _G𝐅`$, where $`P`$ is the total space of $`P(M,G)`$. In other words, $`E`$ is the quotient space of $`P\times F`$ by the group action. Similarly, $`𝐄`$ is the quotient space of $`P\times 𝐅`$ by the action of the group $`G`$.
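As a consistency check on Theorem 2.1, consider a standard textbook example (not from this paper): for the de Rham operator and isolated fixed points, $`m=0`$, the characteristic classes in (1) are evaluated on points, and the formula collapses to the classical Lefschetz fixed point count. For a rotation $`g_\theta `$ of the two-sphere about its polar axis the fixed point set consists of the two poles, and one recovers the Euler characteristic:

```latex
% Classical reduction of (1) for isolated fixed points:
%   L(g, d_{dR}) = \sum_{p \in X^g} \operatorname{sign}\det(1 - dg|_{T_p X}).
\[
  L(g_\theta, d_{\mathrm{dR}})
  = \sum_{p \in \{N,\,S\}} \operatorname{sign}\det\!\bigl(1 - dg_\theta|_{T_p S^2}\bigr)
  = 2 = \chi(S^2),
\]
\[
  \text{since}\quad
  \det\!\bigl(1 - dg_\theta|_{T_p S^2}\bigr)
  = (1-\cos\theta)^2 + \sin^2\theta
  = 2(1-\cos\theta) > 0
  \quad (0 < \theta < 2\pi).
\]
```

This illustrates why the free-action hypothesis used below is so restrictive: the whole content of the formula lives on the fixed point set, which is empty for $`g\ne e`$ when the action is free.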
Before the statement of our main result, we must recall that a topological group $`G`$ acts freely on a topological space $`X`$ if and only if $`xg=x`$ implies that $`g=e`$, where $`x\in X`$, $`g\in G`$, and $`e`$ is the identity element in $`G`$. ###### Theorem 3.1 If $`𝒟_G:C^{\mathrm{\infty }}(P;P\times F)\to C^{\mathrm{\infty }}(P;P\times 𝐅)`$ is a $`G`$-invariant elliptic partial differential operator, then the Lefschetz number $`L(g,𝒟_G)`$ can be defined if and only if $`g=e`$. Proof: As $`G`$ acts freely on $`P`$, then $`P^e=P`$ and $`P^g=\emptyset `$ if $`g\ne e`$. Hence, $`j^{\ast }\sigma (𝒟)`$ (which is necessary to compute the Lefschetz number of $`𝒟`$) is not defined for $`g\ne e`$, since $`TP^g=\emptyset `$.$`\mathrm{\square }`$ Let us abbreviate $`L(e,𝒟_G)`$ as $`L(𝒟_G)`$. With theorem 3.1 in mind, we establish the following result: ###### Theorem 3.2 If a gauge field (a cross-section of $`𝐄`$) is associated to copied potentials that are locally gauge-equivalent, then there are: a non-trivial sub-group of $`G`$, denoted by $`G^{\prime }`$; a $`G^{\prime }`$-manifold $`P^{\prime }`$; and two $`G^{\prime }`$-vector spaces $`F^{\prime }`$ and $`𝐅^{\prime }`$ such that if there is a $`G^{\prime }`$-invariant elliptic partial differential operator $$𝒟_{G^{\prime }}:C^{\mathrm{\infty }}(P^{\prime };P^{\prime }\times F^{\prime })\to C^{\mathrm{\infty }}(P^{\prime };P^{\prime }\times 𝐅^{\prime })$$ (2) then its Lefschetz number $`L(𝒟_{G^{\prime }})`$ can be defined as a function of $`G`$-spaces. Proof: If a gauge field is associated to copied potentials that are locally gauge-equivalent, then such a field is reducible (Theorem 1.1). Therefore $`P(M,G)`$ is reducible (Definition 1.1). So, there is a non-trivial sub-group $`G^{\prime }`$ of $`G`$ and a monomorphism $`\phi :G^{\prime }\to G`$ such that a reduced principal fiber bundle $`P^{\prime }(M,G^{\prime })`$ and a reduction $`f:P^{\prime }(M,G^{\prime })\to P(M,G)`$ can be defined. So, there are induced reductions f: $`TP^{\prime }\to TP`$, and f: $`NP^{\prime }\to NP`$.
If we consider $`\phi ^{\ast }:K_G(TP)\to K_{G^{\prime }}(TP^{\prime })`$, $`\varphi ^{\ast }:K_G(N\otimes 𝐂)\to K_{G^{\prime }}(N^{\prime }\otimes 𝐂)`$, and $`\mathrm{\Phi }^{\ast }:K_G(T\otimes 𝐂)\to K_{G^{\prime }}(T^{\prime }\otimes 𝐂)`$ as monomorphisms induced by $`\phi `$, then, in accordance with theorem 2.1: $$L(𝒟_{G^{\prime }})=(-1)^m\left(\frac{ch_e(j^{\ast }\phi ^{\ast }\sigma (𝒟_G))}{ch_e(\lambda _{-1}\varphi ^{\ast }N\otimes 𝐂)}td(\mathrm{\Phi }^{\ast }T\otimes 𝐂)\right)\phi ^{\ast }[TP].$$ (3) $`\mathrm{\square }`$ Theorem 1.1 is a topological condition for false gauge field copies since it deals with the concept of reducibility to sub-groups of a topological group. Theorem 3.2 refers to the Lefschetz number of an elliptic partial differential operator, which characterizes an analytical condition. An obvious corollary may be obtained with respect to the $`G`$-signature theorem . But this is left as an exercise to the reader. Other theorems may be obtained by the use, e.g., of the Atiyah-Singer index theorem , in the sense of obtaining an analytical condition for the existence of gauge field copies. We are also working on a generalization of such ideas to the case of true gauge field copies. If we modify the geometry of an irreducible principal fiber bundle in a manner that handles true copies as false , it seems possible to state topological and analytical conditions for generic gauge field copies. ## 4 Acknowledgements This paper was partially prepared during a stay at Stanford University as a post-doctoral fellow. I would like to thank Patrick Suppes for his kind hospitality. I also thank CNPq for financial support.
no-problem/9905/hep-ph9905336.html
# Intrinsic transverse momentum and transverse spin asymmetries ## 1 Introduction Large single transverse spin asymmetries have been observed in the process $`pp^{\uparrow }\to \pi X`$ . Of course, a single experiment alone cannot reveal the origin(s) of such asymmetries conclusively, and one needs comparison to other experiments, for instance the planned RHIC spin physics experiments. The transverse momentum dependence of transverse spin asymmetries should be related to the transverse momentum of quarks inside a hadron. Our goal is to investigate the relation between the transverse spin and transverse momentum of quarks. In Ref. we have argued that conventional perturbative QCD and higher twist effects do not produce large –if any– single transverse spin asymmetries. Less conventional higher twist mechanisms, such as soft gluon poles in twist-3 matrix elements or the effectively equivalent twist-3 T-odd distribution functions , could produce a single spin asymmetry. For the Drell-Yan (DY) process it is expected to be similar in size to the double spin asymmetry $`A_{LT}`$ , which is estimated to be much smaller than the double transverse spin asymmetry $`A_{TT}`$, which in turn is estimated to be at the level of a few percent for RHIC energies . Therefore, we proposed an alternative explanation of such a single spin asymmetry, involving a particular leading twist, intrinsic transverse momentum dependent, chiral-odd, T-odd distribution function , called $`h_1^{\perp }`$ (cf. Fig. 1; depicted are probabilities of specific quark states (black dot) inside a hadron). This can not only offer an explanation for single spin asymmetries in $`pp^{\uparrow }\to \pi X`$ or the DY process, but also for the large azimuthal $`\mathrm{cos}2\varphi `$ dependence of the unpolarized DY cross section , which still lacks understanding. Unlike its chiral-even counterpart $`f_{1T}^{\perp }`$ (investigated in ), which depends on the polarization of the parent hadron (cf. Fig.
2), the function $`h_1^{\perp }`$ signals an intrinsic handedness inside an unpolarized hadron. It would mean an orientation dependent correlation between the transverse spin and the transverse momentum of quarks inside an unpolarized hadron<sup>1</sup><sup>1</sup>1In Ref. we have discussed the theoretical difficulties associated with T-odd distribution functions.. One can use the polarization of another hadron to become sensitive to the polarization of quarks inside an unpolarized hadron. In this way it could provide a new way of measuring the transversity distribution function $`h_1`$. For this purpose we propose two measurements that could be done at RHIC using polarized proton-proton collisions. ## 2 An unpolarized asymmetry A large $`\mathrm{cos}2\varphi `$ angular dependence in the unpolarized DY process $`\pi ^{}N\to \mu ^+\mu ^{}X`$, where $`N`$ is either deuterium or tungsten and for instance using a $`\pi ^{}`$ beam of 194 GeV, was found by the NA10 Collaboration . The perturbative QCD prediction (NLO) for the cross section written as $$\frac{d\sigma }{d\mathrm{\Omega }}\propto 1+\mathrm{cos}^2\theta +\mathrm{sin}^2\theta \left[\mu \mathrm{cos}\varphi +\frac{\nu }{2}\mathrm{cos}2\varphi \right],$$ (1) is $`\mu \approx 0,\nu \approx 0`$. However, $`\nu `$ acquires values of more than 0.3 depending on the transverse momentum $`Q_T`$ of the muon pair (its invariant mass is between 4 and 8 GeV/$`c^2`$), cf. Fig. 3. Even though the cross section itself is dependent on the nuclear target, since $`\sigma _W(Q_T)/\sigma _D(Q_T)`$ is an increasing function of $`Q_T`$, the analyzing power $`\nu (Q_T)`$ shows no apparent nuclear dependence, indicating that the asymmetry arises at the quark-hadron level. In Ref. we have observed that within the framework of transverse momentum dependent distribution functions , this asymmetry can only be accounted for by the function $`h_1^{\perp }`$, unless $`1/Q^2`$ suppressed . Moreover, higher twist effects are expected to produce $`\mu >\nu `$, which is not the case.
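Given the angular distribution of Eq. (1), $`\nu `$ can be extracted by a method-of-moments estimator: integrating over the solid angle gives $`\mathrm{cos}2\varphi =\nu /8`$, independently of $`\mu `$ (the $`\mu \mathrm{cos}\varphi `$ term averages to zero against $`\mathrm{cos}2\varphi `$). A minimal Monte Carlo sketch of this estimator follows; it is a toy event generator for illustration, not the NA10 analysis, and the sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_phi(nu, mu=0.0, n=200_000):
    """Accept-reject sampling of phi from the angular distribution
    1 + cos^2(th) + sin^2(th) * (mu*cos(phi) + (nu/2)*cos(2*phi))."""
    cos_th = rng.uniform(-1.0, 1.0, 4 * n)
    phi = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
    w = 1 + cos_th**2 + (1 - cos_th**2) * (mu * np.cos(phi) + 0.5 * nu * np.cos(2 * phi))
    envelope = 2.0 + abs(mu) + 0.5 * abs(nu)  # upper bound on w
    keep = rng.uniform(0.0, envelope, 4 * n) < w
    return phi[keep][:n]

# Method of moments: <cos 2phi> = nu/8 for this distribution,
# so nu can be estimated as 8 * <cos 2phi>
phi = generate_phi(nu=0.30)
nu_hat = 8.0 * np.mean(np.cos(2.0 * phi))
print(f"true nu = 0.30, estimated nu = {nu_hat:.3f}")
```

In a real analysis acceptance corrections would of course modify the simple factor of 8; the point here is only that a nonzero $`\nu `$ is directly measurable from the azimuthal moments.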
We found $`\nu \propto h_1^{\perp \pi }h_1^{\perp N}`$ and used this observation to fit the function $`h_1^{\perp }`$, assuming some simplifications, like independence of the type of parent hadron, cf. Fig. 3. In a similar way one can try to measure $`\mathrm{cos}2\varphi `$ in unpolarized $`pp\to \mu ^+\mu ^{}X`$ at RHIC and obtain a parametrization of $`h_1^{\perp p}`$. ## 3 Single spin asymmetries At RHIC they will also be able to measure $`pp^{\uparrow }\to \mu ^+\mu ^{}X`$. With one transversely polarized hadron the DY cross section will have more complicated azimuthal dependences. For instance (displaying only two terms): $$\frac{d\sigma }{d\mathrm{\Omega }d\varphi _{S_T}}\propto \mathrm{sin}^2\theta \left[\frac{\nu }{2}\mathrm{cos}2\varphi -\rho \mathrm{sin}(\varphi +\varphi _{S_T})\right].$$ (2) The analyzing power $`\rho `$ is proportional to the product $`h_1^{\perp }h_1`$ . Hence, the measurement of $`\mathrm{cos}2\varphi `$ combined with a measurement of the single spin azimuthal asymmetry $`\mathrm{sin}(\varphi +\varphi _{S_T})`$ could provide information on $`h_1`$. In other words, a nonzero function $`h_1^{\perp }`$ will imply a relation between $`\nu `$ and $`\rho `$, which in case of one flavor is ($`\nu _{\mathrm{max}}`$ is the maximum value attained by $`\nu (Q_T)`$) $$\rho =\frac{1}{2}|𝑺_T|\sqrt{\frac{\nu }{\nu _{\mathrm{max}}}}\frac{h_1}{f_1}.$$ (3) This depends on the magnitude of $`h_1`$ compared to $`f_1`$ and in Fig. 4 we display three options for $`\rho `$, using the fitted function $`\nu `$ which we view as an optimistic upper bound. We note that the function $`f_{1T}^{\perp }`$ generates a totally different angular single spin asymmetry, namely $`(1+\mathrm{cos}^2\theta )|𝑺_T|\mathrm{sin}(\varphi -\varphi _{S_T})f_{1T}^{\perp }f_1`$. The large single spin asymmetries found in $`pp^{\uparrow }\to \pi X`$ can also arise from leading twist T-odd functions with transverse momentum dependence.
There are three options: $`h_1^{\perp }(x_1,𝒑_T)h_1(x_2)D_1(z),`$ (4) $`f_{1T}^{\perp }(x_1,𝒑_T)f_1(x_2)D_1(z),`$ (5) $`h_1(x_1)f_1(x_2)H_1^{\perp }(z,𝒌_T).`$ (6) The first two options are similar to the ones described above, accompanied by the unpolarized fragmentation function $`D_1`$. The third option contains the Collins effect function $`H_1^{\perp }`$ , which is formally the fragmentation function analogue of $`h_1^{\perp }`$, but in principle unrelated in magnitude. The last two options were investigated in . ## 4 Conclusion The chiral-odd T-odd distribution function $`h_1^{\perp }`$ can not only offer an explanation for single transverse spin asymmetries in hadron-hadron collisions, but also for the unpolarized $`\mathrm{cos}2\varphi `$ asymmetry in the $`\pi ^{}N\to \mu ^+\mu ^{}X`$ data (unlike any other function in this approach, unless $`1/Q^2`$ suppressed). It would relate unpolarized and polarized observables and thus would offer a new possibility to access $`h_1`$ in $`pp\to \mu ^+\mu ^{}X`$.
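The numerical content of Eq. (3) in Section 3 is a one-liner; a short sketch is given below. The input values of $`|𝑺_T|`$, $`\nu /\nu _{\mathrm{max}}`$ and $`h_1/f_1`$ are assumed for illustration, echoing the three trial magnitudes displayed in Fig. 4; they are not fitted results.

```python
import math

def rho_asymmetry(nu, nu_max, h1_over_f1, S_T=1.0):
    """Single-spin analyzing power of Eq. (3), single-flavor case:
    rho = (1/2) |S_T| sqrt(nu/nu_max) (h1/f1)."""
    return 0.5 * S_T * math.sqrt(nu / nu_max) * h1_over_f1

# Three illustrative magnitudes of h1/f1 at the point nu = nu_max
for ratio in (0.25, 0.5, 1.0):
    print(ratio, rho_asymmetry(nu=0.3, nu_max=0.3, h1_over_f1=ratio))
```

Since $`\rho `$ scales like the square root of $`\nu `$, even a modest unpolarized $`\mathrm{cos}2\varphi `$ asymmetry translates into a sizable single-spin asymmetry if $`h_1`$ is comparable to $`f_1`$.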
no-problem/9905/nucl-th9905007.html
# HADRONIC SCATTERINGS OF CHARM MESONS AND ENHANCEMENT OF INTERMEDIATE MASS DILEPTONS ## 1 Introduction Heavy quark production in hadronic reactions is reasonably well described by perturbative QCD. However, in heavy ion collisions, final-state interactions may affect the spectra of produced heavy mesons. At the Relativistic Heavy Ion Collider (RHIC), a dense partonic system, often called the quark gluon plasma (QGP), is expected to be formed at the early stage. Since the QGP may induce a strong radiative energy loss of the produced heavy quarks, a change in the spectra of heavy meson observables could provide us with information on the properties of the QGP. But interactions between heavy mesons and other hadrons may not be negligible and need to be studied. Such a hadronic modification of the charm spectra in heavy ion collisions has recently been suggested as a possible explanation for the observed enhancement of dimuons of intermediate masses in the NA50 experiments at the CERN-SPS. Assuming that charm mesons develop a transverse flow due to rescatterings with hadrons, leading to a harder charm meson $`m_{\perp }`$ spectrum, dimuons from charm meson decays are also found to have a harder $`p_{\perp }`$ spectrum. Based on the energy cuts for muons at the NA50 experiment, more dimuons would then be found to have an invariant mass above $`1.5`$ GeV. Another explanation based on dilepton production from secondary meson-meson interactions has also been proposed . Whether or not charm mesons acquire a transverse flow depends on how strongly charm mesons interact with other hadrons during their propagation through the matter. In this study, we shall evaluate the scattering cross sections of charm mesons with pion, rho, and nucleon, using an effective Lagrangian. The effects of hadronic scatterings on the charm meson transverse momentum spectra and dileptons from charm meson decays are then estimated for heavy ion collisions at CERN-SPS energies.
## 2 Charm meson interactions with hadrons We consider the scattering of charm mesons ($`D^+`$, $`D^{-}`$, $`D^0`$, $`\overline{D}^0`$, $`D^{*+}`$, $`D^{*-}`$, $`D^{*0}`$, and $`\overline{D}^{*0}`$) with pion, rho, and nucleon. If the $`SU(4)`$ symmetry were exact, interactions involving pseudo-scalar and vector mesons could be described by the following Lagrangian: $`\mathcal{L}_{PPV}=ig\mathrm{Tr}\left(P^{\dagger }V^\mu \partial _\mu P\right)+h.c.`$ (1) where $`P`$ and $`V`$ represent, respectively, the following $`4\times 4`$ pseudo-scalar and vector meson matrices: $`P`$ $`=`$ $`\left(\begin{array}{cccc}\frac{\pi ^0}{\sqrt{2}}+\frac{\eta }{\sqrt{6}}+\frac{\eta _c}{\sqrt{6}}& \pi ^+& K^+& \overline{D}^0\\ \pi ^{-}& -\frac{\pi ^0}{\sqrt{2}}+\frac{\eta }{\sqrt{6}}+\frac{\eta _c}{\sqrt{6}}& K^0& D^{-}\\ K^{-}& \overline{K}^0& -\eta \sqrt{\frac{2}{3}}+\frac{\eta _c}{\sqrt{6}}& D_s^{-}\\ D^0& D^+& D_s^+& -\frac{3\eta _c}{\sqrt{6}}\end{array}\right),`$ (6) $`V`$ $`=`$ $`\left(\begin{array}{cccc}\frac{\rho ^0}{\sqrt{2}}+\frac{\omega }{\sqrt{6}}+\frac{J/\psi }{\sqrt{6}}& \rho ^+& K^{*+}& \overline{D}^{*0}\\ \rho ^{-}& -\frac{\rho ^0}{\sqrt{2}}+\frac{\omega }{\sqrt{6}}+\frac{J/\psi }{\sqrt{6}}& K^{*0}& D^{*-}\\ K^{*-}& \overline{K}^{*0}& -\omega \sqrt{\frac{2}{3}}+\frac{J/\psi }{\sqrt{6}}& D_s^{*-}\\ D^{*0}& D^{*+}& D_s^{*+}& -\frac{3J/\psi }{\sqrt{6}}\end{array}\right).`$ (11) Expanding the Lagrangian in Eq.(1) in terms of the meson fields explicitly, we obtain the following Lagrangians for meson-meson interactions: $`\mathcal{L}_{\pi DD^{*}}=ig_{\pi DD^{*}}\overline{D}^{*\mu }\vec{\tau }\left[D(\partial _\mu \vec{\pi })-(\partial _\mu D)\vec{\pi }\right]+h.c.,`$ $`\mathcal{L}_{\rho DD}=ig_{\rho DD}\left[\overline{D}\vec{\tau }(\partial _\mu D)-(\partial _\mu \overline{D})\vec{\tau }D\right]\vec{\rho }^\mu ,`$ $`\mathcal{L}_{\rho \pi \pi }=g_{\rho \pi \pi }\vec{\rho }^\mu \left(\vec{\pi }\times \partial _\mu \vec{\pi }\right).`$ We also need the following Lagrangians for meson-baryon interactions: $`\mathcal{L}_{\pi NN}=ig_{\pi NN}\overline{N}\gamma _5\vec{\tau }N\vec{\pi },`$ $`\mathcal{L}_{\rho NN}=g_{\rho NN}\overline{N}\left(\gamma ^\mu \vec{\tau }\vec{\rho }_\mu +{\displaystyle \frac{\kappa _\rho }{2m_N}}\sigma ^{\mu \nu }\vec{\tau }\partial _\mu \vec{\rho }_\nu \right)N.`$ Fig.1 shows the Feynman diagrams considered in this study for charm meson interactions with the pion (diagrams 1 to 8), the rho meson (diagrams 9 and 10), and the nucleon (diagrams 11 to 13). The differential cross sections of these processes can be found in the reference . Possible interferences among diagrams 3, 4 and 8 have not been included. For coupling constants, we take $`g_{\rho \pi \pi }=6.1`$, $`g_{\pi DD^{*}}=4.4`$, $`g_{\rho DD}=2.8`$ , $`g_{\pi NN}=13.5`$, $`g_{\rho NN}=3.25`$, and $`\kappa _\rho =6.1`$. The $`SU(4)`$ symmetry assumed in the Lagrangian in Eq.(1) would give the following relations: $`g_{\pi KK^{*}}(3.3)=g_{\pi DD^{*}}(4.4)=g_{\rho KK}(3.0)=g_{\rho DD}(2.8)={\displaystyle \frac{g_{\varphi KK}}{\sqrt{2}}}(3.4)={\displaystyle \frac{g_{\rho \pi \pi }}{2}}(3.0).`$ The empirical values given in parentheses agree reasonably well with this prediction even though the $`SU(4)`$ symmetry is badly broken. Form factors are introduced at the vertices to take into account the structure of hadrons. For $`t`$-channel vertices, monopole form factors of the form $`f(t)=(\mathrm{\Lambda }^2-m_\alpha ^2)/(\mathrm{\Lambda }^2-t)`$ are used, where $`\mathrm{\Lambda }`$ is a cut-off parameter , and $`m_\alpha `$ is the mass of the exchanged meson. It should be noted that the cross sections for diagrams 2 and 9 ($`D^{*}\pi \to D\rho `$) are singular because the intermediate mesons can be on-shell. In the present study, we simply add an imaginary part of $`50`$ MeV to the mass of the intermediate pion as the regulator. The thermal averaged cross section, $`<\sigma v>`$, is shown in Fig.2(a) for initial particles with a thermal distribution at temperature $`T`$. Only the dominant scattering channels which have values above $`1.1`$ mb are shown.
## 3 Estimates of Rescattering Effects In this section, we estimate the effects of hadronic rescatterings on both the charm meson $`m_{\perp }`$ spectra and the invariant mass distribution of dileptons from charm meson decays in heavy ion collisions at SPS energies. We first determine the squared momentum transfer to a charm meson, $`p_0^2`$, as the squared momentum of the final charm meson $`D_2`$ in the rest frame of $`D_1`$ for a scattering process $`D_1X_1\to D_2X_2`$. In the charm meson local frame, we assume the time evolution of the hadron densities as $`\rho (\tau )\propto 1/\tau `$. Then the total number of scatterings for a charm meson is given by $`N={\displaystyle \int _{\tau _0}^{\tau _F}}\sigma v\rho 𝑑\tau \approx \sigma v\rho _0\tau _0\mathrm{ln}\left({\displaystyle \frac{R_Am_{\perp }^D}{\tau _0p_{\perp }^D}}\right),`$ and the squared total momentum transfer from hadronic scatterings is $`<p_\mathrm{S}^2>=<Np_0^2>=\left[{\displaystyle \underset{i=\pi ,\rho ,N\mathrm{}}{\sum }}<\sigma vp_0^2>_i\rho _{i0}\right]\tau _0\mathrm{ln}\left({\displaystyle \frac{R_A<m_{\perp }^D>}{\tau _0<p_{\perp }^D>}}\right).`$ Thus, the relevant quantity is the thermal average $`<\sigma vp_0^2>`$ instead of the usual $`<\sigma v>`$. Fig.2(b) shows this thermal average for the dominant scattering channels which have values above $`0.75`$ mb$`\cdot `$GeV<sup>2</sup>. Summing up contributions from all scattering channels in Fig.1 (a), (b) and (c) separately, and simply dividing by 2 to account for the average over $`D`$ and $`D^{*}`$, we get $`<\sigma vp_0^2>\approx 1.1,\mathrm{\hspace{0.33em}}1.5\mathrm{\hspace{0.33em}and\hspace{0.33em}}2.7\mathrm{\hspace{0.33em}mb}\mathrm{GeV}^2\mathrm{\hspace{0.33em}at\hspace{0.33em}}T=150\mathrm{\hspace{0.33em}MeV}`$ for $`\pi `$, $`\rho `$ and $`N`$ scatterings with charm mesons, respectively. For central $`Pb+Pb`$ collisions at SPS energies, the initial total numbers of particles are about $`500(\pi )`$, $`220(\rho )`$, $`100(\omega )`$, $`80(\eta )`$, $`180(N)`$, $`60(\mathrm{\Delta })`$, and $`130`$ (higher baryon resonances) .
We have not calculated the scattering cross sections between charm mesons and hadrons such as kaons, $`\omega `$, $`\eta `$, $`\mathrm{\Delta }`$, and higher baryon resonances. For a conservative estimate of the effect, we only include $`\pi `$, $`\rho `$ and nucleon, and we obtain $`\rho _0\tau _0\approx 0.79(\pi ),\mathrm{\hspace{0.33em}}0.35(\rho ),\mathrm{\hspace{0.33em}and\hspace{0.33em}}0.28(\mathrm{nucleon})\mathrm{\hspace{0.33em}fm}^{2},`$ $`<p_\mathrm{S}^2>\approx 0.61\mathrm{\hspace{0.33em}GeV}^2\to T_\mathrm{S}=96\mathrm{\hspace{0.33em}MeV}.`$ In the above, we have taken $`\tau _0=1`$ fm. The parameter $`T_\mathrm{S}`$ characterizes the scattering strength and is given by $`T_\mathrm{S}\approx <p_\mathrm{S}^2>/(3m_D)`$ in the lowest-order approximation . Based on Monte Carlo simulations, $`T_\mathrm{S}`$ has been related to the inverse slope $`T_{\mathrm{eff}}`$ of the final charm meson $`m_{\perp }`$ spectrum, and this is shown in Fig.6 of Ref. . From that figure, we find that the charm meson $`T_{\mathrm{eff}}`$ increases from $`160`$ MeV to about $`235`$ MeV, leading to a dimuon enhancement factor of about $`2.1`$ for the NA50 acceptance. For heavy ion collisions at RHIC energies, in addition to hadronic rescatterings of charm mesons, partonic rescattering effects on charm quarks also need to be included. Furthermore, radiative processes of charm quarks inside the QGP would further complicate the issue as they may cause energy loss and soften the charm meson $`m_{\perp }`$ spectra. More studies are thus needed before one can make predictions for RHIC.
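The estimate above can be chained together in a few lines. The geometric inputs entering the logarithm ($`R_A`$, $`\tau _0`$, and the ratio $`<m_{\perp }^D>/<p_{\perp }^D>`$) are not all quoted in the text, so the values below are assumptions chosen for illustration; with them the quoted $`<p_\mathrm{S}^2>\approx 0.61`$ GeV<sup>2</sup> is reproduced, while the naive lowest-order $`T_\mathrm{S}`$ comes out somewhat above the quoted 96 MeV, which evidently includes details beyond this sketch.

```python
import math

MB_TO_FM2 = 0.1  # 1 mb = 0.1 fm^2

# <sigma v p0^2> at T = 150 MeV (mb GeV^2) and rho_0*tau_0 (fm^-2), from the text
sv_p02 = {"pi": 1.1, "rho": 1.5, "N": 2.7}
rho0_tau0 = {"pi": 0.79, "rho": 0.35, "N": 0.28}

# Logarithmic factor ln(R_A <m_T>/(tau_0 <p_T>)); R_A ~ 7 fm for Pb and the
# <m_T>/<p_T> ratio are assumed here, tau_0 = 1 fm as in the text
log_factor = math.log(7.0 * 2.0 / (1.0 * 0.8))

p_S2 = sum(sv_p02[k] * MB_TO_FM2 * rho0_tau0[k] for k in sv_p02) * log_factor
T_S = p_S2 / (3 * 1.87)  # lowest-order T_S = <p_S^2>/(3 m_D), m_D = 1.87 GeV

print(f"<p_S^2> = {p_S2:.2f} GeV^2, lowest-order T_S = {1000 * T_S:.0f} MeV")
```

The exercise makes the sensitivity explicit: the result scales linearly with the assumed initial densities and only logarithmically with the assumed system size.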
The estimates given above are, however, based on a simple assumption on the time evolution of the hadronic system, which enables us to make a more analytical estimate for the rescattering effects. For a quantitative study of the rescattering effects on charm meson observables, studies with a partonic and hadronic cascade program are much needed as the time evolution and the chemical equilibration of the dense system are better simulated in such a model. ## Acknowledgments This work was supported by the National Science Foundation under Grant No. PHY-9870038, the Welch Foundation under Grant No. A-1358, and the Texas Advanced Research Project FY97-010366-068. ## References
no-problem/9905/hep-ex9905062.html
# Forward $`\pi ^0`$-meson production at HERA ## 1 INTRODUCTION It is the unique kinematical reach of the ep collider HERA which has enabled us to study deep-inelastic scattering (DIS) at values of Bjorken-$`x`$ down to $`x\sim 10^{6}`$ as well as at momentum transfers up to $`Q^2\approx 30000`$ GeV<sup>2</sup>. In the classical DIS picture a parton in the proton can undergo a QCD cascade resulting in several parton emissions before the final parton interacts with the virtual photon. Differences between different dynamical assumptions on the parton cascade are expected to be emphasized in the region towards the proton remnant direction, i.e. away from the scattered quark in the HERA kinematical range. In the HERA laboratory frame this has been termed the forward region. In this paper we study forward single $`\pi ^0`$ production for a considerably larger data sample and in an enlarged kinematical range as compared to a previous publication by the H1 collaboration . The production of high $`p_T`$ particles is strongly correlated to the emission of hard partons in QCD and is therefore sensitive to the dynamics of the strong interaction. ## 2 MEASUREMENT The analysis is based on data representing an integrated luminosity of 5.8 pb<sup>-1</sup> taken by H1 during the 1996 running period. Deep-inelastic scattering events are selected in the range $`0.1<y<0.6`$ and $`2<Q^2<70`$ GeV<sup>2</sup>. About 600k events remain after the selection. The $`\pi ^0`$-mesons are measured using the dominant decay channel $`\pi ^0\to 2\gamma `$. The $`\pi ^0`$ candidates are selected in the region $`5^{\circ }<\theta _\pi <25^{\circ }`$, where $`\theta _\pi `$ is the polar angle of the produced $`\pi ^0`$. Candidates are required to have an energy of $`x_\pi `$$`=`$$`E_\pi `$/$`E_{\mathrm{proton}}`$ $`>`$ 0.01, with $`E_{\mathrm{proton}}`$ the proton beam energy, and a transverse momentum in the hadronic cms, $`p_{T,\pi }`$, greater than 2.5 GeV.
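A kinematic aside on these cuts: the energy threshold implies that the two decay photons are nearly collinear, since the minimum (symmetric-decay) opening angle of $`\pi ^0\to 2\gamma `$ is $`\theta _{\mathrm{min}}=2\mathrm{arcsin}(m_\pi /E_\pi )`$. A short sketch, taking the 1996 HERA proton beam energy of 820 GeV (an assumption for illustration, not quoted in the text):

```python
import math

M_PI0 = 0.1350     # pi0 mass in GeV
E_BEAM_P = 820.0   # assumed HERA proton beam energy in GeV (1996 running)

def min_opening_angle(e_pi):
    """Minimum opening angle (radians) of a pi0 -> 2 gamma decay,
    reached for the symmetric configuration: theta = 2*arcsin(m/E)."""
    return 2.0 * math.asin(M_PI0 / e_pi)

# The cut x_pi = E_pi/E_proton > 0.01 implies E_pi > 8.2 GeV
e_min = 0.01 * E_BEAM_P
theta = min_opening_angle(e_min)
print(f"E_pi > {e_min:.1f} GeV  ->  opening angle < {1e3 * theta:.0f} mrad")
```

At a few tens of milliradians or less the two photons typically fall into a single calorimeter cluster, which is the experimental situation described next.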
At the high $`\pi ^0`$ energies considered here, the two photons from the decay cannot be separated, but appear as one object (cluster) in the calorimetric response. The standard method of reconstructing the invariant mass from the separate measurement of the two decay photons to identify the $`\pi ^0`$-meson is hence not applicable. Instead, a detailed analysis of the longitudinal and transverse shape of the energy depositions is performed . This approach is based on the compact nature of electromagnetic showers as opposed to showers of hadronic origin, which are broader. The main experimental challenge in this analysis is the high particle and energy density in this region of phase space, with hadronic showers ‘obscuring’ the clear electromagnetic signature provided by the two photons of a $`\pi ^0`$ decay. This overlap is mainly responsible for losses of $`\pi ^0`$ detection efficiency, since the distortion of the shape estimators it causes will in many cases lead to the rejection of the cluster candidate. With this selection about 1700 $`\pi ^0`$ candidates are found with a detection efficiency above 45$`\%`$. Monte Carlo studies using a detailed simulation of the H1 detector yield a purity of about 70% for the selected $`\pi ^0`$-meson sample, with impurities from misidentified hadrons from the main vertex, partly (10%) also from secondary vertices in dead material of the tracking detector. ## 3 RESULTS The final experimental results of the analysis are obtained as differential cross sections of forward $`\pi ^0`$-meson production as a function of $`Q^2`$, and as a function of $`x`$, $`\eta _\pi `$ and $`p_{T,\pi }`$ in three regions of $`Q^2`$ for $`p_{T,\pi }`$ $`>2.5`$ GeV. In addition the $`\pi ^0`$ cross sections as a function of $`x`$ and $`Q^2`$ are measured for $`p_{T,\pi }`$ $`>`$ 3.5 GeV.
The phase space is given by 0.1 $`<`$ $`y`$ $`<`$ 0.6, 2 $`<`$ $`Q^2`$ $`<`$ 70 $`\mathrm{GeV}^2`$, $`5^{\circ }<\theta _\pi <25^{\circ }`$ and $`x_\pi `$ $`=`$ $`E_\pi `$$`/`$$`E_{\mathrm{proton}}`$ $`>`$ 0.01 in addition to the $`p_{T,\pi }`$ thresholds given above. $`\theta _\pi `$ and $`x_\pi `$ are taken in the H1 laboratory frame, $`p_{T,\pi }`$ is calculated in the hadronic cms. The measurement extends down to values of $`x`$ $`>`$ 5$`\times `$10<sup>-5</sup>, covering two orders of magnitude in $`x`$. Of these cross sections only $`d\sigma _\pi /d`$$`x`$ for $`p_{T,\pi }`$ $`>2.5`$ GeV are shown here. All observables are corrected for detector effects and for the influence of QED radiation by a bin-by-bin unfolding procedure. The typical systematic uncertainty is 15-25$`\%`$, compared to a statistical uncertainty of about 10$`\%`$. Contributions to the systematic error include among others the energy scales of the calorimeter, uncertainties in the selection of $`\pi ^0`$-mesons and the model dependence of the bin-by-bin correction procedure. The cross sections as a function of $`x`$ shown in Figure 1 exhibit a strong rise towards small $`x`$. An interesting observation is that this rise corresponds to the rise of the inclusive ep cross section. The ratio of the two cross sections is shown in Figure 2 and shows no dependence on $`x`$ in the three regions of $`Q^2`$. The production rates do decrease with decreasing $`Q^2`$ for fixed $`x`$. The inclusive ep cross section for this comparison is obtained by integrating the H1 QCD fit to the 1996 structure function data as presented in for every bin of the present measurement of inclusive $`\pi ^0`$-meson cross sections. The DGLAP prediction for pointlike virtual photon scattering only, represented by LEPTO , falls clearly below the data.
The mechanism of emitting partons according to the DGLAP splitting functions, combined with pointlike virtual photon scattering only, is clearly not supported by the data, in particular at low $`x`$. A considerable improvement of the description of the data is achieved by considering processes where the virtual photon entering the scattering process is resolved. Such a prediction is provided by RAPGAP . All predicted cross sections increase by up to 30$`\%`$ when the scale in the hard scattering is increased to $`Q^2+4p_T^2`$ from $`Q^2+p_T^2`$ ; this, however, does not improve the overall description. Whether this mechanism is adequate to describe the $`\pi ^0`$ cross sections down to the lowest available $`x`$ therefore cannot finally be decided by RAPGAP. Next we compare with a calculation following the BFKL formalism in order $`𝒪(\alpha _s)`$. Fragmentation functions are used to calculate the $`\pi ^0`$-meson cross sections from the partonic final state. The predictions obtained with these calculations turn out to be in good agreement with the neutral pion cross sections measured over the entire available phase space, with a slight tendency to be below the data at the lowest values of $`x`$ available. ## 4 CONCLUSIONS With the present measurement of inclusive $`\pi ^0`$-meson cross sections it has become possible for the first time to measure observables of the hadronic final state in this region of phase space with relatively small experimental uncertainties. It provides a testing ground for any theory that claims to describe processes at small $`x`$ with large phase space for parton emissions and a reasonably hard scale. Models using $`𝒪(\alpha _s)`$ QCD matrix elements and parton cascades according to the DGLAP splitting functions cannot describe the differential neutral pion cross sections at low $`x`$. Including processes in which the virtual photon is resolved leads to an improved description of the data.
Renormalization and factorization scale uncertainties, however, limit the precision of the predictions. A calculation based on the BFKL formalism is in good agreement with the data. Considering the relatively small uncertainties given for this calculation, it is the best available approximation of QCD in the considered phase space.
no-problem/9905/cond-mat9905354.html
ar5iv
text
# The Link Overlap and Finite Size Effects for the 3D Ising Spin Glass ## I Introduction A series of computer simulations performed in the past few years appear to support the claim that the three-dimensional Edwards-Anderson spin glass shows signatures of replica-symmetry breaking (RSB), implying the existence of infinitely many pure states in the low-temperature phase. In contrast, almost rigorous arguments and recent experiments favor the droplet picture with only one pair of pure states. In a recent paper , we have suggested that due to the high temperatures and small system sizes the computer simulations are strongly affected by the critical point and do not reflect the true low-temperature behaviour. This suggestion was supported by a numerical calculation of the Parisi overlap function using the Migdal-Kadanoff approximation (MKA). For the system sizes and temperatures typically used in computer simulations we found overlap functions similar to those in , while for lower temperatures we found agreement with the predictions of the droplet picture. That the results of computer simulations are strongly affected by the critical point can also be concluded from , where the Parisi overlap function shows critical scaling (with effective exponents) down to temperatures $`\approx 0.8T_c`$. In recent publications , it is claimed that nontrivial behaviour of a quantity called the link overlap is a reliable indicator of RSB. However, in order to place such a claim on solid ground, one would have to show that the data cannot be interpreted within the framework of the droplet picture. It is the goal of this paper to provide this discussion, which has been missing so far. As in , we use the MKA, which is known to agree with the droplet picture. 
Our results show, as in the case of the Parisi overlap function, that several nontrivial features attributed to RSB are in fact due to finite-size effects, and that the numerical data on the link overlap published so far are indeed in agreement with the droplet picture. We also derive expressions for the effective coupling at any temperature as a function of system size and find that one indeed needs rather large systems or low temperatures to see droplet-like behaviour. The outline of this paper is as follows: After introducing the model and defining the quantities to be evaluated, we present first our analytical and numerical results for the link overlap distribution function. Then, we evaluate the link overlap in the presence of a weak coupling between the two replicas. In the following section we explain why finite size effects are so large for the three-dimensional Ising spin glass. Finally, we summarize and discuss our findings. ## II Definitions The Hamiltonian $`H_0`$ of the Edwards-Anderson (EA) spin glass in the absence of an external magnetic field is given by $$\beta H_0=-\sum _{\langle i,j\rangle }J_{ij}\sigma _i\sigma _j,$$ where $`\beta =1/k_BT`$. The Ising spins can take the values $`\pm 1`$, and the nearest-neighbour couplings $`J_{ij}`$ are independent from each other and gaussian distributed with a standard deviation $`J`$. It has proven useful to consider two identical copies (replicas) of the system, and to measure overlaps between them. This gives information about the structure of the low-temperature phase, in particular about the number of pure states. The quantity considered in this paper is the link overlap $$q^{(L)}(ϵ)=(1/N_L)\sum _{\langle i,j\rangle }\langle \sigma _i\sigma _j\tau _i\tau _j\rangle $$ (1) where the sum is over all nearest-neighbour pairs $`i,j`$ of a lattice with $`N_L`$ bonds and $`N`$ sites, and the brackets denote the thermal and disorder average. $`\sigma `$ and $`\tau `$ denote the spins in the two replicas. 
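To make Eq. (1) concrete, here is a minimal single-configuration sketch (an illustration, not the paper's MKA calculation): the estimator below omits the thermal and disorder averages and uses random spins on a periodic cubic lattice of illustrative size.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
# two independent random spin configurations ("replicas") on an L^3 lattice
sigma = rng.choice([-1, 1], size=(L, L, L))
tau = rng.choice([-1, 1], size=(L, L, L))

def bond_products(s):
    # s_i * s_j over the N_L = 3 L^3 nearest-neighbour bonds (periodic boundaries)
    return np.concatenate([(s * np.roll(s, -1, axis=a)).ravel() for a in range(3)])

def link_overlap(sigma, tau):
    # single-configuration estimator of Eq. (1), before thermal/disorder averaging
    return (bond_products(sigma) * bond_products(tau)).mean()

q_L = link_overlap(sigma, tau)
# q^(L) is invariant under a global spin flip of one replica
assert np.isclose(link_overlap(-sigma, tau), q_L)
```

For two identical replicas every bond product coincides and the estimator returns exactly 1.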
The Hamiltonian used for the evaluation of the thermodynamic average is $$\beta H[\sigma ,\tau ]=\beta H_0[\sigma ]+\beta H_0[\tau ]-ϵ\sum _{\langle i,j\rangle }\sigma _i\sigma _j\tau _i\tau _j,$$ (2) where $`H_0`$ is the ordinary spin glass Hamiltonian given above, and the term in $`ϵ`$ introduces a coupling between the two replicas. In cases where the random couplings $`J_{ij}`$ are taken to have the values $`\pm 1`$, the link overlap is identical to the energy overlap. The main qualitative differences between the Parisi overlap $$q^{(P)}=(1/N)\sum _{i=1}^N\sigma _i\tau _i,$$ and the link overlap are (i) that flipping all spins in one of the two replicas changes the sign of $`q^{(P)}`$ but leaves $`q^{(L)}`$ invariant, and (ii) that flipping a droplet of finite size in one of the two replicas changes $`q^{(P)}`$ by an amount proportional to the volume of the droplet, and $`q^{(L)}`$ by an amount proportional to the surface of the droplet. Below, we will show that, just as for the Parisi overlap, the MKA can reproduce all the essential features of the link overlap found in Monte Carlo simulations. These results refute the claim made in that the agreement between the MKA and simulations for the Parisi overlap reported in is a mere coincidence that does not extend to the link overlap. The conclusion must be drawn that there is no evidence for RSB in three-dimensional Ising spin glasses. Evaluating a thermodynamic quantity in MKA in three dimensions is equivalent to evaluating it on a hierarchical lattice that is constructed iteratively by replacing each bond by eight bonds, as indicated in Fig. 1. The total number of bonds after $`I`$ iterations is $`8^I`$, which is identical to the number of lattice sites of a three-dimensional lattice of size $`L=2^I`$. 
Thermodynamic quantities are then evaluated iteratively by tracing over the spins on the highest level of the hierarchy, until the lowest level is reached and the trace over the remaining two spins is calculated . This procedure generates new effective couplings, which have to be included in the recursion relations. In , it was proved that in the limit of infinitely many dimensions (and in an expansion away from infinite dimensions) the MKA reproduces the results of the droplet picture. We have shown in that the MKA agrees with the droplet picture in three dimensions as well. For this reason, no feature that is seen in MKA can be attributed to RSB. ## III The probability distribution of the link overlap We first set the coupling strength $`ϵ`$ in Eq. (2) to zero and study the probability distribution $`P(q^{(L)})`$ of the link overlap, averaged over a sufficiently large number of samples. According to , RSB should manifest itself in $`P(q^{(L)})`$ in an asymmetric (non-Gaussian) shape and a nonzero width even for infinitely large system sizes. Furthermore, the link overlap for single samples should show large variations between different samples. Here, we show that the asymmetric shape and large sample-to-sample variations can be seen even in MKA for moderate system sizes and can therefore not be taken as evidence for RSB. The only reliable indicator for RSB would be a width of $`P(q^{(L)})`$ that does not shrink with increasing system size. However, the only Monte Carlo simulation data published so far for $`P(q^{(L)})`$ were taken for a four-dimensional Ising spin glass in a magnetic field, and they show a shrinking width, in agreement with the expectations from the droplet picture. 
We have obtained the function $`P(q^{(L)})`$ by first calculating its Fourier transform, $$F(y)=\left\langle \mathrm{exp}\left(iy\sum _{\langle ij\rangle }\frac{\sigma _i\tau _i\sigma _j\tau _j}{N_L}\right)\right\rangle .$$ The coefficients $`a_n`$ in $$P(q^{(L)})=\sum _{n=-N_L/2}^{N_L/2}a_n\delta (q^{(L)}-2n/N_L)$$ are then found from $`F(y)`$: $$a_n=(1/\pi N_L)\int _{-\pi N_L/2}^{\pi N_L/2}F(y)\mathrm{exp}(-2iyn/N_L)𝑑y.$$ Figures 2, 3 and 4 show our result for $`P(q^{(L)})`$ in MKA for three different temperatures. All curves have been averaged over several thousand samples. The curves for small $`L`$ are asymmetric at $`T=T_c`$ with a tail on the right-hand side. With decreasing temperature, the asymmetry becomes stronger, the tail moves to the left-hand side, and a shoulder is formed. All these features seem to be finite-size effects, as they become weaker with increasing system size. Figures 5 and 6 show $`P(q^{(L)})`$ for single samples at $`T=0.7T_c`$ and for $`L=8`$ (Fig. 5) and $`L=16`$ (Fig. 6). In particular for $`L=8`$, there are large variations between different samples, a feature that is usually assumed to be a clear indicator of RSB. However, since the MKA does not show RSB, we must assign this feature to finite-size effects. Finally, let us study the width $$\mathrm{\Delta }q^{(L)}=\sqrt{\int _{-1}^1(q^{(L)}-\overline{q}^{(L)})^2P(q^{(L)})𝑑q^{(L)}}$$ of $`P(q^{(L)})`$. Fig. 7 shows our results on a double logarithmic plot, together with a power-law fit $`\mathrm{\Delta }q^{(L)}\propto L^{-\omega }`$. At $`T=T_c`$, the exponent $`\omega `$ is sufficiently close to $`d/2=1.5`$ to suggest that the leading contribution to $`\mathrm{\Delta }q^{(L)}`$ comes from the superposition of independent contributions of the different parts of the system, just as it does for higher temperatures. Below, we shall see explicitly that critical point nonanalyticities are indeed subleading to the regular contributions. 
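The Fourier-inversion route to $`P(q^{(L)})`$ described above can be mimicked numerically; the synthetic overlap samples below (an illustration, not MKA data) stand in for the thermodynamic average that defines $`F(y)`$, and the $`a_n`$ are recovered by the inversion integral evaluated with the trapezoid rule:

```python
import numpy as np

rng = np.random.default_rng(1)
N_L = 12                                  # number of bonds; q^(L) lives on the grid 2n/N_L
n_true = rng.integers(-N_L // 2, N_L // 2 + 1, size=2000)
q_samples = 2 * n_true / N_L              # synthetic link-overlap samples

# characteristic function F(y) = <exp(i y q)>, estimated from the samples
y = np.linspace(-np.pi * N_L / 2, np.pi * N_L / 2, 2001)
F = np.exp(1j * np.outer(y, q_samples)).mean(axis=1)

# inversion: a_n = (1/pi N_L) * integral of F(y) exp(-2 i y n / N_L) dy
n = np.arange(-N_L // 2, N_L // 2 + 1)
w = np.full(y.size, 1.0); w[0] = w[-1] = 0.5      # trapezoid weights
h = y[1] - y[0]
a = (h * (w[:, None] * F[:, None] * np.exp(-2j * np.outer(y, n) / N_L)).sum(axis=0)).real \
    / (np.pi * N_L)

empirical = np.array([(n_true == k).mean() for k in n])
assert np.allclose(a, empirical)          # the inversion reproduces the sample weights
```

Because the integration range covers whole periods of every Fourier mode, the trapezoid rule here is exact up to rounding, so the recovered weights match the empirical ones to machine precision.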
Within the framework of the droplet picture, the value of $`\omega `$ at the zero temperature fixed point can be calculated as follows: The main excitations at low temperatures or large scales are droplets of flipped spins of a radius $`r\le L`$ in one of the replicas. Such droplets occur with a probability proportional to $$(L/r)^dr^{-\theta }kT,$$ and each of them makes a contribution of the order $$r^{2d_s}L^{-2d}$$ to $`(\mathrm{\Delta }q^{(L)})^2`$. $`d_s`$ is the fractal dimension of the droplet surface and is $`d-1`$ in MKA, and $`\theta `$ is the scaling exponent of the coupling strength and has a value around 0.24 in MKA in $`d=3`$. We therefore find $$\mathrm{\Delta }q^{(L)}\propto \sqrt{kT}L^{d_s-d-\theta /2},$$ giving $`\omega \approx 1.12`$ in MKA. Our above data for $`T=0.45T_c`$ are not far from this result. For larger system sizes, they must ultimately converge to it. Just as in the case of the Parisi overlap , the crossover from critical to low-temperature behaviour is so slow that even for $`T\approx 0.4T_c`$ the asymptotic regime is not reached for system sizes up to 32, and the curves appear to show effective exponents. For a cubic lattice, we have $`d_s\approx 2.2`$ and $`\theta \approx 0.2`$, predicting a value $`\omega \approx 0.9`$ within the framework of the droplet picture. This has to be compared to the RSB scenario, where $`\omega =0`$. In , it is claimed that Monte-Carlo simulation data at $`T\approx 0.6T_c`$ and $`L\le 12`$ show already the signatures of RSB. However, as yet there is no published data for $`\mathrm{\Delta }q^{(L)}`$. Only if a value of $`\omega `$ smaller than 0.9 is found can one conclude that the droplet picture is inappropriate and that RSB occurs. As long as $`\omega `$ appears to be larger than 0.9, the data are compatible with the droplet picture and are affected by finite-size effects. 
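The arithmetic behind the droplet estimate $`\omega =d+\theta /2-d_s`$ can be spelled out explicitly (a sketch using the values quoted above):

```python
def droplet_omega(d, d_s, theta):
    # width exponent from Delta q^(L) ~ sqrt(kT) * L^{d_s - d - theta/2},
    # i.e. omega = d + theta/2 - d_s
    return d + theta / 2 - d_s

omega_mka = droplet_omega(d=3, d_s=2, theta=0.24)     # Migdal-Kadanoff: 1.12
omega_cubic = droplet_omega(d=3, d_s=2.2, theta=0.2)  # cubic lattice: about 0.9
```

Both numbers reproduce the values quoted in the text, with $`d_s=d-1`$ in MKA.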
## IV The link overlap in the presence of a coupling between the two replicas In , the authors suggested studying the expectation value of the link overlap in the presence of a coupling between the two replicas, $`q^{(L)}(ϵ)`$, in order to test whether a system shows RSB. A positive coupling $`ϵ`$ in Eq. (2) favours a configuration where both replicas are in the same state, while a negative $`ϵ`$ favours configurations with smaller overlaps. If the RSB scenario were correct, the distribution $`P(q^{(L)})`$ would have a finite width even for $`L\to \mathrm{\infty }`$ and range from some minimum value $`q_{\mathrm{min}}`$ to a maximum value $`q_{\mathrm{max}}`$. Consequently, the expectation value of the link overlap, $`q^{(L)}(ϵ)`$, would have a jump from $`q_{\mathrm{min}}`$ to $`q_{\mathrm{max}}`$ at $`ϵ=0`$ in the thermodynamic limit $`L\to \mathrm{\infty }`$ . In contrast, as we will show next, the droplet picture predicts a function $`q^{(L)}(ϵ)`$ that is continuous but nonanalytic at $`ϵ=0`$ . Part of these results were published in . Within the droplet picture, the scaling dimension of $`ϵ`$ in the spin glass phase can easily be obtained. At length scale 1, $`ϵ`$ is equivalent to the energy cost of flipping one spin in one of the two replicas. On a scale $`l`$, this becomes the energy cost of flipping a droplet of radius $`l`$ in one of the replicas, which is proportional to $`ϵl^{d_s}`$. The scaling dimension of $`ϵ`$ is therefore $`d_s`$, where $`d_s`$ is the fractal dimension of the droplet surface. Equivalently, $`d_s`$ is the fractal dimension of a domain wall. Within MKA, we have $`d_s=d-1`$. The same value $`d-1`$ for the scaling dimension of $`ϵ`$ is also obtained from an analytical calculation of the recursion relations for $`ϵ`$ and the strength $`J`$ of the random couplings near the $`T=0`$ fixed point. 
The positive dimension of $`ϵ`$ implies that the coupling between the two replicas is a relevant perturbation and that on large scales a behaviour different from that of an independent system can be seen. When $`ϵ`$ is positive, the energy cost $`ϵl^{d_s}`$ for the excitation of droplets of radius $`l`$ leads to the suppression of droplets larger than $$l^{*}\sim (kT/ϵ)^{1/d_s}.$$ On scales beyond $`l^{*}`$, droplet excitations must occur in both replicas simultaneously. This costs twice the energy of a single droplet in an independent system. The coupled system thus behaves on large scales exactly like a single system with twice the coupling strength $`J`$. As stated in the preceding section, for $`ϵ=0`$ droplets of size $`l`$ occur with a probability proportional to $`(L/l)^dl^{-\theta }kT`$ in a system of size $`L`$. Since for positive $`ϵ`$ droplets of size greater than $`l^{*}`$ are suppressed, the change in the link overlap due to a small positive $`ϵ`$ can be written as $$q^{(L)}(ϵ)-q^{(L)}(0)\sim kT\sum _{l>l^{*}}^Ll^{d_s-d-\theta }.$$ (3) In order to suppress each droplet only once, the sum must be taken over distinct length scales $`l`$, e.g., $`l=l^{*},2l^{*},4l^{*},\mathrm{\dots }`$, and it is proportional to the first term. Therefore, we can write $`q^{(L)}(ϵ)-q^{(L)}(0)\sim kT(l^{*})^{d_s-d-\theta }`$, and using the expression for $`l^{*}`$ given above we have, $$q^{(L)}(ϵ)-q^{(L)}(0)\sim kT(ϵ/kT)^{(d+\theta -d_s)/d_s}.$$ (4) For negative $`ϵ`$, flipping a droplet of radius $`l`$ in one of the replicas changes the system’s energy by an amount proportional to $`l^\theta -|ϵ|l^{d_s}`$, which is negative for $`l>l_c`$ with $`l_c\sim |ϵ|^{1/(\theta -d_s)}`$. Therefore, there is a proliferation of droplets beyond this length scale and the spin glass state is completely restructured. 
We followed the flow of the parameters $`ϵ`$, $`J`$, and $`\mathrm{\Delta }ϵ`$ (the width of the distribution of $`ϵ`$) under a change of scale in the MKA and found that $`\mathrm{\Delta }ϵ`$ diverges, while $`J`$ and $`ϵ`$ eventually decrease to zero. Such a system is an Edwards-Anderson spin glass with the effective spins $`\rho _i=\sigma _i\tau _i`$. Since droplets of size larger than $`l_c`$ proliferate for negative $`ϵ`$, the change in the link overlap is given in this case by $$q^{(L)}(ϵ)-q^{(L)}(0)\sim (l_c)^{d_s-d}\sim |ϵ|^{(d-d_s)/(d_s-\theta )}.$$ (5) We thus find that $`q^{(L)}(ϵ)-q^{(L)}(0)`$ has the form $`A_\pm |ϵ|^{\lambda _\pm }`$, with values $`A`$ and $`\lambda `$ that depend on the sign of $`ϵ`$. Within MKA, it is $`\lambda _+\approx 0.62`$, and $`\lambda _{-}\approx 0.57`$. For a cubic lattice, $`\lambda _+\approx 0.45`$, and $`\lambda _{-}\approx 0.40`$. For finite temperatures and small systems, there are corrections to this asymptotic behaviour due to finite-size effects, which replace the nonanalyticity at $`ϵ=0`$ with a linear behaviour for small $`|ϵ|`$, and due to the influence of the critical fixed point, where the leading behaviour is linear in $`ϵ`$ (see below). As we have argued in , the influence of the critical fixed point changes the apparent value of the low-temperature exponents for the system sizes studied in Monte-Carlo simulations and the MKA. The data shown in with an apparent value of 0.5 for $`\lambda _\pm `$ are fully compatible with the above predictions of the droplet picture. There is no indication of a jump at $`ϵ=0`$ in $`q^{(L)}(ϵ)`$, which would be the signature of RSB. For the MKA, the apparent exponent at $`0.7T_c`$ is close to 1 for $`L\le 16`$, leading to the “trivial” behaviour found in . However, at lower temperatures, for the same small system sizes the above-mentioned nontrivial features predicted by the droplet picture become clearly visible, as shown in Fig. 8. Let us conclude this section with a discussion of the link overlap at the critical temperature. 
Fig. 9 shows our result in MKA. Clearly, the curves are linear at $`ϵ=0`$, indicating that the regular part dominates over the singular, critical contribution. This conclusion is confirmed by studying the scaling dimension of $`ϵ`$ at $`T_c`$. Iterating the recursion relations for the coupling constants, we find that under a change in length scale, $`x\to x/b`$, we obtain $`ϵ\to b^\varphi ϵ`$ with $`\varphi \approx 1.14`$. Now, $`q^{(L)}(ϵ)`$ can be obtained from the free energy via the relation $$q^{(L)}(ϵ)=(1/3N)\partial (\mathrm{ln}Z)/\partial ϵ,$$ implying a scaling behaviour $`q^{(L)}\to b^{\varphi -d}q^{(L)}`$. Substituting $`b`$ with $`ϵ^{-1/\varphi }`$ then gives the relation $$q^{(L)}\sim ϵ^{(d-\varphi )/\varphi }\sim ϵ^{1.6}.$$ Compared to the linear regular part, this singular dependence cannot be seen for small $`ϵ`$. ## V Finite size effects As we have argued throughout this paper and in earlier work , finite size effects appear to be large for the Ising spin glass in three dimensions. To understand this behaviour, we iterated the MK recursion relations starting at various temperatures below $`T_c`$. In Fig. 10 the solid lines show the data for $`J_n^2`$ as a function of $`L=2^n`$, where $`n`$ is the number of iterations of the MK recursion relations. For large $`L`$, one expects $`J_n^2\propto 2^{2n\theta }`$. However, it is apparent that even at $`T\approx 0.7T_c`$ one needs system sizes $`L\approx 100`$ to see this behaviour. Because the change in the slope $`d\mathrm{ln}J^2/d\mathrm{ln}L`$ is so slow, $`J^2(L)`$ appears to be described by a power law with some effective exponent over small windows of one decade in $`L`$, just as we found for $`P^{(P)}(0)`$ in . To understand the large crossover regime, we consider an expansion around the zero-temperature fixed point, where the effective temperature $`T=1/J`$ at a length scale $`L`$ can be written as $$dT/d\mathrm{ln}L=-\theta T+AT^3+BT^5+\mathrm{\dots },$$ (6) where $`A`$, $`B`$, … are constants and even order terms are absent because $`T\to -T`$ (or $`\{J_{ij}\}\to \{-J_{ij}\}`$) is a symmetry of the Hamiltonian. 
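Integrating the truncated flow of Eq. (6) numerically already illustrates how slow the crossover is. The sketch below uses hand-picked constants $`A`$ and $`B`$, chosen only so that the flow vanishes at $`T_c=1`$ (illustrative values, not the fitted ones of the paper), and tracks the effective exponent $`d\mathrm{ln}J^2/d\mathrm{ln}L`$:

```python
import math

theta, Tc = 0.25, 1.0
# illustrative choice: pick A, B so that the flow of Eq. (6) has its fixed point at Tc
A, B = 0.45, theta - 0.45       # then -theta*T + A*T^3 + B*T^5 = 0 at T = Tc = 1
assert abs(-theta * Tc + A * Tc**3 + B * Tc**5) < 1e-12

def flow(T):
    # truncated flow equation dT/dlnL = -theta*T + A*T^3 + B*T^5
    return -theta * T + A * T**3 + B * T**5

def run(T, lnL, steps=4000):
    # RK4 integration of the flow from scale ln L = 0 to ln L
    h = lnL / steps
    for _ in range(steps):
        k1 = flow(T); k2 = flow(T + h*k1/2); k3 = flow(T + h*k2/2); k4 = flow(T + h*k3)
        T += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return T

# effective exponent d ln J^2 / d ln L (with J = 1/T) over successive decades of L
T0 = 0.7 * Tc
effs = []
for L in (10.0, 100.0, 1000.0):
    T_L = run(T0, math.log(L))
    effs.append(2 * (theta - A * T_L**2 - B * T_L**4))
print(effs)   # creeps up towards the asymptotic value 2*theta = 0.5 only slowly
```

Over any single decade of $`L`$ the effective exponent is nearly constant, which is exactly why the data appear to follow a power law with a temperature-dependent effective exponent.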
Now, $`d=3`$ is close to the lower critical dimension for Ising spin glasses (so that $`T_c`$ is in some sense small), and one might expect that truncating the above equation after the first few terms will give a good approximation for $`T`$ or equivalently $`J`$ throughout the low temperature phase. We now show that this is indeed the case: keeping terms up to $`T^5`$ gives a good description of the MK data of Fig. 10. The analysis is as follows: At $`T_c`$, $`dT/d\mathrm{ln}L=0`$ so that $`\theta =AT_c^2+BT_c^4`$. For small deviations away from $`T_c`$, i.e. $`T=T_c+\delta T`$, the correlation length exponent $`\nu `$ is defined via $`d\mathrm{ln}(\delta T)/d\mathrm{ln}L=1/\nu `$, leading to $`1/\nu =2\theta +2BT_c^4`$. Solving for $`A`$ and $`B`$ in terms of $`\nu `$, $`\theta `$ and $`T_c`$, we find $`A=(2\theta -1/2\nu )/T_c^2`$ and $`B=(1/\nu -2\theta )/2T_c^4`$. Now we substitute these expressions into the above equation and integrate from a length scale $`L_0`$ (temperature $`T_0`$) to a length scale $`L`$ (temperature $`T_L`$) to get an equation relating $`T_L`$ to $`T_0`$. Then using $`T_L=1/J_L`$ gives the corresponding equation for $`J_L`$. Setting $`L/L_0=2^n`$ and $`2\theta \nu =x`$, we find $$\frac{(J_n^2-J_c^2)^x}{\left[J_n^2-\frac{x-1}{x}J_c^2\right]^{(x-1)}}=\frac{(J_0^2-J_c^2)^x}{\left[J_0^2-\frac{x-1}{x}J_c^2\right]^{(x-1)}}2^{2n\theta },$$ (7) where $`J_0`$ is the coupling at the starting point $`n=0`$ and $`J_n`$ is the coupling after $`n`$ iterations. To test whether Eq. (7) is a good approximation, we have inserted the values $`J_n^2`$ obtained from the MK recursion relations into the left-hand side of Eq. (7). We have chosen $`\nu =2.8`$ and $`\theta =0.25`$, in agreement with . The results are the data points given in Fig. 10. They satisfy a power law with exponent $`2\theta `$ and a prefactor $`(J_0^2-J_c^2)^x/(J_0^2-\frac{x-1}{x}J_c^2)^{(x-1)}`$, showing that Eq. 
(7) is indeed an appropriate description of the growth of the coupling throughout the low temperature phase. Now, for large $`J_n`$ and $`J_0`$, Eq. (7) reduces to pure power-law behaviour, viz. $`J_n^2\approx J_0^22^{2n\theta }`$, and the crossover length for the different temperatures can be read off as the length for which $`(J_n^2-J_c^2)^{2\theta \nu }/(J_n^2-(2\theta \nu -1)J_c^2/2\theta \nu )^{(2\theta \nu -1)}`$ becomes of the same order as $`J_n^2`$. Beyond the crossover length, we should see the correct low-temperature exponents. Thus, we have seen that the three-dimensional spin glass in MKA can be well described by an approximation that is valid close to the lower critical dimension and that shows explicitly that crossover scales are large. The analysis described above is quite general and should be applicable also to the Ising spin glass on a cubic lattice. There is one caveat, however: the analysis we have carried out is valid for the couplings alone. Other quantities (for example $`P(q^{(P)})`$ or $`P(q^{(L)})`$) would have to be studied separately, and it is possible that the crossover lengths would be somewhat different for different quantities. ## VI Conclusion In this paper, we have studied the link overlap between two identical replicas of a three-dimensional Ising spin glass in the Migdal-Kadanoff approximation. The width of the link overlap distribution decreases to zero with increasing system size at $`T_c`$ as well as in the low-temperature phase. These findings are in agreement with the predictions of the droplet picture. For system sizes similar to the ones used in Monte Carlo simulations of a cubic lattice, we find the same large sample-to-sample fluctuations and asymmetric curve shapes as reported from those simulations. They must be interpreted as a finite-size effect and cannot be taken as an indicator for replica-symmetry breaking. 
The only reliable indicator for RSB would be a width of the overlap distribution that does not decrease with increasing system size. Similarly, the link overlap in the presence of a weak coupling between the two replicas shows, in MKA, the singular behaviour in the spin glass phase predicted by the droplet picture. Data from Monte Carlo simulations are also in full agreement with the droplet picture. The RSB picture predicts a jump in the mean value of the link overlap at zero coupling strength that is not visible in the Monte Carlo simulation data published so far. We have reproduced phenomenologically the influence of the critical point on the growth of the coupling constants using an approximation that is valid close to the lower critical dimension. This gives us a direct estimate of the length scale at any given temperature beyond which one needs to go in order to see zero-temperature (droplet) scaling without crossover effects from critical point behaviour intruding too strongly. This crossover effect often seems to be overlooked in the literature. For example, Komori, Yoshino and Takayama, in a numerical simulation of the Ising spin glass at a temperature of $`0.84T_c`$, found that critical scaling of the dynamics worked well: they found that correlation data at time $`t`$ could be collapsed for systems of linear dimension $`L`$ for values of $`L`$ up to 7, by plotting against the variable $`t/L^{z(T)}`$, where $`z(T)`$ is similar to the critical point dynamical exponent. This behaviour was interpreted by Marinari, Parisi and Ruiz-Lorenzo as evidence against droplet scaling, but it is clear from our work that for the system sizes studied at temperatures so close to $`T_c`$ droplet scaling would be quite unobservable and that critical point scaling should indeed work quite well. We therefore conclude that the droplet picture, combined with finite-size effects, can fully explain all data for the link overlap in the Ising spin glass. 
There is no evidence for the presence of RSB. ###### Acknowledgements. This work was supported by EPSRC Grants GR/K79307 and GR/L38578.
no-problem/9905/cond-mat9905310.html
ar5iv
text
# References Optical spectra measured on cleaved surfaces of double-exchange ferromagnet La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> Koshi Takenaka,<sup>1</sup> Kenji Iida, Yuko Sawaki, Shunji Sugai, Yutaka Moritomo, and Arao Nakamura Department of Physics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan CIRSE, Nagoya University, Chikusa-ku, Nagoya 464-8603, Japan <sup>1</sup>author to whom correspondence should be addressed institution: Department of Physics, Faculty of Science, Nagoya University address: Furo-cho, Chikusa-ku, Nagoya 464-8602, Japan E-mail: k46291a@nucc.cc.nagoya-u.ac.jp Fax: +81-52-789-2933 Abstract > Optical reflectivity spectra were measured on cleaved surfaces of La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> single crystals ($`x`$$`=`$0.175, $`T_\mathrm{C}`$$`=`$283 K) over a temperature range of 10–295 K. The optical conductivity $`\sigma \left(\omega \right)`$ shows, while keeping its single-component nature, an incoherent-to-coherent crossover with increase of the electrical conductivity. The $`\sigma \left(\omega \right)`$ spectrum of the low-temperature ferromagnetic-metallic phase (10 K) exhibits a pronounced Drude-like component with large spectral weight, contrary to the previous result. The present result indicates that the optical spectrum of the manganites is sensitive to the condition of sample surfaces. PACS numbers: 78.30.-j, 71.27.+a, 71.30.+h Substance Classification: S1.2, S10.15 Optical reflectivity studies are important from both fundamental and practical viewpoints, since they enable us not only to deduce the dielectric function, but also to examine separately two key elements of the charge transport, i.e., the carrier density (or Drude weight) and the scattering time. The charge transport is one of the central concerns for both camps. 
However, for the case of the double-exchange ferromagnetic-metal manganites, which have recently attracted renewed interest because of the intriguing phenomenon of colossal magnetoresistance (CMR) , the previous results of the reflectivity studies are rather confused and controversial . We report the optical reflectivity spectra $`R(\omega )`$ of La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> ($`x`$$`=`$0.175, $`T_\mathrm{C}`$$`=`$283 K) measured on cleaved surfaces of single crystals over a wide temperature range (10–295 K). The optical conductivity $`\sigma (\omega )`$ exhibits a single component with large spectral weight, contrary to the previous results. The present result indicates that the optical spectrum of the manganites is very sensitive to the condition of the surface, which can partly explain the above confusion . Single crystals of La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> were grown by a floating zone method . The size of the cleaved surface was at most 1.0$`\times `$1.0 mm<sup>2</sup>. Temperature-dependent optical reflectivity was measured using a Fourier-type interferometer (0.02–1.6 eV) and a grating spectrometer (0.8–6.6 eV). The experimental error of reflectivity, $`\mathrm{\Delta }R`$, determined by the reproducibility, is less than 1% for the far-IR to visible region and less than 2% for the ultraviolet region. Figure 1 shows the temperature-dependent (10–295 K) reflectivity spectra measured on the cleaved surfaces of La<sub>0.825</sub>Sr<sub>0.175</sub>MnO<sub>3</sub> single crystals on a logarithmic energy scale in the range 0.01–6.0 eV. The Curie temperature $`T_\mathrm{C}`$ measured by dc resistivity was 283 K. With decreasing temperature, the reflectivity spectrum $`R(\omega )`$ changes gradually from insulating to metallic behavior: the reflectivity edge at about 1.6 eV becomes sharpened, though its position does not shift appreciably, and the optical phonons are screened, corresponding to the increase of electrical conductivity. 
Below 100 K, the optical phonons almost fade away and the spectrum is characterized by a sharp edge and a large spectral weight below it. The present spectrum measured on cleaved surfaces is much higher in the mid-IR-to-visible region compared with the previous results measured on polished surfaces . In order to make more detailed discussions, we deduce the optical conductivity $`\sigma (\omega )`$ (Fig. 2) from $`R(\omega )`$ shown in Fig. 1 via a Kramers-Kronig transformation. We measured reflectivity spectra at each temperature below 6.6 eV; above 6.6 eV we assumed the data measured at room temperature (295 K) using a Seya-Namioka type spectrometer for vacuum-ultraviolet (VUV) synchrotron radiation up to 40 eV at the Institute for Molecular Science, Okazaki National Research Institutes. Such a procedure is possible and reasonable because the variation of $`R(\omega )`$ at 6.0–6.6 eV is negligibly small and the data below 6.6 eV could be connected smoothly with the VUV data. For the extrapolation at the lower-energy part, we assumed a constant $`R(\omega )`$ for the insulating phase ($`T`$$`=`$295 K). For the metallic phase ($`T`$$`\le `$278 K), we make a smooth extrapolation using a Hagen-Rubens formula. The extrapolation parameter $`\sigma (0)`$ is roughly in accord with the dc value . Variation of the extrapolation procedures had a negligible effect on $`\sigma (\omega )`$ in the energy region of interest (0.03–6.0 eV). The optical conductivity $`\sigma (\omega )`$ shows, while keeping its single-component nature, an incoherent-to-coherent crossover with decreasing temperature: Above $`T_\mathrm{C}`$, the $`\sigma (\omega )`$ spectrum is characterized solely by a broad peak centered at $`\sim `$1.5 eV. In the temperature range 278–220 K, this broad peak gradually develops and its position shifts downwards as $`T`$ decreases, but a Drude-like component is not confirmed in the present experiment, though the material is ferromagnetic-metallic. 
Below 155 K, on the other hand, the spectrum exhibits a single Drude-like component centered at $`\omega `$$`=`$0, and it becomes narrower as $`T`$ decreases without an increase of spectral weight. The integrated spectral weight defined as $$N_{\mathrm{eff}}\left(\omega \right)=\frac{2m_0V}{\pi e^2}\int _0^\omega \sigma \left(\omega ^{\prime }\right)𝑑\omega ^{\prime }$$ $`\left(1\right)`$ ($`m_0`$: a bare electron mass; $`V`$: the unit-cell volume) represents an effective density of carriers contributing to optical transitions below a certain cutoff energy $`\mathrm{\hbar }\omega `$ (inset of Fig. 2). The characteristic single component, which shows the incoherent-to-coherent crossover, consists of the spectral weight transferred from the two bands at $`\sim `$3 eV and at $`\sim `$5 eV. The present result suggests that the exchange-split down-spin band consists of two bands. Because the curves of $`N_{\mathrm{eff}}\left(\omega \right)`$ merge into almost a single line above 6 eV, the down-spin band does not seem to split into more than two bands. Imperfect convergence is most likely due to the increasing experimental error in $`\sigma \left(\omega \right)`$ with $`\omega `$. This partly justifies our procedure in which the data above 6.6 eV measured at room temperature are connected with the data measured at each temperature. Finally, we show that the discrepancy between our result and the previous result may originate from the sensitivity of the spectrum to the condition of the surface. In Fig. 3 are shown the room-temperature (295 K) reflectivity spectra measured on a cleaved surface (solid line) as well as those measured on a surface polished by lapping films with diamond powder (dotted line). It is found that polishing dramatically alters $`R(\omega )`$ for the ferromagnetic metal La<sub>0.70</sub>Sr<sub>0.30</sub>MnO<sub>3</sub> \[Fig. 3(b)\] whereas it affects only slightly the spectrum of the undoped LaMnO<sub>3</sub> \[Fig. 3(a)\]. The previous data closely resemble the spectrum measured on the polished surface. 
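The spectral-weight integral of Eq. (1) obeys a simple sum rule that can be sketched numerically. The Drude form, the dimensionless units ($`e=m_0=V=1`$) and the parameter values below are illustrative assumptions (the carrier number merely echoes the doping level $`x=0.175`$), not the measured $`\sigma (\omega )`$:

```python
import numpy as np

# Drude conductivity sigma(w) = sigma0 / (1 + (w*tau)^2) with sigma0 = n e^2 tau / m0,
# in units where e = m0 = V = 1; n_carr is the carrier number per unit cell
n_carr, tau = 0.175, 50.0
omega = np.linspace(0.0, 2000.0, 2_000_001)
sigma = n_carr * tau / (1 + (omega * tau)**2)

# Eq. (1): N_eff(w) = (2 m0 V / pi e^2) * cumulative integral of sigma (trapezoid rule)
h = omega[1] - omega[0]
N_eff = (2 / np.pi) * np.cumsum((sigma[:-1] + sigma[1:]) / 2) * h

# the fully integrated Drude weight recovers the carrier number per cell
assert abs(N_eff[-1] - n_carr) < 1e-3
```

Truncating the integral at a finite cutoff energy, as in the inset of Fig. 2, counts only the weight below that energy, which is why the curves for different temperatures need a high cutoff before they merge.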
The damage to the surface probably localizes the carriers. However, light with long wavelengths penetrates into the bulk, and hence $`R(\omega )`$ recovers the intrinsic spectrum, which is consistent with the observation that the discrepancy almost disappears below 0.03 eV . The “small Drude weight” may originate from the above restoration process \[inset of Fig. 3(b)\]. In summary, we have reported the temperature-dependent optical spectra of the prototypical double-exchange system La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> measured on cleaved surfaces. The optical conductivity spectra are characterized by a single component which shows an incoherent-to-coherent crossover with increasing electrical conductivity, and a pronounced Drude-like component is observed for the ferromagnetic-metallic phase at low temperatures. The present result indicates that the optical spectrum of the doped manganites is sensitive to the condition of the surface. The charge dynamics of the doped manganites might be extremely sensitive to static imperfections. We would like to thank M. Kamada, M. Hasumoto, and R. Yamamoto for their help with the experiments. One of us (K.T.) is also grateful to N. E. Hussey for a critical reading of the manuscript. This work was partly supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, and Culture of Japan and by CREST of JST. Figure Captions Fig. 1 Temperature-dependent optical reflectivity spectra measured on cleaved surfaces of La<sub>0.825</sub>Sr<sub>0.175</sub>MnO<sub>3</sub> ($`T_\mathrm{C}`$$`=`$283 K). Fig. 2 Temperature-dependent optical conductivity spectra of La<sub>0.825</sub>Sr<sub>0.175</sub>MnO<sub>3</sub> deduced from the reflectivity spectra measured on the cleaved surfaces (shown in Fig. 1) via a Kramers-Kronig transformation. Inset: Effective carrier number per Mn atom $`N_{\mathrm{eff}}^{}\left(\omega \right)`$ defined as the integral of $`\sigma \left(\omega \right)`$. Fig. 
3 Room-temperature (295 K) optical reflectivity spectra measured on the cleaved (solid line) and polished (dotted line) surfaces: (a) LaMnO<sub>3</sub> and (b) La<sub>0.70</sub>Sr<sub>0.30</sub>MnO<sub>3</sub>. Inset: Optical conductivity spectra deduced from the Kramers-Kronig transformation of the reflectivity spectra shown in Fig. 3(b).
# Giant Outbursts of Luminous Blue Variables and the Formation of the Homunculus Nebula Around 𝜂 Carinae ## 1 Introduction Current stellar evolution models predict that Galactic stars initially more massive than 25–30$`M_{}`$ lose more than 50% of their initial mass, and stars above 30–35$`M_{}`$ more than 80% (Maeder 1992, Woosley, Langer & Weaver 1993). Much of this mass loss is thought to occur during a short-lived and highly unstable stage preceding the Wolf-Rayet stage, that may be observed in the form of the luminous blue variables (LBVs; cf. Maeder 1989, Pasquali et al. 1997), which are located close to the Humphreys-Davidson (HD) limit in the HR diagram, beyond which no normal stars are observed (Humphreys & Davidson 1979). It is strongly debated what produces the giant LBV outbursts observed in these stars (Langer et al. 1994, Nota & Lamers 1997). Among the potential mechanisms (see Humphreys & Davidson 1994, and references therein), the idea that massive stars reach their Eddington luminosity close to the HD limit is particularly appealing since the Eddington limit appears to be located very close to the HD limit in the HR diagram (cf. Davidson 1971, Lamers & Noordhoek 1992). Time-varying massive star winds are able to produce circumstellar nebulae with kinematic properties similar to those observed around LBVs (Frank, Balick & Davidson 1995; Nota et al. 1995; García-Segura, Mac Low & Langer 1996, Frank, Ryu, & Davidson 1998). The morphology of the nebula around the extraordinary star $`\eta `$Carinae may give essential clues for the understanding of these giant outbursts. During its eruption from 1840 to 1860 A.D., $`\eta `$Car — today a telescopic object — was the second brightest star in the sky (van Genderen & The 1984). Recent observations by the Hubble Space Telescope have revealed in spectacular detail the resulting circumstellar nebula (Humphreys & Davidson 1994, Morse et al. 
1998), the hourglass-shaped inner part of which is known as the Homunculus (Meaburn, Wolstencroft & Walsh 1987, Allen 1989). This is now the best studied example of the bipolar structures often observed around LBVs such as $`\eta `$Car (Nota et al. 1995). ## 2 Winds of rotating stars near the Eddington limit It has been proposed (e.g. Maeder 1989) that sufficiently luminous stars, after core hydrogen exhaustion, may arrive at or exceed their Eddington limit $$\mathrm{\Gamma }\equiv L/L_{\mathrm{Edd}}=1,$$ (1) as their Eddington luminosity $`L_{\mathrm{Edd}}=4\pi cGM/\kappa `$ drops below their actual luminosity $`L`$ as the opacity coefficient $`\kappa `$ increases. The theory of radiation-driven stellar winds predicts that the mass loss rate will increase as $`\dot{M}\propto (1-\mathrm{\Gamma })^{-\mu }`$ with $`\mu >0`$ for stars approaching the Eddington limit (cf. Castor, Abbott & Klein 1975). This mass loss rate formally diverges for $`\mathrm{\Gamma }\to 1`$ (though see Owocki & Gayley 1997), suggesting that the strong mass loss associated with $`\mathrm{\Gamma }\approx 1`$ may be related to giant outbursts of LBVs. However, rotation reduces the luminosity required for all external forces to balance each other at the stellar surface (Langer 1997). The Eddington limit $`\mathrm{\Gamma }<1`$ should then be replaced by a criterion that we will call the $`\mathrm{\Omega }`$-limit, $$\mathrm{\Omega }\equiv v_{\mathrm{rot}}/v_{\mathrm{crit}}<1,$$ (2) with $`v_{\mathrm{crit}}^2\equiv v_{\mathrm{esc}}^2/2=GM(1-\mathrm{\Gamma })/R`$, $`M`$ and $`R`$ being the stellar mass and radius, and $`v_{\mathrm{esc}}`$ being the polar escape velocity. $`\mathrm{\Omega }=1`$ implies that centrifugal and radiation force balance gravity at the equator, while at higher latitudes gravity still dominates. 
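As a rough numerical illustration of these definitions, the Eddington factor and the critical rotation speed can be evaluated for the stellar parameters adopted later in the text (M = 80 solar masses, log L/L_sun = 6.4, R = 210 solar radii); the hydrogen-rich electron-scattering opacity used below is an assumed value, not one quoted in the paper:

```python
import math

# Physical constants (SI)
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
L_SUN = 3.828e26       # W
R_SUN = 6.957e8        # m
KAPPA_ES = 0.034       # m^2/kg, assumed H-rich electron-scattering opacity

def eddington_factor(mass_msun, log_l_lsun, kappa=KAPPA_ES):
    """Gamma = L / L_Edd with L_Edd = 4 pi c G M / kappa (Eq. 1)."""
    l_edd = 4.0 * math.pi * C * G * mass_msun * M_SUN / kappa
    return 10.0 ** log_l_lsun * L_SUN / l_edd

def v_crit_kms(mass_msun, radius_rsun, gamma):
    """Critical rotation speed from v_crit^2 = G M (1 - Gamma) / R, in km/s."""
    v2 = G * mass_msun * M_SUN * (1.0 - gamma) / (radius_rsun * R_SUN)
    return math.sqrt(v2) / 1.0e3

gamma = eddington_factor(80.0, 6.4)     # ~0.8 for the adopted parameters
vcrit = v_crit_kms(80.0, 210.0, gamma)  # ~100 km/s
```

Because the star sits close to (but below) Γ = 1, the critical rotation speed is strongly reduced compared to a non-radiating star of the same mass, which is what makes the Ω-limit reachable.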
The feedback of rotation on the local surface luminosity is neglected here, since according to the generalized von Zeipel theorem (Kippenhahn 1977) the radiation flux at the equator may be either reduced or enhanced, depending on the internal rotation law; this may have been overlooked in the recent criticism of the $`\mathrm{\Omega }`$-limit by Glatzel (1998). Here, we apply the result of Friend & Abbott (1986) that the mass-loss rate of rotating hot stars depends on $`\mathrm{\Omega }`$ as $`\dot{M}\propto (1-\mathrm{\Omega })^{-\nu }`$ with $`\nu \approx 0.43`$. We divide the evolution of a star approaching the $`\mathrm{\Omega }`$-limit — that is, going through an outburst — into three phases. In the first phase, before the star reaches the $`\mathrm{\Omega }`$-limit ($`\mathrm{\Omega }<1`$), it has the fast, energetic wind expected of a luminous blue star. In the second phase, reaching the $`\mathrm{\Omega }`$-limit ($`\mathrm{\Omega }\to 1`$) has three consequences for the wind: the mass loss rate is much higher than before; the mass flux increases strongly at latitudes close to the equator; and the bulk of the wind is slow since the equatorial escape velocity is almost zero. In the third phase the outburst is over, the stellar radius has decreased, and the configuration is similar to the first phase. The smaller value of $`\mathrm{\Gamma }`$ before and after the outburst has two consequences: larger wind velocities due to larger escape velocities, and smaller values of $`\mathrm{\Omega }`$ leading to more spherical winds. Computing the latitudinal dependence of the wind properties of a star close to critical rotation ideally requires multi-dimensional models of the star and its outflowing atmosphere, which are not available. However, Langer (1997, 1998) argued that the stellar flux and the radius might still vary only weakly from the pole to the equator in very luminous stars. 
Therefore, we applied equations similar to those found by Bjorkman & Cassinelli (1993, BC) for winds of rotating stars in the limit of large distance from the star: $$v_{\mathrm{\infty }}(\theta )=\zeta v_{\mathrm{esc}}\left(1-\mathrm{\Omega }\mathrm{sin}\theta \right)^\gamma ,$$ (3) $$(4\pi r^2\rho )_{\mathrm{\infty }}(\theta )=\frac{\alpha }{2}\delta \dot{M}_0\left(1-\mathrm{\Omega }\mathrm{sin}\theta \right)^{-\xi }/v_{\mathrm{\infty }}(\theta ),$$ (4) where we set the parameters defined in BC to $`\zeta =1`$, $`\gamma =0.35`$, and $`\xi =0.43`$. The correction factor $`\delta `$ is introduced to ensure that the total stellar mass loss rate $`\dot{M}_0`$ obeys $`\dot{M}_0=\int v_{\mathrm{\infty }}(\theta )\rho (\theta )r^2\mathrm{sin}\theta d\theta d\varphi `$ at the inner boundary of our grid (cf. Table 1). $`v_{\mathrm{\infty }}`$ is the terminal wind velocity, and $`(4\pi r^2\rho )_{\mathrm{\infty }}`$ the terminal wind density times $`4\pi r^2`$, as a function of the polar angle $`\theta `$. The quantity $`\alpha `$ is defined by $$\alpha =\left(\mathrm{cos}\varphi ^{\prime }+\mathrm{cot}^2\theta \left(1+\gamma \frac{\mathrm{\Omega }\mathrm{sin}\theta }{1-\mathrm{\Omega }\mathrm{sin}\theta }\right)\varphi ^{\prime }\mathrm{sin}\varphi ^{\prime }\right)^{-1},$$ (5) with $`\varphi ^{\prime }=\mathrm{\Omega }\mathrm{sin}\theta v_{\mathrm{crit}}/(2\sqrt{2}v_{\mathrm{\infty }}(\theta ))`$. Eq. (5) differs from the corresponding quantity defined by BC in their implicit formula (26) (cf. also Owocki et al. 1994, Ignace et al. 1996). This difference originally arose through a misinterpretation of BC’s equations, i.e., in equations (3) to (5) and in the equation defining $`\varphi ^{\prime }`$, the $`\theta `$s were taken as $`\theta _0`$s, the initial co-latitude of the streamline. 
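The latitude dependence of the outflow can be illustrated with the velocity law alone. The sketch below evaluates Eq. (3) with the quoted parameters ζ = 1, γ = 0.35 and near-critical rotation; the escape speed is an arbitrary unit, so only the pole-to-equator ratio is meaningful:

```python
import math

def terminal_velocity(theta, v_esc, omega_ratio, zeta=1.0, gamma=0.35):
    """Latitude-dependent terminal wind speed, Eq. (3):
    v_inf(theta) = zeta * v_esc * (1 - Omega*sin(theta))**gamma,
    with theta the polar angle (0 at the pole, pi/2 at the equator)."""
    return zeta * v_esc * (1.0 - omega_ratio * math.sin(theta)) ** gamma

V_ESC = 100.0   # arbitrary velocity unit
OMEGA = 0.98    # near-critical rotation, as in the preferred model

v_pole = terminal_velocity(0.0, V_ESC, OMEGA)
v_equator = terminal_velocity(math.pi / 2.0, V_ESC, OMEGA)
```

At Ω = 0.98 the equatorial wind is roughly four times slower than the polar wind, so the later fast wind expands far more easily over the poles, which is the origin of the bipolar lobes discussed below.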
We note that since for wind compressed zone models near the equator it is $`\theta _0\approx \theta `$, our models are similar to wind compressed zone models when $`\mathrm{\Omega }-\mathrm{\Omega }_{\mathrm{th}}\ll 1`$, where $`\mathrm{\Omega }_{\mathrm{th}}`$ is the threshold value for the formation of a wind-compressed disk. Since $`\varphi ^{\prime }<\pi /2`$ for $`\mathrm{\Omega }\le 0.995`$, our formulation has the advantage of avoiding the formation of wind compressed disks for large $`\mathrm{\Omega }`$, a structure which cannot be numerically resolved in our calculations, whose properties cannot be well predicted, and whose very existence has even been questioned (Owocki et al. 1994, 1996). At the same time, wind density and velocity distributions obtained from our approach are similar to those derived from the formalism of BC, provided that $`\mathrm{\Omega }_{\mathrm{th}}\ll 1`$ and we choose $`\mathrm{\Omega }_{\mathrm{BC}}=\mathrm{\Omega }-\mathrm{\Omega }_{\mathrm{th}}`$, where $`\mathrm{\Omega }_{\mathrm{BC}}`$ is the value to be inserted in BC’s equations. We shall see that the exact nature of the latitude dependence of the wind properties is not essential for our main results, as long as a dense wind with an enhanced mass loss rate close to the equator occurs between two phases of an energetic, more or less spherical wind. We simulate the LBV outburst phenomenon by assuming the wind properties to be constant during each phase. For the pre-outburst wind, which is only used to initialize the numerical grid and to which our results are insensitive, we took a mass-loss rate of $`\dot{M}=10^{-3}M_{}\mathrm{yr}^{-1}`$, a final wind velocity $`v_{\mathrm{\infty }}=450`$ km s<sup>-1</sup> and $`\mathrm{\Omega }=0.53`$. For the post-outburst wind, we used two different sets of parameters. 
The one that we prefer appears to reasonably reproduce the observed morphology of the Homunculus, with $`\dot{M}=1.7\times 10^{-4}M_{}\mathrm{yr}^{-1}`$, $`v_{\mathrm{\infty }}=1800`$ km s<sup>-1</sup>, and $`\mathrm{\Omega }=0.13`$. For comparison we also computed a model that represents $`\eta `$ Car’s presently observed wind parameters of $`\dot{M}=3\times 10^{-3}M_{}\mathrm{yr}^{-1}`$, $`v_{\mathrm{\infty }}=800`$ km s<sup>-1</sup>, and $`\mathrm{\Omega }=0.3`$ (Davidson et al. 1995). The high wind velocity of our preferred model, $`v_{\mathrm{\infty }}=1800`$ km s<sup>-1</sup>, corresponds to a stellar radius $`R\approx 21R_{}`$ or $`\mathrm{log}T_{\mathrm{eff}}\approx 4.7`$ for an O star wind with $`\zeta =3`$ at $`\mathrm{log}L/L_{}=6.4`$, implying that the star strongly contracted after the episode of mass loss. The parameters of the outburst wind largely determine the morphology of the resulting nebula. To compute these parameters, we assumed the following stellar properties: $`M=80M_{}`$, $`\mathrm{log}L/L_{}=6.4`$, and $`\mathrm{log}T_{\mathrm{eff}}=4.2`$, implying $`R=210R_{}`$, in agreement with observational estimates (cf. Humphreys & Davidson 1994). The final wind velocity follows from the escape velocity for a specified value of $`\mathrm{\Gamma }`$. For our preferred model we took a mass loss rate of $`\dot{M}=7\times 10^{-3}M_{}\mathrm{yr}^{-1}`$ and obtained a Homunculus mass of roughly $`0.15M_{}`$ (van Genderen & The 1984), while for our model assuming present-day wind parameters, we used an increased outburst mass loss rate of $`5\times 10^{-2}M_{}\mathrm{yr}^{-1}`$ such that we obtained a nebula mass of $`1M_{}`$ (Humphreys & Davidson 1994). ## 3 Hydrodynamic models We perform two-dimensional hydrodynamic simulations of the wind interaction; first results have already been reported by García-Segura, Langer & Mac Low (1997). We use the hydrocode ZEUS-3D developed by M. L. Norman and the Laboratory for Computational Astrophysics. 
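Before turning to the hydrodynamics, the quoted nebula masses can be checked by multiplying the outburst mass-loss rates by the 20-yr duration of the 1840-1860 eruption mentioned in the introduction:

```python
ERUPTION_YEARS = 1860 - 1840  # duration of the great eruption, yr

def nebula_mass(mdot_msun_per_yr, duration_yr=ERUPTION_YEARS):
    """Mass shed during the outburst, in solar masses."""
    return mdot_msun_per_yr * duration_yr

homunculus_mass = nebula_mass(7e-3)    # preferred model: ~0.15 M_sun quoted
heavy_nebula_mass = nebula_mass(5e-2)  # present-day-wind model: ~1 M_sun quoted
```

Both products reproduce the quoted nebula masses (0.14 and 1.0 solar masses), confirming that the outburst rates are simply the quoted masses spread uniformly over the eruption.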
ZEUS-3D is a finite-difference, fully explicit, Eulerian code descended from the code described by Stone & Norman (1992). We used spherical coordinates for our simulations, with a symmetry axis at the pole, and reflecting boundary conditions at the equator and the polar axis. See García-Segura et al. (1996) for further details about our numerical method. Our models have grids of $`200\times 360`$ zones, with a radial extent of 0.125 pc, and an angular extent of $`90^{\circ }`$. The innermost radial zone lies at $`r=9.7\times 10^{15}`$ cm. We compute the hydrodynamic evolution of the circumstellar gas, starting our computations at time $`t=1840`$ yr and running them until $`t=1995`$ yr. Our outburst scenario leads to a characteristic distribution of the circumstellar gas. The initial fast wind blows a stellar wind bubble which forms the background for the subsequent development of the nebula. During the outburst, the wind becomes slow and dense, and the stellar rotation concentrates it toward the equatorial plane (BC; Ignace et al. 1996). When the final fast wind starts in the center of this nebula, it sweeps up the dense wind from the outburst into a thin, radiatively cooled shell that fragments due to dynamical instabilities (García-Segura et al. 1996). The shell expands more easily into the lower-density wind at the poles, producing a double-lobed structure, as shown in Figures 1 and 2. In Figure 1, we show six models computed with various values of $`\mathrm{\Omega }`$ and $`\mathrm{\Gamma }`$ for the outburst wind given in Table 1, and otherwise using the parameters of our preferred model. We find three major results: First, the nebular shape appears nearly independent of the Eddington factor, $`\mathrm{\Gamma }`$. Second, the nebula is strongly confined in the equatorial plane only for the case with nearly critical rotation ($`\mathrm{\Omega }=0.98`$). 
Finally, in this case, we obtain a structure very similar to that of the Homunculus, with two almost spherical lobes and an equatorial density enhancement that is an expanded relic of the outburst wind. In Figure 2 we show the results of a computation identical to that of Figure 1a — in particular with $`\mathrm{\Omega }=0.98`$ — but using twice the radial resolution. We find it striking how well this model, without much fine tuning, not only reproduces the large-scale, bipolar morphology, but also the small-scale turbulent structure seen in high-resolution observations of the Homunculus (Humphreys & Davidson 1994, Morse et al. 1998). We also compute a model at the same resolution as Figure 2 using $`\eta `$ Car’s presently observed wind parameters. We find that the large scale shape of the resulting nebula, shown in Figure 3, is almost identical to that shown in Figure 2, but the higher wind densities cause the wind termination shock to be strongly radiative. This changes the Vishniac instabilities (Vishniac 1983) seen in Figure 2 into ram-ram-pressure instabilities (Vishniac 1994, García-Segura et al. 1996), which have a much spikier morphology and a shorter wavelength, inconsistent with the observed structures. We emphasize that only the properties of the post-eruption wind are responsible for this feature, not the larger shell mass obtained in this case. This result might imply that the current wind is not representative of the wind over the last 140 years. Instead, $`\eta `$ Car might have been smaller and hotter in the recent past, with a post-outburst wind that has become slower and more dense with time. This is consistent with its gradual visual brightening over the last 140 years (Humphreys & Davidson 1994), and suggests that it is evolving towards another giant eruption. ## 4 Discussion Our work extends previous hydrodynamic models for bipolar LBV nebulae (Nota et al. 1995, Frank et al. 
1995, 1998, Dwarkadas & Balick 1998) by relating the outburst to the properties of evolving, massive, post-main sequence stars. In contrast to Nota et al. (1995) and Frank et al. (1995), who concluded that a strong equatorial density enhancement must have existed before the outburst occurred, we obtain the two lobes, including their small-scale structure, and the equatorial density enhancement self-consistently as a consequence of the evolutionary state of the star. Frank et al. (1998) used an arbitrary non-spherical wind during the post-outburst phase to produce a bipolar nebula. However, such a wind will only produce a bipolar shape if the wind termination shock is strongly radiative and therefore momentum conserving, a condition we have shown to be inconsistent with the small-scale morphology of the Homunculus. Dwarkadas & Balick (1998) introduce instead a ring-like density distribution, again without relating it to the underlying star. Our result appears to be quite general, because all stars, even slow rotators, must by definition arrive at critical rotation if they approach their Eddington limit. The strong dependence of the nebula shape on $`\mathrm{\Omega }`$ shown in Figure 1, as well as the clear bipolar nature of virtually all LBV nebulae (Nota et al. 1995), lends strong support to the idea that LBV’s are stars approaching their Eddington limits that reach critical rotation and lose large amounts of mass quickly. In fact, a general mechanism for giant LBV outbursts is needed in order to understand the absence of stars beyond the Humphreys-Davidson limit in the HR diagram, and the bipolar nature of most LBV nebulae. Therefore, even though bipolar nebulae may also form from interacting binary stars (e.g. Han, Podsiadlowski & Eggleton 1995), and binarity has been repeatedly proposed also for $`\eta `$ Car (van Genderen, de Groot & The 1994, Damineli, Conti & Lopes 1997), it appears useful to continue to pursue single star models. 
The conjecture that the Homunculus nebula around $`\eta `$ Car is a paradigm rather than a freak has recently been supported by its strong similarity to the nebula around the LBV HR Carinae, as found by Weis et al. (1997) and Nota et al. (1997). NL is grateful to Gloria Koenigsberger for enabling an extended visit to the UNAM Astronomy Institute, and to many colleagues there for their extremely warm hospitality. We thank J. Fliegner, S. Owocki, M. Peimbert, S. White and in particular J. Bjorkman for useful comments and discussions, and M. L. Norman and the Laboratory for Computational Astrophysics for the use of ZEUS-3D. The computations were performed at the Pittsburgh Supercomputing Center, the Supercomputer Center of the Universidad Nacional Autónoma de México and the Rechenzentrum Garching of the Max-Planck-Gesellschaft. This work was partially supported by the Deutsche Forschungsgemeinschaft through grants La 587/15-1 and 16-1 and by the US National Aeronautics and Space Administration. Figure Captions
# Suppression of magnetic subbands in semiconductor superlattices driven by a laser field. ## I Introduction. The continuous increase of laser intensity is making possible the study of a wide range of non-linear phenomena in atoms, plasmas and solids under the action of intense electromagnetic radiation . In the last decade these studies have been extended to semiconductor nanostructures under intense electric fields, originating from an applied ac voltage or a high-intensity infrared laser. At frequencies of the order of $`1THz`$, typical of free-electron lasers, photon energies are comparable to the energy separation of the electronic levels and nanostructures couple strongly to the electromagnetic field. In the case of semiconductor superlattices of period $`d`$, driven by electromagnetic radiation of frequency $`\omega `$, amplitude $`F`$ and polarized in a direction perpendicular to the interfaces, the I vs V curves are modified and non-linear behaviour is observed. Holthaus has predicted the collapse of minibands and the dynamical localization of electrons, with the corresponding suppression of the coherent tunneling current, if $`F/F_0`$ is a zero of the Bessel function $`J_0`$, where $`F_0=\mathrm{}\omega /ed`$. As a consequence of Tucker’s formula , tunneling processes in which the absorption or emission of N photons takes place are inhibited when $`F/F_0`$ is a zero of $`J_N`$ . Sequential photon-assisted tunneling with negative differential resistivity, dynamical electron localization, formation of electric field domains and absolute negative conductance for low applied voltages have been observed experimentally . In the miniband transport regime the effect of inverse Bloch oscillations has been reported . 
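The localization condition quoted above (F/F0 at a zero of J0) is easy to check with standard-library Python: J0 is evaluated from its integral representation and its first zero is located by bisection.

```python
import math

def bessel_j0(x, n=2000):
    """J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt, midpoint rule."""
    return sum(math.cos(x * math.sin((k + 0.5) * math.pi / n))
               for k in range(n)) / n

def first_j0_zero(lo=2.0, hi=3.0, iters=60):
    """Bisection for the first positive zero of J0 (about 2.4048)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bessel_j0(lo) * bessel_j0(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

z0 = first_j0_zero()  # dynamical localization occurs when F/F0 equals such a zero
```

At this value of F/F0 the effective miniband width vanishes and coherent tunneling through the superlattice is suppressed.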
Many of these phenomena have been theoretically explained and several interesting predictions have been made: oscillations of the superlattice reflection coefficient of the radiation due to dynamical electron localization , step-like behaviour of the interminiband multiphotonic absorption coefficient , spontaneous breaking of temporal symmetry with the appearance of a dc current in the presence of an ac voltage and the occurrence of Rabi oscillations when $`\mathrm{}\omega `$ is close to the energy difference between two minibands . A new kind of microwave frequency multiplier, an ultra-fast detector of $`THz`$ radiation and a Bloch laser based on the driven motion of the electrons in one miniband have been proposed . The energy spectrum of a superlattice of period $`d`$ under the action of a magnetic field $`B`$ perpendicular to the growth direction has also been well studied . Instead of Bloch minibands, one has Landau magnetic subbands, with energies $`E_n(x_0)+\mathrm{}^2k^2/(2m^{})`$, where $`x_0`$ ($`-d/2\le x_0\le d/2`$) is the position of the centre of the cyclotronic orbit, $`k`$ is the momentum along the magnetic field direction and $`m^{}`$ the conduction electron effective mass. Let $`l_B=(\mathrm{}c/eB)^{1/2}`$ be the magnetic length. If the ratio $`d/l_B<1`$, the first subbands are almost flat and those of higher index $`n`$ present a dispersive character. If $`d/l_B>1`$, all subbands are dispersive. The difference between these two regimes is observed in optical intraband absorption at frequencies close to that of cyclotronic resonance $`\omega _c=eB/m^{}c`$ . The dispersion of magnetic subbands makes tunneling in the growth direction possible, which occurs via transitions between consecutive subbands at the anti-crossing points . 
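For the field and period used in the numerical examples later in the paper (B = 16 T, d = 20 nm), the two regimes can be distinguished by computing the magnetic length directly (SI form of the definition above):

```python
import math

HBAR = 1.054571817e-34      # J s
E_CHARGE = 1.602176634e-19  # C

def magnetic_length_nm(b_tesla):
    """l_B = sqrt(hbar / (e B)) in nanometres (SI form of (hbar c / e B)^1/2)."""
    return math.sqrt(HBAR / (E_CHARGE * b_tesla)) * 1e9

l_b = magnetic_length_nm(16.0)  # ~6.4 nm at 16 T
d_over_lb = 20.0 / l_b          # superlattice period over magnetic length
```

Since d/l_B is clearly larger than one here, the parameters of the paper fall in the regime where all subbands are dispersive in the absence of radiation.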
This work considers a new, combined situation, in which a superlattice undergoes the action of a magnetic field perpendicular to its growth direction and intense electromagnetic radiation, linearly polarized in a direction perpendicular to $`\stackrel{}{B}`$. We search for the occurrence of effects such as the collapse of magnetic subbands and electronic localization similar to those predicted for Bloch minibands. The magnetic field introduces a new characteristic frequency (the cyclotron frequency) in the system under consideration, and an enhanced effect of the external electromagnetic field is expected near the resonance conditions. Additionally, $`\stackrel{}{B}`$ couples the motion in the directions perpendicular to it and, therefore, a similar result should be observed for waves polarized in either of these directions. We have found that non-linear effects become important at relatively low intensities and for radiation polarized in any direction perpendicular to the magnetic field. Quasienergies and the density of states in the Kramers-Henneberger approximation are calculated, and the conditions under which the collapse of magnetic subbands and the suppression of N-photon absorption or emission processes occur are analyzed. ## II Schrödinger equation. Kramers-Henneberger approximation. Let $`OX`$ be the growth direction of the superlattice and $`OZ`$ the magnetic field direction. The electromagnetic radiation of frequency $`\omega `$ and amplitude $`F`$ can be polarized in either the $`OX`$ or $`OY`$ direction. In the effective mass approximation, the envelope wavefunction of a conduction electron can be written as $$\psi (\stackrel{}{r},t)=\frac{1}{S}\mathrm{exp}\left(\frac{i}{\mathrm{}}(qy+kz)\right)\mathrm{exp}\left(-\frac{i}{\mathrm{}}\frac{k^2}{2m^{}}t\right)\widehat{U}\phi (x,t)$$ (1) where $`S`$ is the transverse area of the sample, and $`m^{}`$ is the conduction electron effective mass. We have considered that $`m^{}`$ is constant, independent of $`x`$. 
The validity of this commonly used approximation has been discussed in . The time-dependent unitary transformation $`\widehat{U}`$ is given by: $$\widehat{U}=\mathrm{exp}\left(\frac{i}{\mathrm{}}u(t)\widehat{p}\right)\mathrm{exp}\left(\frac{i}{\mathrm{}}v(t)x\right)\mathrm{exp}\left(\frac{i}{\mathrm{}}w(t)\right)$$ (2) With the choice $`u(t)`$ $`=`$ $`{\displaystyle \frac{eF}{m^{}}}\left({\displaystyle \frac{\omega }{\omega _c}}\right)^{s-1}{\displaystyle \frac{\mathrm{cos}\omega t}{\omega _c^2-\omega ^2}}`$ (3) $`v(t)`$ $`=`$ $`eF\omega _c\left({\displaystyle \frac{\omega _c}{\omega }}\right)^s{\displaystyle \frac{\mathrm{sin}\omega t}{\omega _c^2-\omega ^2}}`$ (4) $`w(t)`$ $`=`$ $`{\displaystyle \frac{e^2F^2}{4m^{}(\omega ^2-\omega _c^2)}}t-{\displaystyle \frac{e^2F^2(\omega _c^2+\omega ^2)}{8m^{}\omega (\omega _c^2-\omega ^2)^2}}\mathrm{sin}2\omega t`$ (5) $`+`$ $`{\displaystyle \frac{eFl_B^2q}{(\omega _c^2-\omega ^2)}}\omega _c\left({\displaystyle \frac{\omega _c}{\omega }}\right)^s\mathrm{sin}\omega t`$ (6) for $`\omega \ne \omega _c`$, the function $`\phi (x,t)`$ satisfies the equation $$i\mathrm{}\frac{\partial \phi }{\partial t}=\left(\widehat{H}_0+\widehat{W}(t)\right)\phi $$ (7) where $`\widehat{H}_0=-{\displaystyle \frac{\mathrm{}^2}{2m^{}}}{\displaystyle \frac{d^2}{dx^2}}+{\displaystyle \frac{1}{2}}m^{}\omega _c^2(x-x_0)^2+V_0(x)`$ (8) $`\widehat{W}(t)=2{\displaystyle \sum _{N=1}^{\mathrm{\infty }}}V_N(x)\mathrm{cos}N\omega t`$ (9) $`V_N(x)=V{\displaystyle \sum _{m=-\mathrm{\infty }}^{\mathrm{\infty }}}a_mJ_N\left(m{\displaystyle \frac{F}{F_0}}\right)\mathrm{exp}\left(i{\displaystyle \frac{2\pi mx}{d}}\right)N=0,1,\mathrm{\dots }`$ (10) $`F_0(\omega )={\displaystyle \frac{edB^2}{2\pi m^{}c^2}}\left({\displaystyle \frac{\omega }{\omega _c}}\right)^s\left[\left({\displaystyle \frac{\omega }{\omega _c}}\right)^2-1\right]`$ (11) $`x_0={\displaystyle \frac{l_B^2}{\mathrm{}}}q`$ (12) Here $`Va_m`$ are the Fourier coefficients of the superlattice potential: 
$$V(x)=V{\displaystyle \sum _m}a_m\mathrm{exp}\left(i\frac{2\pi mx}{d}\right)$$ (13) and $`s`$ takes the value $`0`$ or $`1`$ for electromagnetic waves polarized along the $`OX`$ or $`OY`$ axis. The effect of radiation is expressed in the “dressed” potential $`V_0(x)`$, which renormalizes the magnetic subbands, and in the superposition of harmonic terms with amplitude $`V_N(x)`$, giving rise to N-photon emission and absorption processes. The dynamical Stark effect is included in Eq.(3) for $`w(t)`$ as a phase shift of the wavefunction, and is given by $$\mathrm{\Delta }E=\frac{e^2F^2}{4m^{}(\omega ^2-\omega _c^2)}$$ (14) When $`F<<F_0`$, $`V_N\sim V(F/F_0)^N`$ and standard time-dependent perturbation theory can be applied. Non-linear effects become important for $`F\sim F_0`$. If $`F>>F_0`$, all $`V_N`$ have the same order of magnitude and $`W\sim V(F_0/F)^{1/2}`$. When $`\omega `$ is close to $`\omega _c`$, $`F_0`$ is relatively small. For example, if $`B=16T`$, $`m^{}=0.067m_e`$ and $`d=20nm`$, then $`F_0\sim 10^2V/cm`$. This corresponds to intensities of about $`10^2W/cm^2`$, which can be reached with a c.w. laser. Therefore, the dynamics of recombination processes and many-body effects due to high levels of photoexcitation need not be considered. Quasienergies $`\epsilon `$ are the eigenvalues of the operator $`\widehat{H}_0+\widehat{W}-i\mathrm{}\partial /\partial t`$ and can be obtained from the poles of $`\widehat{G}(t)`$, the retarded Green’s function averaged over a period $`T=2\pi /\omega `$ of the initial time. The Kramers-Henneberger approximation (KHA), widely used in atoms , approximates $`\epsilon `$ by the eigenvalues of $`\widehat{H}_0`$ and corresponds to the zeroth order term of the expansion of $`\widehat{G}(t)`$ in even powers of $`\widehat{W}`$: $$\widehat{G}(t)={\displaystyle \sum _{p=0}^{\mathrm{\infty }}}\left\langle \widehat{G}_0\left(\widehat{W}\widehat{G}_0\right)^{2p}\right\rangle $$ (15) where $`\left\langle \mathrm{\cdots }\right\rangle `$ denotes the time-average operation described above. 
Following the arguments of , it is easy to show that the above series converges absolutely. Note that $`V_0`$ enters the expansion in zeroth order, whereas $`V_N`$ ($`N\ge 1`$) contributes only to second and higher orders. When $`F>>F_0`$, Eq.(15) is an expansion in powers of $`\alpha =(V/(\mathrm{}\omega _c))(F_0/F)^{1/2}`$. Then, as the KHA neglects terms of order $`\alpha ^2`$, it is expected to give a good result if $`\alpha ^2<<1`$. Since in this case $`V_0(x)=Va_0+O(\alpha )`$, the dressed potential can also be considered a small perturbation partially breaking the degeneracy of the Landau levels. It is important to realize that the limit $`\alpha \to 0`$ can be achieved not only by increasing the laser intensity ($`I\to \mathrm{\infty }`$) but also by approaching the cyclotronic frequency ($`\omega \to \omega _c`$). In what follows, we will consider a particular potential: $$V(x)=\frac{V}{2}\left(1-\mathrm{cos}\frac{2\pi x}{d}\right)$$ (16) for which the expressions of $`V_N(x)`$ take the simple form $$V_N(x)=\frac{V}{2}\left[\delta _{N,0}-J_N\left(\frac{F}{F_0}\right)\mathrm{cos}\left(\frac{2\pi x}{d}+\frac{N\pi }{2}\right)\right]$$ (17) In this case, magnetic subbands collapse when $`F/F_0`$ is a zero of $`J_0`$, and processes involving the absorption or emission of N photons are quenched at zeros of $`J_N`$. However, for an arbitrary superlattice, $`V_N(x)`$ is given by Eq.(10) and different harmonics are modified in different proportions, so $`V_N(x)`$ does not necessarily become zero for finite values of $`F/F_0`$. When $`F/F_0\to \mathrm{\infty }`$ ($`\alpha \to 0`$) and $`N`$ is fixed, $`V_N`$ approaches zero as $`V(F_0/F)^{1/2}`$ and the magnetic subbands are suppressed. In this limit multiphoton processes of many orders contribute with similar magnitude to the absorption and emission probabilities. 
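The order-of-magnitude estimate F0 ~ 10² V/cm quoted above for Eq. (11) can be reproduced numerically. The sketch below works in Gaussian (cgs) units; the 0.2% detuning above the cyclotron frequency is an assumed illustrative value, since the text does not specify one:

```python
import math

# Gaussian (cgs) constants
E_ESU = 4.8032e-10  # electron charge, esu
C_CGS = 2.9979e10   # speed of light, cm/s
M_E = 9.1094e-28    # electron mass, g

def f0_v_per_cm(b_gauss, d_cm, m_eff, omega_ratio, s=0):
    """F0(w) = (e d B^2 / (2 pi m* c^2)) (w/w_c)^s [(w/w_c)^2 - 1], Eq. (11),
    converted from statvolt/cm to V/cm (1 statvolt = 299.79 V)."""
    pref = E_ESU * d_cm * b_gauss**2 / (2.0 * math.pi * m_eff * C_CGS**2)
    return pref * omega_ratio**s * (omega_ratio**2 - 1.0) * 299.79

# B = 16 T = 1.6e5 G, d = 20 nm = 2e-6 cm, m* = 0.067 m_e;
# omega/omega_c = 1.002 is an assumed detuning chosen for illustration.
f0 = f0_v_per_cm(1.6e5, 2.0e-6, 0.067 * M_E, 1.002)
```

With these parameters F0 comes out of order 10² V/cm, and it vanishes as the driving frequency approaches the cyclotron frequency, which is why the non-linear regime F ≳ F0 becomes accessible at modest laser intensities near resonance.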
To order $`\alpha `$ the quasienergies are given by the expression $$E_n^{(1)}(x_0)=\mathrm{}\omega _c\left(n+\frac{1}{2}\right)+\frac{V}{2}U_n\mathrm{cos}\frac{2\pi x_0}{d}$$ (18) where $$U_n=J_0\left(\frac{F}{F_0}\right)\mathrm{exp}\left(-\frac{\pi ^2l_B^2}{d^2}\right)L_n\left(\frac{2\pi ^2l_B^2}{d^2}\right)$$ (19) and $`L_n(z)`$ are Laguerre polynomials. ## III Results and discussion. Numerical diagonalization of $`\widehat{H}_0`$ gives the electron quasienergies $`E_n(x_0)`$ in the KHA, shown in Fig.1 (solid lines) for $`V=0.3eV`$, $`m^{}=0.067m_e`$, $`B=16T`$, $`d=20nm`$ and $`F/F_0=0;5;10;15;20;30`$, corresponding to laser intensities lower than $`10^5W/cm^2`$ at frequencies $`\omega \approx \omega _c`$. We have also shown the quasienergies obtained from Eq.(18) (dashed lines). The curves show the following features: 1. In the absence of radiation ($`F=0`$) all magnetic subbands are dispersive, in agreement with , since for the values of $`B`$ and $`d`$ considered $`d>l_B`$. 2. As $`F/F_0`$ increases, the magnetic subbands become flat. 3. As $`F/F_0`$ increases, the results derived from Eq.(18) approach those calculated in the KHA. 4. The facts described in points 2 and 3 do not occur monotonically, because of the oscillations of $`J_0`$. For example, these effects are stronger for $`F/F_0=15`$ than for $`F/F_0=20`$ because $`J_0`$ has a zero at $`14.9309`$. 5. One must expect the same qualitative behaviour for any superlattice, but in the general case there are no finite values of $`F/F_0`$ for which the magnetic subbands collapse. Note that in this case $$V_0(x)=V{\displaystyle \sum _{m=-\mathrm{\infty }}^{\mathrm{\infty }}}a_mJ_0\left(m\frac{F}{F_0}\right)\mathrm{exp}\left(i\frac{2\pi mx}{d}\right)$$ (20) In Fig.2 we show the density of electron states (DOS) calculated from Eq.(18) for $`F/F_0=5`$ and $`15`$. 
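The flattening of the subbands with J0(F/F0) can be illustrated by evaluating the bandwidth factor of Eq. (19) with standard-library code only (Laguerre polynomials by their three-term recurrence, J0 by its integral representation); constant prefactors are dropped since only ratios matter, and the Gaussian factor is taken as exp(−π²l_B²/d²):

```python
import math

def bessel_j0(x, n=2000):
    """J0(x) from (1/pi) * integral_0^pi cos(x sin t) dt (midpoint rule)."""
    return sum(math.cos(x * math.sin((k + 0.5) * math.pi / n))
               for k in range(n)) / n

def laguerre(n, z):
    """Laguerre polynomial L_n(z) by the three-term recurrence."""
    l_prev, l_cur = 1.0, 1.0 - z
    if n == 0:
        return l_prev
    for k in range(1, n):
        l_prev, l_cur = l_cur, ((2 * k + 1 - z) * l_cur - k * l_prev) / (k + 1)
    return l_cur

def bandwidth_factor(n, f_over_f0, lb_nm=6.41, d_nm=20.0):
    """U_n up to a constant prefactor:
    J0(F/F0) * exp(-pi^2 lB^2/d^2) * L_n(2 pi^2 lB^2/d^2)."""
    u = math.pi**2 * lb_nm**2 / d_nm**2
    return bessel_j0(f_over_f0) * math.exp(-u) * laguerre(n, 2.0 * u)

# Subband n = 0 is flatter at F/F0 = 15 (near the J0 zero 14.9309) than at F/F0 = 5.
u_at_5 = bandwidth_factor(0, 5.0)
u_at_15 = bandwidth_factor(0, 15.0)
```

The magnitude of the bandwidth factor at F/F0 = 15 is far below its value at F/F0 = 5, matching the non-monotonic flattening described in point 4 above.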
When the magnetic subbands become flat, the time-averaged velocity in the $`OY`$ direction is zero and the electron motion becomes one-dimensional, as can be seen in the shape of the DOS curve for $`F/F_0=15`$. Under these conditions, tunneling in the direction of growth is expected to be quenched, as for Bloch minibands. We conclude that, when $`\omega `$ approaches $`\omega _c`$, magnetic subbands are suppressed, tunneling in the growth direction is inhibited and multiphoton processes dominate the intersubband absorption spectra. ## ACKNOWLEDGMENTS This work has been partially supported by an Alma Mater Project of the University of Havana. The authors thank Humberto S. Brandi for helpful suggestions. One of us (C. R. C.) also acknowledges Carlos Tejedor and Gloria Platero for warm hospitality and detailed discussions of the manuscript at the Universidad Autónoma and the Institute for Materials in Madrid, Spain. Figure captions: Fig 1. Electron quasienergies $`E_n(x_0)`$ ($`n=0,1,2,3,4`$) as functions of the centre of the cyclotronic orbit for different values of $`F/F_0`$. (Solid lines: KHA. Dashed lines: 1st order perturbation theory) Fig 2. Density of electron states in 1st order perturbation theory for $`F/F_0=5`$ and $`15`$.
# Empirical Phase Diagram of Congested Traffic Flow ## Abstract We present an empirical phase diagram of the congested traffic flow measured on a highway section with one effective on-ramp. Through the analysis of local density-flow relations and the global spatial structure of the congested region, four distinct congested traffic states are identified. These states appear at different levels of the upstream flux and the on-ramp flux, thereby generating a phase diagram of the congested traffic flow. Observed traffic states are discussed in connection with recent theoretical analyses. Rich physical phenomena in traffic flow have been investigated by both empirical and theoretical studies. Two distinct traffic states, the free flow and the traffic jam state, have been identified from measurements on homogeneous highways, and many of their properties have been successfully reproduced by traffic models . Further empirical studies have reported the presence of additional traffic states, where typical vehicle velocities take intermediate values between the velocity levels of the free flow and the traffic jam. In these states, vehicle motions in all lanes are synchronized and the notion of a unique density-flow relation breaks down. From local traffic patterns, three different types are classified . These congested traffic states appear mostly near ramps, and theoretical studies of the ramp effects reproduced some of their features, such as the discontinuous transition from the free flow to the congested flow and the high flux level of the congested flow. Recently, more extensive theoretical studies have revealed that for highways with an on-ramp, several distinct congested traffic states appear depending on the levels of the upstream flux and the on-ramp flux. A phase diagram is constructed and metastabilities between these states are also investigated . 
Yet the relation between these studies and the observations is not clear; in particular, no empirical evidence for the predicted phase diagram has been reported, to our knowledge. In this Letter, we present an analysis of the congested traffic flow measured on a highway in Seoul, Korea. From studies of the traffic patterns at fixed locations and the global spatial structures, we find 4 qualitatively distinct congested traffic states. The appearance of each state depends on the upstream flux and the on-ramp flux, from which we construct the empirical phase diagram. These 4 states exhibit features that agree with theoretical studies . But differences are also discovered. For this study, we use the traffic data of the Olympic Highway, which connects the east and west ends of Seoul. Since it is an intra-city highway, there are many ramps and the spacings between them are rather short (about 1 km). But in some sections, lane dividers separate lanes 1 and 2 from lanes 3 and 4, and as a result, the two inner lanes become a ramp-free highway. Our investigation is focused on a 14 km east-bound section, between Seoul bridge and Young-dong bridge, that contains a long (about 5 km) lane divider . In this section, 6 on-ramps and 6 off-ramps exist (all of them connected to the outermost lane 4) and 15 image detectors record the flux $`q`$ and the average velocity $`v`$ for each lane at one-minute intervals \[Fig. 1(a)\]. Among the ramps located outside the ideal ramp-free region, usually only one or two are effective, and the flux through the others is small. For the analysis, inner-lane traffic data for 14 different days in June and July, 1998 are used. We search for congested traffic states that are long-lived (about 1 hour) and appear practically every day. In this way, we identify 4 congested traffic states; for each of them, data for a particular day are presented below to demonstrate its typical features. 
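The local states discussed below are characterized by density-flow points formed from the one-minute detector records via $`\rho =q/v`$. A minimal sketch of this bookkeeping (the function name and the low-velocity cutoff are our own choices, not from the paper):

```python
import numpy as np

def density_flow_points(q, v, v_floor=1.0):
    """Convert one-minute detector records of flux q [veh/h] and average
    velocity v [km/h] into (density, flux) points, rho = q / v [veh/km].
    Records with near-zero velocity are dropped to avoid dividing by zero."""
    q = np.asarray(q, dtype=float)
    v = np.asarray(v, dtype=float)
    ok = v > v_floor
    rho = q[ok] / v[ok]
    return rho, q[ok]

# Example: a free-flow record and a congested record; high flux at low
# speed gives high density.
rho, q = density_flow_points([1800.0, 900.0], [90.0, 15.0])
print(rho)  # densities in veh/km
```

Scattering many such points per detector is what produces the one-dimensional lines or two-dimensional coverings used to tell the congested states apart.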
The first kind of the congested traffic states, which we call CT1, is shown in Fig. 1(b), where the on-ramp ON3 at $`x=8.6`$ km is the main ramp in the time interval depicted in the figure and the traffic between $`x\approx 3`$ km and $`x\approx 10`$ km is congested. The spatial extension of the congested region does not expand with time (within our estimated accuracy of 1 km/h), but the boundaries of the region are not stationary. Also, a systematic oscillation develops spontaneously near the upstream boundary, which is manifest in the velocity vs. time plot at D4 \[Fig. 1(c)\]. Notice that the velocity oscillation is amplified and its period is enhanced as the upstream boundary is approached. Peaks of the velocity move with a velocity of about 13 km/h. Similar features are reported in Ref. . We mention that the oscillation is not caused by the temporal variations in the upstream flux. Another characteristic of the CT1 state is the two-dimensional covering of the density-flow relation \[Fig. 1(d)\]. The second kind of the congested traffic states, CT2, is shown in Fig. 2(a), where the effective ramp is ON3. The density-flow relations cover two-dimensional areas \[Fig. 2(b)\]. In contrast to the CT1 state, however, the boundaries of the congested region are almost motionless and the development of the large amplitude oscillation is not observed \[Fig. 2(c)\], even though the velocity levels are comparable to those in the congested region of the CT1 state. The third kind of the congested traffic states, CT3, is shown in Fig. 3(a), where the effective ramp is ON4. The boundaries of the congested region are again motionless. The congested region is much shorter compared to the CT1 and CT2 states. A more important difference appears in Fig. 3(b). The density-flow relation at each detector location inside the congested region forms a straight line, implying that the velocity ($`v=q/\rho `$) remains almost constant even under the significant flux fluctuations. 
However, the values of the velocity are different for different detectors. \[In 3 out of the 14 investigated days, we also observe a wide congested region (about 4 km) with a stationary and almost homogeneous velocity profile. Here we do not present this as another distinct traffic state since the number of available data sets is too small.\] The fourth and last kind of the congested traffic states, CT4, is shown in Fig. 4, which appears during morning rush hours. The effective ramp is ON3. While the downstream boundary of the congested region remains stationary, the upstream boundary propagates backwards and the congested region expands monotonically, unlike all other congested traffic states mentioned above. The expansion rate is higher for higher levels of the flux in the upstream region where the free flow is maintained, and the observed values of the rate range from 2.2 km/h to 8.8 km/h. The density-flow relation covers a two-dimensional area and the development of the large amplitude oscillation is not observed. We examine differences in the appearance conditions of the 4 congested traffic states. Recent theoretical studies using one-lane models suggest that the flux level $`f_{\mathrm{up}}`$ immediately upstream of the congested region, where the free flow is maintained, and the on-ramp flux $`f_{\mathrm{rmp}}`$ are the two important control parameters. These studies assume an ideal situation where $`f_{\mathrm{up}}`$ and $`f_{\mathrm{rmp}}`$ are constants. While they fluctuate in reality, the dominant fluctuations come from short-time-scale (one minute) variations, and the fluctuations are greatly suppressed on long time scales (ten minutes or longer). Thus we use the average values of $`f_{\mathrm{up}}`$ and $`f_{\mathrm{rmp}}`$ over the time intervals (typically 1 hour long) during which a particular state is maintained . In Fig. 
5, each point ($`f_{\mathrm{rmp}}`$, $`f_{\mathrm{up}}`$) thus obtained is marked with a different symbol depending on the maintained congested traffic state. Notice that although there are some overlaps, each symbol occupies a clearly distinguishable region in the $`f_{\mathrm{rmp}}`$-$`f_{\mathrm{up}}`$ plane. This difference in the data locations verifies the roles of $`f_{\mathrm{rmp}}`$ and $`f_{\mathrm{up}}`$ as important control parameters, in agreement with Refs. . Also the metastability between the free flow and the CT1, CT2, CT3 states is observed, as studied in Ref. . We now make a detailed comparison of the measurement data with the theoretical studies . The CT1 state is similar to the recurring hump (RH) state in Refs. . In both states, the congested regions do not expand and systematic oscillations develop. The oscillation in the CT1 state, however, exhibits features that are not shared by the RH state, such as the oscillation amplification and the period enhancement . The CT2 state can be related to the pinned localized cluster (PLC) state \[in Ref. , a different term, “standing localized cluster” (SLC) state, is used to denote the same state\]. In both states, the congested region does not expand and no systematic oscillation develops. Also the spatial variation of the long-time (about 1 hour) averaged density-flow relations from the upstream to the downstream is essentially identical to the pattern for the PLC state \[Fig. 2(b) in Ref. \]. And the data locations of the CT2 state in the $`f_{\mathrm{rmp}}`$-$`f_{\mathrm{up}}`$ plane \[Fig. 5\] are to the left of those of the CT1 state, which agrees with the relationship between the PLC and the RH state . The two-dimensional covering of the density-flow plane in the CT1 and CT2 states, on the other hand, is not shared by the RH and PLC states. We speculate that the covering property may be due to fluctuation effects that are not taken into account in Refs. . 
For example, it has recently been demonstrated that fluctuations in vehicle types can generate the two-dimensional covering . We also suspect that short-time-scale fluctuations in $`f_{\mathrm{up}}`$ and $`f_{\mathrm{rmp}}`$ may generate a similar effect. The CT3 state is also similar to the predicted PLC state in regard to the stationary boundaries of the congested region and the absence of the systematic oscillations. The data locations of the state in the $`f_{\mathrm{rmp}}`$-$`f_{\mathrm{up}}`$ plane \[Fig. 5\] are also in reasonable agreement with those of the PLC state . However, the property of the constant velocity in this state is not shared by the PLC state. This property implies that the car-following dynamics in this state is significantly different from that assumed in many theoretical models , that is, the velocity adjustment to the spatial gap. The CT4 state can be related to the oscillating congested traffic (OCT) state or the homogeneous congested traffic (HCT) state , both of which exhibit the expansion of the congested region to the upstream. In theoretical studies, it is found that the congested region of the OCT state contains large clusters with the jam character, while the congested region of the HCT state is homogeneous . The density-flow relation of the CT4 state does not demonstrate the characteristic line of the jam, which suggests that the HCT state is the proper theoretical counterpart of the CT4 state. As for the two-dimensional covering property of the CT4 state, we mention that the same covering can be reproduced for the HCT state . Also the data locations of the CT4 state in the phase diagram are consistent with the prediction for the HCT state. We next compare our data with the German highway data in Ref. , where congested traffic flows are classified into 3 types (i,ii,iii) according to local density-flow relations without much regard to spatial structures. 
In this classification, the CT1, CT2, and CT4 states with nonstationary density-flow relations belong to the type (iii), and the CT3 state with the stationary velocity property to the type (ii). We also observe the type (i) state, which is characterized by stationary velocity and density profiles. However, this state is always short-lived (less than 5 minutes), which is too short for our analysis, and we do not include this state in our classification. It is also interesting to compare the CT1 state with the congested traffic state reported in Ref. , which we tentatively call the CT1’ state. In both states, the amplification and the periodicity enhancement of the velocity oscillation occur, and the average velocity levels rise during the amplification. On the other hand, the velocity oscillation in the CT1’ state grows to generate mature jam clusters and the density-flow relation approaches the characteristic line (the line J in Ref. ) of the traffic jam, while this feature is absent in the CT1 state \[see Fig. 1(d)\]. This difference suggests that the CT1 and CT1’ states may be distinct congested traffic states. They also seem to appear at different regions in the $`f_{\mathrm{rmp}}`$-$`f_{\mathrm{up}}`$ plane. From the data given in Ref. , we estimate $`f_{\mathrm{up}}`$ to be about 1600 veh/h (information on $`f_{\mathrm{rmp}}`$ is not available) for the CT1’ state (compare with Fig. 5). A recent study also suggests that the CT1’ state is related rather to the theoretically predicted OCT state , instead of the RH state. In summary, 4 congested traffic states are identified by combining temporal traffic patterns at fixed locations and the spatial structure of the congested region. It is found that these 4 states appear at different levels of the upstream flux and the on-ramp flux. An empirical phase diagram is constructed and compared with recent predictions. Many properties of the observed states agree with the predictions, but deviations are also found. 
Lastly we mention that there exist regions in the $`f_{\mathrm{rmp}}`$-$`f_{\mathrm{up}}`$ plane which are not probed by our data. Thus it is possible that additional congested traffic states exist in those regions. Further investigation is necessary. We thank Young-Ihn Lee and Seung Jin Lee for providing the traffic data, and Sung Yong Park for fruitful discussions. H.-W.L. was supported by the Korea Science and Engineering Foundation. This work is supported by the Korea Science and Engineering Foundation through the SRC program at SNU-CTP, and also by Korea Research Foundation (1998-015-D00055).
# An Application of Kerr Blackhole Fly-Wheel Model to Statistical Properties of QSOs/AGNs ## 1 INTRODUCTION We discuss the evolution of QSO/AGN activities under the fly-wheel (rotation-driven) model, which is one of the plausible models for the powerful engine of the AGNs including a rotating central blackhole (BH). This fly-wheel engine may be less familiar than the fuel engine (accretion-driven engine); however, it is very attractive because it can explain the evolution and the lifetime of AGN activities very naturally. It is widely believed that the recent discovery of the red tail of emission lines (Fe K$`\alpha `$) from the central region of AGNs suggests that the central blackholes are quickly rotating, i.e., the monster BH should be a Kerr BH (see Tanaka et al. 1995, Iwasawa et al. 1996, Dabrowski et al. 1997). The innermost stable circular orbit around a Kerr BH lies closer to the horizon than for a Schwarzschild hole (non-rotating BH) of the same mass. Hence the line emission from the innermost region of the accretion disk should be considerably red-shifted by the gravitation, producing the red tail. From the theoretical point of view, the majority believes that the central BH presumably acquires an enormous angular momentum at the formation stage of the monster. For example, Sasaki & Umemura (1996) presented a formation scenario using the Compton drag process, as follows. After the neutralization of the universe at $`z\sim 10^3`$, density fluctuations grow to form proto-galactic clouds, and each fluctuation obtains angular momentum through the tidal interaction among them. At the era $`z\sim 10^2`$, nuclear reactions occur in the central region of the rotating proto-galactic cloud, and the matter is reionized by stellar UV radiation. The rotating reionized matter must interact again with the uniformly distributed cosmic background radiation from the last scattering surface. 
This interaction efficiently extracts the angular momentum of the cloud material to the radiation field by Compton scattering; the angular momentum of the matter thus decreases, and when it falls just below a critical value, the matter suddenly collapses and forms a quickly rotating monster blackhole. In this scenario, the initial angular momentum of the central hole should be very close to the maximum value (say, $`a\simeq m`$, where $`a`$ is the Kerr parameter and $`m`$ is the mass of the hole) which a Kerr hole can hold. Hence it seems reasonable to suppose that the central blackhole is similar to the extreme Kerr hole at the formation stage. Such holes have an enormous rotation energy ($`10^{54}\text{J}\simeq 10^{39}\text{W}\times 10^9\text{yr}`$ for a hole with mass $`10^8M_{\odot }`$). This is enough to explain the total energy release of AGNs. There are two different types of engines for energy production in blackhole-accretion disk systems. The first is the well-known “fuel engine”, the accretion-powered engine. This is the major one, which has been frequently adopted to explain the AGN activities. The fuel engine acts by converting the gravitational energy released from the infalling matter to radiation. The standard disk model (Shakura & Sunyaev 1973) is representative of this class. The second is the “fly-wheel engine”. This is the rotation-powered engine. While the fly-wheel engine is not so familiar in the field of AGNs, it is as powerful as the fuel engine, and has very interesting features, as discussed in the following sections. In contrast with the fuel engines, the energy source is the rotation of the Kerr BH itself. Of course, the rotation of the accretion disk can also be another energy source. 
However, our scope is focused on the case in which the rotation of the BH is the energy source, because the rotation energy of the disk is supplied by accretion, and we wish to avoid confusing the two engines. The author strongly hopes to introduce this fascinating engine to researchers working in the field of AGNs. The comparison of the fly-wheel engine and the fuel engine is discussed in section 2. Let us summarize the properties of the fly-wheel engine. The idea of extracting the rotation energy of Kerr holes was first proposed by Penrose (1969). When an incident particle into the Kerr hole splits into two parts inside the ergo-sphere with a very large relative velocity, one particle can be thrown into a negative energy orbit falling into the hole, and the other particle escapes outward with larger energy than the initial energy of the incident particle. In this case, the rotation energy of the hole is reduced by the infall of the negative energy particle, and the reduced energy is carried away by the escaping particle. This is the well-known “Penrose process”. Unfortunately, it has been pointed out that the Penrose process is not effective for astrophysical problems (see Bardeen et al. 1972), because the critical value of the relative velocity needed to realize the negative energy orbit is close to one half of the light velocity. Such a large relative velocity can be realized only by nuclear reactions of particles and may not be achieved by usual dynamical processes, e.g., the tidal disruption of the accreting matter. The electromagnetic mechanism extracting the rotation energy of Kerr BHs was first proposed by Blandford & Znajek (1977). This is well known as the “magnetic braking process” or the “BZ process”. 
In their study, the magnetosphere is supposed to be “force-free” (strictly speaking, “magnetically dominated”), and they clearly showed the energy extraction in the form of the Poynting flux when the rotation speed of the BH is greater than that of the magnetosphere. An extension of the magnetic braking process to the full MHD (magnetohydrodynamic) system was performed by Takahashi et al. (1990) as an elementary process of the BH engines. By a precise analysis of the MHD accretion flow onto the Kerr BH, they succeeded in clarifying the condition to realize the “negative energy MHD inflow”. This process is named the “MHD Penrose process”. Nitta et al. (1991) studied the magnetospheric structure filled with the trans-magnetosonic MHD inflow onto the Kerr hole, and applied the MHD Penrose process to the problem of the individual evolution of AGNs. The result of this work is briefly reviewed in section 3. Recently, the BZ process has been speculatively staged again as the elementary process of the $`\gamma `$-ray burst (GRB, see Paczynski 1998). In this case, the rotation energy of about $`10^{47}`$ \[J\] of a nearly maximally rotating Kerr BH of mass $`10M_{\odot }`$ is considered to be extracted by a very strong magnetic field of $`10^{11}`$ \[T\] in a few seconds. The extracted Poynting energy is expected to produce an ultra-relativistic wind with Lorentz factor $`>10^2`$. Of course, the “magnetically dominated” assumption of the original BZ process is too simple to treat the wind acceleration, and it should be extended to a full MHD fly-wheel model. The fly-wheel model is still unclear, but it should be one of the fascinating processes to unify the physics of quasars and micro-quasars. In this paper, the result of Nitta et al. (1991) for the individual evolution of the fly-wheel engine is applied to the statistics of the ensemble of QSOs/AGNs and compared with the observation. Figure 1 shows the observed luminosity function (LF) of QSOs. 
Figure 2 shows the observed evolution of the spatial number density of QSOs. Our attention will be focused on explaining the evolution of QSOs/AGNs in the range $`0\le z\le 5`$ by a mechanical process. In order to discuss the physical process of the plasma inflows and the magnetospheric structure of the Kerr BH, we suppose general relativistic, stationary and axisymmetric ideal cold MHD flows. In this case, the MHD equations reduce to the well-known basic equations: the Bernoulli equation and the Grad-Shafranov equation (see Takahashi et al. 1990 and Nitta et al. 1991) with constants of the motion. By using these basic equations, we discuss the properties of the fly-wheel engine and apply it to the evolution of the ensemble of QSOs/AGNs in section 4. We should note again that our primary purpose is to demonstrate the fascinating properties of the fly-wheel model and not to produce a serious model for the evolution and statistics of QSOs/AGNs. ## 2 COMPARISON OF FLY-WHEEL ENGINE VS. FUEL ENGINE Here we compare the fly-wheel model with the fuel model, and clarify the differences between them. Our attention is focused on the energy source, the power output and the form of the energy transfer. The most fundamental difference lies in their energy sources. The energy source of the fuel engine is the gravitational energy of infalling matter released through some dissipation process like the $`\alpha `$-viscosity (Shakura & Sunyaev 1973). Hence this engine can act as long as the accretion continues. The energy source of the fly-wheel engine is the rotation energy of the central spinning BH itself, which can be extracted through an electromagnetic process (the magnetic braking). We should note that the rotation energy of the BH is obtained at the formation stage, and is finite. Thus the lifetime of the fly-wheel engine, on the contrary, must be finite. The output power of the fuel engine essentially depends upon the mass accretion rate of the infalling matter, and is widely variable. 
The upper boundary of the power approximately corresponds to the Eddington luminosity. On the contrary, the output power of the fly-wheel engine is determined by the magnetospheric equilibrium. In a typical case, the power is of order $`10^{39}`$ \[W\] for a BH mass of $`10^8M_{\odot }`$ (see the next section), and is enough to explain actual QSOs/AGNs. In the fuel engine, the generated power is in a thermal form (e.g., the standard disk model) and is immediately converted to the radiation from the central region. The mechanical process of the fueling is quite complicated. We need several models of the angular momentum extraction for each decade of distance from the BH. These mechanisms must be matched consistently; however, this is very difficult. In the fly-wheel engine, the extracted rotation energy from the BH is first stored in the form of the Maxwell stress of the magnetosphere. This stress drives the global electric current circuit in the magnetosphere, and the magnetocentrifugal force drives the plasma outflow, for example, the highly collimated bipolar jets in radio-loud AGNs or the equatorial wind in BAL QSOs. The kinetic energy of the plasma outflow is finally converted to radiation at some distant region through some emission processes (e.g., the synchrotron radiation produced by the 1st Fermi acceleration on the shock). The mechanical process of the fly-wheel engine is simple. We can clarify it by a self-contained discussion in the vicinity of the BH. In addition, we should note that the fly-wheel model can be a simple and unified mechanism throughout the entire magnetosphere in a range from AU to Kpc or Mpc. These properties of the fuel engine and the fly-wheel engine are summarized in table 1. ## 3 EVOLUTION OF THE POWER OUTPUT FROM THE KERR BLACKHOLE MAGNETOSPHERE In the MHD scheme, the magnetosphere near the horizon of a BH must be filled with a super-magnetosonic accretion flow to preserve causality. The trans-magnetosonic condition crucially restricts the magnetospheric structure. Nitta et al. 
(1991) studied the magnetospheric structure of a Kerr BH filled with a trans-magnetosonic accretion flow. The BH magnetosphere is characterized by the coexistence of the outgoing flow and the accreting flow. In order to realize the outgoing flow and the accreting flow simultaneously, they suppose a stagnation region (source region) which is sustained by the magnetocentrifugal force against the gravity. This is the source of the ingoing/outgoing flows. The stagnation region may correspond to the pair creation region near the outer gap (near the surface $`\omega =\mathrm{\Omega }_F`$, where $`\mathrm{\Omega }_F`$ is the angular velocity of the magnetosphere and $`\omega `$ is the Lense-Thirring rotation of the inertial frame: see Hirotani & Okamoto 1998) or the disk halo. The accretion flow starts with very low poloidal velocity and accelerates toward the horizon; the flow must then pass through the Alfvén point and the fast point before reaching the horizon. In their result, it is clarified that the strong gravity of the BH causes the accretion flow and amplifies the magnetic flux, but the total magnetic flux $`\mathrm{\Psi }_H`$ and the particle number flux $`\dot{N}_H`$ threading the horizon are suppressed by the rotation of the hole, $$\mathrm{\Psi }_H\sim \frac{\mathrm{\Omega }_F}{\omega _H}\mathrm{\Psi }_0$$ (1) where $`\mathrm{\Psi }_0`$ denotes the total magnetic flux of the magnetosphere and $`\omega _H`$ is the angular velocity of the Kerr blackhole (the Lense-Thirring angular velocity at the horizon), $$\dot{N}_H\sim \frac{B_0^2}{\mu \mathrm{\Omega }_F\omega _H}$$ (2) where $`B_0`$ is the magnetic field at the source region of the flows and $`\mu `$ is the averaged rest mass of the particles of the flows. We also find that one infalling particle can release rotation energy of the BH of the order of its rest mass energy. 
According to these results we can estimate the total power output $`L_{BH}`$ from the rotating BH as $$L_{BH}(m,ϵ,B_0;t)=P_0(m,B_0,ϵ)\frac{\theta (t_{max}(m,B_0,ϵ)-t)}{\sqrt{1-t/\tau _{evo}(m,B_0,ϵ)}}$$ (3) where $`t`$ is the time after the birth of the BH, $`P_0=m^2B_0^2/ϵ`$, $`\tau _{evo}=ϵ^2/(mB_0^2)`$, $`t_{max}=(1-ϵ^2)\tau _{evo}`$, $`ϵ\equiv m\mathrm{\Omega }_F`$ (always less than unity, see Nitta et al. 1991), and $`\theta `$ is the Heaviside function ($`\theta (x)=0`$ for $`x<0`$, $`\theta (x)=1`$ for $`x\ge 0`$). We should note that this formula is somewhat simplified from the original form (see equation \[5.9\] of Nitta et al. 1991). Since this formula is derived by an order-of-magnitude estimate, it contains unspecified factors of order unity. Here we assume these factors to be unity for simplicity. For the typical case, the values of $`P_0`$ and $`\tau _{evo}`$ are given as $$P_0\sim 10^{39}[\text{W}]\left(\frac{ϵ}{0.1}\right)^{-1}\left(\frac{m}{10^8M_{\odot }}\right)^2\left(\frac{B_0}{1[\text{T}]}\right)^2,$$ (4) $$\tau _{evo}\sim 10^9[\text{yr}]\left(\frac{ϵ}{0.1}\right)^2\left(\frac{m}{10^8M_{\odot }}\right)^{-1}\left(\frac{B_0}{1[\text{T}]}\right)^{-2}.$$ (5) In this model, an initially quickly rotating BH ($`\omega _H>>\mathrm{\Omega }_F`$) spins down by the magnetic braking process and releases its rotation energy. When $`t=t_{max}`$ ($`\omega _H=\mathrm{\Omega }_F`$), the output power is maximum; then the engine ceases to act suddenly. A typical sample is shown in figure 3, where $`L_{BH}`$ is plotted as a function of the time after the BH formation. In this scheme, the extracted energy is in the form of the Poynting flux, and it will be converted to the thermal energy of the magnetospheric plasma through some dissipative process or to the kinetic energy of outflows by the magnetocentrifugal drive which will act outside the source region. 
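The evolution law of Eqs.(3)-(5) is easy to tabulate numerically. The following sketch is our own illustration (not from the original paper); it reads the Heaviside cutoff as switching the engine off after $`t_{max}`$, as the surrounding text describes, and uses the typical parameter values of Eqs.(4)-(5):

```python
import numpy as np

def L_BH(t, P0, tau_evo, eps):
    """Fly-wheel power output, Eq.(3): L rises as 1/sqrt(1 - t/tau_evo)
    while t <= t_max = (1 - eps**2) * tau_evo, where it peaks at P0/eps;
    afterwards the engine shuts off (Heaviside cutoff)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    t_max = (1.0 - eps**2) * tau_evo
    out = np.zeros_like(t)
    on = t <= t_max
    out[on] = P0 / np.sqrt(1.0 - t[on] / tau_evo)
    return out

# Typical values from Eqs.(4)-(5): m = 1e8 Msun, B0 = 1 T, eps = 0.1
P0, tau_evo, eps = 1e39, 1e9, 0.1          # W, yr
t = np.array([0.0, 0.5 * tau_evo, 0.98 * tau_evo, 0.995 * tau_evo])
print(L_BH(t, P0, tau_evo, eps))
```

Note the rise of the output power up to the peak value $`P_0/ϵ`$ (ten times $`P_0`$ for $`ϵ=0.1`$) just before the sudden shut-off, which is the behaviour shown in figure 3.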
Let us assume here that all the extracted energy is converted to radiation through some unspecified mechanism; hence the total output power should be interpreted as the bolometric luminosity. We should note that during the evolution the BH mass $`m`$ is nearly constant, because the time scale of the mass variation is much longer than that of the angular momentum variation, which determines the time scale of the evolution (see Nitta et al. 1991). Here, the evolution of the power output of an individual Kerr BH magnetosphere has been clarified. We now try to apply this result to a statistical discussion of the ensemble of QSOs/AGNs. It should be noted that the strength of the BZ process depends crucially on that of the magnetic field threading the horizon. Blandford & Znajek (1977) first discussed the BH fly-wheel engine, but the total magnetic flux on the horizon is a free parameter in their discussion, which is based on the magnetically dominated limit. On the contrary, the full MHD discussion of Nitta et al. (1991) enables one to obtain the magnetic flux on the horizon as a result of the inner magnetospheric equilibrium. Instead, the total magnetic flux of the entire magnetosphere is treated as a free parameter. This point will be discussed in section 6. ## 4 APPLICATION TO STATISTICS OF QSOs/AGNs A general relativistic theory of QSO core formation is still an open question. Hence we do not have the statistical properties of the parameters of QSO BHs. If we assume the statistical distributions of the BH mass $`m`$ (the initial mass function), the Kerr parameter $`a`$ (the initial Kerr parameter function) of the seed BH and the magnetic field strength $`B_0`$ at the source region (this should depend on the BH mass, the accretion rate and the dynamo theory), we can sum up the contribution of each BH over the ensemble, and can suggest the statistical properties of QSOs/AGNs from the Kerr BH fly-wheel model. 
Here we demonstrate a preliminary application of the Kerr BH fly-wheel model to QSO statistics. The discussion is based on the Press-Schechter formalism as a probable seed BH formation scenario. Sasaki & Umemura (1996) discussed an additional process, the Compton drag scenario, for further angular momentum extraction to form the proto-galactic cloud that collapses to the seed BH, and proposed the initial mass function shown in figure 1 of their paper. Unfortunately, the distributions of the Kerr parameter and of the magnetic field strength at the source region are not given there; hence we must assume the magnetic field strength $`B_0`$ at the source region and the initial Kerr parameter $`a/m`$ as follows. The magnetic field at the source region is usually estimated as $`B_0\sim 1`$\[T\] for $`m=10^8M_{\odot }`$ in order to explain typical QSO luminosities. This value probably depends on the BH mass, so we assume $$B_0=1[\text{T}]\times (m/10^8M_{\odot })^\zeta .$$ (6) We also assume an initial Kerr parameter $`a/m\sim 1`$ (a nearly extreme Kerr BH at the initial stage) and a small parameter $`ϵ\equiv m\mathrm{\Omega }_F\sim 0.1`$. The power output $`L_{BH}`$ is then a function of $`m`$ and $`t`$ (see equation 3), so all we need is the initial mass function of the BHs.

### 4.1 Evolution of the luminosity function

From Sasaki & Umemura (1996) we obtain the initial mass function $`f_{BH}`$ of BHs, based on the standard CDM model, as $$f_{BH}(m)=\frac{n+3}{6}\frac{\rho _0}{M(m)^2}\sqrt{\frac{2}{\pi }}\nu e^{-\nu ^2/2}$$ (7) where $$\nu =\left(\frac{M(m)}{M_{c0}}\right)^{(n+3)/6}(1+z_{vir}),$$ (8) with $`M`$ the total (dark matter + baryon) mass of the proto-galactic cloud, $`\rho _0`$ the present total mass density, and $`M_{c0}=\rho _04\pi (16\text{Mpc})^3/3`$. $`M`$ is related to the BH mass $`m`$ by $$m=r_{BH}\mathrm{\Omega }_bM,$$ (9) where $`\mathrm{\Omega }_b`$ is the fraction of baryonic mass to total mass and $`r_{BH}`$ is the ratio of the BH mass to the baryonic mass.
We assume $`\rho _0=6.9\times 10^{10}[M_{\odot }/\text{Mpc}^3]`$, $`\mathrm{\Omega }_b=0.05`$ and $`r_{BH}=0.1`$ in this paper (we adopt a cosmological model with total density parameter $`\mathrm{\Omega }_0=1`$ and present Hubble constant $`H_0=50`$\[km/s/Mpc\]). In their paper the BH formation epoch $`z_{vir}`$ is obtained from a somewhat complicated procedure; however, over the BH mass range $`10^6M_{\odot }\leq m\leq 10^{10}M_{\odot }`$, $`z_{vir}`$ varies in a very narrow range around $`z_{vir}\sim 200`$, so we neglect the mass dependence for simplicity and put $`z_{vir}=200`$ throughout this paper. The resultant mass functions of BHs are shown in figure 4. We can obtain the luminosity function $`\mathrm{\Phi }`$ at cosmic time $`t`$ as $`\mathrm{\Phi }(m;t)`$ $`=\left|{\displaystyle \frac{dn_{BH}(m)}{dL_{BH}(m;t)}}\right|`$ (10) $`=\left|{\displaystyle \frac{dn_{BH}(m)}{dm}}/{\displaystyle \frac{dL_{BH}(m;t)}{dm}}\right|`$ $`=\left|{\displaystyle \frac{f_{BH}(m)}{dL_{BH}(m;t)/dm}}\right|,`$ where $`n_{BH}(m)`$ is the total number of BHs with mass smaller than $`m`$. Usually the evolution of the luminosity function is parametrized by $`z`$ instead of $`t`$. We adopt the Einstein-de Sitter universe as the cosmological model to relate $`t`$ to $`z`$, $$t=t_0/(z+1)^{3/2}$$ (11) where $`t_0\sim 10^{10}`$\[yr\] is the present time. We now discuss several examples of the dependence of the magnetic field $`B_0`$ at the source region on the BH mass $`m`$. The locus of the plasma source is supposed to be at several times the horizon radius ($`mϵ^{-2/3}\simeq 4.6m`$ for $`ϵ=0.1`$), where pair creation is expected to be effective according to the outer gap model (Hirotani & Okamoto 1998). The case $`B_0\propto m^{-1/2}`$ $`(\zeta =-1/2)`$ is the so-called Eddington value. This scaling is derived as follows: for spherical accretion at the Eddington accretion rate, one supposes equipartition between the gravitational energy density and the magnetic energy density.
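A minimal numerical sketch of the ingredients just defined, the mass function of equations (7)-(9) and the luminosity function of equation (10), the latter via a finite-difference $`dL_{BH}/dm`$ for the $`\zeta =0`$ case, might look as follows. The text's formulas are taken at face value (any Jacobian between $`M`$ and $`m`$ is absorbed into the order-unity factors), so this is an illustration, not the paper's actual code.

```python
import math

RHO0 = 6.9e10                                    # present density [M_sun / Mpc^3]
OMEGA_B, R_BH, Z_VIR = 0.05, 0.1, 200.0
M_C0 = RHO0 * 4.0 * math.pi * 16.0 ** 3 / 3.0    # [M_sun], cutoff mass below eq. (8)

def f_BH(m, n=-0.8):
    """Initial BH mass function, eqs. (7)-(9); m in M_sun."""
    M = m / (R_BH * OMEGA_B)                     # proto-cloud mass, from eq. (9)
    nu = (M / M_C0) ** ((n + 3.0) / 6.0) * (1.0 + Z_VIR)
    return ((n + 3.0) / 6.0) * (RHO0 / M ** 2) \
           * math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu ** 2)

def L_BH(m, t, B0=1.0, eps=0.1):
    """Fly-wheel power, eq. (3), for zeta = 0; m in M_sun, t in yr, L in W."""
    m8 = m / 1e8
    tau = 1e9 * (eps / 0.1) ** 2 / (m8 * B0 ** 2)          # eq. (5)
    if not 0.0 <= t <= (1.0 - eps ** 2) * tau:
        return 0.0
    return 1e39 * (0.1 / eps) * m8 ** 2 * B0 ** 2 / math.sqrt(1.0 - t / tau)

def Phi(m, t, n=-0.8):
    """Luminosity function Phi(m;t) = f_BH(m) / |dL_BH/dm|, eq. (10)."""
    dm = 1e-4 * m
    dLdm = (L_BH(m + dm, t) - L_BH(m - dm, t)) / (2.0 * dm)
    return f_BH(m, n) / abs(dLdm)
```

The power-law-plus-exponential-cutoff shape of $`f_{BH}`$ (figure 4) is visible directly: the Gaussian factor in `f_BH` suppresses the high-mass end once $`\nu `$ exceeds a few.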
However, in this case the lifetime of the fly-wheel engine given by equation (3) is independent of $`m`$. We can easily imagine the evolution of the luminosity function in this case: the curve moves rightward keeping its shape, corresponding to the increase of output power, arrives at the explosive stage, and then all the BH engines cease their activity simultaneously. This is clearly inconsistent with observation (see figure 1), hence we reject this case. The magnetic field in the inner region of the Shakura-Sunyaev accretion disk is evaluated as follows. They assumed equipartition between the magnetic energy and the thermal energy. For the radiation-pressure supported case, $`\zeta =-1/2`$ (similar to the Eddington value); this case is likewise unsuitable, as discussed above. For the gas-pressure supported case, $`\zeta =-1/20`$, assuming that the mass accretion rate is proportional to $`m`$ (as for the Eddington limit). In our discussion the dynamic range of the BH mass is $`10^6M_{\odot }\leq m\leq 10^{10}M_{\odot }`$, merely four orders of magnitude, so the dependence $`\zeta =-1/20`$ means almost $`B_0\simeq const.`$ Hence we can assume that the mass dependence of the magnetic field $`B_0`$ at the source region is very weak, $`-1/2<\zeta `$. In the following discussion we assume $`\zeta =0`$ and set $`B_0=1`$\[T\] independent of $`m`$. The evolution of the luminosity function is shown in figures 5 and 6 for the typical case $`\zeta =0`$, $`n=-0.8`$. For $`\zeta =0`$ the initial luminosity is proportional to $`m^2`$, hence the luminosity function at the formation stage $`z\simeq z_{vir}`$ directly reflects the initial mass function (see figure 4). The curve of the luminosity function consists of a monotonically decreasing power-law slope and an exponential cut-off. For $`\zeta =0`$, the lifetime of the fly-wheel engine is a decreasing function of $`m`$ (see equation 3).
When the BHs of mass, say, $`m=m_0`$ approach the explosive stage $`t=t_{max}`$, the luminosity $`L_{BH}(m_0,t)`$ and the derivative $`dL_{BH}(m;t)/dm|_{m=m_0}`$ increase rapidly. By this time, the BHs with mass $`m>m_0`$ have already died. Hence the bright end of the luminosity function is rapidly extended in the lower-right direction (see eq. 10). If the initial luminosity function has a sufficiently steep slope, the evolution results in a lift-up of the bright end. On the other hand, if the initial luminosity function has a flat slope, the bright end lifts up at first; however, once the bright end reaches the junction between the power-law part and the exponential cut-off part of the luminosity function, the initial curve is bent downward by the evolution (see figure 5 for $`n=-0.8`$). This behavior seems adequate to explain, at least qualitatively, the observed evolution of the luminosity function shown in figure 1. The criterion for the steepness is $`n\simeq -0.8`$ for $`\zeta =0`$. This evolution is purely the effect of the individual evolution of each fly-wheel engine. The time scale of the evolution is a decreasing function of the BH mass $`m`$, and the individual luminosity is an increasing function of $`m`$; hence the bright end corresponds to massive, short-lived BHs. Massive BHs quickly evolve to the explosive stage (lifting the curve at the exponential cut-off part, or steepening it further at the power-law part), and then cease to release energy (shortening the curve). Observational studies of QSO number counts have pointed out the bending of the luminosity function curve (see, e.g., Boyle 1993, Pei 1995); its functional form is conjectured to be a double power law or a power law with an exponential cut-off. Boyle (1993) discussed the evolutionary motion of the bending point (see figure 2 of Boyle 1993). The fly-wheel model suggests that the bending point is determined by the initial mass function and does not move during the evolution.
### 4.2 Evolution of the QSO population

Similarly, we can discuss the evolution of the QSO spatial number density (usually called the "population") as a function of $`z`$. At time $`t`$, the BHs of mass $`m`$ with $`t\leq t_{vir}+t_{max}(m)`$ are still alive. This condition gives an upper boundary $`m_u(t)`$ on the mass of the active BHs, because $`t_{max}(m)`$ is a decreasing function of $`m`$ in the case $`B_0\propto m^0`$. From the condition $$t_{vir}+(1-ϵ^2)\tau _{evo}(m)\geq t$$ (12) we obtain $`m_u(t)=`$ $`{\displaystyle \frac{ϵ^2(1-ϵ^2)}{B_0^2}}{\displaystyle \frac{1}{t-t_{vir}}}`$ (13) $`=`$ $`10^8[\text{M}_{\odot }]\left({\displaystyle \frac{ϵ}{0.1}}\right)^2\left({\displaystyle \frac{B_0}{1\text{T}}}\right)^{-2}(1-ϵ^2)\left({\displaystyle \frac{10^9\text{yr}}{t-t_{vir}}}\right)`$ (14) Hereafter we treat only the active BHs ($`m\leq m_u(t)`$). Next we assume a detection limit $`L_{lim}(t)`$ of observation on the bolometric luminosity, and count as QSOs the BHs satisfying $$L_{BH}\geq L_{lim}.$$ (15) Note that we cannot discuss the spectrum of the released energy; we can treat only the total output power of the fly-wheel engine. The energy released by the fly-wheel engine first produces plasma outflows, and is finally supposed to be converted to radiation through physical processes such as Fermi acceleration at shock surfaces. Hence our attention is focused on the bolometric luminosity, under the assumption that the released energy is completely converted to radiation. Of course, the actual observational segregation of QSOs from other objects is based on multicolor spectroscopy, but unfortunately we cannot argue beyond the bolometric luminosity here. Concrete functional forms of $`L_{lim}(t)`$ will be given later. The condition (15) gives the lower boundary $`m_l(t)`$ of the BH mass which can be detected as a QSO at time $`t`$.
The relation $`L_{BH}(m;t)=L_{lim}(t)`$ reduces to an equation for $`m`$, $$B_0(m)^4m^4/ϵ^2+L_{lim}(t)^2tB_0(m)^2m/ϵ^2-L_{lim}(t)^2=0.$$ (16) This equation can easily be solved numerically. In particular, for the case $`\zeta =0`$ (see equation 6) it reduces to the fourth-order algebraic equation $$k_1m^4+k_2(t)m+k_3(t)=0$$ (17) where $`k_1`$ $`=`$ $`B_0^4/ϵ^2`$ (18) $`=`$ $`10^{60}[\text{erg}^2/\text{s}^2/\text{M}_{\odot }^4]\left({\displaystyle \frac{ϵ}{0.1}}\right)^{-2}\left({\displaystyle \frac{B_0}{1\text{T}}}\right)^4,`$ (19) $`k_2(t)=`$ $`L_{lim}(t)^2tB_0^2/ϵ^2`$ (20) $`=L_{lim}(t)^2`$ $`\left({\displaystyle \frac{t}{1\text{yr}}}\right)\times 10^{-17}[1/\text{yr}/\text{M}_{\odot }]\left({\displaystyle \frac{ϵ}{0.1}}\right)^{-2}\left({\displaystyle \frac{B_0}{1\text{T}}}\right)^2,`$ (21) and $`k_3(t)=-L_{lim}(t)^2`$. We obtain the unique real positive root of this equation as $`m_l(t)`$. Summing up the number of BHs in the range $`m_l(t)\leq m\leq m_u(t)`$, $$n_{QSO}(t)=\int _{m_l(t)}^{m_u(t)}f_{BH}(m)dm.$$ (22) This is the spatial number density of QSOs. The results for the simple cases $$L_{lim}(t)=10^{38.9}\text{[W] (solid line)}$$ (23) and $$L_{lim}(t)=10^{37.7}\text{[W] (dashed line)}$$ (24) are shown in figure 7. These values correspond to the luminosities at absolute magnitude $`M=-26`$ (typical for QSOs) and $`-23`$ (typical for AGNs), respectively. Figure 7 for $`n=-0.8`$ shows a very plausible evolution, consistent with observations in a qualitative sense, but the corresponding plot for $`n=-1`$ (not shown) exhibits a rather sudden decrease after the peak ($`z\simeq 3`$). This is due to the difference in the evolution of the luminosity function between these cases in this range of $`z`$. In actual observations, the detection limit $`L_{lim}`$ should be an increasing function of the look-back time $`t_0-t`$, or of the redshift $`z`$, because the detection limit corresponds to a limit on the energy flux $`F`$, i.e. $`F\geq F_{lim}`$.
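The mass window $`[m_l(t),m_u(t)]`$ and the count of equation (22) can be sketched numerically. Below, the unique positive root of equation (17) is found by bisection, masses are in units of $`10^8M_{\odot }`$, and the mass function is passed in as a callable; the block is a hedged illustration under the fiducial scalings of equations (4)-(5), not the paper's actual code.

```python
import math

def m_lower(L_lim, t, B0=1.0, eps=0.1):
    """Unique positive root of eq. (17): faintest visible BH mass
    [1e8 M_sun] at time t [yr] after formation, for L_lim in W."""
    P0c = 1e39 * (0.1 / eps) * B0 ** 2        # P_0 for m8 = 1, eq. (4)
    tau1 = 1e9 * (eps / 0.1) ** 2 / B0 ** 2   # tau_evo for m8 = 1, eq. (5)

    def poly(m8):                             # eq. (17) in scaled units
        return (P0c * m8 ** 2) ** 2 + L_lim ** 2 * (t / tau1) * m8 - L_lim ** 2

    lo, hi = 0.0, math.sqrt(L_lim / P0c)      # poly(lo) < 0 <= poly(hi)
    for _ in range(200):                      # bisection on the bracket
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if poly(mid) < 0.0 else (lo, mid)
    return hi

def m_upper(t, t_vir=0.0, B0=1.0, eps=0.1):
    """Heaviest still-active BH mass [1e8 M_sun], eqs. (13)-(14)."""
    return (eps / 0.1) ** 2 * (1.0 - eps ** 2) / B0 ** 2 * 1e9 / (t - t_vir)

def n_QSO(f_BH, L_lim, t, steps=1000):
    """Eq. (22): trapezoidal integral of the mass function over the window;
    f_BH is a callable taking m in M_sun."""
    ml, mu = m_lower(L_lim, t), m_upper(t)
    if ml >= mu:
        return 0.0
    h = (mu - ml) / steps
    xs = [ml + i * h for i in range(steps + 1)]
    ys = [f_BH(x * 1e8) for x in xs]
    return 1e8 * h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])
```

At $`t=0`$ the root reduces to $`P_0(m_l)=L_{lim}`$, as expected; at later times the positive $`k_2`$ term pulls $`m_l`$ down, since the surviving engines have brightened.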
We have also tried a more realistic form of the detection limit, $$L_{lim}(z)=F_{lim}4\pi [\sqrt{1+z}-1]^2(1+z)/(H_0/c)^2$$ (25) where $`F_{lim}=`$, say, $`6.6\times 10^{-16}`$\[W/m<sup>2</sup>\] (apparent magnitude $`m_{lim}=18.9`$) is the detection limit on the energy flux, $`H_0=50`$\[km/s/Mpc\] is the present Hubble constant and $`c`$ is the speed of light. This formula translates the energy flux $`F_{lim}`$ into a luminosity $`L_{lim}`$ in the Einstein-de Sitter universe. The result is plotted as the dot-dash line in figure 7. This value of $`F_{lim}`$ is a tentative, artificial one, chosen so that the solid line and the dot-dash line intersect at $`z=2.5`$. The conversion from the detection limit on the flux to the limit on the bolometric magnitude $`m_{lim}`$ is based on the relation $$m_{lim}=-2.5\mathrm{log}\frac{F_{lim}}{F_0},$$ (26) where $`F_0=2.48\times 10^{-8}[\text{W/m}^2]`$ (see Allen 1973). The bolometric magnitude $`m_{lim}=18.9`$ adopted here might be somewhat brighter than the magnitude limits of actual QSO surveys in specific wavelength bands. However, the conversion (the bolometric correction) between the magnitudes used in observations (e.g., the V-band magnitude $`m_v`$ or the B-band magnitude $`m_b`$) and the bolometric magnitude of AGNs is not well determined, so a detailed estimation of the detection limit on the bolometric magnitude is not worthwhile here. At large $`z`$, say $`z>3`$, the population plotted as the dot-dash line decreases considerably with increasing $`z`$, compared with the previous result (the solid line) in figure 7. This is just as expected, because $`L_{lim}`$ is an increasing function of $`z`$ in this case.
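Equations (25)-(26) are straightforward to implement; the sketch below takes them at face value (Einstein-de Sitter universe, $`H_0=50`$ km/s/Mpc, bolometric zero point $`F_0=2.48\times 10^{-8}`$ W/m<sup>2</sup> after Allen 1973).

```python
import math

C_KMS, H0 = 2.998e5, 50.0     # speed of light [km/s], Hubble constant [km/s/Mpc]
MPC_M = 3.086e22              # metres per megaparsec
F0 = 2.48e-8                  # bolometric magnitude zero point [W/m^2]

def L_lim(z, F_lim=6.6e-16):
    """Detection limit on bolometric luminosity [W], eq. (25)."""
    hubble_length = (C_KMS / H0) * MPC_M          # c/H0 in metres
    return F_lim * 4.0 * math.pi * (math.sqrt(1.0 + z) - 1.0) ** 2 \
           * (1.0 + z) * hubble_length ** 2

def m_bol(F):
    """Apparent bolometric magnitude for flux F [W/m^2], eq. (26)."""
    return -2.5 * math.log10(F / F0)
```

As a check, `m_bol(6.6e-16)` returns 18.9 to within rounding, matching the magnitude limit quoted in the text.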
For the dot-dash line at small $`z`$, say $`z<2`$, the population is overestimated, because the $`L_{lim}`$ corresponding to a fixed apparent magnitude $`m_{lim}`$ ($`=18.9`$ in figure 7) is too small; such faint nuclei should not be classified as AGNs. To be more realistic, the detection limit $`L_{lim}`$ should be switched from (23) to (25) at the crossing point of these curves ($`z=2.5`$ in figure 7) as $`z`$ increases.

## 5 SUMMARY

The purpose of this work is not to present a realistic model which can precisely explain the observations, but to demonstrate the properties of an attractive mechanical model, the Kerr BH fly-wheel model, and to put forward a physical scenario for the evolution of QSOs/AGNs not as a speculation but as the result of a mechanical process. We have proposed a magnetohydrodynamic model for the 'engine' of QSOs/AGNs: the Kerr BH fly-wheel model (see section 3 of this paper and Nitta et al. 1991). This engine is driven by the rotation of the BH. The rotational energy of the BH is extracted by an electromagnetic process (magnetic braking). The extracted energy is first stored in the magnetosphere in the form of Maxwell stress, and then produces plasma outflows. One might be misled into thinking that the fly-wheel engine is a mechanism only for radio-loud activity, because the released energy produces an outflow; this impression probably comes from the association of highly collimated bipolar jets with double radio lobes. From the observational point of view, however, outflows are required in other kinds of AGNs as well. For example, it is established that BAL QSOs also have outflows (the "disk wind" nearly in the plane of the disk; see Murray et al. 1995). From the theoretical point of view, the mechanics of the global structure of outflows has been discussed in many papers.
The outflows produced will show a wide variation of global structures, e.g., the bipolar jets of radio-loud QSOs/AGNs, the equatorial wind of BAL QSOs, and others (see Nitta 1994). Thus radio-loud activity is simply one possibility of the fly-wheel engine. In the fly-wheel model we can clarify the properties and the evolution of an individual engine (see section 3), parametrized by the BH mass $`m`$, the initial Kerr parameter $`a`$, the magnetic field $`B_0`$ at the source region and a small dimensionless parameter $`ϵ`$. These engines are assumed to correspond to QSOs/AGNs. Given the statistical properties of these parameters, we can discuss the statistics of the ensemble of QSOs/AGNs. Here we adopt the Press-Schechter formalism for the mass distribution. In the scenario of this work, Kerr BHs are supposed to form at $`z=z_{vir}=200`$ with nearly maximum angular momentum $`a\simeq m`$ at the formation epoch. The BH mass is assumed to be 10% of the total baryonic mass of the proto-galactic cloud. Since the magnetic field $`B_0`$ seems to be related to the BH mass $`m`$, we set $`B_0\propto m^\zeta `$. As a result, a very weak dependence $`0\geq \zeta >-1/2`$ is preferred for consistency with observations; we assume $`\zeta =0`$ and $`B_0=1`$ \[T\] to obtain the figures. The small parameter $`ϵ`$ is determined by the physics of the plasma injection process, e.g., pair plasma production or overflow from the disk halo. Since there is no widely accepted standard theory of this process, we assume $`ϵ=0.1`$ (the distance of the source region is several times the horizon radius). The evolution then depends only on the BH mass $`m`$. We have discussed the evolution of the luminosity function and of the spatial number density over the period $`0\leq z\leq 5`$, and made a qualitative comparison with observations.
In the typical case $`n=-0.8`$, $`\zeta =0`$, we obtain an evolution of the luminosity function and of the spatial number density with plausible behavior over $`0\leq z\leq 5`$, consistent with observations. The bright end of the luminosity function is lifted up for $`z\gtrsim 3`$; it then drops, and the curve becomes shorter and steeper for $`0\leq z\leq 3`$, as shown in figures 5 and 6. In accordance with this behavior, the spatial number density evolves as shown in figure 7. We emphasize that these characteristic evolutions are derived from the evolution of the individual magnetospheric structure in the vicinity of the Kerr BH. This individual evolution is not a speculation but a result based on the MHD picture. In previous works, e.g., Pei (1995), the evolution of individual AGNs is simply an assumption, without any mechanical scenario. We have tried to join the intrinsic Kerr BH magnetospheric physics, i.e., the fly-wheel model, with the observational facts, and have succeeded in presenting a mechanical model of the evolution, at least qualitatively. In order to explain the observations over $`0\leq z\leq 5`$, a somewhat flat mass function of BHs ($`n\simeq -0.8`$ in equation 7) and a weak dependence of the magnetic field at the source region on the BH mass ($`\zeta >-1/2`$ in equation 6) are preferred. These values of $`n`$ and $`\zeta `$ should be determined by additional physics; since no widely accepted model of that physics exists, we have treated them as free parameters of our picture.

## 6 DISCUSSION

### 6.1 Simplifications in our model

For simplicity, we assumed the BH formation epoch to be $`z_{vir}=200`$ independent of the BH mass $`m`$. In the Compton drag model (see Sasaki & Umemura 1996), seed BHs can form only at epochs in which the background photon density is sufficiently high, so the formation epoch should be $`z>10^2`$. Of course, in the actual case $`z_{vir}`$ will depend on $`m`$.
However, when we translate the redshift $`z`$ into the cosmic time $`t`$ by means of the Einstein-de Sitter model, the variation around $`z\sim 200`$ corresponds to a time of order $`10^6`$\[yr\], which is negligible compared with the epoch around $`z\sim 3`$ ($`\sim 10^9`$\[yr\]) in which we are interested. Hence it seems reasonable to suppose $`z_{vir}\simeq const.`$, independent of $`m`$. We assumed that the magnetic field $`B_0`$ at the source region depends on the BH mass $`m`$ as $`B_0\propto m^\zeta `$. In a realistic case $`B_0`$ should depend not only on $`m`$ but also on the mass accretion rate. This problem is very difficult and still open at present, as discussed in subsection 6.3; to treat it fully is beyond the scope of this paper. We assumed an initial Kerr parameter $`a/m\simeq 1`$. Bic̆ák & Dvor̆ák (1980) showed that an extreme Kerr hole does not possess a magnetic field threading the horizon. This means that the magnetic braking process cannot extract the rotational energy of an exactly extreme Kerr hole, so we cannot set the initial Kerr parameter to $`a/m=1`$. However, the exact value of the initial Kerr parameter is not essential: even if $`a/m=`$, say, 0.9, 0.5 or 0.3, the explosive epoch $`t_{max}`$ shifts only by a factor of order unity, and such an ambiguity does not matter in a discussion based on order estimates. The initial mass function and the initial Kerr parameter function of the "proto-galactic cloud" have been discussed in the literature (e.g., Sasaki & Umemura 1996 and Susa et al. 1994), but the general relativistic dynamical process of BH formation from the proto-galactic cloud has not been solved. The initial Kerr parameter function may be less important than the initial mass function, as discussed in the paragraph above.
Hence we have constructed the initial mass function of the BHs from that of the proto-galactic clouds, by assuming that 10% of the baryonic mass collapses to form the seed BH. We assumed the dimensionless small parameter $`ϵ\equiv m\mathrm{\Omega }_F=0.1`$ in the calculation. This value corresponds to a source region (the plasma injection region or pair creation region) located at a radius several times the horizon radius ($`\simeq 4.6m`$ for $`ϵ=0.1`$), which is plausible for the outer gap model of pair creation. The factor $`\mathrm{\Omega }_F`$ in the definition of $`ϵ`$ roughly coincides with the Keplerian angular velocity at the source region. If the locus of the plasma source is fixed at a radius several times the horizon radius, as in the outer gap model, then $`\mathrm{\Omega }_F`$ depends only on $`m`$. In the evolutionary model of Nitta et al. (1991), $`m\simeq const.`$ during the characteristic time scale of the angular momentum extraction, which defines the time scale of the evolution. Hence we can assume $`\mathrm{\Omega }_F\simeq const.`$, and thus $`ϵ\simeq const.`$, independent of time. In our model each BH suddenly ceases to release energy at $`t=t_{max}`$, corresponding to the situation $`\omega _H=\mathrm{\Omega }_F`$. If $`\mathrm{\Omega }_F`$ of a BH magnetosphere is distributed over a range, say $`\mathrm{\Omega }_1\leq \mathrm{\Omega }_F\leq \mathrm{\Omega }_2`$, as a function of the magnetic flux function, the fly-wheel activity ceases gradually, field line by field line, in order of decreasing $`\mathrm{\Omega }_F`$. If this spread of $`\mathrm{\Omega }_F`$ does not change $`ϵ`$ by an order of magnitude, it does not affect our qualitative discussion. In order to relate the time after the formation of black holes to the redshift, we need a cosmological model. In the main discussion of this paper we adopt the Einstein-de Sitter model (see eq. 11) for simplicity.
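The quantitative impact of this choice can be checked directly. The sketch below compares look-back times in the Einstein-de Sitter model with those in a flat model with $`\mathrm{\Omega }_0=0.1`$, $`\lambda _0=0.9`$, using the closed-form ages of both models (with $`H_0=50`$ km/s/Mpc); it confirms the order-unity factors quoted in the next paragraph.

```python
import math

H_INV_YR = 3.086e19 / 50.0 / 3.156e7   # 1/H0 in years for H0 = 50 km/s/Mpc

def lookback_eds(z):
    """Look-back time [yr] in the Einstein-de Sitter model (Omega_0 = 1)."""
    return (2.0 / 3.0) * H_INV_YR * (1.0 - (1.0 + z) ** -1.5)

def lookback_lambda(z, om=0.1, ol=0.9):
    """Look-back time [yr] in a flat Lemaitre model (Omega_0 + lambda_0 = 1)."""
    def age(zz):   # closed-form age of a flat matter + Lambda universe
        return (2.0 / (3.0 * math.sqrt(ol))) * H_INV_YR \
               * math.asinh(math.sqrt(ol / om) * (1.0 + zz) ** -1.5)
    return age(0.0) - age(z)
```

For the formation epoch $`z=10^2`$ this gives $`t_2/t_1\simeq 1.92`$, and at $`z=5`$ it gives $`t_2/t_1\simeq 1.83`$ (the ratios are independent of $`H_0`$, which cancels).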
Of course, the Einstein-de Sitter model is very classical, and recent observational studies of cosmology support a model with a cosmological constant, i.e., the Lemaitre model. We therefore estimate the difference in the results between the Einstein-de Sitter model and the Lemaitre model. Let $`t_1(z)`$ and $`t_2(z)`$ be the look-back times of the epoch with redshift $`z`$ in the Einstein-de Sitter model and the Lemaitre model, respectively. If we choose the parameters $`\mathrm{\Omega }_0=0.1`$ and $`\lambda _0=0.9`$ in the Lemaitre model, the look-back time of the black hole formation epoch ($`z=10^2`$) is $`t_2=1.92t_1`$, and at the epoch $`z=5`$ in which we are interested here, $`t_2=1.83t_1`$. These differences are factors of order unity. Hence the difference between the two cosmological models does not matter for our discussion, because the evolutionary process discussed here (see section 3) is based on order estimates.

### 6.2 Two types of BH engines

In this paper our attention has been focused on the fly-wheel model. However, BH-accretion disk systems also seem to include another type of engine: the fuel (accretion-powered) engine. The elementary process of the fuel engine is the release of the gravitational energy of the accreting matter, so its activity depends strongly on the mass accretion rate. It is widely believed that the accretion rate is roughly determined by the Eddington limit. This idea is based on the speculation that the regulating stage of the entire accretion process is the final stage, i.e., the accretion onto the BH. However, we have no theoretical assurance of how the entire system (i.e., a galaxy) determines the accretion rate; in other words, of how the system removes the angular momentum of the accreting matter to realize such an accretion rate.
In order to determine the activity of the fuel engine, we must solve for the extraction of angular momentum over a very wide spatial range. If we want to obtain the accretion rate in some range of radii from an angular momentum extraction mechanism, we need the accretion rate outside this region, where another mechanism may regulate it, and so on; this endless chain seems hopeless to solve completely. It is so difficult to assemble the angular momentum extraction mechanisms of each range into a consistent theory that we have no choice but to treat the final accretion rate onto the BH as a free parameter. This is the essential difficulty in constructing an evolutionary scenario of QSOs/AGNs based on the fuel model. In the model adopted in this paper we do not consider the activity of the fuel engine at all. However, the author believes that the coexistence of these two types of engines is beyond doubt. Even after the fly-wheel engine ceases to release energy, the fuel engine continues to act as long as accretion continues. With an appropriate assumption for the luminosity of the fuel engine, its contribution should be added. For simplicity, if we assume a constant energy release of the fuel engine, i.e., the Eddington luminosity (which depends only on $`m`$), this luminosity is comparable with the initial luminosity of the fly-wheel engine. In this picture the fuel engine works constantly, while the fly-wheel engine works for a period $`t_{max}`$ after the BH formation; even after the fly-wheel engine dies, the luminosity does not vanish but decreases to the Eddington luminosity. In section 2 the fly-wheel model was characterized by the fact that its mechanism can be clarified by a discussion closed in the vicinity of the BH. This statement, however, might be somewhat exaggerated: the fly-wheel engine is related to the fuel engine in that the magnetic field $`B_0`$ at the source region may depend on the accretion rate.
$`B_0`$ is the magnetic field strength averaged over a macroscopic scale larger than the scale of the turbulence generated in the accretion disk. Such a large-scale magnetic field should be amplified by the accretion: the accreting plasma carries the frozen-in magnetic field into the inner magnetosphere and compresses it, while small but finite resistivity dissipates the field. The saturated level of the magnetic field strength is then determined by the equilibrium between compression and dissipation. Unfortunately, this problem has not yet been solved, as discussed in the next subsection. Hence we have assumed that $`B_0`$ depends on the BH mass $`m`$ as in equation (6), because the accretion rate seems to depend on $`m`$.

### 6.3 Ambiguity of the fly-wheel power estimation

Ambiguities still remain in the estimation of the power of the fly-wheel activity, mainly for the following two reasons. The first is the theoretical ambiguity in the estimation of the magnetic field strength near the BH. The second is the ambiguity of the innermost magnetospheric structure of BH-accretion disk systems, especially whether the field lines threading the horizon (or the innermost region of the accretion disk) are open toward infinity or closed in loop-like structures. The poloidal magnetic field strength is traditionally estimated from an assumed equipartition between the magnetic energy density and the gravitational or thermal one (see, e.g., Shakura & Sunyaev 1973). Recent numerical studies of the nonlinear evolution in resistive MHD seem to support the equipartition result. For example, Matsumoto et al. (1997) conclude that a predominantly toroidal magnetic field is amplified by the differential rotation of the disk, and a plasma $`\beta `$-value of $`\beta \sim 10`$ can be achieved.
If there is a significant poloidal magnetic field, the saturation level will be larger still, and $`\beta \sim 1`$ might be achieved. While this conjecture of equipartition is now widely accepted, there is still room for disagreement. Recently Livio et al. (1999) critically assessed the efficiency of the Blandford-Znajek (BZ) process (the magnetically dominated case of the BH fly-wheel model) in comparison with other disk activities. They reconsidered the field strength in the innermost region of the BH-disk system; in their result, the power of the BZ process is dominated by the fly-wheel or fuel (viscous heating) power of the innermost region of the accretion disk. The problem of the saturated strength of the poloidal magnetic field is still an open question. However, we should note that the energetics of the fly-wheel process depend not only on the strength of the magnetic field but also on the inner magnetospheric structure of the BH-disk system. The Poynting flux extracted by the fly-wheel process is carried along the poloidal magnetic field lines, and will be converted to kinetic energy of plasma outflow in some region distant from the horizon. Hence only open magnetic field lines can take part in the energy extraction toward very distant regions. For example, Nitta et al. (1991) give a schematic picture of the innermost magnetospheric structure (see figure 3 of that paper). In their result, the magnetic field lines connecting to the innermost region of the disk are closed (a loop-like structure connecting the BH and the disk) and do not contribute to the energy extraction; open field lines emanate from the high-latitude region of the BH and from the outer part of the disk. In this case the discussion of Livio et al. (1999) should be altered. Thus the efficiency of the fly-wheel process is closely tied to the disk dynamo process and to the magnetospheric structure of the innermost region.
These are very important but still open questions at present, and to pursue them would carry us too far from the purpose of this paper.

### 6.4 Discrimination of QSOs

As discussed in section 4, in the fly-wheel model we can only estimate the total output power of the engine; we cannot discuss the spectrum of the resultant radiation. Hence the only way to distinguish QSOs/AGNs from normal galaxies is to set a criterion, say $`L_{lim}`$, on the bolometric luminosity. Here we suppose that the entire released energy is perfectly converted to radiation, and that engines with luminosity greater than $`L_{lim}`$ can be treated as QSOs/AGNs. This simplified procedure is obviously far from actual QSO number-counting studies. In future work the problem of the resultant radiation spectrum should be solved; this is possible only if we solve the physics of the plasma outflows generated by the fly-wheel engine, which is, needless to say, one of the most difficult open questions in magnetospheric astrophysics. From figure 7, the locus of the peak of the population depends strongly on the criterion $`L_{lim}`$: for smaller $`L_{lim}`$ the peak shifts to smaller $`z`$. This means that, surveying AGNs more deeply, we will find more and more faint AGNs containing low-mass BHs; these may correspond to Seyferts. However, if the mass of the central BH is too small, the nuclear activity is dominated by the host galaxy, and such objects may not be classified as QSOs/AGNs. In this sense, the criteria $`L_{lim}=10^{38.9}\text{or}10^{37.7}`$\[W\] adopted in this paper may be plausible, because these values well exceed the typical luminosity of normal galaxies, $`\sim 10^{37}`$\[W\] (the Andromeda galaxy).

### 6.5 Similarity of radio-loud and radio-quiet AGNs

Similarity among all kinds of AGNs is widely accepted from the observational point of view.
The spectral energy distributions (SEDs) of radio-loud and radio-quiet AGNs are quite similar except in the radio range (see, e.g., Elvis et al. 1994). We also cannot find any intrinsic difference in the evolution of the spatial number density of the optically selected QSOs, the flat- and steep-spectrum sources, and the radio-loud QSOs (see Shaver et al. 1996). These pieces of observational evidence implicitly suggest the universality of the physical processes of QSOs/AGNs. As mentioned in section 5, the author believes that the fly-wheel model is applicable not only to radio-loud AGNs. The variety of AGNs might be caused by the variation of intrinsic parameters (BH mass, BH angular velocity, magnetic field strength, etc.). The difference of parameters will lead to different structures of outflows (see Nitta 1994). We may expect that outflows with different structures will produce different types of radiation spectra. However, the correspondence between the global structure of the outflow and the resultant radiation spectrum is still unclear. This should be a future problem. ### 6.6 Other stories of the fly-wheel activity There are other scenarios that make the central BH of QSOs/AGNs a Kerr BH. Here we mention two of them: the merging of BHs and the spin-up by accretion. Wilson & Colbert (1995) discussed the formation of AGN BHs by the merging process. They tried to explain the difference between radio-loud and radio-quiet AGNs. As is well known, the number fraction of radio-loud AGNs is only 10% of all AGNs, and radio-loud galaxies are mainly observed as elliptical galaxies. From these points, they supposed that radio-loud AGNs are merger events of BHs. Merging of two galactic nuclei produces a rapidly rotating Kerr BH, and for a period after the merging, the Blandford-Znajek process acts and shows radio-loud activity. Similarly, Moderski & Sikora (1996) supposed that a rapidly rotating BH is made by very large mass accretion. The scenarios discussed in these papers are alternatives to ours. 
There is room for another possibility: the fraction of radio-loud to radio-quiet objects may be related to the probability of forming a jet-like outflow. The fly-wheel engine can form various structures of outflows (see Nitta 1994). If a well-collimated jet-like structure nearly perpendicular to the galactic disk is formed, it will be observed as a radio-loud object (or a radio galaxy, see Urry & Padovani 1995), because the terminal shock in the jet will be located far from the galactic disk due to the very low ambient matter density in this direction. If an equatorial wind is formed, it will be a BAL QSO (see, e.g., Cohen et al. 1995 or Murray et al. 1995). Of course, these are simply speculations, because we do not yet have widely accepted physics for the structure formation of plasma outflows. Anyway, since we do not have an established theory, we must try to test various possibilities. In some literature, the fuel process (accretion from the disk) and the fly-wheel process (the Blandford-Znajek process) are considered simultaneously (see, for example, Moderski & Sikora 1996 and Ghosh & Abramowicz 1997). In these models, the accretion contributes to spinning up the BH, but the BZ process suppresses it. On the contrary, Nitta et al. (1991) imply that MHD accretion onto the Kerr BH extracts the angular momentum and spins down the rotation of the Kerr BH when $`\mathrm{\Omega }_F<\omega _H`$ (see section 3). This is a natural result of MHD accretion onto the Kerr BH (see Takahashi et al. 1990). One might think these results contradict each other. However, the author does not think so. Moderski & Sikora (1996) and Ghosh & Abramowicz (1997) suppose a start with a slowly rotating BH ($`\mathrm{\Omega }_F>\omega _H`$); on the contrary, Nitta et al. (1991) suppose a rapidly rotating BH ($`\mathrm{\Omega }_F\ll \omega _H`$). This difference arises from the difference in the concepts of the models. However, we should note that this difference is essential. 
In our model, the energy source of the Kerr BH fly-wheel engine is the rotation energy inherently possessed by the central BH. The origin of this energy is the tidal interaction, during the collapse of the proto-galactic cloud, with other density fluctuations. Once the fly-wheel engine starts to act, the rotation energy monotonically decreases, and the engine stops at the state $`\mathrm{\Omega }_F=\omega _H`$. On the other hand, in the other models, accretion energy is converted to the fly-wheel type of activity, and they relate it to the radio-loud activity. For example, the central BH is normally in a state of slow rotation; however, if a coalescence of BHs (Wilson & Colbert 1995) or very large mass accretion (Moderski & Sikora 1996) occurs, the central BH spins up and the fly-wheel engine starts to work. In these models, the energy source is, consequently, the accretion energy. ### 6.7 Dormant quasar: Fornax A Let us note a splendid example, Fornax A (NGC1316, see Iyomoto et al. 1998), which seems to clearly show properties of the fly-wheel engine. In general, we cannot observe the evolution of a galaxy because of its very long lifetime; however, Fornax A is a particular case for which we can obtain evidence of the evolution over the most recent 0.1 Gyr. This is a radio galaxy with double radio lobes. The nucleus must have been active ($`>4\times 10^{34}`$\[W\] in 2-10 keV X-ray luminosity) at least 0.1 Gyr ago, while the present activity is ‘dormant’ ($`\sim 2\times 10^{33}`$\[W\] in 2-10 keV X-ray luminosity). We can guess the reason based on the fly-wheel model as follows. The fly-wheel engine was still active 0.1 Gyr ago, and the nucleus ejected plasma outflows (bipolar jets) and made the radio lobes. At an epoch within the past 0.1 Gyr, the fly-wheel engine ceased to work and the nucleus became dormant. Without the energy supply from the fly-wheel engine, the radio lobes can emit radiation only within a period determined by the synchrotron cooling time. 
In this sense, the fly-wheel engine of Fornax A is not a ‘dormant’ one but a ‘dead’ one, unless some mechanism spins up the central BH again. However, the fuel engine can act after the fly-wheel activity ceases. This corresponds to the present activity of the nucleus. In this case, the peak fly-wheel activity is an order of magnitude greater than the fuel activity. This is just a result of our model with $`ϵ\sim 0.1`$. Acknowledgement The author wishes to thank Drs. K. Aoki, K. Okoshi, S. Satoh, S. Kameno, T. Yamamoto and T. Totani, and Mr. Y. Tutui at the National Astronomical Observatory of Japan for helpful comments and criticisms. The author also thanks the anonymous referee for comprehensive comments, and Mr. S. Abe and Dr. A. Kawamura for technical support.
# A Piecewise-Conserved Constant of Motion for a Dissipative System ## I Introduction Finding constants of motion is an important step in the solution of many problems of Physics, as they allow one to reduce the number of degrees of freedom of the problem. Constants of motion are intimately related to conservation laws or symmetries of the system. For example, it is well known that a symmetry of the Lagrangian of a system gives rise, by virtue of Noether’s theorem, to a constant of motion. By definition, a constant of motion preserves its value during the evolution of the system. Even in cases where there are no constants of motion, one could still sometimes find adiabatic invariants, which generalise the concept of a constant of motion to systems with slowly-varying parameters. It turns out, however, that there are cases with constants of motion which are only piecewise-conserved. Well-known examples of such piecewise-conserved constants of motion arise from the generalisation of the Laplace-Runge-Lenz vector for general central potentials. For example, in the case of the three-dimensional isotropic harmonic oscillator, the Fradkin vector (which generalises the Laplace-Runge-Lenz vector) abruptly reverses its direction (although it preserves its magnitude) during a full period. (The Fradkin vector points toward the perigee, and the position of the perigee jumps discontinuously whenever the particle passes through the apogee.) Also, in the truncated Kepler problem, the Peres-Serebrennikov-Shabad vector abruptly changes its direction whenever the particle in motion passes through the periastron. Again, it is just the direction of the conserved vector which is only piecewise-conserved: the magnitude of the vector remains a constant of motion in the original meaning (namely, the magnitude has a fixed value throughout the motion). Piecewise-conserved constants of motion may also be relevant for systems which involve radiation reaction. 
In the above examples, the piecewise-conserved constants of motion appear in non-dissipative systems, and result from a discontinuity of the force (as in the truncated Kepler problem) or from geometrical considerations (as in the three-dimensional isotropic harmonic oscillator case). These piecewise-conserved constants of motion involve vectors rather than scalars, and the vectors still conserve their magnitude. The following question arises: can one find, for elementary systems, piecewise-conserved scalar constants of motion? In what follows, we shall discuss an elementary piecewise-conserved scalar constant of motion for a simple oscillatory mechanical model which involves dissipation in the form of sliding (dry) friction. Although dry friction is more nearly descriptive of everyday macroscopic motion in inviscid media, it is usually ignored in elementary mechanics courses and textbooks, which very frequently discuss viscous (velocity-dependent) friction instead. Dry friction exhibits, however, some very interesting features, and can be readily presented in a laboratory or classroom demonstration. The problem of a harmonic oscillator damped by dry friction was considered by several authors. Lapidus analysed this problem for equal coefficients of static and kinetic friction, and found the position where the oscillator comes to rest. Hudson and Finfgeld were able to find the general solution of the equation of motion. However, they again assumed equal coefficients of static and kinetic friction, and used the Laplace transform technique, which is unknown to students of elementary mechanics courses, to generate the solution. An elementary solution, which ignores static friction, was derived by Barratt and Strobel. This solution is based on solving separately for each half cycle of the motion, and is consequently tedious and unappealing. Recently, Zonetti et al. 
considered the related problem of both dry and viscous friction for a pendulum, but did not offer a full analytic solution for the motion. In this work, we find the general solution for the motion taking into account both static and kinetic friction, using elementary techniques which are available to students of elementary mechanics courses. We analyse the solution using a piecewise-conserved constant of motion. The discussion, as well as the corresponding laboratory experiment or classroom demonstration, is suitable for a basic course for physics or engineering students. ## II Elementary Discussion Let a block of mass $`M`$ be placed on a horizontal surface, such that the coefficients of static and kinetic friction are $`\mu _s`$ and $`\mu _k`$, respectively. The block is attached to a linear spring with spring constant $`k`$ (for both compression and extension), such that initially the spring is stretched from its equilibrium length by $`\ell `$, and the block is kept at rest at $`x=\ell `$. At time $`t=0`$ the block is released. If $`k\ell >\mu _sMg`$, $`g`$ being the gravitational acceleration, the block starts to accelerate. We take the friction to be small, namely $`\mu _{s,k}\ll \ell k/(Mg)`$, and also assume slow motion, such that the friction force is independent of the speed. Namely, we neglect effects such as air resistance, and include only the force which results from the block touching the surface. We also neglect any variation of $`\mu _k`$ with the speed. Immediately after the block starts accelerating, its motion is governed by the equation of motion $$M\ddot{x}=-kx+\mu _kMg,$$ (1) with initial conditions $`x(0)=\ell `$ and $`\dot{x}(0)=0`$. From now on, let us introduce the frequency $`\omega ^2=k/M`$. Of course, the system does not preserve its energy, due to the friction force. However, let us define a new coordinate $`x^{\prime }=x-\mu _kg/\omega ^2`$. Equation (1) then becomes $`\ddot{x}^{\prime }+\omega ^2x^{\prime }=0`$. 
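The change of variables just introduced is easy to verify numerically. The sketch below (written for this presentation; it is not part of the original analysis) integrates Eq. (1) over the first half cycle, using the copper-on-steel parameter values quoted later for the figures, and checks that $`\frac{1}{2}M\dot{x}^2+\frac{1}{2}M\omega ^2(x-\mu _kg/\omega ^2)^2`$ stays constant while the true mechanical energy decreases:

```python
import math

# Parameters as in the paper's first example: M = 1 kg, omega = 5 s^-1,
# mu_k = 0.36, g = 9.8 m/s^2, initial stretch l = 1 m (so k = M*omega**2)
M, w, mu_k, g, l = 1.0, 5.0, 0.36, 9.8, 1.0
d = mu_k * g / w**2                 # shift mu_k*g/omega^2 of the coordinate x'

def conserved(x, v):
    """(1/2) M v^2 + (1/2) M w^2 x'^2 with x' = x - d (first half cycle only)."""
    return 0.5 * M * v**2 + 0.5 * M * w**2 * (x - d)**2

def energy(x, v):
    """True mechanical energy (1/2) M v^2 + (1/2) M w^2 x^2."""
    return 0.5 * M * v**2 + 0.5 * M * w**2 * x**2

# integrate M x'' = -k x + mu_k M g (Eq. 1) over the first half cycle
dt, t, x, v = 1e-5, 0.0, l, 0.0
E0 = conserved(x, v)
T0 = energy(x, v)
while t < math.pi / w:
    v += (-w**2 * x + mu_k * g) * dt    # valid while the block moves leftward
    x += v * dt
    t += dt
assert abs(conserved(x, v) - E0) < 1e-2   # piecewise constant of motion
assert energy(x, v) < T0                  # mechanical energy has decreased
```

The same check can be repeated on any later half cycle, with the sign of the shift of the coordinate alternating.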
For this equation we know that there is a constant of motion, namely $`\mathcal{E}=\frac{1}{2}M\dot{x}^{\prime 2}+\frac{1}{2}M\omega ^2x^{\prime 2}`$. Therefore, despite the presence of friction, one can still find a constant of motion, which has the functional form of the total mechanical energy, but which is of course not the energy, as the latter is not conserved. Calculating its numerical value we find that $`\mathcal{E}_0=\frac{1}{2}M\omega ^2\left(\ell -\mu _kg/\omega ^2\right)^2`$. At the time $`t=\pi /\omega `$ the velocity of the block vanishes, and it can easily be shown that at $`t=\pi /\omega `$ its acceleration is $`\ddot{x}(\pi /\omega )=(\ell -\mu _kg/\omega ^2)\omega ^2>0`$, such that the block reverses its motion. (We assume here that $`M\ddot{x}(\pi /\omega )>\mu _sMg`$.) The nature of the friction force is that its direction is always opposite to the direction of motion. Consequently, the equation of motion now changes to $$M\ddot{x}=-kx-\mu _kMg,$$ (2) with initial conditions $`x(\pi /\omega )=-\ell +2\mu _kg/\omega ^2`$ and $`\dot{x}(\pi /\omega )=0`$. One can again solve this equation readily. This time, let us define $`x^{\prime }=x+\mu _kg/\omega ^2`$. Equation (2) again becomes $`\ddot{x}^{\prime }+\omega ^2x^{\prime }=0`$, such that $`\mathcal{E}`$ is still conserved. However, this time the numerical value of $`\mathcal{E}`$ is different: denoting it by $`\mathcal{E}_1\neq \mathcal{E}_0`$, we find $`\mathcal{E}_1=\frac{1}{2}M\omega ^2\left(\ell -3\mu _kg/\omega ^2\right)^2`$. One can describe the next phases of the motion similarly. During each phase of the motion (during half a period between two times at which the velocity vanishes) $`\mathcal{E}`$, if defined properly, is conserved. However, $`\mathcal{E}`$ is only piecewise-conserved, as its value changes abruptly from phase to phase. We note that the period of the oscillations is not altered by the presence of friction, and denote by $`P_{1/2}`$ half that period, namely $`P_{1/2}=\pi /\omega `$. ## III General Discussion Let us now discuss the system in a more general way. 
It turns out that although there are friction forces, one can still write a hamiltonian $$H(x,p;t)=\frac{1}{2M}p^2+\frac{1}{2}M\omega ^2x^2-f(t)x,$$ where $`f(t)=(-1)^{\left[t/P_{1/2}\right]}\mu _kMg`$. The equation of motion is now $$\ddot{x}+\omega ^2x=f(t)/M$$ (3) with the initial conditions being (as before) $`x(0)=\ell `$ and $`\dot{x}(0)=0`$. We denote by square brackets of some argument the largest integer smaller than or equal to the argument. We also assume that the static friction force at the turning points of the motion is smaller than the elastic force of the spring, such that the motion does not stop. (Of course, for large enough times this would no longer be true, and the block would eventually stop—see below.) Let us define the (complex) variable $`\xi \equiv \dot{x}+i\omega x`$, where $`i^2=-1`$. Then, instead of a real second-order equation (such as Eq. (3)), one obtains a complex first-order equation. It is advantageous to do this, because there is a general solution for any inhomogeneous linear first-order differential equation in terms of quadratures. Substituting the definition of $`\xi `$ in Eq. (3) we find that the equation of motion, in terms of $`\xi `$, takes the form $$\dot{\xi }-i\omega \xi =f(t)/M,$$ (4) with the initial condition $`\xi _0\equiv \xi (t=0)=i\omega \ell `$. The solution of the equation of motion (4) is $$\xi (t)=e^{i\omega t}\left\{\int _0^t\frac{1}{M}f(t^{\prime })e^{-i\omega t^{\prime }}dt^{\prime }+\xi _0\right\}.$$ (5) After finding the solution $`\xi (t)`$ we can find $`x(t)`$ and $`\dot{x}(t)`$ via $`\dot{x}(t)=(\xi (t)+\xi ^{*}(t))/2`$ and $`x(t)=(\xi (t)-\xi ^{*}(t))/(2i\omega )`$. We denote by a star complex conjugation. In order to integrate Eq. (5) we find it convenient to separate the discussion into two cases: case (a), where $`\left[t/P_{1/2}\right]`$ is an odd number (namely, $`\left[t/P_{1/2}\right]=2n-1`$), and case (b), where $`\left[t/P_{1/2}\right]`$ is even (namely, $`\left[t/P_{1/2}\right]=2n`$), where $`n`$ is an integer. 
We next split the interval of integration in Eq. (5) into two parts: we first integrate from $`t^{\prime }=0`$ until $`t_{2n-1}\equiv (2n-1)P_{1/2}`$, and then integrate from $`t_{2n-1}`$ to $`t`$, and sum the two contributions. Integrating term by term we find $`{\displaystyle \int _0^{t_{2n-1}}}{\displaystyle \frac{1}{M}}f(t^{\prime })e^{-i\omega t^{\prime }}dt^{\prime }`$ $`=`$ $`{\displaystyle \underset{j=0}{\overset{n-1}{\sum }}}{\displaystyle \int _{2jP_{1/2}}^{(2j+1)P_{1/2}}}\mu _kge^{-i\omega t^{\prime }}dt^{\prime }`$ (6) $`-`$ $`{\displaystyle \underset{j=0}{\overset{n-2}{\sum }}}{\displaystyle \int _{(2j+1)P_{1/2}}^{(2j+2)P_{1/2}}}\mu _kge^{-i\omega t^{\prime }}dt^{\prime }`$ (7) $`=`$ $`-2i(2n-1)\mu _kg/\omega .`$ (8) For case (a) we find that $$\int _{t_{2n-1}}^t\frac{1}{M}f(t^{\prime })e^{-i\omega t^{\prime }}dt^{\prime }=-\int _{t_{2n-1}}^t\mu _kge^{-i\omega t^{\prime }}dt^{\prime }=-i(e^{-i\omega t}+1)\mu _kg/\omega .$$ (9) For case (b) we find $`{\displaystyle \int _{t_{2n-1}}^t}{\displaystyle \frac{1}{M}}f(t^{\prime })e^{-i\omega t^{\prime }}dt^{\prime }`$ $`=`$ $`-{\displaystyle \int _{(2n-1)P_{1/2}}^{2nP_{1/2}}}\mu _kge^{-i\omega t^{\prime }}dt^{\prime }+{\displaystyle \int _{2nP_{1/2}}^t}\mu _kge^{-i\omega t^{\prime }}dt^{\prime }`$ (10) $`=`$ $`i(e^{-i\omega t}-3)\mu _kg/\omega .`$ (11) Collecting the two integrals, we find for case (a) that $$\xi _a(t)=-2i(2n-1)e^{i\omega t}\mu _kg/\omega -i(1+e^{i\omega t})\mu _kg/\omega +i\omega \ell e^{i\omega t},$$ (12) and for case (b) $$\xi _b(t)=-2i(2n-1)e^{i\omega t}\mu _kg/\omega +i(1-3e^{i\omega t})\mu _kg/\omega +i\omega \ell e^{i\omega t}.$$ (13) Recalling the different values of $`\left[t/P_{1/2}\right]`$ for the two cases (a) and (b), we can unify the expressions for both $`\xi _a(t)`$ and $`\xi _b(t)`$, namely $$\xi (t)=-i\left(2\left[t/P_{1/2}\right]+1\right)e^{i\omega t}\mu _kg/\omega +(-1)^{\left[t/P_{1/2}\right]}i\mu _kg/\omega +i\omega \ell e^{i\omega t}.$$ (14) From this solution for $`\xi (t)`$ we can find that $$x(t)=(-1)^{\left[t/P_{1/2}\right]}\mu _kg/\omega ^2+\left\{\ell -\left(2\left[t/P_{1/2}\right]+1\right)\mu _kg/\omega ^2\right\}\mathrm{cos}\omega t,$$ (15) and 
$$\dot{x}(t)=-\left\{\ell -\left(2\left[t/P_{1/2}\right]+1\right)\mu _kg/\omega ^2\right\}\omega \mathrm{sin}\omega t.$$ (16) An interesting property of the solution given by Eqs. (15) and (16) is that for each half cycle it looks as if the motion were that of a simple harmonic oscillator, with no friction. In fact, the effect of the friction for each half cycle enters only in the initial conditions for that half cycle, or, more accurately, in the smaller value of the initial position for the half cycle. In addition, it is evident from Eq. (15) that the damping of the amplitude of the oscillation is linear in the time $`t`$, whereas in dissipative systems in which the resistance is speed-dependent the damping is exponential in the time. Let us now define a new coordinate $`x^{\prime }(t)\equiv x(t)-f(t)/(M\omega ^2)=x(t)-(-1)^{\left[t/P_{1/2}\right]}\mu _kg/\omega ^2`$. Then, we find that $$x^{\prime }(t)=\left\{\ell -\left(2\left[t/P_{1/2}\right]+1\right)\mu _kg/\omega ^2\right\}\mathrm{cos}\omega t,$$ (17) and $$\dot{x}^{\prime }(t)=-\left\{\ell -\left(2\left[t/P_{1/2}\right]+1\right)\mu _kg/\omega ^2\right\}\omega \mathrm{sin}\omega t.$$ (18) Next, we define $$\mathcal{E}(t)=\frac{1}{2}M\dot{x}^{\prime 2}(t)+\frac{1}{2}M\omega ^2x^{\prime 2}(t).$$ (19) Substituting the expressions for $`x^{\prime }(t)`$ and $`\dot{x}^{\prime }(t)`$ in $`\mathcal{E}`$, we find that $$\mathcal{E}(t)=\frac{1}{2}M\omega ^2\left\{\ell -\left(2\left[t/P_{1/2}\right]+1\right)\mu _kg/\omega ^2\right\}^2.$$ (20) It is clear that $`\mathcal{E}`$ is not a constant of motion. However, a close examination shows that it is piecewise conserved: the only dependence on $`t`$ is through $`\left[t/P_{1/2}\right]`$. Therefore, between any two consecutive turning points the numerical value of $`\mathcal{E}`$ is conserved. Consequently, $`\mathcal{E}`$ is a piecewise-conserved constant of motion. Clearly, $`\mathcal{E}`$ has the dimensions of energy. However, we stress that $`\mathcal{E}`$ is not the mechanical energy of the system, because the latter is not even piecewise-conserved. 
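The closed-form solution (15)-(16) and the piecewise constancy of the quantity defined in Eq. (19) can be cross-checked against a direct numerical integration of the equation of motion with sign-switching friction. A minimal sketch (the parameter values, step size and tolerances are illustrative):

```python
import math

# Illustrative parameters: M = 1 kg, omega = 5 s^-1, mu_k = 0.06, g = 9.8, l = 1 m
M, w, mu_k, g, l = 1.0, 5.0, 0.06, 9.8, 1.0
P_half = math.pi / w
c = mu_k * g / w**2

def x_closed(t):                       # Eq. (15)
    m = int(t // P_half)               # the integer part [t / P_{1/2}]
    return (-1)**m * c + (l - (2*m + 1) * c) * math.cos(w * t)

def E_piecewise(t):                    # Eq. (20): depends on t only through m
    m = int(t // P_half)
    return 0.5 * M * w**2 * (l - (2*m + 1) * c)**2

# direct integration of M x'' = -M w^2 x - sign(v) mu_k M g over 2.5 half cycles
dt, t, x, v = 1e-5, 0.0, l, 0.0
while t < 2.5 * P_half:
    fric = -math.copysign(mu_k * g, v) if v != 0.0 else mu_k * g
    v += (-w**2 * x + fric) * dt
    x += v * dt
    t += dt
assert abs(x - x_closed(t)) < 1e-2            # closed form matches integration
assert E_piecewise(0.1) == E_piecewise(0.5)   # constant within a half cycle
assert E_piecewise(0.1) > E_piecewise(P_half + 0.1)  # jumps down at turning point
```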
In fact, the total mechanical energy of the system is $`T(t)=\frac{1}{2}M\dot{x}^2(t)+\frac{1}{2}M\omega ^2x^2(t)`$, namely, $`T(t)`$ $`=`$ $`{\displaystyle \frac{1}{2}}M\omega ^2\left\{\ell -\left(2\left[t/P_{1/2}\right]+1\right)\mu _kg/\omega ^2\right\}^2+{\displaystyle \frac{1}{2}}M{\displaystyle \frac{\mu _k^2g^2}{\omega ^2}}`$ (21) $`+`$ $`(-1)^{\left[t/P_{1/2}\right]}\mu _kMg\left\{\ell -\left(2\left[t/P_{1/2}\right]+1\right)\mu _kg/\omega ^2\right\}\mathrm{cos}\omega t,`$ (22) which is a monotonically decreasing function of $`t`$, as expected. (Notice that whenever the cosine changes its sign, so does its amplitude.) Of course, if we add to $`T(t)`$ the work done against the friction force, we obtain a constant value. The fact that the total mechanical energy is monotonically decreasing is important: the system loses energy constantly. We have previously noted that the position and the velocity of the block during each half cycle are influenced by the presence of friction only through the initial conditions for that half cycle, but otherwise the motion is simply oscillatory. Despite this fact, the loss of energy occurs throughout the motion, as is evident from Eq. (22), as should be expected. In order to gain some more insight into the meaning of the piecewise-conserved $`\mathcal{E}`$, let us find the time average of $`T(t)`$ between two successive turning points. 
Clearly, the average of the cosine vanishes, and we find $`<T(t)>`$ $`=`$ $`{\displaystyle \frac{1}{2}}M\omega ^2\left\{\ell -\left(2\left[t/P_{1/2}\right]+1\right)\mu _kg/\omega ^2\right\}^2+{\displaystyle \frac{1}{2}}M{\displaystyle \frac{\mu _k^2g^2}{\omega ^2}}`$ (23) $`=`$ $`\mathcal{E}(t)+{\displaystyle \frac{1}{2}}M{\displaystyle \frac{\mu _k^2g^2}{\omega ^2}}.`$ (24) Therefore, the physical meaning of $`\mathcal{E}`$ is the following: up to a global additive constant (namely, a constant throughout the motion), $`\mathcal{E}`$ is equal to the time average of the total mechanical energy of the system $`T(t)`$ between any two consecutive turning points. Because of the dissipation, this time average decreases from one phase of the motion to the next, and therefore $`\mathcal{E}`$ is only piecewise conserved. We next present our results graphically for two sets of parameters. First, we choose the parameters $`\ell =1\mathrm{m}`$, $`\omega =5\mathrm{sec}^{-1}`$, $`M=1\mathrm{Kg}`$, $`g=9.8\mathrm{m}/\mathrm{sec}^2`$, $`\mu _s=0.54`$, and $`\mu _k=0.36`$. (These values of the coefficients of friction are typical for copper on steel.) In all the figures below the units of all axes are SI units. Figure 1 displays the position $`x(t)`$ and the velocity $`\dot{x}(t)`$ vs. the time $`t`$. It is clear that the amplitude of the oscillation attenuates, and eventually the block stops in a state of rest. Figure 2 displays the piecewise-conserved $`\mathcal{E}`$ and the mechanical energy $`T`$ as functions of the time $`t`$. Indeed, the energy $`T(t)`$ is a monotonically-decreasing function of $`t`$, whereas $`\mathcal{E}(t)`$ is piecewise-conserved. One can also observe that, up to a constant, $`\mathcal{E}`$ is indeed the average of the energy $`T`$ over one half-cycle of the motion. The dissipation of energy is most clearly portrayed by means of the phase space. Figure 3 shows the orbit of the system in phase space, namely, the momentum $`p=M\dot{x}`$ vs. the position $`x`$. 
The loss of energy is evident from the inspiral of the orbit. Eventually, the orbit arrives at a final position in phase space, and stays there forever. For figures 4, 5, and 6 we changed only the coefficients of friction, to $`\mu _s=0.15`$ and $`\mu _k=0.06`$. (These parameters are typical for the contact of two lubricated metal surfaces.) As the coefficients of friction in this case are smaller than their counterparts in the former case, we can observe many more cycles of motion before the motion stops. (In fact, the number of half-cycles in this case agrees with Eq. (25) below.) We note that because of the scale of Fig. 5 it is not apparent that the energy arrives at a non-zero constant value at late times. In this case, also, the qualitative characteristics of the motion are the same as in the former case (Figs. 1-3), but here the attenuated oscillatory motion is more apparent. Of course, the motion will not continue forever: because of the decrease in the amplitude of the motion, eventually the static friction force at some turning point will be larger than the elastic force exerted on the block by the spring. Namely, at $`t=nP_{1/2}`$ we find $`x(t=nP_{1/2})=(-1)^n(\ell -2n\mu _kg/\omega ^2)`$, for some integer $`n`$, and the motion will stop for $`\mu _sg\geq \omega ^2(\ell -2n\mu _kg/\omega ^2)`$, that is, after an integral number of phases equal to the least integer $`n`$ which satisfies $$n\geq \frac{1}{2}\left(\frac{\omega ^2\ell }{\mu _kg}-\frac{\mu _s}{\mu _k}\right).$$ (25) We note that for the special case where $`\omega ^2\ell /(2\mu _kg)`$ is integral the block may stop at $`x=0`$. This happens, however, only for special values of the parameters of the system, and in general the system will come to rest at $`x\neq 0`$. Then, the block would remain at rest, and $`\mathcal{E}`$ would be a constant of motion from then on. 
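Eq. (25) is easy to evaluate for the two parameter sets used in the figures; a small sketch (math.ceil supplies the least integer not smaller than the bound, which also covers the borderline case where the bound is itself an integer):

```python
import math

def half_cycles_to_stop(w, l, mu_s, mu_k, g=9.8):
    """Least integer n with n >= (w^2 l / (mu_k g) - mu_s / mu_k) / 2, Eq. (25)."""
    return math.ceil(0.5 * (w**2 * l / (mu_k * g) - mu_s / mu_k))

# the two parameter sets of the figures (omega = 5 s^-1, l = 1 m)
print(half_cycles_to_stop(5.0, 1.0, 0.54, 0.36))  # copper on steel: 3
print(half_cycles_to_stop(5.0, 1.0, 0.15, 0.06))  # lubricated surfaces: 21
```

The second count, 21 half cycles, is consistent with the much longer oscillatory transient seen in the low-friction figures.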
Namely, because of the dissipative nature of the problem, the piecewise-conserved constant of motion eventually becomes a true constant of motion, but this happens only when the dynamics of the system becomes trivial (in our case, when the system is in a constant state of rest). This feature of the dissipative system is in contrast with other piecewise-conserved constants of motion, which arise in non-dissipative systems, such as the truncated Kepler problem or the three-dimensional isotropic harmonic oscillator, where the piecewise-conserved vector remains piecewise conserved for all times.
# Further evidence of the absence of Replica Symmetry Breaking in Random Bond Potts Models ## 1 Perturbative CFT results We shall not repeat here the renormalization group computations of higher moments, which can be found, although not in detail, in the references. We rather give a short overview, stating only the relevant results. The partition function of the nearly-critical $`q`$-state random bond Potts model is well known to be of the form $$Z(\beta )=\text{Tr }\mathrm{exp}\{-H_0-H_1\},$$ (1) where $`H_0`$ is the Hamiltonian of the conformal field theory corresponding to the $`q`$-state Potts model with coupling constant $`J_0`$, the same for each bond. The Hamiltonian $`H_1`$, being the deviation from the critical point induced by disorder, is of the form $$H_1=\int d^2x\tau (x)ϵ(x),$$ (2) where $`\tau (x)\equiv \beta J(x)-\beta _cJ_0`$ is the random temperature parameter. The theory is defined on the whole plane. We shall assume, for simplicity, that $`\tau (x)`$ has a gaussian distribution for each $`x`$, with $`\overline{\tau (x)}`$ $`=`$ $`\tau _0={\displaystyle \frac{\beta -\beta _c}{\beta _c}}`$ (3) $`\overline{(\tau (x)-\tau _0)(\tau (x^{\prime })-\tau _0)}`$ $`=`$ $`g_0\delta ^{(2)}(x-x^{\prime }).`$ (4) The usual way of averaging over disorder is to introduce replicas, that is, $`n`$ identical copies of the same model, for which: $$(Z(\beta ))^n=\text{Tr }\mathrm{exp}\left\{-\underset{a=1}{\overset{n}{\sum }}H_0^{(a)}-\int d^2x\tau (x)\underset{a=1}{\overset{n}{\sum }}\epsilon _a(x)\right\}.$$ (5) Taking the average over disorder by performing the gaussian integration, one gets $$\overline{(Z(\beta ))^n}=\text{Tr }\mathrm{exp}\left\{-\underset{a=1}{\overset{n}{\sum }}H_0^{(a)}-\tau _0\int d^2x\underset{a=1}{\overset{n}{\sum }}\epsilon _a(x)+g_0\int d^2x\underset{a\neq b}{\overset{n}{\sum }}\epsilon _a(x)\epsilon _b(x)\right\}.$$ (6) This is a field theory of $`n`$ coupled models, with coupling action given by $$H_{\text{int}}=-g_0\int d^2x\underset{a\neq b}{\overset{n}{\sum }}\epsilon _a(x)\epsilon _b(x).$$ (7) Only non-diagonal terms are kept, since 
diagonal ones can be included in the Hamiltonian $`H_0`$. Moreover, they can be shown to give irrelevant contributions, since their OPE consists of the identity plus terms that are irrelevant at the pure fixed point. We now turn our attention to the $`p`$-th moment of the spin-spin correlation function $`\overline{\langle \sigma (0)\sigma (R)\rangle ^p}`$. In terms of replicas, it can be written as $$\overline{\langle \sigma (0)\sigma (R)\rangle ^p}=\underset{n\to 0}{lim}\frac{(n-p)!}{n!}\left\langle \underset{a_1\neq \mathrm{}\neq a_p}{\overset{n}{\sum }}\sigma _{a_1}(0)\mathrm{}\sigma _{a_p}(0)\underset{b_1\neq \mathrm{}\neq b_p}{\overset{n}{\sum }}\sigma _{b_1}(R)\mathrm{}\sigma _{b_p}(R)\right\rangle $$ (8) Thus, the operator to be renormalized is $`𝒪_p(x)`$ $`\equiv `$ $`\sigma _{a_1}(x)\sigma _{a_2}(x)\mathrm{}\sigma _{a_p}(x)`$ (10) $`a_1\neq a_2\neq \mathrm{}\neq a_p,\mathrm{\hspace{0.17em}\hspace{0.17em}}1\leq a_i\leq n,`$ perturbed by the interaction term $`\stackrel{~}{𝒪}_p(x)`$ $`\equiv `$ $`𝒪_p\mathrm{exp}\{-H_{\text{int}}\}`$ (11) $`=`$ $`𝒪_p\left(1-H_{\text{int}}+{\displaystyle \frac{1}{2}}(H_{\text{int}})^2-\mathrm{}\right).`$ (12) Renormalization group computations lead to the identification of a non-trivial fixed point, at which we are able to compute the correlation functions. Using scaling laws, we get $`\overline{\langle \sigma (0)\sigma (R)\rangle ^p}`$ $`\propto `$ $`\underset{n\to 0}{lim}{\displaystyle \frac{(n-p)!}{n!}}{\displaystyle \underset{a_1\neq \mathrm{}\neq a_p}{\overset{n}{\sum }}}(Z(\xi _R))^2{\displaystyle \frac{1}{R^{2p\mathrm{\Delta }_\sigma }}}`$ (13) $`\propto `$ $`{\displaystyle \frac{(Z(\xi _R))^2}{R^{2p\mathrm{\Delta }_\sigma }}}.`$ The final result is obtained by using the fixed point value $`Z(\xi _R)\propto e^{\gamma ^{\prime }\xi _R}=R^{\gamma ^{\prime }}`$. The RG study introduces a parameter $`ϵ`$, which can be seen as proportional to the deviation of the central charge of the pure model from the Ising value of $`1/2`$. For the $`3`$-state Potts model, $`ϵ=2/15`$. 
For generic $`ϵ`$, one gets (in the cited reference, $`\alpha `$ should be replaced by $`-\alpha `$): $$\overline{\langle \sigma (0)\sigma (R)\rangle ^p}\propto \frac{1}{R^{2p\mathrm{\Delta }_{\sigma ^p}^{\prime }}},$$ (14) with $$\mathrm{\Delta }_{\sigma ^p}^{\prime }=\mathrm{\Delta }_\sigma -\gamma ^{\prime }(p),$$ (15) $$\gamma ^{\prime }(p)=\frac{9}{32}(p-1)\left(\frac{2}{3}ϵ+\left(\frac{11}{12}-\frac{2K}{3}+\frac{\alpha }{24}(p-2)\right)ϵ^2\right)+𝒪(ϵ^3),$$ (16) and $$K=6\mathrm{log}2,\alpha =33-\frac{29\sqrt{3}\pi }{3}.$$ (17) Thus, perturbed conformal field theory predicts, for the $`3`$-state Potts model, the following values for the second and third moments: $$2\mathrm{\Delta }_{\sigma ^2}^{\prime }=\frac{4}{15}-0.0314=0.235$$ (18) $$2\mathrm{\Delta }_{\sigma ^3}^{\prime }=\frac{4}{15}-0.0466=0.220$$ (19) ## 2 Monte Carlo Simulations To search for signs of RSB, and, in the absence of it, to confirm the RS values, we performed Monte Carlo simulations of the random bond $`q`$-state Potts model for $`q=3,4,8`$. The method used follows the one described in the references. To study scaling effects on the correlation functions, we studied square lattices of side $`L`$ ranging from $`10`$ to $`500`$. Since we wanted to exhibit a possible breaking of the replica symmetry, the algorithm had to be chosen in such a way that it does not assume the symmetry a priori. For this reason, we simulated three configurations of the $`q`$-state Potts model with the same disorder, but different initial conditions and independent thermalizations. 
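The numerical values in Eqs. (18) and (19) can be reproduced by evaluating the expansion (16) directly; a minimal sketch (written for this overview, with the signs of the second-order terms taken as quoted above):

```python
import math

eps = 2.0 / 15.0                                   # epsilon for the 3-state model
K = 6.0 * math.log(2.0)
alpha = 33.0 - 29.0 * math.sqrt(3.0) * math.pi / 3.0

def gamma_prime(p):
    """Second-order epsilon expansion of gamma'(p), Eq. (16)."""
    return (9.0 / 32.0) * (p - 1) * (
        (2.0 / 3.0) * eps
        + (11.0 / 12.0 - 2.0 * K / 3.0 + (alpha / 24.0) * (p - 2)) * eps**2)

# deviations from the pure-model value 2*Delta_sigma = 4/15, Eqs. (18)-(19)
assert abs(2 * gamma_prime(2) - 0.0314) < 2e-4
assert abs(2 * gamma_prime(3) - 0.0466) < 2e-4
assert abs(4.0 / 15.0 - 2 * gamma_prime(2) - 0.235) < 1e-3
assert abs(4.0 / 15.0 - 2 * gamma_prime(3) - 0.220) < 1e-3
```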
We computed the products of magnetizations $$Q_3=\frac{1}{L^2}\underset{i=1,L^2}{\sum }\sigma _i^a\sigma _i^b\sigma _i^c,$$ (20) and $$Q_2=\frac{1}{L^2}\underset{i=1,L^2}{\sum }\sigma _i^a\sigma _i^b,$$ (21) with $`\sigma _i^a`$ being the thermal average of the local magnetization, $$\sigma _i^a\equiv \langle \stackrel{}{\sigma }_i^a\rangle \cdot \stackrel{}{m}^a,$$ (22) where $`\stackrel{}{m}^a`$ is the mean magnetization of lattice $`a`$; $$\stackrel{}{m}^a=\frac{1}{L^2}\underset{i=1,L^2}{\sum }\stackrel{}{\sigma }_i^a.$$ (23) It is rather obvious, since all lattices are thermalized independently, that $`Q_3`$ and $`Q_2`$ are indeed the same as $$Q_3=\frac{1}{L^2}\underset{i=1,L^2}{\sum }\langle \sigma _i^a\sigma _i^b\sigma _i^c\rangle $$ (24) $$Q_2=\frac{1}{L^2}\underset{i=1,L^2}{\sum }\langle \sigma _i^a\sigma _i^b\rangle .$$ (25) Measurements were performed on square lattices with toroidal boundary conditions. The Hamiltonian of the simulated model is $$H=-\underset{\{i,j\}}{\sum }J_{ij}\left(\delta _{\sigma _i^a,\sigma _j^a}+\delta _{\sigma _i^b,\sigma _j^b}+\delta _{\sigma _i^c,\sigma _j^c}\right),$$ (26) where we took the coupling between nearest neighbours to be $$J_{ij}=J_0\text{ or }J_1$$ (27) with equal probabilities. This makes it possible to make the system self-dual by tuning the temperature so that the relation $$\frac{1-e^{-\beta J_0}}{1+(q-1)e^{-\beta J_0}}=e^{-\beta J_1}$$ (28) is obeyed. We chose $`J_0/J_1=10`$ for the simulations with $`q=3,4`$, which is strong enough to avoid cross-over effects. For the $`q=8`$ model, we rather chose $`J_0/J_1=8.5`$, again because this seems the appropriate value to avoid cross-over and to minimize the spread of our data set. Autocorrelation times were coarsely evaluated and the statistics adjusted in such a way that thermal fluctuations can be ignored (typically, for a single disorder configuration, the thermalization period was at least 70 autocorrelation times long and at least 200 measurements were taken, one every autocorrelation time). 
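The self-dual inverse temperature defined implicitly by Eq. (28) is easy to obtain numerically for given $`q`$, $`J_0`$ and $`J_1`$; a sketch by bisection (the bracketing interval and tolerances are assumptions chosen for couplings of this order):

```python
import math

def selfdual_beta(q, J0, J1, lo=1e-6, hi=50.0):
    """Bisect for beta solving (1 - e^{-beta J0}) / (1 + (q-1) e^{-beta J0})
    = e^{-beta J1}, the self-duality condition of Eq. (28)."""
    def f(b):
        e0 = math.exp(-b * J0)
        return (1.0 - e0) / (1.0 + (q - 1) * e0) - math.exp(-b * J1)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:      # root bracketed in the lower half
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

beta = selfdual_beta(q=3, J0=10.0, J1=1.0)   # the J0/J1 = 10 case used here
# consistency with the equivalent duality form (e^{b J0} - 1)(e^{b J1} - 1) = q
assert abs((math.exp(beta * 10.0) - 1.0) * (math.exp(beta * 1.0) - 1.0) - 3.0) < 1e-9
```

For $`q=2`$ and $`J_0=J_1=J`$ the routine reproduces the exact Ising critical point $`\beta J=\mathrm{log}(1+\sqrt{2})`$.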
To average over disorder, we made measurements for 20 000 disorder configurations (10 000 for $`q=8`$). Doing so, one can extract critical exponents straightforwardly: $$\overline{Q_p}=KL^{-p\mathrm{\Delta }_{\sigma ^p}^{\prime }},$$ (29) where $`K`$ is a non-universal constant. The exponent can then be obtained by taking logarithms. The results of our simulations, shown in Figures 1, 2 and 3, clearly support the RS scenario. In these figures, we present log-log plots of $`\overline{Q_p}^{2/p}`$ versus $`L`$ ($`p=2,3`$), for the three-, four- and eight-state Potts models. By taking the slopes of these graphs, we can extract $`2\mathrm{\Delta }_{\sigma ^p}^{\prime }(q)`$ (which is minus the slope). None of the models presented shows significant deviations from scaling, which would arise if the replica symmetry were broken. For the 3-state Potts model, the critical exponents associated with the scaling behaviour are in fair agreement with the values predicted by perturbative CFT computations. The deviations from the pure model behaviour are: $`2\gamma ^{\prime }(2)`$ $`=`$ $`0.0387\text{ (Monte Carlo)}`$ (30) $`=`$ $`0.0314\text{ (CFT prediction)}`$ (31) $`2\gamma ^{\prime }(3)`$ $`=`$ $`0.0648\text{ (Monte Carlo)}`$ (32) $`=`$ $`0.0466\text{ (CFT prediction)}`$ (33) The numerical agreement is indeed quite surprising, especially for the third moment, where the perturbative expansion is near the end of its validity region . Olson and Young also computed moments of spin-spin correlation functions, but from a different perspective and with a different method. Our values for the exponents, presented in Table I, are in fair agreement with theirs, although they seem to be systematically lower. Using other methods, Palagyi, Chatelain et al. obtain values that confirm this discrepancy; our values are equal to theirs, within statistical errors. ## 3 Conclusion We believe that the presented evidence is enough to rule out the RSB scenario in random bond Potts models.
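The slope extraction described above amounts to a straight-line fit in log-log coordinates. The sketch below applies the procedure to synthetic data generated from Eq. (29); the amplitude $`K`$ is an arbitrary illustrative choice, and the input exponent is the quoted value $`2\mathrm{\Delta }_{\sigma ^2}^{\prime }=0.235`$ (this is not our actual Monte Carlo data).

```python
import math

def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys versus xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# synthetic disorder-averaged moments obeying Q_p(L) = K * L^(-p * delta), Eq. (29)
p, K, delta = 2, 0.7, 0.1175              # 2*delta = 0.235, the quoted q=3 value
sizes = [10, 20, 50, 100, 200, 500]
Qp = [K * L ** (-p * delta) for L in sizes]

# log-log plot of Qp^(2/p) versus L: minus the slope gives 2*Delta'
logL = [math.log(L) for L in sizes]
logQ = [math.log(v ** (2.0 / p)) for v in Qp]
two_delta = -fit_slope(logL, logQ)
```

On real data the same fit also yields a statistical error bar from the residuals, which is how the comparison with the CFT predictions above is made.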
If this symmetry were broken following Parisi’s scheme, deviations from the observed scaling laws would be, for the second moment, of the order of 10%, and would thus be easily observed. One can convince oneself that the deviation should become more apparent for the third moment, something which is clearly not observed. It will be interesting to see how more precise numerical methods, such as transfer matrix iterations , could give accurate values for the moments via cumulant expansions (for integer and non-integer values of $`q`$). We would like to thank Vl.S. Dotsenko, M. Picco, J.L. Jacobsen, C. Chatelain and B. Berche for helpful comments and suggestions. We also acknowledge financial support from the NSERC Canada Scholarship Program.
# Coherent States for the Deformed Algebras ## Abstract We provide a unified approach for finding the coherent states of various deformed algebras, including quadratic, Higgs and q-deformed algebras, which are relevant for many physical problems. For the non-compact cases, coherent states, which are the eigenstates of the respective annihilation operators, are constructed by finding the canonical conjugates of these operators. We give a general procedure to map these deformed algebras to appropriate Lie algebras. Generalized coherent states, in the Perelomov sense, follow from this construction. Recently, deformed Lie algebras have attracted considerable attention in the context of various physical and mathematical problems. The quadratic algebra was discovered by Sklyanin in the context of statistical physics and field theory. The well-known Higgs algebra, a cubic algebra, became manifest in the study of the dynamical symmetries of the quantum oscillator and the Coulomb problem in a space of constant curvature. Other examples of deformation of Lie algebras appear in the description of the degeneracy structures and as dynamical symmetries of many conventional quantum mechanical problems, like singular oscillators and the Hartmann potential. They have also appeared in interacting models of Calogero-Sutherland type. The presence of ambiguities in the definition of the generators of the Lie algebras responsible for the degeneracy in these problems has led many authors to the deformed Lie algebras. The celebrated quantum groups are another example of deformation, originating from the physical problems of spin chains and lower-dimensional integrable models. In this communication, we present a unified approach for finding the coherent states (CS) of these deformed algebras. Coherent states, needless to say, occupy a very special place in physics, having relevance to many problems of physical interest.
Hence, apart from its intrinsic interest, the method of construction presented here will greatly facilitate the physical applications of these algebras. For the non-compact cases, the construction of the CS, which are the eigenstates of the lowering operators, takes place in two steps. First, we find the canonical conjugates of these operators. The CS corresponding to the deformed algebras are then obtained by the action of the exponential of the respective conjugate operators on the vacuum; this is in complete parallel to the harmonic oscillator case. Another CS, which is dual to the first one in a sense to be made precise in the text, naturally follows from the above construction. We also provide a mapping between the deformed algebras and their undeformed counterparts. This connection is then utilized to find the CS in the Perelomov sense. Apart from obtaining the known CS of the $`SU(1,1)`$ algebra, we construct the CS for the quadratic, cubic and the quantum group cases. Although our method is general, we will confine ourselves here to finding the CS of the deformed $`SU(1,1)`$ and $`SU(2)`$ algebras. The CS in our construction will be characterized by the eigenvalues of the Casimir operator. This operator for the deformed algebra, written in the Cartan-Weyl basis, $$[H,E_\pm ]=\pm E_\pm ,[E_+,E_{-}]=f(H),$$ (1) can be written in the form , $`C`$ $`=`$ $`E_{-}E_++g(H)`$ (2) $`=`$ $`E_+E_{-}+g(H-1).`$ (3) Here, $`f(H)=g(H)-g(H-1)`$; $`g(H)`$ can be determined up to the addition of a constant. The eigenstates are characterized by the values of the Casimir operator and the Cartan subalgebra $`H`$. We make use of these relations to construct the canonical conjugate of the lowering operator of the $`SU(1,1)`$ algebra and then write down the corresponding coherent states. This is done for the purpose of comparing with the known results in the literature and to illustrate our method. This approach is then extended to the deformed algebras in a straightforward way.
In what follows, the relationships derived between various operators are valid only on suitable Hilbert spaces. For the $`SU(1,1)`$ algebra, $$[K_+,K_{-}]=-2K_0,[K_0,K_\pm ]=\pm K_\pm ,$$ (4) one finds $`f(K_0)=-2K_0`$ and $`g(K_0)=-K_0(K_0+1)`$. The quadratic Casimir operator is given by $`C=K_{-}K_++g(K_0)=K_{-}K_+-K_0(K_0+1)`$. $`\stackrel{~}{K_+}`$, the canonical conjugate of $`K_{-}`$, satisfying $$[K_{-},\stackrel{~}{K_+}]=1,$$ (5) can be written in the form $$\stackrel{~}{K_+}=K_+F(C,K_0).$$ (6) Eq. (5) then yields $$F(C,K_0)K_{-}K_+-F(C,K_0-1)K_+K_{-}=1;$$ (7) making use of the Casimir operator relation given earlier, one can solve for $`F(C,K_0)`$ in the form $$F(C,K_0)=\frac{K_0+\alpha }{C+K_0(K_0+1)}.$$ (8) The constant, arbitrary, parameter $`\alpha `$ in $`F`$ can be determined by demanding that Eq. (5) is valid in the entire Hilbert space. For the purpose of clarity, we illustrate this point with the one-oscillator realization of the $`SU(1,1)`$ generators. The ground states, defined by $`K_{-}|\psi \rangle =\frac{1}{2}a^2|\psi \rangle =0`$, are $`|0\rangle `$ and $`|1\rangle =a^{\dagger }|0\rangle `$, in terms of the oscillator Fock space. Making use of the results $$K_0|0\rangle =\frac{1}{4}(2a^{\dagger }a+1)|0\rangle =\frac{1}{4}|0\rangle ,$$ (9) and $$C|0\rangle =\frac{3}{16}|0\rangle ,$$ (10) we find that $`[K_{-},\stackrel{~}{K_+}]|0\rangle =K_{-}\stackrel{~}{K_+}|0\rangle `$ yields $`\alpha =\frac{3}{4}`$. Similarly, for the other case, $$[K_{-},\stackrel{~}{K_+}]|1\rangle =|1\rangle ,$$ (11) leads to $`\alpha =\frac{1}{4}`$. Hence, there are two disjoint sectors characterized by the $`\alpha `$ values $`\frac{3}{4}`$ and $`\frac{1}{4}`$, respectively.
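The two sector values of $`\alpha `$ can be verified numerically in a truncated oscillator Fock space. The sketch below is illustrative: the truncation dimension and the guard for the truncated top states are our own choices, not part of the construction.

```python
import numpy as np

N = 12                                    # Fock-space truncation (our choice)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator, a|n> = sqrt(n)|n-1>
ad = a.T                                  # creation operator a-dagger
Km = a @ a / 2.0                          # K_- = a^2 / 2
Kp = ad @ ad / 2.0                        # K_+ = (a-dagger)^2 / 2
K0 = (2.0 * ad @ a + np.eye(N)) / 4.0     # K_0 = (2 a-dagger a + 1) / 4

def Kp_tilde(alpha):
    """Canonical conjugate K~_+ = K_+ (K_0 + alpha) / (K_- K_+), Eqs. (6) and (8)."""
    kmkp = np.diag(Km @ Kp).copy()        # diagonal entries (n+1)(n+2)/4
    kmkp[kmkp == 0] = 1.0                 # truncation artifacts at the top, never reached here
    F = np.diag((np.diag(K0) + alpha) / kmkp)
    return Kp @ F

e0 = np.zeros(N); e0[0] = 1.0             # the ground state |0>
e1 = np.zeros(N); e1[1] = 1.0             # the ground state |1>
```

Acting with $`K_{-}\stackrel{~}{K_+}`$ on $`|0\rangle `$ with $`\alpha =3/4`$ (and on $`|1\rangle `$ with $`\alpha =1/4`$) returns the state itself, reproducing the two disjoint sectors.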
These results match identically with the earlier known ones , once we rewrite $`F`$ as $$F(C,K_0)=\frac{K_0+\alpha }{C+K_0(K_0+1)}$$ (12) $$=\frac{K_0+\alpha }{K_{-}K_+}.$$ (13) The unnormalized coherent state $`|\beta \rangle `$, which is the annihilation operator eigenstate, i.e., $`K_{-}|\beta \rangle =\beta |\beta \rangle `$, is given in the vacuum sector by $$|\beta \rangle =e^{\beta \stackrel{~}{K_+}}|0\rangle .$$ (14) An analogous construction holds in the other sector, where $`\alpha =\frac{1}{4}`$. These states provide a realization of the Cat states and play a prominent role in quantum measurement theory. As has been noticed earlier, $`[K_{-},\stackrel{~}{K_+}]=1`$ also yields $$[\stackrel{~}{K_+}^{\dagger },K_+]=1.$$ (15) From this, one can find the eigenstate of the $`\stackrel{~}{K_+}^{\dagger }`$ operator in the form $$|\gamma \rangle =e^{\gamma K_+}|0\rangle .$$ (16) This CS, after proper normalization, is the well-known Yuen state. Our construction can be easily generalized to various other realizations of the $`SU(1,1)`$ algebra. We now extend the above procedure to the quadratic algebra. As has been mentioned earlier, this algebra has relevance to statistical physics and field theory; a simpler version appears in quantum mechanical problems: $$[N_0,N_\pm ]=\pm N_\pm ,[N_+,N_{-}]=2N_0+aN_0^2.$$ (17) In this case, $`f_1(N_0)=2N_0+aN_0^2=g_1(N_0)-g_1(N_0-1)`$, where $$g_1(N_0)=N_0(N_0+1)+\frac{a}{3}N_0(N_0+1)(N_0+\frac{1}{2}).$$ (18) Representation theory of the quadratic algebra has been studied in the literature; it shows a rich structure depending on the values of $`a`$. In the non-compact case, i.e., for the values of $`a`$ such that the unitary irreducible representations (UIREP) are either bounded below or above, we can construct the canonical conjugate $`\stackrel{~}{N_+}`$ of $`N_{-}`$ such that $`[N_{-},\stackrel{~}{N_+}]=1`$.
It is given by $`\stackrel{~}{N_+}=N_+F_1(C,N_0)`$, with $$F_1(C,N_0)=\frac{N_0+\delta }{C-N_0(N_0+1)-\frac{a}{3}N_0(N_0+1)(N_0+\frac{1}{2})}.$$ (19) As can be easily seen, in the case of the finite dimensional UIREP, $`\stackrel{~}{N_+}`$ is not well defined, since $`F_1(C,N_0)`$ diverges on the highest state. As mentioned earlier, the value of $`\delta `$ can be fixed by demanding that the relation $`[N_{-},\stackrel{~}{N_+}]=1`$ holds in the vacuum sector $`|v\rangle _i`$, where the $`|v\rangle _i`$ are annihilated by $`N_{-}`$. This gives $`N_{-}\stackrel{~}{N_+}|v\rangle _i=|v\rangle _i`$, which leads to $`(N_0+\delta )|v\rangle _i=|v\rangle _i`$; the value of the Casimir operator, $`C=N_{-}N_++g_1(N_0)`$, can be easily calculated. Hence, the unnormalized coherent state $`|\mu \rangle _i`$, satisfying $`N_{-}|\mu \rangle _i=\mu |\mu \rangle _i`$, is given by $`e^{\mu \stackrel{~}{N_+}}|v\rangle _i`$. The other coherent state, originating from $`[\stackrel{~}{N_+}^{\dagger },N_+]=1`$, is given by $`|\nu \rangle _i=e^{\nu N_+}|v\rangle _i`$. This can be recognized as the (unnormalized) CS in the Perelomov sense. Depending on the UIREP being infinite or finite dimensional, this quadratic algebra can also be mapped into $`SU(1,1)`$ and $`SU(2)`$ algebras, respectively; leaving aside the commutators not affected by this mapping, one gets $$[N_+,\overline{N_{-}}]=-2bN_0;$$ (20) where $`b=1`$ corresponds to the $`SU(1,1)`$ and $`b=-1`$ gives the $`SU(2)`$ algebra. Explicitly, $$\overline{N_{-}}=N_{-}G_1(C,N_0),$$ (21) and $$G_1(C,N_0)=\frac{(N_0^2-N_0)b+ϵ}{C-g_1(N_0-1)},$$ (22) $`ϵ`$ being an arbitrary constant. One can immediately construct CS in the Perelomov sense as $`U|v\rangle _i`$, where $`U=e^{\xi N_+-\xi ^{*}N_{-}}`$. For the compact case, the CS are analogous to the spin and atomic coherent states. We would like to point out that, earlier, the generators of the deformed algebra have been written in terms of the undeformed ones. However, in our approach the undeformed $`SU(1,1)`$ and $`SU(2)`$ generators are constructed from the deformed generators.
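As a consistency check, one can verify symbolically that $`g_1`$ reproduces the quadratic structure function and that the polynomial $`N_0^2-N_0`$ in the numerator of $`G_1`$ telescopes to the pure $`2N_0`$ commutator. The sketch below uses exact rational arithmetic; the sampled values of $`N_0`$ and $`a`$ are arbitrary.

```python
from fractions import Fraction as Fr

def g1(N, a):
    # g_1(N) = N(N+1) + (a/3) N(N+1)(N+1/2), Eq. (18)
    return N*(N + 1) + Fr(1, 3)*a*N*(N + 1)*(N + Fr(1, 2))

def f1(N, a):
    # structure function f_1(N) = g_1(N) - g_1(N-1); should equal 2N + a N^2
    return g1(N, a) - g1(N - 1, a)

def h(N):
    # polynomial appearing in the numerator of G_1, h(N) = N^2 - N;
    # h(N+1) - h(N) = 2N is what yields [N_+, N-bar_-] proportional to N_0
    return N*N - N
```

Since the checked identities are polynomial, agreement on a handful of sample points is already conclusive.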
The cubic algebra, which is also popularly known as the Higgs algebra in the literature, manifested itself in the study of the degeneracy structure of eigenvalue problems in a curved space. The generators satisfy $$[M_0,M_\pm ]=\pm M_\pm ,[M_+,M_{-}]=2cM_0+\mathrm{\hspace{0.17em}4}hM_0^3,$$ (23) where $`f_2(M_0)=2cM_0+\mathrm{\hspace{0.17em}4}hM_0^3=g_2(M_0)-g_2(M_0-1)`$, and $$g_2(M_0)=cM_0(M_0+1)+hM_0^2(M_0+1)^2.$$ (24) Analysis of its representation theory yields a variety of UIREP’s, both finite and infinite dimensional, depending on the values of the parameters $`c`$ and $`h`$ . Physically, $`h`$ represents the curvature of the manifold. In the non-compact case the canonical conjugate is given by $$\stackrel{~}{M_+}=M_+F_2(C,M_0),$$ (25) where $$F_2(C,M_0)=\frac{M_0+\zeta }{C-cM_0(M_0+1)-hM_0^2(M_0+1)^2}.$$ (26) As before, the annihilation operator eigenstate is given by $$|\rho \rangle _i=e^{\rho \stackrel{~}{M_+}}|p\rangle _i,$$ (27) where the $`|p\rangle _i`$ are the states annihilated by $`M_{-}`$. As in the previous cases, the dual algebra yields another coherent state. This algebra can also be mapped into $`SU(1,1)`$ and $`SU(2)`$ algebras, as has been done for the quadratic case: $$[M_+,\overline{M_{-}}]=-2dM_0,$$ (28) where $`d=1`$ and $`d=-1`$ correspond to the $`SU(1,1)`$ and $`SU(2)`$ algebras, respectively. Here, $$\overline{M_{-}}=M_{-}G_2(C,M_0),$$ (29) where $$G_2(C,M_0)=\frac{(M_0^2-M_0)d+\sigma }{C-g_2(M_0-1)},$$ (30) $`\sigma `$ being a constant. The coherent state in the Perelomov sense is then $`U|p\rangle _i`$, where $`U=e^{\varphi M_+-\varphi ^{*}M_{-}}`$. For the sake of completeness, we now extend the above construction to the quantum group case.
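The cubic closure relation can be checked the same way as in the quadratic case (a sketch with arbitrary sample values of $`M_0`$, $`c`$ and $`h`$; again a polynomial identity, so a few points suffice).

```python
from fractions import Fraction as Fr

def g2(M, c, h):
    # g_2(M) = c M(M+1) + h M^2 (M+1)^2, Eq. (24)
    return c*M*(M + 1) + h*M*M*(M + 1)*(M + 1)

def f2(M, c, h):
    # structure function f_2(M) = g_2(M) - g_2(M-1); should equal 2cM + 4hM^3
    return g2(M, c, h) - g2(M - 1, c, h)
```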
The quantum deformed $`SU(2)`$ algebra is given by $$[D_0,D_\pm ]=\pm D_\pm ,[D_+,D_{-}]=\frac{q^{D_0}-q^{-D_0}}{q-q^{-1}},$$ (31) for which $$g_3(D_0)=\frac{q^{D_0+\frac{1}{2}}+q^{-D_0-\frac{1}{2}}}{(q^{\frac{1}{2}}-q^{-\frac{1}{2}})(q-q^{-1})}.$$ (32) The canonical conjugate $`\stackrel{~}{D_+}`$ of $`D_{-}`$, valid for the non-compact case, is $$\stackrel{~}{D_+}=D_+F_3(C,D_0),$$ (33) where $$F_3(C,D_0)=\frac{D_0+\eta }{C-g_3(D_0)}.$$ (34) One can then easily construct the coherent state, as in the previous examples. This algebra can also be mapped into $`SU(1,1)`$ and $`SU(2)`$ algebras: $$[D_+,\overline{D_{-}}]=-2fD_0,$$ (35) where $`f=1`$ and $`f=-1`$ give the $`SU(1,1)`$ and $`SU(2)`$ algebras, respectively (the other relations of the algebra are not affected by this mapping). Explicitly, $$\overline{D_{-}}=D_{-}G_3(C,D_0),$$ (36) where $$G_3(C,D_0)=\frac{(D_0^2-D_0)f+\omega }{C-g_3(D_0-1)}.$$ (37) The coherent state in the Perelomov sense then follows naturally. To conclude, we have found a general method for constructing the coherent states of various deformed algebras. Since the method is algebraic and relies on the group structure of Lie algebras, the precise nature of the non-classical behaviour of these CS can be easily inferred from our construction. It will be of particular interest to see the role of the deformation parameters in this behaviour. Since many of these algebras are related to quantum mechanical problems with non-quadratic, non-linear Hamiltonians, quantum optical problems involving three- and four-photon processes , spin chains and various other physical problems, a detailed study of the properties of the CS associated with these non-linear and deformed algebras is of physical relevance. The authors take pleasure in thanking Prof. S. Chaturvedi for stimulating conversations. VSK acknowledges useful discussions with Mr. N. Gurappa.
# Density-functional theory study of the catalytic oxidation of CO over transition metal surfaces ## I Introduction Carbon monoxide oxidation is one of the most extensively studied heterogeneous catalytic reactions. This is due to both its technological importance (e.g., in car exhaust catalytic converters where the active components are transition metals such as Pt, Pd, and Rh) and its “simplicity” . A microscopic understanding, however, of this most fundamental catalytic reaction is still lacking. We note, however, that steps in this direction have recently been made via first-principles calculations . Experimentally, it is difficult to probe the state of the reactants during reaction – traditionally, information is only available before or after the event. Advances in surface science techniques, however, afford new information concerning the behavior of the reactants during the reaction process; examples include time-resolved scanning tunneling microscopy (STM) , time-resolved electron energy-loss spectroscopy (TREELS) , time-resolved infrared spectroscopy (TRIS) , and (“fast” high resolution) x-ray photoelectron spectroscopy (XPS) . On a larger scale are low energy electron microscopy (LEEM) and photoemission electron microscopy (PEEM) . Theoretically, a significant hindrance has been computational limitations; in addition, for a realistic description of certain reactions, new theoretical developments had to be awaited. For example, the generalized gradient approximation (GGA) for the exchange-correlation functional has been shown to be crucial for obtaining accurate activation barriers for hydrogen dissociation at metal and semiconductor surfaces . Past studies performed using surface science techniques, i.e., on well characterized single-crystal metal surfaces under ultra high vacuum (UHV) conditions, have shown that CO oxidation proceeds via the Langmuir-Hinshelwood (L-H) mechanism in which reaction takes place between chemisorbed reagents .
Recent high gas-pressure catalytic reactor experiments, which afford the study of chemical reactions under “realistic” high pressure and temperature conditions, support the assignment of the L-H mechanism for Pt, Pd, Rh, and Ir . For Ru, however, somewhat anomalous behavior was found, which indicated that reaction via scattering of CO molecules with adsorbed O atoms may be taking place, i.e. via an Eley-Rideal-type mechanism . In particular, with pressures of about 10 torr and for oxidizing conditions (i.e., at CO/O<sub>2</sub> pressure ratios $`<1`$) the rate of CO<sub>2</sub> production was found to be significantly higher than at the other transition metal surfaces ; in contrast, under UHV conditions, the rate is extremely low over Ru (0001) . Unlike the other transition metals, almost no chemisorbed CO could be detected either during or after the reaction, and the kinetic data (activation energy and pressure dependencies) were found to be markedly different from those of the other metals; in particular, highest rates occurred for high concentrations of oxygen at the surface, whereas for the other metals, highest rates occurred for low O coverages. Our studies show that Ru does behave differently to the other transition metal catalysts in that high coverages of O can be supported on the surface (up to desorption temperatures), where the O-metal bond is significantly weaker than in the lower coverage phases. Investigation of the energetics for CO<sub>2</sub> formation indicates that a Langmuir-Hinshelwood mechanism, rather than an Eley-Rideal process, is dominant. ## II Calculation Method In order to gain understanding of the apparently different behavior of Ru for the CO oxidation reaction, and to obtain a microscopic picture of this basic surface-catalyzed reaction in general, we carried out density-functional theory (DFT) calculations .
We use the ab initio pseudopotential plane wave method and the supercell approach, where we employ the GGA for the exchange-correlation interaction. We use ab initio, fully separable, norm-conserving GGA pseudopotentials , where for the Ru atoms, relativistic effects are taken into account using weighted spin-averaged pseudopotentials. The surface is modelled using a $`(2\times 2)`$ surface unit cell with four layers of Ru (0001). An energy cut-off of 40 Ry is taken with three special k-points in the two-dimensional Brillouin zone . The adsorbate structures are created on one side of the slab . We relax the position of the atoms, keeping the Ru atoms in the bottom two layers fixed at their bulk-like positions. ## III Oxygen on Ruthenium Under UHV conditions, at room temperature, dissociative adsorption of O<sub>2</sub> results in an (apparent) saturation coverage of $`\mathrm{\Theta }_\mathrm{O}\approx 1/2`$, corresponding to the formation of a $`(2\times 1)`$-O structure . At $`\mathrm{\Theta }_\mathrm{O}`$=1/4, a $`(2\times 2)`$-O phase forms . Here $`\mathrm{\Theta }_\mathrm{O}=1`$ means that there are as many O atoms as there are Ru atoms in the top layer. In both surface structures, O adsorbs in the hexagonal close-packed (hcp) site. Our earlier DFT-GGA calculations for O on Ru (0001) indicated that even higher coverage phases should form; namely, a ($`1\times 1`$)-O structure with coverage $`\mathrm{\Theta }_\mathrm{O}=1`$, as well as a $`(2\times 2)`$-3O structure with coverage $`\mathrm{\Theta }_\mathrm{O}=3/4`$. As for the lower coverage structures, the O atoms occupy hcp sites. The adsorption energy of O decreases notably with increasing coverage, and for the monolayer coverage phase the adsorption energy is $`\sim `$0.7 eV less than that of the $`(2\times 2)`$-O phase.
Both the $`(2\times 2)`$-3O and $`(1\times 1)`$-O structures have subsequently been created experimentally with the use of NO<sub>2</sub> or high gas pressures of O<sub>2</sub> , and the atomic structure verified by dynamical low-energy electron diffraction (LEED) intensity analyses. NO<sub>2</sub> readily dissociates at elevated temperatures in the presence of adsorbed oxygen, delivering atomic oxygen to the surface while NO desorbs. Furthermore, it has been demonstrated that after completion of the monolayer oxygen structure, additional oxygen can enter the subsurface region at elevated temperatures. Formation of the higher coverage phases ($`\mathrm{\Theta }=`$3/4 and 1.0) from gas-phase O<sub>2</sub> at “usual” exposures under UHV conditions is apparently kinetically hindered by activation barriers for O<sub>2</sub> dissociation, induced by the pre-adsorbed oxygen atoms at coverage $`\mathrm{\Theta }_\mathrm{O}\approx `$0.5. With respect to the high pressure catalytic reactor experiments mentioned above, because the conditions under which the highest rates of CO<sub>2</sub> formation were reported involved high O<sub>2</sub> partial gas pressures and oxidizing conditions, there will be a significant attempt frequency for O<sub>2</sub> to overcome activation barriers for dissociative adsorption. Thus, it is likely that during reaction, the oxygen coverage on the surface approaches one monolayer. We note also that in the catalytic reactor experiments it may be unlikely that there is significant subsurface oxygen present, since the temperature range studied in the experiment, 380 K to 500 K, is less than that of 600 K at which oxygen is reported to enter the subsurface region with an appreciable rate (experiments using NO<sub>2</sub> and Auger electron spectroscopy indicated a coverage of about one monolayer).
## IV Reaction via gas-phase CO with adsorbed O In earlier publications we reported our investigation of reaction via scattering of gas-phase CO with adsorbed O, so here we only briefly describe the results. The $`(1\times 1)`$-O phase is assumed to cover the whole surface, and we investigate the interaction of CO with this oxygen-covered surface. For a given lateral position, CO is placed well above the surface (with the C-end down) and the total energy calculated for decreasing distances of CO from the surface. All atomic positions are relaxed except that of the C atom, which is held fixed, and the bottom two Ru layers. We considered a number of lateral positions: the on-top and fcc sites with respect to the Ru (0001) substrate, a bridge site between two adsorbed O atoms, as well as directly above an adsorbed O(a) atom (see Fig. 1a). We find that an energy barrier begins to build up at about 2.5 Å from the surface (with respect to the average position of the O atoms) for all sites, reflecting a repulsive interaction of CO with the O-covered surface. A very weak physisorption well of about 0.04 eV is also found above the O-adlayer. Its position is $`\sim `$3.0 Å from the surface for the on-top and fcc sites, and at about 3.5 Å for the bridge site and the approach directly over the O atom. Thus, for a full monolayer coverage of oxygen on the surface, CO is unable to form a chemical bond with the substrate, and the L-H reaction mechanism is therefore prevented. In order to obtain a more detailed understanding of CO<sub>2</sub> formation via a scattering reaction, we evaluated an appropriate cut through the high-dimensional potential energy surface (PES) (see Ref. ). This cut is defined by two variables: the vertical position of the C atom and the vertical position of the reacting O(a) adatom directly below. Initially, with the CO axis held perpendicular to the surface, we find that the activation barrier for CO<sub>2</sub> formation is $`\sim `$1.6 eV.
When the tilt angle of the CO axis is allowed to relax, the energy barrier is reduced to about 1.1 eV. The transition state (depicted in the inset of Fig. 2) has a “bond angle” of $`131^{\circ }`$ and the reacting O(a) atom is $`\sim `$0.35 Å above the other O atoms in the surface unit cell. The C-O(a) bond length is 1.50 Å (stretched by 27 % compared to the calculated bond length of a free CO<sub>2</sub> molecule, which is 1.18 Å; the experimental value is 1.16 Å), and the bond length of CO is 1.17 Å (the calculated value for a free CO molecule is 1.15 Å; the experimental value is also 1.15 Å). After onset of reaction, the CO-O(a) bond begins to develop and the O(a)-Ru bond is weakened. It then becomes energetically unfavorable for the reacting complex to be at the surface, and it is strongly repelled towards the vacuum region. The resulting energy diagram is shown in Fig. 2. It can be seen that there is a significant energy gain from the surface reaction of 1.95 eV. Using the determined activation barrier in a simple Arrhenius-type equation, with the prefactor obtained from consideration of the number of CO molecules hitting the surface per site per second at a given temperature and pressure, we can estimate the reaction rate . This will give an upper bound, since for other orientations of the molecule the barrier is larger. The rate is found to be significantly lower (by a factor of $`3\times 10^6`$) than that measured experimentally . This indicates that this mechanism alone cannot explain the enhanced CO<sub>2</sub> turnover frequency, as was speculated. To investigate other possible reaction channels, we consider it conceivable that there are vacancies in the perfect $`(1\times 1)`$-O adlayer (see Ref. for an estimate of the O-vacancy density). CO molecules may then adsorb at these vacant sites and react via a L-H mechanism. In the following section we investigate such a L-H reaction mechanism.
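The attempt-frequency estimate described above can be sketched using the standard Hertz-Knudsen impingement flux. Every numerical input below (10 torr CO partial pressure, the 380-500 K temperatures of the reactor experiments, a Ru(0001) site area of about 6.4 Å<sup>2</sup>, and the 1.1 eV barrier quoted above) is illustrative rather than taken from the actual rate analysis in the paper.

```python
import math

KB = 1.380649e-23         # Boltzmann constant, J/K
EV = 1.602176634e-19      # J per eV
AMU = 1.66053906660e-27   # atomic mass unit, kg

def impingement_flux(p_pa, T, m_kg):
    """Hertz-Knudsen flux (molecules per m^2 per s) at pressure p and temperature T."""
    return p_pa / math.sqrt(2.0 * math.pi * m_kg * KB * T)

def er_rate_per_site(p_pa, T, Ea_eV, site_area_m2):
    """Arrhenius-type estimate: (CO hits per site per s) * exp(-Ea / kB T)."""
    nu = impingement_flux(p_pa, T, 28.0 * AMU) * site_area_m2  # CO mass ~ 28 amu
    return nu * math.exp(-Ea_eV * EV / (KB * T))

# illustrative numbers: 10 torr CO, Ea = 1.1 eV, ~6.4 A^2 per Ru(0001) surface atom
p = 10.0 * 133.322        # torr -> Pa
r500 = er_rate_per_site(p, 500.0, 1.1, 6.4e-20)
r380 = er_rate_per_site(p, 380.0, 1.1, 6.4e-20)
```

With these inputs the per-site rate comes out many orders of magnitude below one event per second, consistent with the conclusion that the scattering channel alone cannot account for the measured turnover frequency.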
## V Reaction between adsorbed CO and O We first consider the energetics of adsorption of CO into a vacant hcp site of the monolayer oxygen structure. Interestingly, we find that there is an activation barrier of $`\sim `$0.3 eV. We expect, however, that this barrier will relatively easily be overcome at the high gas pressures used in the catalytic reactor experiments. From Fig. 1b it can be seen that CO is closely surrounded by a hexagonal arrangement of six O atoms. Even though the reactants are already very close, there is still a repulsive, rather than an attractive, interaction between CO and the neighboring O atoms. In determining the reaction path, we consider reaction in the inequivalent area of the surface indicated in Fig. 1b by the continuous lines. There are clearly a number of possible ways that the reaction between CO and a neighboring O atom could proceed: for example, the CO molecule may approach an O atom, an O atom may approach the CO molecule, or the reactants may both move towards each other over one Ru atom. For each of these scenarios there are obviously also a number of possible reaction paths (i.e., via the on-top site, bridge and fcc site, etc). Determining the minimum energy pathway, energy barrier, and associated transition state is clearly not a simple problem. Recent ab initio studies of surface reactions and dissociation of diatomic molecules at surfaces have attempted “direct methods” for finding the lowest energy reaction pathway . In the present work, however, we use the “standard grid approach” of constructing various relevant PESs, since we are also interested in the shape of the PES away from the minimum energy reaction pathway. In view of the weaker CO-metal bond strength compared to the O-metal bond strength at this coverage, i.e. 0.85 eV compared to 2.09 eV (with respect to gas phase 1/2 O<sub>2</sub>), we first consider reaction via movement of CO towards the O atom.
This in fact turns out to have the lowest energy pathway of the ones we considered, with a barrier of $`\sim `$1.51 eV. This value is consistent with the recent experimental estimate of $`>`$1.4 eV for the case of high oxygen coverages on the surface . Other possible reaction pathways considered were for O to move towards CO and for O and CO to move towards each other. The lowest energy pathways found for these scenarios were at least 0.2 and 0.3 eV higher, respectively. We point out that in this work we have only considered the most obvious reaction paths, and in order to explore in more detail the very complex nature of this high-dimensional potential energy surface, further calculations are required. As a first step, we investigate the energetics for different fixed lateral positions of the C atom within the area shown in Fig. 1b. We initially keep the lateral positions of the O atoms fixed, but allow vertical relaxations. For CO moving towards the on-top site (see Fig. 1), a strong repulsion between C and the two symmetrically equivalent O atoms develops, giving rise to a large energy barrier of 2.53 eV. Thus, the pathway over the on-top site is energetically unfavorable. Interestingly, the situation is somewhat different for CO moving towards the fcc site: in this direction there is also the build up of an energy barrier; on overcoming the barrier, however, there is an attractive interaction between C and the two symmetrically equivalent O atoms (see Fig. 1b). When the O atoms are then allowed to relax laterally, we find the formation of a carbonate-like species. The atomic geometry is depicted in Fig. 3a. We note that in our investigations of the CO-gas scattering reaction, we also identified the stability of such a carbonate species on the fully O-covered surface; the atomic geometry is similar, as can be seen from Fig. 3b. Formation of this species was also found to involve a significant energy barrier.
We note that experimental identification of carbonate species in CO oxidation reactions over other transition metals has been reported (e.g., Ref. ), where they may possibly act as an intermediary. The actual role they play in the carbon monoxide oxidation reaction for this system, however, is at present unknown. We find that the minimum energy pathway for CO<sub>2</sub> formation corresponds to one where the CO molecule moves essentially directly towards the O atom. The associated energetics are shown in Fig. 4, where we have constructed the energy diagram . Similarly to our study of the E-R mechanism, we also find a small physisorption well for CO<sub>2</sub> above the surface. The corresponding transition state geometry is depicted in the inset of Fig. 4. In this geometry, the C-O(a) bond is almost parallel to the surface and the CO axis is bent away from O(a), yielding a bent CO-O(a) complex with a bond angle of $`125^{\circ }`$, similar to that found for the E-R mechanism. The C-O(a) bond length is 1.59 Å (about 35 % stretched compared to that in CO<sub>2</sub>) and the CO bond length is 1.18 Å. Figure 5 shows the atomic geometry at selected positions along the reaction path to CO<sub>2</sub> formation. Initially, CO is in the vacancy and O(a) in the hcp site. As CO approaches O(a), repulsive forces build up on both the C and O(a) atoms, and the molecular axis of CO begins to tilt away from O(a). At the transition state, CO and O begin to lift off the surface as they break their metal bonds in favor of developing a C-O(a) bond. Interestingly, the distance that O moves away from the surface is $`\sim `$0.36 Å, very similar to that found in the E-R mechanism, which was $`\sim `$0.35 Å. As CO<sub>2</sub> begins to form, the molecular axis quickly straightens out to its linear geometry. The corresponding valence electron density and density difference distributions are shown in Fig. 6.
The latter is constructed by subtracting from the electron density of the CO,O/Ru (0001) system those of the O-covered surface and a free CO molecule. From the electron density difference distributions, with respect to the starting configuration of CO in the vacancy, it can be seen that as CO moves towards O(a), there is firstly less electron density in the CO $`2\pi ^{}`$ orbitals, indicating a weakening of the C-metal bond. Also, electron density has been depleted from the O(a) orbital pointing towards the CO molecule, and an increase occurs in the orbitals in the orthogonal direction (pointing in a near-perpendicular direction to the surface). This redistribution is an effect of Pauli repulsion. At the transition state, the onset of bond formation can be seen between these latter O(a) orbitals and the $`2\pi ^{}`$-like orbitals of the C atom. We note that CO is bonded only weakly to the metal through a $`2\pi ^{}`$-like orbital. The bond of the reacting O(a) to the surface is also significantly weakened at the transition state, as can be seen from the lower panel showing the total valence electron density. Interestingly, the density difference plot for the transition state is very similar to that of the E-R process (see Fig. 7 of Ref. ). In the last (leftmost) panel, significant accumulation of electron density can be seen between the C and O(a) atoms as the CO<sub>2</sub> molecule is practically formed. To summarize, our studies indicate that a Langmuir-Hinshelwood mechanism, rather than an Eley-Rideal mechanism, is the dominant reaction process giving rise to the reported increase in reactivity of ruthenium for the CO oxidation reaction measured in high-pressure catalytic reactor experiments as compared to UHV conditions.
We find that for high coverages of O on the surface, which are attainable under the sufficiently high oxygen pressures used in the experiment, the O adsorption energy is notably weaker than in the lower coverage phases that form under UHV conditions. Furthermore, the adsorption energy of CO in the presence of high O coverages is weaker than in the presence of low O coverages. These factors, together with the close proximity of the reactants, are thought to give rise to the observed increase in reactivity of Ru and the reported anomalous dependence on partial gas pressure. Finally, we would like to mention very recent results of CO oxidation experiments: for the case of very high concentrations of oxygen at the surface (one monolayer on the surface plus oxygen occupying subsurface sites), which can be prepared after formation of the $`(1\times 1)`$ phase by using either NO<sub>2</sub> or high gas pressures of O<sub>2</sub> at elevated temperatures, reaction rates notably greater than that from the on-surface monolayer oxygen structure have been measured . These high rates have been proposed to be connected to the existence of copious amounts of oxygen in the subsurface region. These are clearly very interesting results and require more detailed investigations in order to understand this behavior.

### A Conclusion

We have performed density functional theory calculations in order to investigate the catalytic oxidation of carbon monoxide over Ru (0001) for the conditions of high oxygen coverages on the surface, for which the highest reaction rates of CO<sub>2</sub> formation have been reported. It is only by exposure of the surface to high O<sub>2</sub> gas pressures, or with the use of strongly oxidative molecules such as NO<sub>2</sub>, that high oxygen coverages can be achieved. In this case the O-metal bond strength is notably weaker compared to the lower coverage structures that form under UHV conditions and, as such, is expected to be more reactive.
In the present work we concentrated mainly on the microscopic description and study of the Langmuir-Hinshelwood reaction mechanism. We identified a bent transition state for CO movement towards an adsorbed O atom, with an associated energy barrier of approximately 1.5 eV. In this configuration, the CO molecule and the adsorbed O atom have substantially weakened (and significantly stretched) their bonds to the substrate.
# Searching for non-gaussianity: Statistical tests

## 1 Introduction

Non-gaussianity is a very promising way of characterising some important physical processes and has many applications in astrophysics. In fluid mechanics, the non-gaussian signature of the probability density functions of velocity is used as evidence for turbulence (e.g. Chabaud et al. (1994)). Astrophysical fluids with very large Reynolds numbers, such as the interstellar medium, are expected to be turbulent. If it exists, turbulence should play a leading role in the triggering of star formation, in determining the dynamical structure of the interstellar medium and in its chemical evolution. Non-gaussianity is thus a tool for indicating turbulence. Indeed, recent studies of the non-gaussian shape of molecular line profiles can be interpreted as evidence for turbulence (Falgarone & Phillips 1990; Falgarone et al. 1994; Falgarone & Puget 1995; Miville-Deschênes et al. 1999). Non-gaussianity is also used as an indicator of coronal heating due to magneto-hydro-dynamical turbulence (Bocchialini et al. 1997; Georgoulis et al. 1998). Within the cosmological framework, the statistical nature of the Cosmic Microwave Background (CMB) temperature, or brightness, anisotropies probes the origin of the initial density perturbations which gave rise to cosmic structures (galaxies, galaxy clusters). The inflationary models (Guth 1981; Linde 1982) predict gaussian distributed density perturbations, whereas the topological defect models (Vilenkin 1985; Bouchet 1988; Stebbins 1988; Turok 1989) generate a non-gaussian distribution. Because the nature of the initial density perturbations is a major question in cosmology, many statistical tools have been developed to test for non-gaussianity. In order to test for non-gaussianity, one can use traditional methods based on the distribution of temperature anisotropies.
The simplest tests are based on the third and/or fourth order moments (skewness and kurtosis) of the distribution, both equal to zero for a gaussian distribution. Another test for non-gaussianity through the temperature distribution is based on the study of the cumulants (Ferreira et al. 1997; Winitzki 1998). The n-point correlation functions also give very valuable statistical information. In particular, the three-point function, and its spherical harmonic transform (the bispectrum), are appropriate tools for the detection of non-gaussianity (Luo & Schramm 1993; Kogut et al. 1996; Ferreira & Magueijo 1997; Ferreira et al. 1998; Heavens 1998; Spergel & Goldberg 1998). An investigation of the detailed behaviour of each multipole of the CMB angular power spectrum (the transform of the two-point function) is another non-gaussianity indicator (Magueijo 1995). Finally, other tests of non-gaussianity are based on topological pattern statistics (Coles 1988; Gott et al. 1990). The works of Ferreira & Magueijo (1997) on the search for non-gaussianity in Fourier space, and of Ferreira et al. (1997) on the cumulants of wavelet-transformed maps, have shown that these approaches allow the study of characteristic scales, which is particularly interesting when studying a combination of gaussian and non-gaussian signals as a function of scale. In the present work, we study non-gaussianity in a wavelet decomposition framework, that is, using the coefficients of the wavelet decomposition. We decompose the image of the studied signal into a wavelet basis and analyse the statistical properties of the coefficient distribution. We then search for reliable statistical diagnoses to distinguish between gaussian and non-gaussian signals. The aim of this study is to find suitable tools for demonstrating and quantifying the non-gaussian signature of a signal when it is combined with a gaussian distributed signal of similar, or higher, amplitude.
In section 2, we describe the methods for wavelet decomposition and filtering. We then briefly describe the characteristics of the wavelet we use in our study, as well as the main characteristics of the test data sets used in our work. In section 3, we present the statistical criteria we developed to test for non-gaussianity. We then apply the tests to sets of gaussian test maps (Sec. 4). We also apply them, in section 5, to sets of non-gaussian maps as well as to combinations of gaussian and non-gaussian signals with the same power spectrum. In section 6, we present and apply the detection strategy we propose to quantify the detectability of the non-gaussian signature. Finally, in section 7, we conclude and present our main results.

## 2 Analysis

### 2.1 The wavelet decomposition

Wavelet transforms have been widely investigated recently because of their suitability for a large number of signal and image processing tasks. Wavelet analysis involves a convolution of the signal with a convolving function or kernel (the wavelet) with specific mathematical properties. When satisfied, these properties define an orthogonal basis, which conserves energy, and guarantee the existence of an inverse to the wavelet transform. The principle behind the wavelet transform, as described by Grossman & Morlet (1984), Daubechies (1988) and Mallat (1989), is to hierarchically decompose an input signal into a series of successively lower resolution reference signals and their associated detail signals. At each decomposition level, L, the reference signal has a resolution reduced by a factor of $`2^\mathrm{L}`$ with respect to the original signal. Together with its respective detail signal, each scale contains the information needed to reconstruct the reference signal at the next higher resolution level.
Wavelet analysis can therefore be considered as a series of bandpass filters, and be viewed as the decomposition of the signal into a set of independent, spatially oriented frequency channels. Using the orthogonality properties, a function in this decomposition can be completely characterised by the wavelet basis and the wavelet coefficients of the decomposition. The multilevel wavelet transform (analysis stage) decomposes the signal into sets of different frequency localisations. It is performed by iterative application of a pair of Quadrature Mirror Filters (QMF). A scaling function and a wavelet function are associated with this analysis filter bank. The continuous scaling function $`\varphi _A(x)`$ satisfies the following two-scale equation: $$\varphi _A(x)=\sqrt{2}\underset{n}{\sum }h_0(n)\varphi _A(2x-n),$$ (1) where $`\mathrm{h}_0`$ is the low-pass QMF. The continuous wavelet $`\psi _A(x)`$ is defined in terms of the scaling function and the high-pass QMF $`\mathrm{h}_1`$ through: $$\psi _A(x)=\sqrt{2}\underset{n}{\sum }h_1(n)\varphi _A(2x-n).$$ (2) The same relations apply for the inverse transform (synthesis stage) but, generally, a different scaling function and wavelet ($`\varphi _S(x)`$ and $`\psi _S(x)`$) are associated with this stage: $$\varphi _S(x)=\sqrt{2}\underset{n}{\sum }g_0(n)\varphi _S(2x-n),$$ (3) $$\psi _S(x)=\sqrt{2}\underset{n}{\sum }g_1(n)\varphi _S(2x-n).$$ (4) Equations (1) and (3) converge to compactly supported basis functions when $$\underset{n}{\sum }h_0(n)=\underset{n}{\sum }g_0(n)=\sqrt{2}.$$ (5) The system is said to be biorthogonal if the following conditions are satisfied: $`{\displaystyle \int }\varphi _A(x)\varphi _S(x-k)𝑑x=\delta (k)`$ (6) $`{\displaystyle \int }\varphi _A(x)\psi _S(x-k)𝑑x=0`$ (7) $`{\displaystyle \int }\varphi _S(x)\psi _A(x-k)𝑑x=0`$ (8) Cohen et al. (1990) and Vetterli & Herley (1992) give a complete treatment of the relationship between the filter coefficients and the scaling functions.
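Conditions (5)–(8) can be checked numerically on a concrete filter bank. As an illustrative sketch (not from the paper), the orthogonal 4-tap Daubechies low-pass filter is used below; for orthogonal wavelets the analysis and synthesis banks coincide, so the biorthogonality conditions reduce to self-orthogonality under even shifts. The biorthogonal 6/10 pair adopted later would be checked the same way with distinct h and g filters.

```python
import numpy as np

# Orthogonal Daubechies 4-tap low-pass filter h0.
s3 = np.sqrt(3.0)
h0 = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4.0 * np.sqrt(2.0))
# High-pass QMF via the alternating-sign relation h1(n) = (-1)^n h0(L-1-n).
h1 = h0[::-1] * np.array([1.0, -1.0, 1.0, -1.0])

# Eq. (5): the low-pass coefficients must sum to sqrt(2).
assert np.isclose(h0.sum(), np.sqrt(2.0))
# Unit norm and shift-2 self-orthogonality (discrete analogue of Eq. (6)).
assert np.isclose(np.dot(h0, h0), 1.0)
assert np.isclose(np.dot(h0[2:], h0[:-2]), 0.0)
# Low-pass/high-pass orthogonality (discrete analogue of Eqs. (7)-(8)).
assert np.isclose(np.dot(h0, h1), 0.0)
print("QMF conditions satisfied")
```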
When applied to bi-dimensional data (typically images), three main types of decomposition can be considered: dyadic (or octave band), pyramidal and uniform.

1. A “dyadic” decomposition refers to a transform in which only the reference sub-band (the low-pass part of the signal) is decomposed at each level. In this case, the analysis stage is applied in both directions of the image at each decomposition level. The total number of sub-bands after L levels of decomposition is then 3L+1 (Fig. 1, upper panel).
2. A “pyramidal” decomposition is similar to a “dyadic” decomposition in the sense that only the reference sub-band is decomposed at each level, but it refers here to a transform that is performed separately in the two directions of the image. The total number of sub-bands after L levels of decomposition is then $`(\mathrm{L}+1)^2`$ (Fig. 1, lower panel).
3. A “uniform” decomposition refers to one in which all sub-bands are transformed at each level. The total number of sub-bands after L levels of decomposition is then $`4^\mathrm{L}`$.

The wavelet functions are localised in space and, simultaneously, in frequency. This approach is therefore an elegant and powerful tool for image analysis, because the features of interest in an image are present at different characteristic scales. Moreover, if the input field is gaussian distributed, the output is distributed the same way, regardless of the power spectrum. This arises from the linear transformation properties of gaussian variables. The distribution of the wavelet coefficients of a gaussian process is thus a gaussian. Conversely, we expect that any non-gaussian signal will exhibit a non-gaussian distribution of its wavelet coefficients. In our study, we have used bi-orthogonal wavelets, which are mainly used in data compression, because of their better performance, compared with orthogonal wavelets, in compacting the energy into fewer significant coefficients.
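A minimal numerical sketch of the dyadic scheme, using the simple orthogonal Haar pair instead of the biorthogonal filters discussed below, illustrates both the 3L+1 sub-band count and the fact that a gaussian input yields gaussian-distributed coefficients (all function names and parameters here are illustrative):

```python
import numpy as np

def haar2d_level(a):
    """One separable 2-D analysis step with the orthogonal Haar pair.
    Returns the reference band and the three detail bands (LH, HL, HH)."""
    s = (a[:, ::2] + a[:, 1::2]) / np.sqrt(2)  # low-pass along columns
    d = (a[:, ::2] - a[:, 1::2]) / np.sqrt(2)  # high-pass along columns
    ll, lh = (s[::2] + s[1::2]) / np.sqrt(2), (s[::2] - s[1::2]) / np.sqrt(2)
    hl, hh = (d[::2] + d[1::2]) / np.sqrt(2), (d[::2] - d[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def dyadic_decompose(img, levels):
    """Dyadic decomposition: only the reference band is split at each level,
    yielding 3*levels + 1 sub-bands in total."""
    bands, ref = [], img
    for _ in range(levels):
        ref, lh, hl, hh = haar2d_level(ref)
        bands += [lh, hl, hh]
    return bands + [ref]

rng = np.random.default_rng(0)
bands = dyadic_decompose(rng.standard_normal((512, 512)), 4)
print(len(bands))  # 3*4 + 1 = 13 sub-bands

# A linear transform of a gaussian field stays gaussian: the excess of
# kurtosis of a detail band is consistent with zero.
lh1 = bands[0].ravel()
k = np.mean(lh1**4) / np.mean(lh1**2)**2 - 3.0
print(round(k, 2))
```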
There exist bi-orthogonal wavelet bases of compact support that are symmetric or antisymmetric. Antisymmetric wavelets are proportional, or almost proportional, to a first derivative operator (e.g. the 2/6 tap filter (filter #5) of Villasenor et al. (1995), or the famous Haar transform, which is an orthogonal wavelet). Symmetric wavelets are proportional, or almost proportional, to a second derivative operator (e.g. the 9/3 tap filter of Antonini et al. (1992)). In the frame of detecting non-gaussian signatures, the choice of the wavelet basis is critical: because non-gaussian features exhibit point sources or step edges, the wavelet must have a very good impulse response and a low shift variance, i.e. it must preserve the amplitude and the location of the details. Villasenor et al. (1995) have tested a set of bi-orthogonal filter banks, within this context, to determine the best ones for image compression. They conclude that even-length filters have significantly less shift variance than odd-length filters, and that their performance in terms of impulse response is superior. In these filters, the high-pass QMF is antisymmetric, which is also a desirable property in the sense that we will also be interested in the statistical properties of the multi-scale gradients. Consequently, in our study, we have chosen the 6/10 tap filter (filter #3) of Villasenor et al. (1995) (Fig. 2), which represents the best compromise between all the criteria and energy compaction. Using this filter, we have chosen to perform a four-level dyadic decomposition of our data. This particular wavelet and decomposition method have already been used for source detection by Aghanim et al. (1998). With such a decomposition, we also benefit from correlations between the two directions at each level, which is not the case with the pyramidal decomposition that treats both directions as if they were independent.
Another advantage of this transform is that, for each level of decomposition, or scale, we benefit from the maximum number of coefficients possible, which is crucial for the statistics.

### 2.2 The test maps

We use a test case which consists of sets of 100 maps of a gaussian signal and 200 maps of a non-gaussian signal, all having the same bell-shaped power spectrum and $`512\times 512`$ pixels. One of the non-gaussian sets (100 maps) consists of a distribution of disks with uniform amplitude (top-hat profiles), generating step edges. The disks have different sizes and amplitudes and are randomly distributed in the map. The signal is weakly skewed: the average skewness, the third moment of the distribution, computed over the 100 non-gaussian realisations is $`\mu _3=0.10\pm 0.05`$ (one sigma error for one realisation), whereas the excess of kurtosis, based on the fourth moment ($`\mu _4`$) of the distribution, is $`k=\mu _4/\mu _2^2-3=1.09\pm 0.17`$. The second set of 100 non-gaussian maps consists of a distribution of gaussian profiles with different sizes and amplitudes. The skewness and excess of kurtosis of the 100 statistical realisations of the non-gaussian distribution of gaussian profiles are respectively $`0.06\pm 0.04`$ and $`1.19\pm 0.13`$. The same quantities computed over the 100 gaussian maps are respectively $`\mu _3=0.14\times 10^{-2}\pm 2.15\times 10^{-2}`$ and $`0.03\times 10^{-2}\pm 3.87\times 10^{-2}`$. These numbers should be equal to zero for a gaussian distribution; they are not, because of the statistical dispersion over a finite set of realisations. The purpose of this paper being to develop suitable statistical tests for non-gaussianity, we will not study other effects such as noise or beam dilution. These effects will be considered in an application of the method to the CMB signal in a second paper (Aghanim & Forni 1999).
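A toy version of such test maps is easy to generate. The parameters below (map size, disk number, radii and amplitudes, a white rather than bell-shaped spectrum) are illustrative only, so the moments will not match the values quoted above; the point is simply that the pixel distribution of the disk maps is strongly leptokurtic while the gaussian map has near-zero skewness and kurtosis.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 256  # smaller than the paper's 512x512 maps, for speed

def moments(m):
    """Skewness and excess of kurtosis of the pixel distribution."""
    x = m.ravel() - m.mean()
    m2, m3, m4 = (x**2).mean(), (x**3).mean(), (x**4).mean()
    return m3 / m2**1.5, m4 / m2**2 - 3.0

# Gaussian map (white noise stand-in for the bell-shaped-spectrum maps).
gauss = rng.standard_normal((n, n))

# Non-gaussian map: randomly placed top-hat disks of varying size and amplitude.
yy, xx = np.mgrid[:n, :n]
disks = np.zeros((n, n))
for _ in range(15):
    cx, cy = rng.integers(0, n, 2)
    r = rng.integers(3, 11)
    disks[(xx - cx)**2 + (yy - cy)**2 < r**2] += rng.uniform(0.5, 2.0)

s_g, k_g = moments(gauss)
s_d, k_d = moments(disks)
print(f"gaussian: skew={s_g:+.3f}  kurt={k_g:+.3f}")
print(f"disks   : skew={s_d:+.3f}  kurt={k_d:+.3f}")
```

Note that this toy disk map is much more skewed than the paper's (which shares the gaussian maps' power spectrum); it only illustrates the leptokurtic character of step-edge signals.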
## 3 Tests of non-gaussianity

The most direct and obvious way of analysing the statistical properties of an image is to use the distribution of the pixel brightnesses, or temperatures, together with its skewness and kurtosis. If the two quantities are different from zero, they indicate that the signal is non-gaussian. However, a weak non-gaussian signal will hardly be detected through the moments of the temperature distribution. Another way of addressing the problem is to use the coefficients in the wavelet decomposition and to study their statistical properties, which in turn characterise the signal. In fact, the wavelet coefficients are quite sensitive to variations (even weak ones) in the signal, temperature or brightness, and hence to the statistical properties of the underlying process. We have developed two tests which exhibit the non-gaussian characteristics of a signal using the wavelet coefficients. Since our test maps are not skewed, in the following we focus only on the results obtained using the fourth moment. For the first discriminator, we study the statistical properties of the distribution of the multi-scale gradient coefficients. This method is appropriate when dealing with a non-gaussian process characterised by sharp edges and consequently by strong gradients in the signal. Indeed, in any region where the analysed function is smooth the wavelet coefficients are small. On the contrary, any abrupt change in the behaviour of the function increases the amplitude of the coefficients around the singularity. The detection of non-gaussianity is thus based on the search for these gradients. In the dyadic wavelet decomposition one can discriminate between the coefficients associated with vertical and horizontal gradients and the other coefficients. In our case, the vertical and horizontal gradients are analogous to the partial derivatives, $`\partial /\partial x`$ and $`\partial /\partial y`$, of the signal.
Mallat & Zhong (1992) give a thorough treatment of the characterisation of signals from multi-scale edges. We compute the quadratic sum of the coefficients, the quantity $`𝒢_L=\left(\partial /\partial x\right)_L^2+\left(\partial /\partial y\right)_L^2`$, at each decomposition level $`L`$. This quantity represents the squared amplitude of the multi-scale gradient of the image. In the following, we will however refer to it as the multi-scale gradient coefficient. The second statistical discriminator is based on the study of the wavelet coefficients related to the horizontal, vertical and diagonal gradients. These coefficients are associated with the partial derivatives $`\partial /\partial y`$ and $`\partial /\partial x`$, as in the multi-scale gradient method, as well as with the cross derivative $`\partial ^2/\partial x\partial y`$. The coefficients are computed at each decomposition level, and their excess of kurtosis with respect to a Gauss distribution exhibits the non-gaussian signature of the studied signal. In this context, the wavelet coefficients associated with the first derivatives are obviously closely related to the multi-scale gradient. In the following, we first apply our two tests to purely gaussian and purely non-gaussian maps. We then test the detectability of a non-gaussian signal added to a gaussian one with the same power spectrum and with increasing mixing ratios.

## 4 Characterisation of gaussian signals

### 4.1 The multi-scale gradient and its distribution

For the 100 gaussian maps, we find that the histogram of the multi-scale gradient coefficients can be fitted by the positive wing of the Laplace probability distribution function: $$P(𝒢_L)=\frac{1}{\sqrt{2}\sigma }\mathrm{exp}\left(-\frac{\sqrt{2}(𝒢_L-\mu _1)}{\sigma }\right),$$ (9) where $`\mu _1`$ is the mean of the distribution (theoretically equal to zero) and $`\sigma ^2`$ is its second moment. We plot in figure 3 (left panels) the distribution of the multi-scale gradient coefficients in the four decomposition scales for the gaussian signal.
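As an illustrative numerical check (with the Haar pair standing in for the paper's 6/10 filter, and white noise for the bell-shaped spectrum), the level-1 detail bands of a gaussian field are independent gaussians of equal variance, so $`𝒢`$ follows an exponential law — the positive wing of the Laplace form of Eq. (9) — whose moment ratio $`\mu _4/\mu _2^2`$, with moments taken about zero, equals 6:

```python
import numpy as np

def haar_details(a):
    """Level-1 Haar detail bands, acting like discrete d/dy and d/dx."""
    s = (a[:, ::2] + a[:, 1::2]) / np.sqrt(2)
    d = (a[:, ::2] - a[:, 1::2]) / np.sqrt(2)
    lh = (s[::2] - s[1::2]) / np.sqrt(2)   # horizontal-edge band (d/dy)
    hl = (d[::2] + d[1::2]) / np.sqrt(2)   # vertical-edge band (d/dx)
    return lh, hl

rng = np.random.default_rng(1)
lh, hl = haar_details(rng.standard_normal((512, 512)))
g = (lh**2 + hl**2).ravel()   # multi-scale gradient coefficients at level 1

# For gaussian input, lh and hl are independent unit gaussians, so g is
# exponentially distributed and mu4/mu2^2 (moments about zero) equals 6;
# the excess of kurtosis w.r.t. the Laplace value is consistent with zero.
k = np.mean(g**4) / np.mean(g**2)**2 - 6.0
print(f"excess of kurtosis w.r.t. Laplace: {k:+.2f}")
```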
For reasons of legibility, we have only plotted the fit obtained with the 100 maps. The error bars represent a confidence interval (for one realisation) and account for the statistical dispersion of the realisations. We can analyse the multi-scale gradient distribution through its $`n`$th-order moments ($`\mu _n`$). In particular, we compute the excess of kurtosis using the second and fourth moments of the distribution ($`\mu _2`$ and $`\mu _4`$). For a gaussian distribution, the normalised excess is zero. For a Laplace distribution, the fourth moment is given by $`\mu _4=6\mu _2^2`$. The normalised excess of kurtosis $`k=\mu _4/\mu _2^2-6`$ highlights the non-gaussian signature of a signal through the departure of the multi-scale gradient from a Laplace distribution. At each decomposition level, we compute the normalised excess of kurtosis of the multi-scale gradient coefficients for the 100 gaussian maps, and we derive a representative value of the distribution, namely the mean $`\overline{k}`$, which we quote in Table 1. The results show that $`\overline{k}`$ is very close to zero. The $`\sigma `$ values correspond to the root mean square values with respect to the mean $`\overline{k}`$; they define a confidence interval, or a probability distribution of the excess of kurtosis. For the gaussian signal the upper and lower boundaries of this interval are equal, suggesting that the $`k`$ values are gaussian distributed. The dispersion $`\sigma `$ increases with the decomposition scale (Fig. 3, left panels); this is a consequence of the smaller number of wavelet coefficients at higher decomposition scales.

### 4.2 Partial derivatives

The wavelet coefficient distributions associated with the first and cross derivatives are gaussian for the gaussian maps. We thus compute the normalised excess of kurtosis with respect to a gaussian distribution ($`k=\mu _4/\mu _2^2-3`$).
These values are displayed in figure 4 (dashed line) for the gaussian maps. In this figure, at each decomposition scale, the first set of 100 values stands for the excess of kurtosis of the wavelet coefficients associated with the horizontal gradient ($`\partial /\partial y`$). The second set of 100 values represents the same quantity computed for the vertical gradient ($`\partial /\partial x`$), and the last one represents the excess of kurtosis for the wavelet coefficients associated with the cross derivative $`\partial ^2/\partial x\partial y`$ (diagonal gradients). We note that for the gaussian maps the excess is always centred around zero at all the decomposition scales. As for the multi-scale gradient, the dispersion around the mean $`\overline{k}`$ increases with increasing decomposition scale. In table 2, we quote the mean together with the confidence intervals at each scale. The results also show that the values are close to zero, confirming the gaussian nature of the signal. As a result, we conclude that a gaussian signal can be characterised by the distribution of the multi-scale gradient coefficients and of the coefficients associated with $`\partial /\partial x`$, $`\partial /\partial y`$ and $`\partial ^2/\partial x\partial y`$. In the first case, the multi-scale gradient coefficient distribution is fitted by a Laplace distribution and the excess of kurtosis is zero. In the second case, the excesses of kurtosis are gaussian distributed with $`\overline{k}=0`$ for the first and the cross derivatives. We check these two characteristics on other gaussian processes with different power spectra. For a white noise spectrum, we find results identical to those of the study case: zero excess of kurtosis for the multi-scale gradient and for the coefficients of the partial derivatives. Since our statistical tests are based on the statistics of the wavelet coefficients at each decomposition scale, we expect that a sharp cut-off in the power spectrum of the gaussian signal will induce a sample variance problem.
We check this behaviour using a gaussian process exhibiting a very sharp cut-off, with a shape close to a Heaviside function, at the second decomposition scale. This cut-off is similar to the one expected in the CMB power spectrum in a standard cold dark matter model. At all the decomposition scales except the second, we find the expected zero excess of kurtosis for the wavelet coefficients of gaussian signals. At the second decomposition scale, we find a non-zero excess of kurtosis for the multi-scale gradient coefficients, as well as for the coefficients related to the first derivatives, which could be misinterpreted as a non-gaussian signature. This non-zero excess has nothing to do with an intrinsic property of the studied signal, as the latter is gaussian at all scales. It comes from the very sharp decrease in power combined with the narrow filter associated with the wavelet basis. In fact, the contribution of the gaussian process at this scale is sparse. Therefore, it induces a sample variance effect which in turn results in a non-zero excess of kurtosis. We tested a wider filtering wavelet and found that the excess of kurtosis decreases. Nevertheless, a wider filtering wavelet smoothes the non-gaussian signatures and reduces the efficiency of our discriminating tests. However, at the second decomposition scale, the wavelet coefficients associated with $`\partial ^2/\partial x\partial y`$ exhibit no excess of kurtosis for a process with a sharp cut-off. This indicates that the excess of kurtosis computed using the cross derivative coefficients is more reliable in characterising a gaussian process, and consequently non-gaussianity, regardless of the power spectrum. We also analysed the sum of gaussian signals. When there is no cut-off in the power, the excess of kurtosis for the sum of gaussian signals is zero for both discriminators.
By contrast, if one of the gaussian signals presents a cut-off at any decomposition scale, we again find a non-zero excess of kurtosis at the corresponding scale.

## 5 Application to non-gaussian signals

We apply our statistical discriminators to detect the non-gaussian signature of different processes. We first study two sets of non-gaussian maps, one constituted of a distribution of top-hat profiles and the other of a distribution of gaussian profiles, both having the same power spectrum as the gaussian test maps used in the previous section. We also apply the statistical tests to combinations of gaussian and non-gaussian signals with different mixing ratios.

### 5.1 The multi-scale gradient and its distribution

We compute the multi-scale gradient coefficients ($`𝒢_L`$) using 100 statistical realisations of the non-gaussian process (top-hat profiles). At the four decomposition scales, we plot the fitted histogram (right panels of Figure 3). We note that the distribution of the multi-scale gradient coefficients also fits a Laplace distribution for small values of $`𝒢_L`$. However, there is a significant departure from this distribution for higher values, exhibited by the larger error bars and by the wings of the gradient distribution at large $`𝒢_L`$. Figure 3 (right panels) exhibits the non-gaussian signatures mostly at the first three decomposition scales. At the fourth, the lack of coefficients enlarges the error bars, but we can still marginally distinguish the non-gaussian signal. In our test case, the process is leptokurtic, that is, the non-gaussianity is characterised by a positive excess of kurtosis. We quote, in the left panel of Table 3, the median excesses of kurtosis computed with the multi-scale gradient coefficients of the 100 maps. The median is a more suitable quantity than the mean $`\overline{k}`$ to characterise a non-gaussian process, as there is an important dispersion of the $`k`$ values with a clear excess towards large values.
The $`\sigma _\pm `$ values, which represent the $`rms`$ excess of kurtosis with respect to the median for one realisation, naturally take into account the non-symmetric distribution of the multi-scale gradients. This results in a lower boundary ($`\sigma _{}`$) of the confidence interval smaller than the upper boundary ($`\sigma _+`$). The latter is biased towards large values, as we are studying a leptokurtic process. Therefore, the comparison between the values of $`k`$ and $`\sigma _{}`$ indicates the detectability of non-gaussianity. When $`k-\sigma _{}`$ for one realisation differs from zero by a value of the order of, or larger than, $`\sigma _{}`$, this suggests that the signal is non-gaussian. For the top-hat profiles, there is an obvious excess of kurtosis at all scales. In order to test the non-gaussian signature arising from different processes with the same power spectrum, we analyse a set of 100 non-gaussian maps made of the superposition of gaussian profiles of different sizes and amplitudes. We compute the median value of the excess of kurtosis and the corresponding confidence intervals (Tab. 3, right panel). We note that $`k`$ is different from zero at all scales, exhibiting the non-gaussian nature of the studied process. However, it is smaller than in the case of the top-hat profiles. This decrease is due to the superposition of smoother profiles. We now add one representative gaussian map to 100 non-gaussian maps (top-hat profiles). As the non-gaussian signal is very strongly dependent on the studied map, it is necessary to span a large set of non-gaussian statistical realisations in order to have a reliable statistical specification of non-gaussianity. The gaussian and non-gaussian signals were summed with different mixing ratios, represented by the ratio of their $`rms`$ amplitudes ($`R_{rms}=\sigma _{gauss}/\sigma _{nongauss}`$).
After wavelet decomposition, we compute the multi-scale gradient coefficients of the summed maps and derive the normalised median excess of kurtosis with respect to a Laplace distribution, together with the confidence intervals. The results are quoted in Table 4 as a function of the mixing ratio $`R_{rms}`$ and the wavelet decomposition scale. For $`R_{rms}=1`$, the excess of kurtosis is larger than that of the gaussian test map and smaller than that of the purely non-gaussian signal. The summation of the two processes has therefore, as expected, smoothed the gradients and diluted the non-gaussian signal. For a non-gaussian signal with an $`rms`$ amplitude half that of the gaussian signal, only the first three scales indicate an excess of kurtosis different from the gaussian one. For the ratio $`R_{rms}=3`$, only the first scale has an excess marginally different from that of the gaussian signal. For larger ratios, the non-gaussian signal is quite blurred.

### 5.2 Partial derivatives

For the 100 non-gaussian maps (top-hat profiles) with the same power spectrum as the gaussian test maps, we compute the normalised excess of kurtosis, with respect to a gaussian, of the wavelet coefficients associated with $`\partial /\partial x`$, $`\partial /\partial y`$ and $`\partial ^2/\partial x\partial y`$. As for the multi-scale gradient, we derive the median excess of kurtosis and the upper and lower boundaries of the confidence intervals. The results, given in Table 5, show non-zero excesses of kurtosis for the first and cross derivatives at all decomposition scales. In Figure 4, the solid line represents the values of the excess of kurtosis of each non-gaussian realisation. The dashed line represents the same quantity for the gaussian test maps. We first note the overall shift of the values towards non-zero positive values (leptokurtic signal), with some very large values compared to the median.
A second characteristic worth noting is the difference in amplitude between, on the one hand, the excess of kurtosis of the coefficients associated with $`\partial ^2/\partial x\partial y`$ and, on the other hand, those associated with $`\partial /\partial x`$ and $`\partial /\partial y`$. The former are indeed smaller. As the excesses of kurtosis of the first derivative coefficients are of the same order, we compute one median $`k`$ over the $`\partial /\partial x`$ and $`\partial /\partial y`$ coefficients, and compare it to the excess of kurtosis of the cross derivative. At the first two decomposition scales there is an important and noticeable difference between the two sets of values, $`\partial /\partial x`$ and $`\partial /\partial y`$ on the one hand and $`\partial ^2/\partial x\partial y`$ on the other. At the third and fourth decomposition scales, the difference decreases but is still present. For the non-gaussian process made of the superposition of gaussian profiles, we compute the median excess of kurtosis associated with the wavelet coefficients of the first and cross derivatives (Tab. 6). As for the multi-scale gradient coefficients, we find that the excess of kurtosis is smaller for this type of non-gaussian map, but it is still significantly different from zero at all scales except the fourth. We analyse the sum of a representative gaussian map and the set of 100 non-gaussian maps (top-hat profile). The sum of the two processes has again been performed with different mixing ratios. The results we obtain are given in Table 7. An accompanying figure (Fig. 5) illustrates the corresponding results for a mixing ratio $`R_{rms}=1`$ (solid line). In this figure, the dashed line represents the gaussian process alone. For $`R_{rms}=1`$, we find that non-gaussianity is detected at all decomposition scales for both first and cross derivative coefficients. For a mixing ratio $`R_{rms}=2`$, we observe a significant excess of kurtosis only at the first decomposition scale. For $`R_{rms}\ge 3`$, the excess becomes marginal for both $`\partial /\partial x`$ and $`\partial /\partial y`$ coefficients at the first decomposition scale, all other scales showing no departure from gaussianity.
The same tendency is noted for the coefficients associated with $`\partial ^2/\partial x\partial y`$.

## 6 Detection strategy of non-gaussianity

We have characterised the gaussian signal through the excess of kurtosis of the multi-scale gradient and partial (first and cross) derivative coefficients, using processes with different power spectra. When the power spectrum of the process exhibits a sharp cut off in one of the wavelet decomposition windows, we find that the excess of kurtosis associated with the coefficients of the first derivatives, and consequently of the multi-scale gradient, is non-zero at the filtering level of the cut off. On the contrary, the excess of kurtosis computed with the cross derivative coefficients is zero. Accordingly, we propose a detection strategy to test for non-gaussianity. We compare a set of maps of the “real” observed sky to a set of gaussian realisations having the power spectrum of the “real sky”. Our proposed method overcomes the problems arising from possible cut offs in the power spectrum of the studied process, and the consequent possible misinterpretations of the statistical signature. It also constitutes the most general approach for exhibiting the statistical nature (gaussian or not) of a signal and quantifying its detectability through our statistical tests. Our detection strategy for non-gaussianity is based on the following steps:

- Using observed maps of the “real sky”, we compute the angular power spectrum of the signal, regardless of its statistical nature.
- We simulate gaussian synthetic realisations of a process having the power spectrum of the “real” process. On the obtained gaussian test maps filtered with the wavelet function, we compute the excess of kurtosis for the multi-scale gradient and derivative coefficients. This analysis allows us to characterise completely the gaussian maps, naturally taking into account possible sample variance effects due to cut offs at any scale.
- For the set of observed maps of the “real sky”, we compute the excesses of kurtosis associated with the multi-scale gradient and the derivative coefficients.
- Assuming that the realisations (maps) are independent, each value of the excess of kurtosis has a probability of $`1/N`$, where $`N`$ is the number of maps. Using the computed excesses of kurtosis of both the gaussian and non-gaussian realisations, we deduce the probability distribution function (PDF) of the excess of kurtosis, for the multi-scale gradient coefficients and for the coefficients related to the derivatives.
- The last step consists of quantifying the detectability of the non-gaussian signature, that is, of comparing the PDF of the gaussian process to the PDF of the “real sky”. In practice this can be done by computing, at each decomposition scale, the probability that the median excess of kurtosis of the non-gaussian maps belongs to the PDF of the synthetic gaussian counterparts. It is the probability that a random variable is greater than or equal to the real median $`k`$, the distribution of $`k`$ being taken in its asymptotic form given by the central limit theorem. Another way of comparing the two PDFs is to use the Kolmogorov-Smirnov (K-S) test (Press et al. 1992), which gives the probability that two sets of values are drawn from the same distribution. This test for non-gaussianity is more global than the previous one because it is sensitive both to the shift in the PDFs, especially the median value, and to the spread of the distributions. This property makes it more sensitive to non-gaussianity, especially in the case where we only have a small number of observed maps.

We apply our detection strategy to the non-gaussian test maps made of the top-hat and gaussian profiles. For illustrative purposes, we give the results for the multi-scale gradient coefficients only.
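The last two steps of the strategy (the tail probability of the median $`k`$ within the gaussian PDF, and the K-S comparison of the two PDFs) can be sketched as follows. This is an illustrative implementation, not the authors' code; the asymptotic $`Q_{KS}`$ formula is the one given by Press et al. (1992):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample K-S statistic: maximum distance between the two
    empirical CDFs of the excesses of kurtosis."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    pts = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, pts, side="right") / len(a)
    cdf_b = np.searchsorted(b, pts, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def ks_probability(a, b):
    """Asymptotic K-S probability Q_KS(lambda) = 2 sum_j (-1)^(j-1)
    exp(-2 j^2 lambda^2) that the two samples share a parent
    distribution; small values flag non-gaussianity."""
    d = ks_statistic(a, b)
    ne = len(a) * len(b) / (len(a) + len(b))
    lam = (np.sqrt(ne) + 0.12 + 0.11 / np.sqrt(ne)) * d
    if lam < 1e-8:
        return 1.0
    j = np.arange(1, 101)
    q = 2.0 * np.sum((-1.0) ** (j - 1) * np.exp(-2.0 * (j * lam) ** 2))
    return float(min(1.0, max(0.0, q)))

def detection_probability(k_gauss, k_data):
    """Detection probability per scale: 1 minus the fraction of
    gaussian realisations whose excess of kurtosis reaches the
    median k of the observed maps."""
    return 1.0 - float(np.mean(np.asarray(k_gauss) >= np.median(k_data)))
```

With a strongly leptokurtic set of measured `k` values and near-zero gaussian ones, `detection_probability` returns 1 and `ks_probability` is small, mirroring the 100% detections quoted below for the test maps.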
At the first three decomposition scales and for both sets of maps, we find that the probability for the signal to be non-gaussian is 100%, using the probability that the measured $`k`$ belongs to the gaussian PDF. At the fourth scale, the probability is 99.99% and 99.95% for the top-hat and gaussian profile distributions respectively. The K-S test gives a 100% probability of detecting non-gaussianity. The detectability of the non-gaussian signature, for the sum of the gaussian and non-gaussian (top-hat profile) maps with mixing ratio $`R_{rms}=1`$, is 100% at the first decomposition scale. It is 99.96% and 93.4% at the second and third scales, and 76.43% at the fourth scale. For $`R_{rms}=2`$, the first scale is still perfectly non-gaussian, and only the second scale is detected, with a probability of 72.4%. The K-S test gives more or less the same results for both mixing ratios. The results are illustrated in Figure 6 (for the multi-scale gradient coefficients) and in Figure 7 (for the cross derivative coefficients). In these plots, the solid line represents the PDF of the excess of kurtosis for the measured non-gaussian signal. The dashed line represents the PDF of the synthetic gaussian maps with the same power spectrum. In both figures the left panels are for the non-gaussian signal alone, whereas the right panels are for the sum with a mixing ratio of one.

## 7 Discussion and Conclusions

In the present work we develop two statistical discriminators to test for non-gaussianity. To do so, we study the statistical properties of the coefficients in a four-level dyadic wavelet decomposition. Our first discriminator uses the amplitude of the multi-scale gradient coefficients. It is based on the computation of their excesses of kurtosis with respect to a Laplace distribution function. The second test relies on the computation of the excesses of kurtosis for the first and cross derivative coefficients.
It can itself be divided into two specific tests, one using the first derivatives and the other the cross derivative. For both discriminators (multi-scale gradient and partial derivatives), the gaussian signature is characterised by a zero excess of kurtosis. We check this property for several gaussian processes with different power spectra and for a signal made of the sum of gaussian signals. Given this property for gaussianity, a departure from a zero value of the excess of kurtosis indicates a non-gaussian signature. In order to overcome peculiar features in the power spectrum (e.g. sharp cut offs) at any wavelet decomposition scale, which could be misinterpreted as a non-gaussian signature, we propose the following detection strategy. We simulate synthetic gaussian maps with the same power spectrum as the non-gaussian studied signal. We compute the excess of kurtosis for the two discriminators, for both the gaussian and non-gaussian maps, and derive the PDF in each case. Then, we quantify the detectability of non-gaussianity by estimating the probability that the median excess of kurtosis of the non-gaussian signal belongs to the PDF of its gaussian counterpart, and by applying the K-S test to discriminate between the gaussian and the “real” PDFs. We apply our detection strategy to the test maps of non-gaussian signals alone, and to the sum of gaussian and non-gaussian signals. In the first case, we show that the non-gaussian signature emerges clearly at all scales. In the second case, the detection depends on the mixing ratio ($`\sigma _{gauss}/\sigma _{nongauss}`$). Down to a mixing ratio of about 3, which is about 10 in terms of power, we detect the non-gaussian signature. In parallel to our work, Hobson et al. (1998) have used the wavelet coefficients to distinguish the non-gaussianity due to the Kaiser-Stebbins effect (Bouchet et al. 1988; Stebbins 1988) of cosmic strings.
They used the cumulants of the wavelet coefficients up to the fourth order (Ferreira et al. 1997), in a pyramidal decomposition. As mentioned in section 2.1 and in Hobson et al. (1998), the pyramidal decomposition induces a scale mixing. Therefore, it does not take advantage of the possible spatial correlations of the signal. Furthermore, it gives a smaller number of coefficients within each sub-band for the analysis. We instead use the dyadic decomposition to avoid these two weaknesses, as in Aghanim et al. (1998). In our study, we use weakly non-gaussian simulated maps (small kurtosis). Such a weak non-gaussian signature, in contrast with the Kaiser-Stebbins effect and with point-like or peaked profiles, is detected using our statistical discriminators. Using the bi-orthogonal wavelet transform, we succeed in emphasising the non-gaussianity by means of the statistics of the wavelet coefficient distributions. This detection is also possible using other bi-orthogonal wavelet bases, but their efficiency is lower at larger scales. Consequently, the choice of the wavelet basis also depends on the characteristics of the non-gaussian signal one wants to emphasise. However, we believe that the wavelet basis we chose represents an optimal compromise for a large variety of non-gaussian features.

###### Acknowledgements.

The authors wish to thank the referee, A. Heavens, for his comments that improved the paper. We also thank F.R. Bouchet, P. Ferreira and J.-L. Puget for valuable discussions and comments and A. Jones for his careful reading.
# Impurity spin relaxation in $`S=1/2`$ $`XX`$ chains

## I Introduction

The $`S=1/2`$ $`XX`$ chain $$H=\sum _{i=1}^{N-1}J_i(S_i^xS_{i+1}^x+S_i^yS_{i+1}^y),$$ (1) is one of the simplest quantum many-body systems conceivable, as many of its properties can be derived from those of noninteracting lattice fermions. Its equilibrium spin pair correlation functions $$\langle S_i^\alpha (t)S_j^\alpha \rangle =\frac{\mathrm{Tr}\,e^{-\beta H}e^{itH}S_i^\alpha e^{-itH}S_j^\alpha }{\mathrm{Tr}\,e^{-\beta H}},\quad \alpha =x,z$$ (2) have been the objects of intense research efforts over an extended period. Only a few explicit analytic results are available, but several existing asymptotic results for large distances $`|i-j|`$ or long times $`t`$ have been corroborated by numerical calculations. For $`XX`$ chains with homogeneous nearest-neighbor coupling ($`J_i\equiv J`$) only three different types of asymptotic long-time behavior have been observed to date: Gaussian, exponential, and power-law (often with superimposed oscillations). It is interesting to speculate whether non-uniform or random couplings might induce additional types of asymptotic behavior. In the present paper we study the changes in autocorrelation functions $`\langle S_i^\alpha (t)S_i^\alpha \rangle `$ induced by a single impurity spin in an otherwise homogeneous chain. The impurity spin is located either at the boundary of the system, $$J_1=J^{\prime },\quad J_i\equiv J=1\text{ for }i\ge 2$$ (3) or in the bulk, $$J_{N/2-1}=J_{N/2}=J^{\prime },\quad J_i\equiv J=1\text{ for all other }i.$$ (4) Equilibrium and non-equilibrium dynamics of the boundary impurity were studied early on by Tjon. In the weak-coupling limit ($`J^{\prime }\to 0`$), Tjon obtained exponential behavior of the impurity spin autocorrelation functions. Our results (see Sec. IV) show that for finite impurity coupling $`J^{\prime }`$, exponential behavior occurs only in an intermediate time regime, whereas the ultimate long-time behavior is a power law.
Besides their obvious relevance in low-dimensional magnetism, impurities in spin-1/2 chains are also of interest in quantum dynamics, where two-level systems coupled to “baths” serve as models for quantum systems in dissipative environments . The most popular model in this field is the spin-boson model, consisting of a single spin 1/2 coupled to a (quasi-) continuum of noninteracting oscillators with a given spectral density. In a recent study the oscillator bath was replaced with a bath of noninteracting spins 1/2. The changes in dynamic behavior which were observed as a result of this replacement suggest further exploration of different kinds of baths. The system studied here can be considered a two-level system (the impurity spin) coupled to a bath of interacting two-level systems (the remainder of the $`XX`$ chain). An interesting feature of this system is the fact that while the $`z`$ component of the total spin is conserved, the $`x`$ component is not. Thus differences are to be expected between the relaxation of the $`x`$ and $`z`$ components of the impurity spin. The plan of the paper is as follows: In Sec. II we discuss the method used to calculate the dynamic correlation functions numerically. In Sec. III we present results for spin autocorrelation functions of a bulk impurity spin (and also of its neighbors) for both zero and finite $`T`$. In Sec. IV we discuss boundary impurity autocorrelation functions. Sec. V summarizes our findings. 
## II Method

The open-ended $`N`$-site spin-1/2 $`XX`$ chain described by the Hamiltonian (1) can be mapped to a Hamiltonian of noninteracting fermions, $$\stackrel{~}{H}=\frac{1}{2}\sum _{i=1}^{N-1}J_i(c_i^{\dagger }c_{i+1}+c_{i+1}^{\dagger }c_i)$$ (5) by means of the Jordan-Wigner transformation between spin and fermion operators: $$S_i^z=c_i^{\dagger }c_i-\frac{1}{2},$$ (6) $$S_i^+=(-1)^{\sum _{k=1}^{i-1}c_k^{\dagger }c_k}c_i^{\dagger }=\prod _{k=1}^{i-1}(1-2c_k^{\dagger }c_k)c_i^{\dagger }.$$ (7) In the homogeneous case $`J_i\equiv J`$, the one-particle energy eigenvalues are $$\epsilon _k=J\mathrm{cos}k,\quad k=\frac{\nu \pi }{N+1},\quad \nu =1,\dots ,N$$ (8) and the eigenvectors are sinusoidal functions of the site index $`i`$. For general $`J_i`$ neither eigenvalues nor eigenvectors are available analytically; however, both are easily obtained from the solution of a tridiagonal eigenvalue problem with standard numerical procedures. The spin correlation functions (2) are mapped to fermion correlation functions, with crucial differences between the cases $`\alpha =z`$ and $`\alpha =x`$. $`\langle S_i^z(t)S_j^z\rangle `$ maps to a density-density correlation function involving four Fermi operators. Due to the string of signs in (7), however, $`\langle S_i^x(t)S_j^x\rangle `$ maps to a many-particle correlation function involving $`2(i+j-1)`$ Fermi operators. Wick’s theorem can be applied to expand $`\langle S_i^x(t)S_j^x\rangle `$ in products of elementary fermion expectation values. That expansion can be most compactly expressed as a Pfaffian whose elements are sums involving the one-particle eigenvalues and eigenvectors. In order to obtain results valid in the thermodynamic limit $`N\to \infty `$, finite-size effects must be identified and eliminated. As finite-size effects are known to be caused by reflections of propagating excitations from the boundaries of the system, the maximum fermion group velocity (see (8)) can be used to estimate the time range over which a given spin correlation function (2) can be expected to be free of finite-size effects.
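As a sketch of the numerical procedure (assumed, not taken from the paper), setting up the tridiagonal one-particle problem of Eq. (5) and checking it against the homogeneous dispersion (8) takes only a few lines of numpy:

```python
import numpy as np

def one_particle_spectrum(couplings):
    """Eigenvalues and eigenvectors of the hopping matrix of Eq. (5):
    an open chain of N sites with off-diagonal elements J_i/2."""
    n = len(couplings) + 1
    h = np.zeros((n, n))
    hop = 0.5 * np.asarray(couplings, dtype=float)
    h[np.arange(n - 1), np.arange(1, n)] = hop
    h[np.arange(1, n), np.arange(n - 1)] = hop
    return np.linalg.eigh(h)   # ascending eigenvalues, orthonormal eigenvectors

# Homogeneous chain check: eps_k = J cos k, k = nu*pi/(N+1), nu = 1..N
N, J = 8, 1.0
eps, vecs = one_particle_spectrum([J] * (N - 1))
k = np.arange(1, N + 1) * np.pi / (N + 1)
expected = np.sort(J * np.cos(k))
```

For the impurity chains of Eqs. (3) and (4) one simply replaces the relevant entries of `couplings` by `J'`.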
That estimate can then be verified by explicit numerical calculation of equivalent correlation functions for system sizes $`N_0`$ and, say, $`2N_0`$. The one-fermion eigenvalue problem for the single-impurity chain (3) or (4) may be solved analytically. The nature of the solution depends on the value of $`J^{\prime }`$. For $`J^{\prime }`$ below a critical value $`J_c`$ all states are extended and the continuous energy spectrum is given by $`\epsilon _k`$ (8) with $`J^{\prime }`$-dependent $`k`$ values. For $`J^{\prime }>J_c`$ a pair of exponentially localized impurity states with energies $`\pm \epsilon _0`$, $`|\epsilon _0|>1`$, emerge from the continuum. The critical coupling strength is $`J_c=1`$ for the bulk impurity and $`J_c=\sqrt{2}`$ for the boundary impurity. Below, we shall occasionally refer to properties of the analytic solution in order to explain the long-time asymptotic behavior observed in the numerical results.

## III Bulk impurity

### A $`T=0`$

The long-time asymptotic behavior of the $`T=0`$ bulk impurity spin $`x`$ autocorrelation function is difficult to obtain due to a combination of two reasons. Firstly, this correlation is the computationally most demanding one, as large Pfaffians have to be evaluated. Secondly, it is also the correlation function displaying finite-size effects at the earliest times. This may be related to its particularly slow long-time asymptotic decay law $$\langle S_i^x(t)S_{i+n}^x\rangle \sim (n^2-J^2t^2)^{-1/4}\text{ for }T=0$$ (9) in the homogeneous case $`J_i\equiv J`$. It should be noted that the right-hand side of (9) is the leading term of an asymptotic expansion; its character changes from purely real (for $`J^2t^2<n^2`$) to complex (for $`J^2t^2>n^2`$). More explicit forms are eq. (1.23) in ref. and eqs. (59,61) in ref. . We have calculated $`\langle S_{N/2}^x(t)S_{N/2}^x\rangle `$ for impurity coupling constants $`0.1\le J^{\prime }\le 4`$. In all cases the asymptotic decay of the correlation function was consistent with the $`t^{-1/2}`$ law (9).
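The emergence of the localized impurity states can be checked directly by diagonalizing the one-particle hopping matrix. The sketch below (hypothetical code, using the bulk impurity bonds of Eq. (4)) verifies that for $`J^{\prime }=2>J_c=1`$ a pair of levels at $`\pm J^{\prime 2}/\sqrt{2J^{\prime 2}-1}`$ (Eq. (10) below) splits off from the band $`[-1,1]`$, while for $`J^{\prime }=0.8<J_c`$ all levels stay inside the band:

```python
import numpy as np

def spectrum(couplings):
    # one-particle hopping matrix of Eq. (5): off-diagonals J_i/2
    n = len(couplings) + 1
    h = np.zeros((n, n))
    hop = 0.5 * np.asarray(couplings, dtype=float)
    h[np.arange(n - 1), np.arange(1, n)] = hop
    h[np.arange(1, n), np.arange(n - 1)] = hop
    return np.linalg.eigvalsh(h)

def bulk_impurity_couplings(n, jp):
    # the two impurity bonds of Eq. (4), J = 1 elsewhere
    j = np.ones(n - 1)
    j[n // 2 - 1] = j[n // 2] = jp
    return j

N = 200
eps_loc = spectrum(bulk_impurity_couplings(N, 2.0))  # J' = 2 > J_c
eps_ext = spectrum(bulk_impurity_couplings(N, 0.8))  # J' = 0.8 < J_c
eps0 = 2.0 ** 2 / np.sqrt(2 * 2.0 ** 2 - 1)          # = 4/sqrt(7)
```

Since the localized state decays exponentially away from the impurity, a chain of a few hundred sites already reproduces the analytic $`\epsilon _0`$ to high accuracy.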
With growing $`J^{\prime }`$, $`|\langle S_{N/2}^x(t)S_{N/2}^x\rangle |`$ develops oscillations of rather well-defined frequency and growing amplitude, as shown in the inset of Fig. 1. The frequency of the oscillations for $`J^{\prime }>1`$ is proportional to the energy $$|\epsilon _0|=\frac{J^{\prime 2}}{\sqrt{2J^{\prime 2}-1}}\quad (J^{\prime }>1)$$ (10) of the localized impurity state. On the whole, the long-time asymptotic behavior of the impurity spin $`x`$ autocorrelation at $`T=0`$ is not fundamentally changed by varying the value of $`J^{\prime }`$. The impurity spin $`z`$ autocorrelation, in contrast, changes significantly when $`J^{\prime }`$ is varied, as shown in Fig. 2. For small $`J^{\prime }`$, $`|\langle S_{N/2}^z(t)S_{N/2}^z\rangle |`$ displays a monotonic decay. Roughly at $`J^{\prime }=0.5`$, oscillations (of well-defined and $`J^{\prime }`$-independent frequency) begin to develop. For all $`J^{\prime }<1`$ the correlation function follows an asymptotic $`t^{-2}`$ law. The $`J^{\prime }=1`$ correlation shows the $`t^{-1}`$ law well known for the homogeneous case. As soon as $`J^{\prime }`$ is further increased, the behavior changes again and the absolute value of the correlation function tends to a constant nonzero value for large times. The changes in the asymptotics of $`|\langle S_{N/2}^z(t)S_{N/2}^z\rangle |`$ can be understood from the analytic solution mentioned in Sec. II. The Jordan-Wigner transformation yields, after a few simple steps, $`\langle S_i^z(t)S_i^z\rangle =\left|{\displaystyle \sum _\nu }|\langle i|\nu \rangle |^2e^{i\epsilon _\nu t}f(\epsilon _\nu )\right|^2`$ (11) $`+\left({\displaystyle \sum _\nu }|\langle i|\nu \rangle |^2f(\epsilon _\nu )\right)^2-{\displaystyle \sum _\nu }|\langle i|\nu \rangle |^2f(\epsilon _\nu )+{\displaystyle \frac{1}{4}}.`$ (12) Here $`|\nu \rangle `$ is a one-fermion eigenstate of $`\stackrel{~}{H}`$ (5) with energy $`\epsilon _\nu `$ and $`f(x)=(\mathrm{exp}(\beta x)+1)^{-1}`$ is the Fermi function.
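Equation (11) is cheap to evaluate numerically. The sketch below (illustrative, not the authors' code, and assuming the Wick-theorem form of Eqs. (11)-(12) as written) uses the fact that at half filling $`\langle n_i\rangle =1/2`$, so the static terms cancel and the $`t=0`$ value must equal the sum rule $`\langle (S_i^z)^2\rangle =1/4`$:

```python
import numpy as np

def sz_autocorrelation(couplings, site, t, beta):
    """<S_i^z(t) S_i^z> from Eqs. (11)-(12): one-particle eigenstates
    of the hopping matrix (off-diagonals J_i/2), Fermi function f."""
    n = len(couplings) + 1
    h = np.zeros((n, n))
    hop = 0.5 * np.asarray(couplings, dtype=float)
    h[np.arange(n - 1), np.arange(1, n)] = hop
    h[np.arange(1, n), np.arange(n - 1)] = hop
    eps, v = np.linalg.eigh(h)
    w = v[site] ** 2                       # |<i|nu>|^2
    f = 1.0 / (np.exp(beta * eps) + 1.0)   # Fermi function
    a = np.sum(w * np.exp(1j * eps * t) * f)
    dens = np.sum(w * f)                   # <n_i>, equals 1/2 at mu = 0
    return abs(a) ** 2 + dens ** 2 - dens + 0.25

# bulk impurity chain of Eq. (4)
N, Jp = 64, 0.5
J = np.ones(N - 1)
J[N // 2 - 1] = J[N // 2] = Jp
c0 = sz_autocorrelation(J, N // 2, 0.0, beta=1.0)
c5 = sz_autocorrelation(J, N // 2, 5.0, beta=1.0)
```

At half filling the expression reduces to $`|A(t)|^2`$ with $`A(0)=1/2`$, so the result is bounded by 1/4 at all times.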
For $`J^{\prime }>J_c=1`$ the presence of a localized impurity state with large $`|\langle i|\nu \rangle |^2`$ and with $`\epsilon _\nu `$ outside the continuum yields a harmonically oscillating non-decaying contribution to $`\langle S_i^z(t)S_i^z\rangle `$. Similar contributions are contained in every element of the Pfaffian for $`\langle S_i^x(t)S_i^x\rangle `$, but not in that correlation itself (see Fig. 1, inset). The reason probably is a cancellation of terms due to the multiplications and additions inherent in the definition of the Pfaffian. The time scale introduced by the discrete energy value (10) is reflected in the oscillations of $`|\langle S_{N/2}^x(t)S_{N/2}^x\rangle |`$ (Fig. 1, inset). Similar behavior is found for $`T>0`$ and will be discussed in the next subsection. For $`J^{\prime }\le 1`$ the time-dependent term in (11) is proportional to $$\left(\int _{-1}^1d\epsilon \,(1-\epsilon ^2)^{-\frac{1}{2}}e^{i\epsilon t}f(\epsilon )|\langle i|\epsilon \rangle |^2\right)^2,$$ (13) where $`\langle i|\epsilon \rangle `$ corresponds to $`\langle i|\nu \rangle `$ in (11) and the inverse square root factor is the one-particle density of states of the dispersion (8) (which still describes the energy eigenvalues, only with slightly displaced $`k`$ values for $`J^{\prime }\ne 1`$). For $`J^{\prime }=1`$, $`|\langle i|\epsilon \rangle |^2`$ does not depend on $`\epsilon `$, the inverse square root singularities at the band edges $`\epsilon \to \pm 1`$ lead to a $`t^{-1/2}`$ asymptotic behavior of the integral, and to a $`t^{-1}`$ behavior of $`\langle S_i^z(t)S_i^z\rangle `$. For $`J^{\prime }<1`$ the amplitude of the one-particle eigenstate with energy $`\epsilon `$ at the impurity site is $$|\langle i|\epsilon \rangle |^2=\left[J^{\prime 2}+\frac{\left(1-J^{\prime 2}\right)^2}{J^{\prime 2}}\frac{\epsilon ^2}{1-\epsilon ^2}\right]^{-1}$$ (14) (apart from weakly $`\epsilon `$-dependent normalization factors). This changes the band-edge singularities in (13) from $`(1-\epsilon ^2)^{-1/2}`$ to $`(1-\epsilon ^2)^{1/2}`$, so that the integral contains a $`t^{-3/2}`$ term. Consequently, $`\langle S_i^z(t)S_i^z\rangle `$ contains a $`t^{-3}`$ term which dominates for $`T>0`$ (see next subsection).
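The power counting behind these statements follows from a standard Watson-lemma estimate of the band-edge contribution (a textbook argument, sketched here rather than taken from the paper):

```latex
% Near the upper band edge substitute \epsilon = 1 - u, so that
% (1 - \epsilon^2)^{\alpha} \simeq (2u)^{\alpha}; Watson's lemma then gives
\int_0^{\infty} du\, (2u)^{\alpha}\, e^{-iut}\, g(1-u)
\;\sim\; g(1)\, 2^{\alpha}\, \Gamma(\alpha+1)\, (it)^{-\alpha-1},
\qquad t \to \infty .
% \alpha = -1/2 (density of states alone)          : integral ~ t^{-1/2}
% \alpha = +1/2 (extra factor |<i|eps>|^2 ~ 1-eps^2): integral ~ t^{-3/2}
% Squaring, as in Eq. (13), yields the quoted t^{-1} and t^{-3} laws.
```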
At $`T=0`$, however, the leading term is proportional to $`t^{-2}`$ due to the discontinuity of the Fermi function in (13). We have also studied the $`x`$ and $`z`$ autocorrelations of nearest and next-nearest neighbors of the impurity spin $`i=N/2`$. For weak impurity coupling ($`J^{\prime }\le 0.2`$) the $`x`$ correlation functions of spins $`i=N/2+1`$ and $`i=N/2+2`$ show weak oscillations superimposed on a $`t^{-1/2}`$ decay masked by strong finite-size effects. The $`z`$ correlations (for $`J^{\prime }<1`$) show stronger oscillations. Their decay looks roughly like a power law with an exponent between -2 and -3.

### B $`T>0`$

Whereas at $`T=0`$ only power-law decay is observed, exponential decay becomes possible at finite $`T`$. Fig. 3 shows $`x`$ and $`z`$ impurity spin autocorrelations at $`J^{\prime }=0.3`$ for several decades in $`T`$. There are two well-defined temperature regimes with a crossover between them. Within each regime the correlation functions do not change qualitatively: note that several of the curves in Fig. 3 coincide. The $`x`$ autocorrelation in the high-$`T`$ regime shows exponential decay which persists over the entire time range during which the results are free of finite-size effects. The $`z`$ autocorrelation decays exponentially at first and later crosses over to the $`t^{-3}`$ law derived above, with superimposed oscillations which are absent in the low-temperature regime. The appearance of oscillations (if only of small amplitude) in $`\langle S_i^z(t)S_i^z\rangle `$ at high $`T`$ is reminiscent of the phenomena recently reported for a two-level system coupled to a spin bath. In that system, a persistence of oscillations up to infinite $`T`$ could be observed. Fig. 4 shows $`|\langle S_{N/2}^x(t)S_{N/2}^x\rangle |`$ at $`T=10^5`$ for impurity coupling $`0.1\le J^{\prime }\le 1`$ in a $`N=128`$ chain.
As autocorrelations at $`T=\infty `$ are real even power series in $`t`$, all curves start with zero slope at $`t=0`$, but then (with the exception of $`J^{\prime }=1`$) bend over to a nearly perfect exponential decay. The inset shows the decay rate of that exponential decay as fitted to the data in the main plot. Also shown is the decay rate determined from data for $`T=1`$, and for $`J^{\prime }>1`$. Differences between $`T=1`$ and $`T=\infty `$ are to be expected and are indeed visible in the behavior of the decay rate close to $`J^{\prime }=1`$: For finite $`T`$ the $`x`$ autocorrelation function of a homogeneous chain $`J_i\equiv 1`$ is known to decay exponentially with a finite $`T`$-dependent decay rate, whereas for infinite $`T`$ the decay is Gaussian. The Gaussian decay corresponds to a divergence of the exponential decay rate which is obvious in the inset of Fig. 4. For $`J^{\prime }>1`$, $`|\langle S_{N/2}^x(t)S_{N/2}^x\rangle |`$ is no longer (almost) purely exponential, but develops considerable oscillations. The exponential decay rate grows with $`T`$, but decreases as $`J^{\prime }`$ grows, as shown in the inset of Fig. 4. As in the $`T=0`$ case the frequency of the oscillations is proportional to $`\epsilon _0`$ (10). In order to obtain a quantitative measure of the precision to which the decay of $`|\langle S_{N/2}^x(t)S_{N/2}^x\rangle |`$ at $`T=10^5`$ follows an exponential, we fitted an exponential law $`a\mathrm{exp}(-bt)`$ to the numerical data for $`0<t<100`$ and calculated the quantity $$p(t):=\frac{|\langle S_{N/2}^x(t)S_{N/2}^x\rangle |}{a\mathrm{exp}(-bt)}$$ (15) which should equal unity for a purely exponential decay. For $`J^{\prime }\le 0.4`$, $`p(t)`$ is shown in Fig. 5. Note that the scale of the figure extends only to a maximum deviation of 1 percent from purely exponential decay. The general slope in the data is a natural consequence of the intrinsically non-exponential behavior of the correlation function for small $`t`$ which was mentioned above.
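The diagnostic of Eq. (15) amounts to a linear fit of $`\mathrm{log}|c(t)|`$. A minimal sketch (with a synthetic, purely exponential signal standing in for the computed correlation function):

```python
import numpy as np

def exponential_fit_ratio(t, c):
    """Fit a*exp(-b*t) to |c(t)| by linear regression of log|c(t)|,
    then return p(t) = |c(t)| / (a*exp(-b*t)) as in Eq. (15)."""
    slope, intercept = np.polyfit(t, np.log(np.abs(c)), 1)
    a, b = np.exp(intercept), -slope
    return np.abs(c) / (a * np.exp(-b * t))

t = np.linspace(0.1, 100.0, 500)
c = 0.25 * np.exp(-0.045 * t)   # toy signal: exactly exponential decay
p = exponential_fit_ratio(t, c)  # identically 1 for pure exponentials
```

For a genuinely exponential signal `p` stays pinned at unity; deviations at small `t`, as in Fig. 5, signal the intrinsically non-exponential short-time behavior.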
The data for $`J^{\prime }\le 0.2`$ follow the purely exponential fit to a precision of better than 2 parts in a thousand for $`t\ge 10`$. This rules out the stretched-exponential behavior reported for $`J^{\prime }\le 0.2`$ at $`T=\infty `$ in an approximate study based on extrapolation of truncated continued-fraction expansions. Fig. 5 also reveals the presence of tiny oscillations which are invisible on the scale of Fig. 4. The frequency of these oscillations is independent of $`J^{\prime }`$, in contrast to the stronger oscillations for $`J^{\prime }>1`$ already mentioned above. The main differences between $`x`$ and $`z`$ spin autocorrelation functions were already shown in Fig. 3. Some more detail on the behavior of $`|\langle S_{N/2}^z(t)S_{N/2}^z\rangle |`$ is presented in Fig. 6. The main plot shows the crossover between the exponential and $`t^{-3}`$ (with superimposed oscillations) regimes for three small $`J^{\prime }`$ values. With growing $`J^{\prime }`$ the exponential decay rate grows and the exponential regime shortens in such a way that in the exponential regime $`|\langle S_{N/2}^z(t)S_{N/2}^z\rangle |`$ is a decreasing function of $`J^{\prime }`$ whereas in the power-law regime it is an increasing function of $`J^{\prime }`$. We have deliberately chosen a very long time window in order to illustrate how finite-size effects manifest themselves (for $`t\gtrsim 230`$). In the inset of Fig. 6 we demonstrate the asymptotic power-law behavior for larger values of $`J^{\prime }`$. The dots represent the maxima of $`|\langle S_{N/2}^z(t)S_{N/2}^z\rangle |`$, which follow the $`t^{-3}`$ (for $`J^{\prime }<1`$) and $`t^{-1}`$ (for $`J^{\prime }=1`$) laws already discussed. For $`J^{\prime }>1`$ the behavior changes to a “constant with oscillations” type of asymptotics, similar to the $`T=0`$ situation shown in the inset of Fig. 2. For $`T>0`$, however, the amplitude of the oscillations is considerably larger than for $`T=0`$. The high-temperature spin autocorrelations of the nearest and next-nearest neighbors to the impurity (for $`J^{\prime }\le 1`$) do not show any particularly surprising features.
The $`z`$ correlations do not show exponential decay in the beginning. Instead they oscillate, and the maxima of the oscillations display the familiar $`t^{-3}`$ (for $`J^{\prime }<1`$) and $`t^{-1}`$ (for $`J^{\prime }=1`$) laws. The $`x`$ autocorrelations interpolate smoothly between two known limiting cases. At $`J^{\prime }=1`$ the $`x`$ autocorrelation of the spin $`i=N/2+1`$ of course is a Gaussian, as that of any other bulk spin. At $`J^{\prime }=0`$, however, $`i=N/2+1`$ is the first spin in a semi-infinite homogeneous chain, whose $`T=\infty `$ autocorrelation function is a combination of Bessel functions with an asymptotic $`t^{-3/2}`$ decay. Upon reducing $`J^{\prime }`$ from 1 to 0, the development of the characteristic Bessel function oscillations (with zeros hardly depending on $`J^{\prime }`$) can be nicely observed. Similarly, the time range during which the correlation function follows the expected $`t^{-3/2}`$ decay grows as $`J^{\prime }`$ diminishes. The $`x`$ autocorrelation of $`i=N/2+2`$ behaves quite similarly, but only a small number of oscillations is visible (in a linear plot) because of the fast ($`t^{-9/2}`$) asymptotic decay of the known $`J^{\prime }=0`$ Bessel function expression.

## IV Boundary impurity

The boundary impurity is defined by (3). Similarly to the case of the bulk impurity discussed in the previous section, the boundary correlation functions show a low-temperature regime and a high-temperature regime, and we restrict our discussion to the values $`T=0`$ and $`T=1`$ which represent these two regimes. It suffices to discuss the impurity spin $`x`$ autocorrelation function $`\langle S_1^x(t)S_1^x\rangle `$, because $$2\langle S_1^x(t)S_1^x\rangle =\sum _\nu |\langle 1|\nu \rangle |^2e^{i\epsilon _\nu t}f(\epsilon _\nu ),$$ (16) that is, the square root of the time-dependent part of $`\langle S_1^z(t)S_1^z\rangle `$ (see (11)). The presence of an isolated impurity state for $`J^{\prime }>J_c=\sqrt{2}`$ (compare Sec. II) should be visible in the dynamic correlation functions.
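Equation (16) is equally simple to evaluate. The sketch below (hypothetical code, using the boundary bond of Eq. (3)) checks the $`t=0`$ sum rule $`\langle (S_1^x)^2\rangle =1/4`$, which holds at half filling, together with the bound $`|\langle S_1^x(t)S_1^x\rangle |\le 1/4`$:

```python
import numpy as np

def boundary_x_autocorrelation(couplings, t, beta):
    """<S_1^x(t) S_1^x> from Eq. (16); site 1 of the paper is index 0."""
    n = len(couplings) + 1
    h = np.zeros((n, n))
    hop = 0.5 * np.asarray(couplings, dtype=float)
    h[np.arange(n - 1), np.arange(1, n)] = hop
    h[np.arange(1, n), np.arange(n - 1)] = hop
    eps, v = np.linalg.eigh(h)
    w = v[0] ** 2                          # |<1|nu>|^2
    f = 1.0 / (np.exp(beta * eps) + 1.0)   # Fermi function
    return 0.5 * np.sum(w * np.exp(1j * eps * t) * f)

# boundary impurity of Eq. (3): J_1 = J', all other J_i = 1
N, Jp = 64, 0.3
J = np.ones(N - 1)
J[0] = Jp
c0 = boundary_x_autocorrelation(J, 0.0, beta=1.0)
c10 = boundary_x_autocorrelation(J, 10.0, beta=1.0)
```

The same routine, scanned over `t`, reproduces the initial exponential regime discussed below.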
In fact, the asymptotic behavior of $`\langle S_1^x(t)S_1^x\rangle `$ displays a crossover similar to the one shown in the inset of Fig. 2: for all $`T`$ the long-time behavior is a power law for $`J^{\prime }<\sqrt{2}`$ and a constant for $`J^{\prime }>\sqrt{2}`$, with additional oscillations in both regimes. Fig. 7 shows $`|\langle S_1^x(t)S_1^x\rangle |`$ for impurity coupling $`J^{\prime }\le 1`$. For $`T=1`$ we observe an initially exponential decay followed by a power law. The exponential decay rate grows with $`J^{\prime }`$. The duration of the initial exponential phase decreases with growing $`J^{\prime }`$ in such a way that in the subsequent power-law regime the correlation function is an increasing function of $`J^{\prime }`$. This behavior is (not unexpectedly, compare (16)) similar to that of the bulk impurity $`z`$ autocorrelation discussed in Sec. III B; however, the asymptotic power law is a different one, namely the $`t^{-3/2}`$ law known for the boundary spin of the homogeneous semi-infinite chain at infinite $`T`$, and for a range of boundary spins of the same system at finite $`T`$. At this point, the early analytical study by Tjon should be mentioned. In the limit of sufficiently small $`J^{\prime }`$ Tjon found an asymptotically exponential decay $`e^{-t/\tau }`$ for $`\langle S_1^x(t)S_1^x\rangle `$, with decay rate $`\tau ^{-1}=\frac{1}{2}J^{\prime 2}`$. Indeed, our numerical data show that during the exponential regime mentioned above, the decay rate is quite precisely equal to $`\frac{1}{2}J^{\prime 2}`$ for $`J^{\prime }\le 0.2`$, and a bit larger for larger $`J^{\prime }`$. However, we also observe numerically (and explain analytically, see below) a crossover from exponential to power-law behavior. The duration of the exponential regime grows ($`\propto J^{\prime -2}`$) as $`J^{\prime }`$ becomes weak, and thus our numerical results for arbitrary $`J^{\prime }`$ connect smoothly to Tjon’s analytical result restricted to the weak-coupling limit. The inset of Fig. 7 shows $`|\langle S_1^x(t)S_1^x\rangle |`$ for $`T=0`$ and $`J^{\prime }<1`$.
After a very slow initial decay (slower for smaller $`J^{\prime }`$) the curves eventually all bend over to show a $`t^{-1}`$ asymptotic decay, developing some oscillations as $`J^{\prime }`$ approaches unity. The $`t^{-1}`$ decay at $`T=0`$ is again a well-known feature of the homogeneous semi-infinite chain. The asymptotic power laws may be understood from the properties of the analytic solution of the boundary impurity (one-particle) problem, along the lines of the discussion in Sec. III A. According to (16), $`|\langle S_1^x(t)S_1^x\rangle |`$ for $`J^{\prime }<\sqrt{2}`$ is given by an integral analogous to the one in (13). The analytic solution shows that the wave function factor in the integral obeys $`\langle 1|\epsilon \rangle \propto \mathrm{sin}k`$, where $`\epsilon =\mathrm{cos}k`$. This leads to a band-edge singularity $`(1-\epsilon ^2)^{1/2}`$ in the integrand, and a $`t^{-3/2}`$ asymptotic behavior of the integral. At $`T=0`$ the $`t^{-3/2}`$ contribution is dominated by the $`t^{-1}`$ contribution from the discontinuity of the Fermi function.

## V Summary and conclusions

The dynamic spin correlation functions associated with isolated impurities in a $`S=1/2`$ $`XX`$ chain show a rich behavior depending on the temperature, the impurity coupling strength $`J^{\prime }`$, the spin component ($`\alpha =x,z`$) under consideration, and on the position of the impurity spin in the chain. Regimes of low and high $`T`$, with qualitatively different behavior of the correlations, may be distinguished. We have summarized the asymptotic behavior of the bulk $`x`$ and $`z`$ spin autocorrelations in these two $`T`$ regimes in Table I. The basic features of the boundary correlations may also be obtained from Table I by observing that (i) the $`z`$ autocorrelation does not change fundamentally between bulk and boundary and (ii) at the boundary the $`x`$ autocorrelation behaves as the square root of the $`z`$ autocorrelation. As mentioned in the introduction, the present model may be interpreted as a single two-level system in contact with a large bath.
The influence of the nature of the bath degrees of freedom on the type of decay of the two-level system was addressed recently for a spin bath constructed so as to closely resemble the standard harmonic-oscillator bath. The following results were found. At $`T=0`$ the spin bath leads to damped oscillations in the two-level system, as does the oscillator bath. However, at high $`T`$, the oscillations vanish for the oscillator bath but persist for the spin bath. Without going into any detailed comparison between the spin bath employed in Ref. and the $`XX`$ chain studied here, we would like to point out the existence of similar oscillation phenomena at high $`T`$ in the present system. Fig. 3 (main plot) shows how the bulk impurity spin $`z`$ autocorrelation function develops oscillations as temperature increases. Whether these oscillations can be unambiguously assigned to either the impurity or the bath, and what happens for systems interpolating between the present one and that of Ref., remains to be seen in further studies. ###### Acknowledgements. We are grateful to Professor Gerhard Müller (University of Rhode Island) for helpful comments and suggestions.
# Symmetric patterns of dislocations in Thomson’s problem \[ ## Abstract Determination of the classical ground state arrangement of $`N`$ charges on the surface of a sphere (Thomson’s problem) is a challenging numerical task. For special values of $`N`$ we have obtained, using the ring removal method of Toomre, low energy states in Thomson’s problem which have icosahedral symmetry: lines of dislocations run between the 12 disclinations which the spherical geometry induces in the nearly triangular lattice that forms on a local scale. PACS numbers: 41.20.Cv, 73.90.+f \] Thomson’s problem consists of finding the ground state of $`N`$ Coulomb charges confined to move on the surface of a sphere. While this problem is simple to specify, its solution is not. It has been studied by many authors, see and references therein. On a local scale, charges are disposed like those on a triangular lattice and each charge has 6 nearest neighbors. On the other hand, Euler’s theorem guarantees the existence of at least 12 fivefold disclinations (charges with only 5 nearest neighbors) on the sphere. More precisely, if $`v_i`$ is the number of charges with $`i`$ nearest neighbors, then $$\underset{i}{}(6-i)v_i=12.$$ (1) There exist methods to place the charges in configurations with just 12 disclinations, each of which is at a corner of an icosahedron, see Ref. . Those configurations are called icosadeltahedral and only exist when $`N`$ is given by $$N=10(h^2+hk+k^2)+2,$$ (2) with $`h`$ and $`k`$ integers. It was suggested in Ref. that these configurations might be the ground states of Thomson’s problem. Further work showed, however, that configurations with dislocations (bound pairs of fivefold and sevenfold disclinations) have less energy than icosadeltahedral configurations, as lines of dislocations emanating from the disclinations act to screen the disclinations by reducing their strain fields.
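The magic numbers of eq. (2) are easy to enumerate; the following short script (ours, not part of the original report) tabulates the first few and verifies the 12002-charge case used below. For these $`N`$ the configuration carries exactly 12 fivefold disclinations, so eq. (1) reads $`(6-5)\times 12=12`$.

```python
# "Magic numbers" of eq. (2): N = 10 (h^2 + h k + k^2) + 2.
# For each such N an icosadeltahedral configuration exists, with its
# 12 fivefold disclinations at the vertices of an icosahedron.

def icosadeltahedral_n(h, k):
    """Number of charges in the (h, k) icosadeltahedral configuration."""
    return 10 * (h * h + h * k + k * k) + 2

magic = sorted({icosadeltahedral_n(h, k)
                for h in range(1, 21) for k in range(0, h + 1)})
print(magic[:6])                    # 12, 32, 42, 72, 92, 122
print(icosadeltahedral_n(20, 20))   # 12002, the configuration of Fig. 1
```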
However, we were unable to find any patterns of icosahedral symmetry containing dislocations . It is the purpose of this Brief Report to show that such patterns can be found if one uses the “ring removal” technique of Toomre . Each disclination in an icosadeltahedral configuration is surrounded by rings of $`5n`$ charges, where $`n`$ is the order number of the ring. By removing the charges in one of these rings and then relaxing the energy of the system it is possible to obtain a configuration with 5 dislocations symmetrically disposed around the disclination. By removing several rings we can get lines of dislocations which act to screen the disclination as predicted by Dodgson and Moore . One must be careful how one chooses the rings to be removed; if one removes consecutive rings the final configurations are not usually symmetric. In Fig. 1 we have plotted an icosadeltahedral configuration with $`h=k=20`$, i.e. 12002 charges, after removing the 3rd and 7th rings around each disclination. This type of initial configuration was used in conjunction with standard numerical procedures (mostly the conjugate gradient method) to minimize the interaction energy $`E`$ of the Coulomb charges with each other. It is possible to estimate the total number of rings $`n_r`$ to be removed around each disclination (but not the actual ring numbers themselves, unfortunately) as follows. Fig. 2 shows examples of regions (which we shall refer to as facets) of icosadeltahedral configurations which naively one would expect to have three equal sides but which cannot achieve this because of the spherical geometry. Let us denote by $`A`$ and $`A^{}`$ the lengths of the two equal sides of a facet and by $`B`$ that of the remaining side; $`A=A^{}<B`$. These lengths can be calculated by simple geometric arguments.
For the case $`h=k`$ one has $$A=R\mathrm{tan}^{-1}\left(\frac{\alpha /2}{\mathrm{cos}\frac{2\pi }{10}}\right),$$ (3) and $$B=R\mathrm{cos}^{-1}\left(1-0.690983\mathrm{sin}^2(A/R)\right),$$ (4) where $`R`$ is the radius of the sphere and $`\alpha `$ is the angle between two neighboring disclinations, given by $$\alpha =2\mathrm{tan}^{-1}\left(\frac{\sqrt{5}-1}{2}\right).$$ (5) For $`h>k=0`$, $`A`$ is given by $$A=\frac{R\alpha }{2}.$$ (6) One should note that the above is only exact for $`h`$ even. When $`h`$ is odd, $`A`$ is slightly larger, but since the difference tends to zero as $`h`$ increases we shall not take it into account. To triangulate each facet one subdivides $`A`$ and $`B`$ into $`D_A`$ and $`D_B`$ segments respectively. In Fig. 2 $`D_A`$ and $`D_B`$ are both equal to $`3`$, but this need not be the case for larger values of $`N`$. To find the number of rings to take out, $`n_r`$, one requires that $`A/D_A`$ be as close as possible to $`B/D_B`$, since then the spacing between charges will be most uniform, thereby minimizing the strain field energy caused by the spherical geometry. The difference between $`D_A`$ and $`D_B`$ is then the number of rings to remove, $`n_r`$. With this in mind we obtain the following expression for $`n_r`$: $$n_r=\mathrm{Round}\left[\left(1-\frac{A}{B}\right)D_B\right],$$ (7) where the function Round\[$`x`$\] gives the closest integer to $`x`$. $`D_B`$ is related to the number of particles $`N`$ by the relation $$D_B=\sqrt{\frac{N-2}{p}},$$ (8) with $`p=30`$ for the case $`h=k`$ and $`p=40`$ for $`h>k=0`$. Thus, within this approximation, one is able to estimate how many rings to remove for given $`h`$ and $`k`$. In Tables 1 and 2 we show for both cases the values of $`N`$ at which $`n_r`$ changes its value. These estimates of $`N`$ are consistent with our observations on systems with up to 16000 particles. We have not studied the general case $`h>k>0`$ as it is difficult to identify the facets to triangulate.
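For $`h=k`$, eqs. (3)–(5), (7) and (8) combine into a few lines of code (a sketch of ours under the stated approximations; like eq. (7), it estimates only how many rings to remove, not which ones):

```python
import math

# Estimated number of rings to remove per disclination, h = k case:
# eqs. (3)-(5) give the facet sides A and B, eq. (8) gives D_B (p = 30),
# and eq. (7) rounds (1 - A/B) * D_B to the nearest integer.
def rings_to_remove(n):
    """n_r for an icosadeltahedral configuration of n charges with h = k."""
    alpha = 2.0 * math.atan((math.sqrt(5.0) - 1.0) / 2.0)            # eq. (5)
    a = math.atan((alpha / 2.0) / math.cos(2.0 * math.pi / 10.0))    # A/R, eq. (3)
    b = math.acos(1.0 - 0.690983 * math.sin(a) ** 2)                 # B/R, eq. (4)
    d_b = math.sqrt((n - 2) / 30.0)                                  # eq. (8)
    return round((1.0 - a / b) * d_b)                                # eq. (7)

print(rings_to_remove(12002))   # h = k = 20: two rings, as in Fig. 1
```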
We next describe how we report our values for this energy. Using Ewald sums one can calculate the energy for charges on an infinite planar triangular lattice and deduce that for the sphere with unit radius and for unit charges $$2E=N^2-1.1061033\ldots N^{3/2}+\ldots ,$$ (9) as $`N\to \infty `$. It is useful therefore to study $`E_i`$ given by $$E_i=\frac{2E-N^2}{N^{3/2}},$$ (10) which will approach $`-1.1061033\ldots `$ as $`N\to \infty `$. We will refer to $`E_i`$ as the “energy” of the system. We examine first configurations with $`h=k`$. For them the final configurations obtained after energy relaxation following ring removal have dislocations on the lines between the disclinations as envisaged in . Fig. 3 is an example of a configuration with ($`h=k=20`$) minus the 2nd and 9th rings around each disclination, and so containing 11342 charges. In Figs. 4 and 5 we have plotted ($`h=k=23`$) minus the 2nd and 10th rings in the former and minus the 3rd and 9th rings in the latter. Dislocations obtained by removing the second ring are bent, forming pentagonal “buttons” as in Figs. 3 and 4. Fig. 5 is very similar to the kind of pattern which was suggested in . When $`h\ne k`$, the dislocations obtained after removing rings are not on the lines between disclinations but between these lines. The resulting patterns are therefore of lower symmetry. In any of these three cases, dislocations place themselves onto a line rotated by an angle $`\theta `$ from the line between disclinations, given by $$\mathrm{cos}\theta =\frac{(h+k)(1+\mathrm{cos}2\pi /5)}{\sqrt{(h^2+k^2+2hk\mathrm{cos}2\pi /5)(2+2\mathrm{cos}2\pi /5)}}$$ (11) In Figs. 6 and 7 we study ($`h=40,k=0`$) minus the 2nd and 10th rings in the former and minus the 3rd and 9th rings in the latter. In conclusion, we have demonstrated that for certain values of $`N`$ there are low energy arrangements of the charges which have full icosahedral symmetry.
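As a quick sanity check of (10) (ours, not part of the original): for $`N=12`$ the ground state is the regular icosahedron, whose Coulomb energy can be computed directly from the vertex coordinates.

```python
import itertools
import math

# Coulomb energy of N = 12 unit charges at the vertices of a regular
# icosahedron inscribed in the unit sphere, and the scaled energy E_i
# of eq. (10).  For N = 12 the icosahedron is the known ground state.
phi = (1.0 + math.sqrt(5.0)) / 2.0
norm = math.sqrt(1.0 + phi * phi)
verts = []
for s1 in (1.0, -1.0):
    for s2 in (1.0, -1.0):
        x, y, z = 0.0, s1 / norm, s2 * phi / norm
        verts += [(x, y, z), (y, z, x), (z, x, y)]   # cyclic permutations

energy = sum(1.0 / math.dist(p, q)
             for p, q in itertools.combinations(verts, 2))
n = len(verts)                                       # 12
e_i = (2.0 * energy - n * n) / n ** 1.5
print(energy, e_i)   # ~49.1653 and ~-1.0986
```

The result, $`E_i\approx -1.0986`$, already lies close to the planar limit $`-1.1061033\ldots `$ even at $`N=12`$.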
For these special values of $`N`$ Thomson’s problem seems to reduce to the much simpler task of finding which rings, when removed, minimize the energy. APG would like to acknowledge a grant and financial support from CajaMurcia and EPSRC under grant GR/K53208. We thank A. Toomre for telling us about his “ring removal” method prior to its publication.
# On the equipartition of thermal and non-thermal energy in clusters of galaxies ## 1 Introduction Several non-thermal processes have recently been detected in clusters of galaxies, ranging from the extreme ultraviolet (EUV) radiation in excess of the thermal expectation, to the soft X-rays detected by ROSAT and BeppoSAX, to hard X-ray excesses and diffuse radio emission. For the Coma cluster, by far the best studied cluster, a complete investigation of the soft excess can be found in (Lieu et al. 1996a, 1996b; Bowyer, Lampton & Lieu 1996; Fabian 1996; Mittaz, Lieu & Lockman 1998; Sarazin & Lieu 1998), while the detection of the hard excess above 20 keV is reported in (Fusco-Femiano et al. 1998). A recent review of the diffuse radio emission can be found in (Feretti et al. 1998). Some clusters show emission only in some frequency ranges and not in others. For this reason too, Coma gives the best opportunity for multiwavelength studies. A review of the current status of the multifrequency observations of Coma and of viable models for the non thermal radiation can be found in (Ensslin et al. 1998). As stressed by Fusco-Femiano et al. (1998), if the hard X-ray excess is due to inverse Compton scattering (ICS) off the photons of the microwave background, then the combined radio and hard X-ray observations of Coma imply a small value for the average intracluster magnetic field, of order $`B0.1`$–$`0.2\mu G`$. Such a small value of the field requires large energy densities in electrons, and, as pointed out in (Lieu et al. 1999), CR energy densities comparable with the equipartition value are needed. This conclusion is only weakly dependent on the specific model (primary or secondary) for the production of the electrons responsible for the radiation. In fact the need for large CR energy densities was recently confirmed by Blasi & Colafrancesco (1999), in the context of the secondary electron model. Lieu et al.
(1999) also correctly pointed out that the assumption of equipartition is limited by the production of gamma rays through neutral pion decay, but this flux was claimed to be much smaller than the EGRET sensitivity, falling below the EGRET upper limit already imposed on the gamma ray flux from the Coma and Virgo clusters (Sreekumar et al. 1996). We calculate here the flux of gamma rays from the Coma and Virgo clusters under the assumption of equipartition of CRs with the thermal energy in the cluster, for two different models of the CR injection in the intracluster medium (ICM) and for different injection spectra, and find that in some cases the gamma ray flux is in excess of the EGRET limit. Moreover, we find that for injection CR spectra flatter than $`E^{-2.4}`$ (for $`E\gg m_pc^2`$) some currently operating experiments like STACEE, HEGRA and Whipple could detect the gamma ray signal from Coma and Virgo in the TeV range, provided the CRs are in equipartition, or put strong constraints on this condition if no signal is detected. The paper is organized as follows: in section 2 we outline the calculations of the gamma ray fluxes from clusters; in section 3 we describe the models of CR propagation that we used; and in section 4 we describe our results for the Coma and Virgo clusters. ## 2 The gamma ray fluxes In this section we calculate the flux of gamma rays due to the decay of neutral pions produced in CR collisions in the ICM. This channel provides the dominant contribution to gamma rays above $`100`$ MeV. Independent of the sources that provide the CRs in clusters, the equilibrium CR distribution is some function $`n_p(E_p,r)`$ of the proton energy $`E_p`$ and of the position in the cluster. For simplicity we assume the cluster to be spherically symmetric, so that the distance $`r`$ from the center is the only space coordinate. We determine $`n_p`$ for different injection models in the next section.
The rate of production of gamma rays with energy $`E_\gamma `$ per unit volume at distance $`r`$ from the cluster center is given by Blasi & Colafrancesco (1999) $$q_\gamma (E_\gamma ,r)=2n_H(r)c\int _{E_\pi ^{min}(E_\gamma )}^{E_p^{max}}𝑑E_\pi \int _{E_{th}(E_\pi )}^{E_p^{max}}𝑑E_pF_{\pi ^0}(E_\pi ,E_p)\frac{n_p(E_p,r)}{(E_\pi ^2-m_\pi ^2)^{1/2}},$$ (1) where $`E_\pi `$ is the pion energy, $`E_\pi ^{min}=E_\gamma +m_\pi ^2/(4E_\gamma )`$ is the minimum pion energy needed to generate a gamma ray photon with energy $`E_\gamma `$ and $`E_p^{max}`$ is some maximum energy in the injected CR spectrum (our calculations do not depend on the value of $`E_p^{max}`$). Here $`n_H(r)`$ is the density of thermal gas at distance $`r`$ from the cluster center. For Coma we model the gas density through a King profile: $$n_H(r)=n_0\left[1+\left(\frac{r}{r_0}\right)^2\right]^{-3\beta /2},$$ (2) where $`r_0\simeq 400`$ kpc is the size of the cluster core, $`n_0\simeq 3\times 10^{-3}cm^{-3}`$ and $`\beta `$ is a phenomenological parameter in the range $`0.7`$–$`1.1`$ (Sarazin 1988) (we use $`\beta =0.75`$). For Virgo, we fit the gas density profile given by Nulsen & Bohringer (1995) to find $$n_H(r)=0.076\left(\frac{r}{4.8kpc}\right)^{-1.16}cm^{-3}.$$ (3) The function $`F_\pi `$ in eq. (1) represents the cross section for the production of neutral pions with energy $`E_\pi `$ in a CR collision at energy $`E_p`$ in the laboratory frame. Determining this function is complicated in the low energy regime, where data are scarce. A possible approach was proposed by Dermer (1986) and recently reviewed by Moskalenko & Strong (1998) and is based on the isobar model. This approach is valid for CR collisions at $`E_p\lesssim 3`$ GeV and consists in treating the pion production as a process mediated by the generation and decay of the resonance $`\mathrm{\Delta }(1232)`$ in the $`pp`$ interaction. We refer to the papers by Dermer (1986) and Moskalenko & Strong (1998) for the detailed expressions for $`F_\pi `$.
For $`E_p\gtrsim 7`$ GeV the scaling approach is an excellent approximation of the function $`F_\pi `$. In this regime the differential cross section for $`pp`$ collisions can be written as $$\frac{d\sigma }{dE_\pi }(E_p,E_\pi )=\frac{1}{E_\pi }\sigma _0f_\pi (x)$$ (4) where $`x=E_\pi /E_p`$, $`\sigma _0=3.2\times 10^{-26}cm^2`$ and $`f_\pi (x)=0.67(1-x)^{3.5}+0.5e^{-18x}`$ is the so-called scaling function. In the scaling regime, the function $`F_\pi `$ coincides with the differential cross section given in eq. (4). Once the gamma ray emissivity is known from eq. (1), the flux of gamma rays with energy $`E_\gamma `$ is simply given by volume integration $$I_\gamma (E_\gamma )=\frac{1}{4\pi d^2}\int _0^{R_{cl}}𝑑r4\pi r^2q_\gamma (E_\gamma ,r)$$ (5) where $`d`$ is the distance to the cluster and $`R_{cl}`$ is the cluster radius. In fact $`R_{cl}`$ here plays the role of the size of the region where the non thermal processes are observed. We adopt here the value suggested by radio observations of Coma, $`R_{cl}\simeq 1`$ Mpc. This is, however, a very conservative case, and it seems likely that magnetic fields extend to larger regions. In fact in Ensslin, Wang, Nath & Biermann (1998a) the injection of energy due to formation of black holes in the Coma cluster was estimated and compared with the thermal energy in a region of $`5h_{50}^{-1}`$ Mpc ($`h_{50}=h/0.5`$). For the Coma cluster we shall also consider this less conservative case. ## 3 The Cosmic Ray Distribution Several sources of CRs in clusters of galaxies were discussed by Berezinsky, Blasi & Ptuskin (1997) and it was argued that the known sources (AGNs, radiogalaxies, accretion shocks) are not able to provide CRs in equipartition with the thermal gas. An intense and short period of powerful emission from the cluster sources was also considered, consistent with the observed iron abundance in the cluster, with the same conclusion.
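The pion kinematics of eq. (1) and the scaling form (4) above can be transcribed directly (a sketch of ours, with the negative exponents written out explicitly):

```python
import math

SIGMA0 = 3.2e-26          # cm^2, normalization of eq. (4)
M_PI0 = 0.1349766         # GeV, neutral pion mass

def f_pi(x):
    """Scaling function f_pi(x) = 0.67 (1-x)^3.5 + 0.5 exp(-18 x)."""
    return 0.67 * (1.0 - x) ** 3.5 + 0.5 * math.exp(-18.0 * x)

def dsigma_dEpi(e_p, e_pi):
    """Scaling-regime differential cross section of eq. (4), in cm^2/GeV."""
    return SIGMA0 * f_pi(e_pi / e_p) / e_pi

def e_pi_min(e_gamma):
    """Minimum pion energy producing a photon of energy e_gamma, in GeV."""
    return e_gamma + M_PI0 ** 2 / (4.0 * e_gamma)

# The threshold is minimized at e_gamma = m_pi/2, where it equals m_pi:
print(e_pi_min(M_PI0 / 2.0))   # equals M_PI0
```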
Since recent observations of non thermal radiation from clusters seem to suggest that equipartition is indeed required, we do not make here any assumption on the type of sources and instead we assume equipartition and analyze the observational consequences of this assumption. The equipartition energy can be easily estimated from the total thermal energy of the gas, assuming it has a temperature $`T`$: $$E_{eq}\simeq \frac{3}{2}k_BT\int _0^{R_{cl}}𝑑r4\pi r^2n_H(r)$$ (6) where $`n_H(r)`$ is given by eq. (2) for the Coma cluster and by eq. (3) for the Virgo cluster. The temperature adopted for Coma is $`kT_{Coma}=8.21`$ keV while for Virgo we used $`kT_{Virgo}=1.8`$ keV (Nulsen & Bohringer 1995). Therefore, from the previous equation we obtain $`E_{eq}=1.6\times 10^{63}`$ erg for Coma and $`E_{eq}=1.5\times 10^{62}`$ erg for Virgo. These numbers could underestimate the total thermal energy due to the contribution of gas outside the $`1`$ Mpc region. In fact Ensslin et al. (1998a) estimated for the Coma cluster that the thermal energy in a region of $`5h_{50}^{-1}`$ Mpc is $`1.3\times 10^{64}h_{50}^{-5/2}`$ erg, a factor $`6`$ larger than estimated above. They also calculate the expected injection of total (thermal plus non thermal) energy due to black hole formation in the cluster, and find in the same region a similar number. Since not only is the energy budget in CRs unknown, but their spatial distribution is also very poorly constrained, we consider here two extreme scenarios for the injection of CRs and we calculate the equilibrium CR distribution from the transport equation. i) Point source As argued by Berezinsky, Blasi & Ptuskin (1997), Colafrancesco & Blasi (1998) and Ensslin et al. (1998) it is likely that for most of the cluster’s age the main contributors to CRs in clusters are located in the cluster core. This is the case if a radiogalaxy or more generally a powerful active galaxy or a shock produced by merging is the source/accelerator of CRs.
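Eq. (6) with the King profile (2) indeed reproduces the Coma number quoted above (a sketch of ours in cgs units; $`\beta =0.75`$ and $`kT=8.21`$ keV as stated, and the profile's negative exponent written out):

```python
import math

KPC = 3.0857e21           # cm per kiloparsec
KEV = 1.60218e-9          # erg per keV

def e_eq_coma(r_cl=1000.0 * KPC, n0=3.0e-3, r0=400.0 * KPC,
              beta=0.75, kt_kev=8.21, steps=10000):
    """Equipartition energy, eq. (6), for the King profile (2), in erg."""
    dr = r_cl / steps
    n_tot = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr                                   # midpoint rule
        n_h = n0 * (1.0 + (r / r0) ** 2) ** (-1.5 * beta)    # eq. (2)
        n_tot += 4.0 * math.pi * r * r * n_h * dr
    return 1.5 * kt_kev * KEV * n_tot

print(e_eq_coma())   # ~1.6e63 erg, as quoted above
```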
There is an additional argument that plays in favor of a source mainly concentrated in the center of the cluster: if the average spatial distribution of the galaxies in a cluster is not a strong function of time, then it is reasonable to assume that at all times, as today, the distribution of the sources is peaked around the cluster center. According to Ensslin et al. (1998a) (see also references therein) the spatial distribution of galaxies in Coma is well represented by a King-like profile $`n_{gal}(r)=[1+(r/r_g)^2]^{-0.8}`$, with $`r_g\simeq 160`$ kpc, appreciably smaller than the cluster core, so that a source concentrated in the center seems a reasonable assumption. Therefore we assume that the source can be modelled as a point source with a rate of injection of CRs given by a power law in momentum $`Q(E_p)=Q_0p_p^{-\gamma }`$, where $`p_p=\sqrt{E_p^2-m_p^2}`$ is the CR momentum and the normalization constant is determined by energy integration $$Q_0\int _0^{E_p^{max}}𝑑T_pT_pp_p^{-\gamma }=L_p,$$ (7) where $`T_p`$ is the kinetic energy and $`L_p`$ is the CR luminosity at injection, chosen here so as to establish equipartition in the cluster. We estimate it by averaging the equipartition energy over the age of the cluster: $`L_p\simeq E_{eq}/t_0`$. The transport equation that gives the distribution of CRs at distance $`r`$ from the source and after a time $`t`$, namely $`n_p(E_p,r,t)`$, can be written in the form $$\frac{\partial n_p(E_p,r,t)}{\partial t}-D(E_p)\nabla ^2n_p(E_p,r,t)-\frac{\partial }{\partial E_p}\left[b(E_p)n_p(E_p,r,t)\right]=Q(E_p)\delta (\stackrel{}{r}),$$ (8) where $`D(E_p)`$ is the diffusion coefficient and $`b(E_p)`$ is the rate of energy losses. As shown in (Berezinsky, Blasi & Ptuskin 1997, Colafrancesco & Blasi 1998, Blasi & Colafrancesco 1999) for CR protons the energy losses can be neglected and eq.
(8) has the simple solution (Blasi & Colafrancesco 1999) $$n_p(E_p,r,t)=\frac{Q_p(E_p)}{D(E_p)}\frac{1}{2\pi ^{3/2}r}\int _{r/r_{max}(E_p)}^{\infty }𝑑ye^{-y^2}.$$ (9) where $`r_{max}(E_p)=\left[4D(E_p)t\right]^{1/2}`$ is the maximum distance that on average particles with energy $`E_p`$ could diffuse away from the source in the time $`t`$. We are interested here in the case $`t=t_0`$ ($`t_0`$ here is the age of the cluster, taken as comparable with the age of the universe). The solution of the equation $`r_{max}(E_p)=R_{cl}`$ gives an estimate of the maximum energy $`E_{max}`$ for which CRs can be considered confined in the cluster volume for the entire age of the cluster. For reasonable choices of the diffusion coefficient the confined CRs provide the main contribution to the energy budget of CRs in clusters and also to the integral flux of gamma rays above $`100`$ MeV, calculated as explained in the previous section. As far as gamma rays produced by interactions of confined CRs are concerned the flux of gamma radiation is independent of the choice of the diffusion coefficient, as pointed out by Berezinsky, Blasi & Ptuskin (1997), and the spectrum of gamma rays simply reflects the spectrum of the parent protons (for $`E_\gamma \gtrsim 1`$ GeV). Rigorously this is true only for spatially constant intracluster gas density, while a density profile, as assumed here, results in a weak dependence of the gamma ray spectrum on the diffusion details.
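Eq. (9) is the time-integrated Green’s function of free diffusion from a steady point source; a quick numerical check (ours; unit source strength and diffusion coefficient assumed) confirms the closed form, using $`\mathrm{erfc}(z)=(2/\sqrt{\pi })\int _z^{\infty }e^{-y^2}𝑑y`$:

```python
import math

def n_closed(r, t, d=1.0, q=1.0):
    """Eq. (9) rewritten: q * erfc(r / sqrt(4 d t)) / (4 pi d r)."""
    return q * math.erfc(r / math.sqrt(4.0 * d * t)) / (4.0 * math.pi * d * r)

def n_numeric(r, t, d=1.0, q=1.0, steps=20000):
    """Time-integrate the 3D heat kernel for a steady point source."""
    ds = t / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * ds          # age of particles emitted at time t - s
        kernel = math.exp(-r * r / (4.0 * d * s)) / (4.0 * math.pi * d * s) ** 1.5
        total += q * kernel * ds
    return total

r, t = 1.0, 1.0
print(n_closed(r, t), n_numeric(r, t))   # the two agree closely
```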
Therefore, for the sake of completeness we adopt here a specific choice of the diffusion coefficient: we assume that the fluctuations in the magnetic field in the cluster are well represented by a Kolmogorov power spectrum $`P(k)\propto k^{-5/3}`$ and we calculate the diffusion coefficient according to the procedure outlined by Colafrancesco & Blasi (1998), which gives $$D(E_p)=2.3\times 10^{29}E_p(GeV)^{1/3}B_\mu ^{-1/3}\left(\frac{l_c}{20kpc}\right)^{2/3}cm^2/s$$ (10) where $`B_\mu `$ is the value of the magnetic field in $`\mu G`$ and $`l_c`$ is the scale of the largest eddy in the power spectrum of the magnetic field. Eqs. (10) and (9) completely define the distribution of cosmic rays in the cluster in the case of a point source. ii) Spatially homogeneous injection As pointed out above, the budget of CRs in clusters is largely dominated by confined CRs, so that in the case of spatially homogeneous injection the distribution of CRs can be easily written in the form $$n_p(E_p,r)=n_0\frac{ϵ_{tot}}{V}p_p^{-\gamma }$$ (11) where $`V=(4/3)\pi R_{cl}^3`$ is the injection volume, $`ϵ_{tot}`$ is the total energy injected in the cluster in the form of CRs and $`n_0`$ is calculated by the normalization condition $$n_0\int _0^{E_p^{max}}𝑑T_pT_pp_p^{-\gamma }=E_{eq},$$ (12) where $`E_{eq}`$ is calculated according to eq. (6). Clearly eq. (11) does not describe well the CR distribution very close to the cluster boundary. Moreover at sufficiently high energy, where CRs are not confined in the cluster volume, the CR spectrum suffers a steepening to $`E_p^{-(\gamma +\eta )}`$, with $`\eta =1/3`$ for a Kolmogorov spectrum. ## 4 Results and conclusions We study the observational consequences of the assumption of equipartition between CRs and thermal gas in clusters of galaxies. In particular we calculated the flux of gamma radiation from the Coma and Virgo clusters when equipartition is assumed.
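The confinement (knee) energy implied by eq. (10) can be estimated by solving $`r_{max}(E_p)=R_{cl}`$ (a sketch of ours; $`t_0=1.4\times 10^{10}`$ yr is our assumption, with $`B_\mu =0.1`$ and $`l_c=20`$ kpc as used in the numerical calculations below):

```python
KPC = 3.0857e21                       # cm per kiloparsec
YR = 3.156e7                          # s per year

def diffusion_coeff(e_gev, b_mu=0.1, l_c_kpc=20.0):
    """Eq. (10): D(E) in cm^2/s for a Kolmogorov spectrum."""
    return 2.3e29 * e_gev ** (1.0 / 3.0) * b_mu ** (-1.0 / 3.0) \
        * (l_c_kpc / 20.0) ** (2.0 / 3.0)

def knee_energy(r_cl_cm=1000.0 * KPC, t0=1.4e10 * YR,
                b_mu=0.1, l_c_kpc=20.0):
    """Solve r_max(E) = sqrt(4 D(E) t0) = R_cl for E, in GeV."""
    d_needed = r_cl_cm ** 2 / (4.0 * t0)
    d_at_1gev = diffusion_coeff(1.0, b_mu, l_c_kpc)
    return (d_needed / d_at_1gev) ** 3.0      # since D scales as E^(1/3)

print(knee_energy())   # ~1.3e3 GeV: CRs up to ~1 TeV stay confined
```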
This assumption seems to be required if an ICS origin is accepted for the hard and soft X-ray excess and for the EUV flux from Coma and other clusters of galaxies (note however that alternative possibilities can be proposed). In particular, according to Lieu et al. (1999), in order to account for the observed cluster soft excess flux from Coma, equipartition between CRs and gas is unavoidable. In Berezinsky et al. (1997) different possible models of CR injection in clusters were considered, including active galaxies, accretion shocks during the formation of the cluster and a possible bright phase in the past of the cluster galaxies, but none of these sources could account for CR energy densities larger than $`15\%`$ of the equipartition value, if a conversion efficiency of $`10\%`$ was assumed for the injection of non thermal energy from the total energy of the sources. On the other hand Ensslin et al. (1998a) compared the thermal energy in a $`5h_{50}^{-1}`$ Mpc region with the energy injected during the formation of massive black holes in the Coma cluster. The total energy (thermal plus non thermal) released in this process was estimated to be comparable with the thermal energy in the cluster (if an efficiency factor is assumed, the non thermal energy may be smaller than the equipartition value). Since our knowledge of the sources of CRs in clusters is still very poor, we decided to adopt here a phenomenological approach and try to find observational tests or consequences of our assumptions. The most striking consequence of a large abundance of CRs in a cluster is the production of gamma rays through the generation and decay of neutral pions in $`pp`$ interactions. Since the EGRET instrument put an upper limit on the flux of gamma radiation above $`100`$ MeV from Coma and Virgo ($`F_\gamma ^{EGRET}(>100MeV)\simeq 4\times 10^{-8}phot/(cm^2s)`$ (Sreekumar et al. 1996)), we can use this constraint to test the equipartition assumption.
Our calculations were carried out for two extreme models of injection of CRs in the cluster, namely a point source in the cluster core and a spatially homogeneous injection in the cluster volume. In the case of a point source, we can think of it as an effective source, in the sense that on average a dominant source or a set of sources are located at the cluster center. In this sense it is not necessary that the same source remain active for the entire age of the cluster. The energy spectrum of the injected CRs was assumed to be a power law in momentum, as expected for a shock acceleration spectrum, and two extreme values of the power index were studied, namely $`\gamma =2.1`$ and $`\gamma =2.4`$, which encompass the whole range of power indices expected from shock acceleration (other models of acceleration also give power laws in the same range of parameters). Since the gamma ray spectra depend (although very weakly) on the choice of the diffusion coefficient, we made here a specific choice, modelling the spectrum of fluctuations of the field by a Kolmogorov spectrum and calculating the diffusion coefficient according to eq. (10). In the numerical calculations we used $`B_\mu =0.1`$ and $`l_c=20`$ kpc (if for instance we use $`B_\mu =1`$ the results on the integral fluxes change only by $`10\%`$, confirming the weak dependence on the diffusion details mainly due to the use of a specific gas density profile). The integral fluxes of gamma radiation above $`100`$ MeV for the cases mentioned above and in the conservative scenario of $`R_{cl}=1`$ Mpc are reported in Table 1 for the Coma and Virgo clusters. Due to the appearance of the flat region at low gamma ray energy, typical of spectra from pion decay, there is not a strong dependence of the integral flux on $`\gamma `$. In some of the cases considered the gamma ray flux exceeds the EGRET upper limit.
As could be expected, the gamma ray flux is larger for the case of a point source in the cluster center and the EGRET limit is exceeded by a factor $`1.7`$ for Coma and by a factor $`9`$ for Virgo. In the case of homogeneous injection the gamma ray fluxes are slightly smaller than the EGRET upper limit both for Coma and Virgo. These results are more impressive when the condition of equipartition is imposed on a larger region of size $`5h_{50}^{-1}`$ Mpc (Ensslin et al. 1998a): for a single source in the cluster center the EGRET limit is exceeded by a factor $`9`$ for $`\gamma =2.1`$ and by a factor $`8`$ for $`\gamma =2.4`$. For a homogeneous injection the predicted fluxes are in excess of the EGRET limit by $`7`$ for $`\gamma =2.1`$ and by $`6`$ for $`\gamma =2.4`$. It is worthwhile to stress again that this result is practically independent of the specific choice of the diffusion coefficient. In fact, the CRs relevant for the production of gamma rays in the energy range 0.1–10 GeV are certainly confined in the cluster for any reasonable choice of the diffusion coefficient, and the spectrum in this region is independent of this choice. The effects of the diffusion may appear only at higher energy where gamma rays are produced by CRs not confined in the cluster. As shown in (Berezinsky et al. 1997) this produces a steepening of the gamma ray spectrum to a power law with index $`\gamma +\eta `$ (with $`\eta =1/3`$ in the case of a Kolmogorov spectrum of fluctuations) at energies larger than a knee energy $`E_K`$. At smaller energies the gamma ray spectrum reproduces the spectrum of the parent CRs. The transition appears at $`E_K\simeq \left[R_{cl}^2/(B_\mu ^{-1/3}t_0)\right]^{1/\eta }`$, as obtained from the equation $`r_{max}(E_p)=R_{cl}`$. Actually this was shown in (Berezinsky et al. 1997) for the case of a constant intracluster gas density.
In the more realistic case considered here, where the gas is modelled by a King or a power law profile, the gamma ray spectrum suffers a smooth steepening even for confined CRs, but this affects the integral flux above $`100`$ MeV only at the level of $`10\%`$. The integral spectra of gamma rays from Coma with energy $`>E_\gamma `$ as functions of the energy $`E_\gamma `$ are shown in Fig. 1a (for the point source) and 1b (for the homogeneous case) for $`R_{cl}=1`$ Mpc. In the same plot we draw the sensitivity limits for several present and planned experiments for gamma ray astronomy. The solid lines refer to $`\gamma =2.1`$ while the dashed lines are obtained for $`\gamma =2.4`$. The fluxes in the energy region $`E_\gamma <100`$ GeV are well above the detectability limit of GLAST, so that there is no doubt that the question of equipartition will be completely answered with the next generation gamma ray satellites. However, Fig. 1 also shows that the signal from Coma could be detectable even in some current experiments, provided $`\gamma \lesssim 2.4`$. In particular STACEE could detect the signal above $`30`$ GeV and Whipple might detect the signal for $`E_\gamma \gtrsim 250`$ GeV. The flux should be detectable by the HEGRA Cerenkov telescope above $`500`$ GeV. A non-detection from these experiments would imply a reduction of the energy density in clusters by about one order of magnitude below equipartition for $`\gamma =2.1`$. For steep spectra only STACEE has a slim chance to detect the signal. In the same energy range the next generation gamma ray experiments will very likely measure the flux of gamma rays for any value of $`\gamma `$ in the range considered here. In the case of the Virgo cluster and $`R_{cl}=1`$ Mpc, the fluxes are plotted in Fig. 2a (for sources in the center) and Fig. 2b (for a homogeneous injection) and conclusions similar to the ones outlined for Coma hold. Note that this result is substantially different from the one obtained in previous calculations.
In particular Ensslin et al. (1997) reached the conclusion that the fluxes from Coma and Virgo are orders of magnitude too low to be detectable in the TeV range. This conclusion was obtained because, following Dar and Shaviv (1995), the gamma ray spectrum was assumed to reproduce the equilibrium spectrum of CRs in the Galaxy $`E_\gamma ^{-2.7}`$ (this did not affect appreciably their integral fluxes above 100 MeV, which are not very different from the ones obtained here for the homogeneous case). However, as shown in (Berezinsky et al. 1997, Colafrancesco & Blasi 1998, Blasi & Colafrancesco 1999) and confirmed here, the spectrum of gamma rays from $`pp`$ collisions in clusters does not reproduce the equilibrium CR spectrum, but the generation spectrum, as far as gamma ray photons are produced by interactions of CRs confined in the cluster, as is the case for gamma rays with energy less than 1–10 TeV, for the values of the parameters used here (though, as pointed out before, a slight steepening is introduced by the gas density profile). Therefore the gamma ray spectrum from CRs in clusters is approximately $`E_\gamma ^{-\gamma }`$ up to some maximum energy $`E_K`$ where CRs begin to be no longer confined in the cluster volume. As a consequence the gamma ray fluxes in the TeV range could be detectable even by present experiments if the CRs are in equipartition with the gas in the cluster. On the other hand, if no flux is detected, this will put a strong constraint on the equipartition assumption. While the low energy integral gamma ray flux is very weakly dependent on the choice of the diffusion coefficient, the corresponding flux at higher energies is more sensitive to it, since, as explained above, the position of the knee is affected by this choice. In the context of a Kolmogorov spectrum, the maximum diffusion coefficient is obtained for a larger value of $`l_c`$. The choice $`l_c\simeq 20`$ kpc was inspired by the typical size of the galaxies in the cluster.
The largest scale where the magnetic fluctuations are injected is the typical distance between galaxies, of the order of $`l_c100`$ kpc. For $`l_c20`$ kpc the position of the knee is at 1–10 TeV, while for $`l_c100`$ kpc the knee is at 10–20 GeV. However, since the steepening in the gamma ray spectrum begins at large energy, the difference in the plots caused by the use of this larger diffusion coefficient is a factor of about 2 at $`100`$ GeV (for $`R_{cl}=1`$ Mpc), so that the possibility of detecting the gamma ray fluxes in this energy region is not appreciably affected and remains an interesting one. The situation improves rapidly with increasing $`R_{cl}`$. In fact the knee energy for a Kolmogorov spectrum scales as $`R_{cl}^6`$, and high energy CRs are easily confined in a region of 4–5 Mpc. If, following Ensslin et al. (1998a), we use $`R_{cl}=5h_{50}^{-1}`$ Mpc, then, as shown before, the absolute gamma ray fluxes increase at all energies by a factor of 5–10 and the steepening at high energy only appears at $`E10^3`$ TeV for the diffusion coefficient in eq. (10). At present we can only use the EGRET limit as a constraint. For Coma this limit implies that the energy density must be smaller than $`60\%`$ of the equipartition value if the CRs are mainly contributed by sources in the central part of the cluster (with $`R_{cl}=1`$ Mpc). If injection occurs uniformly over the cluster volume, then the equipartition CR energy density is compatible with the EGRET limit (for $`R_{cl}=1`$ Mpc). For the Virgo cluster the EGRET limit implies that the energy density in CRs must be smaller than $`10\%`$ of the equipartition value for the case of sources in the center. As stressed above, the case of equipartition in a larger region (Ensslin et al. 1998a) is already ruled out by present gamma ray observations by EGRET.
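The $`R_{cl}^6`$ scaling of the knee follows from the confinement condition: for a Kolmogorov spectrum the diffusion coefficient scales as $`D(E)E^{1/3}`$, so requiring the diffusive escape time $`\tau R_{cl}^2/D(E)`$ to stay fixed gives $`E_KR_{cl}^6`$. A minimal numerical sketch follows; the 5 TeV anchor at 1 Mpc is a hypothetical value inside the 1–10 TeV range quoted above, not a number from the fit:

```python
# Sketch of the knee-energy scaling E_K ~ R_cl^6 for Kolmogorov diffusion,
# where D(E) ~ E^(1/3) and confinement requires R_cl^2 / D(E_K) = const.

def knee_energy(r_cl_mpc, e_k_1mpc_tev=5.0):
    """Knee energy (TeV) at cluster radius r_cl_mpc, anchored to an
    assumed (hypothetical) knee of e_k_1mpc_tev at R_cl = 1 Mpc."""
    return e_k_1mpc_tev * r_cl_mpc ** 6

print(knee_energy(1.0))  # 5 TeV (the assumed anchor)
print(knee_energy(5.0))  # ~7.8e4 TeV: well above 10^3 TeV, as quoted
```

The sixth-power dependence is why going from 1 Mpc to 5 Mpc pushes the steepening far beyond the TeV window of the current Cerenkov experiments.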
As a consequence the CR energy density in this case for Coma is forced to be $`12\%`$ of the equipartition value for a point source and $`14\%`$ for homogeneous injection. We suggest that experiments like STACEE, HEGRA, Whipple and future gamma ray experiments search for the signal from nearby clusters, because this could definitely confirm or rule out the possibility that equipartition of CRs with the thermal gas is achieved in clusters of galaxies, or at least impose new and stronger constraints on the maximum allowed CR energy density in clusters. The author is grateful to A. Olinto, S. Colafrancesco, C. Covault and R. Ong for many useful discussions and to the anonymous referee for several interesting comments. The research of P.B. is funded through an INFN fellowship at the University of Chicago.
# Properties of two-dimensional dusty plasma clusters. ## I Introduction Small charged particles of "dust" are rather common systems, observed on different scales and in different environments: clusters of dust in the interstellar medium, charged colloidal suspensions, and ordered structures in the gas discharges used in thermal processing of materials are examples of such systems. At present much attention is paid to the experimental investigation of the properties of "dusty plasma", a system of small micrometer-size particles in a high frequency gas discharge. One of the main reasons for the attention paid to such artificial objects is the possibility of direct observation of their static and dynamical properties. The study of dusty plasma crystals and liquids 'in vitro', which is being carried out in a number of laboratories around the world, is of great importance for understanding plasma properties and is a powerful tool for the examination of melting, annealing and the formation of defects of different kinds. Small particles immersed in a plasma may acquire large (up to $`10^5e`$) charges $`Ze`$ due to the high mobility of plasma electrons. The presence of plasma screening modifies the Coulomb interparticle interaction and, to a good precision, the system can be described as a system of particles interacting through a Yukawa-type pair potential. Here we consider two-dimensional (2D) clusters of dusty particles confined by an external harmonic potential of strength $`\alpha `$. Such a 2D system is realized when particles of 'dust' are immersed in a plasma discharge cloud with a transverse dimension larger than the Debye screening length.
The energy of the system has the form: $`E=(Ze)^2{\displaystyle \underset{i<j}{\overset{N}{\sum }}}{\displaystyle \frac{\mathrm{exp}(-|𝐫_{ij}|/R_D)}{|𝐫_{ij}|}}+\alpha {\displaystyle \underset{i=1}{\overset{N}{\sum }}}|𝐫_i|^2`$ (1) The Debye screening length in the plasma is $`R_D=\left(4\pi q_i^2n_i/k_BT_i+4\pi e^2n_e/k_BT_e\right)^{-1/2}`$, where $`q_i,n_i`$ and $`T_i`$ are the charge, mean density and temperature of the plasma ions and $`e,n_e,T_e`$ are those of the plasma electrons, respectively. The energy of a cluster, written in dimensionless units $`r_0=(Ze)^{2/3}/\alpha ^{1/3}`$ for distances and $`E_0=\alpha r_0^2`$ for energies, becomes: $`E={\displaystyle \underset{i<j}{\overset{N}{\sum }}}{\displaystyle \frac{\mathrm{exp}(-\gamma |𝐫_{ij}|)}{|𝐫_{ij}|}}+{\displaystyle \underset{i=1}{\overset{N}{\sum }}}|𝐫_i|^2`$ (2) where the dimensionless parameter $`\gamma =r_0/R_D`$ defines the range of the pair interaction potential. From (2) one can see that the thermodynamic state of a cluster with a given number of particles is determined by two dimensionless parameters: the inverse dimensionless screening length $`\gamma `$ and the dimensionless temperature of the "dusty" grains $`\mathrm{\Theta }=k_BT/E_0`$. The dimensionless range of the interaction between particles in a cluster, $`1/\gamma =R_D/r_0`$, is controlled by the density and the temperature of the plasma (see above). In this paper we consider the properties of two-dimensional (2D) clusters (2) as a function of the number of particles $`N<40`$, the screening length $`1/\gamma `$ and the temperature $`\mathrm{\Theta }`$. We show (see Sec. II) that a change in the screening length (i.e. in the parameter $`\gamma `$) causes rearrangements of the ground-state structure of the cluster at a set of points $`\gamma ^{}`$; such structural transitions can be treated as phase transitions of the first or the second order with respect to the parameter $`\gamma `$. In Sec.
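The dimensionless cluster energy — a screened Coulomb (Yukawa) pair repulsion $`e^{-\gamma r}/r`$ plus harmonic confinement — is straightforward to evaluate for any configuration. A minimal sketch (the example configuration and the helper name are ours, for illustration):

```python
import math

def cluster_energy(positions, gamma):
    """Dimensionless energy: screened pair repulsion exp(-gamma*r)/r
    plus harmonic confinement, for a list of 2D positions (x, y)."""
    n = len(positions)
    e = sum(x * x + y * y for x, y in positions)  # confinement term
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            r = math.hypot(dx, dy)
            e += math.exp(-gamma * r) / r  # Yukawa pair term
    return e

# Two particles one unit apart, symmetric about the trap center:
print(cluster_energy([(-0.5, 0.0), (0.5, 0.0)], gamma=1.0))
# exp(-1)/1 + 2*(0.5)^2 = 0.8679...
```

In the $`\gamma 0`$ limit the pair term reduces to the unscreened Coulomb case discussed below.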
III we apply molecular dynamics (MD) and Monte Carlo (MC) simulations in a canonical ensemble in order to study the thermodynamic properties of small clusters. We show that in clusters with a rather small number of particles and at small enough plasma screening ($`\gamma <10`$), as the temperature is increased, orientational disordering happens first, i.e. shells rotate with respect to each other, losing their mutual orientational order. At higher temperatures a total disordering of the cluster shells takes place. ## II Ground-state configurations Ground-state configurations (see Table 1 and Figs. 1-3) of the system (2) have been found with the help of the following methods: 1) the modified Newton method; 2) a combination of "random search" and "gradient search" methods. To make the results more reliable, all configurations discussed below have been independently obtained by both of these methods. Of course, none of the existing methods for minimizing a multidimensional function is able to guarantee that the configuration obtained corresponds to the global minimum. To overcome this difficulty we have used up to 200 randomly distributed initial configurations. This approach has also enabled us to investigate both the local minima and their capture regions (i.e. the "specific weights" of the local minima). At $`\gamma 1`$ model (2) describes the Coulomb cluster in a harmonic trap, a system that has been actively studied both experimentally and with the use of computer simulations. In particular, earlier calculations have revealed that particles in small finite systems arrange themselves into shells. An analysis of the shell structures for different numbers of particles $`N`$ enables one to consider the system as belonging to some period of a Mendeleev-type table. This table can be viewed as a classical equivalent of the well-known Periodic Table of elements.
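The "random search plus gradient search" strategy can be sketched as follows. This is a toy version with a fixed-step descent and backtracking; the particle number, step sizes and the random seed are illustrative choices of ours, not the parameters used in the paper:

```python
import math, random

def energy_and_grad(pos, gamma):
    """Dimensionless Yukawa-plus-confinement energy and its gradient."""
    n = len(pos)
    e = sum(x * x + y * y for x, y in pos)
    grad = [[2 * x, 2 * y] for x, y in pos]            # confinement part
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            r = math.hypot(dx, dy)
            e += math.exp(-gamma * r) / r
            dv = -math.exp(-gamma * r) * (gamma * r + 1) / r ** 2  # dv/dr
            gx, gy = dv * dx / r, dv * dy / r
            grad[i][0] += gx; grad[i][1] += gy
            grad[j][0] -= gx; grad[j][1] -= gy
    return e, grad

def relax(pos, gamma, steps=2000, h=0.05):
    """Gradient descent with backtracking: the energy never increases."""
    e, g = energy_and_grad(pos, gamma)
    for _ in range(steps):
        trial = [(x - h * gx, y - h * gy) for (x, y), (gx, gy) in zip(pos, g)]
        e_t, g_t = energy_and_grad(trial, gamma)
        if e_t < e:
            pos, e, g = trial, e_t, g_t
        else:
            h *= 0.5                                   # backtrack
    return pos, e

random.seed(1)
start = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]
e0, _ = energy_and_grad(start, gamma=1.0)
_, e1 = relax(start, gamma=1.0)
print(e0, e1)  # the relaxed energy is lower than the random start
```

Repeating the relaxation from many random starts and keeping the lowest result is the "random search" part; how often each minimum is reached estimates its capture region.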
The presence of the parameter $`\gamma `$, which determines the range of the pair potential, enables one to investigate the influence of this range on the structures and properties of clusters. The fact that the cluster structure depends on the range of the interaction potential becomes obvious from Table 1, in which some ground-state configurations for 2D clusters in a harmonic trap are presented. As the value of the parameter $`\gamma `$ changes, rearrangements of the ground-state structure take place, and each point $`\gamma ^{}`$ of such a change can be treated as a point of a phase transition of one kind or another. Following the approach used in , the order of these phase transitions can be determined from the plot of the ground-state energy $`E(\gamma )`$: a discontinuity in the $`n`$th derivative of $`E(\gamma )`$ with respect to the parameter $`\gamma `$ corresponds to a phase transition of $`n`$th order. Another way to determine the order of the phase transition is to analyse the behaviour of the eigenfrequencies $`\omega _i(\gamma ),i=\overline{1,2N}`$ of the normal modes: a first order transition takes place at a point $`\gamma ^{}`$ at which one of the eigenfrequencies exhibits a jump, while the softening of one of the eigenfrequencies (when it goes to zero) indicates a second order phase transition. The eigenfrequencies of the cluster of $`N=10`$ particles vs. the screening parameter $`\gamma [0,10]`$ are presented in Fig. 1. At the points $`\gamma 1.4`$ and $`\gamma 8.2`$ of the first order transitions the jumps of the eigenfrequencies are clearly seen. From Fig. 1b one can see that, with a decrease in the interaction range, first, at $`\gamma 1.4`$, the distribution of particles over the shells changes and the configuration typical for the Coulomb interaction is replaced by the one appropriate to the dipole cluster of 10 particles ($`\{2,8\}\{3,7\}`$).
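The criterion "a jump in the first derivative of $`E(\gamma )`$ signals a first order transition" is easy to illustrate numerically: when the ground-state energy is the lower envelope of two smooth configuration branches, the branch crossing shows up as a spike in the discrete second difference of $`E(\gamma )`$. A toy sketch (the two analytic branches below are invented for illustration, not the actual cluster energies):

```python
# Toy illustration: E(gamma) = min of two smooth branches crossing at
# gamma = 1; a first order transition appears as a kink, i.e. a spike
# in the discrete second difference of E.

def ground_energy(g):
    branch_a = 2.0 * g          # hypothetical branch, slope 2
    branch_b = 1.0 + g          # hypothetical branch, slope 1
    return min(branch_a, branch_b)

h = 1e-3
grid = [i * h for i in range(2001)]          # gamma in [0, 2]
e = [ground_energy(g) for g in grid]
second_diff = [abs(e[i - 1] - 2 * e[i] + e[i + 1]) for i in range(1, len(e) - 1)]
kink = grid[1 + max(range(len(second_diff)), key=second_diff.__getitem__)]
print(kink)  # ~1.0, the crossing point of the two branches
```

The same scan applied to the cluster energies would locate $`\gamma ^{}`$ without tracking the configurations explicitly.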
Further reduction of the screening radius transforms (at $`\gamma 8.2`$) the cluster to the most "packed" state $`\{2,8\}`$, which is characteristic of a system of hard spheres. In Fig. 2 the ground-state energy of a cluster of $`33`$ particles is given. At $`\gamma 3.751`$ the first derivative of the ground-state energy with respect to the parameter $`\gamma `$ is discontinuous (see the inset of Fig. 2a). Investigation of the cluster configurations shows that the numbers of particles in the two outer shells change here as $`\{1,6,11,15\}\{1,6,12,14\}`$. The point of the first order phase transition $`\gamma ^{}`$ can be determined as the point where the energies of the ground state and the lowest excited (metastable) state are equal. This statement is illustrated in Fig. 2b, in which both the ground-state energy $`E(\gamma )`$ and the energy $`E^{(1)}(\gamma )`$ of the lowest local minimum are plotted in the vicinity of the transition point $`\gamma ^{}3.751`$. From this figure one can see that the configuration of the global minimum at $`\gamma <\gamma ^{}`$ corresponds to a local one at $`\gamma >\gamma ^{}`$. One can see that in the cluster of $`37`$ dipole particles one of the particles sits between the second and the third shell, forming an interstitial (analogous to the Frenkel defect in crystals) and making the division of the ground-state configuration into shells ambiguous. The 'dusty' cluster of $`37`$ particles is therefore expected to exhibit a rich variety of structural rearrangements as $`\gamma `$ is varied. The investigation of this cluster at different values of the screening strength has revealed four phase transitions in the region $`\gamma [0,1.6]`$ (see Fig. 3a), namely two second order transitions (at $`\gamma 0.78`$ and $`\gamma 1.22`$) and two first order transitions (at $`\gamma 0.52`$ and $`\gamma 1.34`$). From Fig. 3a one can see that the number of particles in the outer shells changes at $`\gamma 0.52`$.
It is worthwhile to note that a distinctive feature of a first order phase transition is an abrupt change in the cluster structure. Usually this shows up as a change in the distribution of particles over the shells (as one can observe for the clusters of $`10`$ and $`33`$ particles, see Figs. 1, 2). Just such a change takes place at $`\gamma 0.52`$ for the cluster in question. However, no apparent changes in the structure of the cluster are seen at the point $`\gamma 1.34`$ of the first order phase transition. A more detailed study has shown that at this point the third shell rotates with respect to the fourth one. This is illustrated in Fig. 3b, which presents the mutual orientational order parameter $`g_{s_1s_2}`$ of different pairs of shells $`\{s_1,s_2\}`$, a quantity which is very sensitive to changes in the mutual orientation of the cluster shells. A subsequent increase in the parameter $`\gamma `$ leads to two other first order transitions, which are depicted in Fig. 3b. In the first of them (at $`\gamma 7.015`$) one of the particles inserts itself between the second and the third shells (see Table 1 and the discussion above). The corresponding transition can be written as $`\{1,7,13,16\}\{1,6,\overline{1},12,17\}`$. At $`\gamma >19`$ the cluster becomes well-faceted and has the most symmetrical structure {1,6,12,18}. Note that in this region of $`\gamma `$ the minimal nonzero eigenfrequency $`\omega _{min}`$ corresponds to twofold degenerate vibrations of the whole cluster in the harmonic trap with the frequency $`\omega _{min}=\sqrt{2}`$. A further decrease of the range of the pair potential does not lead to any structural rearrangements. The study of Coulomb and dipole clusters has shown that the basis for most configurations is provided by different parts of the 2D hexagonal lattice.
When describing and analyzing the properties of such configurations it is convenient to introduce the "crystal shells" $`Cr_c`$, concentric groups of nodes of the ideal 2D crystal with $`c`$ nodes placed in the center of these groups. Obviously, in view of the axial symmetry of the confinement potential, we can concentrate on a finite number of the most symmetrical crystal shells which, by the number of particles in the center of the system, can be divided into the following groups: $`Cr_1`$, $`Cr_2`$, $`Cr_3`$, $`Cr_4`$. With the help of the crystal shell concept we have found that, as the parameter $`\gamma `$ is increased, the changes in the ground state structure of "dusty" clusters occur in such a way as to fill the maximal number of crystal shells. ## III Phase transitions One of the distinctive peculiarities of small clusters is the existence of two stages of their disordering: an intershell stage (orientational melting of shells $`s_1`$ and $`s_2`$ at the temperature $`\mathrm{\Theta }_{s_1s_2}`$) and a radial disordering (total melting at the temperature $`\mathrm{\Theta }_f`$). The analysis of the eigenfrequencies shows that the clusters with small values of the lowest nonzero eigenfrequency $`\omega _{min}`$ have eigenvectors corresponding to mutual rotations of the cluster shells. Such clusters have low temperatures $`\mathrm{\Theta }_{s_1s_2}`$ of intershell disordering. It is obvious that changes in the cluster structure caused by variations of the control parameter $`\gamma `$ lead to the modulation of the temperatures of both the orientational $`\mathrm{\Theta }_{s_1s_2}`$ and the total $`\mathrm{\Theta }_f`$ disordering. Moreover, the phenomenon of orientational disordering may disappear altogether if the cluster has a well packed structure. The results of our simulations confirm this expectation.
The dependences of the mutual orientational order parameter $`g_{21}(\mathrm{\Theta })`$ for the two-shell cluster of $`N=10`$ particles at several values of the parameter $`\gamma `$ are given in Fig. 4a. It is evident that $`g_{s_1s_2}`$ drops to zero at the point of relative disordering (mutual rotation of shells $`s_1`$ and $`s_2`$). One can see from Fig. 4a that the change in the system configuration $`\{2,8\}\{3,7\}`$ which occurs at $`\gamma 1.4`$ (see Fig. 1) leads to a sharp decrease in the orientational disordering temperature: $`\mathrm{\Theta }_{21}(\gamma <1.4)1.3\times 10^{-4}\mathrm{\Theta }_{21}(\gamma >1.4)0.7\times 10^{-5}`$. The cluster is well-packed in the region $`\gamma >8.2`$ (see Fig. 1) and for that reason it does not experience orientational melting: an increase in the temperature leads directly to the interchange of particles between shells at $`\mathrm{\Theta }10^{-3}`$. This can be seen from the analysis of the radial mean-square deviations $`u_r^2`$: $`u_r^2={\displaystyle \frac{1}{N}}{\displaystyle \underset{i}{\sum }}\left[\langle |𝐫_i|^2\rangle -\langle |𝐫_i|\rangle ^2\right]`$ (3) The dependence $`u_r^2(\mathrm{\Theta })`$ is given in Fig. 4b; also shown are analogous curves at $`\gamma =1`$ and $`\gamma =2`$. One can see that even a slight variation of the control parameter $`\gamma `$ may change the temperature of the total melting by orders of magnitude. The changes in the interaction potential lead to a modification of the structure of the energy surface, which determines the type and the distinctive features of the phase transitions. For this reason, one can expect that at some values of the parameter $`\gamma `$ the system has very interesting thermodynamic properties. In Fig. 5a the dependence of the radial mean-square deviations (3) of the four-shell cluster of $`N=33`$ particles at $`\gamma =3.76`$ is given. The graph has a number of plateaus located in different temperature intervals.
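The radial mean-square deviation is the thermal variance of each particle's radius, averaged over the particles; it is computed from the MC or MD averages of $`|𝐫_i|`$ and $`|𝐫_i|^2`$. A minimal sketch on synthetic "trajectory" data (the sample radii are invented for illustration):

```python
import math

def radial_msd(trajectory):
    """(1/N) * sum_i [ <|r_i|^2> - <|r_i|>^2 ], where trajectory is a
    list of snapshots, each a list of (x, y) positions."""
    n = len(trajectory[0])
    total = 0.0
    for i in range(n):
        radii = [math.hypot(*snap[i]) for snap in trajectory]
        mean_r = sum(radii) / len(radii)
        mean_r2 = sum(r * r for r in radii) / len(radii)
        total += mean_r2 - mean_r ** 2
    return total / n

# One particle whose radius alternates between 1 and 3:
traj = [[(1.0, 0.0)], [(3.0, 0.0)], [(1.0, 0.0)], [(3.0, 0.0)]]
print(radial_msd(traj))  # <r^2> - <r>^2 = 5 - 4 = 1
```

A frozen configuration gives zero, while particle interchange between shells produces the sharp rises seen in the plateau structure.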
A detailed investigation has shown that the regions of sharp increase in $`u_r^2`$ correspond to the radial disordering of different pairs of shells: particles start to interchange between the third and the fourth shell at the temperature $`\mathrm{\Theta }_{34}^f5\times 10^{-4}`$ and between the second and the third at $`\mathrm{\Theta }_{23}^f0.005`$. The total melting of the cluster takes place at $`\mathrm{\Theta }^f0.01`$. Some useful information about the character of the disordering can be obtained by exploring the local minima distribution $`\rho (E_{loc})`$. To build this histogram, at each measurement time point we have performed several hundred gradient search iterations to find the nearest local minimum with energy $`E_{loc}`$. In Fig. 5b the local minima distribution of the system of 33 particles at $`\gamma =3.76`$ is shown for $`\mathrm{\Theta }=10^{-4}`$ and $`\mathrm{\Theta }=8\times 10^{-3}`$. In the entirely ordered state (at $`\mathrm{\Theta }=10^{-4}`$) the system lives in the vicinity of the global minimum (with energy $`E=64.795946`$ and the structure $`\{1,6,12,14\}`$). At $`\mathrm{\Theta }=8\times 10^{-3}`$ the cluster can also be found in the lowest local minimum $`E^{(1)}=64.795975`$ with the configuration $`\{1,6,11,15\}`$ (see Fig. 2b). Considering the results stated above, one can conclude that the first disordering, seen in the temperature interval $`\mathrm{\Theta }[10^{-4},10^{-3}]`$ (see Fig. 2), corresponds to a nonzero probability for the cluster to be found in the state $`\{1,6,11,15\}`$, which is metastable at the given value of the parameter $`\gamma `$. Such changes in the distribution of particles over the shells require the overcoming of a potential barrier which, given the large specific weights of both the "ground" and "excited" states, allows one to treat this temperature interval as one of dynamical coexistence of the two cluster forms $`\{1,6,12,14\}\{1,6,11,15\}`$.
## IV Conclusions In this letter we have presented the results of a study of a finite "dusty plasma" particle system. As a function of the Debye screening length $`R_D`$ (for the particle charge in the plasma) we have found the ground-state configurations of clusters consisting of $`N40`$ particles, their normal mode eigenvalues and the corresponding eigenvectors. The clusters undergo structural transitions which manifest themselves as phase transitions of first or second order with respect to the parameter $`R_D`$. At the points of first order transitions the cluster coordinates experience jumps, which lead either to a change in the shell distribution or to a rotation of some shells relative to each other. At the points of second order transitions one of the eigenfrequencies softens and the particle coordinates change continuously. By varying $`R_D`$ (for example, by varying the temperature and the density of the plasma) one can modulate the thermodynamic properties of the system and considerably change the temperatures of both the orientational and the total disordering. It has turned out that for some clusters, as the screening becomes sufficiently strong, the orientational melting of different parts of the system disappears and an increase in the temperature leads straightaway to the interchange of particles between shells. Acknowledgments. This work was partially supported by the Russian Foundation for Basic Research, INTAS and the program "Physics of solid state nanostructures". Table 1. Ground-state shell structures $`\{N_1,N_2,\mathrm{}\}`$ for dipole, Coulomb and logarithmic clusters of $`N`$ particles confined in a harmonic potential.
| $`N`$ | Dipole cluster | Coulomb cluster | Logarithmic cluster | | --- | --- | --- | --- | | 9 | 2,7 | 2,7 | 1,8 | | 10 | 3,7 | 2,8 | 2,8 | | 11 | 3,8 | 3,8 | 3,8 | | | | | | | 32 | 1,6,12,13 | 1,5,11,15 | 4,11,17 | | 33 | 1,6,12,14 | 1,6,11,15 | 5,11,17 | | 34 | 1,6,12,15 | 1,6,12,15 | 1,5,11,17 | | | | | | | 36 | 1,6,12,17 | 1,6,12,17 | 1,6,12,17 | | 37 | 1,6,1,13,16 | 1,7,12,17 | 1,6,12,18 | | 38 | 2,8,13,15 | 1,7,13,17 | 1,6,12,19 | Fig. 1. Eigenfrequencies (a) and the lowest nonzero eigenfrequency $`\omega _{min}`$ (b) of the 'dusty' cluster of $`10`$ particles. Inset: ground state configurations in three different regions of the control parameter $`\gamma `$. Fig. 2. Cluster of $`33`$ particles. a) The first derivative of the cluster ground-state energy with respect to $`\gamma `$. In the inset the region of the first order phase transition is shown on an enlarged scale. b) Energies and configurations of the lowest local minimum (with the energy $`E^{(1)}`$ measured from the ground state energy $`E`$) of the cluster in the region of the phase transition. Fig. 3. Cluster of $`37`$ particles. a) The lowest nonzero eigenfrequency $`\omega _{min}(\gamma )`$ and the mutual orientational order parameter of different pairs of shells $`g_{s_1s_2}(\gamma )`$. b) The regime of strong plasma screening. At $`\gamma >19`$ the eigenfrequency $`\omega _{min}`$ corresponds to the motion of the cluster as a whole in the confinement. Fig. 4. Two-shell cluster of $`10`$ particles. a) Thermodynamic mean of the mutual orientational order parameter $`\langle g_{21}\rangle (\mathrm{\Theta })`$ at different values of $`\gamma `$. b) Radial mean-square deviations of particles vs. temperature, $`u_r^2(\mathrm{\Theta })`$. Fig. 5. Four-shell cluster of $`33`$ particles. a) Radial mean-square deviations.
b) Local minima distribution histogram $`\rho (E^{(loc)})`$ of the cluster in the ordered state (at $`\mathrm{\Theta }=10^{-4}`$) and at $`\mathrm{\Theta }=8\times 10^{-3}`$, when there is an interchange of particles between the third and the fourth shells.
# Intermittency corrections to the mean square particle acceleration in high Reynolds number turbulence ## Abstract The mean square particle acceleration in high Reynolds number turbulence is dominated by the mean square pressure gradient. Recent experiments by Voth et al \[Phys. Fluids 10, 2268 (1998)\] indicate that this quantity, when normalized by the 1941 Kolmogorov value, $`ϵ^{3/2}\nu ^{-1/2}`$, is independent of Reynolds number at high Reynolds number. This is to be contrasted with direct numerical simulations of Vedula and Yeung \[Phys. Fluids 11, 1208 (1999)\] which show a strong increase of the same quantity with increasing Reynolds number. In this paper we suggest that there is no inherent conflict between these two results. A large part of the increase seen in DNS is shown to be associated with finite Reynolds number corrections within the quasi-Gaussian approximation. The remaining intermittency corrections increase relatively slowly with Reynolds number, and are very sensitive to subtle cancellations among the longitudinal and transverse contributions to the mean square pressure gradient. Other possible theoretical subtleties are also briefly discussed. <sup>1</sup>Physics Department, New York University, New York, NY 10003, and Levich Institute, CCNY, New York, NY 10031 USA. Electronic address: Mark.Nelkin@nyu.edu <sup>2</sup>Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545 USA. Electronic address: syc@cnls.lanl.gov There has been considerable recent interest in the study of fluid particle acceleration in turbulent flows. A natural focus for study is the scaled mean square fluid particle acceleration $$\beta =\nu ^{1/2}ϵ^{-3/2}\langle a^2\rangle ,$$ (1) where $`𝐚=d𝐯/dt`$ is the fluid particle acceleration, $`\nu `$ is the kinematic viscosity of the fluid, and $`ϵ`$ is the average rate of energy dissipation per unit mass. In the 1941 Kolmogorov theory, $`\beta `$ is independent of Reynolds number.
Direct numerical simulations of isotropic turbulence by Vedula and Yeung and by Gotoh and Rogallo show that the scaled acceleration (1) increases strongly with Reynolds number, approximately as $`R_\lambda ^{1/2}`$, where $`R_\lambda `$ is the Taylor microscale Reynolds number. These DNS also give the expected result that the dominant contribution to $`\beta `$ comes from the mean square pressure gradient. The viscous contribution is no more than a few percent, and will not be considered further in this paper. The DNS are currently limited to a maximum $`R_\lambda `$ of 235. By contrast, a recent high resolution measurement of fluid particle acceleration by Voth et al suggests that $`\beta `$ is nearly independent of Reynolds number in a range of $`R_\lambda `$ between 1000 and 2000. This result is for the flow between counter-rotating disks. The quantity $`\beta `$ used in this paper is a factor of $`3`$ times the scaled acceleration variance $`a_0`$ used in and . These references look at a single Cartesian component, and we look at the sum over Cartesian components. We start from an exact result for the mean square pressure gradient for isotropic turbulence due to Hill and Wilczak, $$\langle (p)^2\rangle =4_0^{\mathrm{}}r^{-3}[L(r)+T(r)-6M(r)]𝑑r,$$ (2) where we have taken the density $`\rho =1`$ with no loss of generality. In (2) the quantities $`L(r)`$, $`T(r)`$, and $`M(r)`$ are the three independent fourth order velocity structure functions. We refer to these as the longitudinal $`L(r)`$, transverse $`T(r)`$, and mixed $`M(r)`$ structure functions, and they are given by $$L(r)=\langle \mathrm{\Delta }u^4(r)\rangle ,T(r)=\langle \mathrm{\Delta }v^4(r)\rangle ,M(r)=\langle \mathrm{\Delta }u^2(r)\mathrm{\Delta }v^2(r)\rangle .$$ (3) In a recent letter, we have analyzed a similar expression for the pressure structure function $`D_p(r)`$ in the inertial range, and we found it to be very sensitive to strong cancellations among the positive terms proportional to $`L(r)`$ and $`T(r)`$ and the negative term proportional to $`M(r)`$.
We expect a comparable degree of cancellation here for the mean square pressure gradient. To estimate this cancellation, it is useful to introduce the quasi-Gaussian approximation, where $$L(r)=3[D_{LL}(r)]^2,T(r)=3[D_{NN}(r)]^2,M(r)=D_{LL}(r)D_{NN}(r).$$ (4) In (4), $`D_{LL}(r)`$ is the second order longitudinal velocity structure function, and $`D_{NN}(r)`$ is the second order transverse velocity structure function. For isotropic turbulence, these two quantities are related by $$D_{NN}(r)=D_{LL}(r)+(r/2)[dD_{LL}(r)/dr].$$ (5) If (4) and (5) are substituted into (2), we obtain $$\langle (p)^2\rangle =3_0^{\mathrm{}}r^{-1}[dD_{LL}(r)/dr]^2𝑑r,$$ (6) which was derived by Obukhov in 1949 and by Batchelor in 1951. To evaluate (6) at high Reynolds numbers, Batchelor introduced an interpolation formula for the longitudinal structure function, which remains an accurate and useful approximation today. He suggested that $$D_{LL}(r)=(1/15)ϵ^{2/3}\eta ^{2/3}x^2(1+\alpha x^2)^{-2/3},$$ (7) where $$x=r/\eta ,$$ $`ϵ`$ is the average rate of energy dissipation per unit mass, $`\eta =\nu ^{3/4}ϵ^{-1/4}`$ is the Kolmogorov dissipation length scale, and $`\nu `$ is the kinematic viscosity of the fluid. The adjustable parameter $`\alpha `$ is taken as $`\alpha =0.006455`$, as discussed elsewhere. Substituting (7) into (6), the integral in (6) is easily evaluated to give $$\beta ^{QG}=ϵ^{-3/2}\nu ^{1/2}\langle (p)^2\rangle =(4/175\alpha )3.5$$ (8) A survey of earlier results for the value of $`\beta ^{QG}`$ is given by Kaneda. At low Reynolds number, there are two distinct corrections to (8) to be expected. Within the quasi-Gaussian approximation, there will be a finite Reynolds number correction due to the cutoff of (6) at large $`r`$. This occurs approximately at the integral length scale $`L`$, and leads to a viscosity independent subtraction from the mean square pressure gradient in (6).
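The closed-form value $`4/(175\alpha )`$ is easy to verify by direct quadrature: substituting the Batchelor form $`D_{LL}=(1/15)ϵ^{2/3}\eta ^{2/3}f(x)`$ with $`f(x)=x^2(1+\alpha x^2)^{-2/3}`$ into the Obukhov-Batchelor integral reduces $`\beta ^{QG}`$ to $`(1/75)_0^{\mathrm{}}f^{}(x)^2x^{-1}𝑑x`$. The sketch below evaluates this dimensionless integral numerically; the derivative formula is obtained by differentiating $`f`$, and the grid limits and resolution are our choices:

```python
import math

# Numerical check: beta_QG = (1/75) * Int_0^inf f'(x)^2 / x dx with
# f(x) = x^2 (1 + a x^2)^(-2/3); the closed form is 4 / (175 a).

ALPHA = 0.006455

def df(x):
    # f'(x) = 2x (1 + a x^2)^(-5/3) (1 + a x^2 / 3), by direct differentiation
    u = ALPHA * x * x
    return 2.0 * x * (1.0 + u) ** (-5.0 / 3.0) * (1.0 + u / 3.0)

# Trapezoidal quadrature on a logarithmic grid, x in [1e-6, 1e9].
# With x = e^t, dx = x dt, so (f'^2 / x) dx = f'(x)^2 dt.
n = 200000
t0, t1 = math.log(1e-6), math.log(1e9)
ts = [t0 + (t1 - t0) * k / (n - 1) for k in range(n)]
g = [df(math.exp(t)) ** 2 for t in ts]
integral = sum((g[k] + g[k + 1]) / 2 for k in range(n - 1)) * (t1 - t0) / (n - 1)

beta_qg = integral / 75.0
print(beta_qg, 4.0 / (175.0 * ALPHA))  # both ~3.54
```

The integrand peaks near $`x10`$, consistent with the statement below that the integrands peak around $`r=10\eta `$.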
Since the high Reynolds number limit of $`\langle (p)^2\rangle `$ is proportional to $`\nu ^{-1/2}`$, and hence to $`R_\lambda `$, the quasi-Gaussian value of $`\beta `$ should have a Reynolds number dependence $$\beta ^{QG}=B(1-C/R_\lambda )$$ (9) It is difficult to estimate the coefficient $`C`$ from theory, but fortunately the DNS can separately evaluate the full value of $`\beta `$ and its approximate value from the quasi-Gaussian approximation. These values, along with the ratio of the full to the quasi-Gaussian contribution, are given in Table 1. The full value is from , and the quasi-Gaussian value has been computed from the same data by Vedula and Yeung, and sent to us privately. The full value is three times the value reported in Table 1 of , since it is the sum over all three Cartesian components which is used here. The values of $`\beta ^{QG}`$ are well fit by (9) with $`B=3.75`$ and $`C=14.7`$. The value of $`B`$ is reasonably close to the value of $`B=3.5`$ in (8). We have no interpretation for the numerical value of $`C`$. Since the large $`r`$ cutoff is in the energy containing range, this value should depend on the large scale flow conditions, and is not expected to be universal. The ratio of the full value to the quasi-Gaussian value is determined by non-Gaussian corrections in the dissipation range of scales $`r`$. These corrections are well known to increase as the scale size $`r`$ decreases. In the inertial range, they are characterized by the well-known anomalous scaling exponents for the fourth order structure functions. The situation in the dissipation range is more complicated and less well characterized, but the dominant physical phenomenon is still the increasing intermittency with decreasing scale size. We thus refer to the corrections to the quasi-Gaussian value as intermittency corrections.
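With the fitted values $`B=3.75`$ and $`C=14.7`$, the formula $`\beta ^{QG}=B(1-C/R_\lambda )`$ shows directly why the finite Reynolds number correction matters for the DNS but is negligible for the experiment. A small sketch (the listed $`R_\lambda `$ values are illustrative, chosen to span the DNS and experimental ranges):

```python
# Quasi-Gaussian beta with the fitted coefficients: it rises noticeably
# across the DNS range but is essentially flat by R_lambda ~ 1000.

B, C = 3.75, 14.7

def beta_qg(r_lambda):
    return B * (1.0 - C / r_lambda)

for rl in (40, 90, 235, 1000, 2000):   # illustrative Reynolds numbers
    print(rl, round(beta_qg(rl), 3))
# At R_lambda = 1000 and 2000 the values (3.695 and 3.722) differ from
# the asymptote B = 3.75 by less than 2%.
```

By contrast, at the low end of the DNS range the correction term is a sizable fraction of the asymptotic value.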
In Table 1, we see that $`\beta ^{QG}`$ increases by a factor of $`1.58`$ from the smallest to the largest Reynolds number in the DNS, and that the intermittency correction increases by a factor of $`1.66`$ in the same range. For higher Reynolds numbers, however, (9) indicates that $`\beta ^{QG}`$ is essentially constant. The Reynolds number dependence of the intermittency correction is difficult to estimate since the cancellation of the terms in (2) can be a sensitive function of Reynolds number. From Table 1, these corrections appear to increase approximately as $$\beta /\beta ^{QG}=\mathrm{const}\times R_\lambda ^{0.23},$$ (10) but the power law fit is not very accurate, showing marginal signs of flattening at higher $`R_\lambda `$. The experiment by Voth et al is in the flow between counter-rotating disks in a range of $`R_\lambda `$ from $`1000`$ to $`2000`$. The measured value of $`\beta `$ in this range is independent of $`R_\lambda `$ and is $`21\pm 9`$. Most of the uncertainty is in the absolute value; the Reynolds number dependence is known fairly accurately. If we extrapolate the intermittency correction from the DNS of Vedula and Yeung, we obtain a value of $`\beta =14.0`$ at $`R_\lambda =1000`$ and $`\beta =16.4`$ at $`R_\lambda =2000`$. We do not know if the experiment is accurate enough to exclude this weak increase in this Reynolds number range, but the absolute value is large enough that substantial corrections to the quasi-Gaussian approximation are surely present. Now let us examine some of the subtle effects which can cause a complicated Reynolds number dependence of the intermittency corrections. First look at (2) in the quasi-Gaussian approximation. All of the integrands show a broad peak in the neighborhood of $`r=10\eta `$.
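As a quick consistency check, the two extrapolated values quoted above should differ by the factor $`2^{0.23}`$ implied by the $`R_\lambda ^{0.23}`$ power law, since the quasi-Gaussian baseline is nearly constant between $`R_\lambda =1000`$ and $`2000`$:

```python
# Check that the quoted extrapolations beta(1000) = 14.0 and
# beta(2000) = 16.4 follow the R_lambda^0.23 power law.

ratio_quoted = 16.4 / 14.0
ratio_power_law = 2.0 ** 0.23
print(ratio_quoted, ratio_power_law)  # 1.171 vs 1.173: agree to ~0.2%
```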
Substituting (4), (5) and (6) into (2), all of the integrals can be done analytically to give $$\beta ^{QG}=(12/225\alpha )(1.5000+3.4286-4.5000)=(4/175\alpha )\simeq 3.5,$$ (11) where the first positive term comes from $`L(r)`$, the second positive term from $`T(r)`$, and the negative term from $`M(r)`$. The cancellation among the three terms is substantial. If the intermittency corrections to each term have slightly different Reynolds number dependences, this slight difference can be magnified in an unpredictable way in the net value of $`\beta `$. Further, the Hill–Wilczak formula (2) is for isotropic turbulence, and is sensitive to corrections due to anisotropy. The experiment is not fully isotropic even at dissipation scales. Corrections due to anisotropy can be substantial, and nothing is known about their Reynolds number dependence. Neglecting such possible differences, it is still a subtle and difficult question to estimate the Reynolds number dependence of the intermittency correction to the longitudinal contribution. This is governed, to a good approximation, by $`F(10\eta )/3`$, where $`F(r)`$ is the longitudinal flatness factor $$F(r)=\langle \mathrm{\Delta }u^4(r)\rangle /\langle \mathrm{\Delta }u^2(r)\rangle ^2,$$ (12) which is $`3`$ for a Gaussian distribution. The crossover of this flatness from the inertial to the dissipation range is quite complicated because of the multifractal distribution of dissipation length scales. A specific model of this multifractal crossover has been calculated by Eggers and Wang. Working with the input data to Fig. 6 of that paper, we have calculated the correction factor $`F(10\eta )/3`$ versus Reynolds number in the Reynolds number range of the experiment. It increases somewhat more strongly than the extrapolation from the DNS, and thus does not by itself explain the observed plateau in the scaled acceleration variance. 
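The cancellation in (11) can be checked with exact rational arithmetic, assuming the quoted 3.4286 stands for the exact rational 24/7 (our reading of the rounded value):

```python
from fractions import Fraction

# The three contributions in Eq. (11): the L(r), T(r) and M(r) terms.
L_term = Fraction(3, 2)    #  1.5000
T_term = Fraction(24, 7)   #  3.4286 (read as 24/7, an assumption)
M_term = Fraction(-9, 2)   # -4.5000

net = L_term + T_term + M_term          # 3/7: what survives the cancellation
gross = L_term + T_term + abs(M_term)   # 66/7: sum of the magnitudes

assert net == Fraction(3, 7)
assert Fraction(12, 225) * net == Fraction(4, 175)  # prefactor in Eq. (11)
print(float(gross / net))  # 22.0: the terms cancel to ~1/22 of their size
```

The factor of 22 quantifies why slightly different Reynolds number dependences of the three terms can be strongly magnified in the net value.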
We note that the longitudinal velocity derivative flatness in the flow between counter-rotating disks has been observed to have a plateau in its Reynolds number dependence in the same range of Reynolds numbers studied by Voth et al. The origin of this plateau is not understood. It is not known if it is a general feature of turbulence at these Reynolds numbers, or if it is a special feature of the flow between counter-rotating disks. The present study started with the idea that these two experiments might be related, but we have not found any simple connection between them. To summarize, there is no essential disagreement between the low Reynolds number DNS and the high Reynolds number experiment. In the DNS, a large part of the increase in the scaled mean square pressure gradient $`\beta `$ arises from finite Reynolds number corrections within the quasi-Gaussian approximation, and these corrections are obtained directly from the DNS. These corrections are negligible for the high Reynolds number experiment. The remaining intermittency corrections increase rather slowly with Reynolds number, and it is very difficult to estimate how they will extrapolate to the Reynolds number range of the experiment. The experimental value of $`\beta =21\pm 9`$ has considerable uncertainty in its absolute value, but it is almost surely much larger than the quasi-Gaussian value, which can be accurately calculated as $`\beta ^{QG}\simeq 3.5`$. The approximate Reynolds number independence observed for $`\beta `$ does not mean that the experiment is consistent with Kolmogorov 1941 scaling, but only that the intermittency corrections, though large, are not varying rapidly with Reynolds number in the range of the experiments. We would like to thank Eberhard Bodenschatz, Patrick Tabeling, P.K. Yeung, Greg Voth and Victor Yakhot for useful conversations concerning a preliminary version of this paper. We would also like to thank Prakash Vedula and P.K. 
Yeung for calculating the quasi-Gaussian contribution from their DNS, and for sending the results to us before publication, and Jane Wang for giving us the input data for Figure 6 of the paper by Eggers and Wang. Finally we would like to thank K. R. Sreenivasan for pointing out the possible importance of anisotropy in interpreting the high Reynolds number experiment.
# Poincaré’s Recurrence Theorem and the Unitarity of the $`S`$–Matrix

## Abstract

A scattering process can be described by suitably closing the system and considering the first return map from the entrance onto itself. This scattering map may be singular and discontinuous, but it will be measure preserving as a consequence of the recurrence theorem applied to any region of a simpler map. In the case of a billiard this is the Birkhoff map. The semiclassical quantization of the Birkhoff map can be subdivided into an entrance and a repeller. The construction of a scattering operator then follows in exact analogy to the classical process. Generically, the approximate unitarity of the semiclassical Birkhoff map is inherited by the $`S`$–matrix, even for highly resonant scattering where direct quantization of the scattering map breaks down.

For a classical conservative system, whether discrete or continuous in time, Poincaré’s celebrated theorem can be reduced to the general statement that the probability for an orbit to return to any given region is unity if the motion is bounded. There is no restriction on the time this recurrence will take, which may vary widely among orbits starting in different subregions. In any case, by waiting a sufficiently long time, the first return of each trajectory defines a measure preserving map of the region onto itself. The requirement that the system be bounded in no way hinders the theorem’s application to classical scattering problems, because we can always choose the appointed region to coincide with the opening of the scattering system to the outside world. Since we are only interested in the first return of the orbits to the chosen region, it makes no difference that the system is not really closed. In other words, we can still apply the theorem if the union of the scatterer and the opening combine to form a bounded measure preserving map. 
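The first-return construction is easy to illustrate numerically. The minimal sketch below iterates a rigid rotation of the circle (the integrable bounce dynamics of the circular billiard discussed next, where $`p_s`$ is constant and the polar angle advances by a fixed amount at each bounce) until the orbit re-enters a chosen "opening" arc; the step and arc width are illustrative choices.

```python
import math

def first_return(theta: float, step: float, half_width: float):
    """Iterate a rigid rotation of the circle until the orbit re-enters the
    opening arc |theta| < half_width (centred at 0); return the first-return
    point and the recurrence time in bounces."""
    n = 0
    while True:
        theta = (theta + step) % (2 * math.pi)
        n += 1
        # angular distance of theta from the arc centre at 0
        if min(theta, 2 * math.pi - theta) < half_width:
            return theta, n

# A diameter orbit (step = pi) misses the opening once and re-enters on the
# second bounce; a generic step gives a different recurrence time.
_, n_diameter = first_return(0.1, math.pi, 0.5)
_, n_generic = first_return(0.1, 2.0, 0.5)
print(n_diameter, n_generic)  # 2 3
```

Collecting the returns over all entry conditions assembles exactly the (generally discontinuous, but measure preserving) scattering map described in the text.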
As a first example consider the simple scattering system composed of a circular billiard opening onto a straight tube, as shown in Fig. 1. In this case, the closure of the dynamical system can be reduced to the Birkhoff map (or bounce map) for specular collisions of the straight trajectories with the circular boundary. The phase space is defined by the boundary coordinate $`s`$ (or the angle $`\theta `$, in the case of unit radius) and $`p_s`$, the tangential momentum (proportional to $`\mathrm{cos}\alpha `$, where $`\alpha `$ is the angle of incidence). The closed dynamics is very simple in this case: $`p_s`$ is constant (integrable motion) and $`\mathrm{\Delta }\theta =2\alpha `$. Nonetheless, the scattering map of the orbits returning to the opening is discontinuous. Indeed, it is composed of an infinite sequence of diminishing subregions, of which the first few are shown in Fig. 2. Therefore, a maximally simple closed dynamics induces a relatively complex (resonant) scattering map. It is only in the (non resonant) limit where the size of the opening approaches the diameter of the circle that the scattering map is also simple. Consider now the less obvious example of the specular scattering from three disks, which has become the paradigm of chaotic classical scattering. It may appear that our choice of closing surface in Fig. 3 amounts to an overkill, since we are not interested in orbits such as a in Fig. 3. However, this integrable motion described in the previous example does not mix with scattering orbits such as b, so we can subtract it from the phase space of our system. This is then composed of the Birkhoff coordinates of the external circuit, restricted to small $`p_s`$, added to the full Birkhoff coordinates of the three scattering circles, as shown in Fig. 4. 
Evidently, we obtain the useful asymptotic scattering picture by making the radius of the outer circle arbitrarily large, so that we can identify the exit direction of an orbit with the point where it collides with the outer circle. Even so, the useful area for the outer circle in the phase space of Fig. 4 will be smaller than that of the three disks combined. The dynamics for the first return of the closed map is indicated by the different regions in Fig. 4. This is less trivial than in the previous example, but it is nowhere singular. The first return map between the four circles is hyperbolic and discontinuous, similarly to the baker’s transformation, but the full complexity of the motion only arises through the multiple iterations needed to compose the scattering map of first returns to the outer circle. This map exhibits a fractal structure of singularities generated by motion that nearly enters on the stable manifold of periodic orbits within the scatterer. In both our simple examples it may still be necessary to relate the map restricted to the opening to a map purely determined by the measurement to be performed. In the case of the circle that opens onto the tube, one is finally concerned with the orbits entering and leaving the other end of the tube, whereas in the three disk case we should connect the enclosing circle to an asymptotically large circle. The resulting map is known as the Poincaré scattering map. Our immediate concern will be the quantization of the (first return) internal map rather than the details of its outer connections. An evident conceptual advantage is achieved by understanding the structure of scattering maps on the basis of multiple iterations of their closure, even though it may be necessary to reverse this procedure in experimental situations. 
Our purpose is now to show that we can transfer to semiclassical scattering a construction of open and closed systems corresponding to the one which we have been employing in the classical theory. The starting point is to note that we can always define a finite Hilbert space that will correspond semiclassically to a finite phase space. Indeed, the dimension of the Hilbert space corresponding to a two dimensional classical phase space of area $`A`$ will be $`N=A/2\pi \hbar `$, where $`\hbar `$ is Planck’s constant. The prescription given by Miller for the approximately unitary quantum map $`U`$ is given in the coordinate representation as $$U(q,q^{\prime })\simeq \frac{1}{\sqrt{2\pi \hbar }}\sum _j\left|\frac{\partial ^2\sigma _j}{\partial q\,\partial q^{\prime }}\right|^{1/2}e^{i\sigma _j(q,q^{\prime })/\hbar +i\mu _j},$$ (1) where $`\mu _j`$ is the Maslov phase and $`\sigma _j(q,q^{\prime })`$ stands for the generating function of the classical map, given implicitly by $$p^{\prime }=\frac{\partial \sigma _j}{\partial q^{\prime }},\qquad p=-\frac{\partial \sigma _j}{\partial q},$$ (2) the index $`j`$ indicating that there may be more than one orbit for a given pair of points $`(q^{\prime },q)`$. Of course, we should worry about the discretization of coordinates, due to the finite dimension ($`N`$) of the Hilbert space, but the approximations will hold when $`N`$ is large. Furthermore, the semiclassical approximation (1) will leave out evanescent modes and diffraction effects (e.g. in the three disks problem), but again these will give relatively small contributions in the large $`N`$ limit. We can check the approximate unitarity of (1) by noting that, for each continuous region of area $`A_j`$, $$N_j=\text{Tr}\,U_jU_j^{\dagger }\simeq \int \frac{dq\,dq^{\prime }}{2\pi \hbar }\left|\frac{\partial ^2\sigma _j}{\partial q\,\partial q^{\prime }}\right|=\frac{A_j}{2\pi \hbar }.$$ (3) Thus, we are quantizing separately each subregion in a way that increases border effects for discontinuous maps with many subdivisions. We have shown that this is typical of scattering maps. 
If they are sufficiently resonant, as in our examples, we obtain $`N_j\lesssim 1`$ for many subregions, which are hence beyond the range of validity of the Miller prescription. The quantum signature of this resonance problem is the need to account for a large number of evanescent modes that cannot be related to real classical orbits. It is important to note that a given scattering experiment may involve the amplitude of a single element of the $`S`$–matrix in the appropriate representation. We thus require detailed local knowledge of this operator, rather than merely its traces or other coarse grained information. Experience with the quantized baker’s map and other discontinuous maps analogous to the present scattering situation shows that an iteration of the map may have a reasonable semiclassical approximation for its trace beyond the time when the semiclassical approximation for the full map has broken down. Moreover, we are not concerned with the small departure from unitarity of the Miller construction for a continuous classical map. Instead we seek to remedy its complete breakdown for highly resonant scattering maps. The way out is to rely on the construction of the scattering map from the multiple iterations of the simpler closed map. We thus need the following result, which may be considered as the quantization of the recurrence theorem: Given a finite Hilbert space $`H_N`$ subdivided into two orthogonal subspaces $`H_{N_0}=P_0H_N`$ and $`H_{N_1}=P_1H_N`$ (such that the projection operators satisfy $`P_0+P_1=1_N`$), and given a unitary operator $`U_N`$ defined on $`H_N`$, then the operator $$S_{N_0}=P_0U_N\left[1-P_1U_N\right]^{-1}P_0=P_0U_N\sum _{m=0}^{\infty }\left(P_1U_N\right)^mP_0$$ (4) is unitary on $`H_{N_0}`$. 
It should be noted that this is a much stronger property than the obvious “weak unitarity” statement $$\psi _0\in H_{N_0}:|\psi _0|^2=\sum _{m=0}^{\infty }|P_0U_N\left(P_1U_N\right)^m\psi _0|^2,$$ (5) which corresponds to the overall conservation of classical probability. To outline the proof of the theorem, we define the rectangular blocks of the operator $`U_N`$, namely $`U_{ij}=P_iUP_j`$, and the corresponding blocks of its hermitian conjugate, $`U_{ij}^{\dagger }=P_iU^{\dagger }P_j`$. The unitarity of $`U_N`$ implies $$U_{00}U_{00}^{\dagger }+U_{01}U_{10}^{\dagger }=1_{N_0},$$ (6) $$U_{00}U_{01}^{\dagger }+U_{01}U_{11}^{\dagger }=0,$$ (7) $$U_{10}U_{01}^{\dagger }+U_{11}U_{11}^{\dagger }=1_{N_1},$$ (8) where $`1_{N_j}`$ coincides with the nonsingular block of $`P_j`$. In this notation Eq. (4) reads $`S_{N_0}=U_{00}+U_{01}[1_{N_1}-U_{11}]^{-1}U_{10}`$. Then it is straightforward, though a little lengthy, to verify that $`S_{N_0}S_{N_0}^{\dagger }=S_{N_0}^{\dagger }S_{N_0}=1_{N_0}`$ is a consequence of Eqs. (6–8). Consider now a basis for $`H_{N_1}`$ in which $`P_1U`$ is diagonal, with states $`\psi _1^j`$ and corresponding eigenvalues $`\lambda _1^j`$. If the elements of $`U`$ that couple such a state to the subspace $`H_{N_0}`$ are labeled $`U_{10}^{jk}`$, then $$|\lambda _1^j|^2=1-\sum _k|U_{10}^{jk}|^2.$$ (9) It follows that each eigenvalue of $`P_1U`$ lies inside the unit circle, unless the corresponding eigenstate is completely uncoupled from $`H_{N_0}`$ (in which case this eigenstate should be subtracted from $`H_{N_1}`$). The quantum map $`P_1U`$ is therefore strictly dissipative, corresponding to the classical scatterer that loses orbits at each iteration. This fact is essential for the convergence of the sum in the expansion (4) for $`S_{N_0}`$. We can now apply the exact result (4) to scattering by identifying $`U_N`$ with the approximate semiclassical map (1) for the closure of the scattering system. The resulting scattering matrix given by (4) determines the on–shell $`S`$–matrix for fixed energy. 
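The theorem is easy to verify numerically for a random unitary $`U_N`$; a minimal sketch (the dimensions and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, N0 = 8, 3

# A random unitary U_N from the QR decomposition of a complex Gaussian matrix.
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(Z)

U00, U01 = U[:N0, :N0], U[:N0, N0:]
U10, U11 = U[N0:, :N0], U[N0:, N0:]

# Eq. (4) in block form: S = U00 + U01 (1 - U11)^{-1} U10.
S = U00 + U01 @ np.linalg.solve(np.eye(N - N0) - U11, U10)

assert np.allclose(S @ S.conj().T, np.eye(N0))    # unitary on H_{N0}
assert np.abs(np.linalg.eigvals(U11)).max() < 1   # Eq. (9): strictly dissipative
```

Because every eigenvalue of $`U_{11}`$ lies strictly inside the unit circle, the truncated geometric series in (4) converges to the same $`S`$, mirroring the classical scatterer that loses orbits at each iteration.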
If the semiclassical approximation for $`U`$ departs from unitarity by order $`ϵ`$, the scattering matrix will err by an order $`ϵ\left[1-U_{11}(ϵ)\right]^{-2}`$. So there could be a large deviation of the $`S`$–matrix from unitarity if one of the eigenvalues of $`U_{11}`$ were sufficiently close to the unit circle. However, $`ϵ`$ vanishes as $`\hbar \rightarrow 0`$, whereas the coupling of the scatterer to the opening is classically strong in our examples, so that (9) guarantees that asymptotically $`\left[1-U_{11}(ϵ)\right]^{-2}`$ remains finite. Therefore we can use the quantum recurrence theorem as a basis for the semiclassical approximation of the scattering matrix. In our examples, the energy dependence of the classical map is trivial and it can be scaled away, but the area of the phase space grows with energy, modifying the dimension of the corresponding Hilbert space. Another way to see this important energy dependence of the quantum mechanics is through the growth of the actions of the orbits, i.e. the generating functions $`\sigma _j`$ in (1). In the case of smooth potentials, rather than billiards, even the energy dependence is nontrivial, but these scattering systems can also be treated within our conceptual framework by introducing quantum Poincaré sections in the manner of Bogomolny or Rouvinez and Smilansky. Formally, we could reobtain the standard Miller formula for the scattering amplitudes by doing the matrix multiplications in the infinite expansion (4) by the method of stationary phase. If we keep the semiclassical representation (1) for the unitary matrix $`U`$ and compute $`\left[1_{N_1}-U_{11}\right]^{-1}`$ using “Fredholm Theory”, we rederive precisely the semiclassical scattering theory of Georgeot and Prange. Our approach shows that each of the operators employed in the semiclassical Fredholm theory originates in the quantization of Poincaré’s recurrence theorem, and it elucidates their approximate unitary or dissipative character. 
Finally, our results allow for a minimal semiclassical approximation for the scattering matrix. The only approximation is the use of Miller’s theory for the global map $`U_N`$ (rather than for the scattering map, as in previous treatments), using the classical variables corresponding to the entrance and exit channels. The problem is then reduced to the numerical inversion of the matrix $`\left[1_{N_1}-U_{11}\right]`$, whose dimension is the area of the classical phase space of the scatterer divided by Planck’s constant. ###### Acknowledgements. We would like to thank C. Anteneodo and R. Markarian for useful suggestions. This work was supported by the Conselho de Desenvolvimento Científico e Tecnológico (CNPq/Brazil).
# Carbon films with a novel sp<sup>2</sup> network structure

## 1 Introduction

The production of hard carbon based materials using cathodic arc and laser ablation techniques has been reported previously. High film hardness has been attributed to the presence of a high percentage of sp<sup>3</sup> (diamond-like) bonds, whereas a high concentration of sp<sup>2</sup> (graphitic) bonds is regarded as leading to the formation of soft films. However, the discovery of the C<sub>60</sub> fullerene molecule and carbon nanotubes, which are sp<sup>2</sup> bonded, opens up the possibility of obtaining 3-D structures which exploit the extremely strong in-plane bonds of graphite (stronger than diamond) in a new class of hard thin film materials. The reports of the formation of hard sp<sup>2</sup> bonded materials by anisotropically pressing C<sub>60</sub> and by embedding distorted fullerene-like nanoparticles in an amorphous carbon matrix were early indications that such a carbon material could be synthesised. However, the structure of such high fraction sp<sup>2</sup> bonded materials with superior mechanical properties is not known in detail. In this paper we report a novel microstructure of hard and elastic carbon films prepared via a laser initiated pulsed cathodic arc method (Laser-Arc). Both High Resolution Electron Microscopy (HREM) and Electron Energy Loss Spectroscopy (EELS) show evidence for a new form of carbon thin film material which consists of sets of curved graphene sheets mixed with amorphous carbon, in which sp<sup>2</sup> carbon bonding dominates. Although a curved graphene sheet structure has previously been proposed for CN<sub>x</sub> materials, this is the first time that HREM and EELS have been used together in order to get a more complete view of both structure and bonding in a pure carbon material.

## 2 Mechanical Properties

The hard and elastic films were prepared using a carbon plasma produced by the Laser-Arc method without any gas ambient. 
Films deposited by this method on Si substrates typically have Young’s modulus values of E = 400–700 GPa as measured using laser induced acoustic waves. The value depends on the specific deposition conditions (e.g. substrate temperature and arc current). For the sample under investigation a Young’s modulus of E = 480 GPa was determined. A nominal hardness of H = 45 GPa and an elastic recovery of 85$`\%`$ have been measured from indentation experiments. A typical microindentation curve from such a film is shown in Fig. 1. The plastic hardness quoted was calculated from the indentation curves on the basis of the well known Oliver and Pharr method. There is some doubt regarding the absolute accuracy of the hardness obtained by this method. Nevertheless, the hardness is consistent with the rule of thumb H $`\approx `$ E / 10, which is well proven for amorphous carbon films. The very distinct characteristic of these films is the very small overall indentation depth and the very high elastic recovery after being subjected to a maximum load of the magnitude shown in Fig. 1. This is a clear indication of a minimal plastic deformation regime. Most other thin films (other than diamond) will show strong evidence of plastic deformation at such load levels. On the other hand, a soft and elastic film, for example rubber, will show a very high degree of elastic recovery, but the maximum indentation depth would be an order of magnitude larger. Therefore, indentation characteristics such as those measured from Fig. 1 are a clear signature of thin films with superior mechanical properties. 
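The two figures of merit used above can be written down directly. A small sketch follows; the depth values are illustrative choices, not the measured curve, and the recovery formula is the one quoted in the figure caption:

```python
def elastic_recovery(d_max: float, d_min: float) -> float:
    """(d_max - d_min) / d_max, with d_max the maximum displacement and
    d_min the residual displacement after unloading (caption formula)."""
    return (d_max - d_min) / d_max

# Illustrative depths chosen to reproduce the 85% recovery quoted in the text.
recovery = elastic_recovery(100.0, 15.0)

# Rule of thumb for amorphous carbon films quoted above: H ~ E / 10.
E = 480.0            # GPa, measured Young's modulus
H_estimate = E / 10  # 48 GPa, close to the measured H = 45 GPa

print(recovery, H_estimate)  # 0.85 48.0
```

The consistency of the measured H = 45 GPa with E/10 is what supports the Oliver–Pharr value despite the known limitations of the method at such small indentation depths.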
## 3 Microstructural characterisation

HREM studies were carried out using a JEOL 2000EX microscope having a 0.21 nm point to point resolution, whereas a VG HB601 STEM with a 1 nm probe diameter was employed to collect the EELS spectra. The HREM and EELS analyses were performed on samples prepared by cleaving the specimen perpendicular to the interface. No preparation treatments involving ion beams or chemical etching were used. This significantly reduced the possibility of introducing microstructural artifacts. Figures 2(a) and (b) show cross-sectional HREM micrographs taken under differing defocus conditions from the same thin area of the specimen. They reveal a structure which is very different from that usually seen in amorphous carbon films. Parallel curved fringes grouped together in sets are mixed, forming patterns of swirls and concentric rings, typically 4–6 nm in outer diameter. Selected area diffraction (SAD) showed two broad diffuse rings at around 0.113 nm and 0.210 nm, which arise from the amorphous carbon. In addition, a much sharper ring was seen centred at 0.363 nm but with an actual spread between 0.358 and 0.368 nm. These values were determined using the Si substrate material as calibration for the SADP. The 0.363 nm spacing is consistent with the calculated spacing between the carbon layers in bucky onion structures. The shape of the fringes suggests that they represent curved graphene sheets similar to those found in carbon nanotubes and bucky onions. These images seem to suggest that a substantial part of the film is made up from fragments of such nanoparticles mixed in a random way, forming a co-continuous matrix with the amorphous carbon component. As the specimen thickness increases beyond 25 nm, it becomes progressively more difficult to see the individual graphene sheets and the film takes on a classic “amorphous” appearance. 
The characteristic pattern formed by the graphene sheets is seen much more clearly in Fig. 2(b), which was taken close to Gaussian focus, as compared to Fig. 2(a), which was taken close to Scherzer defocus. The details of the image contrast in these HREM images are controlled by the contrast transfer function of the microscope. The frequencies of interest are 2.75 nm<sup>-1</sup>, which corresponds to the graphene plane spacing (0.363 nm), and the region around 4.76 nm<sup>-1</sup>, which corresponds to the range of atomic spacings about 0.21 nm, the periodicities associated with the diffuse diffraction ring often observed in amorphous carbon (a-C). This latter 0.21 nm spacing is close to the (100) graphite and $`\{111\}`$ diamond spacings and can be associated with either type of short range order in a-C. Calculations of the contrast transfer function of our microscope have been performed for Scherzer and Gaussian focus conditions. In both situations the 0.363 nm (graphene) periodicity is faithfully transferred. However, in the latter case the amorphous periodicities around 0.21 nm are heavily attenuated relative to those for Scherzer defocus. Consequently, for the image at Gaussian focus, Fig. 2(b), the graphene-like features are accentuated at the expense of the amorphous component. It is also possible to accentuate the graphene periodicities in the Scherzer defocus micrograph using image processing techniques. For example, the Fast Fourier Transform (FFT) of the area indicated in Fig. 2(a) has been filtered to remove low periodicities and subsequently back transformed to produce the clearer image shown in Fig. 2(c). 
Once again the curved graphene plane features become strongly visible at the expense of the amorphous background. Electron Energy Loss Spectroscopy was chosen as the most appropriate tool for studying the bonding configuration in the carbon network. Since this technique has been used extensively for carbon films in the past, correlations between our film and other hard carbon materials reported to date are possible. Figure 3 shows EELS spectra acquired from different parts of the film. The $`1s\sigma ^{*}`$ peak at 292 eV is the signature of $`\sigma `$-bonding while the $`1s\pi ^{*}`$ peak at 285 eV indicates $`\pi `$ bonding in our material. The ratio of the integrals under the two peaks is frequently used to estimate the amount of $`\pi `$ bonding present. Effects due to background, plural scattering and zero loss energy width have been deconvoluted from all these spectra. The result of these effects needs to be carefully considered to obtain an accurate measure of the $`1s\pi ^{*}`$ energy loss. We used a C<sub>60</sub> fullerite crystal as the calibration material since it has a known 1:3 ratio of $`\pi `$ to $`\sigma `$ bonds and is free from the orientational effects which can arise in graphite. Equating the heights of the $`1s\sigma ^{*}`$ peaks for all spectra, we can gain qualitative information about the fraction of $`\pi `$-bonding in our material by directly comparing the intensities of the $`1s\pi ^{*}`$ peaks. Spectrum 3(a) shows the C K-edge of a pure C<sub>60</sub> fullerite crystal, which is taken as the representation of a purely sp<sup>2</sup> material with randomised bond orientation. Spectrum 3(b) shows the C K-edge from a highly oriented graphite sample with the basal planes parallel to the incident beam direction (i.e. 
the $`p`$ orbitals are perpendicular to the optical axis). This spectrum is used as a reference for the maximum orientational enhancement of the $`1s\pi ^{*}`$ peak that could occur for the convergence and collection angles employed in this study. Comparing spectra 3(a) and 3(b) we see that although both materials studied consist of sp<sup>2</sup> bonded carbon, the orientation of the basal planes in the latter case strongly enhances the $`1s\pi ^{*}`$ peak. Spectrum 3(c) shows the C K-edge obtained from a thin specimen area of our hard and elastic carbon film. Notably, the intensity of the $`1s\pi ^{*}`$ peak lies in between the intensities of the reference materials, revealing the presence of at least some orientational effects. The structure of our films shown in Figs. 2(a) and (b) suggests that the material sampled by the 1 nm EELS probe is, over the range of a few curved graphene sheets, most likely aligned parallel to the optical axis. This gives rise to orientational effects similar to those seen in spectrum 3(b). However, our material is slightly different from the graphite sample described above because within the cone defined by the convergent electron probe there are also $`\pi `$ bonds with various orientations, due to the sheet curvature and the presence of residual amorphous carbon. Therefore, the $`1s\pi ^{*}`$ peak is not quite as high as for the pure graphite sample. Following this line of argument, it would also be expected that as film thickness increases and more fullerene-like patches overlap, an eventual averaging of $`\pi `$ bond orientation should occur. 
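The C<sub>60</sub> calibration also suggests a simple quantitative reading of the π*/σ* ratio for orientation-averaged spectra. The sketch below is our own assumption, not the authors' procedure: it uses the standard bond-counting relation in which a fraction x of sp<sup>2</sup> atoms gives a π:σ bond ratio of x/(4−x), equal to the quoted 1:3 for pure C<sub>60</sub>.

```python
def sp2_fraction(ratio_sample: float, ratio_c60: float) -> float:
    """Estimate the sp2 fraction x from the 1s-pi*/1s-sigma* intensity ratio,
    normalised to a C60 reference measured under the same conditions.
    Solves x / (4 - x) = (1/3) * (ratio_sample / ratio_c60) for x."""
    r = ratio_sample / ratio_c60
    return 4 * r / (3 + r)

print(sp2_fraction(0.25, 0.25))   # 1.0: a C60-like ratio implies fully sp2
print(sp2_fraction(0.125, 0.25))  # ~0.571: half the reference ratio
```

Because of the orientational enhancement discussed above, such an estimate is only meaningful for thick, orientation-averaged regions of the film.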
Indeed, the intensity of the $`1s\pi ^{*}`$ peak does decrease when we obtain the EELS spectrum from a thicker area of the specimen, as shown in Fig. 3(d). The fact that the $`1s\pi ^{*}`$ peak intensity actually decreases to a level slightly below that for the C<sub>60</sub> fullerite is attributed to the presence of amorphous carbon with some residual sp<sup>3</sup> bonding, the relative influence of which increases in thick areas.

## 4 Discussion

Since our material consists mainly of sp<sup>2</sup> bonded carbon, the challenge is to relate the excellent mechanical properties with the observed film microstructure. A “squeezed chicken wire” model is proposed to describe the observed network of graphene sheets shown in Fig. 2. If pieces of flat chicken wire are squeezed together so that they deform, they also become linked, because single wires coming off their edges are entangled with the rest of the material. As a result it is very difficult to separate them afterwards, and a robust yet flexible structure can be formed. In analogy to pieces of chicken wire, a substantial fraction of our material consists of curved graphene sheets. These fullerene-like regions are brought into close proximity during deposition, as is evident from the high density with which they appear in HREM images. At their edges there are unsatisfied bonds through which they may bond either to the amorphous material or to other sheets. There have also been previous studies and models which have proposed that cross-linking of graphitic sp<sup>2</sup> plane regions with occasional sp<sup>3</sup> “defects” can explain the optical and electronic properties of a-C films. Here we have experimentally realised a hard graphitic material. 
However, our structure is distinct from the previous models in that it is pentagonal and heptagonal defects in the graphitic plane which give rise to curvature, and hence to a pseudo-3D graphitic structure formed by the interlinking of these curved segments. This is close to the structure envisaged by Townsend et al., who, on the basis of atomic modeling, suggested that interlinking in a purely sp<sup>2</sup> bonded structure can take place through randomly oriented pentagonal, hexagonal and heptagonal rings. Additionally, the carbon material reported here has a more periodic structure than a purely amorphous material, and the curved graphene planes (fullerene-like structure) which lead to the interlinking are clearly seen in the HREM image of Fig. 2. Our material is best described as a nanostructured carbon with a fullerene-like structure. Unlike graphite, which has strong in-plane covalent sp<sup>2</sup> bonds and weak interplanar Van der Waals bonds, the structure in our hard and elastic films creates a 3-dimensional sp<sup>2</sup> bonded carbon network. In a previous study of hard and elastic carbon films resulting from fragmentation of carbon nanoparticles, the current authors inferred that sp<sup>3</sup> diamond-like bonding was dominant in regions where nanoparticle fragments interlinked. This deduction was based on the observation of a reduced 1s-$`\pi ^{*}`$ peak intensity in the EELS K-edge spectrum obtained from an interlinked region, compared to that from adjacent single nanoparticle regions. In light of the more detailed EELS study carried out here, and taking into consideration the orientational effects on the relative magnitude of the 1s-$`\pi ^{*}`$ peak, those earlier results may also be interpreted according to the “squeezed chicken wire” model. 
In the earlier case, the orientational effects present in individual nanoparticle fragments may have become randomised when they were “squeezed” together in the interlinking region. The hardness of our films (45 GPa) is an order of magnitude higher than that of graphite-like films ($``$ 5 GPa). In a very simplified model, our material may be considered as a strongly connected arrangement of nanometer-scale stacks of graphene lamellae with different orientations. Depending on their orientation, the elastic modulus of a single stack is given in the basal-plane direction by $`C_{11}`$ = 1060 GPa and in the $`c`$ direction by $`C_{33}`$ = 36.5 GPa . Considering that these differently orientated stacks usually represent parts of the same (often closed) shell package, the effective modulus may be approximated by their parallel arrangement, i.e. as the arithmetic average. For an isotropic film structure this gives $`E_{eff}\approx (2C_{11}+C_{33})/3\approx 720`$ GPa, whereas for a textured structure $`E_{eff}\approx (C_{11}+C_{33})/2\approx 550`$ GPa may be expected. If the additional reduction by the surrounding amorphous matrix is taken into account, the latter value is consistent with the experimental value E = 480 GPa. Hence, this crude estimate shows that very high stiffness may be realised by suitably arranged graphene sheets, and it supports the impression of a certain degree of texturing. Furthermore, when sets of graphene planes are compressed, their built-in curvature causes them to recover a shape close to their initial state after deformation. This, we propose, is the origin of the apparently high elastic recovery in this material. A structure that consists of a dense array of curved graphene sheets, like the one seen in our films, is very close to the realisation of a continuous fullerene-like carbon material. Curved graphene sheets are well known for turbostratic structures.
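The arithmetic-average estimates above can be verified in a few lines (a minimal sketch using the graphite single-crystal elastic constants quoted in the text):

```python
# Quick check of the effective-modulus estimates quoted above, using the
# graphite single-crystal elastic constants cited in the text.
C11 = 1060.0  # GPa, in-plane (basal) modulus
C33 = 36.5    # GPa, c-axis modulus

E_isotropic = (2 * C11 + C33) / 3  # randomly oriented stacks, ~720 GPa
E_textured = (C11 + C33) / 2       # textured structure, ~550 GPa

print(E_isotropic, E_textured)
```

The large gap between the two averages reflects the extreme anisotropy of graphite: almost all of the stiffness comes from the in-plane $`C_{11}`$ contribution.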
The decisive difference in this case is the special arrangement of the graphene lamellae, representing closed or highly curved shells with nanometer curvatures. In this way the relative gliding of the sheets is prevented, and extreme elastic recovery is possible through local buckling of the sheets. We propose that carbon films which exhibit this type of structure constitute a new class of carbonaceous material.

Acknowledgements: We would like to thank M. Johansson and L. Hultman from Linköping University for their expert guidance in TEM sample preparation and A. Burrows for help with image processing. We would also like to thank H. Ziegele for assistance with film deposition and D. Schneider for the measurements of the Young’s moduli of the films. Finally, we are indebted to Multi-Arc UK for funding this research programme.

Figure Captions

Figure 1: Typical load-displacement curves obtained during microindentation testing. The elastic recovery is calculated using the formula $`\frac{d_{max}-d_{min}}{d_{max}}`$, where d<sub>max</sub> and d<sub>min</sub> are the maximum and minimum displacements during unloading, respectively.

Figure 2: HREM micrographs: images $`(a)`$ and $`(b)`$ were obtained from the same region of the film, taken near Scherzer defocus ($`\mathrm{\Delta }f\approx -52`$ nm) and Gaussian focus ($`\mathrm{\Delta }f\approx 0`$), respectively. Both images show sets of parallel graphene sheets forming swirls and concentric rings; in image $`(c)`$ the area indicated in image $`(a)`$ has been image processed to accentuate the curved fullerene structure.

Figure 3: EELS spectra obtained from: $`(a)`$ a pure C<sub>60</sub> fullerite crystal, which is regarded as the best description of a pure sp<sup>2</sup> material with an averaged $`\pi `$ bond orientation; $`(b)`$ a highly oriented graphite sample with the basal planes parallel to the optical axis; $`(c)`$ our hard and elastic carbon film, where the specimen has a thickness of 0.22$`\times \lambda `$; $`(d)`$ our hard and elastic carbon film, where the specimen thickness is 1.52$`\times \lambda `$. For all EELS spectra the energy resolution was 0.3 eV/channel and the convergence and collection angles were 21.3 and 3.4 mrad, respectively. The amount of plural scattering present in the low energy loss region has been used to calculate the film thickness, quoted above as a fraction of the inelastic mean free path $`\lambda `$.
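The specimen thickness in units of $`\lambda `$ quoted in the caption is conventionally extracted from the low-loss spectrum with the log-ratio method, $`t/\lambda =\mathrm{ln}(I_{tot}/I_0)`$, where $`I_0`$ is the zero-loss intensity. A minimal sketch (the intensities below are illustrative numbers, not data from this work):

```python
import math

def relative_thickness(i_total, i_zero_loss):
    """Log-ratio estimate of specimen thickness in units of the inelastic
    mean free path lambda: t/lambda = ln(I_total / I_0)."""
    return math.log(i_total / i_zero_loss)

# Illustrative low-loss intensities (arbitrary units), not measured data:
t_over_lambda = relative_thickness(1_000_000, 802_519)
print(round(t_over_lambda, 2))  # → 0.22
```

With these made-up intensities the result happens to match the 0.22$`\times \lambda `$ quoted for panel (c); the thicker region of panel (d) would correspond to a much larger total/zero-loss ratio.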
# Thermodynamics with Dynamical Clover Fermions

## Abstract

We investigate the finite temperature behavior of nonperturbatively improved clover fermions on lattices with temporal extent $`N_t=4`$ and $`6`$. Unfortunately, in the gauge coupling range where the clover coefficient has been determined nonperturbatively, the finite temperature crossover/transition occurs at heavy pseudoscalar masses and large pseudoscalar to vector meson mass ratios. However, on an $`N_t=6`$ lattice the thermal crossover for the improved fermions is much smoother than for unimproved Wilson fermions, and no strange metastable behavior is observed. FSU-SCRI-99-31 hep-lat/9905008 Simulations with Wilson fermions suffer $`𝒪(a)`$ scaling violations due to the dimension five operator that Wilson introduced to give the unwanted fermion doublers masses of order the cutoff, $`1/a`$. These scaling violations are much larger than those in the glue sector, which are $`𝒪(a^2)`$, and they can be numerically quite large, necessitating the use of small lattice spacings, at large simulation cost, to obtain results that can be reliably extrapolated to the continuum limit. The $`𝒪(a)`$ scaling violations can be reduced to $`𝒪(a^2)`$ by introducing another dimension five operator into the fermion action, the so-called clover term, $$S_{\mathrm{sw}}=c_{\mathrm{sw}}\kappa \underset{x}{}\overline{\psi }(x)i\sigma _{\mu \nu }_{\mu \nu }(x)\psi (x),$$ (1) where $`_{\mu \nu }(x)`$ is a lattice transcription of the field strength tensor $`F_{\mu \nu }(x)`$, usually taken from four open plaquettes looking like a clover leaf, as proposed by Sheikholeslami and Wohlert . For the reduction of scaling violations to work, the clover coefficient, $`c_{\mathrm{sw}}`$, needs to be determined nonperturbatively as a function of the gauge coupling. The ALPHA collaboration developed a method to do so within the Schrödinger functional framework .
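The “four open plaquettes” construction of $`_{\mu \nu }`$ can be illustrated in a toy compact U(1) gauge field, where links are phases rather than the SU(3) matrices used in actual simulations (a minimal sketch; the gauge choice, field value, and lattice site are illustrative assumptions, not parameters from the text):

```python
import cmath

B = 0.05  # constant toy field strength (illustrative value)

def U_x(x, y):  # x-direction link; Landau-like gauge: trivial
    return 1.0 + 0.0j

def U_y(x, y):  # y-direction link carries the flux
    return cmath.exp(1j * B * x)

def plaquette(x, y):
    """Counterclockwise 1x1 Wilson loop with lower-left corner (x, y)."""
    return (U_x(x, y) * U_y(x + 1, y)
            * U_x(x, y + 1).conjugate() * U_y(x, y).conjugate())

def clover_F(x, y):
    """Clover-leaf estimate of F_xy at site (x, y): average the four
    plaquettes touching the site and keep the antihermitian part
    (for U(1), simply the imaginary part)."""
    Q = (plaquette(x, y) + plaquette(x - 1, y)
         + plaquette(x - 1, y - 1) + plaquette(x, y - 1)) / 4.0
    return Q.imag

print(abs(clover_F(3, 2) - B) < 1e-3)  # → True for small B
```

For a constant field each plaquette has phase $`B`$, so the clover average recovers $`F_{xy}=B`$ up to $`O(B^3)`$ corrections; in SU(3) the same four-leaf average is taken over matrix-valued loops before antihermitian projection.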
For quenched QCD the nonperturbative clover coefficient is now known for gauge coupling $`6/g^25.7`$, corresponding to lattice spacings $`a\mathrm{}<0.17`$ fm . A substantial reduction of scaling violations in this region from $`𝒪(a)`$ to $`𝒪(a^2)`$ has been verified nicely in . The clover coefficient has recently also been determined by the ALPHA collaboration for full QCD with two flavors of dynamical fermions for gauge coupling $`\beta =6/g^25.2`$, corresponding roughly to lattice spacings $`a\mathrm{}<0.14`$ fm . To be precise, the clover coefficient was determined for $`\beta 5.4`$ and fitted to a ratio of polynomials in $`g^2`$. At $`\beta =5.2`$ a numerical consistency check with the value extrapolated with this function of $`g^2`$ was performed (see for details). Preliminary first results of hadron spectroscopy and the heavy quark potential in dynamical simulations with this nonperturbatively determined clover coefficient have appeared in . Arguably the largest lattice artifacts in simulations with dynamical Wilson fermions have been observed in simulations probing the finite temperature behavior in the vicinity of the deconfinement/chiral symmetry restoration transition or (at finite quark mass) crossover. The most likely scenario for the behavior of two-flavor QCD at finite temperature is a rapid crossover for finite quark mass, turning into a second order chiral symmetry restoring phase transition in the massless limit . However, simulations with dynamical Wilson fermions showed strange, unexpected behavior, e.g. first order phase transition like signals at intermediate quark masses, that softened again at smaller quark masses . It has been argued that this strange and unexpected behavior is due to effects of the Wilson pure gauge action in its so called “crossover region”, where the plaquette varies sharply with changes in the gauge coupling, which feeds back to the fermions, rather than being due to artifacts in the fermion action. 
Indeed, their simulations did not show any evidence for first order like signals. Similarly, a study using both an improved gauge and a clover improved fermion action, with so-called tadpole improved coefficients, found a smoother behavior in the thermal crossover region than the simulations with unimproved Wilson action for both gauge and fermion sectors . From the point of view of the Symanzik improvement program both these simulations still have $`𝒪(g^na)`$ errors of unknown magnitude. Furthermore, it is not clear, in the study where both gauge and fermion action are improved, which improvement is more important in smoothening out the thermal crossover behavior. In this letter, we study the effect of the nonperturbative improvement of the Wilson fermion action on the behavior at the finite temperature crossover. We are interested in simulations with small lattice extent in the temporal direction, i.e. at large lattice spacing, where the simulations are relatively cheap. The largest coupling for which the nonperturbative value of $`c_{sw}`$ is known is $`\beta =6/g^2=5.2`$. We performed simulations on $`8^3\times 4`$ lattices for $`\beta =5.4`$, $`5.3`$ and $`5.2`$ and various values of $`\kappa `$ in the thermal transition/crossover region. The values for $`c_{sw}`$ used, obtained from Ref. are listed in Table 1. Measurements were taken, after thermalization, over 500 trajectories away from the crossover region, and over up to 4000 trajectories in the middle of the crossover region. We show in Figure 1 the real part of the Polyakov line expectation value and in Figure 2 the average space-like plaquette. A clear crossover is seen for all three gauge couplings, becoming sharper as the coupling is increased (as $`\beta `$ is decreased). For the largest coupling, $`\beta =5.2`$, we also simulated on a larger spatial volume, $`12^3\times 4`$. As can be seen from the figures, there is no evidence for finite volume effects. 
The thermal crossover appears quite rapid, reminiscent of unimproved Wilson simulations , as compared to thermodynamics with a tadpole improved clover fermion and Symanzik improved gauge action . But, of course, comparisons should not be made in terms of bare parameters, such as $`\kappa `$. We have therefore computed hadron masses and the heavy quark potential at the thermal crossover points. For $`\beta =5.2`$ hadron masses were also computed on both sides of the thermal crossover. Most of these measurements were done on $`8^3\times 16`$ lattices. In one case, for $`\beta =5.2`$ and $`\kappa =0.1340`$, where the meson masses are lightest, we also have preliminary results from a simulation on a larger $`16^3\times 32`$ lattice. There, we do see finite size effects on the masses of about 10%, but much less on the mass ratio. We suspect that the masses for the next lightest quark mass, at $`\beta =5.2`$, $`\kappa =0.1330`$ are also affected by some small finite size effects, while for the other cases the finite size effects are expected to be smaller than the statistical errors. All the results are collected in Table 2. Even at the strongest coupling, for which the nonperturbative clover coefficient is known, the thermal crossover for $`N_t=4`$ lattices occurs at heavy pseudoscalar meson mass and large pseudoscalar to vector meson mass ratio. We therefore also considered thermodynamics on an $`N_t=6`$ lattice at $`\beta =5.2`$. $`\mathrm{R}eP`$ and $`\mathrm{T}rU_{p_s}/3`$ are shown in Figures 3 and 4, where they are compared to the results from $`N_t=4`$. The crossover seems somewhat smoother for $`N_t=6`$, and it occurs at lighter meson masses. However, at $`0.85`$ the pseudoscalar to vector meson mass ratio is still rather large. Comparing Figure 1 with similar plots for unimproved Wilson fermions such as Fig. 5 of the crossover for the improved clover fermions appears to be even more pronounced than for the unimproved Wilson fermions. 
Comparing as function of the bare hopping parameter $`\kappa `$, though, can be misleading and we therefore follow the strategy of Ref. and make a comparison as function of the quark mass, or equivalently as function of $`m_{PS}^2`$. From Ref. we can see that the crossover for Wilson fermions at $`\beta =5.12`$ and $`\kappa =0.1700`$ occurs at a pseudoscalar to vector meson mass ratio $`m_{PS}/m_V=0.899(4)`$. It appears that the unimproved Wilson fermion data at $`\beta =5.1`$ of are the set that most closely matches ours at $`\beta =5.2`$. In addition, from Ref. we have some mass measurements in the thermal crossover region. While the $`m_{PS}/m_V`$ ratios at the thermal crossover are comparable, the masses in lattice units are quite different. We therefore decided to plot $`\mathrm{R}eP`$ as function of $`(m_{PS}/m_V(\kappa _T))^2`$, where $`m_V(\kappa _T)`$ is the vector meson mass at the thermal crossover point $`\kappa _T`$. The comparison is shown in Fig. 5. It appears that the crossover for the improved fermions with $`N_t=4`$ is somewhat sharper than for the unimproved fermions. The crossover for the improved fermions on the $`N_t=6`$ lattice, on the other hand, is much smoother. The pseudoscalar to vector meson mass ratio at the crossover corresponds to that for unimproved Wilson fermion at $`\beta =5.22`$, $`\kappa =0.17`$ . This is the region where first order like metastable states were observed in . So here the improvement seems to help. In conclusion, we have carried out finite temperature simulations with nonperturbatively improved Wilson fermion in the region of largest coupling, and hence largest lattice spacing, for which the nonperturbative value of the clover coefficient $`c_{sw}`$ is known. The thermal crossover on $`N_t=4`$ lattices occurs at very heavy pseudoscalar meson masses and large pseudoscalar to vector meson mass ratios. The crossover appears somewhat sharper than for unimproved Wilson fermions at comparable $`m_{PS}/m_V`$ ratios. 
However, for the improved fermions the masses (in lattice units) are considerably larger, and the thermal crossover could still be significantly influenced by the deconfinement transition in the pure gauge theory. While the thermal crossover on an $`N_t=6`$ lattice at the strongest accessible coupling still occurs at a large $`m_{PS}/m_V`$ ratio, it has become much smoother, in particular compared to unimproved Wilson fermions, which show strange first order like behavior for comparable $`m_{PS}/m_V`$ ratios. To get the thermal crossover to occur for smaller $`m_{PS}/m_V`$ ratio, one needs to use lattices with larger temporal extent $`N_t`$ or try to push the nonperturbative determination of $`c_{sw}`$ to stronger couplings. This research was supported by DOE contracts DE-FG05-85ER250000 and DE-FG05-96ER40979. Computations were performed on the CM-2 and the workstation cluster at SCRI.
# Effects of dimensionality and anisotropy on the Holstein polaron

## I Introduction

Polarons are ubiquitous quasiparticles in deformable materials embodying the renormalizing effects of deformation quanta (phonons) on free carriers. The effects that can appear depend on the strength of the electron-phonon coupling and on the relative time scales of the free electron motion and the relevant host vibrations, this latter relationship being subsumed in the common notion of adiabaticity. Loosely speaking, at fixed adiabaticity weakly-coupled polarons may be spread over many lattice sites (“large” or “free”) while strongly-coupled polarons may be highly localized, even essentially completely collapsed (“small” or “self-trapped”). Similarly, at fixed electron-phonon coupling strength very adiabatic polarons may be quite broad, while non-adiabatic polarons may be quite compact. Part of the looseness in this characterization has to do with the nature of the self-trapping transition, and with dependences on the effective dimensionality of the host system. Well-known and widely-invoked results tied to the adiabatic approximation suggest that polarons in 1D should be qualitatively distinct from those found in 2D and 3D, and that even the notion of self-trapping should take on different meaning in low and high dimensions . The root of this lies in stability arguments suggesting that in 1D all polaron states should be characterized by finite widths, while in 2D and 3D polaron states may have either infinite radii (“free” states at weak coupling) or finite radii (“self-trapped” states at strong coupling). The self-trapping transition is thus taken to mean the abrupt transition from delocalized “free” states characterized by the free electron mass to highly-localized “self-trapped” states characterized by strongly-enhanced effective masses.
The overall conclusion of this paper, on the other hand, consistent with a growing body of independent work , is that the properties of higher-dimensional polarons are more qualitatively similar in most respects to those of 1D polarons than they are different, and that those distinctions that can meaningfully be drawn are only distantly related to the more familiar expectations outlined above. Central in this subject is the notion of anisotropy, which figures particularly strongly in quasi-1D systems such as conducting polymers or in quasi-2D systems such as high-$`T_c`$ materials. The nature of self-trapping in a quasi-1D system poses particularly potent challenges, since depending on one’s stance one may reach divergent conclusions: the polaron may self-trap, or it may not; it may be sharply localized, soliton-like, or free; its mass may be the free electron mass, may be weakly renormalized, or may be enhanced by orders of magnitude. The resolution of such ambiguities lies not uniquely at the interface between 1D and 2D systems, but in a general understanding of the role of anisotropy in polaron structure, within which the clarification of the nature of self-trapping in quasi-1D systems is a byproduct. An essential preliminary observation is that the self-trapping transition is not, in fact, an abrupt phenomenon in any dimension except in the adiabatic limit; at finite parameter values the physically-meaningful transition is more in keeping with a smooth, if rapid, “crossover” from polaron structures characteristic of the weak-coupling regime to structures characteristic of the strong-coupling regime. The “self-trapping line” describing this transition can be located by criteria sensitive to changes in polaron structure; these may involve physical observables such as the polaron ground state energy and effective mass, or may rely upon more formal properties less accessible to direct physical measurement. 
Here, we consider several physical observables at finite parameters as well as in asymptotic regimes. Through these, we are able to characterize self-trapping in one, two, and three dimensions for any degree of anisotropy. For the explicit calculations to follow, we use the Holstein Hamiltonian on a $`D`$-dimensional Euclidean lattice $`\widehat{H}`$ $`=`$ $`\widehat{H}_{kin}+\widehat{H}_{ph}+\widehat{H}_{int},`$ (1) $`\widehat{H}_{kin}`$ $`=`$ $`{\displaystyle \underset{\stackrel{}{n}}{}}{\displaystyle \underset{i=1}{\overset{D}{}}}J_ia_\stackrel{}{n}^{}(a_{\stackrel{}{n}+\stackrel{}{ϵ}_i}+a_{\stackrel{}{n}\stackrel{}{ϵ}_i}),`$ (2) $`\widehat{H}_{ph}`$ $`=`$ $`\mathrm{}\omega {\displaystyle \underset{\stackrel{}{n}}{}}b_\stackrel{}{n}^{}b_\stackrel{}{n},`$ (3) $`\widehat{H}_{int}`$ $`=`$ $`g\mathrm{}\omega {\displaystyle \underset{\stackrel{}{n}}{}}a_\stackrel{}{n}^{}a_\stackrel{}{n}(b_\stackrel{}{n}^{}+b_\stackrel{}{n}),`$ (4) in which $`a_\stackrel{}{n}^{}`$ creates a single electronic excitation in the rigid-lattice Wannier state at site $`\stackrel{}{n}`$, and $`b_\stackrel{}{n}^{}`$ creates a quantum of vibrational energy in the Einstein oscillator at site $`\stackrel{}{n}`$. All sums are understood to run over the entire infinite, periodic, $`D`$-dimensional lattice. Because there is no phonon dispersion in this model, and because the electron-phonon coupling is strictly local, it is in $`\widehat{H}_{kin}`$ where lattice dimensionality and structure have their greatest influence; the $`J_i`$ are the nearest-neighbor electronic transfer integrals along the primitive crystal axes, and the $`\widehat{ϵ}_i`$ are unit vectors associated with the primitive translations. The above model encompasses all Bravais lattices, with the different lattice structures appearing only in the relative values of the hopping integrals $`J_i`$. 
For simplicity in the following, we use terms appropriate to orthorhombic lattices in which conventionally $`i=x`$, $`y`$, or $`z`$; however, all results hold for lattices of lower symmetry with appropriate transcription of these labels to those of the primitive axes. For some purposes in this paper, we qualify and quantify anisotropy through a vector $$\stackrel{}{J}=J_x\widehat{ϵ}_x+J_y\widehat{ϵ}_y+J_z\widehat{ϵ}_z$$ (5) whose orientation in a Cartesian system can be used to objectively quantify anisotropy. In these terms, an isotropic property is one depending only on the modulus $`|\stackrel{}{J}|=(\stackrel{}{J}\stackrel{}{J})^{1/2}`$. For other purposes, however, it is convenient to think of dimensions being turned “on” or “off” according to whether particular $`J_i`$ are finite or vanishing. Anisotropy can then be tuned by varying selected $`J_i`$ in the interval $`(0,J]`$. In several illustrations to follow, we do this sequentially, so that dimensions are “turned on” one by one, arriving ultimately at the isotropic $`D`$-dimensional case in which $`J_i=J`$ along all axes. In the following, we reserve the unsubscripted scalar symbol $`J`$ to represent common magnitude of all $`J_i`$ in an isotropic case ($`J=J_i=|\stackrel{}{J}|/\sqrt{D}`$). This manner of tuning dimensionality does not isolate anisotropy, however, since changing one $`J_i`$ keeping others fixed changes both the orientation and modulus of $`\stackrel{}{J}`$. From either perspective, it is such tuning between dimensions by continuously varying the anisotropy that is the physically-meaningful concept in most situations characterized as quasi-1D ($`J_x>>J_y,J_z`$) or quasi-2D ($`J_x,J_y>>J_z`$). 
Quite apart from such very direct quantifications of anisotropy, another quantity that arises naturally in the following is the sum of the transfer integrals along each axis $$𝒥\equiv \underset{i=1}{\overset{D}{\sum }}J_i\propto \mathrm{Tr}𝐌_0^{1},$$ (6) in which $`𝐌_0^{1}`$ is the reciprocal effective mass tensor of the free electron (see Eq. 41 ff.); in the isotropic case, $`𝒥=DJ`$. Using weak-coupling perturbation theory (WCPT), identifying the unperturbed Hamiltonian as $`\widehat{H}_0=\widehat{H}_{kin}+\widehat{H}_{ph}`$ and the perturbation as $`\widehat{H}^{}=\widehat{H}_{int}`$ , one can show that the form of the polaron energy band at weak coupling in $`D`$ dimensions is given by $$E(\stackrel{}{\kappa })=E_{WC}^{(0)}(\stackrel{}{\kappa })+E_{WC}^{(2)}(\stackrel{}{\kappa })+O\{g^4\},$$ (7) where $`E_{WC}^{(0)}(\stackrel{}{\kappa })`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{D}{\sum }}}2J_i\mathrm{cos}\kappa _i,`$ (8) $`E_{WC}^{(2)}(\stackrel{}{\kappa })`$ $`=`$ $`g^2\hbar ^2\omega ^2{\displaystyle \int _0^{\mathrm{}}}𝑑te^{\hbar \omega t}{\displaystyle \underset{i=1}{\overset{D}{\prod }}}e^{2J_it\mathrm{cos}\kappa _i}I_0(2J_it),`$ (9) in which $`I_n(z)`$ is the modified Bessel function of order $`n`$.
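As a consistency check of the weak-coupling band, the $`D=1`$, $`\kappa =0`$ case of the time integral above can be evaluated both numerically and in closed form, using the Laplace transform $`\int _0^{\mathrm{}}e^{pt}I_0(at)𝑑t=1/\sqrt{p^2a^2}`$ with $`p=\hbar \omega +2J`$ and $`a=2J`$, which gives $`E_{WC}^{(2)}(0)=g^2\hbar \omega /\sqrt{1+4J/\hbar \omega }`$. A minimal numerical sketch (illustrative parameter values, units $`\hbar \omega =1`$):

```python
import math

def i0(x):
    """Modified Bessel function I_0(x) from its power series."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-17 * total:
        k += 1
        term *= (x / 2.0) ** 2 / k**2
        total += term
    return total

def e2_wc_1d(g, J, T=60.0, n=20000):
    """Second-order WCPT energy at kappa = 0 in 1D (units hbar*omega = 1):
    E2 = -g^2 * integral_0^T dt e^{-(1+2J)t} I_0(2Jt), via Simpson's rule."""
    h = T / n
    f = lambda t: math.exp(-(1.0 + 2.0 * J) * t) * i0(2.0 * J * t)
    s = f(0.0) + f(T)
    s += 4.0 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(2 * i * h) for i in range(1, n // 2))
    return -g**2 * (h / 3.0) * s

g, J = 1.0, 0.5  # illustrative values
closed_form = -g**2 / math.sqrt(1.0 + 4.0 * J)
print(abs(e2_wc_1d(g, J) - closed_form) < 1e-6)  # → True
```

The integrand is damped by $`e^{\hbar \omega t}`$ relative to the Bessel growth, so truncating the upper limit at a few tens of $`1/\hbar \omega `$ is ample.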
Using strong-coupling perturbation theory (SCPT) following the Lang-Firsov transformation, identifying the unperturbed Hamiltonian as $`\stackrel{~}{H}_0=\stackrel{~}{H}_{ph}+\stackrel{~}{H}_{int}`$ and the perturbation as $`\stackrel{~}{H}^{}=\stackrel{~}{H}_{kin}`$ , one finds $$E(\stackrel{}{\kappa })=E_{SC}^{(0)}(\stackrel{}{\kappa })+E_{SC}^{(1)}(\stackrel{}{\kappa })+E_{SC}^{(2)}(\stackrel{}{\kappa })+O\left\{\frac{\stackrel{~}{J}^3}{\hbar ^3\omega ^3}\right\},$$ (11) where $`\stackrel{~}{J}`$ is an effective, dressed tunneling parameter that may be either comparable to or much smaller than the bare $`J`$ depending on regime, and $`E_{SC}^{(0)}(\stackrel{}{\kappa })`$ $`=`$ $`g^2\hbar \omega `$ (12) $`E_{SC}^{(1)}(\stackrel{}{\kappa })`$ $`=`$ $`e^{g^2}{\displaystyle \underset{i=1}{\overset{D}{\sum }}}2J_i\mathrm{cos}\kappa _i`$ (13) $`E_{SC}^{(2)}(\stackrel{}{\kappa })`$ $`=`$ $`\frac{e^{2g^2}}{\hbar \omega }\left[f(2g^2){\displaystyle \underset{i=1}{\overset{D}{\sum }}}2J_i^2+f(g^2){\displaystyle \underset{i=1}{\overset{D}{\sum }}}2J_i^2\mathrm{cos}2\kappa _i+f(g^2){\displaystyle \underset{ij}{\overset{D}{\sum }}}J_iJ_j\mathrm{cos}\kappa _i\mathrm{cos}\kappa _j\right]`$ (16) $`f(y)`$ $`=`$ $`\mathrm{Ei}(y)\gamma \mathrm{ln}(y)`$ (17) where $`\gamma `$ is Euler’s constant and $`\mathrm{Ei}(y)`$ is the exponential integral. In Figure 1, we show results for the ground state energy in 1D, 2D, and 3D for weak-coupling perturbation theory through second order and strong-coupling perturbation theory through second order, together with corresponding results of quantum Monte Carlo simulation . This comparison shows that in any dimension weak-coupling and strong-coupling perturbation theory are both quite good up to a relatively small interval around the “knee” that is associated with the self-trapping transition.
The real knee as discernible in the quantum Monte Carlo data characteristically falls to the strong-coupling side of the intersection (or near-intersection) of the WCPT and SCPT curves, but to the weak-coupling side of $`g_{ST}`$ as discussed in Section IV . (These systematic offsets are symptomatic of the smooth nature of the physically-meaningful transition, as discussed in the next section.) Beyond validating both weak and strong-coupling perturbation theory as used in this paper, Figure 1 also holds a message regarding the qualitative character of self-trapping in different dimensions. That message is that the occurrence of self-trapping is qualitatively similar in one, two, and three dimensions, with the primary qualitative changes being that the transition systematically increases in abruptness and shifts to stronger coupling as dimensionality is increased.

## II Self-trapping preliminaries

A central result upon which the following sections build is the concept of a self-trapping line as contained in the empirical curve $$g_{ST}[1]=1+\sqrt{J/\hbar \omega }$$ (18) that has been found to accurately characterize the transition between the small and large polaron regimes in one dimension. This curve was inferred through the application of objective criteria to physical properties such as the polaron effective mass , ground state energy, kinetic energy, phonon energy, electron-phonon interaction energy , and electron-phonon correlation function . Although this simple construct consistently describes a wealth of data drawn from multiple polaron properties obtained by our own and independent methods, it is well to stress what the above relation is not, since the same limitations apply to other constructs we are led to in the balance of this paper. The self-trapping curves we address do not describe a phase transition, nor even the exact location of the objectively-determined point of crossover implicit in any one physical property.
Different physical properties generally signal the occurrence of self-trapping at distinct, though systematically and tightly clustered points on the polaron phase diagram. The empirical self-trapping line is not intended to describe any one property exactly, but to accurately describe the central trend of clusters of transition properties over a large range of the polaron parameter space. Self-trapping loci drawn from observations of different physical properties thus track the empirical trend line with their own systematic deviations that narrow as the adiabatic limit is approached. Clearly, the form of $`g_{ST}[1]`$ has not been derived from first principles. Indeed, one can easily be persuaded from approximate descriptions of the problem that the initial dependence of $`g_{ST}[1]`$ upon $`J/\hbar \omega `$ is most likely not singular, but regular, if perhaps steep . Our retention of the square root in $`g_{ST}[1]`$ and its higher-dimensional generalizations developed below thus reflects not an assertion of singular physical behavior, but merely an economy of phenomenology.

## III Self-Trapping Transition in the Scaling Regime

The location of the self-trapping line can be estimated under the practical assumption that the self-trapping transition should lie near the crossover from the weak-coupling regime to the strong-coupling regime; here, specifically, by the intersection of the ground state energy curves as given by the leading orders of each perturbation theory. This is an imperfect assumption, since the errors in both weak-coupling perturbation theory and strong-coupling perturbation theory increase in absolute terms as the transition is approached. Absolute precision in the perturbative energy is not required, however, in order to accurately locate the transition in parameter space.
Handled carefully, we can expect a WCPT/SCPT crossing condition to capture dependences that are asymptotically correct in the adiabatic strong-coupling limit, provided that relative errors in the appropriate quantities remain controlled. Since we are limited to the low orders of perturbation theory, we necessarily depend upon there being no unexpected surprises lurking in the higher orders of either weak or strong-coupling perturbation theory that upset the scaling relationships evident in the leading orders; that such might, in principle, occur is a caveat, however unlikely, that must attach to our arguments. To this end, we consider the adiabatic strong-coupling limit where the self-trapping line coincides with the adiabatic critical point; i.e., where the smooth physical transition steepens critically. The composite parameter $$\lambda =\frac{E_{SC}^0(0)}{E_{WC}^0(0)}=\frac{g^2\hbar \omega }{2𝒥}$$ (19) appears frequently in discussions of this regime because it embodies the essential scaling relationship characterizing the adiabatic strong-coupling limit. This dominant scaling relationship, $`g^2\propto 𝒥/\hbar \omega `$, guides our application of perturbation theory to the estimation of the location of the self-trapping transition. We note that $`\lambda `$ depends not on the modulus of $`\stackrel{}{J}`$ but on $`𝒥`$, the sum of the components $`J_i`$, and thus by its very definition $`\lambda `$ includes a dependence on anisotropy. The perturbative results (7) - (17) can be used to infer the expected value of the adiabatic critical point by retaining only those terms that dominate in the adiabatic strong-coupling regime; these are terms of comparable, leading magnitude when both $`g`$ and $`𝒥/\hbar \omega `$ are large such that $`g^2\propto 𝒥/\hbar \omega `$. The weak-coupling correction $`E_{WC}^{(2)}`$ is $`O\{\lambda \}`$ at large $`𝒥`$ and is thus negligible relative to $`E_{WC}^{(0)}`$, which is $`2𝒥`$.
The strong-coupling correction $`E_{SC}^{(1)}`$ is exponentially small (in $`g^2`$) relative to $`E_{SC}^{(0)}`$ and is thus negligible in the adiabatic strong-coupling regime. Of the several contributions to $`E_{SC}^{(2)}`$ appearing in (16), only the term containing $`f(2g^2)`$ is not exponentially small, and of the terms contributing to this non-exponential contribution, only a single, dominant term remains non-vanishing in the adiabatic strong-coupling regime. Thus, combining all non-vanishing terms through second order of both weak- and strong-coupling perturbation theory, the crossing condition that obtains is $$2𝒥=g^2\hbar \omega +\frac{|\stackrel{}{J}|^2}{g^2\hbar \omega }.$$ (20) The graphical solution of (20) is indicated in Figure 2. In obtaining (20), we are using perturbative results in extreme limits that may not obviously lie within the scope of the retained orders of either perturbation theory, or in principle may even lie beyond the scope of one or the other perturbation theory taken to all orders. Although the legitimacy of our arguments in this regard is beyond the scope of any available proof, it is not unsupported; that the trends in the true ground state energy are consistent with the scaling properties used to obtain (20) is evident in the results of multiple independent non-perturbative methods on both sides of the self-trapping transition . Such studies are necessarily at finite parameter values, however, and though confirmatory cannot in themselves prove that these trends continue unabated into the adiabatic limit. In Appendix A, we provide a discussion of WCPT in particular, showing how it is feasible that WCPT may continue to be valid in the scaling regime despite what may appear to be essentially strong coupling. We note that it is the second-order SCPT contribution on the r.h.s. of (20) that is the crucial element in much of the discussion that follows.
If one fails to capture this contribution to the crossing condition, the self-trapping criterion that results is simply $`\lambda =1`$. This is, in fact, a widely-asserted self-trapping condition and is not grossly incorrect in many cases; however, considerable structure is lost to the casualness with which this estimate is often used, and the potential exists for significant quantitative errors if applied in inappropriate regimes. In terms of the composite parameter $`\lambda `$ and according to the full condition (20), the adiabatic critical point in any dimension is given by $$\lambda _c=\frac{1}{2}\left[1\pm \sqrt{1-|\stackrel{}{J}|^2/𝒥^2}\right].$$ (21) Of the two roots, it is the larger, $`(+)`$ root that is the physically meaningful one, yielding the isotropic (superscripts “$`i`$”) critical values $$\lambda _c^i[D]=\frac{1}{2}\left[1+\sqrt{1-D^{-1}}\right],$$ (22) $$\lambda _c[1]=0.5,\lambda _c^i[2]=0.8536\mathrm{},\lambda _c^i[3]=0.9082\mathrm{},$$ (23) $$\lambda _c[0]=0,\lambda _c^i[\mathrm{\infty }]=1.0.$$ (24) The $`(-)`$ root, besides implying an unmeaningful dependence of the ground state energy on parameters, would imply a $`\lambda _c`$ that decreases with increasing dimensionality, contrary to considerable evidence, including the quantum Monte Carlo data shown in Figures 1 and 7. The dependence of the adiabatic critical point on anisotropy contains interesting structure (see Figure 3): In all of 1D and in each weakly anisotropic case for $`D>1`$, the dependence of $`\lambda _c`$ (solid line) on the anisotropy is essentially flat; thus, in the generic case of ordinary bulk materials with only modest anisotropies, $`\lambda _c`$ would appear to change significantly with dimensionality but to be essentially insensitive to underlying anisotropy. 
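The algebra behind (21)-(24) is compact enough to check directly. The following sketch (Python, introduced here purely for illustration; the function names are ours, not the paper's) evaluates the larger root both from the closed form and from the underlying crossing quadratic $`x^2-2𝒥x+|\stackrel{}{J}|^2=0`$ in $`x=g^2\mathrm{}\omega `$, confirming the quoted isotropic values.

```python
import math

def lambda_c(J):
    """Adiabatic critical point, Eq. (21): the larger (+) root, for
    tunneling components J = [Jx, Jy, ...]."""
    jsum = sum(J)                        # script-J: the sum of components
    jmod2 = sum(x * x for x in J)        # |J|^2: the squared modulus
    return 0.5 * (1.0 + math.sqrt(max(0.0, 1.0 - jmod2 / jsum ** 2)))

def lambda_c_from_crossing(J):
    """Same quantity via the larger root of the crossing quadratic
    x^2 - 2*jsum*x + |J|^2 = 0 in x = g^2 * hbar * omega."""
    jsum = sum(J)
    jmod2 = sum(x * x for x in J)
    x = jsum + math.sqrt(max(0.0, jsum ** 2 - jmod2))
    return x / (2.0 * jsum)

# isotropic cases: 0.5, 0.8535..., 0.9082..., matching Eq. (23)
iso = {D: lambda_c([1.0] * D) for D in (1, 2, 3)}
```

Any set of tunneling components may be passed; the two routes agree identically for anisotropic cases as well, as they must.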
On the other hand, in the case of “low-dimensional” materials characterized by weak tunneling into one or more transverse dimensions (e.g., quasi-1D scenarios with one or two transverse dimensions, or quasi-2D scenarios with one), the weakly-involved dimensions have relatively strong effects on $`\lambda _c`$: The transition between zero dimensions and any higher-dimensional case is marked by a jump discontinuity in the dependence of $`\lambda _c`$ on any $`J_i`$. The transition between one dimension and any higher-dimensional case is marked by a square-root singularity in the dependence of $`\lambda _c`$ on the transverse $`J_i`$. The transition between successive higher-dimensional cases is generically smooth, however, with the appearance of any singularity being dependent on the manner in which dimensionality is tuned (see Figure 4 ff.). The abscissa in Figure 3 is essentially the quantity $`𝒥/J`$ as $`𝒥`$ ranges from $`0`$ to $`3J`$ according to the particular scheme chosen for sequentially “turning on” higher dimensions. It is clear that while the casual criterion $`\lambda \sim 1`$ constitutes a fair order-of-magnitude characterization of the occurrence of self-trapping in bulk materials in the adiabatic limit, there is considerable qualitative structure missed. The results shown in Figure 3, though quite general in character, depend on the particular manner in which the parameters $`\{J_x,J_y,J_z\}`$ are varied relative to each other; in particular, we note that the manner in which these are varied in Figure 3 does not isolate the anisotropy. We can obtain a more global view of self-trapping in higher dimensions while simultaneously isolating the anisotropy dependence by considering not the composite parameter $`\lambda _c`$, but the more elementary coupling parameter $`g_c`$ contained within it according to (19). 
That is, $$g_c=\sqrt{2𝒥\lambda _c/\mathrm{}\omega }.$$ (25) This critical value of the coupling constant in the adiabatic limit depends both on the intensity of tunneling $`|\stackrel{}{J}|`$ and on the anisotropy. The dependence on the anisotropy can be isolated, however, in the normalized quantity $$\frac{g_c}{g_c[1]}=\left\{\frac{𝒥}{|\stackrel{}{J}|}\left[1+\sqrt{1-|\stackrel{}{J}|^2/𝒥^2}\right]\right\}^{1/2}$$ (26) in which $`g_c[1]=\sqrt{|\stackrel{}{J}|/\mathrm{}\omega }`$ represents the critical coupling parameter in the one-dimensional case subject to the condition that the 1D tunneling parameter is fixed at the value $`|\stackrel{}{J}|`$ appropriate to the $`D`$-dimensional case. While dependent on each of $`J_x`$, $`J_y`$, and $`J_z`$, the ratio (26) is independent of $`|\stackrel{}{J}|`$ and depends only on the angular variables in a spherical polar coordinate representation of the $`\{J_x,J_y,J_z\}`$ system. Eq. (26) thus describes a surface having the interpretation that the radial distance from the origin is the factor by which the critical coupling constant $`g_c`$ in $`D`$ dimensions exceeds the critical value in one dimension ($`g_c[1]`$) having the same intensity of tunneling. This surface is plotted in Figure 4, together with an isotropic (spherical) reference surface, and a surface corresponding to the condition $`\lambda _c=const`$. This latter surface shows that the oft-cited condition $`\lambda _c\sim 1`$ contains implicit anisotropy; however, it is evident that the real anisotropy of self-trapping is even greater than might be inferred from this common rule of thumb. The presentation of $`\lambda _c`$ shown in Figure 3 corresponds to a particular transit of the $`g_c`$ surface seen in Figure 4: The 0D case can be considered to occupy the origin, and the 1D cases correspond to the three corners of the displayed surface. 
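Since (26) depends only on direction, two of its properties can be verified numerically: invariance under an overall rescaling of the tunneling vector, and the square-root approach to the 1D corner noted earlier. The sketch below is illustrative only (Python; the variable names are ours).

```python
import math

def gc_over_gc1(J):
    """Eq. (26): g_c / g_c[1] for tunneling components J; depends only on
    the direction of (Jx, Jy, Jz), not on its magnitude."""
    jsum = sum(x for x in J)
    jmod = math.sqrt(sum(x * x for x in J))
    inner = 1.0 + math.sqrt(max(0.0, 1.0 - (jmod / jsum) ** 2))
    return math.sqrt((jsum / jmod) * inner)

# invariance under an overall rescaling of the tunneling vector
r_a = gc_over_gc1([1.0, 2.0, 3.0])
r_b = gc_over_gc1([10.0, 20.0, 30.0])

# square-root approach to the 1D corner: the deviation from unity scales
# as sqrt(J_transverse), so shrinking J_y by a factor 100 shrinks it by 10
d1 = gc_over_gc1([1.0, 1.0e-4]) - 1.0
d2 = gc_over_gc1([1.0, 1.0e-6]) - 1.0
```

The slow, square-root departure of `d1` and `d2` from the 1D value is the singular behavior exhibited by the surface of Figure 4 at its corners.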
The “turning on” of the second dimension according to the scheme of Figure 3 corresponds to movement along the edge of the displayed surface to the midpoint of that edge corresponding to the 2D isotropic case. The subsequent “turning on” of the third dimension corresponds to movement perpendicular from this edge along a straight line (geodesic) to the center of the surface corresponding to the 3D isotropic case. This comparison shows in particular (as may be proven analytically): 1) that both the jump discontinuity between 0D and higher dimensions and the square-root singularity between 1D and higher dimensions are generic features, not dependent on the manner or sequence with which transverse dimensions are “turned on”, and 2) that the less-singular feature seen in Figure 3 at the transition from 2D to 3D is not generic, but appears only because dimensions in Figure 3 were turned on sequentially. Thus, for a given $`|\stackrel{}{J}|`$, we can distinguish three regimes based on sensitivity to anisotropy: for $`g>g_c^i`$ there are no large polaron states at any degree of anisotropy; for $`g<g_c[1]`$ there are no small polaron states at any degree of anisotropy; for $`g_c[1]<g<g_c^i`$, large polaron states exist for sufficiently isotropic tunneling, small polaron states exist for sufficiently anisotropic tunneling, and these regimes are separated by a self-trapping transition as a function of anisotropy at fixed $`|\stackrel{}{J}|`$ and $`g`$. This effect of self-trapping as a function of anisotropy alone can be understood in terms of the size, shape, and content of the phonon cloud. As discussed in Appendix A, the more isotropic and higher-dimensional polaron scenarios are characterized by phonon clouds that are spread over the largest volumes of space and contain the fewest numbers of phonons. With increasing anisotropy, the polaron cloud grows more compressed, occupying smaller volumes of space, but being occupied by larger numbers of phonons. 
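The trichotomy just stated is simple enough to encode; the following sketch (Python; illustrative, with our own function names) classifies a coupling $`g`$ at fixed tunneling intensity $`|\stackrel{}{J}|`$ in $`D`$ dimensions, using $`g_c[1]`$ as the floor and the isotropic $`g_c^i`$ as the ceiling.

```python
import math

def g_c(J, hw=1.0):
    """Adiabatic critical coupling from Eqs. (21) and (25) for components J."""
    jsum = sum(J)
    jmod2 = sum(x * x for x in J)
    lam_c = 0.5 * (1.0 + math.sqrt(max(0.0, 1.0 - jmod2 / jsum ** 2)))
    return math.sqrt(2.0 * jsum * lam_c / hw)

def regime(g, jmod, D, hw=1.0):
    """Three regimes at fixed tunneling intensity |J| = jmod in D dimensions."""
    g1 = math.sqrt(jmod / hw)                  # g_c[1]: 1D case with the same |J|
    gi = g_c([jmod / math.sqrt(D)] * D, hw)    # g_c^i: isotropic D-dim critical value
    if g > gi:
        return "small polaron at any anisotropy"
    if g < g1:
        return "large polaron at any anisotropy"
    return "large or small, depending on anisotropy"
```

For example, with $`|\stackrel{}{J}|/\mathrm{}\omega =1`$ in 3D, couplings between $`g_c[1]=1`$ and $`g_c^i\approx 1.77`$ fall in the anisotropy-tunable window.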
If this anisotropy-driven compression can proceed sufficiently far, the number of phonons in the phonon cloud can be driven sufficiently high that self-trapping can occur. ## IV Self-Trapping away from the Adiabatic Strong-Coupling Limit At general parameter values away from extreme limits, accurate estimations of the location of the polaron self-trapping line are scarce. Until rather recently, estimates even in one dimension were largely casual rules of thumb. As noted above, a frequently-encountered characterization holds that self-trapping occurs when $`\lambda \sim 1`$; this condition is often supplemented by the condition $`g>1`$ acknowledging that the strong-coupling theory from which the $`\lambda `$ condition arises is not expected to hold to arbitrarily weak coupling. We can improve on the common rule of thumb by identifying the self-trapping transition not with a single fixed value of $`\lambda `$ (e.g., unity) but with the critical value obtaining in the adiabatic limit for the particular dimension and $`\stackrel{}{J}`$ appropriate to each unique circumstance (i.e., $`\lambda \sim \lambda _c`$). In so doing, we capture all the structure evident in $`\lambda _c`$ (Figures 3 and 4) and make the preliminary assumption that the scaling relationships that characterize the adiabatic limit hold to a meaningful degree at moderate parameter values; i.e., we may consider extrapolation of critical scaling relationships to finite parameter values. The implications of such an assumption for the elementary coupling parameter $`g_c`$ are shown in Figure 5. The shifting of these estimated self-trapping lines with anisotropy is a direct reflection of the anisotropy of $`\lambda _c`$. 
It is this qualitative character of the mutual relationships among self-trapping curves of differing anisotropies that we expect to be largely preserved as necessary corrections are made. The need for further correction is evident, for example, in that the 1D example in Figure 5 differs substantially in absolute terms from the 1D empirical curve (18) although the two are qualitatively quite similar. Moreover, all of the curves displayed in Figure 5 violate the ancillary condition $`g>1`$ at small $`𝒥/\mathrm{}\omega `$, reflecting the expected eventual failure of extrapolation from the adiabatic strong-coupling regime. We should be able to improve on this estimate by using a more complete weak/strong crossing condition employing the complete results of both perturbation theories through second order as given in (7) - (17), objectively capturing non-adiabatic corrections implicit in those terms that do not contribute in the adiabatic limit. We thus consider the condition $$E_{WC}^{(0)}(0)+E_{WC}^{(2)}(0)=E_{SC}^{(0)}(0)+E_{SC}^{(1)}(0)+E_{SC}^{(2)}(0).$$ (27) This refinement yields estimated self-trapping lines as illustrated by the truncated curves in Figure 6; these curves are truncated (arbitrarily at $`J/\mathrm{}\omega =2`$) because intersections of WCPT and SCPT begin to disappear at lower values of $`J/\mathrm{}\omega `$, as can be seen in the 1D panel of Figure 1. The effects of including non-adiabatic corrections depend on dimensionality, anisotropy, and “distance” from the adiabatic limit: 1) The self-trapping curves describing 1D and quasi-1D cases shift strongly to stronger coupling values, suggesting a corrective shift of order unity at essentially all $`\stackrel{}{J}`$. 2) The self-trapping curves describing 2D and 3D cases shift only weakly at moderate adiabaticity and more weakly with increasing adiabaticity. 
3) Except for strong corrections in the quasi-1D regime, the qualitative character of the dependence of self-trapping on anisotropy is little affected by non-adiabatic corrections. 4) At low adiabaticity, all self-trapping curves shift to stronger coupling values in a manner and to a degree consistent with a condition $`g>1`$ at $`\stackrel{}{J}=0`$ rather than the condition $`g>0`$ suggested by adiabatic scaling. Gathering all the implications of the above together, we are led to extend our 1D empirical curve (18) describing one-dimensional self-trapping to a general form describing any dimension and any degree of anisotropy. To do this we combine: a) the empirical curve $`g_{ST}[1]`$ that effectually characterizes the one-dimensional case, b) the adiabatic critical curve $`g_c`$ that effectually characterizes the higher-dimensional, higher-adiabaticity regime, and c) the adiabatic critical parameter $`\lambda _c`$ that compactly describes the qualitatively distinct characteristics of the low and high dimensionalities. From such considerations we are led to the family of empirical curves $$g_{ST}=(1+𝒥/\mathrm{}\omega )^{(\lambda _c[1]-\lambda _c)(𝒥/|\stackrel{}{J}|)}+g_c$$ (28) in which all quantities have been previously defined. This family of curves is not derived from any theory, and, apart from the 1D case, is not backed by a large body of independent high-quality data, since such data are quite sparse at the present time. What high-quality data do exist at present are quantitatively consistent with this family of curves in the same fashion that an abundance of high-quality 1D data has been found consistent with $`g_{ST}[1]`$ (see Figures 1 and 7). In keeping with the discussion of Section II, we have not attempted to regularize square-root dependences that arise naturally in the adiabatic limit, but which are most likely softened with decreasing adiabaticity. 
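For orientation, the family (28) is easily tabulated. In the sketch below (Python; illustrative only), the exponent is taken as $`(\lambda _c[1]-\lambda _c)𝒥/|\stackrel{}{J}|`$ with the difference written out explicitly, which reproduces the limiting behaviors described in the text: the 1D curve reduces to $`1+g_c`$ at every $`𝒥`$, while the higher-dimensional curves start at unity for $`𝒥=0`$ and relax onto $`g_c`$ in the adiabatic limit.

```python
import math

def g_ST(J, hw=1.0):
    """Empirical self-trapping coupling, Eq. (28), for tunneling components J;
    exponent taken as (lambda_c[1] - lambda_c) * jsum/|J|."""
    jsum = sum(J)
    jmod = math.sqrt(sum(x * x for x in J))
    lam_c = 0.5 * (1.0 + math.sqrt(max(0.0, 1.0 - (jmod / jsum) ** 2)))
    gc = math.sqrt(2.0 * jsum * lam_c / hw)
    return (1.0 + jsum / hw) ** ((0.5 - lam_c) * jsum / jmod) + gc

# 1D: the exponent vanishes, so g_ST = 1 + sqrt(J / hbar-omega) at every J
one_d = g_ST([4.0])                        # 1 + sqrt(4) = 3

# isotropic 3D, deep adiabatic regime: g_ST relaxes onto the critical line g_c
deep = g_ST([100.0, 100.0, 100.0])
gc3 = math.sqrt(2.0 * 300.0 * 0.5 * (1.0 + math.sqrt(2.0 / 3.0)))
```

At $`J/\mathrm{}\omega =100`$ the isotropic 3D curve already sits within a few percent of its adiabatic limit, while the 1D curve still carries its full offset of unity.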
The utility of (28) lies in compactly and simply describing the apparent and mutually consistent trends in a large volume of results of independent methods and arguments, providing meaningful estimates for the location of the self-trapping transition in any dimension for any degree of anisotropy or adiabaticity. This estimated $`g_{ST}`$ is compared with quantum Monte Carlo data for the ground state energy in Figure 1 and effective mass in Figure 7. In Figure 3, we have included several curves (dashed lines) corresponding to $`\lambda _{ST}\equiv g_{ST}^2\mathrm{}\omega /2𝒥`$, using (28) for $`J/\mathrm{}\omega =1,2,5,10,100,1000`$. These curves indicate how we expect the physically meaningful self-trapping line at finite parameters as estimated by (28) to be related to the results of the adiabatic limit. Figure 3 shows that the higher-dimensional, more isotropic cases converge toward their adiabatic limits more rapidly than do lower-dimensional, more anisotropic cases. This convergence in one dimension is particularly poor, with significant deviations from the adiabatic limit persisting for $`J/\mathrm{}\omega >1000`$, by which point the higher-dimensional cases have converged beyond plotting precision. Viewed collectively, the dashed curves of Figure 3 also show that the composite parameter $`\lambda `$ does not provide a very natural or even qualitatively self-consistent characterization of the self-trapping transition over the whole of the adiabatic regime ($`J/\mathrm{}\omega >1/4`$). In the far adiabatic regime, where we may take $`\lambda _c`$ to fairly characterize the location of the self-trapping transition (solid curve in Figure 3), one may be led to conclude that large polarons are relatively more stable in higher dimensions and at weaker anisotropies since the occurrence of self-trapping is found to shift to larger values of $`\lambda `$ in these regimes. 
On the other hand, at more moderate degrees of adiabaticity (e.g., $`J/\mathrm{}\omega =1,2,5`$ in Figure 3) one is led by the same reasoning to conclude that large polarons are relatively less stable in higher dimensions and at weaker anisotropies since the occurrence of self-trapping is found to shift to lower values of $`\lambda `$ in these regimes. In particular, one of the most actively-investigated cases in contemporary studies is the “typical” scenario with $`J/\mathrm{}\omega `$ of order unity; Figure 3 shows that in terms of $`\lambda `$, the self-trapping trends in this case are quite distinct from those found in the adiabatic limit, certainly complicating the interpretation of results. From Figure 6, on the other hand, based on the more elementary coupling parameter $`g`$ appearing directly in the Hamiltonian, one is led to conclude that large polarons are everywhere relatively more stable in higher dimensions and at weaker anisotropies since the self-trapping line shifts to larger values of $`g`$ as these trends are followed regardless of the degree of adiabaticity. These trends in $`g`$ are qualitatively similar and uniform for all degrees of anisotropy and adiabaticity, whereas the same trends in $`\lambda `$ vary strongly with regime. For the same reasons that $`\lambda `$ is a convenient parameter with which to characterize polarons in the far adiabatic regime, it proves to be an inconvenient parameter in the broader context of the problem away from the adiabatic limit. ## V Correlation Function and Polaron Radius In view of the local nature of the electron-phonon coupling in the Holstein model, the spatial extent of the polaron can be characterized quite directly through an analysis of electron-phonon correlations. 
This can be done using a correlation function that has been long and widely used to characterize polaron size in $`D`$ dimensions: $$C_\stackrel{}{r}^{[D]}=\left\langle \widehat{C}_\stackrel{}{r}^{[D]}\right\rangle ,\widehat{C}_\stackrel{}{r}^{[D]}=\frac{1}{2g}\sum _{\stackrel{}{n}}a_\stackrel{}{n}^{\dagger }a_\stackrel{}{n}(b_{\stackrel{}{n}+\stackrel{}{r}}^{\dagger }+b_{\stackrel{}{n}+\stackrel{}{r}}),$$ (29) normalized such that $`\sum _\stackrel{}{r}C_\stackrel{}{r}^{[D]}=1`$. This function can be viewed as measuring the shape of the polaron lattice distortion around the instantaneous position of the electron. Using Rayleigh-Schrödinger perturbation theory in the weak-coupling regime as in the preceding sections, one finds that $`C_\stackrel{}{r}^{[D]}`$ $`=`$ $`\mathrm{}\omega {\displaystyle \int _0^{\mathrm{\infty }}}𝑑te^{-\mathrm{}\omega t}{\displaystyle \prod _{i=1}^D}e^{-2J_it}I_{r_i}(2J_it).`$ (30) Note that setting any one $`J_i`$ to zero or summing $`C_\stackrel{}{r}^{[D]}`$ over one $`r_i`$ recovers $`C_\stackrel{}{r}^{[D-1]}`$. This property implies that the effect of “turning on” transverse dimensions is simply to spread electron-phonon correlation strength transversely. Characterizing this multi-dimensional correlation function in terms of a width measure involves a variance tensor, $$\left\{\sigma ^\mathrm{𝟐}\right\}_{ij}=\sum _{\stackrel{}{r}}r_ir_jC_\stackrel{}{r}^{[D]}=\delta _{ij}\sigma _{ii}^2,$$ (31) where $`\sigma _{ii}^2`$ $`=`$ $`\mathrm{}\omega {\displaystyle \int _0^{\mathrm{\infty }}}𝑑te^{-\mathrm{}\omega t}{\displaystyle \sum _{r_i}}r_i^2e^{-2J_it}I_{r_i}(2J_it)`$ (32) $`=`$ $`{\displaystyle \frac{2J_i}{\mathrm{}\omega }}={\displaystyle \frac{\mathrm{}}{2m_{ii}^0\omega }}{\displaystyle \frac{1}{l_i^2}},`$ (33) in which $`m_{ii}^0`$ is the free electron effective mass and $`l_i`$ is the lattice constant in the $`i`$ direction. 
Thus, along each of the primitive crystallographic axes, the real-space variance is simply proportional to the electron transfer integral along that axis, and in a general direction is just the appropriate mixture determined by rotation. In absolute units (unrationalized by the lattice constants) the real-space variance is the same as that of the zero-point motion of a harmonic oscillator characterized by the lattice frequency $`\omega `$, but with the lattice mass replaced by the free electron mass measured along the appropriate direction. Utilizing the notion of a polaron half-width defined in terms of the correlation variance $$R_i=\frac{l_i}{2}\sqrt{\sigma _{ii}^2},$$ (34) we can associate with the polaron characteristic ellipsoidal volumes $`V[D]`$ $`V[1]`$ $`\equiv `$ $`2R_x=l_x\left({\displaystyle \frac{2J_x}{\mathrm{}\omega }}\right)^{1/2}`$ (35) $`=`$ $`\left({\displaystyle \frac{\mathrm{}}{2\omega }}\right)^{1/2}\left({\displaystyle \frac{1}{m_{ii}^0}}\right)^{1/2},`$ (36) $`V[2]`$ $`\equiv `$ $`\pi R_xR_y=l_xl_y{\displaystyle \frac{\pi }{4}}\left({\displaystyle \frac{2J_x}{\mathrm{}\omega }}{\displaystyle \frac{2J_y}{\mathrm{}\omega }}\right)^{1/2}`$ (37) $`=`$ $`{\displaystyle \frac{\pi }{4}}\left({\displaystyle \frac{\mathrm{}}{2\omega }}\right)\left(det𝐌_\mathrm{𝟎}^{-\mathrm{𝟏}}\right)^{1/2},`$ (38) $`V[3]`$ $`\equiv `$ $`{\displaystyle \frac{4\pi }{3}}R_xR_yR_z=l_xl_yl_z{\displaystyle \frac{\pi }{6}}\left({\displaystyle \frac{2J_x}{\mathrm{}\omega }}{\displaystyle \frac{2J_y}{\mathrm{}\omega }}{\displaystyle \frac{2J_z}{\mathrm{}\omega }}\right)^{1/2}`$ (39) $`=`$ $`{\displaystyle \frac{\pi }{6}}\left({\displaystyle \frac{\mathrm{}}{2\omega }}\right)^{3/2}\left(det𝐌_\mathrm{𝟎}^{-\mathrm{𝟏}}\right)^{1/2}.`$ (40) This characteristic volume thus increases with the intensity of tunneling ($`V[D]\propto |\stackrel{}{J}/\mathrm{}\omega |^{D/2}`$), is largest in the isotropic case, and decreases with increasing anisotropy. In the isotropic case we may regard $`R=R_i`$ as the polaron radius. 
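The normalization of (30) and the variance (33) offer a direct numerical check. The sketch below (Python, in units where $`\mathrm{}\omega =1`$; the series implementation of the exponentially scaled Bessel function is our own, adequate only for moderate arguments) integrates the 1D correlation function on a grid and recovers $`\sigma _{xx}^2=2J_x/\mathrm{}\omega `$.

```python
import math

def ive(r, z):
    """Exponentially scaled modified Bessel function exp(-z)*I_r(z),
    summed directly from the power series (fine for moderate z)."""
    r = abs(r)
    if z == 0.0:
        return 1.0 if r == 0 else 0.0
    term = math.exp(r * math.log(z / 2.0) - math.lgamma(r + 1.0) - z)
    total, k = term, 1
    while term > 1e-18 * total or k < z / 2.0 + 2.0:
        term *= (z * z / 4.0) / (k * (k + r))
        total += term
        k += 1
    return total

def simpson(f, a, b, n=200):
    """Composite Simpson rule on n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4.0 if i % 2 else 2.0) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

J = 1.0  # J_x in units of hbar*omega
C = {r: simpson(lambda t: math.exp(-t) * ive(r, 2.0 * J * t), 0.0, 40.0)
     for r in range(-50, 51)}
norm = sum(C.values())              # -> 1: the normalization of Eq. (30)
var = sum(r * r * C[r] for r in C)  # -> 2*J: the variance of Eq. (33)
```

The recovered variance is independent of coupling, which is the content of the remark that the weak-coupling radius scales as $`\sqrt{J/\mathrm{}\omega }`$ rather than with $`g`$.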
Contrary to much prevailing opinion, these results show that in the weak-coupling regime: i) there are no significant qualitative or quantitative differences between 1D, 2D, and 3D polaron radii, ii) the polaron radius in 2D and 3D is not infinite, and iii) the polaron radius does not scale as $`J/g^2\mathrm{}\omega `$ in any dimension as commonly expected, but as $`\sqrt{J/\mathrm{}\omega }`$ in every dimension. ## VI Effective Mass For the circumstances we address in this paper, the reciprocal effective mass tensor is diagonal, with elements given by $$\left\{𝐌^{-\mathrm{𝟏}}\right\}_{ij}=\mathrm{}^{-2}\frac{\partial ^2E(\stackrel{}{\kappa })}{\partial \kappa _i\partial \kappa _j}|_{\stackrel{}{\kappa }=0}=\delta _{ij}\frac{1}{m_{ii}}.$$ (41) From this, it is easily shown that the reciprocal effective mass in any direction through second order of weak-coupling perturbation theory is given by $$\frac{m_{ii}^0}{m_{ii}^{}}=1-g^2\mathrm{}^2\omega ^2\int _0^{\mathrm{\infty }}𝑑tte^{-\mathrm{}\omega t}\prod _{i=1}^De^{-2J_it}I_0(2J_it),$$ (42) where $`m_{ii}^{}`$ and $`m_{ii}^0`$ are respectively the polaron and free electron effective masses in the $`i`$ direction. Figure 7 shows the dependence of the isotropic polaron mass on dimensionality according to WCPT and quantum Monte Carlo simulation. Although this is a comparison between isotropic cases of $`J/\mathrm{}\omega =1`$ only, the excellent agreement between WCPT and quantum Monte Carlo out to $`g\sim \sqrt{D}`$ suggests that the WCPT mass may be similarly accurate for $`\lambda <1/2`$ as defined in (19) at general $`\stackrel{}{J}`$. 
The weak-coupling result (42) shows that although anisotropy has definite effects on the value of the effective mass, the effect of anisotropy appears only in the value of a scalar multiplier of the free electron mass; that is, although anisotropy of the free electron mass (inequalities among $`J_x`$, $`J_y`$, and $`J_z`$) is manifested in real-space anisotropies in electron-phonon correlation (i.e., in distortions of the shape of the polaron as discussed in the previous section), the mass renormalization associated with such distortions of polaron shape is isotropic. Interestingly, this implies that increasing $`J_y`$ or $`J_z`$ at fixed $`J_x`$ (for example) results in a decrease in $`m_{xx}^{}`$, translating into an associated increase in mobility in the $`x`$ direction. This influence of transverse directions on $`m_{xx}^{}`$ is illustrated in Figure 8. In the center and right panels of Figure 8, $`J_x/\mathrm{}\omega `$ is held fixed at unity, yet the effective mass in the $`x`$ direction continues to decrease as tunneling into transverse dimensions is turned on. These effects can be understood in terms of the transverse spreading of electron-phonon correlation strength discussed in the previous section. As a fixed correlation strength is spread over an increasing number of sites (the characteristic volume of the polaron increases), the average lattice deformation per participating site decreases. Consequently, mean-square measures of lattice deformation decrease and exhibit changes that suggest a diminishing effectiveness of electron-phonon interactions in producing typical polaronic effects. The polaronic mass enhancement bears such a mean-square dependence on the lattice deformation and, like other such measures (e.g., the number of phonons in the phonon cloud as discussed in Appendix A), decreases with increasing dimensionality and decreasing anisotropy. 
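The transverse-dimension effect just described can be exhibited directly from the weak-coupling mass formula. The sketch below (Python, units $`\mathrm{}\omega =1`$; a hand-rolled series for the scaled Bessel function, illustrative only) evaluates the renormalization with $`J_x`$ fixed at unity as $`J_y`$ and $`J_z`$ are turned on; the renormalization weakens monotonically.

```python
import math

def ive0(z):
    """Exponentially scaled modified Bessel function exp(-z)*I_0(z), by series."""
    if z == 0.0:
        return 1.0
    term = math.exp(-z)
    total, k = term, 1
    while term > 1e-18 * total or k < z / 2.0 + 2.0:
        term *= (z * z / 4.0) / (k * k)
        total += term
        k += 1
    return total

def mass_ratio(g, J):
    """Eq. (42) in units hbar*omega = 1:
    m0/m* = 1 - g^2 * Int_0^inf dt t e^{-t} prod_i exp(-2 J_i t) I_0(2 J_i t);
    the renormalization factor is one and the same for every direction."""
    def f(t):
        p = t * math.exp(-t)
        for Ji in J:
            p *= ive0(2.0 * Ji * t)
        return p
    n, b = 300, 60.0                 # Simpson grid; e^{-t} kills the tail
    h = b / n
    s = f(0.0) + f(b) + sum((4.0 if i % 2 else 2.0) * f(i * h) for i in range(1, n))
    return 1.0 - g * g * s * h / 3.0

r1 = mass_ratio(1.0, [1.0])             # J_x alone
r2 = mass_ratio(1.0, [1.0, 1.0])        # turn on J_y at fixed J_x
r3 = mass_ratio(1.0, [1.0, 1.0, 1.0])   # turn on J_z as well
# r1 < r2 < r3: the x-direction mass decreases as transverse tunneling grows
```

The monotonicity is guaranteed at this order: each transverse factor $`e^{2J_it}I_0(2J_it)`$ is less than unity for $`t>0`$, so every added dimension shrinks the integral and pushes $`m_{ii}^0/m_{ii}^{}`$ toward unity.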
The corresponding polaron effective mass resulting from strong-coupling perturbation theory through second order is given by $$\frac{m_{ii}^0}{m_{ii}^{}}=e^{-g^2}+e^{-2g^2}f(g^2)(3J_i+𝒥)/\mathrm{}\omega .$$ (43) This result is isotropic at first order simply by virtue of being independent of all $`J_i`$ at that order, but the second-order correction is anisotropic because the r.h.s. of (43) bears an explicit, unbalanced sensitivity to the direction along which the effective mass component is being measured. Unfortunately, this strong-coupling result is not very helpful; it disagrees substantially with more reliable results except at small $`J/\mathrm{}\omega `$. We take this as an indication that dominating (perhaps non-exponential) contributions have yet to be extracted from higher orders of SCPT. For such reasons we cannot estimate the location of the self-trapping transition from any crossing of (42) and (43). Instead, we have included in Figure 7 several symbols to indicate the values of $`g_{ST}`$ as given by (28); these several values are mutually consistent in locating essentially the same feature of the effective mass in every dimension, and coincide very well with the effective mass feature we have previously identified with the self-trapping transition (see Ref. ). ## VII On Dimensionality and Adiabaticity As noted in the introduction, the results that have long characterized commonly-held expectations for the dimensionality dependence of polaron structure are due to behavior ascertainable in the adiabatic approximation. In 2D and 3D, the minimum energy states in the adiabatic approximation are found to be “free” states throughout the weak-coupling regime up to a discrete coupling threshold beyond which “self-trapped” states have the minimum energy. This abrupt transition phenomenon is what is meant by the term “self-trapping transition” in the adiabatic approximation. 
Accordingly, there is no occasion to distinguish large polarons from small polarons in 2D and 3D since the “free” states below the transition are of infinite radius and distinct from large polarons, and the “self-trapped” states above the transition are always interpretable as small polarons. This set of circumstances in 2D and 3D is reflected in the catch phrase “all polarons are small”, since in this view large polarons in the adiabatic sense are never characteristic of the polaron ground state in bulk materials. In 1D, on the other hand, “free” states are unstable in the adiabatic approximation; instead, finite-radius (i.e. “self-trapped”) states are found at all finite coupling strengths, leading to the commonly encountered view that there is no self-trapping transition in 1D. That polaron states in 1D might be distinguishable as large or small is inconsequential in this view, as is the notion of a resolvable transition between distinct large and small polaron structures. The results of this paper differ strongly from the conventional adiabatic picture in multiple respects: i) The quasiparticles implicit in the weak-coupling states of every dimension are not weakly-scattered “free” electrons, but dressed electrons having finite radii generally greater than a lattice constant. ii) Although these weak-coupling quasiparticles can be sensibly characterized as large polarons, in no dimension do these weak-coupling states coincide with the large polaron states familiar from the adiabatic approximation in 1D. iii) The finite radii characterizing the weak-coupling quasiparticles in every dimension saturate to finite values with vanishing electron-phonon coupling, unlike the large polaron radii in the adiabatic approximation that in 1D diverge with vanishing coupling and in 2D and 3D are infinite already at finite coupling. iv) The self-trapping transition exists in every dimension, including 1D. 
v) The self-trapping transition is associated with the change from large polaron structure to small polaron structure in every dimension, including 2D and 3D, and not with a change from infinite to finite radii. vi) Dependences of polaron properties on parameters are smooth through the self-trapping transition in every dimension, unlike the abrupt changes often found in the adiabatic approximation in 2D and 3D. Our results are quantitatively supported by independent high-quality methods (including variational methods, cluster diagonalization, density matrix renormalization group, and quantum Monte Carlo). Moreover, elaborations of adiabatic theory incorporating non-adiabatic corrections support our overall conclusion that the adiabatic approximation as it is widely regarded fails to embrace non-adiabatic characteristics that are essential to the proper description of polaron states in the weak-coupling regime, and therefore fails as well to properly describe the self-trapping transition itself. With so many results at variance with the adiabatic approximation, it is well to ask in what respects, if any, our results are consistent with the adiabatic approximation and whether some sense can be made of the pervasive discrepancies. Indeed, several consistencies can be found that are illuminating. We first note that the dependence of $`\lambda _c`$ on dimensionality and anisotropy exhibits a generic square-root singularity at the boundary between 1D and any higher-dimensional case, while at the boundary between higher-dimensional cases this dependence is generically smooth. For essentially the same underlying reasons, $`\lambda _c`$ is constant throughout 1D, but varies with the detail of tunneling in higher dimensions. These distinctions are at least suggestive of the sharp contrasts between 1D and higher-dimensional cases in the adiabatic approximation. 
Secondly, we note that the weak-coupling polaron radius $`R`$ as derived here diverges in any dimension in the adiabatic limit. Further considering the WCPT validity test in Appendix A, there is reason to speculate that this weak-coupling radius might continue to be a reasonably valid construct in 2D and 3D up to the vicinity of the self-trapping transition. Such a possibility might be consistent with the finding of strictly infinite-radius states on the weak-coupling side of the transition in 2D and 3D in the adiabatic approximation, while the possible breakdown of the weak-coupling radius construct below the transition in 1D might be consistent with the existence of finite-width states in the 1D adiabatic approximation. ## VIII Conclusion In this paper we have analyzed the dependence of numerous polaron properties on the effective real-space dimensionality and anisotropy as determined by the electronic tunneling matrix elements; these properties include the polaron ground state energy, polaron shape, size, and volume, the number of phonons in the phonon cloud, and the polaron effective mass. In pursuing these analyses we have made extensive use of weak- and strong-coupling perturbation theories supported by selected comparisons with non-perturbative methods. Through the use of a scaling argument combining weak- and strong-coupling perturbation theory in the adiabatic strong-coupling regime, we have been able to infer the probable location of the self-trapping critical point in the adiabatic limit in any dimension and for any degree of anisotropy, and by combining information from multiple sources we have been able to extend this estimate from the adiabatic limit to finite adiabaticity. Central among our findings is the over-arching qualitative conclusion that polarons in any dimension and at any degree of anisotropy are similar in most respects. 
In particular, polarons on the weak-coupling side of the self-trapping transition share a structure that is essentially identical in every dimension. This weak-coupling structure is consistent with the notion of the weak-coupling polaron as a finite-radius quasiparticle, but is inconsistent both with the notion of a weakly-scattered free electron (adiabatic approximation in 2D and 3D) and with the historical notion of the large polaron (adiabatic approximation in 1D). The strong-coupling structure is consistent with traditional notions of small polarons, including strong-coupling perturbation theory and the adiabatic approximation. Since the essential character of the weak-coupling states and strong-coupling states is only inessentially affected by dimensionality and anisotropy, the notion of the self-trapping transition separating the weak- and strong-coupling states is similarly not altered in any essential way by changes in dimensionality or anisotropy. Necessarily, one is led to view self-trapping as the more-or-less rapid transition, occurring in every dimension, between characteristic weak- and strong-coupling states, both of which are characterized by finite radii. If we may transcend the jargon that historically has had a tendency to polarize the conventional wisdom, it is fairly concluded that not all polarons are small, even in bulk materials, and that in every dimension and for every degree of anisotropy the self-trapping transition is a smooth, albeit rapid crossover between large and small polaron character. ## Acknowledgement The authors gratefully acknowledge P. Kornilovitch for providing the quantum Monte Carlo data used in Figures 1 and 7. This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG03-86ER13606. ## A Breakdown of Weak-Coupling Perturbation Theory The weak-coupling perturbation theory considered in this paper is based on an expansion in states containing limited numbers of phonon quanta. 
The zeroth order properties are based upon states containing zero phonons, and second order properties upon states containing one phonon. The first neglected order of WCPT is the fourth order, built upon states containing no more than two phonons. A test of internal consistency of WCPT at particular parameters, therefore, is to compute the expected number of phonons to the retained order of perturbation theory, and compare this number to the maximum number of phonons present at that order. For second order WCPT as used in this paper, this number of phonons should be less than unity. The required computation is contained in $$n_{ph}=\frac{1}{N}\sum _{\stackrel{}{q}}\frac{g^2\mathrm{}^2\omega ^2}{\{E_{WC}^{(0)}(0)-[E_{WC}^{(0)}(\stackrel{}{q})+\mathrm{}\omega ]\}^2},$$ (A1) where $`E_{WC}^{(0)}(\stackrel{}{\kappa })`$ is defined in (10). When each $`J_i`$ is large relative to $`\mathrm{}\omega `$, one finds that $`n_{ph}`$ $`\approx `$ $`{\displaystyle \frac{1}{4}}g^2\left({\displaystyle \frac{J_x}{\mathrm{}\omega }}\right)^{-1/2}`$ in 1D, (A2) $`\approx `$ $`{\displaystyle \frac{1}{\pi }}g^2\left({\displaystyle \frac{J_xJ_y}{\mathrm{}^2\omega ^2}}\right)^{-1/2}`$ in 2D, (A3) $`\approx `$ $`{\displaystyle \frac{1}{\pi }}g^2\left({\displaystyle \frac{J_xJ_yJ_z}{\mathrm{}^3\omega ^3}}\right)^{-1/2}`$ in 3D. (A4) These expressions can be consolidated into the single approximate relation $$n_{ph}\propto g^2\frac{\mathrm{\Omega }[D]}{V[D]},$$ (A5) where $`\mathrm{\Omega }[D]`$ is the primitive cell volume and $`V[D]`$ the characteristic volume of the polaron in $`D`$ dimensions. The dimension-dependent constant of proportionality is near 1/2 in all cases. This simple relation, here proven only for the adiabatic weak-coupling regime (broad polarons), demonstrates the very direct but inverse relation between the number of phonons in the phonon cloud and the volume occupied by it. 
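As a concrete check of these asymptotics, Eq. (A1) can be summed numerically for a 1D chain. The sketch below assumes the standard tight-binding form $`E_{WC}^{(0)}(q)=-2J\mathrm{cos}q`$ (the definition in Eq. (10) is not reproduced in this excerpt) and a coupling matrix element $`g\mathrm{}\omega `$, working in units $`\mathrm{}\omega =1`$:

```python
import numpy as np

def n_ph_1d(g, J, N=200000):
    """Second-order WCPT phonon number, Eq. (A1), summed over a 1D
    Brillouin zone. Units: hbar*omega = 1; dispersion E(q) = -2J cos q
    (assumed tight-binding form)."""
    q = 2.0 * np.pi * np.arange(N) / N
    E = -2.0 * J * np.cos(q)
    E0 = -2.0 * J
    return np.mean(g**2 / (E0 - E - 1.0) ** 2)

# Adiabatic regime J >> hbar*omega: compare with (g^2/4)(J/hbar*omega)^(-1/2)
g, J = 0.5, 100.0
n_exact = n_ph_1d(g, J)
n_asym = 0.25 * g**2 * J ** -0.5
```

For $`J=100`$ the direct sum and the adiabatic estimate agree to within a few percent, and both are well below unity, as the consistency test requires.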
In the isotropic case and in terms of the composite parameter $`\lambda `$, the condition that expected phonon numbers should be less than unity results in the conditions ($`J\gg \mathrm{}\omega `$) $`\lambda `$ $`<`$ $`2\left({\displaystyle \frac{J}{\mathrm{}\omega }}\right)^{-1/2}`$ in 1D, (A6) $`<`$ $`{\displaystyle \frac{\pi }{4}}`$ in 2D, (A7) $`<`$ $`{\displaystyle \frac{\pi }{6}}\left({\displaystyle \frac{J}{\mathrm{}\omega }}\right)^{1/2}`$ in 3D. (A8) Recalling that the self-trapping transition is expected to occur at $`\lambda `$ of order unity, it would appear that WCPT through second order is consistent with the condition $`n_{ph}<1`$ up to the transition in 2D and beyond the transition in 3D. It is the 1D case that appears to be on the weakest footing in the adiabatic strong-coupling regime; however, it is the 1D case that has been most exhaustively studied by non-perturbative means and found to be widely consistent with second-order WCPT.
no-problem/9905/cond-mat9905068.html
ar5iv
text
Figure 1: Transition lines for 𝐇∥𝐚 as a function of the anisotropy parameter 𝜈. The closed (open) circles denote the zeros of 𝜂₁ (𝜂₂). The inset plots the angle 𝜑_𝐿' at the I↔II transition as a function of 𝜈.

# Vortex states of the $`E_u`$ model for Sr<sub>2</sub>RuO<sub>4</sub>

Takafumi Kita Division of Physics, Hokkaido University, Sapporo 060-0810, Japan () Based on the Ginzburg-Landau functional of $`E_u`$ symmetry presented by Agterberg, vortex states of Sr<sub>2</sub>RuO<sub>4</sub> are studied in detail over $`H_{c1}\le H\le H_{c2}`$ by using the Landau-level expansion method. For the field in the basal plane, it is found that (i) the second superconducting transition should be present irrespective of the field direction; (ii) below this transition, a characteristic double-peak structure may develop in the magnetic-field distribution; (iii) a third transition may occur between two different vortex states. It is also found that, when the field is along the $`c`$ axis, the square vortex lattice may deform through a second-order transition into a rectangular one as the field is lowered from $`H_{c2}`$. These predictions will be helpful in establishing the $`E_u`$ model for Sr<sub>2</sub>RuO<sub>4</sub>. Active studies have been performed on superconducting Sr<sub>2</sub>RuO<sub>4</sub>, where another unconventional pairing may be realized. A possible candidate for its symmetry is the $`E_u`$ model with two-fold degeneracy, as indicated by various experiments. However, further experiments seem to be required before establishing its validity for Sr<sub>2</sub>RuO<sub>4</sub>. In this respect, the vortex states may provide clear and indisputable tests for the p-wave hypothesis. The present paper provides a detailed theoretical description of them which will be helpful towards that purpose. A complete clarification of the basic features of the two-component model, which is still lacking, will also be useful for the experiments on UPt<sub>3</sub>. 
The vortex states of the $`E_u`$ model for Sr<sub>2</sub>RuO<sub>4</sub> have been studied theoretically in a series of papers by Agterberg et al. Based on the two-component Ginzburg-Landau (GL) functional and following essentially Abrikosov’s method, which is effective near the upper ($`H_{c2}`$) and lower ($`H_{c1}`$) critical fields, they have provided several important predictions. Especially noteworthy among them are: existence of the second transition for $`𝐇⊥𝐜`$ similar to that observed in UPt<sub>3</sub>; several orbital-dependent phenomena helpful in identifying which band is mainly relevant; stabilization of the square vortex lattice for $`𝐇∥𝐜`$. An observation of the square lattice has been reported by Riseman et al. With these results, this paper focuses on the following: (i) The properties at intermediate fields, in particular those below the second transition for $`𝐇⊥𝐜`$, remain to be clarified. We will treat the whole range $`H_{c1}\le H\le H_{c2}`$ in a unified way, describe possible changes of experimentally detectable properties as a function of the field strength, and identify characteristic features at low fields. (ii) So far, only the cases where the field is along the high-symmetry axes have been considered. It is still not clear whether or not the second transition for $`𝐇⊥𝐜`$ persists for arbitrary field directions in the $`ab`$ plane, because the term $`|\eta _1|^2|\eta _2|^2`$ in the GL functional \[see Eq. (5) below\] generally causes first- and third-order mixing. We will study those general cases to establish the existence of the second transition. (iii) Agterberg introduced several assumptions in the parameters used to minimize the free energy. We will perform the minimization without such assumptions. The goals (i)-(iii) may seem rather formidable, but they can be achieved with the Landau-level expansion method. 
When applied to the $`s`$-wave pairing, it successfully reproduced the properties of the whole region $`H_{c1}\le H\le H_{c2}`$ quite efficiently for an arbitrary $`\kappa `$. Compared with the direct minimization procedure in real space, the method has a couple of advantages: (i) it is far more efficient, and (ii) one can enumerate possible second-order transitions rather easily, hence enabling us to establish the phase diagram of various multi-order-parameter systems. This is the first time it has been applied to a multi-order-parameter system, so this paper also has some methodological importance. The GL free-energy density adopted by Agterberg is given by $`f=`$ $`-|𝜼|^2+{\displaystyle \frac{1}{2}}|𝜼|^4+{\displaystyle \frac{\gamma }{2}}(𝜼\times 𝜼^{})^2+(3\gamma -1)|\eta _1|^2|\eta _2|^2`$ (5) $`+𝜼^{}\left[\begin{array}{cc}D_x^2+\gamma D_y^2+\kappa _5D_z^2& \gamma (D_xD_y+D_yD_x)\\ \gamma (D_xD_y+D_yD_x)& D_y^2+\gamma D_x^2+\kappa _5D_z^2\end{array}\right]𝜼`$ $`+h^2,`$ where the same notations as in that work are used. This simplified free energy has the advantage that there are only two parameters in it whose values can be extracted from experiments, i.e. $`\kappa _1\equiv H_{c2}/\sqrt{2}H_c`$ and $`\nu \equiv \frac{1-3\gamma }{1+\gamma }`$, the latter being related to the $`H_{c2}`$ anisotropy in the $`ab`$ plane as $`H_{c2}(𝐚)/H_{c2}(𝐚+𝐛)=\frac{1-\nu }{1+\nu }`$. The value $`\kappa _1=31`$ $`(1.2)`$ for $`𝐇⊥𝐜`$ ($`𝐇∥𝐜`$) will be used throughout, whereas $`\nu `$ is left as a parameter. A recent observation of the $`H_{c2}`$ anisotropy suggests that $`\nu `$ is positive and $`\sim 0.01`$. We sketch the method to find the minimum for an arbitrary field strength. Let us fix the mean flux density $`𝐁\equiv (B\mathrm{sin}\theta \mathrm{cos}\phi ,B\mathrm{sin}\theta \mathrm{sin}\phi ,B\mathrm{cos}\theta )`$ rather than the external field $`H`$, and express $`𝐡=𝐁+\stackrel{~}{𝐡}`$, where the spatial average of $`\stackrel{~}{𝐡}`$ vanishes by definition. 
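Since the two parametrizations $`\nu `$ and $`\gamma `$ are used interchangeably below, a trivial numerical sketch confirming that $`\nu =(1-3\gamma )/(1+\gamma )`$ inverts to $`\gamma =(1-\nu )/(3+\nu )`$, and evaluating the in-plane anisotropy ratio $`(1-\nu )/(1+\nu )`$:

```python
def nu_of_gamma(gamma):
    # nu = (1 - 3*gamma) / (1 + gamma), as defined in the text
    return (1.0 - 3.0 * gamma) / (1.0 + gamma)

def gamma_of_nu(nu):
    # inverse map: gamma = (1 - nu) / (3 + nu)
    return (1.0 - nu) / (3.0 + nu)

def hc2_anisotropy(nu):
    # H_c2(a) / H_c2(a+b) = (1 - nu) / (1 + nu)
    return (1.0 - nu) / (1.0 + nu)

nu = 0.01  # magnitude suggested by the Hc2-anisotropy measurement
ratio = hc2_anisotropy(nu)
```

For $`\nu =0.01`$ the in-plane $`H_{c2}`$ anisotropy is only about 2%, which is why high-precision measurements are needed to pin down $`\nu `$.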
We then transform $$\left[\begin{array}{c}x\\ y\\ z\end{array}\right]=[\begin{array}{ccc}\mathrm{cos}\theta \mathrm{cos}\phi & -\mathrm{sin}\phi & \mathrm{sin}\theta \mathrm{cos}\phi \\ \mathrm{cos}\theta \mathrm{sin}\phi & \mathrm{cos}\phi & \mathrm{sin}\theta \mathrm{sin}\phi \\ -\mathrm{sin}\theta & 0& \mathrm{cos}\theta \end{array}]\left[\begin{array}{c}x^{}/L\\ y^{}L\\ z^{}\end{array}\right],$$ (6) $$𝜼(𝐫)=[\begin{array}{cc}\mathrm{cos}\frac{\varphi }{2}& -\mathrm{sin}\frac{\varphi }{2}\\ \mathrm{sin}\frac{\varphi }{2}& \mathrm{cos}\frac{\varphi }{2}\end{array}][\begin{array}{cc}\mathrm{cos}\varphi ^{}& -\mathrm{sin}\varphi ^{}\\ i\mathrm{sin}\varphi ^{}& i\mathrm{cos}\varphi ^{}\end{array}]𝜼^{}(𝐫^{}),$$ (7) where $`\varphi `$, $`\varphi ^{}`$ and $`L`$ are conveniently chosen as $`\varphi =\mathrm{tan}^{-1}\left[2\gamma \mathrm{tan}2\phi /(1-\gamma )\right]`$, $`\varphi ^{}=L^2\mathrm{cos}\theta `$, and $`L=\left\{(1+\gamma f)/[(1+\gamma f)\mathrm{cos}^2\theta +2\kappa _5\mathrm{sin}^2\theta ]\right\}^{1/4}`$ with $`f\equiv \left[(1-\gamma )^2\mathrm{cos}^22\phi +4\gamma ^2\mathrm{sin}^22\phi \right]^{1/2}`$. Assuming uniformity along the $`𝐳^{}`$ direction, we then expand $`𝜼^{}(𝐫^{})`$ and $`\stackrel{~}{𝐡}`$ $`(𝐫^{})`$ as $$𝜼^{}(𝐫^{})=\sqrt{V}\sum _{N𝐪}𝐜_{N𝐪}\psi _{N𝐪}(𝐫^{}),$$ (8) $$\stackrel{~}{𝐡}(𝐫^{})=\widehat{𝐳}^{}\sum _{𝐊\ne \mathrm{𝟎}}\stackrel{~}{h}_𝐊\mathrm{exp}(i𝐊𝐫^{}),$$ (9) where $`V`$ is the system volume, $`\psi _{N𝐪}`$ denotes an eigenstate of the magnetic translation group in the flux density $`B`$ with the Landau-level index $`N`$ and the magnetic Bloch vector $`𝐪`$, and $`𝐊`$ is the reciprocal lattice vector of the vortex lattice. 
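The scale factors of the transformation (6)-(7) are simple closed-form expressions and can be evaluated directly; a sketch (the value of $`\kappa _5`$ below is an arbitrary illustrative choice, not a fitted material parameter):

```python
import numpy as np

def scaling_params(gamma, theta, phi, kappa5=1.0):
    """Scale factors entering the coordinate transformation (6)-(7):
    f, varphi, and L for a field direction given by (theta, phi)."""
    f = np.sqrt((1 - gamma)**2 * np.cos(2*phi)**2
                + 4 * gamma**2 * np.sin(2*phi)**2)
    varphi = np.arctan(2 * gamma * np.tan(2*phi) / (1 - gamma))
    L = ((1 + gamma*f)
         / ((1 + gamma*f)*np.cos(theta)**2
            + 2*kappa5*np.sin(theta)**2)) ** 0.25
    return f, varphi, L

# For H along the c axis (theta = 0) the rescaling is trivial: L = 1.
f, varphi, L = scaling_params(gamma=0.3, theta=0.0, phi=0.0)
```

This reproduces the expected limits: at $`\theta =0`$ the denominator of $`L`$ reduces to $`1+\gamma f`$, so $`L=1`$, and at $`\phi =0`$ the mixing angle $`\varphi `$ vanishes.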
The explicit expression of $`\psi _{N𝐪}(𝐫^{})`$ for the special case where one of the unit vectors of the vortex lattice, $`𝐚_2`$, lies along the $`y^{}`$ axis is given by $`\psi _{N𝐪}(𝐫^{})={\displaystyle \sum _{n=-𝒩_\mathrm{f}/2+1}^{𝒩_\mathrm{f}/2}}\mathrm{e}^{i[q_y^{}(y^{}+0.5q_x^{})+na_{1x^{}}(y^{}+q_x^{}-0.5na_{1y^{}})]/l_c^2}`$ $`\times \sqrt{{\displaystyle \frac{2\pi l_c/a_2}{2^NN!\sqrt{\pi }V}}}H_N({\displaystyle \frac{x^{}-q_y^{}-na_{1x^{}}}{l_c}})\mathrm{e}^{-(x^{}-q_y^{}-na_{1x^{}})^2/2l_c^2}`$ with $`𝒩_\mathrm{f}^2`$ the number of flux quanta in the system, $`l_c`$ denoting $`\frac{1}{\sqrt{2}}`$ of the magnetic length, and $`a_{1x^{}}`$ ($`a_{1y^{}}`$) the $`x^{}`$ ($`y^{}`$) component of another unit vector $`𝐚_1`$. We also consider the counterclockwise rotation of $`𝐚_1`$ and $`𝐚_2`$ around the $`z^{}`$ axis by the angle $`\phi _L^{}`$. Substituting Eqs. (6)-(9) into Eq. (5) and integrating over the volume, we obtain the free energy per unit volume as $$F[\left\{𝐜_{N𝐪}\right\},\{\stackrel{~}{h}_𝐊\},B,\rho ,\vartheta ,\phi _L^{}]=\frac{1}{V}\int f[𝜼^{}(𝐫^{}),\stackrel{~}{𝐡}(𝐫^{}),B]d^3r^{}$$ (10) where $`\rho \equiv |𝐚_1|/|𝐚_2|`$ and $`\vartheta \equiv \mathrm{cos}^{-1}\frac{𝐚_1\cdot 𝐚_2}{|𝐚_1||𝐚_2|}`$. This $`F`$ is the desired functional, which can be minimized rather easily using one of the standard minimization algorithms. Due to the periodicity of the vortex lattice, we only have to perform the integration over a unit cell. The external field $`H`$ is then determined through the thermodynamic relation ($`H=\frac{1}{2}\frac{\partial F}{\partial B}`$ in the present units). In numerical calculations we have cut the series in Eqs. (8) and (9) at some $`N_c`$ and $`|𝐊_c|`$, respectively, thereby obtaining a variational estimate of the free energy. The convergence can be checked by increasing $`N_c`$ and $`|𝐊_c|`$. 
The choice $`N_c=12`$ and $`|𝐊_c|`$ (the third smallest) has been checked to provide correct identification of the free-energy minimum with a relative accuracy of $`10^{-6}`$ for $`B/H_{c2}\gtrsim 0.1`$. Though not presented here, preliminary calculations reveal that the method is also effective for $`\theta \ne 0`$, $`\frac{\pi }{2}`$. The functional $`F`$ has the further advantage that one may enumerate possible transitions in the vortex states of multi-order-parameter systems. Much attention has been focused on this subject in connection with the observed phase diagram of superconducting UPt<sub>3</sub>. No complete analysis has appeared yet, however, and the use of $`F`$ will be quite helpful for that purpose. The features of the $`s`$-wave lattice can be summarized as follows: (a) a single $`𝐪`$ in Eq. (8) suffices to describe it with a choice of $`𝐪`$ corresponding to the broken translational symmetry of the lattice; (b) the hexagonal (square) lattice is made up of $`N=6n`$ ($`4n`$) Landau levels ($`n`$: integer); (c) more general structures can be described with $`N=2n`$ levels, odd $`N`$’s never mixing up since those bases have finite amplitude at the core sites; (d) the expansion coefficients $`𝐜_{N𝐪}`$ can be chosen real for the hexagonal and square lattices. With these results on the conventional lattice, the following second-order transitions are possible in multi-component systems: (i) deformation of the hexagonal or square lattice which accompanies entry of new $`N`$’s as well as complex numbers in the expansion coefficients; (ii) mixing of another wave number $`𝐪_2`$ satisfying $`𝐪_2-𝐪_1=𝐊/2`$; (iii) entry of odd $`N`$’s. Though not complete, this consideration will be sufficient below. We now present the results for $`𝐇⊥𝐜`$. 
Figure 1 shows the transition lines for $`𝐇∥𝐚`$ ($`\theta =\frac{\pi }{2}`$; $`\phi =0`$) as a function of the anisotropy parameter $`\nu `$; the one given as a function of $`\gamma =\frac{1-\nu }{3+\nu }`$ has qualitatively the same structure, with $`\gamma =0`$ and $`1`$ respectively corresponding to $`\nu =1`$ and $`-1`$. As already pointed out by Agterberg, there are three possible vortex states: the high-field region I where a hexagonal lattice is stable with $`\eta _2=0`$; the region II where $`\eta _2`$ becomes finite with $`𝐪_2-𝐪_1`$ equal to half the unit vector $`𝐛_1`$ of the reciprocal lattice, i.e. the vortex lattice is coreless with $`|𝜼|`$ finite everywhere; the region III where a deformed conventional lattice with $`\eta _2\ne 0`$ is stable. In addition, Fig. 1 includes the following new results: (i) a full minimization with respect to $`\phi _L^{}`$ clarifies that the I↔II transition is continuous as a function of $`\nu `$ (see the inset); (ii) high-precision calculations in the low-field region reveal that, as the field is lowered, the coreless state II is replaced via a first-order transition by the state III with cores. The reason for (ii) can be realized by looking at the variation of $`l_z\equiv (𝜼\times 𝜼^{})\widehat{𝐳}/2i|\eta _1||\eta _2|`$, which is proportional to the magnitude of the orbital angular momentum along $`𝐳`$. As seen in Fig. 2, calculated for $`\nu =0.077`$ ($`\gamma =0.3`$) and $`B/H_{c2}=0.25`$, one of the bulk states $`l_z=\pm 1`$ is alternately realized in II, and there necessarily exist lines of “defects” where $`l_z`$ vanishes. Compared with III, where $`|𝜼|`$ vanishes at points, the state II is thus energetically unfavorable at low fields. It can however be stabilized at intermediate fields by making $`|𝜼|`$ more uniform. Figure 3 plots $`|𝜼(𝐫)|`$ for $`\nu =0.077`$, showing how the differences between II and III develop as $`B/H_{c2}`$ is decreased. 
In fact, only a deformation of the lattice occurs in III, whereas a layered structure also shows up in II with $`|𝜼(𝐫)|`$ becoming more and more uniform. This rather drastic change in II can be detected by measuring the magnetic-field distribution $`P(h)\equiv \frac{1}{V}\int \delta [h-h_x(𝐫)]d^3r`$. As seen in Fig. 4, the single peak at $`B/H_{c2}=0.45`$ splits, and one of the two peaks moves towards the high-field end, which originates from the development of a ridge in $`h_x(𝐫)`$ along a valley of $`|𝜼(𝐫)|`$. Its observation by NMR or $`\mu `$SR experiments will provide direct evidence for the state II as well as for the presence of multi-order parameters. It is also quite interesting to perform the experiments in UPt<sub>3</sub>, where a lattice distortion has already been detected. We finally point out that the second-order transition I↔II or I↔III is present for an arbitrary field direction in the basal plane. A glance at the functional (5) may lead to the conclusion that the transition I↔III disappears for a low-symmetry direction, since the term $`|\eta _1|^2|\eta _2|^2`$ yields terms like $`\eta _1^{}\eta _2^{}|\eta _1^{}|^2`$. However, it does persist as the transition (i) classified in the preceding paragraph. The hexagonal lattice has been checked to be stable in the high-field region, and the phase diagram for a small $`|\nu |`$ is qualitatively similar to Fig. 1. We finally present the results for $`𝐇∥𝐜`$. Figure 5 shows the vortex lattice structure as a function of $`|\nu |`$ and $`B`$ for $`\kappa _1=1.2`$. The square lattice is stabilized near $`H_{c2}`$ for small values of $`|\nu |`$, confirming Agterberg’s result through a perturbation expansion with respect to $`\nu `$ ($`\kappa _1=1.2`$ corresponds to Agterberg’s $`\kappa \approx 0.66`$ for $`\nu =0`$). As the field is decreased, however, the lattice deforms into a rectangular one for $`|\nu |\gtrsim 0.17`$, followed by a further transition into the square and/or a distorted (i.e. 
$`\rho \ne 1`$; $`\vartheta \ne \frac{\pi }{3},\frac{\pi }{2}`$) lattice for $`|\nu |\lesssim 0.1`$. The same calculation for $`\kappa _1=2.6`$ reveals that all the phase boundaries move rightward, with the distorted, square, and rectangular regions extending over $`0\le B/H_{c2}\le 1`$ for $`|\nu |\lesssim 0.004`$, $`0.02\lesssim |\nu |\lesssim 0.09`$, and $`0.23\lesssim |\nu |`$, respectively. With $`\kappa _1`$ and $`|\nu |`$ small, the free energies of these lattices are not much different from one another, as suggested by Agterberg’s $`\nu `$-$`\kappa `$ diagram near $`H_{c2}`$, and the present calculation reveals that there may also be field-dependent transformations among them. Although Riseman et al. have reported an observation of the square lattice, there may exist field-dependent distortion in the diffraction pattern. A detailed experiment on the field dependence may be worth carrying out. The author is grateful to M. Sigrist for several useful conversations, and to D. F. Agterberg for valuable comments on the original manuscript. Numerical calculations were performed on an Origin 2000 in the “Hierarchical matter analyzing system” at the Division of Physics, Graduate School of Science, Hokkaido University.
no-problem/9905/astro-ph9905200.html
ar5iv
text
# OUTER REGIONS OF THE CLUSTER GASEOUS ATMOSPHERES ## 1. Introduction Clusters of galaxies are very important tools for observational cosmology. Massive clusters form through collapse of a large volume and are therefore thought to contain a fair sample of the Universe in terms of dark matter, diffuse baryons, and possibly stellar mass. Through the study of the relative contribution of these components in clusters, one can determine the average matter density in the Universe as a whole (White et al. 1993, Carlberg et al. 1996). Most of the mass in clusters is in the form of dark matter, observable directly only through the gravitational distortion of background galaxy images. For various reasons (sparseness, limited area coverage) lensing observations still cannot be used for a detailed study of the dark matter distribution in clusters. Much progress in understanding the dark matter halos of clusters has been made theoretically, through cosmological numerical simulations. Properties of simulated clusters in many respects agree with analytic or semi-analytic theoretical predictions. The mass function of clusters is in good agreement with that predicted by the Press & Schechter (1974) theory (Efstathiou et al. 1985, Lacey & Cole 1994). A virialized region is well defined by $`r_{180}`$, a radius within which the mean density is approximately 180 times the critical density (Cole & Lacey 1996). Simulations predict that the dark matter density profiles are very similar when the radii are scaled to $`r_{180}`$, the hot gas follows the dark matter distribution at large radii, and these two components have equal temperature (Navarro, Frenk, & White 1995). As expected from the virial theorem, the gas temperature in simulations scales as $`M_{180}\propto T^{3/2}`$, where $`M_{180}`$ is the mass within $`r_{180}`$ (Evrard, Metzler, & Navarro 1996). Most baryons, i.e. observable matter, in clusters are in the form of hot, X-ray emitting gas. 
Therefore, most of our direct knowledge about the structure of clusters comes from X-ray observations. Important cosmological conclusions derived from X-ray observations of clusters usually rely on simple theoretical assumptions. For example, a measurement of $`\mathrm{\Omega }`$ from the baryon fraction in clusters (White et al. 1993, David, Jones, & Forman 1995, Evrard 1997) requires that cluster baryons are not segregated with respect to the dark matter. Measurement of the cosmological parameters from the evolution of the cluster temperature function (Henry 1997) relies on converting temperature to mass as $`M\propto T^{3/2}`$. However, unlike dark matter in simulated clusters, properties of the hot gas inferred from X-ray observations often deviate from simple theoretical expectations. For example, if gas were to follow the dark matter of Navarro et al. (1995), one would observe $`\rho _\mathrm{g}\propto r^{-2.7}`$ in the outer cluster parts (at $`r\gtrsim 1`$ Mpc), whereas the gas density profiles inferred from the Einstein observatory images are significantly flatter, $`\rho _\mathrm{g}\propto r^{-1.8}`$ (Jones & Forman 1984, 1998). The universal density profile, virial theorem, and the non-segregation of baryons predict the relation between X-ray luminosity and gas temperature $`L\propto T^2`$. The observed relation is significantly steeper, $`L\propto T^{2.6-3}`$ (David et al. 1993, Markevitch 1998, Allen & Fabian 1998). Using the gas temperature profile measured by ASCA and assuming that gas is in hydrostatic equilibrium, Markevitch & Vikhlinin (1997) derived the mass of A2256, which was found to be 40% lower than expected from the Evrard et al. (1996) scaling. In fact, the only easily understandable scaling involving cluster baryons established so far is that between the temperature and galaxy velocity dispersion, $`T\propto \sigma ^2`$ (e.g., Edge & Stewart 1991). We demonstrate in this work that the hot gas in clusters does show a scaling expected from simple theoretical arguments. 
It is expected that the cluster virial radius can be defined as a radius of mean overdensity $`180`$. If baryons are not segregated on the global cluster scales, this radius can be found as a radius of some *baryon* overdensity, i.e. determined observationally. The virial theorem implies that the scaling of this radius with temperature should be of the form $`R\propto T^{1/2}`$. Furthermore, if cluster density profiles are similar, such scaling should be observed for a range of limiting baryon overdensities. We indeed observe such a relation; its tightness is comparable to the tightness of similar correlations in simulated clusters. We use the values of cosmological parameters $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$. The radius of mean gas overdensity $`\mathrm{\Delta }_g=Y`$ relative to the cosmic baryon density predicted by primordial nucleosynthesis is referred to as $`R_Y`$. ## 2. Cluster sample Our goals require a sample of clusters that are symmetric and that have high-quality imaging data to large radii. The present sample includes those clusters in which the X-ray surface brightness distribution has been mapped by the ROSAT PSPC to large radius, i.e. those in which the virial radius, $`r_{180}(T)`$, lies within the ROSAT PSPC field of view. For the purposes of this work, the virial radius is estimated from the temperature as $`r_{180}=1.95h^{-1}\mathrm{Mpc}(T/10\mathrm{keV})^{1/2}`$ (Evrard et al. 1996). We also required that the ROSAT exposure was adequate for an accurate measurement of the surface brightness distribution at large radii. This requirement was implemented by the following objective procedure. We fitted the power law index of the azimuthally averaged surface brightness profile in the range $`r>r_{180}/3`$ and discarded all clusters with a 1-$`\sigma `$ statistical uncertainty in their slope exceeding $`\pm 0.1`$. 
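The temperature-based virial-radius estimate used for the sample selection is a one-line function; a sketch (here $`h`$ is the dimensionless Hubble parameter, $`h=0.5`$ for the $`H_0`$ adopted above):

```python
def r180_mpc(T_kev, h=0.5):
    """Virial-radius estimate r_180 = 1.95 h^-1 Mpc (T / 10 keV)^(1/2),
    the Evrard et al. (1996) scaling quoted in the text."""
    return 1.95 / h * (T_kev / 10.0) ** 0.5

# A 10 keV cluster with h = 0.5 has r_180 = 3.9 Mpc.
r = r180_mpc(10.0)
```

In practice this radius must be compared with the PSPC field of view at each cluster's redshift to decide whether the cluster qualifies for the sample.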
We also excluded clusters with double or very strongly irregular X-ray morphology, because our analysis requires the assumption of reasonable spherical symmetry. The excluded clusters were A754, Cyg-A, A1750, A2151, A2197, A3223, A3556, A3558, A3560, A3562, A514, A548, S49-132, SC0625-536S, A665, A119, A1763, A3266, and A3376. The 39 clusters satisfying all the above criteria are listed in Table 1. The emission-weighted X-ray temperatures were compiled from the literature. The main sources are *ASCA* measurements by Markevitch et al. (1998) and Fukazawa et al. (1998), both excluding the cooling flow regions, and Mushotzky & Scharf (1997), and a pre-*ASCA* compilation by David et al. (1993). For three clusters without spectral data, we estimated temperatures from the cooling-flow-corrected $`L_x-T_x`$ correlation derived in Markevitch (1998); for two clusters, we adopted the $`L_x-T_x`$ temperature estimates from Ebeling et al. (1996). The relative uncertainty of the temperature estimates from the $`L_x-T_x`$ relation was assumed to be 25% at the 68% confidence level. ## 3. ROSAT data reduction ROSAT PSPC images were reduced using S. Snowden’s software (Snowden et al. 1994). This software eliminates periods of high particle and scattered solar backgrounds as well as 15-s intervals after turning the PSPC high voltage on, when the detector may be unstable. Exposure maps in several energy bands are then created using detector maps obtained during the ROSAT All-Sky Survey that are appropriately rotated and convolved with the distribution of coordinate shifts found in the observation. The exposure maps include vignetting and all detector artifacts. The unvignetted particle background is estimated and subtracted from the data to achieve a high-quality flat-fielding even though the PSPC particle background is low compared to the cosmic X-ray background. 
The scattered solar X-ray background also should be subtracted separately, because, depending on the viewing angle, it can introduce a constant background gradient across the image. We eliminated most of the solar X-rays by simply excluding time intervals when this emission was high, but the remaining contribution was also modeled and subtracted. The output of this procedure is a set of flat-fielded, exposure corrected images in 6 energy bands, nominally corresponding to 0.2–0.4, 0.4–0.5, 0.5–0.7, 0.7–0.9, 0.9–1.3, and 1.3–2.0 keV (i.e., standard *ROSAT* bands R2–R7). These images contain only cluster emission, other X-ray sources, and the cosmic X-ray background. To optimize the signal-to-noise ratio and to minimize the influence of Galactic absorption, we used only the data above 0.7 keV (for five clusters, A2052, A2063, A2163, A3571, and MKW3S, we used the 0.9–2.0 keV band to reduce the anomalously high soft background). If the cluster was observed in several pointings, each pointing was reduced individually and the resulting images were merged. To measure the cluster surface brightness distribution, we masked detectable point sources and extended sources not related to the cluster. It is ambiguous whether or not all sources should be excluded, because the angular resolution varies strongly across the image and therefore a different fraction of the background is resolved into sources. We chose to exclude all detectable sources, and later checked that, with the exception of very bright sources, the exclusion did not change our results. The cluster surface brightness was measured in concentric rings of equal logarithmic width; the ratio of the outer to inner radius of each ring was equal to 1.1. We created both azimuthally averaged profiles and profiles in six sectors with position angles $`0^{\circ }-60^{\circ }`$, …, $`300^{\circ }-360^{\circ }`$. The profile centroid was chosen at the cluster surface brightness peak. 
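The ring geometry described above, with equal logarithmic width (outer-to-inner radius ratio 1.1), can be generated as follows; the inner radius and number of rings below are arbitrary illustrative values:

```python
import numpy as np

def log_annuli(r_in, n_rings, ratio=1.1):
    """Concentric ring boundaries of equal logarithmic width:
    each ring's outer radius is `ratio` times its inner radius."""
    edges = r_in * ratio ** np.arange(n_rings + 1)
    return list(zip(edges[:-1], edges[1:]))

# e.g. 40 rings starting at 0.1 (in whatever angular or physical unit)
rings = log_annuli(r_in=0.1, n_rings=40)
```

Equal logarithmic bins keep the fractional radial resolution constant, which is convenient when profiles are later compared in scaled (virial) coordinates.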
The particular choice of the centroid can affect the surface brightness profile in the inner region, especially for irregular clusters. However, it does not change any results at large distances, which was specifically checked. Therefore, we concluded that the simple choice of the cluster centroid was sufficient for our regular clusters. Finally, the cosmic X-ray background intensity was measured for each cluster individually. Cluster flux often contributes significantly to the background even at large distances from the center. We typically find that near $`r_{180}`$, the cluster contributes around $`5-20\%`$ of the background brightness. Since $`r_{180}`$ can be quite close to the edge of the FOV, it is often impossible to use any image region as a reference background region. Instead, we assumed that at large radii the cluster surface brightness is a power law function of radius, and therefore the observed brightness can be modeled as a power law plus constant background. Fitting the data at $`r>r_{180}/3`$ with this model, we determined the background. We checked that this technique provides the correct background value for distant clusters where one can independently measure the background near the PSPC edge. The fitted background value was subtracted from the data and its statistical uncertainty included in the results presented below. We have checked the flat-fielding quality using several *ROSAT* PSPC observations of “empty” fields. After exclusion of bright sources, as we do in the analysis of cluster images, the difference in the background level near the optical axis and near the FOV edge does not exceed $`5\%`$. The $`5\%`$ background variations correspond to an additional uncertainty of $`\delta \beta \approx 0.03-0.04`$ in $`\beta `$ (§ 4) and a 1–2% uncertainty in the gas overdensity radius (§ 5.2). ## 4. Surface Brightness Fits Cluster X-ray surface brightness profiles are usually modeled with the $`\beta `$-model (Cavaliere & Fusco-Femiano 1976) of the form $$S(r)=S_0\left(1+r^2/r_c^2\right)^{-3\beta +0.5},$$ (1) where $`r`$ is the angular projected off-center distance, and $`S_0`$, $`r_c`$, and $`\beta `$ are free parameters. Jones & Forman (1984, 1998) fitted this equation to a large number of the Einstein IPC cluster images. They find that the values of $`\beta `$ are distributed between $`0.5`$ and $`0.8`$ with the average ensemble value of $`\beta =0.6`$. Jones & Forman also find a mild trend of $`\beta `$ with the cluster temperature in the sense that hotter clusters have larger $`\beta `$. Cosmological cluster simulations typically predict steeper gas profiles, $`\beta \approx 0.8-1`$ (e.g., Navarro et al. 1995), in contradiction with the data. Bartelmann & Steinmetz (1996) suggested that the observed values of $`\beta `$ are underestimated because the surface brightness is saturated by the background at large radii, where the brightness profile steepens. The accuracy of the $`\beta `$-model derived from the X-ray data is of great importance because this model is widely used to derive the total gravitating cluster mass via the hydrostatic equilibrium equation and to measure the gas mass. Below we critically examine whether the $`\beta `$-model provides an accurate description of the profiles over a wide range of radii, and also whether the azimuthal averaging of the surface brightness of regular-looking clusters can be justified. We also re-examine the previously reported correlation of $`\beta `$ with temperature. ### 4.1. Exclusion of Cooling Flows Many regular clusters have cooling flows which appear as strong peaks in the surface brightness near the cluster center (Fabian 1994). 
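For reference, Eq. (1) with the conventional negative exponent, $`S(r)=S_0(1+r^2/r_c^2)^{-3\beta +0.5}`$, can be written as a function together with a numerical check that its logarithmic slope at large radii approaches $`-(6\beta -1)`$:

```python
import numpy as np

def beta_model(r, S0, rc, beta):
    # S(r) = S0 * (1 + r^2/rc^2)^(-3*beta + 0.5)
    return S0 * (1.0 + (r / rc) ** 2) ** (-3.0 * beta + 0.5)

# At r >> rc the logarithmic slope d ln S / d ln r tends to -(6*beta - 1):
beta = 0.7
r1, r2 = 100.0, 110.0  # radii far outside rc = 1
slope = (np.log(beta_model(r2, 1.0, 1.0, beta))
         - np.log(beta_model(r1, 1.0, 1.0, beta))) / np.log(r2 / r1)
```

For $`\beta =0.7`$ the asymptotic slope is $`-3.2`$, which is why the outer-profile power-law index is a direct probe of $`\beta `$.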
The inclusion of the cooling flow region in the $`\beta `$-model fit typically leads to small values for the core radius and $`\beta `$ and to a poor fit to the overall brightness profile. Clearly, this region should be excluded from the fit if an accurate modeling of the surface brightness at large radii is the goal. Different strategies for choosing the excluded region can be found in the literature. Jones & Forman (1984) increased the radius of the excluded region until the $`\beta `$-model fit provided an acceptable $`\chi ^2`$. This technique leads to different exclusion radii depending on the observation exposure, cluster flux, and the radial binning of the surface brightness profile. A more physical approach is to determine the radius beyond which gas cooling cannot possibly be important, i.e. where the gas cooling time (see, e.g. Fabian 1994) significantly exceeds the age of the Universe. White, Jones, & Forman (1997) and Peres et al. (1998) provide the values of $`r_{\mathrm{cool}}`$, the radius at which the cooling time equals $`1.3\times 10^{10}`$yr, for a large number of clusters, covering all but one of the cooling flow clusters in our sample. We always excluded the region $`r<2r_{\mathrm{cool}}`$, beyond which the cooling flow is unlikely to have any effect on the surface brightness distribution. ### 4.2. Surface Brightness Slope For comparison with previous studies, the results of fitting the $`\beta `$-model to azimuthally averaged surface brightness profiles in the radius range $`2r_{\mathrm{cool}}`$–$`1.5r_{180}(T)`$ for cooling flow clusters, and $`0`$–$`1.5r_{180}(T)`$ for non-cooling flow clusters (note that cluster X-ray emission has never been detected out to $`1.5r_{180}(T)`$), are presented in Table 2. For cooling flow clusters, the best fit values of the core radius are often comparable to the radius of the excluded region; therefore, the core radii cannot be reliably measured for those clusters.
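The $`r_{\mathrm{cool}}`$ criterion can be illustrated with the standard bremsstrahlung cooling-time estimate, $`t_{\mathrm{cool}}\approx 8.5\times 10^{10}\,(n_p/10^{-3}\,\mathrm{cm}^{-3})^{-1}(T/10^8\,\mathrm{K})^{1/2}`$ yr (Sarazin 1988). This approximation, and the $`\beta `$-model density profile with invented parameters below, are chosen only for the sketch; they are not necessarily the exact expressions used by White et al. or Peres et al.

```python
import numpy as np

def t_cool_yr(n_p, temp_K):
    """Bremsstrahlung cooling time (Sarazin 1988 approximation)."""
    return 8.5e10 * (n_p / 1.0e-3)**-1 * (temp_K / 1.0e8)**0.5

def r_cool_kpc(n0, rc, beta, temp_K, t_age=1.3e10):
    """Radius (kpc) at which the cooling time equals t_age,
    for a beta-model proton density profile n(r) = n0*(1+(r/rc)^2)^(-1.5*beta)."""
    r = np.linspace(1.0, 1000.0, 10000)
    n_p = n0 * (1.0 + (r / rc)**2)**(-1.5 * beta)
    tc = t_cool_yr(n_p, temp_K)        # monotonically increasing with r
    return np.interp(t_age, tc, r)

# Hypothetical cool-core cluster: n0 = 0.02 cm^-3, rc = 250 kpc, T = 7e7 K.
r_c = r_cool_kpc(n0=0.02, rc=250.0, beta=0.7, temp_K=7.0e7)
```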
The $`\beta `$-parameter, on the other hand, is measured very accurately, and the $`\beta `$-model fits generally provide a very good description of the data (see examples in Fig. 1). The best-fit values of $`\beta `$ are plotted versus the cluster temperature in Fig. 2. Similarly to Jones & Forman (1998), we find that the values of $`\beta `$ are distributed over a narrow range, $`0.7\pm 0.1`$, for most clusters. However, the distributions in our sample and in the Jones & Forman sample are slightly offset. Jones & Forman find the average value $`\beta =0.6`$, while all but two of our clusters have $`\beta >0.6`$. This difference is attributable in part to different techniques of excising the cooling flows; but also, because of the larger field of view and lower background, ROSAT data trace the surface brightness to larger radius, where the profiles often steepen (see below). Unlike, for example, clusters in the Jones & Forman (1984, 1998) sample, there are no hints of a correlation of $`\beta `$ with cluster temperature in our sample (left panel in Fig. 2). A careful examination shows that the previously reported correlation of $`\beta `$ with temperature is due to small values of $`\beta \approx 0.5`$ for cool clusters with $`T\lesssim 3`$ keV, for which we find significantly steeper profiles. Again, a likely explanation for this discrepancy is the incomplete removal of cooling flows in the earlier studies; a cooling flow, if not accounted for completely, biases $`r_c`$ and $`\beta `$ low. To determine the slope of the surface brightness profiles at large radii, we fit the profiles in the same range of radii in virial coordinates, $`0.3r_{180}(T)<r<1.5r_{180}(T)`$. With this choice of radii, clusters of different temperatures are compared in the same range of physical coordinates. The core radius cannot be determined from this fit, and so we fixed its value at either $`0.1r_{180}`$ or the value derived from the fit for the entire radial range.
Because the core radius is typically much smaller than the inner radius of the fitted data, both choices are equivalent to fitting the power law relation $`S\propto r^{1-6\beta }`$. The values of $`\beta _{\mathrm{outer}}`$ are listed in Table 2 and plotted versus cluster temperature in Fig. 2. In many clusters, the slopes in the outer parts are slightly steeper than those given by the global $`\beta `$-model fit. The extreme case is A2163, where $`\beta `$ changes by 0.17. The surface brightness profile of this cluster shows a clear steepening at $`r>0.3r_{180}(T)`$ (Fig. 2). Although this cluster is probably a merger (see discussion in Markevitch et al. 1996), the same steepening in the surface brightness is seen in all but one of the $`60^{\circ}`$ sectors. However, the typical change of $`\beta `$ in the outer parts is much smaller, $`\mathrm{\Delta }\beta \approx 0.05`$, and only marginally significant in most clusters. Thus, the strong steepening of the gas density distribution at large radius suggested by Bartelmann & Steinmetz (1996) is excluded. There is some indication of a positive correlation of $`\beta _{\mathrm{outer}}`$ with temperature. This is mainly due to a group of 5 hot, $`T=6-10`$ keV, clusters with $`\beta _{\mathrm{outer}}>0.8`$, and a strong steepening of the surface brightness profile in the hottest cluster, A2163. However, as is seen from Fig. 2, the possible change of slope is well within the scatter at high temperatures. In any case, the change of slope is small, from $`\beta \approx 0.67`$ for 3 keV clusters to $`\beta \approx 0.8-0.85`$ for 10 keV clusters. ### 4.3. Azimuthal Variations of the Surface Brightness Cluster X-ray surface brightness is often described by a radial profile (as in the previous sections). It is important to determine how accurate this description is in the outer region. We divide each cluster into six sectors, $`0^{\circ}`$–$`60^{\circ}`$, …, $`300^{\circ}`$–$`360^{\circ}`$, and determine $`\beta _{\mathrm{outer}}`$ in the radial range $`0.3r_{180}(T)<r<1.5r_{180}(T)`$ in each sector separately.
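The outer power-law fit is compact to express: with the core term negligible, eq. (1) gives $`S\propto r^{1-6\beta }`$, so the log–log slope directly determines $`\beta _{\mathrm{outer}}`$. A noise-free sketch with an invented profile (not cluster data):

```python
import numpy as np

def beta_outer(r, s):
    """Power-law slope of S(r) at large radii; since S ~ r^(1 - 6*beta),
    beta_outer = (1 - slope) / 6."""
    slope, _ = np.polyfit(np.log(r), np.log(s), 1)
    return (1.0 - slope) / 6.0

# Synthetic outer profile with beta = 0.8, i.e. logarithmic slope 1 - 6*0.8 = -3.8.
r = np.logspace(0.0, 0.7, 30)      # spans 0.3 r180 .. 1.5 r180 in arbitrary units
s = 2.0 * r**(1.0 - 6.0 * 0.8)
b = beta_outer(r, s)
```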
Azimuthal variations of $`\beta _{\mathrm{outer}}`$ would indicate an asymmetric cluster. The sample ranges from very regular clusters, such as A2029, to those which display statistically significant azimuthal variations of the slope, such as A1795 (Fig. 4.3). However, the amplitude of the variations is typically not very large. The azimuthal rms variations of $`\beta _{\mathrm{outer}}`$ in excess of the statistical noise level are listed for all clusters in Table 2. In most cases, these variations are below 0.1, and in many cases they are dominated by a strong deviation in just one sector. We conclude that the azimuthal averaging of the surface brightness in the cluster outer parts can be justified. We will return to the issue of azimuthal averaging in the discussion of the gas mass distribution below. ## 5. Gas Mass Distribution The X-ray surface brightness distribution in clusters receives much attention because it can be rather precisely converted to the distribution of hot gas. Determination of the gas mass distribution is also a goal of our study. Below we briefly review the techniques used to derive the gas density and present the results for our sample. ### 5.1. Conversion of Surface Brightness to Gas Mass Under the assumption of spherical symmetry, the observed surface brightness profile can be converted to the emissivity profile. The latter is then easily converted to the gas density profile, because the X-ray emissivity of hot, homogeneous plasma is proportional to the square of the density and, in the soft X-ray band, depends only very weakly on temperature (e.g., Fabricant, Lecar, & Gorenstein 1980, and §6.2 below). There are two main techniques to convert the observed surface brightness profile to the emissivity profile under the assumption of spherical symmetry. The first is to fit an analytical function to the surface brightness profile $`S(r)`$ and then deproject the fit using the inverse Abel integral (e.g., Sarazin 1986).
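The "rms variation in excess of the statistical noise" can be estimated by subtracting the mean measurement variance from the sample variance over the six sectors. The sector values below are invented for illustration, and this simple estimator is a common choice rather than necessarily the exact one used for Table 2.

```python
import numpy as np

def excess_rms(values, errors):
    """Azimuthal scatter in excess of statistical noise: subtract the mean
    per-sector measurement variance from the sample variance (floored at zero)."""
    var_total = np.var(values, ddof=1)
    var_stat = np.mean(np.asarray(errors)**2)
    return np.sqrt(max(var_total - var_stat, 0.0))

# beta_outer in six 60-degree sectors with 1-sigma errors (hypothetical numbers).
beta_sec = np.array([0.72, 0.68, 0.81, 0.70, 0.74, 0.69])
sigma_sec = np.array([0.03, 0.03, 0.04, 0.03, 0.03, 0.04])
dbeta = excess_rms(beta_sec, sigma_sec)
```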
For the $`\beta `$-model surface brightness fit (eq. 1), the conversion is particularly simple (Cavaliere & Fusco-Femiano 1976, Sarazin 1986). The second widely used technique is the direct deprojection of the data without using an analytical model (Fabian et al. 1981, Kriss, Cioffi, & Canizares 1983). Briefly, one assumes that the emissivity is uniform within spherical shells corresponding to the surface brightness profile annuli. The contribution of the outer shells to the flux in each annulus can be subtracted, and the emissivity in the shell calculated, using simple geometrical considerations. This method has an important advantage over the $`\beta `$-fit, in that no functional form of the gas distribution is assumed and realistic statistical uncertainties at each radius are obtained. Although we generally find very little difference between the deprojection and $`\beta `$-fit methods, we adopt the deprojection technique as the preferred one. Once the distribution of emissivity (in units of flux per volume) is known, it can be converted to the distribution of gas mass as follows. The emissivity is multiplied by the volume of the spherical shell to obtain the total flux from this shell. Assuming that the gas temperature is constant at all radii, we use the Raymond & Smith (1977) spectral code to find the conversion coefficient between the flux and the emission measure integral, $`E=\int n_en_p\,dV`$, given the plasma temperature, heavy metal abundance, cluster redshift, and Galactic absorption. Metal abundance has virtually no effect on the derived gas mass at high temperatures, which is the case for our clusters; we assume that it is 0.3 of the Solar value for all clusters. For this metal abundance, $`n_e/n_p=1.17`$, and the gas density is $`\rho _g=1.35m_pn_p`$. The gas mass in the shell is $`m_g=m_p(1.56EV)^{1/2}`$, where $`V`$ is the volume of the shell. Given the observed flux, the derived gas mass scales with distance to the cluster as $`d^{5/2}`$. ### 5.2.
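The onion-peeling geometry can be sketched as follows: the volume of spherical shell $`j`$ seen in projection inside annulus $`i`$ is a combination of spherical-cap terms $`(a^2-b^2)^{3/2}`$, and the emissivities are solved from the outermost shell inward. This is a generic implementation of the Fabian et al. / Kriss et al. method with an invented emissivity profile used only for a round-trip check; converting the resulting shell fluxes to gas mass would then use the $`m_g=m_p(1.56EV)^{1/2}`$ relation quoted above.

```python
import numpy as np

def projection_matrix(edges):
    """V[i, j] = volume of spherical shell j projected into annulus i,
    with shells and annuli sharing the same radial edges."""
    def h(a, b):
        # (a^2 - b^2)^(3/2), clipped at zero when the cap does not exist
        return np.clip(a**2 - b**2, 0.0, None)**1.5
    n = len(edges) - 1
    vol = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):            # only shells j >= i project into annulus i
            vol[i, j] = (4.0 * np.pi / 3.0) * (
                h(edges[j + 1], edges[i]) - h(edges[j + 1], edges[i + 1])
                - h(edges[j], edges[i]) + h(edges[j], edges[i + 1]))
    return vol

def deproject(flux, edges):
    """Peel from the outside in: emissivity per unit volume in each shell."""
    vol = projection_matrix(edges)
    n = len(flux)
    em = np.zeros(n)
    for i in range(n - 1, -1, -1):
        em[i] = (flux[i] - vol[i, i + 1:] @ em[i + 1:]) / vol[i, i]
    return em

# Round-trip check with a known (invented) emissivity profile.
edges = np.linspace(0.0, 10.0, 21)
r_mid = 0.5 * (edges[:-1] + edges[1:])
em_true = (1.0 + (r_mid / 2.0)**2)**-2.0
flux = projection_matrix(edges) @ em_true    # forward projection
em_rec = deproject(flux, edges)              # recovered emissivities
```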
Correlation of the Baryon Overdensity Radius with Temperature As was pointed out in §1, simple theory predicts a tight correlation between the radius at a fixed baryon overdensity (relative to the background density of baryons) and the temperature, of the form $`R\propto T^{0.5}`$. Since most baryons in clusters are in the form of hot gas, and the gas mass is easily measured from the X-ray data, this correlation can be tested observationally. We use the deprojection technique to determine the enclosed gas mass as a function of radius. The baryon overdensity is calculated as the ratio of the enclosed gas mass to $`(4\pi /3)\rho _0R^3(1+z)^3`$, where $`\rho _0`$ is the present day background density of baryons derived from primordial nucleosynthesis, and $`z`$ is the cluster redshift. We adopt the value $`\rho _0=2.85\times 10^9`$ $`M_{\odot}`$ Mpc<sup>-3</sup> (Walker et al. 1991); a different value of the background baryon density (e.g., a recent determination by Burles & Tytler 1998) would have no effect on our results except for scaling the reported overdensities. Previous studies of the baryonic contents of clusters indicated that baryons contribute $`15-20\%`$ of the total cluster mass (for $`h=0.5`$); if this ratio is representative of the Universe as a whole, it corresponds to a cosmological density parameter $`\mathrm{\Omega }_0=0.2-0.3`$ (White et al. 1993, David et al. 1995, Evrard 1997). With this range for $`\mathrm{\Omega }_0`$, the two commonly referenced values of the dark matter overdensity, $`\delta =180`$ and 500 relative to the critical density, correspond to gas overdensities $`\mathrm{\Delta }_g=600-1000`$ and 1500–2500, respectively. Therefore, we determine the radii at which the mean enclosed hot gas density is 1000 and 2000 times the background baryon density; these radii are denoted $`R_{1000}`$ and $`R_{2000}`$ hereafter.
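Given an enclosed gas mass profile from the deprojection, $`R_{1000}`$ follows by interpolating the mean-overdensity curve. A sketch with a toy power-law mass profile (not a real cluster):

```python
import numpy as np

RHO0 = 2.85e9  # background baryon density, Msun / Mpc^3 (Walker et al. 1991)

def overdensity_radius(r, m_gas, target, z=0.0):
    """Radius at which the mean enclosed gas density equals `target` times the
    background baryon density; r in Mpc and m_gas in Msun, both increasing."""
    delta = m_gas / ((4.0 * np.pi / 3.0) * RHO0 * r**3 * (1.0 + z)**3)
    # delta decreases with r, so interpolate on the reversed arrays
    return np.interp(target, delta[::-1], r[::-1])

# Toy enclosed-mass profile M(<r) = 1e13 Msun * (r / 1 Mpc), giving delta ~ r^-2.
r = np.linspace(0.2, 3.0, 500)
m = 1.0e13 * r
R1000 = overdensity_radius(r, m, 1000.0)
```

For this toy profile the answer is analytic, $`R_{1000}=(10^{13}/(4\pi \rho _0/3\times 1000))^{1/2}\approx 0.915`$ Mpc, which the interpolation reproduces.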
For a wide range of gas temperatures, from 1.5 to 10 keV, the gas mass corresponding to a fixed *ROSAT* flux changes by only 4% if the metal abundance is $`a=0.3`$ Solar, and by 10% if $`a=0.5`$. The corresponding variations of the gas overdensity radius are approximately $`2\%`$ and 5%. Therefore, the values of $`R_{1000}`$ and $`R_{2000}`$, as derived from the *ROSAT* data, are practically independent of the gas temperature. The measured radii $`R_{1000}`$ and $`R_{2000}`$ are plotted versus cluster temperature in Fig. 3. Note that $`R`$ and $`T`$ are measured essentially independently, as opposed, for example, to the baryon fraction or total mass, whose estimates use the gas temperature. The correlation is very tight and close to the theoretically expected $`R\propto T^{0.5}`$. Note that even A3391, the cluster with an anomalously flat surface brightness profile, lies quite close to the observed correlation. We fit power laws to the $`R-T`$ relation using the bisector modification of the linear regression algorithm that allows for intrinsic scatter and non-uniform measurement errors, and treats both variables symmetrically (Akritas & Bershady 1996 and references therein). The confidence intervals were determined using bootstrap resampling (e.g., Press et al. 1992). The best fit relations are $`\mathrm{lg}R_{1000}`$ $`=`$ $`(0.569\pm 0.043)\mathrm{lg}T+2.918`$ $`\mathrm{lg}R_{2000}`$ $`=`$ $`(0.615\pm 0.042)\mathrm{lg}T+2.720,`$ (2) where radii are in kpc and temperatures in keV, and the uncertainties are $`68\%`$ confidence. For any given temperature, the average scatter in $`R_{1000}`$ is only 6.5%, and $`7\%`$ for $`R_{2000}`$. This is comparable to the scatter of the dark matter overdensity radius $`r_{500}`$ in simulated clusters (Evrard et al. 1996). Even though the best fit slopes formally deviate from the expected value of 0.5 by $`2-3\sigma `$, the difference between the best fit and the $`R\propto T^{0.5}`$ relation is within the scatter in the data (Fig.
3). A tight correlation of the baryon overdensity radius with temperature suggests that the gas density profiles in the outer parts of clusters are similar, when appropriately scaled. Figure 4 shows the gas density profiles plotted as a function of radius in Mpc, in units of $`r_{180}(T)`$, and in units of $`R_{1000}`$. No density scaling was applied. As expected, density profiles display a large scatter if no radius scaling is applied, since we are comparing systems of widely different masses. The density scatter at large radius becomes small (the entire range is $`\pm 40\%`$) when radii are scaled to $`r_{180}(T)`$. This scatter is close to that of the dark matter density in simulated clusters. The scatter is particularly small when profiles are scaled to the overdensity radius $`R_{1000}`$. One can argue that in this case, the scatter is artificially suppressed because the scaling depends on the density. However, the scatter remains small over a rather wide radial range; also, the same critique applies when dark matter profiles of simulated clusters are plotted in virial coordinates. To conclude, gas density profiles show a high degree of similarity, both in terms of enclosed mass and shape, when the radius is scaled to either the virial radius estimated from the gas temperature or the fixed gas overdensity radius. In the next section, we discuss some uncertainties which affect the measurement of the gas mass distribution. ## 6. Discussion of Gas Mass Uncertainties ### 6.1. Sample Selection and Spherical Symmetry To calculate gas mass from the observed X-ray surface brightness, we, and most other studies, assume that the cluster is spherically symmetric. It is desirable to check that this assumption is adequate. If the substructure had a strong effect on the derived gas mass, the mass calculated using surface brightness profiles in different cluster sectors would be substantially different when the substructure is seen in projection. 
Because of random orientations, substructure in projection occurs more often than along the line of sight. Therefore, the azimuthal variations of the gas mass can be used to place limits on the 3-dimensional deviations from the spherical symmetry. To look for this effect, we calculated gas masses using surface brightness profiles in six sectors in all clusters. The rms azimuthal gas mass variations within $`R_{1000}`$ are listed in Table 2. For most clusters, these variations are on the level of $`10-15\%`$, including statistical scatter. Since we find that mass variations in projection are small, this indicates that they also are small in three-dimensional space. It can be argued that only small azimuthal mass variations are found because clusters with substructure were excluded from the sample. Moreover, such selection might lead to a preferential selection of clusters having substructure along the line of sight which is invisible in the images. As a result, our gas distribution measurements might be seriously biased, because we do not average over different cluster orientations. These arguments are countered by the following considerations. We excluded only three cooling flow clusters, Cyg-A, A3558, and A1763, on the basis of strong substructure, compared to 25 such clusters in the sample (Table 1). Therefore, our cooling flow subsample, which comprises two thirds of the total sample, should be unbiased with respect to substructure selection. Since there is no obvious difference between cooling and non-cooling flow clusters either in terms of gas mass (Fig. 3) or surface brightness fits (Fig. 2), our subsample of non-cooling flow clusters also is unlikely to have significant substructure along the line of sight. ### 6.2. Temperature Structure ASCA measurements suggest that, at least in hot clusters, the gas temperature gradually declines with radius reaching $`0.5`$ of the average value at $`r=0.5r_{180}`$ (Markevitch et al. 1998). 
Because of strong line emission, the calculation of the gas density from the ROSAT flux is uncertain for cool clusters if precise temperatures and metal abundances are unknown. If such cool ($`T\lesssim 2`$ keV) clusters have declining temperature profiles, our gas masses will be affected, because we assume isothermality. Fortunately, the effect is not very strong. We tested this by simulating a $`T=0.5`$ keV Raymond-Smith plasma with heavy metal abundance, $`a`$, in the range 0.1–0.5 of the Solar value, and converting the predicted ROSAT PSPC flux in the 0.5–2 keV band back to gas mass using the $`T=2`$ keV, $`a=0.3`$ spectral model. The mass was underestimated by 20% for $`a=0.1`$ and overestimated by $`35\%`$ for $`a=0.5`$. For the input spectrum with $`T=1`$ keV, the mass error was in the range $`\pm 15\%`$. The effect of a temperature decline on the enclosed gas mass is smaller, because a significant mass fraction is contained within the inner, hotter regions. The error in the gas overdensity radius determination is smaller still, because the overdensity is a very strong function of radius (for example, $`\mathrm{\Delta }_g\propto r^{-2}`$ for $`\beta =2/3`$). ### 6.3. Cooling Flows The presence of a cooling flow leads to an overestimate of the gas density near the cluster center, if one assumes that the gas is single-phase and isothermal. However, the enclosed gas mass at large radius is little affected, because most of the gas mass lies at large radii. For example, in A1795, the cluster with one of the strongest cooling flows, only 2.7% of the gas mass inside $`R_{1000}`$ is within the cooling radius and 9% of the mass is within $`2r_{\mathrm{cool}}`$, if one assumes that the cooling flow is single-phase and isothermal. Even if the mass within $`2r_{\mathrm{cool}}`$ is overestimated by 100% because of these incorrect assumptions, the total gas mass is overestimated by only 10%, and $`R_{1000}`$ by only 3%. The true errors are likely to be smaller.
The presence of a cooling flow also leads to an underestimate of the emission-weighted temperature (underestimation here is relative to the absence of radiative cooling, the assumption usually made in theory and simulations). For example, Markevitch et al. (1998) find that in several clusters, the temperature increases by up to 30% when the cooling flow is excised. This temperature error produces almost no error in the gas masses, but can introduce an additional scatter in the $`R-T`$ correlation, or in the gas density profiles scaled to $`r_{180}(T)`$. For all clusters with strong cooling flows except 2A0335 and A1689, we used temperatures from Markevitch et al., for which the cooling flows were excised. Allen & Fabian (1998) find that the temperature increase in A1689, when the cooling flow is modeled as an additional spectral component, is small, $`\sim 5\%`$. Cooling flows in the other clusters in our sample are not very strong, and simple emission-weighted temperatures should be sufficiently accurate. ## 7. Discussion ### 7.1. Applicability of the $`\beta `$-model We have found above that the slope of the surface brightness profile in the outer part of clusters \[$`0.3r_{180}-r_{180}`$\] is slightly steeper than the slope of the $`\beta `$-model fit over the entire radial range (excluding the cooling flow region). Thus, the $`\beta `$-model does not describe the gas distribution precisely. However, the deviations from the $`\beta `$-model are small and do not lead to significant errors in the total mass or gas mass. Consider the extreme case of A2163, where the global $`\beta `$-value is 0.73 but beyond a radius of $`0.3r_{180}(T)`$, the profile slope steepens to $`\beta =0.9`$. Such a change of $`\beta `$ leads to a 24% increase of the total mass calculated from the hydrostatic equilibrium equation; this is smaller than other uncertainties (Markevitch et al. 1996).
The gas masses within $`R_{1000}`$ calculated from the global $`\beta `$-model and from the exact surface brightness profile differ by 20%. In most clusters, where $`\beta `$ typically changes by $`\sim 0.05`$, the effect on the total and gas mass is much smaller. ### 7.2. $`R-T`$: The First “Proper” Scaling for Baryons The scaling relations involving hot gas in clusters established previously show significant deviations from the theoretically expected relations. The most notable example is the luminosity-temperature correlation. From the virial theorem relation $`M_{\mathrm{tot}}\propto T^{3/2}`$, and the assumptions of constant baryon fraction and self-similarity of clusters, one expects $`L_x\propto T^2`$, while the observed relation is closer to $`L_x\propto T^3`$ (David et al. 1993). The current consensus is that additional physics, such as preheating of the intergalactic medium, feedback from galaxy winds/supernovae, or shock heating of the IGM, has important effects on the X-ray luminosities (see Cavaliere, Menci & Tozzi 1997 and references therein). These processes are still uncertain and, for example, prevent the use of the evolution of the cluster luminosity function as a cosmological probe. Another example of the deviation of cluster baryon scalings from theoretical expectations is the relation between the cluster size at a fixed X-ray surface brightness and temperature (Mohr & Evrard 1997). Mohr & Evrard find $`R_I\propto T^{0.9\pm 0.1}`$ from the observations, while their simulated clusters show $`R_I\propto T^{0.7}`$. After including feedback from galaxy winds in the simulations, Mohr & Evrard were able to reproduce the observed size-temperature relation. Note that the surface brightness threshold used by Mohr & Evrard was selected at a high level, so that the derived size $`R_I`$ was only $`\sim 0.3`$ of the cluster virial radius.
The scaling between the radius of a fixed gas overdensity and temperature, $`R=\mathrm{const}\times T^{1/2}`$, presented here is, to our knowledge, the first observed scaling involving only cluster baryons that is easily understandable theoretically (§1). The crucial difference between the luminosity-temperature and Mohr & Evrard’s size-temperature relations and our scaling is that we use cluster properties at large radius, where most of the mass is located, while the $`L-T`$ and $`R_I-T`$ relations are based on the properties of the inner cluster regions. Our findings thus suggest that any processes required to explain the observed $`L-T`$ and $`R_I-T`$ relations affect only the central cluster parts and are not important for the gas distribution at large radii. ### 7.3. Limit on the Variations in the Baryon Fraction Simulations predict that the total mass within a radius of fixed overdensity scales as $`M_{\mathrm{tot}}\propto T^{3/2}`$ (Evrard et al. 1996). Our observed scaling between the gas overdensity radius and $`T`$ is consistent with $`R\propto T^{1/2}`$, or equivalently $`M_{\mathrm{gas}}\propto T^{3/2}`$. Therefore, if the simulations are correct, $`M_{\mathrm{gas}}/M_{\mathrm{tot}}`$ does not depend on the cluster temperature. Since hot gas is the dominant component of baryons in clusters, the baryon fraction within a radius of fixed overdensity is constant for all clusters. To be more precise, the best-fit relation $`R_{1000}\propto T^{0.57}`$ corresponds to a slowly varying gas fraction $`M_{\mathrm{gas}}/M_{\mathrm{tot}}\propto T^{0.2}`$. However, the stellar contribution can reduce this trend, because stars contribute a greater fraction of the baryon mass in low-temperature clusters (David et al. 1990, David 1997). The small observed scatter around the mean $`R-T`$ relation can be used to place limits on the variations of the baryon fraction between clusters of similar temperature. At large radius, the mean gas overdensity is $`\mathrm{\Delta }_g\propto r^{-3\beta }`$.
Therefore, the $`7\%`$ observed scatter in radius at a given $`\mathrm{\Delta }`$ corresponds to a $`3\beta \times 7\%`$ scatter in overdensity at a given radius. Assuming that the total cluster mass is uniquely characterized by the temperature, the scatter in $`M_{\mathrm{gas}}/M_{\mathrm{tot}}`$ is 14–18%, *including* the measurement uncertainties. The small scatter indicates that the baryon fraction in clusters is indeed universal. There is also an intrinsic scatter in the $`M_{\mathrm{tot}}-T`$ relation, which is 8%–15% in simulated clusters (Evrard et al. 1996); if the deviations of the total mass and gas mass from the average value expected for the given temperature are not anti-correlated, the scatter of the baryon fraction is reduced still further. Thus, our results provide further observational support for measurements of $`\mathrm{\Omega }_0`$ from the baryon fraction in clusters and the global density of baryons derived from primordial nucleosynthesis. ### 7.4. Similarity of Gas Density Profiles The gas density profiles plotted in virial coordinates, i.e., with radius scaled by either $`r_{180}(T)`$ or $`R_{1000}`$, are very similar, both in slope and normalization (Fig. 4). The similarity of the gas density slopes in the outer parts of clusters is also evident from the relatively small scatter of $`\beta `$-values in Fig. 2. Most clusters have $`0.65<\beta _{\mathrm{outer}}<0.85`$, which corresponds to the gas density falling with radius between $`r^{1.95}`$ and $`r^{2.55}`$. The average gas density, $`\rho _\mathrm{g}\propto r^{2.25}`$, is significantly shallower than the universal density profile of dark matter halos found in numerical simulations, $`\rho _{\mathrm{dm}}\propto r^{2.7}`$ between $`0.3r_{180}`$ and $`r_{180}`$ (Navarro et al. 1995). Moreover, if the gas in this radial range is in hydrostatic equilibrium and isothermal, a power law dependence of the gas density on radius implies $`\rho _{\mathrm{dm}}\propto r^2`$.
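The error propagation quoted in § 7.3 can be checked directly: since $`\mathrm{\Delta }_g\propto r^{-3\beta }`$ at large radius, a 7% radius scatter at fixed overdensity maps to $`3\beta \times 7\%`$ in gas mass at fixed radius, and the observed range of $`\beta _{\mathrm{outer}}`$ reproduces the 14–18% figure.

```python
# delta_g ~ r^(-3*beta): the fractional scatter in overdensity (hence gas mass)
# at fixed radius is 3*beta times the fractional scatter in radius at fixed overdensity.
scatter_R = 0.07
scatter_mgas = [3.0 * beta * scatter_R for beta in (0.65, 0.85)]
# -> [0.1365, 0.1785], i.e. roughly 14-18%
```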
Under the hydrostatic equilibrium assumption, an average gas polytropic index $`\gamma \approx 1.3`$, or equivalently $`T\propto r^{-0.7}`$, is required for the total mass to follow the Navarro et al. distribution. Interestingly, this is quite close to the temperature profile observed in many clusters within $`0.5r_{180}`$ (Markevitch et al. 1998). ### 7.5. Comparison with Other Works After this paper was submitted, we learned about the work of Mohr, Mathiesen & Evrard (1999) and Ettori & Fabian (1999), who also studied the hot gas distribution in large cluster samples. We briefly discuss some aspects of these works that are in common with our study. Both Mohr et al. and Ettori & Fabian derive cluster $`\beta `$’s from a global fit. Ettori & Fabian exclude the central 200 kpc in the cooling flow clusters; they find global $`\beta `$’s in the range 0.6–0.8, in agreement with our results. Mohr et al. fit the cooling flow region with an additional $`\beta `$-model component, but force the same $`\beta `$ for the cluster and cooling flow components. Their values of $`\beta `$ for cooling flow clusters are often flatter than ours (e.g., they derive $`\beta =0.66\pm 0.03`$ for A85, while our value is 0.76); most likely, this is due to the difference in fitting procedures. Mohr et al. find a tight correlation of the cluster temperature with the hot gas mass within $`r_{500}`$ (estimated as $`r_{500}=C\times T^{1/2}`$) in the form $`M_{\mathrm{gas}}\propto T^{1.98\pm 0.18}`$. Because the gas mass and the gas overdensity radius are related as $`M_{\mathrm{gas}}=\mathrm{const}\times R^3`$, the $`M_{\mathrm{gas}}-T`$ and our $`R_{1000}-T`$ correlations are almost equivalent. However, our correlation corresponds to a flatter $`M_{\mathrm{gas}}-T`$ relation, $`M_{\mathrm{gas}}\propto T^{1.71\pm 0.13}`$, closer to the theoretically expected slope of 1.5. There is an important difference between our approach and that of Mohr et al. While in our method, $`R_{1000}`$ and $`T`$ are measured essentially independently, the Mohr et al.
measurement of the gas mass does depend on $`T`$ through $`r_{500}`$. Since $`r_{500}\propto T^{1/2}`$ and typically $`M_{\mathrm{gas}}(<r)\propto r`$, their method would find $`M_{\mathrm{gas}}\propto T^{1/2}`$ even if the gas profiles of all clusters were identical. This effect may introduce a bias which is responsible for the slightly steeper $`M_{\mathrm{gas}}-T`$ relation in Mohr et al. Mohr et al. and Ettori & Fabian (for low-redshift clusters) find that the values of the gas fraction in hot clusters are distributed in a relatively narrow range, $`f_{\mathrm{gas}}\approx 0.2\pm 0.04`$. Our tight correlation of $`R_{1000}`$ and $`T`$ also implies a low, $`\sim 15\%`$, scatter in $`f_{\mathrm{gas}}`$ (see above). ## 8. Summary We have carried out a detailed analysis of the surface brightness distributions of a sample of 25 cooling flow clusters and 14 non-cooling flow clusters. Since the bulk of the cluster gas mass, and hence the luminous cluster baryons, resides at large radii, we have focussed on the properties of the gas profile at large radii. The cluster profiles, from $`0.3r_{180}`$ to $`r_{180}`$, can be accurately characterized by a single power law with $`\beta =0.65-0.85`$. These outer profiles are steeper by about 0.05 in $`\beta `$ on average than the fits using the entire surface brightness profile (but excluding the cooling flow region). This indicates that the $`\beta `$-model does not describe the surface brightness profiles precisely. The previously reported correlation of increasing $`\beta `$ with increasing temperature (steepening profiles with increasing temperature) is only weakly present in our data. This difference arises primarily because the low *ROSAT* background allows us to trace clusters to near the virial radius, where they exhibit more similar profiles than in the central part, which is often dominated by the cooling flow.
We find a very tight correlation between the radius corresponding to a fixed baryon overdensity and the gas temperature, which is consistent with that theoretically predicted from the virial theorem. For example, the radius at which the mean baryon overdensity is 1000 is best fit as a function of temperature by $`R_{1000}\propto T^{0.57\pm 0.04}`$ and is consistent, within the scatter, with the theoretically expected relation $`R\propto T^{0.5}`$. The observed scatter in the correlation of $`R_{1000}`$ vs. $`T`$ is small. Quantitatively, for any given temperature the average scatter in $`R_{1000}`$ is approximately 7%. This corresponds to a scatter in $`M_{\mathrm{gas}}/M_{\mathrm{tot}}`$ at the same radius of less than 20%, which includes any intrinsic variation as well as measurement errors. At large radii, cluster gas density distributions are remarkably similar when scaled to the cluster virial radius ($`r_{180}`$), and they are significantly shallower than the universal profile of the dark matter density found in simulations (Navarro et al. 1995). However, for gas in hydrostatic equilibrium, the temperature profile found by Markevitch et al. (1998), combined with the gas density profiles observed for our sample, implies a dark matter distribution quite similar to the universal one found in numerical simulations. We thank M. Markevitch for a careful reading of the manuscript. This work was supported by the CfA postdoctoral fellowship.
## 1 Introduction To describe multiplicity fluctuations in angular regions by analytical calculations using perturbative QCD is a challenge. It could help to improve our understanding of the parton cascading mechanism and might lead to a simple description of multiparticle correlations by QCD alone. The idea that QCD jets might exhibit a self-similar (or fractal) structure was brought up already in 1979 by R.P. Feynman, A. Giovannini and G. Veneziano. In recent years this conception has been confirmed by various groups, giving detailed predictions on variables and phase space regions where fractality is expected to show up. A simple predicted dependence of the fractal dimensions on $`\alpha _s`$ stimulated further interest in measuring them experimentally. The analytical calculations are performed in the Double Log Approximation (DLA), neglecting energy-momentum conservation, and concern only idealized jets. They provide leading order predictions applicable quantitatively at very high energies ($`\gtrsim `$ 1 TeV). At LEP energies, non-perturbative effects may be important. Also, they refer to multiparton states, whereas only multihadron states can be measured. It has been suggested that the parton evolution should be extended from the perturbative regime down to a lower mass scale (if possible to the mass scale of light hadrons) to be able to compare the partonic states directly with the hadronic states. This concept of Local Parton Hadron Duality (LPHD) is quite successful for single particle distributions and for global moments of multiplicity distributions. It remains questionable in the case of the more refined variables used here, namely factorial moments and cumulants in phase space bins. First experimental measurements revealed, indeed, substantial deviations. On the other hand it can be expected that these calculations will improve in the future. 
This would provide us with a better understanding of the internal structure of jets in terms of analytical expressions than can be obtained by Monte Carlo calculations with many parameters. In fact, the analytical predictions considered in this paper involve only one adjustable parameter, namely the QCD scale $`\mathrm{\Lambda }`$. The aim of this study is to use DELPHI data to measure multiplicity fluctuations in one- and two-dimensional angular intervals and compare them with the available theoretical predictions. It is hoped that such a study may show how to approach nearer to a satisfying theory based on QCD and LPHD which describes high energy multiparticle phenomena. In section 2 the theoretical framework is sketched, section 3 contains information about the experimental data and the Monte Carlo comparisons and in section 4 the comparison with the analytical calculations is presented. Section 5 contains the final discussion and the summary. ## 2 Theoretical framework The theoretical calculations treat correlations between partons emitted within an angular window defined by two angles $`\vartheta `$ and $`\mathrm{\Theta }`$. The parton and particle density correlations (fluctuations) in this window are described by normalized factorial moments of order $`n`$: $$F^{(n)}(\mathrm{\Theta },\vartheta )=\frac{\rho ^{(n)}(\mathrm{\Omega }_1,\mathrm{},\mathrm{\Omega }_n)𝑑\mathrm{\Omega }_1\mathrm{}𝑑\mathrm{\Omega }_n}{\rho ^{(1)}(\mathrm{\Omega }_1)\mathrm{}\rho ^{(1)}(\mathrm{\Omega }_n)𝑑\mathrm{\Omega }_1\mathrm{}𝑑\mathrm{\Omega }_n}$$ (1) where $`\rho ^{(n)}(\mathrm{\Omega }_1,\mathrm{},\mathrm{\Omega }_n)`$ are the $`n`$-parton/particle density correlation functions which depend on the spherical angles $`\mathrm{\Omega }_k`$. The integrals extend over the window chosen. 
The angular windows considered here are either rings around the jet axis with mean opening angle $`\mathrm{\Theta }=25^{\circ }`$ and half width $`\vartheta `$ in the case of 1 dimension ($`D=1`$), or cones with half opening angle $`\vartheta `$ around a direction ($`\mathrm{\Theta },\mathrm{\Phi }`$) with respect to the jet axis in the case of 2 dimensions ($`D=2`$). At sufficiently large jet energies, the parton flow in these angular windows is dominated by parton avalanches caused by gluon bremsstrahlung off the initial quark. The cumulants $`C^{(n)}`$ are obtained from the moments $`F^{(n)}`$ by simple algebraic equations, e.g. $`C^{(2)}=F^{(2)}-1`$, $`C^{(3)}=F^{(3)}-3(F^{(2)}-1)-1`$. The theoretical scheme for deriving the moments described above is based on the generating functional techniques in the DLA of perturbative QCD. The probability of radiating a gluon with momentum $`k`$ at an emission angle $`\mathrm{\Theta }_g`$ and azimuthal angle $`\mathrm{\Phi }_g`$ from an initial parton $`a`$ has been approximated by $$M(k)d^3k=c_a\gamma _0^2\frac{dk}{k}\frac{d\mathrm{\Theta }_g}{\mathrm{\Theta }_g}\frac{d\mathrm{\Phi }_g}{2\pi }$$ (2) $$\gamma _0^2=6\alpha _S/\pi $$ (3) with $`c_a`$ = 1 if $`a`$ is a gluon and $`c_a`$ = 4/9 if $`a`$ is a quark. Ref. derived their predictions explicitly for cumulant moments $`C^{(n)}`$, whereas and obtained similar expressions for the factorial moments $`F^{(n)}`$. It has been shown by Monte Carlo calculations that, at very high energy ($`\sqrt{s}\sim 1800`$ GeV), the values of $`F^{(n)}`$ and $`C^{(n)}`$ converge to each other. At LEP energies, however, the cumulants are still far away from the asymptotic predictions (see section 4). 
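These algebraic relations are easy to check numerically; the helper below is a minimal sketch using the relations just quoted (the second example takes the first row of Table 1a further below):

```python
def cumulants_from_moments(F2, F3):
    """Normalized cumulants from normalized factorial moments,
    using C2 = F2 - 1 and C3 = F3 - 3(F2 - 1) - 1 as quoted in the text."""
    C2 = F2 - 1.0
    C3 = F3 - 3.0 * (F2 - 1.0) - 1.0
    return C2, C3

# Poissonian case: all normalized factorial moments are 1, so cumulants vanish.
print(cumulants_from_moments(1.0, 1.0))        # (0.0, 0.0)

# First row of Table 1a (F2 = 1.035, F3 = 1.114) gives small positive cumulants.
C2, C3 = cumulants_from_moments(1.035, 1.114)
print(round(C2, 3), round(C3, 3))              # 0.035 0.009
```

The second example shows why cumulants are a much more delicate measurement than moments: nearly all of $`F^{(3)}`$ cancels in $`C^{(3)}`$.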
For the normalized cumulant moments $`C^{(n)}`$ and the factorial moments $`F^{(n)}`$, the following prediction has been made: $$C^{(n)}(\mathrm{\Theta },\vartheta )\text{ or }F^{(n)}(\mathrm{\Theta },\vartheta )\propto \left(\frac{\mathrm{\Theta }}{\vartheta }\right)^{\varphi _n}$$ (4) All 3 references give in the high energy limit and for large values of $`\vartheta \lesssim \mathrm{\Theta }`$ the same linear approximation for the exponents $`\varphi _n`$: $$\varphi _n\simeq (n-1)D-\left(n-\frac{1}{n}\right)\gamma _0$$ (5) where D is a dimensional factor, 1 for ring regions and 2 for cones. For fixed $`\alpha _s`$ (along the parton shower) eq. 5 is asymptotically valid for all angles. In this case the fractal (Renyi-) dimension $`D_n`$ can be obtained from $`\varphi _n`$ (eq. 5) via: $`D_n=D-{\displaystyle \frac{\varphi _n}{n-1}}`$ (6) $`D_n={\displaystyle \frac{n+1}{n}}\gamma _0`$ (7) When the running of $`\alpha _S`$ with $`\vartheta `$ in the parton cascade is taken into account, the following was obtained in : $$\varphi _n\simeq (n-1)D-2\gamma _0(n-\omega (ϵ,n))/ϵ$$ (8) $$\omega (ϵ,n)=n\sqrt{1-ϵ}\left(1-\frac{1}{2n^2}\mathrm{ln}(1-ϵ)\right)$$ (9) and $$ϵ=\frac{\mathrm{ln}(\mathrm{\Theta }/\vartheta )}{\mathrm{ln}(P\mathrm{\Theta }/\mathrm{\Lambda })}$$ (10) where $`P\simeq \sqrt{s}/2`$ is the momentum of the initial parton. The dependence on the QCD parameters $`\alpha _s`$ or $`\mathrm{\Lambda }`$ enters in the above equations via $`\gamma _0`$ and $`ϵ`$, which are determined by the scale $`Q\simeq P\mathrm{\Theta }`$. In the present study it is about 20 GeV for $`\sqrt{s}`$=91.1 GeV. The corresponding predictions of refs. (eq. 11) and (eq. 
12) are analytically different, but numerically similar: $$\varphi _n=(n-1)D-\frac{2\gamma _0}{ϵ}\frac{n^2-1}{n}\left(1-\sqrt{1-ϵ}\right)$$ (11) $$\varphi _n=(n-1)D-\frac{n^2-1}{n}\gamma _0\left(1+\frac{n^2+1}{4n^2}ϵ\right)$$ (12) It should be noted that all three theoretical papers cited above use the lowest order QCD relation (13) between the coupling $`\alpha _s`$ and the QCD scale $`\mathrm{\Lambda }`$, which is also used in the present analysis: $$\alpha _s=\frac{\pi \beta ^2}{6}\frac{1}{\mathrm{ln}(Q/\mathrm{\Lambda })}$$ (13) $$\beta ^2=12\left(\frac{11}{3}n_c-\frac{2}{3}n_f\right)^{-1}$$ (14) where $`n_c`$=3 (number of colours). These relations depend also on the number of flavours ($`n_f`$). Since eq. 13 emerges only from “one loop” calculations, the parameter $`\mathrm{\Lambda }`$ is not the universal $`\mathrm{\Lambda }_{\overline{MS}}`$, but only an effective parameter $`\mathrm{\Lambda }_{eff}`$. But also in this approximation $`\alpha _s`$ runs, having a scale dependence $`\propto 1/\mathrm{ln}(Q^2/\mathrm{\Lambda }^2)`$. The running of $`\alpha _s`$ during the process of jet cascading is implicitly taken into account in (8), (11) and (12) by the dependence of $`\varphi _n`$ on $`ϵ`$ (or $`\vartheta `$). In theory this causes a deviation from the power-law behaviour (eqs. 4 and 5) of $`F^{(n)}`$ when approaching smaller values of $`\vartheta `$ (larger $`ϵ`$). All theoretical predictions concern the partonic states. The corresponding experimental measurements, however, are of hadronic states. When comparing them, the hypothesis of LPHD has to be used. It may be noted that the factorial moments $`F^{(n)}`$ measured in the present study (see also eq. 15 below) are very similar to the well known and previously measured moments in rapidity space. Here the angle $`\vartheta `$ is used (translated by constant factors into $`ϵ`$), because this is the natural variable in the QCD calculations. 
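The closeness of eqs. 11 and 12, and their common small-$`ϵ`$ limit eq. 5, can be checked numerically; the sketch below assumes $`\gamma _0=0.523`$ (the value quoted in section 4.4 for $`\mathrm{\Lambda }`$=0.15 GeV and $`n_f`$=3):

```python
import math

def phi_linear(n, D, g0):
    # Eq. 5: fixed-coupling, high-energy linear approximation.
    return (n - 1) * D - (n - 1.0 / n) * g0

def phi_eq11(n, D, g0, eps):
    # Running-coupling form of eq. 11.
    return (n - 1) * D - (2 * g0 / eps) * ((n * n - 1) / n) * (1 - math.sqrt(1 - eps))

def phi_eq12(n, D, g0, eps):
    # Running-coupling form of eq. 12.
    return (n - 1) * D - ((n * n - 1) / n) * g0 * (1 + (n * n + 1) / (4 * n * n) * eps)

g0, D, eps = 0.523, 1, 0.2
for n in (2, 3, 4, 5):
    print(n, round(phi_eq11(n, D, g0, eps), 3), round(phi_eq12(n, D, g0, eps), 3))

# As eps -> 0 both running-coupling forms approach the linear approximation:
assert abs(phi_eq11(2, 1, g0, 1e-9) - phi_linear(2, 1, g0)) < 1e-6
```

Evaluating both forms at the same $`(n,ϵ)`$ shows agreement at the percent level, consistent with the statement that the two predictions are analytically different but numerically similar.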
## 3 Experimental data and comparison with the Monte Carlo calculations The normalized factorial moments (1) are determined experimentally by counting $`n_m`$, the number of charged particles in the respective windows of phase space, for each event: $$F^{(n)}(\mathrm{\Theta },\vartheta )=\frac{\langle n_m(n_m-1)\cdots (n_m-n+1)\rangle }{\langle n_m\rangle ^n}$$ (15) where the brackets $`\langle \rangle `$ denote averages over the whole event sample. The data sample used contains about 600000 $`e^+e^{-}`$ interactions (after cuts) collected by DELPHI at $`\sqrt{s}=91.1`$ GeV in 1994. A sample of about 1200 high energy events at $`\sqrt{s}`$=183 GeV incident energy collected in 1997 is used to investigate the energy dependence. The calculated hadron energy was required to be greater than 162 GeV (corresponding to a mean energy of 175 GeV). The standard cuts as in for hadronic events and track quality were applied by demanding a minimum charged multiplicity, enough visible charged energy and events well contained within the detector volume. In the present study all charged particles (except identified electrons and muons) with momentum larger than 0.1 GeV have been considered. The special procedures for selecting high energy events are described in . WW-events have been excluded. Detailed Monte Carlo studies were done using the JETSET 7.4 PS model . The corrections were determined using events from a JETSET Monte Carlo simulation which had been tuned ($`\mathrm{\Lambda }`$=0.346 GeV and $`Q_0`$=2.25 GeV) to reproduce general event characteristics , which included variables different from those referred to in section 2. 
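The counting estimator of eq. 15 can be written compactly; the sketch below uses synthetic Poisson counts standing in for real events, illustrating that a purely Poissonian multiplicity gives moments of 1:

```python
import numpy as np

def factorial_moment(counts, n):
    """Normalized factorial moment of eq. 15 from per-event counts n_m:
    <n_m (n_m - 1) ... (n_m - n + 1)> / <n_m>^n."""
    counts = np.asarray(counts, dtype=float)
    falling = np.ones_like(counts)
    for k in range(n):
        falling *= counts - k     # build the falling factorial event by event
    return falling.mean() / counts.mean() ** n

# Poisson-distributed counts give F^(n) = 1 up to statistical noise, so any
# excess above 1 signals genuine (dynamical) multiplicity fluctuations.
rng = np.random.default_rng(7)
poisson_counts = rng.poisson(5.0, 200_000)
print(round(factorial_moment(poisson_counts, 2), 3))   # close to 1
```

A sample with event-to-event fluctuations beyond Poisson statistics (for example, a mixture of low- and high-multiplicity events) would instead give $`F^{(2)}>1`$, which is what the measured tables below quantify bin by bin.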
These events were examined at * Generator level, where all charged final–state particles (except electrons and muons) with a mean lifetime longer than $`10^{-9}`$ seconds have been considered; * Detector level, which includes distortions due to particle decays and interactions with the detector material, other imperfections such as limited resolution, multi–track separation and detector acceptance, and the event selection procedures. Using these events, the factorial moments and cumulants of order $`n`$ introduced in section 2, generically denoted $`A_n`$ below, were corrected (for each $`ϵ`$ interval considered) by $$A_n^{\mathrm{cor}}=g_nA_n^{\mathrm{raw}},g_n=\frac{A_n^{\mathrm{gen}}}{A_n^{\mathrm{det}}}$$ (16) where the superscript “raw” indicates the quantities calculated directly from the data, and “gen” and “det” denote those obtained from the Monte Carlo events at generator and detector level respectively. The simulated data at detector level were found to agree satisfactorily with the experimental data. The measurement error on the relative angle $`\vartheta _{12}`$ between two outgoing particles was determined to be of order $`0.5^{\circ }`$ (if both tracks had good Vertex Detector hits, even as small as $`0.1^{\circ }`$). The jet axis is chosen to be the sphericity axis. To increase statistics in the case of the high energy sample, the moments (15) have been calculated in both sphericity hemispheres and averaged. In addition, all phenomena which were not included in the analytical calculations had to be corrected for, namely (i) initial state photon radiation, (ii) Dalitz decays of the $`\pi ^0`$, (iii) residual $`K_s^0`$ and $`\mathrm{\Lambda }^0`$ decays near the vertex, and (iv) the effect of Bose–Einstein correlations. The corrections were estimated, for each $`ϵ`$ interval, like $`g_n`$ in eq. 16, by switching the effects on and off. Each of these correction factors was found to be below 10% in the case of factorial moments. 
The largest corrections have been found in the case of cumulants of higher orders and amounted to 16–25%, depending on the analysis angle. The total correction factor including all effects is denoted by $`g_n^{\mathrm{tot}}`$ and is the product of the individual factors. Systematic errors have been calculated from $`g_n^{\mathrm{tot}}`$ according to $`\mathrm{\Delta }A_n^{\mathrm{corr}}=\pm |A_n^{\mathrm{raw}}(g_n^{\mathrm{tot}}-1)/2|`$. Due to uncertainties in measuring multiple tracks at very small separation angles, an additional systematic error was added at small $`\vartheta `$ values for $`F^{(4)}`$ and $`F^{(5)}`$. Fig. 1 shows a comparison at $`\sqrt{s}`$=91.1 GeV of the measured 1-dimensional cumulants and 1- and 2-dimensional factorial moments with JETSET 7.4 tuned as described above. The cumulants and factorial moments are normalized by $`C^{(n)}(0)`$ and $`F^{(n)}(0)`$ for easy comparison of the measured shapes with the analytical predictions. There is generally good agreement between the Monte Carlo simulation (open circles) and the corrected data (full circles). The study of the influence of resonance decays shown in Fig. 1 reveals significant effects. Numerical values of the measured and corrected 1- and 2-dimensional factorial moments are given in Tables 1 and 2, respectively, for convenience as a function of $`\vartheta `$/$`\mathrm{\Theta }`$ (the $`ϵ`$ dependence follows from eq. 10). 
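The bin-by-bin correction of eq. 16 and the systematic-error prescription just described can be sketched as follows; the numerical values are hypothetical, for illustration only:

```python
def correct_moment(A_raw, A_gen, A_det):
    """Eq. 16: bin-by-bin correction of a measured moment with the
    generator/detector-level Monte Carlo ratio g_n = A_gen / A_det."""
    g = A_gen / A_det
    return g * A_raw

def syst_error(A_raw, g_tot):
    """Systematic error assigned from the total correction factor g_tot."""
    return abs(A_raw * (g_tot - 1.0) / 2.0)

# Hypothetical numbers for one epsilon bin (illustration only):
A_raw, A_gen, A_det, g_tot = 1.40, 1.45, 1.42, 1.05
print(round(correct_moment(A_raw, A_gen, A_det), 3))   # corrected moment
print(round(syst_error(A_raw, g_tot), 3))              # 0.035
```

Note that by this prescription a 10% total correction translates into a 5% systematic error on the raw moment, which matches the size of the syst columns relative to the corrections quoted above.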
Table 1a : 1-dimensional factorial moments for orders 2 and 3 together with their statistical and systematic errors as a function of $`\vartheta /\mathrm{\Theta }`$ ($`\mathrm{\Theta }=25^{\circ }`$)

| $`\vartheta /\mathrm{\Theta }`$ | $`F_2`$ | $`\pm `$ stat | $`\pm `$ syst | $`F_3`$ | $`\pm `$ stat | $`\pm `$ syst |
| --- | --- | --- | --- | --- | --- | --- |
| 1.0000 | 1.035 | 0.002 | 0.009 | 1.114 | 0.003 | 0.030 |
| 0.9180 | 1.063 | 0.002 | 0.009 | 1.196 | 0.003 | 0.031 |
| 0.8426 | 1.101 | 0.002 | 0.010 | 1.315 | 0.004 | 0.034 |
| 0.7735 | 1.139 | 0.002 | 0.010 | 1.446 | 0.005 | 0.037 |
| 0.7100 | 1.176 | 0.003 | 0.010 | 1.577 | 0.006 | 0.040 |
| 0.6518 | 1.210 | 0.003 | 0.011 | 1.707 | 0.007 | 0.043 |
| 0.5983 | 1.241 | 0.003 | 0.011 | 1.831 | 0.008 | 0.045 |
| 0.5492 | 1.269 | 0.003 | 0.012 | 1.949 | 0.009 | 0.047 |
| 0.5041 | 1.293 | 0.004 | 0.012 | 2.055 | 0.010 | 0.048 |
| 0.4628 | 1.316 | 0.004 | 0.013 | 2.156 | 0.012 | 0.050 |
| 0.4248 | 1.335 | 0.004 | 0.013 | 2.246 | 0.013 | 0.051 |
| 0.3899 | 1.353 | 0.004 | 0.014 | 2.330 | 0.014 | 0.054 |
| 0.3579 | 1.368 | 0.005 | 0.015 | 2.406 | 0.015 | 0.057 |
| 0.3286 | 1.382 | 0.005 | 0.015 | 2.474 | 0.017 | 0.060 |
| 0.3016 | 1.393 | 0.005 | 0.016 | 2.530 | 0.018 | 0.061 |
| 0.2769 | 1.402 | 0.005 | 0.016 | 2.576 | 0.019 | 0.064 |
| 0.2541 | 1.409 | 0.006 | 0.017 | 2.612 | 0.021 | 0.067 |
| 0.2333 | 1.416 | 0.006 | 0.018 | 2.646 | 0.022 | 0.071 |
| 0.2141 | 1.422 | 0.006 | 0.018 | 2.672 | 0.024 | 0.073 |
| 0.1966 | 1.428 | 0.006 | 0.019 | 2.699 | 0.025 | 0.079 |
| 0.1804 | 1.432 | 0.007 | 0.020 | 2.720 | 0.027 | 0.086 |
| 0.1656 | 1.435 | 0.007 | 0.020 | 2.740 | 0.029 | 0.092 |
| 0.1520 | 1.437 | 0.007 | 0.020 | 2.751 | 0.031 | 0.094 |
| 0.1396 | 1.441 | 0.008 | 0.021 | 2.771 | 0.033 | 0.097 |
| 0.1281 | 1.444 | 0.008 | 0.022 | 2.785 | 0.035 | 0.104 |
| 0.1176 | 1.448 | 0.008 | 0.022 | 2.797 | 0.037 | 0.109 |
| 0.1080 | 1.451 | 0.009 | 0.022 | 2.808 | 0.040 | 0.108 |
| 0.0991 | 1.454 | 0.009 | 0.022 | 2.812 | 0.043 | 0.106 |

Table 1b : 1-dimensional factorial moments for orders 4 and 5 together with their statistical and systematic errors as a function of $`\vartheta /\mathrm{\Theta }`$ ($`\mathrm{\Theta }=25^{\circ }`$)

| $`\vartheta /\mathrm{\Theta }`$ | $`F_4`$ | $`\pm `$ stat | $`\pm `$ syst | $`F_5`$ | $`\pm `$ stat | $`\pm `$ syst |
| --- | --- | --- | --- | --- | --- | --- |
| 1.0000 | 1.251 | 0.005 | 0.068 | 1.465 | 0.010 | 0.135 |
| 0.9180 | 1.417 | 0.006 | 0.076 | 1.757 | 0.013 | 0.160 |
| 0.8426 | 1.681 | 0.008 | 0.089 | 2.270 | 0.019 | 0.203 |
| 0.7735 | 1.994 | 0.011 | 0.103 | 2.931 | 0.028 | 0.253 |
| 0.7100 | 2.332 | 0.015 | 0.116 | 3.697 | 0.039 | 0.307 |
| 0.6518 | 2.689 | 0.018 | 0.130 | 4.568 | 0.052 | 0.362 |
| 0.5983 | 3.047 | 0.022 | 0.141 | 5.489 | 0.067 | 0.412 |
| 0.5492 | 3.406 | 0.027 | 0.152 | 6.467 | 0.086 | 0.461 |
| 0.5041 | 3.745 | 0.032 | 0.157 | 7.440 | 0.106 | 0.491 |
| 0.4628 | 4.085 | 0.038 | 0.165 | 8.467 | 0.130 | 0.528 |
| 0.4248 | 4.395 | 0.043 | 0.173 | 9.436 | 0.158 | 0.573 |
| 0.3899 | 4.694 | 0.050 | 0.186 | 10.403 | 0.190 | 0.634 |
| 0.3579 | 4.974 | 0.056 | 0.200 | 11.337 | 0.220 | 0.837 |
| 0.3286 | 5.227 | 0.063 | 0.211 | 12.196 | 0.257 | 1.200 |
| 0.3016 | 5.438 | 0.071 | 0.262 | 12.931 | 0.304 | 1.501 |
| 0.2769 | 5.605 | 0.079 | 0.260 | 13.464 | 0.347 | 1.320 |
| 0.2541 | 5.739 | 0.087 | 0.261 | 13.924 | 0.398 | 1.508 |
| 0.2333 | 5.855 | 0.096 | 0.261 | 14.286 | 0.456 | 1.508 |
| 0.2141 | 5.939 | 0.105 | 0.263 | 14.520 | 0.508 | 1.503 |
| 0.1966 | 6.021 | 0.112 | 0.299 | 14.714 | 0.536 | 1.521 |
| 0.1804 | 6.113 | 0.123 | 0.347 | 15.125 | 0.611 | 1.543 |
| 0.1656 | 6.194 | 0.134 | 0.395 | 15.447 | 0.679 | 1.671 |
| 0.1520 | 6.228 | 0.147 | 0.402 | 15.381 | 0.748 | 1.633 |
| 0.1396 | 6.292 | 0.160 | 0.424 | 15.493 | 0.819 | 1.742 |
| 0.1281 | 6.335 | 0.176 | 0.467 | 15.610 | 0.942 | 1.959 |
| 0.1176 | 6.351 | 0.189 | 0.510 | 15.544 | 1.005 | 2.333 |
| 0.1080 | 6.361 | 0.207 | 0.512 | 15.485 | 1.091 | 2.438 |
| 0.0991 | 6.289 | 0.219 | 0.502 | 14.637 | 1.122 | 2.279 |

Table 2a : 2-dimensional factorial moments for orders 2 and 3 together with their statistical and systematic errors as a function of $`\vartheta /\mathrm{\Theta }`$ ($`\mathrm{\Theta }=25^{\circ }`$)

| $`\vartheta /\mathrm{\Theta }`$ | $`F_2`$ | $`\pm `$ stat | $`\pm `$ syst | $`F_3`$ | $`\pm `$ stat | $`\pm `$ syst |
| --- | --- | --- | --- | --- | --- | --- |
| 1.000 | 1.046 | 0.002 | 0.036 | 1.155 | 0.004 | 0.111 |
| 0.918 | 1.143 | 0.002 | 0.041 | 1.476 | 0.006 | 0.145 |
| 0.843 | 1.240 | 0.003 | 0.047 | 1.858 | 0.010 | 0.193 |
| 0.774 | 1.337 | 0.004 | 0.056 | 2.296 | 0.015 | 0.258 |
| 0.710 | 1.428 | 0.005 | 0.067 | 2.769 | 0.022 | 0.338 |
| 0.652 | 1.518 | 0.006 | 0.080 | 3.298 | 0.031 | 0.435 |
| 0.598 | 1.602 | 0.007 | 0.095 | 3.863 | 0.043 | 0.544 |
| 0.549 | 1.678 | 0.009 | 0.112 | 4.440 | 0.058 | 0.660 |
| 0.504 | 1.749 | 0.010 | 0.129 | 5.085 | 0.077 | 0.784 |
| 0.463 | 1.822 | 0.012 | 0.148 | 5.821 | 0.103 | 0.914 |
| 0.425 | 1.888 | 0.014 | 0.167 | 6.506 | 0.132 | 1.021 |
| 0.390 | 1.956 | 0.017 | 0.186 | 7.312 | 0.171 | 1.123 |
| 0.358 | 2.011 | 0.019 | 0.204 | 8.067 | 0.222 | 1.186 |
| 0.329 | 2.069 | 0.022 | 0.221 | 9.007 | 0.298 | 1.240 |
| 0.302 | 2.121 | 0.026 | 0.236 | 9.994 | 0.398 | 1.258 |
| 0.277 | 2.188 | 0.030 | 0.251 | 10.985 | 0.503 | 1.233 |

Table 2b : 2-dimensional factorial moments for orders 4 and 5 together with their statistical and systematic errors as a function of $`\vartheta /\mathrm{\Theta }`$ ($`\mathrm{\Theta }=25^{\circ }`$)

| $`\vartheta /\mathrm{\Theta }`$ | $`F_4`$ | $`\pm `$ stat | $`\pm `$ syst | $`F_5`$ | $`\pm `$ stat | $`\pm `$ syst |
| --- | --- | --- | --- | --- | --- | --- |
| 1.000 | 1.356 | 0.008 | 0.250 | 1.703 | 0.021 | 0.508 |
| 0.918 | 2.126 | 0.018 | 0.397 | 3.369 | 0.056 | 1.009 |
| 0.843 | 3.234 | 0.035 | 0.628 | 6.343 | 0.138 | 1.913 |
| 0.774 | 4.738 | 0.063 | 0.963 | 11.133 | 0.293 | 3.368 |
| 0.710 | 6.614 | 0.107 | 1.405 | 18.072 | 0.575 | 5.428 |
| 0.652 | 9.046 | 0.182 | 1.988 | 28.813 | 1.187 | 8.458 |
| 0.598 | 12.110 | 0.297 | 2.706 | 44.913 | 2.309 | 12.622 |
| 0.549 | 15.604 | 0.467 | 3.469 | 65.771 | 4.404 | 17.235 |
| 0.504 | 20.094 | 0.693 | 4.326 | 94.370 | 6.777 | 22.310 |
| 0.463 | 25.785 | 1.050 | 5.206 | 134.110 | 11.377 | 27.410 |
| 0.425 | 31.593 | 1.505 | 5.758 | 179.450 | 17.935 | 29.940 |
| 0.390 | 38.817 | 2.178 | 6.087 | 234.930 | 27.941 | 29.300 |
| 0.358 | 46.590 | 3.379 | 5.895 | 297.580 | 53.980 | 23.620 |
| 0.329 | 58.358 | 5.432 | 5.405 | 419.930 | 104.740 | 13.500 |
| 0.302 | 72.636 | 8.664 | 4.098 | 599.320 | 181.600 | 16.410 |
| 0.277 | 85.249 | 11.582 | 1.647 | 743.900 | 235.130 | 46.850 |

## 4 Comparison with the analytical calculations ### 4.1 Quantitative comparison at $`\sqrt{s}`$ = 91.1 GeV Fig. 2 shows the cumulants of orders $`n`$ = 2 and $`n`$ = 3 in one-dimensional rings around jet cones normalized by $`C^{(n)}(0)`$ and compared with the predictions of ref. . * The agreement with the data is very poor: the predictions lie well below the data and differ in shape (Fig. 2a). Using a lower value of $`\mathrm{\Lambda }`$ (i.e. 
$`\mathrm{\Lambda }=0.04`$ GeV instead of 0.15 GeV) does not help, as can be seen in Fig. 2b (neither does a smaller value of $`n_f`$, not shown here). Fig. 3 shows the factorial moments of orders 2, 3, 4 and 5 normalized by $`F^{(n)}(0)`$, together with the predictions of refs. , in one- and two-dimensional angular intervals (i.e. rings and side cones) for various numerical values of $`\mathrm{\Lambda }`$ and $`n_f`$. * The correlations in one-dimensional rings around jets, expressed by factorial moments, are not described well by the theoretical predictions using the QCD parameters $`\mathrm{\Lambda }=0.15`$ GeV and $`n_f=5`$ (Fig. 3a). The predictions lie below the data for not too large $`ϵ`$, differing also in shape. * Choosing $`n_f=3`$ (Fig. 3b) instead of $`n_f=5`$ as in Fig. 3a reduces the discrepancies. * Choosing in addition the smaller value of $`\mathrm{\Lambda }=0.04`$ GeV (Fig. 3c), $`F^{(2)}`$ is well predicted for smaller values of $`ϵ`$, while the higher orders ($`n>2`$) still deviate considerably. * The factorial moments in 1 and 2 dimensions show different behaviour for the lower order moments $`n<4`$ : choosing the same set of parameters ($`\mathrm{\Lambda }=0.15`$ GeV, $`n_f=3`$), $`F^{(2)}`$ and $`F^{(3)}`$ lie above the predictions in the 1-dimensional case (Fig. 3b), but below them in the 2-dimensional case (Fig. 3d). * The higher moments $`F^{(4)}`$ and $`F^{(5)}`$ have similar features in the 1- and 2-dimensional cases (Figs. 3b, 3d). * In Fig. 3 the slopes at small $`ϵ`$ are generally steeper than predicted (with the exception of $`F^{(2)}`$ and $`F^{(3)}`$ in Fig. 3d) and the “bending” begins at smaller values of $`ϵ`$. * It is not possible to find one set of QCD parameters $`\mathrm{\Lambda }`$ and $`n_f`$ which simultaneously minimizes the discrepancies between data and predictions for moments of all orders 2, 3, 4 and 5 in both the 1- and 2-dimensional cases. 
### 4.2 Energy dependence Fig. 4 shows a comparison with high energy data at $`\sqrt{s}`$=183 GeV (with a mean energy of $`\sqrt{s}`$=175 GeV) and the corresponding predictions according to eq. 8, where the energy dependence enters via the parameter $`\gamma _0`$. It can be seen that for small values of $`ϵ`$ there is no improvement in the agreement at high energy. For larger values of $`ϵ`$ the statistical errors of the high energy data are substantial. The relative increase of the predicted moments agrees qualitatively with that of the JETSET model which, as shown in Fig. 1, agrees very well with the measurement at $`\sqrt{s}`$=91.1 GeV. Similar conclusions can be drawn from the predictions based on eqs. 11 and 12. ### 4.3 Qualitative features In the introduction, arguments have been given that the DLA might not be accurate enough for a quantitative description of experiments. Some disagreement with the measurement could be expected considering the asymptotic nature of the calculations, but nevertheless an overall qualitative description of the data should be provided. Indeed the data (see Figs. 3, 4) show some general qualitative features that are predicted well by the analytical calculations: * The factorial moments rise linearly at small $`ϵ`$, exhibiting a fractal structure as predicted in eqs. 4 and 5 for the parton cascade, and saturate at higher values. * The factorial moments increase from $`\sqrt{s}`$=91.1 GeV to $`\sqrt{s}`$=175 GeV. * The 2-dimensional moments rise much more steeply than the 1-dimensional moments (Figs. 3b, 3d). * The values of $`\varphi _n`$ obtained by fitting eq. 4 to the data in the region of small $`ϵ`$ ($`ϵ<0.1`$) follow the predictions of eq. 5 qualitatively, as can be seen in Table 3. * In Fig. 3 it is shown that the analytically calculated factorial moments depend sensitively on $`\mathrm{\Lambda }`$. 
It should be noted that a similar dependence (although weaker because of the $`\mathrm{\Lambda }`$-independent fragmentation) is observed in JETSET when varying $`\mathrm{\Lambda }`$ and keeping all other parameters constant. ### 4.4 Discussion of the QCD parameter $`\gamma _0`$ The first term in the perturbative formula eq. 5 involves the phase space volume, the second one depends explicitly on the parameter $`\gamma _0`$ (eq. 3), i.e. the QCD coupling $`\alpha _s`$. Fig. 5a summarizes the behaviour at small $`ϵ`$, where the numerical values of $`\gamma _0^{eff}`$ derived from the measured slopes $`\varphi _n`$ are given for the orders $`n=2,3,4,5`$. From the present theoretical understanding, $`\gamma _0`$ is expected to be independent of $`n`$. For example, for $`\mathrm{\Lambda }`$=0.15 GeV and $`n_f`$=3 ($`\mathrm{\Theta }=25^{\circ }`$, $`Q\simeq P\mathrm{\Theta }`$) eqs. 13 and 14 give the numerical value $`\alpha _s`$=0.143 and hence from eq. 3 the value $`\gamma _0`$=0.523. This is indicated as a horizontal line in Fig. 5, where also the lines for $`\mathrm{\Lambda }`$=0.01 GeV and $`\mathrm{\Lambda }`$=0.8 GeV are given for comparison. The average measured values of $`\gamma _0^{eff}`$ are of the same order as the expectation. The $`n`$-dependence observed, however, is not described by the calculations. The measured values of $`\gamma _0^{eff}`$ agree, however, extremely well with the corresponding values obtained from JETSET, as can be seen in Fig. 5a. 
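The numerical values quoted above, and the predicted slopes listed in Table 3 below, can be reproduced from eqs. 13, 14, 3 and 5; the sketch assumes $`Q=P\mathrm{\Theta }`$ with $`P=\sqrt{s}/2`$ and $`\mathrm{\Theta }=25^{\circ }`$:

```python
import math

def alpha_s(Q, Lam, n_f, n_c=3):
    # One-loop coupling of eqs. 13-14.
    beta2 = 12.0 / ((11.0 / 3.0) * n_c - (2.0 / 3.0) * n_f)
    return math.pi * beta2 / 6.0 / math.log(Q / Lam)

def gamma0(Q, Lam, n_f):
    # Eq. 3: gamma0^2 = 6 alpha_s / pi.
    return math.sqrt(6.0 * alpha_s(Q, Lam, n_f) / math.pi)

Q = 0.5 * 91.1 * math.radians(25.0)   # Q = P*Theta with P = sqrt(s)/2, about 20 GeV

# Quoted values for Lambda = 0.15 GeV, n_f = 3: alpha_s ~ 0.143, gamma0 ~ 0.523
# (the text rounds alpha_s first, hence the last digit of gamma0 can differ).
print(round(alpha_s(Q, 0.15, 3), 3), round(gamma0(Q, 0.15, 3), 3))

# Predicted slopes of eq. 5, phi_n = (n-1)D - (n - 1/n)*gamma0, reproduce the
# 1-dimensional Table 3 row for Lambda = 0.15 GeV, n_f = 5:
g0 = gamma0(Q, 0.15, 5)
print([round((n - 1) * 1 - (n - 1.0 / n) * g0, 2) for n in (2, 3, 4, 5)])
# -> [0.15, 0.49, 0.88, 1.28], matching the table
```

This confirms that the predicted entries of Table 3 follow from the linear approximation eq. 5 with the one-loop coupling evaluated at the scale quoted in the text.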
Table 3: Comparison of measured and predicted slopes $`\varphi _n`$; the errors were obtained by adding statistical and systematic errors quadratically | 1-dimensional case | n=2 | n=3 | n=4 | n=5 | | --- | --- | --- | --- | --- | | data | 0.38 $`\pm `$ 0.006 | 1.04 $`\pm `$ 0.02 | 1.87 $`\pm `$ 0.02 | 2.78 $`\pm `$ 0.03 | | $`\mathrm{\Lambda }=0.15`$ GeV, $`n_f=5`$ | 0.15 | 0.49 | 0.88 | 1.28 | | $`\mathrm{\Lambda }=0.04`$ GeV, $`n_f=5`$ | 0.25 | 0.66 | 1.12 | 1.59 | | $`\mathrm{\Lambda }=0.15`$ GeV, $`n_f=3`$ | 0.22 | 0.61 | 1.04 | 1.49 | | $`\mathrm{\Lambda }=0.04`$ GeV, $`n_f=3`$ | 0.30 | 0.76 | 1.26 | 1.77 | | $`\mathrm{\Lambda }=0.005`$ GeV, $`n_f=3`$ | 0.40 | 0.93 | 1.50 | 2.07 | | 2-dimensional case | n=2 | n=3 | n=4 | n=5 | | data | 0.93 $`\pm `$ 0.02 | 2.62 $`\pm `$ 0.04 | 4.77 $`\pm `$ 0.05 | 7.15 $`\pm `$ 0.06 | | $`\mathrm{\Lambda }=0.15`$ GeV, $`n_f=5`$ | 1.15 | 2.49 | 3.88 | 5.28 | | $`\mathrm{\Lambda }=0.04`$ GeV, $`n_f=5`$ | 1.25 | 2.66 | 4.12 | 5.59 | | $`\mathrm{\Lambda }=0.15`$ GeV, $`n_f=3`$ | 1.22 | 2.61 | 4.04 | 5.49 | | $`\mathrm{\Lambda }=0.04`$ GeV, $`n_f=3`$ | 1.30 | 2.76 | 4.26 | 5.77 | | $`\mathrm{\Lambda }=0.005`$ GeV, $`n_f=3`$ | 1.40 | 2.93 | 4.50 | 6.07 | ### 4.5 Attempts for improvement One of the shortcomings of the present calculations is the lack of energy-momentum conservation. Two attempts at improvement exist. Firstly, in ref. , Modified Leading Log Approximation (MLLA) corrections have been calculated for the intermittency exponents $`\varphi _n`$. An order-dependent correction to $`\gamma _0`$ has been proposed, amounting to only a few percent for all orders $`n`$ = 2 to 5. The deviations observed in Fig. 5 are much larger. In a second attempt, Meunier and Peschanski introduced energy conservation terms explicitly. 
This leads, however, to even smaller predicted slopes $`\varphi _n`$ and consequently larger values of $`\gamma _0`$, increasing the discrepancies shown in Table 3 and Fig. 5. No angular recoil effects were included in these calculations. Recently Meunier proposed to use, instead of the evolution variable $`ϵ=\mathrm{ln}(\mathrm{\Theta }/\vartheta )/\mathrm{ln}(P\mathrm{\Theta }/\mathrm{\Lambda })`$, the variable $`ϵ=\mathrm{ln}((\overline{n}_0/\overline{n})^{1/D})/\mathrm{ln}(P\mathrm{\Theta }/\mathrm{\Lambda })`$, where $`\overline{n}_0`$ and $`\overline{n}`$ are the mean multiplicities in the first $`ϵ`$ bin ($`\vartheta =\mathrm{\Theta }`$) and in the $`ϵ(\vartheta )`$ bins respectively. Using this new variable, the discrepancies of the 1-dimensional factorial moments observed so far are reduced by almost a factor 2 – see Fig. 5b – and the $`n`$-dependence is less strong. The discrepancy between the 1- and 2-dimensional moments, however, is increased (Fig. 5b). Whether the evolution variable $`\frac{\overline{n}_0}{\overline{n}}`$ is more suitable than the angular evolution variable $`\mathrm{\Theta }/\vartheta `$ (an improvement indicated only in the 1-dimensional case) must remain open. Another question concerns the range of validity of the LPHD hypothesis, which can be studied only by using Monte Carlo simulations at both the partonic and hadronic levels. Different Monte Carlo models or different choices of the cut-off parameter $`Q_0`$ at which the parton cascade is “terminated”, even in a moderate interval (0.3–0.6 GeV), lead to different answers . In the strict sense LPHD demands a low cut-off scale ($`Q_0\approx `$ 0.2–0.3 GeV) . In a JETSET study of the partonic state with $`\mathrm{\Lambda }`$=0.15 GeV and $`Q_0`$=0.33 GeV, a steeper rise of the moments than that of the hadron state is observed at small $`ϵ`$, thus even increasing the discrepancy with the analytical predictions. 
These studies and the results of indicate that even a possible violation of LPHD might not be the reason for the observed discrepancies. Fig. 1 also shows that shape distortions due to resonance decays, although significant, are much smaller than the discrepancies between data and theoretical predictions. Similarly, a slightly steeper rise of the moments is also observed in Monte Carlo studies when replacing the sphericity axis by the “true” $`q\overline{q}`$ axis and excluding initial heavy flavour production. These effects, however, are smaller than that caused by inhibiting resonance decays (see Fig. 1c,d). This discussion suggests that the analytical calculations need to be improved beyond the above attempts. Only after improving the perturbative calculations does one have a better handle to estimate how far nonperturbative effects are spoiling the agreement with the data. The importance of including angular recoil effects into the parton cascade, as is also stressed in , is intuitively evident when analysing angular dependent functions. ## 5 Summary and outlook Experimental data on multiplicity fluctuations in one- and two-dimensional angular intervals in $`e^+e^{-}`$ annihilations into hadrons at $`\sqrt{s}=91.1`$ GeV and $`\sqrt{s}\approx 175`$ GeV collected by the DELPHI detector have been compared with first order analytical calculations of the DLA and MLLA of perturbative QCD. Some general features of the calculations are confirmed by the data: the factorial moments rise approximately linearly for large angles (as expected from the multifractal nature of the parton shower) and level off at smaller angles; the dimensional-, order- and energy dependences are met qualitatively. 
At the quantitative level, however, large deviations are observed: the cumulants are far off the predictions; the factorial moments level off at substantially smaller radii; even by reducing the QCD parameters $`\mathrm{\Lambda }`$ and/or $`n_f`$, the analytical calculations are not able to describe simultaneously the factorial moments at all orders $`n=2,3,4,5`$ and at different dimensionalities (1 and 2 dimensions). Thus an evaluation of QCD parameters from the data is not possible at present. From Monte Carlo studies there are indications that possible violations of LPHD are not responsible for these discrepancies. Therefore these shortcomings are probably mainly due to the high energy approximation inherent in the DLA (which is most responsible for the extreme failure of the calculations using cumulants). Available MLLA calculations cannot substantially improve on the DLA. To match the data at presently available energies, improvements such as the inclusion of full energy-momentum conservation are needed. Similar conclusions have been obtained by a parallel one-dimensional study . More checks on refined predictions are desirable in the future. Acknowledgements We thank W. Kittel, P. Lipa, J.-L. Meunier, W. Ochs, R. Peschanski and J. Wosiek for valuable discussions and stimulation. We are greatly indebted to our technical collaborators and to the funding agencies for their support in building and operating the DELPHI detector, and to the members of the CERN-SL Division for the excellent performance of the LEP collider.
no-problem/9905/astro-ph9905140.html
# ISOCAM 15 𝜇m Search for Distant Infrared Galaxies Lensed by Clusters ## 1 Introduction Clusters of galaxies can serve as windows on the distant universe by bringing faint objects above detection thresholds via gravitational lensing. Many arcs representing magnified ordinary galaxies at moderate to high redshifts are seen in rich galaxy clusters, and these have been well-studied over the past several years. Indeed some of the most distant known galaxies have been discovered in this way at optical wavelengths (see Franx et al 1997 for a lensed galaxy at $`z=4.92`$). Clusters are known to be equally useful as cosmic magnifiers in other wavebands as well. In the local universe the dominant populations of luminous objects are infrared galaxies and quasars. The ultraluminous infrared galaxies (ULIRGs), with bolometric luminosities $`10^{12}L_{\mathrm{}}`$, appear to be powered by intense starbursts and obscured quasars (e.g., Sanders and Mirabel 1996). It remains an open question how many such objects exist among galaxy populations at high redshift, since they are obscured in the optical region and generate most of their luminosity in the mid- to far-infrared where sensitivities of space-borne telescopes such as IRAS have been marginal for their detection above redshifts of a few tenths. Indeed only one such high-$`z`$ galaxy is known in the IRAS database, the infrared galaxy/hidden quasar F10214+4724, at $`z=2.3`$ (Rowan-Robinson et al 1991). Two high-redshift IRAS quasars have also been identified: the Cloverleaf at $`z=2.6`$, and APM 08279+5255 at $`z=3.9`$, both broad absorption line quasars (Barvainis et al 1995; Irwin et al 1998). Boosting by gravitational lensing was required for the detection of F10214+4724 (Broadhurst and Lehár 1995), the Cloverleaf is a quad lens, and APM 08279+5255 appears to be a close optical double and very likely lensed as well. 
New lensed and unlensed infrared galaxies are currently being found via their submillimeter emission by SCUBA on the JCMT (Smail, Ivison, & Blain 1997; Ivison et al 1998; Hughes et al 1998; Barger et al 1998). ESA’s Infrared Space Observatory (ISO) offered a new opportunity to mount systematic searches for other high-$`z`$ infrared-dominated objects. The approach we adopted with ISO, which we report here, was to use the mid-infrared camera ISOCAM to image at 15 $`\mu `$m two very rich lensing clusters of galaxies, Abell 2218 and Abell 2219. The strategy was to enhance detectability of distant infrared galaxies over random fields by taking advantage of the cluster lensing boost. Other groups have carried out similar programs: Altieri et al (1998a) for Abell 2218; Altieri et al (1998b) and Lémonon et al (1998) for Abell 2390; and Metcalfe et al (1998) for Abell 370. The two target clusters were chosen for their richness and for previous optical indications of lensing, along with purely observational considerations such as visibility to ISO and high ecliptic latitude. Both are at moderate redshift: $`z=0.176`$ for Abell 2218, and $`z=0.225`$ for Abell 2219. In this experiment several probable cluster galaxies were detected at 15 $`\mu `$m in the two clusters. Three background galaxies, dominated by their mid-infrared emission, were also detected. Two are previously known lensed galaxies behind Abell 2218 at redshifts $`z=0.474`$ and $`z=1.034`$, and the third is a faint red object in the field of Abell 2219 with a probable redshift of $`z=1.048`$. ## 2 Observations and Data Analysis Each cluster was observed for a total of 1.2 hours using the LW3 filter of ISOCAM covering the range 12–18 $`\mu `$m. The 32$`\times 32`$ detector array was configured with a plate scale of $`6^{\prime \prime }`$ per pixel. 
Data were taken in “micro-scan” mode, using 3$`\times `$3 rasters with $`14^{\prime \prime }=2.33`$ pixel steps to maximize the area covered and the different pixel sampling of the same sky area. Three separate such rasters, with slightly shifted centers, were taken of each cluster, to achieve yet more cross-sampling between camera pixels and sky pixels. The diffraction-limited beamsize was $`6.3^{\prime \prime }`$ FWHM, but with a $`6^{\prime \prime }`$ pixel size and multiple sampling, we estimate that the point source width should be 8–9$`^{\prime \prime }`$. There are no strong sources in the field with which to measure an accurate point spread function, but the program sources that were detected with reasonable SNR yield a FWHM of $`9^{\prime \prime }`$ when fitted with a circular Gaussian approximation to the PSF, or about the expected value. For Abell 2218 the first raster was rendered unusable by detector instabilities. Only the second and third rasters were summed, for a total integration time of 0.8 hr. For Abell 2219 all three rasters were acceptable. Images were dark-subtracted, flat-fielded, deglitched, corrected for transient response, and combined using standard CIA routines at IPAC in Pasadena. Source fluxes were derived by fitting with a circular two-dimensional Gaussian function and zero level using AIPS, with fixed FWHM of $`9^{\prime \prime }`$ (see above). The RMS noise levels, obtained by fitting with the same Gaussian at random points in the fields, are $`200`$ $`\mu `$Jy for Abell 2218 and $`110`$ $`\mu `$Jy for Abell 2219. These noise levels are well above the theoretical expectation, for reasons that are not understood by us at present; the cause is not a scaling error, since our source fluxes are consistent with those of Altieri et al (1998a) (see §3.1). 
Absolute coordinates on the original ISO rasters have an uncertainty of $`6^{\prime \prime }`$, and small shifts were required for both ISO images to align the ISO sources with galaxies in optical frames. No rotations were required. After the shifts all of the ISO sources had clear optical identifications. One of the detected sources in the Abell 2219 image (designated A2219#5, see below) is optically faint and very red, making it a candidate for the sort of distant infrared-dominated galaxy we were searching for. A spectrum obtained for us by T. Broadhurst and B. Frye using the Keck I telescope shows an emission line at $`\lambda `$7634Å, to which we assign a probable identification of \[OII\] $`\lambda \lambda 3726,3729`$Å at $`z=1.048`$. This identification is supported by the continuum shape and the lack of other strong lines in the 6300–9500Å passband observed (see, e.g., starburst galaxy templates of Kinney et al 1996). A detection of H$`\alpha `$ at 1.34 $`\mu `$m would confirm the redshift. A possible alternative identification would be H$`\alpha `$ at $`z=0.163`$, but we feel this is less likely than \[OII\] at $`z=1.048`$ for the reasons given. ## 3 Results and Discussion The 15 $`\mu `$m images are shown in Figures 1 and 2. The coordinates shown have been adjusted after comparison between the sources in the ISO image and galaxies in I-band optical images, and should be accurate to 1–2$`^{\prime \prime }`$. In Abell 2218 there are four sources with detections in which we can be reasonably confident, and in Abell 2219 we identify five sources. Only detections that were evident on more than one frame are considered bona fide sources here. All of the accepted sources are closely coincident with visible-light sources. Contour overlays of the 15 $`\mu `$m images on $`I`$-band images from the Palomar 5m telescope are shown in Figures 3 and 4 (5m images courtesy I. Smail). 
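The competing line identifications quoted in §2 amount to simple arithmetic, $`z=\lambda _{obs}/\lambda _{rest}1`$ for each candidate rest wavelength. A quick check (the rest wavelengths are standard values; only the observed 7634Å comes from the text):

```python
# Candidate identifications for the emission line observed at 7634 A.
# Rest wavelengths: [OII] doublet mean ~3727.5 A, H-alpha 6562.8 A.
lam_obs = 7634.0
for name, lam_rest in [("[OII] 3727", 3727.5), ("H-alpha", 6562.8)]:
    z = lam_obs / lam_rest - 1.0
    print(f"{name}: z = {z:.3f}")
# [OII] gives z = 1.048 and H-alpha gives z = 0.163, i.e. exactly the two
# possibilities discussed in the text; at z = 1.048, H-alpha itself would
# appear near 6563 A * 2.048 ~ 1.34 um, as noted above.
```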
An overlay of the 15 $`\mu `$m image of Abell 2218 on an HST $`R`$-band image is shown in Figure 5. Object identifications, with 15 $`\mu `$m and $`I`$-band fluxes, are given in Table 1. All sources are consistent with being point-like in the ISO images. ### 3.1 Detected galaxies: General discussion Of the nine objects detected at 15 $`\mu `$m, three have redshifts available in the literature. These are A2218#395, #317, and #289 (using the galaxy numbering system of Le Borgne, Pello, & Sanahuja 1992; Abell 2219 has no previous numbering system and the ISO sources are numbered here according to RA order). The latter two (see §3.2) are background galaxies at $`z=0.474`$ and $`z=1.034`$ respectively (Ebbels et al 1997), with #289 clearly being distorted by the cluster potential. Abell 2218 has a mean redshift of 0.176 (Le Borgne et al 1992). A2218#395, at $`z=0.103`$, is a foreground galaxy and not part of the cluster. As discussed above, we assign a tentative redshift of $`z=1.048`$ for A2219#5. The other detected galaxies in Abell 2218 and Abell 2219 are of unknown redshift and precise Hubble type, although the majority appear to be spirals or irregulars. None of those galaxies look particularly distorted as if by lensing, and all except A2219#5 have optical colors typical of ordinary spiral or elliptical galaxies. However, they are distinguished by their strong 15 $`\mu `$m emission, which sets them apart from the hundreds of other galaxies in the ISOCAM fields. 
The apparent mid-infrared luminosities (over the $`\mathrm{\Delta }\nu /\nu \approx 0.25`$ of the filter passband) of the four galaxies with known redshifts, A2218#395, #317, and #289, and A2219#5, are respectively $`L_{14\mu \mathrm{m}}=3.9\times 10^8L_{\mathrm{}}`$, $`L_{10\mu \mathrm{m}}=4.9\times 10^9L_{\mathrm{}}`$, $`L_{7\mu \mathrm{m}}=3.0\times 10^{10}L_{\mathrm{}}`$, and $`L_{7\mu \mathrm{m}}=1.9\times 10^{10}L_{\mathrm{}}`$ (assuming $`z=1.048`$ for A2219#5), where the subscripts on the luminosities represent the rest wavelengths observed. The frequency width of the 15 $`\mu `$m filter has been taken to be $`5.0\times 10^{12}`$ Hz, and the cosmological parameters used are $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.1`$. For A2218#289 and #317, and A2219#5, the calculated luminosities represent upper limits, since the fluxes are likely to be magnified by lensing. The other galaxies, if assumed to be at the redshifts of their clusters, range in luminosity between $`5.8\times 10^8L_{\mathrm{}}`$ and $`2.4\times 10^9L_{\mathrm{}}`$. For comparison, of the 57 Virgo cluster galaxies of all Hubble types detected by Boselli et al (1998) at 15 $`\mu `$m, none have $`L_{15\mu \mathrm{m}}`$ above $`10^9L_{\mathrm{}}`$, and most lie in the range $`10^6`$–$`10^8L_{\mathrm{}}`$. The galaxies detected here have much higher apparent luminosities in the mid-infrared than average cluster galaxies. Boselli et al (1998) find that for spirals and irregulars the 15 $`\mu `$m flux tends to be dominated by dust re-emission of primarily UV stellar light, whereas for ellipticals it is the (very weak) direct light of the Rayleigh-Jeans tail of the old population of red stars. However, for spirals the relation between star formation and 15 $`\mu `$m flux is not simple. It appears that galaxies with moderate activity have the highest 15 $`\mu `$m to UV flux ratios whereas for active star-forming galaxies this ratio is lower (Boselli et al 1997). 
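The luminosities quoted above follow from $`L=4\pi d_L^2S_\nu \mathrm{\Delta }\nu `$ with $`\mathrm{\Delta }\nu =5.0\times 10^{12}`$ Hz. A sketch of that conversion, using the standard Mattig luminosity distance for a matter-only model (whether the authors used exactly this expression is an assumption); the 255 $`\mu `$Jy input flux is a hypothetical value chosen to land on the $`2\times 10^{10}L_{\mathrm{}}`$ scale quoted for A2219#5, not a number taken from Table 1:

```python
import math

C_KM_S = 2.998e5          # speed of light [km/s]
MPC_M  = 3.086e22         # metres per megaparsec
LSUN_W = 3.83e26          # solar luminosity [W]

def d_lum_mpc(z, h0=65.0, q0=0.1):
    """Mattig luminosity distance [Mpc], matter-only Friedmann model (assumed)."""
    return (C_KM_S / h0) / q0**2 * (
        q0 * z + (q0 - 1.0) * (math.sqrt(1.0 + 2.0 * q0 * z) - 1.0))

def band_luminosity_lsun(s_jy, z, dnu_hz=5.0e12):
    """L = 4 pi d_L^2 * S_nu * (filter frequency width), in solar units."""
    d_m = d_lum_mpc(z) * MPC_M
    return 4.0 * math.pi * d_m**2 * (s_jy * 1e-26) * dnu_hz / LSUN_W

# Hypothetical 255 uJy source at z = 1.048 (illustrative flux, not Table 1):
print(f"{band_luminosity_lsun(255e-6, 1.048):.2e}")   # ~1.9e10 solar luminosities
```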
The galaxies with clear spiral morphology in our sample are A2218#395 and #317, and A2219#1 and #3. A2218#289 and #275 appear irregular. The others, all in Abell 2219 (galaxies #2, #4, and #5), are difficult to classify because of lack of angular resolution (all are smaller than $`3^{\prime \prime }`$). However, the very small number of detections of E/S0 galaxies in Virgo relative to later types by Boselli et al (1998) suggests that all or almost all of the detections here are likely to be from spirals or irregulars. This is supported by the large 15 $`\mu `$m to I-band flux ratios, which are typical of spirals rather than ellipticals (see Table 1). Altieri et al (1998a) recently reported ISOCAM imaging of Abell 2218 at 5, 7, 10, and 15 $`\mu `$m. Their field was roughly one-quarter the size of ours, because their pixel field of view was $`3^{\prime \prime }`$ compared with our $`6^{\prime \prime }`$. In the inner regions of the cluster they detected the central cD galaxy and five others: #395, #317, #323, #373, and #275. We detected #395, #317, #275, and possibly #373 (which we do not claim as a firm detection, although it does show up with $`400\mu `$Jy in two positive contours in Figures 4 & 5). A2218#323 is weak in Figure 2 of Altieri et al, and appears to be below our detection threshold, as does the cD galaxy. For #395 and #317 the fluxes given by Altieri et al (1998a) are consistent with ours. Our approximate flux of $`400\mu `$Jy for #373 is also consistent with the value found by Altieri et al. ### 3.2 Notes on individual galaxies A2218#289, at $`z=1.034`$, consists of a complex of bright knots and diffuse emission (see Figure 5). Lensing distortion is substantial at the northeastern end, where the galaxy is stretched across the halo of cluster galaxy #244; because the object appears so luminous it is probably highly magnified, according to Kneib et al (1996). A2218#317 is a background spiral galaxy at $`z=0.474`$. 
It is very probably magnified by the cluster, since its shape and color led to a redshift prediction by Kneib et al (1996), using cluster inversion techniques, of $`0.2<z_{\mathrm{lensing}}<0.4`$. Given that such redshift predictions are statistical in nature, the spectroscopically measured redshift of $`z=0.474`$ can be considered to be consistent with the lensing hypothesis. A2219#5 is optically the faintest object among those detected at 15 $`\mu `$m, by more than an order of magnitude. It is quite red, with an optical color $`BI=4.08`$, and a 15 $`\mu `$m to $`I`$ ratio $`S_{15\mu \mathrm{m}}/S_I=180`$; the second largest such ratio is 21 for A2218#289. A2219#5 therefore appears to be an unusual object, very red in the optical and apparently dominated in luminosity by its mid- or far-infrared emission. Lémonon et al (1998) and Altieri et al (1998b) have recently found similar objects in ISOCAM images of Abell 2390. Referring to Figure 6, A2219#5 is well-resolved in the optical along the major axis and barely resolved along the minor axis. A two-dimensional Gaussian fit gives a FWHM size of $`2.0^{\prime \prime }\times 1.0^{\prime \prime }`$ at position angle 103$`\mathrm{°}`$ (major axis). The image seeing is $`0.7^{\prime \prime }`$, so the deconvolved minor axis width would be $`0.7^{\prime \prime }`$. The direction to the center of the cluster from the location of A2219#5 lies at PA = 210$`\mathrm{°}`$ (see Figure 6), so the major axis is within 17$`\mathrm{°}`$ of being perpendicular to this direction, suggesting a lensed arclet. ### 3.3 IR/B Ratios Table 1 lists the 15 $`\mu `$m-to-I ratios for all detected galaxies. The three galaxies at the highest measured redshifts have the highest ratios, as would be expected for a survey with fewer detections in the infrared than the visible. For A2218#289 and A2219#5 at redshifts near 1, the observed 15 $`\mu `$m/I ratio maps closely to the ratio of 7 $`\mu `$m/B in the rest frame. 
For these bands, the observed flux density ratios of 21 and 176 translate to a luminosity ratio (in $`\nu f_\nu `$) of $`\sim 1`$ and $`\sim 10`$. Helou et al (1999) show that for star forming galaxies the ISO 7 $`\mu `$m band ($`59`$ $`\mu `$m) is dominated by the Aromatic features, and that the luminosity in this band accounts for 2–8% of the total infrared luminosity between 3 and 1000 $`\mu `$m. The lower luminosity fraction is characteristic of intense starbursts, while the higher fractions occur in quiescent or mildly active star forming galaxies. At the extreme luminosity and excitation end, however, these fractions can drop even lower, as in Arp 220, which emits only about 0.3% of its luminosity in the 7 $`\mu `$m band (E. Sturm, private communication; Genzel et al. 1998). Using the higher 7 $`\mu `$m-to-total-IR fraction of 8% leads to IR/B ratios of 13 and 120 for A2218#289 and A2219#5, which are much higher than expected in quiescent galaxies. One is therefore led by self-consistency arguments to using a smaller fraction applicable to active galaxies. For a 7 $`\mu `$m-to-total-IR fraction of 2%, one finds the total IR/B ratios are $`\sim 40`$ and $`\sim 400`$ for A2218#289 and A2219#5 (extrapolating the optical spectrum to obtain B<sub>rest</sub>), and the total observed IR luminosities about $`1.5\times 10^{12}`$ and $`9.5\times 10^{11}L_{\mathrm{}}`$. While these are substantial luminosities, they do have to be adjusted down for gravitational lensing amplification, which is on the order of ten for A2218#289 (Casoli et al 1996), and at most a few for A2219#5 given that it is lensed by the smooth cluster potential. If A2219#5 is at a redshift of 0.163 rather than 1.048 (see §2), its luminosity would be considerably lower of course but its IR/B ratio would still be very large. 
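The first step of this estimate, turning observed flux-density ratios into $`\nu f_\nu `$ ratios, is just a frequency factor $`\nu _{15\mu \mathrm{m}}/\nu _I=\lambda _I/\lambda _{15\mu \mathrm{m}}`$. A sketch, taking the $`I`$ band at an assumed effective wavelength of 0.9 $`\mu `$m:

```python
# Convert observed flux-density ratios S(15um)/S(I) into nu*f_nu ratios.
# The I-band effective wavelength of 0.9 um is an assumption; with it the
# frequency factor is nu_15 / nu_I = lambda_I / lambda_15 = 0.9 / 15 = 0.06.
lam_I, lam_15 = 0.9, 15.0      # microns
for name, flux_ratio in [("A2218#289", 21.0), ("A2219#5", 176.0)]:
    nufnu_ratio = flux_ratio * lam_I / lam_15
    print(f"{name}: nu*f_nu(15um) / nu*f_nu(I) ~ {nufnu_ratio:.1f}")
# -> ~1.3 and ~10.6, consistent with the "~1 and ~10" quoted in the text.
```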
To further constrain the IR/B ratios we have analyzed the raw IRAS survey data for the best estimate of upper limits in the far-infrared, and find for each of A2218#289 and A2219#5 a three-sigma upper limit of about 225 mJy at 100 $`\mu `$m, in quiet sky with very little cirrus noise. This implies that the 7 $`\mu `$m-to-total-IR fraction must be greater than 1.6% for A2218#289, and greater than 1% for A2219#5, or they would have been detected by IRAS. These limiting fractions add confidence that the 2% adopted above is a reasonably good estimate, since it is constrained from both above and below. We therefore conclude that both of the $`z=1`$ objects are likely to be dusty galaxies similar to local ULIRGs, with luminosities up to a few $`\times 10^{11}L_{\mathrm{}}`$. While IR/B $`\sim 40`$ (A2218#289) is not unusual for an active galaxy, a ratio of 400 (A2219#5) is extreme, in that it requires a high column density of gas and dust with the dust surrounding the active regions without significant leaks or clumping. Differential lensing amplification between wavelengths (Eisenhardt et al 1996) is unlikely to be the cause of the extreme ratio since A2219#5 is lensed by the diffuse potential of Abell 2219, which is too smooth to produce such an effect. Similar high ratios have been reported by Dey et al (1999) for the z=1.44 ERO (extremely red object) HR10 (IR/B $`\sim 300`$), and by Yun and Scoville (1998) for IRAS F15307+3252 (IR/B $`\sim 650`$). A2219#5 has a number of other similarities to HR10 and other members of the ERO class, although its colors are probably not extreme enough to warrant inclusion in that class (we estimate an R$`-`$K color of 4.5, interpolating between our measured fluxes; EROs have R$`-`$K$`>6`$). The EROs are a population of infrared bright, extremely red objects that have been discovered in near-infrared imaging surveys. Most EROs are sufficiently faint optically that their redshifts and spectral properties are unknown. 
The exception is HR10, for which a redshift of 1.44 has been derived based on \[OII\] and H$`\alpha `$ emission lines (Graham & Dey 1996; Dey et al 1999). This object is thought to be a high-redshift counterpart of the dusty, ultraluminous infrared galaxies found in the local Universe (Dey et al 1999). The optical/IR spectral energy distributions and luminosities of A2219#5 (assuming $`z=1.048`$) and HR10 are similar, both are extended at the 1–2$`^{\prime \prime }`$ level, and both show an emission line at long optical wavelengths, firmly identified as \[OII\] $`\lambda \lambda 3726,3729`$Å in the case of HR10 and tentatively identified as the same line in A2219#5. If EROs are like A2219#5 in their infrared properties, the mid-infrared ISO detection of A2219#5 indicates that the sharply rising near-infrared spectrum of objects like HR10 continues out to at least 15 $`\mu `$m. This is consistent with the detection of HR10 at submillimeter wavelengths by Dey et al (1999). We conclude that the galaxies in the cluster backgrounds are luminous, high IR/B objects. As is the case for most ULIRGs, the primary driving power source, whether QSO or starburst, is unclear. ## 4 Summary Nine galaxies have been detected in ISOCAM 15 $`\mu `$m images of the lensing clusters Abell 2218 and Abell 2219. At least three of these are luminous, high IR/B lensed background galaxies at redshifts of 0.5–1, and one, behind Abell 2219, is an optically faint object heavily dominated by its mid-infrared emission. Judging from its high infrared luminosity, its high ratio of infrared to optical emission, and its red optical colors, this object appears to be active, with the source or sources of activity (starburst and/or AGN) heavily obscured by dust. We thank Ian Smail for kindly providing optical images and photometry of the clusters, Jean-Paul Kneib for the HST image, and Tom Broadhurst and Brenda Frye for obtaining the Keck spectrum of A2219#5. FIGURE CAPTIONS
no-problem/9905/hep-th9905218.html
# Acknowledgments ## Acknowledgments We would like to dedicate this paper to the memory of Yuri Golfand, one of the pioneers of supersymmetry. We are indebted to P. Aschieri and especially to M. K. Gaillard for many illuminating discussions. This work was supported in part by the Director, Office of Science, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under Contract DE-AC03-76SF00098 and in part by the National Science Foundation under grant PHY-95-14797.
no-problem/9905/hep-ph9905517.html
# Initial conditions and evolution of off-diagonal distributions ## 1 Introduction Off-diagonal (or skewed) parton distributions provide important information about the nonperturbative structure of the nucleon . They can in principle be measured in such processes as deeply virtual Compton scattering, diffractive vector meson production or diffractive high-$`p_T`$ jet production. Just as for the ordinary (diagonal) parton distributions, the QCD evolution equations play an important role in the determination of skewed parton distributions. As usual, input distributions are required. However, this is more complicated than in the diagonal case since the skewed parton distributions depend on an additional variable – the asymmetry parameter $`\xi \propto p-p^{\prime }`$. We use the symmetric formulation of Ji in which the skewed parton distributions are given by functions $`H(x,\xi )`$ with support $`-1\le (x,\xi )\le 1`$, see for more details. Here we discuss various ways to specify the initial conditions and show how the differences in the input distributions disappear on evolution. In particular we illustrate the conclusion of that the skewed distributions $`H(x,\xi )`$, at small $`x`$ and $`\xi `$, are fixed by the conventional diagonal partons. ## 2 Constraints imposed on $`H(x,\xi )`$ The distributions $`H(x,\xi )`$ have to fulfil several conditions with respect to the variable $`\xi `$. First, time-reversal invariance and hermiticity impose the condition $$H(x,\xi )=H(x,-\xi ).$$ (1) Thus $`H(x,\xi )`$ is an even function of $`\xi `$. The second condition states that in the limit $`\xi =0`$ we recover the ordinary diagonal parton distributions $$H(x,0)=H^{diag}(x).$$ (2) The third condition is more complicated but has a simple origin, . The $`N^{th}`$ moment of $`H(x,\xi )`$ is a polynomial in $`\xi `$ of order $`N`$ at most $$\int _{-1}^{1}dx\,x^{N-1}H(x,\xi )=\sum _{i=0}^{[N/2]}A_{N,i}\xi ^{2i},$$ (3) which also embodies condition (1). 
Finally we impose continuity of $`H(x,\xi )`$ at the border $`x=\pm \xi `$ between two different physical regions, see . This ensures that the amplitude of a physical process, described by skewed distributions, is finite. All these conditions have to be fulfilled in the construction of the input for evolution. They are, of course, conserved during the evolution. ## 3 Initial conditions for evolution There are two equivalent ways to evolve $`H(x,\xi ,\mu ^2)`$ up in the scale $`\mu ^2`$. The first method, which is most appropriate at small $`x,\xi `$, uses the evolution of the Gegenbauer moments of $`H(x,\xi )`$ and the Shuvaev transform to find the final answer in $`x`$-space . In the second method the solutions are found after imposing initial conditions and numerically solving the evolution equations, directly in $`x`$-space. Here we supplement the studies of by presenting results from the second approach. Condition (3) is difficult to fulfil when specifying initial distributions at a certain scale $`\mu _0^2`$. It may be facilitated by the use of the double distribution $`F(\stackrel{~}{x},\stackrel{~}{y})`$ $$H(x,\xi )=\int _\mathrm{\Omega }d\stackrel{~}{x}\,d\stackrel{~}{y}\,F(\stackrel{~}{x},\stackrel{~}{y})\,\delta (x-(\stackrel{~}{x}+\xi \stackrel{~}{y})),$$ (4) where $`\mathrm{\Omega }`$ is the square $`|\stackrel{~}{x}|+|\stackrel{~}{y}|\le 1`$. This prescription introduces a nontrivial mixing between $`x`$ and $`\xi `$. Condition (1) is guaranteed if $`F(\stackrel{~}{x},\stackrel{~}{y})=F(\stackrel{~}{x},-\stackrel{~}{y})`$. We still have freedom to add to (4) a function $`\text{sign}(\xi )D(x/\xi )`$, antisymmetric in $`x/\xi `$, contained entirely in the ERBL-like region $`|x|<\xi `$ . The only problem left is to build in the ordinary diagonal distributions in the prescription (4). 
To do this we take $$F(\stackrel{~}{x},\stackrel{~}{y})=h(\stackrel{~}{y})H^{diag}(\stackrel{~}{x})/\int _{-1+|\stackrel{~}{x}|}^{1-|\stackrel{~}{x}|}dy^{\prime }h(y^{\prime }),$$ (5) where different choices of $`h`$ give, via (4), different initial conditions for $`H(x,\xi )`$. Three choices, $$h(\stackrel{~}{y})=\{\begin{array}{cc}\delta (\stackrel{~}{y})\hfill & \\ (1-\stackrel{~}{y}^2)^{p+1},p=0,1\hfill & \\ \mathrm{sin}(\pi \stackrel{~}{y}^2)\hfill & \end{array},$$ (6) are shown in Fig. 1, with $`H^{diag}(x)`$ given by GRV at $`\mu _0^2=0.26\text{GeV}^2`$. The first choice gives simply the diagonal input $`H^{diag}(x)`$, independent of $`\xi `$. The second, with $`p=0(1)`$ for quarks (gluons), generates an input form similar to that obtained from the model of in which the Gegenbauer moments of $`H(x,\xi )`$ are $`\xi `$-independent. This property is conserved by the evolution. The exact form of the double distribution in this case can be found in . The last choice was selected so as to give an oscillatory input behaviour in the ERBL-like region. ## 4 Discussion Fig. 1 shows the quark non-singlet, quark singlet and gluon skewed distributions for the three input models evolved up in $`\mu ^2`$ for $`\xi =0.03`$. The results for larger values of $`\xi `$ are qualitatively the same . We see that the form of $`h(\stackrel{~}{y})`$ has the most influence in the ERBL region. However even in this region evolution soon washes out the differences. Already by $`\mu ^2=100`$ GeV<sup>2</sup> the three curves are close to each other. 
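The construction in Eqs. (4)–(5) is easy to check numerically: the delta function reduces (4) to a one-dimensional integral, $`H(x,\xi )=𝑑\stackrel{~}{y}F(x\xi \stackrel{~}{y},\stackrel{~}{y})`$. The sketch below uses the $`p=0`$ profile $`h(\stackrel{~}{y})=1\stackrel{~}{y}^2`$ together with a toy diagonal distribution (not the GRV input used in the text), and verifies conditions (1) and (2):

```python
# Toy check of the double-distribution ansatz, Eqs. (4)-(5):
#   H(x, xi)  = \int dy~ F(x - xi*y~, y~),
#   F(x~, y~) = h(y~) H_diag(x~) / \int_{-1+|x~|}^{1-|x~|} h(y') dy',
# with h(y~) = 1 - y~^2 (the p = 0 choice) and an illustrative H_diag.

def h_diag(t):                      # toy diagonal distribution, NOT GRV
    return (1.0 - abs(t)) ** 3 if abs(t) <= 1.0 else 0.0

def norm(t):                        # closed form of \int h over the strip
    a = 1.0 - abs(t)
    return 2.0 * a - 2.0 * a ** 3 / 3.0

def F(xt, yt):                      # double distribution, zero outside square
    if abs(xt) + abs(yt) > 1.0 or norm(xt) < 1e-12:
        return 0.0
    return (1.0 - yt ** 2) * h_diag(xt) / norm(xt)

def H(x, xi, n=4000):               # midpoint rule for the y~ integral
    dy = 2.0 / n
    total = 0.0
    for i in range(n):
        y = -1.0 + (i + 0.5) * dy
        total += F(x - xi * y, y)
    return total * dy

print(H(0.3, 0.0), h_diag(0.3))     # condition (2): both ~0.343
print(H(0.3, 0.2) - H(0.3, -0.2))   # condition (1): ~0, H is even in xi
```

With $`h`$ even in $`\stackrel{~}{y}`$ the $`\xi `$-evenness is automatic, and at $`\xi =0`$ the $`\stackrel{~}{y}`$ integral cancels the normalization, returning the diagonal input exactly as stated after Eq. (6).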
In this way we illustrate the result presented in that to a good accuracy at small $`\xi `$, the skewed distributions $`H(x,\xi ;\mu ^2)`$ are completely known in terms of conventional partons. Thus, to summarize, the nonperturbative information contained in the diagonal input parton distributions and particular features of the evolution equations for skewed parton distributions are sufficient for their determination. Acknowledgements We thank Max Klein and Johannes Blümlein for their efficient organization of DIS99, and the Royal Society and the EU Fourth Framework Programme ‘Training and Mobility of Researchers’, Network ‘QCD and the Deep Structure of Elementary Particles’, contract FMRX-CT98-0194 (DG 12-MIHT) for support.
no-problem/9905/cond-mat9905333.html
# Composite Fermion Picture for Multi-Component Plasmas in 2D Electron-Hole Systems in a Strong Magnetic Field ## Introduction. In a 2D electron-hole system in a strong magnetic field, the only bound complexes are neutral excitons $`X^0`$ and spin-polarized charged excitonic ions $`X_k^{}`$ ($`k`$ excitons bound to an electron) . Other complexes found at lower fields unbind due to a hidden symmetry. The $`X_k^{}`$ ions are long lived Fermions whose energy spectra contain Landau level structure. By numerical diagonalization of small systems we can determine binding energies and angular momenta of the excitonic ions, and pseudopotentials which describe their interactions with electrons and with one another . We show that a gas of $`X_k^{}`$’s can form Laughlin incompressible fluid states, but only for filling factors $`\nu _k\le (2k+1)^{-1}`$ (in the following, subscript $`k`$ denotes $`X_k^{}`$). Multi-component plasmas containing electrons and $`X_k^{}`$ ions of one or more different types can also form incompressible fluid states. A generalized composite Fermion (CF) picture is proposed to describe such a plasma. It requires the introduction of Chern–Simons charges and fluxes of different types (colors) in order to mimic generalized Laughlin type correlations. The predictions of this CF picture agree well with numerical results for systems containing up to eighteen particles. ## Four Electron–Two Hole System. Understanding of the energy spectrum of this simple system is essential for our considerations. The result of the numerical diagonalization in Haldane spherical geometry, for magnetic monopole strength $`2S=17`$, is shown in Fig. 1. Open and solid circles mark multiplicative and non-multiplicative states, respectively. 
For $`L<12`$ there are four low lying bands, which we have identified, in order of increasing energy, as two $`X^{}`$’s, an electron and an $`X_2^{}`$, an electron and an $`X^{}`$ and a decoupled $`X^0`$, and finally two electrons and two decoupled $`X^0`$’s. We find that the $`X_k^{}`$ has an angular momentum $`l_k=S-k`$ in contrast to an electron which has $`l_0=S`$. All relevant binding energies and pseudopotentials are also determined. An important observation is that the pseudopotential of composite particles ($`k>0`$) is effectively infinite (hard core) if $`L`$ exceeds a particular value. This is due to unbinding of ions at too small separation. Once the maximum allowed $`L`$’s for all pairings are established, the four bands in Fig. 1 can be approximated by the pseudopotentials of electrons (point charges) with angular momenta $`l_A`$ and $`l_B`$, shifted by the appropriate binding energies (large symbols). ## Larger Systems We know from exact calculations for up to eleven electrons that the CF picture correctly predicts the low lying states of the fractional quantum Hall systems. The reason for this success is the ability of the electrons in states of low $`L`$ to avoid large fractional parentage (FP) from pair states associated with large values of the Coulomb pseudopotential. In particular, for the Laughlin $`\nu _0=1/3`$ state, the FP from pair states with maximum pair angular momentum $`L=2l_0-1`$ vanishes. We hypothesize that the same effect should occur for an $`X^{}`$ system when $`l_0=S`$ is replaced by $`l_1=S-1`$. We define an effective $`X^{}`$ filling factor as $`\nu _1(N,S)=\nu _0(N,S-1)`$ and expect the incompressible $`X^{}`$ states at all Laughlin and Jain fractions for $`\nu _1\le 1/3`$. States with $`\nu _1>1/3`$ cannot be constructed because they would have some FP from pair states forbidden by the hard core repulsion. Fig. 2 shows energy spectra of the $`6e+3h`$ system at $`2S=8`$ and 11. 
Both multiplicative (open circles) and non-multiplicative (solid circles) states are shown in frames (a) and (c). In frames (b) and (d) only the non-multiplicative states are plotted, together with the approximate spectra (large symbols) obtained by diagonalizing the system of three ions with the actual pseudopotentials appropriate to the three possible charge configurations: $`3X^{}`$ (diamonds), $`e^{}+X^{}+X_2^{}`$ (squares), and $`2e^{}+X_3^{}`$ (triangles). Good agreement between the exact and approximate spectra in Figs. 2b and 2d allows identification of the three ion states and confirms our conjecture about incompressible states of an $`X^{}`$ gas. States corresponding to different charge configurations form bands. At low $`L`$, the bands are separated by gaps, predominantly due to different total binding energies of different configurations. The lowest state in each band corresponds to the three ions moving as far from each other as possible. If the ion–ion repulsion energies were equal for all configurations (a good approximation for dilute systems), the two higher bands would lie above dashed lines, marking the ground state energy plus the appropriate difference in binding energies. The low lying multiplicative states can also be identified as $`3e^{}+3X^0`$, $`2e^{}+X^{}+2X^0`$, $`2e^{}+X_2^{}+X^0`$, and $`e^{}+2X^{}+X^0`$. The bands of three ion states are separated by a rather large gap from all other states, which involve excitation and breakup of composite particles. The largest systems for which we performed exact calculations are the $`6e+3h`$ and $`8e+4h`$ systems at $`2S`$ up to 12 (Laughlin $`\nu _1=1/5`$ state of three $`X^{}`$’s and one quasi-$`X^{}`$-hole in the $`\nu _1=1/3`$ state of four $`X^{}`$’s). In each case the CF picture applied to the $`X^{}`$ particles works well. For larger systems the exact diagonalization becomes difficult. 
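The $`2S`$ values at which these Laughlin states appear follow from simple flux counting on the sphere: since $`l_1=S1`$, the $`X^{}`$ gas sees an effective monopole strength $`2S2`$, and a $`\nu _1=1/m`$ state of $`N`$ ions requires $`2S2=m(N1)`$. A sketch reproducing the quoted values (the closed form is our own rearrangement of the text's flux counting):

```python
# Flux counting for Laughlin states of N charged excitonic ions X_k^-:
# the ion has l_k = S - k, so the gas sees 2S_eff = 2S - 2k, and the
# nu = 1/m Laughlin state needs 2S_eff = m(N - 1), i.e. 2S = m(N - 1) + 2k.

def two_S_laughlin(N, m, k=1):
    return m * (N - 1) + 2 * k

print(two_S_laughlin(3, 5))   # 12: nu_1 = 1/5 of three X^- (6e+3h system)
print(two_S_laughlin(4, 3))   # 11: nu_1 = 1/3 of four X^-; at 2S = 12 the
                              #     8e+4h system holds one quasihole instead
print(two_S_laughlin(6, 3))   # 17: nu_1 = 1/3 of six X^- (12e+6h system)
print(two_S_laughlin(6, 5))   # 27: nu_1 = 1/5 of six X^-
```

The Jain fractions 2/7 and 2/9 quoted for the $`12e+6h`$ system follow similarly from the composite fermion mapping applied at the effective monopole strengths $`2S2=19`$ and 21.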
For example, for the $`12e+6h`$ system we expect the $`\nu _1=1/3`$, 2/7, 2/9, and 1/5 incompressible states to occur at $`2S=17`$, 21, 23, and 27, respectively. We managed to extrapolate the involved pseudopotentials making use of their regular dependence on $`2S`$, and use them, together with the binding energies, to determine approximate low lying bands in the energy spectra, as shown in Fig. 3. At $`2S=17`$, the only state of the $`6X^{-}`$ configuration is the $`L=0`$ ground state (filled circle); other $`6X^{-}`$ states are forbidden by the hard core. The low lying states of the other low energy configurations, $`e^{-}+5X^{-}+X^0`$ (open circles) and $`e^{-}+4X^{-}+X_2^{-}`$ (open squares), are separated from the $`6X^{-}`$ ground state by a gap. At $`2S=21`$, 23, and 27, all features predicted by the CF picture occur for the $`6X^{-}`$ states.

## Generalized Composite Fermion Picture

In order to understand all of the numerical results presented in Fig. 3a, we introduce a generalized CF picture by attaching to each particle fictitious flux tubes carrying an integral number of flux quanta $`\varphi _0`$. In the multi-component system, each $`a`$-particle carries flux $`(m_{aa}-1)\varphi _0`$ that couples only to the charges on all other $`a`$-particles, and fluxes $`m_{ab}\varphi _0`$ that couple only to the charges on all $`b`$-particles, where $`a`$ and $`b`$ are any of the types of Fermions. The effective monopole strength seen by a CF of type $`a`$ (CF-$`a`$) is $`2S_a^{*}=2S-\sum _b(m_{ab}-\delta _{ab})(N_b-\delta _{ab})`$. For different multi-component systems we expect generalized Laughlin incompressible states when all the hard cores are avoided and the CF’s of each type completely fill an integral number of their CF shells. In other cases, the low lying multiplets will contain different types of quasiparticles (QP-$`a`$, QP-$`b`$, …) or quasiholes (QH-$`a`$, QH-$`b`$, …) in the neighboring incompressible state.
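The effective-field bookkeeping above is simple enough to evaluate numerically. The following sketch (our own illustration; the function name and data layout are not from any published code) computes $`2S_a^{*}`$ and the CF angular momentum $`l_k^{*}=|S_k^{*}|-k`$ for the $`6X^{-}`$ system at $`2S=17`$ discussed above:

```python
# Illustrative sketch of the generalized CF bookkeeping:
# 2S*_a = 2S - sum_b (m_ab - delta_ab)(N_b - delta_ab),
# and l*_k = |S*_k| - k for an excitonic ion X_k^-.

def effective_2S(two_S, counts, m):
    """counts[a] = number of particles of type a; m[a][b] = CF exponents."""
    out = {}
    for a in counts:
        s = two_S
        for b in counts:
            d = 1 if a == b else 0
            s -= (m[a][b] - d) * (counts[b] - d)
        out[a] = s
    return out

# Example: six X^- (type 1, i.e. k = 1) at 2S = 17 with m_11 = 3.
two_S_star = effective_2S(17, {1: 6}, {1: {1: 3}})[1]   # 17 - 2*5 = 7
l_star = abs(two_S_star) / 2 - 1                        # l*_1 = |S*_1| - 1
# A CF shell of angular momentum l holds 2l + 1 states: 2*2.5 + 1 = 6,
# so the six CF-X^- exactly fill their lowest shell, giving the L = 0
# Laughlin nu_1 = 1/3 state.
print(two_S_star, l_star)  # 7 2.5
```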
Our multi-component CF picture can be applied to the system of excitonic ions, where the CF angular momenta are given by $`l_k^{*}=|S_k^{*}|-k`$. As an example, let us consider Fig. 3a and make the following CF predictions. For six $`X^{-}`$’s we obtain the Laughlin $`\nu _1=1/3`$ state at $`L=0`$. Because of the $`X^{-}`$-$`X^{-}`$ hard core, it is the only state of this configuration. For the $`e^{-}+5X^{-}+X^0`$ configuration we set $`m_{11}=3`$ and $`m_{01}=1`$, 2, and 3. For $`m_{01}=1`$ we obtain $`L=1`$, 2, $`3^2`$, $`4^2`$, $`5^3`$, $`6^3`$, $`7^3`$, $`8^2`$, $`9^2`$, 10, and 11; for $`m_{01}=2`$ we obtain $`L=1`$, 2, 3, 4, 5, and 6; and for $`m_{01}=3`$ we obtain $`L=1`$. For the $`e^{-}+4X^{-}+X_2^{-}`$ configuration we set $`m_{11}=3`$, $`m_{02}=1`$, $`m_{12}=3`$, and $`m_{01}=1`$, 2, or 3. For $`m_{01}=1`$ we obtain $`L=2`$, 3, $`4^2`$, $`5^2`$, $`6^3`$, $`7^2`$, $`8^2`$, 9, and 10; for $`m_{01}=2`$ we obtain $`L=2`$, 3, 4, 5, and 6; and for $`m_{01}=3`$ we obtain $`L=2`$. Note that the sets of multiplets obtained for higher values of $`m_{01}`$ are subsets of the sets obtained for lower values; we would expect them to form lower energy bands, since they avoid additional large values of the $`e^{-}`$-$`X^{-}`$ pseudopotential. As marked with lines in Fig. 3a, this is indeed true for the states predicted for $`m_{01}=2`$. However, the states predicted for $`m_{01}=3`$ do not form separate bands. This is because the $`e^{-}`$-$`X^{-}`$ pseudopotential increases more slowly than linearly as a function of $`L(L+1)`$ in the vicinity of $`L=l_0+l_1-m_{01}`$; in such a case the CF picture fails. The agreement of our CF predictions with the exact spectra of different systems, as in Figs. 2 and 3, is quite remarkable and strongly indicates that our multi-component CF picture is correct. We are also able to confirm the predicted Laughlin type correlations in the low lying states by calculating their FP coefficients.
In view of the results obtained for the many different systems that we were able to treat numerically, we conclude that if the exponents $`m_{ab}`$ are chosen correctly, the CF picture works well in all cases.

## Summary

The low lying states of electron–hole systems in a strong magnetic field contain charged excitonic ions $`X_k^{-}`$ interacting with one another and with electrons. For the different combinations of ions occurring at low energy, we introduced general Laughlin type correlations into the wavefunctions and demonstrated the formation of incompressible fluid states of such multi-component plasmas at particular values of the magnetic field. We also proposed a generalized multi-component CF picture and successfully predicted the lowest bands of multiplets for various charge configurations at any value of the magnetic field. It is noteworthy that fictitious Chern–Simons fluxes and charges of different types or colors are needed in the generalized CF model. This strongly suggests that the effective magnetic field seen by the CF’s does not physically exist and that the CF picture should be regarded as a mathematical convenience rather than physical reality. Our model also suggests an explanation of some perplexing observations found in photoluminescence, but this topic will be addressed in a separate publication. We thank M. Potemski for helpful discussions. AW and JJQ acknowledge partial support from the Materials Research Program of Basic Energy Sciences, US Department of Energy. KSY acknowledges support from the Korea Research Foundation.
no-problem/9905/hep-ph9905493.html
LU TP 99–08
May 1999

# Drag effects in charm photoproduction

E. Norrbin and T. Sjöstrand
Department of Theoretical Physics 2, Lund University
Sölvegatan 14 A, S-223 62 Lund, Sweden
emanuel@thep.lu.se, torbjorn@thep.lu.se

> Abstract: We have refined a model for charm fragmentation at hadron colliders. This model can also be applied to the photoproduction of charm. We investigate the effect of fragmentation on the distribution of produced charm quarks. The drag effect is seen to produce charm hadrons that are shifted in rapidity in the direction of the beam remnant. We also study the importance of different production mechanisms such as charm in the photon and charm from parton showers.

In previous work we studied and refined a model for the hadronization of a low-mass string in the framework of the Lund string fragmentation model . The model was used to describe the leading particle effect that has been observed at fixed-target experiments . With a leading charmed meson defined as one having a light quark in common with the incoming beam, an asymmetry has been observed between leading and non-leading charmed mesons, favouring leading particles in the beam fragmentation region. In a string fragmentation framework this is understood in the following way. Because of the colour flow in an event, the produced charm quarks normally are colour-connected to the beam remnants of the incoming particles. This gives a charmed hadron the possibility to gain energy and momentum from the beam remnant in the fragmentation process, and thus to be produced at a larger rapidity than the initial charm quark. The extreme case in this direction is when the colour singlet containing the charm quark and the beam remnant has a small invariant mass, e.g. below or close to the two-particle threshold. Then the colour singlet, called a cluster, will be forced to collapse into a meson, giving a hard leading particle.
The corresponding production mechanism for non-leading particles involves sea quarks and is therefore suppressed. The qualitative nature of the asymmetry can thus be understood within the string model. The quantitative predictions, however, depend on model parameters. The model has been tuned to reproduce data on both asymmetries and single-charm spectra at fixed-target energies . Here we wish to apply the model to $`\gamma `$p physics at HERA. The asymmetries are small in this case because of the higher energy and the flavour neutral photon beam. Therefore the emphasis is shifted towards beam-drag effects, consequences of the photon structure, and higher-order effects. The photon is a more complicated object than a hadron because it has two components: one direct, where the photon interacts as a whole, and one resolved, where it has fluctuated into a $`\mathrm{q}\overline{\mathrm{q}}`$ pair before the interaction. This results in very different event structures in the two cases. This study is restricted to real photons (photoproduction) as modeled by Schuler and Sjöstrand and implemented in the Pythia event generator. We include the photon flux and use cuts close to the experimental ones. We first examine the leading-order charm spectra for direct and resolved photons, estimate the cross section in the two cases, and study how the fragmentation process alters the charm spectra in the string model. Then we add some higher-order processes (flavour excitation and gluon splitting) and find that they give a significant contribution to the charm cross section, especially for resolved photons. We consider charm photoproduction in an $`\mathrm{e}^\pm `$p collision (820 GeV protons and 27.5 GeV electrons) with real photons ($`Q^2<1`$ GeV$`^2`$), $`130<W_{\gamma \mathrm{p}}<280`$ GeV, and some different $`p_{\perp }`$-cuts.
The analysis is done in the $`\gamma `$p center of mass system using the true rapidity ($`y=\frac{1}{2}\mathrm{ln}(\frac{E+p_z}{E-p_z})`$) as the main kinematical variable. The photon (electron) beam is incident along the negative z-axis. To leading order, the massive matrix elements producing charm are the fusion processes $`\gamma \mathrm{g}\to \mathrm{c}\overline{\mathrm{c}}`$ (direct) and $`\mathrm{gg}\to \mathrm{c}\overline{\mathrm{c}}`$ and $`\mathrm{q}\overline{\mathrm{q}}\to \mathrm{c}\overline{\mathrm{c}}`$ (resolved). Fig. 1 shows the distribution of charmed quarks and charmed hadrons separated into these two classes. For direct photons the hadrons are shifted in the direction of the proton beam, since both charm quarks are colour-connected to the proton beam remnant. In a resolved event the photon also has a beam remnant, so the charmed hadron is shifted towards the beam remnant it is connected to. Also note that the drag effect is a small-$`p_{\perp }`$ phenomenon: a jet at high $`p_{\perp }`$ will not be much influenced by the beam remnant. The drag effect is illustrated in Fig. 2, where the average rapidity shift in the hadronization, $`\mathrm{\Delta }y=y_{\mathrm{Hadron}}-y_{\mathrm{Quark}}`$, is shown as a function of $`y_{\mathrm{Hadron}}`$. For direct photons and central rapidities the shift is approximately constant. The increasing shift at large rapidities is due to an increased interaction between the proton remnant and the charmed quark when their combined invariant mass is small. At large negative rapidities there is no corresponding effect because there is no beam remnant there. The drop of $`\mathrm{\Delta }y`$ in this region is a pure edge effect: only those events with a below-average $`\mathrm{\Delta }y`$ can give a very negative $`y_{\mathrm{Hadron}}`$. For resolved photons the shift is in the direction of the proton and photon beam remnants. Note that what is plotted is only the mean.
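Since the analysis is framed entirely in terms of the true rapidity, it may help to recall its key property: a boost along the z-axis shifts $`y`$ by a constant, which is why rapidity differences such as $`\mathrm{\Delta }y`$ are frame independent. The short sketch below (illustrative numbers only, not from the paper) verifies this:

```python
import math

# True rapidity y = (1/2) ln[(E + p_z)/(E - p_z)]; under a longitudinal
# boost of velocity beta it shifts by the constant atanh(beta).

def rapidity(E, pz):
    return 0.5 * math.log((E + pz) / (E - pz))

def boost_z(E, pz, beta):
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return g * (E + beta * pz), g * (pz + beta * E)

E, pz = 10.0, 6.0                       # GeV, an illustrative particle
y0 = rapidity(E, pz)                    # 0.5*ln(16/4) = 0.6931...
E2, pz2 = boost_z(E, pz, 0.5)           # boost along +z
y1 = rapidity(E2, pz2)                  # shifted by atanh(0.5) = 0.5493...
print(round(y0, 4), round(y1 - y0, 4))  # 0.6931 0.5493
```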
The width of the $`\mathrm{\Delta }y`$ distribution is generally larger than its mean, so the shift can go both ways. For example, the quarks at very small rapidities ($`y\lesssim -5`$) in Fig. 1b will all be shifted with $`\mathrm{\Delta }y>0`$, but hadrons produced there will, on average, come from quarks produced at larger rapidities (i.e. $`\mathrm{\Delta }y<0`$). The apparent contradiction with Fig. 2b is thus explained by these edge effects. In order to isolate the drag effect we plot the rapidity shift in the direction of ‘the other end of a string’. This is accomplished by studying $`\mathrm{\Delta }y\mathrm{sign}(y_{\mathrm{Other}\mathrm{end}}-y_{\mathrm{Quark}})`$ as a function of $`y`$ and $`p_{\perp }`$, as shown in Fig. 3. In this case the difference between direct and resolved events is less marked, showing the universality of string fragmentation. The remaining differences stem from the different distributions of string masses in the two cases. Higher-order effects can be included in an event generator through flavour excitation (e.g. $`\mathrm{cq}\to \mathrm{cq}`$) and parton showers (gluon splitting, $`\mathrm{g}\to \mathrm{c}\overline{\mathrm{c}}`$). This approach is in some ways complementary to full NLO calculations. A NLO calculation of the charm cross section contains all diagrams up to order $`\alpha _\mathrm{s}^3`$ ($`\alpha _\mathrm{s}^2\alpha _{\mathrm{em}}`$ for direct photons), whereas a Monte Carlo event generator simulating parton showers/flavour excitation contains all diagrams of order $`\alpha _\mathrm{s}^2`$ ($`\alpha _\mathrm{s}\alpha _{\mathrm{em}}`$ for direct photons) and an approximation to all higher orders. In this way some processes that are not included in a NLO calculation are approximated. Some examples are $`\gamma \mathrm{q}\to \mathrm{c}\overline{\mathrm{c}}\mathrm{qg}`$, $`\mathrm{g}\gamma \to \mathrm{q}\overline{\mathrm{q}}\mathrm{c}\overline{\mathrm{c}}`$ and $`\mathrm{gg}\to \mathrm{c}\overline{\mathrm{c}}\mathrm{c}\overline{\mathrm{c}}`$.
At HERA energies, higher-order effects give large contributions to the cross section. In Fig. 4 the cross section is divided into different production channels for direct and resolved photons. We note that the direct and resolved cross sections are now of the same order of magnitude, and that the major contribution in the resolved case is flavour excitation. The details of course depend on the parameterization of the photon structure. The double peak structure in the flavour excitation process for direct photons arises because the charm quark in the beam remnant at low $`p_{\perp }`$ is also included. This peak disappears when a $`p_{\perp }`$ cut is introduced (Fig. 4c). The physics discussed here has consequences also for b-production at HERA-B. Because of the larger mass of the b-quark, drag/collapse effects are expected to be smaller. However, this is compensated by the smaller CM energy when the HERA proton beam is used on a fixed target, giving non-negligible effects, as shown in Fig. 5. An understanding of these aspects is important when studying CP violation in the $`\mathrm{B}^0\overline{\mathrm{B}^0}`$ system . In summary, we have improved the modelling of charm in the Pythia event generator by a consideration of charm hadroproduction data . In this note we study beam-drag effects at HERA, and it should be interesting to look for experimental signatures, e.g. differences between NLO predictions and data. We also show that higher-order effects give important contributions to the charm production spectra at HERA energies, especially for resolved photons.
no-problem/9905/hep-ex9905017.html
# Recent Results on Heavy Flavours with ALEPH

## I Introduction

The large sample of $`Z`$ decays collected by ALEPH during LEP1 consists of about 4.5 million hadronic decays. Among them, about 1 million $`b\overline{b}`$ events allow the investigation of rare processes which could help constrain the CKM mixing matrix, before the start of the new experiments at the B factories and hadronic colliders. In this report I will present the measurement of $`\mathrm{BR}(b\to ul\nu X)`$, which allows a determination of the $`|V_{ub}|`$ CKM matrix element, the measurement of $`\mathrm{BR}(b\to s\gamma )`$, and a study of the width difference between the mass eigenstates in the $`B_s`$ system.

## II Measurement of $`|V_{ub}|`$

The measurement of $`|V_{ub}|`$ is performed by measuring the semileptonic branching ratio of B hadrons into charmless final states ($`b\to ul\nu X`$). This rate is proportional to $`|V_{ub}|^2`$ and can be computed in the framework of the Heavy Quark Expansion theory . Previous measurements of $`|V_{ub}|`$ were performed in both exclusive and inclusive channels at the $`\mathrm{\Upsilon }(4S)`$ . The exclusive measurements suffer from large theoretical uncertainties in the computation of the transition amplitude. Inclusive measurements overcame this difficulty by looking for an excess of events at the endpoint of the lepton momentum spectrum, where the $`b\to c`$ contribution vanishes. At LEP neither method is viable, due either to the small branching ratio and large background in one case, or to the limited accuracy in reconstructing the B hadron rest frame in the other. This measurement instead relies upon the fact that nearly 90% of the $`b\to ul\nu X`$ decays are expected to have an invariant mass of the hadronic system $`M_X`$ below the charm threshold of 1.87 GeV/c<sup>2</sup>. By contrast, only in 10% of the cases is the lepton energy in the $`b`$ rest frame, $`E^{*}`$, above the kinematic boundary for $`b\to cl\nu X`$ transitions .
### A Event Selection

Events are selected by requiring the presence of an identified lepton with momentum larger than 3 GeV/c. In the opposite hemisphere a b lifetime tag is applied to reduce the non-b contamination to less than 2%. The neutrino energy and direction are estimated from the missing momentum in the lepton hemisphere, with a typical accuracy of 280 mrad on the direction and 2 GeV on the energy. Particles coming from the charmless hadronic system $`X`$ are selected by means of two Neural Networks, one for photons and the other for charged particles. The B hadron rest frame is then reconstructed by adding the momenta of the lepton, the neutrino and the selected particles. The total energy is determined by assigning a mass of 5.38 GeV/c<sup>2</sup> to the total system. The momentum and angular resolutions, obtained from the simulation, are 4.5 GeV/c and 60 mrad, respectively. The discrimination of the $`b\to ul\nu X`$ signal events from the background $`b\to c`$ transitions is made on a statistical basis and exploits the fact that the $`c`$ quark is heavier than the $`u`$ quark, leading to different kinematical properties for the two final states. Because of resolution effects, the separation obtained using a single kinematical variable such as $`M_X`$ can be considerably improved by combining more information characterizing the leptonic and hadronic parts into a multivariate Neural Network. The variables are chosen not only to give a good discrimination between signal and background, but also to have a reduced sensitivity to the specific composition of the $`X_u`$ system. Among the Neural Network inputs, the ones with the largest separating power are the invariant mass, the sphericity and the lepton momentum. The Neural Network output is close to 1 for signal events and close to 0 for background events. The signal Monte Carlo uses a hybrid model following Ramirez et al. .
At low hadronic energy (below 1.6 GeV), only resonant states are produced, while at large energy non-resonant states, which occur in 75% of the cases, are expected to contribute. Figure 1 shows the Neural Network output for the Monte Carlo and the data after all the selection cuts. The signal is extracted from a binned likelihood fit to the Neural Network output between 0.6 and 1.0, the range which gives the smallest total relative error on the measurement. The Monte Carlo is normalized to the same number of events as the data, to reduce the sensitivity to the assumed efficiencies of the analysis cuts. The first bin is excluded in this normalisation procedure, to minimise the effects of uncertainties on the background events in the fit. In this way, the fit is effectively a fit to the shape of the signal and background distributions.

### B Results and systematic checks

An excess of 303 $`\pm `$ 88 events is clearly seen in the region where the signal is expected, and is in good agreement with the predicted Neural Network shape. From the fitted number of events one can extract the value of the branching ratio: $$\mathrm{BR}(b\to ul\nu )=(1.73\pm 0.55_{\mathrm{stat}}\pm 0.51_{\mathrm{syst}\;b\to c}\pm 0.21_{\mathrm{syst}\;b\to u})\times 10^{-3}.$$ (1) The systematics come from the uncertainties in modelling the $`b\to c`$ and the $`b\to ul\nu X`$ transitions. For the $`b\to c`$ systematic source, the main uncertainties come from the knowledge of the charm topological branching ratios and from the statistical uncertainties on $`\mathrm{BR}(b\to l)`$, $`\mathrm{BR}(b\to c\to l)`$ and $`<X_b>`$ (the average energy fraction of the B hadron), which have been measured by ALEPH. The systematic coming from the modelling of the inclusive $`b\to u`$ transitions dominates the second systematic error. Extensive checks have been made to support the measurement. The fit is stable against variations of the fit interval of the Neural Network output, as well as against changes of the input variables of the Neural Network.
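The binned likelihood fit described above can be sketched in a few lines. The templates and yields below are invented for illustration and are not the ALEPH distributions; the sketch only shows the mechanics of a Poisson shape fit with fixed normalized templates:

```python
import math

# Toy binned likelihood shape fit: expected bin content
# mu_i = N_sig * s_i + N_bkg * b_i, with normalized templates s and b.
# N_sig is scanned to minimize the Poisson negative log-likelihood.

def nll(n, s, b, n_sig, n_bkg):
    total = 0.0
    for ni, si, bi in zip(n, s, b):
        mu = n_sig * si + n_bkg * bi
        total += mu - ni * math.log(mu)   # Poisson term up to a constant
    return total

s = [0.05, 0.15, 0.30, 0.50]   # signal template, peaking near NN output ~ 1
b = [0.60, 0.25, 0.10, 0.05]   # background template, peaking near 0
true_sig, true_bkg = 100.0, 400.0
n = [true_sig * si + true_bkg * bi for si, bi in zip(s, b)]  # Asimov data

# Scan N_sig with the background yield held at its true value.
best = min(range(0, 301), key=lambda k: nll(n, s, b, float(k), true_bkg))
print(best)  # the scan recovers the injected signal yield, 100
```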
Since neutral hadrons have not been considered when reconstructing the $`b`$ hadron, a bad simulation of the $`b\to c`$ states involving energetic neutral hadrons would alter the background in the region of high values of the Neural Network output. The neutral hadronic energy distribution in a 30° cone around the lepton is different for final states with and without a $`K_L^0`$, allowing a measurement of the inclusive production rate of $`K_L^0`$ in $`D`$ meson decays. Good agreement is observed between data and simulation, and the rate found is in very good agreement with the one measured by MARKIII. Finally, since no vertexing information has been used in the selection, a common vertex between the lepton and the charged hadronic system is searched for, with a cut on the $`\chi ^2`$ probability larger than 0.2. The efficiency of this cut is larger for $`b\to u`$ transitions than for $`b\to c`$, due to the presence of the charm vertex in the latter. This difference is further enhanced in the Neural Network region close to 1, due to the smaller charged multiplicity and poor vertex of the $`b\to c`$ transitions. The good agreement of the vertexing efficiencies in data and Monte Carlo is shown in Fig. 2, where the effect of the signal inclusion is evident. From this measurement and the average b lifetime one can extract the value of the $`|V_{ub}|`$ matrix element, by using the relation obtained in the framework of the Heavy Quark Expansion theory : $$|V_{ub}|^2=20.98\frac{\mathrm{BR}(b\to X_ul\nu _l)}{0.002}\frac{1.6\;\mathrm{ps}}{\tau _b}(1\pm 0.05_{pert}\pm 0.06_{m_b})\times 10^{-6}$$ (2) where $`\tau _b=(1.554\pm 0.013)`$ ps is the average $`b`$ hadron lifetime: $$|V_{ub}|^2=(18.68\pm 5.94_{\mathrm{stat}}\pm 5.94_{\mathrm{syst}}\pm 1.45_{HQE})\times 10^{-6},$$ (3) corresponding to $`|V_{ub}|=(4.16\pm 1.02)\times 10^{-3}`$.
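As a quick arithmetic check of Eq. (2) (an illustrative snippet, not part of the analysis code), plugging in the central values quoted above reproduces the central value of Eq. (3):

```python
# Eq. (2): |V_ub|^2 = 20.98e-6 * (BR / 0.002) * (1.6 ps / tau_b),
# evaluated with BR(b -> u l nu) = 1.73e-3 and tau_b = 1.554 ps.

def vub_squared(br, tau_b_ps):
    return 20.98e-6 * (br / 0.002) * (1.6 / tau_b_ps)

v2 = vub_squared(1.73e-3, 1.554)
print(round(v2 * 1e6, 2))   # 18.68, the central value of Eq. (3)
```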
## III Measurement of $`\mathrm{BR}(b\to s\gamma )`$

The $`b\to s\gamma `$ decay is a flavour changing neutral current process that in the Standard Model proceeds via an electromagnetic penguin diagram, in which the photon is radiated from either the W or one of the quark lines; its branching ratio is predicted to be $`(3.76\pm 0.30)\times 10^{-4}`$ at the Z peak. The virtual particles in the loop may be replaced by non-Standard Model particles, such as charged Higgs bosons or supersymmetric particles, which could either enhance or suppress the decay rate, making it sensitive to physics beyond the Standard Model . The $`b\to s\gamma `$ decay is expected to be dominated by two body decays, leading to the presence of a very energetic photon in the final state, associated with a system of high momentum and rapidity hadrons originating from a displaced secondary vertex. The composition of the signal $`b\to s\gamma `$ Monte Carlo is based primarily on predictions from the Heavy Quark Effective Theory, as well as on the CLEO measurements of the inclusive $`b\to s\gamma `$ branching ratio and the exclusive $`B\to K^{*}(892)\gamma `$ decay .

### A Event Selection

The event reconstruction starts from the requirement of a photon with an energy larger than 10 GeV which does not give a $`\pi ^0`$ when paired with any other photon in the event. In that hemisphere, $`\pi ^0`$ and $`K_s`$ mesons are searched for. The remaining tracks are assigned a probability to come from the $`b\to s\gamma `$ decay according to their momentum, their rapidity with respect to the B hadron direction and, in the case of charged tracks, their three dimensional impact parameter significance. The probability functions were derived from the simulation. The $`b\to s\gamma `$ candidate is then reconstructed from the photon and the hadronic system in the same hemisphere, obtained by adding the $`\pi ^0`$ and $`K_s`$ candidates, charged tracks and neutral calorimetric objects in order of decreasing $`b\to s\gamma `$ probability.
The candidate is accepted if the jet mass lies within 700 MeV/c<sup>2</sup> of the mean B meson mass, the hadronic system has a mass smaller than 4 GeV/c<sup>2</sup>, and the multiplicity is smaller than eight objects. The hemisphere opposite to the candidate is required to be b-like using a lifetime b tag. After the preselection 1560 hadronic events remain, which are then split into eight different categories depending on:

* the value of the length of the major axis of the shower ellipse, $`\sigma _l`$, of the photon candidate in the electromagnetic calorimeter;
* the energy of the hard photon, $`E_\gamma ^{*}`$;
* the b tag probability of the opposite hemisphere.

The signal is extracted from a binned log-likelihood fit of the $`E_\gamma ^{*}`$ distributions of the eight subsamples, using the corresponding distributions for the signal and background simulations and taking into account the finite Monte Carlo statistics in each bin. The five parameters in the fit are $`N_{b\to s\gamma },N_{FSR},N_{(b\to c)\pi ^0},N_{(\mathrm{non}\mathrm{b})\pi ^0}`$ and $`N_{other}`$, which are, respectively, the total numbers of signal, Final State Radiation, $`(b\to c)\pi ^0`$, $`(\mathrm{non}\mathrm{b})\pi ^0`$ and ’other’ background events. Figure 3 shows the $`E_\gamma ^{*}`$ distribution for the purest signal sub-sample in data and Monte Carlo. The clear excess in data is nicely explained by the signal contribution.

### B Results and systematic checks

From the number of fitted events, correcting for the efficiency of the selection cuts, the inclusive $`b\to s\gamma `$ branching ratio is determined to be $$\mathrm{BR}(b\to s\gamma )=(3.11\pm 0.80_{\mathrm{stat}}\pm 0.72_{\mathrm{syst}})\times 10^{-4}$$ (4) where the statistical error takes into account also the finite Monte Carlo statistics. The largest systematic error comes from the shape of the $`E_\gamma ^{*}`$ background distributions, as well as from the relative proportion of each background source in each sub-sample.
This is assessed by observing the change in the measured branching ratio as the boundaries between the eight sub-classes are varied. The uncertainty coming from the energy calibration has been estimated by varying both the ECAL and HCAL calibrations. Finally, the uncertainty due to the baryonic $`b\to s\gamma `$ decays has been assessed by repeating the fit with that fraction set to zero. Checks have been performed which are consistent with the expected $`b\to s\gamma `$ hypothesis:

* there is evidence of lifetime in the same hemisphere as the photon;
* there is an excess of high momentum kaons;
* the shape of the excess $`\sigma _l`$ distribution is characteristic of single photons.

## IV A study of the width difference between the mass eigenstates in the $`B_s`$ system

One of the most challenging measurements in the $`B_s\overline{B}_s`$ system is that of the mass difference between the two mass eigenstates, $`\mathrm{\Delta }M_s`$, which constrains the CKM unitarity triangle. Since $`\mathrm{\Delta }M_s`$ is large and therefore difficult to measure, complementary insight might come from the width difference $`\mathrm{\Delta }\mathrm{\Gamma }_s`$, which is related to $`\mathrm{\Delta }M_s`$ via the relation : $$\frac{\mathrm{\Delta }\mathrm{\Gamma }_s}{\mathrm{\Delta }M_s}=\frac{3}{2}\pi \frac{m_b^2}{m_t^2}\frac{\eta _{QCD}^{\mathrm{\Delta }\mathrm{\Gamma }_s}}{\eta _{QCD}^{\mathrm{\Delta }M_s}}$$ (5) where the ratio of the QCD correction factors ($`\eta `$) is expected to be of order unity and does not depend on the CKM matrix elements . The width difference is expected to be of the order of 10% of the total width. One of the simplest ways to investigate the width difference is to measure directly one of the two components of the $`B_s`$ lifetime. The decay $`B_s\to D_s^{(*)+}\overline{D}_s^{(*)-}X`$ is dominantly CP even and, if one neglects CP violating effects, corresponds to the short lifetime eigenstate.
The investigation is performed in the case where each of the two $`D_s`$ decays into a $`\varphi `$. The $`\varphi `$ mesons are identified in the $`K^+K^{-}`$ decay mode.

### A Event selection

ALEPH has undertaken a refinement of the tracking performance for the data already taken at LEP1. Firstly, a better pattern recognition in the Silicon Vertex Detector increases the vertexing efficiency for hadronic B decays by about 30%. In addition, thanks to the readout of the specific ionization induced by charged particles on the TPC pads, the dE/dx measurement efficiency has increased from 85% to nearly 100%, without degrading the purity very much. This measurement is the first one to benefit from this upgrade. Each kaon candidate is required to have a momentum of at least 1.5 GeV/c and is identified by requiring the dE/dx to be consistent with the kaon hypothesis and by vetoing pions. The cosine of the kaon decay angle in the $`\varphi `$ rest frame has to be larger than $`-0.8`$. The $`\varphi \varphi `$ system is required to have a momentum of at least 10 GeV/c and an invariant mass between 2.0 and 4.5 GeV/c<sup>2</sup>. Most of the combinatorial and fragmentation background is thus removed. Besides the signal events there remain other sources of double $`\varphi `$ events: one or both $`\varphi `$’s can originate from fragmentation or combinatorial background. Three physics backgrounds are also present in $`Z\to b\overline{b}`$ and $`Z\to c\overline{c}`$ events:

1. $`B\to DD_s(X)`$, $`D_s\to \varphi _1X`$, $`D\to \varphi _2X`$
2. $`B_{(s)}\to D_{(s)}X`$, $`D_{(s)}\to \varphi _1X`$ and $`\varphi _2`$ from fragmentation
3. $`D_{(s)}\to \varphi _1X`$ and $`\varphi _2`$ from fragmentation

The first reproduces exactly the signal signature and cannot be removed in the selection. However, the $`D`$ decay is Cabibbo suppressed with respect to the signal process ($`\mathrm{BR}(D_s\to \varphi X)\sim 18\%`$, $`\mathrm{BR}(D^+\to \varphi X)\sim 2.9\%`$, $`\mathrm{BR}(D^0\to \varphi X)\sim 0.9\%`$). To ensure good tracking quality, each kaon must have at least one hit in the VDET.
The two $`\varphi `$ are then constrained to a common vertex with the $`\chi ^2`$ probability of the vertex greater than $`1\%`$. At this point it is possible to reconstruct the $`B_s`$ decay length as the distance of the $`\varphi \varphi `$ vertex from the primary vertex projected along the momentum direction. To reject non $`b`$ events a b-tag is also demanded. After this chain of cuts, the contribution due to $`\varphi `$ from fragmentation is negligible. The global efficiency is about $`8\%`$ with a b-purity of $`84\%`$ in the final sample. ### B Results The $`B_s`$ lifetime is determined from the proper decay time distribution of the $`\varphi \varphi `$ events. For each $`B_s`$ candidate, the proper time is obtained from the decay length $`l`$ of the $`\varphi \varphi `$ system and the $`B_s`$ boost. The decay length is measured in three dimensions by projecting the vector joining the interaction point and the $`\varphi \varphi `$ decay vertex onto the direction of flight of the $`\varphi \varphi `$ resultant momentum. The typical resolution of the $`\varphi \varphi `$ decay vertex along the direction of flight is $`200\mu `$m. The boost of the $`B_s`$ is computed from the nucleated jet method starting from the two $`\varphi `$ tracks. The $`B_s`$ is extracted from an unbinned likelihood fit to the proper time distribution of the $`B_s`$ candidates. The background events amount to 78% of the total, and their proper time distribution has been parametrised from the events in the two $`\varphi `$ sidebands. Figure 4 shows the result of the fit: $$\tau _{B_s}^{short}=(1.42\pm 0.23\pm 0.16)\mathrm{ps}.$$ (6) The main systematics comes from the combinatorial background shape parametrisation. 
From this measurement one can extract the difference between the widths of the two $`B_s`$ mass eigenstates: $$\frac{\mathrm{\Delta }\mathrm{\Gamma }}{\mathrm{\Gamma }}=2\left(1-\frac{\tau _{B_s}^{short}}{\overline{\tau }_{B_s}}\right)=0.24\pm 0.35$$ (7) where $`\overline{\tau }_{B_s}`$ is the average $`B_s`$ lifetime ($`1.61\pm 0.10`$ ps) and the statistical and systematic errors are combined.
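Eq. (7) can be checked directly from the two lifetimes quoted above (a trivial arithmetic snippet, for illustration only):

```python
# DeltaGamma/Gamma = 2 * (1 - tau_short / tau_avg),
# with tau_short = 1.42 ps and the average B_s lifetime tau_avg = 1.61 ps.

tau_short, tau_avg = 1.42, 1.61
dgamma_over_gamma = 2.0 * (1.0 - tau_short / tau_avg)
print(round(dgamma_over_gamma, 2))  # 0.24, the central value of Eq. (7)
```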
# Lattice Gases and Cellular Automata ## 1 Historical Background ### 1.1 The Ising Model The use of lattice gases for the study of equilibrium statistical mechanics dates back to a 1920 paper of Lenz in which he proposed to model a ferromagnet by a regular $`D`$-dimensional lattice $`𝐋`$ of two-state “spins.” Physically, these may be thought of as the magnetization vectors of elemental magnetic domains, and the model constrains them to point in one of two directions, say “up” and “down.” For a two-dimensional lattice, this is illustrated schematically in Fig. 1. Mathematically, the state of the system can be described by the collection of variables $`S(𝐱)`$, indexed by the lattice points $`𝐱\in 𝐋`$, and taking their values from the set $`\{-1,+1\}`$; here $`S(𝐱)=+1`$ means that the spin at site $`𝐱`$ is pointing up, and $`S(𝐱)=-1`$ means that it is pointing down. If we suppose that the lattice has a total of $`N\equiv |𝐋|`$ sites, then the total number of possible states of the system is $`2^N`$. To use these spins as a model of ferromagnetism, it was necessary to assign an energy to each of these $`2^N`$ states, in such a way as to make it energetically favorable for each spin to align with an externally applied magnetic field $`\alpha `$, and for neighboring spins to align with each other. The first of these goals is achieved by including an energy contribution $`-\alpha S(𝐱)`$ for each spin present, and the second by including an energy contribution $`-JS(𝐱)S(𝐲)`$ for each pair of neighboring sites $`𝐱`$ and $`𝐲`$. Thus, the full energy of the system is $$H(𝐒)=-\alpha \underset{𝐱}{\sum }S(𝐱)-\frac{J}{2}\underset{𝐱}{\sum }\underset{𝐲\in 𝐍(𝐱)}{\sum }S(𝐱)S(𝐲),$$ where $`𝐍(𝐱)`$ denotes the set of sites neighboring site $`𝐱`$, and the factor of $`1/2`$ in front of the second term prevents double-counting of the pairs of spins.
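The energy function just defined is easy to evaluate directly. Below is a minimal Python sketch for a periodic two-dimensional lattice; the function name, NumPy representation, and choice of periodic boundary conditions are my own assumptions, not part of the model's definition.

```python
import numpy as np

def ising_energy(S, J=1.0, alpha=0.0):
    """H(S) = -alpha*sum_x S(x) - J*sum_<xy> S(x)S(y) on a periodic 2-D lattice.

    S is an array of +1/-1 spins.  Right- and down-shifted products enumerate
    every nearest-neighbour bond exactly once, which replaces the double sum
    over sites and its factor of 1/2.
    """
    pair_sum = np.sum(S * np.roll(S, 1, axis=0)) + np.sum(S * np.roll(S, 1, axis=1))
    return -alpha * np.sum(S) - J * pair_sum

# A fully aligned 4x4 lattice has 2*16 = 32 bonds, each contributing -J.
S_up = np.ones((4, 4), dtype=int)
print(ising_energy(S_up))  # -32.0
```

The fully anti-aligned (checkerboard) state gives the opposite sign, $`+32J`$, since every bond is frustrated.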
To use this to study the equilibrium properties of a ferromagnet, it is necessary to compute the partition function $$Z(K,h)\equiv \underset{N\to \mathrm{\infty }}{lim}\underset{𝐒}{\sum }\mathrm{exp}\left[-\frac{H(𝐒)}{k_BT}\right],$$ where $`T`$ is the temperature, $`K\equiv J/(k_BT)`$, $`h\equiv \alpha /(k_BT)`$, the sum over $`𝐒`$ includes all $`2^N`$ possible states of the system, and we have taken the thermodynamic limit by letting the number of spins go to infinity. Lenz posed the problem of calculating this quantity to his student Ising, who solved it for a one-dimensional lattice of spins in 1925. While Ising’s $`D=1`$ solution is elementary, Onsager’s $`D=2`$ solution for $`h=0`$ required almost another twenty years to complete, and is significantly more complicated. The solution for the critical exponents for $`D=2`$ with $`h\ne 0`$ is a much more recent development, first published by Zamolodchikov in 1989. The problem for $`D=3`$ is outstanding, even for $`h=0`$. ### 1.2 Universality and Materials Science One might wonder why so much effort has been devoted to the Ising model when it is clearly only a crude idealization of a real ferromagnet. Certainly, nobody expects the detailed functional form of, say, the dependence of the Ising model’s magnetization $$M(K,h)=\frac{\underset{𝐒}{\sum }\left[\underset{𝐱}{\sum }S(𝐱)\right]\mathrm{exp}\left[-\frac{H(𝐒)}{k_BT}\right]}{\underset{𝐒}{\sum }\mathrm{exp}\left[-\frac{H(𝐒)}{k_BT}\right]}=\frac{\partial \mathrm{ln}Z(K,h)}{\partial h}$$ on the temperature $`T`$ to be valid for any real material. There are, however, good reasons to believe that certain features of this functional form are universal – that is, model-independent. This is particularly true near criticality (in the $`D=2`$ and $`D=3`$ Ising models), where the spin-spin correlation length diverges, and fluctuations at all length scales are present.
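For a handful of spins the sum over all $`2^N`$ states can be carried out explicitly and checked against the closed-form transfer-matrix answer for the periodic $`D=1`$ chain at $`h=0`$, $`Z=(2\mathrm{cosh}K)^N+(2\mathrm{sinh}K)^N`$. A sketch (function names and conventions are mine):

```python
import itertools
import math

def Z_brute(N, K):
    """Brute-force Z for the periodic 1-D Ising chain at h = 0, K = J/(k_B T).

    Sums exp(K * sum_i S_i S_{i+1}) over all 2^N spin configurations.
    """
    total = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        bonds = sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        total += math.exp(K * bonds)
    return total

def Z_transfer_matrix(N, K):
    # Ising's D=1 result via the transfer matrix: Z = (2 cosh K)^N + (2 sinh K)^N.
    return (2.0 * math.cosh(K)) ** N + (2.0 * math.sinh(K)) ** N

print(Z_brute(8, 0.5), Z_transfer_matrix(8, 0.5))  # the two agree
```

The brute-force sum is exponential in $`N`$, which is precisely why the closed-form and transfer-matrix methods matter.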
For example, at zero applied field and near criticality, the magnetization varies as $$M=\{\begin{array}{cc}0\hfill & \text{for }T>T_c\hfill \\ M_0(\frac{T_c-T}{T_c})^\beta \hfill & \text{for }T\le T_c,\hfill \end{array}$$ where $`T_c`$ is the critical temperature, $`M_0`$ is a proportionality constant, and $`\beta `$ is an example of what is called a critical exponent. The scale invariance of the fluctuations at the critical point allows a renormalization group treatment which indicates that the critical exponent should be rather insensitive to the particular model Hamiltonian used. In fact, critical exponents should depend only on the dimensionality of the space and the symmetries of the underlying Hamiltonian function. For example, the unmagnetized Ising-model Hamiltonian is invariant under the symmetry group $`Z_2`$ – that is, multiplication in the set $`\{-1,+1\}`$ – because the energy is invariant under the inversion of all the spins in the system. Systems with $`Z_2`$ symmetry are expected to have $`\beta =1/8`$ in $`D=2`$, and $`\beta \approx 0.33`$ in $`D=3`$. A related lattice spin model, called the Heisenberg model, endows each spin with a vector orientation in three dimensions and has an interaction Hamiltonian that depends only on dot products of these vectors at neighboring sites. Since these are invariant under the continuous group of SO(3) rotations, we might expect a different critical exponent for $`\beta `$, and in fact this is the case: $`\beta \approx 0.36`$ for the $`D=3`$ Heisenberg model. Thus, universality teaches us that it is possible to learn some “real physics” by studying highly idealized models such as the Ising model. This realization led to a flurry of variants of lattice spin models, appropriate to various real materials. As an example, we consider an ingenious model developed by Widom to describe microemulsions. A microemulsion is created by the addition of a surfactant or amphiphile to a mixture of two immiscible fluids, such as oil and water.
An amphiphile is a chemical that typically has an ionic hydrophilic end that likes to sit in water, covalently bonded to a hydrocarbon chain which is hydrophobic in that it prefers to sit in oil. This situation gives rise to two crucial properties: First, the free energy of the amphiphile is lowest when it lives on the interface between the two fluids. Second, the presence of the amphiphiles on the interface gives it a rigidity, or bending energy, that is proportional to the square of the local mean curvature (more generally, to the square of the difference between the local mean curvature and some spontaneous value thereof; note that this rigidity is in addition to the surface tension that is always present between immiscible fluids). Together, these two properties can give rise to some spectacular behavior. For example, thanks to the first property, when there is insufficient interface to accommodate the amphiphile, it becomes energetically profitable for the amphiphile to create new interface to inhabit. It does this by breaking up the separated mixture of water and oil into emulsion droplets. The droplets become smaller as more amphiphile is added, until they become so small (on the order of 50 nm) that the curvature energy mentioned in the second property removes the incentive for them to get any smaller. If the amount of surfactant continues to increase past this point, a sponge phase can result, or, at lower temperatures, ordered lyotropic phases consisting of alternating sheets or tubes of oil and water. The self-organization of these structures as a result of relatively simple chemical properties is a methodology for nanofabrication, and scientific and industrial uses of these materials abound. To model microemulsions using a lattice gas, Widom introduced, in the mid 1980’s, a model very similar to that of Ising, in that each site on a Cartesian grid could be in one of two states, $`S(𝐱)\in \{0,+1\}`$.
One clever innovation of this model is that it situates the particles on the links between the lattice vertices, rather than on the vertices themselves. Links can be characterized by the values of $`S`$ at the vertices they connect. Thus, each link can be in any one of four possible states: Links that connect two $`S=0`$ vertices are said to contain water. Links that connect two $`S=1`$ vertices are said to contain oil. Finally, links that begin at an $`S=1`$ ($`S=0`$) vertex and end at an $`S=0`$ ($`S=1`$) vertex are said to contain amphiphile whose hydrophilic end points toward the end (beginning) of the link. The great advantage of this model is that the representation itself literally forces the amphiphile to live on the interface between the oil and water, and orients it correctly. This is illustrated in Fig. 2. The Hamiltonian for Widom’s model includes two-spin interactions, as were present in the Ising model Hamiltonian, but it also includes three-spin interactions, necessary to capture the curvature energy described above and required for the rigidity of the interface. Since the number of particles of each type may change when a spin is flipped, it is also necessary to append terms of the form $`\mu _Wn_W+\mu _On_O+\mu _An_A`$, where the $`\mu `$’s are the (fixed) chemical potential per particle and the $`n`$’s are the particle numbers of each species, and run the Monte Carlo simulation in the grand canonical ensemble. Widom’s model has been used to reproduce much of the above-described phenomenology associated with amphiphilic fluids. In particular, droplet, sponge and lyotropic phases are all seen, as are coexistence regions between these phases. ### 1.3 From Statics to Dynamics The Ising model is discussed in many (perhaps most) statistical physics textbooks.
Our purpose for bringing it up in this context is merely to note that it has two “false identities.” First, it looks a bit like a Hamiltonian dynamical system because it has an energy function (which we have obligingly denoted by $`H`$) that is exponentiated and integrated to get a partition function – just as is done with the Hamiltonian of a system of classical particles when deriving, e.g., the Mayer cluster expansion. Second, it looks a bit like a cellular automaton (CA) because its state can be represented by a single bit at each site. In fact, as defined above, the Ising model is neither of these things. Indeed, it is not a dynamical system at all. It is posed only as an equilibrium model. You can use it to find a partition function, and from that derive all of its thermodynamic properties, but thermodynamics (its name notwithstanding) is not dynamics at all. When it is used to study the difference between two states of a given system, it pays little or no attention to the particulars of the dynamical process that takes the system between those states. By contrast, Hamiltonian dynamical systems require a continuous phase space, endowed with a Poisson bracket that is antisymmetric and obeys the Jacobi identity; given such a bracket structure, any phase function $`H`$ defines a deterministic dynamics on the phase space. Likewise, a CA is a Markovian dynamical system on a discrete phase space, with transition rules that are applied to each site simultaneously and depend only upon the states of neighboring sites. When the Ising model is simulated on a computer to study its equilibrium properties, it begins to look more like a dynamical system. The usual method is to perform a Monte Carlo simulation by flipping spins randomly, and then accepting or rejecting these flips according to the Metropolis algorithm , while sampling desired equilibrium properties. 
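The Monte Carlo procedure just described can be sketched as follows for the $`D=2`$ Ising model at $`h=0`$; the periodic boundaries, lattice size, and function names are my own choices:

```python
import numpy as np

def metropolis_sweep(S, K, rng):
    """One sweep of single-spin Metropolis updates, K = J/(k_B T), h = 0.

    A flip that lowers the energy is always accepted; one that raises it by
    dE is accepted with probability exp(-dE/(k_B T)), which enforces detailed
    balance with respect to the Boltzmann-Gibbs distribution.
    """
    L = S.shape[0]
    for _ in range(S.size):
        i, j = rng.integers(0, L, size=2)
        nbrs = S[(i + 1) % L, j] + S[(i - 1) % L, j] + S[i, (j + 1) % L] + S[i, (j - 1) % L]
        dE_over_kT = 2.0 * K * S[i, j] * nbrs   # cost of flipping spin (i, j)
        if dE_over_kT <= 0.0 or rng.random() < np.exp(-dE_over_kT):
            S[i, j] = -S[i, j]

rng = np.random.default_rng(0)
S = np.ones((16, 16), dtype=int)         # start fully magnetized
for _ in range(20):
    metropolis_sweep(S, K=1.0, rng=rng)  # K well above the critical K_c ~ 0.44
print(S.mean())                          # stays close to +1 deep in the ordered phase
```

Note that, exactly as the text warns, this spin-at-a-time stochastic dynamics converges to equilibrium but is neither Hamiltonian nor a CA.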
Because the Metropolis algorithm is Markovian in nature, this is a dynamical system in the mathematical sense of the word. However, it is still neither Hamiltonian nor a CA. The discrete phase space would seem to distinguish it from most Hamiltonian systems (there have been interesting attempts to construct fully discrete Hamiltonian systems, but such ideas are still in the embryonic stage of development), while the sequential nature of the algorithm (changes are considered to only one spin at a time) distinguishes it from the latter. More pertinently, it should be noted that the dynamics of the Metropolis algorithm do not necessarily have any relation to physical dynamics – other than the property of converging to the equilibrium state. The physical dynamical properties of a system cannot be studied by (ordinary) Monte Carlo methods; molecular dynamics, or some other such microcanonical algorithm, must be used. Given the utility of the Ising model in studying the equilibrium properties of materials, it should not be surprising that people have tried very hard to extend such models to include real dynamics. For a ferromagnet, this would mean being able to study the approach of the magnetization to equilibrium. One way to approach this problem is to invent a microcanonical dynamics for the Ising model. Such a model would evolve the system in such a way as to conserve energy globally and maintain the condition of detailed balance. If the probability that the system is in state $`𝐒`$ is denoted by $`P(𝐒)`$, and if the probability of transition of the system from state $`𝐒`$ to state $`𝐒^{}`$ is denoted by $`A(𝐒\to 𝐒^{})`$, then the latter condition, which ensures that the dynamical process will converge to a Boltzmann-Gibbs equilibrium, may be written $$P(𝐒)A(𝐒\to 𝐒^{})=P(𝐒^{})A(𝐒^{}\to 𝐒).$$ Such a microcanonical dynamics for the Ising model was developed by Creutz in 1983.
To localize the energy change incurred by the flip of a spin, it was necessary to update the lattice in a “checkerboard” pattern – first updating the black squares and then the red ones. This has the effect of “freezing” a site’s neighbors while it is flipped, so that the energy change incurred by the flip can be calculated in advance. Because the algorithm is to be microcanonical, this energy must be stored somewhere. Creutz solved this problem by allowing for each site to have a “bank” where it can make local deposits and withdrawals of energy. The bank must have a finite capacity though, and if flipping the spin at a site would result in an overflow or underflow of the local bank, the flip is not done; otherwise, it is done. Creutz demonstrated that his dynamics equilibrate the $`D=2`$ Ising model. The lattice is initialized with a total energy (particles plus banks) that never changes in the course of the simulation. The portion of the energy in the spins themselves, however, may change, and must therefore be monitored by the simulation. The important point, however, is that Creutz’ model is able to make a CA out of the Ising model. The checkerboard updating is not a real problem, since it can be incorporated in the CA framework by including a “time parity” bit at each site, initialized in a checkerboard pattern, and mandated to toggle at each time step. The value of this bit might then be used to determine which sites will attempt to flip, and the rest is naught but fully deterministic and reversible nearest-neighbor communication and simultaneous updating at each site – in other words, a reversible CA. ### 1.4 From Ferromagnetism to Hydrodynamics The approach to equilibrium of a ferromagnet is certainly interesting, but, because the order parameter (magnetization) is not a conserved quantity, it happens relatively fast. 
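A hedged sketch of the banking idea (the bank capacity, lattice size, and traversal details here are my own choices, and Creutz's actual implementation may differ):

```python
import numpy as np

def creutz_step(S, bank, J=1.0, cap=8.0):
    """One microcanonical ('demon') update of a 2-D Ising CA, after Creutz.

    Each site carries an energy bank of capacity `cap`.  Sites are visited in
    a checkerboard pattern, so a spin's neighbours are frozen while it decides
    whether to flip; spin energy plus bank energy is conserved exactly.
    """
    L = S.shape[0]
    for parity in (0, 1):                       # black squares, then red ones
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != parity:
                    continue
                nbrs = S[(i + 1) % L, j] + S[(i - 1) % L, j] \
                     + S[i, (j + 1) % L] + S[i, (j - 1) % L]
                dE = 2.0 * J * S[i, j] * nbrs   # cost of flipping this spin
                new_bank = bank[i, j] - dE      # withdraw (dE > 0) or deposit (dE < 0)
                if 0.0 <= new_bank <= cap:      # reject on overflow or underflow
                    S[i, j] = -S[i, j]
                    bank[i, j] = new_bank
```

Because same-parity sites are never neighbors, each pass can in principle be applied to all its sites simultaneously, which is what makes the rule a genuine CA.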
The path to equilibrium in systems with conserved order parameters is generally more tortuous and difficult – and hence interesting – because such systems are more constrained. Hence, for example, a viscous fluid at high Reynolds number achieves equilibrium only after turbulent relaxation in which intricate structures may be spawned across a wide range of scales in length and time. To be sure, it is possible to conserve the ferromagnetic order parameter. The so-called Kawasaki dynamics of the Ising model flips only pairs of oppositely directed spins, so that the total magnetization is conserved. If this idea were combined with Creutz’ microcanonical dynamics, it would be possible to perform a microcanonical simulation of a system with a conserved order parameter, but I am not aware that this has been tried. The more natural setting for systems with conserved order parameters is hydrodynamics. Fluids generally have a conserved mass and momentum, and if compressibility effects are considered, the conservation of energy will also play an important role. There was, however, a conceptual problem with the CA simulation of fluids: Using lattice gases to model ferromagnets is intuitive enough because it is not difficult to picture a regular array of magnetic domains. It seemed much less clear how to model a fluid on a lattice, however, and almost a half a century elapsed between the development of the Ising model and the first lattice-gas models of hydrodynamics. ## 2 Hydrodynamic Lattice Gases ### 2.1 The Kadanoff-Swift Model The goal of a hydrodynamic lattice gas is to take the same “minimalist” approach to fluids that the Ising model takes to ferromagnets. The object is not a precise model of the dynamics at the finest scales, but rather to invent a fictitious microdynamics whose coarse-grained behavior – in the thermodynamic limit – lies in the same universality class as the phenomenon under study. 
Along these lines, the approach that seems the most promising is to model the fluid at the level of fictitious “molecules” that can move about and collide, as they do in a real fluid, conserving mass, momentum and (for compressible fluids) energy as they do so. The first attempt (known to me) along these lines was undertaken by Kadanoff and Swift in 1968. They considered a $`D=2`$ Cartesian lattice, each site of which may be occupied by a particle, as shown in Fig. 3. Each particle is tagged with one of four momenta, oriented along the diagonals, $`𝐜_1`$ $`=`$ $`+\widehat{𝐱}+\widehat{𝐲}`$ $`𝐜_2`$ $`=`$ $`-\widehat{𝐱}+\widehat{𝐲}`$ $`𝐜_3`$ $`=`$ $`-\widehat{𝐱}-\widehat{𝐲}`$ $`𝐜_4`$ $`=`$ $`+\widehat{𝐱}-\widehat{𝐲},`$ where $`\widehat{𝐱}`$ and $`\widehat{𝐲}`$ are the unit vectors in the $`x`$ and $`y`$ directions. An Ising-like Hamiltonian is defined. Then, at each step, a particle is randomly selected and one of three things happens to it: * It moves in the direction of its velocity vector to the next site where it could land without violating conservation of energy. * It diffuses, to any empty neighboring site, carrying its momentum with it. * It exchanges momentum vectors with another neighboring particle. The dynamics thus defined conserves mass, momentum and energy, and obeys the principle of detailed balance. Note that it is not technically a cellular automaton, because of the sequential nature of the particle updates, but it might be made into one by using some generalization of checkerboard updating and/or Creutz’ “banks.” To my knowledge, however, this has never been tried. The Kadanoff-Swift (KS) model exhibits many features of real fluids, such as sound-wave propagation, and long-time tails in velocity autocorrelation functions. As the authors noted, however, it does not faithfully reproduce the correct equations of motion of a viscous (or, for that matter, any other kind of) fluid.
In particular, the model exhibits a strong lattice anisotropy; the decay of sound waves, for example, depends on their direction of propagation with respect to the underlying lattice. ### 2.2 The HPP Model and Kinetic Theory The next advance in the lattice modelling of fluids came in the mid 1970’s, when Hardy, de Pazzis and Pomeau introduced a new lattice model with a number of innovations that warrant discussion here. The HPP model, named for its authors, also resides on a $`D=2`$ Cartesian lattice. Particle velocities are taken from the set $`𝐜_1`$ $`=`$ $`+\widehat{𝐱}`$ $`𝐜_2`$ $`=`$ $`+\widehat{𝐲}`$ $`𝐜_3`$ $`=`$ $`-\widehat{𝐱}`$ $`𝐜_4`$ $`=`$ $`-\widehat{𝐲},`$ and there may be anywhere from zero to four particles at each site. The only restriction is that there may not be more than one particle with a particular velocity, at a particular site, at a particular time. This “exclusion principle” makes it possible to represent the state of any site $`𝐱`$ by four bits; bit $`n_j(𝐱,t)`$, where $`j\in \{1,2,3,4\}`$, encodes the presence ($`1`$) or absence ($`0`$) of a particle with velocity $`𝐜_j`$ at site $`𝐱`$ and time $`t`$. Given this representation, the dynamics is defined as follows: At each time step, each site may experience a purely local collision, in which its particles rearrange their velocity vectors in such a way as to conserve mass and momentum. A moment’s thought indicates that the only nontrivial way for this to happen is if exactly two particles enter a site from opposite directions, and exit in the other two opposite directions; for any other configuration, the particles simply retain their incoming velocities, “passing through” each other without interacting. After the collisions, all particles “stream” to the site in the direction of their velocity vector. Note that all sites experience collisions simultaneously, and all particles stream simultaneously as well. The HPP model can thus be regarded as a CA.
The above-described dynamical rule can actually be expressed algebraically as follows: $$n_j(𝐱+𝐜_j,t+\mathrm{\Delta }t)=n_j(𝐱,t)+\omega _j,$$ (1) where $`\mathrm{\Delta }t`$ is the time associated with one step of the dynamical process (equal to unity if natural “lattice units” are adopted), and where $`\omega _j`$ is called the collision operator. If the collision operator were not present, this equation would simply state that a particle with velocity $`𝐜_j`$ will exist at site $`𝐱+𝐜_j`$ at time $`t+\mathrm{\Delta }t`$ if it existed at site $`𝐱`$ one time step earlier. This captures the streaming process. To include the collisions, $`\omega _j`$ must subtract (add) a particle from direction $`j`$ if the incoming state will undergo a nontrivial collision that will deplete (augment) that direction. For the HPP rule described above, we have $`\omega _1`$ $`=`$ $`-n_1(1-n_2)n_3(1-n_4)+(1-n_1)n_2(1-n_3)n_4`$ $`\omega _2`$ $`=`$ $`-n_2(1-n_3)n_4(1-n_1)+(1-n_2)n_3(1-n_4)n_1`$ $`\omega _3`$ $`=`$ $`-n_3(1-n_4)n_1(1-n_2)+(1-n_3)n_4(1-n_1)n_2`$ $`\omega _4`$ $`=`$ $`-n_4(1-n_1)n_2(1-n_3)+(1-n_4)n_1(1-n_2)n_3.`$ Here we have made use of the fact that multiplication of bits is equivalent to the logical “and” operation, subtraction from one is equivalent to the logical “not” operation, and addition of mutually exclusive bits is equivalent to the logical “or” operation. Thus, for example, the quantity $`(1-n_1)n_2(1-n_3)n_4`$ is equal to one if directions $`2`$ and $`4`$ are occupied, and $`1`$ and $`3`$ are not. Thus, this term appears with a plus sign in $`\omega _1`$ and $`\omega _3`$, since the resulting collision augments those directions, and with a minus sign in $`\omega _2`$ and $`\omega _4`$ since those directions are depleted. This algebraic description of the exact microscopic motion of the particles is somewhat akin to the Klimontovich description of continuum kinetic theory.
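The collision and streaming rules translate almost verbatim into boolean array operations. This sketch uses a 0-based direction index $`j\in \{0,1,2,3\}`$ in place of the $`1`$–$`4`$ above, and assumes a periodic box; the function name is mine:

```python
import numpy as np

def hpp_step(n):
    """One collision + streaming step of the HPP lattice gas (periodic box).

    n is boolean with shape (4, L, L); n[j] marks particles with velocity c_j,
    indexed here as c_0 = +x, c_1 = +y, c_2 = -x, c_3 = -y.  The only
    nontrivial collision turns an exactly-two-particle head-on pair into the
    perpendicular head-on pair, conserving mass and momentum.
    """
    n0, n1, n2, n3 = n
    horiz = n0 & n2 & ~n1 & ~n3          # head-on along x -> rotate to y
    vert = n1 & n3 & ~n0 & ~n2           # head-on along y -> rotate to x
    n0, n2 = (n0 & ~horiz) | vert, (n2 & ~horiz) | vert
    n1, n3 = (n1 & ~vert) | horiz, (n3 & ~vert) | horiz
    # Streaming: each particle hops one site along its velocity.
    n0 = np.roll(n0, 1, axis=1)          # +x
    n1 = np.roll(n1, -1, axis=0)         # +y (row index decreasing upward)
    n2 = np.roll(n2, -1, axis=1)         # -x
    n3 = np.roll(n3, 1, axis=0)          # -y
    return np.stack([n0, n1, n2, n3])
```

The boolean expressions are exactly the "and/not/or" bit algebra of the collision operator: `horiz` and `vert` play the roles of the two monomials appearing in each $`\omega _j`$.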
It is made somewhat simpler by the discreteness of the spatial lattice, and the finite number of allowed velocities. To use this microscopic description to find the fluid equations obeyed by the coarse-grained density and hydrodynamic velocity requires all of the tricks of kinetic theory. The first step is to determine a kinetic equation for the single-particle distribution function. We define this by an ensemble average, supposing that we have a large number of such lattices, with initial conditions sampled from some (unspecified) distribution, and writing $$N_j(𝐱,t)\equiv \left\langle n_j(𝐱,t)\right\rangle ,$$ where the angle brackets denote the ensemble average. Note that, whereas $`n_j`$ is bit-valued, $`N_j`$ is real-valued. By taking the ensemble average of Eq. (1), we arrive at $`N_j(𝐱+𝐜_j,t+\mathrm{\Delta }t)`$ $`=`$ $`N_j(𝐱,t)+\left\langle \omega _j\right\rangle `$ $`=`$ $`N_j(𝐱,t)-\left\langle n_j(1-n_{j+1})n_{j+2}(1-n_{j+3})\right\rangle `$ $`+\left\langle (1-n_j)n_{j+1}(1-n_{j+2})n_{j+3}\right\rangle ,`$ where all subscripts are evaluated modulo $`4`$. At this point, we see that we have a problem. The collision operator is nonlinear in the $`n_j`$’s, and the average of a product is not generally equal to the product of the averages – not if the quantities involved are correlated. Thus, the dynamical equation of the $`N_j`$’s will involve averages of products, such as $`\left\langle n_jn_{j+2}\right\rangle `$. To know these, it would be necessary to write kinetic equations for these two-point correlations, but these will involve still higher correlations, etc. This infinite series of equations is the lattice-gas analog of the BBGKY hierarchy of kinetic theory. To truncate this hierarchy and obtain a closed equation for the $`N_j`$’s, it is necessary to make a physical approximation: We shall assume that the particles entering a collision are uncorrelated. This approximation is tantamount to Boltzmann’s famous molecular chaos assumption. It is unlikely to be true, especially for high densities and low-dimensional lattices.
For now we note that, under this assumption, it is possible to replace the average of products above by the product of averages, resulting in a closed equation for the single-particle distribution function, $`N_j(𝐱+𝐜_j,t+\mathrm{\Delta }t)`$ $`=`$ $`N_j(𝐱,t)+\mathrm{\Omega }_j`$ (3) $`=`$ $`N_j(𝐱,t)-N_j(1-N_{j+1})N_{j+2}(1-N_{j+3})`$ $`+(1-N_j)N_{j+1}(1-N_{j+2})N_{j+3}.`$ This is called the lattice-Boltzmann equation. From this, it is possible, using the Chapman-Enskog analysis of classical kinetic theory, to derive the hydrodynamic equations obeyed by the mass density, $$\rho (𝐱,t)\equiv \underset{i}{\sum }N_i(𝐱,t)$$ (4) and the momentum density, $$\rho (𝐱,t)𝐮(𝐱,t)\equiv \underset{i}{\sum }𝐜_iN_i(𝐱,t).$$ (5) Note that Eqs. (4) and (5) are the discrete-velocity analog of the usual integration over velocity space to obtain the hydrodynamic densities. In this way, the fully compressible hydrodynamic equations for the HPP lattice gas were worked out in the original papers by HPP. For our purposes, we note that the result of this exercise in the incompressible limit is $`\mathbf{\nabla }\cdot 𝐮=0`$ and $`{\displaystyle \frac{\partial u_x}{\partial t}}+g(\rho ){\displaystyle \frac{\partial }{\partial x}}u_y^2`$ $`=`$ $`-{\displaystyle \frac{1}{\rho }}{\displaystyle \frac{\partial P}{\partial x}}+\nu (\rho ){\displaystyle \frac{\partial ^2u_x}{\partial x^2}}-\left(\nu (\rho )+{\displaystyle \frac{1}{2}}\right){\displaystyle \frac{\partial ^2u_y}{\partial x\partial y}}`$ $`{\displaystyle \frac{\partial u_y}{\partial t}}+g(\rho ){\displaystyle \frac{\partial }{\partial y}}u_x^2`$ $`=`$ $`-{\displaystyle \frac{1}{\rho }}{\displaystyle \frac{\partial P}{\partial y}}+\nu (\rho ){\displaystyle \frac{\partial ^2u_y}{\partial y^2}}-\left(\nu (\rho )+{\displaystyle \frac{1}{2}}\right){\displaystyle \frac{\partial ^2u_x}{\partial x\partial y}},`$ (6) where $`P`$ is the pressure, and we have defined the functions $$g(\rho )\equiv \frac{1-\rho /2}{1-\rho /4}$$ and $$\nu (\rho )\equiv \frac{1}{2\rho (1-\rho /4)}-\frac{1}{2}.$$ Eq. (6) bears some superficial resemblance to the Navier-Stokes equations of viscous fluid dynamics, but closer inspection reveals some important differences.
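A concrete sketch of the closed update, together with the moment sums of Eqs. (4) and (5), assuming periodic boundaries, 0-based direction indexing, and helper names of my own; the form $`g(\rho )=(1-\rho /2)/(1-\rho /4)`$ tabulated at the end is likewise my reading of the definition:

```python
import numpy as np

# HPP velocity set, rows c_0..c_3 = +x, +y, -x, -y (0-based indexing here).
C = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)

def hpp_boltzmann_step(N):
    """One step of the HPP lattice-Boltzmann equation on a periodic box.

    N has shape (4, L, L) with real occupation probabilities N_j in [0, 1].
    """
    N0, N1, N2, N3 = N
    # Collision term for direction 0: lose an x head-on pair, gain from a y pair.
    gain = (1.0 - N0) * N1 * (1.0 - N2) * N3
    loss = N0 * (1.0 - N1) * N2 * (1.0 - N3)
    N = N + np.stack([gain - loss, loss - gain, gain - loss, loss - gain])
    # Streaming: shift each post-collision distribution along its velocity.
    N[0] = np.roll(N[0], 1, axis=1)      # +x
    N[1] = np.roll(N[1], -1, axis=0)     # +y
    N[2] = np.roll(N[2], -1, axis=1)     # -x
    N[3] = np.roll(N[3], 1, axis=0)      # -y
    return N

def hydro_moments(N):
    """Density rho = sum_i N_i and velocity u = (sum_i c_i N_i)/rho, per site."""
    rho = N.sum(axis=0)
    momentum = np.tensordot(C, N, axes=([0], [0]))   # shape (2, L, L)
    return rho, momentum / rho

def g(rho):
    # Non-Galilean prefactor g(rho) = (1 - rho/2)/(1 - rho/4); vanishes at rho = 2.
    return (1.0 - rho / 2.0) / (1.0 - rho / 4.0)
```

Since the per-site collision terms sum to zero and streaming merely permutes sites, this update conserves mass and momentum to machine precision, mirroring the exact conservation laws of the underlying CA.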
Like the KS model studied earlier, the HPP model gives rise to anisotropic hydrodynamic equations that are not invariant under a global spatial rotation. They involve derivatives with respect to $`x`$ and $`y`$ in combinations that cannot be expressed in terms of the familiar $`\mathbf{\nabla }`$ operator of vector calculus. Rather, the grid coordinates $`x`$ and $`y`$ have a preferred status. Hence, for example, the drag of a KS or HPP fluid as it flows past a generic obstacle will depend on that obstacle’s angle of orientation with respect to the underlying lattice. At the time, this was not considered a problem, since the real purpose of the KS and HPP models was to study the statistical physics of fluids, and both models could do this well. Traditional computational fluid dynamicists, however, were not inclined to take notice of this as a serious numerical method unless and until a way was found to remove the unphysical anisotropy. ### 2.3 The FHP Model Another thirteen years passed from the introduction of the HPP model to the solution of the anisotropy problem in 1986 by Frisch, Hasslacher and Pomeau, and simultaneously by Wolfram. The FHP lattice gas, named after the authors of the first reference given above, is very similar to that of HPP in that the evolution proceeds by alternating collision and streaming steps – hence, it is again a cellular automaton. The only real difference is that it is based on a triangular lattice instead of a Cartesian one, as shown in Fig. 4. Now one would expect that a six-fold symmetric lattice would give rise to a more isotropic model than a four-fold symmetric one. The surprising result of the 1986 studies, however, is that the six-fold version does not merely improve the isotropy – it yields perfect isotropy! To see why isotropy is recovered on a triangular lattice, we generalize the results of the Chapman-Enskog analysis mentioned above.
It turns out that the most general form for the viscous term in the hydrodynamic equations above is $$\frac{\partial u_i}{\partial t}+\mathrm{\cdots }=c_{ijkl}\partial _j\partial _ku_l,$$ where $`c_{ijkl}`$ is a fourth-rank tensor that is constructed from the lattice vectors, and hence shares all of their symmetries. If the lattice is invariant under rotation by $`60^{\circ }`$ or $`90^{\circ }`$, so then will be the components of this tensor. Note that if this tensor is isotropic – so that it is expressible in terms of the Kronecker delta – then we have most generally $$c_{ijkl}=\nu \delta _{jk}\delta _{il}+\mu \delta _{ij}\delta _{kl}+\mu \delta _{ik}\delta _{jl},$$ whence our hydrodynamic equation becomes $$\frac{\partial 𝐮}{\partial t}+\mathrm{\cdots }=\nu \nabla ^2𝐮+2\mu \mathbf{\nabla }\left(\mathbf{\nabla }\cdot 𝐮\right).$$ This form, expressible with vector notation, is indeed isotropic and the two terms on the right may be identified with the shear and bulk viscosity terms of the Navier-Stokes equation, respectively. Thus, a sufficient condition for the attainment of isotropy is to ensure that the $`c_{ijkl}`$ tensor is isotropic. It turns out that the only rank-four tensors in two dimensions that are invariant under $`60^{\circ }`$ rotation are isotropic; by contrast, there exist rank-four tensors in two dimensions that are invariant under $`90^{\circ }`$ rotation but which are not isotropic (cannot be expressed in terms of the Kronecker delta). Thus, generically, isotropy will be recovered on a triangular grid and not on a Cartesian one. In the incompressible limit, the result of the Chapman-Enskog analysis for the FHP fluid is then $`\mathbf{\nabla }\cdot 𝐮=0`$ and $$\frac{\partial 𝐮}{\partial t}+g(\rho )𝐮\cdot \mathbf{\nabla }𝐮=-\mathbf{\nabla }P+\nu (\rho )\nabla ^2𝐮,$$ (7) where $`g(\rho )`$ is defined as it was for the HPP model, and $`\nu (\rho )`$ is a well-behaved function of density whose precise form depends on the particulars of the collision rules (e.g., whether or not three- or four-body collisions are considered). Thus, isotropy is recovered, and there is no problem writing Eq.
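The symmetry argument can be checked numerically: the simplest rank-four tensor built from the lattice vectors, $`T_{ijkl}=\sum _ac_{ai}c_{aj}c_{ak}c_{al}`$ (used here as a stand-in for $`c_{ijkl}`$), is isotropic for the six-fold velocity set but not for the four-fold one. The test below (function names and tolerance are mine) uses the two-dimensional criterion that a fully symmetric rank-four tensor is isotropic exactly when $`T_{xxxx}=T_{yyyy}=3T_{xxyy}`$ and the odd components vanish:

```python
import numpy as np

def fourth_moment(cs):
    """T_ijkl = sum_a c_ai c_aj c_ak c_al for a set of 2-D lattice vectors."""
    return np.einsum('ai,aj,ak,al->ijkl', cs, cs, cs, cs)

def is_isotropic(T, tol=1e-12):
    # In 2-D, a fully symmetric rank-4 tensor is isotropic iff
    # T_xxxx = T_yyyy = 3*T_xxyy and the odd components vanish.
    return (abs(T[0, 0, 0, 0] - 3 * T[0, 0, 1, 1]) < tol
            and abs(T[1, 1, 1, 1] - 3 * T[0, 0, 1, 1]) < tol
            and abs(T[0, 0, 0, 1]) < tol and abs(T[0, 1, 1, 1]) < tol)

# Unit velocity sets with four-fold (90-degree) and six-fold (60-degree) symmetry.
square = np.array([[np.cos(a), np.sin(a)] for a in np.arange(4) * np.pi / 2])
hexa = np.array([[np.cos(a), np.sin(a)] for a in np.arange(6) * np.pi / 3])

print(is_isotropic(fourth_moment(square)))  # False: 90-degree symmetry is not enough
print(is_isotropic(fourth_moment(hexa)))    # True: 60-degree symmetry gives isotropy
```

This is the numerical counterpart of the statement in the text: every rank-four tensor with $`60^{\circ }`$ invariance is isotropic, while $`90^{\circ }`$ invariance admits anisotropic examples.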
(7) in the usual notation of vector calculus, as we have done. There is, however, one last lingering problem: The factor of $`g(\rho )`$ in front of the inertial term (second term on the left-hand side) is equal to unity in the real Navier-Stokes equations. This is a consequence of Galilean invariance (GI) which, like isotropy, is yet another Lie-group symmetry possessed by the real Navier-Stokes equations, but which we may not take for granted in our lattice model. To see that the convective derivative operator $$\frac{D}{Dt}\equiv \frac{\partial }{\partial t}+𝐮\cdot \mathbf{\nabla }$$ (8) has GI, we consider the transformation of coordinates $`𝐱^{}`$ $`=`$ $`𝐱-𝐕t`$ $`t^{}`$ $`=`$ $`t,`$ where $`𝐕`$ is a constant vector. This has inverse $`𝐱`$ $`=`$ $`𝐱^{}+𝐕t^{}`$ $`t`$ $`=`$ $`t^{}.`$ Then the derivatives transform as follows: $`{\displaystyle \frac{\partial }{\partial t}}`$ $`=`$ $`{\displaystyle \frac{\partial t^{}}{\partial t}}{\displaystyle \frac{\partial }{\partial t^{}}}+{\displaystyle \frac{\partial 𝐱^{}}{\partial t}}\cdot \mathbf{\nabla }^{}={\displaystyle \frac{\partial }{\partial t^{}}}-𝐕\cdot \mathbf{\nabla }^{}`$ $`\mathbf{\nabla }`$ $`=`$ $`\left(\mathbf{\nabla }t^{}\right){\displaystyle \frac{\partial }{\partial t^{}}}+\left(\mathbf{\nabla }𝐱^{}\right)\cdot \mathbf{\nabla }^{}=\mathbf{\nabla }^{},`$ and the hydrodynamic velocity transforms as follows $$𝐮=\frac{d𝐱}{dt}=\frac{d}{dt^{}}\left(𝐱^{}+𝐕t^{}\right)=\frac{d𝐱^{}}{dt^{}}+𝐕=𝐮^{}+𝐕.$$ Under this Galilean transformation, we see that $$\frac{\partial }{\partial t}+𝐮\cdot \mathbf{\nabla }=\frac{\partial }{\partial t^{}}-𝐕\cdot \mathbf{\nabla }^{}+\left(𝐮^{}+𝐕\right)\cdot \mathbf{\nabla }^{}=\frac{\partial }{\partial t^{}}+𝐮^{}\cdot \mathbf{\nabla }^{},$$ so the form of the convective derivative operator is indeed Galilean invariant. The presence of the $`g(\rho )`$ factor invalidates this argument. The lack of GI in the FHP fluid is due to the fact that the lattice itself constitutes a preferred reference frame.
We can now see the isotropy and GI problems in a unified light: Lie-group symmetries are often responsible for particular features of hydrodynamic equations; for example, isotropy implies that the equation can be written in the notation of vector calculus, and GI implies that the convective derivative will have the form of Eq. (8). Such symmetries may be broken by the presence of the fixed lattice, and when this happens the corresponding features of the hydrodynamic equations may be destroyed. We managed to recover isotropy by noting that the maximum rank of the tensors in our hydrodynamic equation is four, and that rank-four tensors with $`60^{}`$ rotational invariance are always isotropic. How do we fix the GI problem? There are several ways to do so. For incompressible flow, for which $`\rho `$ and hence $`g(\rho )`$ are constant, the easiest solution is to scale $`𝐮`$ and the pressure $`P`$ by $`g(\rho )`$; since the term in which $`g(\rho )`$ appears is the only one that is quadratic in $`𝐮`$, this works handily. Another approach introduces additional particle velocities to force $`g(\rho )`$ to unity. Another consideration is the functional form of the shear viscosity, $`\nu (\rho )`$. For lattice gases that satisfy detailed balance, this quantity will have a minimum as a function of $`\rho `$. This minimum value of the viscosity sets the maximum value of the Reynolds number that may be simulated on a given size lattice. The result of this 1986 work was a cellular automaton model whose coarse-grained behavior was that of a Navier-Stokes fluid. Now the computational fluid dynamicists took notice, and there was a five-year-long flurry of activity in the field. Some of the accomplishments during this time were:

1. the extension of the model to three dimensions;
2. simulations of flow past various obstacles, and comparisons with other more conventional methods of computational fluid dynamics (CFD);
3. clever variations on the collision rules of the basic FHP model intended to achieve lower viscosity minima, and experimental tests of the functional form of $`\nu (\rho )`$;
4. careful measurements of long-time tails in velocity autocorrelation functions, and finite-size corrections to the viscosity;
5. extensions of the model to simulate complex-fluid hydrodynamics, including interfaces and surface tension;
6. a host of algorithmic tricks, and even special-purpose hardware, to simulate such hydrodynamic lattice gases on parallel computers.

The number of papers produced in this period is far too great to review here; the interested reader is referred to the excellent text by Rothman and Zaleski and the secondary references therein for a review of this work. Items 1 and 5 on the above list will be discussed briefly below in Subsection 2.4 and Section 3, respectively. One reason that hydrodynamic lattice gases captured people's imagination at this time was that they represented an altogether new way of doing CFD – and, indeed, computational physics in general. Conventional CFD methods began with the Navier-Stokes equations, and discretized them in one of a variety of ways. Lattice gases, by contrast, defined a kind of particle kinetics from which the Navier-Stokes equations were emergent – just as they are emergent for a real fluid. There are definite advantages to such a “physical” approach, aside from its undeniable aesthetic appeal. For example, one often overlooked advantage of lattice-gas models is their unconditional stability. The Navier-Stokes equations have a basis in kinetic theory, as the behavior of a system of particles whose collisions conserve mass and momentum. The fact that these underlying collisions obey a detailed-balance condition ensures the validity of the $`H`$ theorem, the fluctuation-dissipation theorem, Onsager reciprocity, and a host of other critically important properties with macroscopic consequences.
When its kinetic origins are cavalierly ignored, and the Navier-Stokes equations are “chopped up” into finite-difference schemes, these important properties can be lost. The discretized evolution equations need no longer satisfy an $`H`$ theorem, and the notion of underlying fluctuations may lose meaning altogether. As the first practitioners of finite-difference simulations on digital computers found in the 1940’s and 1950’s, the result can be the development and growth of high-wavenumber numerical instabilities, and indeed these have plagued essentially all CFD methodologies in all of the decades since. Such instabilities are entirely unphysical because they represent a clear violation of the $`H`$ theorem; indeed, the Second Law of Thermodynamics would preclude their occurrence. Numerical analysts have responded to this problem with textbooks full of ways to “patch up” these anomalies – including upwind differencing, artificial viscosity, and a host of other very clever tricks – but from a physicist’s point of view it would have been much better if the original discretization process had retained more of the underlying physics, so that these problems had not occurred in the first place. Lattice gases represented an important first step in this direction. As was shown shortly after their first applications to hydrodynamics, they can be constructed with an $`H`$ theorem that rigorously precludes any kind of numerical instability. More glibly stated, lattice gases avoid numerical instabilities in precisely the same way that Nature herself does. Even the computer implementation of a hydrodynamic lattice gas was novel. All other CFD methodologies make use of floating-point numbers to represent real quantities. In such floating-point numbers, some bits represent the mantissas, others the exponents and others the signs. A lattice gas, by contrast, can use a representation of one bit per particle.
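To make the one-bit-per-particle representation concrete, here is a minimal sketch of ours (not the historical codes): the six occupation numbers of an FHP-like site are packed into a single byte, and streaming shifts each bit plane along its direction. The shifts below are placeholders — a genuine FHP lattice needs hexagonal row-parity bookkeeping that we omit for brevity.

```python
import numpy as np

N = 8
state = np.zeros((N, N), dtype=np.uint8)  # bits 0..5: one bit per particle/direction
state[4, 4] = 0b000001                    # a single particle moving in direction 0

# Placeholder lattice shifts; they serve only to illustrate bit-plane streaming.
shifts = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def stream(state):
    """Move each direction's bit plane one site along its lattice vector."""
    new = np.zeros_like(state)
    for i, (dx, dy) in enumerate(shifts):
        plane = (state >> i) & 1                          # extract bit plane i
        plane = np.roll(np.roll(plane, dx, axis=0), dy, axis=1)
        new |= plane << i                                 # repack into the byte
    return new

state = stream(state)
print(state[5, 4])  # 1: the particle moved one site in direction 0
```

Propagation is thus pure bit manipulation, and an entire row of sites can be streamed with a handful of word-wide logical operations.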
All bits thus play an equal role in some sense; this idea is sometimes colorfully referred to as “bit democracy.” In spite of these successes, the use of lattice gases for the simulation of simple (single-phase) Navier-Stokes fluids declined substantially in the early 1990’s. Ironically, it was largely supplanted by the direct floating-point computer simulation of the lattice Boltzmann equation, Eq. (3). The lattice Boltzmann equation also has kinetic underpinnings in a sense, though its representation is that of a single-particle distribution function, rather than a full particle-level description. Most importantly, the lattice Boltzmann framework allows for greater accuracy than LGA by effectively eliminating kinetic fluctuations. Measured quantities are therefore much less noisy, and require less ensemble averaging to compute accurately. For certain complex fluids such as microemulsions, however, lattice gases remain a very effective simulation methodology.

### 2.4 The FCHC Model

To find a lattice-gas model of isotropic three-dimensional hydrodynamics, it was necessary to find a $`D=3`$ lattice under whose symmetry group the only invariant rank-four tensors are isotropic. The trouble is that no such regular lattice exists in three dimensions. In 1987, however, Frisch et al. noticed that a lattice with the required symmetry does exist in four dimensions. It is called the face-centered hypercubic (FCHC) lattice, and is self-dual with 24 lattice vectors per site. The 24 lattice vectors can be arranged in three groups of eight, such that any two groups of eight comprise the 16 vertices of a regular four-dimensional hypercube, and the vectors of the third group of eight point in the direction of the centers of the faces of that hypercube (hence, the name FCHC). The 24 lattice vectors are most easily enumerated by taking all integer quadruples $`(i,j,k,l)`$ that lie at a distance $`\sqrt{2}`$ from the origin.
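This enumeration is easily verified (a quick sketch of ours):

```python
from itertools import product

# All integer quadruples at squared distance 2 from the origin; any component
# outside {-1, 0, 1} would already contribute at least 4 to the sum of squares.
fchc = [v for v in product((-1, 0, 1), repeat=4) if sum(x * x for x in v) == 2]
print(len(fchc))  # 24
# Each such quadruple has exactly two zero components and two equal to +/-1:
print(all(sorted(map(abs, v)) == [0, 0, 1, 1] for v in fchc))  # True
```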
Clearly, two of the integer components must be zero, and the other two must be $`\pm 1`$. There are $`\binom{4}{2}=6`$ ways to choose which two are zero, and $`2^2=4`$ ways of assigning $`\pm 1`$ to the other two, for a total of 24. It was noticed that by projecting this lattice back to three dimensions – by, say, ignoring the fourth coordinate – a simple set of $`D=3`$ lattice vectors was obtained which worked. The three-dimensional lattice thus obtained is not a regular lattice, in that not all the lattice vectors have the same length, and some of them have multiplicity two (because they correspond to two different vectors on the four-dimensional lattice), but it works nevertheless. The projected lattice vectors are thus

| $`j`$ | $`c_{jx}`$ | $`c_{jy}`$ | $`c_{jz}`$ |
| --- | --- | --- | --- |
| 0,1 | +1 | 0 | 0 |
| 2,3 | -1 | 0 | 0 |
| 4 | 0 | +1 | +1 |
| 5 | 0 | +1 | -1 |
| 6 | 0 | -1 | +1 |
| 7 | 0 | -1 | -1 |
| 8,9 | 0 | +1 | 0 |
| 10,11 | 0 | -1 | 0 |
| 12 | +1 | 0 | +1 |
| 13 | +1 | 0 | -1 |
| 14 | -1 | 0 | +1 |
| 15 | -1 | 0 | -1 |
| 16,17 | 0 | 0 | +1 |
| 18,19 | 0 | 0 | -1 |
| 20 | +1 | +1 | 0 |
| 21 | +1 | -1 | 0 |
| 22 | -1 | +1 | 0 |
| 23 | -1 | -1 | 0 |

Note that six of them have multiplicity two. Also note that we have listed them in the three groups of eight, described above. A more geometric description of the FCHC lattice and its use for lattice-gas simulations is given in the reference by Adler et al.

## 3 Applications to Complex Fluids

The simulation of the hydrodynamics of complex fluids, such as immiscible flow, coexisting phases, emulsions, colloids, liquid crystals, gels and foams, is one of the principal outstanding challenges of computational condensed matter physics. Hydrodynamic equations for such materials are often not known or ill-posed, so that finite-difference discretizations are not even an option.
Molecular dynamics can, of course, be employed, but it advances in time steps that are typically between $`10^{-2}`$ and $`10^{-3}`$ of a mean-free time. In this context, lattice gases offer the possibility of an inexpensive molecular dynamics – one for which particles are still discrete, but can advance in steps on the order of a mean-free time (since the particles typically suffer a collision at each time step). Earlier in this review, we mentioned that lattice models can be useful for studying the equilibrium properties of complex fluids, and we considered the model of Widom in some detail. We finish this survey by demonstrating how lattice gases have begun to explore the hydrodynamics of such materials. In 1988, Rothman and Keller introduced a lattice-gas model of immiscible flow, such as that of oil and water. They accomplished this by tagging the lattice-gas particles with two “colors,” to distinguish oil and water. Their collisions were required to conserve the total mass of each color separately, as well as the total momentum. They then skewed the collision outcomes to favor those that send particles towards sites dominated by other particles of the same color. This affinity of particles for other particles of the same color gives the two phases cohesion, and the interface surface tension. Let us call the two colors “red” and “blue,” and use $`n_i^R(𝐱,t)`$ and $`n_i^B(𝐱,t)`$, respectively, to denote the occupation number of each color in velocity direction $`i`$, at site $`𝐱`$ and at time $`t`$. Then Rothman and Keller defined a color field $$𝐄\equiv \sum _i𝐜_i\left[\sum _j\left(n_j^R(𝐱+𝐜_i,t)-n_j^B(𝐱+𝐜_i,t)\right)\right]$$ at each site that points in the direction of increasing (red minus blue) color, and a color flux $$𝐉\equiv \sum _i𝐜_i\left(n_i^R-n_i^B\right)$$ for each possible outgoing state. They then chose the outcome that minimized the color work, $$H=-𝐉\cdot 𝐄.$$ (9) More generally, Chen et al.
pointed out that one should assign probabilities to each of the possible outcomes based on the Boltzmann weights $`\mathrm{exp}\left(-\frac{H}{k_BT}\right)`$, where $`T`$ is a temperature. This model has been studied extensively (see ), and it exhibits phase separation and surface tension. Another rather different model of two coexisting phases, such as water and water vapor, was worked out by Appert and Zaleski in 1990 . The model gives particles an attraction by allowing mass- and momentum-conserving collisions between certain configurations of particles at different sites. This model has also been extensively studied (again, see ). Both this model and that of Rothman and Keller have been cast in the lattice Boltzmann framework as well ; indeed, this remains a very active area of study. Neither the Rothman-Keller model nor the Appert-Zaleski model, as originally posed, obeyed the principle of detailed balance. Since this is important, for all of the reasons described above, some effort has been devoted to restoring detailed balance to hydrodynamic lattice-gas models with interaction between particles at different sites. This turns out not to be easy. The best attempts to date have taken as a starting point a 1988 method due to Colvin et al. , called Maximally Discretized Molecular Dynamics or “$`(MD)^2`$.” This is essentially a return to the Kadanoff-Swift representation with a maximum of one particle per site, and sequential propagation of particles. (Thus, this model is no longer, strictly speaking, a cellular automaton.) Like the KS model, particles are allowed to exchange momentum with their neighbors, and they can step in the direction of their momentum to the nearest empty site. A hard-sphere interaction is introduced that may extend over more than one site. Mass, momentum and energy are exactly conserved, and detailed balance is maintained.
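The Boltzmann-weighted rule can be sketched as follows (our own illustration, with a made-up list of candidate outcomes; a real model enumerates all collision results with the correct conserved quantities). We take the color work to be $`H=-𝐉\cdot 𝐄`$, so that minimizing it aligns the outgoing color flux with the color field, and the original deterministic Rothman-Keller rule is recovered as $`k_BT\to 0`$:

```python
import math
import random

def color_work(J, E):
    """Color work H = -J.E of a candidate outgoing state, Eq. (9)."""
    return -(J[0] * E[0] + J[1] * E[1])

def choose_outcome(outcomes, E, kT):
    """Pick a candidate color flux J with Boltzmann weight exp(-H/kT)."""
    H = [color_work(J, E) for J in outcomes]
    Hmin = min(H)  # shift by the minimum for numerical stability
    weights = [math.exp(-(h - Hmin) / kT) for h in H]
    r = random.uniform(0.0, sum(weights))
    for J, w in zip(outcomes, weights):
        r -= w
        if r <= 0.0:
            return J
    return outcomes[-1]

# At low temperature the outcome whose flux best aligns with the color
# field is always chosen, which drives phase separation.
E = (1.0, 0.0)
candidates = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0)]
print(choose_outcome(candidates, E, kT=1e-6))  # (1.0, 0.0)
```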
Arbitrary interaction potentials – possibly beyond just the nearest neighbor in range – were added to the $`(MD)^2`$ model by Gunn et al. in 1993, and this model remains near the state of the art. (The algorithm in the paper by Gunn et al. used a continuum-valued velocity, but this is not an essential feature.) The collisions in this model do not change the energy, and may therefore be unconditionally accepted. The propagation step, however, may change the energy, and the algorithm can be implemented in either a microcanonical version (in which the propagation is accepted only if the energy change is zero), or a canonical version (in which it is accepted according to the Metropolis algorithm). Finally, we note that many variants of these algorithms exist and are useful for various purposes. Boghosian et al. developed a variation of the Rothman-Keller model that allows for the inclusion of a surfactant phase, allowing the simulation of microemulsions. To see how this works, first note that the quantity $$\sum _i\left(n_i^R-n_i^B\right)$$ can be thought of as a color charge in the Rothman-Keller model, in that its current is the color flux, and its vector-weighted sum over neighbors is the color field. In this context, surfactant particles are introduced as color dipoles, and numerous terms are added to the Hamiltonian, Eq. (9), to account for the color-dipole interaction that makes the surfactant prefer to live on the interface, and the dipole-dipole interaction that gives rise to the curvature energy. Details are given in the reference . The $`D=2`$ model has been studied in some detail, and preliminary $`D=3`$ results have been obtained as of this writing. The model is able to track the formation and saturation of droplet (Fig. 5, $`D=2`$), wormlike-micelle (Fig. 6, $`D=3`$), sponge (Fig. 7, $`D=2`$) and lamellar (Fig. 8, $`D=3`$) phases, and the time dependence of this saturation has been studied .
Interfacial fluctuations in the presence of surfactant have been studied with this model , and it has also been used for the first simulations of the shear-induced sponge-to-lamellar phase transition (Fig. 9, $`D=2`$). Note that all of these applications involve nonequilibrium or dynamical processes that have previously been difficult to address. The most lengthy MD simulations to date are barely able to see the self-assembly of a single emulsion droplet. The lattice-gas method, by contrast, is able to study the growth and saturation of many such droplets and larger structures, as can be seen in these figures. ## 4 Conclusion We have reviewed the use of lattice gases for physical problems, with emphasis on their applications to hydrodynamics. We have traced the evolution of hydrodynamic lattice gases in the 1960’s and 1970’s, leading up to the development of the FHP model in 1986. We have then seen some of the attempts to add interactions – possibly between particles at different sites – to such models, in order to simulate complex-fluid hydrodynamics; we have also discussed the loss of detailed balance in such models, and its recovery by the $`(MD)^2`$ algorithm and its variants. Though much of the CFD-related activity in this field has migrated to lattice Boltzmann methods, it is this author’s belief that lattice gases remain very well suited for the role of “inexpensive molecular dynamics” for the hydrodynamic simulation of complex fluids. In addition, it seems likely that this kind of simulation can still benefit from the development of special-purpose hardware. Interest in this area has been somewhat stifled to date because the ongoing meteoric rise in workstation performance (per unit cost) has made it dangerous to try to develop any kind of hardware outside of the mainstream. 
Sooner or later, however, workstation performance will begin to saturate, and the very real differences between hardware optimized for lattice gases and for general-purpose floating-point calculations will once again become a target on which hardware designers might well focus some activity. ## Acknowledgements The author would like to thank Raissa d’Souza, Norman Margolus and Peter Coveney for their review of the manuscript and helpful comments. The three-dimensional images were made by Andrew Emerton, Peter Love and David Bailey of Oxford University.
Goldstone’s theorem guarantees that associated with every spontaneously broken global symmetry there is a massless particle. Below the symmetry breaking scale, the dynamics of these Nambu-Goldstone degrees of freedom can be described by an action which realizes the spontaneously broken symmetry nonlinearly. This effective action encapsulates all the consequences of the symmetry current algebra and, through lowest non-trivial order in a derivative expansion, is unique up to reparametrization, independent of the underlying theory -. For the case of an internal global symmetry group $`G`$ spontaneously broken to an invariant subgroup $`H`$, the Nambu-Goldstone fields, $`\pi ^i,i=1,\mathrm{},\mathrm{dim}G/H`$, act as coordinates of the coset manifold $`G/H`$. A particular choice of the coset coordinates is the standard realization parametrized as $$U(\pi )=e^{\frac{2iT^i\pi ^i}{F_\pi }},$$ (1) where $`T^i`$ is the fundamental representation of $`G`$ and $`F_\pi `$ is the Nambu-Goldstone boson decay constant. While $`U`$ transforms linearly under $`G`$, the Nambu-Goldstone fields, $`\pi ^i`$, transform linearly only under $`H`$ and nonlinearly under the spontaneously broken $`G`$ generators. With this choice of coordinates, the $`G`$ invariant effective Lagrangian is simply $$\mathcal{L}=\frac{F_\pi ^2}{4}\mathrm{Tr}\left[\partial _\mu U^{\dagger }\partial ^\mu U\right].$$ (2) In addition to the spontaneous symmetry breaking, there also often appears some soft explicit symmetry breaking. For the case of chiral symmetry breaking, this explicit breaking takes the form of a soft mass term and the above effective Lagrangian is modified to $$\mathcal{L}=\frac{F_\pi ^2}{4}\mathrm{Tr}\left[\partial _\mu U^{\dagger }\partial ^\mu U\right]+uF_\pi ^2\mathrm{Tr}\left[mU^{\dagger }+Um\right].$$ (3) Here $`m`$ is the mass matrix characterizing the soft explicit breaking and $`u`$ is the order parameter of the spontaneous symmetry breaking.
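The normalization of the kinetic term in Eq. (2) can be checked numerically. The sketch below is our own illustration, not from the original text: it takes $`G=SU(2)`$ with $`T^i=\sigma ^i/2`$, lets $`\pi ^i`$ vary along a single coordinate with a fixed isospin direction, and confirms that $`\frac{F_\pi ^2}{4}\mathrm{Tr}[\partial U^{\dagger }\partial U]`$ reduces to the canonical $`\frac{1}{2}\partial \pi ^i\partial \pi ^i`$.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

F = 1.0  # F_pi, set to 1 for the check

def U(pi):
    """U = exp(2i T.pi / F) = exp(i sigma.pi / F) for SU(2)."""
    a = np.linalg.norm(pi)
    n_dot_sigma = sum(p * s for p, s in zip(pi, sigma)) / (a if a else 1.0)
    return np.cos(a / F) * np.eye(2) + 1j * np.sin(a / F) * n_dot_sigma

# Let pi depend on one coordinate t as pi(t) = p*t; compare
# (F^2/4) Tr[dU^dagger dU] with (1/2) dpi.dpi at t = 1.
p = np.array([0.3, -0.2, 0.1])
h = 1e-6
dU = (U(p * (1 + h)) - U(p * (1 - h))) / (2 * h)
lhs = 0.25 * F**2 * np.trace(dU.conj().T @ dU).real
print(abs(lhs - 0.5 * p @ p) < 1e-6)  # True
```

(For a fixed isospin direction the equality is in fact exact; the finite-difference derivative introduces only a tiny numerical error.)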
For example, if the chiral symmetry is dynamically broken due to some underlying strong gauge interaction, then $`u=\frac{<\overline{\psi }\psi >}{2F_\pi ^2}`$, where $`\psi `$ is a chiral fermion of the underlying theory. The above effective Lagrangian assumes that the theory is free of chiral anomalies. If such effects are also present, the effective action is further modified by the inclusion of a Wess-Zumino term -. While we have focused on the case of a single spontaneously broken global symmetry, the analysis is trivially extended to include the possibility of multiple spontaneously and softly broken internal global symmetry groups. The resultant effective Lagrangian is simply obtained by additively including the individual effective Lagrangians. In addition to the case of spontaneously broken global symmetries, one can also construct effective Lagrangians which nonlinearly realize scale symmetry. Thus we envision an underlying model where over some energy range the quantum fluctuations are such that the scale anomaly either vanishes or is but a very small effect and dilatations are approximately a good symmetry broken only by some soft mass terms. Then when a global internal symmetry is spontaneously broken, there will be an accompanying spontaneous scale symmetry breakdown - and the spectrum will include its associated Nambu-Goldstone boson, the dilaton $`D`$. Since, in general, the couplings of the underlying theory do indeed run, the dilaton is a pseudo Nambu-Goldstone boson acquiring a mass related to the scale at which the renormalization group $`\beta `$ functions become significant. To nonlinearly realize the scale symmetry, one introduces a standard realization for the dilaton as $$S(D)=e^{\frac{D}{F_D}}$$ (4) where $`F_D`$ is the dilaton decay constant.
The associated scale transformation, parametrized by $`ϵ`$, is $$\delta ^D(ϵ)S=ϵ(1+x^\nu \partial _\nu )S$$ (5) and results in an inhomogeneous scale transformation of the dilaton as $$\delta ^D(ϵ)D=ϵ(F_D+x^\nu \partial _\nu D).$$ (6) Since the generators of space-time scale transformations commute with those of the internal symmetry group $`G`$, the Nambu-Goldstone boson fields $`\pi ^i`$ are constrained to carry zero scale weight $$\delta ^D(ϵ)\pi ^i=ϵx^\nu \partial _\nu \pi ^i,$$ (7) while the dilaton is a $`G`$ singlet and thus satisfies $$\delta ^G(\omega )D=0,$$ (8) with $`\omega ^A`$ parametrizing the group $`G`$ transformations. The effective action (3) can be made scale invariant up to soft breaking terms by including appropriate powers of $`S`$ to make the scaling weight of each invariant term be four. The soft explicit scale and $`G`$ symmetry breaking terms are dictated by the form of the underlying theory. For the case of a global chiral symmetry broken by a soft fermion mass term which also softly breaks the scale symmetry, the effective Lagrangian which nonlinearly realizes both the chiral and the scale symmetry takes the form - $`\mathcal{L}`$ $`=`$ $`{\displaystyle \frac{1}{2}}F_D^2\partial _\mu S\partial ^\mu S+{\displaystyle \frac{1}{4}}F_\pi ^2S^2Tr[\partial _\mu U^{\dagger }\partial ^\mu U]`$ (9) $`-{\displaystyle \frac{1}{2}}uF_\pi ^2(3-\gamma )S^4Tr[m]+uF_\pi ^2S^{3-\gamma }Tr[mU^{\dagger }+Um].`$ (10) where $`\gamma `$ is the anomalous dimension of the underlying fermion field. Under the nonlinear scale and chiral symmetries, this effective Lagrangian transforms as $$\delta ^D(ϵ)\mathcal{L}=ϵ(3+x^\nu \partial _\nu )\{uF_\pi ^2(3-\gamma )S^{3-\gamma }Tr[mU^{\dagger }+Um]\}$$ (11) $$\delta ^G(\omega )\mathcal{L}=i\omega ^AuF_\pi ^2S^{3-\gamma }Tr[T^A(mU^{\dagger }-Um)]$$ (12) where the right hand sides reflect the soft explicit symmetry breaking. The fact that the coefficient of the scale and chirally invariant $`S^4`$ term in the Lagrangian (10) depends on the explicit breaking mass parameter, $`m`$, is somewhat unusual and warrants some further elaboration.
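The role of this $`S^4`$ coefficient admits a quick numerical check (our own sketch, with an arbitrary illustrative value of $`\gamma `$). Setting $`U=I`$ in Eq. (10), the dilaton potential is $`V(S)=\frac{1}{2}uF_\pi ^2(3-\gamma )\mathrm{Tr}[m]S^4-2uF_\pi ^2\mathrm{Tr}[m]S^{3-\gamma }`$, and precisely this choice of the quartic coefficient makes $`S=1`$ (i.e. $`D=0`$) a stationary point with positive curvature:

```python
def V(S, gamma=0.3, u=1.0, F=1.0, trm=1.0):
    """Dilaton potential from Eq. (10) at U = I (illustrative parameters)."""
    return 0.5 * u * F**2 * (3 - gamma) * trm * S**4 - 2 * u * F**2 * trm * S**(3 - gamma)

h = 1e-6
dV = (V(1 + h) - V(1 - h)) / (2 * h)           # first derivative at S = 1
d2V = (V(1 + h) - 2 * V(1) + V(1 - h)) / h**2  # second derivative at S = 1
print(abs(dV) < 1e-6)  # True: no term linear in the dilaton survives
print(d2V > 0)         # True: S = 1 is a stable minimum, the dilaton is not tachyonic
```

Analytically, the curvature at $`S=1`$ is $`2uF_\pi ^2(3-\gamma )(1+\gamma )\mathrm{Tr}[m]`$, which is positive for $`0<\gamma <3`$ but vanishes in the chiral limit, foreshadowing the discussion below.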
The necessity for this value can be established by expanding $`\mathcal{L}`$ in powers of the dilaton field $`D`$. The elimination of the destabilizing term linear in $`D`$ is accomplished by fixing it to be $`-\frac{1}{2}uF_\pi ^2(3-\gamma )tr[m]`$ as in Eq.(10). The dependence of the $`S^4`$ coefficient on the explicit scale and chiral symmetry breaking parameter $`m`$ is dictated in order for the symmetry to be realized à la Nambu-Goldstone with $`<0|S|0>=1`$ and $`<0|U|0>=I`$ so that $`<0|D|0>=0`$ and $`<0|\pi ^i|0>=0`$. The vanishing of the $`S^4`$ coupling in the chiral limit, $`m\to 0`$, is required since a potential of the form $`\lambda S^4`$ with $`\lambda `$ nonvanishing gives a classical vacuum corresponding to $`<S>_0=0`$ which drives $`<D>_0\to -\mathrm{\infty }`$. This instability signals that the corresponding effective Lagrangian realizes the symmetry in a Wigner-Weyl mode. Consequently, a Nambu-Goldstone realization of the symmetry requires a vanishing of the $`S^4`$ coupling in the chiral limit, $`m\to 0`$. On the other hand, one cannot simply ignore the $`S^4`$ term entirely since in its absence, the dilaton becomes tachyonic for $`m\ne 0`$. In the exact symmetry chiral limit, $`m\to 0`$, the invariant effective Lagrangian is simply obtained as $$\mathcal{L}=\frac{1}{2}F_D^2\partial _\mu S\partial ^\mu S+\frac{1}{4}F_\pi ^2S^2tr(\partial _\mu U^{\dagger }\partial ^\mu U).$$ (13) Let us next consider nonlinear realizations of another type of space-time symmetry, namely supersymmetry (SUSY). For spontaneously broken SUSY, the dynamics of the Nambu-Goldstone fermion, the Goldstino, is described by the Akulov-Volkov Lagrangian .
The nonlinear SUSY transformations of the Goldstino fields are given by $`\delta ^Q(\xi ,\overline{\xi })\lambda ^\alpha `$ $`=`$ $`F\xi ^\alpha +\mathrm{\Lambda }^\rho (\xi ,\overline{\xi })\partial _\rho \lambda ^\alpha `$ (14) $`\delta ^Q(\xi ,\overline{\xi })\overline{\lambda }_{\dot{\alpha }}`$ $`=`$ $`F\overline{\xi }_{\dot{\alpha }}+\mathrm{\Lambda }^\rho (\xi ,\overline{\xi })\partial _\rho \overline{\lambda }_{\dot{\alpha }},`$ (15) where $`\xi ^\alpha `$, $`\overline{\xi }_{\dot{\alpha }}`$ are the Weyl spinor SUSY transformation parameters and $`\mathrm{\Lambda }^\rho (\xi ,\overline{\xi })\equiv \frac{i}{F}\left(\lambda \sigma ^\rho \overline{\xi }-\xi \sigma ^\rho \overline{\lambda }\right)`$ is a Goldstino field dependent translation vector and $`F`$ is the Goldstino decay constant. The Akulov-Volkov Lagrangian takes the form $$\mathcal{L}_{AV}=-\frac{F^2}{2}detA,$$ (16) with $`A_\mu ^\nu =\delta _\mu ^\nu +\frac{i}{F^2}\lambda \stackrel{\leftrightarrow }{\partial _\mu }\sigma ^\nu \overline{\lambda }`$ the Akulov-Volkov vierbein. Under the nonlinear SUSY variations, it transforms as the total divergence $$\delta ^Q(\xi ,\overline{\xi })\mathcal{L}_{AV}=\partial _\rho \left(\mathrm{\Lambda }^\rho \mathcal{L}_{AV}\right),$$ (17) and hence the associated action $`I_{AV}=\int d^4x\mathcal{L}_{AV}`$ is SUSY invariant. Finally, let us explore the possibility of having simultaneous nonlinear realizations of scale, chiral, and super symmetries.
The superconformal algebra - nontrivially relates the SUSY and scale symmetry transformations as $$[\delta ^D(ϵ),\delta ^Q(\xi ,\overline{\xi })]=\frac{1}{2}\delta ^Q(ϵ\xi ,ϵ\overline{\xi }).$$ (18) As a consequence, the Goldstino fields transform with scaling weight $`\frac{1}{2}`$ (recall the Nambu-Goldstone bosons of the spontaneously broken global symmetry carry scaling weight $`0`$) as $`\delta ^D(ϵ)\lambda _\alpha `$ $`=`$ $`ϵ\left({\displaystyle \frac{1}{2}}+x^\nu _\nu \right)\lambda _\alpha `$ (19) $`\delta ^D(ϵ)\overline{\lambda }^{\overline{\alpha }}`$ $`=`$ $`ϵ\left({\displaystyle \frac{1}{2}}+x^\nu _\nu \right)\overline{\lambda }^{\overline{\alpha }}.`$ (20) As is the case for any matter field, the SUSY is nonlinearly realized - on the dilaton field as $$\delta ^Q(\xi ,\overline{\xi })D=\mathrm{\Lambda }^\rho (\xi ,\overline{\xi })_\rho D.$$ (22) We now attempt to construct an effective Lagrangian containing the Goldstino, dilaton, and the Nambu-Goldstone bosons of the global symmetry group $`G`$ which non-linearly realizes the scale, $`G`$, and super symmetries, up to some soft explicit breakings of the scale and $`G`$ symmetries. To render the Akulov-Volkov Lagrangian scale invariant, we simply multiply it by $`S^4`$ to raise its scaling weight to four. On the other hand, the scale and $`G`$ invariant effective Lagrangian pieces can be made nonlinearly SUSY invariant - by simply multiplying them by the Akulov-Volkov determinant, $`\mathrm{det}A`$, and replacing all space-time derivatives by nonlinear SUSY covariant derivatives so that, for example, $`_\mu \pi ^i𝒟_\mu \pi ^i=(A^1)_\mu {}_{}{}^{\nu }_{\nu }^{}\pi ^i`$. 
So doing, we secure the effective Lagrangian $`\mathcal{L}`$ $`=`$ $`-{\displaystyle \frac{F^2}{2}}(detA)S^4+{\displaystyle \frac{F_D^2}{2}}(detA)𝒟_\mu S𝒟^\mu S`$ (24) $`+{\displaystyle \frac{F_\pi ^2}{4}}(detA)S^2\mathrm{Tr}\left[𝒟_\mu U^{\dagger }𝒟^\mu U\right]+uF_\pi ^2(detA)S^{3-\gamma }\mathrm{Tr}\left[mU^{\dagger }+Um\right],`$ where the last term is a soft explicit chiral and scale symmetry breaking term. However, noting that $`\mathrm{det}A`$ starts with unity, then in order to guarantee that the scale symmetry be realized in a Nambu-Goldstone manner, the coefficient of the $`S^4`$ term which is the Goldstino decay constant cannot be chosen arbitrarily, but instead must be proportional to the soft explicit chiral and scale symmetry breaking parameters. Explicitly it is given by $$F^2=(3-\gamma )uF_\pi ^2\mathrm{Tr}[m]$$ (25) The necessity of this identification is for exactly the same reason as discussed in the non-SUSY case. Unless this coefficient vanishes in the good symmetry chiral limit, the dilaton vacuum value is driven to negative infinity. Of course, the fact that the Goldstino decay constant (and hence the SUSY breaking scale) vanishes in the chiral limit raises its own host of complications. First and foremost, it implies the scale symmetry and SUSY cannot both be simultaneously realized as nonlinear symmetries. Note that there is no difficulty in constructing an effective Lagrangian invariant under both nonlinear SUSY and chiral symmetry transformations. It is only when both SUSY and scale symmetry are to be realized nonlinearly that one encounters the inconsistency. The source of the problem can be directly traced to the fact that the Akulov-Volkov Lagrangian relates the coefficients of the derivatively coupled Goldstino self interactions and an overall constant which is the vacuum energy accompanying the spontaneous supersymmetry breaking.
This constant vacuum energy term is ignorable until one makes the model nonlinearly scale invariant, which is achieved by multiplying the entire Akulov-Volkov Lagrangian by $`S^4`$. So doing, not only do the Goldstino kinetic term and its self interactions get multiplied by this factor, but so does the constant term. As such the erstwhile constant term now becomes a dilaton self interaction potential term. In order for the dilaton to be a Nambu-Goldstone particle, it cannot sustain a potential whose coefficients are nonvanishing in the chiral limit. Thus the coefficient of this term which is the Goldstino decay constant must vanish in the chiral limit. But this is the same coefficient as that multiplying the entire Akulov-Volkov determinant. Since the Goldstino decay constant is the matrix element of the supersymmetry current between the vacuum and the Goldstino state, its vanishing signals the nonviability of the Goldstone realization. Consequently, both symmetries cannot be simultaneously realized nonlinearly and the spectrum cannot contain both a dilaton and a Goldstino as Nambu-Goldstone particles. We have been unable to find a way out of this conundrum. One may try to include additional low energy degrees of freedom and see if the resulting model is consistent. Since $`R`$ symmetry is another component of the superconformal algebra, we have investigated also including an $`R`$-axion, the Nambu-Goldstone boson of spontaneously broken $`R`$ symmetry. Once again the resultant effective Lagrangian does not have a consistent interpretation in the chiral limit. It thus appears as if spontaneous supersymmetry breaking requires the presence of explicit (hard) scale symmetry breaking. This is certainly the case for the various models which have been studied in the literature -. Here we have provided an argument that this is actually the case in general. Note that the literature contains numerous studies of models purporting to include both dilatons and Goldstinos.
In all of these models, however, the dilaton acquires a non-zero vacuum expectation value. Consequently, it is not really a Nambu-Goldstone boson and there is no contradiction with our result. This work was supported in part by the U.S. Department of Energy under grant DE-FG02-91ER40681 (Task B).
no-problem/9905/chao-dyn9905035.html
# DYNAMICS OF ELASTIC EXCITABLE MEDIA ## 1 Introduction The Burridge–Knopoff model \[Burridge & Knopoff, 1967\] mimics the interaction of two plates in a geological fault as a chain of blocks elastically coupled together and to one of the plates, and subject to a friction force by the surface of the other plate, such that they perform stick–slip motions — Fig. 1a. This simple system reproduces some statistical features of real earthquakes \[Carlson & Langer, 1989\] such as the Gutenberg–Richter power-law distribution \[Gutenberg & Richter, 1956\], considered an example of self-organized criticality \[Carlson et al., 1994\], which obviously involves a large number of degrees of freedom. However, recent laboratory experiments \[Rubio & Galeano, 1994\] that attempt to reproduce these dynamics in a real stick–slip dynamical system consisting of an elastic gel sliding around a metallic cylinder have shown that low-dimensional phenomena are more robust in reality, and, although proven to be unstable in the Burridge–Knopoff model, do show up in the laboratory. In a different realm, excitable media are usually studied using the model of van der Pol, FitzHugh, and Nagumo \[van der Pol & van der Mark, 1928, FitzHugh, 1960, FitzHugh, 1961, Nagumo et al., 1962\]. This model normally includes only diffusive coupling. Originally from physiology and chemistry, excitable media have also captured the attention of physicists and mathematicians working in the area of nonlinear science because of the apparent universality of many features of their complex spatiotemporal properties \[Meron, 1992\]. We have shown \[Cartwright et al., 1997\] that a Burridge–Knopoff model with a lubricated creep–slip friction force law showing viscous properties at both the low and high velocity limits (Fig. 1b) is a type of van der Pol–FitzHugh–Nagumo excitable medium in which the local interaction is elastic rather than diffusive. 
We have investigated the dynamics of the model and have shown that its behaviour is dominated by low-dimensional structures, including global oscillations and propagating fronts. Here we investigate further the dynamics of elastic excitable media, and focus on the behaviour of the fronts. ## 2 The Model Our elastic excitable medium model may be written \[Cartwright et al., 1997\] $$\frac{\partial ^2\chi }{\partial t^2}=c^2\frac{\partial ^2\chi }{\partial x^2}-(\chi -\nu t)-\gamma \varphi \left(\frac{\partial \chi }{\partial t}\right),$$ (1) where, in the language of frictional sliding, $`\chi (x,t)`$ represents the time-dependent local longitudinal deformation of the surface of the upper plate in the static reference frame of the lower plate, $`\varphi (\partial \chi /\partial t)=(\partial \chi /\partial t)^3/3-\partial \chi /\partial t`$ is the friction function, as the dashed line in Fig. 1b, $`\gamma `$ measures the magnitude of the friction, $`c`$ is the longitudinal speed of sound, and $`\nu `$ represents the pulling velocity or slip rate. Compare this with the discrete Burridge–Knopoff model from which Eq. (1) may be derived in the continuum limit $$m\frac{d^2x_i}{dt^2}=k_c(x_{i+1}-2x_i+x_{i-1})-k_p(x_i-Vt)-F_f\left(\frac{dx_i}{dt}\right),$$ (2) where $`x_i`$ is the departure of block $`i`$ from its equilibrium position. It has been noted \[Carlson et al., 1994\] that in some cases discrete Burridge–Knopoff models fail to attain a well-defined continuum limit. This is not a consequence of any numerical instability in computer simulations, but of the hyperbolic nature of the equation and of the shape of the friction force commonly used. In such cases, Eq. (1) should be considered as a symbolic representation of the well-defined discrete dynamics of Eq. (2). From Eq. 
(1) we can obtain an expression for the local velocity $`\psi =\partial \chi /\partial t`$ of the interface that gives us the model written as a pair of differential equations of first order in time $$\frac{\partial \psi }{\partial t}=\gamma (\eta -\varphi (\psi )),$$ (3) $$\frac{\partial \eta }{\partial t}=-\frac{1}{\gamma }\left(\psi -\nu -c^2\frac{\partial ^2\psi }{\partial x^2}\right).$$ (4) ## 3 Front Dynamics The dynamical behaviour of the elastic excitable medium model has been reported in detail in a previous paper \[Cartwright et al., 1997\]. It is notable that much of the chaotic behaviour shown by the discrete versions of the Burridge–Knopoff models built on the basis of a monotonic velocity weakening friction law \[Rice, 1993, Schmittbuhl et al., 1993, Xu & Knopoff, 1994\] becomes more organized in our case. Global oscillations and propagating fronts dominate a large proportion of the relevant parameter space. Moreover, the global oscillations show interesting instability mechanisms leading to the appearance of the propagating fronts. Among these instabilities, the most interesting is perhaps the occurrence of a period-doubling bifurcation at a finite spatial wavelength. As a result of this bifurcation, the globally synchronized oscillatory medium breaks into a finite number of equidistant pacemaker zones from which pairs of counterpropagating fronts are emitted. Fronts emitted from neighbouring pacemakers annihilate upon collision and the annihilation point then becomes an emitting centre for the next generation of fronts; the whole process repeats after two iterations. These results are graphically summarized by Fig. 2, where the transient evolution of a nearly uniform initial state is shown for $`\nu `$ just above the period-doubling instability. In the first stages of the evolution the dynamics is dominated by synchronized global oscillations. 
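As a rough illustration, the first-order system of Eqs. (3)-(4) can be integrated on a periodic ring with a simple explicit finite-difference scheme. This is a minimal sketch; the grid size, time step and the parameter values for $`\gamma `$, $`\nu `$ and $`c`$ are illustrative choices, not taken from the paper.

```python
import numpy as np

def phi(psi):
    # Cubic friction function phi(psi) = psi**3/3 - psi (dashed curve of Fig. 1b).
    return psi**3 / 3.0 - psi

def step(psi, eta, dx, dt, gamma=10.0, nu=0.5, c=1.0):
    """One explicit Euler step of the first-order system:
         psi_t = gamma * (eta - phi(psi)),
         eta_t = -(1/gamma) * (psi - nu - c**2 * psi_xx),
    on a periodic ring, with a centred second difference for psi_xx.
    All parameter values here are illustrative."""
    psi_xx = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / dx**2
    psi_new = psi + dt * gamma * (eta - phi(psi))
    eta_new = eta - (dt / gamma) * (psi - nu - c**2 * psi_xx)
    return psi_new, eta_new

# Nearly uniform initial state with a small periodic perturbation.
N, L = 256, 100.0
dx = L / N
x = np.arange(N) * dx
psi = 0.5 + 1e-3 * np.cos(2.0 * np.pi * x / L)
eta = phi(psi)  # start on the nullcline of the psi equation
dt = 1e-4
for _ in range(20000):
    psi, eta = step(psi, eta, dx, dt)
```

On longer runs one would monitor $`\psi (x,t)`$ for the global oscillations and front patterns described in the text; a scheme this simple is only adequate for short, qualitative runs.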
The instability then grows, giving rise during a certain time interval to a transient period-doubled structure. Finally this structure decays into a set of propagating fronts (or propagating pulses, since a pair of neighbouring fronts can be considered a pulse). This bifurcation was discussed in more detail in a previous work \[Cartwright et al., 1997\]. Here we focus on the properties of the propagating front regime with special emphasis on the selection mechanisms for the front velocity and spatial configuration. We suppose a solution of the type $`\psi (x,t)=f(\stackrel{~}{z})`$, where $`\stackrel{~}{z}=x/v+t`$, and $`v`$ is the front velocity. This together with the further rescaling $`z=\stackrel{~}{z}/\sqrt{1-c^2/v^2}`$ leads to $$\frac{d^2f}{dz^2}+\mu (f^2-1)\frac{df}{dz}+f=\nu ,$$ (5) which is the van der Pol equation with the nonlinearity rescaled by $`\mu =\gamma /\sqrt{1-c^2/v^2}`$. The propagating fronts are then periodic solutions of the van der Pol equation. The parameter $`\mu `$ is undefined until the value of the front velocity $`v`$ is chosen. However, we know that the period of the solution is a function $`T=T(\mu )`$ of $`\mu `$: in the limit of large $`\mu `$, $`T`$ behaves as $`T=k\mu +O(\mu ^{-1})`$, where $`k=3+(\nu ^2-1)\mathrm{ln}[(4-\nu ^2)/(1-\nu ^2)]`$ \[Feingold et al., 1988\]. Since this period should be commensurate with the system size $`S`$, we have the condition $`nT(\mu (v))=S/(v\sqrt{1-c^2/v^2})`$, where $`n`$ is an integer, to select the allowed front velocities, which in the large $`\mu `$ limit gives us the quantizing condition $`v=S/(nk\gamma )`$. The integer $`n`$ can be interpreted as the total number of pulses propagating in the system. Because Eq. (5) has bounded solutions only if $`v^2>c^2`$, the propagating fronts are supersonic. 
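Under the large-$`\mu `$ asymptotics just quoted, the quantizing condition $`v=S/(nk\gamma )`$ can be evaluated directly. The sketch below uses illustrative values of $`S`$, $`\gamma `$, $`\nu `$ and $`c`$ (not from the paper), and keeps only the supersonic speeds $`v>c`$ admitted by Eq. (5).

```python
import math

def vdp_period_coefficient(nu):
    """Large-mu period coefficient k in T ~ k*mu + O(1/mu), with
    k = 3 + (nu**2 - 1) * ln[(4 - nu**2)/(1 - nu**2)]  (Feingold et al., 1988).
    Valid for |nu| < 1; nu = 0 recovers the classic k = 3 - 2*ln(2)."""
    return 3.0 + (nu**2 - 1.0) * math.log((4.0 - nu**2) / (1.0 - nu**2))

def allowed_front_speeds(S, gamma, nu, c, n_max=50):
    """Quantized speeds v_n = S/(n*k*gamma) from the commensurability
    condition; only the supersonic ones (v > c) are admissible continuum
    solutions.  Parameter values below are illustrative."""
    k = vdp_period_coefficient(nu)
    return [(n, S / (n * k * gamma))
            for n in range(1, n_max + 1)
            if S / (n * k * gamma) > c]

speeds = allowed_front_speeds(S=100.0, gamma=5.0, nu=0.5, c=1.0)
```

The finite length of the returned list makes the point of the text concrete: past some pulse number $`n`$, no supersonic speed is available.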
Since our analytical estimations predict that $`v`$ becomes smaller than $`c`$ and approaches zero as the number of pulses in the system is increased, there should be a maximum number of pulses allowed in the system. However, numerical simulations show that it is possible to find solutions composed of an arbitrary number of propagating pulses with a proper choice of the initial conditions. A question then arises as to what is the long term behaviour of these solutions when the number of pulses exceeds the maximum allowed by the restrictions on the front velocity. There are three simple scenarios logically compatible with the analysis:

1. Some of the pulses are annihilated by the dynamics.

2. Since the configuration cannot propagate rigidly at a compatible uniform speed, nondecaying fluctuations of the velocity of individual pulses should be observed.

3. The system evolves into a uniformly propagating solution with speed approaching the singular limit $`v=c`$. In this limit the pulses become discontinuous, so that Eq. (1) becomes ill-defined. The original system of blocks and springs is thus no longer well represented by the partial differential equation Eq. (1), to which our analytic estimations pertain, but the discrete effects present in Eq. (2) prevail.

The first scenario occurs, but only during a transient phase: at long times states with arbitrary numbers of pulses are reached and maintained in time. Examples of how the second scenario could be realized will be briefly discussed below. We first focus on the third scenario, for which we find abundant numerical evidence. Figure 3 summarizes this. In the upper frame of the first column we see a propagating solution composed of only one pulse travelling around the system to the right. Using the spatial coordinate $`x`$ as a parameter we have drawn below the phase-space diagram of such a solution. 
This diagram shows geometrically that the spatial dependence of the solution is accurately described as the periodic solution of a van der Pol–FitzHugh–Nagumo equation. The second column shows a two-pulse solution propagating to the left at a speed approximately one half that of the single-pulse solution, and thus closer to the sound velocity $`c`$, as expected from the analytical estimations. The phase-space projection below shows general agreement with the prediction of Eq. (5), although a small discrepancy is already visible. Finally, the third column in Fig. 3 shows a breakdown of the continuum model. A train of six pulses travelling at constant speed has been generated, which is forbidden within the framework of the continuum analysis. They travel to the left at speed essentially $`c`$. Notice the sharp gradients at both the leading and trailing edges of the pulses. If we refine the numerical spatial discretization, this just increases the gradients, so that we may expect discontinuities in the continuum limit. The discrete model Eq. (2) introduced in the computer should better represent the physical system of springs and blocks than the partial differential equation Eq. (1). The phase-space diagram clearly shows a strong departure from the solutions of the van der Pol–FitzHugh–Nagumo equations. A well-defined phase-space structure appears, which should be understood as a property of the purely discrete system, different in this case from the van der Pol–FitzHugh–Nagumo phase-space structure. Discreteness effects have been studied in several models with velocity weakening friction \[Rice, 1993, Schmittbuhl et al., 1993, Xu & Knopoff, 1994, Galeano et al., 1998\]. When $`\nu `$ has been taken very far from the slipping threshold the behaviour locally is of autonomous relaxation oscillations, and a slightly different kind of propagating front dynamics occurs. 
By setting $`\nu =0`$ we recover the symmetric form of the van der Pol equation, supplemented in our case with the elastic spatial term. Now a Floquet stability analysis of the homogeneous oscillatory state shows the development of a long wavelength instability as $`\nu `$ decreases. The evolution of this instability is displayed in Fig. 4. One can roughly describe this behaviour as the formation of spatial regions where the medium undergoes almost synchronous relaxation oscillations. Neighbouring regions, however, oscillate in antiphase and a phase-change front defining the border between them travels around the system. On the left we can see some of these fronts moving at different velocities during a transient stage, while on the right is displayed an asymptotic state where the fronts have reached an equilibrium configuration in which the phase distribution travels rigidly. This behaviour is reminiscent of the phase dynamics found in the complex Ginzburg–Landau equation \[Montagne et al., 1996, Montagne et al., 1997\]. A closer look at Fig. 4 indicates that the spatial regions that we describe as synchronized are in fact regions where the excitation or relaxation propagates extremely fast. This is visualized in the picture as a very slight tilt of the bands representing the oscillation. Notice that the projection of these bands shrinks considerably near the places where the phase jumps. By taking spacelike slices of these pictures we obtain the instantaneous configuration of the excitation field. Such configurations look like travelling pulses varying wildly in size and velocity. This behaviour can be considered an extreme manifestation of scenario 2 above. ## 4 Discussion A lubricated friction law in the Burridge–Knopoff model can be justified by both theory \[Persson, 1995, Persson, 1997\] and experiments \[Heslot et al., 1994, Brechet & Estrin, 1994, Kilgore et al., 1993, Demirel & Granick, 1996, Budakian et al., 1998\]. 
Moreover, studies of peeling adhesive tape \[Hong & Yue, 1995\], of Saffman–Taylor fracture in viscous fingering \[Kurtze & Hong, 1993\], and of the Portevin–Le Châtelier effect \[Kubin & Estrin, 1985, Lebyodkin et al., 1995\] lead to the same form of friction law as we use here. Certainly our model well represents the qualitative characteristics of the laboratory stick–slip dynamics experiments \[Rubio & Galeano, 1994\] referred to above, and might have relevance to the present debate on shear stress and friction in real geological faults \[Sleep & Blanpied, 1992, Melosh, 1996, Cohen, 1996\]. In the context of excitable systems elastic coupling has been left aside, because in the chemical and biological systems studied up to now the coupling is diffusive. However, an elastic excitable medium can be realized as an active transmission line or optical waveguide \[Cartwright et al., 1997\]. ## Acknowledgements The authors acknowledge the financial support of the Spanish Dirección General de Investigación Científica y Técnica, contracts PB94-1167 and PB94-1172.
no-problem/9905/gr-qc9905091.html
# What is the homogeneity of our universe telling us? Mark Trodden (trodden@erebus.cwru.edu) and Tanmay Vachaspati (tanmay@theory4.cwru.edu), Particle Astrophysics Theory Group, Department of Physics, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106-7079, USA ## Abstract The universe we observe is homogeneous on super-horizon scales, leading to the “cosmic homogeneity problem”. Inflation alleviates this problem but cannot solve it within the realm of conservative extrapolations of classical physics. A probabilistic solution of the problem is possible but is subject to interpretational difficulties. A genuine deterministic solution of the homogeneity problem requires radical departures from known physics. Awarded Honorable Mention in the 1999 Gravity Research Foundation Essay Competition. CWRU-P20-99 It is well known that the standard cosmological model has a homogeneity problem: why is the temperature of the cosmic microwave background the same to a high degree of accuracy in regions that have never been in causal contact? This observation is one of the primary motivations for the concept of inflation, which has been the foundation of a revolution in cosmology over the last two decades. The rapid expansion of cosmic inflation can stretch an initially small, smooth spatial region to a size much larger than the observable universe today, providing a hope of explaining the present day large-scale homogeneity of the universe. While the observed homogeneity is a compelling reason to study inflation in detail, the mechanism can only truly be said to solve the homogeneity problem if the initial smooth region from which inflation begins is causally correlated, so that its own homogeneity is achievable by physical processes. Otherwise, we are left with the problem of understanding the homogeneity of the initial inflating patch; another, albeit less severe, homogeneity problem. 
It is then a striking result that, under certain conservative assumptions, if the universe is not born inflating, large-scale homogeneity is required for inflation to begin. Hence classical inflationary models, which employ conservative extrapolations of known physics, can only be considered to alleviate, but not solve, the homogeneity problem. Furthermore, the extensions to known physics that are required to solve the homogeneity problem are quite novel and provide hints that may be used to construct new physical theories. The main constraint on inflationary models comes from the requirement that gravitational forces not be “too repulsive”. This constraint can be embodied in the Raychaudhuri equation governing the divergence of light rays (i.e. null geodesics), which says that, if $`\theta =\nabla _aN^a`$ denotes the divergence of a congruence of null geodesics whose tangent vectors are $`N^a`$, then $$\frac{d\theta }{d\tau }+\frac{1}{2}\theta ^2=-\sigma _{ab}\sigma ^{ab}+\omega _{ab}\omega ^{ab}-R_{ab}N^aN^b,$$ (1) where $`\tau `$ is the affine parameter along the null geodesic, $`\sigma _{ab}`$ is the shear tensor, $`\omega _{ab}`$ the twist tensor and $`R_{ab}`$ the Ricci tensor. For a specially chosen congruence of null rays - one that is hypersurface orthogonal - it can be shown that $$\frac{d\theta }{d\tau }\le -R_{ab}N^aN^b=-8\pi T_{ab}N^aN^b,$$ (2) where $`T_{ab}`$ is the energy-momentum tensor, and in the last equality we have used Einstein’s equations in natural units. The weak energy condition concerns the energy-momentum tensor of the matter. This condition is satisfied by all known matter at the classical level, and it seems reasonable to assume that it should be satisfied generally. A straightforward consequence is $$T_{ab}N^aN^b\ge 0,$$ (3) which for a perfect fluid amounts to requiring a positive energy density, $`\rho \ge 0`$, and a pressure that is bounded from below by minus the energy density: $`p\ge -\rho `$. 
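For a perfect fluid the weak energy condition thus reduces to two inequalities on the density and pressure; a minimal check might read as follows (a hypothetical helper for illustration, not from the essay):

```python
def weak_energy_condition(rho, p):
    """For a perfect fluid, T_ab N^a N^b >= 0 for every null vector N^a
    reduces to the two inequalities quoted in the text:
    rho >= 0 and p >= -rho."""
    return rho >= 0.0 and p >= -rho

# A cosmological-constant-like fluid (p = -rho) exactly saturates the bound,
# while p < -rho would violate the condition.
ok = weak_energy_condition(1.0, -1.0)
too_negative = weak_energy_condition(1.0, -1.5)
```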
The Raychaudhuri equation, in conjunction with the weak energy condition, then leads to $$\frac{d\theta }{d\tau }\le 0.$$ (4) This equation is a form of the physical statement that the gravitational forces between reasonable matter should not be too repulsive. Were negative energy densities or arbitrarily large negative pressures allowed in the theory, the statement would not be true. We want to understand the implications of the constraint (4) for inflationary models. Consider a universe in which, due to causal processes, a small patch is undergoing inflation but is immersed in a spacetime which itself may be expanding but not inflating. Then there are null rays that originate in the background spacetime and enter the inflating region. We can calculate $`\theta `$ in both the background region and the inflating region. In fact, if the expansion of both regions is given by a scale factor ($`a(t)`$) as in the standard cosmology, radially incoming null rays have a divergence given by $$\theta =\frac{2}{a(t)}\left(H-\frac{1}{x}\right),$$ (5) where, as usual, $`H=\dot{a}/a`$ and $`x`$ is the physical radial distance of the ray at time $`t`$. Eq. (4) implies that $`\theta `$ cannot be negative in the background region and positive in the inflating region which, when used in conjunction with (5), gives: $$H_{\mathrm{inf}}^{-1}\ge H_{\mathrm{FRW}}^{-1}(t_i),$$ (6) where $`t_i`$ is the time that inflation started and the subscripts refer to the inflating (inf) and non-inflating (FRW) spacetimes. It seems reasonable to assume that the conditions leading to inflation must be satisfied over a region larger than $`H_{\mathrm{inf}}^{-1}`$. Then, from eq. (6), the patch size that can inflate to form our observable universe has to be larger than the background Hubble scale, $`H_{\mathrm{FRW}}^{-1}(t_i)`$. Note that $`H^{-1}`$ is large compared to typical length scales over which particle interactions can homogenize the universe. 
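The sign structure of the divergence in Eq. (5) is easy to verify numerically: for radially incoming rays, $`\theta `$ vanishes exactly at the Hubble radius $`x=1/H`$, is negative inside it and positive outside it. The numbers below are illustrative, not from the essay.

```python
def null_ray_divergence(a, H, x):
    """Divergence of radially incoming null rays, Eq. (5):
    theta = (2/a) * (H - 1/x), with x the physical radial distance.
    theta vanishes at the Hubble radius x = 1/H."""
    return (2.0 / a) * (H - 1.0 / x)

# Inside the Hubble radius the incoming congruence converges (theta < 0);
# outside, it diverges (theta > 0).
a, H = 1.0, 1.0
theta_inside = null_ray_divergence(a, H, x=0.5)
theta_outside = null_ray_divergence(a, H, x=2.0)
```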
Hence, large-scale homogeneity has to be an initial condition for cosmic inflation to proceed, and therefore such inflation does not solve the homogeneity problem. This is the striking result alluded to earlier. Nevertheless, the universe does exhibit large-scale homogeneity, the only proposed explanation for which is inflation. It is therefore worthwhile considering what it takes to genuinely solve the homogeneity problem in the context of inflationary models. The derivation of the above result does not hold if at least one of the following statements is true:

* There exist violations of the classical Einstein equations, say due to quantum effects.
* The weak energy condition is violated in the early universe.
* The universe has non-trivial topology.
* The universe is born directly into an inflating universe, that is, there is no pre-inflationary epoch, such as might occur in quantum cosmology.
* Singularities other than the big bang are present.

Probably the most conservative approach is to consider quantum effects in the early universe. We think this is conservative because we know that quantum mechanics correctly describes the world we live in, whereas the other possible options require conditions that are not seen today. However, a quantum mechanical explanation of the homogeneity is necessarily a probabilistic solution to the problem, and is subject to differing interpretations since we observe only one universe. When considering a quantum mechanical origin of the homogeneity, one must consider both the possibility of directly producing the observed universe, and the possibility of generating the appropriate inflationary initial conditions from which it could evolve. The principle behind adopting inflation as a paradigm is that it greatly enhances the probability for the creation of the universe that we see. However, this does leave open the issue of the probability of producing inflationary initial conditions themselves. 
An analogy might help clarify this situation. Suppose there are a hundred coins laid out on a table and we find that all of them have their heads facing up. Should we then say that the coins were thrown at random and we are simply seeing a highly unlikely chance event? Or should we say that, at a later time, someone carefully arranged the coins with their heads up? Inflation is analogous to the latter case here. However, it can only be viable if we understand the probability of a process that can “turn all the coins face up”. In the cosmological context there are anthropic considerations that confuse the interpretation yet further - it may be that we can only see a coin if its head is facing up. Such questions are extremely difficult to answer and, at present, it is fair to say that no convincing answer is known. The same difficulties (together with other technical ones) arise when one attempts to explain the creation of our universe by quantum cosmology . Faced with the difficulties of a probabilistic interpretation of obtaining cosmic inflation via quantum processes, we may consider less conservative directions. If new kinds of matter are present that couple in novel ways to the metric, they can either modify Einstein’s equations such that the last equality in (2) does not hold, or else they can provide violations of the weak energy condition (“extremely repulsive matter”). It is interesting to note that non-minimally coupled scalar fields are a specific example of matter that can evade our constraint. Such fields arise naturally in supergravity and string theory, a possible quantum theory of gravity. It may be that the observed homogeneity is steering us to consider these fields as promising inflaton candidates. In addition, if space has non-trivial topology , we may also recover an inflationary solution to the homogeneity problem. In this case, however, the length scale associated with cosmic topology should be comparable to the inflating horizon size. 
Another escape from the result is possible if we include singularities other than the big bang in the spacetime. Here, however, the singularity must border the inflating patch of the universe. So, even if one did produce an inflationary patch, there would be no way of predicting events in this patch without first understanding the nature and influence of the singularity . To summarize, the observed homogeneity problem cannot have an inflationary solution within conventional extrapolations of classical physics. If one wishes to find a solution, novel departures from classical physics must be considered. The quantum solution is beset with interpretational difficulties. Other classical solutions rely on violations of the weak energy condition or modifications of Einstein’s equations that would, in effect, provide a strongly repulsive gravitational event in the history of the universe.
no-problem/9905/nucl-th9905054.html
# Large $`p_t`$ enhancement from freeze out ## 1 Introduction In continuum and fluid dynamical models, particles, which leave the system and reach the detectors, can be taken into account via freeze out (FO) or final break-up schemes, where the frozen out particles are formed on a 3-dimensional hypersurface in space-time. Such FO descriptions are important ingredients of evaluations of two-particle correlation data, transverse-, longitudinal-, radial-, and cylindrical-flow analyses, transverse momentum and transverse mass spectra and many other observables. The FO on a hypersurface is a discontinuity where the pre-FO equilibrated and interacting matter abruptly changes to non-interacting particles, showing an ideal gas type of behavior. The general theory of discontinuities in relativistic flow was not worked out for a long time, and the 1948 work of A. Taub Ta48 discussed discontinuities across propagating hypersurfaces only (which have a space-like unit normal vector, $`d\widehat{\sigma }^\mu d\widehat{\sigma }_\mu =-1`$). Events happening on a propagating, (2 dimensional) surface belong to this category. An overall sudden change in a finite volume is represented by a hypersurface with a time-like normal, $`d\widehat{\sigma }^\mu d\widehat{\sigma }_\mu =+1`$. The freeze out surface is frequently a surface with time like normal. In 1987 Taub’s approach was generalized to both types of surfaces Cs87 , making it possible to take into account conservation laws exactly across any surface of discontinuity in relativistic flow. When the EoS is different on the two sides of the freeze out front these conservation laws yield changing temperature, density, flow velocity across the front CF74 ; Bu96 ; ALC98 ; AC98 ; 9ath99 . 
## 2 Conservation laws across idealized freeze out fronts The freeze out surface is an idealization of a layer of finite thickness (of the order of a mean free path or collision time) where the frozen-out particles are formed and the interactions in the matter become negligible. To use the well-known Cooper–Frye formula CF74 $$E\frac{dN}{d^3p}=\int f_{FO}(x,p;T,n,u^\nu )p^\mu d\sigma _\mu $$ (1) we have to know the post-FO distribution of frozen out particles, $`f_{FO}(x,p;T,n,u^\nu )`$, which is not known from the fluid dynamical model. To evaluate measurables we have to know the correct parameters of the matter after the FO discontinuity! The post freeze out distribution need not be a thermal distribution! In fact $`f_{FO}`$ should contain only particles which cross the FO-front outwards, $`p^\mu d\widehat{\sigma }_\mu >0`$, so if $`d\widehat{\sigma }^\mu `$ is space-like this seriously constrains the shape of $`f_{FO}`$. This problem was recognized in recent years, and the first suggestions for the solution were published recently Bu96 ; ALC98 ; AC98 ; 9ath99 . If we know the pre freeze out baryon current and energy-momentum tensor, $`N_0^\mu `$ and $`T_0^{\mu \nu },`$ we can calculate locally, across a surface element of normal vector $`d\widehat{\sigma }^\mu `$ the post freeze out quantities, $`N^\mu `$ and $`T^{\mu \nu }`$, from the relations Ta48 ; Cs87 : $`[N^\mu d\widehat{\sigma }_\mu ]=0`$ and $`[T^{\mu \nu }d\widehat{\sigma }_\mu ]=0,`$ where $`[A]\equiv A-A_0`$. In numerical calculations the local freeze out surface can be determined most accurately via self-consistent iteration Bu96 ; NL97 . ## 3 Freeze out distribution from kinetic theory We present a kinetic model simplified to the limit where we can obtain a post FO particle momentum distribution. Let us assume an infinitely long tube with its left half ($`x<0`$) filled with nuclear matter, while in the right half vacuum is maintained. 
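Before developing the kinetic model, the structure of the Cooper–Frye formula, Eq. (1), for a single surface element with a time-like normal can be sketched schematically. The Boltzmann–Jüttner form, the normalization and all parameter values below are assumptions made for the example, not taken from the paper.

```python
import math

def juttner(E, T, mu=0.0, hbar=1.0):
    """Boltzmann-Juttner phase-space density in the local rest frame,
    f = (2*pi*hbar)**-3 * exp((mu - E)/T); normalization assumed."""
    return math.exp((mu - E) / T) / (2.0 * math.pi * hbar)**3

def cooper_frye_element(px, py, pz, m, T, dV):
    """Contribution of one freeze-out surface element with time-like normal
    d_sigma^mu = (dV, 0, 0, 0) (its local rest frame) to the invariant
    spectrum: E dN/d^3p = f(E) * p^mu d_sigma_mu = f(E) * E * dV.
    A schematic piece of Eq. (1); the full spectrum sums such terms over
    the whole hypersurface."""
    E = math.sqrt(m**2 + px**2 + py**2 + pz**2)
    return juttner(E, T) * E * dV

# Example: a pion-like mass and a typical freeze-out temperature (GeV units assumed).
low_p = cooper_frye_element(0.0, 0.0, 0.0, m=0.14, T=0.12, dV=1.0)
high_p = cooper_frye_element(2.0, 0.0, 0.0, m=0.14, T=0.12, dV=1.0)
```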
We can remove the dividing wall at $`t=0`$, and then the matter will expand into the vacuum. By continuously removing particles at the right end of the tube and supplying particles on the left end, we can establish a stationary flow in the tube, where the particles will gradually freeze out in an exponential rarefaction wave propagating to the left in the matter. We can move with this front, so that we describe it from the reference frame of the front (RFF). We can describe the freeze out kinetics on the r.h.s. of the tube assuming that we have two components of our momentum distribution, $`f_{free}(x,\stackrel{}{p})`$ and $`f_{int}(x,\stackrel{}{p})`$. However, we assume that at $`x=0`$, $`f_{free}`$ vanishes exactly and $`f_{int}`$ is an ideal Jüttner distribution, then $`f_{int}`$ gradually disappears and $`f_{free}`$ gradually builds up. Rescattering within the interacting component will lead to re-thermalization and re-equilibration of this component. Thus, the evolution of the component, $`f_{int}`$ is determined by drain terms and the re-equilibration. We use the relaxation time approximation to simplify the description of the dynamics. Then the two components of the momentum distribution develop according to the coupled differential equations: $$\partial _xf_{int}(x,\stackrel{}{p})dx=-\mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu )\frac{\mathrm{cos}\theta _\stackrel{}{p}}{\lambda }f_{int}(x,\stackrel{}{p})dx+\left[f_{eq}(x,\stackrel{}{p})-f_{int}(x,\stackrel{}{p})\right]\frac{1}{\lambda ^{\prime }}dx,$$ (2) $$\partial _xf_{free}(x,\stackrel{}{p})dx=+\mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu )\frac{\mathrm{cos}\theta _\stackrel{}{p}}{\lambda }f_{int}(x,\stackrel{}{p})dx.$$ (3) Here $`\mathrm{cos}\theta _\stackrel{}{p}=p^x/p`$ in the RFF frame. The first (loss) term in eq. (2) is an overly simplified approximation to the model presented in ref. ALC98 . 
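The coupled kinetics of Eqs. (2)-(3) can be illustrated with a deliberately simplified toy integration in which $`f_{eq}`$ is held fixed, i.e. the back-reaction of the drain on $`T`$, $`n`$ and $`u^\mu `$ demanded by the conservation laws is ignored. All numerical values are illustrative, not from the paper.

```python
import numpy as np

def evolve_freeze_out(n_steps=5000, dx=0.01, lam=1.0, lam_prime=0.1):
    """Toy forward-Euler integration of the two-component kinetics for one
    momentum magnitude on a grid of emission angles.  f_eq is held FIXED,
    so this is a sketch of the structure of Eqs. (2)-(3) only, not the
    self-consistent model of the paper."""
    cos_th = np.linspace(-0.99, 0.99, 99)      # cos(theta_p) grid
    f_eq = np.ones_like(cos_th)                # fixed, angle-independent f_eq
    f_int = f_eq.copy()
    f_free = np.zeros_like(cos_th)
    outgoing = cos_th > 0.0                    # the Theta(p^mu d_sigma_mu) cut
    for _ in range(n_steps):
        drain = np.where(outgoing, cos_th / lam * f_int, 0.0)
        f_int = f_int + dx * (-drain + (f_eq - f_int) / lam_prime)
        f_free = f_free + dx * drain
    return cos_th, f_int, f_free

cos_th, f_int, f_free = evolve_freeze_out()
```

Only outgoing momenta ($`\mathrm{cos}\theta _\stackrel{}{p}>0`$) accumulate in $`f_{free}`$; with $`f_{eq}`$ frozen, $`f_{free}`$ grows without bound, whereas in the full model the cooling of the interacting component makes it saturate.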
It expresses the fact that particles with momenta orthogonal to the FO surface ($`\mathrm{cos}\theta _\stackrel{}{p}=1`$) leave the system with bigger probability than particles emitted at an angle. The interacting component of the momentum distribution, described by eq. (2), shows the tendency to approach an equilibrated distribution with a relaxation length $`\lambda ^{\prime }`$. Of course, due to the energy, momentum and particle drain, this distribution, $`f_{eq}(x,\stackrel{}{p})`$ is not the same as the initial Jüttner distribution, but its parameters, $`n_{eq}(x)`$, $`T_{eq}(x)`$ and $`u_{eq}^\mu (x)`$, change as required by the conservation laws. In this case the change of the conserved quantities caused by the particle transfer from component $`int`$ to component $`free`$ can be obtained in terms of the distribution functions as: $$dN_i^\mu =-\frac{dx}{\lambda }\int \frac{d^3p}{p_0}p^\mu \mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu )\mathrm{cos}\theta _\stackrel{}{p}f_{int}(x,\stackrel{}{p})$$ (4) and $$dT_i^{\mu \nu }=-\frac{dx}{\lambda }\int \frac{d^3p}{p_0}p^\mu p^\nu \mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu )\mathrm{cos}\theta _\stackrel{}{p}f_{int}(x,\stackrel{}{p}).$$ (5) Due to the collision or relaxation terms $`T^{\mu \nu }`$ and $`N^\mu `$ change, and this should be considered in the modified distribution function $`f_{int}(x,\stackrel{}{p})`$. ### 3.1 Immediate re-thermalization limit Let us assume that $`\lambda ^{\prime }\ll \lambda `$, i.e. re-thermalization is much faster than particles freezing out, or much faster than parameters, $`n_{eq}(x)`$, $`T_{eq}(x)`$ and $`u_{eq}^\mu (x)`$ change. Then $`f_{int}(x,\stackrel{}{p})\approx f_{eq}(x,\stackrel{}{p})`$, for $`\lambda ^{\prime }\ll \lambda .`$ For $`f_{eq}(x,\stackrel{}{p})`$ we assume the spherical Jüttner form at any $`x`$ including both positive and negative momentum parts with parameters $`n(x),T(x)`$ and $`u_{RFG}^\mu (x)`$. (Here $`u_{RFG}^\mu (x)`$ is the actual flow velocity of the interacting, Jüttner component, i.e. 
the velocity of the Rest Frame of the Gas (RFG) Bu96 ). In this case the change of conserved quantities due to particle drain or transfer can be evaluated for an infinitesimal $`dx`$. The changes of the conserved particle currents and energy-momentum tensor in the RFF, eqs. (4,5), are given in ref. ALC98 . The new parameters of the distribution $`f_{int}`$, after moving to the right by $`dx`$, can be obtained from $`dN_i^\mu `$ and $`dT_i^{\mu \nu }`$. The differential equation describing the change of the proper particle density is ALC98 : $$dn_i(x)=u_{i,RFG}^\mu (x)dN_{i,\mu }(x).$$ (6) Although this covariant equation is valid in any frame, $`dN_i^\mu `$ are calculated in the RFF ALC98 . For the re-thermalized interacting component the change of Eckart’s flow velocity is given by $$du_{i,E,RFG}^\mu (x)=\mathrm{\Delta }_i^{\mu \nu }(x)\frac{dN_{i,\nu }(x)}{n_i(x)},$$ (7) where $`\mathrm{\Delta }_i^{\mu \nu }(x)=g^{\mu \nu }-u_{i,RFG}^\mu (x)u_{i,RFG}^\nu (x)`$ is a projector to the plane orthogonal to $`u_{i,RFG}^\mu (x)`$, while the change of Landau’s flow velocity is ALC98 $$du_{i,L,RFG}^\mu (x)=\frac{\mathrm{\Delta }_i^{\mu \nu }(x)dT_{i,\nu \sigma }u_{i,RFG}^\sigma (x)}{e_i+P_i}.$$ (8) Although for the spherical Jüttner distribution the Landau and Eckart flow velocities are the same, the changes of this flow velocity calculated from the loss of baryon current and from the loss of energy current are different, $`du_{i,E,RFG}^\mu (x)\ne du_{i,L,RFG}^\mu (x).`$ This is a clear consequence of the asymmetry caused by the freeze out process, as discussed in ref. ALC98 , i.e., the cut by $`\mathrm{\Theta }(p^\mu d\widehat{\sigma }_\mu )`$ changes the particle flow and the energy-momentum flow differently. This problem does not occur for the freeze out of a baryon-free plasma, where we have only $`du_{i,L}^\mu `$. The last task is to determine the change of the temperature parameter of $`f_{int}`$.
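The interplay of the drain and relaxation terms in eqs. (2) and (3) can be made concrete with a deliberately stripped-down numerical sketch (ours, for illustration only): the momentum dependence is reduced to the angle $`\mathrm{cos}\theta _{\vec{p}}`$ alone, and $`f_{eq}`$ is held fixed instead of being re-computed from the conservation laws as in the full model. All parameter values below are assumed.

```python
# Toy x-integration of the angular structure of eqs. (2)-(3).
# Simplifications (ours, not the paper's): momentum reduced to cos(theta_p),
# f_eq kept fixed, so only the drain/relaxation competition is visible.

LAM = 1.0      # freeze-out length lambda (assumed)
LAM_P = 0.2    # relaxation length lambda' << lambda (assumed)
NCOS, DX, NSTEP = 41, 0.01, 300

cosv = [-1.0 + 2.0 * i / (NCOS - 1) for i in range(NCOS)]
f_eq = [1.0] * NCOS            # fixed isotropic background (toy)
f_int = list(f_eq)             # starts as the equilibrium distribution
f_free = [0.0] * NCOS          # no frozen-out particles at x = 0

for _ in range(NSTEP):
    for i, c in enumerate(cosv):
        # drain acts only on outward-moving particles: the Theta cut
        drain = (c / LAM) * f_int[i] if c > 0.0 else 0.0
        relax = (f_eq[i] - f_int[i]) / LAM_P
        f_int[i] += DX * (relax - drain)
        f_free[i] += DX * drain

depletion = f_eq[-1] - f_int[-1]   # depletion of f_int at cos(theta) = +1
backward_free = max(f_free[i] for i, c in enumerate(cosv) if c <= 0.0)
```

In this frozen-background limit the forward part of $`f_{int}`$ relaxes to a steady depletion set by the ratio $`\lambda ^{}/\lambda `$, while $`f_{free}`$ is populated only at $`\mathrm{cos}\theta >0`$ — the qualitative origin of the forward-peaked post freeze out distribution.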
From the relation $`e\equiv u_\mu T^{\mu \nu }u_\nu `$ we readily obtain the expression for the change of energy density $$de_i(x)=u_{\mu ,i,RFG}(x)dT_i^{\mu \nu }(x)u_{\nu ,i,RFG}(x),$$ (9) and from the relation between the energy density and the temperature (see Chapter 3 in ref. Cs94 ), we can obtain the new temperature at $`x+dx`$. Fixing these parameters, we have fully determined the spherical Jüttner approximation for $`f_{int}`$. The application of this model to the baryon-free and massless gas gives the following coupled set of equations: $`{\displaystyle \frac{d\mathrm{ln}T}{dx}}`$ $`=`$ $`{\displaystyle \frac{u_\mu \tau ^{\mu \nu }u_\nu }{4\sigma _{SB}}},`$ $`{\displaystyle \frac{du^\mu }{dx}}`$ $`=`$ $`{\displaystyle \frac{3}{4\sigma _{SB}}}\left[\tau ^{\mu \nu }-u^\mu u_\sigma \tau ^{\sigma \nu }\right]u_\nu .`$ Here we use the EoS, $`e=\sigma _{SB}T^4`$, the definition $`dT^{\mu \nu }`$ $`=`$ $`dx\tau ^{\mu \nu }T^4`$, and $`x`$ is measured in units of $`\lambda `$. Now we can find the distribution function for the noninteracting, frozen out part of particles according to equation (3). The results are shown in Fig. 1. We would like to note that now $`f_{int}(x,\vec{p})`$ does not tend to the cut Jüttner distribution in the limit $`x\to \mathrm{\infty }`$. Furthermore, we obtain that $`T\to 0`$, when $`x\to \mathrm{\infty }`$ ALC98 . So, $`f_{int}(x,\vec{p})=\frac{1}{(2\pi \hslash )^3}\mathrm{exp}[(\mu -p^\nu u_\nu )/T]\to 0`$, when $`x\to \mathrm{\infty }`$. Thus, all particles freeze out in the present model, but such a physical FO requires an infinite distance (or time). This second problem may also be removed by using the volume emission model discussed in ref. 9ath99 . ## 4 Conclusions In a simple kinetic model we evaluated the freeze out distribution, $`f_{free}(x,p)`$, for stationary freeze out across a surface with space-like normal vector, $`d\widehat{\sigma }^\mu d\widehat{\sigma }_\mu <0`$.
In this model particles penetrating the surface outwards were allowed to freeze out with a probability $`\mathrm{cos}\theta _{\vec{p}}`$, and the remaining interacting component is assumed to be instantly re-thermalized. The three parameters of the interacting component, $`f_{int}`$, are obtained in each time step. The density of the interacting component gradually decreases and eventually vanishes, while the flow velocity and the energy density also decrease. The temperature decreases gradually as well, as a consequence of the gradual change in the emission mechanism. The resulting post freeze out distribution, $`f_{free}`$, is a superposition of cut Jüttner type components arising from a series of gradually slowing down Jüttner distributions. This leads to a final momentum distribution with a more dominant peak at zero momentum and a forward halo, Fig. 1. In this rough model a large fraction ($`95\%`$) of the matter is frozen out by $`x=3\lambda `$; thus, the distribution $`f_{free}`$ at this distance can be considered as a first estimate of the post freeze out distribution. One should also keep in mind that the model presented here does not have realistic behavior in the limit $`x\to \mathrm{\infty }`$, due to its one-dimensional character. These studies indicate that more attention should be paid to the final freeze out process, because a realistic freeze out description may lead to a large $`p_t`$ enhancement na44 ; na49 , as the considerations above indicate (Fig. 1). For accurate estimates more realistic models should be used. In the case of rapid hadronization of QGP and simultaneous freeze out, the idealization of a freeze out hypersurface may be justified; however, an accurate determination of the post freeze out hadron momentum distribution would require a nontrivial dynamical calculation. This work is supported in part by the Research Council of Norway, PRONEX (contract no. 41.96.0886.00), FAPESP (contract no. 98/2249-4) and CNPq. Cs. Anderlik, L.P. Csernai and Zs.I.
Lázár are thankful for the hospitality extended to them by the Institute for Theoretical Physics of the University of Frankfurt where part of this work was done. L.P. Csernai is grateful for the Research Prize received from the Alexander von Humboldt Foundation.
no-problem/9905/patt-sol9905002.html
ar5iv
text
# Two-color multistep cascading and parametric soliton-induced waveguides ## Abstract We introduce the concept of two-color multistep cascading for vectorial parametric wave mixing in optical media with quadratic (second-order or $`\chi ^{(2)}`$) nonlinear response. We demonstrate that the multistep cascading allows light-guiding-light effects with quadratic spatial solitons. With the help of the so-called ‘almost exact’ analytical solutions, we describe the properties of parametric waveguides created by two-wave quadratic solitons. Recent progress in the study of cascading effects in optical materials with quadratic (second-order or $`\chi ^{(2)}`$) nonlinear response has offered new opportunities for all-optical processing, optical communications, and optical solitons . Most of the studies of cascading effects employ parametric wave mixing processes with a single phase-matching and, as a result, two-step cascading. For example, the two-step cascading associated with type I second-harmonic generation (SHG) includes the generation of the second harmonic ($`\omega +\omega =2\omega `$) followed by reconstruction of the fundamental wave through the down-conversion frequency mixing (DFM) process ($`2\omega -\omega =\omega `$). These two processes are governed by one phase-matched interaction and they differ only in the direction of power conversion. The idea of exploring more than one simultaneous nearly phase-matched process, or double-phase-matched (DPM) wave interaction, became attractive only recently , for the purposes of all-optical transistors, enhanced nonlinearity-induced phase shifts, and polarization switching. In particular, it was shown that multistep cascading can be achieved by two second-order nonlinear cascading processes, SHG and sum-frequency mixing (SFM), and these two processes can also support a novel class of multi-color parametric solitons .
The physics involved in multistep cascading can be understood by analyzing a chain of parametric processes: SHG $`(\omega +\omega =2\omega )`$, SFM $`(\omega +2\omega =3\omega )`$, DFM $`(3\omega -\omega =2\omega )`$, DFM $`(2\omega -\omega =\omega )`$. The main disadvantage of this kind of parametric process for applications is that it requires nonlinear media transparent up to the third-harmonic frequency. Then, the important question is: Can we find parametric processes which involve only two frequencies but still provide all the advantages of multistep cascading? In this Rapid Communication, we answer this question positively by introducing the concept of two-color multistep cascading. We demonstrate a number of unique features of multistep parametric wave mixing which do not exist for the conventional two-step cascading. In particular, using one of the processes of two-color multistep cascading, we show how to introduce and explore the concept of light guiding light for quadratic spatial solitons, which has been analyzed earlier for Kerr-like spatial solitary waves but seemed impossible for parametric interactions. For the first time to our knowledge, we find ‘almost exact’ analytical solutions for two-wave quadratic solitons and investigate, analytically and numerically, the properties of parametric waveguides created by quadratic spatial solitons in $`\chi ^{(2)}`$ nonlinear media. To introduce more than one parametric process involving only two frequencies, we consider the vectorial interaction of waves with different polarizations. We denote two orthogonal polarization components of the fundamental frequency (FF) wave ($`\omega _1=\omega `$) as A and B, and two orthogonal polarizations of the second harmonic (SH) wave ($`\omega _2=2\omega `$), as S and T. Then, a simple multistep cascading process consists of the following steps. First, the FF wave A generates the SH wave S via the type I SHG process. Then, by down-conversion SA-B, the orthogonal FF wave B is generated.
Finally, the initial FF wave A is reconstructed by the processes SB-A or AB-S, SA-A. The two principal second-order processes AA-S and AB-S correspond to two different components of the $`\chi ^{(2)}`$ susceptibility tensor, thus introducing additional degrees of freedom into the parametric interaction. Different types of multistep cascading processes are summarized in Table I. The processes in row (a) of Table I described above and the multistep cascading introduced in Ref. are qualitatively similar, but the latter involves a third-harmonic wave. To demonstrate some of the unique properties of multistep cascading, we discuss here how it can be employed for light-guiding-light effects in quadratic media. For this purpose, we consider the principal DPM process (c) (see Table I) in the planar slab-waveguide geometry. Using the slowly varying envelope approximation with the assumption of zero absorption of all interacting waves, we obtain $$\begin{array}{c}2ik_1\frac{\partial A}{\partial z}+\frac{\partial ^2A}{\partial x^2}+\chi _1SA^{*}e^{-i\mathrm{\Delta }k_1z}=0,\hfill \\ 2ik_1\frac{\partial B}{\partial z}+\frac{\partial ^2B}{\partial x^2}+\chi _2SB^{*}e^{-i\mathrm{\Delta }k_2z}=0,\hfill \\ 4ik_1\frac{\partial S}{\partial z}+\frac{\partial ^2S}{\partial x^2}+2\chi _1A^2e^{i\mathrm{\Delta }k_1z}+2\chi _2B^2e^{i\mathrm{\Delta }k_2z}=0,\hfill \end{array}$$ (1) where $`\chi _{1,2}=2k_1\sigma _{1,2}`$, the nonlinear coupling coefficients $`\sigma _k`$ are proportional to the elements of the second-order susceptibility tensor, and $`\mathrm{\Delta }k_1`$ and $`\mathrm{\Delta }k_2`$ are the corresponding wave-vector mismatch parameters.
To simplify the system (1), we look for its stationary solutions and introduce the normalized envelopes $`u`$, $`v`$, and $`w`$ according to the following relations, $`A=\gamma _1u\mathrm{exp}(i\beta z-\frac{i}{2}\mathrm{\Delta }k_1z)`$, $`B=\gamma _2v\mathrm{exp}(i\beta z-\frac{i}{2}\mathrm{\Delta }k_2z)`$, and $`S=\gamma _3w\mathrm{exp}(2i\beta z)`$, where $`\gamma _1^{-1}=2\chi _1x_0^2`$, $`\gamma _2^{-1}=2x_0^2(\chi _1\chi _2)^{1/2}`$, and $`\gamma _3^{-1}=\chi _1x_0^2`$, and the longitudinal and transverse coordinates are measured in the units of $`z_0=(\beta -\mathrm{\Delta }k_1/2)^{-1}`$ and $`x_0=(z_0/2k_1)^{1/2}`$, respectively. Then, we obtain a system of normalized equations, $$\begin{array}{c}i\frac{\partial u}{\partial z}+\frac{\partial ^2u}{\partial x^2}-u+u^{*}w=0,\hfill \\ i\frac{\partial v}{\partial z}+\frac{\partial ^2v}{\partial x^2}-\alpha _1v+\chi v^{*}w=0,\hfill \\ 2i\frac{\partial w}{\partial z}+\frac{\partial ^2w}{\partial x^2}-\alpha w+\frac{1}{2}(u^2+v^2)=0,\hfill \end{array}$$ (2) where $`\chi \equiv (\chi _2/\chi _1)`$, $`\alpha _1=(\beta -\mathrm{\Delta }k_2/2)(\beta -\mathrm{\Delta }k_1/2)^{-1}`$, and $`\alpha =4\beta (\beta -\mathrm{\Delta }k_1/2)^{-1}`$. Equations (2) are the fundamental model for describing any type of multistep cascading process of the type (c) (see Table I). First of all, we notice that for $`v=0`$ (or, similarly, $`u=0`$), the dimensionless model (2) coincides with the corresponding model for the two-step cascading due to type I SHG discussed earlier , and its stationary solutions are defined by the equations for real $`u`$ and $`w`$, $$\begin{array}{c}\frac{d^2u}{dx^2}-u+uw=0,\hfill \\ \frac{d^2w}{dx^2}-\alpha w+\frac{1}{2}u^2=0,\hfill \end{array}$$ (3) that possess a one-parameter family of two-wave localized solutions $`(u_0,w_0)`$ found earlier numerically for any $`\alpha \ne 1`$, and also known analytically for $`\alpha =1`$, $`u_0(x)=\left(3/\sqrt{2}\right)\mathrm{sech}^2(x/2)=\sqrt{2}w_0(x)`$ (see Ref. ).
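The quoted $`\alpha =1`$ solution can be checked directly against the stationary equations (3) by finite differences; a minimal sketch (step size and grid are arbitrary choices of ours):

```python
import math

# Finite-difference residual check of the alpha = 1 solution of Eqs. (3):
#   u0(x) = (3/sqrt(2)) sech^2(x/2),  w0(x) = (3/2) sech^2(x/2),
# against  u'' - u + u w = 0  and  w'' - alpha w + u^2/2 = 0.

def sech(x):
    return 1.0 / math.cosh(x)

def u0(x):
    return (3.0 / math.sqrt(2.0)) * sech(x / 2.0) ** 2

def w0(x):
    return 1.5 * sech(x / 2.0) ** 2

def d2(f, x, h=1e-4):
    # central second difference
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

alpha = 1.0
xs = [0.25 * i for i in range(-20, 21)]
max_res_u = max(abs(d2(u0, x) - u0(x) + u0(x) * w0(x)) for x in xs)
max_res_w = max(abs(d2(w0, x) - alpha * w0(x) + 0.5 * u0(x) ** 2) for x in xs)
```

Both residuals vanish to numerical accuracy, confirming the relation $`u_0=\sqrt{2}w_0`$ for this case.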
Then, in the small-amplitude approximation, the equation for the real orthogonally polarized FF wave $`v`$ can be treated as an eigenvalue problem for an effective waveguide created by the SH field $`w_0(x)`$, $$\frac{d^2v}{dx^2}+[\chi w_0(x)-\alpha _1]v=0.$$ (4) Therefore, an additional parametric process allows a probe beam of one polarization to propagate in an effective waveguide created in a quadratic medium by a two-wave spatial soliton with the FF component of the other polarization. However, this type of waveguide is different from what has been studied for Kerr-like solitons, because it is coupled parametrically to the guided modes and, as a result, the physical picture of the guided modes is valid, rigorously speaking, only in the case of stationary phase-matched beams. As a result, the stability of the corresponding waveguide and of the localized modes of the orthogonal polarization it guides is a key issue. In particular, the waveguide itself (i.e. the two-wave parametric soliton) becomes unstable for $`\alpha <\alpha _{\mathrm{cr}}\approx 0.2`$ . In order to find the guided modes of the parametric waveguide created by a two-wave quadratic soliton, we have to solve Eq. (4), where the solution $`w_0(x)`$ is known only numerically. These solutions have also been described by the variational method , but the different types of variational ansatz used do not provide a very good approximation for the soliton profile at all $`\alpha `$. For our eigenvalue problem (4), the function $`w_0(x)`$ defines the parameters of the guided modes and, in order to obtain accurate results, it should be calculated as close as possible to the exact solutions found numerically. To resolve this difficulty, below we suggest a novel ‘almost exact’ solution that allows one to solve analytically many of the problems involving quadratic solitons, including the eigenvalue problem (4).
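The eigenvalue problem (4) can also be attacked numerically. The sketch below (ours) uses a simple shooting method for the exact $`\alpha =1`$ waveguide $`w_0(x)=(3/2)\mathrm{sech}^2(x/2)`$ with $`\chi =1`$, a Pöschl–Teller well whose lowest (even) cutoff is $`\alpha _1=1`$ exactly; the integrator, bracket, and step size are ad hoc choices:

```python
import math

# Shooting-method sketch for the eigenvalue problem (4), using the exact
# alpha = 1 waveguide w0(x) = (3/2) sech^2(x/2) and chi = 1.  The lowest
# (even) cutoff for this well is alpha_1 = 1, so the bisection should
# land there.

def v_at_boundary(alpha1, L=20.0, h=1e-3):
    """Integrate v'' = (alpha1 - w0(x)) v from x = 0 with v(0)=1, v'(0)=0
    (even mode) and return v(L); its sign flips across an even eigenvalue."""
    v, dv, x = 1.0, 0.0, 0.0
    while x < L:
        dv += h * (alpha1 - 1.5 / math.cosh(x / 2.0) ** 2) * v
        v += h * dv
        x += h
    return v

lo, hi = 0.5, 1.4                  # bracket containing the even cutoff
sign_lo = v_at_boundary(lo)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if v_at_boundary(mid) * sign_lo > 0.0:
        lo = mid
    else:
        hi = mid
alpha1_ground = 0.5 * (lo + hi)
```

The same routine works unchanged with a numerically computed $`w_0(x)`$ in place of the analytic profile.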
First, we notice that from the exact result at $`\alpha =1`$ and the asymptotic result for large $`\alpha `$, $`w\approx u^2/\left(2\alpha \right)`$, it follows that the SH component $`w_0(x)`$ of Eqs. (3) remains almost self-similar for $`\alpha \ge 1`$. Thus, we look for the SH field in the form $`w_0(x)=w_m\mathrm{sech}^2(x/p)`$, where $`w_m`$ and $`p`$ are as yet unknown parameters. The solution for $`u_0(x)`$ should be consistent with this choice of the shape for the SH, and it is defined by the first (linear in $`u`$) equation of the system (3). Therefore, we can take $`u`$ in the form of the lowest guided mode, $`u_0(x)=u_m\mathrm{sech}^p(x/p)`$, that corresponds to an effective waveguide $`w_0(x)`$. By matching the asymptotics of these trial functions with those defined directly from Eqs. (3) at small and large $`x`$, we obtain the following solution, $$u_0(x)=u_m\mathrm{sech}^p(x/p),w_0(x)=w_m\mathrm{sech}^2(x/p),$$ (5) $$u_m^2=\frac{\alpha w_m^2}{\left(w_m-1\right)},\alpha =\frac{4\left(w_m-1\right)^3}{\left(2-w_m\right)},p=\frac{1}{\left(w_m-1\right)},$$ (6) where all parameters are functions of $`\alpha `$ only. It is easy to verify that, for $`\alpha _{\mathrm{cr}}<\alpha <\mathrm{\infty }`$, the SH amplitude varies in the region $`1.3<w_m<2`$, so that all the terms in Eq. (6) remain positive. It is really remarkable that the analytical solution (5), (6) provides an excellent approximation for the profiles of the two-wave parametric solitons found numerically. Figures 1(a,b) show a comparison between the maximum amplitudes of the FF and SH components and selected soliton profiles, respectively. As a matter of fact, the numerical and analytical results on these plots are indistinguishable, and that is why we show them differently, by continuous curves and crosses.
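Since $`\alpha `$ in Eq. (6) increases monotonically with $`w_m`$ on $`(1,2)`$, the parameters of the ‘almost exact’ solution can be evaluated for any $`\alpha `$ by simple bisection; a sketch:

```python
import math

# Evaluate the parameters of the 'almost exact' solution, Eq. (6):
# alpha = 4 (w_m - 1)^3 / (2 - w_m) is monotonically increasing for
# w_m in (1, 2), so w_m(alpha) follows from bisection; then
# p = 1/(w_m - 1) and u_m^2 = alpha w_m^2 / (w_m - 1).

def alpha_of_wm(wm):
    return 4.0 * (wm - 1.0) ** 3 / (2.0 - wm)

def solve_wm(alpha):
    lo, hi = 1.0 + 1e-9, 2.0 - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if alpha_of_wm(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def soliton_params(alpha):
    wm = solve_wm(alpha)
    return wm, 1.0 / (wm - 1.0), math.sqrt(alpha * wm ** 2 / (wm - 1.0))

# alpha = 1 must reproduce the exact soliton: w_m = 3/2, p = 2, u_m = 3/sqrt(2)
wm1, p1, um1 = soliton_params(1.0)
```

In the large-$`\alpha `$ limit the routine reproduces $`w_m\to 2`$, consistent with the cascading asymptotics.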
For $`\alpha <1`$, the SH profile changes, but in the region $`\alpha >\alpha _{\mathrm{cr}}`$ the approximate analytical solution is still very close to the exact numerical one: the relative error is less than 1% for the amplitudes, and it does not exceed 3% for the power components. That is why we define the analytical solution given by Eqs. (5), (6) as ‘almost exact’. Details of the derivation, as well as the analysis of the case $`\alpha <1`$, will be presented elsewhere . Now, the eigenvalue problem (4) can be readily solved analytically. The eigenmode cutoff values are defined by the parameter $`\alpha _1`$ that takes one of the discrete values, $`\alpha _1^{(n)}=(s-n)^2/p^2`$, where $`s=-(1/2)+[(1/4)+w_m\chi p^2]^{1/2}`$. The number $`n`$ stands for the mode order $`(n=0,1,\mathrm{\dots })`$, and the localized solutions are possible provided $`n<s`$. The profiles of the guided modes can be found analytically in the form $`v_n(x)=V\mathrm{sech}^{s-n}(x/p)H(-n,2s-n+1,s-n+1;\zeta ),`$ where $`\zeta =\frac{1}{2}[1-\mathrm{tanh}(x/p)]`$, $`V`$ is the mode amplitude, and $`H`$ is the hypergeometric function. According to these results, a two-wave parametric soliton creates, in general, a multi-mode waveguide, and a larger number of guided modes is observed for smaller $`\alpha `$. Figures 2(a,b) show the dependence of the mode cutoff values $`\alpha _1^{(n)}`$ vs. $`\alpha `$, at fixed $`\chi `$, and vs. the parameter $`\chi `$, at fixed $`\alpha `$, respectively. For the case $`\chi =1`$, the dependence has a simple form: $`\alpha _1^{(n)}=[1-n(w_m-1)]^2`$. Because a two-wave soliton creates an induced waveguide parametrically coupled to the modes of the orthogonal polarization it guides, the dynamics of the guided modes may differ drastically from that of conventional waveguides based on Kerr-type nonlinearities. Figures 3(a-d) show two examples of the evolution of guided modes. In the first example \[see Fig.
3(a-c)\], a weak fundamental mode is amplified via parametric interaction with a soliton waveguide, and the mode experiences a strong power exchange with the orthogonally polarized FF component through the SH field, but with only a weak deformation of the induced waveguide \[see Fig. 3(a) – dotted curve\]. This effect can be interpreted as a power exchange between two guided modes of orthogonal polarizations in a waveguide created by the SH field. In the second example, the propagation is stable \[see Fig. 3(d)\]. When all the fields in Eqs. (2) are not small, i.e. the small-amplitude approximation is no longer valid, the profiles of the three-component solitons should be found numerically. However, some of the lowest-order states can be calculated approximately using the approach of the ‘almost exact’ solution (5),(6) described above. Moreover, a number of the solutions and their families can be obtained in an explicit analytical form. For example, for $`\alpha _1=1/4`$, there exist two families of three-component solitary waves for any $`\alpha \ge 1`$, that describe soliton branches starting at the bifurcation points $`\alpha _1=\alpha _1^{(1)}`$ at $`\alpha =1`$: (i) the soliton with a zero-order guided mode for $`\chi =1/3`$: $`u(x)=\left(3/\sqrt{2}\right)\mathrm{sech}^2\left(x/2\right)`$, $`v(x)=c_2\mathrm{sech}\left(x/2\right)`$, $`w(x)=\left(3/2\right)\mathrm{sech}^2\left(x/2\right)`$, and (ii) the soliton with a first-order guided mode for $`\chi =1`$: $`u(x)=c_1\mathrm{sech}^2\left(x/2\right)`$, $`v(x)=c_2\mathrm{sech}^2\left(x/2\right)\mathrm{sinh}\left(x/2\right)`$, $`w(x)=\left(3/2\right)\mathrm{sech}^2\left(x/2\right)`$, where $`c_2=\sqrt{3\left(\alpha -1\right)}`$ and $`c_1=\sqrt{\left(9/2\right)+c_2^2}`$. Some other soliton solutions exist for a specific choice of the parameters, e.g.
for $`\alpha =\alpha _1=4/9`$ and $`\chi =1`$, we find $`u(x)=\left(4/3\right)\mathrm{sech}^3\left(x/3\right)`$, $`v(x)=\left(4/3\right)\mathrm{sech}^3\left(x/3\right)\mathrm{sinh}\left(x/3\right)`$, and $`w(x)=\left(4/3\right)\mathrm{sech}^2\left(x/3\right)`$. Stability of these three-wave solitons is a nontrivial issue; a rigorous analysis of all such multi-component states is beyond the scope of the present Rapid Communication and will be addressed elsewhere. Finally, we would like to mention that in the limit of large $`\alpha `$, when the coupling to the second harmonic is weak, we can use the cascading approximation $`w\approx \left(u^2+v^2\right)/\left(2\alpha \right)`$. Then, the equations for the two orthogonal polarizations of the FF wave reduce to a system of two coupled NLS equations, an asymmetric case of the TE-TM vector spatial solitons well studied in the literature (see, e.g., Ref. and references therein). For a practical realization of the DPM processes and the soliton light-guiding-light effects described above, we can suggest two general methods. The first method is based on the use of two commensurable periods of the quasi-phase-matched (QPM) periodic grating. Indeed, to achieve DPM, we can employ the first-order QPM for one parametric process, and the third-order QPM for the other parametric process. Taking, as an example, the parameters for LiNbO<sub>3</sub> and the AA-S $`(xxz)`$ and BB-S $`(zzz)`$ processes , we find two points for DPM at about 0.89 $`\mu `$m and 1.25 $`\mu `$m. This means that a single QPM grating can provide simultaneous phase-matching for two parametric processes. For such a configuration, we obtain $`\chi \approx 1.92`$ or, interchanging the polarization components, $`\chi \approx 0.52`$. The second method to achieve the conditions for DPM processes is based on the idea of a quasi-periodic QPM grating.
As has been recently shown experimentally and numerically , Fibonacci optical superlattices provide an effective way to achieve phase-matching at several incommensurable periods, allowing multi-frequency harmonic generation in a single structure. In conclusion, we have introduced the concept of two-color multistep cascading and demonstrated the possibility of light-guiding-light effects with parametric waveguides created by two-wave spatial solitons in quadratic media. We believe our results open a new direction in the research of cascading effects, and may bring new ideas into other fields of nonlinear physics where parametric wave interactions are important.
no-problem/9905/cond-mat9905174.html
ar5iv
text
# Electron-Phonon Correlations, Polaron Size, and the Nature of the Self-Trapping Transition ## Acknowledgement This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG03-86ER13606.
no-problem/9905/hep-ph9905428.html
ar5iv
text
# THEORETICAL SUMMARY, ELECTROWEAK PHYSICS<sup>1</sup> <sup>1</sup>Talk presented at the 17th International Workshop on Weak Interactions and Neutrinos (WIN 99), Cape Town, South Africa, January 24-30, 1999. ## 1 The $`Z`$, the $`W`$, and the weak neutral current The $`Z`$, the $`W`$, and the weak neutral current have always been the primary tests of the unification part of the standard electroweak model. Following the discovery of the neutral current in 1973, its effects in pure weak processes such as $`\nu N`$ and $`\nu e`$ scattering and in weak-electromagnetic interference (e.g., $`eD`$ asymmetries, $`e^+e^{-}`$ annihilation, atomic parity violation) were intensely studied in a series of experiments that were typically of several % precision . The $`W`$ and $`Z`$ were discovered directly at CERN in 1983 and their masses determined. In the 90’s, the $`Z`$ pole experiments at LEP and the SLC have allowed precision studies at the 0.1% level of $`M_Z`$ (0.002%) and the $`Z`$ lineshape, branching ratios, and asymmetries; and recent measurements at LEP II and the Tevatron have yielded $`M_W`$ to better than 0.1% . The implications of these results are * The standard model is correct and unique to zeroth approximation, confirming the gauge principle and the standard model gauge group and representations. * The standard model is correct at the loop level, verifying the concept of renormalizable gauge theories, and allowing predictions from observed loop effects of $`m_t`$, $`\alpha _s`$, and $`M_H`$. * Possible new physics at the TeV scale is severely constrained, strongly supporting such new physics as supersymmetry and unification, as opposed to TeV-scale compositeness. * The gauge couplings at the electroweak scale are precisely determined, allowing tests of gauge unification. ## 2 Electroweak radiative corrections and the hadronic contribution to $`\alpha (M_Z)`$ J. Erler reviewed the status of electroweak radiative corrections .
Because of the accuracy of the high precision data, multi-loop perturbative calculations have to be performed. These include leading two-loop electroweak, three-loop mixed electroweak-QCD, and three-loop QCD corrections. $`𝒪(\alpha \alpha _s)`$ vertex corrections to $`Z`$ decays have become available only recently, inducing an increase in the extracted $`\alpha _s`$ by about 0.001. The inclusion of top-mass-enhanced two-loop $`𝒪(\alpha ^2m_t^4)`$ and $`𝒪(\alpha ^2m_t^2)`$ effects is crucial for a reliable extraction of $`M_H`$. The latter, for example, lowers the extracted value of the Higgs mass by $`\sim `$ 18 MeV. Erler has collected all available results in a new radiative correction package. All $`Z`$ pole and low energy observables are self-consistently evaluated with common inputs. The routines are written entirely within the $`\overline{\mathrm{MS}}`$ scheme, using $`\overline{\mathrm{MS}}`$ definitions for all gauge couplings and quark masses. This reduces the size of higher order terms in the QCD expansion. The largest remaining theoretical uncertainty arises from the $`M_W`$–$`M_Z`$–$`\widehat{s}_Z^2`$ interdependence, where $`\widehat{s}_Z^2`$ is the weak angle in the $`\overline{\mathrm{MS}}`$ scheme. The problem is directly related to the renormalization group running of the electromagnetic coupling, $$\alpha (M_Z)=\frac{\alpha }{1-\mathrm{\Delta }\alpha (M_Z)}.$$ (1) While the contributions from leptons and bosons (and the top quark when not technically decoupled) can be computed with sufficient accuracy, the hadronic contributions from the five lighter quarks escape a first-principles treatment due to strong interaction effects. M. Steinhauser reviewed the recent developments in the determination of $`\alpha (M_Z)`$ and the closely related problem of the hadronic contributions to the anomalous magnetic moment of the muon . These are calculated via dispersion relations involving the cross section for $`e^+e^{-}\to hadrons`$.
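A numerical illustration of Eq. (1): the sketch below evaluates $`\alpha (M_Z)`$ from indicative values of the leptonic and five-flavor hadronic pieces of $`\mathrm{\Delta }\alpha (M_Z)`$ — the hadronic number being precisely the quantity that the dispersion-relation analyses discussed here determine, so the values used are illustrative only:

```python
# Numerical illustration of Eq. (1).  The split of Delta-alpha(M_Z) used
# below is indicative only: roughly 0.0315 from leptons and 0.0277 from
# the five light quark flavors (the latter is what the dispersion-relation
# analyses pin down).

ALPHA_INV = 137.036        # 1/alpha at q^2 = 0
D_ALPHA_LEPT = 0.0315      # leptonic contribution (illustrative)
D_ALPHA_HAD = 0.0277       # hadronic contribution (illustrative)

alpha_mz = (1.0 / ALPHA_INV) / (1.0 - D_ALPHA_LEPT - D_ALPHA_HAD)
alpha_mz_inv = 1.0 / alpha_mz   # comes out near 129, vs 137 at q^2 = 0
```

The point of the exercise is that a shift of the hadronic piece propagates directly into $`\alpha (M_Z)`$, and hence into the $`M_W`$–$`M_Z`$–$`\widehat{s}_Z^2`$ interdependence.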
Early estimates used experimental data for the cross section up to around $`\sqrt{s}\sim 40`$ GeV, and perturbative QCD (PQCD) for higher energies. However, several groups have emphasized that perturbative and nonperturbative QCD (using sum rules and operator product expansions) are more reliable than the data down to around 2 GeV, leading to a shifted value and a smaller uncertainty. Steinhauser described the impact of recent improved low energy data (e.g., below 1 GeV), as well as the theoretical developments involving PQCD, the charm threshold, QCD sum rules, and unsubtracted dispersion relations. The recent calculations are in excellent agreement with each other, and considerably reduce the theoretical uncertainties. ## 3 Global fits and their implications J. Erler described the results of global fits to all precision electroweak data, for testing the standard model, determining its parameters, and searching for or constraining the effects of new physics. We used the complete data sets described in , and carefully took into account experimental and theoretical correlations, in particular in the $`Z`$-lineshape sector, the heavy flavor sector from LEP and the SLC, and for the deep inelastic scattering experiments. Predictions within and beyond the SM were calculated by means of a new radiative correction program based on the $`\overline{\mathrm{MS}}`$ renormalization scheme (see Section 2). All input and fit parameters are included in a self-consistent way, and the correlation (present in theory evaluations of $`\alpha (M_Z)`$) between $`\alpha _s`$ and the hadronic contribution is automatically taken care of . We find very good agreement with the results of the LEPEWWG , except for well-understood effects originating from higher orders. We would like to stress that this agreement is quite remarkable, as they use the electroweak library ZFITTER , which is based on the on-shell renormalization scheme.
It also demonstrates that once the most recent theoretical calculations, in particular Refs. , are taken into account, the theoretical uncertainty becomes quite small, and is in fact presently negligible compared to the experimental errors. The relatively large theoretical uncertainties obtained in the Electroweak Working Group Report were estimated using different electroweak libraries, which did not include the full range of higher order contributions available now. In the Standard Model analysis we use the fine structure constant, $`\alpha `$, and the Fermi constant, $`G_F=1.16637(1)\times 10^{-5}`$ GeV<sup>-2</sup>, as fixed inputs. The error in $`G_F`$ is now of purely experimental origin, as the calculation of the two-loop QED corrections to $`\mu `$ decay has very recently been completed . They lower the central value by $`2\times 10^{-10}`$ GeV<sup>-2</sup> and the extracted $`M_H`$ by 1.3%. Moreover, there are five independent fit parameters, which can be chosen to be $`M_Z`$, $`M_H`$, $`m_t`$, $`\alpha _s`$, and the hadronic contribution to $`\mathrm{\Delta }\alpha (M_Z)`$. Alternatively, $`M_Z`$ can be replaced by $`s_W^2`$ (the weak angle in the on-shell scheme) or the $`\overline{\mathrm{MS}}`$ angle $`\widehat{s}_Z^2`$. We do not use $`\alpha _s`$ determinations from outside the $`Z`$ lineshape sector. The fit to all precision data is perfect with an overall $`\chi ^2=28.8`$ for 36 degrees of freedom, and yields , $$\begin{array}{ccc}M_H\hfill & =& 107_{-45}^{+67}\text{ GeV},\\ m_t\hfill & =& 171.4\pm 4.8\text{ GeV},\\ \alpha _s\hfill & =& 0.1206\pm 0.0030,\\ \widehat{s}_Z^2\hfill & =& 0.23129\pm 0.00019,\\ \overline{s}_{\mathrm{\ell }}^2\hfill & =& 0.23158\pm 0.00019,\\ s_W^2\hfill & =& 0.22332\pm 0.00045,\end{array}$$ (2) where $`\overline{s}_{\mathrm{\ell }}^2\approx \widehat{s}_Z^2+0.00029`$ is the effective angle usually quoted by the experimental groups .
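The quoted goodness of fit can be checked directly: for an even number of degrees of freedom the $`\chi ^2`$ survival probability has a closed form requiring no special functions, so a short sketch suffices:

```python
import math

# Sanity check of the quoted goodness of fit.  For an even number of
# degrees of freedom, dof = 2k, the chi^2 survival probability reduces to
#   P(chi^2 > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!

def chi2_sf_even(x, dof):
    assert dof % 2 == 0 and dof > 0
    half = x / 2.0
    term, total = 1.0, 1.0
    for j in range(1, dof // 2):
        term *= half / j
        total += term
    return math.exp(-half) * total

p_value = chi2_sf_even(28.8, 36)   # probability of a larger chi^2
```

A $`\chi ^2`$ of 28.8 for 36 degrees of freedom corresponds to a probability of roughly 80% of obtaining a larger $`\chi ^2`$, i.e. the data are fully consistent with the SM fit.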
The larger uncertainty in the on-shell quantity $`s_W^2`$ is due to its greater sensitivity to $`m_t`$ and $`M_H`$. None of the observables deviates from the SM best fit prediction by more than 2 standard deviations. The low value of $`M_H`$ is consistent with the expectations of supersymmetric extensions of the standard model in the decoupling limit (for which the contributions of sparticles to the radiative corrections are negligible). For a detailed discussion of the upper limits on $`M_H`$ and their significance, see . The value of $`\alpha _s`$ from the precision measurements is consistent with other determinations . The precise determination of $`\widehat{s}_Z^2`$ and $`\alpha _s`$ allows a test of gauge unification. The values are compatible with minimal supersymmetric grand unified theories, when threshold corrections at the high and low scales are included , but not with the simplest non-supersymmetric grand unified theories. The precision data also allow stringent constraints on physics beyond the standard model. Typically, one expects that new physics at the TeV scale that does not decouple (i.e., for which the radiative corrections do not become smaller for larger scales of the new physics) should lead to deviations at the few % level, to be compared with the 0.1% observations. This class includes most versions of composite fermions and dynamical symmetry breaking. On the other hand, physics which decouples, such as softly broken supersymmetry for sparticle masses $`\gg M_Z`$, is compatible with the observations. Specific constraints on heavy $`Z^{\prime }`$ bosons and on supersymmetry, and constraints on general parametrizations of classes of extensions of the standard model (such as extended technicolor or higher-dimensional Higgs representations), are extremely stringent, and are described in .
As one example, consider the $`\rho `$-parameter, defined by $$\rho _0=\frac{M_W^2}{M_Z^2\widehat{c}_Z^2\widehat{\rho }(m_t,M_H)},$$ (3) where $`\widehat{c}_Z^2=1-\widehat{s}_Z^2`$, and $`\widehat{\rho }`$ incorporates standard model radiative corrections. $`\rho _0`$ is a measure of the neutral to charged current interaction strength. The SM contributions are absorbed in $`\widehat{\rho }`$, so that in the SM $`\rho _0=1`$, by definition. Examples of sources of $`\rho _0\ne 1`$ include non-degenerate extra fermion or boson doublets, and non-standard Higgs representations. In a fit to all data with $`\rho _0`$ as an extra fit parameter, we obtain, $$\begin{array}{ccc}\rho _0\hfill & =& \hfill 0.9996_{-0.0006}^{+0.0009},\\ m_t\hfill & =& \hfill 172.9\pm 4.8\text{ GeV},\\ \alpha _s\hfill & =& \hfill 0.1212\pm 0.0031,\end{array}$$ (4) in excellent agreement with the SM. The central values are for $`M_H=M_Z`$, and the uncertainties are $`1\sigma `$ errors and include the range, $`M_Z\le M_H\le 167`$ GeV, in which the minimum $`\chi ^2`$ varies within one unit. Note that the uncertainties for $`\mathrm{ln}M_H`$ and $`\rho _0`$ are non-Gaussian: at the $`2\sigma `$ level ($`\mathrm{\Delta }\chi ^2=4`$), Higgs masses up to 800 GeV are allowed, and we find $$\rho _0=0.9996_{-0.0013}^{+0.0031}\text{ (}2\sigma \text{)}.$$ (5) This implies strong constraints on the mass splittings of extra fermion and boson doublets , $$\mathrm{\Delta }m^2=m_1^2+m_2^2-\frac{4m_1^2m_2^2}{m_1^2-m_2^2}\mathrm{ln}\frac{m_1}{m_2}\ge (m_1-m_2)^2,$$ (6) namely, at the $`1\sigma `$ and $`2\sigma `$ levels, respectively, $$\sum _i\frac{C_i}{3}\mathrm{\Delta }m_i^2<\text{ (38 GeV)}^2\text{ and (93 GeV)}^2,$$ (7) where $`C_i`$ is the color factor. Generalizations to the $`S`$, $`T`$, and $`U`$ parameters, which can describe the effects of degenerate chiral fermions, are described in .

## 4 Electroweak baryogenesis

J. R. 
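The mass-splitting constraint in Eqs. (6) and (7) is straightforward to evaluate numerically. Below is a minimal sketch (the doublet masses are purely illustrative, not taken from the text) that implements $`\mathrm{\Delta }m^2`$ and checks it against the $`1\sigma `$ bound:

```python
import math

def delta_m2(m1, m2):
    """Effective splitting Delta m^2 of a doublet (m1, m2) in GeV^2, Eq. (6).
    It satisfies delta_m2(m1, m2) >= (m1 - m2)**2, with equality as m1 -> m2."""
    if m1 == m2:
        return 0.0
    return (m1**2 + m2**2
            - 4.0 * m1**2 * m2**2 / (m1**2 - m2**2) * math.log(m1 / m2))

# Hypothetical colourless extra doublet (colour factor C = 1), masses in GeV
m1, m2 = 250.0, 240.0
contribution = (1.0 / 3.0) * delta_m2(m1, m2)

# 1-sigma bound of Eq. (7): sum_i (C_i / 3) * Delta m_i^2 < (38 GeV)^2
print(contribution < 38.0**2)
```

A splitting of only 10 GeV easily satisfies the bound; the constraint becomes binding for splittings comparable to the bound itself.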
Espinosa surveyed the status of electroweak baryogenesis in the standard model and its supersymmetric extension . As is well known, a baryon asymmetry can be created cosmologically if the three Sakharov conditions are satisfied: (1) baryon number violation; (2) $`C`$ and $`CP`$ violation (to distinguish baryons from antibaryons); and (3) thermal non-equilibrium in the baryon number violating processes. Baryon number violation (with $`B-L`$ conserved) is present in the standard model as a non-perturbative tunneling between degenerate vacua. The tunneling rate is negligibly small at low temperature, but is enhanced by thermal fluctuations at higher temperatures (“sphalerons”), especially above the electroweak phase transition, for which the barrier height vanishes. Such effects at and before the electroweak phase transition would wash out any baryon asymmetry created at an earlier GUT era if the latter has $`B-L=0`$. On the other hand, it is possible that a $`B`$ asymmetry was actually created at the time of the electroweak transition, as first discussed by Kuzmin, Rubakov, and Shaposhnikov . The basic scenario is that if the electroweak transition is first order, it proceeds by the creation and expansion of bubbles, with a broken phase inside and an unbroken phase outside. Baryon number violation can occur outside the expanding bubble, where it is unsuppressed. The $`C`$ and $`CP`$ breaking is manifested by $`CP`$-asymmetric reflection and transmission rates for massless fermions and antifermions as they encounter the expanding wall, leading, for example, to an excess of baryons entering the expanding bubble. Necessary conditions for this to occur are not only sufficient $`CP`$ violation, but also a first order transition, and finally that $`v/T_c\gtrsim 1`$, where $`v`$ and $`T_c`$ are respectively the electroweak scale and the critical temperature for the transition. If the latter condition is not satisfied, $`B`$ violation inside the bubble will occur, destroying the asymmetry. 
Espinosa surveyed the current situation. Within the standard model, the conditions of a first order transition and $`v/T_c\gtrsim 1`$ require respectively that the Higgs mass satisfies $`M_H<72`$ GeV and 50 GeV, in contradiction with the experimental lower limit of around 97 GeV. The situation is modified in the MSSM due to (1) new sources of $`CP`$ violation, (2) an extended Higgs sector with two doublets, and (3) the influence of stops. Many authors have explored the possibilities in detail. The upshot is that baryogenesis in the MSSM is not excluded, but it works only in a limited region of parameter space, which is explorable at LEP II and the Tevatron. For example, the $`v/T_c\gtrsim 1`$ condition requires $`M_H<105`$–$`110`$ GeV, $`m_{\stackrel{~}{t}_R}<m_t`$, small $`\mathrm{tan}\beta `$, and large $`m_A`$. If these conditions are not satisfied, then baryogenesis would require new mechanisms, such as extensions of the MSSM involving additional Higgs fields or additional gauge symmetries. Another possibility, related to non-zero neutrino mass, is that a lepton asymmetry was created at an early epoch, e.g., by out-of-equilibrium decays of heavy Majorana neutrinos, and then converted to a baryon asymmetry by the $`B-L`$ conserving sphaleron effects .

## Acknowledgments

This work was supported by U.S. Department of Energy Grant No. DOE-EY-76-02-3071. It is a pleasure to thank the participants in the electroweak working group, and Jens Erler for collaboration.

## References
# Velocity distribution of stars in the solar neighbourhood

Based on data from the ESA Hipparcos astrometry satellite

## 1 Introduction

The velocity distribution of stars in the solar neighbourhood is an important tool for studying different aspects of galactic kinematics and dynamics. During a long era of ground-based astrometry that preceded the Hipparcos mission, many subtle details in the velocity field have gone undetected due to the smearing caused by large uncertainties in stellar parallaxes. With the Hipparcos measurements we are in a position to investigate the structure in more detail, confirming some previous characteristics and discovering some new features. A typical analysis of the distribution of stars in the $`UV`$-plane concentrates on determining the velocity ellipsoid and its parameters (dispersions in $`U`$, $`V`$ and $`W`$, as well as the orientation of the principal axis, i.e. the longitude of the vertex). More details about this can be found in any textbook on galactic structure and kinematics (e.g. Mihalas & Binney 1981). In addition to the global properties of the velocity ellipsoid, a variety of different local irregularities are also studied. Certain concentrations of stars in velocity space mean that there exist groups of stars (moving groups) that move with the same velocity. This idea has been elaborated extensively in works by Eggen (e.g. 1970) and other authors. Different authors use different techniques and different stellar samples, but they all report the presence of moving groups in the solar neighbourhood (Figueras et al. 1997, Chereul et al. 1997, 1998, 1999, Dehnen 1998, Asiain et al. 1999). This paper is part of a larger project started at the University of Canterbury in order to test Eggen’s hypothesis (Skuljan et al. 1997). Here we discuss some inhomogeneities in the velocity distribution that are related to moving groups and can give some clues to the problem of star formation. 
## 2 The sample

For this study a sample of 4597 stars has been constructed using the following selection criteria:

1. Parallax greater than 10 mas (stars within 100 pc of the Sun), and $`\sigma _\pi /\pi <0.1`$, as taken from the Hipparcos Catalogue (ESA 1997).
2. Survey flag (Hipparcos field H68) set to ‘S’.
3. Existing radial velocities in the Hipparcos Input Catalogue (ESA 1992).
4. Existing $`B-V`$ colours in the Hipparcos Catalogue.

The survey flag has been used so that no stars proposed by various individual projects are included in this analysis, since they could introduce a bias. Stars with parallax uncertainties greater than 10 per cent have been rejected in order to have a reliable error propagation. The Hipparcos $`B-V`$ colour index is used here as a suitable indicator for dividing the sample into two subsets, as explained in Section 6. There are 12520 ‘survey’ stars with parallaxes greater than 10 mas, out of 118218 entries found in the Hipparcos Catalogue. However, only 11009 of these stars have their parallax uncertainties less than 10 per cent. Finally, for 11007 of them the $`B-V`$ colours are known. On the other hand, we find 19467 stars with known radial velocities, out of 118209 entries in the Hipparcos Input Catalogue. Only 4597 stars are found in both subsets, if the catalogue running numbers are used as a matching criterion ($`\text{HIP}=\text{HIC}`$). Before we proceed with our analysis of the velocity distribution, some important points have to be emphasized here concerning the problem of bias. First of all, not all spectral classes are equally represented in our sample. The Hipparcos catalogue is essentially magnitude limited, which means that we shall have a significant deficiency of red dwarfs, compared to the young early-type stars. The situation is illustrated in Figure 1. A great majority of stars are concentrated around $`B-V=0.5`$, corresponding to the main-sequence F stars. 
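The four cuts above amount to a boolean mask over the catalogue columns. A minimal sketch (the array names and values below are illustrative stand-ins, not the actual Hipparcos fields):

```python
import numpy as np

# Hypothetical catalogue columns; the real data come from the Hipparcos
# Catalogue (ESA 1997) and the Hipparcos Input Catalogue (ESA 1992).
parallax = np.array([25.0, 12.0, 8.0, 40.0])     # mas
parallax_err = np.array([1.0, 2.0, 0.5, 3.0])    # mas
survey_flag = np.array(['S', 'S', 'S', ' '])     # Hipparcos field H68
has_rv = np.array([True, True, True, True])      # radial velocity known
has_bv = np.array([True, False, True, True])     # B-V colour known

keep = ((parallax > 10.0)                        # within 100 pc of the Sun
        & (parallax_err / parallax < 0.1)        # sigma_pi / pi < 0.1
        & (survey_flag == 'S')                   # 'survey' stars only
        & has_rv & has_bv)
print(keep)   # -> [ True False False False]
```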
There is also a possible concentration of earlier-type stars around A0, as well as a distinct peak of K giants (red-clump stars on the horizontal branch, to be more precise) around $`B-V=1.0`$. This should be kept in mind when drawing any conclusions regarding the stellar ages (see Section 6), but it will essentially not affect our results. A possibly more serious problem concerning our stellar sample is a kinematic bias. Binney et al. (1997) demonstrated that radial velocities are predominantly known for high-proper-motion stars. If only the stars with known radial velocities are used, then any velocity distribution derived from such a biased sample might give a false picture and lead to some wrong conclusions about the local stellar kinematics. That is the reason why many authors today choose not to include the measured radial velocities at all (see also Dehnen & Binney 1998, Dehnen 1998, Crézé et al. 1998, Chereul et al. 1998, 1999). We have checked for potential kinematic bias in our case, and the result is presented in Figure 2. Two distributions are shown as functions of the transverse velocity ($`v_\mathrm{t}=4.74\mu /\pi `$), one for the total sample of 11007 ‘survey’ stars within 100 pc ($`N_{v_\mathrm{t}}`$), and the other for the stars with known radial velocities only ($`n_{v_\mathrm{t}}`$). The $`v_\mathrm{t}`$-axis has been divided into 4-$`\text{km}\text{s}^{-1}`$ bins and the stars have been counted in each bin. If there were no kinematic bias, then the probability that a star has a radial velocity would be constant from bin to bin, and the ratio $`n_{v_\mathrm{t}}/N_{v_\mathrm{t}}`$ would appear as a flat line. It is obvious from Figure 2 that this is not the case for our sample. While the radial velocities are known for about 40 per cent of the stars at $`v_\mathrm{t}=20\text{km}\text{s}^{-1}`$, the ratio reaches 80 per cent at $`v_\mathrm{t}=120\text{km}\text{s}^{-1}`$. 
However, the effect becomes a real problem only at higher velocities, above 70–80 $`\text{km}\text{s}^{-1}`$, as easily seen from the diagram. Below this limit, the ratio stays more or less flat, so that we can expect no significant distortions in the inner parts of the velocity distribution, where we shall primarily concentrate our attention. We shall return to this problem again when we compare our velocity distribution to those of other authors (see Section 3).

## 3 Velocity distribution

Having fixed the stellar sample, we can now proceed with the analysis. Hipparcos parallaxes and proper motions, together with the radial velocities from the Hipparcos Input Catalogue, have been used to compute the stellar space velocities relative to the Sun. The right-handed coordinate system has been used, with the $`X`$-axis pointing towards the galactic centre, $`Y`$-axis in the direction of galactic rotation (clockwise, when seen from the north galactic pole), and $`Z`$-axis towards the north galactic pole. Corresponding velocity components are $`U`$, $`V`$ and $`W`$ respectively. A typical error bar in each velocity component is close to $`1\text{km}\text{s}^{-1}`$, with about 80 per cent of stars having their velocity uncertainties less than $`2\text{km}\text{s}^{-1}`$, as shown in Fig. 3. The error propagation has been treated taking into account the full correlation matrix between the Hipparcos astrometric parameters. In order to estimate the probability density function $`f(U,V)`$ from the observed data, we use here an adaptive kernel method (for more details see Silverman 1986). The basic idea of this method is to apply a smooth weight function (called the kernel function) to estimate the probability density at any given point, using a number of neighbouring data points. The term ‘adaptive’ here means that the kernel width depends on the actual density, so that the smoothing is done over a larger area if the density is smaller. 
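The bias check described above (transverse velocities, 4 $`\text{km}\text{s}^{-1}`$ bins, per-bin fraction of stars with known radial velocities) can be sketched as follows; the data here are synthetic, constructed only to mimic a rising ratio:

```python
import numpy as np

def transverse_velocity(mu_mas_per_yr, parallax_mas):
    """v_t = 4.74 * mu / pi, in km/s (mu in mas/yr, parallax in mas)."""
    return 4.74 * mu_mas_per_yr / parallax_mas

# Synthetic example: a full sample vs the subset with radial velocities,
# with the RV probability deliberately increasing with v_t (the bias)
rng = np.random.default_rng(1)
vt_all = rng.uniform(0.0, 120.0, 5000)
has_rv = rng.random(5000) < (0.4 + 0.4 * vt_all / 120.0)

bins = np.arange(0.0, 124.0, 4.0)            # 4 km/s bins, edges 0..120
N, _ = np.histogram(vt_all, bins)            # total sample per bin
n, _ = np.histogram(vt_all[has_rv], bins)    # stars with known RV per bin
ratio = n / np.maximum(N, 1)                 # fraction with known RV
```

A flat `ratio` would indicate no kinematic bias; a rising one reproduces the behaviour seen in Figure 2.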
We use the following definition of the adaptive kernel estimator (see page 101 of Silverman 1986), defined at an arbitrary point $`\stackrel{}{\xi }=(U,V)`$: $$\widehat{f}(\stackrel{}{\xi })=\frac{1}{n}\sum _{i=1}^{n}\frac{1}{h^2\lambda _i^2}K\left(\frac{\stackrel{}{\xi }-\stackrel{}{\xi }_i}{h\lambda _i}\right)$$ where $`K(\stackrel{}{r})`$ is the kernel function, $`\lambda _i`$ are the local bandwidth factors (dimensionless numbers controlling the overall kernel width at each data point), and $`h`$ is a general smoothing factor. We assume also that there are $`n`$ data points $`\stackrel{}{\xi }_i=(U_i,V_i)`$. Our function $`K(\stackrel{}{r})`$ is a 2-D radially symmetric version of the biweight kernel (Fig. 4), and is defined by: $$K(r)=\{\begin{array}{cc}\frac{3}{\pi }(1-r^2)^2,\hfill & r<1\hfill \\ 0,\hfill & r\ge 1\hfill \end{array}$$ so that $`\int K(\stackrel{}{r})\,d\stackrel{}{r}=1`$ (a condition that any kernel must satisfy in order to produce an estimate $`\widehat{f}`$ as a proper probability density function). The local bandwidth factors $`\lambda _i`$ needed for the computation of $`\widehat{f}(\stackrel{}{\xi })`$ are defined by: $$\lambda _i=\left[\frac{\widehat{f}(\stackrel{}{\xi }_i)}{g}\right]^{-\alpha }$$ where $`g`$ is the geometric mean of the $`\widehat{f}(\stackrel{}{\xi }_i)`$: $$\mathrm{ln}g=\frac{1}{n}\sum _{i=1}^{n}\mathrm{ln}\widehat{f}(\stackrel{}{\xi }_i)$$ and $`\alpha `$ is a sensitivity parameter, which we fix at $`\alpha =0.5`$ (a typical value for the two-dimensional case). Note that in order to compute $`\lambda _i`$ we need the distribution estimate $`\widehat{f}`$ which, in turn, can be computed only when all $`\lambda _i`$ are known. This problem, however, can be solved iteratively, by starting with an approximate distribution (a fixed kernel estimate, for example), and then improving the function as well as the $`\lambda _i`$ factors in a couple of subsequent iterations. 
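A compact numpy sketch of this estimator (the function and variable names are ours; the pilot density is iterated a couple of times, as described in the text):

```python
import numpy as np

def biweight(r):
    """2-D radially symmetric biweight kernel K(r); zero for r >= 1."""
    return np.where(r < 1.0, (3.0 / np.pi) * (1.0 - r**2)**2, 0.0)

def adaptive_kde(points, grid, h, alpha=0.5, n_iter=2):
    """Adaptive kernel estimate of f at the rows of 'grid' (Silverman 1986)."""
    n = len(points)
    lam = np.ones(n)                          # start from a fixed kernel
    for _ in range(n_iter):
        # pilot density at the data points themselves
        d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        f_pts = np.mean(biweight(d / (h * lam)) / (h * lam)**2, axis=1)
        g = np.exp(np.mean(np.log(f_pts)))    # geometric mean of the pilot
        lam = (f_pts / g) ** (-alpha)         # local bandwidth factors
    d = np.linalg.norm(grid[:, None] - points[None, :], axis=-1)
    return np.mean(biweight(d / (h * lam)) / (h * lam)**2, axis=1)

# Toy usage: 200 points from a 2-D Gaussian, density at two test points
pts = np.random.default_rng(0).normal(size=(200, 2))
grid = np.array([[0.0, 0.0], [5.0, 5.0]])
f = adaptive_kde(pts, grid, h=1.0)
```

With $`\alpha =0.5`$ the kernels widen in low-density regions, which is what suppresses noise in the sparsely populated wings of the $`UV`$-distribution.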
Finally, an optimal value for the smoothing parameter $`h`$ is determined using the least-squares cross-validation method, by minimizing the score function: $$M_o(h)=\int \widehat{f}^2-\frac{2}{n}\sum _{i=1}^{n}\widehat{f}_{-i}(\stackrel{}{\xi }_i)$$ where $`\int \widehat{f}^2`$ can be computed numerically, and $`\widehat{f}_{-i}(\stackrel{}{\xi }_i)`$ is the density estimate at $`\stackrel{}{\xi }_i`$, constructed from all data points except $`\stackrel{}{\xi }_i`$. It can be shown that minimizing $`M_o(h)`$ is equivalent (in terms of mathematical expectation) to minimizing the integrated square error $`\int (\widehat{f}-f)^2`$, so that our estimate $`\widehat{f}`$, based on the optimal value for $`h`$, is as close as possible to the true distribution $`f`$, using the data set available. In our case (for the whole sample of stars), we have found the minimum of $`M_o(h)`$ at $`h=10.7\text{km}\text{s}^{-1}`$. Our $`UV`$-distribution of stellar velocities with respect to the Sun is shown in Fig. 5, both as a scatter plot and a smooth contour plot representing the density function $`\widehat{f}(U,V)`$, as computed using the adaptive kernel method described above. The computations have been performed on a grid of square bins of $`2\times 2`$ $`\text{km}\text{s}^{-1}`$. This choice for the bin size has been made taking into account a typical uncertainty in the velocity components, as mentioned earlier (Fig. 3). Finally, the density function has been rescaled by a multiplication factor $`nS`$, where $`n=4597`$ is the total number of stars and $`S=4`$ $`\text{km}^2\text{s}^{-2}`$ is the area covered by a square bin. The numerical value of the transformed distribution at any given bin is therefore equal to the average number of stars falling in that bin (assuming that our estimate $`\widehat{f}`$ is close to the real distribution $`f`$). The distribution in Fig. 
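The score minimization can be sketched as follows. For brevity this version uses a fixed kernel (all $`\lambda _i=1`$) and a coarse grid for the $`\int \widehat{f}^2`$ term, whereas the paper minimizes the corresponding adaptive-kernel score:

```python
import numpy as np

def biweight(r):
    """2-D radially symmetric biweight kernel; zero for r >= 1."""
    return np.where(r < 1.0, (3.0 / np.pi) * (1.0 - r**2)**2, 0.0)

def cv_score(points, h, grid_step=0.5):
    """Least-squares cross-validation score M_o(h), fixed-kernel version."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    K = biweight(d / h) / h**2
    # leave-one-out density at each data point: exclude the i = j term
    f_loo = (K.sum(axis=1) - K.diagonal()) / (n - 1)
    # integral of f_hat^2, evaluated numerically on a square grid
    lo, hi = points.min() - h, points.max() + h
    xs = np.arange(lo, hi, grid_step)
    gx, gy = np.meshgrid(xs, xs)
    g = np.stack([gx.ravel(), gy.ravel()], axis=1)
    dg = np.linalg.norm(g[:, None] - points[None, :], axis=-1)
    f_grid = np.mean(biweight(dg / h) / h**2, axis=1)
    return np.sum(f_grid**2) * grid_step**2 - 2.0 / n * f_loo.sum()

# pick the h that minimises the score on a small trial grid
pts = np.random.default_rng(2).normal(size=(150, 2))
hs = [0.5, 1.0, 2.0]
h_opt = min(hs, key=lambda h: cv_score(pts, h))
```

In practice one scans a fine grid of $`h`$ values; the paper's optimum for the full sample is the quoted 10.7 km s<sup>-1</sup>.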
5 is obviously not uniform, showing some concentrations that have been associated with the Hyades, Pleiades, Sirius and other young moving groups (see e.g. Figueras et al. 1997, Chereul et al. 1997). A closer examination, however, reveals an additional pattern of inhomogeneities on a somewhat larger scale. At least three long branches (we shall call them the Sirius branch, the middle branch and the Pleiades branch, respectively from top to bottom) can be identified by eye, slightly curved but almost parallel and running diagonally across the diagram with a negative slope. We have traced the branches in a preliminary way by following the local maxima (ridge lines) in the $`UV`$-distribution, as shown by the dashed lines in Fig. 5. It has been pointed out before (Skuljan et al. 1997) that the parallax uncertainty can produce some radially-elongated features in the $`UV`$-plane, since the stars tend to move radially relative to the zero point ($`U=0`$, $`V=0`$) when their parallax is changed. However, this effect would create branches converging towards the zero point, which is clearly not the case in Fig. 5. We conclude that the parallax uncertainty cannot be responsible for the distribution found. Besides the three branches, there seems to exist a fairly sharp ‘edge line’ at an angle of about $`+30^{\circ }`$ relative to the $`U`$-axis, connecting the lower-$`U`$ extremities of the branches and defining a region above the line where only a few stars can be found. Practically all the stars seem to occupy the lower part of the $`UV`$-plane bounded by the ‘edge line’ and the Sirius branch. In such a situation, the traditional velocity-ellipsoid approach no longer seems appropriate. It would be interesting to compare our distribution from Fig. 5 with similar diagrams obtained by other authors (Asiain et al. 1999, Chereul et al. 1998, 1999, Dehnen 1998). In particular, Fig. 
3 of Dehnen 1998 demonstrates all the basic features that we introduce here, although that distribution was obtained without radial velocities. This clearly suggests that the kinematic bias (Binney et al. 1997) does not significantly affect the inner parts of the $`UV`$-distribution.

## 4 The wavelet transform

In order to determine the precise nature and the statistical significance of the features seen in Fig. 5 we have chosen the wavelet transform technique to analyse our data. There are many examples in the literature demonstrating how this powerful tool can be used in different areas (e.g. Slezak et al. 1990, Chereul et al. 1997), but nevertheless we shall point out some of the basic properties relevant to our work, concentrating on the two-dimensional case only. To do a wavelet transform of a function $`f(x,y)`$ we define a so-called analysing wavelet $`\psi (\frac{x}{a},\frac{y}{a})`$, which is another function (or another family of functions), where $`a`$ is the scale parameter. By fixing the scale parameter we can select a wavelet of a given particular size out of a family characterized by the same shape $`\psi `$. The wavelet transform $`w(x,y)`$ is then defined as a correlation function, so that at any given point ($`\xi ,\eta `$) in the $`XY`$-plane we have one real value for the transform: $$w(\xi ,\eta )=\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }f(x,y)\,\psi \left(\frac{x-\xi }{a},\frac{y-\eta }{a}\right)dx\,dy,$$ which is called the wavelet coefficient at ($`\xi ,\eta `$). Since we usually work in a discrete case, having a certain finite number of bins in our $`XY`$-plane, this means that we shall have a finite number of wavelet coefficients, one value per bin. The actual choice of the analysing wavelet $`\psi `$ depends on the particular application. When a given data distribution is searched for certain groupings (over-densities) then a so-called Mexican hat is most commonly used (e.g. Slezak et al. 1990). 
A two-dimensional Mexican hat (Fig. 6) is given by: $$\psi (r/a)=\left(2-\frac{r^2}{a^2}\right)e^{-r^2/2a^2}$$ where $`r^2=x^2+y^2`$. The main property of the function $`\psi `$ is that the total volume is equal to zero, which is what enables us to detect any over-densities in our data distribution. The wavelet coefficients will be all zero if the analysed distribution is uniform. But if there is any significant ‘bump’ in the distribution, the wavelet transform will give a positive value at that point. Moreover, if we normalize the Mexican hat using a factor $`a^{-2}`$, then we will be able to estimate the half-width of the ‘bump’, by simply varying the scale parameter $`a`$: the wavelet coefficient in the centre of the bump will reach its maximum value if the scale $`a`$ is exactly equal to $`\sigma `$, assuming that the ‘bump’ is a gaussian of a form: $`\mathrm{exp}(-\rho ^2/2\sigma ^2)`$, $`\rho `$ being the distance from the centre. Many authors choose the scale in such a way that it gives the maximum wavelet transform. This is an attractive option providing straightforward information on the average half-width of the gaussian components in our distribution. However, there are some situations when we would prefer somewhat smaller scales, in order to separate two close components or to detect some narrow but elongated features. We have applied several different scales to our $`UV`$-distribution from Fig. 5b, and the results are shown in Fig. 7. The positive contours (solid lines) describe the regions where we have an over-density of stars (grouping), while the negative contours (dashed lines) show the regions with star deficiency (under-density). We shall concentrate our attention here only on the positive values. 
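The scale behaviour described above is easy to verify numerically. The sketch below (assuming scipy is available) bins a synthetic Gaussian clump on the paper's 2 km s<sup>-1</sup> grid and correlates it with the normalized Mexican hat; since the hat is symmetric, correlation and convolution coincide:

```python
import numpy as np
from scipy.signal import fftconvolve

def mexican_hat(a, half=30.0, step=2.0):
    """2-D Mexican hat on a square grid; 'step' is the bin size (km/s)."""
    x = np.arange(-half, half + step, step)
    xx, yy = np.meshgrid(x, x)
    r2 = (xx**2 + yy**2) / a**2
    psi = (2.0 - r2) * np.exp(-r2 / 2.0)
    return psi / a**2           # a^{-2} normalisation: response peaks at a = sigma

def wavelet_transform(hist, a, step=2.0):
    """Wavelet coefficients w on the same grid as the binned histogram."""
    psi = mexican_hat(a, step=step)
    return fftconvolve(hist, psi, mode='same') * step**2

# toy UV histogram: one Gaussian clump with half-width sigma = 6 km/s
edges = np.arange(-60.0, 62.0, 2.0)
xx, yy = np.meshgrid(edges[:-1], edges[:-1])
hist = np.exp(-(xx**2 + yy**2) / (2.0 * 6.0**2))
w6 = wavelet_transform(hist, a=6.0)   # matched scale
w2 = wavelet_transform(hist, a=2.0)   # mismatched scale
```

For this clump the central coefficient is positive and larger at the matched scale $`a=6`$ than at a mismatched one such as $`a=2`$, illustrating how the maximizing scale estimates the half-width.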
At about $`a=4`$–$`5`$ $`\text{km}\text{s}^{-1}`$ the normalized wavelet coefficients reach their maxima around the most populated parts of the $`UV`$-plane (except the Hyades moving group, where the maximum wavelet transform occurs somewhere below $`2`$ $`\text{km}\text{s}^{-1}`$, i.e. at a scale less than the bin size).

## 5 Confidence levels

An important question at this stage is how probable are the features revealed in Fig. 7, i.e. what are the confidence levels for the contours to be above the random noise. A commonly used procedure to estimate the probabilities is numerical simulation (sometimes called the Monte Carlo method) based on random number generators (see e.g. Escalera & Mazure 1992). Let us consider again the distribution in Fig. 5b. By smoothing the probability density function we have found the average number of stars $`\overline{N}_{}`$ in each bin (to be more precise, we have found an optimal estimate of the average value, as close as possible using the data set available). On the other hand, the observed number of stars $`N_{}`$ in each bin has a statistical uncertainty. This is not only due to the measurement errors. Even if the velocities have no random errors (or extremely small errors, i.e. much less than the bin size) we still have statistical fluctuations related to the finite sample. We can expect that the observed number of stars $`N_{}`$ in each bin will fluctuate following the Poisson distribution, with an average of $`\overline{N}_{}`$. This means that we can regard our observed histogram as one outcome from an infinite set of possibilities, when we let the star counts in every bin fluctuate according to the Poisson statistics. We can numerically simulate those “other possibilities” and create the distributions (copies) that “could have happened”. If any feature of the distribution is found repeating from one copy to another, we can be confident that the feature is real, i.e. not generated by noise. 
Actually, the number of successful appearances divided by the total number of simulations will give us the probability of the feature being real. We use this idea to find the probabilities that our wavelet coefficients are positive, since a positive coefficient is automatically an indicator that a grouping in the $`UV`$-plane exists. We generate a large number ($`N=1000`$) of Poisson random copies of our smooth histogram from Fig. 5b. Then, for each of these copies, we derive a replica of the original data set, by creating $`N_{}`$ stars randomly distributed over each bin (in total, we shall have a number of stars close to $`n=4597`$ for all bins together). Finally, we treat the new data set in the same way as the original: we estimate the density function by applying the adaptive kernel method (in order to reduce the computing time, we use the original smooth distribution to compute $`\lambda _i`$ directly for every random data set, and we assume the same optimal smoothing factor), and then compute the wavelet transform of the corresponding smooth histogram. This enables us to examine each coefficient ($`w`$) over the whole set of simulations, and to compute the probability that the value is positive. If there are $`N_\mathrm{p}`$ simulations with $`w>0`$ then we have a probability $`P(w>0)=N_\mathrm{p}/N`$. This procedure has been repeated for all wavelet transforms that we shall present in this paper. Typically, the features shown have a 90 per cent or better probability of being real.

## 6 The data analysis

Although we have started our analysis by examining the whole sample of stars, we are aware of the fact that the stellar kinematic properties may depend on the age. It is reasonable to assume that younger stars have better chances of still keeping the memory of their original velocities that they acquired at formation. 
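The Monte Carlo procedure can be sketched generically: draw Poisson realisations of the smooth histogram, recompute the statistic on each, and count how often it comes out positive. Here a toy statistic (each bin's excess over the global mean) stands in for the wavelet coefficient used in the text:

```python
import numpy as np

def positive_probability(mean_hist, statistic, n_sims=1000, seed=0):
    """Per-bin fraction of Poisson realisations of 'mean_hist' for which
    the map returned by 'statistic' is positive, i.e. P(w > 0)."""
    rng = np.random.default_rng(seed)
    count = np.zeros_like(mean_hist, dtype=float)
    for _ in range(n_sims):
        copy = rng.poisson(mean_hist).astype(float)
        count += statistic(copy) > 0.0
    return count / n_sims

# toy smooth histogram: flat background with one genuine over-density
mean_hist = np.full((8, 8), 5.0)
mean_hist[4, 4] = 25.0
prob = positive_probability(mean_hist, lambda h: h - h.mean())
# prob[4, 4] comes out close to 1: a feature significant at high confidence
```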
If there is any grouping in velocity space, and if the grouping is only a result of cluster evaporation (Eggen’s hypothesis), then we would expect to see it most prominently amongst the youngest stars. In this paper we are not dealing explicitly with the stellar ages but we are using the spectral type (colour index actually) as an age indicator. We have divided the whole sample of stars into two groups:

1. 1036 early-type stars ($`B-V<0.3`$), and
2. 3561 late-type stars ($`B-V>0.3`$),

by choosing $`B-V=0.3`$ as an arbitrary division point, corresponding approximately to the boundary between the A and F spectral classes. The terms ‘early-type’ and ‘late-type’ should be regarded here as suitable names to be used in this paper only. Note that our late-type group contains spectral classes F and later, with two distinct clumps (main-sequence F stars, plus K giants) as seen in Fig. 1. Analysing the Hipparcos catalogue, Dehnen & Binney (1998) found Parenago’s discontinuity at $`B-V=0.61`$, which means that most of the stars blue-ward of that are younger than the Galactic disk itself. This applies to the F stars in our sample. We can conclude that our early-type group (spectral classes B–F) contains predominantly young stars, while our late-type group (F–M) is a mixture of older and younger main-sequence stars, rather young red-clump stars, and a few old red giants. The first group should better show the young moving groups and it will allow us to compare the results with other authors. On the other hand, the second group should possibly show the old-disk moving groups that can be compared with Eggen’s results.

### 6.1 Early-type stars

We have applied the adaptive kernel method again to derive the probability density function $`f(U,V)`$ for the 1036 early-type stars. An optimal value for the smoothing parameter in this case is $`h=8.1`$ $`\text{km}\text{s}^{-1}`$. 
The smooth distribution, together with the corresponding wavelet transforms at several different scales, are shown in Fig. 8. The three branches are well separated and easily detected. Although the ‘middle branch’ does not appear as a feature sufficiently long for a separate analysis, we shall nevertheless treat it as an ‘incomplete’ branch. The maximum wavelet transform is at about $`a=4\text{km}\text{s}^{-1}`$, which also corresponds to an average half-width of the branches. In order to determine the edge line we have used the 5-per-cent level (relative to the maximum) of the smoothed $`UV`$-histogram, as indicated in Fig. 8a. By fitting a straight line to the corresponding portion of the contour we find the tilt to be $`\alpha \approx 32^{\circ }`$ ($`\mathrm{tan}\alpha \approx 0.62`$) and the edge line to follow the equation $`V=16.4\text{km}\text{s}^{-1}+0.62U`$. The tilt of the branches (assuming that they all have the same tilt) has been computed by rotating the wavelet transform (Fig. 8b) counter-clockwise and examining the distribution along $`V`$ only (taking a sum of all positive wavelet coefficients at a given $`V`$). At an angle of $`\beta \approx 25^{\circ }`$ ($`\mathrm{tan}\beta \approx 0.47`$) the three branches appear as the narrowest (and strongest) gaussians, which means that the tilt of the branches in the $`UV`$-plane is $`-25^{\circ }`$. We find the following three equations for the branches, which are shown in Fig. 8: $`V_1=`$ $`7.6\text{km}\text{s}^{-1}-0.47U`$ (Sirius branch) $`V_2=`$ $`-8.9\text{km}\text{s}^{-1}-0.47U`$ (middle branch) $`V_3=`$ $`-26.6\text{km}\text{s}^{-1}-0.47U`$ (Pleiades branch) It should be noted, however, that these three linear relations describe the branches well enough only relatively far from the edge line. The branches seem to curve and follow the edge line at their lower-$`U`$ extremity. This is especially the case with the Pleiades branch. In Fig. 
9 we present the one-dimensional distribution in the direction perpendicular to the branches ($`\beta \approx 25^{\circ }`$), so that each branch appears as a single peak at a fixed position. The zero point of the relative velocity scale has been centred on the middle branch. The three branches are approximately equidistant, with a separation of about $`15\text{km}\text{s}^{-1}`$.

### 6.2 Late-type stars

The distribution function in the $`UV`$-plane for the 3561 late-type stars has been computed using an optimal smoothing factor of $`h=13.6`$ $`\text{km}\text{s}^{-1}`$. The contour diagrams of the distribution and the corresponding wavelet transforms at several different scales, are shown in Fig. 10. If we compare this with the early-type case in Fig. 8, we find a similar pattern, although somewhat more complex. Besides the three main branches, we have now got some new details, such as a concentration of stars at about $`(20,-30)`$ $`\text{km}\text{s}^{-1}`$ (possibly another fragment of the middle branch), as well as a new branch in the bottom part of the diagram, at $`(-30,-50)`$ $`\text{km}\text{s}^{-1}`$. In order to get the new features named, we introduce here two of Eggen’s old-disk moving groups, Wolf 630 and $`\zeta `$ Herculis, also marked in Fig. 10d. We shall simply use the fact that these two Eggen’s moving groups (Eggen 1965, 1971) agree well with the features revealed by our wavelet transforms, although the question of the significance of such a correlation is still to be answered. In order to find the positions of the branches in Fig. 10, we could perhaps proceed as when dealing with the early-type stars. However, the branches now seem longer, and possibly curved (especially the Sirius branch), so that our procedure for determining the angle $`\beta `$ does not seem to be appropriate any more. On the other hand, there is not enough data to do a more sophisticated analysis including the curvature of the detected features. 
Our approach was to adopt the same inclination angle of $`\beta \approx 25^{}`$, as derived from the early-type stars, and then simply find the positions of the branches from the rotated one-dimensional distribution shown in Fig. 11. The equations for the branches are: $`V_1=`$ $`6.9\text{km}\text{s}^{-1}-0.47U`$ (Sirius branch) $`V_2=`$ $`-7.0\text{km}\text{s}^{-1}-0.47U`$ (middle branch) $`V_3=`$ $`-29.2\text{km}\text{s}^{-1}-0.47U`$ (Pleiades branch) $`V_4=`$ $`-62.0\text{km}\text{s}^{-1}-0.47U`$ ($`\zeta `$ Herculis branch) There is also a possible hint of a weak fifth branch (Fig. 11) at a relative velocity of about 30 $`\text{km}\text{s}^{-1}`$. An overall impression is that the branches are roughly equidistant, with a separation slightly larger than in the early-type stars. With one additional branch, the average separation is now about 20 $`\text{km}\text{s}^{-1}`$. If this ‘periodicity’ is real, then a two-dimensional Fourier transform of the distribution will show some peaks in the power spectrum, as we shall demonstrate in the following section. ## 7 The Fourier transform The two-dimensional power spectrum $`Q(f_U,f_V)`$ of the smooth $`UV`$-histogram for the whole sample of stars (square root of the power spectral density) is shown in Fig. 12. Most of the total power is concentrated within the central bulge (maximum power density of about 4700), corresponding to a roughly gaussian distribution of the stars in the $`UV`$-plane. There are also two relatively strong side peaks (maximum power density of about 920) symmetrically arranged around the central bulge at frequencies ($`\pm 0.008,\pm 0.031`$)$`\text{s}\text{km}^{-1}`$, as well as two higher harmonics at ($`\pm 0.016,\pm 0.054`$)$`\text{s}\text{km}^{-1}`$. The peaks are arranged along a straight line at an angle of $`\gamma =74^{}`$ (the dashed line in Fig. 12a). Some other features can also be seen at relatively high significance levels, but we shall concentrate here only on the aligned peaks.
They define a planar wave in the velocity plane. Of course, the peaks at negative frequencies are simply symmetrical images of the positive ones, without any additional information. In order to estimate the significance of the features in the power spectrum, we have proceeded in a similar way as when treating the wavelet transforms. A large number of random copies have been used to see how the power spectral density fluctuates at any given frequency point. The standard deviation $`\sigma `$ has been computed for each bin and the ratio $`Q/\sigma `$ has been used as a measure of significance. Every peak in the power spectrum can be related to a planar wave propagating in a certain direction, with a frequency $`f=\sqrt{f_U^2+f_V^2}`$ and hence a velocity period of $`1/f`$. We find a period of about 33 $`\text{km}\text{s}^{-1}`$ for the stronger side peak (the one closer to the central bulge), and about 17 $`\text{km}\text{s}^{-1}`$ for the first harmonic. These values are in good agreement with our wavelet transform analysis: the longer period corresponds to the separation between the two most prominent branches in the $`UV`$-distribution (Sirius and Pleiades), while the second one can be related to the remaining weaker branches. It should be noted, however, that the power spectrum obviously contains much more information than we have extracted here, and a more detailed analysis is needed. ## 8 Conclusion Using the Hipparcos astrometry and published radial velocities, we have undertaken a detailed examination of the $`UV`$-distribution of stars in the solar neighbourhood. This analysis reveals a branch-like structure both in early-type and late-type stars, with several branches running diagonally with a negative slope relative to the $`U`$-axis. The branches are seen at relatively high significance levels (90 per cent and higher) when analysed using the wavelet transform technique. They are roughly equidistant in velocity space, as confirmed by the two-dimensional power spectrum.
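The rotate-and-project step at the heart of the branch detection can be illustrated with a small synthetic sketch. The intercepts and tilt angle below are the early-type values quoted above; the scatter, sampling range, and star counts are arbitrary choices of ours, not the survey data:

```python
import math
import random

random.seed(1)
beta = math.radians(25.0)            # branch tilt angle from the wavelet analysis
intercepts = [7.6, -8.9, -26.6]      # V-intercepts of the three early-type branches (km/s)

def project(U, V):
    # Rotate the UV-plane by beta so that lines V = c - U*tan(beta) become
    # horizontal; each branch then projects onto the constant value c*cos(beta).
    return V * math.cos(beta) + U * math.sin(beta)

positions = []
for c in intercepts:
    stars = [(U, c - U * math.tan(beta) + random.gauss(0.0, 1.5))
             for U in (random.uniform(-40.0, 40.0) for _ in range(300))]
    positions.append(sum(project(U, V) for U, V in stars) / len(stars))

separations = [a - b for a, b in zip(positions, positions[1:])]
print(positions)      # approximately [6.9, -8.1, -24.1]
print(separations)    # both close to 15 km/s
```

Since each projected position is simply $`c\mathrm{cos}\beta `$, the roughly 15 km/s spacing of the intercepts survives the projection, which is why the branches appear as equidistant peaks in the rotated one-dimensional distribution.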
The branch-like velocity distribution may be due to the galactic spiral structure itself, or some other global characteristics of the galactic potential combined with the initial velocities at the time of star formation. A possibility also exists that this is a result of a sudden burst of star formation that took place some time ago in several adjacent spiral arms. What we see now in the velocity space might be an image of the galactic spiral arms from real space. The main problem with this hypothesis, however, is that the stars in our $`UV`$-branches are not of the same age. Some groups (like the Pleiades) are even composed of stars having a range of different ages (Eggen 1992, Asiain et al. 1999). There are other possibilities that we are currently testing by means of numerical simulations involving the motion of stars in the galactic potential (including the spiral component). Some of the details in the velocity-plane structure can be simulated by choosing appropriate initial velocities for stars being created in the galactic spiral arms, in combination with some velocity dispersion. We are going to elaborate these ideas in more detail in a future paper. ## Acknowledgments This work has been supported by the University of Canterbury Doctoral Scholarship and by the Royal Society of New Zealand 1996 R.H.T. Bates Postgraduate Scholarship to JS, as well as by a Marsden Fund grant to the Astronomy Research Group at the University of Canterbury. We gratefully acknowledge the comments of an anonymous referee who directed us towards the use of the adaptive kernel method. It was with great sadness that we heard of the passing of Olin Eggen during the final stages of this work. His insight into these problems was an inspiration to the field.
# Comments on “Another view on the velocity at the Schwarzschild horizon” ## Abstract It is shown that the conclusions reached by Tereno are completely faulty. We have recently shown that the Kruskal derivative assumes a form $$\frac{du}{dv}\to \frac{f(r,t,dr/dt)}{\pm f(r,t,dr/dt)}$$ (1) because $`u\to \pm v`$ as $`r\to 2M`$. Although this limit attains a value of $`\pm 1`$ irrespective of whether $`f\to 0`$, $`f\to \infty `$, or anything else, Tereno refuses to accept this. Although we have already pointed out that one should work out the limiting values of the relevant fractions appropriately, Tereno has decided to adopt another viewpoint on this issue. In his new note, he has correctly reexpressed our result in terms of the physical speed $`V`$, as seen by the Kruskal observer, and more explicit Sch. relationships: For $`r>2M`$, the expression is $$V=\frac{1+\mathrm{tanh}(t/4M)\frac{dt}{dr}(1-2M/r)}{\mathrm{tanh}(t/4M)+\frac{dt}{dr}(1-2M/r)},$$ (2) Now since, as $`r\to 2M`$, $`t\to \infty `$ and $`\mathrm{tanh}(t/4M)\to 1`$, the above equation approaches a form: $$V=\frac{f(r,t,dt/dr)}{\pm f(r,t,dt/dr)};r\to 2M$$ (3) Clearly, the foregoing limit assumes a value of $`\pm 1`$ irrespective of whether $`f\to 0`$, $`f\to \infty `$, or anything else; Tereno, however, thinks it is less than unity! He on the other hand invokes (correctly) the expression for $`dt/dr`$ for a radial geodesic: $$\frac{dt}{dr}=-E\left(1-\frac{2M}{r}\right)^{-1}\left[E^2-\left(1-\frac{2M}{r}\right)\right]^{-1/2}.$$ (4) where $`E`$ is the conserved energy per unit rest mass. It follows from this equation that $$(1-2M/r)\frac{dt}{dr}=-E\left[E^2-\left(1-\frac{2M}{r}\right)\right]^{-1/2}$$ (5) Therefore, as $`r\to 2M`$, we have $$(1-2M/r)\frac{dt}{dr}\to -1$$ (6) And if we put this result into Eq. (2), we will obtain $$V=\frac{1-\mathrm{tanh}(t/4M)}{\mathrm{tanh}(t/4M)-1};r\to 2M$$ (7) And clearly this above limit is again -1. But again, Tereno will not accept it! Instead, he attempts to find an explicit $`t=t(r)`$ relation by a completely incorrect ansatz.
First he considers an approximate value of the quantity in square brackets in Eq. (4). Although this approximation is valid only in an infinitesimal neighbourhood of $`r=2M`$, he incorrectly integrates it over a finite region. By feeding the resultant incorrect value of $`t(r)`$ into Eq. (2) and by plotting the same he concludes that $`V<1`$. Even if he is determined not to evaluate the appropriate limits and verify that $`|V|=1`$ at $`r=2M`$, his later exercise was unnecessary because the precise and correct $`t(r)`$ relationship is already known. For instance, he may look into Eq. (12.4.24) of Shapiro & Teukolsky, from which we can write $$\frac{t}{2M}=\mathrm{ln}\frac{x+1}{x-1}+\left(\frac{R}{2M}-1\right)^{1/2}\left[\eta +\left(\frac{R}{4M}\right)(\eta +\mathrm{sin}\eta )\right]$$ (8) where $`R`$ is the value of $`r`$ at $`t=0`$ and the “cyclic coordinate” $`\eta `$ is defined by $$r=\frac{R}{2}(1+\mathrm{cos}\eta )$$ (9) and the auxiliary variable $$x=\left(\frac{R/2M-1}{R/r-1}\right)^{1/2}$$ (10) Now in principle, using this exact parametric form of $`t(r)`$ and the exact form of $`dt/dr`$, one can plot Eq. (2). And then, subject to the numerical precision (note $`t=\infty `$ at $`r=2M`$), one may indeed verify that $`V=1`$ at $`r=2M`$. However, since $`\mathrm{tanh}(t/4M)=1`$ at $`r=2M`$, essentially we would be back to our starting position Eq. (1) by this procedure. Now let us also consider the “Janis coordinates” considered by Tereno. Here the radial coordinate is $$x_1=(w+r)/\sqrt{2}$$ (11) and the time coordinate is $$x_0=(w-r)/\sqrt{2}$$ (12) where $$w(r,t)=t+r+2M\mathrm{ln}\frac{r-2M}{2M}$$ (13) As correctly indicated by Tereno, the physical speed measured in these coordinates is $`V_j=dx_1/dx_0`$. And, in a general manner, this can be written as $$V_j=\frac{dx_1}{dx_0}=\frac{dw/dt+dr/dt}{dw/dt-dr/dt}$$ (14) But if we go back to Eq.
(4), it is found that $$\frac{dr}{dt}=0;r=2M$$ (15) Therefore, as $`r\to 2M`$, we have $$V_j\to \frac{dw/dt}{dw/dt}=1;r\to 2M$$ (16) And the eventual expression obtained in Eqs. (13)-(14) of Tereno is simply incorrect. If the reader is still not convinced about our result, we would recall a basic relationship obeyed by the Kruskal coordinates: $$u^2-v^2=(r/2M-1)e^{r/2M}$$ (17) By differentiating both sides of this equation w.r.t. $`t`$, we obtain $$2u\frac{du}{dt}-2v\frac{dv}{dt}=\left[\frac{(r/2M-1)}{2M}e^{r/2M}+\frac{e^{r/2M}}{2M}\right]\frac{dr}{dt}$$ (18) From Eq. (4), we note that $`dr/dt=0`$ at the EH, and therefore the foregoing equation yields $$\frac{du}{dt}=\frac{v}{u}\frac{dv}{dt};r=2M$$ (19) But from Eq. (17), we find that $`v/u=\pm 1`$ at $`r=2M`$, and therefore $$\frac{du}{dv}=\frac{v}{u}=\pm 1;r=2M$$ (20) We have already explained why $`|V|=1`$ at $`r=2M`$ in any coordinate system. If the free fall speed measured by a Sch. observer is $`V_S`$ and the relative velocity of the “other static observer” is $`V_{SO}`$ with respect to the Sch. observer, then we will have (locally): $$V=\frac{V_S\pm V_{SO}}{1\pm V_SV_{SO}}$$ (21) And since $`V_S=1`$ at $`r=2M`$, we will have $`V=1`$. We hope Tereno will now realize that, indeed, $`V=1`$ at the event horizon. And correspondingly, the geodesic of a material particle becomes null at the EH. This, in turn, implies that there cannot be any finite mass BH, and the collapse process continues indefinitely. For an overall scenario see our earlier work. In case Tereno flashes another manuscript on the same line, we shall not respond any further.
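Leaving the disputed signs aside, the limiting behaviour at issue is easy to probe numerically. The sketch below evaluates Eq. (2) with $`(1-2M/r)dt/dr`$ taken from the radial-geodesic relation of Eq. (5), for $`E=1`$ (our choice of a marginally bound fall); as $`r\to 2M`$ the speed tends to unity for any fixed $`t`$:

```python
import math

M, E = 1.0, 1.0  # geometric units (G = c = 1); E = conserved energy per unit rest mass

def kruskal_speed(r, t):
    # Eq. (2), with the magnitude of (1 - 2M/r) dt/dr taken from Eq. (5);
    # Y -> 1 as r -> 2M, so the expression tends to 1 regardless of t.
    Y = E / math.sqrt(E * E - (1.0 - 2.0 * M / r))
    th = math.tanh(t / (4.0 * M))
    return (1.0 + th * Y) / (th + Y)

for r in (3.0, 2.1, 2.001, 2.0 + 1e-9):
    print(r, kruskal_speed(r, t=2.0))
```

The printed values approach 1 as r approaches 2M, illustrating the claim that the speed measured in the Kruskal frame reaches the speed of light at the horizon (the sign of the limit depends on the sign conventions adopted for the infalling geodesic).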
# Do 𝐻₀ and 𝑞₀ really have the values we believe they have? ## Abstract We present an example where a justified modification of the law of propagation of light in a Robertson-Walker model of the universe leads to an identification of $`H_0`$ and $`q_0`$ different from that corresponding to the usual law of propagation along null geodesics. We conclude from this example that observed values which we would associate with the values of $`H_0`$ and $`q_0`$ with the usual interpretation correspond in fact to the values of $`2H_0`$ and $`\frac{1}{2}(q_0-1)`$. It is therefore possible that observed values that we usually interpret as corresponding to a moderately aged universe with accelerating expansion may in fact correspond to a much older universe with a decelerating expansion. When astronomers say that they obtain from observations the values of the Hubble constant and deceleration parameter of the universe they mean that they measure the slope $`H_0^{}`$ and a curvature-related parameter $`q_0^{}`$ at the origin of the Hubble graph: $$z=H_0^{}d+\frac{1}{2}H_0^2(1+q_0^{})d^2$$ (1) where $`z`$ is the red-shift and $`d`$ is an operationally defined, agreed upon, distance indicator of observed sources. To confront the observed Hubble graph with theory requires the description of a cosmological model and the theoretical identification of the two parameters $`H_0^{}`$ and $`q_0^{}`$, and the distance indicator $`d`$. The simplest cosmological models assume a Robertson-Walker model with line-element: $$d\tau ^2=-dt^2+F^2(t)N^2(r)\delta _{ij}dx^idx^j,N(r)=1/(1+kr^2/4),r=\sqrt{\delta _{kl}x^kx^l}$$ (2) where $`F(t)`$ is the scale factor, $`k`$ the curvature of space, and where units have been chosen such that the universal speed constant $`c`$ is equal to $`1`$. Assuming also that light behaves in a Robertson-Walker space-time as it does in a pure vacuum domain with minimal coupling of electromagnetism to gravitation, i.e.
assuming that light propagates along null geodesics of the line-element above, one derives Eq. (1) with the following identification: $$H_0^{}=H_0\equiv \frac{\dot{F}}{F},q_0^{}=q_0\equiv -\frac{\ddot{F}F}{\dot{F}^2}$$ (3) and: $$d=\int _0^rN(r)dr$$ (4) This point of view has been questioned recently in two papers (Refs. and ) that consider cosmological models with a varying speed of light, either in a framework more general than general relativity, like scalar-tensor theories of gravity, or in more general phenomenological approaches. On the other hand it is also possible to consider a varying speed of light in the framework of general relativity, as was discussed in our recent gr-qc preprint (Ref ). This paper is based on the idea that $`F`$ in the line-element (2) can be interpreted as the inverse of an effective speed of light or, equivalently, as a refractive index. In this case the theory of light propagation in a non-dispersive medium (see for instance ) tells us that the light rays are the null geodesics of the metric: $$\overline{g}_{\alpha \beta }=g_{\alpha \beta }+(1-F^{-2})u_\alpha u_\beta $$ (5) where $`g_{\alpha \beta }`$ is the metric corresponding to the line-element (2) and $`u^\alpha `$ is the time-like unit vector with components $`u^0=1`$ and $`u^i=0`$ in the corresponding coordinates. Therefore: $$\overline{g}_{00}=-F^{-2},\overline{g}_{0i}=0,\overline{g}_{ij}=F^2N^2\delta _{ij}$$ (6) This means that a light ray emanating from any point with radial coordinate $`r_e`$ at time $`t_e`$ reaches the point with radial coordinate $`r=0`$ at time $`t`$ given by: $$\int _0^{r_e}N(r)dr=\int _{t_e}^tF^{-2}(t)dt$$ (7) With the usual convention about light propagation the integrand in the r-h-s is $`F^{-1}`$ instead of $`F^{-2}`$.
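A quick numerical check of the modified propagation law of Eq. (7) can be sketched as follows. The assumptions are ours for illustration only: flat space ($`k=0`$, so $`N(r)=1`$) and a toy matter-like scale factor $`F(t)=t^{2/3}`$, which is not taken from this paper. Radial null rays of the metric (6) then satisfy $`dt/dr=F^2`$, and the integral of $`F^{-2}dt`$ along the ray must equal the coordinate distance:

```python
import math

def F(t):
    # toy scale factor (matter-dominated form); any positive F(t) would do here
    return t ** (2.0 / 3.0)

def trace_ray(t_e, r_e, n=20000):
    # integrate dt/dr = F(t)^2 (flat space, N(r) = 1) with classical RK4,
    # parametrizing the ray by r and starting from the emission time t_e
    h = r_e / n
    t, ts = t_e, [t_e]
    for _ in range(n):
        k1 = F(t) ** 2
        k2 = F(t + 0.5 * h * k1) ** 2
        k3 = F(t + 0.5 * h * k2) ** 2
        k4 = F(t + h * k3) ** 2
        t += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        ts.append(t)
    return ts

ts = trace_ray(t_e=1.0, r_e=0.5)
# Eq. (7): the integral of F^-2 dt along the ray equals the coordinate distance r_e
lhs = sum(0.5 * (F(a) ** -2 + F(b) ** -2) * (b - a) for a, b in zip(ts, ts[1:]))
print(lhs)  # close to 0.5
```

For this toy scale factor the ODE is separable, so the arrival time can also be checked analytically: $`t=(1-r/3)^{-3}`$, giving $`t=1.728`$ at $`r=0.5`$.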
The interesting fact that comes out of this interpretation is that the same elementary calculation that leads to the relationship (1) with the identification (3) when one assumes that light propagates along null geodesics of (2), now leads to the following identification: $$H_0^{}=2H_0,q_0^{}=\frac{1}{2}(q_0-1)$$ (8) in which case the dynamics of the universe becomes radically different from what it is believed to be now. Let us assume as a simple numerical example that: $$q_0^{}=-0.1,H_0^{}=60\text{km/s/Mpc}$$ (9) are the observed values of the parameters in (1). According to the usual identification (3) this corresponds to a moderately aged universe ($`1/H_0^{}=17\times 10^9`$ yr) with accelerating expansion, while with the new identification (8) the values of $`q_0`$ and $`H_0`$ are: $$q_0=+0.8,H_0=30\text{km/s/Mpc}$$ (10) which correspond to a very old universe ($`1/H_0=33\times 10^9`$ yr) with a decelerating expansion. All this should remind us that modern cosmology is still a very young branch of physics and astronomy where very much remains to be discovered and clarified.
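The numerical example can be reproduced in a few lines (the factor of roughly 978 converting km/s/Mpc into an inverse Hubble time in Gyr is the standard unit conversion):

```python
HUBBLE_TIME_GYR = 978.0  # 1/H0 in Gyr when H0 is given in km/s/Mpc

H0_obs, q0_obs = 60.0, -0.1   # observed (primed) slope and curvature parameters

# usual identification: H0 = H0', q0 = q0'
age_usual = HUBBLE_TIME_GYR / H0_obs   # ~16-17 Gyr, accelerating (q0 < 0)

# modified identification: H0' = 2*H0 and q0' = (q0 - 1)/2
H0_new = H0_obs / 2.0                  # 30 km/s/Mpc
q0_new = 2.0 * q0_obs + 1.0            # +0.8, i.e. decelerating
age_new = HUBBLE_TIME_GYR / H0_new     # ~33 Gyr

print(H0_new, q0_new, age_usual, age_new)
```

Inverting $`q_0^{}=(q_01)/2`$ gives $`q_0=2q_0^{}+1`$, so a mildly accelerating observed value maps onto a strongly decelerating intrinsic one, as in Eqs. (9)-(10).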
# An Internal Second Parameter Problem in the Sculptor Dwarf Spheroidal Galaxy ## 1 Introduction The Sculptor dwarf spheroidal (dSph) galaxy was the first Galactic dSph to be identified (Shapley 1938) and has a long observational history. Its variables have been tabulated by a large number of studies (Baade & Hubble 1939; Thackeray 1950; van Agt 1978; Goldsmith 1993, hereafter G93; Kałużny et al. 1995, hereafter K95), and the period distribution of RR Lyrae stars suggests a metallicity spread (G93, K95). While a range of abundances is generally accepted, impressions of the Sculptor horizontal branch (HB) morphology outside of the instability strip have varied depending on photometric depth, field size and filter systems employed to construct the color-magnitude diagram (CMD). Kunkel & Demers’ (1977, KD77) Sculptor CMD (324 stars to $`V=20.6`$) yielded 43 red HB (RHB) out of 49 HB stars and a deficit of stars with $`B-V<+0.3`$ as well as a red giant branch (RGB) well described by a metal-poor population ($`[\mathrm{Fe}/\mathrm{H}]=-1.9`$). Norris & Bessell (1978) re-analyzed the CMD in combination with two spectra to argue for a Sculptor metallicity spread of $`-2.2\le [\mathrm{Fe}/\mathrm{H}]\le -1.5`$, and Smith & Dopita (1983) confirmed an inhomogeneous metallicity distribution function (MDF) via narrow-band photometry. Da Costa’s (1984, D84) deep but small-area photometry to the Sculptor MSTO did not provide strong constraints on either the HB or RGB; however, it did show an abundance spread similar to previous results (confirmed by Da Costa 1988) and a predominantly red HB (7 of 10 HB stars). The conclusion of these studies was that Sculptor is a “second parameter ($`2^{\mathrm{nd}}`$P) object” that shows a rather red global HB for its mean abundance (D84). More recently, however, Schweitzer et al. (1995, SCMS) produced a CMD with 1043 stars that reveals a prominent blue HB (BHB) with more stars than KD77 and D84.
In the first wide-field CCD survey of Sculptor, K95 reported the usual metallicity spread based on the RGB (as did SCMS), but also substantiated the large BHB population and derived a more moderate Sculptor HB morphology index of $`(B-R)/(B+V+R)=0.15`$ (see also the Grebel et al. 1994 CMD). Because their $`VI`$ photometry of $`>6000`$ stars with $`V<21.5`$ covered a much larger area than previous surveys, K95’s significantly increased BHB:RHB ratio suggests differences in the spatial distribution of BHB and RHB stars. This might be due to abundance gradients in the dSph. However, since the most metal-rich population in Sculptor’s RGB has $`[\mathrm{Fe}/\mathrm{H}]\sim -1.5`$, which would normally give a uniform-to-blue HB, the variation in the RHB population must be due to spatial variation in the $`2^{\mathrm{nd}}`$P effect. In this letter, we present $`BV`$ photometry of the Sculptor dwarf spheroidal galaxy. We find the usual evidence for RGB stars ranging from metal-poor ($`[\mathrm{Fe}/\mathrm{H}]\sim -1.5`$) to very metal-poor ($`[\mathrm{Fe}/\mathrm{H}]\sim -2.3`$); however, on the basis of two distinct HBs and two distinct RGB bumps, Sculptor’s MDF may be better characterized as bimodal. This bimodality gives rise to one population with a $`2^{\mathrm{nd}}`$P effect, and a second one with likely very little HB $`2^{\mathrm{nd}}`$P. Differences in radial distributions for these two populations can account for the variation in HB morphology within Sculptor and among previous surveys of this galaxy. ## 2 Observations and Reduction We observed Sculptor on UT 23 July and 1–2 August, 1991 with the Las Campanas 1-m Swope telescope using the thinned, $`1024^2`$ TEK2 CCD camera. Five overlapping, 10.4-arcmin-wide pointings were arranged in a 2$`\times `$2 grid with a center frame overlapping the other four to lock together the photometry. Each field was typically observed with one $`B`$ and $`V`$ exposure of 1800 and 900 sec length, respectively.
The data were reduced with the IRAF package CCDRED and photometered with the DAOPHOT II and ALLFRAME programs (Stetson 1987, 1994). Detections were matched using DAOMASTER and then calibrated to observed Graham (1982) standard stars using our own code. This code compares calibrated magnitudes of stars in common on different CCD frames and determines minor frame-to-frame systematic errors (e.g., due to shuttering errors, transient transparency changes, errors in the photometric transformation). Because of photometric conditions, the derived mean residuals for each frame ($`\sim 0.1`$ mag on the basis of $`689`$ comparison stars) were used as offsets and applied iteratively with new color determinations until convergence. Our resulting photometric precision is $`(\sigma _B,\sigma _V)=(0.05,0.05)`$ mag at the HB. ## 3 Horizontal Branch Our $`(B-V,V)_0`$ (Figure 1) and $`(B-V,B)_0`$ (not shown) CMDs show an HB that appears to be kinked over the RR Lyrae gap. All tests of the photometry pipeline have shown this “kink” to be real, and a hint of this HB “kink” can be seen in KD77. Similarly kinked HBs have been noted previously in the CMDs of some “bimodal” Galactic globular clusters (GGC), e.g., NGC 6229 (Borissova et al. 1997), NGC 2808 (Ferraro et al. 1990), and NGC 1851 (Walker 1992), to which the Sculptor CMD bears some resemblance. Indeed, our derived $`B:V:R`$ (blue:variable:red HB) ratio of (0.42:0.19:0.39) resembles those of bimodal GGCs (see Borissova et al. 1997 for a summary). Stetson et al. (1996) make a poignant comparison of the bimodal NGC 1851 CMD to those of the similar-metallicity “$`2^{\mathrm{nd}}`$P GGC pair” NGC 288 and NGC 362; that NGC 1851 has both an RHB like NGC 362 and a BHB like NGC 288 suggests that NGC 1851 has an internal $`2^{\mathrm{nd}}`$P problem. Stetson et al. use this fact to argue that it is unlikely that the $`2^{\mathrm{nd}}`$P effect is due to differences in age, helium abundance or \[CNO/Fe\] within NGC 1851.
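For reference, the HB morphology index used here and in K95 follows directly from the blue:variable:red star fractions (a trivial sketch):

```python
def hb_type(n_blue, n_var, n_red):
    # Lee-type HB morphology index, (B - R) / (B + V + R); +1 is all-blue,
    # -1 is all-red, 0 is an evenly balanced HB
    return (n_blue - n_red) / (n_blue + n_var + n_red)

# our derived B:V:R fractions for the full Sculptor sample
index = hb_type(0.42, 0.19, 0.39)
print(round(index, 2))  # 0.03
```

With the fractions (0.42:0.19:0.39) quoted above, the global Sculptor index is close to zero, i.e. an intermediate HB type.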
Despite the similarities of our Sculptor HB to the HBs of bimodal GGCs, there are two key reasons why the Stetson et al. analysis does not apply here: (1) From the RGB width we know that Sculptor has an abundance spread. (2) There is no a priori reason to assume that all of the stars in Sculptor are coeval. Bearing this in mind we now explore the origin of the bimodality of the Sculptor HB. The $`V`$ magnitude difference from the red edge of the Sculptor BHB to the blue edge of the RHB is $`0.15\pm 0.02`$ mag. If the bimodality is completely due to differences in \[Fe/H\], typical values for $`dM_V/d[\mathrm{Fe}/\mathrm{H}]`$ suggest an \[Fe/H\] difference of 0.5 to 1.0 dex. This is consistent with reported \[Fe/H\] spreads from fitting isochrones to the Sculptor RGB. The situation is, however, more complex because we are comparing HB stars at different colors, and both the luminosity of the theoretical ZAHB (zero-age HB) and the bolometric correction vary with position along the HB. Moreover, the HB is strongly affected by the oxygen abundance. At a constant core mass ($`M_{\mathrm{core}}`$) increasing \[O/Fe\] increases $`L_{\mathrm{HB}}`$. However, increasing \[O/Fe\] also leads to a decrease in $`M_{\mathrm{core}}`$. All other things being equal, a decrease in $`M_{\mathrm{core}}`$ leads to a decrease in $`L_{\mathrm{HB}}`$. The net result is that the ZAHB variation with \[O/Fe\] can be rather complex. In the Galaxy it is generally thought that for metallicities appropriate for Sculptor, \[O/Fe\] is constant with a value in the range +0.3–0.5. There is no reason to assume that Sculptor has undergone the same chemical enrichment history as the Galaxy so we consider all $`0.0\le [\mathrm{O}/\mathrm{Fe}]\le +0.5`$ possible. Most recent HB models have an assumed \[O/Fe\]-\[Fe/H\] relation. The only available models that allow us to explore the composition parameters independently are those of Rood (unpublished).
To convert $`\mathrm{log}L`$ and $`\mathrm{log}T_{\mathrm{eff}}`$ to $`M_V`$ and $`B-V`$ we use the results of Kurucz (1979) and Bell & Gustafsson (1978) blended to reproduce observed HBs of GGCs smoothly. Throughout this paper, we assume $`(m-M)_0=19.71`$ and $`E(B-V)=0.02`$ (K95) for Sculptor. Figure 2 shows the observed CMD of the Sculptor HB with superimposed ZAHBs terminated at the red end at a mass of 0.85 $`M_{\odot }`$ with a cross mark indicating a mass of 0.80 $`M_{\odot }`$. These are roughly the maximum possible masses for 12 and 15 Gyr populations, respectively. Since all stars undergo some mass loss the ZAHB population will not actually reach these two points. Evolution and observational scatter will carry some stars redward, but for practical purposes the end of the ZAHB should mark the redward extent of the HB. We start with the hypothesis that the Sculptor BHB is a low-metallicity population and the RHB a higher-metallicity population, both consistent with the spread of the RGB. The fairly uniform distribution across the RGB suggests comparable numbers in each group. The size of the observational error would obscure obvious bimodality on the RGB. The BHB can be fit reasonably with $`[\mathrm{Fe}/\mathrm{H}]=-2.3`$ and $`0.0\le [\mathrm{O}/\mathrm{Fe}]\le +0.5`$. Indeed, the BHB rather resembles that of the low-metallicity GGC M92 (see Figure 1). The RHB can be fit with oxygen-enhanced models with $`-1.9\le [\mathrm{Fe}/\mathrm{H}]\le -1.5`$. The odd behavior of the ZAHB level with \[Fe/H\] for the $`[\mathrm{O}/\mathrm{Fe}]=+0.5`$ ZAHBs is due to approximations used for $`M_{\mathrm{core}}`$. Independently of such modeling details, one can expect the variation of ZAHB level with \[Fe/H\] to be less for oxygen-enhanced models than for scaled solar abundances. The models with $`[\mathrm{O}/\mathrm{Fe}]=0.0`$ cannot fit the RHB: at $`[\mathrm{Fe}/\mathrm{H}]=-1.9`$ the ZAHB does not extend far enough to the red; at $`[\mathrm{Fe}/\mathrm{H}]=-1.5`$ the level of the ZAHB is too low.
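The conversions between apparent and absolute quantities used throughout depend only on the adopted distance modulus and reddening; a minimal sketch follows (the total-to-selective extinction ratio $`R_V=3.1`$ is a standard assumption of ours, not stated in the text):

```python
DIST_MOD = 19.71   # adopted true distance modulus (m - M)_0 for Sculptor (K95)
E_BV = 0.02        # adopted reddening E(B - V)
R_V = 3.1          # assumed standard total-to-selective extinction ratio

def abs_V(V_apparent):
    # apparent V -> absolute M_V, removing distance and foreground extinction
    return V_apparent - DIST_MOD - R_V * E_BV

def dereddened_BV(BV_apparent):
    # observed B-V -> intrinsic (B-V)_0
    return BV_apparent - E_BV

print(round(abs_V(20.2), 2))  # ~0.43 for a star at the Sculptor HB level
```

With these adopted values, extinction shifts magnitudes by only ~0.06 mag, so the population separations discussed here are dominated by the intrinsic magnitude differences.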
One could conceivably produce the observed bimodality using one composition with $`-1.9\le [\mathrm{Fe}/\mathrm{H}]\le -1.5`$ and $`[\mathrm{O}/\mathrm{Fe}]=+0.5`$. Such a mono-compositional bimodality is observed in GGCs but modeling it requires the ad hoc introduction of bimodality in some underlying parameter (Catelan et al. 1998). However, in Sculptor a composition spread is observed, and a bimodal composition is quite natural, e.g., arising from two bursts of star formation. Hence, it seems undesirable to us to discard the “natural explanation” in favor of the yet-to-be-determined mechanism that produces bimodal HBs in GGCs. It is clear from the $`[\mathrm{O}/\mathrm{Fe}]=+0.5`$ ZAHB (Figure 2a) that even if most of the low-metallicity population is found on the BHB, some could be found in the RR Lyrae strip and on the RHB. We suspect that this is a small fraction of the low-metallicity population, because the red end of the BHB veers away from the ZAHB, suggesting that the ZAHB is populated only for $`(B-V)_0\lesssim 0.15`$. In analogy to M92 we suspect that $`\sim 90`$% of the low-metallicity population is found on the BHB and that its age is similar to that of M92. Similarly, from Figure 2 we see that the higher-metallicity population could contaminate the BHB. The RHB population does drop as one approaches the RR Lyrae strip. But there is precedent from the bimodal HB GGCs that such a population could increase further to the blue. There is reason to think this is not true for Sculptor. First, if there is significant high-metallicity contamination of the BHB, where are the low-metallicity stars we infer must be present from the RGB spread? Second, the BHB morphology is more like that of M92 than the blue HB tails of clusters with higher metallicity (M13, NGC 288, etc.). These arguments in themselves are not compelling, but fit the overall scenario we develop here. Normal GGCs with the metallicity we suggest for the Sculptor RHB have uniform HBs.
This means that the Sculptor high-metallicity population suffers from a “too red” $`2^{\mathrm{nd}}`$P problem like, e.g., the GGCs NGC 362 and NGC 7006 and the extreme halo cluster Pal 14. While the case for age as the $`2^{\mathrm{nd}}`$P in GGCs has been hard to establish (e.g., Stetson et al. 1996; Catelan et al. 1998; VandenBerg 1998; but see counter views by Chaboyer et al. 1996, Sarajedini et al. 1997), there is good reason to think that a higher-metallicity population in a low-density system like Sculptor might be younger. Thus, we hypothesize that the RHB arises from a population several Gyr younger than the BHB. Indeed, D84 has suggested multiple age components ($`\delta (\mathrm{age})\sim 3`$ Gyr) from his study of turnoff stars. If bimodal, Sculptor’s two HB populations probably overlap significantly in the instability strip. The distribution of RR Lyrae periods in Sculptor (G93, K95) shows a large range, consistent with a large spread in metallicity. The periods of RRab stars at the blue fundamental edge of the instability strip (those with the shortest periods) are well correlated with the metallicity (Sandage 1993a). In Sculptor, the shortest period RRab (ignoring two stars with very discrepant periods) has a period of 0.474 days (K95), implying a metallicity of $`[\mathrm{Fe}/\mathrm{H}]=-1.6`$. While the red fundamental edge is not as useful a metallicity indicator, the existence of RRab stars with $`P\sim 0.8`$ days indicates the presence of another population with $`[\mathrm{Fe}/\mathrm{H}]<-2.0`$. In addition, G93 and K95 both note a correlation of average magnitude with period in Sculptor RRab stars. Because $`M_V`$ is a function of $`[\mathrm{Fe}/\mathrm{H}]`$, the spread in $`M_V`$ also implies a metallicity spread. The intensity-weighted average $`V`$ magnitude for the majority of the RRab stars lies in the range $`20.1<V<20.25`$ (K95), or $`0.24<M_V<0.64`$, which corresponds to $`-2.3<[\mathrm{Fe}/\mathrm{H}]<-1.3`$ (Sandage 1993b).
## 4 Red Giant Branch Our analysis so far points to a bimodality of populations in the Sculptor HB. However, such bimodality is also suggested in the giant branch, where two distinct RGB bumps can be seen (Figure 1): one near $`(B-V,V)_0=(0.8,19.3)`$ and one near $`(0.8,20.0)`$. The former RGB bump lies toward the blue side of the RGB, near the expected locus for metal-poor stars, while the latter RGB bump lies toward the red side of the RGB, near the expected locus for more metal-rich stars. To illustrate the differences, we fit a mean RGB locus to the entire Sculptor RGB, divide the RGB in half, and plot (Figure 3) RGB luminosity functions for all stars within $`\mathrm{\Delta }(B-V)\le 0.125`$ left and right of the mean RGB locus. We isolate the redward RGB bump at $`V_0\sim 20.0`$. The blueward bump is less clearly defined but probably is $`19.0\le V\le 19.4`$. The extreme magnitude differences between the RGB bumps again argue for a metallicity separation of order a dex. We can use the absolute magnitudes of the RGB bumps to obtain a global metallicity (\[M/H\]) for the two bump populations (Ferraro et al. 1999): using $`(m-M)_0=19.71`$ (K95), we find \[M/H\] $`\sim -2.1`$ and $`-1.3`$. Ferraro et al. (1999, Figure 11a) also give relations for the RGB bump dependence on the magnitude difference between the bump and HB. If we adopt $`V=20.2`$ for the height of the BHB population and assign this to the metal-poor RGB, we obtain $`V_{bump}-V_{HB}\sim -0.9`$; this implies an abundance $`[\mathrm{Fe}/\mathrm{H}]\sim -2.4`$, on the Zinn (1985) scale. For the RHB population, if we adopt $`V_{HB}=20.35`$ and assign to this the other RGB bump, we obtain $`V_{bump}-V_{HB}=-0.35`$; this is the difference expected for $`[\mathrm{Fe}/\mathrm{H}]\sim -1.6`$. The presence of the distinct RGB bumps, their estimated $`M_V`$, and their location relative to the HB suggest a bimodal MDF with $`[\mathrm{Fe}/\mathrm{H}]\sim -2.3`$ and $`[\mathrm{Fe}/\mathrm{H}]\sim -1.6`$.
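The bump-detection idea, an excess over the smooth luminosity function that appears as a peak in the differential counts and a slope break in the cumulative counts, can be sketched on synthetic data (all numbers below are invented for illustration and are not the Sculptor photometry):

```python
import random

random.seed(2)
# synthetic red-side luminosity function: a smooth RGB plus a bump excess near V = 20.05
mags = [random.uniform(18.5, 21.0) for _ in range(1000)]
mags += [random.gauss(20.05, 0.04) for _ in range(300)]

# differential LF in 0.1-mag bins; the bump shows up as the strongest bin
lo, width, nbins = 18.5, 0.1, 25
counts = [0] * nbins
for m in mags:
    b = int((m - lo) / width)
    if 0 <= b < nbins:
        counts[b] += 1

bump_mag = lo + (counts.index(max(counts)) + 0.5) * width
print(bump_mag)  # recovers the input bump near V = 20.05
```

A real analysis would, as in Figure 3, inspect the cumulative distribution for slope breaks rather than simply taking the strongest bin, since that is less sensitive to bin placement and small-number noise.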
## 5 Discussion From analysis of the RGB and HB, a consistent scenario can be assembled. In Figure 1 we show representative RGB, AGB, and HB fiducials for the metal-poor ($`[\mathrm{Fe}/\mathrm{H}]=-2.23`$), BHB cluster M92 and the less metal-poor ($`[\mathrm{Fe}/\mathrm{H}]=-1.44`$) $`2^{\mathrm{nd}}`$P cluster Pal 14. These clusters bracket the Sculptor RGB, while each cluster separately approximates the BHB and RHB, respectively. Apart from the fact that Pal 14 may be a little too metal-rich, by a few tenths of a dex, the two clusters provide a reasonable bimodal paradigm for the Sculptor MDF. Our bimodal interpretation of Sculptor differs somewhat from previous studies that argue for an abundance spread. It should be noted that a true bimodality in the RGB of Sculptor in the form of two distinct RGB sequences would be masked somewhat by observational scatter and the superposition of the asymptotic giant branch for the more metal-rich population. The presence of two distinct RGB bumps, rather than a slanting RGB bump “continuum,” is evidence for bimodality in Sculptor. We note that a suggestion of bimodality (or trimodality) was made previously by Grebel et al. (1994). In §1 we argued that disparate HB morphologies found among different surveys of the Sculptor CMD derived from radial differences in global HB morphology. Figure 4 provides evidence that this is the case: the global HB index increases by 0.4 from the center to the $`500\mathrm{"}`$ radius accessible with our catalogue. We have argued for a bimodal MDF. Accordingly, the radial gradient in Figure 4 is not likely due to a radial abundance gradient, or the gradual diminishing of a $`2^{\mathrm{nd}}`$P effect. Rather, the cumulative evidence suggests that the HB radial dependence is due to changes in proportions of two nearly mono-metallic populations with radius. Indeed, the relative densities of the blue:red half of the RGB track those of the BHB:RHB very well (Figure 4).
The spatial distribution of the \[BHB, blue RGB, metal-poor\] population appears to be more extended than that of the \[RHB, red RGB, less metal-poor\] population, which shows a higher core concentration. Spatial differences in the Sculptor HB were suggested previously by Light (1988) and Da Costa et al. (1996) and are explored further by Hurley-Keller et al. (1999). Da Costa et al. (1996) also point out radial HB index gradients (with a similar sense) in the Leo II and And I dSphs, and adopt the same interpretation of mixing variations in bimodal HB populations. The existence of bimodal, $`2^{\mathrm{nd}}`$P $`+`$ non-$`2^{\mathrm{nd}}`$P populations within dSphs would be significant since, unlike in bimodal GGCs such as NGC 1851, in dSphs it is (now) entirely plausible to consider multiple star formation bursts, with age as the $`2^{\mathrm{nd}}`$P. In Sculptor’s case, it is likely that the $`[\mathrm{Fe}/\mathrm{H}]\approx -2.3`$ population formed in an earlier, more extended burst. If the presence of these two distinct populations is borne out, the (relatively nearby) Sculptor dSph could well prove to be a Rosetta stone of the HB and the adamantine $`2^{\mathrm{nd}}`$P question. We thank Eva Grebel for helpful discussions and the referee for useful suggestions.

## 6 Captions

fig. 1. – $`(B-V,V)_0`$ CMD for the Sculptor dwarf galaxy with overlaid fiducials for M92 (dashed line; from Sandage, 1970) and Pal 14 (solid line; from Holland & Harris 1992). We adopt cluster distance moduli and reddenings from Harris (1996) and Holland & Harris (1992), respectively. The right panel highlights the RGB and red bump region.

fig. 2. – Fits of model ZAHBs to the HB of Sculptor. Panel (a) shows oxygen-enhanced ($`[\mathrm{O}/\mathrm{Fe}]=+0.5`$) models, and (b) shows models with solar $`[\mathrm{O}/\mathrm{Fe}]`$.
In each panel, the solid line shows the model with $`[\mathrm{Fe}/\mathrm{H}]=-2.3`$, the dashed line shows $`[\mathrm{Fe}/\mathrm{H}]=-1.9`$, and the dotted line shows $`[\mathrm{Fe}/\mathrm{H}]=-1.5`$. The ZAHBs are terminated at the red end at a mass of 0.85 $`M_{\odot }`$, and a cross mark indicates a mass of 0.80 $`M_{\odot }`$ (see text).

fig. 3. – Differential (right ordinate) and cumulative (left ordinate) RGB luminosity functions for stars within 0.125 mag in $`(B-V)_0`$ color to either the blue (dot-dash curves) or red (solid curves) of the mean RGB locus. The dot-dash curves are offset vertically by $`+1.0`$ for the cumulative and by $`+80`$ for the differential luminosity function. Breaks in the slope of the cumulative distributions (indicated by thin solid lines) point to locations of RGB bumps, marked by vertical lines.

fig. 4. – Radial dependence of HB (filled circles) and RGB (open circles) morphology from our catalogue. The RR Lyrae counts in the same areas are from K95. The values of $`(B:V:R)`$ for the HB and of $`B+R`$ for the RGB in each annulus are given for each point.
# The First Light seen in the redshifted 21–cm radiation

## 1 The general framework

The diffuse Intergalactic Medium (IGM) at very high redshift (between recombination and full reionization at $`z>5`$) can be observed in the redshifted 21–cm radiation against the cosmic background. The signal can be detected in emission or in absorption depending on whether the spin temperature $`T_S`$ is larger or smaller than the cosmic background temperature $`T_{CMB}=2.73(1+z)`$. This can happen if $`T_S`$ is coupled via collisions to the kinetic temperature $`T_K`$ of the IGM. However, the density contrast on Mpc scales at very early epochs is so low that the collisional coupling is inefficient, and therefore the IGM is expected to be invisible against the CMB (Madau, Meiksin & Rees 1998, hereafter MMR). On the other hand, large, massive regions at high density contrast are extremely rare in most hierarchical CDM universes at such high redshifts. There is, however, another mechanism that makes the diffuse hydrogen visible in the redshifted 21–cm line: the Wouthuysen-Field effect. In this process, a Ly$`\alpha `$ photon field mixes the hyperfine levels of neutral hydrogen in its ground state via intermediate transitions to the $`2p`$ state. A detailed picture of the Wouthuysen-Field effect can be found in Meiksin (1999) and Tozzi, Madau, Meiksin & Rees (1999, hereafter TMMR). The process effectively couples $`T_S`$ to the color temperature $`T_\alpha `$ of a given Ly$`\alpha `$ radiation field (Field 1958). The color temperature is easily driven toward the kinetic temperature $`T_K`$ of the diffuse IGM due to the large cross section for resonant scattering (Field 1959). In this case the spin temperature is: $$T_S=\frac{T_{\mathrm{CMB}}+y_\alpha T_K}{1+y_\alpha },$$ (1) where $`y_\alpha \simeq 3.6\times 10^{13}P_\alpha /T_K`$, and $`P_\alpha `$ is the total rate at which Ly$`\alpha `$ photons are scattered by a hydrogen atom.
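A minimal numerical sketch of eq. (1) may help fix the two regimes. The thermalization rate of §2 below is used only to set the scale of $`P_\alpha `$; the weak- and strong-coupling multipliers are illustrative choices, not values from the paper.

```python
def spin_temperature(T_K, P_alpha, z):
    """Eq. (1): T_S = (T_CMB + y_alpha*T_K)/(1 + y_alpha), with the
    coupling y_alpha ~ 3.6e13 * P_alpha / T_K taken from the text
    (P_alpha in s^-1, temperatures in K)."""
    T_cmb = 2.73 * (1.0 + z)
    y_alpha = 3.6e13 * P_alpha / T_K
    return (T_cmb + y_alpha * T_K) / (1.0 + y_alpha)

z = 9.0
T_cmb = 2.73 * (1.0 + z)          # 27.3 K
T_K = 2.6e-2 * (1.0 + z) ** 2     # adiabatically cooled IGM: 2.6 K
P_th = 7.6e-13 * (1.0 + z)        # thermalization rate of Sec. 2 (s^-1)

T_weak = spin_temperature(T_K, 1e-4 * P_th, z)    # y_alpha << 1
T_strong = spin_temperature(T_K, 10.0 * P_th, z)  # y_alpha >> 1
```

With weak scattering, `T_weak` stays pinned to the CMB temperature (no signal); once the rate exceeds the thermalization value, `T_strong` is driven close to the cold kinetic temperature and the gas appears in absorption.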
However, the same Ly$`\alpha `$ photon field also re–heats the diffuse gas, driving $`T_K`$ toward larger values. The thermal history of the diffuse IGM then results from the competition between adiabatic cooling due to the cosmic expansion and re–heating due to the photon field. In the absence of a contribution from a strong X-ray background, the thermal history of the IGM can be written simply as: $$\frac{dT_K}{dz}=\frac{2\mu }{3}\frac{\dot{E}}{k_B}\frac{dt}{dz}+2\frac{T_K}{(1+z)},$$ (2) where $`\dot{E}`$ is the heating rate due to the recoil of scattered Ly$`\alpha `$ photons. Here $`\mu =16/13`$ is the mean molecular weight for a neutral gas with a fractional abundance by mass of hydrogen equal to 0.75. Prior to the generation of the photon field, the IGM is neutral and cold, at a temperature $`T_K\simeq 2.6\times 10^{-2}(1+z)^2`$ K (Couchman 1985) given only by the adiabatic cooling after recombination. At the onset of the re–heating sources, there will be coupling between the kinetic and the spin temperature. An observation at the frequency $`1420/(1+z)`$ MHz will detect absorption or emission against the CMB, with a variation in brightness temperature with respect to the CMB value: $$\mathrm{\Delta }T_b\simeq (2.9\mathrm{mK})h^{-1}\eta \left(\frac{\mathrm{\Omega }_bh^2}{0.02}\right)\frac{(1+z)^2}{[\mathrm{\Omega }_M(1+z)^3+\mathrm{\Omega }_K(1+z)^2+\mathrm{\Omega }_\mathrm{\Lambda }]^{1/2}},$$ (3) where $`\mathrm{\Omega }_K=1-\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda }`$ is the curvature contribution to the present density parameter, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is the cosmological constant, $`\mathrm{\Omega }_b`$ is the baryon density, and $`\eta \equiv (T_S-T_{CMB})/T_S`$. Observations of such variations in the brightness temperature can be used to investigate the thermal history of the IGM, and the underlying birth and evolution of the radiation sources. All the following results are presented and discussed in TMMR.
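Eq. (3) is straightforward to evaluate. The sketch below assumes an illustrative flat cosmology ($`h=0.7`$, $`\mathrm{\Omega }_M=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ — these parameter values are not taken from the paper) and the saturated-emission limit $`\eta \to 1`$.

```python
import math

def delta_T_b(z, eta, h=0.7, Omega_b_h2=0.02, Omega_M=0.3, Omega_L=0.7):
    """Eq. (3): brightness-temperature offset in mK. The cosmological
    parameters are illustrative defaults, not values from the paper."""
    Omega_K = 1.0 - Omega_M - Omega_L
    E = math.sqrt(Omega_M*(1 + z)**3 + Omega_K*(1 + z)**2 + Omega_L)
    return 2.9 / h * eta * (Omega_b_h2 / 0.02) * (1 + z)**2 / E

# Saturated emission (eta -> 1) at z = 9: an offset of a few tens of mK,
# the scale of the signals discussed in the following sections.
dTb = delta_T_b(9.0, eta=1.0)
```

Since eq. (3) is linear in $`\eta `$, the deep-absorption case ($`\eta <0`$, $`|\eta |`$ of order unity or larger) simply rescales and flips the sign of this estimate.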
## 2 The epoch of the First Light

We first investigate a simple situation in which, at a given redshift $`z_{th}`$, the Ly$`\alpha `$ photon field reaches a thermalization rate $`P_{th}\simeq 7.6\times 10^{-13}\mathrm{s}^{-1}(1+z)`$; for such a value $`T_S`$ is driven effectively toward $`T_K`$ (see MMR). If the IGM is heated only by the same Ly$`\alpha `$ photons, there will be a transient epoch where $`T_S<T_{CMB}`$ (i.e., $`\eta <0`$), and an absorption feature appears at the corresponding redshifted frequency. This effect necessarily has a limited extension in time and thus in frequency space, since $`T_S`$ becomes larger than $`T_{CMB}`$ on a relatively short timescale. However, the signal is easily detectable with a resolution of a few MHz and, most of all, has a large amplitude, since $`|\eta |>>1`$ when $`T_S<<T_{CMB}`$. Such a strong feature marks the transition from a cold and dark universe to a universe populated with radiation sources. If we assume that the Ly$`\alpha `$ field reaches the thermalization value at $`z_{th}=9`$ on a timescale $`\tau \simeq 10`$ Myr, the IGM will be visible in absorption for $`10`$–$`30`$ Myr, corresponding to $`\mathrm{\Delta }T_b\simeq -40`$ mK over a range of $`5`$ MHz. In the top panel of figure 1 such a signature is shown as a function of the observed frequency. In the bottom panel the corresponding thermal evolution of the IGM is shown. These results depend only weakly on the epoch $`z_{th}`$ and on the adopted cosmology. However, the amplitude of the detected signal will depend strongly on the timescale $`\tau `$ on which the Ly$`\alpha `$ field reaches the thermalization value. In figure 2 the maximum of the absorption is plotted for different timescales $`\tau `$ in three representative cosmologies. The signal is always larger than 10 mK; note however that for $`\tau >30`$ Myr, the absorption is spread out over a large interval in frequency, especially at $`\nu <100`$ MHz where the sensitivity of radio telescopes becomes lower (see TMMR).
## 3 The density field

After re–heating and before reionization, $`T_S>>T_{CMB}`$ holds, and the IGM is detectable only in emission. However, $`\eta \le 1`$ always, and the effect due to the continuum distribution of a diffuse IGM is not as strong as in the absorption case. Such a small positive offset with respect to the CMB background can be difficult to detect. On the other hand, fluctuations in the redshifted 21–cm emission, which reflect fluctuations in the density of the IGM, are at least two orders of magnitude larger than the intrinsic CMB fluctuations on scales of $`1`$ arcmin. These fluctuations correspond to scales of a few comoving Mpc, and are in the linear regime at $`z>5`$. In this case the fluctuations induced in the brightness temperature will be directly proportional to $`\mathrm{\Delta }\rho /\rho `$, allowing a straightforward reconstruction of the perturbation field at that epoch. In figures 3 and 4 we show results for two cosmologies, a tilted CDM universe with critical density (tCDM), and an open $`\mathrm{\Omega }_0=0.4`$ universe (OCDM). In both cases the fluctuations are normalized to reproduce the local abundance of clusters of galaxies. In OCDM the fluctuations are much larger (a factor of 3), since the growth of perturbations is strongly suppressed in an open universe with respect to the critical case, and for a given local normalization the amplitude of the perturbations at high $`z`$ is correspondingly larger. In both figures the density field has been evolved with a collisionless N–body simulation of 64<sup>3</sup> particles using the Hydra code (Couchman, Thomas, & Pearce 1995). The box size is $`20h^{-1}`$ comoving Mpc, corresponding to 17 (11) arcmin in tCDM (OCDM). The baryons are assumed to trace the dark matter distribution without any biasing.
Since the level of fluctuations ranges from a few to $`10`$ $`\mu `$Jy per beam (with a resolution of 2 arcmin), it seems possible that observations with the Square Kilometer Array (Braun 1998; see also http://www.nfra.nl.skai) may be used to reconstruct the matter density field at redshifts between the epoch probed by galaxy surveys and recombination, on scales as small as $`0.5`$–$`2`$ $`h^{-1}`$ comoving Mpc, i.e. masses in the range between $`10^{12}`$ and $`10^{13}h^{-1}M_{\odot }`$.

## 4 The first quasars

If re–heating is provided by a single quasar (without any other source of radiation), 21–cm emission on Mpc scales will be produced in the quasar neighborhood (outside the HII bubble) as the medium surrounding it is heated to $`T_S=T_K>T_{\mathrm{CMB}}`$ by soft X-rays from the quasar itself. The size and intensity of the detectable 21–cm region will depend on the quasar luminosity and age. In particular, the intensity of the emission weakens with radius and with the age of the quasar. We calculated the kinetic temperature around a typical quasar, along with the neutral IGM fraction and the Ly$`\alpha `$ flux. The resulting radial temperature profiles were then superimposed on the surrounding density fluctuations as computed using Hydra. In figure 5 a sequence of snapshots 10, 50 and 100 Myr after the birth of a quasar at $`z=8.5`$ is shown in a box of 100$`h^{-1}`$ Mpc (comoving). The visual effect is due to the convolution of the spin temperature profile with the (linearly) perturbed density field around the quasar. The temperature of the IGM at great distances from the quasar is assumed to be $`T_K\simeq T_{CMB}`$, and there the signal goes to zero. In the figure the color ranges from 0 to $`10`$ mK with respect to the CMB level, which is black. Another situation occurs when the temperature of the IGM at large distances from the quasar is lower than $`T_{CMB}`$, e.g., $`T_K=2.6\times 10^{-2}(1+z)^2`$ K.
In this case the emission region is followed by an absorption ring, since the Ly$`\alpha `$ photons reach regions where $`T_K<T_{CMB}`$. The radio map resulting from a quasar ‘sphere of influence’ 10 Myr after it turns on at $`z=8.5`$ (tCDM) is shown in figure 6. The signal ranges from about $`-3`$ $`\mu `$Jy to $`3\mu `$Jy per beam (with a 2 arcmin resolution). The absorption region is limited to a very sharp edge. However, in figure 7 we show a quasar with the same Ly$`\alpha `$ luminosity but with an intrinsic exponentially absorbed spectrum at energies larger than the Lyman limit. Consequently the HII region is reduced, and the X–ray warming front is well behind the light radius. This leads to a larger absorption ring, where the signal reaches $`-20\mu `$Jy in a 2 arcmin beam. Imaging the gas surrounding a quasar in 21–cm emission could provide a direct means of measuring intrinsic properties of the source, like the emitted spectrum and the opening angle of the quasar emission. All these features are within reach of new-generation radio telescopes like SKA.

## 5 Conclusions

The Wouthuysen-Field effect allows one to peer into the Dark Age. The observation of the neutral IGM in the redshifted 21–cm line can give insight into the thermal evolution of the diffuse hydrogen and thus into the formation and evolution of radiation sources, at epochs when the age of the universe is only $`0.3`$ Gyr. In particular, the epoch of the First Light can be seen as a deep ($`\simeq -40`$ mK) absorption feature a few MHz wide against the CMB, at the correspondingly redshifted 21–cm line. Moreover, the density perturbation field at redshifts $`z\simeq 5`$–$`20`$ can be reconstructed by looking for mK fluctuations at $`1`$–$`5`$ arcmin resolution in the radio sky, providing a determination of its amplitude between the epoch probed by galaxy surveys and recombination.
Finally, the first ionizing sources, like luminous quasars, can be seen by identifying peculiar, ring–shaped signals whose morphology depends on the source’s age, luminosity and geometry.
# A new conformal duality of spherically symmetric space–times

## 1 Introduction

From the Lagrangian $$L=C_{ijkl}C^{ijkl}\sqrt{-g}$$ (1) where $`C_{jkl}^i`$ is the conformally invariant Weyl tensor, one gets the Bach tensor $$B_{ij}=2C_{ij;lk}^{kl}+C_{ij}^{kl}R_{lk}$$ (2) Recently, the solutions of the Bach equation $`B_{ij}=0`$, i.e., the vacuum solutions of conformal Weyl gravity, enjoyed a renewed interest because in the static spherically symmetric case one gets a term linear in $`r`$ (cf. for a deduction and for the motivation): $$ds^2=-A(r)dt^2+\frac{dr^2}{A(r)}+r^2d\mathrm{\Omega }^2$$ (3) with $$A(r)=1-3\beta \gamma -\frac{(2-3\beta \gamma )\beta }{r}+\gamma r-kr^2$$ (4) Further discussion of this solution can be found in . In , the viability of the term $`\gamma r`$ is doubted, whereas in just this part of the potential played the main role.<sup>1</sup><sup>1</sup>1From dimensional analysis one can deduce the powers of $`r`$ in three different ways as follows: a) In the Newtonian limit (i.e. $`\mathrm{\Delta }`$ is the flat-space Laplacian) one gets from the Einstein-Hilbert Lagrangian $`L_{EH}`$ via $`\mathrm{\Delta }\phi =0`$ the two spherically symmetric solutions $`\phi =1`$ and $`\phi =1/r`$; and from $`L`$ eq. (1) via $`\mathrm{\Delta }\mathrm{\Delta }\phi =0`$ one gets additionally $`\phi =r`$ and $`\phi =r^2`$ (and, of course, all the linear combinations). $`\phi =1`$ gives flat space, and $`\phi =r^2`$ corresponds to the de Sitter space-time, so the essential terms are $`1/r`$ for $`L_{EH}`$ and $`r`$ for $`L`$. b) $`L`$ and $`L_{EH}`$ differ by a factor $`<length>^2`$, so this should be the case for the potentials, too. c) Similarly one gets this as a heuristic argument by calculating the Green’s functions in momentum space. A solution of the Bach equation is called trivial if it is conformally related to an Einstein space, i.e. to a vacuum solution of the Einstein equation with arbitrary $`\mathrm{\Lambda }`$ .
So our question reads: Do non-trivial spherically symmetric solutions of the Bach equation exist? Up to now, contradicting answers have been given: Metric (3) with (4) is an Einstein space for $`\gamma =0`$ only, so it seems to be a non-trivial solution for $`\gamma \ne 0`$, whereas in (cf. also for earlier references) it is stated that only trivial spherically symmetric solutions of the Bach equation exist. It is the aim of the present paper to clarify this contradiction by introducing a new type of conformal duality within spherically symmetric space-times.<sup>2</sup><sup>2</sup>2It will be a duality different from the one introduced in , cf. for a review of conformal transformations between fourth-order theories of gravity. The result will be that the value of $`\gamma `$ in eq. (4) can be made to vanish by a conformal transformation. Then the question whether this linear term is physically measurable or not depends on the question in which of these two conformal frames the non-conformal matter lives. As a byproduct of this discussion we will present a new view of the question (see the different statements on this question in \[9-12\]) under which circumstances a spherically symmetric Einstein space can be expressed in Schwarzschild coordinates. The paper is organized as follows: In sct. 2 we deduce the new duality transformation, in sct. 3 we apply this transformation to the solution eq. (3,4), and in sct. 4 we look especially at those solutions where Schwarzschild coordinates do not apply.

## 2 A new conformal duality transformation

The general static spherically symmetric metric can be written as $$ds^2=-A(r)dt^2+B(r)dr^2+C(r)d\mathrm{\Omega }^2$$ (5) where $`d\mathrm{\Omega }^2=d\psi ^2+\mathrm{sin}^2\psi d\varphi ^2`$ is the metric of the standard 2-sphere. The functions $`A`$, $`B`$ and $`C`$ have to be positive.
The main simplification for solving the Bach equation for metric (5) was done in as follows: The two possible gauge degrees of freedom (a redefinition of the radial coordinate $`r`$ and the conformal invariance of the Bach equation) can be used to get $`r`$ as Schwarzschild coordinate, i.e., $`C(r)=r^2`$, and $`A(r)B(r)=1`$, i.e., one starts from the metric (3). The case when Schwarzschild coordinates do not apply will be discussed in sct. 4; here we concentrate on the following question: Do there exist conformal transformations of metric (3) which keep that metric form-invariant? Of course, if $`r`$, $`ds`$, and $`t`$ are multiplied by the same non-vanishing constant $`\alpha `$, and the function $`A`$ is redefined accordingly, then metric (3) remains form-invariant. This conformal transformation with a constant conformal factor is called a homothetic transformation, and it will not be considered essential. Likewise, the transformation $`r\to -r`$, not changing the form of the metric (3), will not be considered essential. Example: Let $`A(r)=1-\frac{2m}{r}`$, i.e., the Schwarzschild solution with mass parameter $`m`$. Let $`\widehat{r}=\alpha r`$, $`d\widehat{s}^2=\alpha ^2ds^2`$; then $`d\widehat{s}^2`$ represents the Schwarzschild solution with mass parameter $`\widehat{m}=\alpha m`$.<sup>3</sup><sup>3</sup>3This applies also to negative values $`\alpha `$. One should expect that further conformal transformations do not exist, because we already applied the conformal degree of freedom to reach the form (3) from the form (5). This expectation shall be tested in the following: Let $`b(r)`$ be any non–constant function, and let the conformally transformed metric be $`d\stackrel{~}{s}^2=b^2(r)ds^2`$. With eq.
(3) this reads $$d\stackrel{~}{s}^2=-b^2(r)A(r)dt^2+\frac{b^2(r)dr^2}{A(r)}+b^2(r)r^2d\mathrm{\Omega }^2$$ (6) Next, we have to assume that $`b(r)r`$ is not a constant, and then we can introduce $`\stackrel{~}{r}=b(r)r`$ as the new Schwarzschild radial coordinate for metric (6). We get $$\frac{d\stackrel{~}{r}}{dr}=b(r)+r\frac{db}{dr}$$ (7) Form-invariance in the 00-component means that $$\stackrel{~}{A}(\stackrel{~}{r})=b^2(r)A(r)$$ (8) and form-invariance in the 11-component implies $$\frac{b^2(r)dr^2}{A(r)}=\frac{d\stackrel{~}{r}^2}{\stackrel{~}{A}(\stackrel{~}{r})}$$ (9) Eqs. (8) and (9) together imply $$\frac{d\stackrel{~}{r}}{dr}=\pm b^2(r)$$ (10) If the lower sign appears we apply the transformation $`r\to -r`$ to get the upper sign. So we get without loss of generality from eqs. (10) and (7) $$b^2(r)=b(r)+r\frac{db}{dr}$$ (11) The non-constant solutions of eq. (11) are $$b(r)=\frac{1}{1+\alpha r}$$ (12) with a non-vanishing constant $`\alpha `$. The assumption that $`b(r)r`$ is not a constant is then always fulfilled. We get $$\stackrel{~}{r}=\frac{r}{1+\alpha r}$$ (13) which is valid for $`1+\alpha r\ne 0`$ and can be inverted to $$r=\frac{\stackrel{~}{r}}{1+\stackrel{~}{\alpha }\stackrel{~}{r}}$$ (14) where $`\stackrel{~}{\alpha }=-\alpha `$. Eqs. (13) and (14) are dual to each other: Exchange of tilded and untilded quantities changes the one into the other. A likewise duality can be found for eq. (8) because of $$\stackrel{~}{b}(\stackrel{~}{r})b(r)\equiv 1$$ (15) and for eq. (12). Factorizing out a suitable homothetic transformation we can restrict to the case $`\alpha =1`$. Further, we restrict to the case that the denominator of eq. (13) is positive.
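The defining relations of the duality are easy to check symbolically. The following sketch (Python with SymPy; not part of the original paper) confirms that $`b(r)=1/(1+\alpha r)`$ solves eq. (11), that eq. (14) with $`\stackrel{~}{\alpha }=-\alpha `$ inverts eq. (13), and that the identity (15) holds.

```python
import sympy as sp

r, rt, alpha = sp.symbols('r rt alpha', positive=True)
b = 1 / (1 + alpha * r)

# b solves the defining ODE (11): b^2 = b + r db/dr
assert sp.simplify(b**2 - b - r * sp.diff(b, r)) == 0

# eq. (14) with alpha~ = -alpha inverts eq. (13)
r_tilde = r / (1 + alpha * r)
r_back = (rt / (1 - alpha * rt)).subs(rt, r_tilde)
assert sp.simplify(r_back - r) == 0

# duality of the conformal factor, eq. (15): b~(r~) * b(r) = 1
b_tilde = (1 / (1 - alpha * rt)).subs(rt, r_tilde)
assert sp.simplify(b_tilde * b - 1) == 0
```

Applying the map twice therefore returns the original radial coordinate and conformal factor, which is the duality property stated below.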
Let us summarize this restricted case as follows: Let $`A(r)`$ be any positive function, let $`b(r)=1/(1+\alpha r)`$ with $`\alpha =1`$ and let $$ds^2=-A(r)dt^2+\frac{dr^2}{A(r)}+r^2d\mathrm{\Omega }^2$$ Then the tilde-operator defined by $`\stackrel{~}{\alpha }=-\alpha `$, $`\stackrel{~}{A}(\stackrel{~}{r})=b^2(r)A(r)`$, $$\stackrel{~}{r}=\frac{r}{1+\alpha r}$$ and $$d\stackrel{~}{s}^2=b^2(r)ds^2$$ represents a duality, i.e., the square of the tilde-operator is the identity operator.

## 3 Spherical symmetry and the Bach equation

Let us apply the duality from sct. 2 to the Schwarzschild–de Sitter solution, i.e., to metric (3) with $$A(r)=1-\frac{2m}{r}-\frac{\mathrm{\Lambda }}{3}r^2$$ (16) That means, we have to insert eq. (16) into eqs. (6, 12, 14). Finally, we remove all the tildes, and we arrive at a metric which exactly coincides with eqs. (3,4): There is a one-to-one correspondence between the three parameters $`m`$, $`\mathrm{\Lambda }`$, $`\alpha `$ on the one hand, and $`\beta `$, $`\gamma `$, $`k`$ on the other hand. Here is the main result of the present paper: The Mannheim-Kazanas solution given by eqs. (3,4) of the Bach equation is nothing but a conformally transformed Schwarzschild-de Sitter metric; the 3-parameter set of solutions (3,4) can be found by the conformal duality deduced in sct. 2. It should be mentioned that the set of solutions of the full non-linear field equation is really only 3-dimensional, and that this is in contrast to the linearized equation, which allows all linear combinations of 1, $`r`$, $`1/r`$, and $`r^2`$, i.e., a 4-dimensional set. Up to now we have assumed that the metric is static and spherically symmetric.
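The one-to-one correspondence can be made explicit with a short symbolic computation. In the sketch below, $`\alpha `$ is kept general (not yet restricted to $`\alpha =1`$), and the expressions for $`\beta `$, $`\gamma `$, $`k`$ are derived here rather than quoted from the paper.

```python
import sympy as sp

r, rt, m, Lam, alpha = sp.symbols('r rt m Lambda alpha', positive=True)

# Schwarzschild-de Sitter potential, eq. (16), and conformal factor (12)
A = 1 - 2*m/r - Lam*r**2/3
b = 1 / (1 + alpha*r)

# Transformed potential A~(r~) = b(r)^2 A(r), eq. (8), with r from eq. (14)
A_t = (b**2 * A).subs(r, rt / (1 - alpha*rt))

# Candidate dictionary (m, Lambda, alpha) -> (beta, gamma, k)
beta = m / (1 + 3*m*alpha)
gamma = -2*alpha*(1 + 3*m*alpha)
k = Lam/3 - alpha**2 - 2*m*alpha**3

# Mannheim-Kazanas form, eq. (4)
A_MK = (1 - 3*beta*gamma) - (2 - 3*beta*gamma)*beta/rt + gamma*rt - k*rt**2

assert sp.simplify(A_t - A_MK) == 0  # the two potentials agree identically
```

In particular, the coefficient of the $`1/\stackrel{~}{r}`$ term works out to $`2m`$ for any $`\alpha `$, while $`\gamma `$ vanishes exactly when $`\alpha =0`$, i.e., when no conformal transformation is applied.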
However, the Bach equation also allows one to prove a Birkhoff-like theorem: Every spherically symmetric solution is conformally related to a solution possessing a fourth isometry.<sup>4</sup><sup>4</sup>4This fourth isometry may be time-like or space-like, and we have a regular horizon at surfaces where this character changes; so this is exactly analogous to the situation in Einstein’s theory. Another version of this result reads: Every spherically symmetric solution is almost everywhere conformally related to an Einstein space. Furthermore, the necessary conformal factor can always be chosen such that it maintains the spherical symmetry. Why do we need the restriction “almost everywhere” in the second version? This applies to those points where the necessary conformal transformation becomes singular. Example: Take $`u=1/r`$ as a new coordinate in the Schwarzschild solution and apply an analytic conformal transformation such that the metric can be analytically continued to negative values of $`u`$ via a regular point $`u=0`$; by construction, this space-time solves the Bach equation, but at $`u=0`$ it fails to be conformally related to an Einstein space.

## 4 Applicability of Schwarzschild coordinates

To complete the discussion we want to give also those spherically symmetric solutions of the Bach equation which cannot be expressed in Schwarzschild coordinates. Before we do so, let us compare with the analogous situation in Einstein’s theory. There is a long tradition of assuming, see e.g. , that every static spherically symmetric line element can be expressed in Schwarzschild coordinates, i.e., that in metric (5), $`C(r)=r^2`$ can be achieved by a coordinate transformation. However, the topic is a little bit more involved:<sup>5</sup><sup>5</sup>5In , sct. 23.2., page 595 one reads: “For a more rigorous proof that in any static spherical system Schwarzschild coordinates can be introduced, see Box 23.3.”. But that Box 23.3.
at page 617 does not only give this proof, but also the necessary assumption: “ …such a transformation is possible, (i.e. nonsingular) only where $`(\nabla r)^2\ne 0`$.” Later in the book (page 843) one can find the sentence: “The special case $`(\nabla r)^2=0`$ is treated in exercise 32.1.” and 3 pages later “We thank G.F.R. Ellis for pointing out the omission of the case $`(\nabla r)^2=0`$ in the preliminary version of this book.” Gaussian coordinates for metric (5), i.e., $`B\equiv 1`$, can always be chosen by a redefinition of $`r`$, but Schwarzschild coordinates can be introduced only in regions where $`dC/dr\ne 0`$. On the other hand, the Schwarzschild radius comes out after one integration, which has the result that usually the order of the field equation is reduced by one if expressed in Schwarzschild coordinates. This latter property is the very reason for their usefulness. Let us take a special example of the Schwarzschild-de Sitter metric: We insert $`m=l/3>0`$ and $`\mathrm{\Lambda }=1/l^2`$ into eqs. (3,16). For any positive constant $`\epsilon `$ we apply the following coordinate transformations $$r=l+\epsilon x,t=l^2\tau /\epsilon $$ (17) and get $$ds^2=-l^4Dd\tau ^2+\frac{dx^2}{D}+(l+\epsilon x)^2d\mathrm{\Omega }^2$$ (18) with $$D=\frac{1}{\epsilon ^2}\left[1-\frac{2l}{3(l+\epsilon x)}-\frac{(l+\epsilon x)^2}{3l^2}\right]$$ (19) Developing this $`D`$ in a series in $`\epsilon `$, it turns out that it is regular at $`\epsilon =0`$, where its value reads $`D=-x^2/l^2`$. Therefore: Eq. (18) represents a one-parameter family of space-times analytic in the parameter $`\epsilon `$, and for every $`\epsilon >0`$ it represents a spherically symmetric solution of the Einstein equation with $`\mathrm{\Lambda }=1/l^2`$. For continuity reasons, it represents a solution also for $`\epsilon =0`$. We get $$ds^2=l^2\left[-\frac{dx^2}{x^2}+x^2d\tau ^2+d\mathrm{\Omega }^2\right]$$ (20) which represents a spherically symmetric Einstein space that cannot be written in Schwarzschild coordinates.
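The regularity of $`D`$ at $`\epsilon =0`$ is easy to verify symbolically. The sketch below expands eq. (19) and recovers the limiting value $`-x^2/l^2`$; the sign follows from $`A(l)=A^{}(l)=0`$ and $`A^{\prime \prime }(l)=-2/l^2`$ for this extremal choice of parameters.

```python
import sympy as sp

x, l, eps = sp.symbols('x l epsilon', positive=True)

# eq. (19): D(eps) after inserting m = l/3, Lambda = 1/l^2 via r = l + eps*x
D = (1 - 2*l/(3*(l + eps*x)) - (l + eps*x)**2/(3*l**2)) / eps**2

D0 = sp.limit(D, eps, 0)                   # regular limit at eps = 0
assert sp.simplify(D0 + x**2/l**2) == 0    # D -> -x^2/l^2
```

Since the limit is finite, eq. (18) indeed defines a family of metrics analytic in $`\epsilon `$ through $`\epsilon =0`$.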
(It should be noted that in these coordinates, $`x`$ is timelike and $`\tau `$ is space-like.) The deduction of this solution presented here seems to be new. Nevertheless, the solution itself is already known, though usually it is not listed within the set of spherically symmetric Einstein spaces: In it is listed in table 10.1. under the topic “$`G_6`$ with $`\mathrm{\Lambda }`$-term”. In fact, metric (20) represents the cartesian product of two 2-spaces of constant and equal curvature, cf. . Therefore, it is also a static metric and possesses a 6-dimensional isometry group. A fortiori, metric (20) represents also a static spherically symmetric solution of the Bach equation, and this solution is not listed in refs. . Further, let us mention that the cartesian product of two 2-spaces of constant curvature $`P`$ and $`Q`$, respectively, represents an Einstein space iff $`P=Q`$, and it represents a solution of the Bach equation iff $`P^2=Q^2`$. Thus, for $`P=-Q\ne 0`$ we get another static spherically symmetric solution of the Bach equation which cannot be expressed in Schwarzschild coordinates; however, it is conformally flat and therefore trivial, too. Finally, we want to stress that the above considerations dealt only with vacuum solutions of conformal Weyl gravity; of course, the inclusion of non-conformal matter requires fixing one of the conformal frames, and it has yet to be discussed whether this should be the Schwarzschild-de Sitter or the Mannheim-Kazanas frame. The result of the present paper is that both solutions are conformally related, and that no further spherically symmetric solutions of the Bach equation exist.

## Acknowledgement

Financial support from DFG is gratefully acknowledged. I thank the colleagues of Free University Berlin, where this work has been done, especially Prof. H. Kleinert, for valuable comments.

## References

R. Bach, Math. Zeitschr. 9 (1921) 110; H. Weyl, Sitzber. Preuss. Akad. d. Wiss. Berlin, Phys.-Math. Kl. (1918) 465.

P. Mannheim, D. Kazanas, Gen. Relat. Grav. 26 (1994) 337; P. Mannheim, D. Kazanas, Phys. Rev. D 44 (1991) 417; P. Mannheim, Phys. Rev. D 58 (1998) 103511; N. Spyrou, D. Kazanas, E. Esteban, Class. Quant. Grav. 14 (1997) 2663.

A. Edery, M. Paranjape, Phys. Rev. D 58 (1998) 024011; A. Edery, M. Paranjape, Gen. Relat. Grav. 31 (1999) in print.

J. Demaret, L. Querella, C. Scheen, Class. Quant. Grav. 16 (1999) 749.

H.-J. Schmidt, Ann. Phys. (Leipz.) 41 (1984) 435.

R. Schimming, p. 39 in: M. Rainer, H.-J. Schmidt (Eds.), Current topics in mathematical cosmology, WSPC Singapore 1998.

H.-J. Schmidt, gr-qc/9703002; Gen. Relat. Grav. 29 (1997) 859.

V. Faraoni, E. Gunzig, P. Nardone: Conformal transformations in classical gravitational theories and in cosmology, gr-qc/9811047, Fund. Cosmic Physics, to appear 1999.

M. v. Laue, Sitzber. Preuss. Akad. d. Wiss. Berlin, Phys.-Math. Kl. (1923) 27.

C. Misner, K. Thorne, J. Wheeler: Gravitation, Freeman, San Francisco 1973.

D. Kramer, H. Stephani, M. MacCallum, E. Herlt: Exact solutions of Einstein’s field equations, Verl. d. Wiss. Berlin 1980.

M. Katanaev, T. Klösch, W. Kummer: Global properties of warped solutions in General Relativity, gr-qc/9807079.

H.-J. Schmidt, Grav. and Cosmol. 3 (1997) 185; gr-qc/9709071.
# On the choice of parameters in solar structure inversion

## 1 Introduction

The observed solar p-mode oscillation frequencies depend on the structure of the solar interior and atmosphere. The goal of the inverse analysis is to make inferences about the solar structure given these frequencies. A substantial number of inversions using a variety of techniques have been reported in the literature within the last decade (e.g. Gough & Kosovichev 1990; Däppen et al. 1991; Kosovichev 1993; Dziembowski et al. 1994; Basu et al. 1997). Two of the most commonly used inversion methods are implementations of the optimally localized averages (OLA) method, originally proposed by Backus & Gilbert (1968): the method of Multiplicative Optimally Localized Averages (MOLA), following the suggestion of Backus & Gilbert, and the method of Subtractive Optimally Localized Averages (SOLA), introduced by Pijpers & Thompson (1992, 1994). Both methods depend on a number of parameters that must be chosen in order to make reliable inferences of the variation of the internal structure along the solar radius. Most authors do not specify how these parameters are chosen or how a different choice would affect the solution. The goal of this work is to make a detailed analysis of the influence of each parameter on the solution, as a help towards arriving at an optimal set of parameters for a given data set. The adiabatic oscillation frequencies are determined solely by two functions of position: these may be chosen as density $`\rho `$ and $`\mathrm{\Gamma }_1=(\partial \mathrm{ln}p/\partial \mathrm{ln}\rho )_{\mathrm{ad}}`$, or as any other independent pair of model variables related directly to these (e.g. Christensen-Dalsgaard & Berthomieu 1991). The solar p modes are acoustic waves that propagate in the solar interior, and their frequencies are largely determined by the behaviour of the sound speed $`c`$. Hence, it is natural to use $`c`$ as one of the variables, combined with, e.g., $`\rho `$ or $`\mathrm{\Gamma }_1`$.
The helium abundance $`Y`$ is also commonly used, in combination with $`\rho `$ or $`p/\rho `$, $`p`$ being pressure; this, however, requires the explicit use of the equation of state, incomplete knowledge of which could cause systematic errors (see Basu & Christensen-Dalsgaard 1997). In this work, we consider the inverse problem as defined in terms of sound speed and density. ## 2 Linear Inversion Techniques ### 2.1 The inverse problem Inversions for solar structure are based on linearizing the equations of stellar oscillations around a known reference model. The differences in, for example, sound speed $`c`$ and density $`\rho `$ between the structure of the Sun and the reference model $`(\delta c^2/c^2,\delta \rho /\rho )`$ are then related to the differences between the frequencies of the Sun and the model ($`\delta \omega _i/\omega _i`$) by $`{\displaystyle \frac{\delta \omega _i}{\omega _i}}`$ $`=`$ $`{\displaystyle \int _0^1}K_{c^2,\rho }^i(r){\displaystyle \frac{\delta c^2}{c^2}}(r)dr+{\displaystyle \int _0^1}K_{\rho ,c^2}^i(r){\displaystyle \frac{\delta \rho }{\rho }}(r)dr`$ (1) $`+`$ $`{\displaystyle \frac{F_{\mathrm{surf}}(\omega _i)}{Q_i}}+ϵ_i,i=1,\dots ,M,`$ where $`r`$ is the distance to the centre, which, for simplicity, we measure in units of the solar radius $`R_{\odot }`$. The index $`i`$ numbers the multiplets $`(n,l)`$. The observational errors are given by $`ϵ_i`$, and are assumed to be independent and Gaussian-distributed with zero mean and variance $`\sigma _i^2`$. The kernels $`K_{c^2,\rho }^i`$ and $`K_{\rho ,c^2}^i`$ are known functions of the reference model. The term in $`F_{\mathrm{surf}}(\omega _i)`$ is the contribution from the uncertainties in the near-surface region (e.g. Christensen-Dalsgaard & Berthomieu 1991); here $`Q_i`$ is the mode inertia, normalized by the inertia of a radial mode of the same frequency. 
For linear inversion methods, the solution at a given point $`r_0`$ is determined by a set of inversion coefficients $`c_i(r_0)`$, such that the inferred value of, say, $`\delta c^2/c^2`$ is $$\frac{\delta c^2}{c^2}(r_0)=\sum _ic_i(r_0)\frac{\delta \omega _i}{\omega _i}.$$ (2) From the corresponding linear combination of equations (1) it follows that the solution is characterized by the averaging kernel, obtained as $$𝒦(r_0,r)=\sum _ic_i(r_0)K_{c^2,\rho }^i(r),$$ (3) and also by the cross-term kernel: $$𝒞(r_0,r)=\sum _ic_i(r_0)K_{\rho ,c^2}^i(r),$$ (4) which measures the influence of the contribution from $`\delta \rho /\rho `$ on the inferred $`\delta c^2/c^2`$. The standard deviation of the solution is obtained as $$\left(\sum _ic_i^2(r_0)\sigma _i^2\right)^{1/2}.$$ (5) The goal of the analysis is then to suppress the contributions from the cross term and the surface term in the linear combination in equation (2), while limiting the error in the solution. If this can be achieved, $$\frac{\delta c^2}{c^2}(r_0)\approx \int _0^1𝒦(r_0,r)\frac{\delta c^2}{c^2}(r)dr.$$ (6) It is generally required that $`𝒦(r_0,r)`$ has unit integral with respect to $`r`$, so that the inferred value is a proper average of $`\delta c^2/c^2`$: we apply this constraint here. Evidently, the resolution of the inference is controlled by the extent in $`r`$ of $`𝒦`$, the goal being to make it as narrow as possible. The surface term in equation (1) may be suppressed by assuming that $`F_{\mathrm{surf}}`$ can be expanded in terms of polynomials $`\psi _\lambda `$, and constraining the inversion coefficients to satisfy $$\sum _ic_i(r_0)Q_i^{-1}\psi _\lambda (\omega _i)=0,\lambda =0,1,\dots ,\mathrm{\Lambda }$$ (7) (Däppen et al. 1991). As $`F_{\mathrm{surf}}`$ is assumed to be a slowly varying function of frequency, we use Legendre polynomials of low degree to define the basis functions $`\psi _\lambda `$. 
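In code, the linear combinations of eqs (2)–(5) are direct sums over the mode kernels. A minimal sketch (NumPy; the kernels, coefficients and errors below are toy values, not solar kernels):

```python
import numpy as np

def averaging_kernel(coeffs, kernels):
    """Averaging kernel of eq. (3): K(r0, r) = sum_i c_i(r0) K_i(r).

    coeffs  : (M,) inversion coefficients c_i(r0)
    kernels : (M, N) mode kernels sampled on a common radius grid
    """
    return coeffs @ kernels

def solution_error(coeffs, sigma):
    """Standard deviation of the inferred average, eq. (5)."""
    return np.sqrt(np.sum(coeffs**2 * sigma**2))

# toy example with two modes on a five-point grid
kernels = np.array([[0.0, 1.0, 2.0, 1.0, 0.0],
                    [0.5, 1.0, 1.0, 1.0, 0.5]])
coeffs = np.array([0.6, 0.4])
sigma = np.array([1.0e-5, 2.0e-5])

K_av = averaging_kernel(coeffs, kernels)   # eq. (3)
err = solution_error(coeffs, sigma)        # eq. (5); here exactly 1.0e-5
```

The cross-term kernel of eq. (4) is obtained the same way, by feeding the kernels of the second variable to `averaging_kernel`.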
The maximum value of the polynomial degree, $`\mathrm{\Lambda }`$, used in the expansion is a free parameter of the inversion procedures, which must be fixed. There are analogous expressions for the density inversion, expressing $`\delta \rho /\rho (r_0)`$ in terms of the appropriate averaging kernel obtained as a linear combination of the mode kernels $`K_{\rho ,c^2}^i`$, and involving a cross term giving the contribution from $`\delta c^2/c^2`$. In the case of density inversion, an additional constraint is obtained by noting that the mass of the Sun is quite accurately known, the mass of the reference model being usually fixed at this value; thus the density difference is generally constrained to satisfy $$4\pi \int _0^1\frac{\delta \rho }{\rho }(r)\rho (r)r^2dr=0.$$ (8) We have found that this constraint is important for stabilizing the solution. A number of different inversion techniques can be used for inverting the constraints given in equation (1). We have used two versions of the technique of Optimally Localized Averages (OLA) (cf. Backus & Gilbert 1968) where the inversion coefficients are determined explicitly. ### 2.2 SOLA Technique The aim of the Subtractive Optimally Localized Averages (SOLA) method (Pijpers & Thompson 1992, 1994) is to determine the inversion coefficients so that the averaging kernel is an approximation to a given target $`𝒯(r_0,r)`$, by minimizing $`{\displaystyle \int _0^1}\left[𝒦(r_0,r)-𝒯(r_0,r)\right]^2dr+\beta {\displaystyle \int _0^1}𝒞^2(r_0,r)f(r)dr`$ $`+\mu \overline{\sigma }^{-2}{\displaystyle \sum _i}c_i^2(r_0)\sigma _i^2,`$ (9) subject to $`𝒦`$ being unimodular. Here $`f(r)`$ is a suitably increasing function of radius aimed at suppressing the surface structure in the cross-term kernel: we have used $`f(r)=(1+r)^4`$. 
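The linear constraints on the coefficients — the surface conditions of eq. (7) and, for density, the mass condition of eq. (8) — are conveniently assembled as rows of a constraint matrix to be appended to the minimization via Lagrange multipliers. A sketch of the surface-constraint rows (function and variable names are ours; the frequency scaling to $`[-1,1]`$ is an assumption of this illustration):

```python
import numpy as np
from numpy.polynomial import legendre

def surface_constraint_rows(omega, Q, Lambda):
    """Rows A[lam, i] = psi_lam(omega_i) / Q_i of the constraints
    sum_i c_i Q_i^{-1} psi_lam(omega_i) = 0 (eq. 7), with psi_lam a
    Legendre polynomial of degree lam on the scaled interval [-1, 1]."""
    x = 2.0 * (omega - omega.min()) / (omega.max() - omega.min()) - 1.0
    rows = []
    for lam in range(Lambda + 1):
        coef = np.zeros(lam + 1)
        coef[lam] = 1.0                      # select P_lam
        rows.append(legendre.legval(x, coef) / Q)
    return np.array(rows)                    # shape (Lambda + 1, M)

# toy mode set spanning 1.5-3.5 mHz with unit normalized inertia
omega = np.linspace(1.5, 3.5, 10)
A = surface_constraint_rows(omega, np.ones(10), Lambda=6)
```

Each row then contributes one Lagrange-multiplier equation in the bordered linear system that determines the coefficients.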
Also, $`\mu `$ is a trade-off parameter, determining the balance between the demands of a good fit to the target and a small error in the solution, while the quantity $`\overline{\sigma }^2`$ is the average variance, defined by $$\overline{\sigma }^2=\frac{\sum _i\sigma _i^2}{M},$$ (10) $`M`$ being the total number of modes. The second trade-off parameter $`\beta `$ determines the balance between the demands of a well-localized averaging kernel and a small cross term. To suppress the influence of near-surface uncertainties, i.e., the term in $`F_{\mathrm{surf}}`$, the coefficients are constrained to satisfy equation (7). We have used target functions defined by $$𝒯(r_0,r)=Ar\mathrm{exp}\left[-\left(\frac{r-r_0}{\mathrm{\Delta }(r_0)}+\frac{\mathrm{\Delta }(r_0)}{2r_0}\right)^2\right],$$ (11) where $`A`$ is a normalization constant to make the target unimodular. Thus the target function has its maximum at $`r=r_0`$ and has almost a Gaussian shape, except that it is forced to go to zero at $`r=0`$. The target is characterized by a linear width in the radial direction: $`\mathrm{\Delta }(r_0)`$ = $`\mathrm{\Delta }_\mathrm{A}c(r_0)/c(r_\mathrm{A})`$, where $`r_\mathrm{A}`$ is a reference radius; this variation of the width with sound speed reflects the ability of the modes to resolve solar structure (e.g. Thompson 1993). We have taken $`r_\mathrm{A}=0.2R_{\odot }`$, and in the following characterize the width by the corresponding parameter $`\mathrm{\Delta }_\mathrm{A}`$. 
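The target of eq. (11) is easy to reproduce numerically, and one can check that it indeed peaks at $`r=r_0`$ and vanishes at the centre. A sketch (the trapezoidal normalization on a discrete grid is our choice):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal quadrature (avoids depending on the np.trapz naming)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sola_target(r, r0, Delta):
    """Modified-Gaussian target of eq. (11): proportional to
    r * exp(-((r - r0)/Delta + Delta/(2*r0))**2); the factor r forces it
    to zero at the centre, and the shift Delta/(2*r0) keeps the maximum
    at r0.  Normalized to unit integral (unimodular)."""
    T = r * np.exp(-((r - r0) / Delta + Delta / (2.0 * r0)) ** 2)
    return T / trapezoid(T, r)

r = np.linspace(0.0, 1.0, 2001)
T = sola_target(r, r0=0.3, Delta=0.05)   # peaks at r = 0.3, zero at r = 0
```

Setting the derivative of $`\mathrm{ln}𝒯`$ to zero confirms the role of the shift: at $`r=r_0`$ the $`1/r`$ term from the factor $`r`$ exactly cancels the Gaussian term.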
### 2.3 MOLA Technique In the case of the Multiplicative Optimally Localized Averages (MOLA) method, the coefficients are found by minimizing $`{\displaystyle \int _0^1}𝒦^2(r_0,r)J(r_0,r)dr+\beta {\displaystyle \int _0^1}𝒞^2(r_0,r)f(r)dr`$ $`+\mu \overline{\sigma }^{-2}{\displaystyle \sum _i}c_i^2(r_0)\sigma _i^2,`$ (12) where $`J(r_0,r)`$ is a weight function that is small near $`r_0`$ and large elsewhere: $$J(r_0,r)=(r-r_0)^2.$$ (13) This, together with the normalization constraint, forces $`𝒦`$ to be large near $`r_0`$ and small elsewhere, as desired. As in equation (9), $`f(r)`$ is included to suppress surface structure in the cross-term kernel. The quantity $`\overline{\sigma }^2`$ is defined by equation (10). To suppress the influence of near-surface uncertainties, i.e., the term in $`F_{\mathrm{surf}}`$, the coefficients are again constrained to satisfy equation (7). The MOLA technique is generally much more demanding on computational resources than is the SOLA technique because it involves analysis of a kernel matrix which depends on the target $`r_0`$; in the SOLA case, the corresponding matrix is independent of $`r_0`$ and hence need only be analyzed once, for a given inversion case. ### 2.4 Quality measures for the solution As seen from the previous sections, the inversions are characterized by the free parameters $`\mu `$, $`\beta `$, $`\mathrm{\Lambda }`$ and $`\mathrm{\Delta }_\mathrm{A}`$ in the case of SOLA. These must be chosen to balance the relative importance of obtaining a well-localized average of the sound speed (or density) difference, minimizing the variance of the random error and reducing the sensitivity of the solution to the second function (i.e., the cross term) as well as to the surface uncertainties. The resolution of the inversion is characterized by the properties of the averaging kernel (eq. 3), which determine the degree to which a well-localized average of the underlying true solution can be obtained. 
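Both OLA variants reduce to minimizing a quadratic form in the coefficients subject to the unimodularity constraint, which a Lagrange multiplier turns into a bordered linear system. A minimal MOLA-style sketch, with the cross term and surface constraints omitted for brevity (the box kernels, the quadrature by `np.gradient` weights, and the normalization of the error penalty by the average variance are all choices of this illustration, not the paper's implementation):

```python
import numpy as np

def mola_coefficients(kernels, r, r0, sigma, mu):
    """Coefficients minimizing a reduced form of eq. (12) subject to
    sum_i c_i * integral(K_i) = 1, via a bordered linear system."""
    w = np.gradient(r)                        # quadrature weights
    J = (r - r0) ** 2                         # MOLA weight function, eq. (13)
    A = (kernels * (w * J)) @ kernels.T
    # error penalty; dividing by the average variance is our normalization
    A += mu / np.mean(sigma**2) * np.diag(sigma**2)
    k_int = (kernels * w) @ np.ones_like(r)   # integrals of each mode kernel
    M = kernels.shape[0]
    B = np.zeros((M + 1, M + 1))
    B[:M, :M] = A
    B[:M, M] = k_int                          # unimodularity constraint column
    B[M, :M] = k_int
    rhs = np.zeros(M + 1)
    rhs[M] = 1.0
    return np.linalg.solve(B, rhs)[:M]

# three disjoint box kernels on [0,1]; target the centre of the first box
r = np.linspace(0.0, 1.0, 301)
kernels = np.array([np.where((r >= lo) & (r < lo + 1/3), 3.0, 0.0)
                    for lo in (0.0, 1/3, 2/3)])
c = mola_coefficients(kernels, r, r0=1/6, sigma=np.ones(3), mu=1e-6)
```

As expected, the kernel closest to the target radius receives by far the largest coefficient, since its $`J`$-weighted norm is smallest.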
Various measures of the width of $`𝒦`$ have been considered in the literature. Here we measure resolution in terms of the distance $`\mathrm{\Delta }_{\mathrm{qu}}=r_{\mathrm{qu}}^{(3)}-r_{\mathrm{qu}}^{(1)}`$ between the upper and lower quartile points of $`𝒦`$; these are defined such that one quarter of the area under $`𝒦`$ lies to the left of $`r_{\mathrm{qu}}^{(1)}`$ and one quarter of the area lies to the right of $`r_{\mathrm{qu}}^{(3)}`$. Furthermore, the location of $`r_{\mathrm{qu}}^{(1)}`$ and $`r_{\mathrm{qu}}^{(3)}`$, relative to the target location $`r_0`$, provides a measure of any possible shift of the solution relative to the target. For the average of the solution to be well-localized, it is not enough that $`\mathrm{\Delta }_{\mathrm{qu}}`$ be small: pronounced wings and other structure away from the target radius will produce nonlocal contributions to the average. As a measure of such effects in the SOLA case, we consider $$\chi (r_0)=\int _0^1[𝒦(r_0,r)-𝒯(r_0,r)]^2dr,$$ (14) which should be small (Pijpers & Thompson 1994). In the MOLA case, we introduce $$\chi ^{\prime }(r_0)=\int _0^{r_A}𝒦^2(r_0,r)dr+\int _{r_B}^1𝒦^2(r_0,r)dr,$$ (15) where $`r_A`$ and $`r_B`$ are defined in such a way that the averaging kernel has its maximum at $`(r_A+r_B)/2`$ and its FWHM is equal to $`(r_B-r_A)/2`$; again, a properly localized kernel requires that $`\chi ^{\prime }`$ is small. In a similar way, it is useful to define a measure $`C(r_0)`$ of the overall effect of the cross term: $$C(r_0)=\sqrt{\int _0^1𝒞^2(r_0,r)dr},$$ (16) which should be small in order to reduce the sensitivity of the solution to the second function. It is evident that the overall magnitude of the error in the inferred solution should be constrained. However, Howe & Thompson (1996) pointed out that it is important to consider also the correlation between the errors in the solution at different target radii. 
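The quartile-based width $`\mathrm{\Delta }_{\mathrm{qu}}`$ is a simple functional of the cumulative area under the averaging kernel. A sketch (linear interpolation of the cumulative trapezoidal area is our choice):

```python
import numpy as np

def quartile_width(r, K):
    """Resolution measure Delta_qu = r_qu(3) - r_qu(1): the distance between
    the points leaving one quarter of the area under K to the left and one
    quarter to the right, found from the cumulative (trapezoidal) area."""
    area = np.concatenate(([0.0],
                           np.cumsum(0.5 * (K[1:] + K[:-1]) * np.diff(r))))
    area /= area[-1]
    r1 = np.interp(0.25, area, r)   # lower quartile point
    r3 = np.interp(0.75, area, r)   # upper quartile point
    return r3 - r1

# sanity check: for a Gaussian kernel the interquartile width is
# about 1.349 times its standard deviation (here 1.349 * 0.05 = 0.0674)
r = np.linspace(0.0, 1.0, 4001)
K = np.exp(-0.5 * ((r - 0.5) / 0.05) ** 2)
width = quartile_width(r, K)
```

Comparing `r1` and `r3` with the target radius, as in the text, also exposes any systematic shift of the kernel.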
This arises even if the errors in the original data are uncorrelated: the errors in the solution at two positions are generally correlated, because they have been derived from the same set of data. The normalized correlation function which describes the correlation between the errors in the solution at $`r_1`$ and at $`r_2`$ is defined as: $$E(r_1,r_2)=\frac{\sum _ic_i(r_1)c_i(r_2)\sigma _i^2}{\left[\sum _ic_i^2(r_1)\sigma _i^2\right]^{1/2}\left[\sum _ic_i^2(r_2)\sigma _i^2\right]^{1/2}}.$$ (17) Howe & Thompson showed that correlated errors can introduce features into the solution on the scale of the order of the correlation-function width. Examples of correlation functions are shown in Fig. 1. For sound-speed inversion, the error correlation generally has a peak at $`r_1=r_2`$ of width corresponding approximately to the width of the averaging kernel (Fig. 1 top). For density inversion, this peak at $`r_1=r_2`$ is much broader than the averaging-kernel width. This is a consequence of the difficulty in inferring density using acoustic-mode frequencies. There is also a region of strong anti-correlation (Fig. 1 bottom). This is a result of applying the mass-conservation condition (eq. 8) since an excess of density in one part of the model has to be compensated by a deficiency in another. ## 3 Data and models The properties of the inversion depend on the mode selection and errors in the data; the combination of mode selection and errors is often described as the mode set, in contrast to the data set which in addition contains the data values. We have based the analysis on the combined LOWL + BiSON mode set described by Basu et al. (1997). Here the modes are in the frequency range 1.5–3.5 mHz, with degrees between 0 and 99. This set in particular provides values for the standard errors $`\sigma _i`$ which to a large extent control the weights given to individual modes; here $`\overline{\sigma }^2=8.6\times 10^{-11}`$ (cf. eq 10). 
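The error correlation of eq. (17) follows directly from the inversion coefficients at each target radius. A sketch (the coefficient matrix below is a toy illustration, not a solar inversion):

```python
import numpy as np

def error_correlation(coeffs, sigma):
    """Normalized error-correlation matrix of eq. (17).

    coeffs : (P, M) inversion coefficients, one row per target radius
    sigma  : (M,) standard errors of the mode data
    """
    cov = (coeffs * sigma**2) @ coeffs.T   # error covariances between radii
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# three target radii, two modes: the first two solutions use disjoint
# modes (uncorrelated errors), the third mixes both
coeffs = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
E = error_correlation(coeffs, np.ones(2))
```

By construction the diagonal of `E` is unity, and solutions sharing no modes are uncorrelated.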
In some cases realizations of artificial data were considered; these were obtained as differences between frequencies of the proxy and reference models, discussed below, with the addition of normally distributed random errors with the variances of the LOWL+BiSON mode set. Also, but to a far lesser extent, the inversion depends on the reference model. We have used Model S of Christensen-Dalsgaard et al. (1996) as our reference model. The model assumes that the Sun has an age of 4.6 Gyr. To construct artificial data for tests of the parameters for solar structure inversion we adopted as a “proxy Sun” another model, of identical physical assumptions to those in Model S, but with the lower age of 4.52 Gyr. With no further modifications, the “proxy Sun” would not include any surface uncertainties, and hence $`F_{\mathrm{surf}}`$ would have been zero. To provide a reasonably realistic model of this term, the “proxy Sun” in addition contained a near-surface modification based on a simple description of the effects of turbulent pressure on the frequencies, and calibrated to match the actual near-surface contribution to the difference between the solar frequencies and those of Model S (cf. Rosenthal 1998). ## 4 The choice of inversion parameters For a given mode set, the parameters controlling the inversion must be chosen in a way that, in an appropriate sense, optimizes the measures of quality introduced in Section 2.4. Needless to say, this places conflicting demands on the different parameters, requiring appropriate trade-offs. Also, it is probably fair to say that no uniquely defined optimum solution exists. Here we have chosen what appear to be reasonable parameter sets (cf. eqs 20 and 21). The procedure leading to these choices is summarized in Section 4.4; however, we first justify them by investigating the effect on the properties of the inversion of modifications to the parameters around these values. 
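The artificial data of Section 3 amount to relative frequency differences between proxy and reference models plus independent Gaussian noise with the quoted standard errors. A minimal sketch (the seeded generator for reproducibility is our choice):

```python
import numpy as np

def artificial_differences(omega_proxy, omega_ref, sigma, seed=1):
    """Relative frequency differences delta-omega/omega between a 'proxy
    Sun' and the reference model, with independent Gaussian errors of
    standard deviation sigma added (cf. Section 3)."""
    rng = np.random.default_rng(seed)
    return (omega_proxy - omega_ref) / omega_ref + rng.normal(0.0, sigma)

# noise-free check: a uniform 0.01 per cent frequency shift
omega_ref = np.array([2000.0, 2500.0, 3000.0])   # muHz, toy values
d = artificial_differences(omega_ref * 1.0001, omega_ref, np.zeros(3))
```

Inverting several noise realizations of the same model pair, as done in the paper, gives a direct check of the quoted solution errors.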
This is mostly done in terms of quantities such as error, error correlation, and kernel properties, which do not depend on the data values used; however, the effects are also illustrated by analyses of the artificial data defined in Section 3. The parameter $`\mathrm{\Lambda }`$ plays a somewhat special role, in that the suppression of the surface effects is common to both inversion methods (SOLA and MOLA) and to inversion for $`\delta c^2`$ and $`\delta \rho `$. For this reason we treat $`\mathrm{\Lambda }`$ separately, in Section 4.1. The response of the solution to the values of the remaining parameters depends somewhat on the choice of inversion method, and strongly on whether the inversion is for the sound speed or density difference. We consider sound-speed inversion in Section 4.2 and density inversion in Section 4.3. ### 4.1 The choice of $`\mathrm{\Lambda }`$ Unlike the remaining inversion parameters, the choice of the degree $`\mathrm{\Lambda }`$ used in the suppression of the surface term must directly reflect the properties of the data values; we base the analysis on the near-surface modification introduced in the artificial data according to the procedure of Rosenthal (1998) (very similar results are obtained for solar data). To determine the most appropriate value of $`\mathrm{\Lambda }`$ we consider the frequency-dependent part of the frequency differences. This is isolated by noting that according to the asymptotic theory the frequency differences satisfy (e.g. Christensen-Dalsgaard, Gough & Thompson 1989) $$S_i\frac{\delta \omega _i}{\omega _i}\approx H_1\left(\frac{\omega _i}{L}\right)+H_2(\omega _i),$$ (18) with $`L=l+1/2`$, where $`l`$ is the degree of mode $`i`$. Here $`S_i`$ is a scaling factor which in the asymptotic limit is proportional to $`Q_i`$, and the slowly varying component of $`H_2(\omega _i)`$ corresponds to the function $`F_{\mathrm{surf}}`$. 
Thus, by fitting a linear combination of Legendre polynomials to $`H_2`$: $$H_2(\omega _i)\approx \sum _{\lambda =0}^{\mathrm{\Lambda }}a_\lambda P_\lambda (\omega _i),$$ (19) we can determine the appropriate value of $`\mathrm{\Lambda }`$ for any given data set. In practice, we make a non-linear least-squares fit to a sum of two linear combinations of Legendre polynomials, in $`\omega /L`$ and $`\omega `$, to $`S_i\delta \omega _i/\omega _i`$, using a high $`\mathrm{\Lambda }`$ ($`\mathrm{\Lambda }=16`$). Then we remove $`H_1`$ from $`S_i\delta \omega _i/\omega _i`$ and fit a single linear combination of Legendre polynomials in $`\omega `$, looking for the smallest value of $`\mathrm{\Lambda }`$ that provides a good fit (Fig. 2). On this basis, we infer that $`\mathrm{\Lambda }=6`$ provides an adequate representation of the surface term; we use this as our reference value in the following. The solar data considered by Basu et al. (1997) have a similar behaviour, and $`\mathrm{\Lambda }=6`$ is also an appropriate choice in that case. The constraints imposed by equation (7) do not depend explicitly on the target location; hence it is reasonable that they introduce a contribution to the errors in the solution that varies little with $`r_0`$, leading to an increase in the error correlation. This is confirmed in the case of sound-speed inversion by the results shown in the top panel of Fig. 1. (We note that, in contrast, for density inversion the correlation decreases somewhat with increasing $`\mathrm{\Lambda }`$; we have no explanation for this curious behaviour, but note that the density correlation is in any case substantial.) 
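The residual of the Legendre fit of eq. (19) as a function of the maximum degree is what singles out $`\mathrm{\Lambda }`$. A sketch of that step (the two-stage fit that first removes $`H_1`$ is omitted; the $`H_2`$ values below are synthetic):

```python
import numpy as np
from numpy.polynomial import Legendre

def fit_H2(omega, H2, Lambda):
    """Least-squares fit of a degree-Lambda Legendre series to H2(omega),
    eq. (19); returns the fitted values and the rms residual used to
    judge whether Lambda suffices."""
    fit = Legendre.fit(omega, H2, deg=Lambda)
    resid = H2 - fit(omega)
    return fit(omega), np.sqrt(np.mean(resid**2))

# a synthetic smooth surface term: cubic in frequency
omega = np.linspace(1.5, 3.5, 60)
H2 = 0.1 - 0.3 * omega + 0.05 * omega**3
_, rms1 = fit_H2(omega, H2, Lambda=1)   # degree too low: poor fit
_, rms3 = fit_H2(omega, H2, Lambda=3)   # matches the cubic: fit is exact
```

In practice one increases `Lambda` until the rms residual stops improving, which for the data considered here happens at $`\mathrm{\Lambda }=6`$.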
### 4.2 Parameters for sound-speed inversion As reference we use what is subsequently determined to be the best choice of parameters: SOLA $`:`$ $`\mathrm{\Lambda }=6,\mu =10^{-4},\beta =2,\mathrm{\Delta }_\mathrm{A}=0.06;`$ MOLA $`:`$ $`\mathrm{\Lambda }=6,\mu =10^{-5},\beta =1.`$ (20) Effects on the quality measures of varying the parameters around these values are illustrated in Figs 3, 4, 5 and 7; in addition, Fig. 7 shows results of the analysis of artificial data (cf. Section 3), and Fig. 8 illustrates properties of selected averaging kernels. Throughout, parameters not explicitly mentioned have their reference values. #### 4.2.1 Choice of $`\mu `$ Generally known as the trade-off parameter, $`\mu `$ must be determined to ensure a trade-off between the solution error (eq. 5) and resolution of the averaging kernel. This is typically illustrated in trade-off diagrams such as Fig. 3, showing the solution error against resolution (here defined by the separation $`\mathrm{\Delta }_{\mathrm{qu}}`$ between the quartile points) as $`\mu `$ varies (circles). As $`\mu `$ is reduced, the solution error increases; the resolution width generally decreases towards a limiting value which, in the SOLA case, is typically determined by the target width $`\mathrm{\Delta }(r_0)`$. On the other hand, for larger values of $`\mu `$, there is a strong increase in the width, with a corresponding very small reduction in the solution error. The behaviour in the trade-off diagram depends on the target radius $`r_0`$ considered, the risk of a misleading solution being particularly serious in the core or near the surface if $`\mu `$ is too large. Thus it is important to look at the trade-off diagram at different target radii. This is illustrated in Fig. 4, where solution error (lower panels) and the location of the quartile points relative to the target radius (upper panels) are plotted. 
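The trade-off can be reproduced in miniature: for a SOLA-like quadratic objective, decreasing $`\mu `$ always lowers the misfit to the target and raises the propagated error. A self-contained sketch (Gaussian toy kernels, no constraints; the error-penalty normalization by the average variance is our choice):

```python
import numpy as np

def tradeoff(kernels, target, r, sigma, mus):
    """Solve the unconstrained SOLA-style normal equations
    (A + mu/mean(sigma^2) * diag(sigma^2)) c = v for each mu, recording
    the propagated error (eq. 5) and the misfit chi (eq. 14)."""
    w = np.gradient(r)                       # quadrature weights
    A = (kernels * w) @ kernels.T
    v = (kernels * w) @ target
    reg = np.diag(sigma**2) / np.mean(sigma**2)
    out = []
    for mu in mus:
        c = np.linalg.solve(A + mu * reg, v)
        err = np.sqrt(np.sum(c**2 * sigma**2))          # eq. (5)
        chi = np.sum((c @ kernels - target) ** 2 * w)   # eq. (14)
        out.append((err, chi))
    return out

# six overlapping Gaussian toy kernels and a narrow target at r = 0.5
r = np.linspace(0.0, 1.0, 401)
centres = np.linspace(0.15, 0.85, 6)
kernels = np.array([np.exp(-0.5 * ((r - c0) / 0.08) ** 2) for c0 in centres])
target = np.exp(-0.5 * ((r - 0.5) / 0.08) ** 2)
results = tradeoff(kernels, target, r, np.ones(6), mus=[1e-3, 1e-1, 10.0])
```

Plotting `err` against the kernel width for such a sweep reproduces the L-shaped trade-off diagrams discussed in the text.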
It is evident that the error increases markedly towards the centre and surface, particularly in the SOLA case. In addition, the averaging kernels get relatively broad and they tend to be shifted relative to the target location, particularly near the centre. Note that the results plotted in Fig. 4 use values of $`\mu `$ such that the solution errors given by the SOLA and MOLA techniques are similar. Even for small values of $`\mu `$, the MOLA averaging kernels do not penetrate as deeply into the core as do the SOLA averaging kernels. The resolution of the averaging kernel is more sensitive to the choice of $`\mu `$ using MOLA than using SOLA (cf. Fig. 3). In addition to the error and resolution, we also need to consider other properties of the solution. ‘Global’ properties of the averaging kernels, measured by $`\chi `$ or $`\chi ^{\prime }`$, are illustrated in Fig. 5, together with the integrated measure $`C`$ of the cross-talk. For larger values of $`\mu `$, these quantities increase, particularly near the surface. The strong increase in $`\chi ^{\prime }`$ (and $`\chi `$) at large target radii is due to the presence of a depression in the averaging kernel near the surface that increases quickly with $`r_0`$ (cf. Fig. 8), especially for MOLA. The influence of $`\mu `$ on the error correlation is illustrated in Fig. 7; the correlation evidently increases with decreasing $`\mu `$, together with the solution error. Note that the error correlation increases with the target radius for any choice of parameters (see also Rabello-Soares, Basu & Christensen-Dalsgaard 1998). It is slightly smaller using SOLA than MOLA when their solution errors are similar. Finally, Fig. 7 shows the inferred solutions for the sound-speed difference obtained from the artificial data described in Section 3, for three values of $`\mu `$, and compared both with the true $`\delta c^2/c^2`$ and the solution inferred for error-free data. 
The behaviour of the solution generally reflects the properties discussed so far. In particular, it should be noticed that the solution for the data with errors is shifted systematically relative to the solution based on the error-free data, reflecting the error correlation, most clearly visible in the outer parts of the model; this behaviour illustrates the care required in interpreting even large-scale features in the solution at a level comparable with the inferred errors. We also note that even the inversion based on error-free data shows a systematic departure from the true solution, particularly near the surface. This appears to be a residual consequence of the imposed near-surface error, exacerbated by the lack of high-degree modes which might have constrained the solution in this region. We have checked this by considering in addition artificial data without the imposed near-surface error (cf. Section 3). #### 4.2.2 Choice of $`\beta `$ The importance of $`\beta `$ is seen most clearly in Fig. 5, in terms of the properties of the averaging kernels and the cross term: as desired, increasing $`\beta `$ reduces the importance of the cross term as measured by $`C`$, but at the expense of poorer averaging kernels, as reflected in $`\chi `$ and $`\chi ^{\prime }`$. It should be noticed, however, that the choice of $`\beta `$ mainly affects the solution in the core and near the surface, while it has little effect in the intermediate parts of the solar interior. We also note that the error in the solution increases with increasing $`\beta `$ (cf. Fig. 3), as does the error correlation (see Fig. 7). Thus the choice of $`\beta `$ is determined by the demand that the cross term be sufficiently strongly suppressed, without compromising the properties of the averaging kernels and errors. #### 4.2.3 Choice of $`\mathrm{\Delta }_\mathrm{A}`$ The SOLA technique has an additional parameter: the width of the target function at a reference radius (see eq. 11). 
The aim of SOLA is to construct a well-localized averaging kernel that will provide as good a resolution as possible. As illustrated in Fig. 3, $`\mathrm{\Delta }_\mathrm{A}`$ ensures a trade-off between averaging-kernel resolution (taking into account also the deviation $`\chi `$ from the target) and the solution error. The effect of $`\mathrm{\Delta }_\mathrm{A}`$ on the averaging kernels is illustrated in Fig. 8. Evidently, for high $`\mathrm{\Delta }_\mathrm{A}`$ the solution is smoothed more strongly than at low $`\mathrm{\Delta }_\mathrm{A}`$. However, if $`\mathrm{\Delta }_\mathrm{A}`$ is too small, the averaging kernel starts to oscillate; even more problematic is the presence of an extended tail away from the target radius, since it introduces a non-zero contribution from radii far removed from the target. As in the case of $`\mu `$, the error increases with increasing resolution when $`\mathrm{\Delta }_\mathrm{A}`$ is reduced. On the other hand, the error correlation decreases with decreasing $`\mathrm{\Delta }_\mathrm{A}`$ due to the stronger localization of the solution (cf. Fig. 7) and develops a tendency to oscillate. We finally note that $`C`$ is almost insensitive to $`\mathrm{\Delta }_\mathrm{A}`$. ### 4.3 Density inversion As reference we use what is subsequently determined to be the best choice of parameters: SOLA $`:`$ $`\mathrm{\Lambda }=6,\mu =10^{-5},\beta =10,\mathrm{\Delta }_\mathrm{A}=0.06;`$ MOLA $`:`$ $`\mathrm{\Lambda }=6,\mu =10^{-7},\beta =50.`$ (21) Effects on the quality measures of varying the parameters around these values are illustrated in Figs 9, 10, 11 and 13; in addition, Fig. 13 shows results of the analysis of artificial data (cf. Section 3), and Figs 14 and 15 illustrate properties of selected averaging kernels. Throughout, parameters not explicitly mentioned have their reference values. In the case of density inversion, we have found that the cross term is generally small, with little effect on the solution. 
In addition, the sound-speed difference between the Sun and calibrated solar models is typically small in the convection zone (e.g. Christensen-Dalsgaard & Berthomieu 1991), further reducing the effect of the sound-speed contribution in the density inversion. As a result, the effect of the value of $`\beta `$ on the properties of the inversion is very modest, although a value of $`\beta `$ in excess of 1 is required to suppress the remaining effect of the cross term. Hence, in the following we do not consider the effect of changes to $`\beta `$, or the behaviour of $`C`$. #### 4.3.1 Choice of $`\mu `$ As for sound-speed inversion, the trade-off parameter $`\mu `$ must be determined to ensure a trade-off between the solution error and resolution of the averaging kernel (Fig. 9, circles). Its behaviour is very similar to that in the sound-speed case (Fig. 3). As $`\mu `$ is reduced, the solution error increases but the resolution width cannot get smaller than a certain value, which, in the case of SOLA, is the target width $`\mathrm{\Delta }(r_0)`$. On the other hand, for larger values of $`\mu `$, there is a strong increase in the width, with a corresponding very small reduction in the solution error. The dependence of the trade-off on target radius $`r_0`$ is illustrated in Fig. 10. If $`\mu `$ is too large, one may get a misleading solution especially in the core. As for sound-speed inversion, the averaging-kernel resolution using MOLA is more sensitive to variations in $`\mu `$ than using SOLA. For larger values of $`\mu `$, beside the increase in the averaging-kernel width there is an increase in $`\chi `$ and $`\chi ^{\prime }`$ (cf. Fig. 11). The bump in $`\chi `$ and $`\chi ^{\prime }`$ around $`r_00.3`$ for a large $`\mu `$ is due to a “shoulder” in the averaging kernel that appears at these target radii, as illustrated in the case of SOLA in Fig. 14. The error correlation, illustrated in Fig. 
13, increases somewhat with increasing $`\mu `$; as already noted, the error correlation changes sign and is of much larger magnitude for density than for sound speed, probably as a result of the mass constraint (eq. 8). Examples of inferred solutions, for the artificial data defined in Section 3, are shown in Fig. 13, together with the true model difference and the difference inferred from error-free frequency differences. For a large value of $`\mu `$ the averaging kernel is not well localized and hence distorts the solution (bottom panel); this is true also for the solution based on error-free data. For smaller $`\mu `$ the error clearly increases; also, particularly in the SOLA case, the effect of the error correlation is evident. Thus the solution again deteriorates. This illustrates how the correlated errors can introduce features into the solution, showing the importance of limiting the error correlation. #### 4.3.2 Choice of $`\mathrm{\Delta }_\mathrm{A}`$ As for the sound-speed inversion, $`\mathrm{\Delta }_\mathrm{A}`$ ensures a trade-off between averaging-kernel resolution and the solution error in the SOLA technique (cf. Fig. 9). As before, if we choose $`\mathrm{\Delta }_\mathrm{A}`$ too small, the averaging kernel is poorly localized and it starts to oscillate; this is reflected in the misfit $`\chi `$ between the averaging kernel and the target (see Fig. 11) and illustrated in more detail, for selected target radii, in Fig. 15. On the other hand, for large $`\mathrm{\Delta }_\mathrm{A}`$, the solution is smoothed relative to the one for small $`\mathrm{\Delta }_\mathrm{A}`$ (due to the low resolution). The effect on the error correlation of changes in $`\mathrm{\Delta }_\mathrm{A}`$ is very modest, although for small $`\mathrm{\Delta }_\mathrm{A}`$ there is a tendency for oscillations (cf. Fig. 13), as was also seen for sound-speed inversion. 
### 4.4 Summary of the procedure As a convenience to the reader, we briefly summarize the sequence of steps which we have found to provide a reasonable determination of the trade-off parameters for the SOLA and MOLA methods for inversion for the corrections $`\delta c^2`$ and $`\delta \rho `$ to squared sound speed and density: * The parameter $`\mathrm{\Lambda }`$ is common to both inversion methods and to inversion for $`\delta c^2`$ and $`\delta \rho `$. Unlike the other parameters, it must directly reflect the properties of the data values, in terms of the ability to represent the surface term (cf. Section 4.1). It is also largely independent of the choice of the other parameters: $`\mu `$, $`\beta `$ and $`\mathrm{\Delta }_\mathrm{A}`$; thus its determination is a natural first step. It should be noted, however, that the choice of $`\mathrm{\Lambda }`$ has some effect on the error correlation (cf. Fig. 1), generally requiring that $`\mathrm{\Lambda }`$ be kept as small as possible. * The second step is the determination of $`\mu `$, whose value is the most critical to achieve a good solution. As described in Sections 4.2.1 and 4.3.1, it must be determined to ensure a trade-off between the solution error and resolution of the averaging kernel (Figs 3 and 9, circles) at representative target radii $`r_0`$ (see also Figs 4 and 10). In addition, we need to consider the broader properties of the averaging kernels as characterized by $`\chi `$ (SOLA) or $`\chi ^{\prime }`$ (MOLA) and the cross-talk quantified by $`C`$ (Figs 5 and 11), as well as the error correlation (Figs 7 and 13). * The next step is to find $`\beta `$, which is determined by the demand that the cross term be sufficiently strongly suppressed, without compromising the properties of the averaging kernels and errors. Its effect on the properties of density inversion is very modest. 
* Finally, the SOLA technique has an additional parameter: $`\mathrm{\Delta }_\mathrm{A}`$, which ensures a trade-off between averaging-kernel resolution and the solution error. $`\mathrm{\Delta }_\mathrm{A}`$ is typically decreased until the averaging kernels become poorly localized and start to oscillate (Figs 8 and 15), which is reflected in an increase in the misfit $`\chi `$ relative to the target. After this first determination of $`\mu `$, $`\beta `$ and possibly $`\mathrm{\Delta }_\mathrm{A}`$, we go back to step 2, now determining $`\mu `$ using the new values of $`\beta `$ and $`\mathrm{\Delta }_\mathrm{A}`$. The procedure obviously requires initial values of $`\beta `$ and (for SOLA) $`\mathrm{\Delta }_\mathrm{A}`$: we suggest $`\beta =10`$ and $`\mathrm{\Delta }_\mathrm{A}=0.06`$ or larger. Although the measures of quality of the inversion are essentially determined by the mode set (modes and errors), the determination of the parameters must also be such as to keep the solution error sufficiently small to see the variations in the relative sound-speed or density differences, which in the case of solar data and a suitable reference model could be as small as $`10^{-3}`$ and $`5\times 10^{-2}`$ respectively. Furthermore, to enable comparison of the solutions of the inversion of two different data sets, the solution errors should be similar. To obtain an impression of the quality of the solution and the significance of inferred features, we also strongly recommend analysis of artificial data for suitable test models, including comparison of the inferred solutions for the selected parameters with the true difference between the models and with the solution inferred for error-free data (see Figs 7 and 13). ## 5 Conclusion Appropriate choice of the parameters controlling inverse analyses of solar oscillation frequencies is required if reliable inferences are to be made of the structure of the solar interior. 
This choice must be based on the properties of the solution, as measured by the variance and correlation of the errors, by the resolution of the averaging kernels and by the influence of the cross-talk. We have considered a mode set representative of current inverse analyses and investigated the properties of the inversion, as well as the solution corresponding to a specific set of artificial data. By varying the parameters we have obtained what we regard as a reasonable choice of parameters (cf. eqs 20 and 21); this was verified by considering in some detail the sensitivity of the relevant measures of the quality of the inversion to changes in the parameters. The analysis also illustrated that an unfortunate choice of parameters may result in a misleading inference of the solar sound speed or density; furthermore, it became evident that the correlation between the errors in the solution at different target locations plays an important role, even for our optimal choice of parameters, and hence must be taken into account in the interpretation of the results (see also Howe & Thompson 1996). The meaning of the parameters is evidently closely related to the precise formulation of the inverse problem. For example, it would be possible to introduce weight functions in the integrals in equations (9) and (12), to give greater weight to specific aspects of the solution. The need for such refinements is suggested by the fact that the properties of the solution depend rather sensitively on the target location $`r_0`$. More generally, it is likely that the best choice of parameters may depend on the target location, further complicating the analysis and (particularly in the SOLA case) increasing the computational expense. The procedure adopted here is evidently somewhat ad hoc, although we have attempted a logical sequence in the order in which the parameters were chosen. A more systematic approach, making use of objective criteria, would in principle be desirable.
However, even in the considerably simpler case of inversion for a spherically symmetric rotation profile, characterized essentially just by the parameters $`\mu `$ and possibly $`\mathrm{\Delta }_\mathrm{A}`$, such objective determination of the parameters has so far met with little success in practice (see, however, Stepanov & Christensen-Dalsgaard 1996 and Hansen 1996). On the other hand, it is far from obvious that an objectively optimal solution to the inverse problem exists for a given data set: the best choice of parameters may well depend on the specific aspects of the solar interior that are being investigated. It is important, however, that the error and resolution properties of the solution be kept in mind in the interpretation of the results; indeed, the immediate availability of measures of these properties is a major advantage of linear inversion techniques such as those discussed here. ## Acknowledgments We are very grateful to M. J. Thompson and an anonymous referee for constructive comments on earlier versions of the manuscript. This work was supported in part by the Danish National Research Foundation through its establishment of the Theoretical Astrophysics Center.
# The central region of the Fornax cluster – III. Dwarf galaxies, globular clusters, and cD halo – are there interrelations? ## 1 Introduction The central regions of galaxy clusters are the places with the highest galaxy density in the universe. Dwarf ellipticals (dE) are the most strongly clustered type of galaxy in high-density environments (e.g. review by Ferguson & Binggeli ferg94 (1994), and references therein). Several striking characteristics are seen in the central regions of clusters: (1) most central galaxies possess extraordinarily rich globular cluster systems (GCS) (see Harris harr91a (1991), Richtler rich95 (1995) and references therein), although there are apparent counter-examples (see Table 14 in McLaughlin et al. mcla94b (1994)); (2) there often exists a cD galaxy in the center of clusters (e.g. Schombert scho88 (1988)); (3) different types of dwarf galaxies have different clustering properties (e.g. Vader & Sandage vade91 (1991)); (4) in some cases the faint end slope of the dwarf galaxy luminosity function (LF) seems to depend on the cluster-centric distance (e.g. in Coma: Lobo et al. lobo (1997)). The question arises whether these properties may be related through the accretion of dwarf galaxies. The answer to this question is most probably associated with the formation epoch of galaxy clusters. At that time, it is expected that galaxies were very gas-rich and that interactions between galaxies were more frequent. The number density of galaxies in the central region must have been larger at that epoch than today. Therefore, the initial population of dwarf galaxies played an important role. The favoured theoretical models of galaxy cluster formation predict a steep slope of the initial mass function towards the low-mass end (see a more detailed discussion and references in Sect. 2.1).
In contrast, the faint end slope of the observed LF in nearby groups and clusters is significantly flatter (see Ferguson & Binggeli ferg94 (1994), Trentham tren98 (1998)). One possibility that would explain this discrepancy is the accretion and dissolution of dwarf galaxies in cluster centers. Several scenarios are conceivable by which the infall of gas-poor as well as gas-rich dwarfs into a dense cluster-center environment could build up a rich GCS and a cD halo (see Sect. 5). Support for such a scenario from the observational side comes from López-Cruz et al. (lope (1997)), who compared the properties of clusters with and without a central luminous cD galaxy. They found that clusters without a prominent cD galaxy tend to have a steep LF at the faint end and a high fraction of late-type galaxies, and thus seem to be less evolved than clusters with pronounced cD galaxies and a relatively flat LF at the faint end. They explain this finding by the disruption of dwarf galaxies. In this study we focus on the properties of the relatively poor, compact, and evolved Fornax cluster, one of the best studied galaxy clusters in the local universe (e.g. Ferguson ferg89 (1989), Ferguson & Sandage ferg88 (1988)). Other nearby clusters are believed to be in different evolutionary states: Virgo (e.g. Sandage et al. sand85 (1985), Ferguson & Sandage ferg91 (1991)) is dominated by late-type galaxies and is only half as dense in the center (number of galaxies per volume) as Fornax, whereas Centaurus (Jerjen & Dressler jerj97b (1997), Stein et al. stei (1997)) and Coma (e.g. Secker & Harris seck97 (1997)) show substructures, indicative of still ongoing dynamical evolution, such as cluster-cluster or cluster-group merging.
In the first two papers of this series (1998a, 1998b; hereafter Paper I and Paper II) we investigated the distribution of galaxies in central Fornax fields. We found two compact objects that belong to the Fornax cluster and might be candidates for isolated nuclei of stripped dwarf ellipticals. However, very few new members were found compared to the study of Ferguson (ferg89 (1989)). Thus, the spatial distribution and luminosity function of dwarf galaxies in Fornax (Ferguson & Sandage ferg88 (1988)) were confirmed. In this paper we discuss whether the infall of dwarf galaxies into the cluster center may play an important role in the enrichment of the central globular cluster system (especially the increase of the globular cluster specific frequency $`S_N`$) as well as in the formation of the extended cD halo. The following section compiles the necessary background for our analysis. ## 2 Dwarf galaxies in clusters ### 2.1 Theoretical background on the evolution of dwarf galaxies in clusters In their review of dwarf elliptical galaxies, Ferguson & Binggeli (ferg94 (1994)) summarized the formation and evolutionary scenarios that are predicted by theoretical models. It is generally accepted that galaxy formation started from gaseous conditions in the early universe, followed by the collapse of primordial density fluctuations, cooling of the gas and subsequent star formation (e.g. White & Frenk whit91 (1991), Blanchard et al. blan (1992), Cole et al. cole94 (1994), Kauffmann et al. kauf93 (1993), Lacey et al. lace (1993)). In cold dark matter (CDM) dominated models the formation of low-mass galaxies is favored, because for dwarf galaxy halos collapsing at $`z\approx 3`$–10 the cooling time is short compared to the free-fall time; thus cooling should be very efficient, and accordingly many dwarfs will be formed. A steep slope, $`\alpha =-2`$, of the initial mass function ($`N(M)\mathrm{d}M\propto M^\alpha `$) is predicted (e.g. Blanchard et al. blan (1992)).
In contrast, the faint end slopes of the observed luminosity functions in nearby clusters are around $`\alpha =-1.3\pm 0.4`$ (see Ferguson & Binggeli ferg94 (1994)). This contradiction is the so-called “overcooling problem” (e.g. Cole cole91 (1991)). If the CDM model prediction is correct, some mechanisms must have been active that either counteracted the cooling during the collapse of the dwarfs or destroyed the numerous dwarfs after their formation. Plausible mechanisms that involve internal as well as external agents are summarized in the review by Ferguson & Binggeli (ferg94 (1994)). In the following, we focus our attention on the possibility that many dwarf galaxies have merged with the central galaxy. For a CDM power spectrum in an $`\mathrm{\Omega }=1`$ cosmology the epoch of dwarf galaxy formation is believed to be also the epoch of rapid merging. Kauffmann et al. (kauf94 (1994)) included the merging of satellite galaxies in their CDM models and found that most of the observational data can be reproduced when adopting a merging timescale that is a tenth of the tidal friction timescale, and when star formation is suppressed in low-circular-velocity halos until they are accreted into larger systems. Further, efficient merging at all epochs results in a flattening of the faint end slope of the LF compared to the initially predicted value of $`\alpha =-2`$. ### 2.2 Dwarf galaxies and cD halo Several authors have suggested that the tidal disruption (total dissolution of the galaxy light) of galaxies in cluster centers, as well as tidal stripping (only the outer parts are affected, a remnant survives), might be related to the formation of cD halos (see references below). The epoch at which this happened is still being debated. Most authors assume that the stripping processes take place after the cluster collapse (e.g. Gallagher & Ostriker gall72 (1972), Richstone richs76 (1976), Ostriker & Hausman ostri (1977), Richstone & Malumuth richs83 (1983)).
In contrast, Merritt (merr (1984)) explained the general appearance of cD halos as the result of dynamical processes during the cluster collapse. In his scenario the accumulation of slowly-moving galaxies in the cluster core via dynamical friction only plays an important role for groups or clusters with a small velocity dispersion, $`\sigma _v\lesssim 500`$ km s<sup>-1</sup> (Fornax: $`\sigma _v\approx 360`$ km s<sup>-1</sup>). White (whit87 (1987)) argued that, in the case of tidal disruption and stripping, the distribution of stripped and disrupted material (diffuse light, dark matter, GCs) should be more concentrated to the center than the relaxed galaxy distribution, because galaxies closer to the center are more affected by disruption processes than galaxies further out. In the case of Merritt’s model, galaxies formed before the collapse, stripping occurred during the collapse, and finally the stripped material is distributed in the same way as the galaxies through collective relaxation. Furthermore, it is interesting to note that a large amount of the intracluster gas (seen as the X-ray halo) might also have had its origin in dwarf galaxies, which could have expelled their gas by supernova-driven winds or have had their gas stripped off (Trentham tren94 (1994), Nath & Chiba nath (1995)). In the Virgo cluster, for example, Okazaki et al. (okaz93 (1993)) estimated that the amount of gas expelled from the E and S0 galaxies is not adequate to account for the total gas mass in the cluster. Mac Low & Ferrara (macl (1999)) calculated that low-mass dwarf galaxies can easily blow away metals from supernovae, which might enrich the halo gas. ## 3 Properties of dwarf galaxies, GCs, and cD halo in the Fornax cluster center The center of the Fornax cluster hosts the central galaxy NGC 1399 with an extraordinarily rich globular cluster system and an extended cD halo as well as a halo of X-ray emitting gas.
In the following we give a short review of the properties of the different components that have to be considered in the picture of a common evolution. Table 1 summarizes those properties: the slopes of the surface density profiles, the velocity dispersions, and the ranges of metallicities. Furthermore, the absolute $`V`$ luminosities and estimated masses are given, if available. ### 3.1 Dwarf galaxies in the Fornax cluster The most complete investigation of the Fornax dwarf galaxies was done by Ferguson (ferg89 (1989), Fornax Cluster Catalog (FCC)) as well as by Davies et al. (davie88 (1988), and following papers: Irwin et al. irwi (1990), Evans et al. evan (1990)). As we have shown in Paper I, the morphological classification of Fornax members by Ferguson (ferg89 (1989)) is very reliable, and nearly no dE has been missed within the survey limits as far as we can judge from the comparison with our sample fields. Thus, the following properties of the Fornax dwarf galaxies are mainly based on the FCC plus the additional new members as presented in Paper I. The spatial distribution of dEs in Fornax can be represented by a King profile with a core radius of $`0\stackrel{}{.}67\pm 0\stackrel{}{.}1`$ and a center located about $`25\mathrm{}`$ west of NGC 1399 (Ferguson ferg89 (1989)). In order to compare their surface density profile with those of the GCS and the cD halo light, we fitted power laws to the radial distribution of the dEs and dS0s in the extended FCC, adopting NGC 1399 as the center. For that we counted galaxies brighter than $`B_T=19`$ mag in seven equidistant rings from 0 to $`3\mathrm{°}`$. We determined the slopes of the density profiles in the inner ($`r<0\stackrel{}{.}8`$) as well as in the outer ($`0\stackrel{}{.}8<r<3\mathrm{°}`$) part. The dividing radius of $`0\stackrel{}{.}8`$ is about the limit out to which the cD halo light and the gas envelope have been measured. The results are summarized in Table 2.
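The ring-counting and slope-fitting procedure just described can be sketched as follows (a minimal illustration, not the code actually used; the bin layout and the convention $`\mathrm{\Sigma }(r)\propto r^{-\alpha }`$ follow the text, while the mock counts are invented for the demonstration):

```python
import numpy as np

def surface_density_slope(r_out, counts):
    """Least-squares estimate of alpha in Sigma(r) ~ r^(-alpha) from
    galaxy counts in concentric annuli around a chosen center.

    r_out  : outer radii of the annuli (e.g. degrees from NGC 1399)
    counts : number of galaxies per annulus
    """
    r_out = np.asarray(r_out, dtype=float)
    r_in = np.concatenate(([0.0], r_out[:-1]))
    area = np.pi * (r_out**2 - r_in**2)        # annulus areas
    r_mid = 0.5 * (r_in + r_out)               # representative radii
    sigma = np.asarray(counts, dtype=float) / area
    # straight-line fit of log(sigma) vs log(r); slope is -alpha
    slope, _ = np.polyfit(np.log10(r_mid), np.log10(sigma), 1)
    return -slope

# seven equidistant rings from 0 to 3 degrees, as in the text
r_out = np.linspace(3.0 / 7.0, 3.0, 7)
r_in = np.concatenate(([0.0], r_out[:-1]))
# mock counts drawn from Sigma ~ r^(-1.5), evaluated at the ring midpoints
mock_counts = (0.5 * (r_in + r_out)) ** -1.5 * np.pi * (r_out**2 - r_in**2)
alpha = surface_density_slope(r_out, mock_counts)  # recovers 1.5
```

For real data the finite annulus width biases the midpoint approximation in the innermost ring; the mock counts here are constructed at the ring midpoints, so the input slope is recovered exactly.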
In addition, we also give the mean slopes, when fitting a power law to the total profile, and the fitted values for the giant galaxies. The nucleated dwarf galaxies have the steepest slope and are more concentrated towards the central galaxy than the non-nucleated dE/dS0s. The luminosity function (LF) of the Fornax dwarf galaxies was studied by Ferguson & Sandage (ferg88 (1988)) in a region with radius smaller than $`2\stackrel{}{.}4`$, centered on NGC 1399. They found that the nucleated dwarf ellipticals (dE,Ns) as well as the dwarf lenticular (dS0) galaxies are brighter than the non-nucleated dEs. Further, the faint end slope of the dE/dS0 LF, fitted by a Schechter (sche (1976)) function, is quite flat ($`\alpha =-1.08\pm 0.10`$) compared to other clusters like Virgo ($`\alpha =-1.31\pm 0.05`$) or Centaurus ($`\alpha =-1.68\pm 0.56`$, Jerjen & Tammann jerj97c (1997)). Table 3 summarizes the results for the faint end slopes of Schechter function fits to different subsamples of the extended FCC. Colors and metallicities of dwarf galaxies in Fornax have been studied by photometric as well as by spectroscopic means (e.g. Caldwell & Bothun cald87b (1987), Bothun et al. both (1991)). Spectroscopically determined metallicities seem to be consistent with the picture that the bluer dwarfs are the more metal-poor ones. The metallicity range for 10 bright dE,Ns is $`-1.5<[Fe/H]<-0.8`$ dex (Held & Mould held (1994)). The metallicities derived from Washington photometric indices for 15 LSB dwarfs are of the same order (Cellone et al. cell94 (1994)). Concerning ages, all investigated dwarfs possess an old stellar population, some of them also a contribution of intermediate-age stars, and only a few have signs of recent or ongoing star formation (Held & Mould held (1994), Cellone & Forte cell96 (1996)). It seems that the Fornax dEs share the same characteristics as the Local Group dSph population (e.g. review by Grebel greb97 (1997)).
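The Schechter fits quoted above can be reproduced numerically from the standard form of the function expressed in absolute magnitudes (a sketch; the normalization $`\varphi ^{}`$ and the characteristic magnitude $`M^{}`$ below are arbitrary illustrative values, not the fitted Fornax parameters):

```python
import math

def schechter_mag(M, M_star, alpha, phi_star=1.0):
    """Schechter (1976) function in absolute magnitudes:
    phi(M) = 0.4 ln(10) phi* x^(alpha + 1) exp(-x),  x = 10^(-0.4 (M - M_star)).
    """
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1) * math.exp(-x)

# well below M*, log10(phi) changes by -0.4 (alpha + 1) per magnitude
alpha = -1.08  # faint-end slope found for the Fornax dE/dS0 LF
faint_slope = (math.log10(schechter_mag(-9.0, -16.0, alpha))
               - math.log10(schechter_mag(-10.0, -16.0, alpha)))
```

For $`\alpha <-1`$ the counts rise towards faint magnitudes; a value like the Fornax $`\alpha =-1.08`$ gives an almost flat faint end compared to the steeper Centaurus slope.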
Radial velocity measurements of 43 Fornax dwarfs ($`18>B_t>15`$ mag) by Drinkwater et al. (drin97 (1997)) result in a velocity dispersion of $`\sigma _v=490`$ km s<sup>-1</sup>, significantly larger than that of 62 giants ($`B_t<15`$ mag), $`\sigma _v=310`$ km s<sup>-1</sup>. According to the authors, this difference cannot be explained by measurement errors. ### 3.2 The central globular cluster system The globular cluster system of NGC 1399 is one of the best investigated GCSs outside the Local Group. The total number of GCs is about $`N_{\mathrm{GC}}=5800\pm 500`$ (Kissler-Patig et al. kiss97a (1997), Grillmair et al. gril98 (1998)) within a radius of 10 arcmin from the galaxy center. This is about 10 times the number of GCs in the other Fornax ellipticals, $`300\pm 60<N_{tot}<700\pm 100`$. Adopting a distance of 18.2 Mpc or $`(m-M)_0=31.3`$ mag to NGC 1399 (Kohle et al. kohl (1996), recalibrated with new distances of Galactic GCs, Gratton et al. grat (1997)), the absolute magnitude of NGC 1399 is $`M_V=-21.75`$ mag when taking the apparent magnitude values from the literature (Faber et al. fabe89 (1989), RC3: de Vaucouleurs et al. devo91 (1991)). This corresponds to a specific frequency of $`S_N=11.6\pm 2.0`$. If the light of the cD halo within 10 arcmin is taken into account (see Sect. 3.3), $`S_N`$ is reduced to $`6.8\pm 2.0`$ ($`M_{V,\mathrm{tot}}=-22.33`$ mag). However, distinguishing a cD halo and a bulge component in the galaxy light, $`S_N`$ for the cD halo would be about $`10\pm 1`$ (assuming $`S_N=3.2`$ for the bulge, see Sect. 9), the average value of the other early-type Fornax galaxies (Kissler-Patig et al. kiss97a (1997)). Thus, the building up of the GCS of the cD halo component must have been very efficient. The color distribution of the GCs around NGC 1399 is very broad compared to most other GCSs in Fornax ellipticals and can only be explained by a multimodal or perhaps just a bimodal GC population (e.g. Ostrov et al.
ostro (1993), Kissler-Patig et al. kiss97a (1997), and Forbes et al. forb97 (1997)). Spectroscopic analysis of 18 GCs by Kissler-Patig et al. (kiss98a (1998)) shows a metallicity range between $`-1.6`$ and $`-0.3`$ dex (with possible peaks at $`-1.3`$ and $`-0.6`$ dex), and two exceptional GCs at about 0.2 dex, located in the red (metal rich) tail of the color distributions. The comparison of the line indices with theoretical evolutionary models suggests that most of the GCs are at least 8 Gyr old. If one fits the GC color distribution with two Gaussians, the number ratio of metal rich (red) to metal poor (blue) GCs is about 1:1 (Forbes et al. forb97 (1997)). The radial extension of the GCS around NGC 1399 can be traced out to about 10 arcmin ($`53`$ kpc). The slope of the GC surface density profile, $`\rho \propto r^{-\alpha }`$, is about $`\alpha =1.5\pm 0.2`$, when taking the average of the published values. Forbes et al. (forb97 (1997)) found that the distribution of the blue GC subpopulation is even flatter ($`\alpha \approx 1.0\pm 0.2`$), whereas the red GCs are more centrally concentrated ($`\alpha \approx 1.7\pm 0.2`$), comparable to the slope of the galaxy light ($`\alpha =1.6\pm 0.1`$). See Fig. 2 for a schematic overview. Radial velocities of 74 GCs around NGC 1399 have been measured (Kissler-Patig et al. kiss99a (1999), Minniti et al. minn98 (1998), Kissler-Patig et al. kiss98a (1998)). The velocity dispersion for the whole sample is $`\sigma _v=373\pm 35`$ km s<sup>-1</sup>. No differences can be seen between the red and blue subpopulations. However, there exists a radial dependence of the velocity dispersion in the sense that $`\sigma _v`$ rises from $`263\pm 92`$ to $`408\pm 107`$ km s<sup>-1</sup> between 2 and 8 arcmin (Kissler-Patig et al. kiss99a (1999)).
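The specific frequencies quoted in this section follow from the standard definition $`S_N=N_{\mathrm{GC}}\times 10^{0.4(M_V+15)}`$ (Harris & van den Bergh 1981), i.e. the number of GCs normalized to a host luminosity of $`M_V=-15`$ mag; a minimal sketch using the NGC 1399 numbers from the text:

```python
def specific_frequency(n_gc, M_V):
    """GC specific frequency: S_N = N_GC * 10^(0.4 (M_V + 15)),
    the number of GCs normalized to a host luminosity of M_V = -15.
    """
    return n_gc * 10.0 ** (0.4 * (M_V + 15.0))

# NGC 1399, N_GC = 5800 within 10 arcmin (values from the text)
s_n_bulge = specific_frequency(5800, -21.75)  # ~ 11.6 for the bulge light alone
s_n_total = specific_frequency(5800, -22.33)  # ~ 6.8 including the cD halo light
```

Including the additional cD halo light makes the host more luminous and therefore lowers $`S_N`$ at fixed GC number, which is exactly the effect described above.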
### 3.3 cD halo and bulge The galaxy light of NGC 1399 follows an extended cD profile (Schombert scho86 (1986), Killeen & Bicknell kill (1988)) out to a radial distance of about 34 arcmin from the galaxy center (at the $`\mathrm{\Sigma }_B=28`$ mag arcsec<sup>-2</sup> isophotal surface brightness level). This is about 180 kpc at the Fornax distance (18.2 Mpc) and comparable to the extent of the X-ray envelope (Ikebe et al. ikeb (1996), Jones et al. jone97 (1997)). The determination of the stellar population parameters of the outer cD halo, such as accurate photometric colors, metallicity or velocity dispersion, is very difficult due to the low surface brightness. Long-slit spectra have been taken for the stellar bulge population within a radius of about $`1\stackrel{}{.}5`$ from the center of NGC 1399 (Franx et al. fran (1989), Bicknell et al. bick (1989)). The velocity dispersion is about 200 km s<sup>-1</sup> at $`1\stackrel{}{.}5`$ and rises within the central 10 arcsec to a central value of about 360 km s<sup>-1</sup>. Besides the GCS, a useful tracer for the stellar population at larger radii is the population of planetary nebulae (PNe). Arnaboldi et al. (arna (1994)) studied the kinematics of 37 PNe out to a radius of $`4\stackrel{}{.}5`$. They found an increase in the velocity dispersion with increasing radius, from 269 km s<sup>-1</sup> for $`r<2\stackrel{}{.}6`$ to 405 km s<sup>-1</sup> for $`2\stackrel{}{.}6<r<4\stackrel{}{.}5`$ (18 of the 37 PNe). #### 3.3.1 Luminosity and surface brightness profile In this subsection we divide the light profile of NGC 1399 into a cD halo and a bulge component in order to compare their characteristics with those of the GCS and the dwarf galaxy population. We determined the absolute luminosity of the cD halo in the following way: in the $`\mu `$–$`r^{1/4}`$ plot (Fig. 1, upper panels) one can see that the SB profile of NGC 1399 (determined from the NE CCD field F2) changes its slope at about 50 arcsec.
We fitted the total profile by the sum of two de Vaucouleurs laws: $`\mu (r)=ZP_{\mathrm{cal}}-2.5\mathrm{log}[I_{\mathrm{gal}}^0\mathrm{exp}(-(r/\alpha _{\mathrm{gal}})^{1/4})+I_{\mathrm{cD}}^0\mathrm{exp}(-(r/\alpha _{\mathrm{cD}})^{1/4})]`$ The steeper, more concentrated profile represents the luminosity of the bulge without the cD halo, whereas the flatter, more extended profile contains the light of the cD halo. The total luminosity of each component is $`I_{\mathrm{gal},\mathrm{cD}}^{\mathrm{tot}}=\int 2\pi rI_{\mathrm{gal},\mathrm{cD}}(r)\mathrm{d}r`$. We restricted our calculations to within a radius of 10 arcmin, where the number of detected GCs fades into the background. The dashed lines in Fig. 1 represent the “best” fit (data points inside a $`1\stackrel{}{.}5`$ radius have been omitted). The dotted lines give the ranges of possible fits. The surface density slopes of the two profiles are $`\alpha =2.0\pm 0.2`$ and $`\alpha =1.0\pm 0.2`$, respectively, where $`\mathrm{\Sigma }(r)\propto r^{-\alpha }`$. The slope of the combined profile is $`\alpha =1.6\pm 0.1`$ (see also Fig. 2). In the literature one finds an apparent magnitude for NGC 1399 of $`V=9.55`$ mag (Faber et al. fabe89 (1989), RC3: de Vaucouleurs et al. devo91 (1991), adopting a mean $`(B-V)`$ color of 1.0 mag, Goudfrooij et al. goud94b (1994)). This magnitude is derived from an aperture growth curve extrapolation with a maximum aperture of diameter $`1\stackrel{}{.}5`$ (Burstein et al. burs84b (1984)). A 1-component fit of an $`R^{1/4}`$ law within $`1\stackrel{}{.}5`$ (the largest aperture in Burstein et al. burs84b (1984)) is shown in Fig. 1 (uppermost panel). Adopting an absolute magnitude of $`M_V=-21.75`$ mag for the integrated light under this profile, the total luminosity of the bulge light from the 2-component fit (middle panel) is $`M_{V,\mathrm{bulge}}=-21.50\pm 0.20`$ mag (about 80% of the luminosity given in the literature).
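The two-component decomposition and the luminosity integral can be illustrated numerically as follows (a sketch only: the amplitudes and scale radii are invented, and the check compares the numerical integral against the analytic total luminosity of a single de Vaucouleurs profile, $`L=8\pi \mathrm{\Gamma }(8)I^0\alpha ^2=40320\pi I^0\alpha ^2`$):

```python
import numpy as np

def mu_two_component(r, zp, I0_gal, a_gal, I0_cd, a_cd):
    """Surface brightness of the sum of two de Vaucouleurs components:
    mu(r) = ZP - 2.5 log10[ I0_gal exp(-(r/a_gal)^(1/4))
                           + I0_cd exp(-(r/a_cd)^(1/4)) ]
    """
    I = I0_gal * np.exp(-(r / a_gal) ** 0.25) + I0_cd * np.exp(-(r / a_cd) ** 0.25)
    return zp - 2.5 * np.log10(I)

def total_luminosity(I0, a, r_max, n=200_000):
    """L = integral of 2 pi r I(r) dr for one component (trapezoidal rule)."""
    r = np.linspace(0.0, r_max, n)
    y = 2.0 * np.pi * r * I0 * np.exp(-(r / a) ** 0.25)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))

# sanity check: for r_max -> infinity a single de Vaucouleurs component gives
# L = 8 pi Gamma(8) I0 a^2 = 40320 pi I0 a^2
L = total_luminosity(1.0, 1.0, 2.0e5)
```

In the paper the integration is truncated at 10 arcmin, so the quoted component luminosities are slightly smaller than the formal totals of the fitted profiles.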
The luminosity of the cD halo is $`M_{V,\mathrm{cD}}=-21.65\pm 0.2`$ mag and that of the whole system within 10 arcmin is $`M_{V,\mathrm{tot}}=-22.33\pm 0.2`$ mag. Another check of the correct proportion of the luminosities of the different components can be made by comparing the integrated flux within an aperture of $`1\stackrel{}{.}5`$ with the total flux within 10 arcmin. Adopting $`V=10.30`$ mag for the $`1\stackrel{}{.}5`$ aperture (Burstein et al. burs84b (1984)), we derive $`M_{V,\mathrm{tot}}=-22.27\pm 0.2`$ mag for the whole system, in excellent agreement with the value given above. Note that the total luminosity of the cD halo is about 180 times the luminosity of a typical dwarf galaxy with $`M_V=-16.0`$ mag, or 2.2 times the total luminosity of the present dEs and dS0s in Fornax. ## 4 Comparison of corresponding properties In the previous section we presented the properties of the different components of NGC 1399 and the galaxy population in the center of the cluster. In Figure 2 we give a schematic overview of the surface density/brightness (SD/SB) profiles of the different components and their extensions. The profiles are arbitrarily shifted along the ordinate. The surface densities of GCs and galaxies are number densities, whereas the profile of the galaxy light is a surface brightness profile. Nevertheless, this is comparable to the number density profile, if one assumes similar stellar populations and, accordingly, similar $`M/L_V`$ ratios. The surface density of the X-ray gas is again a particle (number) density, $`n_{\mathrm{gas}}\propto r^{-\gamma }`$. The gas density profile is derived from the surface brightness distribution of the X-ray gas, $`S(r)\propto r^{-\tau }`$, under the assumption of isothermal conditions ($`\gamma =(\tau +1)/2`$) for radii larger than about 10 kpc (Jones et al. jone97 (1997)).
The plot shows that the profile slopes of the blue GC population and the cD halo light are strikingly similar, whereas the distribution of dE/dS0 galaxies is somewhat flatter in the central $`0\stackrel{}{.}7`$ and slightly steeper outside. Interestingly, the surface density profile of the X-ray gas is also very similar in slope and extension to those of the cD halo light and the blue GCs. In contrast, the profile of the bulge light of NGC 1399 is significantly steeper than all other profiles. The same behaviour can be seen in the velocity dispersion. It is comparable for the dwarf galaxies and the GCS, whereas the stars in the stellar bulge have a lower $`\sigma _v`$ (see also Minniti et al. minn98 (1998), Kissler-Patig et al. kiss99a (1999)). This agreement in the morphological and dynamical properties of the GCs, the cD halo light, and perhaps also the gas particles might suggest that these components share a common history (or origin). In the next section we describe some scenarios for what might have happened when dwarf galaxies interacted with the central cluster galaxy. ## 5 Disruption, accretion and stripping of dwarf galaxies What are the possible consequences when dwarf galaxies of different types interact with the central galaxy, especially with respect to the formation of a cD halo and a rich GCS?
We make a distinction between two main cases: (1) the infall of gas-poor dwarfs, for example dwarf ellipticals, where only the existing stellar component is involved in the interaction process, and (2) the infall of gas-rich dwarfs, where the interaction of the gas has to be considered and might play an important role in the formation of new stellar populations. A further sub-division of these cases is: (a) the dwarf galaxy is totally dissolved in the interaction process; (b) only parts of the dwarf galaxy (for example gas and/or globular clusters) are stripped during the passage through the central cluster region; (c) the dwarf galaxy neither loses gas nor stars nor clusters to the cluster center, but might change its morphological shape because of tidal interactions (for example, becoming more compact or splitting into two). In the next subsections we discuss the possible consequences of the different cases. ### 5.1 Gas-poor dwarfs (1a): in this case the stellar population of the dwarf galaxy will be disrupted into tidal tails and the stellar light will be smeared out in the potential well of the cluster center. Most affected by this process are the faintest dEs (or dSphs, Thompson & Gregory thom (1993)). In clusters with a low velocity dispersion, or at the bottom of a local potential well in a rich cluster (Zabludoff et al. zabl (1990)), the light of several dissolved dwarfs may form an extended, diffuse cD halo. Existing GCs of the dwarfs will survive and contribute to the central GCS. In the Local Group, an example of this scenario may be the Sagittarius dSph, which is dissolving into our Galaxy, adding 4 new GCs to the GCS of the Milky Way (Da Costa & Armandroff daco95 (1995)). However, only a few dwarf galaxies with a very rich GCS compared to their luminosity are known (Miller et al. mill (1998), Durrell et al. durr (1996)). In Sect. 6 we estimate under which conditions the accretion of gas-poor dwarfs and their GCSs can increase $`S_N`$ of a central GCS.
Finally, the nuclei of dE,Ns can survive the dissolution of their parent galaxy and may appear as GCs (Zinnecker et al. zinn (1988), Bassino et al. bass (1994)). The nuclear magnitudes of all Virgo dE,Ns (Binggeli & Cameron bing91 (1991)), for example, fall on the magnitude – surface brightness sequence defined by the GCs (e.g. Binggeli bing94 (1994)). (1b): as in case 1a, the stripped stars and GCs will be distributed around the central galaxy. In this case the question arises of how large the number of stripped GCs is compared to the luminosity of the stripped stellar light. If GCs could be stripped from regions with a high local $`S_N`$, this would also increase $`S_N`$ of the central GCS. According to model calculations by Muzzio et al. (muzz84 (1984), see also the review by Muzzio muzz87b (1987)), the tidal accretion of GCs and stars can be an important process in the dynamical evolution of GCSs in galaxy clusters. In some galaxies the GCS is more extended than the underlying stellar light, with the consequence that the local $`S_N`$ increases with galactocentric distance; for example, NGC 4472 has a global $`S_N`$ of 5.5 and a local $`S_N`$ larger than 30 at 90 kpc (McLaughlin et al. mcla94b (1994)). Forbes et al. (forb97 (1997)) and Kissler-Patig et al. (kiss99a (1999)) suggest that the stripping of the outermost GCs and stars from such a galaxy naturally increases the $`S_N`$ of the central GCS. It would be interesting to investigate whether this is also true for the GCSs of dwarf galaxies, and how the tidal stripping process changes the shape of the remaining galaxy. Kroupa (krou (1997)) simulated the interaction of a spherical low-mass galaxy with a massive galactic halo and found that the model remnants share the properties of dwarf spheroidals. On the other hand, M32 may be an example of a tidally compressed remnant whose GCs have been stripped (e.g. Faber fabe73 (1973), Cepa & Beckman cepa (1988)).
(1c): in this case the dwarf galaxy does not contribute to the formation of the cD halo and central GCS. However, as in 1b, one might speculate about the change of the morphological shape after a passage of the galaxy through the cluster center. Note that, except in their nuclei, the metallicity of GCs in dEs, as well as that of the bulk of their stars, is very low ($`-2.5<[Fe/H]<-1.0`$ dex, see the review on Local Group dwarfs by Hodge hodg94b (1994)). Therefore, GCs stripped from these galaxies will only contribute to the metal-poor population of the central GCS. Accordingly, the cD halo should have quite a blue color. ### 5.2 Gas-rich dwarfs (2a): for the stellar population and GCs see 1a. The infalling gas will experience the thermal pressure of the hot medium in the central galaxy. The densities can be high enough that star formation occurs and the formation of many dense and compact star clusters is possible (e.g. Ferland et al. ferl (1994)). As mentioned in Sect. 3.4, stripped gas that was not converted into stars may contribute to the intracluster X-ray gas in the cluster center (see Nath & Chiba nath (1995)). The open questions here are how many “new” star clusters will survive the further cluster-center evolution, and how large the number of surviving clusters is compared to the light of newly formed stars, which contributes to the total light of the central galaxy and/or cD halo. In other words, it is unclear whether the formation of new GCs is efficient enough to increase $`S_N`$ of the central GCS. Some constraints/estimates that can be made from observations of very young star clusters in merging galaxies and starburst galaxies are presented in Sect. 7. (2b): the stripping of a gas-rich galaxy mainly affects the gas component, which may form new stars and clusters as mentioned in case 2a.
Nulsen (nuls (1982), see also Ferguson & Binggeli ferg94 (1994)) estimated a typical mass loss rate from infalling dwarfs of $`\dot{M}=7.4\times 10^2\,M_{\odot }\,\mathrm{yr}^{-1}\,n\,r_{\mathrm{kpc}}^2\,\sigma _{\mathrm{km}\,\mathrm{s}^{-1}}`$, where $`n`$ is the gas density in the cluster, $`r_{\mathrm{kpc}}`$ the dwarf galaxy radius in kpc, and $`\sigma _{\mathrm{km}\,\mathrm{s}^{-1}}`$ the velocity dispersion of the cluster. The stripping time scale for a typical dwarf irregular (dI) with $`r=4`$ kpc, gas mass $`M_{\mathrm{gas}}=10^8\,M_{\odot }`$, and $`\sigma =400`$ km s<sup>-1</sup> (Fornax) is $`t_S=0.5`$ Gyr when adopting $`n=8.2\times 10^{-8}`$ cm<sup>-3</sup> for the central gas density of the Fornax cluster (Ikebe et al. ikeb (1996)). The fact that dEs are more concentrated towards cluster centers than dIs is interpreted as a result of this stripping scenario (dEs being the remnants of stripped dIs, e.g. Lin & Faber lin (1983), Kormendy korm (1985)). Furthermore, non-nucleated dEs have a quite low GC $`S_N`$, in contrast to dE,Ns, for which $`S_N`$ increases with decreasing luminosity (Miller et al. mill (1998)). One might speculate that not only gas but also GCs have been stripped, whereas the dE,Ns have had a different evolutionary history. (2c): the passage of a gas-rich dwarf through the intergalactic gas in a cluster might trigger star formation in the dwarf galaxy itself (Silk et al. silk (1987)) and might enrich its GCS (see Sect. 7). Ferguson & Binggeli (ferg94 (1994)) suggest that a galaxy falling into the cluster for the first time in the present epoch encounters such high densities that stars can form long before ram pressure becomes efficient. A further close passage can then result in case 2a or 2b. In all these cases there are no restrictions on the metallicity of newly formed GCs and stars. Their metallicity depends on the gas enrichment history of the accreted or stripped galaxies themselves.
Of course, a lower limit is the metallicity of the interstellar matter of the dwarf galaxy. The metallicity of the young populations in the Local Group dSphs and irregulars can serve as an estimate of this lower limit. It seems that the secondary stellar population is more metal-rich than $`[Fe/H]=-1.2`$ dex, in some cases even up to solar values (e.g. Grebel greb97 (1997)). ## 6 Enhancement of $`S_N`$ by accretion of gas-free dwarf galaxies The possibility that the accretion of gas-poor dwarfs can increase $`S_N`$ of the central GCS requires that the $`S_N`$ values of a large number of the accreted dwarfs themselves were very high. Only a few examples of dwarf galaxies with very high GC frequencies are known. In the Local Group, the Fornax and Sagittarius dwarf spheroidals have extraordinarily high $`S_N`$ values: $`29\pm 6`$ and $`25\pm 9`$ respectively (Durrell et al. durr (1996)). Their absolute luminosities are about $`M_V=-13`$ mag. Durrell et al. (durr (1996)) found in the Virgo cluster two dE,Ns fainter than $`M_V=-15.5`$ mag that have GC specific frequencies of the order of $`S_N=14\pm 8`$. Recently Miller et al. (mill (1998)) found that only dE,Ns can possess high-$`S_N`$ GCSs, whereas dEs have “normal” values. In this respect, it is worth noting that the nuclei of dE,Ns could be merged globular clusters, and thus the $`S_N`$ of these galaxies might have been even higher in the past. It seems that all of the high-$`S_N`$ dwarf galaxies belong to the faint luminosity end of the dwarf galaxy population. Thus, their total numbers of GCs are very small, $`4<N_{tot}<20`$, and this might reflect a stochastic effect, whereby a low-mass dwarf may produce zero, one, two, or several clusters. We tested with Monte Carlo (MC) simulations the possibility that 2500 GCs were captured by gas-poor dwarf galaxy accretion in the center of the Fornax cluster. The number 2500 corresponds roughly to the blue GC subpopulation.
We assumed that galaxies with absolute luminosities in the range $`-18.0<M_B<-8.5`$ mag have been accreted. Each galaxy contains GCs according to its luminosity. For galaxies with $`-18.0<M_B<-15.5`$ we adopted a mean GC specific frequency of $`S_N=4.5`$ (Durrell et al. durr (1996)). In the fainter magnitude bins, the number of GCs was chosen randomly in such a way that the ranges of observed $`S_N`$ (Miller et al. mill (1998)) were reproduced. In Table 4 the initial conditions for the dwarf GCSs are summarized. We simulated three cases: a very optimistic one (simulation run 1, in which very faint dwarfs can also possess GCs), a pessimistic case (run 3), in which no dwarf fainter than $`M_B=-12.5`$ can possess GCs (as seems to be the case for the Local Group dSphs), and a medium case (run 2, in which dwarfs fainter than $`M_B=-10.5`$ cannot possess any GCs). However, if faint dEs are already stripped dwarf galaxies, simulation runs 1 or 2 seem more reasonable. We started our simulations with an initial Schechter-type LF with a given characteristic luminosity $`M^{\ast }`$ and faint end slope $`\alpha `$. Then we “disrupted” galaxies of randomly chosen luminosities until 2500 GCs had been accumulated, subject to the condition that the final LF resembles the present one of the Fornax cluster. We have chosen the following initial faint end slopes: $`\alpha =-1.1`$ (the present day faint end slope of the dE/dS0s), $`\alpha =-1.4`$, and $`\alpha =-1.8`$. We varied the characteristic luminosity between $`M^{\ast }=-15.3`$ (the present day value), $`M^{\ast }=-16.3`$, and $`M^{\ast }=-17.3`$. The brighter $`M^{\ast }`$, the higher the fraction of disrupted dwarf galaxies at the bright end of the LF. Figure 3 shows the initial LFs with different slopes compared to the present day LF (hatched area).
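A minimal sketch of such a Monte Carlo run is given below. It is not the original code: the rejection sampling of the Schechter LF, the GC numbers per magnitude bin, and the stopping criterion are simplified, illustrative stand-ins for the initial conditions of Table 4, and the constraint that the final LF resemble the present-day one is omitted here.

```python
import math
import random

def make_schechter_sampler(m_star, alpha, m_bright=-18.0, m_faint=-8.5):
    """Rejection sampler for absolute magnitudes under a Schechter LF."""
    def phi(m):
        lum = 10.0 ** (-0.4 * (m - m_star))          # L / L*
        return lum ** (alpha + 1) * math.exp(-lum)   # Schechter LF in magnitudes
    grid = int((m_faint - m_bright) / 0.01) + 1
    peak = max(phi(m_bright + 0.01 * i) for i in range(grid))
    def draw(rng):
        while True:
            m = rng.uniform(m_bright, m_faint)
            if rng.random() < phi(m) / peak:
                return m
    return draw

def n_gc(m, rng):
    """GCs per dwarf: S_N-based for bright dwarfs, small random numbers below
    (a stand-in for the magnitude bins of Table 4, run 1)."""
    if m < -15.5:                 # bright dwarfs: S_N = 4.5 (Durrell et al.)
        return round(4.5 * 10.0 ** (-0.4 * (m + 15.0)))
    if m < -12.5:
        return rng.randint(0, 5)  # intermediate bin (illustrative range)
    return rng.randint(0, 2)      # faintest dwarfs may still host a GC

def accrete_until(n_target=2500, m_star=-16.3, alpha=-1.4, seed=1):
    """'Disrupt' randomly drawn dwarfs until n_target GCs are accumulated."""
    rng = random.Random(seed)
    draw = make_schechter_sampler(m_star, alpha)
    n_acc = n_gal = 0
    total_lum = 0.0
    while n_acc < n_target:
        m = draw(rng)
        n_acc += n_gc(m, rng)
        n_gal += 1
        total_lum += 10.0 ** (-0.4 * m)   # luminosity in units of an M = 0 object
    return n_gal, -2.5 * math.log10(total_lum), n_acc

n_gal, m_tot, n_acc = accrete_until()
print(f"{n_gal} dwarfs dissolved, total light M = {m_tot:.1f}, {n_acc} GCs")
```

Because the steep LF is dominated by faint dwarfs carrying at most a couple of GCs each, of the order of a thousand or more dwarfs must be dissolved to reach 2500 GCs under such illustrative inputs.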
For each simulation we calculated the total number of disrupted dwarfs $`N_{\mathrm{tot}}`$, their total luminosity $`M_{\mathrm{B},\mathrm{tot}}`$, the fraction of their light compared to the cD halo light $`\mathrm{\Delta }L=L_{\mathrm{dw}}/L_{\mathrm{cD}}`$, and the specific frequency of GCs with respect to the disrupted stellar light, $`S_{\mathrm{N},\mathrm{dw}}`$. In addition, we estimated the mean metallicity of the accreted GCs. For galaxies brighter than $`M_V=-13`$ mag, we adopted the metallicity–luminosity relation given by Côté et al. (cote (1998)), $`\overline{[Fe/H]}=2.31+0.638M_V+0.0247M_V^2`$. For the fainter dwarfs, a metallicity–luminosity relation was derived from a linear regression to the Côté et al. data with $`M_V<-13`$ mag, $`\overline{[Fe/H]}=-0.10M_V-3.13`$. With this relation, the GCs of the faintest dwarfs in our simulations, $`M_B=-8.5`$ mag, have a mean metallicity of about $`\overline{[Fe/H]}=-2.3`$ dex. Table 5 summarizes the results of our simulations. The MC simulations show that high $`S_N`$ values around 10 can only be achieved under the assumption that dwarf galaxies fainter than $`M_B=-10.5`$ mag can possess at least one GC and that the faint end slope of the initial LF is at least as steep as $`\alpha =-1.4`$. The dissolved light then comprises about 60–90% of the present day cD halo light (within 10′). However, the number of dissolved galaxies in these cases, 3000–14000, is very high. It has to be shown whether theoretical simulations of cluster evolution can reproduce such high destruction rates when very low mass dwarf galaxies are also included. Moreover, the mean metallicity of the accreted GCs for the cases with high $`S_N`$ values would be about 0.5 dex lower than the observed metal-poor peak at $`-1.3`$ dex. In Sect.
9 we discuss whether a mixture of the presented accretion process with stripping of GCs and new cluster formation from infalling gas can explain the luminosity of the cD halo together with the observed $`S_N`$ of the central GCS in the Fornax cluster. ## 7 Efficiency of new cluster formation by the accretion of gas-rich dwarf galaxies In this section we assume that the gas of stripped dwarfs forms stars and clusters in the same proportion as has been determined for merging and starburst galaxies. Many examples of young GC candidates in mergers are known: e.g. NGC 3597 (Lutz lutz (1991)), NGC 1275 (Holtzman et al. holt (1992), Nørgaard-Nielsen et al. norg (1993)), NGC 5018 (Hilker & Kissler-Patig hilk96 (1996)), and NGC 7252 (Whitmore et al. whitm93 (1993), Schweizer & Seitzer schw93 (1993), Whitmore & Schweizer whitm95a (1995)). The number of newly formed clusters differs from case to case and seems to depend on the amount of gas that is involved in the merging process. The precondition needed to form a bound cluster is a cold gas cloud with a high density in its core (e.g. Larson lars (1993)). Furthermore, the local star formation efficiency has to be very high and star formation has to occur on a short timescale in order to avoid an early disruption by strong stellar winds and by supernova explosions of the most massive stars (Brown et al. brow (1995), Fritze-v. Alvensleben & Burkert frit95 (1995)). The best candidates for the progenitors of the clusters are the massive, embedded cores of (super) giant molecular clouds (e.g. Ashman & Zepf ashm92 (1992), Harris & Pudritz harr94b (1994)). Elmegreen et al. (elme (1993)) have shown that large molecular cloud complexes can form in interacting systems. The high densities in the cores that are necessary for the cluster collapse can be induced by direct cloud–cloud collisions as well as by an increase of the ambient gas pressure as a result of a merger (Jog & Solomon jog (1992)).
Furthermore, the high velocities of colliding gas during mergers might act as a source of dynamical heating that counteracts the fast cooling which would otherwise prevent efficient cluster formation. In this way metal-rich gas, for which the cooling times are normally short, could also efficiently form GCs. What is the observed cluster formation efficiency in merging and starburst galaxies? Meurer et al. (meur (1995)) investigated the ultraviolet (UV) properties of young clusters in nine starburst galaxies (blue compact dwarfs as well as ultraluminous mergers). On average, about 20% of the UV luminosity comes from clusters. But is this percentage sufficient to increase the specific frequency of the GCS? Before answering this question one has to know how many young clusters will survive an evolution of several Gyr. Fritze-v. Alvensleben & Kurth (frit97 (1997)) calculated with the help of stellar population evolutionary models (Fritze-v. Alvensleben & Burkert frit95 (1995)) that the young clusters in NGC 7252 will evolve into a typical GCS. However, they do not exclude the possibility that up to 60% of the present clusters may be destroyed by dynamical effects during the evolution of the cluster system. This value is the result of semi-analytical model calculations by Vesperini (vesp (1997)), who simulated the evolution of an original GC population in a spiral. Note that most of the destroyed clusters are low-mass clusters; thus the destroyed cluster mass is a much smaller percentage of the initial total cluster mass. On the other hand, Okazaki & Tosa (okaz95 (1993)) estimated that about 60% in mass of an initial GC population, whose initial mass function $`\varphi `$ is approximated by the power law $`\varphi =\mathrm{d}N/\mathrm{d}M\propto M^{-\alpha }`$, with $`\alpha \simeq 2`$, will be destroyed after evolving into the present GCLF. In the following we will assume that a dissolution of 20 to 60% of the cluster mass or light is reasonable.
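The distinction drawn here, between destroying 60% of the clusters by number and 60% of the cluster mass, can be made concrete with a toy power-law mass function. The mass limits in the sketch below are illustrative assumptions, not values from the text:

```python
import math

# Toy initial GC mass function dN/dM ∝ M^-2 between illustrative limits:
m_lo, m_hi = 1.0e4, 1.0e6   # Msun (assumed here for illustration)

def number_frac_below(m_cut):
    """Fraction of clusters (by number) below m_cut for dN/dM ∝ M^-2."""
    return (1.0 / m_lo - 1.0 / m_cut) / (1.0 / m_lo - 1.0 / m_hi)

def mass_frac_below(m_cut):
    """Fraction of the total cluster mass below m_cut (dN/dM ∝ M^-2)."""
    return math.log(m_cut / m_lo) / math.log(m_hi / m_lo)

# Mass below which 60% of all clusters (by number) lie:
m_cut = 1.0 / (1.0 / m_lo - 0.6 * (1.0 / m_lo - 1.0 / m_hi))
print(f"m_cut = {m_cut:.0f} Msun")
print(f"number fraction destroyed: {number_frac_below(m_cut):.2f}")
print(f"mass fraction destroyed:   {mass_frac_below(m_cut):.2f}")
```

Removing the low-mass 60% of the clusters by number erases only about 20% of the total cluster mass, which is the sense in which the destroyed cluster mass is a much smaller percentage of the initial total.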
### 7.1 Increase of $`S_N`$ in a starburst galaxy Coming back to the question of whether a strong starburst like those investigated by Meurer et al. (meur (1995)) can increase $`S_N`$, we make a simple calculation: we start with a gas-rich dwarf that has an absolute luminosity of $`M_V=-16.0`$ mag and 5 GCs, which means $`S_N=2`$ (typical for spirals, e.g. Zepf & Ashman zepf93a (1993)). We assume that a starburst occurs which involves 10% of the total mass, of which 20% will be transformed into clusters. For the duration of the burst this brightens the galaxy to about $`M_V=-17.8`$ mag (assuming that the young stellar population is about 4 mag brighter than a faded old one, Fritze-v. Alvensleben & Burkert frit95 (1995)). About 12 Gyr after the burst, $`M_V`$ has faded again to $`-16.1`$ mag. At this time the total luminosity of the clusters is $`M_V=-11.8`$ if no cluster has been destroyed, or $`M_V=-11.0`$ if clusters corresponding to about 50% of the total cluster light have been destroyed. Adopting for the evolved GCS a typical GCLF (t5-function) with a turnover magnitude of $`M_{\mathrm{V},\mathrm{TO}}=-7.4`$ mag and a dispersion of $`\sigma =1.0`$ (e.g. Kohle et al. kohl (1996)), about 30 or 18 GCs have survived, respectively. The “specific frequency” of the GCS at the time of the starburst is still quite low, $`S_N=2.7`$ for 35 clusters (or cluster candidates), since the galaxy itself is dominated by the bright light of the young stars. Such low $`S_N`$ values have been determined for Local Group irregulars including the LMC (Harris harr91a (1991)). After 12 Gyr, $`S_N`$ has increased significantly: $`S_N=8.4`$ for 23 GCs (or even $`S_N=12.7`$ for 35 GCs). If the starburst is 10 times weaker (1% of the total mass), only 3–5 clusters would have survived and the resulting $`S_N`$ is only slightly larger than before, $`S_N=3.5\pm 0.5`$. ### 7.2 Estimation of the final $`S_N`$ of a starburst Furthermore, we want to answer the following question.
What is the specific frequency of the starburst itself, without an already existing old stellar population? In other words, we consider an isolated gas cloud and assume that some mechanism has triggered a starburst as strong as observed in starburst galaxies. Then we “destroy” about 20 to 60% of the cluster light and determine how many GCs survive compared to the total luminosity of the whole system. Note that stars and clusters fade in the same way (according to the models by Fritze-v. Alvensleben & Burkert frit95 (1995)). The final GCLF has by definition the shape of a t5-function with $`M_{\mathrm{V},\mathrm{TO}}=-7.4`$ mag and $`\sigma =1.0`$. We assume that immediately after the burst 20% of the light comes from clusters and that 1000 GCs will survive the evolution. Table 6 summarizes the results. In column 1 the fraction $`f_{\mathrm{destr}}`$ of GC light that has been disrupted during the evolution is given. Columns 2, 3, and 4 are the absolute luminosities of the evolved GCs, stars, and the total system, respectively. Column 5 gives the resulting $`S_N`$ of the system. The calculations show that $`S_N`$ in an isolated starburst is very high, $`40<S_N<90`$. We note that there is no evidence that such a high $`S_N`$ can result from simple undisturbed galaxy formation. In particular, dEs, whose structural properties are most easily explained by a starburst followed by a supernova-driven wind (e.g. Dekel & Silk deke (1986)), have much lower $`S_N`$ values. However, in the context of the galaxy infall scenario, our calculations might imply that stripped gas from galaxies – and especially dwarf galaxies expel their gas most easily (e.g. Dekel & Silk deke (1986)) – can significantly increase the GC $`S_N`$ of the central GCS, if it suffers a starburst comparable to that observed in starburst galaxies. In Sect. 9 we apply these results to NGC 1399.
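The specific frequencies used throughout Sect. 7 follow from the standard definition $`S_N=N_{\mathrm{GC}}\,10^{0.4(M_V+15)}`$ (Harris & van den Bergh 1988). As a quick check, the sketch below reproduces the numbers of the Sect. 7.1 example:

```python
def specific_frequency(n_gc, m_v):
    """S_N = N_GC * 10**(0.4 * (M_V + 15)) (Harris & van den Bergh 1988)."""
    return n_gc * 10.0 ** (0.4 * (m_v + 15.0))

# (label, number of clusters, galaxy magnitude) for the Sect. 7.1 stages:
stages = [
    ("before the burst    ", 5, -16.0),   # S_N about 2
    ("during the burst    ", 35, -17.8),  # bright young light: S_N about 2.7
    ("after 12 Gyr, 23 GCs", 23, -16.1),  # faded galaxy: S_N about 8.4
    ("after 12 Gyr, 35 GCs", 35, -16.1),  # no destruction: S_N about 12.7
]
for label, n, m_v in stages:
    print(label, round(specific_frequency(n, m_v), 1))
```

The rise in $`S_N`$ is driven entirely by the fading of the field-star light at (nearly) fixed cluster numbers.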
## 8 Constraints and estimations for the dwarf galaxy infall scenario in the Fornax cluster ### 8.1 Why the present day dwarf galaxy population supports the hypothesis of early infall In a scenario where a sufficient number of dwarfs has been dissolved into a cD halo, one would expect a flat faint end slope of the LF compared to the initial value. López-Cruz et al. (lope (1997)) found in a sample of 45 clusters that clusters with a pronounced cD galaxy indeed tend to have a flat LF faint end slope. This is also what we find for the Fornax cluster. Furthermore, the surface density slope of dE and dS0 galaxies within the core radius of the cluster ($`r_c=0\stackrel{\circ }{.}7`$) is flatter than the slope of all possibly dissolved and/or stripped material: cD halo stars, GCs, and perhaps leftover gas. This is consistent with White’s (whit87 (1987)) argument that disrupted material is more concentrated than the relaxed galaxy distribution. Moreover, as in other evolved clusters, the gas-rich late-type Fornax galaxies are found at the outskirts of the cluster, whereas the early-type (possibly stripped) dwarfs are more concentrated towards the center (Ferguson ferg89 (1989)). One also expects that the LF of compact dwarfs is steeper than the LF of less compact dwarfs, since the less compact ones are more easily disrupted. Indeed, in the Fornax cluster the LF of the non-nucleated dE/dS0s is flatter than the LF of the (on average) more luminous nucleated dE/dS0s. Furthermore, if the stripping of gas and stars was more effective in the inner regions, one would expect to find a larger number of fainter remnants in the center than outside. This is indeed seen for the non-nucleated dE/dS0s: the fainter dwarfs are more concentrated towards the center than the brighter ones (Ferguson & Sandage ferg88 (1988)).
Finally, two probable candidates for surviving nuclei of dissolved dwarfs have been found (see Papers 1 and 2), which would indicate that dwarfs from the brighter nucleated dE/dS0 population have also been dissolved. ### 8.2 Constraints from the metallicity distribution Constraints on the metallicity have to be considered if one assumes that the metallicity distribution of the GCs is bimodal rather than equally distributed over the range of possible GC metallicities, $`-2.0<[Fe/H]<0.0`$ dex. Primordial gas, expelled and stripped from low mass dwarfs, is normally believed to be a contributor to the metal-poor GC subpopulation ($`[Fe/H]<-1.1`$ dex), if transformed into GCs. However, this may not always be the case. Mac Low & Ferrara (macl (1999)) calculated that galaxies less massive than about $`10^8\,M_{\odot }`$ can eject metals from supernovae into the intergalactic medium more easily than their interstellar gas. Thus, the expelled gas might also be enriched by the supernova ejecta, and more metal-rich GCs could have been formed as well. The capture of GCs from early-type dwarf galaxies can only have contributed to the metal-poor GCs, since all observed GCs in such dwarfs seem to be more metal-poor than $`[Fe/H]=-1.2`$ dex (e.g. Minniti et al. minn96b (1996)). Côté et al. (cote (1998)) have shown via Monte Carlo simulations that the capture of dwarf galaxies can indeed reproduce the bimodal color distribution around M49 and M87, under the assumption that the red globular cluster population is the intrinsic GCS of the galaxies. In their simulations, the mean metallicity of the captured GCs peaks around $`\overline{[Fe/H]}=-1.3`$ dex for a steep initial LF slope. However, they do not include dwarf galaxies fainter than $`M_V=-13`$ mag. As shown in Sect. 6, the inclusion of these galaxies and an extrapolation of the metallicity–luminosity relation for their GCs can push the mean metallicity of the captured GCs to a lower value.
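The extrapolation just mentioned can be checked numerically. The sketch below implements the two branches of the metallicity–luminosity relation as used in Sect. 6 (the signs of the linear branch are reconstructed here from the quoted limiting values and should be treated as an assumption); the branches join smoothly at $`M_V=-13`$, and the faint-end extrapolation indeed falls well below the $`-1.3`$ dex peak:

```python
def mean_gc_feh(m_v):
    """Mean GC metallicity vs. parent galaxy luminosity: the Cote et al. (1998)
    quadratic fit for bright galaxies, linearly extended to fainter dwarfs
    (linear coefficients as reconstructed from the text)."""
    if m_v < -13.0:                          # brighter than M_V = -13
        return 2.31 + 0.638 * m_v + 0.0247 * m_v ** 2
    return -0.10 * m_v - 3.13                # fainter dwarfs

print(round(mean_gc_feh(-20.0), 2))   # giant ellipticals: about -0.6 dex
print(round(mean_gc_feh(-13.001), 2)) # quadratic branch near the matching point
print(round(mean_gc_feh(-13.0), 2))   # linear branch at the matching point
print(round(mean_gc_feh(-8.5), 2))    # faintest dwarfs: about -2.3 dex
```

With this extension, the faintest dwarfs contribute GCs of mean metallicity around $`-2.3`$ dex, which is what pulls the mean of the captured population below the observed peak.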
Since the metallicity of the accreted stellar populations of the dwarfs themselves is between $`-2.0`$ and $`-0.6`$ dex (taking the values of Local Group dwarfs, e.g. Grebel greb97 (1997)), one should also see a low metallicity in the cD halo light. Unfortunately, a metallicity determination for the cD halo is quite difficult due to the low surface brightness, and in the center the light of the bulge of NGC 1399 would dominate a metal-poor halo component. On the other hand, the metallicity of the halo is probably a mixture of different metallicities if one assumes that not only metal-poor dwarfs have been dissolved, but also that stellar populations of more massive galaxies have been stripped and that new stars have formed from infalling gas of higher metallicity. To explain the majority of the red metal-rich GC subpopulation ($`[Fe/H]\simeq -0.6`$ dex) by the accretion scenario, either already existing GCs of this metallicity had to be captured, or GCs had to be newly formed from enriched infalling gas. This is possible if one allows that the gas-rich dwarfs first had time to enrich their interstellar matter to at least $`-0.8`$ dex before the stripping of the gas became important and/or before new cluster formation in these dwarfs was triggered by interaction processes. In the LMC, for example, no clusters were formed between about 3 and 10 Gyr ago. Whereas the few old clusters have a metallicity of about $`-1.8`$ dex, the younger clusters have metallicities around $`-0.4`$ dex (e.g. Olszewski et al. olsz91b (1991), Hilker et al. hilk95 (1995)). Concerning the time scale for metal enrichment in spirals, Möller et al. (moel (1997)), for example, estimated that 2–3 Gyr is enough time to enrich the iron abundance of the interstellar medium from $`-1.5`$ dex to about $`-0.6`$ dex for early-type spirals (Sa, Sb), whereas at least 7 Gyr are needed for late-type spirals (Sc, Sd). Similarly, Fritze-v.
Alvensleben & Gerhard (frit94 (1994)) calculated the metallicity of a secondary GC population in an early-type spiral–spiral merger to be about $`[Fe/H]=-0.6`$ dex after about 2 Gyr of its lifetime, and after about 8 Gyr in a late-type spiral–spiral merger. Assuming that the metal-rich GC population in Fornax was formed by the infall of all types of gas-rich galaxies, one should therefore expect a range of ages among these GCs in order to account for a metallicity peak around $`-0.6`$ dex. First, the metal-rich GCs should be at least 2 Gyr younger than the metal-poor GCs with $`[Fe/H]=-1.3`$ dex. Second, their age spread should be of the order of 2–6 Gyr. It would be interesting to know whether this prediction can be proved or disproved in further investigations. We note that a large number of metal-rich GCs might also have been accumulated by stripping from more massive early-type galaxies. According to the metallicity–luminosity relation by Côté et al. (cote (1998)), the mean metallicity, $`[Fe/H]=-0.6`$ dex, of the red GC population around NGC 1399 corresponds to a luminosity of the former parent galaxies of about $`M_V=-20`$ mag. This value is typical for the low-luminosity ellipticals in Fornax, such as NGC 1374, NGC 1379, and NGC 1427. ### 8.3 Constraints from the spatial distribution of globular clusters A further point that has to be explained is why the red GC population is more concentrated than the blue one (Forbes et al. forb97 (1997)). The answer might be that star formation from infalling gas was more concentrated to the inner part of the dense cluster core, as is also expected in a merging scenario (Ashman & Zepf ashm92 (1992)). Another possibility is that most of the red GCs have nothing to do with a secondary formation or accretion process, but rather belong to the original GC population of the bulge of NGC 1399.
However, as we show in the next section, not more than about 1300 GCs can belong to the bulge light if one assumes reasonable values for the initial GC specific frequency. This comprises only about half of the red GC subpopulation. ## 9 The correct mix of accreted and newly formed GCs Let us imagine that the infall of dwarf galaxies and gas was really the dominant process for building up the cD halo and the GCS. How many dwarfs and how much of their transformed gas would then have contributed to the cD halo light, and how many GCs might belong to the cD halo? NGC 1399 possesses about 5800 globular clusters (see Sect. 3.2). About 1300 of them would belong to the bulge, $`M_{\mathrm{V},\mathrm{gal}}=-21.5`$ mag (see Sect. 3.3.1), if one assumes an initial specific frequency of $`S_N=3.2`$, which is the mean value for the other ellipticals in the Fornax cluster, except NGC 1404 and NGC 1380. That means that 4500 GCs would belong to the cD halo, and its specific frequency would be about $`S_N=10\pm 1`$. Note that half of the total GCS ($`=2900`$ GCs) is assigned to the metal-poor peak around $`[Fe/H]\simeq -1.3`$ dex, and therefore at least 1600 metal-rich GCs ($`[Fe/H]\simeq -0.6`$ dex) have to be explained by the infall scenario, if one assumes that all 1300 remaining bulge GCs belong to the metal-rich subpopulation. How can dwarf galaxies account for such a high $`S_N`$? As presented in Sect. 5, mainly three scenarios are possible. Firstly, the accreted gas-poor dwarfs possessed high GC frequencies themselves. In this case, the average $`S_N`$ of all accreted dwarfs and GCs can have values between 4 and 22 depending on the initial conditions (see Table 5). Secondly, the infalling gas of previously gas-rich dwarfs was effectively converted into globular clusters. Regarding the starburst as an isolated entity, its resulting system of stars and clusters can have $`S_N`$ values between 40 and 90 (see Table 6).
Finally, the stripping of GCs from dwarf galaxies was more effective than the stripping of their field population. That this is in principle possible is indicated by the fact that the $`S_N`$ value of the outer parts of galaxies, which are primarily affected by stripping, can be of the order of 30 (see Sect. 5.1, case 1b). Among these three possibilities, the stripping of GCs from dwarf galaxies most probably plays a minor role. Even if all 50 dE/dS0s within the core radius of the galaxy distribution are remnants whose outer GCs have been stripped off, we calculate that at most a few hundred GCs have been captured by this process, assuming an initial $`S_N=5.5`$, $`S_N=30`$ for the stripped stars and GCs, and a final $`S_N=3.0`$ for the remnant (similar to the values for NGC 4472, McLaughlin et al. mcla94b (1994)). In the following, we consider the case that at most 500 GCs have been stripped. What is the correct mixture of the two other processes that fulfills the following assumptions? (1) The cD halo has been formed only from accreted and newly formed matter, and its total luminosity is $`M_{\mathrm{V},\mathrm{cD}}=-21.65`$ mag. (2) The specific frequency of the accreted and newly formed GCs with respect to the halo luminosity is $`S_N=10`$ ($`=4500`$ GCs). (3) The 4500 GCs in the cD halo consist of 2500 metal-poor (blue) and 2000 metal-rich (red) GCs (this implies that the GCS of the bulge has 400 metal-poor and 900 metal-rich GCs). (4) GCs captured by accretion and stripping of dwarfs can only be metal-poor. In Table 7 we present the possible mixtures of the three processes, starting with cases for which GC accretion is dominant and ending with cases in which most GCs have been formed from infalling gas. In the first five cases we assumed that all metal-poor GCs were captured or stripped.
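Assumptions (1)–(3) can be checked for arithmetic consistency against the numbers of Sects. 3.2 and 3.3.1 (a total of 5800 GCs, a bulge of $`M_V=-21.5`$ mag with $`S_N=3.2`$, a halo of $`M_V=-21.65`$ mag). A short sketch:

```python
def n_from_sn(s_n, m_v):
    """Invert S_N = N * 10**(0.4 * (M_V + 15)) for the number of GCs."""
    return s_n * 10.0 ** (-0.4 * (m_v + 15.0))

def sn_from_n(n, m_v):
    """Specific frequency of N clusters around a galaxy of magnitude M_V."""
    return n * 10.0 ** (0.4 * (m_v + 15.0))

n_total = 5800
n_bulge = n_from_sn(3.2, -21.5)        # about 1300 GCs assigned to the bulge
n_halo = n_total - n_bulge             # about 4500 GCs left for the cD halo
sn_halo = sn_from_n(n_halo, -21.65)    # about 10, as stated in the text

# Metallicity budget: 2900 blue + 2900 red in total; if all 1300 bulge GCs
# were red, at least 2900 - 1300 = 1600 red GCs remain for the infall scenario.
print(round(n_bulge), round(n_halo), round(sn_halo, 1))
```

The bulge takes up about 1300 GCs, leaving roughly 4500 for the halo at a specific frequency near 10, the values adopted in assumptions (1)–(3).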
Assuming a high $`S_N`$ value for the accretion process ($`S_N=9`$, cases 1 and 2), the cluster formation efficiency (CFE) does not need to be as high as estimated for merger and starburst situations (see Table 6). However, as discussed in Sect. 6, a high $`S_N`$ requires a high accretion rate of dwarf galaxies, a steep initial slope of the faint end of the galaxy LF, and that also very faint dwarf galaxies possessed at least one GC. The faintest dwarf galaxies with a GCS observed so far are the Local Group dSphs Fornax and Sagittarius ($`M_V\simeq -12.5`$ mag). Conversely, if starbursts from stripped gas can produce a high $`S_N`$ value ($`40<S_N<80`$, cases 3–5) and have formed 2000 metal-rich GCs, the $`S_N`$ for the accreted metal-poor GCs is of the order of 5–6. Such values can easily be achieved by the accretion scenario presented in Sect. 6 under various reasonable initial conditions (see Table 5). In cases 6 to 8 we assumed that the majority of the GCs originated from infalling gas with a low value of the estimated starburst CFEs ($`20<S_N<40`$). The specific frequency for the remaining 1500 accreted GCs can then be very low ($`3<S_N<5`$), very faint dwarfs need not possess GCs, and the number of dissolved dwarfs can be of the order of 250–500. However, one then has to assume that GCs formed from metal-rich as well as metal-poor gas and that most of the original dwarf galaxies were very gas-rich. ## 10 Concluding remarks We have summarized the properties of the different components of the central galaxy NGC 1399 (GCS, cD halo, bulge) and of the dwarf galaxy population in the center of the cluster. We have analysed under which circumstances the GCS and cD halo can be explained by the infall and accretion of gas-poor as well as gas-rich dwarf galaxies.
Estimations of the GC formation efficiency from infalling gas and simulations of the accretion of GCSs from early-type dwarfs have shown that the building-up of the cD halo and central GCS by dwarf galaxy and gas infall alone is only possible under special conditions during the formation and evolution of the cluster. Depending on the leading process that contributed most to the rich GCS, the following conditions are required to reproduce the observed properties: * There are as many blue (metal-poor) as red (metal-rich) GCs seen around NGC 1399. Since not all metal-rich GCs can be assigned to the bulge of NGC 1399 when adopting a reasonable $`S_N`$, the formation of secondary metal-rich GCs from stripped gas (or within the dwarf galaxies) probably was an effective process besides the capture of metal-poor GCs. * The stripping and capture of the GCSs of gas-poor dwarf galaxies can only account for the metal-poor GC population. * If the accretion of gas-poor dwarfs was a dominating process, the faint end slope of their initial LF had to be as steep ($`\alpha <-1.4`$) as predicted in CDM models in order to provide a sufficient number of dwarfs that have been disrupted in the central galaxy. * A steep faint end slope of the initial LF leads to a mean metallicity of the captured GCs that is about 0.5 dex more metal-poor than the observed value of the blue GC population around NGC 1399. * Furthermore, in the accretion-dominated scenario, an implausibly high number of dwarfs ($`6000`$, see Table 5) had to be accreted, and about 50% of the fainter dwarfs ($`-12.5<M_V<-10.5`$) must have possessed at least one GC in order to produce high $`S_N`$ values. * A very efficient increase in $`S_N`$ of the central GCS by the formation of GCs from gas can be achieved if the cluster formation efficiency was as high as in merging or starburst galaxies.
* If the majority of the GCs (metal-poor as well as metal-rich ones) formed from stripped gas, a significant fraction of the gas must have been enriched to at least $`-1.0`$ dex before forming GCs in order to explain the bimodal metallicity distribution of the central GCS. This implies that the metal-rich GCs should be at least 2 Gyr younger than the metal-poor ones and should show a significant age spread. Certainly, some of the requirements are quite restrictive. We conclude that the infall of dwarf galaxies can in principle explain many properties of the center of the Fornax cluster, but is most probably not the only process that has been active. Certainly, the brighter, more massive galaxies were also involved in the interaction processes in the central region of the Fornax cluster. A natural extension of the dwarf galaxy infall scenario is, for example, the stripping (and early merging) of giant galaxies, ellipticals and spirals (as mentioned in Sect. 8.2). Besides the low-luminosity ellipticals in Fornax, very likely candidates for stripping are the central giant galaxies NGC 1380 and NGC 1404, which have low GC specific frequencies and might therefore have provided a significant fraction of the central GCS (see Kissler-Patig et al. kiss99a (1999)). We are aware of the fact that our proposed scenario has to be tested and confirmed by further theoretical as well as observational work. In particular, it has to be shown in N-body simulations whether the accretion rate of dwarf galaxies can be very high, and what the dynamical behaviour of stripped and accreted GCs in the central cluster potential is. Furthermore, it has to be tested under which conditions a high cluster formation efficiency can be obtained from stripped gas (whether or not it is comparable to a starburst situation in a galaxy). On the observational side, it has to be shown whether the faintest dwarf galaxies possess GCs or not.
Further investigation of the faint end slopes of galaxy LFs for clusters with very different properties (redshift, richness, compactness, existence of a cD galaxy, etc.) will show whether the proposed scenario is compatible with the findings. Observations more sensitive to the ages of GCs (i.e. measurement of line indices) will prove or disprove the predictions of an age spread among the GCs. ###### Acknowledgements. This research was partly supported by the DFG through the Graduiertenkolleg ‘The Magellanic System and other dwarf galaxies’ and through grant Ri 418/5-1 and Ri 418/5-2. MH thanks Fondecyt Chile for support through ‘Proyecto FONDECYT 3980032’ and LI for support through ‘Proyecto FONDECYT 8970009’.
## 1 Introduction The massless double box diagram shown in Fig. 1 enters many important physical observables, e.g., amplitudes of Bhabha scattering at high energies. Experience shows that master diagrams, i.e. those with all powers of propagators equal to one, are the most complicated to evaluate. In the massless off-shell case, the master double box Feynman integral has been evaluated analytically, strictly in four dimensions. It is the purpose of the present paper to evaluate it analytically on shell, i.e. for $`p_i^2=0,i=1,2,3,4`$, in the framework of dimensional regularization, with the space-time dimension $`d=4-2ϵ`$ as a regularization parameter. To do this, we start, in the next Section, from the alpha-representation of the double box and, after expanding some of the involved functions in Mellin–Barnes (MB) integrals, arrive at a five-fold MB integral representation with gamma functions in the integrand. In Sec. 3 we use a standard procedure of taking residues and shifting contours to resolve the structure of singularities in the parameter of dimensional regularization, $`ϵ`$. This procedure leads to multiple terms for which a Laurent expansion in $`ϵ`$ becomes possible. The resulting integrals in all the MB parameters but one are evaluated explicitly in terms of gamma functions and their derivatives. In Sec. 4, the last MB integral is evaluated by closing the initial integration contour in the complex plane to the right, with an explicit summation of the corresponding series. The final result is expressed in terms of polylogarithms $`\text{Li}_a\left(t/s\right)`$, up to $`a=4`$, and generalized polylogarithms $`S_{a,b}(t/s)`$, with $`a=1,2`$ and $`b=2`$. Starting from the same one-fold MB integral and closing the contour of integration to the left, we obtain a similar result written through the same class of functions depending on the inverse ratio, $`s/t`$. 
Furthermore, we obtain, as a by-product, an explicit result for the backward scattering value, i.e. at $`t=-s`$, of the double box diagram. ## 2 From momentum space to MB representation The massless on-shell double box Feynman integral can be written as $`{\displaystyle \int \int \frac{\text{d}^dk\text{d}^dl}{(k^2+2p_1k)(k^2-2p_2k)k^2(k-l)^2(l^2+2p_1l)(l^2-2p_2l)(l-p_1-p_3)^2}}`$ $`{\displaystyle \frac{\left(i\pi ^{d/2}\mathrm{e}^{-\gamma _\mathrm{E}ϵ}\right)^2}{(-s)^{2+2ϵ}(-t)}}K(t/s,ϵ),`$ (1) where $`s=(p_1+p_2)^2,t=(p_1+p_3)^2`$, and $`k`$ and $`l`$ are respectively the loop momenta of the left and the right box. Usual prescriptions, $`k^2=k^2+i0`$, $`-s=-s-i0`$, etc., are implied. We have pulled out not only standard factors that arise when integrating in the loop momenta but also a factor that makes the resulting function $`K`$ depend on the dimensionless variable, $`x=t/s`$. The alpha representation of the double box is straightforwardly obtained: $$K(x,ϵ)=\mathrm{\Gamma }(3+2ϵ)_0^{\mathrm{\infty }}\text{d}\alpha _1\mathrm{}_0^{\mathrm{\infty }}\text{d}\alpha _7\delta \left(\sum \alpha _i-1\right)D^{1+3ϵ}\left(A+x\alpha _5\alpha _6\alpha _7\right)^{-3-2ϵ},$$ (2) where $`D`$ $`=`$ $`(\alpha _1+\alpha _2+\alpha _7)(\alpha _3+\alpha _4+\alpha _5)+\alpha _6(\alpha _1+\alpha _2+\alpha _3+\alpha _4+\alpha _5+\alpha _7),`$ (3) $`A`$ $`=`$ $`\alpha _1\alpha _2(\alpha _3+\alpha _4+\alpha _5)+\alpha _3\alpha _4(\alpha _1+\alpha _2+\alpha _7)+\alpha _6(\alpha _1+\alpha _3)(\alpha _2+\alpha _4).`$ (4) As is well known, one can choose a sum of an arbitrary subset of $`\alpha _i,i=1,\dots ,7`$ in the argument of the delta function in (2). 
We choose it as $`\delta \left(\sum _{i\neq 6}\alpha _i-1\right)`$ and change variables by turning from alpha to Feynman parameters $`\alpha _3=\alpha _{35}\xi _1,\alpha _5=\alpha _{35}(1-\xi _1),\alpha _1=\alpha _{17}\xi _3,\alpha _7=\alpha _{17}(1-\xi _3),`$ $`\alpha _{35}=\xi _5\xi _2,\alpha _4=\xi _5(1-\xi _2),\alpha _{17}=(1-\xi _5)\xi _4,\alpha _2=(1-\xi _5)(1-\xi _4).`$ (5) to obtain the following parametric integral: $`K(x,ϵ)=\mathrm{\Gamma }(3+2ϵ){\displaystyle _0^{\mathrm{\infty }}}\text{d}\alpha _6{\displaystyle _0^1}\text{d}\xi _1\mathrm{}{\displaystyle _0^1}\text{d}\xi _5\xi _2\xi _4\xi _5^2(1-\xi _5)^2`$ (6) $`\times (\alpha _6+\xi _5(1-\xi _5))^{1+3ϵ}Q^{-3-2ϵ},`$ where $`Q`$ $`=`$ $`x\alpha _6(1-\xi _1)\xi _2(1-\xi _3)\xi _4(1-\xi _5)\xi _5`$ (7) $`+\xi _5(1-\xi _5)[\xi _5\xi _1\xi _2(1-\xi _2)+(1-\xi _5)\xi _3\xi _4(1-\xi _4)]`$ $`+\alpha _6[\xi _5\xi _1\xi _2+(1-\xi _5)\xi _3\xi _4][\xi _5(1-\xi _2)+(1-\xi _5)(1-\xi _4)].`$ We are now going to apply five times the MB representation $$\frac{1}{(X+Y)^\nu }=\frac{1}{\mathrm{\Gamma }(\nu )}\frac{1}{2\pi i}_{-i\mathrm{\infty }}^{+i\mathrm{\infty }}\text{d}w\frac{Y^w}{X^{\nu +w}}\mathrm{\Gamma }(\nu +w)\mathrm{\Gamma }(-w),$$ (8) where the contour of integration is chosen in the standard way: the poles with the $`\mathrm{\Gamma }(\mathrm{\cdots }+w)`$-dependence (let us call them infrared (IR) poles) are to the left of the contour and the poles with the $`\mathrm{\Gamma }(\mathrm{\cdots }-w)`$-dependence (ultraviolet (UV) poles) are to the right of it. First, we introduce a MB integration, in $`w`$, using the decomposition of the function $`Q`$ with $`Y`$ chosen as the first line in (7). We introduce a second MB integral choosing as $`X`$ the term with $`\alpha _6`$ in the remaining part of $`Q`$. After that we can take the integral in $`\alpha _6`$ in gamma functions. 
The next three MB integrations, in $`z_1,z_2`$ and $`z_3`$, are to separate terms in the following three combinations: $`[\xi _5\xi _1\xi _2+(1\xi _5)\xi _3\xi _4]`$, $`[\xi _5(1\xi _2)+(1\xi _5)(1\xi _4)]`$ and $`[\xi _5\xi _1\xi _2(1\xi _2)+(1\xi _5)\xi _3\xi _4(1\xi _4)]`$. All the integrals in Feynman parameters are then taken explicitly in gamma functions. Finally, we perform the change of variables $`z_2=w_2z_11,z_3=w_3z_11`$ and arrive at the following nice 5-fold MB integral: $`K(x,ϵ)={\displaystyle \frac{1}{\mathrm{\Gamma }(13ϵ)}}{\displaystyle \frac{1}{(2\pi i)^5}}{\displaystyle \text{d}w\text{d}w_2\text{d}w_3\text{d}z\text{d}z_1x^{w+1}}`$ $`\times \mathrm{\Gamma }(1+w)^2\mathrm{\Gamma }(w)\mathrm{\Gamma }(w_2)\mathrm{\Gamma }(12ϵww_2)\mathrm{\Gamma }(w_3)\mathrm{\Gamma }(12ϵww_3)`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(1w_2+z_1)\mathrm{\Gamma }(1w_3+z_1)\mathrm{\Gamma }(ϵ+w+w_2+w_3z_1)\mathrm{\Gamma }(z_1)}{\mathrm{\Gamma }(1+w+w_2+w_3)\mathrm{\Gamma }(14ϵww_2w_3)}}`$ $`\times \mathrm{\Gamma }(1ϵ+z)\mathrm{\Gamma }(2+2ϵ+w+w_2+zz_1)\mathrm{\Gamma }(2+2ϵ+w+w_3+zz_1)`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(23ϵww_2w_3+z_1z)\mathrm{\Gamma }(z_1z)}{\mathrm{\Gamma }(3+2ϵ+w+z)}}.`$ (9) One can interchange the order of integration in an arbitrary way. For each order, the rules of dealing with poles are as formulated above. Note that if we have a product $`\mathrm{\Gamma }(a+v)\mathrm{\Gamma }(bv)`$, for some integration variable $`v=w,w_2,w_3,z,z_1`$ with $`a`$ and $`b`$ dependent on other variables, then the integration in $`v`$ produces a singularity of the type $`\mathrm{\Gamma }(a+b)`$. ## 3 Resolving singularities in $`ϵ`$ Since it looks hopeless to evaluate our MB integral for general $`ϵ`$ let us try to obtain a result in expansion in $`ϵ`$ up to the finite part. There is a factor $`1/\mathrm{\Gamma }(13ϵ)`$ proportional to $`ϵ`$ when $`ϵ`$ tends to zero. 
Representation (9) is therefore effectively 4-fold because to generate a contribution that does not vanish at $`ϵ=0`$ we need to take a residue at least in one of the integration variables. None of the integrations can however immediately produce an explicit $`ϵ`$-pole. Let us first distinguish the following two gamma functions $$\mathrm{\Gamma }(ϵ+w+w_2+w_3z_1)\mathrm{\Gamma }(23ϵww_2w_3+z_1z)$$ that are essential for the appearance of the poles. We can write down the integral in $`z_1`$ as minus residue at the point $`z_1=ϵ+w+w_2+w_3`$ (where the gamma function $`\mathrm{\Gamma }(ϵ+w+w_2+w_3z_1)`$ has its first pole which is UV, with respect to $`z_1`$) plus an integral with the same integrand where this pole is IR. We can similarly write down the integral in $`z`$ as minus residue at the point $`z=23ϵww_2w_3+z_1`$ (where the gamma function $`\mathrm{\Gamma }(23ϵww_2w_3+z_1z)`$ has its first pole which is UV, with respect to $`z`$) plus an integral with the same integrand where this pole is IR. As a result we decompose integral (9) as $`K=K_{00}+K_{01}+K_{10}+K_{11}`$ where $`K_{11}`$ corresponds to the two residues, $`K_{10}`$ to the residue in $`z`$ and the integral in $`z_1`$ with the opposite nature of the first pole of $`\mathrm{\Gamma }(ϵ+w+w_2+w_3z_1)`$, etc. For example, the contribution $`K_{11}`$ is given by the following 3-fold integral: $`K_{11}(x,ϵ)={\displaystyle \frac{1}{(2\pi i)^3}}{\displaystyle \text{d}w\text{d}w_2\text{d}w_3x^{w+1}\mathrm{\Gamma }(1+w)\mathrm{\Gamma }(w)}`$ $`\times \mathrm{\Gamma }(w_2)\mathrm{\Gamma }(ϵw_2)\mathrm{\Gamma }(1+ϵ+w+w_2)\mathrm{\Gamma }(12ϵww_2)`$ $`\times \mathrm{\Gamma }(w_3)\mathrm{\Gamma }(ϵw_3)\mathrm{\Gamma }(1+ϵ+w+w_3)\mathrm{\Gamma }(12ϵww_3)`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(2+3ϵ+w+w_2+w_3)\mathrm{\Gamma }(ϵww_2w_3)}{\mathrm{\Gamma }(1+w+w_2+w_3)\mathrm{\Gamma }(14ϵww_2w_3)}}.`$ (10) This contribution is in turn decomposed, in a similar way, as $`K_{11}=_{i,j=0,1,2}K_{11ij}`$. 
Here the value $`i=1`$ of the first index denotes the residue in $`w_2`$ at the point $`w_2=0`$. The value $`i=2`$ denotes the residue in $`w_2`$ at $`w_2=1ϵw`$ of the integrand where the first pole of $`\mathrm{\Gamma }(w_2)`$ is UV rather than IR. Finally, $`i=0`$ means that both above poles are IR. The second index similarly refers to the integral in $`w_3`$. In particular, we have $`K_{1111}(x,ϵ)=\mathrm{\Gamma }(ϵ)^2{\displaystyle \frac{1}{2\pi i}}{\displaystyle \text{d}wx^{w+1}}`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(1+ϵ+w)^2\mathrm{\Gamma }(2+3ϵ+w)\mathrm{\Gamma }(12ϵw)^2\mathrm{\Gamma }(ϵw)\mathrm{\Gamma }(w)}{\mathrm{\Gamma }(14ϵw)}},`$ (11) $`K_{1112}(x,ϵ)=K_{1121}(x,ϵ)={\displaystyle \frac{\mathrm{\Gamma }(1+2ϵ)\mathrm{\Gamma }(ϵ)}{\mathrm{\Gamma }(3ϵ)}}{\displaystyle \frac{1}{2\pi i}}{\displaystyle \text{d}wx^{w+1}}`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(1+w)^2\mathrm{\Gamma }(1+ϵ+w)^2\mathrm{\Gamma }(12ϵw)\mathrm{\Gamma }(ϵw)\mathrm{\Gamma }(w)}{\mathrm{\Gamma }(2+ϵ+w)}},`$ (12) $`K_{1122}(x,ϵ)=\mathrm{\Gamma }(ϵ)^2{\displaystyle \frac{1}{2\pi i}}{\displaystyle \text{d}wx^{w+1}}`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(1+w)^3\mathrm{\Gamma }(1+ϵ+w)^2\mathrm{\Gamma }(ϵw)^2\mathrm{\Gamma }(ϵw)\mathrm{\Gamma }(w)}{\mathrm{\Gamma }(2+ϵ+w)\mathrm{\Gamma }(12ϵ+w)\mathrm{\Gamma }(12ϵw)}}.`$ (13) The next contribution is $`K_{10}(x,ϵ)={\displaystyle \frac{1}{\mathrm{\Gamma }(-1-3ϵ)}}{\displaystyle \frac{1}{(2\pi i)^4}}{\displaystyle \text{d}w\text{d}w_2\text{d}w_3\text{d}z_1x^{w+1}\mathrm{\Gamma }(1+w)^2\mathrm{\Gamma }(w)}`$ $`\times \mathrm{\Gamma }(w_2)\mathrm{\Gamma }(ϵw_2)\mathrm{\Gamma }(12ϵww_2)\mathrm{\Gamma }(w_3)\mathrm{\Gamma }(ϵw_3)\mathrm{\Gamma }(12ϵww_3)`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(2+3ϵ+w+w_2+w_3)\mathrm{\Gamma }(1w_2+z_1)\mathrm{\Gamma }(1w_3+z_1)}{\mathrm{\Gamma }(1+w+w_2+w_3)\mathrm{\Gamma }(14ϵww_2w_3)}}`$
$`\times {\displaystyle \frac{\mathrm{\Gamma }(14ϵww_2w_3+z_1)\mathrm{\Gamma }(ϵ+w+w_2+w_3z_1)\mathrm{\Gamma }(z_1)}{\mathrm{\Gamma }(1ϵw_2w_3+z_1)}},`$ (14) where the first pole of $`\mathrm{\Gamma }(ϵ+w+w_2+w_3z_1)`$ is IR, with respect to $`z_1`$, rather than UV. We further decompose this contribution by changing the nature of the first pole of $`\mathrm{\Gamma }(14ϵww_2w_3+z_1)`$ in $`z_1`$. We obtain $`K_{10}=K_{100}+K_{101}`$, where the new index 1 corresponds to the residue and has the form $`K_{101}(x,ϵ)={\displaystyle \frac{1}{(2\pi i)^3}}{\displaystyle \text{d}w\text{d}w_2\text{d}w_3x^{w+1}\frac{\mathrm{\Gamma }(1+w)^2\mathrm{\Gamma }(w)}{\mathrm{\Gamma }(2+3ϵ+w)}}`$ $`\times \mathrm{\Gamma }(w_2)\mathrm{\Gamma }(ϵw_2)\mathrm{\Gamma }(2+4ϵ+w+w_2)\mathrm{\Gamma }(12ϵww_2)\mathrm{\Gamma }(w_3)`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(ϵw_3)\mathrm{\Gamma }(2+4ϵ+w+w_3)\mathrm{\Gamma }(12ϵww_3)\mathrm{\Gamma }(2+3ϵ+w+w_2+w_3)}{\mathrm{\Gamma }(1+w+w_2+w_3)}}.`$ (15) Each of the contributions $`K_{100}`$ and $`K_{101}`$ is then decomposed using the change of the nature of the poles $`w_2=0`$ and $`w_3=0`$. We obtain $`K_{10j}=K_{10j00}+K_{10j01}+K_{10j10}+K_{10j11}`$, for $`j=0,1`$. Here the value $`i=1`$ of the last index denotes the residue in $`w_3`$ at the point $`w_3=0`$ and the value $`i=0`$ denotes an integral where the first pole of $`\mathrm{\Gamma }(w_3)`$ is considered UV. The second index from the end similarly refers to $`\mathrm{\Gamma }(w_2)`$. 
For example, $$K_{10111}(x,ϵ)=\mathrm{\Gamma }(ϵ)^2\frac{1}{2\pi i}\text{d}wx^{w+1}\mathrm{\Gamma }(2+4ϵ+w)^2\mathrm{\Gamma }(1+w)\mathrm{\Gamma }(12ϵw)^2\mathrm{\Gamma }(w).$$ (16) Then we have $`K_{01}(x,ϵ)={\displaystyle \frac{1}{\mathrm{\Gamma }(13ϵ)}}{\displaystyle \frac{1}{(2\pi i)^4}}{\displaystyle \text{d}w\text{d}w_2\text{d}w_3\text{d}zx^{w+1}\mathrm{\Gamma }(1+w)^2\mathrm{\Gamma }(w)}`$ $`\times \mathrm{\Gamma }(w_2)\mathrm{\Gamma }(1+ϵ+w+w_2)\mathrm{\Gamma }(12ϵww_2)\mathrm{\Gamma }(w_3)\mathrm{\Gamma }(1+ϵ+w+w_3)`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(12ϵww_3)\mathrm{\Gamma }(ϵww_2w_3)\mathrm{\Gamma }(1ϵ+z)}{\mathrm{\Gamma }(1+w+w_2+w_3)\mathrm{\Gamma }(14ϵww_2w_3)}}`$ $`\times {\displaystyle \frac{\mathrm{\Gamma }(2+ϵw_2+z)\mathrm{\Gamma }(2+ϵw_3+z)\mathrm{\Gamma }(ϵ+w+w_2+w_3z)\mathrm{\Gamma }(22ϵz)}{\mathrm{\Gamma }(3+2ϵ+w+z)}},`$ (17) where the first pole of $`\mathrm{\Gamma }(22ϵz)`$ is IR, with respect to $`z`$, rather than UV. Using the change of variables $`w_212ϵww_2,w_312ϵww_3,z_1ϵww_2w_3+z`$ in $`K_{10}`$ we see that $`K_{01}K_{10}`$. Finally, the contribution $`K_{00}`$ is similarly decomposed: $`K_{00}=K_{0000}+K_{0001}+K_{0010}+K_{0011}`$. Now we observe that, in each of the obtained contributions, the only additional (with respect to explicit gamma functions depending on $`ϵ`$) source of the poles in $`ϵ`$ is the last integration, in $`w`$, where the first (UV) pole of the gamma function $`\mathrm{\Gamma }(12ϵw)`$ glues with an IR pole of $`\mathrm{\Gamma }(1+w)`$ or $`\mathrm{\Gamma }(1+ϵ+w)`$ when $`ϵ0`$ — see such examples in (1113) and (16). Therefore we further decompose each of the contributions into two pieces: minus residue at the point $`w=12ϵ`$ plus an integral where we can integrate in the region $`1<\mathrm{Re}w<0`$. In each of these pieces, we now can expand an integrand in a Laurent series in $`ϵ`$ up to the finite part. 
In particular, no poles in $`ϵ`$ arise in $`K_{0000}`$ so that it is zero at $`ϵ=0`$ because of the overall factor $`1/\mathrm{\Gamma }(-1-3ϵ)`$. We collect separately the pieces from these last residues and from the last integration at $`-1<\mathrm{Re}w<0`$. The first collection gives the leading order term in the expansion of the double box in the limit $`t/s\to 0`$ while the second collection involves the rest of the terms of this expansion. A remarkable fact is that, in all these multiple contributions, the integrations in $`w_2,w_3,z,z_1`$ can be performed analytically, with the help of the first and the second Barnes lemmas $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _{-i\mathrm{\infty }}^{+i\mathrm{\infty }}}\text{d}w\mathrm{\Gamma }(\lambda _1+w)\mathrm{\Gamma }(\lambda _2+w)\mathrm{\Gamma }(\lambda _3-w)\mathrm{\Gamma }(\lambda _4-w)`$ (18) $`={\displaystyle \frac{\mathrm{\Gamma }(\lambda _1+\lambda _3)\mathrm{\Gamma }(\lambda _1+\lambda _4)\mathrm{\Gamma }(\lambda _2+\lambda _3)\mathrm{\Gamma }(\lambda _2+\lambda _4)}{\mathrm{\Gamma }(\lambda _1+\lambda _2+\lambda _3+\lambda _4)}},`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _{-i\mathrm{\infty }}^{+i\mathrm{\infty }}}\text{d}w{\displaystyle \frac{\mathrm{\Gamma }(\lambda _1+w)\mathrm{\Gamma }(\lambda _2+w)\mathrm{\Gamma }(\lambda _3+w)\mathrm{\Gamma }(\lambda _4-w)\mathrm{\Gamma }(\lambda _5-w)}{\mathrm{\Gamma }(\lambda _1+\lambda _2+\lambda _3+\lambda _4+\lambda _5+w)}}`$ (19) $`={\displaystyle \frac{\mathrm{\Gamma }(\lambda _1+\lambda _4)\mathrm{\Gamma }(\lambda _2+\lambda _4)\mathrm{\Gamma }(\lambda _3+\lambda _4)\mathrm{\Gamma }(\lambda _1+\lambda _5)\mathrm{\Gamma }(\lambda _2+\lambda _5)\mathrm{\Gamma }(\lambda _3+\lambda _5)}{\mathrm{\Gamma }(\lambda _1+\lambda _2+\lambda _4+\lambda _5)\mathrm{\Gamma }(\lambda _1+\lambda _3+\lambda _4+\lambda _5)\mathrm{\Gamma }(\lambda _2+\lambda _3+\lambda _4+\lambda _5)}}`$ and their corollaries. 
These are two typical examples of such corollaries: $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _i\mathrm{}^{+i\mathrm{}}}\text{d}w{\displaystyle \frac{\mathrm{\Gamma }(\lambda _1+w)\mathrm{\Gamma }(\lambda _2+w)^2\mathrm{\Gamma }(\lambda _2w)\mathrm{\Gamma }(\lambda _3w)}{\mathrm{\Gamma }(\lambda _1+\lambda _2+\lambda _3+w)}}`$ (20) $`={\displaystyle \frac{\mathrm{\Gamma }(\lambda _1\lambda _2)\mathrm{\Gamma }(\lambda _2+\lambda _3)\left[\psi ^{}\left(\lambda _1+\lambda _3\right)\psi ^{}\left(\lambda _2+\lambda _3\right)\right]}{\mathrm{\Gamma }(\lambda _1+\lambda _3)}},`$ where the pole $`w=\lambda _2`$ is considered IR while other poles are treated in the standard way, and $$\frac{1}{2\pi i}_{1/2i\mathrm{}}^{1/2+i\mathrm{}}\text{d}w\mathrm{\Gamma }(1+w)\mathrm{\Gamma }(w)\mathrm{\Gamma }(w)\mathrm{\Gamma }(1w)\psi (1+w)^2=\frac{\gamma _\mathrm{E}^2\pi ^2}{3}+6\gamma _\mathrm{E}\zeta (3)+\frac{\pi ^4}{45}.$$ (21) Here $`\gamma _\mathrm{E}`$ is the Euler constant, $`\psi (z)`$ the logarithmical derivative of the gamma function, and $`\zeta (z)`$ the Riemann zeta function. 
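The first Barnes lemma (18) can also be tested numerically (this is our illustration, not part of the paper). The sketch below again supplies a Lanczos complex gamma function, `cgamma`, and compares the contour integral along Re w = 0 (valid here because all the assumed sample values lambda_i are positive, so the contour separates the two pole families) against the closed-form right-hand side:

```python
import cmath
import math

_G = 7
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma(z) for complex z via the Lanczos approximation."""
    z = complex(z)
    if z.real < 0.5:
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1.0 - z))
    z -= 1.0
    x = _C[0]
    for i in range(1, _G + 2):
        x += _C[i] / (z + i)
    t = z + _G + 0.5
    return cmath.sqrt(2.0 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def barnes_lhs(l1, l2, l3, l4, T=30.0, n=6000):
    """(1/2 pi i) int dw G(l1+w) G(l2+w) G(l3-w) G(l4-w) along Re w = 0,
    computed with the trapezoid rule; the integrand decays like exp(-2 pi |t|)."""
    h = 2.0 * T / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        w = complex(0.0, -T + k * h)
        f = cgamma(l1 + w) * cgamma(l2 + w) * cgamma(l3 - w) * cgamma(l4 - w)
        total += f * (h if 0 < k < n else 0.5 * h)
    return (total / (2.0 * math.pi)).real  # dw = i dt cancels the i

def barnes_rhs(l1, l2, l3, l4):
    """Right-hand side of the first Barnes lemma, eq. (18)."""
    g = math.gamma
    return g(l1 + l3) * g(l1 + l4) * g(l2 + l3) * g(l2 + l4) / g(l1 + l2 + l3 + l4)
```

For example, `barnes_lhs(0.6, 0.7, 0.8, 0.9)` agrees with `barnes_rhs(0.6, 0.7, 0.8, 0.9)` to well below single-precision accuracy.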
After taking these integrations and summing up the resulting contributions into the two above collections we obtain the following result $`K(x,ϵ)`$ $`=`$ $`K_{0t}(x,ϵ)+K_{1t}(x,ϵ)+o(ϵ),`$ (22) $`K_{0t}(x,ϵ)`$ $`=`$ $`{\displaystyle \frac{4}{ϵ^4}}+{\displaystyle \frac{5\mathrm{ln}x}{ϵ^3}}\left(2\mathrm{ln}^2x{\displaystyle \frac{5}{2}}\pi ^2\right){\displaystyle \frac{1}{ϵ^2}}`$ (23) $`\left({\displaystyle \frac{2}{3}}\mathrm{ln}^3x+{\displaystyle \frac{11}{2}}\pi ^2\mathrm{ln}x{\displaystyle \frac{65}{3}}\zeta (3)\right){\displaystyle \frac{1}{ϵ}}+{\displaystyle \frac{4}{3}}\mathrm{ln}^4x+6\pi ^2\mathrm{ln}^2x{\displaystyle \frac{88}{3}}\zeta (3)\mathrm{ln}x+{\displaystyle \frac{29}{30}}\pi ^4,`$ $`K_{1t}(x,ϵ)`$ $`=`$ $`{\displaystyle \frac{2}{\pi i}}{\displaystyle \frac{\text{d}wx^{w+1}}{1+w}\mathrm{\Gamma }(1+w)^3\mathrm{\Gamma }(w)^3}`$ (24) $`\times \left[{\displaystyle \frac{1}{ϵ}}{\displaystyle \frac{5}{1+w}}+3\psi (1+w)4\psi (w)\gamma _\mathrm{E}\right].`$ Let us stop for a moment and observe that this result provides, in a very easy way, not only numerical evaluation of the double box diagram for general values of $`s`$ and $`t`$ but also asymptotic expansions in the limits $`t/s0`$ and $`s/t0`$ which are obtained by taking series of residues respectively to the right or to the left. ## 4 Evaluating the last MB integral The last MB integration, in (24), is performed analytically by taking the sum of the residues at the points $`w=0,1,2,\mathrm{}`$ and summing up the resulting series. In this last step, we use, in particular, summation formulae derived in . 
Here is the final result: $`K_{1t}(x,ϵ)`$ $`=`$ $`\left[2\text{Li}_3\left(x\right)2\mathrm{ln}x\text{Li}_2\left(x\right)\left(\mathrm{ln}^2x+\pi ^2\right)\mathrm{ln}(1+x)\right]{\displaystyle \frac{2}{ϵ}}`$ (25) $`4\left(S_{2,2}(x)\mathrm{ln}xS_{1,2}(x)\right)+44\text{Li}_4\left(x\right)4\left(\mathrm{ln}(1+x)+6\mathrm{ln}x\right)\text{Li}_3\left(x\right)`$ $`+2\left(\mathrm{ln}^2x+2\mathrm{ln}x\mathrm{ln}(1+x)+{\displaystyle \frac{10}{3}}\pi ^2\right)\text{Li}_2\left(x\right)`$ $`+\left(\mathrm{ln}^2x+\pi ^2\right)\mathrm{ln}^2(1+x){\displaystyle \frac{2}{3}}\left(4\mathrm{ln}^3x+5\pi ^2\mathrm{ln}x6\zeta (3)\right)\mathrm{ln}(1+x),`$ where $`\text{Li}_a\left(z\right)`$ is the polylogarithm and $$S_{a,b}(z)=\frac{(1)^{a+b1}}{(a1)!b!}_0^1\frac{\mathrm{ln}^{a1}(t)\mathrm{ln}^b(1zt)}{t}\text{d}t,$$ (26) a generalized polylogarithm introduced in . Note that any (generalized) polylogarithms involved can be expanded in a Taylor series at $`x=0`$ with the radius of convergence equal to one. We can similarly close the integration contour to the left and obtain a result in a form of functions depending on the inverse ratio, $`y=1/x`$: $`K(x,ϵ)`$ $`=`$ $`K_{0s}(x,ϵ)+K_{1s}(x,ϵ)+o(ϵ),`$ (27) $`K_{0s}(1/y,ϵ)`$ $`=`$ $`{\displaystyle \frac{4}{ϵ^4}}{\displaystyle \frac{5\mathrm{ln}y}{ϵ^3}}\left(2\mathrm{ln}^2y{\displaystyle \frac{5}{2}}\pi ^2\right){\displaystyle \frac{1}{ϵ^2}}`$ (28) $`+\left({\displaystyle \frac{7}{2}}\pi ^2\mathrm{ln}y+{\displaystyle \frac{65}{3}}\zeta (3)\right){\displaystyle \frac{1}{ϵ}}+{\displaystyle \frac{1}{3}}\pi ^2\mathrm{ln}^2y+{\displaystyle \frac{76}{3}}\zeta (3)\mathrm{ln}y{\displaystyle \frac{83}{90}}\pi ^4,`$ $`K_{1s}(1/y,ϵ)`$ $`=`$ $`\left[2\text{Li}_3\left(y\right)2\mathrm{ln}y\text{Li}_2\left(y\right)\left(\mathrm{ln}^2y+\pi ^2\right)\mathrm{ln}(1+y)\right]{\displaystyle \frac{2}{ϵ}}`$ (29) $`4\left(S_{2,2}(y)\mathrm{ln}yS_{1,2}(y)\right)36\text{Li}_4\left(y\right)4\left(\mathrm{ln}(1+y)5\mathrm{ln}y\right)\text{Li}_3\left(y\right)`$ 
$`2\left(\mathrm{ln}^2y2\mathrm{ln}y\mathrm{ln}(1+y)+{\displaystyle \frac{10}{3}}\pi ^2\right)\text{Li}_2\left(y\right)`$ $`+\left(\mathrm{ln}^2y+\pi ^2\right)\mathrm{ln}^2(1+y)+2\left(\mathrm{ln}^3y+{\displaystyle \frac{2}{3}}\pi ^2\mathrm{ln}y+2\zeta (3)\right)\mathrm{ln}(1+y).`$ As a by-product, we obtain an explicit result for the backward scattering value of (1), i.e. at $`t=-s`$, $$\frac{\left(i\pi ^{d/2}\right)^2\mathrm{e}^{-2\gamma _\mathrm{E}ϵ}}{(s)^{3+2ϵ}}\left[\frac{4}{ϵ^4}\frac{9\pi ^2}{2ϵ^2}\frac{53\zeta (3)}{3ϵ}+\frac{22\pi ^4}{9}\pi i\left(\frac{5}{ϵ^3}\frac{25\pi ^2}{6ϵ}\frac{148\zeta (3)}{3}\right)\right].$$ (30) The presented algorithm is applicable to massless on-shell box Feynman integrals with any integer powers of propagators. Acknowledgments. I am grateful to M. Beneke and the CERN Theory Group for kind hospitality during my visit to CERN in April–May 1999, where this work was completed. Thanks to M.B. for carefully reading the manuscript and to O.L. Veretin for involving me in this problem and for useful discussions. This work was supported by the Volkswagen Foundation, contract No. I/73611, and by the Russian Foundation for Basic Research, project 98–02–16981.
# Charm Photoproduction in ep Collisions at HERA ## 1 INTRODUCTION Heavy quark photoproduction can be used to probe pQCD calculations with a hard scale given by the heavy quark mass and the high transverse momentum of the produced parton ($`m_Q\gg \mathrm{\Lambda }_{QCD}`$). Two types of NLO calculations with different approaches are available for comparison with measurements of charm photoproduction at HERA. The massive charm approach assumes light quarks to be the only active flavours within the structure functions of the proton and the photon, while the massless charm approach also treats charm as an active flavour and is thus only valid for $`p_{\perp }\gg m_c`$. The data taken by the ZEUS collaboration during 1996/1997 correspond to an integrated luminosity of about $`37\text{pb}^{-1}`$. In a subsample of about $`17\text{pb}^{-1}`$ a small calorimeter positioned along the beam pipe was used to tag low $`W`$ events, $`80<W_{\gamma p}<120`$ GeV. The results of the high $`W`$ region ($`130<W_{\gamma p}<280`$ GeV) have been published before and will not be shown here. This is the first presentation of our low $`W`$ results. Charm was identified by the observation of $`D^{\ast }`$(2010) mesons, which were reconstructed in the following decay modes: $`D^{\ast +}\to D^0\pi _s^+\to (K^{-}\pi ^+)\pi _s^+`$ $`(Br=0.0262\pm 0.0010)`$ and $`D^{\ast +}\to D^0\pi _s^+\to (K^{-}\pi ^+\pi ^+\pi ^{-})\pi _s^+`$ $`(Br=0.051\pm 0.003)`$, and charge conjugates. The kinematic range studied was $`p_{\perp }^{D^{\ast }}>2`$ GeV and $`-1.5<\eta ^{D^{\ast }}<1.5`$ for the high $`W`$ region, and $`2<p_{\perp }^{D^{\ast }}<8`$ GeV and $`-1.0<\eta ^{D^{\ast }}<1.5`$ for the low $`W`$ region. The pseudorapidity is $`\eta ^{D^{\ast }}=-\mathrm{ln}(\mathrm{tan}\frac{\theta }{2})`$, where $`\theta `$ is the polar angle with respect to the proton beam direction. Charged tracks were measured in the central tracking detector. Cross sections were calculated in the photoproduction range of photon virtualities $`Q^2<1\text{GeV}^2`$ ($`Q^2<0.015\text{GeV}^2`$ for the tagged data). 
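The combined branching fractions quoted above are products of the individual decay branchings, with relative errors added in quadrature. A small sketch of that bookkeeping (the two input values below are our illustrative, PDG-era numbers, not values taken from this paper):

```python
import math

def product_with_error(factors):
    """Product of (value, error) pairs, propagating independent relative
    uncertainties in quadrature: (dP/P)^2 = sum_i (dx_i/x_i)^2."""
    prod = 1.0
    rel_sq = 0.0
    for value, err in factors:
        prod *= value
        rel_sq += (err / value) ** 2
    return prod, prod * math.sqrt(rel_sq)

# Assumed illustrative inputs, close to contemporary PDG values:
#   Br(D*+ -> D0 pi_s+) ~ 0.683 +- 0.014
#   Br(D0  -> K- pi+)   ~ 0.0385 +- 0.0009
br, dbr = product_with_error([(0.683, 0.014), (0.0385, 0.0009)])
# br lands near the 0.0262 quoted in the text for the (K pi) channel.
```

The same routine applied to the four-body D0 mode reproduces the order of the quoted 0.051 combined branching.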
## 2 $`D^{\ast }`$ RECONSTRUCTION $`D^{\ast }`$ events have been selected by means of the mass difference ($`\mathrm{\Delta }M`$) method. In the high $`W`$ region we have observed $`3702\pm 136`$ $`D^{\ast }`$’s in the $`D^0(K\pi )`$ decay mode with $`p_{\perp }^{D^{\ast }}>2`$ GeV, and $`1397\pm 108`$ in the $`(K\pi \pi \pi )`$ decay mode with $`p_{\perp }^{D^{\ast }}>4`$ GeV ($`M(D^0)=1.80`$–$`1.92`$ GeV). In the low $`W`$ region we triggered only the $`(K\pi )`$ decay mode, and observed $`550\pm 36`$ $`D^{\ast }`$ events in the range $`2<p_{\perp }^{D^{\ast }}<8`$ GeV (Fig. 1). All tracks were assumed to be pions and kaons in turn; wrong charge $`D^{\ast }`$ combinations were used as a background distribution (dashed curve in Fig. 1), normalized outside the signal region. ## 3 $`D^{\ast }`$ CROSS SECTIONS AND COMPARISON WITH CALCULATIONS The $`D^{\ast }`$ differential cross sections in the low $`W`$ region are shown in Figs. 2 to 5. For comparing the experimental data to the NLO QCD calculations, we have used the $`D^{\ast }`$ branching value measured by OPAL, $`f(c\to D^{\ast +}+\mathrm{})=0.222\pm 0.014\pm 0.014`$. For the charm fragmentation to $`D^{\ast }`$ the Peterson fragmentation function was used: $$D_c(z)=N\frac{z(1-z)^2}{[(1-z)^2+ϵz]^2},z=\frac{p_{D^{\ast }}}{p_c}.$$ In the massive calculation $`ϵ=0.036`$ was obtained from a recent fit of Nason and Oleari to ARGUS data. Alternatively, the Peterson fragmentation was replaced by fragmentation effects estimated by a leading order Monte Carlo (Pythia). Initial and final state radiation were not included. The results of both calculations for the low $`W`$ region are shown in Figs. 2 and 3. The cross sections are compared with NLO QCD massive calculations using MRSG for the proton structure function (SF) and GRV-G HO for the photon. The theoretical massive calculations are below the data, in particular in the forward (proton) direction, although the Pythia fragmentation slightly improves the agreement. 
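The shape of the Peterson function for ε = 0.036 can be explored numerically. The sketch below (our illustration, not from the paper) normalizes the function with Simpson's rule and computes the mean momentum fraction carried by the D meson:

```python
def peterson(z, eps=0.036):
    """Peterson et al. fragmentation shape (unnormalized), in the equivalent
    polynomial form D(z) ~ z (1 - z)**2 / ((1 - z)**2 + eps * z)**2."""
    return z * (1.0 - z) ** 2 / ((1.0 - z) ** 2 + eps * z) ** 2

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * f(a + k * h)
    return s * h / 3.0

# The integrand vanishes at both endpoints, so plain quadrature is safe.
norm = simpson(peterson, 0.0, 1.0)
mean_z = simpson(lambda z: z * peterson(z), 0.0, 1.0) / norm
# For eps = 0.036 the distribution peaks near z ~ 0.8, i.e. a hard spectrum,
# which is why fragmentation details matter for the forward D* cross sections.
```

Raising ε softens the spectrum (lower mean z), which moves the predicted cross sections in the same direction as the Pythia-based fragmentation mentioned above.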
A comparison with massless calculations, which are expected to become valid mainly at higher $`p_{\perp }^{D^{\ast }}`$, is shown in Fig. 4 for several photon structure functions. Some sensitivity to the photon SF seems to be present, but the excess in the forward direction is evident. The structure function GS-G-96 HO describes the data best. Recently Berezhnoy, Kiselev and Likhoded (BKL) have suggested a new model for describing $`D^{\ast }`$ photoproduction. In this tree level pQCD $`O(\alpha \alpha _s^3)`$ calculation, they hadronize the ($`c,\overline{q}`$) state produced in pQCD, taking into account higher twist terms at $`p_{\perp }\sim m_c`$. Thus the model is supposed to be valid over the whole $`p_{\perp }`$ range studied. No explicit resolved component is used. Singlet and octet color states both contribute to $`D^{\ast }`$ production. The color state ratio $`O(8)/O(1)`$ is a free parameter in this model and was tuned to the ZEUS untagged results, yielding a value of 1.3. Comparison of these calculations, for the same octet/singlet mixture, with the ZEUS tagged low $`W`$ data is shown in Fig. 5. A better agreement with the data is observed than that for the NLO calculations.
# Intrinsic Kinematics ## 1 Intrinsic Equations The acceleration vector of a material point moving in a plane can be resolved along two special directions which are independent of the choice of the particular system of reference used to describe the motion. These intrinsic directions are the tangent to the trajectory of the material point and the perpendicular to it in the plane of the motion. Fig. 1a shows the situation for a particle describing an arbitrary trajectory in the plane. At the position P of the particle we have indicated the direction of the velocity vector $`\stackrel{}{v}`$ and the total acceleration vector $`\stackrel{}{a}`$. The component of the acceleration tangent to the path, $`a_t`$, measures the rate of change of the magnitude of the velocity vector. The component of the acceleration normal to the path, $`a_n`$, measures the rate of change of the direction of the velocity vector. In Fig. 1a, we have also drawn a circle of radius $`\rho `$ and center O which is tangent to the path at P. When this circle fits the curve just right at P it is called the osculating circle of the path at that point. The osculating circle is very helpful in determining the component of the acceleration normal to the path of the particle. If we imagine that, when the particle is at P, instead of following its real path it describes a uniform motion around the osculating circle itself, the component of the acceleration normal to the path becomes the centripetal acceleration of this motion, of magnitude $`a_n=v^2/\rho `$ in the direction of the radius PO of the osculating circle. Now note that in Fig. 1a we have represented the total acceleration vector in the direction of the chord PQ of the osculating circle, making an angle $`\varphi `$ with the radius PO. 
Since $`a_n=a\mathrm{cos}\varphi `$ we also have $$\frac{v^2}{\rho }=a\mathrm{cos}\varphi .$$ (1.1) The magnitude of the total acceleration of the particle in (1.1) can be related to yet another geometric element of the osculating circle, namely, the length of the chord PQ between the particle and the osculating circle, in the direction of the total acceleration vector. From Fig. 1a we see that this length is $`C=2\rho \mathrm{cos}\varphi `$. Substituting $`\mathrm{cos}\varphi =C/2\rho `$ into (1.1) yields: $$C=\frac{2v^2}{a}.$$ (1.2) ## 2 Projectile Motion Relation (1.2) finds an interesting application in the study of projectile motion under gravity. In Fig. 1b we have represented the parabolic trajectory described by a projectile fired with velocity $`\stackrel{}{v_0}`$ at an angle $`\theta `$ to the horizontal. Suppose that when the projectile is at P its velocity vector $`\stackrel{}{v}`$ makes an angle $`\beta `$ with the horizontal. Since the horizontal projection of the motion is uniform, the equality $`v\mathrm{cos}\beta =v_0\mathrm{cos}\theta `$ holds at P. The acceleration in the direction of the chord PQ is g, due to gravity. Thus Eq. (1.2) becomes $$C_\beta =\frac{2v_0^2\mathrm{cos}^2\theta }{g\mathrm{cos}^2\beta }.$$ (2.1) Formula (2.1) allows us to construct the parabolic motion of the projectile from the intrinsic elements developed above. First we note that when $`\beta =0`$ the particle reaches the vertex V of the parabola. The length of the chord PQ in this position is $$C_{(\beta =0)}=2p=\frac{2v_0^2\mathrm{cos}^2\theta }{g}.$$ (2.2) The above relationship determines a length p, which is the distance between the focus and the directrix line of the parabola. This distance is the basis for the construction of the parabola, as we will see in the next section, since the defining property of a parabola is that any point on it is equidistant from the focus and the directrix. 
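Equations (1.2) and (2.1) can be cross-checked in a few lines: since the horizontal velocity is conserved, v cos(beta) = v0 cos(theta), the intrinsic chord 2v²/a evaluated with a = g must reproduce C_beta for every flight direction beta. A quick sketch (the function names and sample numbers are ours and arbitrary):

```python
import math

def chord_intrinsic(v, a):
    """Eq. (1.2): chord of the osculating circle along the acceleration, C = 2 v^2 / a."""
    return 2.0 * v * v / a

def chord_projectile(v0, theta, beta, g=9.81):
    """Eq. (2.1): C_beta = 2 v0^2 cos^2(theta) / (g cos^2(beta))."""
    return 2.0 * v0 ** 2 * math.cos(theta) ** 2 / (g * math.cos(beta) ** 2)

v0, theta, g = 30.0, math.radians(40.0), 9.81
for beta_deg in (0.0, 15.0, -25.0, 40.0):
    beta = math.radians(beta_deg)
    v = v0 * math.cos(theta) / math.cos(beta)  # horizontal projection is uniform
    assert abs(chord_intrinsic(v, g) - chord_projectile(v0, theta, beta, g)) < 1e-9
```

At beta = 0 both expressions reduce to 2p of eq. (2.2), the chord at the vertex.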
## 3 The Parabola We begin the construction of the parabola by tracing the line PH normal to the path along the radius of the osculating circle (but note that H is not the center of the circle), and PG along the horizontal, as shown in Fig. 1b. The axis of the parabola is the vertical line through H and G when $`GH=p`$. To locate the focus we invoke the reflective property of the parabola, according to which any light ray (PQ) parallel to the axis and incident on the parabola is reflected to the focus. The ray PQ strikes the parabola at P making an angle $`\beta `$ with respect to PH, and is reflected to F in such a way that the angle of reflection equals the angle of incidence, or $`\widehat{HPF}=\beta `$. It follows from simple trigonometric relations in the triangles PGF and PGH that $$PH=2PF\mathrm{cos}\beta \quad \text{and}\quad p=PH\mathrm{cos}\beta .$$ Eliminating PH among the above equations we get the following expression for the distance from the particle to the focus of the parabola $$PF=\frac{p}{2\mathrm{cos}^2\beta }.$$ (3.1) From (3.1) we see that the vertex of the parabola is a point on its axis at a distance $`p/2`$ from the focus. The distance (3.1) is related to the length (2.1) of the chord PQ at any point of the path of the particle by $`C_\beta =4PF`$. These elements suffice to construct the parabola (See for the more usual analytical description). Finally, we mention that two important parameters pertaining to a more physical analysis of the projectile motion are the total horizontal distance (or range) and the maximum height attained by the projectile in the case where it returns to the horizontal level from which it was launched. We work out the expression for the range here, and leave it to the reader to figure out how to use our analysis to determine the maximum height. The range R is just twice the horizontal distance PG when P is the launching point of the projectile and G is a point on the same horizontal through P. 
In this case PG forms an angle $`\pi /2-\theta `$ with the corresponding line PH, and we obtain $$R=2p\mathrm{tan}\theta =\frac{v_0^2\mathrm{sin}2\theta }{g}.$$ (3.2)
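A short numerical sketch (illustrative only; the launch values are arbitrary) confirming that the intrinsic elements above reproduce the familiar kinematic results:

```python
import math

g = 9.8
v0, theta = 30.0, math.radians(35.0)   # arbitrary launch speed and angle

# Eq. (2.2): 2p = 2 v0^2 cos^2(theta) / g, with p the focus-directrix distance.
p = v0**2 * math.cos(theta)**2 / g

# Eq. (3.2): range from the intrinsic construction vs. standard kinematics.
assert math.isclose(2.0 * p * math.tan(theta), v0**2 * math.sin(2 * theta) / g)

# Eq. (3.1) and C_beta = 4 PF at an arbitrary point of the path.
beta = math.radians(10.0)
PF = p / (2.0 * math.cos(beta)**2)
C_beta = 2.0 * v0**2 * math.cos(theta)**2 / (g * math.cos(beta)**2)
assert math.isclose(C_beta, 4.0 * PF)
```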
no-problem/9905/nucl-th9905042.html
# Can Nuclear Decay Constant be Modified? ## I Introduction Recently, a series of investigations was carried out on the effect of vacuum fluctuation on nuclear energy levels\[1-4\]. These works explored energy level shifts due to vacuum fluctuations in a finite space. In addition, it was reported that the life-time of the hydrogen $`2P`$-state could change by about 3–5 percent, if the atom was placed between two parallel conducting plates separated from each other by $`1\mu m`$ . There are other reports that spontaneous emissions by a Rydberg atom and those of cyclotron radiation can be inhibited by cavity effects. There are also some attempts to observe changes of decay constants with technetium (see Table 1). This letter will report an experimental result that the life-time of a radioactive nucleus put between two parallel flat plates becomes longer than that of the nucleus in free space. To date, the life-time of a nucleus has been believed to be an invariable quantity. Namely, it has been supposed that even if an electric or magnetic field were applied, nuclear energy level widths would remain unchanged, although shifts or splitting could take place. Notice that the life-time is proportional to the inverse of the energy level width. ## II Experiment In order to explore the shift of nuclear life-time, we carried out a Mössbauer measurement of the width of the first excited state in the $`{}_{}{}^{133}Cs`$ nucleus using the facilities at Leuven University. ### A Gamma-ray source The compound $`Ba^{}TiO_3`$ was manufactured as the gamma-ray source. The manufacturing process is as follows: a mixture of $`Ba^{}Cl_2`$, $`BaCO_3`$ and $`TiO_2`$ in the ratio of 1:3:4 was pulverized and heated at 1200<sup>o</sup>C for 20 hours in an electric oven, followed by annealing at 700–800<sup>o</sup>C for two days. Since it was not sufficiently hard at this stage, we pulverized it again and then heated it once more at 1200<sup>o</sup>C for 40 hours.
X-ray diffraction analysis showed a characteristic $`BaTiO_3`$ compound pattern and confirmed this sample to be a good gamma-ray source. ### B Measurement without plates The source, of 2.4mCi activity, was fixed to the electromechanical spectrometer providing the Doppler velocity. The result obtained with the scintillation counter showed indeed a typical Mössbauer spectrum of a single line. This spectrum was obtained at 4.2$`\mathrm{K}`$ by using $`CsCl`$ powder with 300$`mg/cm^2`$ thickness as the absorber. Its volume density was 4$`g/cm^3`$, namely 300$`mg`$ $`CsCl`$ in a volume $`1cm\times 1cm\times 0.75mm`$. The line broadening is 0.71$`mm`$/$`s`$ and the relative depth of the spectrum is around 1.3 percent. The spectrum corresponds to an 81KeV gamma-ray emitted by the transition from the first excited state $`\frac{5}{2}^+`$ to the ground state $`\frac{7}{2}^+`$ of $`{}_{}{}^{133}Cs`$. The reduced $`\chi `$-square of the Lorentzian fit between channels 6 and 251 was 1.3, and the relative error for the line position is around $`2\%`$. The width at half-height of the Mössbauer spectrum was $`\mathrm{\Gamma }_{exp}=0.796\pm 0.014mm/s`$, equivalently $`\mathrm{\Gamma }_{exp}=(2.149\pm 0.038)\times 10^{-7}`$eV. This value contains a thickness effect of the absorber, possible unresolved hyperfine interactions arising from distortions in the $`BaTiO_3`$ lattice, and a possible experimental broadening due to vibration. If the source is free from the thickness effect, the width of the Mössbauer spectrum is usually given in terms of the natural line width, $`\mathrm{\Gamma }_{nat}`$, as $`\mathrm{\Gamma }_{exp}=(2+0.270t_a)\mathrm{\Gamma }_{nat}+\mathrm{\Delta }\mathrm{\Gamma },`$ (1) where $`\mathrm{\Delta }\mathrm{\Gamma }`$ is the systematic broadening and the thickness effect of the absorber is expressed by the factor $`0.270t_a`$.
Here $`t_a=n\sigma _0f`$, with n being the number of absorbing nuclei per unit area, $`\sigma _0`$ the maximal cross section for resonant nuclear absorption and $`f`$ the recoilless fraction of the absorber. For our absorber $`CsCl`$, we have $`n=1.075\times 10^{21}cm^{-2}`$, $`\sigma _0=1.021\times 10^{-19}cm^2`$ and $`f=0.0145`$. The source $`BaTiO_3`$ might not be free from the thickness effect. For such a case, Eq.(1) should be converted into the form $`\mathrm{\Gamma }_{exp}=\xi _s\mathrm{\Gamma }_{nat}+(1+0.270t_a)\mathrm{\Gamma }_{nat}+\mathrm{\Delta }\mathrm{\Gamma },`$ (2) where $`\xi _s=1+0.270\stackrel{~}{t}_a`$ with $`\stackrel{~}{t}_a=\stackrel{~}{n}\stackrel{~}{\sigma }_0\stackrel{~}{f}`$. The value of $`\xi _s`$ is not known at this moment, but it does not matter for the investigation of the effect of the plates, because the term depending on $`\xi _s`$ drops out of the final expression, as will be seen below. ### C Measurement with plates Let us now explore the case of the absorber placed between two parallel flat plates. We prepared the plates in the following manner. Silicon wafer plates of 3$`\times `$3$`cm^2`$ with 0.58$`mm`$ thickness were coated with gold of about 100$`\AA `$ thickness at room temperature, using an evaporation method developed by the Surface Physics Group at Yonsei University. The roughness of the plate surface was on the order of 0.01$`\mu m`$. The two plates were separated by 0.61$`mm`$ with stainless-steel stick spacers. The accuracy of parallelism was around 2$`\mu m`$. The absorber, $`CsCl`$(90.0$`mg`$), was formed in a very thin mylar square bag of $`1cm\times 1cm\times 0.15mm`$ at a volume density of 6$`g/cm^3`$, equivalently 90$`mg/cm^2`$ thickness, which yields $`n^{}=1.613\times 10^{21}cm^{-2}`$, and was centered between the two plates to prevent it from touching the plate surfaces.
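As an aside, the effective absorber thickness values used in this analysis follow directly from these numbers; a quick check (not part of the paper):

```python
# Effective absorber thickness t_a = n * sigma_0 * f for the two CsCl absorbers.
sigma0 = 1.021e-19          # cm^2, maximal resonant absorption cross section
f = 0.0145                  # recoilless fraction of the absorber

ta_no_plates = 1.075e21 * sigma0 * f   # 300 mg/cm^2 absorber, no plates
ta_plates = 1.613e21 * sigma0 * f      # absorber between the plates
assert abs(ta_no_plates - 1.591) < 0.01
assert abs(ta_plates - 2.388) < 0.001
```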
Thermal effects were very small, i.e., the second-order Doppler shift was negligible, since the temperature variation during the experiment was $`\pm `$0.2K. In this experiment, we took a geometry in which the gamma-ray propagated along the direction perpendicular to the flat plates. We obtained a very distinct and narrow spectrum. The width of the Lorentzian spectrum is $`\mathrm{\Gamma }_{exp}^{}=0.707\pm 0.014mm/s`$, equivalently $`\mathrm{\Gamma }_{exp}^{}=(1.909\pm 0.038)\times 10^{-7}`$eV. In order to extract the effect of the plates set around the absorber, let us rewrite Eq.(2) in the form $`\mathrm{\Gamma }_{exp}^{}=\xi _s\mathrm{\Gamma }_{nat}+(1+0.270t_a^{})\mathrm{\Gamma }^{(a)}+\mathrm{\Delta }\mathrm{\Gamma },`$ (3) where $`t_a^{}=n^{}\sigma _0f=(1.613\times 10^{21}cm^{-2})\times (1.021\times 10^{-19}cm^2)\times 0.0145=2.388`$ and $`\mathrm{\Gamma }^{(a)}`$ is the natural line width modified by the plates. Subtracting Eq.(2) from Eq.(3), we find $`\mathrm{\Gamma }^{(a)}={\displaystyle \frac{\mathrm{\Gamma }_{exp}^{}-\mathrm{\Gamma }_{exp}+(1+0.270t_a)\mathrm{\Gamma }_{nat}}{1+0.270t_a^{}}}`$ (4) The half-life of the first excited state $`\frac{5}{2}^+`$ in $`{}_{}{}^{133}Cs`$ is known to be $`\tau _{1/2}=6.27\pm 0.02`$ ns, from which the natural line width, $`\mathrm{\Gamma }_{nat}`$, can be calculated as $`\mathrm{\Gamma }_{nat}={\displaystyle \frac{\mathrm{}}{\tau _{1/2}}}ln2=(0.728\pm 0.002)\times 10^{-7}\mathrm{eV}.`$ (5) Substituting $`\mathrm{\Gamma }_{exp}^{}=(1.909\pm 0.038)\times 10^{-7}\mathrm{eV}`$, $`\mathrm{\Gamma }_{exp}=(2.149\pm 0.038)\times 10^{-7}\mathrm{eV}`$ and $`\mathrm{\Gamma }_{nat}=(0.728\pm 0.002)\times 10^{-7}\mathrm{eV}`$ into Eq.(4), we obtain $`\mathrm{\Gamma }^{(a)}=(0.487\pm 0.010)\times 10^{-7}\mathrm{eV},`$ (6) from which the half-life modified by the plates can be found as $`\tau _{1/2}^{(a)}={\displaystyle \frac{\mathrm{}}{\mathrm{\Gamma }^{(a)}}}ln2=(9.37\pm 0.19)\mathrm{ns}.`$ (7) This value is larger than $`\tau _{1/2}=6.27\pm 
0.02\mathrm{ns}`$ by 49.4$`\%`$. ## III Discussion Hence, our finding is that the life-time increases by 49.4$`\%`$ when the absorbing nucleus is placed between two parallel flat plates. The change of life-time, $`\mathrm{\Delta }\tau =3.10\pm 0.19`$ ns, arises purely from the effect of the plates, because all experimental conditions are the same except for setting the plates around the absorber. Why does the life-time become longer when a nucleus is placed between two parallel flat plates? It may be understood as a phenomenon caused by self-interaction, in which photons (not necessarily real photons) emitted from the excited nuclei are reabsorbed by these nuclei after being reflected by the plates. Consider an excited nucleus placed between two parallel flat plates. This excited nucleus emits virtual (even real) gamma-rays in arbitrary directions, and parts of them are reflected by the plates. Since the plate is, of course, not perfect, some of the gamma-rays are absorbed or pass through the plate, while the reflected gamma-rays may come back to be reabsorbed by the nucleus. Through such a process, the population of the excited state in a nucleus could be amplified. However, one may worry that $`81KeV`$ is too high an energy for the gamma-ray to be reflected by the plates. Since the wavelength of an $`81KeV`$ gamma-ray is about 0.015$`nm`$, the silicon plate would be almost transparent at such a short wavelength. Indeed, our Monte-Carlo simulation with the program GEANT3 shows that only 0.018 percent of the $`81KeV`$ gamma-rays are reflected at the same energy. Nevertheless, if the process can be repeated many times, the effect must be enhanced. Generally, the number of nuclei decaying during $`\mathrm{\Delta }t`$ is given by $`\mathrm{\Delta }N=-\lambda N\mathrm{\Delta }t`$, from which we obtain $`N=N_0exp(-\lambda t)`$, where $`\lambda `$ is the decay constant and $`N_0`$ is the initial value of $`N`$.
If emitted photons can once return after being reflected by the plates, the equation is modified as $`\mathrm{\Delta }N=-\lambda N\mathrm{\Delta }t+\sigma \lambda N\mathrm{\Delta }t=-(1-\sigma )\lambda N\mathrm{\Delta }t`$, where $`\sigma `$ is the reflection coefficient, i.e. $`\sigma =0.00018`$ for the present case. If such a process is assumed to repeat $`n`$ times, we have $`\mathrm{\Delta }N=-(1-\sigma )^n\lambda N\mathrm{\Delta }t`$ (8) For $`\sigma =0`$, it reduces to $`\mathrm{\Delta }N=-\lambda N\mathrm{\Delta }t`$. Furthermore, $`n=0`$ also reduces Eq.(8) to $`\mathrm{\Delta }N=-\lambda N\mathrm{\Delta }t`$. This is the case without any plate, i.e. photons have no chance to return. Solving Eq.(8), we find $`N=N_0exp(-\stackrel{~}{\lambda }t)`$ (9) where $`\stackrel{~}{\lambda }=(1-\sigma )^n\lambda `$, equivalently $`\mathrm{}\stackrel{~}{\lambda }\equiv \stackrel{~}{\mathrm{\Gamma }}=(1-\sigma )^n\mathrm{\Gamma }`$. If $`n=2200`$, we have $`\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Gamma }=(\mathrm{\Gamma }-\stackrel{~}{\mathrm{\Gamma }})/\mathrm{\Gamma }=0.33`$, which implies $`\mathrm{\Delta }\tau /\tau _{1/2}=0.492`$. Letting $`t_0`$ be the time the photon takes for a round trip between the nucleus and a plate, the number of repetitions of the process during the nuclear half-life is $`n=\tau _{1/2}/t_0=1542`$. Since the program GEANT3 is known to be reliable only for gamma-ray energies larger than $`100KeV`$, the value of $`\sigma `$ may be uncertain for the $`81KeV`$ gamma-ray. For instance, $`\sigma =0.00026`$ with $`n=1540`$ again yields $`\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Gamma }=0.33`$. This analysis is anyway ad hoc. ## IV Conclusion In this paper, we report our discovery that the decay of $`{}_{}{}^{133}Cs`$ in the first excited state was delayed when the nucleus was placed between two parallel flat plates.
This phenomenon is based on the process in which the population of the excited state of the nucleus is increased by reabsorption of the emitted photons into the same nucleus after being reflected by the plates. ###### Acknowledgements. The author thanks Langouche and Milant at Leuven University for their help in the experiment. This work was supported by the Korean Ministry of Education (Grant no. 98-015-D00061) and the Korean Science and Engineering Foundation (Grant no. 976-0200-002-2). Table 1. Changes of Decay Constant $`{}_{43}{}^{99m}Tc`$ Technetium internal conversion $`\tau _{1/2}=6.007h`$, 2.2KeV E3 transition Chemical * 1953 K.Bainbridge et al. $`\frac{\lambda (KTcO_4)-\lambda (Tc_2S_7)}{\lambda (Tc_2S_7)}=(2.70\pm 0.10)\times 10^{-3}`$ * 1980 H.Mazaki et al. $`[\lambda (TcO_4)-\lambda (Tc_2S_7)]/\lambda (Tc_2S_7)=(3.18\pm 0.7)\times 10^{-3}`$ $`[\lambda (TcS_7)-\lambda (Tc_2S_7)]/\lambda (Tc_2S_7)=(5.6\pm 0.7)\times 10^{-4}`$ * 1999 A.Odahara, T.Tsutsumi, Y.Gono, Y.Isozumi, R.Katana, T.Kikegawa, T.Suda, T.Kajino $`[\lambda (compound)-\lambda (metal)]/\lambda (metal)=`$in progress High pressure 10GPa * 1952 K.Bainbridge et al. $`[\lambda (10GPa)-\lambda (0Pa)]/\lambda (0Pa)=(2.3\pm 0.5)\times 10^{-4}`$ * 1972 H.Mazaki et al. $`[\lambda (10GPa)-\lambda (0Pa)]/\lambda (0Pa)=(4.6\pm 2.3)\times 10^{-4}`$ Low temperature * 1958 D.Byers et al. $`[\lambda (4.2K)-\lambda (293K)]/\lambda (293K)=(1.3\pm 0.4)\times 10^{-4}`$ External electric field gradient $`2\times 10^4V/cm`$ * 1970 H.Leunberger et al. $`\frac{\mathrm{\Delta }\lambda }{\lambda }\lesssim 10^{-4}`$ Phase transition : ferroelectric $`\rightarrow `$ paraelectric * 1972 M. Nishi et al. $`\frac{\mathrm{\Delta }\lambda }{\lambda }=(2.6\pm 0.4)\times 10^{-3}`$
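The widths and half-lives quoted in Eqs. (4)-(9) can be reproduced with a short numerical sketch (an illustration for the reader, not part of the original analysis):

```python
import math

hbar = 6.582e-16  # eV*s

# Eq. (4): natural width modified by the plates, from the measured widths.
G_exp, G_exp_p = 2.149e-7, 1.909e-7   # eV, without / with plates
G_nat = 0.728e-7                      # eV, from tau_1/2 = 6.27 ns
ta, ta_p = 1.591, 2.388               # effective absorber thicknesses
G_a = (G_exp_p - G_exp + (1 + 0.270 * ta) * G_nat) / (1 + 0.270 * ta_p)
assert abs(G_a - 0.487e-7) < 0.005e-7

# Eq. (7): the corresponding modified half-life, in ns.
tau_a = hbar * math.log(2) / G_a * 1e9
assert abs(tau_a - 9.37) < 0.1

# Eqs. (8)-(9): width suppression (1 - sigma)^n with sigma = 0.00018, n = 2200.
assert abs((1 - (1 - 0.00018)**2200) - 0.33) < 0.01
```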
no-problem/9905/cond-mat9905036.html
# Infrared Studies of the Onset of Conductivity in Ultra-Thin Pb Films ## Abstract In this paper we report the first experimental measurement of the infrared conductivity of ultra-thin quenched-condensed Pb films. For dc sheet resistances such that $`\omega \tau \ll 1`$ the ac conductance increases with frequency but is in disagreement with the predictions of weak localization. We attribute this behavior to the effects of an inhomogeneous granular structure of these films, which is manifested at the very small probing scale of infrared measurements. Our data are consistent with predictions of two-dimensional percolation theory. preprint: Draft copy — not for distribution Transport measurements in ultra-thin films have been a subject of active interest over many years . These systems, consisting of a thin layer of metal deposited onto a substrate held at LHe temperatures, provide a relatively simple way to study the interplay between localization, electron-electron interactions, and superconductivity in disordered quasi-2D metals. These experiments are in quantitative agreement with predictions of localization theory combined with the effects of diffusion-enhanced electron-electron interactions . The reason why these theories, developed for homogeneous materials, work so well in the case of granular, inhomogeneous films is that the length scale at which electrons lose phase coherence in these measurements is usually much larger than the characteristic size of inhomogeneities (grains, percolation clusters, etc.) of the film. Yet another way to modify the length scale is by simply changing the probing frequency. This gives rise to frequency dependence of the ac conductivity in the region $`\omega \tau \ll 1`$, where the Drude theory predicts a plateau. While the experimental data on the frequency dependence of conductivity are virtually non-existent for ultra-thin quenched-condensed films, they abound for thicker, more granular films, deposited onto a warm substrate .
In ac conductivity the frequency itself defines a characteristic dephasing length scale $`L_\omega =\sqrt{D/\omega }`$, where $`D`$ is a diffusion coefficient. In the frequency range where $`L_\omega `$ is smaller than other dephasing length scales, it enters into all localization and interaction formulas and gives rise to frequency-dependent quantum corrections to the conductivity. However, these quantum effects constitute only one source of frequency dependence of the conductivity. In the region where the material is strongly inhomogeneous on the scale of $`L_\omega `$, the frequency dependence of conductivity is dominated by purely classical effects due to charge dynamics on a network of capacitively and resistively-coupled clusters of grains. The effective way to describe these effects theoretically is provided by the framework of percolation theory . In this theoretical approach the ac conductivity is shown to increase with frequency. Indeed, since capacitive coupling between grains is proportional to frequency, grains become more and more connected as the frequency is increased. It is also known that purely quantum effects such as localization and diffusion-enhanced interaction corrections become profoundly modified on length scales where the material can no longer be treated as homogeneous . The presence of inhomogeneities is known to change the functional dependence of these corrections on the phase coherence length, while leaving the absolute magnitude of the effect virtually unchanged. In this letter we report the first measurement of conductivity at infrared frequencies in ultra-thin films. Films used in this experiment were made in situ by evaporating Pb onto a Si(111) (sets 1 and 2) and glass (set 3) substrates, mounted in an optical cryostat, held at 10 K. Ag tabs, pre-deposited onto the substrate, were used to monitor the dc resistance of the film. 
Infrared transmission measurements from 500 to 5000 cm<sup>-1</sup> (set 1), and 2000 to 8000 cm<sup>-1</sup> (sets 2 and 3) were made using a Bruker 113v spectrometer at the new high-brightness U12IR beamline at BNL’s National Synchrotron Light Source. The substrates were covered with a 5 Å thick layer of Ge to promote two-dimensional thin-film growth, rather than the agglomeration of the deposited Pb in larger grains. Even when using this traditional method, which is known to promote the growth of more homogeneous films, with continuity reported at close to a monolayer of deposited metal, we only saw the beginnings of continuity near $`8-20`$ Å of metal, with consistently thinner continuous films on the glass substrates. Films were evaporated at pressures ranging from the low $`10^{-8}`$ to the mid $`10^{-9}`$ Torr range. The transmission spectra were obtained after successive in-situ Pb depositions. The dc resistances in set 1 on Si range from $`64\mathrm{M}\mathrm{\Omega }/\mathrm{}`$ at 17.4 Å average thickness to $`543\mathrm{\Omega }/\mathrm{}`$ at 70 Å. The 70 Å sample was then annealed twice, first to 80 K, and then to 300 K. As a result its resistance at 10 K became $`166\mathrm{\Omega }/\mathrm{}`$ after the first annealing, and 100 $`\mathrm{\Omega }/\mathrm{}`$ after the second annealing. Films from set 2 (also on Si) are similar to set 1: we have observed $`R_{\mathrm{}}=20\mathrm{M}\mathrm{\Omega }/\mathrm{}`$ at 18 Å and $`R_{\mathrm{}}=1000\mathrm{\Omega }/\mathrm{}`$ at 88 Å. Finally, films from set 3, deposited on a Ge-coated glass substrate, range from 13 to 200 Å, while $`R_{\mathrm{}}`$ changes between $`5.6\mathrm{M}\mathrm{\Omega }`$ and $`22.8\mathrm{\Omega }`$.
The transmission coefficient of a film deposited on the substrate, measured relative to the transmission of the substrate itself, is related to the real and imaginary parts of the sheet conductance of the film as $$\mathrm{T}(\omega )=\frac{1}{[1+Z_0\sigma _{\mathrm{}}^{}(\omega )/(n+1)]^2+(Z_0\sigma _{\mathrm{}}^{\prime \prime }(\omega )/(n+1))^2}.$$ (1) Here $`Z_0=377\mathrm{\Omega }`$ is the impedance of free space, $`n`$ is the index of refraction of the substrate, equal to $`n_{Si}=3.315`$ for silicon and $`n_G=1.44`$ for glass, and $`\sigma _{\mathrm{}}^{}(\omega )`$ (sometimes called G), $`\sigma _{\mathrm{}}^{\prime \prime }(\omega )`$ are the real and imaginary parts of the sheet conductance of the film. Almost everywhere in our experiments $`\sigma _{\mathrm{}}^{}(\omega ),\sigma _{\mathrm{}}^{\prime \prime }(\omega )\ll (n+1)/Z_0`$. In this case the contribution of the imaginary part of the conductance to the transmission coefficient is negligible and Eq. (1) can be approximately replaced by $`\mathrm{T}(\omega )\approx \left[1+\mathrm{Z}_0\sigma _{\mathrm{}}^{}(\omega )/(\mathrm{n}+1)\right]^{-2}`$. Even for our thickest films, where $`\sigma _{\mathrm{}}^{\prime \prime }(\omega )\sim (n+1)/Z_0`$, the error in calculating $`\sigma _{\mathrm{}}^{}(\omega )`$ in this way is less than $`10\%`$ over our frequency range. Throughout the manuscript we will use this approximation to extract the real part of the sheet conductance of the film from its transmission coefficient. Only for our thickest films will we use Eq. (1) to derive parameters of Drude fits. In Fig. 1 we plot the frequency-dependent conductance, extracted from the transmission data for the films from set 3 with the help of the above approximation to Eq. (1). The seven thickest films from this set exhibit a characteristic Drude falloff at high frequencies. For the rest of the films the conductivity systematically increases with frequency throughout our frequency range. The inset in Fig.
1 shows the average ac conductance as well as the dc sheet conductance for set 3 as a function of thickness. Note the curves start to significantly deviate from each other at around 50 Å. In order to fit the conductance of our thickest films with the Drude formula, one needs to use the untruncated Eq. (1) for the transmission coefficient. Inserting the Drude expression for the sheet conductance $`\sigma _{\mathrm{}}(\omega )=\sigma _D/(1-i\omega \tau )`$ directly into Eq. (1) one gets $`\mathrm{T}(\omega )/[1-\mathrm{T}(\omega )]=(1+\omega ^2\tau ^2)/[(\sigma _D/\sigma _0)^2+2\sigma _D/\sigma _0]`$, where $`\sigma _0=(n+1)/Z_0`$. Therefore, transmission data which are consistent with the Drude formula can be fitted with a straight line when $`\mathrm{T}(\omega )/[1-\mathrm{T}(\omega )]`$ is plotted as a function of $`\omega ^2`$. In Fig. 2 we plot the 7 thickest films from set 3 in this way. Knowledge of the average thickness of our films, along with the parameters of the Drude formula, enables us to calculate the plasma frequencies in our films. They are shown in the inset of Fig. 2 as a function of $`1/\sigma _D`$ — the dc sheet resistance in the Drude formula, which itself was extracted from our Drude fit. These results are in excellent agreement with the experimentally determined lead plasma frequency of $`\omega _p=\mathrm{59\hspace{0.17em}400}`$ cm<sup>-1</sup> .
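Before turning to the quantum corrections, the algebra behind Eq. (1) and the Drude linearization used for Fig. 2 can be checked with a short sketch (the parameter values are illustrative, not the measured ones):

```python
import math

Z0 = 377.0  # ohm, impedance of free space

def transmission(sigma, n):
    # Eq. (1): transmission relative to the bare substrate, for a complex
    # sheet conductance sigma (in 1/ohm) on a substrate of index n.
    s0 = (n + 1.0) / Z0
    return 1.0 / ((1.0 + sigma.real / s0)**2 + (sigma.imag / s0)**2)

def drude_sheet(sigma_dc, omega, tau):
    # Drude sheet conductance sigma_D / (1 - i*omega*tau).
    return sigma_dc / (1.0 - 1j * omega * tau)

# Check the linearization T/(1-T) = (1 + (w*tau)^2)/(s^2 + 2s), s = sigma_D/s0.
n, sigma_dc, tau = 3.315, 0.05, 1.0e-14
for omega in (1.0e13, 5.0e13, 2.0e14):
    T = transmission(drude_sheet(sigma_dc, omega, tau), n)
    s = sigma_dc / ((n + 1.0) / Z0)
    lhs = T / (1.0 - T)
    rhs = (1.0 + (omega * tau)**2) / (s**2 + 2.0 * s)
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```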
The magnitude of this reduction depends on the length scale over which an electron maintains its phase coherence. In the absence of an external magnetic field or ac electric field this length is determined by the temperature. It is given by the inelastic scattering length $`l_{in}(T)=\sqrt{D\tau _{in}(T)}`$ for WL, and the thermal coherence length $`L_T=\sqrt{\mathrm{}D/kT}`$ for EEI. Here $`D`$ is the diffusion coefficient of the electron, related to its dc conductivity by the Einstein formula $`\sigma =e^2(dN/d\mu )_{E_F}D`$, and $`\tau _{in}(T)`$ is the temperature dependent inelastic scattering (dephasing) time. In the presence of the ac electric field the diffusive motion of an electron is restricted to a spatial region of size $`L_\omega =\sqrt{D/\omega }`$. If this length scale turns out to be shorter than the corresponding dc length scale, it is $`L_\omega `$ which enters into all WL and EEI formulas. The question of the effective dimensionality of the quasi-2D sample is decided by comparing $`L_\omega `$ to the film thickness $`d`$. The frequency dependent WL corrections to the sheet conductance of the film are given by $`\mathrm{\Delta }\sigma _{\mathrm{}}^{2D}(\omega )=\frac{e^2}{2\pi ^2\mathrm{}}\mathrm{ln}\omega \tau `$ in the 2D limit ($`d<L_\omega `$) , and $`\mathrm{\Delta }\sigma _{\mathrm{}}^{3D}(\omega )=\frac{\sqrt{2}e^2}{4\pi ^2\mathrm{}}d\sqrt{\frac{\omega }{D}}`$ in the 3D limit ($`d>L_\omega `$) . At the lower end of our frequency range, $`\omega =500`$ cm<sup>-1</sup>, for a realistic value of $`D=5`$ cm<sup>2</sup>/s we can estimate $`L_\omega \approx 20\mathrm{\AA }\lesssim d`$. Therefore, for our films one should use the formulas of three-dimensional localization theory. The frequency-dependent sheet conductance in most of our films is consistent with the $`\sqrt{\omega }`$ dependence of 3D WL.
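The order-of-magnitude estimate of $`L_\omega `$ quoted above can be reproduced as follows (a sketch; the conversion assumes the quoted wavenumber corresponds to an angular frequency $`\omega =2\pi c\nu `$):

```python
import math

c = 3.0e10   # cm/s
D = 5.0      # cm^2/s, a realistic diffusion coefficient (as in the text)
nu = 500.0   # cm^-1, lower end of the measured frequency range

omega = 2.0 * math.pi * c * nu     # angular frequency in 1/s
L_omega = math.sqrt(D / omega)     # dephasing length in cm
assert 15e-8 < L_omega < 30e-8     # roughly 20 angstroms (1 A = 1e-8 cm)
```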
However, we believe that in order to explain the frequency dependence of our experimental data one needs to look for yet another mechanism, supplementing that due to weak localization and electron-electron interactions. The problems with ascribing the observed frequency dependence of conductivity solely to WL and EEI effects are: (i) the dependence of the slope of the conductivity vs $`\sqrt{\omega }`$ on the thickness of the film and the dc sheet conductance, which determines the diffusion coefficient $`D`$, does not agree with predictions of 3D localization; (ii) the weak localization theory is only supposed to work in the limit where its corrections are much smaller than the dc conductivity. In our experimental data we do not see any change of behavior as the corrections to the conductivity become bigger than the dc conductivity. In fact the $`\sqrt{\omega }`$ fit works very well and gives roughly the same slope even for films with a dc sheet resistance of $`100`$ k$`\mathrm{\Omega }`$, while the ac sheet resistance is only $`1\mathrm{k}\mathrm{\Omega }`$. Furthermore, 3D-localization theory predicts that the $`\sqrt{\omega }`$ dependence of weak localization theory should be replaced by an $`\omega ^{(d-2)/d}=\omega ^{1/3}`$ dependence at or near the 3D metal-insulator transition . In our experimental data we see no evidence for such a crossover. There exists yet another, purely classical effect that gives rise to the frequency dependence of the conductivity. It is relevant in strongly inhomogeneous, granular films. There is ample experimental evidence that even ultra-thin quenched-condensed films have a microscopic granular structure . In order to describe the ac response of a film with such a granular microstructure one needs to know the geometry and conductivity of individual grains as well as the resistive and capacitive couplings between grains.
The disorder, which is inevitably present in the placement of individual grains, makes this problem even more complicated. However, there exist two very successful approaches to the analytical treatment of such systems. One of them, known as the effective-medium theory (EMT) , can be viewed as a mean-field version of a more refined approach, based on scaling near the percolation transition. The EMT takes into account only the concentrations of metallic grains and the voids between the grains, disregarding any spatial correlations. A more refined approach takes into account the geometrical properties of the mixture of metallic grains and voids. The insulator-to-metal transition in this approach is nothing else but the percolation transition, in which metallic grains first form a macroscopic connected path at a certain critical average thickness $`d_c`$ of the film. The dc conductivity above the transition point scales as $`(d-d_c)^t`$, where $`t=1.3`$ in 2D and $`t=1.9`$ in 3D . Just below the percolation transition the dielectric constant of the medium diverges as $`ϵ(d)\propto (d_c-d)^{-s}`$, where $`s=1.3`$ in 2D and $`s=0.7`$ in 3D. The diverging dielectric constant is manifested as the imaginary part of the ac conductivity $`\sigma (\omega )\propto i\omega (f_c-f)^{-s}`$. In general the complex ac conductivity of the metal-dielectric (void) mixture close to the percolation transition is known to have the following scaling form: $$\sigma (\omega ,d)=|d-d_c|^tF_\pm (i\omega |d-d_c|^{-(t+s)}).$$ (2) Here $`F_+(x)`$ and $`F_{-}(x)`$ are scaling functions above and below the transition point, respectively.
Note that this scaling form correctly reproduces the scaling of the dc conductivity above the transition and the divergence of the dielectric constant below the transition, provided that $`F_+(x)=F_+^{(0)}+F_+^{(1)}x+F_+^{(2)}x^2+\mathrm{}`$, while $`F_{-}(x)=F_{-}^{(1)}x+F_{-}^{(2)}x^2+\mathrm{}`$ One should mention that the predictions of the EMT can also be written in this scaling form with mean-field values of the exponents $`t=s=1`$, and scaling functions $`F_{\pm }(x)=(\sqrt{D^2+4(D-1)x}\pm D)/(2(D-1))`$, where $`D`$ is the spatial dimension. Since the metallic grains in our films form not more than two layers, our data should be interpreted in terms of two-dimensional percolation theory. In two dimensions $`t=s=1.3`$ , and according to Eq. (2) the ac conductivity precisely at the transition point $`d=d_c`$ is given by $`\sigma (\omega ,d_c)=A(i\omega /\omega _0)^{t/(t+s)}=A(i\omega /\omega _0)^{1/2}`$. This prediction is in agreement with our experimental data. In Fig. 3 we attempt the rescaling of our data according to Eq. (2). The critical thickness $`d_c`$ is determined as the point where the ac conductivity divided by $`\sqrt{\omega }`$ is frequency independent. Of course, the experimental uncertainty in our data points does not allow us to determine which exponents $`t`$ and $`s`$ provide the best data collapse. However, as we can see from Fig. 3 our data are consistent with the scaling form of 2D percolation theory. Finally, we use Fig. 3 to estimate basic parameters such as the typical resistance $`R`$ of an individual grain and the typical capacitance $`C`$ between nearest neighboring grains. From the limiting value of $`\sigma (\omega ,d)\propto (|d-d_c|/d_c)^{1.3}`$ at small values of the scaling variable $`x=\omega (d_c/|d-d_c|)^{2.6}`$ for $`d>d_c`$, one estimates the resistance of an individual grain to be of order $`R\sim 1000\mathrm{\Omega }`$.
In the simplest $`RC`$ model, where a fraction of the bonds of the square lattice is occupied by resistors of resistance $`R`$ while the rest of the bonds are capacitors with capacitance $`C`$, the ac conductivity exactly at the percolation threshold is given by $`(A/R)(i\omega RC)^{1/2}`$, where $`A`$ is a constant of order one. Therefore, the slope $`\sigma /\sqrt{\omega }`$ in our system should be of the same order of magnitude as $`\sqrt{RC}/R`$. This gives $`C\approx 2.6\times 10^{-19}`$ F, which is in agreement with a very rough estimate of the capacitance between two islands of $`200\mathrm{\AA }\times 200\mathrm{\AA }\times 30\mathrm{\AA }`$ separated by a vacuum gap of some $`20\mathrm{\AA }`$, giving $`C\approx 2.7\times 10^{-19}`$ F. This order-of-magnitude estimate confirms the importance of taking into account inter-island capacitive coupling when one interprets the ac conductivity measured in our experiment. Indeed, $`R=1000\mathrm{\Omega }`$ and $`C=3\times 10^{-19}`$ F define a characteristic frequency $`1/RC\approx 17000`$ cm<sup>-1</sup>, comparable to our frequency range. In summary, we have measured the conductivity of ultra-thin Pb films in the frequency range 500 to 8000 cm<sup>-1</sup>. The evolution of $`\sigma (\omega )`$ with dc sheet resistance is consistent with classical two-dimensional percolation theory in this range. At lower probing frequencies, where $`L_\omega `$ becomes larger than the scale of inhomogeneities in these films, we expect that the effects of weak localization will become more prevalent. We have benefited from fruitful discussions with P.B. Allen, A.M. Goldman, V.J. Emery, V.N. Muthukumar, Y. Imry, Z. Ovadyahu, and M. Pollak. The work at Brookhaven was supported by the U.S. Department of Energy, Division of Materials Sciences, under contract no. DE-AC02-98CH10886. Support from NSF grants DMR-9875980 (D.N.B.) and DMR-9725037 (B.N.) is acknowledged. Research undertaken at NSLS is supported by the U.S.
DOE, Divisions of Materials and Chemical Sciences.
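As an aside, the order-of-magnitude estimates above, the parallel-plate capacitance of two adjacent islands and the characteristic frequency $`1/RC`$ in wavenumbers, can be reproduced in a few lines. This is illustrative arithmetic with standard constants, not code from the original analysis; the facing area of the islands is taken as $`200\mathrm{\AA }\times 30\mathrm{\AA }`$.

```python
import math

eps0 = 8.854e-12          # vacuum permittivity, F/m
c_cm = 2.998e10           # speed of light, cm/s

# Parallel-plate estimate for two 200 A x 200 A x 30 A islands
# separated by a ~20 A vacuum gap (facing area 200 A x 30 A).
A = (200e-10) * (30e-10)  # m^2
d = 20e-10                # m
C_plate = eps0 * A / d
print(C_plate)            # ~2.7e-19 F, as quoted in the text

# Characteristic frequency 1/RC for R = 1000 Ohm, C = 3e-19 F,
# converted to wavenumbers by dividing the angular rate by 2*pi*c.
R, C = 1000.0, 3e-19
nu = 1.0 / (R * C) / (2 * math.pi * c_cm)
print(nu)                 # ~1.8e4 cm^-1, comparable to the measured range
```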
## 1 Introduction Radiative transfer plays an important role in the evolution of the spectral appearance of galaxies, and at wavelengths longer than the Lyman limit, scattering by dust in the interstellar medium (ISM) is the major factor. The ISM is known to be composed of at least three phases: diffuse clouds, dense molecular clouds, and a low-density inter-cloud medium (ICM). Most likely the ISM has a spectrum of densities and temperatures, as proposed by theoretical models, with correlated multi-scale spatial structure, as evidenced by sky surveys such as IRAS and H I radio surveys. The transfer of radiation becomes complicated in such an inhomogeneous medium; however, in most cases the effective optical depth is less than that of a homogeneous medium with an equal mass of dust, allowing relatively more photons to escape. The simplest model of an inhomogeneous medium is two phases (densities): dense clumps of dust in a less dense ICM. Radiative transfer through such a clumpy plane-parallel medium was investigated by Boissé, and then by Hobson & Scheuer for two- and three-phase media. Their results for a three-phase medium were found to be significantly different from the two-phase case. Recently Witt & Gordon performed Monte Carlo simulations of radiative transfer from a central source in a two-phase clumpy medium within a sphere. All these investigations verify the expectation that the medium becomes more transparent as the degree of clumpiness is increased. ## 2 Monte Carlo Simulations To further explore these effects we have developed a Monte Carlo code for simulating radiative transfer with multiple scattering in an inhomogeneous dusty medium. The geometry and density of the medium can be specified by a continuous function $`\rho (x,y,z)`$, or on a three-dimensional grid. 
For each wavelength, the number of photons absorbed by the dust in each element of the 3D grid is saved, allowing computation of the dust temperatures and the resulting infrared emission spectrum. The grid resolution is limited only by the available computer memory: increasing the number of grid elements does not affect the computation time. This is achieved by employing the Monte Carlo method of imaginary/real scatterings and rejections in selecting the random distances each photon travels between interactions, instead of numerical integration across volume elements of the grid. Our Monte Carlo simulations agree exactly with the radiative transfer results of Witt & Gordon for the situation they considered, that of cubic clumps on a body-centered cubic percolation lattice. However, we find that randomly located spherical clumps create a more natural two-phase medium, and the radiative transfer properties can then be approximated by the Mega-Grains approach of Hobson & Padman. Since the ISM has a wide spectrum of densities controlled by both compressible turbulence and gravitational collapse, the structure of gas and dust clouds is more likely a self-similar hierarchy of denser clumps within clumps. A fractal distribution of matter has exactly such properties. We construct fractal cloud models using a modification of the algorithm described by Elmegreen, then create density maps on a 3D grid, considering everything outside of the fractal cloud to be the low-density ICM. The maximum density contrast is determined by the resolution of the density-map grid. Any subset which is much larger than the resolution limit is also a fractal cloud, by the self-similar construction. Figure 1 shows an array of images, each one being a map of the photons from a central source that are absorbed by dust in a 2D slice through the center of two types of inhomogeneous media. 
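The imaginary/real-scattering (rejection) method for sampling free paths described above, often called Woodcock or delta tracking, can be sketched as follows. This is a minimal illustration of the sampling idea, not the authors' code: tentative interaction points are drawn against an upper bound `kappa_max` on the extinction coefficient and accepted with probability `kappa(x)/kappa_max`, so no ray-grid integration is needed and the cost is independent of the grid resolution.

```python
import math, random

def free_path(pos, direction, kappa, kappa_max, rng):
    """Sample the distance to the next *real* interaction by rejection.

    kappa(x) returns the local extinction coefficient at position x and
    kappa_max bounds it from above everywhere; "imaginary" interactions
    are rejected, which reproduces the correct inhomogeneous free-path
    distribution.
    """
    s = 0.0
    while True:
        # tentative ("imaginary") interaction point, drawn against kappa_max
        s += -math.log(1.0 - rng.random()) / kappa_max
        x = [p + s * d for p, d in zip(pos, direction)]
        # accept as a real interaction with probability kappa(x)/kappa_max
        if rng.random() < kappa(x) / kappa_max:
            return s

# sanity check: in a homogeneous medium the mean free path must equal 1/kappa
rng = random.Random(1)
paths = [free_path((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), lambda x: 2.0, 2.0, rng)
         for _ in range(20000)]
print(sum(paths) / len(paths))   # close to 0.5
```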
Blue coloring indicates the minimum absorption, red indicates more absorption, yellow is the maximum, and the logarithmic scaling over five orders of magnitude is the same for each image. Simulations of the left column are in a two-phase clumpy medium, in which the clumps are spherical, $`30`$ times denser than the ICM, and have a volume filling factor $`f_c=0.1`$. Simulations of the right column are in a medium with a fractal cloud of dimension $`D=2.7`$, filling factor $`f_c=0.1`$, and densities having an exponential distribution (tending toward lognormal) with an average that is $`30`$ times the ICM density and a maximum that is $`260`$ times the ICM density. In all cases the dust is characterized by a scattering albedo $`\omega =0.6`$ and an angular scattering phase function parameter $`g=\langle \mathrm{cos}\theta \rangle =0.6`$, which are typical values for UV photons scattering off dust grains. Each row of images is at the same homogeneous optical depth $`\tau _{hom}`$, which is the radial optical depth of absorption and scattering that would result if the dust were distributed uniformly in the sphere instead of in clumps. The upper row has $`\tau _{hom}=1`$ and the lower row is for $`\tau _{hom}=10`$. Increasing $`\tau _{hom}`$ can be viewed as either increasing the dust abundance or decreasing the wavelength of the photons, resulting in more absorption. As $`\tau _{hom}`$ increases the clumps become opaque, creating the apparent shadows behind the clumps. However, scattering by the dust causes photons to go behind the clumps and become absorbed, thus diminishing the effect of what would otherwise be completely dark shadows in the case of no scattering. As the clumps become opaque, absorption occurs increasingly at the clump surfaces. By observing the fraction of photons that escape in the case of a central source, an effective optical depth is defined as $`\tau _{eff}=-\mathrm{ln}(L_{escape}/L_{emit})`$. 
Usually $`\tau _{eff}<\tau _{hom}`$ in a clumpy medium, and the functional relationship is nonlinear. This can be seen in Fig.2(a), where the effective optical depths of 20 randomly created fractal dust clouds (with $`D=2.5`$ and $`f_c=0.065`$) are plotted versus the equivalent homogeneous optical depth. The upper graph is for no scattering and the lower graph is with scattering. Also plotted for comparison are the homogeneous optical depth and the effective optical depth of a related two-phase clumpy medium (solid curve) having the same $`f_c`$ and clump densities equal to the average density of the fractal clouds. Most of the fractal clouds have $`\tau _{eff}`$ that varies as a function of $`\tau _{hom}`$ similarly to the two-phase clumpy medium. However, many of the fractal clouds modeled have $`\tau _{eff}`$ that exceeds $`\tau _{hom}`$ for low values of $`\tau _{hom}`$. This is because the position of the source relative to the fractal cloud is more important than in the case of the two-phase clumpy medium. The spherical clumps are small and their distribution is uniformly random, so the relative position of the source matters little, whereas the fractal cloud is a more connected set: even though $`f_c`$ is small, if the source is near any part of the cloud it is near much of the cloud, and thus its emission is more likely to be absorbed or scattered. ## 3 Analytic Approximations Since Monte Carlo simulations can require a large amount of computer time, it is useful to have analytical approximations for the basic results of radiative transfer: the fraction of photons escaping and the fraction of photons absorbed in each phase of the medium. In the case of a spherical homogeneous medium with uniformly distributed emitters, the escape probability formula of Osterbrock is an exact solution when there is absorption only. $$P_0(\tau )=\frac{3}{4\tau }\left[1-\frac{1}{2\tau ^2}+\left(\frac{1}{\tau }+\frac{1}{2\tau ^2}\right)e^{-2\tau }\right]$$ (1) Lucy et al. 
suggested a formula that extends any absorption-only escape probability to approximately include the effects of scattering, $$P(\tau ,\omega )=\frac{P_0(\tau )}{1-\omega [1-P_0(\tau )]}$$ (2) where $`\omega >0`$ is the scattering albedo, the optical depth $`\tau `$ includes both absorption and scattering, and $`P_0`$ is any escape probability for $`\omega =0`$. Lucy’s formula is based on the assumption that the scattered photons mimic the photons emitted uniformly by the sources, so that the $`\omega =0`$ escape probability formula applies recursively. The combination of Eqs.(1) and (2), which we call the Osterbrock-Lucy formulae, was tested extensively against Monte Carlo radiative transfer simulations and was found to be a reasonable approximation of the fraction of photons escaping from a homogeneous medium. However, since the angular distribution of the scattered photons is ignored in Lucy’s approximation, the formula is exact for only a single value of the angular scattering parameter $`g=\langle \mathrm{cos}\theta \rangle `$, where $`\theta `$ is the deflection angle, and this value also depends on $`(\tau ,\omega )`$. Coincidentally, the $`g`$ dependence of the escape probability’s validity follows the scattering properties of silicate and graphite dust: at low optical depths the escape probability agrees with the isotropic scattering ($`g=0`$) case, and as the optical depth increases the agreement shifts toward more forward-scattering cases ($`g\to 1`$). For the case of a two-phase clumpy medium, Hobson & Padman provide formulae approximating the effective radiative transfer properties by assuming spherical clumps and treating them as “Mega-Grains”. 
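Equations (1) and (2) are straightforward to evaluate; the sketch below (our illustration, not code from the paper) checks the expected limits: $`P_0\to 1`$ as $`\tau \to 0`$, $`P_0\to 3/(4\tau )`$ for large $`\tau `$, and $`\omega =0`$ recovering Eq. (1), while scattering ($`\omega >0`$) raises the escaping fraction.

```python
import math

def P0(tau):
    """Osterbrock escape probability, Eq. (1): uniformly distributed
    emitters in a homogeneous sphere, absorption only."""
    return (3.0 / (4.0 * tau)) * (1.0 - 1.0 / (2.0 * tau**2)
            + (1.0 / tau + 1.0 / (2.0 * tau**2)) * math.exp(-2.0 * tau))

def P(tau, omega):
    """Lucy's extension, Eq. (2): approximate scattering correction with
    single-scattering albedo omega (tau includes absorption + scattering)."""
    p0 = P0(tau)
    return p0 / (1.0 - omega * (1.0 - p0))

print(P0(1e-3))                 # ~0.999: optically thin, nearly all escape
print(50.0 * P0(50.0))          # ~0.75: optically thick limit P0 -> 3/(4 tau)
print(P(5.0, 0.0) == P0(5.0))   # True: omega = 0 recovers Eq. (1)
```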
The upper graph in Fig.2(b) compares the effective radial optical depth of a spherical clumpy medium obtained from the Mega-Grains approximation (dashed curve) to $`\tau _{eff}`$ derived from Monte Carlo simulations (diamonds), over the full range of clump filling factor $`f_c`$, in the case of $`\tau _{hom}=5.0`$ and absorption only ($`\omega =0`$). Each clump has a radius which is 5% of the radius of the spherical region, and a density which is $`100`$ times the ICM density. The graph shows that the Mega-Grains approximation is valid for $`f_c<0.25`$, but overpredicts $`\tau _{eff}`$ at larger filling factors. By introducing another dependence on $`f_c`$, the Mega-Grains approximation can be extended to the full range of $`0\le f_c\le 1`$ (solid curve), fitting the Monte Carlo results better and reproducing the correct asymptotic value of $`\tau _{eff}=\tau _{hom}`$ in the $`f_c\to 1`$ limit. The Mega-Grains (MG) approximation gives the effective optical depth $`\tau _{eff}`$ and effective albedo $`\omega _{eff}`$ of the clumpy medium. Using these parameters, the escaping fraction of photons for the case of a uniform source can be computed by substituting $`\tau _{eff}`$ and $`\omega _{eff}`$ directly into the Osterbrock-Lucy escape probability formulae, Eqs.(1) and (2). The lower graph in Fig.2(b) compares this analytically computed escaping fraction to the Monte Carlo simulations of uniformly distributed emitters, including scattering. The standard MG approximation agrees with the Monte Carlo results only for $`f_c<0.15`$. Introducing another dependence on $`f_c`$ in the MG formulae (scaling the clump radius by $`1-f_c`$) and using the escape probability formulae to get the effective albedo of each clump improves the agreement with the Monte Carlo results over the full range of $`f_c`$. 
The escaping fractions determined by the combination of the extended MG and escape probability approximations are found to be in reasonable agreement with Monte Carlo results for $`0<\tau _{hom}\le 40`$ and $`0\le f_c\le 1`$. These approximations can be applied to the case of a uniformly distributed source in a disk by using an effective radius: $`R_{eff}=3Rh/(R+2h)`$, where $`R`$ is the actual radius and $`h`$ is the half-thickness of the disk. We have also formulated equations for estimating what fraction of photons get absorbed in clumps and what fraction in the ICM, for the cases of a central and a uniform source, and find the equations to be reasonable approximations of the Monte Carlo results. These absorbed fractions are necessary for computing the dust temperatures and the resulting infrared emission. A test of the approximations that remains to be performed is to check how well the dust temperatures thus calculated match the distribution of dust temperatures from the corresponding Monte Carlo simulation. ## 4 Summary The degree of clumpiness of a medium is as important as the total dust mass and scattering albedo for the transfer of radiation. For example, Fig.3 compares the effective optical depth (absorption and scattering) and effective albedo of spherical regions of dust with different degrees of clumpiness, computed using the MG approximation, as a function of photon wavelength. The dust is composed of equal amounts of graphite and silicates by mass, having the $`a^{-3.5}`$ grain-size distribution for $`0.001\mu m<a<0.25\mu m`$. The solid curves are the homogeneous case ($`f_c=0`$) of no clumps, with a dust mass density of $`1.6\times 10^{-23}\mathrm{g}/\mathrm{cm}^3`$. The dotted curves are for the case $`f_c=0.05`$ with $`\rho _c/\rho _{icm}=100`$, and the dashed curves are for the extreme case of $`f_c=0.01`$ with $`\rho _c/\rho _{icm}=10^4`$. Each clump has a radius of $`0.01`$ pc, and the radius of the spherical region is $`0.6`$ pc. All cases have the same total mass of dust. 
From the graphs it is evident that the effective radiative transfer properties of the dusty medium can be radically affected by the degree of clumpiness. Simulations of radiative transfer in a fractal distribution of dust indicate that the medium becomes more transparent as the fractal dimension decreases. Since the filling factor follows the fractal dimension, this behavior is similar to that of a two-phase clumpy medium, but there can be significant qualitative and quantitative differences, as seen in Figs.1 and 2(a). However, it may be possible to use the Mega-Grains approach to approximate the effective optical depth of a random fractal cloud when the sources are not correlated with the cloud, as shown by the solid curves in Fig.2(a). Those smooth solid curves are actually created by the MG approximation for the case of spherical clumps with radii $`R_c=0.05`$ of the medium radius (for which MG agrees with Monte Carlo). The parameters $`f_c=0.065`$ and $`\rho _c/\rho _{icm}=16.2`$ used in the MG approximation match the filling factor and average density of the fractal cloud, but the clump radius is a free parameter, and the value $`R_c=0.05`$ happens to work in this case. However, for the case shown in Fig.1, $`R_c=0.05`$ does not work, as seen by the different values of $`\tau _{eff}`$ resulting when $`\tau _{hom}`$ increases. So either an effective $`R_c`$ needs to be determined as a function of cloud fractal dimension, or some other generalization of the MG approximation is needed.
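As a footnote to the disk geometry mentioned in Sec. 3, the effective radius $`R_{eff}=3Rh/(R+2h)`$ behaves sensibly in its limits; a minimal check (our illustration):

```python
def R_eff(R, h):
    """Effective radius for a uniformly emitting disk of radius R and
    half-thickness h, for use in the spherical escape-probability formulae."""
    return 3.0 * R * h / (R + 2.0 * h)

print(R_eff(1.0, 1.0))     # 1.0: h = R reduces to the spherical case
print(R_eff(100.0, 0.1))   # ~0.3: thin-disk limit R_eff -> 3h
```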
# Optimal compression for mixed signal states ## Abstract We consider the problem of the optimal compression rate for a source producing mixed signal states, within the visible scheme (where Alice, who is to compress the signal, can know the identities of the produced states). We show that a simple strategy based on replacing the signal states with their extensions gives optimal compression. As a result we obtain a considerable simplification of the formula for the optimal compression rate within the visible scheme. Given a quantum source, what is the minimal amount of quantum resources needed for faithful transmission of the states produced by the source? This quantum analogue of the problem of data compression was stated for the first time by Schumacher. It has been solved for stationary memoryless sources generating pure states. In general, a stationary memoryless source is described by an ensemble $`\{p_i,\varrho _i\}`$. This means that the source emits a system in state $`\varrho _i`$ with probability $`p_i`$ (one can of course generalize this by considering a probability measure on the set of states). The lack of memory implies that the state of a sequence of systems emitted by the source is the product state $`\varrho _{i_1}\otimes \cdots \otimes \varrho _{i_N}`$, and the probability of emission of such a string is the product of the corresponding probabilities $`p_{i_1}\cdots p_{i_N}`$. Henceforth we will deal with sources of this kind. It appears that for pure signal states $`\varrho _i`$, the minimal number of qubits allowing faithful recovery of the input states is equal to the von Neumann entropy of the density matrix of the ensemble, $`\varrho =\sum _ip_i\varrho _i`$. The von Neumann entropy then has a clear interpretation within purely quantum communication theory. In the case of impure $`\varrho _i`$ the problem is unsolved. Apart from some particular cases, we do not know much about the optimal compression of ensembles of mixed states. 
The scheme that succeeded in the pure-state case can also be applied in the present case, compressing the signal down to the von Neumann entropy. However, in many cases this is known not to be the optimal compression. A good candidate for the minimal number of qubits in this general case could be the Levitin-Holevo function of the ensemble, $`I_{LH}=S(\sum _ip_i\varrho _i)-\sum _ip_iS(\varrho _i)`$, where $`S`$ is the von Neumann entropy. Indeed, in this quantity the loss of information caused by the impurity of the signal states is taken into account by subtracting their mean entropy. As a matter of fact, it has been proven that $`I_{LH}`$ is a lower bound on the needed number of qubits. However, the very difficult problem of whether this rate of compression can be reached remains unsolved. An additional motivation to consider this problem is that one would like to know whether $`I_{LH}`$ has an interpretation in terms of qubits (so far it has an interpretation in terms of the capability of sending classical bits via quantum states). In general, the scheme of compression is as follows. Alice (who is to compress the signal) waits for a long sequence of the systems generated by the source. Then she performs some operation on the sequence. Her aim is to decrease the support of the total density matrix of the ensemble of sequences (the smallest Hilbert space the matrix lives on), as the number of needed qubits is equal to the logarithm of the dimension of the support. However, she must do it in a clever way, in order not to disturb the signal too much, so that Bob will be able to recover it with high fidelity. There are two basic schemes of compression. In the first one, called blind, Alice does not know the identities of the produced states. Then all she can do is apply some quantum operation, independent of the input states. If, instead, she knows the identities of the states (visible scheme), her operation can be state-dependent, so that she has more possibilities. 
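To make the Levitin-Holevo function concrete, the sketch below (our illustration, not part of the paper) evaluates $`I_{LH}`$ for a simple qubit ensemble; for pure signal states the mean-entropy term vanishes and $`I_{LH}`$ reduces to the entropy of the average state.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho log2 rho, computed from the eigenvalues
    of the density matrix (zero eigenvalues contribute nothing)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def I_LH(probs, states):
    """Levitin-Holevo function S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states))

# Example: equiprobable pure states |0> and |+>; the signal states have
# zero entropy, so I_LH is just the entropy of the average state.
zero = np.array([[1.0, 0.0], [0.0, 0.0]])
plus = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])
print(I_LH([0.5, 0.5], [zero, plus]))   # ~0.60 bits
```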
Of course, Bob does not know the identities of the signal states in either case, so his operation is always state-independent. Thus, in general, the compression could be better within the visible scheme. The optimal compression rate for pure signal states appears to be independent of the kind of scheme applied. For the mixed-state case, the answer is not known. In both schemes, to obtain the optimal compression rate one must optimize over Alice’s and Bob’s actions subject to the condition of high transmission fidelity. All of this must be performed in the limit of long sequences, so the task is exceedingly difficult. In one of the attempts to solve the problem, a simple visible protocol of compression was proposed. Namely, Alice can replace the signal states with their purifications, then apply the Schumacher (or Jozsa-Schumacher) compression protocol to the resulting ensemble of pure states. More generally, Alice can replace the signal states with extensions (by an extension of a state $`\sigma `$ we mean another state that, partially traced, reproduces $`\sigma `$) that are not necessarily pure. This replacement aims at decreasing the von Neumann entropy $`S`$ of the density matrix of the initial ensemble to some lower value $`S^{}`$. If this is possible, then the subsequent Jozsa-Schumacher (JS) protocol will compress the signal at a rate of $`S^{}`$ qubits/message, hence with better performance than in the case of direct application of the protocol, which results in $`S`$ qubits/message. In this paper we consider the visible scheme. We prove that the above very simple strategy provides the optimal compression rate. More precisely, to compress the signal optimally, Alice should replace the sequences of signal states with extensions chosen in such a way that the von Neumann entropy of the resulting ensemble is minimal. As a result, we obtain a considerable simplification of the formula for the optimal compression rate. 
The very tedious task of optimization is now reduced to the minimization of the von Neumann entropy of the ensemble of extensions. Let us now introduce some notation (the same as in Ref. ). Suppose that the source generates a system in state $`\varrho _i^0`$, acting on a Hilbert space $`\mathcal{H}_𝒬`$, with probability $`p_i^0`$. The produced ensemble $`\mathcal{E}_0=\{p_i^0,\varrho _i^0\}`$ has the density matrix $`\varrho ^0=\sum _ip_i^0\varrho _i^0`$. Denote the product $`\varrho _{i_1}^0\otimes \cdots \otimes \varrho _{i_N}^0`$ by $`\varrho _i`$, where $`i`$ now stands for a multi-index (to avoid complicated notation we do not write the index $`N`$ explicitly unless necessary). The corresponding ensemble and state are denoted by $`\mathcal{E}`$ and $`\varrho `$, respectively. Now Alice performs a coding operation over the initial ensemble $`\mathcal{E}`$, ascribing to any input state $`\varrho _i`$ a new state $`\stackrel{~}{\varrho }_i`$. The map $`\varrho _i\to \stackrel{~}{\varrho }_i=\mathrm{\Lambda }_A(\varrho _i)`$ is supposed to be a quantum operation, i.e., a linear, completely positive, trace-preserving map, for the blind scheme, or an arbitrary map for the visible one. In the latter case we allow Alice to know which states are generated by the source, so that she can prepare each of the states $`\stackrel{~}{\varrho }_i`$ separately for each $`i`$. The new states $`\stackrel{~}{\varrho }_i`$ represent the compressed signal, which is then flipped onto the suitable number of qubits determined by the dimension of the subspace occupied by the total state $`\stackrel{~}{\varrho }`$ of the ensemble, and sent through the noiseless channel to Bob. Now the states $`\stackrel{~}{\varrho }_i`$ are to be decompressed to become close to the initial states $`\varrho _i`$. For this purpose Bob performs some established quantum operation $`\mathrm{\Lambda }_B`$, which of course does not depend on $`i`$. 
Then the resulting states are $`\varrho _i^{}=\mathrm{\Lambda }_B(\stackrel{~}{\varrho }_i)`$ and the total scheme is the following $`\varrho _i\underset{\mathrm{\Lambda }_A}{\overset{\mathrm{compression}}{\longrightarrow }}\stackrel{~}{\varrho }_i\underset{I}{\overset{\mathrm{noiseless}\mathrm{channel}}{\longrightarrow }}\stackrel{~}{\varrho }_i`$ (1) $`\underset{\mathrm{\Lambda }_B}{\overset{\mathrm{decompression}}{\longrightarrow }}\varrho _i^{},`$ (2) where $`\varrho _i`$ and $`\varrho _i^{}`$ act on the Hilbert space $`\mathcal{H}_Q^{\otimes N}`$ while $`\stackrel{~}{\varrho }_i`$ acts on the channel Hilbert space $`\mathcal{H}_𝒞`$. Now we should determine the measure of the quality of the transmission $`\varrho _i\to \varrho _i^{}`$. As one knows, there exist many different metrics on the set of mixed states. The most common ones are the Hilbert-Schmidt distance $`D_{HS}^2(\varrho ,\sigma )=\mathrm{Tr}(\varrho -\sigma )^2`$, the one induced by the trace norm $`\|\varrho -\sigma \|=\mathrm{Tr}|\varrho -\sigma |`$, and the Bures metric $`D_B=2-2\sqrt{F(\varrho ,\sigma )}`$, where the fidelity $`F`$ is given by $$F(\varrho ,\sigma )=\left[\mathrm{Tr}\left(\sqrt{\sqrt{\varrho }\sigma \sqrt{\varrho }}\right)\right]^2.$$ (3) Instead of the Bures metric, one usually uses the fidelity directly. The latter has an appealing property: if one of the states (say $`\varrho `$) is pure then it is of the form $$F(\sigma ,|\psi \rangle \langle \psi |)=\langle \psi |\sigma |\psi \rangle .$$ (4) In this case the fidelity has a clear interpretation as the probability that the state $`\sigma `$ passes the test of being $`\psi `$. The fidelity was used in the problem of compression of quantum information, and it is now an important tool in quantum information theory. However, some results were obtained using other measures of transmission quality. To the author’s knowledge, there is no special discrepancy among the results obtained via different metrics. In fact, it is not yet clear whether and to what extent different metrics could lead to non-equivalent conclusions. 
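The fidelity of Eq. (3) and its pure-state reduction, Eq. (4), can be checked numerically; the following sketch (ours, not from the paper) uses an eigendecomposition for the Hermitian matrix square root.

```python
import numpy as np

def sqrtm_psd(rho):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity, Eq. (3): [Tr sqrt(sqrt(rho) sigma sqrt(rho))]^2."""
    s = sqrtm_psd(rho)
    return float(np.real(np.trace(sqrtm_psd(s @ sigma @ s))) ** 2)

# pure-state check, Eq. (4): F(sigma, |psi><psi|) = <psi|sigma|psi>
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
proj = np.outer(psi, psi)
sigma = np.diag([0.7, 0.3])
print(fidelity(proj, sigma), psi.conj() @ sigma @ psi)   # both 0.5
```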
In this paper we will use the fidelity, partly because one of its properties is especially useful in the context of the problem of extensions we are dealing with. Namely, the fidelity can be expressed in the following way $$F(\sigma ,\varrho )=\underset{\psi }{\mathrm{max}}|\langle \psi |\varphi \rangle |^2,$$ (5) where $`\varphi `$ is an arbitrary purification of $`\sigma `$ and the maximum runs over all possible purifications $`\psi `$ of $`\varrho `$. As we will see further on, this property allows us to prove an important lemma. Consequently, the average fidelity $`\overline{F}(\mathcal{E},\mathcal{E}^{})\equiv \sum _ip_iF(\varrho _i,\varrho _i^{})`$ will indicate the quality of the process of recovery of quantum information by Bob after compression by Alice. Now, for a fixed source determined by the ensemble $`\mathcal{E}_0`$, one considers the sequence of compression-decompression pairs $`(\mathrm{\Lambda }_A,\mathrm{\Lambda }_B)`$ with the property that $$\underset{N\to \infty }{lim}\overline{F}(\mathcal{E},\mathcal{E}^{})=1$$ (6) (recall that the pair is implicitly indexed by $`N`$). Such sequences will be called protocols. Define now the quantity $`R_P`$ characterizing the asymptotic degree of compression of the initial quantum data under a given protocol $`P`$ by $$R_P=\underset{N\to \infty }{lim}\frac{1}{N}\mathrm{log}\mathrm{dim}\stackrel{~}{\varrho }$$ (7) Here $`\mathrm{dim}\stackrel{~}{\varrho }`$ denotes the dimension of the support of the state $`\stackrel{~}{\varrho }`$, given by the number of its nonzero eigenvalues. The quantity $`\mathrm{log}\mathrm{dim}\stackrel{~}{\varrho }`$ has the interpretation of the number of qubits needed to carry the state $`\stackrel{~}{\varrho }`$ undisturbed ($`\stackrel{~}{\varrho }`$ is to be transferred through a noiseless channel). Actually, only one of the signal sequences $`\stackrel{~}{\varrho }_i`$ is transmitted at a time. 
However, it is easy to see that $`\mathrm{dim}\stackrel{~}{\varrho }`$ is the minimal dimension that guarantees transmission of any of the states $`\stackrel{~}{\varrho }_i`$ without disturbance. Now, given a class $`𝒫`$ of protocols, we define the quantity $$I_𝒫=\underset{P\in 𝒫}{inf}R_P$$ (8) which is equal to the least number of qubits per system needed for asymptotically faithful transmission of the initial signal states from Alice to Bob within the considered class of protocols (to be strict, one needs $`I_𝒫+\delta `$ qubits per message, where $`\delta `$ can be chosen arbitrarily small). Now, if $`𝒫`$ is the set of visible protocols, then $`I_𝒫`$ is called the effective information carried by the ensemble and is denoted by $`I_{eff}`$ (the optimal rate within the class of blind protocols is called the passive information). As one can see, the definition of the effective information, even though physically natural, is very complicated from a mathematical point of view. One must optimize the limit (7) over Alice’s and Bob’s actions while keeping condition (6) satisfied. Moreover, the definition does not give any intuition on how the structure of an ensemble could be related to its effective information content. Let us now try to reduce the problem in order to obtain a more transparent form of the effective information. Consider the most general compression-decompression protocol. Any operation of Bob’s, as a completely positive trace-preserving map, amounts to (i) adding an ancilla in some pure state, (ii) performing a unitary transformation on the total system, and (iii) performing a partial trace. Now, the first two stages can be incorporated into Alice’s action. Then decompression will amount only to performing a partial trace. Of course, the new protocol will give the same rate of compression as the previous one, because neither stage changes the dimension of the support of a state. 
Thus we can consider the optimal protocol in the following form $$\varrho _i\underset{\mathrm{\Lambda }_A}{\overset{\mathrm{compression}}{\longrightarrow }}{\varrho ^{}}_i^{ext}\underset{\mathrm{\Lambda }_B=\mathrm{Tr}_{anc}}{\overset{\mathrm{decompression}}{\longrightarrow }}\varrho _i^{}$$ (9) where $`{\varrho ^{}}_i^{ext}`$ are extensions of the states $`\varrho _i^{}`$, acting on the Hilbert space $`\mathcal{H}_Q^{\otimes N}\otimes \mathcal{H}_{anc}`$ (in the following, extensions of a state $`\sigma `$ will be denoted by $`\sigma ^{ext}`$). Then the optimal compression rate is given by $$I_{eff}=\underset{N\to \infty }{lim}\frac{1}{N}\mathrm{log}\mathrm{dim}{\varrho ^{}}^{ext}.$$ (10) We will need the following lemma. Lemma. Let $`\varrho ,\varrho ^{}`$ act on the space $`\mathcal{H}_Q^{\otimes N}`$ and let $`{\varrho ^{}}^{ext}`$, acting on $`\mathcal{H}_Q^{\otimes N}\otimes \mathcal{H}_{anc}`$, be an extension of $`\varrho ^{}`$. Then there exists a state $`\varrho ^{ext}`$ acting on $`\mathcal{H}_Q^{\otimes N}\otimes \mathcal{H}_{anc}`$ such that (a) $`\varrho ^{ext}`$ is an extension of $`\varrho `$, and (b) $`F(\varrho ^{ext},{\varrho ^{}}^{ext})=F(\varrho ,\varrho ^{})`$. Proof. Let $`\mathcal{H}_{ext}=\mathcal{H}_Q^{\otimes N}\otimes \mathcal{H}_{anc}`$ and let $`\varphi ^{}\in \mathcal{H}_{ext}\otimes \mathcal{H}_{pur}`$ be a purification of $`{\varrho ^{}}^{ext}`$. Then it is also a purification of the state $`\varrho ^{}`$. From formula (5) we obtain that there exists some purification $`\varphi `$ of $`\varrho `$ such that $`F(\varrho ,\varrho ^{})=|\langle \varphi ^{}|\varphi \rangle |^2`$. Now we can take $`\varrho ^{ext}=\mathrm{Tr}_{\mathcal{H}_{pur}}|\varphi \rangle \langle \varphi |`$. Using formula (5) once more, we get $`F(\varrho ^{ext},{\varrho ^{}}^{ext})\ge |\langle \varphi ^{}|\varphi \rangle |^2=F(\varrho ,\varrho ^{})`$. Since the fidelity does not decrease under partial trace (this can be easily seen from (5)), we obtain $`F(\varrho ^{ext},{\varrho ^{}}^{ext})=F(\varrho ,\varrho ^{})`$. Let us now formulate the main result of the paper. Theorem. 
Let $`\mathcal{E}^{ext}=\{p_i,\varrho _i^{ext}\}`$, with $`\varrho _i^{ext}`$ being extensions of the signal states $`\varrho _i`$; let $`\varrho ^{ext}`$ be the total density matrix of the ensemble $`\mathcal{E}^{ext}`$. Then the optimal compression rate within the visible scheme is given by $$I_{eff}=\underset{N\to \infty }{lim}\frac{1}{N}\mathrm{inf}S(\varrho ^{ext})$$ (11) where the infimum runs over the set of ensembles $`\mathcal{E}^{ext}`$. Remarks. (1) Since one can choose a trivial $`\mathcal{H}_{anc}`$ ($`\mathcal{H}_{anc}=𝒞`$), $`\varrho _i`$ is itself an extension of $`\varrho _i`$, too. (2) One can show that the limit on the right-hand side of equality (11) exists. Indeed, it follows from the fact that if a sequence $`\{a_n\}`$ satisfies $`a_n\le kn`$ for some $`k`$ and $`a_n+a_m\ge a_{n+m}`$ for any $`m,n`$, then $`a_n/n`$ is convergent. Proof. To prove that formula (11) is valid, we must first provide a protocol that achieves this rate, and then show that the latter is equal to the optimal rate given by (10). To this end, consider the following concatenated protocol (cf. Ref.); call it the extension protocol and denote it by $`EP`$. Alice replaces the signal state with the state $`\varrho _i^{ext}`$, and then applies the JS protocol. The number of needed qubits per system is now equal to the entropy of the density matrix of the new ensemble. As the JS protocol needs no decompression, Bob only has to perform a partial trace to come back to the original space $`\mathcal{H}_Q^{\otimes N}`$. 
Then, the overall scheme is the following $`\varrho _{i_1}\otimes \cdots \otimes \varrho _{i_k}\stackrel{\text{Alice's action}}{\longrightarrow }\varrho _{i_1}^{ext}\otimes \cdots \otimes \varrho _{i_k}^{ext}`$ (12) $`\stackrel{\text{JS compression}}{\longrightarrow }{\varrho ^{}}_{i_1\dots i_k}^{ext}\stackrel{\text{Bob's partial trace}}{\longrightarrow }\varrho _{i_1\dots i_k}^{}`$ (13) Here the $`i_j`$ are multi-indices of length $`N`$; $`\varrho _{i_1}\otimes \cdots \otimes \varrho _{i_k}`$ and $`\varrho _{i_1\dots i_k}^{}`$ act on the Hilbert space $`(\mathcal{H}_Q^{\otimes N})^{\otimes k}`$, while $`\stackrel{~}{\varrho }_{i_1}^{ext}\otimes \cdots \otimes \stackrel{~}{\varrho }_{i_k}^{ext}`$ and $`\stackrel{~}{\varrho }_{i_1\dots i_k}^{ext}`$ act on $`(\mathcal{H}_Q^{\otimes N}\otimes \mathcal{H}_{anc})^{\otimes k}`$. The former two states can be obtained from the latter ones by tracing over the space $`\mathcal{H}_{anc}^{\otimes k}`$. Now, as was mentioned, the fidelity does not decrease under partial trace. Then (as in Ref., for a different transmission-quality measure) we obtain that the average fidelity produced by the composed protocol is greater than or equal to that within the “intermediate” JS compression protocol. The latter fidelity tends to one if $`N`$ is kept fixed and $`k`$ tends to infinity (of course, $`N`$, although fixed, can be chosen arbitrarily large). Thus the total protocol satisfies the asymptotic fidelity condition (6). Since the JS protocol compresses the signal down to the von Neumann entropy of the ensemble, the extension protocol has a compression rate equal to $$R_{EP}=\underset{N\to \infty }{lim}\frac{1}{N}S(\varrho ^{ext})$$ (14) Minimizing this expression over all possible extensions of the signal states, we obtain the rate of the optimal extension protocol (OEP) $$R_{OEP}\equiv \underset{EP}{inf}R_{EP}=\underset{N\to \infty }{lim}\frac{1}{N}\mathrm{inf}S(\varrho ^{ext}).$$ (15) Now we must show that $`R_{OEP}\le I_{eff}`$. 
To this end consider the optimal protocol of the form (9), so that $`I_{eff}`$ is given by equation (10). Then it suffices to find extensions $`\varrho _i^{ext}`$ such that $$\underset{N\to \mathrm{\infty }}{lim}\frac{1}{N}S(\varrho ^{ext})\le \underset{N\to \mathrm{\infty }}{lim}\frac{1}{N}\mathrm{log}\mathrm{dim}\varrho _{}^{}{}_{}{}^{ext}.$$ (16) The suitable extensions are suggested by the lemma. Namely, suppose that $`N`$ is large, so that $`F(,^{})>1-ϵ`$ (within the considered optimal protocol). Then, by the lemma, there exist extensions $`\varrho _i^{ext}`$ of $`\varrho _i`$ such that $`\overline{F}(^{ext},_{}^{}{}_{}{}^{ext})>1-ϵ`$. Now we can use the inequality proved in (which is similar to the Fannes inequality ) saying that for states acting on a Hilbert space $``$ we have $$|S(\varrho )-S(\varrho ^{})|\le 2\mathrm{log}\mathrm{dim}\sqrt{1-F(\varrho ,\varrho ^{})}+1$$ (17) provided $`F(\varrho ,\varrho ^{})>1-\frac{1}{36}`$. By the double concavity of the square of $`F`$ we obtain in our case that $`{\displaystyle \frac{1}{N}}|S(\varrho ^{ext})-S(\varrho _{}^{}{}_{}{}^{ext})|`$ (18) $`\le 4(\mathrm{log}\mathrm{dim}_𝒬+{\displaystyle \frac{1}{N}}\mathrm{log}\mathrm{dim}_{anc})\sqrt{ϵ}+{\displaystyle \frac{1}{N}}`$ (19) One can show that it suffices to consider $`_{anc}`$ satisfying $`\mathrm{log}\mathrm{dim}_{anc}\le 2N\mathrm{log}\mathrm{dim}_Q`$. Thus the entropy (per system) of the state $`\varrho ^{ext}`$ is asymptotically equal to that of $`\varrho _{}^{}{}_{}{}^{ext}`$. Now, since $`S(\varrho _{}^{}{}_{}{}^{ext})\le \mathrm{log}\mathrm{dim}\varrho _{}^{}{}_{}{}^{ext}`$, we obtain the inequality (16). To summarize, we have obtained a much simpler formula for the optimal compression rate in the visible coding scheme. So far the task was to minimize the support of the states under Alice's and Bob's operations, constrained by the asymptotic high-fidelity condition. The latter is very difficult to deal with in the case of mixed states. The present expression involves neither such a constraint nor Alice's and Bob's actions.
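The continuity bound (17) can be checked numerically in the simplest, commuting case, where the fidelity (Tr√(√ϱϱ′√ϱ))² reduces to the classical fidelity of the spectra. The snippet below (arbitrary example spectra, with dim ℋ = 4, not taken from the text) verifies both the applicability condition and the bound:

```python
import numpy as np

def entropy(p):
    """Entropy (base 2) of a probability spectrum."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Two nearby commuting (diagonal) states on a 4-dimensional space;
# the spectra below are arbitrary illustrative values.
p = np.array([0.40, 0.30, 0.20, 0.10])
q = np.array([0.41, 0.29, 0.20, 0.10])

# For commuting states the fidelity F = (Tr sqrt(sqrt(p) q sqrt(p)))^2
# reduces to the classical fidelity of the spectra.
F = float(np.sum(np.sqrt(p * q)) ** 2)

lhs = abs(entropy(p) - entropy(q))
rhs = 2 * np.log2(4) * np.sqrt(1 - F) + 1   # dim H = 4

print(F > 1 - 1 / 36, lhs <= rhs)
```

Both conditions hold for these spectra; the bound is of course far from tight here, since the additive term alone already dominates the small entropy difference.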
One now needs to minimize the entropy (which is more feasible than dealing with the dimension of the support), varying over extensions of the ensemble. Thus the constraints are now much more convenient. An interesting question arises: can the optimal compression be achieved by means of pure extensions (purifications)? A closely related question is: given an ensemble, do there exist purifications such that the entropy of the purification ensemble is not greater than the entropy of the initial one? If this is not the case in general, can it be asymptotically true for typical sequences of states? Finally, one could ask whether the limit in formula (11) is really needed. It might be that the minimal entropy could be attained by means of extensions of single signals $`\varrho _i^0`$. However, almost everywhere in quantum information theory collective operations are much more powerful than operations performed on separate systems. It is therefore likely that collective extensions of long signal sequences are necessary to obtain optimal compression. We believe that the presented result will stimulate efforts to answer these questions, to find whether the Levitin-Holevo function has a physical sense in terms of quantum bits, and eventually to resolve the highly nontrivial problem of compression of quantum information carried by ensembles of mixed states. ###### Acknowledgements. The author is grateful to Ryszard Horodecki for stimulating discussions and helpful comments. He would also like to thank Chris Fuchs for a helpful discussion. The work is supported by the Polish Committee for Scientific Research, Contract No. 2P03B 143 17.
# Wave Function Interpretation and Quantum Mechanics Equations ## Introduction Predictions brilliantly confirmed by experiment have won for quantum theory the reputation of one of the most successful physical theories. Nevertheless, the disputes about its meaning and the limits of its applicability have not subsided to the present day. This is a unique phenomenon in the history of science . The Nobel Prize laureate in physics M. Gell-Mann characterised quantum physics as a discipline “full of mysteries and paradoxes, that we do not completely understand but are able to use. As we know, it operates perfectly in the description of physical reality, but, as sociologists would say, it is an anti-intuitive discipline. Quantum physics is not a theory, but the limits within which, as we suppose, any correct theory needs to be included” . A logical analysis of quantum mechanics as a science leads to the conclusion that it is incomplete and cannot be completed, as a consequence of the inconsistency of quantum objects that is fixed in corpuscular-wave dualism . The incompleteness of the theory is the original “payment” for the tendency to create a non-contradictory description of contradictory objects. One consequence of this logical analysis of quantum mechanics is a proof of the absence of a positive resolution for the hidden-parameter approach, from the point of view that “there is no possibility of a more complete description within the limits of the standard quantum-mechanical theory. Its realization requires a construction of quantum mechanics on a principally different basis”. The facts of the annihilation of particles and antiparticles with the creation of photons and neutrinos, and of the birth of particles of different classes in the interactions of high-energy photons, are circumstantial evidence of the unified origin of fields and particles. Without discussing “forced formalism”, i.e.
the penetration of science into domains with principally new forms and meanings, different from the “everyday” ones, it would be desirable to reduce the formal apparatus as far as possible, replacing it by a system of physical views. From this point of view, the development of models of physical objects that make it possible to unify the description of the corpuscular and wave properties of real objects, i.e. fields and particles, has a conceptual meaning. The models need to be consistent with the existing quantum mechanics approach, since the latter has brilliant experimental confirmation. Moreover, some quantum mechanics postulates and concepts should follow as corollaries of the properties of the proposed model and of the properties of the space where the object exists. If the relativistic properties are included in the description of the space, the corresponding expressions describing the object's behaviour should have a relativistic nature and, in passage to the limit, should reduce to the well-known quantum mechanics equations. In all probability, if Minkowski space is examined, the absolutely remote regions have to be included in the description of the physical object. An effort has been undertaken to solve the mentioned problems and “to pave a way” towards complementing the existing quantum mechanics theory into the full, correct, non-contradictory theory dreamed of by Louis de Broglie, mentioned by Murray Gell-Mann, and sought by Ilya Prigogine. ## I Physical object In the special theory of relativity the evolution of an object is examined in a four-dimensional space-time continuum with a pseudo-Euclidean metric. The supposition of space-time continuity runs into a conceptual problem that we will call the scale problem. Let us consider four-dimensional space-time as a mathematical set. The Cantor cardinality of a neighbourhood of any point of the space is equal to the cardinality of the whole space. The question is: what defines the observed size of a physical object, for example of the elementary particles?
The size of the observer cannot be the reason: moving this way we would come to the dilemma of the initial appearance of the hen or the egg. The speed of light, as a fundamental constant, connects space and time measurements but apparently is not suited for the role of the scale coefficient. In all probability the problem can be solved by introducing discreteness into the description. Considering time as a parameter characterising the changing of processes in space and, during the analysis, putting these processes in order in a chronological sequence realising the causality principle, one can introduce the hypothesis of the discreteness of space. In this way it is not difficult to retain the continuity of time as an abstract parameter; however, it is of course not necessary to reject completely the possibility of the discreteness of time. Is discreteness really necessary? Let us consider the possible variants of putting events in order in inertial frames of reference, leaving the question of the discreteness of space open. The conventional approach is the following . Some frame is introduced in the four-dimensional continuum, which represents a set of four continuous marks $`(x,y,z,\tau )=(\stackrel{}{r},\tau )`$ over the space and time coordinates. It is established that an infinite set of equivalent frames exists, and these frames are connected to each other by four continuously differentiable functions with a non-zero functional determinant. Usually this demand is connected with the principle of the equivalence of inertial frames of reference. It is considered that some property, unchangeable under these transformations, can correspond to each point. A property expressed by a number that does not change under the transformations of the frames of reference is called an invariant, or scalar.
One speaks of an invariant, or scalar, field if this correspondence takes place not only for one concrete point, but some number is assigned to every point of some defined region, and all these numbers reflect the same invariant property. Thus, the scalar field is defined by a function of the coordinates $`\varphi (\stackrel{}{r},\tau )`$, which can be interpreted as some continuous physical object. For a physical object stable in time, due to the invariance of the values at its every point, the action integral can be composed, and by the principle of least action the trajectory of the object's motion may be defined. This trajectory represents a chain of events put in order in time. We will consider this approach as the classical one. One often has to deal with non-invariant characteristics of physical objects. Let us pose a set of one or more numbers $`(g(\stackrel{}{r},\tau ),h(\stackrel{}{r},\tau ),\mathrm{})`$ in some inertial frame of reference, none of which is a scalar. If there is a functional of one or more elements of this set that is a scalar or a scalar field, the process of putting events in order may still be done, although in an indirect way or with some limitations. We will call such sets of numbers, and also the corresponding fields, functional. Let us introduce a point functional that is important for us; further we will call it the normalised functional. Let us combine the function $`f(\stackrel{}{r},\tau )=g(\stackrel{}{r},\tau )+ih(\stackrel{}{r},\tau )`$, where $`i`$ is the imaginary unit. The corresponding scalar field we will define as follows: $$\varphi (\stackrel{}{r},\tau )=|f(\stackrel{}{r},\tau )|^2=g(\stackrel{}{r},\tau )^2+h(\stackrel{}{r},\tau )^2.$$ (1) Note that the function $`f(\stackrel{}{r},\tau )`$ defines the normalised functional field.
The invariant coordinate transformations between different frames of reference in the special theory of relativity are given by the Lorentz transformations: $$x^{}=x,y^{}=y,z^{}=\gamma (z-\beta \tau ),\tau ^{}=\gamma (\tau -\beta z),$$ (2) where $`\beta =V/c`$, $`\gamma =1/\sqrt{1-\beta ^2}`$, $`V`$ is the velocity of the frame of reference $`𝐊`$ relative to the fundamental frame of reference $`𝐊^{}`$, and, without loss of generality, we will consider the velocity vector to be parallel to the axes $`0z`$ and $`0z^{}`$. Parameters in the fundamental frame will be marked by primes. Let the function $`g^{}(\stackrel{}{r^{}},\tau ^{})`$, in some region $`\mathrm{\Omega }`$ of Minkowski space, define the functional field corresponding to the physical object, extended in space and stable in time, in its fundamental frame $`𝐊^{}`$. Let us denote by $`V^{}`$ the cross-section of $`\mathrm{\Omega }`$ in $`𝐊^{}`$ by the hyperplane $`\tau ^{}=const`$. Note that at any fixed moment of time the interval between any two events from $`V^{}`$ will be space-like, because this hyperplane lies in the absolutely remote regions of Minkowski space. For any point from $`V^{}`$, by virtue of the stability of the physical object, there will be a time interval $`\mathrm{\Delta }\tau ^{}`$ such that $`g^{}(\stackrel{}{r^{}},\tau ^{})=g^{}(\stackrel{}{r^{}},\tau ^{}+\mathrm{\Delta }\tau ^{})`$. Thus, the function $`g^{}(\stackrel{}{r^{}},\tau ^{})`$ will describe a stationary process, defined at the space point $`\stackrel{}{r^{}}`$ (or in some discrete region of space).
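As a quick numerical check of the transformation (2) (an illustration only, with arbitrary event coordinates; here τ already carries the dimension of length, so c does not appear explicitly), one can verify the invariance of the quadratic form τ² − z² under the boost:

```python
import math

def lorentz(z, tau, beta):
    """Boost along z, Eq. (2): z' = gamma*(z - beta*tau), tau' = gamma*(tau - beta*z)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (z - beta * tau), gamma * (tau - beta * z)

# Arbitrary event and boost velocity (illustrative values).
z, tau, beta = 3.0, 5.0, 0.6
zp, taup = lorentz(z, tau, beta)

# The interval tau^2 - z^2 is the same in both frames.
print(tau**2 - z**2, taup**2 - zp**2)
```

For a space-like separated pair of events (negative interval) no boost with |β| < 1 can change the sign of this invariant, which is why the hyperplane τ′ = const stays in the absolutely remote regions.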
If the time averages $`<|g^{}|>_T`$, $`<|g^{}|^2>_T`$ exist as $`T\to \mathrm{\infty }`$, $`\left(<g(t)>_T=\int _{-T}^{T}g(t)𝑑t/(2T)\right)`$, and $`g^{}(\stackrel{}{r^{}},\tau ^{})`$ is a function of bounded variation on each finite interval of $`\tau ^{}`$, it can be represented as a sum of the average value $`<g^{}>_T=g_0^{}(\stackrel{}{r^{}})`$, some number of periodical components, and a non-periodical component $`g_a^{}`$ : $$g^{}(\stackrel{}{r^{}},\tau ^{})=g_0^{}(\stackrel{}{r^{}})+\underset{k=1}{\overset{\mathrm{\infty }}{\sum }}g_k^{}(\stackrel{}{r^{}})\mathrm{cos}(\omega _k^{}\tau ^{}+\alpha _k^{})+g_a^{}(\stackrel{}{r^{}},\tau ^{}).$$ (3) Considering $`g^{}(\stackrel{}{r^{}},\tau ^{})`$ as a part of the functional field (1), we will define this field as: $$\psi ^{}(\stackrel{}{r^{}},\tau ^{})=\underset{k=0}{\overset{\mathrm{\infty }}{\sum }}q_k^{}(\stackrel{}{r^{}})\mathrm{exp}(i\omega _k^{}\tau ^{})+q_a^{}(\stackrel{}{r^{}},\tau ^{}).$$ (4) where $`\omega _0^{}=0`$; the limit of $`<\psi ^{}\mathrm{exp}(-i\omega \tau ^{})>_T`$ as $`T\to \mathrm{\infty }`$ is equal to $`q_k^{}(\stackrel{}{r^{}})=g_k^{}(\stackrel{}{r^{}})\mathrm{exp}(i\alpha _k^{}(\stackrel{}{r^{}}))`$ for $`\omega =\omega _k^{}`$, $`(k=0,1,2,\mathrm{})`$, and is equal to zero for all other values of $`\omega `$. The analogous limit for the non-periodical component is always equal to zero for any $`\stackrel{}{r^{}}`$ from $`V^{}`$. The functions $`g_k^{}(\stackrel{}{r^{}})`$ and $`\alpha _k^{}(\stackrel{}{r^{}})`$ are real.
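The extraction of the coefficients q′_k by long-time averaging can be illustrated numerically. The sketch below (with arbitrary example values for the amplitude, phase and frequency) recovers both the constant component and one harmonic from a sampled signal of the form (4) by averaging it against e^(−iωτ′):

```python
import numpy as np

# Illustrative stationary signal of the form (4): a constant component q0
# plus one harmonic q1*exp(i*w1*t), with q1 = g1*exp(i*alpha1).
# All numerical values here are arbitrary examples, not from the paper.
g1, alpha1, w1, q0 = 2.0, 0.7, 5.0, 1.5
q1 = g1 * np.exp(1j * alpha1)

# Sample over an integer number of periods, so the discrete time average
# of any nonzero harmonic vanishes exactly (finite geometric sum).
periods, samples_per_period = 64, 128
dt = (2 * np.pi / w1) / samples_per_period
t = np.arange(periods * samples_per_period) * dt
psi = q0 + q1 * np.exp(1j * w1 * t)

def coeff(w):
    """Time average <psi * exp(-i*w*t)>_T, picking out the harmonic at w."""
    return np.mean(psi * np.exp(-1j * w * t))

print(abs(coeff(w1) - q1), abs(coeff(0.0) - q0))   # both ~0
```

Averaging at any frequency not present in the signal returns (numerically) zero, as stated below Eq. (4) for the non-periodical component.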
In frame $`𝐊`$ the considered function will be defined as $`\psi (\stackrel{}{r},\tau )=\psi ^{}(\stackrel{}{r^{}}(\stackrel{}{r},\tau ),\tau ^{}(\stackrel{}{r},\tau ))`$, which, taking (2) into account and designating $`\xi =\gamma (z-\beta \tau )`$, $`\eta =\gamma (\tau -\beta z)-\tau `$, may be represented as follows: $$\psi (\stackrel{}{r},\tau )=\underset{k=0}{\overset{\mathrm{\infty }}{\sum }}q_k(x,y,\xi )\mathrm{exp}[i\omega _k^{}(\eta +\tau )]+q_a(\stackrel{}{r},\tau ).$$ (5) For every fixed harmonic of (5) the scalar field (1) corresponding to the introduced normalised one will be equal to the squared amplitude of the corresponding harmonic. As far as a sum of scalars is also a scalar, the resulting scalar field corresponding to (5) is defined as: $$\varphi (\stackrel{}{r},\tau )=\underset{k=0}{\overset{\mathrm{\infty }}{\sum }}|q_k(x,y,\xi )|^2=\underset{k=0}{\overset{\mathrm{\infty }}{\sum }}g_k^2(\stackrel{}{r},\tau ).$$ (6) The important distinction of the normalised field (5) from the scalar one (6) is the oscillatory, resonant nature of the first one. It is necessary to note that the established correspondence between (5) and (6) is not one-to-one, because, for example, the information about the mutual phases of the harmonics is lost. However, in considering only one harmonic of the function of a stable physical object, this limitation is not important; the information about the frequency, though, is still lost.
## II Quantum mechanics basic equations Omitting the corresponding indices of $`\psi `$, $`q`$, $`\omega `$ for notational simplicity, we will represent the $`k`$-th harmonic of the spectral expansion (5) as follows: $$\psi (\stackrel{}{r},\tau )=\psi ^b(\stackrel{}{r},\tau )\mathrm{exp}(i\omega \tau ),\psi ^b(\stackrel{}{r},\tau )=q(x,y,\xi )\mathrm{exp}(i\omega \eta ).$$ (7) The partial derivatives of the function $`\psi ^b`$ with respect to $`\stackrel{}{r}`$ and $`\tau `$ will be expressed in the following way: $$\frac{\partial \psi ^b}{\partial \tau }=-\gamma \beta q_\xi e^{i\omega \eta }+i\omega (\gamma -1)qe^{i\omega \eta },\frac{\partial \psi ^b}{\partial z}=\gamma q_\xi e^{i\omega \eta }-i\gamma \omega \beta qe^{i\omega \eta },\frac{\partial \psi ^b}{\partial x(y)}=q_{x(y)}e^{i\omega \eta }.$$ (8) Here we designate $`f_s\equiv \partial f/\partial s`$. For the second derivatives the following expressions may be obtained: $$\frac{\partial ^2\psi ^b}{\partial \tau ^2}=\gamma ^2\beta ^2q_{\xi \xi }e^{i\omega \eta }-2i\gamma (\gamma -1)\omega \beta q_\xi e^{i\omega \eta }-\omega ^2(\gamma -1)^2qe^{i\omega \eta },\frac{\partial ^2\psi ^b}{\partial z^2}=\gamma ^2qe^{i\omega \eta }(q_{\xi \xi }/q-\omega ^2\beta ^2)-2i\gamma ^2\omega \beta q_\xi e^{i\omega \eta }.$$ (9) It is possible to cancel the imaginary parts by combining the expressions for the first derivative in time (8) and the second ones in space (9). In this way, for the function $`\psi ^b(\stackrel{}{r},\tau )`$, we will get the equation: $$-i\gamma \frac{1}{\psi ^b}\frac{\partial \psi ^b}{\partial \tau }+\frac{1}{2\omega \psi ^b}\nabla ^2\psi ^b=\frac{1}{2\omega }\frac{\nabla ^2q}{q}+\frac{\omega }{2}(\gamma -1)^2.$$ (10) Here $`\nabla ^2\equiv \partial ^2/\partial x^2+\partial ^2/\partial y^2+\partial ^2/\partial z^2`$ is the Laplace operator. The second term on the right-hand side of the equation tends to zero fast enough $`(\beta ^4)`$ at non-relativistic velocities.
If it is possible to separate the variables in some frame of reference, or at least to separate the time variable in the function $`q`$, so that $`\nabla ^2q/q=u(x,y,z)`$, then, supposing $`\omega =mc/\hbar `$, where $`m`$ and $`\hbar `$ are some constants (usually $`m`$ is understood as the rest mass, $`\hbar `$ as the Planck constant), we will get: $$-i\hbar c\gamma \frac{\partial \psi ^b}{\partial \tau }+\frac{\hbar ^2}{2m}\nabla ^2\psi ^b=\left[\frac{\hbar ^2}{2m}u(x,y,z)+mc^2\frac{(\gamma -1)^2}{2}\right]\psi ^b.$$ (11) Designating $`\hbar ^2u(x,y,z)/2m=U(x,y,z)`$, in the non-relativistic case, in the passage to the limit $`(\gamma \to 1)`$, we will get the Schrödinger equation (conjugate to the usually used one) . It is known that $`U(x,y,z)`$ has the meaning of the potential energy of the particle in a force field, and $`\psi ^b(\stackrel{}{r},\tau )`$ agrees with de Broglie's description of the wave properties of the particle. The equation may be interpreted in the following way: an extended physical object is changed by an external field, and the motion or the change of the internal structure of the object will depend on this field. In contrast to the Schrödinger equation, equation (10) and, with some stipulations, (11) have to be true also in the relativistic case. It is necessary to pay attention to the particularity of the described passage to the limit of the potential function of the external force field, because an interaction can be complete as well as partial, and can also proceed with the creation of new physical objects. Combining the expressions analogous to (9) for the function $`\psi (\stackrel{}{r},\tau )`$, it is possible to cancel the imaginary terms in the right parts of the expressions.
The following equation can be obtained: $$\frac{1}{\psi }\left(\frac{\partial ^2\psi }{\partial \tau ^2}-\nabla ^2\psi \right)=-\left(\frac{\nabla ^2q-\beta ^2q_{zz}}{q}+\omega ^2\right).$$ (12) On the right-hand side of the equation only functions of the space coordinates in the fundamental frame of reference are included, $`(\nabla ^2q-\beta ^2q_{zz})/q=\nabla ^2q^{}/q^{}`$, while the left-hand side refers to the frame $`𝐊`$. Thus, under the stability condition of the considered physical object, the expression in brackets on the right-hand side needs to be a scalar. Designating this scalar as $`(mc/\hbar )^2`$, we will get the Klein-Gordon-Fock equation for a free relativistic (pseudo-)scalar particle with rest mass $`m`$, which corresponds to the conventional model, when a plane monochromatic wave is associated with the (spinless) particle. Thus, the introduced function (5) may be interpreted as the wave function of the physical object. The particular case of the zero scalar corresponds to the wave equation. In this particular case, the real and imaginary parts of the normalised functional field are the components of the electric and magnetic field intensities, and the scalar field corresponding to this functional is the density distribution of the electromagnetic field. If the wave equation is considered as a special case of the common description of physical objects, the introduced conceptions of the functional field and the corresponding scalar field acquire very interesting physical analogies. ## conclusion Thus, the proposed model of physical objects, allowing one to unify the description of the corpuscular and wave properties of real objects, particles and fields, has a conceptual nature. The model correlates with the existing quantum mechanics approach and, consequently, has experimental confirmation in the non-relativistic limit and in a number of particular cases.
Such basic quantum mechanics equations as the Schrödinger and Klein-Gordon-Fock ones can be considered as corollaries of the properties of the proposed model and of the properties of the space in which the physical object exists (Minkowski space is considered in this paper). The proposed model includes the absolutely remote regions of Minkowski space in the description. On the base of this possibility it is possible to go further, inside the physical object, and to try to make clear its internal structure. So, this paper is intended by the author as the first one in a series of papers continuing this theme, in the direction of further working out the theses that follow from the properties of the proposed model of the physical object and the properties of the space. For example, papers about the internal structure of physical objects and their interconnections are being prepared. This work is an effort to remove the conceptual internal contradictions of the quantum mechanics theory. It is supposed that this will allow one to find a way of creating, within its frames, the complete, correct, non-contradictory theory dreamed of by L. de Broglie, mentioned by M. Gell-Mann, and with the way “paved” by I. Prigogine.
# Dynamics of ripple formation in sputter erosion: nonlinear phenomena ## Abstract Many morphological features of sputter eroded surfaces are determined by the balance between ion induced linear instability and surface diffusion. However, the impact of the nonlinear terms on the morphology is less understood. We demonstrate that while at short times ripple formation is described by the linear theory, after a characteristic time the nonlinear terms determine the surface morphology by either destroying the ripples, or generating a new rotated ripple structure. We show that the morphological transitions induced by the nonlinear effects can be detected by monitoring the surface width and the erosion velocity. PACS numbers:68.55.-a,05.45.+b,64.60.Cn,79.20.Rf The morphological evolution of ion sputtered surfaces has generated much experimental and theoretical interest in recent years. As a result, there is extensive evidence that ion bombardment can result in ordered surface ripples, or lead to kinetic roughening, depending on the experimental conditions. These experimental results, which cover amorphous and crystalline materials (SiO<sub>2</sub>), and both metals (Ag) and semiconductors (Ge, Si), have motivated extensive theoretical work aiming to uncover the mechanism responsible for ripple formation and kinetic roughening. A particularly successful model has been proposed by Bradley and Harper (BH) , in which the height $`h(x,y,t)`$ of the eroded surface is described by the linear equation $$\partial _th=\nu _x\partial _x^2h+\nu _y\partial _y^2h-K\nabla ^4h,$$ (1) where $`K`$ is the surface diffusion constant and the coefficients $`\nu _x`$ and $`\nu _y`$ are induced by the erosion process such that $`\nu _y<0`$ and $`\nu _x`$ can change sign as the angle of incidence of the ions is varied.
The balance of the unstable erosion term ($`-|\nu |\nabla ^2h`$) and the smoothening surface diffusion term ($`-K\nabla ^4h`$) generates ripples with wavelength $$\ell _i=2\pi \sqrt{2K/|\nu _i|},$$ (2) where $`i`$ refers to the direction ($`x`$ or $`y`$) along which the associated $`\nu _i`$ ($`\nu _x`$ or $`\nu _y`$) is the largest. While successful in predicting the ripple wavelength and orientation, this linear theory cannot explain a number of experimental features, such as the saturation of the ripple amplitude, the observation of rotated ripples, and the appearance of kinetic roughening. Recently it has been proposed that the inclusion of nonlinear terms and noise (both of which were derived from Sigmund’s theory of sputtering) can cure these shortcomings. Consequently, Eq.(1) has to be replaced by the noisy Kuramoto-Sivashinsky (KS) equation $`\partial _th`$ $`=`$ $`\nu _x\partial _x^2h+\nu _y\partial _y^2h-K_x\partial _x^4h-K_y\partial _y^4h-K_{xy}\partial _x^2\partial _y^2h`$ $`+{\displaystyle \frac{\lambda _x}{2}}(\partial _xh)^2+{\displaystyle \frac{\lambda _y}{2}}(\partial _yh)^2+\eta (x,y,t),`$ (3) where $`\eta (x,y,t)`$ is an uncorrelated white noise with zero mean, mimicking the randomness resulting from the stochastic nature of ion arrival to the surface. All coefficients in Eq.(3) have been determined in terms of the experimental parameters, such as the ion flux, angle of incidence, ion penetration depth, and substrate temperature. While it is expected that the nonlinear theory incorporates most features of ripple formation and kinetic roughening, the morphological and dynamical features of the surface described by it are known only in certain special cases. For example, when the nonlinear terms and the noise are neglected ($`\lambda _x=\lambda _y=0`$, $`\eta =0`$), Eq.(3) reduces to the linear theory (1), and predicts ripple formation.
It is also known that the isotropic KS equation ($`\nu _x=\nu _y<0`$, $`K_x=K_y=K_{xy}/2`$, and $`\lambda _x=\lambda _y`$) asymptotically (for large time and length scales) predicts kinetic roughening, with exponents similar to those seen experimentally in ion sputtering. For positive $`\nu _x`$ and $`\nu _y`$, Eq.(3) reduces to the anisotropic KPZ equation, whose scaling behavior is controlled by the sign of $`\lambda _x\lambda _y`$. Finally, recent integration by Rost and Krug of the noiseless version of Eq.(3) provided evidence that when $`\lambda _x\lambda _y<0`$, new ripples, unaccounted for by the linear theory, appear, and their direction is rotated with respect to the ion direction. However, it is not known if this rotated phase survives in the presence of noise, $`\eta `$. These special cases illustrate the complexity of the morphological evolution predicted by Eq.(3). To be able to make specific predictions on the morphology of ion-sputtered surfaces, we need to gain a full understanding of the behavior predicted by (3), going beyond the special cases, which are often experimentally irrelevant.
We show that these morphological transitions can be detected by monitoring the surface width or erosion velocity, quantities that can be measured more easily in situ. Finally, we discuss the impact of our results on current and future experimental work. The direct numerical integration is carried out by discretizing the continuum equation (3), using standard discretization techniques. We choose a temporal increment $`\mathrm{\Delta }t=0.01`$ and impose periodic boundary conditions $`h(x,y,t)=h(x+L,y,t)=h(x,y+L,t)`$, where $`L\times L`$ is the size of the substrate. We choose the noise to be uniformly distributed between \[-1/2,1/2\], and perturb the initial flat configuration with the noise. Since the sign of the nonlinear terms plays a significant role in defining the surface morphology, we discuss separately the $`\lambda _x\lambda _y>0`$ and $`\lambda _x\lambda _y<0`$ cases. $`\lambda _x\lambda _y>0`$ — A general feature of systems such as Eq.(3) is that the nonlinear terms do not affect the surface morphology or dynamics until a crossover time $`\tau `$ has been reached. Thus, we expect that for early times, i.e. for $`t<\tau `$, the surface morphology and dynamics are properly described by the linear theory. To demonstrate this separation of the linear and nonlinear regimes, in Fig. 1 we show the time dependence of the surface width, defined as $`W^2(L,t)\equiv \frac{1}{L^2}\sum _{x,y}h^2(x,y,t)-\overline{h}^2`$, and of the mean height $`\overline{h}=\frac{1}{L^2}\sum _{x,y}h(x,y,t)`$. We find that for $`t<\tau `$ the width $`W`$ increases exponentially while the mean height stays constant at $`\overline{h}=0`$. Indeed, both of these findings are consistent with the predictions of the linear theory: $`W`$, being proportional to the ripple amplitude, according to Eq.(1) increases as $`W\sim \mathrm{exp}(\nu t/\ell ^2)`$, and the linear terms do not change the mean height of the surface.
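The discretization scheme described above can be sketched as follows (a minimal illustration, not the authors' production code; the grid spacing is set to unity, the noise amplitude and run length are arbitrary example values, and the signs of the ν's are chosen so that the second-order terms are linearly unstable):

```python
import numpy as np

rng = np.random.default_rng(0)

def d2(h, axis):
    """Periodic second difference (dx = 1)."""
    return np.roll(h, 1, axis) - 2 * h + np.roll(h, -1, axis)

def d1(h, axis):
    """Periodic centered first difference (dx = 1)."""
    return 0.5 * (np.roll(h, -1, axis) - np.roll(h, 1, axis))

def step(h, dt, nux, nuy, Kx, Ky, Kxy, lx, ly, noise_amp):
    """One explicit Euler step of the noisy anisotropic KS equation (3)."""
    hxx, hyy = d2(h, 0), d2(h, 1)          # axis 0 -> x, axis 1 -> y
    rhs = (nux * hxx + nuy * hyy
           - Kx * d2(hxx, 0) - Ky * d2(hyy, 1) - Kxy * d2(hxx, 1)
           + 0.5 * lx * d1(h, 0) ** 2 + 0.5 * ly * d1(h, 1) ** 2
           + noise_amp * (rng.random(h.shape) - 0.5))
    return h + dt * rhs

L = 64
h = 0.01 * (rng.random((L, L)) - 0.5)      # noise-perturbed flat surface
for _ in range(2000):                       # illustrative run length
    h = step(h, dt=0.01, nux=-0.0001, nuy=-0.6169,
             Kx=2.0, Ky=2.0, Kxy=4.0, lx=0.001, ly=0.001, noise_amp=0.1)

W = np.sqrt(np.mean(h**2) - np.mean(h)**2)  # surface width W(L,t)
print(np.isfinite(W), W > 0)
```

With these values the explicit scheme is stable (the fourth-order stability bound on Δt is satisfied for dx = 1); longer runs or larger coefficients would require a smaller time step or a semi-implicit scheme.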
Furthermore, inspecting the surface morphology, we find that in this regime the ripple wavelength and orientation are also correctly described by the linear theory. For example, for the parameters $`\nu _x=-0.0001`$, $`\nu _y=-0.6169`$, $`K_x=K_y=K_{xy}/2=2`$, and $`\lambda _x=\lambda _y=0.001`$, according to (2), the ripple wavelength along the $`y`$ axis is $`\ell _y\approx 16`$, and along the $`x`$-axis is $`\ell _x\approx 1257`$. Since the dominant wavelength is determined by the growth rate $`\mathrm{exp}(\nu t/\ell ^2)`$, the smaller wavelength, i.e. $`\ell _y`$, will dominate. As Fig. 2a shows, for a system of size $`64\times 64`$ we observe four ripples aligned along the $`x`$-axis, in agreement with this prediction. As a second example we consider the case $`\nu _x=-1.2337`$, $`\nu _y=-0.0001`$, $`K_x=K_y=K_{xy}/2=1`$, and $`\lambda _x=\lambda _y=0.001`$, for which we expect ripples of wavelength $`\ell _x\approx 8`$, smaller than $`\ell _y\approx 889`$. As Fig. 2b shows, in this case we observe eight ripples aligned along the $`y`$-axis. While the early time behavior is correctly predicted by the linear theory, beyond the crossover time $`\tau `$ the nonlinear terms become effective. One of the most striking consequences of these terms is that the surface width stabilizes rather abruptly (see Fig. 1). Furthermore, the ripple pattern generated in the linear regime disappears, and the surface exhibits kinetic roughening. A typical surface morphology, demonstrating the absence of ripples, is shown in Fig. 2c. The crossover time $`\tau `$ from the linear to the nonlinear behavior can be estimated by comparing the strength of the linear term with that of the nonlinear term. Let the typical height at the crossover time $`\tau `$ be $`W_0\equiv \sqrt{W^2(L,\tau )}`$. Then, from the linear equation we obtain $`W_0\sim \mathrm{exp}(\nu \tau /\ell ^2)`$, while from $`\partial _th\sim \lambda (\nabla h)^2`$ we estimate $`W_0/\tau \sim \lambda W_0^2/\ell ^2`$.
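These wavelengths follow from Eq. (2) by plain arithmetic; the check below (an illustration only; the ν's enter through their absolute values, so their sign convention does not matter here) reproduces the quoted numbers for both parameter sets:

```python
import math

def ripple_wavelength(K, nu):
    """Eq. (2): l_i = 2*pi*sqrt(2K/|nu_i|)."""
    return 2 * math.pi * math.sqrt(2 * K / abs(nu))

# First parameter set (K = 2): |nu_y| = 0.6169, |nu_x| = 0.0001.
l_y = ripple_wavelength(2, -0.6169)    # ~16  -> four ripples fit in L = 64
l_x = ripple_wavelength(2, -0.0001)    # ~1257

# Second parameter set (K = 1): |nu_x| = 1.2337, |nu_y| = 0.0001.
l_x2 = ripple_wavelength(1, -1.2337)   # ~8   -> eight ripples fit in L = 64
l_y2 = ripple_wavelength(1, -0.0001)   # ~889

print(l_y, l_x, l_x2, l_y2)
```

In both cases the shorter wavelength fits an integer number of times into the L = 64 substrate (64/16 = 4 and 64/8 = 8 ripples), matching the morphologies of Figs. 2a and 2b.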
Combining these two relations we obtain $$\tau \sim (K/\nu ^2)\mathrm{ln}(\nu /\lambda ).$$ (4) In this expression, $`\nu `$, $`K`$ and $`\lambda `$ refer to the direction perpendicular to the ripple orientation. The predicted $`\lambda `$-dependence of $`\tau `$ is confirmed in the inset of Fig. 1a. Another quantity that reflects the transition from the linear to the nonlinear regime is the erosion velocity $`v=\partial _t\overline{h}`$. The main contribution to the erosion velocity comes from a constant erosion rate $`v_0`$, which has been omitted from (1) and (3), since it does not affect the surface morphology . However, in addition to $`v_0`$, the mean height is also modified by the nonlinear terms $`\lambda _x(\partial _xh)^2`$ and $`\lambda _y(\partial _yh)^2`$. In the following, for simplicity, we neglect the $`v_0`$ term, since its value does not depend on the surface morphology and it is constant throughout the erosion process. The nonlinear terms act to decrease the mean height in the case of $`\lambda _x<0`$ and $`\lambda _y<0`$. We can estimate the surface velocity as $`v\sim \lambda W_0^2/\ell ^2\sim \nu ^3/(K\lambda )`$, using $`W_0\sim \nu /\lambda `$. This dependence of $`v`$ on $`\lambda `$ is consistent with the numerical results shown in the inset of Fig. 1b. In this regime ($`t>\tau `$) the surface exhibits kinetic roughening, i.e. the surface width should increase either logarithmically (when $`\lambda =0`$) or as a power law (when $`\lambda \ne 0`$). However, compared with the exponential increase in the early regime ($`t<\tau `$), this dependence is hardly observable. The simulation times required to investigate the asymptotic scaling of $`W`$ with $`t`$ are currently prohibitive. $`\lambda _x\lambda _y<0`$ — As Fig. 3a shows, we again observe a separation of the linear and nonlinear regimes; however, we find that the morphology and the dynamics of the surface in the nonlinear regime are quite different from the case $`\lambda _x\lambda _y>0`$.
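The crossover-time estimate above can be made concrete numerically. The snippet below (illustrative parameter values, with the prefactor set to one) shows the logarithmic λ-dependence: decreasing λ by a factor of 10 shifts τ by (K/ν²)ln 10, the linear-in-ln(1/λ) behavior seen in the inset of Fig. 1a:

```python
import math

def crossover_time(K, nu, lam):
    """tau ~ (K/nu^2) * ln(nu/lam), up to a constant prefactor."""
    return (K / nu**2) * math.log(abs(nu) / abs(lam))

K, nu = 2.0, 0.6169                      # illustrative values
t1 = crossover_time(K, nu, 1e-3)
t2 = crossover_time(K, nu, 1e-4)

# One decade of decrease in lambda shifts tau by (K/nu^2)*ln(10).
print(t2 - t1, (K / nu**2) * math.log(10))
```

The same estimate also gives the order of magnitude of the plateau width W0 ∼ ν/λ at which the exponential growth stops.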
In regime I, at early times ($`t<\tau `$), the surface forms ripples (see Fig. 4a), whose wavelength and orientation are correctly described by the linear theory. After the first crossover time $`\tau `$, given by Eq. (4), the surface width is stabilized and the ripples disappear, as shown in Fig. 4b. After $`\tau `$, the system enters a rather long transient regime, which we call regime II. Here the surface is rough, and no apparent spatial order is present. We often observe the development of individual ripples, but they soon disappear, and no long-range order develops in the system. However, at a second crossover time $`\tau _2`$, a new ripple structure suddenly forms, as shown in Fig. 4c, in which the ripples are stable and rotated by an angle $`\theta _c`$ with respect to the $`x`$ direction. The angle $`\theta _c`$ has the value $`\theta _c=\mathrm{tan}^{-1}\sqrt{|\lambda _x/\lambda _y|}`$ (or $`\mathrm{tan}^{-1}\sqrt{|\lambda _y/\lambda _x|}`$), which can be calculated by moving to a rotated coordinate frame in which the nonlinear term in the transverse direction vanishes. Indeed, as Fig. 4c shows, the observed angle is in excellent agreement with $`\theta _c=\mathrm{tan}^{-1}(1/2)`$ for $`\lambda _x=1`$ and $`\lambda _y=-4`$. We also find that the time the system spends in regime II fluctuates from system to system; thus $`\tau _2`$ has a wide distribution. The transitions between the three regimes can be detected by monitoring the surface width $`W`$ (Fig. 3a): in regime I the width increases exponentially, as predicted by the linear theory; it is approximately constant but highly fluctuating in regime II; and it suddenly increases and then stabilizes in regime III. Note that the amplitude of the rotated ripples in regime III is much larger than in regime II, a rather attractive feature for the use of such surfaces as patterned templates in microelectronic applications. The demonstrated morphological transitions generate an anomalous behavior in $`\overline{h}`$ as well. 
As Fig. 3a shows, the mean height is zero in the linear regime, increases as the ripples are destroyed in regime II, and decreases with a constant velocity in regime III. In order to understand this complex behavior, we consider a specific example, for which the surface morphologies are shown in Fig. 4. For this parameter set, ripples are aligned along the $`y`$-axis in regime I, because $`\ell _x\ll \ell _y`$. Thus, the contribution of $`(\partial _xh)^2`$ is much larger than that of $`(\partial _yh)^2`$, even though $`|\lambda _x|<|\lambda _y|`$, and the surface height increases due to the term $`\lambda _x(\partial _xh)^2`$ with $`\lambda _x>0`$ in regime II. However, as the ripples are destroyed by the nonlinear effects, the contribution of the $`(\partial _yh)^2`$ term increases, and eventually $`\lambda _y(\partial _yh)^2`$ becomes larger in magnitude than $`\lambda _x(\partial _xh)^2`$, forcing the mean height to decrease because $`\lambda _y<0`$. The velocity in regime III is determined by the nonlinear coefficient in the direction along the ripples, which reduces to $`\lambda _x+\lambda _y`$ after the coordinate transformation to the rotated ripple direction. This prediction is in good agreement with the results of Fig. 4, which demonstrate that $`v\propto 1/(\lambda _x+\lambda _y)`$. The clear separation of the linear and the nonlinear behavior, which holds for both signs of $`\lambda _x\lambda _y`$, has a direct impact on the interpretation of experimental observations. Numerous experiments have observed the development of ripples whose wavelength and orientation are in good agreement with the predictions of the linear theory . Based on our results, we expect that these experiments were in the $`t<\tau `$ regime, where indeed the linear theory fully describes the system. However, recent results have provided detailed experimental evidence of ripple amplitude stabilization, a clear sign of the presence of nonlinear effects. 
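The rotated-ripple geometry discussed above can be checked with a two-line sketch. The absolute value inside the square root is our reading of the angle formula, since the regime of interest has $`\lambda _x\lambda _y<0`$:

```python
import math

def ripple_rotation_angle(lam_x, lam_y):
    """Regime-III rotation angle theta_c = arctan(sqrt(|lam_x/lam_y|));
    magnitudes are taken because lam_x * lam_y < 0 in this regime."""
    return math.atan(math.sqrt(abs(lam_x / lam_y)))

def regime3_coefficient(lam_x, lam_y):
    """Effective nonlinear coefficient along the rotated ripples, lam_x + lam_y,
    which sets the regime-III erosion velocity."""
    return lam_x + lam_y

# The example quoted in the text: arctan(1/2), about 26.6 degrees
theta_c = math.degrees(ripple_rotation_angle(1.0, -4.0))
```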
Furthermore, it was found that the different $`W`$ versus $`t`$ curves can be collapsed by rescaling time with a factor $`\nu ^2/K`$ and the amplitude with $`\sqrt{\nu /2K}`$. Indeed, this is in excellent agreement with our prediction, Eq. (4). Finally, since the values of $`\nu `$ and $`\lambda `$ can be tuned by changing the ion energy and the angle of incidence, and $`K`$ can be tuned with the temperature, the values of $`\tau `$ and $`\tau _2`$ can be changed continuously, and thus our predictions on the morphological transitions between the linear and nonlinear regimes could be tested experimentally. Furthermore, the detailed morphological evolution uncovered here, combined with earlier calculations that connect the coefficients in Eq. (3) to the numerical values of the parameters describing the ion-bombardment process , offers a detailed roadmap that can guide further experiments and facilitate the use of ion sputtering for surface patterning. This work is supported by KOSEF (Grant No. 971-0207-025-2), the Korean Research Foundation (Grant No. 98-015-D00090), NSF-DMR and ONR.
# Elastic 𝜌 meson production at HERA ## 1 Introduction We present results on the elastic electroproduction of $`\varrho `$ mesons, $`e+p\rightarrow e+p+\varrho `$, with the $`\varrho `$ meson decaying into two pions ($`\rho \rightarrow \pi ^+\pi ^{}`$, BR $`\approx `$ 100 %). The data were collected in 1995 and 1996 by the H1 detector and correspond to integrated luminosities of 125 $`\mathrm{nb}^{-1}`$ and 3.87 $`\mathrm{pb}^{-1}`$, respectively. The kinematical range covered in $`Q^2`$, $`W`$ and $`t`$ is the following: $`1<Q^2<60`$ $`\mathrm{GeV}^2`$, $`30<W<140`$ GeV and $`|t|<0.5`$ $`\mathrm{GeV}^2`$. Since the $`\varrho `$ meson has the same quantum numbers as the photon ($`J^{PC}=1^{--}`$), the $`\gamma ^{}p`$ interaction is mediated by the exchange of a colourless object, called the pomeron in the Regge model. It is important to understand the pomeron in terms of partons in the framework of QCD. ## 2 Models Quantitative predictions in perturbative QCD are possible when a hard scale is present in the interaction. For $`\varrho `$ meson production, this scale can be given by $`Q^2`$ ($`Q^2`$ greater than several $`\mathrm{GeV}^2`$). Most models rely on the fact that, at high energy in the proton rest frame, the photon fluctuates into a $`q\overline{q}`$ pair a long time before the interaction and recombines into a $`\varrho `$ meson a long time after the interaction. The amplitude then factorizes into three terms, $`\psi _{\lambda _\rho }^\rho T_{\lambda _\rho \lambda _\gamma }\psi _{\lambda _\gamma }^\gamma `$, where $`T_{\lambda _\rho \lambda _\gamma }`$ are the interaction helicity amplitudes ($`\lambda _\gamma `$ and $`\lambda _\rho `$ being the helicities of the photon and the $`\varrho `$ meson, respectively) and the $`\psi `$ represent the wave functions. In most models, the $`q\overline{q}p`$ interaction is described by two-gluon exchange. The cross section is then proportional to the square of the gluon density in the proton: $`\sigma _{\gamma p}\propto \alpha _s^2(Q^2)/Q^6\left|xg(x,Q^2)\right|^2`$. 
The main uncertainties of the models come from the choice of the scale, of the gluon distribution parametrisation and of the $`\varrho `$ meson wave function (Fermi motion), and from the neglect of off-diagonal gluon distributions and of higher order corrections. ## 3 Signal The shape of the ($`\pi \pi `$) mass distribution has been studied as a function of $`Q^2`$. The mass distributions are skewed compared to a relativistic Breit-Wigner profile: an enhancement is observed in the low-mass region and a suppression on the high-mass side. This effect has been attributed to an interference between the resonant and the non-resonant production of two pions. The skewing of the mass distribution is observed to decrease with $`Q^2`$. ## 4 Cross sections • $`t`$ dependence: the data present the characteristic exponential fall-off of the $`t`$ distribution, $`\sigma \propto \mathrm{exp}(-b|t|)`$. The $`b`$ slope parameter, measured in different $`Q^2`$ intervals, is shown in Fig. 1a, confirming the decrease of $`b`$ when $`Q^2`$ increases from photoproduction to the deep-inelastic domain, which reflects the decrease of the transverse size of the virtual photon. • $`Q^2`$ dependence: Fig. 1b presents the $`Q^2`$ dependence of the $`\sigma (\gamma ^{}p\rightarrow \rho p)`$ cross section for $`W`$ = 75 GeV. The data are well described by the parametrisation ($`Q^2`$ \+ $`m_\rho ^2`$)<sup>-n</sup>, with $`n`$ = 2.24 $`\pm `$ 0.09 (full line). • $`W`$ dependence: the $`W`$ dependence of $`\sigma (\gamma ^{}p\rightarrow \rho p)`$ was measured for different $`Q^2`$ values, and the parametrisation $`\sigma \propto W^\delta `$ was fitted to the data. In a Regge context, $`\delta `$ can be related to the exchanged trajectory, and the values of the intercept $`\alpha (0)`$ are shown in Fig. 1c. The measurements are compared to the values $`1.08`$–$`1.10`$ obtained from fits to the total and elastic hadron–hadron cross sections. 
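The two parametrisations above can be sketched in a few lines of code. The $`\rho `$ mass value and the normalisations are illustrative, since only the exponent $`n`$ and the slope $`b`$ are constrained by the text:

```python
import math

M_RHO = 0.770  # GeV; illustrative rho-meson mass

def sigma_q2(q2, n=2.24, norm=1.0):
    """Q^2 dependence of the cross section, sigma ~ (Q^2 + m_rho^2)^(-n),
    with the fitted exponent n = 2.24."""
    return norm * (q2 + M_RHO**2) ** (-n)

def dsigma_dt(t_abs, b, norm=1.0):
    """Exponential t distribution, dsigma/dt ~ exp(-b * |t|)."""
    return norm * math.exp(-b * t_abs)
```

For instance, the ratio `sigma_q2(20.0) / sigma_q2(10.0)` gives the relative suppression of the cross section between two photon virtualities, independent of the unknown normalisation.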
These measurements suggest that the intercept of the effective trajectory governing high-$`Q^2`$ $`\varrho `$ electroproduction is larger than that describing elastic and total hadronic cross sections. The strong rise of the cross section with $`W`$, observed at high $`Q^2`$, is in agreement with the perturbative QCD prediction $`\sigma _{\gamma p}\propto |xg(x,Q^2)|^2`$. ## 5 Helicity studies The study of the angular distributions of the production and decay of the $`\varrho `$ meson gives information on the photon and $`\varrho `$ meson polarisation states. In the helicity frame, three angles are used: the polar ($`\theta `$) and azimuthal ($`\phi `$) angles of the $`\pi ^+`$ direction in the $`\varrho `$ meson centre of mass system (cms), and the angle $`\mathrm{\Phi }`$ between the electron scattering plane and the $`\varrho `$ meson production plane in the hadronic cms. The decay angular distribution $`W(\mathrm{cos}\theta ,\phi ,\mathrm{\Phi })`$ is a function of 15 matrix elements $`r_{ij}^\alpha `$ and $`r_{ij}^{\alpha \beta }`$, which are related to the helicity amplitudes $`T_{\lambda _\rho \lambda _\gamma }`$. Figure 2a presents the measurement of the 15 matrix elements (using the “moment method”) as a function of $`Q^2`$. In the case of s-channel helicity conservation (SCHC), the helicity of the vector meson is the same as that of the photon ($`T_{01}=T_{10}=T_{1-1}=T_{-11}=0`$), and 10 of the matrix elements vanish (dotted lines in Figs. 2a and 2b). The measurements of the matrix elements are in agreement with SCHC, except for the $`r_{00}^5`$ element, which is observed to be significantly different from zero. This element is proportional to the single helicity-flip amplitude $`T_{01}`$. Another way to extract the $`r_{00}^5`$ matrix element is to study the $`\mathrm{\Phi }`$ distribution. Indeed, the decay angular distribution $`W(\mathrm{\Phi })`$ depends on the combination $`(2r_{11}^5+r_{00}^5)`$. 
Figure 2b presents the results of the fits in different $`Q^2`$, $`W`$ and $`|t|`$ bins. Again, we observe a clear deviation of the $`r_{00}^5`$ parameter from the null value expected for SCHC. The ratio of the helicity-flip to the non-flip amplitudes is hence estimated to be $`8.0\pm 3.0`$ %. The ratio of the longitudinal to the transverse cross section, $`R=\sigma _L/\sigma _T`$, can be extracted from the measurement of the $`r_{00}^{04}`$ matrix element. $`R`$ is observed to increase with $`Q^2`$ and to reach the value $`R`$ = 3 – 4 for $`Q^2`$ $`\approx `$ 20 $`\mathrm{GeV}^2`$ (see Fig. 2c). The $`Q^2`$ dependence of the ratio $`R`$ is well described by the perturbative QCD models of Royen and Cudell (full line) and of Martin, Ryskin and Teubner (dashed line), and by the model of Schildknecht, Schuler and Surrow (dotted line), based on the generalised vector dominance model (GVDM). The following hierarchy between the helicity amplitudes, observed in the data, $`|T_{00}|>|T_{11}|>|T_{01}|>|T_{10}|,|T_{1-1}|`$, is in agreement with perturbative QCD calculations performed by Ivanov and Kirschner . ## 6 Conclusions The elastic electroproduction of $`\varrho `$ mesons has been studied at HERA with the H1 detector in a wide kinematical domain: $`1<Q^2<60`$ GeV<sup>2</sup> and 30 $`<`$ $`W`$ $`<`$ 140 GeV. Measurements of the cross section $`\sigma (\gamma ^{}p\rightarrow \rho p)`$ show an indication of an increasingly strong energy dependence as $`Q^2`$ increases. Full helicity studies have been performed, showing a small but significant violation of SCHC. The $`Q^2`$ dependence of the ratio $`R=\sigma _L/\sigma _T`$ was measured and is well described by two models based on perturbative QCD and by a model based on GVDM .
# Influence of the confinement geometry on surface superconductivity ## Abstract The nucleation field for surface superconductivity, $`H_{c3}`$, depends on the geometrical shape of the mesoscopic superconducting sample and is substantially enhanced with decreasing sample size. As examples we study circular, square, triangular and wedge-shaped disks. For the wedge the nucleation field diverges as $`H_{c3}/H_{c2}=\sqrt{3}/\alpha `$ with decreasing angle ($`\alpha `$) of the wedge, where $`H_{c2}`$ is the bulk upper critical field. PACS numbers: 74.80-g, 74.20De, 75.25Dw Recent progress in microfabrication techniques has made it possible to study mesoscopic superconducting samples of micrometer and submicrometer dimensions. The size and shape of such samples strongly influence the superconducting properties. Whereas bulk superconductivity exists at low magnetic field (either the Meissner state for $`H<H_{c1}`$ in type-I and type-II superconductors or the Abrikosov vortex state for $`H_{c1}<H<H_{c2}`$ in type-II superconductors), surface superconductivity survives in infinitely large bounded samples up to the third critical field $`H_{c3}\approx 1.695H_{c2}`$. For mesoscopic samples of different shapes, whose characteristic sizes are comparable to the coherence length $`\xi `$, recent experiments have demonstrated the Little-Parks-type oscillatory behavior of the phase boundary between the normal and the superconducting state in the $`H`$–$`T`$ plane, where $`T`$ and $`H`$ are the critical temperature and the applied magnetic field, respectively. While for a circular sample, where the superconducting state is characterized by a definite angular momentum, the oscillatory behavior of the nucleation field is well understood theoretically , the problem of the nucleation of superconductivity in arbitrarily shaped systems is still to be solved. 
Earlier numerical calculations by the present authors showed that the oscillations of the nucleation field with sample size for ellipsoidal and rectangular shaped samples disappear only for large aspect ratios. Recently, the asymptotic behavior of the nucleation field in the small-size limit, $`L\ll \xi `$, and in the large-size limit, $`L\gg \xi `$, where $`L`$ is the characteristic sample size, was found for smooth (i.e. without sharp corners) and square samples . The particular case of surface superconductivity in a rectangular loop and in a wedge was investigated recently. In Ref. a variational approach was used and it was found that $`H_{c3}`$ is maximal for a wedge angle of $`\alpha =0.44\pi `$. This is surprising in view of the intuitive idea that surface superconductivity is enhanced with decreasing size of the system; one would therefore expect $`H_{c3}`$ to be a monotonically increasing function of decreasing $`\alpha `$. The result of Ref. was also in disagreement with calculations for the particular case of a square sample. In the present paper, we calculate, within the Ginzburg-Landau (GL) mean field approach, the nucleation field for bounded square and triangular samples as well as for a wedge with infinitely long sides and arbitrary corner angle $`\alpha `$. We resolve the discrepancy between the results of Refs. and . Furthermore, we obtain the following analytical expression for the asymptotic behavior of the nucleation field of a wedge, $`H_{c3}/H_{c2}=\sqrt{3}/\alpha `$, in the limiting case $`\alpha \ll 1`$. Nucleation of superconductivity in a finite bounded sample. 
Close to the superconducting–normal transition the demagnetization effects are not important and, consequently, the nucleation of superconductivity is described by the first GL equation for the order parameter $`\mathrm{\Psi }`$ $$\frac{1}{2m^{}}\left(-i\hbar \stackrel{}{\nabla }-\frac{e^{}}{c}\stackrel{}{A}\right)^2\mathrm{\Psi }=\alpha \mathrm{\Psi }-\beta \mathrm{\Psi }|\mathrm{\Psi }|^2,$$ (1) where $`\alpha `$ and $`\beta `$ are the GL parameters, which depend on temperature, $`\stackrel{}{A}`$ is the vector potential of the applied magnetic field $`\stackrel{}{H}=\mathrm{rot}\stackrel{}{A}`$, and $`m^{}=2m`$ and $`e^{}=2e`$ are the mass and electrical charge of the Cooper pairs, respectively. We consider a uniform external magnetic field applied along the $`z`$-axis and assume that the order parameter is uniform in this direction. The latter is satisfied for samples which are close to the nucleation field and which have flat top and bottom sides. When calculating the nucleation field we neglect the nonlinear term and rewrite Eq. (1) as $$\left(-i\stackrel{}{\nabla }-\stackrel{}{A}\right)^2\mathrm{\Psi }=\mathrm{\Psi },$$ (2) where distance is measured in units of the coherence length $`\xi =\hbar /\sqrt{2m\alpha }`$, the vector potential in $`c\hbar /2e\xi `$, and the magnetic field in $`H_{c2}=c\hbar /2e\xi ^2`$, the bulk upper critical field. Thus the problem of the nucleation of superconductivity is reduced to the calculation of the lowest eigenvalue and corresponding eigenfunction of the two-dimensional (2D) linear operator $`\widehat{L}=(-i\stackrel{}{\nabla }-\stackrel{}{A})^2`$, which also describes the motion of a quantum particle. The transition from the normal to the superconducting state occurs when the lowest eigenvalue of $`\widehat{L}`$ becomes smaller than unity. 
Contrary to the case of the usual Schrödinger equation, where the wave function is taken equal to zero at the sample boundaries, here the normal component of the superconducting current must be zero at the boundary between the superconducting and the insulating material, $$\left(-i\stackrel{}{\nabla }-\stackrel{}{A}\right)|_n\mathrm{\Psi }=0.$$ (3) This difference in boundary condition drastically changes the behavior of the eigenvalues as a function of the sample size. Whereas the energy of a quantum particle increases as $`E\propto 1/L^2`$ for decreasing sample size $`L`$, the resulting energy for the boundary condition Eq. (3) decreases as $`E\propto L^2`$. As a result the nucleation field decreases with increasing sample size and tends to the limit $`H_{c3}=1.695H_{c2}`$ for infinitely large samples with a flat boundary. In circular samples this increase of $`H_{c3}`$ with decreasing sample size has an oscillatory behavior due to the quantization of the angular momentum, which for axially symmetric samples is an exact quantum number. To find the nucleation field for an arbitrarily shaped sample we use a finite-difference representation of the differential operator $`\widehat{L}`$ on a uniform Cartesian space grid within the link variable approach . The lowest eigenvalue and corresponding order parameter are found by applying the iteration procedure $`\widehat{L}\mathrm{\Psi }^i=\mathrm{\Psi }^{i-1}`$. The calculated nucleation field as a function of the square root of the sample area is shown in Fig. 1 for different geometries. Note that the oscillatory behavior of the nucleation field survives even in triangular samples, although the oscillations are less pronounced than for a circular geometry. The contour plots of the order parameter density $`|\mathrm{\Psi }|^2`$ are depicted in the insets of Fig. 1 for two square samples of different sizes. 
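The finite-difference scheme just described can be sketched as follows. This is a minimal illustration, not the authors' code: the symmetric gauge and the grid size are our choices, and instead of the iteration procedure of the text the sketch calls a standard sparse eigensolver. Omitting the hopping terms across the sample edge automatically implements the zero-current condition (3), and at $`H=0`$ a uniform $`\mathrm{\Psi }`$ is an exact zero mode of the discrete operator:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def lowest_gl_eigenvalue(L, H, N=32):
    """Lowest eigenvalue of (-i*grad - A)^2 on an L x L square sample
    (lengths in units of xi, field in units of H_c2), discretized with
    gauge-invariant link variables; the sample is superconducting when
    the returned value drops below 1."""
    h = L / (N - 1)
    x = np.linspace(-L / 2.0, L / 2.0, N)  # site coordinates
    M = lil_matrix((N * N, N * N), dtype=complex)
    idx = lambda i, j: i * N + j

    def add_link(a, b, u):
        # one gauge link a -> b with phase factor u = exp(-i * h * A_link)
        M[a, a] += 1.0 / h**2
        M[b, b] += 1.0 / h**2
        M[a, b] += -u / h**2
        M[b, a] += -np.conj(u) / h**2

    for i in range(N):
        for j in range(N):
            if i + 1 < N:  # x-link: A_x = -H*y/2 in the symmetric gauge
                add_link(idx(i, j), idx(i + 1, j),
                         np.exp(-1j * h * (-H * x[j] / 2.0)))
            if j + 1 < N:  # y-link: A_y = +H*x/2
                add_link(idx(i, j), idx(i, j + 1),
                         np.exp(-1j * h * (H * x[i] / 2.0)))
    return float(eigsh(M.tocsr(), k=1, which="SA",
                       return_eigenvectors=False)[0])
```

For a flat-edged sample much larger than $`\xi `$ the eigenvalue at field $`H`$ should approach $`H/1.695`$, and the sharp corners of the square push it lower still.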
For the smaller sample, with area $`S=(2.32\xi )^2`$, the maximum of the order parameter is situated in the center of the square, which corresponds to the Meissner state (see upper inset of Fig. 1). The jump in the first derivative of the nucleation field at $`\sqrt{S}\approx 2.33\xi `$ is caused by the appearance of a vortex in the center of the square (see lower inset of Fig. 1). This changes the slope of the nucleation field and corresponds to the appearance of a vortex state with a larger value of the angular momentum. The existence of a sequence of vortex-like states is inherent to all the samples considered. Notice also that the nucleation field is larger for samples with sharper corners, as expected intuitively. The nucleation of superconductivity in a wedge. For the wedge geometry it is more convenient to use cylindrical coordinates ($`\rho `$,$`\varphi `$) with $`A_\varphi =H\rho ^2/2`$ and to measure all distances in $`\sqrt{c\hbar /eH}`$. Then the linearized first GL equation takes the form: $$-\frac{1}{\rho }\frac{\partial }{\partial \rho }\rho \frac{\partial \mathrm{\Psi }}{\partial \rho }-\frac{1}{\rho ^2}\left(\frac{\partial }{\partial \varphi }-i\rho ^2\right)^2\mathrm{\Psi }=\frac{2H_{c2}}{H}\mathrm{\Psi },$$ (4) with the boundary conditions $$\left(\frac{\partial }{\partial \varphi }-i\rho ^2\right)\mathrm{\Psi }|_{\varphi =0,\alpha }=0,\frac{\partial \mathrm{\Psi }}{\partial \rho }|_{\rho \rightarrow \mathrm{\infty }}=0.$$ (5) The nucleation field $`H_{c3}=2H_{c2}/\lambda `$ is obtained from the lowest eigenvalue $`\lambda `$ of the operator on the LHS of Eq. (4). In order to find the lowest eigenvalue and the corresponding eigenfunction we again use the finite-difference technique described above. Instead of an infinitely long wedge we consider a finite fragment with a very large radius $`R=15/\sqrt{\alpha }`$, such that the value of $`\lambda `$ is independent of $`R`$. Fig. 2 shows the spatial distribution of the square of the absolute value of the order parameter $`|\mathrm{\Psi }|^2`$ for two different wedge angles, $`\alpha =0.05\pi `$ and $`0.5\pi `$. 
For $`\alpha =0.05\pi `$, the order parameter practically does not depend on the azimuthal angle and decays faster than exponentially deep into the sample. For $`\alpha =\pi /2`$, the order parameter still decays quickly with the radius, but a prominent angular dependence appears, especially at large radii. This expected behavior, namely that superconductivity exists only near the corner , implies a decay of the order parameter from the wedge side into the sample. The numerically obtained nucleation field is shown in Fig. 3 by the solid curve, which decreases monotonically with increasing wedge angle and diverges in the limit $`\alpha \rightarrow 0`$. For $`\alpha =\pi /2`$, we found $`H_{c3}\approx 1.96H_{c2}`$, which is close to the estimated result $`H_{c3}\approx 1.82H_{c2}`$ of . Note that increasing the wedge angle beyond $`\alpha =\pi /2`$ changes the nucleation field only weakly, and we recover the well-known result $`H_{c3}=1.695H_{c2}`$ for $`\alpha =\pi `$ . Because the order parameter varies only weakly with the angle, we can find the asymptotic behavior of the order parameter analytically for small wedge angles $`\alpha \ll 1`$. To this end we rewrite Eqs. (4,5) as $$-e^{-ix^2\eta }\frac{1}{x}\frac{\partial }{\partial x}x\frac{\partial }{\partial x}e^{ix^2\eta }\psi -\frac{1}{x^2\alpha ^2}\frac{\partial ^2\psi }{\partial \eta ^2}=\mu \psi ,$$ (6) with the boundary condition $$\frac{\partial \psi }{\partial \eta }|_{\eta =0,1}=0,$$ (7) where $`x=\sqrt{\alpha }\rho `$, $`\eta =\varphi /\alpha `$, $`\mu =\lambda /\alpha `$, $`\psi =exp(-ix^2\eta )\mathrm{\Psi }`$. The second term on the LHS of Eq. (6) dominates for very small wedge angles. Therefore, the new order parameter $`\psi `$ depends only weakly on the angle $`\eta `$, which is also in agreement with the boundary condition (7). This allows us to simplify the problem. Using the boundary condition (7) we integrate Eq. 
(6) over $`\eta `$, assuming that $`\psi `$ depends only on the radius, and obtain $$-\frac{\partial ^2\psi }{\partial x^2}-(\frac{1}{x}+2ix)\frac{\partial \psi }{\partial x}-(2i-\frac{4x^2}{3})\psi =\mu \psi .$$ (8) Note that the wedge angle does not enter into the last equation, so the reduced eigenvalue $`\mu `$ does not depend on $`\alpha `$. After the substitution $`\psi (x)=exp(-ix^2/2)f(x)`$ Eq. (8) is reduced to the well-known equation for the harmonic oscillator, with the lowest eigenvalue $`\mu =2/\sqrt{3}`$. Thus the order parameter for sharp wedges, $`\alpha \ll 1`$, can be written as $$\mathrm{\Psi }=exp\left((i\varphi -i\frac{\alpha }{2}-\frac{\alpha }{2\sqrt{3}})\rho ^2\right),$$ (9) and the nucleation field is inversely proportional to the wedge angle $$H_{c3}=\frac{\sqrt{3}}{\alpha }H_{c2}.$$ (10) Our numerical results deviate from the asymptotic expression (10) by about $`10\%`$ when the wedge angle is increased to $`\alpha \approx 0.15\pi \approx 0.5`$ (compare the dashed curve with the solid curve in Fig. 3). As is evident from Eq. (6), the corrections to the above asymptotic nucleation field are of second order, $`O(\alpha ^2)`$. From this observation we tried to fit $`H_{c3}/H_{c2}`$ to a function $`g(\alpha ^2)\sqrt{3}/\alpha `$. Within the accuracy of our numerical results we found that the nucleation field for a wedge is accurately fitted by $$\frac{H_{c3}}{H_{c2}}=\frac{\sqrt{3}}{\alpha }\left(1+0.14804\alpha ^2+\frac{0.746\alpha ^2}{\alpha ^2+1.8794}\right).$$ (11) This function is shown by the dotted curve in Fig. 3 and agrees very well with our numerical results (solid curve). Let us compare the asymptotic result for the nucleation field in a wedge with those for thin strips and small circles. Assuming that the order parameter depends only weakly on the width $`d`$ of the strip and on the radius $`R`$ of the circle, we find the following asymptotic expressions for the nucleation field: $`H_{c3}=\sqrt{3}H_{c2}\xi /d`$ for the strip and $`H_{c3}=2\sqrt{2}H_{c2}\xi /R`$ for the circle. 
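The asymptotic law (10) and the fit (11) can be compared directly; the sketch below evaluates both (the limiting values checked in the assertions follow from the text):

```python
import math

def hc3_ratio_asymptotic(alpha):
    """Small-angle limit, Eq. (10): H_c3/H_c2 = sqrt(3)/alpha."""
    return math.sqrt(3.0) / alpha

def hc3_ratio_fit(alpha):
    """Fit to the numerical results, Eq. (11)."""
    a2 = alpha * alpha
    return (math.sqrt(3.0) / alpha) * (1.0 + 0.14804 * a2
                                       + 0.746 * a2 / (a2 + 1.8794))
```

As expected, the fit reduces to Eq. (10) for small wedge angles, and at the flat-surface limit `hc3_ratio_fit(math.pi)` returns a value close to the well-known 1.695.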
In conclusion, for $`L\ll \xi `$, where $`L`$ is the smallest dimension of the sample, the nucleation field increases inversely with the sample size, which corresponds to $`L\sim \alpha \xi `$ in the case of a wedge-like sample. Note added in proof: The authors of Ref. recently found a mistake in their calculation, which was corrected in the erratum . The new asymptotic behavior found with their variational calculation is $`H_{c3}/H_{c2}=\sqrt{2/3}/\alpha =0.816/\alpha `$, which is a factor $`3/\sqrt{2}=2.12`$ smaller than our exact result (10). We acknowledge discussions with J.T. Devreese and V.V. Moshchalkov. This work was supported by the Flemish Science Foundation (FWO-Vl) and IUAP-VI. FMP is a research director with the FWO-Vl.
# Core-softened potentials and the anomalous properties of water ## I Introduction Water is an anomalous substance in many respects. Liquid water has a maximum as a function of temperature in both density and isothermal compressibility. It solidifies with an increase in volume at low pressures, and the solid phase (ice) shows a remarkable variety of crystalline structures in different sectors of the pressure-temperature plane. Some of these properties have been known for a long time, but their origin is still controversial. In an effort to rationalize these anomalous properties, the supercooled (metastable) sector of the phase diagram of liquid water has received much attention in recent years. It was observed that when appropriately cooled (using techniques to prevent crystallization) water becomes a viscous fluid, with many properties (such as the heat capacity and the isothermal compressibility) displaying trends that have even suggested a thermodynamic singularity at some lower temperature. Although there is a limit of about 235 K below which water cannot be cooled without crystallization, amorphous states of water at much lower temperatures can be obtained by different techniques. All these amorphous states are observed to correspond to one of two different structures (referred to as low-density amorphous, LDA, and high-density amorphous, HDA) that differ by about 20 per cent in density and transform reversibly into one another upon changes of pressure. There is evidence that these amorphous states are thermodynamically connected with fluid water, although a direct verification is not possible due to recrystallization at intermediate temperatures. The observation of LDA and HDA was an experimental clue that led to the proposal of the second-critical-point hypothesis. This hypothesis states that in the deeply supercooled region water can exist in two different amorphous configurations, separated by a line of first-order transitions. 
This line should end in a critical point, very much as the usual liquid-vapor line ends in a critical point. This hypothesis, in addition to obviously explaining the reversible transformation between LDA and HDA, provides a natural though phenomenological explanation for the anomalous behavior of the density and the isothermal compressibility. However, the very existence of the second critical point is known to be unnecessary for the appearance of other anomalies, and the question of which microscopic properties of water molecules may produce the second critical point is only poorly understood. In all cases, the crucial ingredient seems to be the fact that water (because of the particular form of its molecules and peculiarities of the hydrogen bond) exhibits competition between more expanded structures (preferred at low pressures) and more compact ones (favored at high pressures). But it is not obvious to what extent this simple fact can be made responsible for all the anomalies of water, or whether more subtle properties of the interaction potential (in particular, cooperative hydrogen bonding) are crucial. Numerical simulations based on some of the available pair potentials for the interaction between water molecules reproduce many of its properties reasonably well, although the systems that have been studied are strongly limited in size due to computational constraints. These simulations only suggest the existence of the second critical point; up to now they have not been able to prove its existence unambiguously. Other simplified, and in some cases ad hoc, models have been devised to show the appearance of anomalous properties in the phase diagram. Some of these models have a second critical point, but in these cases a global characterization of the phase diagram that includes all the other anomalies has not been achieved. 
In all cases, the models used have as a fundamental ingredient the competition between expanded, less dense structures and compressed, denser ones. It is the goal of the present work to show that a very simple model of spherical particles interacting through a repulsive potential that possesses two different preferred equilibrium distances has a phase diagram in which a) lines of maxima of the density and the isothermal compressibility of the liquid exist; b) the fluid phase freezes with an increase in volume in some pressure range; c) the solid phase has multiple different crystalline structures depending on $`P`$ and $`T`$. When a long-range van der Waals attraction is included on top of the previous, purely repulsive potential, the system d) preserves the anomalies existing in the non-attractive case; e) develops a liquid-gas first-order coexistence line that ends in a critical point in the usual fashion; f) depending on the strength of the attractive potential, develops a line of first-order transitions separating two amorphous phases in the supercooled region. This line ends in a second critical point from which the line of maxima of the isothermal compressibility starts. These statements will be justified mostly using numerical (Monte Carlo) techniques. However, for deeply supercooled states, the long equilibration times make the numerical studies not completely reliable. In this case, the numerical results are supported by analytical calculations in a limiting case of the interaction potential that show neatly how the second critical point appears. The paper is organized as follows. In Section II we give a brief description of the interaction potential and the numerical technique. Section III focuses on the different stable crystalline configurations of the system. In Section IV we study the phase diagram of the fluid phase, both where it is thermodynamically stable and in the supercooled region. 
Here we rely both on analytical calculations and on simulations, and show how a second critical point can appear. In Section V we describe the melting of the most expanded solid structure and its anomalies. Finally, in Section VI we take all the results as a whole and comment upon their importance for the understanding of the properties of water.

## II Model and numerical details

The interaction potential $`U(r)`$ between particles that we will consider is chosen to be the hard-core plus linear-ramp potential originally studied by Stell and Hemmer. The radius of the hard core is taken to be $`r_0`$, and the ramp extends linearly from the value $`\epsilon _0`$ at $`r=r_0`$ down to 0 at $`r=r_1`$. In addition, a long range van der Waals attraction will be included through a global term in the energy per particle of the system proportional to $`-\gamma /v`$, with $`v`$ the specific volume and $`\gamma `$ a coefficient that represents the total integrated strength of the attraction. The van der Waals term can be accounted for without its explicit inclusion in the simulations in the following way. Since the free energy per particle contains the term $`Pv-\gamma /v`$, when minimizing with respect to $`v`$ the combination $`P+\gamma /v^2`$ appears. So we will call $`P^{}\equiv P+\gamma /v^2`$, and perform the simulations in terms of $`P^{}`$, with $`\gamma =0`$. At the end, the self-consistent replacement $`P^{}=P+\gamma /v^2`$ is made, and this provides the results for finite $`\gamma `$. Numerical simulations are performed at constant $`P`$, $`T`$, and $`N`$ (the number of particles) by standard Monte Carlo techniques. Periodic boundary conditions are used in the three directions. The equilibrium volume at each pressure is reached by allowing the system size to increase or decrease through Monte Carlo movements that expand or contract all coordinates of the particles as well as the total size of the system.
The rescaling is accepted or rejected depending on the energy change it involves. The contraction-expansion procedure is made independently for the three spatial coordinates, and the maximum ratio between the sizes of the system in different directions is limited to 1.2. A subtle but important technical change was introduced in the simulation procedure in order to equilibrate the volume in the low temperature region. Consider for instance the $`T=0`$ case. If we increase $`P`$, as soon as two particles are at a distance $`r_0`$ from each other the total volume gets stuck in the simulation, because (due to the scheme adopted for volume changes) the volume can be reduced further only if a global contraction of all coordinates reduces the energy. But this contraction would bring the two particles (which are already at a distance $`r_0`$) to a distance lower than $`r_0`$, i.e., formally to a state of infinite energy, and so the trial movement is rejected. To avoid this problem we slightly relax the rigid hard core at $`r_0`$, replacing it by a new linear ramp between $`r=0`$ and $`r=r_0`$ of the form $`U(r)=\epsilon _0[50(1-r/r_0)+1]`$. The last term is included in order to match the potential smoothly at $`r=r_0`$. This modification was seen not to change the behavior of the system; it simply provides a convenient way of reaching the equilibrium values of the volume within a reasonable computing time in the low temperature regime. The property of the potential that makes it interesting for our problem is that, depending on the external force acting on them, two particles prefer to be at distance $`r_0`$ or $`r_1`$ from each other, and the effect of this simple fact on the phase diagram is dramatic. Already in the original papers about this potential it was realized that the competition between configurations with particles at distances $`r_0`$ and $`r_1`$ may produce the appearance of polymorphism in the system.
More precisely, in 1D the system may exhibit many liquid phases when $`\gamma \ne 0`$, with sharp transitions between them. In 3D these transitions occur within the solid region of the phase diagram, and it was suggested that they may still be observable as isostructural transitions of the solid. In this connection we want to emphasize the following two points: i) isostructural transitions within the solid phase for this kind of potential are usually preempted by the appearance of new intervening solid phases of different symmetry; ii) polymorphism of the liquid state, as observed in the 1D system, also appears in 3D samples, but now in the supercooled liquid state, and is responsible for the existence of the second critical point. We address these two points in the next two sections.

## III Crystalline configurations at $`T=0`$

Multiple crystalline structures for our potential arise from the competition between expanded and contracted structures. At $`T=0`$ the preferred configuration of the system is the one that minimizes the enthalpy $`h\equiv e+Pv`$. At low $`P`$ the best way of minimizing $`h`$ is first to minimize $`e`$, and then $`v`$. This is achieved in a close packed structure with nearest-neighbor distance equal to $`r_1`$. At very large $`P`$, in order to minimize $`h`$ it is energetically more convenient to minimize $`v`$, and the structure becomes again a close packed one with nearest-neighbor distance equal to $`r_0`$. However, in the range in which these two configurations have approximately the same enthalpy, there are others which are more stable, having pairs of neighboring particles at distances both $`r_0`$ and $`r_1`$. For our potential $`U(r)`$ (and also for more general potentials) the existence of other stable configurations can be demonstrated quite generally. However, to tell with certainty which structure has the lowest enthalpy at each $`P`$ is a problem for which a closed solution is not known.
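The $`T=0`$ competition just described can be made concrete with a small numerical sketch (our own illustration, not a calculation from the paper). It restricts the search to isotropically scaled FCC crystals with nearest-neighbor distance $`d`$ between $`r_0`$ and $`r_1`$ — the true ground states at intermediate $`P`$ involve other Bravais lattices — and minimizes $`h=e+Pv`$ over $`d`$ at each pressure. The function names and the brute-force shell sum are our own choices, with $`r_1/r_0=1.75`$ and $`\epsilon _0=r_0=1`$:

```python
import itertools
import math

def U(r, r0=1.0, r1=1.75, eps0=1.0):
    """Hard core at r0 plus a linear repulsive ramp vanishing at r1."""
    if r < r0:
        return math.inf
    if r < r1:
        return eps0 * (r1 - r) / (r1 - r0)
    return 0.0

def fcc_enthalpy(d, P):
    """h = e + P v per particle for an FCC crystal with nearest-neighbour
    distance d (isotropic scaling only).  Lattice sites are (i,j,k) a/2
    with i+j+k even; the cubic lattice parameter is a = d sqrt(2)."""
    a = d * math.sqrt(2.0)
    e = 0.0
    for i, j, k in itertools.product(range(-4, 5), repeat=3):
        if (i, j, k) == (0, 0, 0) or (i + j + k) % 2:
            continue
        # half the pair energy to each particle of the pair
        e += 0.5 * U(0.5 * a * math.sqrt(i * i + j * j + k * k))
    v = d ** 3 / math.sqrt(2.0)   # volume per particle of FCC
    return e + P * v, v

def best_fcc(P, r0=1.0, r1=1.75, steps=200):
    """Minimise h over d in [r0, r1]; returns (h, v) at the optimum."""
    return min(fcc_enthalpy(r0 + (r1 - r0) * s / steps, P)
               for s in range(steps + 1))
```

At low pressure the optimum sits at $`d=r_1`$ and at very high pressure at $`d=r_0`$, while at intermediate pressures partially compressed lattices win, so within even this restricted family the volume collapses in stages.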
The usual approach is to compare the enthalpies of structures proposed beforehand, using simulated annealing techniques to guess the possible structures. We refer the reader to the literature for a detailed discussion of the two-dimensional case, and also for discussions of the necessary conditions on the potential for different structures to appear. Here we show only, in Figure 1, the result of comparing (for $`r_1/r_0=1.75`$) the enthalpies of particles arranged in Bravais lattices of the cubic, tetragonal, rhombohedral, and hexagonal systems, as a function of $`P`$ and $`\gamma `$. The structures searched are those that can be defined by no more than two parameters (which fix the form and size of the Bravais lattice). All of them were assumed to have only one particle per unit cell, except for HCP (which has two), which was included due to its known stability. We find five different crystalline configurations as a function of $`P`$. Note also how increasing the van der Waals attraction moves all the borders between structures to lower pressures, since the structures that are stable at higher $`P`$ are always more compact, and thus become more stable in the presence of the van der Waals term. It must be kept in mind that there may be other configurations (corresponding to other crystalline systems, or with more complex unit cells) with lower enthalpy. In fact, they are likely to occur, since for instance in the 2D case crystalline structures with up to five particles per unit cell appear. Only a thorough numerical search can determine all possible structures.

## IV Properties in the Fluid and Supercooled Regions

In the previous section we saw that at $`T=0`$ there is a sequence of solid phases interpolating between the lowest density and highest density ones. At each transition (as $`P`$ is increased) a finite fraction of the particles that were at distance $`r_1`$ from each other moves to distance $`r_0`$.
Each of these rearrangements involves a change of symmetry, and thus the appearance of a new crystalline structure. The picture is different in the metastable, disordered sheet of the phase diagram. At very low pressures the particles behave as hard spheres with radius $`r_1/2`$. Hard spheres are known to have a maximum density of random packing, corresponding to a volume per particle $`v_1`$, which in our case is $`v_1\simeq 0.808r_1^3`$. Being the densest disordered structure possible for hard spheres, this is also the thermodynamically stable amorphous configuration of the system at $`T=0`$, namely, the one which minimizes the enthalpy. When $`P\to \infty `$ the linear ramp of $`U(r)`$ is irrelevant to the free energy, and the thermodynamically stable configuration is again a random packing of spheres, now with radius $`r_0/2`$. As in the crystalline case, the nearest neighbor distance between particles must collapse from $`r_1`$ to $`r_0`$ as a function of $`P`$. The crucial question is whether this collapse is discontinuous at some well defined $`P`$ (or even whether there is more than one transition at different values of $`P`$), or whether it is just a smooth crossover. We address this issue in the following two subsections, first analytically (when $`r_1/r_0\to \infty `$) and then numerically, but let us quote the answer briefly in advance. For the case of no van der Waals attraction the behavior of the specific volume $`v`$ as a function of $`P`$ is smooth at any finite temperature. But there is a range of pressures in which $`|\partial v/\partial P|`$ is anomalously large. This fact is enough for the van der Waals interaction to produce (if sufficiently strong) a metastable critical point and a line of first order transitions between two disordered structures with a finite difference in density.

### A The limit $`r_1/r_0\to \infty `$

We will start by considering only the repulsive part of the potential (i.e., $`\gamma =0`$).
As we already said, at very low pressures the particles behave as hard spheres with radius $`r_1/2`$. At $`T=0`$ the enthalpy per particle of this configuration is $`h=Pv_1`$. This is the thermodynamically stable state upon increasing $`P`$, up to the point where it becomes energetically convenient to overlap neighboring particles in pairs. The structure is then similar to the low pressure one, but with two particles overlapped at each position (see the sketch in Figure 2). The enthalpy of this configuration is $`h^{}=Pv_1/2+\epsilon _0/2`$, since now the total volume of the system is reduced by a factor of 2, and an energy $`\epsilon _0`$ must be counted for each pair of particles. The pressure at which $`h=h^{}`$ determines the transition pressure $`P_{TR}=\epsilon _0/v_1`$. Close to the point ($`P_{TR}`$, $`T=0`$) of the phase diagram, we can calculate the free energy of the system approximately in the following way. Suppose we have $`N`$ particles, $`n`$ of them in non-overlapped positions and $`n^{}`$ pairs of overlapped particles ($`N=n+2n^{}`$). The configurational free energy of the system may be written as (overlaps of more than two particles, which occur at higher pressures, are neglected) $$F=[Pv^{}-Ts^{HS}(v^{})](n+n^{})+\epsilon _0n^{}-Ts^I$$ (1) where $`v^{}\equiv V/(n+n^{})`$ \[note that $`v^{}`$ is not the specific volume, which instead is given by $`v=V/(n+2n^{})`$\], $`s^{HS}(v^{})`$ is the entropy per particle of hard spheres of radius $`r_1/2`$ on the metastable sheet, and $`s^I`$ is the configurational entropy for choosing which particles are in pairs and which ones are single $`\left[s^I=k_B\mathrm{ln}\left(\begin{array}{c}n+n^{}\\ n^{}\end{array}\right)\right]`$.
Using $`v^{}`$ and $`n^{}`$ as independent variables for minimizing $`F`$ we obtain the equations $`{\displaystyle \frac{P}{T}}`$ $`=`$ $`{\displaystyle \frac{\partial s^{HS}(\stackrel{~}{v}^{})}{\partial \stackrel{~}{v}^{}}}`$ (2) $`{\displaystyle \frac{P\stackrel{~}{v}^{}}{T}}-s^{HS}(\stackrel{~}{v}^{})`$ $`=`$ $`{\displaystyle \frac{\epsilon _0}{T}}-k_B\mathrm{ln}{\displaystyle \frac{\left(1-2n^{}/N\right)^2}{\left(1-n^{}/N\right)n^{}/N}}`$ (3) (we use $`\stackrel{~}{v}^{}`$ in this case with $`\gamma =0`$, to distinguish from the $`\gamma \ne 0`$ case). The first one is the equation of state of hard spheres in the metastable region. We will use for it the following expression provided by Speedy $$\frac{P}{T}=\frac{2.65k_B}{\stackrel{~}{v}^{}-v_1}$$ (4) For given values of $`P`$ and $`T`$, $`\stackrel{~}{v}^{}`$ is determined from this equation, and the value obtained is used to determine $`n^{}`$ from (3). The volume per particle of the system is $`\stackrel{~}{v}\equiv V/N=\stackrel{~}{v}^{}(n+n^{})/N`$. Although $`\stackrel{~}{v}^{}`$ has the same dependence on $`P`$ and $`T`$ as for hard spheres, the $`(n+n^{})`$ factor (which takes the value $`N`$ when $`T=0`$, $`P<P_{TR}`$, and $`N/2`$ when $`T=0`$, $`P>P_{TR}`$) makes the dependence of $`v`$ on $`T`$ and $`P`$ nontrivial. The surface $`\stackrel{~}{v}(P,T)`$ obtained from equations (2), (3), and (4) is shown in Fig. 3. At $`T=0`$ we obtain the expected result, with $`\stackrel{~}{v}(P,T=0)`$ passing from $`v_1`$ to $`v_1/2`$ at the transition pressure $`P_{TR}`$. But at any finite $`T`$ the entropy transforms this jump into a smooth crossover. The point $`P=P_{TR}`$, $`T=0`$ is for this system the metastable critical point. The inclusion of a finite long range attraction through a nonzero $`\gamma `$ causes the critical point to move into the $`T>0`$ region. The mechanism is identical to the one that produces the usual liquid-gas coexistence curve.
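Equations (2)–(4) are straightforward to evaluate numerically. The sketch below is our own illustration: it assumes $`s^{HS}(v)=2.65k_B\mathrm{ln}[(v-v_1)/v_1]`$, whose derivative reproduces Speedy's form (4) but whose integration constant (set to zero here) is an assumption, not a value quoted in the text; units are $`\epsilon _0=v_1=k_B=1`$. It also iterates the replacement $`P\to P+\gamma /v^2`$ of Section II self-consistently for finite $`\gamma `$:

```python
import math

def v_gamma0(P, T, eps0=1.0, v1=1.0):
    """Specific volume v~(P,T) of the gamma = 0 model, from Eqs. (2)-(4).
    ASSUMES s_HS(v) = 2.65 ln((v - v1)/v1), i.e. integration constant 0."""
    vp = v1 + 2.65 * T / P                       # Eq. (4): metastable HS EOS
    s_hs = 2.65 * math.log((vp - v1) / v1)
    # Eq. (3) rearranged: ln[(1-2x)^2 / ((1-x) x)] = A, with x = n'/N
    A = min((eps0 - P * vp) / T + s_hs, 700.0)   # clamp avoids exp overflow
    eA = math.exp(A)
    # root of (4 + eA) x^2 - (4 + eA) x + 1 = 0 lying in [0, 1/2]
    x = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 / (4.0 + eA)))
    return vp * (1.0 - x)                        # v = v~' (n + n')/N

def v_finite_gamma(P, T, gamma, n_iter=500):
    """Damped fixed-point solution of v = v~(P + gamma/v^2), Section II."""
    v = v_gamma0(P, T)
    for _ in range(n_iter):
        v = 0.5 * (v + v_gamma0(P + gamma / v ** 2, T))
    return v
```

With these conventions $`P_{TR}=\epsilon _0/v_1=1`$: at low $`T`$ the volume drops from near $`v_1`$ below the transition region to near $`v_1/2`$ above it, the smooth remnant of the $`T=0`$ jump, and a finite $`\gamma `$ shifts the curve towards smaller volumes.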
We must replace $`P`$ by $`P+\gamma /v^2`$ in expressions (2), (3), and (4) to take account of the van der Waals attraction. A singularity in $`v(P)`$ at a finite temperature exists if $`\partial v/\partial P`$ becomes positive (this is the signature of a van der Waals loop, and thus of a first order transition). Since $$v(P)=\stackrel{~}{v}(P+\gamma /v^2),$$ (5) we can calculate $`\partial v/\partial P`$ as $$\frac{\partial v}{\partial P}=\frac{\partial \stackrel{~}{v}(x)/\partial x}{1+\frac{2\gamma }{v^3}\,\partial \stackrel{~}{v}(x)/\partial x}\bigg|_{x=P+\gamma /v^2}.$$ (6) We see that a singularity occurs if $`\partial \stackrel{~}{v}(P)/\partial P`$ is larger in absolute value than $`\frac{v^3}{2\gamma }`$. This always happens in our model close enough to $`P_{TR}`$, $`T=0`$. In Figure 4 the function $`v(P,T)`$, calculated using the self-consistency condition (5) with $`\gamma _1=0.1`$ ($`\gamma _1\equiv \gamma \epsilon _0^{-1}r_1^{-3}`$), is shown. The rapid change in $`v`$ as a function of $`P`$ close to the critical point is responsible for the anomalous behavior of $`v`$ and of the isothermal compressibility $`K_T\equiv -\frac{1}{v}\frac{\partial v}{\partial P}`$. The location in the $`P`$-$`T`$ diagram of the extrema of $`v`$ and $`K_T`$ as a function of temperature is shown in Figure 5. This figure also shows the first order line that appears due to the van der Waals attraction, ending in the critical point C’, and the two spinodal lines that mark the limit of metastability of the two phases on either side of the first order line. Note that the singularity of the thermodynamic properties at the critical point manifests itself in anomalous properties (of $`v`$ and $`K_T`$) that can be detected at higher temperatures. The analytical treatment of the case $`r_1/r_0\to \infty `$ provides insight into the appearance of the second critical point in the phase diagram of water. In fact, the anomalies in $`v`$ and $`K_T`$ exist even for exclusively repulsive potentials.
It is the van der Waals attraction that brings the critical point to a finite temperature, in the same way that it is this attraction (or a more realistic finite range one) that generates the familiar liquid-gas coexistence line. We will now see how much of this scenario remains for finite $`r_1/r_0`$.

### B Numerical results for finite $`r_1/r_0`$

When $`r_1/r_0`$ is finite, no analytical calculation seems to be able to settle the existence or not of the metastable critical point. But guided by the previous findings, we can interpret the numerical results more safely. In Figure 6 we show the results of numerical simulations for $`r_1/r_0=1.5`$. Rapid runs (2000 steps per temperature) were made by decreasing the temperature at different values of $`P`$ in a system of 197 particles. The rapid cooling allows the system to reach the supercooled states without crystallization, except in the continuous-dashed region. The curves shown are averages over 20 different runs. Only the points at which the system did not crystallize and displays well reproducible values of the density are shown. In spite of this, since the runs were rapid and the low temperature states are highly viscous, some doubts may be raised about the final state reached as $`T\to 0`$: it might be that we are observing some frozen configuration typical of larger $`T`$. To answer this point we made runs at a low temperature ($`T=0.01`$), increasing and decreasing the pressure (Fig. 7). The results of this simulation show evident hysteresis effects due to the glassiness of the states. This hysteresis was seen not to be greatly reduced by decreasing the rate of change by a factor of ten. But in any case the hysteresis path encloses the values of $`v`$ obtained by decreasing $`T`$ at fixed $`P`$ (large symbols in Fig. 7), and we have also checked that the radial distribution functions are comparable in both cases.
The finding of essentially the same results when we arrive from different paths in the $`P`$-$`T`$ plane is an indication that these are in fact thermodynamic values. There is no sign in Fig. 6 (in contrast to the $`r_1/r_0\to \infty `$ case) of an abrupt jump in $`v`$ as a function of $`P`$ at $`T=0`$; all that remains is a value of pressure with a maximum in $`|\partial v(P,T=0)/\partial P|`$ (as seen in Fig. 7, close to $`P_0=2`$). Around this value $`K_T`$ has a maximum (as a function of $`T`$) at $`T=0`$, whereas $`v`$ has maxima and minima at finite temperature, as can be seen from Fig. 6 in the range $`1.0\lesssim P_0\lesssim 1.8`$ (on the dashed line). These facts are sufficient for the van der Waals attraction to induce the appearance of a critical point, if $`\gamma `$ is large enough. In fact, we find that for $`\gamma _0\geq \gamma _0^{cr}\simeq 4.2`$ a critical point enters the phase diagram at $`T=0`$, $`P_0\simeq 0.65`$. The location of the critical point as a function of $`\gamma `$ can be determined from data such as those of Fig. 6 by requiring that $`\partial v/\partial P\to \infty `$ and $`\partial ^2v/\partial P^2\to \infty `$, calculated according to Eq. (6). We find in our case that for $`\gamma _0`$ slightly larger than $`\gamma _0^{cr}`$ the critical point position can be estimated as $`T^{cr}`$ $`\simeq `$ $`0.07(\gamma _0-4.2)`$ (7) $`P_0^{cr}`$ $`\simeq `$ $`0.65-0.35(\gamma _0-4.2).`$ (8) The locus of the anomalies of $`v`$ and $`K_T`$ also moves with $`\gamma `$. We note that if $`\gamma `$ is such that the critical point exists, the line of $`K_T`$ maxima necessarily ends at the critical point (since at the critical point $`K_T\to \infty `$). For the extrema of $`v`$ this is not necessarily so, although it is known that the anomalies in $`K_T`$ and $`v`$ are thermodynamically related.

## V Characteristics of melting

In this section we show results of numerical simulations that focus on the melting of the most expanded of the solid phases of our system (which is the equivalent of ice Ih in water).
Figure 8 shows the specific volume $`v`$ as a function of $`T`$ for different values of $`P`$, obtained in slow simulations decreasing and increasing $`T`$ in a system of 216 particles. The hysteresis upon heating and cooling brackets the position of the thermodynamic melting temperature. Over the whole range of $`P`$ shown in this figure, the system freezes into one and the same solid configuration, corresponding to a dense stacking of triangular planes, in which each particle has twelve nearest neighbors at distance $`r_1`$ (the dispersion in the limiting value of $`v`$ as $`T\to 0`$ is due to a few defects that remain in the solid structure). Upon increasing $`T`$, $`v`$ increases for $`P_0\lesssim 0.95`$ (which is of course the standard behavior), but decreases for larger $`P_0`$. This decrease is driven by the possibility for particles to be at distances smaller than $`r_1`$ from each other. Depending on $`P`$, the tendency of particles to come closer (gaining enthalpy through the $`Pv`$ term) may overcome the entropic tendency to increase $`v`$. In the same way, at the lowest pressures the solid melts by increasing its volume, whereas at the largest pressures shown in Fig. 8 it melts by reducing its volume. This is consistent with the form of the solid-liquid border in the $`P`$-$`T`$ plane seen in Figure 9, which has positive slope at low $`P`$ but negative slope at larger $`P`$. In the same Figure 9 we see the modification of the phase diagram when we include the van der Waals attraction. Any finite value of $`\gamma `$ makes a liquid-gas first order line appear. On the scale of Fig. 9 this line cannot be distinguished from the $`P=0`$ axis; only the critical temperature is indicated. In addition, the whole solid-fluid coexistence line basically moves down with $`\gamma _0`$.
If $`\gamma _0\lesssim 5.5`$ the triple point that is defined is ‘standard’, in the sense that the slope of the solid-liquid coexistence line is positive at the triple point. For larger $`\gamma _0`$ the triple point is ‘anomalous’ (the slope of the solid-liquid coexistence line is negative).

## VI Summary and conclusions

We studied the phase diagram of a model of spherical particles with pairwise interactions, consisting of a hard core at a distance $`r_0`$ plus a repulsive linear shoulder that extends up to distance $`r_1`$. This potential makes the particles prefer one or the other (depending on $`P`$) of the two different equilibrium distances $`r_0`$ and $`r_1`$. On top of that, a long range van der Waals attraction was also included. The solid phase of the system exhibits polymorphism: there are different sectors of the $`P`$-$`T`$ phase diagram in which the crystalline structure of the system is different. This behavior is observed even in the case of no attractive part in the potential (i.e., $`\gamma =0`$). The fluid part of the phase diagram has the following characteristics. At low pressures particles prefer to be at distance $`r_1`$ from each other, whereas at high pressures the typical distance is $`r_0`$ ($`<r_1`$). This implies a crossover region of $`P`$ with an anomalously large isothermal compressibility $`K_T`$. When we include the van der Waals attraction ($`\gamma \ne 0`$) the anomaly in $`K_T`$ may become (if $`\gamma `$ is sufficiently large) a first order transition line (similar to the liquid-gas coexistence line). This first order line starts from a finite $`P`$ at $`T=0`$, and ends in a critical point at finite $`T`$ and $`P`$. From this point the line of $`K_T`$ maxima continues towards larger values of $`T`$. There are also anomalies in the density of the system, which has extrema on a locus that, while it does not necessarily touch the critical point, appears in the region influenced by its existence.
For $`\gamma =0`$, the melting line of the most expanded solid structure in the $`P`$-$`T`$ plane has positive slope at low pressures, but is reentrant at higher $`P`$. This reentrant behavior is associated (through the Clausius-Clapeyron equation) with melting accompanied by an increase in density. When the van der Waals attraction is included, a liquid-gas first order line appears, ending in a critical point as usual. This liquid-gas line defines a triple point where it meets the fluid-solid line. For small $`\gamma `$ the slope of the solid-liquid line at the triple point is positive, but it becomes negative if $`\gamma `$ is large enough. Our model, although very simple, has many of the properties that characterize water as an anomalous fluid, and gives insight into the properties of real water. In fact, the simplicity of the model allows us to single out the crucial characteristic that produces all the anomalies, without the complications introduced by non-spherical interactions and cooperative hydrogen bonding in real water. This characteristic is the existence in the interatomic potential of two different equilibrium distances for the particles. From all our results it is difficult to escape the conclusion that there must be an effective description of the interaction in water in which two different distances compete to be the most stable one; indeed, it would be even more daring to claim that all the similarities we found are accidental. Although there is evidence favoring this view, the complications added by the peculiarities of water molecules have made this point highly disputed. Among all the anomalous properties of water, the existence of the second critical point is the one that is not fully proven to occur, and also the one that has been most elusive to address numerically in previous studies.
Our model shows that its existence is a consequence of the effect of the attractive part of the potential on a system that (due to peculiarities of the interaction) possesses anomalously large values of $`K_T`$ at some pressure. At this point the experimental evidence of two amorphous phases (LDA and HDA) that transform reversibly into each other seems crucial, as it indicates that in water the attraction between molecules is strong enough to bring the second critical point into existence. From a more fundamental point of view, we note that our model has essentially two free parameters: the ratio between equilibrium distances $`r_1/r_0`$ and the strength of the van der Waals attraction $`\gamma `$. Other characteristics, such as whether the ramp between $`r_0`$ and $`r_1`$ is linear or not, are only marginal for the phase diagram that is obtained. An important result of our study is the fact that these two parameters determine the phase behavior of the fluid phase both in the zone where the fluid is stable and in the deeply supercooled region. If water admits a similar effective representation in terms of two parameters, then these could be extracted by fitting experimental data in the high temperature region, and then used, relying on our model, to predict the supercooled part of the phase diagram, in particular the existence and location of the second critical point. This means that the present model may even be of quantitative importance. Work in this direction is under way.
# Milliarcsec-scale polarisation observations of the gravitational lens B1422+231

## 1 Introduction

Radio polarisation observations can be used as a powerful tool in the study of gravitational lenses. Such observations yield two useful parameters - the distributions of polarised intensity (or degree of polarisation) and of position angle (PA) of polarisation. Both the degree and PA of polarisation of a point in an object are unchanged in its images by the action of a gravitational lens. In gravitational lens searches the equal degrees of polarisation of images can be used to discriminate amongst lens candidates. However, the measured PAs of polarisation need not be the same at any given frequency, since the different ray paths of the images can undergo different amounts of Faraday rotation. After correcting for the rotation measure (RM), the intrinsic PA (PA at zero wavelength) must be the same for the lensed images. Any difference in RM between lensed images can give clues to the nature of the lensing galaxy. For a gas-rich lens or a spiral galaxy lens, the RMs of each lensed image (and also their difference) are expected to be large compared to those from an elliptical galaxy lens. The difference in RMs is assumed to be caused by the interstellar medium of the lens. The polarisation angle is not altered by the gravitational deflection even though the total intensity (Stokes parameter I) distribution can be distorted (Dyer & Shaver 1992). Milliarcsec-scale observations of the gravitational lens B0218+357 have been used to demonstrate this feature of the gravitational potential (Patnaik, Porcas & Browne 1995). This property can help identify corresponding features in distorted lensed images, even when this is unclear from total intensity maps, since their degrees of polarisation must be equal, and their PAs of polarisation must be parallel (after correction for any differential RM). The gravitational lens system B1422+231 was discovered by Patnaik et al.
(1992); it is a 4-image system with maximum image separation of 1.3 arcsec (Fig. 1). The background radio source is associated with a 15.5 mag quasar at a redshift of 3.62. The lensing galaxy has a redshift of 0.338 (Kundić et al. 1997, Tonry 1998). The lensed images have similar spectra at radio as well as optical wavelengths. The three bright images, A, B and C, have similar radio polarisation properties; image D is too weak at radio wavelengths for its polarisation properties to be determined reliably. The source has been observed in IR and optical bands (Lawrence et al. 1992; Remy et al. 1993; Yee & Ellingson 1994; Bechtold & Yee 1995; Akujor et al. 1996; Yee & Bechtold 1996; Impey et al. 1996) and models of the lensed system have been proposed by several authors (Hogg & Blandford 1994; Narasimha & Patnaik 1994; Kormann, Schneider & Bartlemann 1994; Mao & Schneider 1998). In this paper we describe high resolution radio polarisation observations of the gravitational lens B1422+231, made at 8.4 GHz using the VLBA together with the Effelsberg radio telescope. The observations and data reductions are described in Section 2; the results and their implications are discussed in Section 3.

## 2 Observations and Analysis

We observed B1422+231 on 1997 June 11/12 at 8.4 GHz, using all 10 antennas of the VLBA and the 100m telescope at Effelsberg. We recorded eight 8 MHz channels using a dual-polarisation set-up, giving a total bandwidth of 32 MHz in each polarisation; we used 1-bit sampling of the signals. The data were correlated at a single field centre using the VLBA correlator. The sampling in time was 1 sec and in frequency 0.5 MHz. Since the image separations are large (the largest is 1.3 arcsec) compared to the synthesised beam (about 1 milliarcsec), it is important to preserve short time and frequency sampling in the analysis, as this avoids smearing of the visibility function, and hence distortion of the images near the edge of the field.
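The adequacy of the 1 sec and 0.5 MHz sampling can be checked with the standard rule-of-thumb smearing estimates from synthesis-imaging texts (a sketch of our own, not the authors' calculation): both radial (bandwidth) and azimuthal (time-average) smearing scale with the offset of an image from the field centre, here roughly 0.65 arcsec (650 mas) for images placed symmetrically about a single correlation centre:

```python
import math

OMEGA_EARTH = 2.0 * math.pi / 86164.1      # sidereal rotation rate [rad/s]

def bandwidth_smearing_mas(offset_mas, dnu_hz, nu_hz):
    """Radial smearing ~ (delta_nu / nu) * offset (rule of thumb)."""
    return offset_mas * dnu_hz / nu_hz

def time_smearing_mas(offset_mas, dt_s):
    """Azimuthal smearing ~ omega_E * delta_t * offset (rule of thumb)."""
    return offset_mas * OMEGA_EARTH * dt_s

bw = bandwidth_smearing_mas(650.0, 0.5e6, 8.4e9)  # ~0.04 mas
tm = time_smearing_mas(650.0, 1.0)                # ~0.05 mas
```

Both estimates are well over an order of magnitude below the roughly 1 mas synthesised beam, consistent with the statement that the chosen sampling avoids distortion of images near the edge of the field.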
We observed in cycles consisting of 6 mins. on the calibration source 1308+326 and 16 mins. on B1422+231, resulting in a total integration time on B1422+231 of 410 minutes. In between we observed the calibrator sources 1226+023 (3C 273), 1749+096 and 1823+568 once each for 8 min. The data were analysed using the NRAO software package AIPS. The data were corrected for the parallactic angle variations of the telescopes, and amplitude calibrations determined from the measured system temperatures and gains were applied. The phase slopes across the 8 MHz bands were aligned using the pulse-cal information from each telescope. Standard fringe-fitting was then performed on the calibration sources. After careful editing, we used the data on 1308+326 and 1823+568 to determine the bandpass function. We used the following procedures to determine the instrumental polarisation terms and correct for them. After applying the total intensity calibration table, we fringe-fit the cross-hand (RL, LR) data of 1308+326 on the single baseline Los Alamos (our reference antenna) to Pie Town. In this way one determines the residual LHC-RHC delay offset for the reference antenna. The solutions were smoothed, copied to the calibration table and applied to the main data. Since the sources used for polarisation calibration usually have structure on milliarcsec scales, a model of the source is required in order to determine the instrumental polarisation leakage terms. 1308+326 was mapped to determine its structure and the model was used in the AIPS task LPCAL. 1308+326 had a peak polarisation of 3.6 percent, and typical instrumental polarisation terms were 1 to 2 percent for the VLBA telescopes and 4 percent for Effelsberg. We have not been able to determine the absolute angle of polarisation. In Fig. 2 we show the map of 1308+326 with electric vectors overlaid on the total intensity contours; the length of the vector is proportional to polarised intensity.
The PA of polarisation changes within the source, but from the integrated flux density of the Q and U Stokes parameters we find that the PA of 1308+326 was 11°. We do not have polarisation PA measurements of 1308+326 sufficiently close to our observing epoch to calibrate the absolute angle of polarisation. After performing the above polarisation calibration procedures, the data for B1422+231 were analysed. Fringe-fitting sources with widely separated components, such as B1422+231, can prove difficult, especially if no component consistently dominates on all baselines (see Porcas 1994). This is often the case for gravitational lenses, since the equal surface brightness property of lensed images tends to result in roughly equal contributions from them for baselines on which they are resolved. The AIPS fringe-fitting task FRING can utilise a source model, and this feature can be used to guide the process. We used the following procedure for the B1422+231 data. First the data are fringe-fitted using a point source model, and after applying the solutions, a map is made using the AIPS tasks IMAGR and CALIB. Since the lensed images are widely separated, we map each image in a separate sub-field. Even though this map is inaccurate, the images can be identified at their correct locations. A second fringe-fit is then made, using a model comprising components from each of the images, and a new map made. This process is repeated a few times to converge on a consistent fringe-fit solution. For the final maps of the 4 image sub-fields, several cycles of phase self-calibration were performed. Polarisation maps were made using the usual procedures. ## 3 Results and Discussion Our maps of the 4 images of B1422+231, made using uniform weighting, are shown in Figures 3 and 4. The two strongest images, A and B, have highly elongated structures, confirming the basic image shapes derived from VLBA 15 GHz observations by Patnaik and Porcas (1998).
There is only a single peak in the total intensity distribution of each image, so we fitted a single elliptical Gaussian function to each image using the task JMFIT. The 3 strongest images are many resolution elements in length, and the fits are not particularly good, but they do parameterise some basic image properties. The total flux densities (column 2), polarised intensity (column 3), deconvolved sizes (column 4) and relative positions (columns 5 and 6) are listed in Table 1. As the strongest images are highly elongated, we have also made estimates of the (local) positions of the central peaks using MAXFIT (columns 7 and 8 in Table 1). However, there is no obvious compact feature within the smooth image structures, and the peak positions are thus not well defined, especially in the direction of elongation of the images. We therefore estimate an uncertainty in defining the image positions of about 1/20th of the image size in the corresponding direction. Kochanek, Kolatt & Bartelmann (1996) have suggested that repeated measurements, with a precision of about 10 $`\mu `$arcsec, of the separation between highly magnified image pairs such as A and B would yield a measurement of the proper motion of the lens with respect to the observer. Movement of the lens can result in a detectable image motion due to magnification along the tangential direction. The B1422+231 system may prove difficult to use for such proper motion studies, since the elongation of the relatively featureless images results in an increased position uncertainty in the same direction. This may also apply to the bright ‘merging image’ pairs of other 4-image gravitational lens systems if they do not contain highly compact features. None of the images exhibits an obvious asymmetric structure of the canonical ‘core-jet’ type in its total intensity distribution.
Whilst such morphologies occur frequently amongst radio-loud quasars, it should be noted that the radio spectrum of B1422+231 is peaked around 5 GHz, and the relatively steep spectrum at higher frequencies (Patnaik et al. 1992; Patnaik et al. 1999, in preparation) does not show any evidence for ‘core dominance’. In any case, the magnification and distortion of the background source structure by the lens must also be taken into account. Images A and B are highly magnified, and image C is magnified by a factor of a few in most lens models. Without the presence of the lens, B1422+231 would most likely be a 19 mag quasar with a radio flux density at 8.4 GHz of only ca. 20 mJy, almost a ‘radio quiet’ quasar. We obtain a number of important results from the total intensity distributions. We have investigated the surface brightness of the 2 strongest images, A and B, by measuring the flux in each image contained above the contour level of 1 mJy beam<sup>-1</sup>. The flux ratio of 0.94$`\pm `$0.02 is essentially the same as the area ratio of 0.95$`\pm `$0.05, thus demonstrating that the surface brightness is the same in these two lensed images. This accords with the property that gravitational lensing does not change the surface brightness of an image. A second result is that the PAs of elongation of the structures of A, B and C are tangential with respect to the lens, which is believed to be located close to image D (Fig. 1). This is expected from lens models in which the background source lies close to the diamond caustic. Even though image D is weak, it is interesting to note that its elongation appears ‘radial’ with respect to the lens. The polarisation distribution in the images is more remarkable. Although we do not have enough sensitivity to detect polarised emission from the weak D image, the polarisation distribution in the other 3 images is clearly non-uniform, and the PA of polarisation changes over the images.
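The surface-brightness test described above amounts to comparing, for each image, the flux above a fixed contour with the area enclosed by that contour. The sketch below demonstrates the bookkeeping on synthetic uniform-brightness discs, which are placeholders for the real maps of A and B:

```python
import numpy as np

def flux_and_area(img, level, pix_area=1.0):
    """Total flux and solid angle contained above a given contour level."""
    mask = img > level
    return float(img[mask].sum()) * pix_area, float(mask.sum()) * pix_area

# Two toy images with identical surface brightness (2.0 per pixel) but
# slightly different extents, mimicking the lensed images A and B.
y, x = np.mgrid[-50:51, -50:51]
img_a = np.where(x**2 + y**2 < 30**2, 2.0, 0.0)
img_b = np.where(x**2 + (y / 0.95)**2 < 30**2, 2.0, 0.0)

fa, aa = flux_and_area(img_a, 1.0)
fb, ab = flux_and_area(img_b, 1.0)
print(fb / fa, ab / aa)  # equal ratios <=> equal surface brightness
```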
In particular, there is a ‘reflection symmetry’ between A and B in the polarisation distribution along the image major axes. The SW of A and the NE of B are regions of little or no polarisation. Progressing from these points along the image axes, the degree of polarisation rises to about 2.5% at the central peaks, with polarisation angles of −23° in A and −43° in B. Progressing further, the PAs rotate to final values of 40° in the NE of A, and 22° in the SW of B. Since the polarisation properties of the source are not changed by gravitational lensing, we may use such matching features in these images to identify corresponding regions. Thus the run of polarised emission in opposite directions in A and B reveals the opposite orientation of these images in the tangential direction. This is entirely expected, since these two bright images must have opposite parity. Indeed, the slight southerly offset of the polarisation peaks (in the NE of A and the SW of B) from the axis of the total intensity distributions establishes their opposite parity directly. Even though the polarisation of C is weak, the change of polarisation within the image matches closely that of image A; this is expected, as A and C should have the same parity. From the measurements made on the images, noted above, it is clear that there is a systematic difference of 20° between the polarisation PAs of corresponding features in A and B. Since the gravitational action of the lens does not change the polarisation angle of regions of the object in either image, we attribute this observed difference to Faraday rotation along one or both image paths; the difference in RM amounts to 280$`\pm `$20 rad m<sup>-2</sup>. Images A and B are considered to be ‘merging images’ and hence their ray paths, separated by 0.5 arcsec and located on the same side of the lens, are expected to traverse similar environments in the lens. Moreover, these images are located some distance (ca. 1 arcsec) from the lens centre.
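The quoted differential rotation measure follows directly from Δχ = RM λ². A quick check with the numbers above (a sketch assuming the single-frequency conversion and no nπ ambiguity in the PA difference):

```python
import math

def diff_rotation_measure(dpa_deg, freq_hz):
    """Differential RM (rad/m^2) implied by a PA difference at one
    frequency, assuming no n*pi wraps in the angle difference."""
    lam = 299_792_458.0 / freq_hz  # observing wavelength in metres
    return math.radians(dpa_deg) / lam**2

rm = diff_rotation_measure(20.0, 8.4e9)
print(round(rm), "rad/m^2")  # -> 274 rad/m^2, consistent with 280 +/- 20
```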
The differential RM is assumed to be caused by the magneto-ionic medium of the lens. Such high RMs are generally found in gas-rich environments; thus it is rather surprising that this lens, thought to be an elliptical galaxy, can give rise to such a large differential Faraday rotation. VLBI measurements of lensed image structures can potentially provide more constraints for lens modelling than image relative flux densities and positions. These are conveniently presented by the relative magnification matrix between image pairs. Ideally there would be at least 3 non-collinear points in the source, recognisable in each image; the matrix could then be determined by considering the transformation of 2 non-parallel vectors. In B1422+231 there is only a single peak in the total intensity distribution, but there are separate and distinct peaks in the polarised flux distributions of A and B, which we recognise as corresponding points from their polarisation properties. We measured the positions of these peaks in the maps of the A and B images and attempted to determine an A/B relative magnification matrix. However, we were unable to obtain values for the matrix elements consistent with the relative flux densities of the images. The elements derived by this method are in any case ill-defined, because the three peaks used are almost collinear. We have used the separations between the polarisation peaks within the A and B images to test the matrices given by Hogg & Blandford (1994). Our measured separation in B is 1.478 mas in PA −145.2°. Using their two matrices, we find that this transforms to a predicted separation in image A of 1.607 mas in PA 27.9° and 1.680 mas in PA 22.8° respectively. The measured separation is 1.0 mas in PA 62.6°. It is perhaps not surprising that these models fail in their predictions, as they are based on a measured optical flux ratio between A and B of 0.77.
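Applying a model in this way is just 2×2 linear algebra on the offset vector between the polarisation peaks. The sketch below shows the (separation, PA) ↔ vector bookkeeping; the magnification matrix used here is a placeholder with illustrative values, not one of the published Hogg & Blandford matrices.

```python
import math
import numpy as np

def sep_to_vec(r_mas, pa_deg):
    """Offset vector (east, north) from a separation and a position angle
    measured from north through east."""
    pa = math.radians(pa_deg)
    return np.array([r_mas * math.sin(pa), r_mas * math.cos(pa)])

def vec_to_sep(v):
    """Inverse conversion back to (separation, PA in degrees)."""
    return float(np.hypot(v[0], v[1])), math.degrees(math.atan2(v[0], v[1]))

# Measured separation of the polarisation peaks in image B:
v_b = sep_to_vec(1.478, -145.2)

# Placeholder A/B relative magnification matrix (NOT a published model):
M = np.array([[-1.0, 0.2],
              [0.3, -0.9]])

r_a, pa_a = vec_to_sep(M @ v_b)
print(f"predicted A separation: {r_a:.3f} mas at PA {pa_a:.1f} deg")
```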
Transformation matrices are not explicitly given for the other models, and so we have not tested them. However, all the models predict tangential stretching of the three bright images. ## 4 Conclusion We have presented polarisation observations of the gravitational lens system B1422+231 made using the VLBA and Effelsberg at 8.4 GHz. Our 1 mas resolution maps of the images reveal the parity reversal of image B through the distribution of polarised emission. We show that the surface brightness is the same in A and B, as expected from the preservation of source surface brightness by lensing. We find that the differential Faraday rotation between A and B is rather large, considering that the lens is an elliptical galaxy. It is difficult to derive a relative magnification matrix from the total intensity and polarisation distributions of the A and B images. Published matrices, however, do not successfully predict the structural relationships between A and B. ### ACKNOWLEDGMENTS We thank D. Narasimha for helpful discussions and J. Schmid-Burgk for critical comments. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
## References

> Akujor, C.E., Patnaik, A.R., Smoker, J.V., Garrington, S.T., 1996, in Kochanek, C.S., Hewitt, J.N., eds, Proceedings of the 173rd Symposium of the IAU ‘Astrophysical applications of gravitational lensing’, Kluwer Academic Publishers, Dordrecht, p. 335
> Bechtold, J., Yee, H.K.C., 1995, AJ, 110, 1984
> Dyer, C.C., Shaver, E.G., 1992, ApJ, 390, L5
> Hogg, D.W., Blandford, R.D., 1994, MNRAS, 268, 889
> Impey, C.D., Foltz, C.B., Petry, C.E., Browne, I.W.A., Patnaik, A.R., 1996, ApJ, 462, L53
> Kochanek, C.S., Kolatt, T.S., Bartelmann, M., 1996, ApJ, 473, 610
> Kormann, R., Schneider, P., Bartelmann, M., 1994, A&A, 286, 357
> Kundić, T., Hogg, D.W., Blandford, R.D., Cohen, J.G., Lubin, L.M., Larkin, J.E., 1997, AJ, 114, 2276
> Lawrence, C.R., Neugebauer, G., Weir, N., Matthews, K., Patnaik, A.R., 1992, MNRAS, 259, 5P
> Mao, S., Schneider, P., 1998, MNRAS, 295, 587
> Narasimha, D., Patnaik, A.R., 1994, in Surdej, J., Fraipont-Caro, D., Grosset, E., Refsdal, S., Remy, M., eds, Proc. 31st Liège International Astrophysical Colloq., Gravitational Lenses in the Universe, Université de Liège, Belgique, p. 295
> Patnaik, A.R., Browne, I.W.A., Walsh, D., Chaffee, F.H., Foltz, C.B., 1992, MNRAS, 259, 1P
> Patnaik, A.R., Porcas, R.W., 1998, in Zensus, J.A., Taylor, G.B., Wrobel, J.M., eds, Radio Emission from Galactic and Extragalactic Compact Sources, ASP Conference Series, Volume 144, IAU Colloquium 164, p. 319
> Patnaik, A.R., Porcas, R.W., Browne, I.W.A., 1995, MNRAS, 274, L5
> Porcas, R.W., 1994, in Zensus, J.A., Kellermann, K.I., eds, ‘Compact Extragalactic Radio Sources’, NRAO, p. 125
> Remy, M., Surdej, J., Smette, A., Claeskens, J.-F., 1993, A&A, 278, L19
> Tonry, J.L., 1998, AJ, 115, 1
> Yee, H.K.C., Bechtold, J., 1996, AJ, 111, 1007
> Yee, H.K.C., Ellingson, E., 1994, AJ, 107, 28
# New Kazhdan groups <sup>1</sup> both authors were partially supported by the KBN grant 2 P03A 023 14. The first author was a French government fellow under CIES bourse 231404E. Jan Dymara Instytut Matematyczny Polskiej Akademii Nauk Kopernika 18, 51–617 Wrocław, Poland dymara@math.uni.wroc.pl Tadeusz Januszkiewicz Instytut Matematyczny, Uniwersytet Wrocławski pl. Grunwaldzki 2/4, 50–384 Wrocław, Poland and Instytut Matematyczny Polskiej Akademii Nauk Kopernika 18, 51–617 Wrocław, Poland tjan@math.uni.wroc.pl Introduction A locally compact, second countable group $`G`$ is called Kazhdan if for any unitary representation of $`G`$ the first continuous cohomology group is trivial: $`H_{ct}^1(G,\rho )=0`$. There are several other equivalent definitions; the reader should consult , esp. 1.14 and 4.7. For some time now Kazhdan groups have attracted attention. One of the main challenges is to understand them geometrically. Recently Pansu , Żuk , and Ballmann–Świa̧tkowski went back to Garland’s paper , improved it in several respects and produced, among other things, new examples of Kazhdan groups. These examples, especially those in , are explicit and significantly different from classical ones. We also go back to Garland’s paper, but instead of euclidean buildings, we study hyperbolic ones. An interesting class of hyperbolic buildings with cocompact groups of automorphisms was constructed by Tits . He associates with a ring $`\mathrm{\Lambda }`$ and a generalised Cartan matrix $`M`$ a Kac–Moody group. These groups provide $`BN`$ pairs for buildings. A special case of particular interest to us is that of $`\mathrm{\Lambda }`$ a finite field and a generalised Cartan matrix coming from hyperbolic reflection groups for which the fundamental domain is a simplex; there are ten of them in dimension 2, two in dimension 3 and one in dimension 4.
Buildings associated to these data are locally finite and their automorphism groups are locally compact topological groups. It turns out that they are Kazhdan (and more). Theorem 1. Let $`X_q`$ be an $`n`$-dimensional building of thickness $`(q+1)`$, associated to a cocompact hyperbolic group with the fundamental domain a simplex. Suppose $`G`$ is a closed (in the compact-open topology), unimodular subgroup of the simplicial automorphism group which acts cocompactly on the building. Then for large $`q`$ and $`1\le k\le n-1`$ $$H_{ct}^k(G,\rho )=0,$$ that is, the continuous cohomology groups of $`G`$ with coefficients in any unitary representation vanish. In particular $`G`$, considered as a topological group, is a Kazhdan group. Several comments are in order: 1. The theorem holds for any hyperbolic building. However, at present Tits’ Kac–Moody buildings are the only examples where we can verify the assumptions. 2. For Tits’ Kac–Moody buildings, the simplicial automorphism groups, which are uncountable, are bigger than the Kac–Moody groups (given by countably many generators and relations), and Tits’ Kac–Moody groups are not discrete as subgroups of automorphism groups. In , B. Rémy, using twin buildings, exhibits Kac–Moody groups as discrete cofinite-volume groups acting on a product of buildings, and also constructs discrete cofinite-volume groups acting on the building itself. His examples are not cocompact. 3. Unimodularity, brought in by the topology of the group, is an essential assumption. Kazhdan groups are unimodular. On the other hand Świa̧tkowski pointed out to us nonunimodular groups acting cocompactly on classical euclidean buildings: the upper triangular subgroup of $`SL_n(Q_p)`$ acting on its building. In Tits’ Kac–Moody examples, it is easy to establish that the group of simplicial automorphisms is unimodular. 4.
Three- and four-dimensional buildings provide the first examples of Kazhdan groups of large dimension not coming from locally symmetric spaces or euclidean buildings (they are also not products of lower dimensional examples, since they are hyperbolic). Here “dimension” may be understood either as “continuous cohomological dimension” or as “large scale dimension”. The argument here requires the computation of $`H_{ct}^n(G,\mathrm{St})`$, where $`\mathrm{St}`$ is the Steinberg representation of $`G`$ on the space of $`l_2`$-harmonic $`n`$-cochains on the building, a computation which is not essentially different from the euclidean building case. If we have discrete subgroups of automorphisms of these buildings, they are necessarily Gromov hyperbolic, and the “dimension” is one more than the dimension of their Gromov boundaries. 5. Bourdon noticed that several two-dimensional hyperbolic buildings admit cocompact actions of discrete groups and thus one can use results of to show that some of them are Kazhdan. He does not use Tits’ construction, but builds his buildings as complexes of groups. Most of his buildings are not Kac–Moody. There are two ingredients we use in the proof. First is Garland’s method (which we take from , but actually implicitly contains almost all we need) for proving vanishing of cohomology groups of a simplicial complex. Second is the use of continuous cohomology of topological groups, in particular of the Borel–Wallach result relating the cohomology of a complex on which the topological group acts with compact stabilizers to its continuous cohomology. The progress we obtain is that one does not have to worry about the existence of discrete subgroups. This is very handy, since the bare existence of Tits’ examples is nontrivial, let alone their subtle properties. In a future paper we construct more examples of Kazhdan groups, all related to buildings. Acknowledgements. We are grateful to Jacek Świa̧tkowski for many useful discussions.
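The spectral condition at the heart of Garland's method (made precise in Theorem 3.1 below) can be checked concretely on the smallest relevant link. For a 2-dimensional building of thickness $`q+1`$ whose relevant edge label is 3, the link of a vertex is the incidence graph of a projective plane of order $`q`$, and the threshold in Theorem 3.1 (for $`n=2`$, $`k=1`$) is 1/2; for such graphs the smallest positive eigenvalue of the normalised graph Laplacian equals $`1-\sqrt{q}/(q+1)`$. The sketch below verifies this numerically for the Fano plane ($`q=2`$):

```python
import numpy as np

# Incidence graph of the Fano plane: points 0..6, lines {i, i+1, i+3} mod 7.
lines = [{i % 7, (i + 1) % 7, (i + 3) % 7} for i in range(7)]

# Bipartite adjacency matrix: vertex p (point) ~ vertex 7+k (line).
A = np.zeros((14, 14))
for k, line in enumerate(lines):
    for p in line:
        A[p, 7 + k] = A[7 + k, p] = 1.0

# Normalised Laplacian; the graph is 3-regular (thickness q + 1 = 3).
L = np.eye(14) - A / 3.0
eig = np.sort(np.linalg.eigvalsh(L))

gap = eig[1]  # smallest positive eigenvalue (eig[0] is 0)
print(gap, 1 - np.sqrt(2) / 3)  # both ~ 0.5286 > 1/2
```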
§1 Generalities about automorphism groups. We recall basic facts about simplicial automorphism groups of simplicial complexes. They are all fairly standard. Let $`X`$ be a countable locally finite simplicial complex, and let $`Aut(X)`$ be the group of its simplicial automorphisms. The compact–open topology on $`Aut(X)`$ is defined using the basis of open neighbourhoods of the identity $`U(K)=\{g:g|_K=id_K\}`$, where $`K`$ runs over compact subsets in $`X`$. Let $`G`$ be a closed subgroup of $`Aut(X)`$, with the induced topology. Since $`X`$ is a countable complex, $`Aut(X)`$, and thus $`G`$, has a countable basis and hence is metrizable by a left invariant metric. Proposition 1.1. $`G`$ is locally compact. In fact stabilizers of compact subcomplexes in $`X`$ are compact and open. Proposition 1.2. $`G`$ is separable. Thus, being metrizable and separable, $`G`$ is second countable (has a countable basis). Proposition 1.3. $`G`$ is countable at infinity, or $`\sigma `$-compact: a union of countably many compact subsets. Proposition 1.4. Stabilizers of compact subcomplexes are either all finite or all uncountable. Proposition 1.5. $`G`$ is totally disconnected. Unimodularity of $`G`$ will play an important role. Observe first that if the group $`G`$ is generated by compact subgroups then it is unimodular, since all generators go to the identity under the modular homomorphism. Suppose that a subgroup $`G\subset Aut(X)`$ generated by compact subgroups of $`Aut(X)`$ acts transitively on $`n`$-simplices. Then $`Aut(X)`$ is unimodular, since it is generated by $`G`$ and a stabilizer of a simplex. A situation of interest to us where this happens is this: Lemma 1.6. Suppose $`X`$ is a connected locally finite simplicial complex, and suppose that links of simplices of codimension $`2`$ in $`X`$ are connected. Suppose that stabilizers of $`(n-1)`$-simplices act transitively on their respective links. Then $`Aut(X)`$ acts transitively on $`X`$ and is unimodular.
The proof is clear from the above discussion. Observe that locally finite buildings coming from $`BN`$ pairs satisfy the assumptions of the Lemma. §2 Borel–Wallach Lemma Assume that $`G`$, a closed subgroup of the group of simplicial automorphisms of $`X`$, acts cocompactly. Sometimes one can identify $`H_{ct}^{\ast }(G,\rho )`$ with the cohomology of $`X`$ with coefficients in $`\rho `$. Specifically: Consider all alternating maps $`\varphi `$ from ordered $`k`$-simplices in $`X`$ to the representation space of $`\rho `$, satisfying for all $`g\in G`$ and $`\sigma \in X`$ $$\varphi (g\sigma )=\rho (g)\varphi (\sigma ).$$ Call the space of such maps $`C^k(X,\rho )`$. There is a natural differential making $`C^{\ast }(X,\rho )`$ into a complex: $$d\varphi (\sigma )=\sum _{i=0}^{k}(-1)^i\varphi (\sigma _i),$$ where $`\sigma =(v_0,\dots ,v_k)`$ and $`\sigma _i=(v_0,\dots ,v_{i-1},v_{i+1},\dots ,v_k)`$. Lemma 2.1 (, lemma X.1.12 page 297) Let $`(X,G)`$ be an acyclic locally finite complex with a cocompact action of a group of its simplicial automorphisms. Suppose $`\rho `$ is a representation of $`G`$ on a quasi-complete (for example Hilbert) space. Then $$H_{ct}^i(G,\rho )=H^i(C^{\ast }(X,\rho )).$$ The assumptions of this lemma are satisfied for locally finite buildings coming from $`BN`$ pairs. §3 Vanishing theorem Here we adapt the Ballmann–Świa̧tkowski presentation of Garland’s method to the greater generality of not necessarily discrete group actions. To keep the exposition short we refer to their paper for the notation. Theorem 3.1. Let $`X`$ be a locally finite simplicial complex, and $`G`$ a cocompact unimodular group of its simplicial automorphisms. Assume that for any simplex $`\tau `$ of $`X`$ the link $`X_\tau `$ is connected and $$\kappa _\tau >\frac{k(n-k)}{k+1},$$ where $`\kappa _\tau `$ is the smallest positive eigenvalue of the Laplacian $`\mathrm{\Delta }_\tau `$ on $`C^0(X_\tau ,R)`$. Then for $`1\le k\le n-1`$, $`H^k(C^{\ast }(X,\rho ))=0`$ for any unitary representation $`\rho `$ of $`G`$. From Lemma 2.1
we immediately get: Corollary 3.2. Under the assumptions of Theorem 3.1, $`G`$ is a Kazhdan group, provided $`X`$ is acyclic. Proof: Theorem 3.1 corresponds to Theorem 2.5 of . Their calculation goes through as it stands, except for two changes. 1. $`|G_\sigma |`$, there the cardinality of the stabilizer of $`\sigma `$, should now be understood as the Haar measure of that stabilizer inside $`G`$. 2. Their Lemma 1.3 should be shifted from the discrete to the locally compact setting. Here is how this can be done. Let $`\mathrm{\Sigma }(k)`$ denote the set of ordered $`k`$-simplices in $`X`$, and let $`\mathrm{\Sigma }(k,G)`$ be a set of representatives of the $`G`$-orbits on $`\mathrm{\Sigma }(k)`$. The modified Lemma 1.3 of now reads as follows: Lemma 3.3. Let $`X`$, $`G`$ be as in Theorem 3.1. For $`0\le l<k\le n`$, let $`f=f(\tau ,\sigma )`$ be a $`G`$-invariant function on the set of pairs $`(\tau ,\sigma )`$, where $`\tau `$ is an ordered $`l`$-simplex and $`\sigma `$ is an ordered $`k`$-simplex with $`\tau \subset \sigma `$, that is, the vertices of $`\tau `$ are vertices of $`\sigma `$.
Then $$\sum _{\sigma \in \Sigma (k,G)}\;\sum _{\substack{\tau \in \Sigma (l)\\ \tau \subset \sigma }}\frac{f(\tau ,\sigma )}{|G_\sigma |}=\sum _{\tau \in \Sigma (l,G)}\;\sum _{\substack{\sigma \in \Sigma (k)\\ \tau \subset \sigma }}\frac{f(\tau ,\sigma )}{|G_\tau |}.$$ Proof: $$\begin{aligned}\sum _{\sigma \in \Sigma (k,G)}\sum _{\substack{\tau \in \Sigma (l)\\ \tau \subset \sigma }}\frac{f(\tau ,\sigma )}{|G_\sigma |}&=\sum _{\substack{\sigma \in \Sigma (k,G)\\ \tau \in \Sigma (l,G)}}\;\sum _{\substack{\gamma _i:\,\gamma _i\tau \subset \sigma \\ \gamma _i\tau \ne \gamma _j\tau }}\frac{f(\gamma _i\tau ,\sigma )}{|G_\sigma |}\\ &=\sum _{\substack{\sigma \in \Sigma (k,G)\\ \tau \in \Sigma (l,G)}}\int _{\bigcup _i\gamma _iG_\tau }\frac{f(\gamma \tau ,\sigma )}{|G_\tau ||G_\sigma |}\,d\gamma \\ &=\sum _{\substack{\sigma \in \Sigma (k,G)\\ \tau \in \Sigma (l,G)}}\int _{\gamma :\,\gamma \tau \subset \sigma }\frac{f(\tau ,\gamma ^{-1}\sigma )}{|G_\tau ||G_\sigma |}\,d\gamma \\ &=\sum _{\substack{\sigma \in \Sigma (k,G)\\ \tau \in \Sigma (l,G)}}\int _{\gamma :\,\tau \subset \gamma \sigma }\frac{f(\tau ,\gamma \sigma )}{|G_\tau ||G_\sigma |}\,d\gamma \\ &=\sum _{\tau \in \Sigma (l,G)}\sum _{\substack{\sigma \in \Sigma (k)\\ \tau \subset \sigma }}\frac{f(\tau ,\sigma )}{|G_\tau |}\end{aligned}$$ §4 Hyperbolic buildings Here we rely on results of Tits . We need the existence of buildings with a cocompact proper group action, with unimodular automorphism group, and arbitrarily large thickness. Tits provides us with what we need as follows. Take a Coxeter group whose Dynkin diagram is a triangle, a square or a pentagon. For triangles we allow labels $`m,n,k`$ on edges, such that $`m,n,k\in \{2,3,4,6\}`$ and $`\frac{1}{m}+\frac{1}{n}+\frac{1}{k}<1`$.
For a square, one of the edges is labelled 4 and the remaining ones are labelled 3, or two opposite edges are labelled 4 and the remaining ones are labelled 3. For the pentagon, one of the edges is labelled 4 and the remaining ones are labelled 3. These are all the cocompact hyperbolic reflection groups with fundamental domain a simplex and edge labels $`2,3,4,6`$ . For each such diagram and a finite field $`F_q`$, Tits constructs a Kac-Moody group, acting cocompactly (in fact transitively on simplices of maximal dimension) on a hyperbolic building, with links of vertices being spherical buildings of thickness $`q+1`$ corresponding to parabolic subgroups of the Coxeter system. Moreover the group is generated by elements stabilizing codimension 1 simplices. Thus, taking the closure of the Kac–Moody group in the full automorphism group, we obtain a cocompact unimodular group acting on a hyperbolic building. As far as we know, the existence of discrete cocompact subgroups in these groups has not been established except for some two-dimensional examples. Now all we have to do to finish the proof of Theorem 1 is to check that the spectral condition holds for the links. But this has already been done (for large thickness) by Garland \[5, Sections 6–8\] (see also the remark at the end of Section 3.1 of ). It seems to us that Garland could have included these hyperbolic examples in his original paper. Bibliography Ballmann W. and Świa̧tkowski J., On $`L^2`$-cohomology and property (T) for automorphism groups of polyhedral cell complexes, GAFA 7(1997) 615–645 Borel A. and Wallach N., Continuous cohomology, discrete subgroups, and representation of reductive groups, Ann. of Math. Studies 94, Princeton University Press and University of Tokyo Press 1980. Bourbaki N., ”Groupes et algebres de Lie, chapitres IV-VI” Hermann 1968. Bourdon M., Sur les immeubles fuchsiens et leur type de quasi-isométrie, Preprint, Nancy, December 1997.
Garland H., $`p`$-adic curvature and the cohomology of discrete subgroups of $`p`$-adic groups, Ann. of Math. 97 (1973) 375–423. de la Harpe P., Valette A., La propriété (T) de Kazhdan pour les groupes localement compacts, Astérisque 175, Soc. Math. France 1989. Pansu P., Formule de Matsushima, de Garland, et propriété (T) pour des groupes agissant sur des espaces symétriques ou des immeubles, Bull. Soc. Math. France 126(1998) pp. 107–139. Remy B., Immeubles à courbure négative et théorie de Kac–Moody, preprint (Nancy 1998). Tits J., Uniqueness and Presentation of Kac–Moody groups over fields, J. of Algebra 105(1987) pp. 542–573. Żuk A., La propriété (T) de Kazhdan pour les groupes agissant sur les polyèdres, C. R. Acad. Sci. Paris 323, Serie I (1996), 453–458.
# Minimal Length Uncertainty Relation and Hydrogen Atom ## I Introduction The study of modified Heisenberg algebras, obtained by adding certain small corrections to the canonical commutation relations, has aroused great interest in recent years (see for example ). These modifications yield a new short-distance structure characterized by a finite minimal uncertainty $`\mathrm{\Delta }x_0`$ in position measurements. The existence of this minimal observable length has been suggested by quantum gravity and string theory . In this context, the new short-distance behavior would arise at the Planck scale, and $`\mathrm{\Delta }x_0`$ would correspond to a fundamental quantity closely linked with the structure of space-time . This feature constitutes part of the motivation to study the effects of this modified algebra on various observables. Recently, it has been suggested that this formalism could also be used to describe, as an effective theory, non-pointlike particles, e.g. hadrons, quasi-particles or collective excitations . In this case, $`\mathrm{\Delta }x_0`$ is interpreted as a parameter linked with the structure of particles and their finite size. In the work , the $`d`$-dimensional isotropic harmonic oscillator was solved, in the context of a nonvanishing $`\mathrm{\Delta }x_0`$, with special interest in the 3-dimensional case. This calculation shows that splittings of the usual degenerate energy levels appear, leaving only the degeneracy due to the independence of the energy of the azimuthal quantum number $`m`$. It has also been indicated that application to the hydrogen atom should yield the relation between the scale of a non-pointlikeness of the electron and the scale of the resulting effects on the hydrogen spectrum. Indeed, the high precision of the experimental data for the 1S–2S transition , for example, can yield an interesting upper bound on the possible finite size, in the sense studied here, of the electron.
The purpose of this work is to continue to investigate whether the Ansatz concerning the deformation of the Heisenberg algebra, with suitably adjusted scale, may also serve for an effective low energy description of non-pointlike particles. In this way, we calculate corrections to the hydrogen spectrum using the minimally modified Heisenberg algebra, i.e. one which preserves the commutation relations between position operators. To perform this calculation we propose a new approach which allows one to solve the Schrödinger equation in the position representation. This method leads to the correct harmonic oscillator spectrum found in Ref. . Application to the hydrogen atom shows that splittings of the usual degenerate energy levels are also present and that these corrections cannot be seen experimentally if $`\mathrm{\Delta }x_0`$ is smaller than $`0.01`$ fm. ## II Method The modified Heisenberg algebra studied here, as in Ref. , is defined by the following commutation relations ($`\hbar =c=1`$) $`[\widehat{X}_i,\widehat{P}_j]`$ $`=`$ $`i\left(\delta _{ij}+\beta \delta _{ij}\widehat{P}^2+\beta ^{}\widehat{P}_i\widehat{P}_j\right),`$ (1) $`[\widehat{P}_i,\widehat{P}_j]`$ $`=`$ $`0,`$ (2) where $`\widehat{P}^2=\sum _{i=1}^3\widehat{P}_i\widehat{P}_i`$ and where $`\beta ,\beta ^{}>0`$ are considered as small quantities of the first order. In this paper, we study only the case $`\beta ^{}=2\beta `$, which leaves the commutation relations between the operators $`\widehat{X}_i`$ unchanged , i.e. $`[\widehat{X}_i,\widehat{X}_j]=0`$. This constitutes the minimal extension of the Heisenberg algebra and is thus of special interest. To calculate a spectrum for a given potential, we must find a representation of the operators $`\widehat{X}_i`$ and $`\widehat{P}_i`$, involving position variables $`x_i`$ and partial derivatives with respect to these position variables, which satisfies Eqs.
(1), and solve the corresponding Schrödinger equation: $$\left[\frac{\widehat{P}^2}{2m}+V\left(\stackrel{}{\widehat{X}}\right)\right]\mathrm{\Psi }(\stackrel{}{x})=E\mathrm{\Psi }(\stackrel{}{x}).$$ (3) It is straightforward to verify that the following representation fulfills the relations (1) to first order in $`\beta `$: $`\widehat{X}_i\mathrm{\Psi }(\stackrel{}{x})`$ $`=`$ $`x_i\mathrm{\Psi }(\stackrel{}{x}),`$ (4) $`\widehat{P}_i\mathrm{\Psi }(\stackrel{}{x})`$ $`=`$ $`p_i\left(1+\beta \stackrel{}{p}^2\right)\mathrm{\Psi }(\stackrel{}{x}),`$ with $`p_i=\frac{1}{i}\frac{\partial }{\partial x_i}`$. (5) Neglecting terms of order $`\beta ^2`$, the Schrödinger equation (3) takes the form $$\left[\frac{\stackrel{}{p}^2}{2m}+\frac{\beta }{m}\stackrel{}{p}^4+V(\stackrel{}{x})\right]\mathrm{\Psi }(\stackrel{}{x})=E\mathrm{\Psi }(\stackrel{}{x}).$$ (6) This is the ordinary Schrödinger equation with an additional term proportional to $`\stackrel{}{p}^4`$. As this correction is assumed to be small, we calculate its effects on energy spectra in first-order perturbation theory. The evaluation of the spectrum to first order in the deformation parameter $`\beta `$ leads to $$E_k=E_k^0+\mathrm{\Delta }E_k,$$ (7) where $`k`$ denotes the set of quantum numbers which labels the energy level, and where $`\mathrm{\Delta }E_k`$ are the eigenvalues of the matrix $$\frac{\beta }{m}\,\langle \Psi _k^0(\vec{x})|\,\vec{p}^{\,4}\,|\Psi _{k^{}}^0(\vec{x})\rangle \equiv \frac{\beta }{m}\,\langle k|\,\vec{p}^{\,4}\,|k^{}\rangle ,$$ (8) where $`\mathrm{\Psi }_k^0(\stackrel{}{x})`$ are solutions of (6) with $`\beta =0`$. This matrix is computed with all the wave functions corresponding to the unperturbed energy level $`E_k^0`$. It is a $`g\times g`$ matrix, where $`g`$ is the multiplicity of the state $`E_k^0`$ considered.
In general, $`\mathrm{\Delta }E_k`$ takes $`f`$ ($`f\le g`$) different values, which removes the degeneracy of some energy levels. For an arbitrary interaction $`V(\stackrel{}{x})`$ used in the Schrödinger equation, the matrix (8) is non-diagonal. But, since we know the action of $`\stackrel{}{p}^2`$ (from Eq. (6)) on the unperturbed wave functions, the expression of the matrix elements, for a central potential, can be written as $$4\beta m\left(\left(E_{n,\ell }^0\right)^2\delta _{nn^{\prime }}-(E_{n,\ell }^0+E_{n^{\prime },\ell }^0)\langle n\ell m|V(r)|n^{\prime }\ell m\rangle +\langle n\ell m|V(r)^2|n^{\prime }\ell m\rangle \right)\delta _{\ell \ell ^{\prime }}\delta _{mm^{\prime }},$$ (9) and, in the cases studied here, there are no degenerate states with equal values of the angular momentum $`\ell `$ and the azimuthal quantum number $`m`$ which have different values of the radial quantum number $`n`$. Thus the matrix (8) is diagonal and the correction to the spectrum can be written as $$\mathrm{\Delta }E_{n,\ell }=4\beta m\left(\left(E_{n,\ell }^0\right)^2-2E_{n,\ell }^0\langle n\ell m|V(r)|n\ell m\rangle +\langle n\ell m|V(r)^2|n\ell m\rangle \right).$$ (10) This nice relation can be simplified if one considers a power-law central potential, $`V(r)\propto r^p`$. In this case, the virial theorem gives $$\langle n\ell m|V(r)|n\ell m\rangle =\frac{2}{p+2}E_{n,\ell }^0,$$ (11) which leads to the following form for the energy level shift to first order in $`\beta `$: $$\mathrm{\Delta }E_{n,\ell }=4\beta m\left(\left(E_{n,\ell }^0\right)^2\left(\frac{p-2}{p+2}\right)+\langle n\ell m|V(r)^2|n\ell m\rangle \right).$$ (12) This simple expression will allow us to find the corrections to the harmonic oscillator and hydrogen spectra just by calculating the mean value of the square of the potential. ## III Harmonic Oscillator For this potential, the energy level shift is given only by the mean value of the square of the potential, since $`p=2`$ makes the first term of Eq. (12) vanish. 
The normalized unperturbed wave function of the harmonic oscillator reads $$\mathrm{\Psi }_{n\ell m}^0(\stackrel{}{r})=\lambda ^{3/2}\sqrt{\frac{2n!}{\mathrm{\Gamma }(n+\ell +3/2)}}(\lambda r)^{\ell }e^{-(\lambda r)^2/2}L_n^{\ell +1/2}\left((\lambda r)^2\right)Y_{\ell m}(\theta ,\phi ),$$ (13) where $`\lambda =\sqrt{m\omega }`$ and $`L_n^\alpha (x)`$ are Laguerre polynomials \[13, p. 1037\]. $`n`$ is the radial quantum number. Using the change of variable $`x=(\lambda r)^2`$, the energy shift is found to be: $$\mathrm{\Delta }E_{n,\ell }=\frac{4\beta m(n!)k^2}{\lambda ^4\mathrm{\Gamma }(n+\ell +3/2)}\int _0^{\infty }x^{\ell +5/2}e^{-x}\left[L_n^{\ell +1/2}(x)\right]^2dx,$$ (14) where $`2k=m\omega ^2`$ is the strength of the oscillator force. The calculation of the remaining integral is straightforward. Using the following relations concerning the Laguerre polynomials \[13, p. 1037, p. 844\] $`L_n^{\alpha -1}(x)`$ $`=`$ $`L_n^\alpha (x)-L_{n-1}^\alpha (x),`$ (15) $`{\displaystyle \int _0^{\infty }}e^{-x}x^\alpha L_n^\alpha (x)L_m^\alpha (x)dx`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Gamma }(\alpha +n+1)}{n!}}\delta _{nm},`$ (16) we obtain the expression of the harmonic oscillator spectrum for the modified Heisenberg algebra (1): $$E_{n,\ell }=\omega (2n+\ell +3/2)+(\mathrm{\Delta }x_0)^2\frac{m\omega ^2}{5}(6n^2+9n+6n\ell +\ell ^2+4\ell +15/4),$$ (17) where $`\mathrm{\Delta }x_0=\sqrt{5\beta }`$. This formula reproduces exactly the splittings calculated in Ref. using another approach. Because the dependence of the correction term on the quantum numbers is not of the form $`f(2n+\ell )`$, we obtain splittings of degenerate levels. But the energy does not depend on the azimuthal quantum number $`m`$ and each level remains $`(2\ell +1)`$-fold degenerate. This example shows the usefulness of this approach, which provides, with simple calculations, an analytical expression for the energy shift. 
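For $`p=2`$ the shift (12) reduces to $`\mathrm{\Delta }E=4\beta mk^2\langle r^4\rangle =\beta m\omega ^2\lambda ^4\langle r^4\rangle `$, so the polynomial in (17) is just $`\lambda ^4\langle r^4\rangle `$. This can be confirmed numerically with the wave functions (13); the sketch below (our own verification, not part of the paper) evaluates the radial integrals with scipy:

```python
import math
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

def r4_expectation(n, l):
    # <r^4> in units lambda = 1; |R_nl(r)|^2 r^2 is the radial density of
    # Eq. (13), and its normalization cancels in the ratio below
    dens = lambda r: (r**l * math.exp(-r * r / 2)
                      * eval_genlaguerre(n, l + 0.5, r * r))**2 * r * r
    num, _ = quad(lambda r: dens(r) * r**4, 0, 20)
    den, _ = quad(dens, 0, 20)
    return num / den

def poly(n, l):
    # quantum-number polynomial appearing in Eq. (17)
    return 6 * n * n + 9 * n + 6 * n * l + l * l + 4 * l + 15 / 4

for n in range(3):
    for l in range(3):
        assert abs(r4_expectation(n, l) - poly(n, l)) < 1e-6
```

For instance, the ground state gives $`\lambda ^4\langle r^4\rangle =15/4`$, exactly the constant term of the polynomial.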
The main interest of this method is that it can easily be used to solve analytically or numerically other problems, such as the Coulomb problem, which is solved in the next section. ## IV Hydrogen Atom As we mentioned in the Introduction, the evaluation of corrections to the energy spectrum can provide information concerning, in the sense studied here, an assumed finite size of the electron. The method used here to describe non-pointlike particles neglects the internal structure degrees of freedom. But these effects are obviously of much smaller order of magnitude and can thus be omitted. The normalized unperturbed wave function of the hydrogen atom reads $$\mathrm{\Psi }_{n\ell m}^0(\stackrel{}{r})=(2\gamma _n)^{3/2}\sqrt{\frac{(n-\ell -1)!}{2n(n+\ell )!}}(2\gamma _nr)^{\ell }e^{-\gamma _nr}L_{n-\ell -1}^{2\ell +1}(2\gamma _nr)Y_{\ell m}(\theta ,\phi ),$$ (18) where $`\gamma _n=m\alpha /n`$ and $`\alpha `$ is the fine structure constant. $`n`$ is the principal quantum number and $`\ell `$ varies between $`0`$ and $`n-1`$. The change of variable $`x=2\gamma _nr`$ allows us to write the energy shift as $$\mathrm{\Delta }E_{n,\ell }=-12\beta m\left(E_{n,\ell }^0\right)^2+8\beta m\gamma _n^2\alpha ^2\frac{(n-\ell -1)!}{n(n+\ell )!}\int _0^{\infty }x^{2\ell }e^{-x}\left[L_{n-\ell -1}^{2\ell +1}(x)\right]^2dx.$$ (19) As for the harmonic oscillator problem, the evaluation of this integral is quite simple. Indeed, using the following relation for Laguerre polynomials \[13, p. 
1038\] $$\underset{m=0}{\overset{n}{\sum }}L_m^\alpha (x)=L_n^{\alpha +1}(x),$$ (20) with the relation (16) and the following summation formula $$\underset{p=0}{\overset{b}{\sum }}\frac{(p+a)!}{p!}=\frac{(a+b+1)!}{(1+a)b!},$$ (21) the expression of the hydrogen spectrum, to first order in the deformation parameter $`\beta `$, reads $$E_{n,\ell }=-\frac{m\alpha ^2}{2n^2}+\left(\mathrm{\Delta }x_0\right)^2\frac{m^3\alpha ^4}{5}\frac{(4n-3(\ell +1/2))}{n^4(\ell +1/2)}.$$ (22) This formula shows that the corrections to the spectrum are always positive. The value of this additional term is maximal for the ground state and, for each value of $`n`$, the maximal contribution is obtained for $`\ell =0`$ levels. As in the harmonic oscillator case, the correction term, which depends explicitly on $`\ell `$, lifts the degeneracy of energy levels, which remain, however, $`(2\ell +1)`$-fold degenerate. The accuracy of the measurement of the frequency of the radiation emitted during the transition $`1S\rightarrow 2S`$ is about 1 kHz . Thus the energy difference between these two levels is determined with a precision of about $`10^{-12}`$ eV. Then, if we suppose that effects of the finite size of the electron cannot yet be seen experimentally, we find $$\mathrm{\Delta }x_0\lesssim 0.01\mathrm{fm}.$$ (23) But the corrections calculated here could already play a role in the theoretical description of the hydrogen atom, since the accuracy of theoretical calculations is lower than the precision of the experimental data. The main source of theoretical error is the determination of the proton charge radius. Thus, at this moment, confrontation between experimental data and standard theoretical calculations cannot exclude the effects studied in this paper. Nevertheless, the upper bound (23) seems to be reasonable. Moreover, a naive argument can give an order of magnitude of an “experimental” upper bound for the finite size of the electron. 
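The bound (23) is easy to reproduce from eq. (22): the deformation shifts the 1S and 2S levels by different amounts, and requiring the resulting shift of the $`1S\rightarrow 2S`$ interval to stay below the $`10^{-12}`$ eV experimental precision bounds $`\mathrm{\Delta }x_0`$. A numerical sketch in natural units (the constants below are standard values, not taken from this paper):

```python
import math

alpha = 1 / 137.035999      # fine-structure constant
m_e = 0.510998950e6         # electron mass [eV] (natural units)
hbarc = 197.3269804e6       # hbar*c [eV fm], converts eV^-1 to fm

def shift_coef(n, l):
    # coefficient of (Delta x0)^2 in the correction term of Eq. (22), in eV^3
    return m_e**3 * alpha**4 / 5 * (4 * n - 3 * (l + 0.5)) / (n**4 * (l + 0.5))

# deformation-induced shift of the 1S-2S interval, per unit (Delta x0)^2
coef = shift_coef(1, 0) - shift_coef(2, 0)
dx0 = math.sqrt(1e-12 / coef)   # eV^-1: largest Delta x0 hidden below 1e-12 eV
dx0_fm = dx0 * hbarc            # convert to fm
print(dx0_fm)                   # ~0.011 fm, consistent with the bound (23)
```

The 1S contribution dominates, since the correction factor $`(4n-3(\ell +1/2))/(n^4(\ell +1/2))`$ equals 5 for the ground state and only 0.8125 for the 2S level.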
Indeed, a lower bound for the mass of an excited state of the electron is about 85 GeV . Thus a photon with an energy of about 85 GeV cannot excite an electron. To a first approximation, this means that the resolution obtained with this photon is not sufficient to detect a finite size of the electron. The wavelength of such a photon could constitute an upper bound for the size of the electron, $$\mathrm{\Delta }x_0\lesssim \lambda \simeq 0.015\mathrm{fm}.$$ (24) This naive argument applied to the nucleon and its first radial excitation N(1440) yields a size of about 2.5 fm, which is the correct order of magnitude. Thus in the (very?) near future, with improvement of the accuracy of experimental data and, above all, improvement of the precision of standard theoretical calculations, it could be possible either to lower the upper bound (23) or to detect the existence of a non-vanishing $`\mathrm{\Delta }x_0`$. ## V Summary We have proposed a new formulation of the Schrödinger equation which takes into account the deformation of the Heisenberg algebra to first order in the deformation parameter $`\beta `$. This modified algebra introduces a minimal observable length in the uncertainty relations. It has been proposed in Ref. that this framework could be used to describe non-pointlike particles as an effective low-energy theory, neglecting their internal structure degrees of freedom. The minimal length $`\mathrm{\Delta }x_0`$ would then be linked with the non-pointlikeness of particles. In Sec. III, we have calculated, with the new approach, the corrections to the harmonic oscillator spectrum, which are in agreement with those derived in a previous calculation using another approach . Note that this method can be generalized to other dimensions. In particular, we have verified that the spectrum of the 1-dimensional harmonic oscillator is in agreement with that found in Ref. . 
Moreover, the wave function in position space can also be calculated, to first order in $`\beta `$, as can various observables associated with the systems studied. In Sec. IV, we have used this method to obtain the corrections to the hydrogen atom spectrum. Comparison with the experimental data for the transition $`1S\rightarrow 2S`$ yields a plausible upper bound for the non-pointlikeness $`\mathrm{\Delta }x_0`$ of the electron, which is about $`0.01`$ fm. The formulation of the Schrödinger equation proposed here could prove to be very useful for studying the properties of some systems, and their various associated observables, in the context of the deformed Heisenberg algebra studied here. ###### Acknowledgements. We thank Professor F. Michel for stimulating discussions, and Professor Y. Brihaye for reading the manuscript. We would like to thank IISN for financial support.
# Wave Scattering through Classically Chaotic Cavities in the Presence of Absorption: An Information-Theoretic Model ## a The $`n=1`$ case. This case, which describes a cavity with one waveguide supporting only one open channel ($`S`$ is thus the reflection amplitude back to the only channel we have), is, within our model (7), independent of the universality class $`\beta `$. Eq. (3) for $`S`$ in the polar representation reduces to $`S=\rho \mathrm{exp}i\theta `$; $`\rho ^2`$ represents the reflection coefficient $`R`$. The uniform weight (4) and the distribution (7) reduce to $$d\mu _{sub}(S)=\rho d\rho d\theta ,dP(S)=Ce^{-\nu \rho ^2}\rho d\rho d\theta .$$ (8) The $`R`$-probability density is $$w(R)=De^{-\nu R},0\le R\le 1,$$ (9) $`D`$ and $`\nu `$ being given by $$D=\frac{\nu }{1-e^{-\nu }},\langle R\rangle =\frac{1}{\nu }-\frac{1}{e^\nu -1}=\alpha .$$ (10) For weak absorption, $`\alpha \to 1`$, $`\nu \to -\infty `$ and the distribution (9) becomes strongly peaked around $`R=1`$, i.e. the unitarity circle, reducing to the one-sided delta function $`\delta (1-R)`$ as $`\alpha \to 1`$. In the other extreme of strong absorption, $`\nu \to +\infty `$, $`\alpha \simeq 1/\nu `$, $`D\simeq 1/\alpha `$ and $$w(R)\simeq \langle R\rangle ^{-1}e^{-R/\langle R\rangle },$$ (11) Rayleigh’s distribution, with the average $`\langle R\rangle =\alpha `$. ## b The orthogonal case for arbitrary $`n`$. Using the results of Ref. we find, for the average of an individual (angular) transmission or reflection coefficient, $$\langle T_{ab}\rangle ^{(1)}=\langle R_{ab}\rangle ^{(1)}=\alpha /(n+1)=(1/2)\langle R_{aa}\rangle ^{(1)}.$$ (12) We see the occurrence of the familiar backward enhancement factor 2. 
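The normalization $`D`$ and the average $`\langle R\rangle =\alpha `$ of the $`n=1`$ density $`w(R)=De^{-\nu R}`$ in eqs. (9)-(10) can be confirmed by direct integration over $`0\le R\le 1`$. A small numerical self-check (ours, not part of the paper):

```python
import math
from scipy.integrate import quad

def moments(nu):
    # w(R) = D exp(-nu R) on [0, 1] with D = nu / (1 - exp(-nu)), Eqs. (9)-(10)
    D = nu / (1 - math.exp(-nu))
    norm, _ = quad(lambda R: D * math.exp(-nu * R), 0, 1)
    mean, _ = quad(lambda R: R * D * math.exp(-nu * R), 0, 1)
    alpha = 1 / nu - 1 / (math.exp(nu) - 1)   # Eq. (10)
    return norm, mean, alpha

for nu in (-5.0, 0.5, 2.0, 10.0):
    norm, mean, alpha = moments(nu)
    assert abs(norm - 1) < 1e-9     # w(R) is correctly normalized
    assert abs(mean - alpha) < 1e-9 # <R> = alpha
# strong absorption: alpha ~ 1/nu, the Rayleigh regime of Eq. (11)
assert abs(moments(50.0)[2] - 1 / 50.0) < 1e-6
```

For $`\nu \to -\infty `$ the density indeed piles up at $`R=1`$, while for large positive $`\nu `$ it approaches the Rayleigh form (11).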
For the second moments we find $`\langle T_{ab}^2\rangle ^{(1)}=\langle R_{ab}^2\rangle ^{(1)}={\displaystyle \sum _{\alpha =1}^n}\langle |U_{1\alpha }|^4|U_{2\alpha }|^4\rangle _0\langle \rho _\alpha ^4\rangle ^{(1)}`$ (13) $`+2{\displaystyle \sum _{\alpha \ne \gamma =1}^n}\langle |U_{1\alpha }|^2|U_{2\alpha }|^2|U_{1\gamma }|^2|U_{2\gamma }|^2\rangle _0\langle \rho _\alpha ^2\rho _\gamma ^2\rangle ^{(1)}.`$ (14) The indices 1 and 2 indicate any pair of different channels; $`\langle \cdots \rangle _0`$ stands for an average with respect to the invariant measure of the unitary group . For $`\langle R_{aa}^2\rangle ^{(1)}`$ one sets the two indices $`1`$ and $`2`$ equal. For a two-waveguide problem with one channel in each waveguide, the $`S`$ matrix is two-dimensional ($`n=2N=2`$). Eq. (12) gives $`\langle T\rangle ^{(1)}=\alpha /3`$, $`\langle R\rangle ^{(1)}=2\alpha /3`$. Restricting ourselves to the limit of strong absorption, $`\alpha \ll 1`$, the Lagrange multiplier is $`\nu =3/(2\alpha )`$ and we obtain $$\langle T^2\rangle ^{(1)}\simeq 2\left[\langle T\rangle ^{(1)}\right]^2.$$ (15) Although we have not calculated the statistical distribution of $`T`$, the result (15) is consistent with the Rayleigh distribution for the transmission coefficient $`T`$. For a large number of channels, $`n=2N\gg 1`$, Eqs. (14) and (12) give $$\langle T_{ab}^2\rangle ^{(1)}=\langle R_{ab}^2\rangle ^{(1)}\simeq 2\frac{\alpha ^2}{n^2}\simeq 2\left[\langle T_{ab}\rangle ^{(1)}\right]^2,$$ (16) and similarly, $`\langle R_{aa}^2\rangle ^{(1)}\simeq 2\left[\langle R_{aa}\rangle ^{(1)}\right]^2`$, the relation between first and second moments for an exponential distribution with the average value (12), which becomes smaller as the absorption increases. For no absorption one reaches, in the limit $`n\gg 1`$, a Rayleigh distribution for $`R_{aa}`$. Ref. shows that, for the COE ($`\simeq `$ being an approximation for large $`n`$), $`w^{(1)}(R_{aa})=C\left(1-R_{aa}\right)^{\frac{n-3}{2}}\langle R_{aa}\rangle ^{-1}e^{-R_{aa}/\langle R_{aa}\rangle }.`$ ## c The unitary case for arbitrary $`n`$. In the unitary case, the statistical properties of a transmission coefficient are identical to those of a diagonal or off-diagonal reflection coefficient. The following equations are thus written for $`T_{ab}`$. 
We find, for its average, $`\langle T_{ab}\rangle ^{(2)}=\alpha /n`$; we have used the result $`\langle \left|U_{1\alpha }\right|^2\rangle _0=1/n`$. The difference in expectation values between the two symmetry classes is thus $$\langle T_{ab}\rangle ^{(1)}-\langle T_{ab}\rangle ^{(2)}=-\alpha /[n(n+1)].$$ (17) For the second moment of $`T_{ab}`$ we have $`\langle T_{ab}^2\rangle ^{(2)}=2{\displaystyle \sum _{\alpha \gamma }}\langle |U_{1\alpha }|^2|U_{1\gamma }|^2\rangle _0\langle |V_{\alpha 2}|^2|V_{\gamma 2}|^2\rangle _0\langle \rho _\alpha ^2\rho _\gamma ^2\rangle `$ (18) $`=[2/(n+1)^2]\left\{[(n-1)/n]\langle \rho _1^2\rho _2^2\rangle ^{(2)}+(2/n)\langle \rho _1^4\rangle ^{(2)}\right\}.`$ (19) We have used the result $`\langle |U_{1\alpha }|^2|U_{1\gamma }|^2\rangle _0=\left(1+\delta _{\alpha \gamma }\right)/\left[n(n+1)\right]`$. For a two-waveguide problem with one channel in each waveguide ($`n=2N=2`$) we have $`\langle T\rangle ^{(2)}=\alpha /2`$. In the limit of strong absorption, $`\alpha \ll 1`$, the Lagrange multiplier is $`\nu =2/\alpha `$ and we obtain a relation like (15). For a large number of channels, $`n=2N\gg 1`$, Eq. (19) gives a similar relation, now for $`T_{ab}`$, which is again consistent with Rayleigh’s distribution. For the CUE (i.e. no absorption) one reaches a Rayleigh distribution for $`n\gg 1`$. Ref. gives the distribution of a single transmission coefficient as $$w(T_{ab})=C\left(1-T_{ab}\right)^{n-2}\langle T_{ab}\rangle ^{-1}e^{-T_{ab}/\langle T_{ab}\rangle }.$$ (20) ## d Comparison with numerical simulations Some of our predictions are compared below with RMT numerical simulations for $`\beta =1`$. The $`S`$ matrices are constructed as $`S(E)=\left[I_n-iK(E)\right]^{-1}\left[I_n+iK(E)\right]`$, with $`K_{ab}(E)=\sum _\lambda \gamma _{\lambda a}\gamma _{\lambda b}/\left[E_\lambda -E\right]`$ and $`(I_n)_{ab}=\delta _{ab}`$ ($`a,b=1,\ldots ,n`$). The $`E_\lambda `$’s are generated from an “unfolded” zero-centered GOE with average spacing $`\mathrm{\Delta }`$. The $`\gamma _{\lambda a}`$’s are statistically independent, real, zero-centered Gaussian random variables. 
At $`E=0`$, $`S_{ab}=\left[1+\pi \gamma _{\lambda a}^2/\mathrm{\Delta }\right]^{-1}\left[1-\pi \gamma _{\lambda a}^2/\mathrm{\Delta }\right]\delta _{ab}`$, and we require $`\langle S\rangle =0`$. In the quantum case, addition of a constant imaginary potential $`iW`$ inside the cavity makes the $`E_\lambda `$’s complex and equal to $`E_\lambda -iW`$ (see also Ref. ). This is equivalent to evaluating the above expressions at the complex energy $`E+iW`$, which makes $`S(E+iW)`$ subunitary. Although Eq. (7) gives the probability distribution for the full $`S`$ matrix and arbitrary $`n`$, we only analyze below individual (angular) reflection and transmission coefficients, for $`n=1,2`$. Fig. 1 shows, for $`n=1`$, the results of the RMT numerical simulations (as histograms), compared with the present model for the corresponding value of $`\alpha `$ (continuous curves). For strong absorption the model works very well, the agreement with Rayleigh’s law being excellent, while for moderate and weak absorptions the model fails. Fig. 2 shows the distribution of the transmission coefficient $`T`$ obtained from an RMT simulation for $`n=2`$. The agreement with the Rayleigh distribution with centroid $`\langle T\rangle =\alpha /3`$ is excellent; $`R`$ was also checked and found to agree with $`\langle R\rangle =2\alpha /3`$. That individual transmission and reflection coefficients attain a Rayleigh distribution for strong absorption can be understood as follows. $`S_{ab}(E+iW)`$ coincides with the energy average of $`S_{ab}(E)`$ evaluated with a Lorentzian weighting function of half-width $`W`$. If $`\mathrm{\Gamma }^{corr}`$ is the correlation energy, $`W`$ can be thought of as containing $`m=W/\mathrm{\Gamma }^{corr}`$ independent intervals. If $`m\gg 1`$, by the central-limit theorem the real and imaginary parts of $`S_{ab}`$ attain a Gaussian distribution, and $`\left|S_{ab}\right|^2`$ an exponential distribution. This seems to be the situation captured by the maximum-entropy approach. 
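The $`n=1`$ simulation just described is easy to reproduce qualitatively. The sketch below is our own simplification: it replaces the unfolded GOE by an equally spaced picket-fence spectrum and draws Gaussian couplings with $`\langle \gamma ^2\rangle =\mathrm{\Delta }/\pi `$ so that $`\langle S\rangle \approx 0`$ without absorption; it only illustrates that $`R=|S|^2`$ is subunitary and that its mean drops as the absorption $`W`$ grows, not the paper's quantitative distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_R(W, n_levels=401, n_samples=2000, Delta=1.0):
    # n=1: K(E+iW) = sum_l gamma_l^2 / (E_l - E - iW), evaluated at E = 0;
    # S = (1 + iK) / (1 - iK), so Im K > 0 for W > 0 gives |S| < 1
    E_levels = Delta * (np.arange(n_levels) - n_levels // 2 + 0.5)
    R = np.empty(n_samples)
    for s in range(n_samples):
        gamma2 = (rng.normal(size=n_levels) ** 2) * Delta / np.pi
        K = np.sum(gamma2 / (E_levels - 1j * W))
        S = (1 + 1j * K) / (1 - 1j * K)
        R[s] = abs(S) ** 2
    return R

weak, strong = sample_R(0.1), sample_R(2.0)
print(weak.mean(), strong.mean())   # mean R drops as absorption grows
```

One can check directly that every sampled $`R`$ lies strictly below 1, and that stronger absorption pushes the whole histogram toward small $`R`$.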
Summarizing, the results presented in this paper indicate that wave scattering through classically chaotic cavities in the presence of strong absorption can be described in terms of an information-theoretic model. We benefited from the constructive criticism of the first version of the paper by C. W. J. Beenakker and P. W. Brouwer. One of the authors (PAM) acknowledges partial financial support from CONACyT, Mexico, through Contract No. 2645P-E, as well as the hospitality of Bar-Ilan University, where most of this work was performed.
## 1 Introduction Since the pioneering work by Sompolinsky et al. , the occurrence of oscillations and chaos has become a major field of interest in the framework of neural networks . Neural networks with symmetric synaptic connections have been the object of extensive studies by methods closely related to those used in the theoretical description of spin glasses , since they admit an energy function. Asymmetric synapses have also been studied, and the presence of chaotic dynamics was examined, following , in a number of subsequent papers (see e.g. ). The investigation of chaotic neural networks is interesting not only from a theoretical point of view, but also for practical reasons, as their dynamical possibilities are richer and allow for a larger spectrum of engineering applications (see, e.g., Ref. ). It is also worth stressing that the brain is a highly dynamic system. The rich temporal structure (oscillations) of neural processes has been studied in ; chaotic behaviour has been found in the nervous system . Relying on these neurophysiological findings, the study of chaos in neural networks may be useful for the comprehension of cognitive processes in the brain . Asymmetric synapses are not the only route to chaos in neural networks; another possibility is to use a nonmonotonic functional dependence for the activation function, i.e. the transfer function that gives the state of the neuron as a function of the post-synaptic potential. In recent papers it has been shown that such a nonmonotonic transfer function may lead to macroscopic chaos in attractor neural networks: chaos appears in a class of macroscopic trajectories characterized by an overlap with the initial configuration that never vanishes. In other words, the network preserves a memory of the initial configuration, but the macroscopic overlap does not converge to a fixed value and oscillates, giving rise to a chaotic time series. 
Also the case of diluted networks with dynamical, adaptive synapses and nonmonotonic neurons in the presence of a Hebbian learning mechanism has been studied, and it has been found that adaptation leads to a reduction of the dynamics . In this paper we further analyze the dynamic behaviour of attractor neural networks with a nonmonotonic transfer function. In particular we analyze by mean-field equations a network whose macroscopic dynamics can be analytically calculated . The time evolution of the macroscopic parameters describing the system is determined by a two-dimensional map that exhibits chaotic behaviour and represents, in our opinion, a nontrivial and interesting example of a non-linear dynamical system (for recent reviews see e.g. ). In the present work the following issues are considered: structure and hyperbolicity of the strange attractor, Hausdorff dimension and Lyapunov exponents. These are typical analyses of non-linear dynamical behaviour, which we perform on a neurally motivated two-dimensional map in order to achieve a better understanding of the dynamics of this class of neural networks. We also analyze the problem of the fragility of chaos and we explicitly prove that the present model behaves in agreement with the conjecture in , i.e. that periodicity windows are constructed around spine loci (one-dimensional manifolds in the two-dimensional parameter space of the model considered here). This is in our opinion an interesting confirmation of this conjecture, which sheds light on the geometrical features of the periodicity windows to be found in the chaotic regions. Finally, we examine the microscopic behaviour underlying the mean-field description: we consider two replicas of the system starting from slightly different initial conditions and we show that these two different configurations never become identical, independently of their macroscopic behaviour. 
This feature was already observed in diluted networks with a monotone transfer function ; here we prove that such behaviour is also present in the case of nonmonotonic neurons. It follows that at the microscopic level the network dynamics is always to be considered chaotic, whereas from a macroscopic, mean-field point of view a rich variety of behaviours can occur: fixed point, periodicity and chaos. We note that a similar emergence of a macroscopic evolution in the presence of microscopic chaos has recently been found in another framework, i.e. Chaotic Coupled Maps models, where it has been termed nontrivial collective behaviour (NTCB, see and references therein). The paper is organized as follows: in the next section the model is described and the flow equations for the macroscopic parameters are reported and analyzed. In section 3 we study the time evolution of the distance between two replicas of the network. In section 4 we present our conclusions. ## 2 The model: analysis of flow equations. We consider the model of Ref. , i.e. a neural network with $`N`$ three-state neurons (spins) $`s_i(t)\in \{-1,0,1\}`$, $`i=1,\ldots ,N`$. For each neuron $`s_i`$, $`K`$ input sites $`j_1(i),\ldots ,j_K(i)`$ are randomly chosen among the $`N`$ sites, and $`NK`$ synaptic interactions $`J_{ij}`$ are introduced. We assume that the synapses are two-state variables $`J_{ij}\in \{-1,1\}`$, randomly and independently sampled with mean $`J_0`$; they are not assumed to evolve in time (the case of adapting synapses is studied in ). A parallel deterministic dynamics is assumed for the neurons, where the local field acting on neuron $`s_i`$ (the post-synaptic potential) is given by $$h_i(t)=\underset{j}{\sum }J_{ij}s_j(t),$$ (1) with the sum taken over the $`K`$ input neurons. We assume a nonmonotonic transfer function, depending on the parameter $`\theta `$ : $$s_i(t+1)=F_\theta \left(h_i(t)\right),$$ (2) where $`F_\theta (x)=\text{sign}(x)`$ when $`|x|<\theta `$ and vanishes otherwise. 
The dynamics of this model is solved by macroscopic flow equations for the parameters describing the system. Let us now introduce order parameters for the neurons. The overlap with the pattern $`\{\xi \}`$ to be retrieved (we choose $`\{\xi =1\}`$ for simplicity) is measured by $`m(t)=\langle s(t)\rangle `$. We stress that the suppression of the site index $`i`$ is possible because all averages are site-independent. The neuronic activity is given by $`Q(t)=\langle s^2(t)\rangle `$. The flow equations for $`m`$ and $`Q`$ have been obtained in : $$m(t+1)=\text{erf}\left(\frac{\mu (t)}{\sqrt{\sigma (t)}}\right)-\frac{1}{2}\left[\text{erf}\left(\frac{\theta +\mu (t)}{\sqrt{\sigma (t)}}\right)-\text{erf}\left(\frac{\theta -\mu (t)}{\sqrt{\sigma (t)}}\right)\right],$$ (3) $$Q(t+1)=\frac{1}{2}\left[\text{erf}\left(\frac{\theta +\mu (t)}{\sqrt{\sigma (t)}}\right)+\text{erf}\left(\frac{\theta -\mu (t)}{\sqrt{\sigma (t)}}\right)\right],$$ (4) where $$\mu (t)=Km(t)J_0$$ (5) and $$\sigma (t)=K\left(Q(t)-J_0^2m^2(t)\right)$$ (6) are the mean and variance, respectively, of the local field acting on the neurons at time $`t`$. Depending on the values of $`\theta `$ and $`J_0`$, three kinds of dynamic behaviour are possible for the network, which lead to a phase diagram . Two fixed-point ordered phases are present: the ferromagnetic phase (F), characterized by $`m>0,Q>0`$, and the self-sustained activity phase (S), characterized by $`m=0,Q>0`$. A phase without fixed points, corresponding to cyclic or chaotic attractors and characterized by $`m_t>0,Q_t>0`$, is also found; we call it the period-doubling phase (D). We remark that the phase corresponding to the fixed point $`(m=0,Q=0)`$ is missing in this model. According to the values of the parameters we can get one phase or another; in Fig. 1 the bifurcation diagram of $`m`$ versus $`J_0`$, with $`\theta =5`$ kept fixed, is shown (for $`K=10`$). 
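The flow equations (3)-(6) are straightforward to iterate numerically. In the sketch below (our own) the update is written directly in terms of the Gaussian statistics of the local field, with the factor $`\sqrt{2}`$ made explicit in the erf arguments and a relative minus sign between the bracketed erf terms of the $`m`$ equation, as required for $`m^{}=P(0<h<\theta )-P(-\theta <h<0)`$; we assume the $`\sqrt{2}`$ is absorbed into the error-function convention of the printed equations. Parameter values follow the text ($`\theta =5`$, $`K=10`$):

```python
import math

def step(m, Q, J0, theta, K):
    # one iteration of the flow equations (3)-(6); the local field is
    # Gaussian with mean mu and variance K(Q - J0^2 m^2)
    mu = K * J0 * m
    s = math.sqrt(2.0 * K * (Q - J0**2 * m**2))
    m_new = (math.erf(mu / s)
             - 0.5 * (math.erf((theta + mu) / s) - math.erf((theta - mu) / s)))
    Q_new = 0.5 * (math.erf((theta + mu) / s) + math.erf((theta - mu) / s))
    return m_new, Q_new

def run(J0, m=0.5, Q=0.5, theta=5.0, K=10, T=1000):
    for _ in range(T):
        m, Q = step(m, Q, J0, theta, K)
    return m, Q

print(run(0.4))   # S phase: m decays to 0 while Q stays finite
print(run(0.6))   # F phase: fixed point with m > 0
```

Iterating at larger $`J_0`$ reproduces the period-doubling route described next.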
The S fixed point is stable for $`J_0<0.5`$; at $`J_0\simeq 0.5`$ the F fixed point continuously appears and remains stable until $`J_0\simeq 0.69`$, where a bifurcation to a stable 2-cycle takes place. The bifurcation mechanism is period doubling, i.e. an eigenvalue of the Jacobian matrix at the fixed point leaves the unit disk passing through $`-1`$. Increasing $`J_0`$, successive bifurcations arise; eventually the system enters the chaotic region at $`J_0\simeq 0.88`$. In the chaotic region windows of periodicity intercalate with chaotic attractors, which is a well-known feature of dynamical systems with chaotic behaviour. We verified that the values of $`J_0`$ where the successive bifurcations take place are consistent with Feigenbaum’s universality law , i.e. the length in $`J_0`$ of the range of stability for an orbit of period $`2^n`$ decreases approximately geometrically with $`n`$ and the ratio of successive range lengths is close to $`4.669\ldots `$ for large $`n`$. We note that, in the phase $`D`$, the two-dimensional map (3-4) still possesses the $`F`$ and $`S`$ fixed points, but they are unstable. Let us now consider the strange attractor and its dependence on $`J_0`$. For example, in Fig. 2 the strange attractor is shown, for $`\theta =5`$ and $`K=10`$, in correspondence with $`J_0=0.9`$, 0.95 and 0.99 respectively; the fixed point F is represented by a star. In the case $`J_0=0.9`$, the attractor is made of two disconnected components; in the stationary regime successive points on the attractor jump from one component to the other. As $`J_0`$ grows ($`J_0=0.95`$), the attractor evolves into a more complicated structure, still composed of two disconnected components. We remark that in these two cases the fixed point F is not a limit point of the attractor. At $`J_0=0.99`$ the two components of the attractor merge and F becomes a limit point of the attractor. 
Concerning the Hausdorff dimension of the attractor, we found it to be close to 0.95 in the three cases described above (the dimension was estimated by the method described in , see also ). Let us now discuss the hyperbolicity of the attractor. We recall that in the hyperbolic case many interesting properties about the structure and dynamics of chaos hold (see, e.g. ). In Fig. 3 we have shown finite-length segments of the stable manifold of the fixed point F (for $`J_0=0.99,\theta =5,K=10`$); since F is on the attractor, we can argue that the attractor is the closure of the unstable manifold of F. We note that the stable and unstable manifolds have homoclinic intersections. Moreover Fig. 3 displays near tangencies between the stable and unstable manifolds; it is reasonable that some other segment of the stable manifold will be exactly tangent to the unstable manifold. We conclude, therefore, that the attractor is not hyperbolic (see for a discussion of the non-hyperbolicity of Henon’s attractor , which has similarities with the attractor of the map considered here). We continue our analysis of the dynamical properties of the neural model and we turn now to the Lyapunov exponents. By evaluating the Jacobian of the map, we find it to be area-contracting; therefore there is at most one positive Lyapunov exponent. We have evaluated the first ($`\lambda _1`$) and second ($`\lambda _2`$) Lyapunov exponents by the method of Ref. and the results are displayed in Figs. 4a) and 4b): as expected, the second Lyapunov exponent is always negative. For a given model, the ratio of the number of free parameters to the number of positive Lyapunov exponents seems to be related, according to a conjecture in , to the fragility of chaos: if this ratio is greater than or equal to one, a slight change of the parameters can typically destroy chaos and a stable periodic orbit sets in. 
Since in our case the ratio is $`2`$ (we have two parameters, $`J_0`$ and $`\theta `$), we conclude that the macroscopic chaos of the model should be fragile. In Fig. 5 a portion of the parameter space is depicted; here black pixels correspond to chaotic attractors and white pixels to stable periodic attractors. One can see that extended periodicity windows are present; they are apparently everywhere dense. The above-cited conjecture in is based on the idea that periodicity windows are constructed around spine loci, i.e. values of the parameters that give rise to superstable orbits. For two-dimensional maps, the spine locus of cycles with period $`p`$ is determined by the conditions $`detM=0`$ and $`trM=0`$, where $`M`$ is the Jacobian matrix of the $`p`$-times iterated map. Our map has no critical points (like, e.g., the Henon map) because the determinant of the Jacobian never vanishes; hence the condition $`detM=0`$ cannot be strictly satisfied. However, since the map is area-contracting, $`detM\simeq 0`$ for a periodic orbit with sufficiently high period $`p`$ (see ): one can therefore neglect the condition $`detM=0`$ and the stability requirements reduce to one condition, which according to the discussion above is $`trM=0`$. It follows that the spines are of codimension one, i.e. one-dimensional manifolds in the $`(J_0,\theta )`$ plane. In Fig. 6 we have shown finite-length segments of the spine loci determined by the condition $`trM=0`$ with $`p=32,64`$, and $`24`$. Since we do not have an analytic treatment of the two-dimensional map, we have implemented the condition $`trM=0`$ numerically. White areas are periodicity windows, which are apparently constructed around spine loci; hence the conjecture in is confirmed. We note, in passing, that our findings confirm that the behaviour of two-dimensional area-contracting maps is often similar to that of one-dimensional maps with critical points . 
The concept of robust chaos, associated with an attractor for which the number of positive Lyapunov exponents (in some region of parameter space) is larger than the number of free (accessible) parameters in the model, has also been discussed in ; recently it has been pointed out that nonmonotonic transfer functions may lead to robust chaos in time series generated by feed-forward neural nets . Fully connected networks with a nonmonotonic activation function might well provide robust chaos; however the analytic analysis of these models is difficult since the dynamical theory which describes the fully connected Hopfield model has not yet been extended to the case of nonmonotonic neurons. ## 3 Damage spreading. A system is said to exhibit damage spreading if the distance between two of its replicas, which evolve from slightly different initial conditions, increases with time (see, e.g., ). Even though damage spreading was first introduced in the context of biologically motivated dynamical systems , it has become an important tool to study the influence of initial conditions on the time evolution of various physical systems. In Ref. this phenomenon was studied in diluted networks with a monotonic transfer function. The occurrence of damage spreading in Little-Hopfield neural networks, both for fully connected and strongly diluted systems, has been studied in . Here we generalize this study to the case of diluted networks with nonmonotonic neurons. Let us consider two replicas of the same system having initial ($`t=0`$) configurations with the same activity $`Q_0`$ and overlap $`m_0`$ but microscopically different for a small number of neurons. Subsequently the two replicas evolve, subject to the same dynamics since their synaptic connections are identical. The two replicas will have the same macroscopic parameters $`m(t)`$ and $`Q(t)`$ at every later time $`t`$, since the trajectories $`m(t)`$ and $`Q(t)`$ are obtained by iterating equations (3-4) from $`m_0`$ and $`Q_0`$. 
At the microscopic level the situation may be different and we therefore study a suitably defined distance between the two replicas. Let us call $`h^1(t)`$ and $`h^2(t)`$ the local fields acting on $`s^1`$ and $`s^2`$, two corresponding neurons of the replicas located on the same lattice site. The distance between the local fields is defined by: $$d(t)=\left\langle \left(h^1(t)-h^2(t)\right)^2\right\rangle =2\left(\sigma (t)-\mathrm{\Delta }(t)\right),$$ (7) where $$\mathrm{\Delta }(t)=\left\langle h^1(t)h^2(t)\right\rangle -\left\langle h^1(t)\right\rangle \left\langle h^2(t)\right\rangle $$ (8) is the linear correlation between local fields at time $`t`$. In the limit $`N\to \mathrm{}`$ with $`K`$ large and finite, $`h^1`$ and $`h^2`$ can be treated as gaussian variables with probability density: $$P_t(h^1,h^2)=\frac{1}{C}\mathrm{exp}\left\{-\frac{1}{2}\left[\frac{\sigma }{\sigma ^2-\mathrm{\Delta }^2}\left((h^1-\mu )^2+(h^2-\mu )^2\right)-\frac{2\mathrm{\Delta }}{\sigma ^2-\mathrm{\Delta }^2}(h^1-\mu )(h^2-\mu )\right]\right\},$$ (9) where $`C`$ is a normalization factor and $`\sigma `$, $`\mu `$, $`\mathrm{\Delta }`$ depend implicitly on the time $`t`$. The time evolution law for $`\mathrm{\Delta }(t)`$ is given by: $$\mathrm{\Delta }(t+1)=K\left(\left\langle s^1(t+1)s^2(t+1)\right\rangle -J_0^2m^2(t+1)\right).$$ (10) The average of the product of corresponding neurons in the two replicas is evaluated as follows: $$\left\langle s^1(t+1)s^2(t+1)\right\rangle =\int 𝑑h^1𝑑h^2P_t(h^1,h^2)F_\theta (h^1)F_\theta (h^2);$$ (11) the evaluation of the integral on the r.h.s. 
of (11) is straightforward and leads to the time evolution law for $`\mathrm{\Delta }(t)`$, which can be written as follows: $$\mathrm{\Delta }(t+1)=K\left(\int _{-\frac{\mu }{\sqrt{\sigma }}}^{\frac{\theta -\mu }{\sqrt{\sigma }}}DzI(z)-\int _{-\frac{\theta +\mu }{\sqrt{\sigma }}}^{-\frac{\mu }{\sqrt{\sigma }}}DzI(z)-J_0^2m^2(t+1)\right),$$ (12) where $`Dz=e^{-\frac{1}{2}z^2}\frac{dz}{\sqrt{2\pi }}`$ is the gaussian measure and $$I(z)=\text{erf}(A)+\frac{1}{2}[\text{erf}(B)-\text{erf}(C)],$$ (13) with $$A=\frac{\mu \sqrt{\sigma }}{\sqrt{\sigma ^2-\mathrm{\Delta }^2}}+\frac{\mathrm{\Delta }z}{\sqrt{\sigma ^2-\mathrm{\Delta }^2}},$$ (14) $$B=\frac{(\theta -\mu )\sqrt{\sigma }}{\sqrt{\sigma ^2-\mathrm{\Delta }^2}}-\frac{\mathrm{\Delta }z}{\sqrt{\sigma ^2-\mathrm{\Delta }^2}},$$ (15) $$C=\frac{(\theta +\mu )\sqrt{\sigma }}{\sqrt{\sigma ^2-\mathrm{\Delta }^2}}+\frac{\mathrm{\Delta }z}{\sqrt{\sigma ^2-\mathrm{\Delta }^2}}.$$ (16) Equation (12), together with (3-4), determines the time evolution of $`\mathrm{\Delta }(t)`$. We remark that $`\mathrm{\Delta }(t)=\sigma (t)`$ (i.e. $`d(t)=0`$) is a fixed point of eq.(12). The possible occurrence of damage spreading can now be seen to be equivalent to the instability of the fixed point $`\mathrm{\Delta }=\sigma `$. We have studied this problem numerically. We find that damage spreading occurs for any choice of the parameters $`J_0`$, $`\theta `$, $`K`$. It follows that, from a microscopic point of view, the motion of the system is always to be considered chaotic even though at the macroscopic level it can exhibit different behaviours (fixed point, periodicity, chaos). In Fig. 7 we depict the stationary regime of $`d(t)`$ in the cases of periodic and chaotic macroscopic dynamics; the initial distance was $`d(0)=10^{-5}`$. The macroscopic behaviour can be seen in the lower part of the figures (the overlap trajectory $`m(t)`$); it is a cycle with period 4 in case (a) and it is chaotic in case (b). 
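Equations (12)-(16) can be iterated numerically once the macroscopic trajectories are known. The sketch below implements one step of eq. (12); the values of $`\mu `$, $`\sigma `$ and $`m(t+1)`$ must be supplied from the macroscopic equations (3-4), which are not reproduced in this excerpt, and the sample arguments used in the usage note are illustrative.

```python
import math

def I_of_z(z, mu, sigma, Delta, theta):
    # I(z) of eq. (13), with A, B, C from eqs. (14)-(16); requires |Delta| < sigma
    den = math.sqrt(sigma**2 - Delta**2)
    rs = math.sqrt(sigma)
    A = mu * rs / den + Delta * z / den
    B = (theta - mu) * rs / den - Delta * z / den
    C = (theta + mu) * rs / den + Delta * z / den
    return math.erf(A) + 0.5 * (math.erf(B) - math.erf(C))

def gauss_int(f, a, b, n=2000):
    # composite Simpson integration of f(z) against Dz = e^{-z^2/2} dz / sqrt(2 pi)
    if n % 2:
        n += 1
    h = (b - a) / n
    g = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi) * f(z)
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * g(a + i * h)
    return s * h / 3.0

def Delta_next(mu, sigma, Delta, theta, K, J0, m_next):
    """One step of eq. (12) for the field correlation Delta(t+1)."""
    rs = math.sqrt(sigma)
    f = lambda z: I_of_z(z, mu, sigma, Delta, theta)
    term = (gauss_int(f, -mu / rs, (theta - mu) / rs)
            - gauss_int(f, -(theta + mu) / rs, -mu / rs))
    return K * (term - J0**2 * m_next**2)
```

For example, `Delta_next(mu=0.2, sigma=1.0, Delta=0.3, theta=1.0, K=10, J0=1.0, m_next=0.1)` returns a finite number; iterating it alongside (3-4) and monitoring whether `Delta` approaches `sigma` (i.e. `d(t)` approaches zero) is the stability test described in the text.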
Correspondingly the distance $`d(t)`$ is always greater than zero (i.e. damage spreading occurs) and has period 4 in case (a), whereas it shows chaotic behaviour in case (b). We remark that in Ref. it has been shown that the presence of adapting synapses in these networks leads to a reduction of the macroscopic dynamics. The adapting system self-regulates its synaptic configuration, by its own dynamics, so as to escape from chaotic regions: in the stationary regime, the mean of the synapses $`J(t)`$ remains practically constant (equal to the stationary value $`J_{stat}`$) and the system settles in periodic macroscopic orbits. Since we found damage spreading for all fixed values of $`J_0`$, it follows that the adapting system also displays damage spreading in the stationary regime. In other words, the adaptiveness of the synapses should not remove damage spreading, although it reduces the macroscopic dynamics. However, damage spreading may be suppressed if the neuron updating rule is made stochastic by a proper amount of noise (the two replicas being subject to the same noise, see ).

## 4 Conclusions.

In this paper we have studied a diluted neural network with nonmonotonic activation function whose macroscopic dynamics is given by a two-dimensional map. Some properties of this non-linear map have been studied, namely the structure and non-hyperbolicity of the strange attractor; in particular, we have analyzed the fragility of the chaos and we have shown the validity of a recently discussed conjecture, i.e. that periodicity windows are constructed around spine loci. Finally, we have studied the time evolution of the distance between two replicas of the model which evolve subject to the same synaptic configuration. We have found that the two replicas never become identical, and the system exhibits damage spreading for any choice of the parameters. 
In the stationary regime the distance between the two replicas does not vanish and the trajectory $`d(t)`$ behaves in agreement with the macroscopic dynamics: it is periodic (chaotic) if $`m(t)`$ is periodic (chaotic).

## Acknowledgements

The authors gratefully thank L. Angelini, G. Gonnella, M. Pellicoro and M. Villani for useful discussions.

Figure Captions

Figure 1: Bifurcation map of $`m`$ versus $`J_0`$ in the case $`K=10`$ and $`\theta =5`$.

Figure 2: The strange attractor of map (3-4), corresponding to $`K=10`$, $`\theta =5`$ and $`J_0=0.9`$ (a), $`0.95`$ (b), $`0.99`$ (c). The star represents the fixed point F.

Figure 3: The stable and unstable manifolds of the fixed point F (represented by the star), for $`J_0=0.99,K=10,\theta =5`$.

Figure 4: First (a) and second (b) Lyapunov exponents versus $`\theta `$, corresponding to $`K=10`$ and $`J_0=0.9`$.

Figure 5: A portion of the parameter space: $`(\theta ,J_0)\in [5,5.12]\times [0.88,0.9]`$. Black pixels correspond to chaotic behaviour whereas white pixels correspond to periodicity.

Figure 6: A portion of parameter space: $`(\theta ,J_0)\in [5,5.015]\times [0.88,0.8825]`$. White areas correspond to periodic behaviour. The solid line represents the spine locus with $`p=24`$, the dashed line is the spine locus with $`p=64`$, the dotted line is the spine with $`p=32`$. These curves are obtained numerically by interpolation of a finite set of points characterized by the condition $`trM=0`$. Grey areas correspond to chaotic behaviour; they contain infinitely many periodicity windows, not displayed here.

Figure 7: The stationary regime of the distance between two configurations having initial distance $`d(0)=10^{-5}`$. Squares correspond to the distance, $`y=d(t)`$, while triangles represent the overlap, $`y=m(t)`$. The parameter values are $`K=10,\theta =5,J_0=0.85`$ (a) and $`K=10,\theta =5,J_0=0.95`$ (b).
# DAMTP-1999-52, hep-th/9905009 Instanton vibrations of the 3-Skyrmion

## 1 Introduction

In the Skyrme model, the classical $`B`$-nucleon nucleus is a $`B`$-Skyrmion: a minimum energy Skyrme field with topological charge $`B`$. The $`B`$-Skyrmions have been calculated numerically for $`B`$ up to nine . The Skyrme model is nonrenormalizable and so cannot be quantized as a field theory. However, it is hoped that the quantum mechanics on some finite-dimensional space in the charge $`B`$ sector might give a good model of the quantized $`B`$-nucleon. This approach has been reasonably successful in the 1-Skyrmion case, but for higher $`B`$ it is hard to choose a suitable, tractable, finite-dimensional space. The 1-Skyrmion is spherically symmetric and has six zero modes: three translational and three rotational. This suggests that the finite-dimensional space should be $`6B`$-dimensional, and one popular candidate is the gradient-flow manifold descending from the charge $`B`$ spherical saddle-point . Recently, the vibration spectra of $`B`$-Skyrmions have been calculated numerically for $`B`$ equals two, three, four and seven . It was found that the vibration frequencies of the $`B`$-Skyrmion are divided into two groups by the breather mode which corresponds to dilation. This suggests that it might be necessary to add the breather mode to the $`6B`$-dimensional space to give $`(6B+1)`$ dimensions, or even to include seven dimensions for each Skyrmion to give a $`7B`$-dimensional space. Another suggestion is that $`8B-3`$ vibrational modes should be expected . The modes below the breather have been interpreted as being monopole-like and may correspond to the gradient-flow manifold descending from the saddle-points of infinite Skyrmion separation and from the charge $`B`$ torus . It may be that this is the space upon which the quantization should be performed. It is a $`(4B+2)`$-dimensional space. 
All these spaces are thought to be well approximated by instanton generated Skyrme fields . In the instanton construction, Skyrme fields are derived from instanton fields by calculating their holonomy in the $`x_4`$ direction . There is a $`(8B-1)`$-dimensional family of baryon number $`B`$ Skyrme fields derived from the space of $`B`$-instantons. It is known for $`B`$ equals one, two, three and four that the $`B`$-Skyrmion is well approximated by an instanton-generated Skyrme field . In this paper the vibrations around the instanton generated 3-Skyrmion are studied. The decomposition of these vibrations as representations of the tetrahedral group includes the same representations as are found in the decomposition of the numerically determined spectrum. This seems to indicate that the numerically determined vibrations are close to being tangent to the space of instanton generated Skyrme fields. It is consistent with the view that, whatever space should be used to quantize the $`B`$-Skyrmion, it is approximated by a subspace of the space of instanton generated Skyrme fields. The 3-Skyrmion has tetrahedral symmetry . In , the Jackiw-Nohl-Rebbi (JNR) ansatz is used to derive a tetrahedral 3-instanton. From this, the instanton-generated 3-Skyrmion is calculated. In , Walet examines vibration modes of the instanton-generated 3-Skyrmion by varying the JNR parameters. Although a large class of instantons can be constructed using the JNR ansatz, it is not general. However, the Atiyah-Drinfeld-Hitchin-Manin (ADHM) construction is general and, in this paper, the tetrahedral 3-instanton ADHM matrix is calculated. The instanton-generated 3-Skyrmion vibration modes are then examined by varying the ADHM parameters. The vibration frequencies are not calculated. However, the vibrations are decomposed under the action of the tetrahedral symmetry. This allows the decomposition to be compared to other calculations of 3-Skyrmion vibrations. 
## 2 The ADHM matrix for the 3-Skyrmion

Symmetric ADHM matrices have been discussed in a recent paper by Singer and Sutcliffe and this should be consulted for any details not included in this section. The ADHM matrix for a $`B`$-instanton is a quaternionic matrix $$\widehat{M}=\left(\begin{array}{c}L\\ M\end{array}\right)$$ (1) where $`L`$ is a $`B`$-vector and $`M`$ is a symmetric $`B\times B`$ matrix. $`\widehat{M}`$ must satisfy the ADHM constraint $$\widehat{M}^{}\widehat{M}\text{ is real}.$$ (2) Dagger denotes quaternionic conjugation and matrix transposition. Pure quaternions can be identified with $`𝔰𝔲_2`$ by $`-i𝝈=(i,j,k)`$ where $`𝝈=(\sigma _1,\sigma _2,\sigma _3)`$ are the Pauli matrices. With this identification, the instanton gauge fields are $$A_\mu (x)=N^{}(x)\partial _\mu N(x)$$ (3) where $`N(x)`$ is the unit length $`(B+1)`$-vector solving $$N^{}(x)\left(\begin{array}{c}L\\ M-x\mathrm{𝟏}_B\end{array}\right)=0.$$ (4) In this equation, $`\mathrm{𝟏}_B`$ is the $`B\times B`$ identity matrix and the $`𝐑^\mathrm{𝟒}`$ position is written as a quaternion: $`x=x_4+x_1i+x_2j+x_3k`$. There is an ambiguity in choosing $`N(x)`$ given by $$N(x)\to N(x)g(x)$$ (5) where $`g(x)`$ is a unit quaternion. The unit quaternions are identified with the two-dimensional representation of SU<sub>2</sub> and so this ambiguity corresponds to gauge transformations of the fields. There is also an ambiguity in $`\widehat{M}`$ given by $$\widehat{M}\to \left(\begin{array}{cc}g& 0\\ 0& \rho \end{array}\right)\widehat{M}\rho ^{-1}$$ (6) where $`g`$ is a unit quaternion and $`\rho `$ is a real orthogonal $`B\times B`$ matrix. This is a gauge transformation of the ADHM matrix: it does not affect the fields. This convenient version of the ADHM data is the canonical form discussed in . The ADHM construction, as originally introduced, involved a larger gauge ambiguity and a second ADHM matrix: a matrix coefficient of $`x`$ in (4). The canonical form is a partial fixing of the larger gauge ambiguity. 
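As a concrete illustration of eqs. (1)-(6), the sketch below solves eq. (4) numerically for the simplest case $`B=1`$, representing quaternions as $`2\times 2`$ complex matrices via the identification $`(i,j,k)=-i𝝈`$ quoted above. The values of the scale, the position quaternion and the sample point are arbitrary illustrative choices; $`N(x)`$ is found as the two-dimensional left null space of the matrix in (4), and the freedom of basis in that null space is exactly the gauge ambiguity (5).

```python
import numpy as np

def quat(a, b, c, d):
    # q = a + b i + c j + d k as a 2x2 complex matrix, using (i,j,k) = -i*sigma
    return np.array([[a - 1j * d, -c - 1j * b],
                     [c - 1j * b, a + 1j * d]])

# illustrative B = 1 ADHM data: L = lambda (a real scale), M = position quaternion
lam = 1.3
Mq = quat(0.0, 0.2, -0.4, 0.7)

def Delta_of_x(x):
    # the 4x2 complex matrix (L; M - x 1) appearing in eq. (4)
    return np.vstack([quat(lam, 0.0, 0.0, 0.0), Mq - x])

def N_of_x(x):
    """Unit solution of N^dagger (L; M - x 1) = 0, eq. (4), for B = 1:
    the two-dimensional left null space of Delta(x), via its SVD."""
    U, _s, _Vh = np.linalg.svd(Delta_of_x(x))
    return U[:, 2:]        # columns orthogonal to the range of Delta(x)

x = quat(0.5, -0.1, 0.3, 0.2)   # sample spacetime point x4 + x1 i + x2 j + x3 k
N = N_of_x(x)
residual = np.linalg.norm(N.conj().T @ Delta_of_x(x))
```

From `N(x)` the gauge field of eq. (3) follows by differentiating `N(x)` numerically; changing the null-space basis is the gauge transformation (5).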
Under the conjugate action of unit quaternions on $`x`$, the real part of $`x`$ is fixed and the imaginary part transforms under the three-dimensional representation of SO<sub>3</sub>. This means that for a spatial rotation $`R`$ there is a quaternion $`g`$ so that $$x_4+(ijk)R\left(\begin{array}{c}x_1\\ x_2\\ x_3\end{array}\right)=gxg^{-1}.$$ (7) Of course, $`-g`$ corresponds to the same $`R`$: SU<sub>2</sub> is a double cover of SO<sub>3</sub>. As explained in , an instanton has the spatial rotation symmetry $`x\to gxg^{-1}`$ for unit quaternion $`g`$ if $$\left(\begin{array}{c}L\\ M-gxg^{-1}\mathrm{𝟏}_B\end{array}\right)=\left(\begin{array}{cc}\stackrel{~}{g}& 0\\ 0& \rho g\end{array}\right)\left(\begin{array}{c}L\\ M-x\mathrm{𝟏}_B\end{array}\right)g^{-1}\rho ^{-1}$$ (8) where $`\stackrel{~}{g}`$ is a unit quaternion and $`\rho g`$ is the product of the real orthogonal matrix $`\rho `$ and the unit quaternion $`g`$. Thus, the ADHM matrix is symmetric if the spatial rotation is equivalent to a gauge transformation. If the instanton is symmetric under some subgroup of SO<sub>3</sub> then the collection of $`\rho `$’s and of $`\stackrel{~}{g}`$’s form a real $`B`$-dimensional representation and a complex two-dimensional representation of the corresponding binary subgroup of SU<sub>2</sub>.

### 2.1 The tetrahedrally symmetric ADHM matrix

Since the 3-Skyrmion is tetrahedrally symmetric, the corresponding ADHM matrix is also tetrahedrally symmetric. The tetrahedral group $`T`$ is the twelve element subgroup of SO<sub>3</sub> which, in one orientation, is generated by a rotation of $`\pi `$ about the $`x_3`$-axis and a rotation of $`2\pi /3`$ about $`x_1=x_2=x_3`$. These generators will be called $`r`$ and $`t`$ respectively and the corresponding unit quaternions are $`g(r)=k`$ and $`g(t)=(1-i-j-k)/2`$. The group is isomorphic to the alternating group $`𝔄_4`$. The tetrahedral double group is the 24 element subgroup of SU<sub>2</sub> which double covers the tetrahedral group. 
The representation theory of the tetrahedral group is described in, for example, Hamermesh . There are one, two and three dimensional representations derived by restricting the one, two and three dimensional irreducible representations of SU<sub>2</sub> to the tetrahedral group. They are $`A=\underset{¯}{1}|_T`$, $`E^{}=\underset{¯}{2}|_T`$ and $`F=\underset{¯}{3}|_T`$ where $`\underset{¯}{n}`$ denotes the irreducible $`n`$-dimensional representation of SU<sub>2</sub>. There is, in addition, the two-dimensional representation $`E`$ and the four-dimensional representation $`G^{}`$. These representations are reducible into conjugate pairs of representations with complex characters. The ADHM matrix $$\widehat{M}_T=\left(\begin{array}{c}L_T\\ M_T\end{array}\right)=\left(\begin{array}{ccc}i& j& k\\ 0& k& j\\ k& 0& i\\ j& i& 0\end{array}\right)$$ (9) is tetrahedrally symmetric. This matrix was found by trial and error. Having written down a likely form of the matrix it is easy to check whether or not it has the required symmetries. Explicitly, the matrices giving the compensating gauge transformations are $$\rho (r)=\left(\begin{array}{ccc}-1& 0& 0\\ 0& -1& 0\\ 0& 0& 1\end{array}\right)$$ (10) with $`\stackrel{~}{g}(r)=g(r)`$ for $`r`$ and $$\rho (t)=\left(\begin{array}{ccc}0& 1& 0\\ 0& 0& 1\\ 1& 0& 0\end{array}\right)$$ (11) with $`\stackrel{~}{g}(t)=g(t)`$ for $`t`$. Thus, in this case the $`\rho `$’s form the representation $`F`$ and the $`\stackrel{~}{g}`$’s form the representation $`E^{}`$. $`\widehat{M}_T`$ is not just symmetric under $`T`$. 
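The stated symmetries can be checked directly in the $`2\times 2`$ complex-matrix representation of the quaternions (with $`(i,j,k)=-i𝝈`$). The sketch below verifies that $`\widehat{M}_T`$ of (9) satisfies the reality constraint (2), and that the generators $`r`$ and $`t`$, with the compensating matrices (10) and (11), satisfy the symmetry relation (8) at $`x=0`$ (which suffices, since the term $`gxg^{-1}\mathrm{𝟏}_B`$ transforms correctly by itself).

```python
import numpy as np

def quat(a, b, c, d):
    # a + b i + c j + d k as a 2x2 complex matrix, (i,j,k) = -i*sigma
    return np.array([[a - 1j * d, -c - 1j * b],
                     [c - 1j * b, a + 1j * d]])

one, qi, qj, qk = quat(1,0,0,0), quat(0,1,0,0), quat(0,0,1,0), quat(0,0,0,1)
zero = quat(0, 0, 0, 0)

L_T = np.block([[qi, qj, qk]])                      # the row vector L_T, 2 x 6
M_T = np.block([[zero, qk, qj],
                [qk, zero, qi],
                [qj, qi, zero]])                    # the symmetric M_T, 6 x 6
Mhat = np.vstack([L_T, M_T])                        # 8 x 6

# eq. (2): Mhat^dagger Mhat must be a real quaternionic matrix,
# i.e. every 2x2 block a real multiple of the identity
R = Mhat.conj().T @ Mhat
constraint_ok = all(
    np.allclose(R[2*a:2*a+2, 2*b:2*b+2], R[2*a, 2*b].real * np.eye(2))
    for a in range(3) for b in range(3))

def symmetric_under(g, rho):
    # relation (8) at x = 0, with gtilde = g as stated in the text:
    # L = g L g^{-1} rho^{-1}  and  M = rho g M g^{-1} rho^{-1}
    ginv = g.conj().T                               # inverse of a unit quaternion
    Lok = np.allclose(
        L_T, g @ L_T @ np.kron(np.eye(3), ginv) @ np.kron(rho.T, np.eye(2)))
    Mok = np.allclose(
        M_T, np.kron(rho, np.eye(2)) @ np.kron(np.eye(3), g) @ M_T
             @ np.kron(np.eye(3), ginv) @ np.kron(rho.T, np.eye(2)))
    return Lok and Mok

g_r, rho_r = qk, np.diag([-1.0, -1.0, 1.0])         # eq. (10)
g_t = 0.5 * (one - qi - qj - qk)
rho_t = np.array([[0., 1, 0], [0, 0, 1], [1, 0, 0]])  # eq. (11)
r_ok = symmetric_under(g_r, rho_r)
t_ok = symmetric_under(g_t, rho_t)
```

Both `constraint_ok` and the two symmetry checks come out `True`, confirming (2) and (8) for the matrix (9).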
It is also symmetric under the 24 element group $`T_d`$ which extends $`T`$ by the $`S_4`$ element $`u`$: $$u:(x_1,x_2,x_3,x_4)\to (x_2,x_1,x_3,x_4).$$ (12) In fact, $$\left(\begin{array}{c}L\\ M-u(x)\mathrm{𝟏}_B\end{array}\right)=\left(\begin{array}{cccc}1& & & \\ & \rho & & \end{array}\right)\frac{1-k}{\sqrt{2}}\left(\begin{array}{c}L\\ M-x\mathrm{𝟏}_3\end{array}\right)\frac{1+k}{\sqrt{2}}\rho ^{-1}$$ (13) where $$\rho =\left(\begin{array}{ccc}0& 1& 0\\ 1& 0& 0\\ 0& 0& 1\end{array}\right).$$ (14) The representation theory for $`T_d`$ is also described in Hamermesh . The vector representation $`F`$ of $`T`$ is replaced by a true vector $`F_2`$ and an axial vector $`F_1`$. In the same way, the trivial representation $`A`$ is replaced by a true scalar $`A_1`$ and a pseudo-scalar $`A_2`$. Although reducible as a representation of $`T`$, $`E`$ is irreducible as a representation of $`T_d`$. There are similar changes to the double group representations.

### 2.2 Uniqueness and the tetrahedrally symmetric ADHM matrix

$`\widehat{M}_T`$ is not unique: there is a two-parameter family of tetrahedral matrices given by $`x(\widehat{M}_T+y\mathrm{𝟏}_4)`$. $`y`$ can be set to zero. It corresponds to translation of the instanton in the $`x_4`$ direction and this does not change the corresponding Skyrme field. $`x`$ is a scale parameter and, when calculating the instanton-generated Skyrmion, the scale is fixed by minimization of the Skyrme energy. There can be no more than two parameters because, as explained in , it follows from (8) that $$M_T:\underset{¯}{3}|_T\to (\underset{¯}{3}\times \underset{¯}{2}\times \underset{¯}{2})|_T$$ (15) that is $$M_T:F\to 3F+E+A$$ (16) and so there is a three-parameter family of candidate $`M_T`$ matrices. 
This is exhausted by $$\left(\begin{array}{ccc}d& qk& rj\\ rk& d& qi\\ qj& ri& d\end{array}\right).$$ (17) Similarly, $$L_T:(\underset{¯}{3}\times \underset{¯}{2})|_T\to \underset{¯}{2}|_T=E^{}.$$ (18) and, since $`(\underset{¯}{3}\times \underset{¯}{2})|_T=E^{}+G^{}`$, there is a one-parameter family of $`L_T`$. The symmetry of $`M_T`$ and the ADHM constraint (2) reduce these four parameters to the two parameters $`x`$ and $`y`$ above. There is another two-parameter family of symmetric matrices corresponding to the dual tetrahedron. This is given by replacing $`M_T`$ in $`\widehat{M}_T`$ by $`-M_T`$. It is possible to translate between JNR data and ADHM matrices. This is useful here because it gives an explicit verification that the 3-Skyrmion generating instanton of lies in the one-parameter family $`x\widehat{M}_T`$. The general formula, translating JNR data into ADHM data, is given in Section 5 of . Unfortunately, these ADHM data are not in the canonical form involving a single ADHM matrix. It seems that it is difficult to write the general JNR-derived ADHM data in canonical form. However, in the particular case of interest here, a straightforward calculation shows that $`\widehat{M}_T`$ is the canonical form of ADHM data derived from tetrahedral JNR data.

## 3 Variations

Small variations around $`\widehat{M}_T`$ are now considered. 
Writing $$\widehat{M}=\widehat{M}_T+\widehat{m}$$ (19) $`\widehat{m}`$ satisfies the linearized ADHM constraint $$\text{Im}(\widehat{m}^{}\widehat{M}_T+\widehat{M}_T^{}\widehat{m})=0.$$ (20) If $$\widehat{m}=\left(\begin{array}{ccc}l_1& l_2& l_3\\ m_{11}& m_{12}& m_{13}\\ m_{12}& m_{22}& m_{23}\\ m_{13}& m_{23}& m_{33}\end{array}\right)$$ (21) the linearized equations are $`\text{Im}(\overline{l}_1j+\overline{m}_{11}k+\overline{m}_{13}i-il_2-km_{22}-jm_{23})`$ $`=`$ $`0,`$ (22) $`\text{Im}(\overline{l}_1k+\overline{m}_{11}j+\overline{m}_{12}i-il_3-km_{23}-jm_{33})`$ $`=`$ $`0,`$ $`\text{Im}(\overline{l}_2k+\overline{m}_{22}i+\overline{m}_{12}j-jl_3-km_{13}-im_{33})`$ $`=`$ $`0`$ where bar denotes quaternionic conjugation. These equations can be solved to give expressions for nine of the twelve $`l_i`$ parameters. The remaining three components correspond to the gauge transformation $$\widehat{M}_T\to \left(\begin{array}{cc}g& 0\\ 0& \mathrm{𝟏}_3\end{array}\right)\widehat{M}_T.$$ (23) This gauge freedom can be fixed by requiring, for example, that $`l_1`$ is proportional to $`i`$. In this way, $`l`$ may be completely determined by $`m`$ and by gauge fixing. In order to decompose the 24 $`m_{ij}`$ components as representations of $`T_d`$, the actions of $`r`$, $`t`$ and $`u`$ on $`\widehat{M}_T`$ are considered. Thus, for example, $$\rho (r)g(r)mg(r)^{-1}\rho (r)^{-1}=\left(\begin{array}{ccc}-km_{11}k& -km_{12}k& km_{13}k\\ -km_{12}k& -km_{22}k& km_{23}k\\ km_{13}k& km_{23}k& -km_{33}k\end{array}\right)$$ (24) and so the character of $`r`$ is zero. The characters of $`t`$ and $`u`$ can be calculated in the same way: they are also zero. This means that the decomposition of the $`m_{ij}`$ is $$A_1+A_2+2E+3F_1+3F_2.$$ (25) Not all of these multiplets correspond to Skyrmion vibrations. There remains the gauge freedom $$M_T\to \rho M_T\rho ^{-1}.$$ (26) By considering an infinitesimal $`\rho `$ and calculating the character, it is found that this variation is an $`F_1`$. 
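The decomposition (25) can be checked by standard character arithmetic: the characters of the 24-dimensional space of $`m_{ij}`$ parameters vanish on every non-trivial class of $`T_d`$ (the text computes this for $`r`$, $`t`$ and $`u`$; the remaining class behaves in the same way, as the quoted decomposition requires), so the space carries the regular representation. A short sketch:

```python
from fractions import Fraction

# character table of T_d; classes: E, 8C3, 3C2, 6S4, 6sigma_d
class_sizes = [1, 8, 3, 6, 6]
irreps = {
    'A1': [1,  1,  1,  1,  1],
    'A2': [1,  1,  1, -1, -1],
    'E':  [2, -1,  2,  0,  0],
    'F1': [3,  0, -1,  1, -1],
    'F2': [3,  0, -1, -1,  1],
}

def decompose(chi):
    # multiplicity of each irrep: n_i = (1/|G|) sum_C |C| chi(C) chi_i(C)
    order = sum(class_sizes)        # |T_d| = 24
    mult = {}
    for name, chi_i in irreps.items():
        n = Fraction(sum(s * a * b
                         for s, a, b in zip(class_sizes, chi, chi_i)), order)
        assert n.denominator == 1   # a genuine representation decomposes integrally
        mult[name] = int(n)
    return mult

# the 24 m_ij parameters: character 24 on the identity, 0 on the other classes
multiplicities = decompose([24, 0, 0, 0, 0])
```

The result is `{'A1': 1, 'A2': 1, 'E': 2, 'F1': 3, 'F2': 3}`, i.e. exactly the decomposition (25); removing the gauge $`F_1`$ and the time-translation $`A_2`$ leaves the twenty modes discussed in the text.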
The remaining 21 variations correspond to the 21 dimensions of the space of 3-instantons. However, the variation corresponding to time translation, $`A_2`$, does not affect the 3-Skyrmion and is discarded. Twenty variational modes remain: six of these correspond to zero modes. In fact, the translation and rotation zero modes of the Skyrmion correspond to an $`F_2`$ and an $`F_1`$ respectively. Thus, the instanton modes of the 3-Skyrmion decompose as $$A_1+2E+2F_1+3F_2$$ (27) under $`T_d`$ and, of these, one $`F_1`$ and one $`F_2`$ are zero modes and the rest are vibrational modes. The isospin zero modes are not included in this decomposition because they do not correspond to variations of the ADHM matrix.

## 4 Discussion

The $`A_1`$ is the breather mode corresponding to dilation. In the numerical results of Baskerville, Barnes and Turok , it appears in the middle of the vibration spectrum. In order of increasing frequency and ignoring radiation, Baskerville, Barnes and Turok find the spectrum to be $$F_2+E+A_1+F_2+E.$$ (28) The $`E+F_2`$ below the breather are the modes described in as monopole modes. They correspond to variations of the rational map parameters in the rational map ansatz of . The $`E+F_2`$ above the breather are discussed in by Baskerville and Michaels. It has been observed that, to a good approximation, there are $`2B-2`$ straight lines of zero baryon density, known as branch lines, radiating from the centre of a $`B`$-Skyrmion. In , the variations of the angular positions of the branch lines are parametrized. These are then decomposed. It is noted that if an axial vector is removed from this decomposition, the decomposition then matches the super-breather modes in the 2-Skyrmion and 3-Skyrmion spectra. In the 4-Skyrmion case the decomposition is consistent with the observed spectrum. Baskerville and Michaels interpret the axial vector which must be removed as the axial vector of rotational zero modes. 
Some of the monopole modes also change the branch line positions. Therefore, implies that the super-breather mode decomposition duplicates part of the decomposition of the monopole modes. There are, in total, $`4B+2`$ monopole modes of a $`B`$-Skyrmion. Six of them do not change the branch lines: of these, three are the isospin zero modes. Because the parameters in the rational map are complex, the monopole modes come in pairs of opposite parity. There are three monopole modes which form pairs with isospin modes in this way. These are the other three modes which do not change the positions of branch lines. The remaining $`4B-4`$ modes change the positions of the branch lines. These $`4B-4`$ modes include the three translational and three rotational zero modes along with $`4B-10`$ vibrational modes. Thus, there are $`4B-7`$ monopole modes which are not rotational zero modes and which change the positions of the branch lines. It is possible to reformulate the observation made in : the decomposition of these $`4B-7`$ modes duplicates the decomposition of the super-breather modes. Thus, an exception is made for the rotational zero modes: the translational zero modes are duplicated but the rotational zero modes are not. In fact, in the $`B=3`$ case, the instanton modes contain an $`F_1`$ duplicating the rotational zero modes. This $`F_1`$ has not been observed numerically. The reason for this may be that the $`F_1`$ mode has a rather high frequency. For $`B=3`$, the monopole modes which fix the branch line positions are an $`F_1`$ of isospin and an $`F_2`$ vibration. The monopole modes which change the branch line positions are the rotational and translational zero modes $`F_1+F_2`$ and the multiplet of vibration modes $`E`$. Thus, the $`E`$ in the super-breather part of the 3-Skyrmion spectrum duplicates the $`E`$ in the monopole part. The $`F_2`$ duplicates the translational zero modes. 
If a duplicate is also included for the rotational zero modes, then the aggregate of the breather, the monopole modes and their duplicates matches the instanton mode decomposition (27). Because Walet uses JNR ansatz instantons, not all of the instanton modes are included in the harmonic analysis of . To be precise, there is only one $`E`$, whereas the ADHM construction gives two. In the case of the rational map decomposition, it is known that the $`E`$ is spanned by tangent vectors lying along the $`S_4`$ symmetric geodesics. These are referred to in as twisted line scattering geodesics. There is a three-dimensional family of ADHM matrices, symmetric under the $`D_{2d}`$ generated by $`u`$ and $`\pi `$ rotations about the Cartesian axes. Vibrations tangent to this family lie in the $`A_1+2E`$ of the decomposition (27). The symmetric ADHM matrices are $$\widehat{M}_{D_{2d}}=\left(\begin{array}{ccc}ai& aj& bk\\ e& ck& dj\\ ck& -e& di\\ dj& di& 0\end{array}\right)$$ (29) with, from the ADHM constraint, $`ab+de-dc`$ $`=`$ $`0`$ (30) $`d^2-a^2+2ec`$ $`=`$ $`0.`$ For $`a=b=c=d=x`$ and $`e=0`$ this is $`\widehat{M}_T`$ and for $`a=c=d=e=0`$ it is axially symmetric about the $`x_3`$-axis. Translating $`D_{2d}`$ JNR data to ADHM data and rewriting it in canonical form gives the two-parameter subfamily with $`a=d`$, $`c=b`$ and $`e=0`$. However, as noted by Walet, this subfamily of $`D_{2d}`$ symmetric ADHM matrices shares the curious feature of the $`T_h`$ matrices in : it does not include well-separated instantons of equal scale. More complicated subfamilies, including well-separated instantons of equal scale, may be chosen by using arguments similar to those in . One simple example, with $`b=1`$ fixing the scale, is $$e=\frac{c(c^2-1)}{2(c^4+1)}.$$ (31) Of course, this is just one path which passes through the various features associated with twisted line scattering. 
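The constraints (30) are easy to solve along the one-parameter path (31). One possible parametrization (an illustrative choice, with $`b=1`$): take $`c`$ as the parameter, set $`e`$ from (31), and then (30) gives $`a=d(c-e)`$ and $`d^2=-2ec/(1-(c-e)^2)`$, which is positive for $`0<c<1`$ since $`e<0`$ there. Note that at the tetrahedral point $`c=1`$, $`e=0`$ the second constraint degenerates and $`d`$ is not determined this way.

```python
import math

def d2d_point(c, b=1.0):
    """A point (a, b, c, d, e) of the D_2d family (29) on the path (31):
    e = c(c^2 - 1) / (2(c^4 + 1)), then a and d from the constraints (30)."""
    e = c * (c**2 - 1.0) / (2.0 * (c**4 + 1.0))
    d2 = -2.0 * e * c / (1.0 - (c - e)**2)     # from d^2 - a^2 + 2ec = 0
    d = math.sqrt(d2)
    a = d * (c - e)                            # from ab + de - dc = 0 with b = 1
    return a, b, c, d, e

def residuals(a, b, c, d, e):
    # the two ADHM constraints (30)
    return a * b + d * e - d * c, d**2 - a**2 + 2.0 * e * c

a, b, c, d, e = d2d_point(0.5)
r1, r2 = residuals(a, b, c, d, e)
```

Both residuals vanish to machine precision; sweeping `c` through the interval $`(0,1)`$ traces a path of $`D_{2d}`$-symmetric ADHM matrices through the configurations associated with twisted line scattering.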
The infinitesimal behaviour around $`\widehat{M}_T`$ does not give the splitting of the $`2E`$ into a sub-breather $`E`$ and the super-breather $`E`$. It is not known how to make this split without calculating the holonomy and performing the full harmonic analysis, as Walet did for JNR ansatz instanton-generated 3-Skyrmions. In conclusion, the instanton modes of the 3-Skyrmion have been calculated and decomposed. The decomposition fits well with other similar decompositions. The primary question provoked by the calculations is whether it is possible to split the modes further without undertaking the harmonic analysis.

## Acknowledgments

The financial assistance of Fitzwilliam College, Cambridge is gratefully acknowledged. This work is supported, in part, by PPARC. I am grateful to Kim Baskerville for useful discussion.
# Discovery of type-I X-ray bursts from GRS 1741.9–2853

## 1 Introduction

GRS 1741.9–2853 was discovered during the first observations of the Galactic Centre region performed by the GRANAT satellite in Spring 1990. The source was detected by the low-energy (4–30 keV) imaging telescope ART-P in the March 24–April 8 observations and was tentatively associated (Mandrou (1990)) with the soft EINSTEIN source 1E 1741.7–2850 (Watson et al. (1981)). Further analysis of the same data (Sunyaev (1990); Sunyaev et al. (1991); Pavlinsky et al. (1994)) refined the source position, obtaining $`\alpha =17^\mathrm{h}41^\mathrm{m}50^\mathrm{s}`$, $`\delta =-28^{}52^{}54\mathrm{}`$ (B1950, error radius $`45\mathrm{}`$, 90% confidence). The possible association with 1E 1741.7–2850 and also with the nearby GINGA transient GS 1741.2–2859/1741.6–289 (Mitsuda et al. (1990)) was ruled out. The average 4–20 keV intensity of GRS 1741.9–2853 was $`9.6\pm 0.7`$ mCrab, corresponding to $`(1.6\pm 0.1)\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}`$ at 8.5 kpc, and the source spectrum could be fitted by a thermal bremsstrahlung with a temperature of $`8`$ keV. During the same GRANAT observations, GRS 1741.9–2853 was detected above the $`3.5\sigma `$ detection level of the soft Gamma-ray telescope SIGMA in the 40–100 keV band (Churazov et al. (1993); Vargas et al. (1997)). On the other hand, neither ART-P nor SIGMA detected the source $`4`$ months later during the Fall 1990 observation campaign, suggesting GRS 1741.9–2853 is transient in nature. A $`3\sigma `$ upper limit of 1.2 mCrab was obtained by ART-P in the 4–20 keV band (Pavlinsky et al. (1994)), thus implying a drop in the source intensity of at least a factor of 7. Moreover, GRANAT failed to detect GRS 1741.9–2853 in all the subsequent campaigns on the Galactic Centre (Spring 1991, Fall 1991, Spring 1992, e.g. Churazov et al. (1993); Pavlinsky et al. (1994); Vargas et al. (1997)). 
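The quoted luminosity is the inverse-square law applied to the measured flux. The short sketch below recovers it; the implied conversion of roughly 1.9e-11 erg cm^-2 s^-1 per mCrab in the 4-20 keV band is inferred from the quoted numbers themselves, not an independently measured value.

```python
import math

KPC_CM = 3.086e21          # centimetres per kiloparsec

def luminosity(flux_cgs, d_kpc):
    # isotropic luminosity L = 4 pi d^2 F
    d = d_kpc * KPC_CM
    return 4.0 * math.pi * d**2 * flux_cgs

# quoted values: 9.6 mCrab in 4-20 keV corresponds to 1.6e36 erg/s at 8.5 kpc;
# the implied flux and per-mCrab conversion follow from the inverse-square law
L_quoted = 1.6e36
d_kpc = 8.5
flux = L_quoted / (4.0 * math.pi * (d_kpc * KPC_CM)**2)   # ~1.9e-10 erg/cm^2/s
flux_per_mcrab = flux / 9.6                               # ~1.9e-11 erg/cm^2/s

# the Fall 1990 ART-P upper limit of 1.2 mCrab then implies an intensity drop by
drop_factor = 9.6 / 1.2     # = 8, consistent with "at least a factor of 7"
```

`luminosity(flux, 8.5)` returns the quoted value by construction; the same routine converts any of the upper limits in the text to luminosity limits at an assumed Galactic-Centre distance.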
The source was not observed in detailed mappings of the Galactic Centre region by soft X-ray instruments like EINSTEIN (0.5–4.5 keV, $`3\sigma `$ upper limit of $`0.7`$ mCrab, see Watson et al. (1981)), SPACELAB-2 (3–30 keV, $`<0.8`$ mCrab, Skinner et al. (1987)), and by ROSAT (0.8–2.5 keV, $`<0.1`$ mCrab, Predehl & Trümper (1994)), thus confirming its transient nature. More recently, no detections of GRS 1741.9–2853 were reported by RXTE-ASM in the 2–10 keV energy band since February 1996. In the next section we briefly introduce the Wide Field Cameras telescopes and report on the observations of GRS 1741.9–2853. Time resolved spectroscopy of the burst data is presented in Section 3, while the impact of our results on the knowledge of the source is discussed in Section 4. In particular, we propose GRS 1741.9–2853 as a transient low-mass X-ray binary harbouring a neutron star and we give an estimate of the source distance.

## 2 Observations

One of the main scientific objectives of the Wide Field Cameras (WFC) on board the BeppoSAX satellite is the study of the timing/spectral behavior of both transient and persistent sources of the Galactic Bulge region, X-ray binaries in particular, on time scales from seconds to years. To this end, an observation program of systematic monitoring of the Sgr A sky region is being carried out (e.g. Heise (1998)). The WFCs consist of two identical coded mask telescopes (Jager et al. (1997)) pointing in opposite directions. Each camera covers a $`40\mathrm{°}\times 40\mathrm{°}`$ field of view, the largest ever flown for an arcminute resolution X-ray imaging device. With their source location accuracy in the range $`1\mathrm{}`$–$`3\mathrm{}`$ (99% confidence), a time resolution of 0.244 ms at best, and an energy resolution of 18% at 6 keV, the WFCs are very effective in studying X-ray transient phenomena in the 2–28 keV bandpass. 
The imaging capability and the good instrument sensitivity (5–10 mCrab in $`10^4`$ s) allow accurate monitoring of complex sky regions like the Galactic Bulge. The data of the two cameras are systematically searched for bursts and flares by analyzing the time profiles of the detectors in the 2–11 keV energy range with a time resolution down to 1 s. Reconstructed sky images are generated for any statistically significant event in order to identify possible bursters. The accuracy of the reconstructed position, which naturally depends on the burst intensity, is typically better than $`5^{\prime}`$. This analysis procedure demonstrated its effectiveness throughout the WFC Galactic Bulge monitoring campaigns (e.g. Cocchi et al. 1998a ), leading to the identification of about $`700`$ X-ray bursts (156 of which from the Bursting Pulsar GRO J1744$-$28) in a total of about $`2\times 10^6`$ s net observing time. A total of 13 new X-ray bursting sources were found, enlarging the known population of bursters by about $`30\%`$ (Heise et al. (1999); Ubertini et al. (1999)). GRS 1741.9$-$2853 is in the field of view whenever the WFCs point at the Galactic Centre region, being only $`10^{\prime}`$ away from the Sgr A position. No steady emission was observed during the whole WFC monitoring campaign; typical 2–10 keV $`3\sigma `$ upper limits of about $`3`$ mCrab were derived (see Table 1). Three X-ray bursts were detected at a position consistent with that of GRS 1741.9$-$2853 in two different observations (Aug. 21.774–31.519 and Sep. 13.408–18.254) of the Fall 1996 monitoring campaign. Owing to the BeppoSAX orbit characteristics, the source covering efficiency during an observation is on average about $`53\%`$, so other bursts could have been missed. The averaged error circle obtained for the position of the bursting source is shown in Fig. 1. None of the observed bursts can be associated with other known sources. The time profiles of the three bursts are displayed in Fig. 2.
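The burst search described above can be caricatured as a threshold scan over a binned light curve. The following sketch is purely illustrative (the real WFC pipeline works on coded-mask detector data and reconstructed images; the median-background-plus-$`n\sigma `$ criterion here is our own simplification):

```python
import statistics

def find_bursts(counts, n_sigma=5.0):
    """Flag 1-s bins whose counts exceed the median background by
    n_sigma Poisson deviations -- a much simplified stand-in for the
    WFC burst-search procedure."""
    bkg = statistics.median(counts)
    threshold = bkg + n_sigma * max(bkg, 1.0) ** 0.5
    return [i for i, c in enumerate(counts) if c > threshold]

# toy light curve: flat 40 counts/s background plus a decaying burst
rate = [40.0] * 200
for i, extra in enumerate([300, 250, 200, 160, 130, 100, 80, 60, 45, 35]):
    rate[100 + i] += extra
print(find_bursts(rate))
```

A real search would also require the excess to map to a point source in the reconstructed sky image, as the text notes.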
The August 22 burst occurred in coincidence with a $`10`$ s telemetry gap, and a few seconds of data belonging to the leading part of the burst are missing, so the burst on-time cannot be determined with sufficient accuracy. The characteristics of the observed bursts are summarized in Table 1. An accurate search for bursts from GRS 1741.9$-$2853 was performed on all the available data from the 1996–1998 BeppoSAX-WFC Galactic Bulge monitoring campaigns, but no other events were found. ## 3 Data Analysis Energy-resolved timing analysis of the bursts was performed to study the spectral evolution of the observed events. Because of the above-mentioned missing data in the August 22 burst observation, only the August 24 and September 16 bursts were analyzed in this way (see Fig. 3). The time histories of the bursts are constructed by accumulating only the detector counts associated with the shadowgram obtained for the sky position of the analyzed source, thus improving the signal-to-noise ratio of the profile. The background is the sum of (part of) the diffuse X-ray background, the particle background, and the contamination from other sources in the field of view. Source contamination is the dominant background component for crowded sky fields like the Galactic Bulge; nevertheless, the probability of source confusion during a short time-scale event (10–100 s) like an X-ray burst is negligible. The burst spectra of GRS 1741.9$-$2853 are consistent with absorbed blackbody radiation with average color temperatures of about $`2`$ keV. A summary of the spectral parameters of the three bursts is given in Table 1. The value of the $`N_\mathrm{H}`$ parameter obtained for the August 24 burst is higher than those of the August 22 and September 16 bursts.
For burst 2, freezing the $`N_\mathrm{H}`$ value to the average of bursts 1 and 3 ($`10.3\times 10^{22}\,\mathrm{cm}^{-2}`$) leads to a higher value of the reduced $`\chi ^2`$ (1.30 for 27 d.o.f.), a higher blackbody temperature ($`2.64\pm 0.15`$ keV) and a lower blackbody radius ($`3.9\pm 0.4`$ km at 10 kpc). Conversely, if we assume that all three bursts had on average the same characteristics (color temperature and radius of the emitting sphere), this implies an $`N_\mathrm{H}`$ variability of a factor of $`3`$ on a 1-day time scale. Time-resolved spectra were accumulated for bursts 2 and 3 in order to study the time evolution of their spectral parameters. To better constrain the fits, the $`N_\mathrm{H}`$ parameter was kept fixed at the values obtained for the total bursts, i.e. $`36.0\times 10^{22}\,\mathrm{cm}^{-2}`$ and $`10.3\times 10^{22}\,\mathrm{cm}^{-2}`$ for burst 2 and burst 3, respectively. Blackbody fits allow us to determine the ratio of the average radius of the emitting sphere $`R_{\mathrm{km}}`$ (in units of km) to the source distance $`d_{10\mathrm{kpc}}`$ (in units of 10 kpc). In Fig. 3 and in Table 2 the time histories of the measured $`R_{\mathrm{km}}`$/$`d_{10\mathrm{kpc}}`$ ratios are shown, assuming isotropic emission and applying no correction for gravitational redshift or for the conversion from color temperature to true blackbody temperature (see Lewin, van Paradijs, & Taam (1993) for details). A radius expansion by a factor of $`2`$ is observed in the September 16 burst. ## 4 Discussion On the basis of their spectral and timing properties, we interpret the three bursts detected from GRS 1741.9$-$2853 as type-I X-ray bursts, typically associated with low-mass X-ray binary (LMXB) systems (see Lewin, van Paradijs, & Taam (1995) for a review). The blackbody emission and the measured color temperatures of about $`2`$ keV are consistent with this hypothesis. Spectral softening is observed in the time-resolved spectra of the bursts (Table 2).
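For a blackbody fit, the normalization fixes the radius-to-distance ratio through $`F_{\mathrm{bol}}=\sigma T^4(R/d)^2`$. A hedged sketch of that conversion follows; the flux and temperature below are illustrative numbers of the right order, not fitted values from Table 2, and (as in the text) no redshift or color-temperature correction is applied:

```python
import math

SIGMA_SB = 5.6704e-5     # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KEV_TO_K = 1.1605e7      # Kelvin per keV
KPC_CM = 3.086e21        # cm per kpc

def radius_over_distance(flux_bol, kT_keV):
    """R_km / d_10kpc from a blackbody fit: F = sigma * T^4 * (R/d)^2.
    flux_bol in erg cm^-2 s^-1, kT in keV."""
    T = kT_keV * KEV_TO_K
    r_over_d = math.sqrt(flux_bol / (SIGMA_SB * T ** 4))   # dimensionless R/d
    return r_over_d * 10.0 * KPC_CM / 1.0e5                # km per 10 kpc

# illustrative input: a 3e-8 erg/cm^2/s bolometric flux at kT = 2 keV
r = radius_over_distance(3e-8, 2.0)
```

For these illustrative inputs the routine returns an emitting radius of order 10 km at 10 kpc, the scale expected for a neutron-star photosphere.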
Moreover, the burst time profiles can be fitted with exponential decays whose characteristic times are energy dependent, being shorter at higher energies (see Fig. 3). Type-I bursts strongly suggest the presence of a neutron star in the binary system, which identifies GRS 1741.9$-$2853 as a transient neutron-star LMXB. The photospheric radius expansion derived from the time-resolved spectral analysis of the brightest burst (burst 3) can be interpreted as adiabatic expansion during a high-luminosity (super-Eddington) type-I burst. Indeed, the 7–28 keV time history of the September 16 burst (Fig. 3, right panel) shows a flat-topped and perhaps double-peaked profile which is typical of super-Eddington events (e.g. Lewin, van Paradijs, & Taam (1995)). Eddington-luminosity X-ray bursts can provide an estimate of the source distance. Assuming an Eddington bolometric luminosity of $`2\times 10^{38}\,\mathrm{erg}\,\mathrm{s}^{-1}`$ for a $`1.4\,\mathrm{M}_{\odot}`$ neutron star, and taking into account the observed peak flux of burst 3, which extrapolates to an unabsorbed bolometric flux of $`527\pm 42`$ mCrab ($`(3.26\pm 0.26)\times 10^{-8}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$), we obtain $`d=7.2\pm 0.6`$ kpc. If we adopt instead the average luminosity of super-Eddington bursts proposed by Lewin, van Paradijs, & Taam (1995) ($`(3.0\pm 0.6)\times 10^{38}\,\mathrm{erg}\,\mathrm{s}^{-1}`$), the distance becomes $`d=8.8\pm 1.2`$ kpc, placing GRS 1741.9$-$2853 very close to the Galactic Centre. Assuming a Crab-like spectrum, we derive an upper limit of $`1.6\times 10^{36}\,\mathrm{erg}\,\mathrm{s}^{-1}`$ on the source bolometric luminosity during the period of bursting activity (August–September 1996). We also obtain an average radius of about $`6`$ km for the blackbody emitting region during the bursts, a value supporting the neutron-star nature of the collapsed object.
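The two distance estimates, and the accretion rate quoted below for the 1990 outburst, follow from $`d=\sqrt{L_{\mathrm{Edd}}/4\pi F_{\mathrm{peak}}}`$ and $`L=\eta \dot{M}c^2`$. A minimal sketch under stated assumptions (the radiative efficiency $`\eta \approx 0.1`$ is our assumption, not a value given in the text):

```python
import math

KPC_CM = 3.086e21      # cm per kpc
C = 2.998e10           # speed of light, cm s^-1
MSUN = 1.989e33        # solar mass, g
YEAR = 3.156e7         # seconds per year

def burst_distance_kpc(L_edd, peak_flux):
    """d = sqrt(L_Edd / (4 pi F_peak)), assuming the burst peak
    reached the Eddington luminosity."""
    return math.sqrt(L_edd / (4.0 * math.pi * peak_flux)) / KPC_CM

def mdot_msun_yr(L, efficiency=0.1):
    """Accretion rate from L = eta * Mdot * c^2 (eta ~ 0.1 assumed)."""
    return L / (efficiency * C ** 2) * YEAR / MSUN

d_canonical = burst_distance_kpc(2.0e38, 3.26e-8)   # canonical 1.4 Msun Eddington
d_empirical = burst_distance_kpc(3.0e38, 3.26e-8)   # empirical super-Eddington average
mdot = mdot_msun_yr(2.0e36)                         # 1990 outburst peak luminosity
```

The two distances reproduce the $`7.2`$ and $`8.8`$ kpc values quoted above, and the accretion rate comes out at a few $`10^{-10}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}`$.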
Taking into account the intensity and the spectrum observed in the 1990 outburst (Sunyaev (1990)), we can also derive a peak bolometric luminosity of $`2\times 10^{36}\,\mathrm{erg}\,\mathrm{s}^{-1}`$, which corresponds to an accretion rate of $`3\times 10^{-10}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}`$ for a canonical $`1.4\,\mathrm{M}_{\odot}`$ neutron star. These values are common among low-luminosity LMXB transients (e.g. Tanaka & Shibazaki (1996); Chen, Shrader, & Livio (1997)). During the past two decades, bursting activity from LMXB transients has been reported in about 10 cases (e.g. the Rapid Burster, Aql X-1, Cen X-4, 0748$-$673, 1658$-$298; see Hoffman, Marshall, & Lewin (1978); Tanaka & Shibazaki (1996); Lewin et al. 1995, and references therein), indicating that these sources are neutron-star binaries. Among the LMXB transients, less than $`50\%`$ of the sources ($`30\%`$ according to Chen et al. 1997, $`45\%`$ according to Tanaka & Shibazaki (1996)) are neutron-star systems, the rest being black-hole (BH) binaries. All the BH candidates in LMXB systems are transient sources. The recent (1996–1999) BeppoSAX-WFC results include several observations of type-I X-ray bursts in transient sources (e.g. SAX J1750.8$-$2900, SAX J1806.5$-$2215, SAX J1753.5$-$2349, SAX J1808.4$-$3658, RX J170930.2$-$263927, SAX J1810.8$-$2609; see Heise et al. (1999); Ubertini et al. (1999)). Conversely, no firm LMXB BH candidate has been established. This could imply that the population of black-hole LMXBs is overestimated, since most of them are suggested as BH candidates on the basis of their spectral characteristics only. In fact, for only 7 out of about 40 known transient LMXBs do the available mass functions suggest BH systems (Chen, Shrader, & Livio (1997)). ###### Acknowledgements. We thank the staff of the BeppoSAX Science Operation Centre and Science Data Centre for their help in carrying out and processing the WFC Galactic Centre observations. The BeppoSAX satellite is a joint Italian and Dutch program.
M.C., A.B., L.N. and P.U. thank the Agenzia Spaziale Italiana (ASI) for grant support.
# Imaging Gaseous Detector based on Micro Processing Technology ## 1 Introduction The MicroStrip Gas Chamber (MSGC) was proposed in 1988 by Oed . This new detector has been expected to provide stable operation under intense irradiation as well as radiation hardness, both of which are required for an X-ray photon-counting detector operated in high-intensity radiation environments. MSGCs also have a good position resolution of a few tens of $`\mu `$m. A two-dimensional MSGC would therefore make it possible to realize an ideal X-ray imaging detector that also has photon-counting capability. A MSGC is usually produced using micro-electronics technology: sequences of alternating thin anodes and cathodes are formed with a pitch of a few hundred microns on an insulating substrate. The closeness of the electrodes provides the above features of a MSGC. While most MSGCs have so far been realized on glass or quartz substrates, we have been developing another type of MSGC, with a $`\sim `$20 $`\mu `$m thin polyimide substrate, since 1991 (Nagae et al., Tanimori et al.). Our MSGC is made using Multi-Chip Module (MCM) technology, which allows a high-density assembly of bare silicon LSI chips on a silicon or ceramic board. The very thin substrate of the MSGC enables us to control the flow of positive ions from anodes to cathodes by optimizing the potential on the back plane. Also owing to the thin substrate, a fast signal is induced on the back plane, which enables two-dimensional readout from a single MSGC (Nagae et al., Tanimori et al.). In 1993, a 5 cm square 2D-MSGC with 200 $`\mu `$m anode and back-strip pitches was made, and clear two-dimensional X-ray images were successfully obtained. The performance of this MSGC was described in detail in Tanimori et al. , in which position resolution, stability, durability, and operation at high counting rates were investigated. Based on that study, a new 2D-MSGC with a large area of 10 cm square has been under development since 1997.
In addition, we have developed a new type of readout system in which the data are synchronously managed by digital electronics, so as to handle the huge quantity of data from the MSGC for real-time image processing. Using this new system, we successfully obtained real-time movies from the MSGC and examined new X-ray crystal-analysis methods exploiting the photon-counting ability of the MSGC (Tanimori et al. ). Here, after summarizing both the MSGC imaging device and the readout system, we mainly report a new approach to overcoming the destruction of electrodes due to discharges, which is the most crucial problem for MSGCs, and the further development of the application of the MSGC imager to X-ray crystal analysis. ## 2 Structure of the MSGC Figure 1 shows the schematic structure of our two-dimensional MSGC, which is formed on a 20 $`\mu `$m thin polyimide substrate. On the polyimide layer, 10 $`\mu `$m wide anodes and 100 $`\mu `$m wide cathodes are formed alternately by photo-lithography. Between the ceramic base and the polyimide substrate there are back strips with a 200 $`\mu `$m pitch, orthogonal to the anodes, which provide the second coordinate. All electrodes are made of gold with a thickness of 1 $`\mu `$m (recently chromium has been used, as mentioned in Section 5). In order to reduce the parallax broadening of the position distributions, a drift plane is placed 3 mm above the substrate. In the 10 cm square MSGC, every 32 cathode strips are aggregated into one group at one end. There are 16 cathode groups, to which high voltages are supplied independently. The signal from the cathode groups can be used as an energy measurement of an X-ray. The edge of each cathode is coated with polyimide over a width of $`\sim `$7 $`\mu `$m to suppress discharges between anodes and cathodes (Tanimori et al.) . The back strips are connected to the preamplifiers.
A gas mixture of argon (80%) and C<sub>2</sub>H<sub>6</sub> (20%) was used at atmospheric pressure. The absorption efficiency under these conditions (3 mm gap and the above gas) is $`\sim `$8% for 8.9 keV X-rays. The resistivity of the substrate is considered to be a key factor in maintaining stable operation of a MSGC at high counting rates. A very thin organic-titanium layer is coated on the surface, by which a surface resistivity of $`10^{15}`$ $`\mathrm{\Omega }`$/square was obtained. We found an optimum operating point by adjusting both the thickness of the substrate and the potentials of the anodes and cathodes. The details of the gas amplification of the MSGC and its stability are reported in Tanimori et al. . ## 3 Read-out Electronics System The new 10 cm square MSGC was directly mounted on the 30 cm square mother board by a bonding technique. In order to handle more than a thousand signal lines within a 30 cm square area, an 8-layer structure and micro resistor arrays were adopted. Figure 2 shows the mother board and the gas vessel in which the 10 cm square MSGC is mounted. The preamplifier cards, which carry 64 fast amplifiers (MQS104, developed by LeCroy) and discriminators, are inserted vertically into the connectors on the rear side of the mother board. All discriminated signals from the anodes and the cathodes (ECL level) are fed to the position-encoding system described below. As pointed out in Tanimori et al. , the fast and narrow pulses from both anodes and back strips provide a very tight timing coincidence between anodes and back strips, within $`\sim `$10 ns. This means that the two coordinates of an incident point can be synchronously encoded with a clock cycle of a few tens of ns by requiring a coincidence between the timings of the anodes and the back strips. This procedure enables us to encode more than 10<sup>7</sup> events per second.
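The synchronous encoding can be sketched as a timestamp coincidence between anode and back-strip hits. The toy version below is ours (strip indices and the simple pairing loop are illustrative; the real PEM/CMM hardware performs this in PLD logic on every clock cycle):

```python
def encode_events(anode_hits, strip_hits, window_ns=10.0):
    """Pair anode and back-strip hits whose timestamps agree within the
    coincidence window, yielding an (x, y) strip-index pair per event."""
    events = []
    for t_a, x in anode_hits:
        for t_b, y in strip_hits:
            if abs(t_a - t_b) <= window_ns:
                events.append((x, y))
    return events

# two X-rays: anode strip 12 fires near back strip 7, strip 40 near strip 33
anodes = [(100.0, 12), (350.0, 40)]
strips = [(104.0, 7), (352.0, 33)]
print(encode_events(anodes, strips))   # -> [(12, 7), (40, 33)]
```

Because the coincidence window is much shorter than the mean time between events even at MHz rates, accidental pairings are rare, which is what allows the hardware to drop pulse heights and record positions only.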
Since almost all events generate about three hit strips on both the anodes and the back strips, a simple method of taking the hit position as the center of gravity of the hit electrodes can provide a position resolution of better than 100 $`\mu `$m. This resolution reaches the limit set by the diffusion of the drift electrons. Therefore, we need to record only the positions of the hit anodes and back strips instead of the pulse heights of those electrodes. To realize this idea for handling more than a million events per second, a synchronous encoding system has been developed, whose block diagram is shown in Fig. 3. The readout system consists of 9U VME modules of two types. One is the position-encoding module (PEM), which has 128 inputs and trees of Programmable Logic Devices (PLDs); the PEM encodes hit strips into X or Y coordinates. The other is the control and memory module (CMM), which has a large buffer memory of $`\sim `$200 Mbyte, sufficient to keep the image data for $`\sim `$10 seconds at a counting rate of 10<sup>7</sup> events/s, and which generates the synchronous clock. The new system can handle more than 3 million events per second, more than 3$`\times `$10<sup>3</sup> times the capability of the CAMAC system. This enables us to take $`\sim `$30 frames per second of images with sufficient quality. Figure 4 shows several consecutive frames of a movie, taken at 25 images per second, of a metal pendant rotating in front of the MSGC under X-ray irradiation. The details of the readout method and of the VME modules are described in Tanimori et al. and Ochi et al. , respectively. ## 4 New approach for crystal analysis In general, a two-dimensional image of a diffraction pattern is not sufficient to obtain three-dimensional information on the crystal under study. When a monochromatic X-ray beam is used, several diffraction patterns are taken while varying the angle between one axis of the crystal and the X-ray beam over an acceptable angular range (a few degrees).
The MSGC can record the arrival time of each X-ray photon with a resolution of a few tens of ns. This timing directly gives the information on the rotation angle of the crystal with a very fine angular resolution. Figure 5(a) shows the three-dimensional image obtained from the MSGC, which consists of two positional coordinates and one rotation-angle coordinate. Note that this fine angular resolution of $`\sim `$0.1 degree is obtained for each diffraction spot, which makes it possible to separate the noise spread uniformly in this space from the real spots produced by X-ray diffraction from the target crystal. Using this method, the sensitivity to a faint diffraction spot can be improved more than tenfold, as shown in Fig. 5(b). For crystals with few constituent atoms, the MSGC allows us to obtain all the information needed for crystal analysis from a measurement of only a few minutes with one continuous rotation of the crystal. Figure 6 shows a reciprocal-lattice image calculated from the data obtained by the MSGC. Thus the MSGC could dramatically improve X-ray crystallography. The most intriguing approach to X-ray crystallography using ultimate time-resolved imaging is the direct observation of dynamical changes of a crystal structure during periodic variations or reactions. An imaging device based on the photon-counting method, such as a MWPC or a MSGC, has an essential upper limit of $`\sim `$10<sup>7</sup> events/s for handling the data, which restricts the number of picture frames to less than a few hundred per second. However, a fast process down to $`\sim `$ $`\mu `$s time scales can be observed as continuous images if the process is measured periodically. Since the MSGC records the timings both of each detected X-ray and of each cycle of the periodic process, all X-rays obtained by the MSGC can be folded into one phase cycle of the process.
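Folding photon arrival times into one cycle of the periodic process can be sketched as below; this is a generic phase-folding routine under the assumption of a known period (for brevity it accumulates per-phase-bin counts rather than full images):

```python
def fold_events(timestamps, period, t0=0.0, n_bins=100):
    """Accumulate photon arrival times into the phase bins of one cycle."""
    hist = [0] * n_bins
    for t in timestamps:
        phase = ((t - t0) / period) % 1.0          # fractional phase in [0, 1)
        hist[min(int(phase * n_bins), n_bins - 1)] += 1
    return hist

# three photons, all arriving at phase 0.5 of a 1-second cycle
h = fold_events([0.5, 1.5, 2.5], period=1.0)
```

In the real application each phase bin would hold a full (x, y) image rather than a scalar count, so that a movie of the periodic process is built up cycle by cycle.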
When a process with a variation time of 100 $`\mu `$s is measured periodically by the MSGC at an event rate of a few MHz for 10 s, a hundred images with $`\sim `$1 $`\mu `$s timing bins can be obtained within one cycle of the process. Each image, made up of about 10<sup>6</sup> X-rays, gives a high-quality picture. We have already applied this method and succeeded, for the first time, in capturing the dynamical change of the crystal structure of \[Bu<sub>4</sub>N\]<sub>4</sub>\[Pt<sub>2</sub>(pop)<sub>4</sub>\] between a photo-excited state and the stable state. Details of the X-ray crystallography application and of other potentialities of the MSGC are described in Ochi . ## 5 Diagnosis of discharges: Capillary intermediate multiplier Although the technology of the MSGC seems to be established thanks to recent intensive studies worldwide, one crucial problem still prevents stable operation of a MSGC: discharges damage its electrodes. The discharge process in MSGCs has been studied in detail by Peskov, Ramsey & Fonte . Although we do not yet have a complete diagnosis of discharges in a MSGC, the tolerance can be increased by several improvements. In our experience of testing many 5 cm square MSGCs, about 5–8% of the electrodes were damaged by discharges during one year of operation. Since the damage due to discharges usually occurred during warm-up at first use, dust and poorly fabricated sections of the electrodes are likely causes of discharges. For the 10 cm square MSGC, the surface is now inspected under a microscope before operation in order to remove dust. In addition, chromium has been adopted as the electrode material in the latest MSGCs because of its higher melting point. These efforts have distinctly suppressed the occurrence of strips broken by discharges. Another solution is the insertion of an intermediate gas multiplier such as the Gas Electron Multiplier (GEM) proposed by Bouclier et al. .
An intermediate gas multiplier was first realized in the multi-step avalanche chambers using fine mesh planes that were intensively studied around 1980. Originally, an intermediate gas multiplier was used to attain a very high gain in combination with a MWPC, in order to detect single ultraviolet photons of Cherenkov light. We are now investigating a capillary plate as an intermediate gas multiplier; it consists of a bundle of fine glass capillaries of uniform length, whose ends form flat planes coated with Inconel metal. Gas multiplication in a capillary plate has already been confirmed by Sakurai et al. . In that work, high voltages were applied to the two end planes of a capillary plate with a diameter of 2 cm, and the high electric field in each capillary induced gas multiplication; the gain was reported to reach several thousand. Details are given in that reference. Figure 7 shows the side view of our system combining a 10 cm square capillary plate with the 10 cm square MSGC, where the capillary plate is set 4 mm above the MSGC. The large capillary plate used here is made by Hamamatsu Photonics; the diameter and length of its capillaries are 100 $`\mu `$m and 1 mm, respectively. The $`\sim `$1 mm thickness of the capillary plate, more than ten times that of a GEM, together with the very high surface resistivity of the capillaries, leads one to expect unstable gas multiplication under intense irradiation owing to space-charge effects inside the capillaries. Indeed, non-uniformity and instability of the gain were observed in every measurement with this 10 cm square capillary plate, even under a relatively low irradiation of $`\sim `$100 Hz/mm<sup>2</sup>. Under a medium irradiation, the brightest parts of an image obtained with this system were observed to fade quickly. Because of this instability, we could not evaluate the performance quantitatively.
In order to absorb the ions inside the capillaries, a slight conductivity was added to the surface of the capillaries, yielding a resistance of 40 M$`\mathrm{\Omega }`$ between the two faces of the 10 cm square capillary plate. This conductive capillary plate has dramatically improved the performance of the system. Figure 8 shows the energy spectrum of Cu characteristic X-rays obtained with this improved system, in which the peak produced by the MSGC alone and that produced by the combined MSGC and capillary plate are clearly distinguished. From this figure the gain of the capillary plate itself can be estimated, and the rate capability was measured. As shown in Fig. 9, the capillary plate was observed to operate stably at rates above 10<sup>5</sup> Hz/mm<sup>2</sup>. The gain of the conductive capillary plate reached more than 3000, and no gain non-uniformity larger than 10% was observed. Figures 10(a) and (b) compare the imaging performance of this system and of an Imaging Plate using the powder diffraction of sugar for the same exposure times; the very good performance of this system is clearly apparent. Here the system was operated at a total gain of 1000 (Cu characteristic X-rays were used), the gain of the MSGC itself being only a few tens. By adopting this intermediate multiplier, the total gain was increased about ten times, and the operating voltage of the MSGC between anodes and cathodes could be reduced by $`\sim `$100 V. This condition ensures stable operation, free from both discharges and electrical noise, even under intense irradiation. Details of the study of the conductive capillary plate will be described in Nishi et al. . ## 6 Summary We have developed a two-dimensional MSGC and a fast readout system, both of which are essential developments towards a new time-resolved X-ray imaging detector.
In addition, a new type of intermediate electron multiplier has been proposed: a capillary plate whose capillaries have a conductive surface was made and set above the MSGC. This combined system was found to operate very stably with ample gain, which has never been attained by a simple MSGC. Furthermore, the voltages supplied to the anodes and cathodes of the MSGC can be kept within a range quite safe against discharges, freeing us from the risk of the destruction of electrodes by discharges. Such desirable operation of the MSGC provides an ideal image of very high quality: good position resolution, no distortion, a very wide dynamic range, and stable, flat uniformity of the efficiency. New applications of this device to X-ray imaging analyses can therefore now be discussed realistically, as described in the previous section. We also stress that this detector system is a complete electronic system controlled by computer; by this we mean not only that the data are electrically transferred to computers, but also that all the components of the system are built with IC technology. The MSGC itself is made using high-density printed-board technology for the direct mounting of bare LSIs. All electrical elements of the MSGC readout system are made of commercially available LSI chips. In this system, single-function IC chips such as fast amplifiers, comparators, and ECL-TTL converters are the main elements, whereas the PLDs, the core parts of the readout system, occupy less than 10% of the system. We have therefore begun to redesign the readout system using commercially manufactured IC chips containing 32-channel amplifiers or discriminators in one chip, to be mounted directly on the MSGC mother board. The PLDs for the position encoding will also be placed in the MSGC box, and no ECL-TTL converter chips will be needed.
This improvement will realize a handy MSGC imager, similar in form to a liquid-crystal display, in the very near future. We gratefully acknowledge the kind support and fruitful discussions of Prof. Y. Ohashi, Dr. H. Uekusa and their colleagues of the Department of Chemistry, Tokyo Institute of Technology. T. Tanimori, Yuji Nishi, and A. Ochi would like to thank Dr. T. Ueki, Dr. M. Suzuki, Dr. T. Fujisawa, Dr. Toyokawa and the members of the Biological Physics group of The Institute of Physical and Chemical Research (RIKEN) and the Japan Synchrotron Radiation Research Institute (JASRI) for their continuous support and encouragement. This work is supported by CREST: Japan Science and Technology Corporation (JST), and partially by JASRI. Figure Captions Figure 1. Schematic structure of the two-dimensional MSGC, formed on a 17 $`\mu `$m thin polyimide substrate. On the polyimide layer, 7 $`\mu `$m wide anodes and 63 $`\mu `$m wide cathodes were formed with a 200 $`\mu `$m pitch (the width of the cathodes was changed to 100 $`\mu `$m in the imaging measurements, as mentioned in Section 4). Between the ceramic base and the polyimide substrate, back strips are set with a 200 $`\mu `$m pitch, orthogonal to the anodes. All electrodes are made of gold with a thickness of 1 $`\mu `$m (recently chromium has been used, as mentioned in Section 5). To define the drift field, the drift plane was placed 10 mm above the substrate. Figure 2. Top view of the new 10 cm square MSGC mother board. The MSGC is mounted in the gas vessel seen at the lower right of the mother board. Figure 3. Block diagram of the new synchronous readout system. Figure 4. Several consecutive frames of the movie of a metal pendant rotating in front of the MSGC under X-ray irradiation, taken at 25 images per second. Figure 5.
(a) Three-dimensional image (X, Y and rotation-angle coordinates) of the X-ray diffraction spots of the Phenothiazine–Benzilic acid complex, in which the sample crystal, placed 10 cm in front of the MSGC, was rotated continuously about the axis normal to the incident monochromatic X-ray beam. (b) The same image after the noise reduction described in the text. Figure 6. Reciprocal-lattice image obtained by the MSGC. Figure 7. Side view of the 10 cm square capillary plate and the 10 cm square MSGC. Figure 8. Energy spectrum of Cu characteristic X-rays obtained by the 10 cm square capillary plate and the 10 cm square MSGC, in which the peak produced by the MSGC alone and that produced by the combined MSGC and capillary plate are clearly distinguished. Figure 9. Rate capability of the conductive capillary plate. Figure 10. Comparison of the imaging performance of the MSGC + conductive capillary plate (a) and an Imaging Plate (b), using the powder diffraction of sugar for the same exposure times.