# The Be/X-ray Transient V0332+53: Evidence for a tilt between the orbit and the equatorial plane?

## 1 Introduction

The hard X-ray transient V0332+53 (X 0331+53) was first detected by the Vela 5B satellite during a bright outburst in 1973 (Terrell & Priedhorsky 1984). It was rediscovered ten years later by Tenma during a series of smaller outbursts (Tanaka et al. 1983). EXOSAT observations were used to determine that the source pulsates with a period of 4.4 s (Stella et al. 1985). Doppler shifts in pulse arrival times indicate that the pulsar is in a 34.25-d binary orbit with an eccentricity $`e=0.31`$ (Stella et al. 1985). The optical counterpart was identified by Honeycutt & Schlegel (1985) as the heavily-reddened early-type star BQ Cam. This object was observed to display highly variable H$`\alpha `$ emission (Corbet, Charles & van der Klis 1986, and references therein) and infrared excess (Coe et al. 1987). These characteristics are typical of a Be/X-ray binary. In this subclass of Massive X-ray Binaries, the X-ray emission is believed to be due to accretion of matter from a Be star by a compact companion (see White, Nagase & Parmar 1995; Negueruela 1998). The name “Be star” is used as a general term describing an early-type non-supergiant star which at some time has shown emission in the Balmer series lines (see Slettebak 1988 for a review). Both the emission lines and the characteristic strong infrared excess when compared to normal stars of the same spectral types are attributed to the presence of a circumstellar disc. Most Be/X-ray binaries have relatively eccentric orbits, and the neutron star companion normally remains far away from the disc surrounding the Be star. Owing to their different geometries and the varying physical conditions in the circumstellar disc, Be/X-ray binaries can present very different states of X-ray activity (Stella, White & Rosner 1986). In quiescence, they display persistent low-luminosity ($`L_\mathrm{x}\lesssim 10^{36}`$ erg s<sup>-1</sup>) X-ray emission or no detectable emission at all. Occasionally, they show series of periodic (Type I) X-ray outbursts ($`L_\mathrm{x}\approx 10^{36}`$ – $`10^{37}`$ erg s<sup>-1</sup>) at the time of periastron passage of the neutron star (e.g., A 0535+26; Motch et al. 1991). More rarely, they undergo giant (Type II) X-ray outbursts ($`L_\mathrm{x}\gtrsim 10^{37}`$ erg s<sup>-1</sup>), which do not show clear orbital modulation. Some systems only display persistent emission, but most of them show outbursts and are termed Be/X-ray transients. Like most other Be/X-ray transients, V0332+53 has shown both types of outburst. The 1973 outburst lasted $`\sim 100`$ days and peaked at $`\sim 1.6`$ Crab near July 10. It was clearly a Type II outburst, even though Whitlock (1989) found an underlying orbital modulation when the main trend was removed. On the other hand, the three weak outbursts observed in 1983, separated by the orbital period, were Type I outbursts. During these outbursts, the pulsed fraction was small (10 – 15%) and the temporal behaviour was dominated by random rapid fluctuations (Makishima et al. 1990a). The spectrum was fitted with a power law modified by cyclotron absorption. Unger et al. (1992) found that the pulse profile varied between a double-peaked and a single-peaked structure. The equivalent hydrogen column density remained relatively constant at $`1\times 10^{22}`$ atom cm<sup>-2</sup>.
A prominent absorption feature at 28.5 keV, if attributed to electron cyclotron resonance, implies a magnetic field at the surface of the neutron star of $`2.5\times 10^{12}`$ G (Makishima et al. 1990b). A new outburst was discovered by $`Ginga`$ in September – October 1989. The source remained very bright for more than two weeks, indicating that this was a Type II outburst. A quasi-periodic oscillation, possibly implying the presence of an accretion disc, was detected (Takeshima et al. 1994). V0332+53 has not been detected by the BATSE experiment since the CGRO satellite started operations in April 1991 (Bildsten et al. 1997). Nor is it detected with any significance by the All-Sky Monitor (ASM) on board RXTE, according to the quick-look results provided by the RXTE/ASM team. Stella et al. (1986) interpret the lack of quiescent emission as an effect of centrifugal inhibition of accretion.

## 2 Observations

We present data obtained as part of the Southampton/Valencia/SAAO long-term monitoring campaign of Be/X-ray binaries (see Reig et al. 1997a), consisting of optical spectroscopy and infrared and optical broad-band photometry of BQ Cam, the optical counterpart to V0332+53.

### 2.1 Blue optical spectroscopy

The source was observed on August 14, 1997, using the 4.2-m William Herschel Telescope (WHT), located at the Observatorio del Roque de los Muchachos, La Palma, Spain. The blue arm was equipped with the Loral1 CCD and the R1200R grating, which gives a nominal dispersion of $`\sim 0.25`$ Å/pixel. A second observation was taken on November 14, 1997. On this occasion the blue arm was equipped with the R1200B grating and the EEV#10 CCD, giving a nominal dispersion of $`\sim 0.22`$ Å/pixel over $`\sim 900`$ Å. A composite spectrum is shown in Fig. 1. The signal-to-noise ratio (SNR) of the August spectrum is relatively low. The Loral camera introduces several artifacts in the range $`\lambda `$ 4150 – 4250 Å, where the spectrum could well be dominated by noise. A very strong spurious feature was present at the wavelength where He ii $`\lambda `$ 4200 Å should be found. Between $`\lambda `$ 4320 Å and $`\lambda `$ 4500 Å, where the two spectra overlap, all the features look very similar, and only the higher-quality November spectrum is shown.

### 2.2 Red optical spectroscopy

We have monitored the source since 1990, using the 2.5-m Isaac Newton Telescope (INT) and the 4.2-m WHT, both located at the Observatorio del Roque de los Muchachos, and the 1.5-m telescope at Palomar Mountain (PAL). A log of the observations, together with some measured parameters, is presented in Table 1. The Palomar spectra have relatively low SNR, and the error in EW, arising from the difficulty of determining the continuum, is $`\sim 15\%`$. The line shapes are also difficult to establish. In contrast, the INT and WHT spectra all have relatively high resolution, and the errors in the EW of H$`\alpha `$ are $`\sim 5\%`$. The double-peaked structure of H$`\alpha `$ is only clearly visible in the spectra with dispersions of 0.4 Å/pixel or better. All the data have been reduced using the Starlink software packages ccdpack (Draper 1998) and figaro (Shortridge et al. 1997) and analysed using figaro and dipso (Howarth et al. 1997). H$`\alpha `$ spectra normally show He i $`\lambda `$6678 Å as a very weak emission feature when the SNR is sufficiently large for it to be separated from the noise. An analysis of H$`\alpha `$ variability in BQ Cam during 1990 – 1991 has been presented in Negueruela et al. (1998).
Many of the spectra whose parameters are listed in Table 1 are shown in their Fig. 8 and are therefore not reproduced here. Negueruela et al. (1998) found that during 1990 – 1991 the H$`\alpha `$ line presented V/R variability with a quasi-period of $`\sim 1`$ year, but this variability stopped late in 1991.

### 2.3 Optical Photometry

Optical photometry of the source was obtained on November 11, 1997, using the 1-m Jacobus Kapteyn Telescope (JKT) at the Observatorio del Roque de los Muchachos, La Palma, Spain. The telescope was equipped with the Tek4 CCD and the Harris filter set. Conditions were photometric. Instrumental magnitudes were extracted through synthetic aperture routines contained in the iraf package, and transformed to the Johnson/Cousins system through calibrations derived from observations of a number of Landolt (1992) standard stars taken on the same night. The values measured are $`U=17.74\pm 0.10`$, $`B=17.29\pm 0.02`$, $`V=15.73\pm 0.02`$, $`R=14.69\pm 0.02`$ and $`I=13.27\pm 0.08`$. Errors in the $`U`$ and $`I`$ bands are dominated by calibration uncertainties (mostly in the colour correction equations). The smaller errors in $`B,V`$ and $`R`$ are dominated by measurement errors.

### 2.4 Infrared Photometry

Infrared observations of BQ Cam are listed in Table 2. They were obtained with the Continuously Variable Filter (CVF) on the 1.5-m Carlos Sánchez Telescope (TCS) at the Teide Observatory, Tenerife, Spain, and with the UKT9 detector at the 3.9-m UK Infrared Telescope (UKIRT) at the Mauna Kea Observatory in Hawaii. The December 1994 observation was taken with IRCAM mounted on UKIRT, with a 540-s total exposure in each filter. Data for the period 1983 – 1986 have already been reported in Coe et al. (1987). The observations presented here extend the coverage for a further ten years. The long-term lightcurve is shown in Fig. 2.

## 3 Results

### 3.1 Spectral classification

The spectrum of BQ Cam in the classification region is displayed in Fig. 1. We can see that H$`\beta `$ is in emission, with a clear double-peaked structure (see also Fig. 3) reaching almost to the continuum level. The He i lines at $`\lambda \lambda `$ 4713 & 5016 Å also show weak double-peaked emission, just above the continuum level. In contrast, H$`\gamma `$ only shows two very weak emission components on the wings of the absorption line. The very strong diffuse interstellar lines are consistent with the high reddening of the object. The strength of the He ii lines clearly identifies the object as an O-type star. This is confirmed by the strong Si iv lines. Overall the spectrum is very similar to that of LS 437, the optical counterpart to 3A 0726–260 (Negueruela et al. 1996). In Fig. 1, the spectrum of BQ Cam is compared to those of two MK standards from the digital atlas of Walborn & Fitzpatrick (1990), $`\iota `$ Orionis (O9III) and HD 48279 (O8V). Also shown is a high-SNR spectrum of the Be star HD 333452 (O9IIe) from Steele, Negueruela & Clark (1999). Points to be noted are:

* The H i and He i lines are weaker in BQ Cam than in any standard, due to the presence of circumstellar emission.

* BQ Cam must be later than O7, since He i $`\lambda `$4471 Å $`\gg `$ He ii $`\lambda `$4541 Å.

* The presence of He i $`\lambda \lambda `$ 4026, 4144 Å, the strength of the Si iv doublet, and the presence of C iii $`\lambda `$4072 Å and He i $`\lambda `$4388 Å all point to BQ Cam being no earlier than O8.
* The main luminosity criterion, namely the strength of the Si iv $`\lambda \lambda `$ 4089, 4116 Å doublet, is difficult to judge in BQ Cam, because He i $`\lambda `$4121 Å is not present (presumably filled in by emission) and the quality of the spectrum is low at that end.

* The Mg ii $`\lambda `$4481 Å line is visible at this resolution in both evolved O9 stars, but it is absent in BQ Cam, indicating a lower luminosity class. Similarly, N iii $`\lambda \lambda `$ 4511 – 4515 Å is much weaker in BQ Cam than in the giants. This line is also stronger in HD 48279, but Walborn & Fitzpatrick (1990) warn that this object is N-enhanced.

* The N iii lines are only seen in absorption in the O8 – O9 range in main-sequence stars (Walborn & Fitzpatrick 1990). N iii $`\lambda \lambda `$ 4379, 4642 Å and possibly N iii $`\lambda `$4511 Å are visible in the spectrum of BQ Cam.

* The equivalent width (EW) of He ii $`\lambda `$4686 Å in BQ Cam (0.6 Å) is inside the range typical of O8 – O9 stars (Conti & Alschuler 1971).

* The O ii $`\lambda `$4367 Å line, which can be seen in the spectra of the two O9 stars, is also visible in the spectra of the O9V standards 10 Lac and HD 93028 at this resolution, hinting that BQ Cam could be earlier.

* The complete absence of C iii $`\lambda `$4650 Å in BQ Cam is surprising. In main-sequence stars, it is clearly visible as early as O8 (Walborn & Fitzpatrick 1990). This line is also absent in LS 437. There is no reason to think that BQ Cam is carbon deficient, since C iii $`\lambda `$4072 Å is present and there is no sign of N-enhancement. One possible explanation is that the inner regions of the circumstellar discs of Oe stars are hot enough to produce C iii emission, which would be filling in the $`\lambda `$4650 Å line.

From all the above, it is clear that BQ Cam is an unevolved star in the O8 – 9 range. An O8.5V classification is very likely, but, since the presence of emission affects the main classification criteria, we prefer to be cautious and assign a spectral type O8-9Ve.

### 3.2 Distance

In order to obtain an estimate of the distance to the system, it is necessary to determine the interstellar absorption in its direction. This is complicated because Be stars present additional circumstellar reddening, caused by the continuum emission from the disc. The calculation by Kodaira et al. (1985) of $`A_V=7.4`$ is a gross overestimate, since they took the infrared magnitudes measured in 1983 to be those intrinsic to the star, while our data show that the disc emission accounted for at least $`\sim 0.8`$ mag in $`J`$. Our $`UBVRI`$ photometry was taken at a time when the circumstellar disc was certainly small (see Sections 3.3 and 4). The small EW of H$`\alpha `$ at the time of the observations (see Table 3) should be accompanied by a small circumstellar reddening (see Dachs, Engels & Kiehling 1988). Since the intrinsic colour of a late O-type star is $`(B-V)_0=-0.31`$ for luminosity classes III – V (Schmidt-Kaler 1982), the measured $`(B-V)=1.56\pm 0.03`$ implies $`E(B-V)=1.87`$. This value is almost identical to the $`E(B-V)=1.88\pm 0.1`$ deduced from the strength of different interstellar bands by Corbet et al. (1986), though the photometric determination is more reliable. It is also compatible within the errors with the value of $`N_\mathrm{H}`$ derived by Unger et al. (1992) from $`EXOSAT`$ X-ray data taken during the 1983 Type I outbursts, which implies $`E(B-V)\approx 1.7`$ (Bohlin, Savage & Drake 1978).
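The reddening and distance arithmetic used here, and in the next paragraph, is easily checked; the following is a minimal sketch (not from the paper), with all numerical inputs taken from the text:

```python
# Measured photometry and adopted intrinsic values (from the text)
V = 15.73        # apparent V magnitude (Section 2.3)
BV_obs = 1.56    # measured (B-V)
BV_0 = -0.31     # intrinsic (B-V), late O stars, classes III-V (Schmidt-Kaler 1982)
M_V = -4.5       # absolute magnitude of a normal O8.5V star (Vacca et al. 1996)

EBV = BV_obs - BV_0                  # colour excess E(B-V) = 1.87

for R in (3.1, 3.3):                 # standard law and Schmidt-Kaler's R for O stars
    A_V = R * EBV                    # total V-band extinction
    mu = V - A_V - M_V               # extinction-corrected distance modulus
    d_kpc = 10 ** (mu / 5.0 + 1.0) / 1000.0
    print(f"R = {R}: A_V = {A_V:.2f}, d = {d_kpc:.1f} kpc")
```

This returns $`d\approx 7.7`$ and $`6.5`$ kpc; the 7.6 and 6.3 kpc quoted in the next paragraph differ only through rounding of $`E(B-V)`$ and $`R`$.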
Since a very small amount of circumstellar reddening could be present, the derived distance should be taken as a lower limit. Zorec & Briot (1991) and Fabregat & Torrejón (1998) have noted that Be stars are on average 0.3 mag brighter than main-sequence objects (due to the added luminosity of the circumstellar disc). Though the disc surrounding BQ Cam must be small, some contribution to the absolute luminosity should be expected. However, we will use the absolute magnitude of a normal O8.5V star, $`M_V=-4.5`$ (Vacca, Garmany & Shull 1996), once again taking into account that the distance calculated will be a lower limit. Using the standard reddening law with $`R=3.1`$, we derive $`d=7.6`$ kpc. However, using Schmidt-Kaler's (1982) expression for the reddening to O stars, we find $`R=3.3`$, which gives a distance of $`d=6.3`$ kpc. Given the uncertainty in $`R`$ and taking into account the above considerations, we will accept the distance $`d=6`$ kpc as a lower limit (unless the reddening in that direction is exceptionally strong). An estimate of the different factors mentioned above would indicate a range $`6<d<9`$ kpc for the distance to BQ Cam. We note that for an O9III star, $`M_V=-5.5`$ (Vacca et al. 1996), and the implied distance is $`d\approx 10`$ kpc, which would place the object well outside the galactic disc. This is taken as confirmation of the main-sequence classification for BQ Cam.

### 3.3 System parameters

The very low mass function of V0332+53, $`f(M)=0.10\pm 0.03`$ (Stella et al. 1985), indicates that the orbit of the neutron star is seen at a very small inclination angle. Assuming a lower limit for the mass of an O-type companion, $`M_*\approx 20M_{\odot }`$, and the standard mass for a neutron star, $`M_\mathrm{x}=1.44M_{\odot }`$, an inclination angle $`i\approx 10.3^{\circ }\pm 0.9^{\circ }`$ is obtained. Waters et al. (1989) have argued that the orbital plane of Be/X-ray binaries with close orbits should not be very inclined with respect to the equatorial plane, in which the circumstellar disc is supposed to form. We have used our high-resolution spectra from November 14, 1997, to measure the parameters of several emission and absorption lines, which are listed in Table 3, in order to estimate the rotational velocity of BQ Cam. This is particularly difficult given the presence of a circumstellar component. We have selected the two strongest He lines in the blue. He ii $`\lambda `$4686 Å is not likely to be contaminated by any emission component from the disc, but it can be affected by non-LTE effects. On the other hand, He i $`\lambda `$4471 Å is likely to be affected by circumstellar emission, which will reduce its FWHM. Therefore any estimate of $`v\mathrm{sin}i`$ based on this line should be taken as a lower limit. From Buscombe's (1969) approximation, the measured FWHMs corrected for instrumental broadening imply $`v\mathrm{sin}i=160\mathrm{km}\mathrm{s}^{-1}`$ from the He ii line and $`v\mathrm{sin}i=130\mathrm{km}\mathrm{s}^{-1}`$ from the He i line. Similarly, using the correlation between the FWHM of He i $`\lambda `$4471 Å and $`v\mathrm{sin}i`$ from Slettebak et al. (1975), we obtain $`v\mathrm{sin}i=135\mathrm{km}\mathrm{s}^{-1}`$. The two values derived from the He i $`\lambda `$4471 Å line are very similar and set a lower limit for $`v\mathrm{sin}i`$. A very different way of estimating the rotational velocity is by using the mean relation between the FWHM of the H$`\alpha `$ emission line and $`v\mathrm{sin}i`$ for Be stars from Hanuschik, Kozok & Kaiser (1988).
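The inclination quoted above follows from inverting the pulse-timing mass function, $`f(M)=(M_*\mathrm{sin}i)^3/(M_*+M_\mathrm{x})^2`$; a minimal sketch of that inversion (illustrative only, inputs from the text):

```python
from math import asin, degrees

f_M   = 0.10    # mass function in solar masses (Stella et al. 1985)
M_x   = 1.44    # adopted neutron-star mass, in solar masses
M_opt = 20.0    # assumed lower-limit mass of the O-type companion

# f(M) = (M_opt sin i)^3 / (M_opt + M_x)^2  inverts directly for sin i
sin_i = (f_M * (M_opt + M_x) ** 2) ** (1.0 / 3.0) / M_opt
print(f"i = {degrees(asin(sin_i)):.1f} deg")   # ~10.3 deg, as quoted
```

Since $`f(M)\propto \mathrm{sin}^3i`$, the quoted $`\pm 0.03`$ error in the mass function maps into roughly the $`\pm 0.9^{\circ }`$ uncertainty given for $`i`$.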
Since the FWHM of H$`\alpha `$ in BQ Cam had not changed significantly in 6 years, we deduce that the disc is dynamically stable and that the correlation can be trusted to a relatively high degree. We note that the scatter in the correlation is due to the inclusion of measurements for stars with dynamically unstable discs. Using $$\mathrm{log}\frac{\mathrm{FWHM}}{2(v\mathrm{sin}i)}=-0.2\mathrm{log}W_\alpha +0.11$$ we obtain $`v\mathrm{sin}i=140\mathrm{km}\mathrm{s}^{-1}`$. Similarly, using the mean relation between the peak separation of H$`\alpha `$ and $`v\mathrm{sin}i`$ for Be stars (Hanuschik et al. 1988), $$\mathrm{log}\frac{\mathrm{\Delta }v_{\mathrm{peak}}}{2(v\mathrm{sin}i)}=-0.4\mathrm{log}W_\alpha -0.1$$ we obtain $`v\mathrm{sin}i=170\mathrm{km}\mathrm{s}^{-1}`$. All the above estimates provide similar values. The estimates based on He i $`\lambda `$4471 Å yield a lower limit of $`v\mathrm{sin}i\gtrsim 130\mathrm{km}\mathrm{s}^{-1}`$. Averaging the four estimates gives a value of $`\sim 150\mathrm{km}\mathrm{s}^{-1}`$. The errors associated with this estimate are formally large. However, the shape and FWHM of H$`\alpha `$ are not compatible with a very low $`v\mathrm{sin}i`$ ($`\lesssim 100\mathrm{km}\mathrm{s}^{-1}`$), while a value approaching $`v\mathrm{sin}i\approx 200\mathrm{km}\mathrm{s}^{-1}`$ does not seem compatible with the comparatively small width (both at the base and at half-maximum) of H$`\alpha `$ when a large population of Be stars is considered (see, for example, Hanuschik et al. 1996). Since all Be stars are believed to be fast rotators, this value of $`v\mathrm{sin}i`$ confirms that the star is seen at a small inclination angle. We note, however, that in order to show $`v\mathrm{sin}i\approx 150\mathrm{km}\mathrm{s}^{-1}`$ with an inclination angle $`i=10^{\circ }`$, the rotational velocity of the star would have to be $`v\approx 900`$ km s<sup>-1</sup>, well above the break-up velocity of a late O-type star ($`\sim 600`$ km s<sup>-1</sup>). Assuming an upper limit for the rotational velocity of $`v=0.8v_{\mathrm{break}}\approx 480`$ km s<sup>-1</sup> still gives $`i\approx 19^{\circ }`$. An orbital inclination of $`i=19^{\circ }`$ would imply an enormously undermassive primary with $`M_*\approx 5M_{\odot }`$ (the errors in the orbital parameters allow up to $`M_*=7M_{\odot }`$ within 2-$`\sigma `$). Even if we assume that the O star is rotating at break-up velocity, $`i=15^{\circ }`$ implies $`M_*=8M_{\odot }`$, still undermassive by a factor $`>2`$. We note the uncertainty in our estimate of $`v\mathrm{sin}i`$, but the constraints $`v<600`$ km s<sup>-1</sup> and $`i\approx 10^{\circ }`$ can only be compatible with $`v\mathrm{sin}i\lesssim 100`$ km s<sup>-1</sup>. This is not only far away from our estimate of $`v\mathrm{sin}i`$, but also very difficult to reconcile with the FWHM and shape of H$`\alpha `$ (see Hummel 1994; Hanuschik et al. 1996). On the other hand, we have no strong reason to expect a very undermassive optical star. The calculations by Vanbeveren & De Loore (1994) show that, under certain circumstances, mass transfer in massive binaries can lead to the formation of overluminous post-main-sequence stars with compact companions (e.g., Vela X-1). It is not clear how noticeable this effect will be as long as the star which has received mass remains on the main sequence, nor whether it will affect $`T_{\mathrm{eff}}`$ (and therefore the spectral class).
Gies et al. (1998) have found evidence suggesting that the optical component of the Be + sdO binary $`\varphi `$ Per is moderately undermassive, but no evidence exists for any main-sequence component of an X-ray binary being undermassive by a factor $`>2`$. As a consequence, we believe that the discrepancy between the two values for $`\mathrm{sin}i`$ strongly suggests that the orbital plane is not exactly aligned with the equatorial plane of the Be star, even though the difference may be small ($`\sim 10^{\circ }`$). Hummel (1994) has shown that, for inclination angles $`i\lesssim 30^{\circ }`$, the profile of emission lines from Be stars is dominated by the flank inflections generated by non-coherent scattering, giving rise to what is known as the wine-bottle shape. Wine-bottle shapes are found for Be stars with $`v\mathrm{sin}i\lesssim 250`$ km s<sup>-1</sup>, and Hanuschik et al. (1996) estimate that flank inflections are visible for inclinations up to $`i\approx 60^{\circ }`$. However, the 1997 H$`\alpha `$ profiles of BQ Cam show no sign of flank inflections (see Fig. 3). Since it is not reasonable to suppose that $`i>60^{\circ }`$ for BQ Cam, we interpret the absence of flank inflections as proof that the envelope of BQ Cam is small and that the optical depth in the vertical direction is not large enough to produce the wine-bottle profile typical of non-coherent scattering. Therefore the peak separation of the emission lines in November 1997 will reflect the actual extent of the disc.

### 3.4 Disc evolution

Iye & Kodaira (1985) and Corbet et al. (1986) describe radial-velocity changes in the H$`\alpha `$ emission line and investigate their possible connection with the orbital period. Our H$`\alpha `$ spectroscopy shows that these changes were also present during 1990 – 1991, but Negueruela et al. (1998) have shown that these velocity variations can be explained by quasi-cyclic V/R variability with a quasi-period of $`\sim 1`$ year. Similar cyclic variability is seen in many other Be/X-ray binaries (Negueruela et al. 1998). It is noteworthy that the system was displaying V/R variability in 1983 – 1984, when it showed a short span of X-ray activity, and again in 1990, immediately after the 1989 Type II outburst. The possibility of a causal connection between V/R variability and X-ray activity has been discussed in Negueruela et al. (1998). The infrared lightcurves of BQ Cam (see Fig. 2) show a general fading trend, interrupted only by a brief brightening in 1988 – 1989. The brightest magnitudes observed are those from late 1983, when the source was active in X-rays, coinciding with the highest EWs of H$`\alpha `$ reported ($`\sim 8`$ and $`\sim 10`$ Å; Kodaira et al. 1985; Stocke et al. 1985). The decline in the strength of H$`\alpha `$ (Iye & Kodaira 1985) was accompanied by the fading of the infrared magnitudes. Corbet et al. (1986) and Coe et al. (1987) interpret the decline as being due to the dispersion of the circumstellar disc of the Be star. The infrared magnitudes remained stable during 1985 – 1987, but brightened again in 1988, reaching values similar to those of 1983. After the Type II outburst in late 1989, the infrared magnitudes faded to a deeper minimum, where they have remained until 1995, though showing considerable short-term variability. There does not seem to be any corresponding systematic change in the H$`\alpha `$ EW (see Table 1). The variability in H$`\alpha `$ during 1990 – 1991 can be associated with the V/R cycle seen at the time (Negueruela et al. 1998).
Given the similarity between the relatively high-resolution spectra of 28 August 1991, 14 December 1991, 14 August 1997 and 14 November 1997, it seems unlikely that any significant V/R variability has been present after 1991. It is noteworthy that, while the infrared magnitudes have experienced large fluctuations ($`\sim 0.8`$ mag in $`J`$ and $`\sim 1.2`$ in $`K`$), the associated colours have remained much more stable (with $`(J-K)\approx 0.8`$ – $`1.2`$). There is no clear correlation between the brightness and the colours. If, for instance, we compare the $`J`$ magnitude with $`(J-K)`$, we see that the faintest observations can be either very blue (TJD 48665) or relatively red (TJD 48494). The brightest observations are on average relatively red, but not more so than some faint points. Using the correlations of Rieke & Lebofsky (1985), from the observed $`A_V=R\times E(B-V)=6.17`$ we deduce $`A_J=1.74`$ and $`A_K=0.69`$, implying an interstellar reddening $`E(J-K)=1.05`$. Using the fainter and bluer infrared observations (TJD 48665), we find $`J_0=J-A_J=10.42`$ and $`K_0=K-A_K=10.71`$, implying $`(J-K)_0=-0.29\pm 0.07`$, which is roughly compatible with Wegner's (1994) average value of $`(J-K)_0=-0.18`$ for O9V stars, if we consider the errors in his value and in the standard relations used. The measured $`E(J-K)=(J-K)-(J-K)_0=0.94`$ is, within the errors, compatible with the value for the interstellar reddening found above and implies that no circumstellar reddening was present. This corresponds to a state in which the disc is optically thin at all infrared wavelengths and very little infrared emission is produced (see Dougherty et al. 1994). Brighter magnitudes with a similarly blue colour (as in TJD 49701) must represent a state in which the disc is producing a significant amount of infrared emission, but still remains optically thin, giving rise to no circumstellar reddening – a condition which could be associated with a very small disc (Dougherty et al. 1994). Very faint magnitudes with redder colours (as in TJD 49664) represent states in which little emission is present, but the disc has become (partially) optically thick in $`K`$, which can be associated with a larger disc or with a change in the density gradient. When the disc is very bright (as in 1983 or 1988), it is always relatively red. The circumstellar emission is then very intense (contributing $`\sim 1`$ mag), but the disc is optically thick at all wavelengths, and the circumstellar reddening remains constant at a low value, presumably because the disc is still small (Dougherty et al. 1994).

## 4 Discussion

We have shown that the optical counterpart to V0332+53 is an O8–9Ve star, further skewing the spectral distribution of Be/X-ray binary transients towards earlier spectral types (see Negueruela 1998). Our distance estimate is higher than those of previous authors, who observed brighter magnitudes owing to larger disc contamination. Honeycutt & Schlegel (1985) report $`B=17.04\pm 0.06`$ and $`(B-V)=1.62\pm 0.06`$ in two separate observations taken on November 27 – 28, 1983, and February 21, 1984, while our observations show magnitudes fainter by $`\sim 0.3`$ mag, though the values of $`(B-V)`$ are compatible within the observational errors. The photometric variability normally observed in classical Be stars is typically $`\lesssim 0.2`$ mag. Therefore BQ Cam does not show variations as large as those of V635 Cas, the optical counterpart to 4U 0115+634 (Negueruela et al. 1997), but it is more variable than most classical Be stars.
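The infrared dereddening chain used above can be scripted directly; a minimal sketch (not from the paper), where the Rieke & Lebofsky (1985) extinction ratios $`A_J/A_V=0.282`$ and $`A_K/A_V=0.112`$ are assumed and the observed TJD 48665 magnitudes ($`J\approx 12.16`$, $`K\approx 11.40`$) are inferred back from the dereddened values quoted in the text:

```python
# Interstellar V-band extinction (Section 3.2): A_V = R * E(B-V) = 3.3 * 1.87
A_V = 6.17

# Rieke & Lebofsky (1985) extinction ratios (assumed values)
A_J = 0.282 * A_V    # ~1.74
A_K = 0.112 * A_V    # ~0.69

# Observed magnitudes at TJD 48665 (inferred from the dereddened values above)
J, K = 12.16, 11.40

J0, K0 = J - A_J, K - A_K    # dereddened magnitudes
print(f"J0 = {J0:.2f}, K0 = {K0:.2f}, (J-K)0 = {J0 - K0:.2f}")   # 10.42, 10.71, -0.29

# Colour excess relative to Wegner's (1994) intrinsic (J-K)0 = -0.18 for O9V
E_JK = (J - K) - (-0.18)
print(f"E(J-K) = {E_JK:.2f}, interstellar A_J - A_K = {A_J - A_K:.2f}")   # 0.94 vs 1.05
```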
At a distance of $`\sim 7`$ kpc, the maximum X-ray luminosity of V0332+53 observed by $`Vela`$ 5$`B`$ is $`L_\mathrm{x}\approx 10^{38}`$ erg s<sup>-1</sup>, close to the Eddington luminosity for a neutron star. At this distance, BQ Cam is not likely to be part of the Perseus arm, but rather of an outer galactic arm (see Kimeswenger & Weinberger 1989). With an inclination angle $`i\approx 10^{\circ }`$, the orbital solution for V0332+53 implies $`a_\mathrm{x}\approx 8.5\times 10^{10}`$ m. For a companion mass $`M_*\approx 20M_{\odot }`$, this implies a periastron distance $`a_{\mathrm{per}}\approx 6.3\times 10^{10}`$ m $`\approx 10R_{*}`$, where the radius of the optical star is assumed to be $`R_{*}=9R_{\odot }`$ (Vacca et al. 1996). Using our value for $`v\mathrm{sin}i`$ and Huang's (1972) law $$\frac{R_\mathrm{d}}{R_{*}}=\left(\frac{2v\mathrm{sin}i}{\mathrm{\Delta }v_{\mathrm{peak}}}\right)^2$$ where $`\mathrm{\Delta }v_{\mathrm{peak}}`$ is the separation between the peaks of an emission line and Keplerian rotation of the envelope is assumed (which gives upper limits), we obtain outer emission radii $`R_\mathrm{d}=4.0,2.5`$ and $`1.8R_{*}`$ for H$`\alpha `$, H$`\beta `$ and He i $`\lambda `$6678 Å in November 1997. This clearly shows that the neutron star does not come close to the dense regions of the circumstellar disc in the present situation. The neutron star is reached only by the low-density outer envelope, and accretion is centrifugally inhibited (Stella et al. 1986). The reduced size of the circumstellar disc strongly points to the possibility of disc truncation by the neutron star, an idea advanced by Okazaki (1998) and supported by the results of Reig, Fabregat & Coe (1997b). This truncation would explain why we observe instances of a small disc which is optically thick at all wavelengths, implying a very high density. We note that V635 Cas shows large brightness variations with little change in the associated colours (Negueruela et al. 1997), a behaviour that could be associated with a very optically thick disc (Dougherty et al. 1994). This variability extends to the $`B`$ band, which can change by at least $`0.6\mathrm{mag}`$. It seems likely, then, that the disc in BQ Cam is not as optically thick at optical wavelengths as that in V635 Cas, but can be very optically thick in the infrared. Roche et al. (1999) find that optical and infrared observations of the Be/X-ray transient Cep X-4 are also best explained by a truncated dense disc. We have established that the orbit of the neutron star is likely to be inclined with respect to the equatorial plane of the Be star (in which the disc is supposed to lie while it is dynamically stable). This contradicts the general argument presented by Waters et al. (1989), but it is not unexpected. We note that the Be + neutron star system PSR B1259–63, which is believed to be a representative of the class of systems which will evolve into Be/X-ray binaries, is likely to have an orbit tilted with respect to the equatorial plane of the Be star (see Ball et al. 1999, and references therein), and that the B + neutron star system PSR J0045–7319, which must have formed in a way analogous to Be/X-ray binaries, has been shown to have a rotation axis misaligned with the orbit (Kaspi et al. 1996). The relevance of this misalignment to the formation and evolution of binary systems containing neutron stars has been discussed by van den Heuvel & van Paradijs (1997) and Iben & Tutukov (1998). Corbet & Peele (1997) have suggested a possible 34.5-d period for the Be/X-ray binary 3A 0726–260.
If this period were to be confirmed, the comparison between V0332+53 and 3A 0726–260 would be most interesting, since the two systems would have neutron stars orbiting Oe stars of almost identical spectral types with extremely similar orbital periods. In contrast, the X-ray activity of the two systems is completely different. V0332+53 is a transient, which spends most of the time in quiescence and shows very bright outbursts, while 3A 0726–260 seems to be a persistent low-luminosity source with small outbursts. The difference in pulse periods (4.4 s against 103.2 s) could reflect the very different behaviour of accreted material in magnetic fields of very different intensity. However, we note that the quiescent luminosity of 3A 0726–260 is almost identical to that of A 0535+262, which has almost the same spin period but a much broader orbit. This, together with the fact that the source lies nowhere close to the $`P_{\mathrm{orb}}/P_\mathrm{s}`$ relationship for Be/X-ray binaries, casts some doubt on the orbital period until it can be confirmed using orbital Doppler shifts in the arrival times of pulses.

## 5 Conclusions

We have presented long-term photometry and spectroscopy of the optical component of the Be/X-ray binary V0332+53, which indicate that it is an O8–9Ve star at a distance of $`\sim 7`$ kpc. We find evidence for a tilt of the orbital plane with respect to the equatorial plane. The lack of recent X-ray activity is explained by the fact that the dense regions of the circumstellar disc around the Oe star do not reach the orbit of the neutron star. The low inclination of the orbit allows us to determine a periastron distance $`a_{\mathrm{per}}\approx 10R_{*}`$, while measurements from our high-resolution spectroscopy of emission lines set the outer radius of the H$`\alpha `$ emitting region at $`R_\mathrm{d}\approx 4R_{*}`$. Under these conditions, centrifugal inhibition of accretion effectively prevents any X-ray emission.

## Acknowledgements

We would like to thank the UK PATT and the Spanish CAT panel for supporting our long-term monitoring campaign. We are very grateful to the INT and WHT service programmes for obtaining most of the optical observations. The 1.5-m TCS is operated by the Instituto de Astrofísica de Canarias at the Teide Observatory, Tenerife. The WHT and INT are operated on the island of La Palma by the Royal Greenwich Observatory in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofísica de Canarias. The 1.5-m telescope at Mount Palomar is jointly owned by the California Institute of Technology and the Carnegie Institute of Washington. We are very grateful to all the astronomers who have taken part in observations for this campaign: G. Capilla, D. Chakrabarty, J. S. Clark, C. Everall, J. Grunsfeld, A. J. Norton, H. Quaintrell, P. Reig, A. Reynolds, J. B. Stevens, J. M. Torrejón and S. J. Unger. James Stevens obtained and reduced the optical photometry of BQ Cam. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. The data reduction was carried out using the Southampton University and Liverpool John Moores University Starlink nodes, which are funded by PPARC. At Liverpool, IN was funded by PPARC; he now holds an ESA external fellowship. We thank an anonymous referee for helpful remarks which helped to improve the paper.
# WHAT DETERMINES THE DEPTH OF BALS? KECK HIRES OBSERVATIONS OF BALQSO 1603+3002

## 1 INTRODUCTION

Broad Absorption Line (BAL) QSOs are a manifestation of AGN outflows. BALs are associated with prominent resonance lines such as C iv $`\lambda `$1549, Si iv $`\lambda `$1397, N v $`\lambda `$1240, and Ly$`\alpha `$ $`\lambda `$1215. They appear in about 10% of all quasars (Foltz et al. 1990) and have typical velocity widths of $`\sim 10,000`$ km s<sup>-1</sup> (Weymann et al. 1985; Turnshek 1988) and terminal velocities of up to 50,000 km s<sup>-1</sup>. The small percentage of BALQSOs among quasars is generally interpreted as an orientation effect (Weymann et al. 1991), and it is probable that the majority of quasars and other types of AGN harbor intrinsic outflows. A crucial issue in the study of the outflows is whether the observed depth of the BALs is determined by the column density along the line of sight, or is due to ‘non-black saturation’ — the partial covering of the emission source by an optically thick flow. Non-black saturation can also be caused by filling in of the bottoms of the troughs by scattered photons. The question of column density vs. geometry (i.e., covering factor) is especially important for determining the ionization equilibrium and abundances (IEA) of the BAL material. Inferences about the IEA in the BAL region are made by simulating BAL ionic column densities ($`N_{ion}`$) using photoionization codes. Several groups (Korista et al. 1996; Turnshek et al. 1996; Hamann 1996) have used $`N_{ion}`$ extracted from HST observations of BALQSO 0226–1024 (Korista et al. 1992) in their IEA studies, while introducing innovative theoretical approaches to the problem. These studies, however, used the BAL apparent optical depths (defined as $`\tau =-\mathrm{ln}(I)`$, where $`I`$ is the residual intensity seen in the trough) to determine $`N_{ion}`$. The hazard of this approach is that the apparent optical depths in the BALs cannot be directly translated into realistic $`N_{ion}`$ unless the covering factor and level of saturation are known. In saturated BALs the inferred apparent $`N_{ion}`$ are only lower limits to the real $`N_{ion}`$, making conclusions regarding IEA in BALQSOs, such as very high BAL metallicity (Turnshek et al. 1996), highly uncertain. Recently, several groups have presented evidence for non-black saturation in BALs (Arav 1997; Arav et al. 1999; Barlow et al. 1997; Telfer et al. 1998); however, the importance of the phenomenon and its detailed study as a function of velocity across the absorption troughs are still in the preliminary stages. Here we present such a study of Keck HIRES observations of BALQSO 1603+3002 ($`z=2.03`$). The source was discovered during the FIRST (Faint Images of the Radio Sky at Twenty centimeters) Bright Quasar Survey (FBQS; Gregg et al. 1996; White et al. 1999), which selects quasar candidates by comparing the catalog of radio sources found by the VLA FIRST survey (Becker et al. 1995; White et al. 1997) with the APM catalog of the POSS-I plates (McMahon & Irwin 1992). One of the biggest surprises from the FBQS is the prevalence of BAL quasars in this radio-selected sample. Although previous studies indicated that none of the known BAL quasars are radio-loud, BAL quasars have been found in the FBQS at a rate equal to or greater than that for optically selected quasar samples. This result motivated us to begin an in-depth study of the BAL quasars in the FBQS.
In this paper we present a high-resolution spectrum of FIRST J160354.2+300209 (hereafter BALQSO 1603+3002). The low-resolution discovery spectrum is presented in White et al. (1999). BALQSO 1603+3002 is a radio-loud quasar with a flux density of 54 mJy at 1400 MHz and an optical magnitude of B = 18.0 \[$`\mathrm{log}(R^{*})=2.01`$\]. In § 3 we establish that the absorption in this object is BAL in nature. BALQSO 1603+3002 is a high-ionization BALQSO, in contrast to the two previously published BALs from the FIRST survey (Becker et al. 1997). In this paper we focus on the optical properties of this object.

## 2 ANALYSIS

### 2.1 Data Acquisition and Reduction

On May 18, 1998, we used the High Resolution Echelle Spectrometer (HIRES; Vogt et al. 1994) on the Keck-1 10-m telescope to obtain three 40-minute exposures of BALQSO 1603+3002 covering 3900 – 6000 Å using a 1.1-arcsec wide slit. The orders overlapped up to 5128 Å, beyond which small gaps occur between orders. The slit was rotated to the parallactic angle to minimize losses due to differential atmospheric refraction. The observing conditions were excellent, with subarcsecond seeing and near-photometric transparency. The spectra were extracted using routines tailored for HIRES reductions (Barlow 1999), normalizing the continuum to unity. The resolution of the extracted spectrum varies from 3.4 to 3.6 pixels FWHM, being 0.119 Å at 5000 Å, or 6.5 km s<sup>-1</sup> in velocity space. The continuum signal-to-noise ratio of the extracted data is roughly 10 – 15 per pixel. In the analysis which follows, we boxcar smoothed the spectrum by 10 pixels, increasing the continuum signal-to-noise ratio to 30 – 50 throughout the wavelength regions of interest while retaining sufficient velocity resolution for our analysis ($`\sim 18`$ km s<sup>-1</sup>).

### 2.2 Si iv BAL

Three distinct troughs are seen in the Si iv BAL (Fig. 1b). Since the total absorption width is only slightly greater than the Si iv doublet separation ($`\sim 2000`$ km s<sup>-1</sup>), the two main troughs are seen in both the blue and red components of the doublet and are unblended with other absorption. The ability to measure unblended features from two lines of the same ion allows us to solve separately for the effective covering factor and the real optical depth (Barlow et al. 1997; Hamann et al. 1997; Arav et al. 1999). The effective covering factor ($`C`$) is defined such that $`(1-C)`$ accounts for photons that arise either from regions not covered by the BAL flow or from scattering into the observer's line of sight. (We do not use the notation $`C_f`$, introduced by Barlow and Hamann, in order to reserve the use of a subscript to differentiate between continuum and BEL covering factors.) If scattering into the line of sight is negligible, then $`C`$ is the total emission-covering fraction of the BAL flow. In Si iv $`\lambda \lambda `$ 1394, 1403 the expected intrinsic optical depth ratio is 2:1, since the oscillator strength of the $`\lambda `$1394 line is twice that of the $`\lambda `$1403 line. The relationships between the residual intensities in the red and blue doublet components ($`I_r`$ and $`I_b`$, respectively), $`C`$ and the optical depth are given by: $$I_r=(1-C)+Ce^{-\frac{1}{2}\tau }\qquad (1)$$ $$I_b=(1-C)+Ce^{-\tau },\qquad (2)$$ where $`\tau `$ is the real optical depth of the stronger transition. We concentrate our analysis on the deepest of the Si iv troughs ($`A`$ in Fig. 1b).
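Equations (1) and (2) invert in closed form for each velocity bin: writing $`x=e^{-\tau /2}`$, one has $`x=(I_r-I_b)/(1-I_r)`$ and $`C=(1-I_r)/(1-x)`$. A minimal numpy sketch of this inversion (illustrative only; the array and function names are ours, not from the paper):

```python
import numpy as np

def solve_covering(I_r, I_b):
    """Invert equations (1) and (2) bin by bin.

    I_r, I_b : residual intensities of the red (weak) and blue (strong)
    doublet components on a common velocity grid (I_r < 1 assumed).
    Physical solutions require I_r >= I_b >= I_r**2 (Hamann et al. 1997);
    noisy bins with I_r < I_b are treated as I_r = I_b, i.e. tau -> infinity.
    Returns (C, tau), tau being the optical depth of the stronger line.
    """
    I_r = np.asarray(I_r, dtype=float)
    I_b = np.asarray(I_b, dtype=float)
    x = (I_r - I_b) / (1.0 - I_r)        # x = exp(-tau/2)
    C = np.empty_like(x)
    tau = np.empty_like(x)
    sat = x <= 0.0                       # saturated / noise-dominated bins
    tau[sat] = np.inf
    C[sat] = 1.0 - I_b[sat]
    tau[~sat] = -2.0 * np.log(x[~sat])
    C[~sat] = (1.0 - I_r[~sat]) / (1.0 - x[~sat])
    return C, tau

# Toy illustration of non-black saturation: a trough that never goes black
# can still be optically thick when the covering factor is small.
C, tau = solve_covering([0.75], [0.70])
print(C, tau)   # C ~ 0.31, tau ~ 3.2, versus tau_apparent = -ln(0.70) ~ 0.36
```

The toy numbers illustrate the effect quantified below: whenever $`C<1`$, the apparent optical depth can badly underestimate the real one.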
As we demonstrate below, we obtain a lower limit to the true optical depth, and minimize the role of the covering factor, if we assume that the flow covers the broad emission line (BEL) region to the same extent that it covers the continuum source. The first step is to fit an emission model for the whole Si iv region. We then divide the data by the emission model to obtain the normalized residual intensities. Working in $`\mathrm{log}(\lambda )`$ space (in which the doublet separation is constant), we shift the absorption due to the red component by the doublet separation to obtain a dataset which contains $`I_r`$ and $`I_b`$ on the same $`\mathrm{log}(\lambda )`$ scale. For each $`\mathrm{log}(\lambda )`$ bin we solve equations (1) and (2) for both $`C`$ and $`\tau `$. The results are shown in Figure 2, where for clarity we have transformed the x-axis to a velocity scale. Physical solutions of equations (1) and (2) exist only if $`I_r\ge I_b\ge I_r^2`$ (Hamann et al. 1997). Values outside this constraint are due to photon shot noise or systematic errors. Whenever we encountered a bin in which $`I_r<I_b`$, we treated it as though $`I_r=I_b`$, i.e., $`C=1-I_b`$ and $`\tau =\mathrm{\infty }`$. For the segment we have solved for, this situation arises only once (at $`-1940`$ km s<sup>-1</sup>). Figure 2 shows that the covering factor has almost exactly the same shape as $`I_b`$. The dashed line shows $`e^{-\tau }`$, which would have been the shape of the absorption trough if the coverage were complete. Since $`C(v)`$ is almost identical to $`I_b(v)`$, while $`e^{-\tau (v)}`$ does not correlate with $`I_b(v)`$, we conclude that the shape of absorption trough $`A`$ is determined by variations in the covering factor and not by changes in the real optical depth. This characteristic is most noticeable in the “hump” between $`-2000`$ and $`-1850`$ km s<sup>-1</sup>. If the shape of this hump were determined by changes in real optical depth, we would expect $`e^{-\tau (v)}`$ to mimic $`I_b(v)`$. From Figure 2 this is clearly not the case. In fact, the highest residual intensity is actually the point of largest optical depth. The real optical depth across trough $`A`$ is 3 – 6 times larger than the apparent optical depth ($`\tau _{apparent}\equiv -\mathrm{ln}(I_b)`$), demonstrating the unreliability of extracting column densities from measurements of $`\tau _{apparent}`$. A similar result is obtained for the second deepest Si iv trough (the third trough, being too shallow and partially blended, cannot be used for this analysis). In § 2.4 and § 3 we combine the dominance of the covering factor in determining the shape of trough $`A`$ with the information gathered from the C ii BAL (§ 2.4) to produce a geometrical picture of the flow. As we discuss in the next section, the C iv data suggest that the BAL flow does not cover the BEL region in this object. If this is the case for the Si iv absorption as well, how does it affect the results shown in Figure 2? The contribution of the Si iv BEL to the total emission is larger for the red doublet component of trough $`A`$ than for the blue component, and when we subtract a modeled Si iv BEL from the data the residual intensities of the two troughs are identical within the noise. In such a case, the lines must be highly saturated (with no useful upper limit for $`\tau `$ possible), and the shape of the trough is determined solely by the behavior of the covering factor. In § 3 we argue that this picture is the simplest interpretation of the data.

### 2.3 C iv BAL
For the C iv BAL (Fig. 1c) we cannot use the same solution technique, since the intrinsic doublet separation is only 500 km s<sup>-1</sup>, much smaller than the width of the flow ($`\sim 2000`$ km s<sup>-1</sup>); thus the trough is a blend of the two doublet components. However, we have an independent indicator of non-black saturation in this BAL as well. We model the unabsorbed emission with a BEL on top of a linear continuum (Fig. 1c). For the BEL, we used a two-Gaussian model that gave an excellent fit to the unabsorbed part of the C iv BEL. We show the C iv BEL (which is derived by subtracting the continuum from the full emission model) on the same plot. The flux as a function of velocity at the deepest part of the C iv BAL ($`-2700`$ to $`-1500`$ km s<sup>-1</sup>) is remarkably similar to the flux of the modeled C iv BEL. From this we deduce that the BAL flow in this object does not cover a significant fraction of the BEL region. A similar behavior is seen in Q1413+113 (Turnshek et al. 1988). Accepting this assertion leads to the conclusion that the C iv BAL flow is optically thick, since it blocks virtually all the continuum emission. Therefore, the shape of the C iv BAL trough contains information about the geometry and kinematics of the flow, but not about the column density of the absorber. Low-resolution data of the Ly$`\alpha `$ BEL (for which we do not have Keck HIRES coverage) were taken with the KAST double spectrograph at Lick Observatory. These data (shown in White et al. 1999) support our assertion that the BAL flow does not cover the BELs. Our data show unambiguous Ly$`\alpha `$ BAL absorption on the blue wing of the Ly$`\alpha `$ BEL. Between $`-1000`$ km s<sup>-1</sup> and $`-2200`$ km s<sup>-1</sup> the shape of the trough is consistent with a covered continuum and an uncovered BEL. Since the peak flux of the Ly$`\alpha `$ BEL is roughly twice as strong as the continuum, and six times stronger than the C iv BEL, we expect to see substantial Ly$`\alpha `$ emission peeking through the BAL flow. This is indeed the case: for example, at $`-1500`$ km s<sup>-1</sup> the observed flux is 1.5 times higher than the continuum level, in agreement with our predictions. In addition to the Ly$`\alpha `$ BAL we also see the N v BAL in the low-resolution data. The shape of this trough also supports our assertion that the BAL flow does not cover the BELs.

### 2.4 C ii BAL

Absorption associated with the BAL flow is clearly detected in C ii $`\lambda `$1335 (Fig. 1a). This line is a triplet with components at 1335.708 Å, 1335.663 Å (both from an excited level) and 1334.532 Å (Verner et al. 1996). The 1335.663 Å component is only 11% as strong as the 1335.708 Å component and is separated from it by only $`\sim `$ 10 km s<sup>-1</sup>. For our purposes we can therefore treat the whole line as a doublet with components at 1335.703 Å and 1334.532 Å, which have an intrinsic optical depth ratio of 2:1, respectively. From the comparable absorption equivalent widths seen in C ii $`\lambda `$1334.532 and C ii\* $`\lambda `$1335.703, and taking into account the possibility of saturation, a lower limit of $`\sim 10`$ cm<sup>-3</sup> can be obtained for the number density of the gas (Wood & Linsky 1997). This lower limit cannot be taken as evidence for the intrinsic nature of the absorption, since similar number density values are inferred for some intervening absorption systems (for example Q1037–2704; Lespine & Petitjean 1997).
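Since this effective doublet has the same 2:1 intrinsic ratio as Si iv, the simple apparent-ratio test applies here too; a minimal sketch with hypothetical residual intensities (the actual C ii depths are shown in Fig. 1a and are not tabulated here):

```python
from math import log

def apparent_ratio(I_strong, I_weak):
    """Apparent optical-depth ratio of a 2:1 doublet.

    For unsaturated absorption that fully covers the source the ratio
    is 2; values well below 2 indicate saturation and/or partial
    covering, as argued for C ii in the next paragraph.
    """
    return log(I_strong) / log(I_weak)

# Hypothetical residual intensities for the stronger and weaker components
print(apparent_ratio(I_strong=0.80, I_weak=0.85))   # ~1.4 < 2
```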
Due to the lower S/N and the shallowness of the absorption in C ii, we cannot get meaningful results from trying to solve for the covering factor and real optical depth in this line. Even so, the data are strongly suggestive of saturation, since the troughs have an apparent optical depth ratio of less than 2:1. From Figure 1, it is evident that the C ii absorption is perfectly aligned with the deepest subtrough of Si iv trough $`A`$. No significant C ii absorption is seen associated with the low-velocity subtrough of this feature, even though its residual intensity in Si iv is almost identical to that of the deepest subtrough. (There is also a third subtrough around $`-2090`$ km s<sup>-1</sup>, but since it is narrower and less distinct we ignore it.) Our explanation for this occurrence is that what we see are two distinct outflows. One outflow might have a lower ionization equilibrium and thus show C ii absorption. Alternatively, the flow that shows C ii absorption might be in a similar ionization equilibrium but have a significantly larger optical depth in all lines, which allows the detection of a small C ii contribution. This picture agrees well with our inferences from the Si iv analysis. We know that the shape of trough $`A`$ is determined by changes in the covering factor, which shows two distinct subtroughs. The simplest way to explain one such subtrough is to assume that an accelerating outflow moves in and out of our line of sight (Arav 1996; Arav et al. 1999). Two such outflows which happen to cross our line of sight at similar radial velocities will give rise to the two subtroughs seen in trough $`A`$. Since these are not physically connected, it is less of a surprise to detect C ii absorption in only one of them.

## 3 DISCUSSION

To relate our findings to the whole class of BALQSOs, we need to establish the relationship of the absorption seen in BALQSO 1603+3002 to the BAL phenomenon in general. Weymann et al. (1991) defined a BAL as a continuous absorption of at least 10% in depth spanning more than 2000 km s<sup>-1</sup>, discounting absorption closer than 3000 km s<sup>-1</sup> bluewards of the emission peak. In Figure 1c we show the data for the C iv BAL. The width of continuous absorption deeper than 10% is 2600 km s<sup>-1</sup>, which satisfies the width criterion, but most of the absorption is at velocities closer than –3000 km s<sup>-1</sup> to the emission-line peak. However, the –3000 km s<sup>-1</sup> condition was introduced in order to unambiguously distinguish between associated absorbers and “classical BALs”; it does not hold any physical meaning. The flow in BALQSO 1603+3002 shows non-black saturation, and the C iv data suggest that the flow does not cover the broad emission line region of the object (with a size of $`\sim 0.1`$ pc; Netzer 1990). Each of these independent findings marks the flow as arising from the vicinity of the central source and as being physically similar to “classical BALs”. With the data improvements available in recent years (especially high-resolution spectroscopy), we advocate classification of absorption systems based on their physical characteristics (see Barlow et al. 1997), rather than the older phenomenological one. The geometry that we proposed for trough $`A`$ (§ 2.4) can be extrapolated to the full observed BAL. We have already mentioned that the structure seen in the trough situated at $`-2600`$ km s<sup>-1</sup> (see Fig. 1b) is due to variations in the covering factor.
Therefore, following the arguments we used for trough $`A`$, it seems plausible that this trough is also the result of two outflows that cross our line of sight at similar radial velocities. If we extend this picture to the shallowest trough, at $`-3400`$ km s<sup>-1</sup>, which also shows two adjacent absorption features, we are led to postulate six different outflow components in the full BAL. Multi-component flows might be quite common in BALQSOs, since many of them show several absorption troughs. For example, in the spectra shown by Korista et al. (1993) there are four C iv troughs in Q0146+0142, 3 in Q0226–1024, 3 in Q0932+5010 and 4 in Q2240–3702. One unexplained coincidence in our flow model is the occurrence of three pairs of closely adjacent subflows. Starting from trough $`A`$, the separations between the deepest absorption features in each trough are: 154 $`\pm 8`$, 136 $`\pm 8`$ and 117 $`\pm 8`$ km s<sup>-1</sup>. Having three such absorption pairs, all with separations between 100 – 150 km s<sup>-1</sup>, across a full velocity width of more than 2000 km s<sup>-1</sup> seems improbable without a dynamical justification. Independent of our flow model, however, solving for the Si iv doublet components shows that the structure in at least the first two troughs is mainly due to changes in the covering factor (see Fig. 2). Assuming that the flow covers the whole emission region leads to $`\tau _{real}`$ values between 2 – 5 across trough $`A`$. Alternatively, assuming the flows do not cover any appreciable part of the BEL region yields indistinguishable residual intensities for the red and blue components of each trough (after subtracting the BEL contribution). In this case the $`\tau _{real}`$ values lie between 5 and $`\mathrm{\infty }`$. The latter option seems more physical for two reasons. First, from the C iv data we infer that the flow as seen in C iv does not cover the broad emission line region. There is no reason to assume that the Si iv case is different. Second, in the absence of a physical preference for $`\tau _{real}`$ values of order unity, values between 2 – 5 necessitate some fine-tuning, whereas the range 5 – $`\mathrm{\infty }`$ is simply much more probable numerically. Although the three flow components are seen in both C iv and Si iv, there are important differences between these two manifestations. The C iv absorption is always deeper and somewhat wider than the Si iv flow. Also, in C iv there is no trace of the large variations in covering factor seen in Si iv trough $`A`$. These differences show that in the two main troughs the covering factor is ion dependent. A model based on column-density gradients and kinematic effects can explain this behavior qualitatively (Arav et al. 1999).

## 4 SUMMARY AND CONCLUSIONS

High-resolution spectroscopy of BALQSO 1603+3002 has yielded important diagnostics of the nature of quasar outflows. The presence of two relatively wide but still unblended doublet components of Si iv in its spectrum has allowed us to distinguish between the effects of column density and covering factor in determining the shape of the absorption troughs. A straightforward solution of equations (1) and (2) demonstrates that changes in the covering factor, rather than variations in the real optical depth, are responsible for the troughs’ shape.
This result was independently supported by the findings from the C iv BAL, which indicate that the flow does not cover an appreciable portion of the BEL region (further evidence for the non-covering comes from the low-resolution data of the Ly$`\alpha `$ and N v BALs; see § 2.3). Subtracting the BEL contribution, the resultant BAL is black across a considerable span and is therefore saturated. The inference from the C iv BAL that the BEL region is not covered by the flow strengthens the results derived from the Si iv analysis. After subtracting a modeled Si iv BEL, the residual intensities of the blue and red components of the troughs are identical within the noise. Such an occurrence indicates that the absorption is highly saturated and that the shape of the trough is solely determined by changes in the covering factor. It also suggests that the transition from opaque matter to $`\tau \ll 1`$ is quite sharp. Once we know that the shape of the absorption line is due to the covering factor, it is natural to model the structure within trough $`A`$ of Si iv as arising from two separate outflows. Independent evidence for this assertion comes from the C ii BAL, which shows an absorption feature that coincides with only one of the subtroughs seen in Si iv trough $`A`$. This result supports a picture of a BAL region consisting of several flows that appear to have different properties, either as a result of a different ionization state or simply because of different optical depths. However, the real situation must be more complicated, since the C iv and Si iv BALs show different covering factors at the same velocities. Our findings have important implications for abundance studies of the flows. As we have shown, extracting $`\tau `$ from the depth of the trough using $`\tau =-\mathrm{ln}(I_r)`$ severely underestimates the true optical depth. This leads to a similar underestimation of the resultant column density. Since abundances are determined by a relative comparison of column densities after accounting for the ionization equilibrium, underestimating the hydrogen column density can produce erroneously high absolute abundances for all the heavy elements. Differential metal abundance determinations are also susceptible to large errors arising from underestimated column densities. Based on the apparent column densities in Q0226–1024, Turnshek et al. (1996) found (their table 4, first model) that Si and S are highly enriched relative to C, compared to solar ratios: (Si/C) $`\approx 4`$ (Si/C)<sub>⊙</sub>, (S/C) $`\approx 8`$ (S/C)<sub>⊙</sub>. Similarly, Junkkarinen et al. (1997) found (P/C) $`\approx 60`$ (P/C)<sub>⊙</sub> in PG 0946+301. When compared to the solar abundance ratios, (C/Si) = 11, (C/S) = 20 and (C/P) = 1000 (Grevesse & Anders 1989), a trend of higher enrichment for rarer elements is evident. This surprising and suspect correlation can be eliminated if we accept that the column densities are large, the absorption is saturated, and consequently the shapes of the troughs are only mildly dependent on the real optical depth. In such a case, we would expect only a small variation in the depth of troughs which arise from different elements, even when the abundances differ by large factors. If one does not assume saturation, a progressively higher enrichment for rarer elements has to be invoked to explain the small variation in apparent column density. Non-black saturation accounts for this without invoking fantastic metal enrichment. Rare elements are simply less saturated than more abundant elements. Turnshek et al.
(Turnshek et al. (1996) also found higher enrichment relative to carbon for nitrogen and oxygen. These findings can also be explained by the saturation scenario. If we assume that all the BALs in Q0226–1024 are similar in depth and shape, which is correct to within a factor of 2, we deduce higher apparent column densities for lines with weaker oscillator strengths. This is exactly the case for the O iii, O iv, and N iii BALs observed in Q0226–1024, and the high column densities deduced for these ions (Turnshek et al. 1996, Table 2) are largely responsible for the very high enrichment reported for these elements.) Based on the results shown in this paper and on independent evidence for non-black BAL saturation (see § 1), we conclude that BAL abundance claims in the literature which are based on apparent $`\tau `$ should be treated with the utmost caution. ## ACKNOWLEDGMENTS We thank the referee Kirk Korista for several valuable suggestions. Part of this work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48. We acknowledge support from the NRAO, NSF grant AST-9802791, STScI and Sun Microsystems.
# DESY 99–035 ISSN 0418–9833 MZ-TH/99–05 TTP99-13 hep-ph/9903377 March 1999 Photon Plus Jet Production in Large-$`Q^2`$ $`ep`$ Collisions at Next-to-Leading Order QCD ## 1 Introduction The production of isolated photons in high-energy hadronic collisions is an important testing ground for QCD. Since the photon does not take part in the strong interaction, it is a “direct” probe of the hard scattering process and provides a means to measure the strong coupling constant $`\alpha _s`$ or to extract information on the parton distributions, in particular the gluon density in the proton. Moreover, good knowledge of the standard model predictions for direct photon production is required since it is an important background for many searches for new physics. At HERA, with increasing luminosity, the measurement of isolated photon production will give information on the parton content of the proton and, at $`Q^2=0`$, i.e. for photoproduction, also on the parton distributions in the photon. First experimental results from the ZEUS collaboration (see also ) have been reported and found in reasonable agreement with next-to-leading order (NLO) predictions. Cross sections for the production of hard photons in deep inelastic scattering are much smaller than for photoproduction and, therefore, more difficult to measure. Typical cross sections, for example with $`Q^2>10GeV^2`$, are of the order of $`10pb`$. With a luminosity of $`50pb^{-1}`$ one thus expects measurements of differential cross sections to become feasible. A NLO calculation for direct photon production in deep inelastic $`ep`$ scattering, $`ep\rightarrow e\gamma X`$, at large $`Q^2`$ has been reported recently by two of us and D. Michelsen. Since hard photon production occurs, compared to inclusive deep inelastic scattering, at a relative order O($`\alpha `$), one expects a sizable cross section only at moderately large $`Q^2`$. Therefore, one can restrict the calculation to pure virtual photon exchange, with $`Z`$ exchange neglected. In that calculation the hadronic final state is separated into $`\gamma +(1+1)`$- and $`\gamma +(2+1)`$-jet topologies (the remnant jet being counted as the “+1” jet as usual). The approach is thus analogous to the calculation of $`(2+1)`$- and $`(3+1)`$-jet cross sections, where one of the final state gluons is replaced by a photon. In addition to the direct production, photons can also be produced through the fragmentation of a hadronic jet into a single photon carrying a large fraction of the jet energy. This long-distance process is described in terms of the quark-to-photon and gluon-to-photon fragmentation functions, which absorb collinear singularities present in the perturbative calculation. First measurements of the $`q\rightarrow \gamma `$ fragmentation function in $`e^+e^{-}\rightarrow \gamma +1`$-jet are presented in (see also for the discussion of an inclusive measurement). The NLO theory for this process has been worked out in . In the earlier $`ep`$ calculation the fragmentation contributions were discarded and the photon-quark collinear singularities were removed by explicit parton-level cutoffs. The results depended strongly on these photon-parton cutoffs, in particular for the incoming gluon contributions. However, these cutoffs are difficult to control experimentally. In this paper we report results in which the fragmentation contributions are included, together with isolation criteria that limit the hadronic energy in the jet containing the photon.
Whereas in earlier work various photon + jet cross sections with invariant-mass jet resolution criteria were calculated, in this work we concentrate on the calculation of various differential cross sections, either exclusive or inclusive, which depend on the transverse momenta and rapidities of the photon or the accompanying jet. We use the $`\gamma ^{*}p`$ center-of-mass system to define the kinematic variables. The cone method is applied to define the parton jets and to isolate the photon signal. In section 2 a brief outline of the theoretical background for calculating the cross section is given. In section 3 numerical results are presented. Section 4 contains a short summary and the conclusions. ## 2 Subprocesses Through Next-to-Leading Order ### 2.1 Leading-Order Contributions In leading order, the production of photons in deep inelastic electron (positron) scattering is described by the quark (antiquark) subprocess $$e(p_1)+q(p_3)\rightarrow e(p_2)+q(p_4)+\gamma (p_5)$$ (1) where the particle momenta are given in parentheses. The momentum of the incoming quark is a fraction $`\xi `$ of the proton momentum $`p_P`$, $`p_3=\xi p_P`$. The proton remnant $`r`$ has the momentum $`p_r=(1-\xi )p_P`$. It hadronizes into the remnant jet, so that the process (1) gives rise to $`\gamma +(1+1)`$-jet final states. In the virtual photon $`\gamma ^{*}`$-proton center-of-mass system the hard photon recoils against the hard jet back-to-back. To remove photon production by incoming photons $`\gamma ^{*}`$ with small virtuality (the photoproduction channel) and to restrict to the case where the scattered electron $`e(p_2)`$ is observed, one applies cuts on the usual deep inelastic scattering variables $`x,y`$ and $`Q^2`$. In addition, to have photons $`\gamma (p_5)`$ of sufficient energy we require an explicit cut on the invariant mass $`W`$ of the final state, $`W^2=(q+p_P)^2`$, where $`q`$ is the electron momentum transfer, $`q=p_1-p_2`$ and $`Q^2=-q^2`$ as usual. Both leptons and quarks emit photons. The subset of diagrams where the photon is emitted from the initial or final state lepton (leptonic radiation) is explicitly gauge invariant and can be considered separately. Similarly, the diagrams with a photon emitted from the quark lines are called quarkonic radiation. In addition, there are also contributions from the interference of these two. Since we are interested in testing QCD under the circumstances that the photon is emitted from quarks, the contributions from leptonic radiation are viewed as a background and must be suppressed. This can easily be done by a cut on the photon emission angle with respect to the incoming electron. In our numerical evaluation we include this background source as well as the interference contribution. At lowest order, each parton is identified with a jet and the photon is automatically isolated from the quark jet by requiring a non-zero transverse momentum of the photon or jet in the $`\gamma ^{*}`$-proton center-of-mass frame. Therefore the photon fragmentation contribution is absent at this order. ### 2.2 Next-To-Leading Order Corrections At NLO, processes with an additional gluon, either in the final state or in the initial state, must be taken into account, i.e. $`e(p_1)+q(p_3)\rightarrow e(p_2)+q(p_4)+\gamma (p_5)+g(p_6),`$ (2) $`e(p_1)+g(p_3)\rightarrow e(p_2)+q(p_4)+\gamma (p_5)+\overline{q}(p_6),`$ (3) where the momenta of the particles are again given in parentheses. In addition, virtual corrections (one-loop diagrams at O($`\alpha _s`$)) to the LO process (1) have to be included.
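The DIS invariants used above follow directly from the four-momenta. As a minimal sketch (not from the paper), here is how $`Q^2`$, $`x`$, $`y`$ and $`W^2`$ would be computed, assuming massless kinematics, metric (+,−,−,−), and the HERA beam energies quoted in section 3; the illustrative scattered-electron momentum is our own choice.

```python
import numpy as np

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-); four-vectors are (E,px,py,pz)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

def dis_variables(p1, p2, pP):
    """Standard DIS invariants from incoming/outgoing lepton (p1, p2) and proton (pP)."""
    q = p1 - p2                        # virtual-photon momentum transfer
    Q2 = -mdot(q, q)
    x = Q2 / (2.0 * mdot(pP, q))
    y = mdot(pP, q) / mdot(pP, p1)
    W2 = mdot(q + pP, q + pP)          # invariant mass^2 of the gamma* + proton system
    return Q2, x, y, W2

# HERA beams (massless approximation): electron along -z, proton along +z
Ee, Ep = 27.5, 820.0
p1 = np.array([Ee, 0.0, 0.0, -Ee])
pP = np.array([Ep, 0.0, 0.0, Ep])
# an illustrative scattered electron close to the backward direction
E2, th = 20.0, np.radians(170.0)
p2 = np.array([E2, E2 * np.sin(th), 0.0, E2 * np.cos(th)])
print(dis_variables(p1, p2, pP))   # Q2 ~ 17 GeV^2, small x, y ~ 0.28, W ~ 160 GeV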
The complete matrix elements for (2) and (3) are given in . The processes (2) and (3) contribute both to the $`\gamma +(1+1)`$-jets cross section and to the cross section for $`\gamma +(2+1)`$-jets. In the latter case each parton in the final state of (2) and (3) builds a jet on its own, whereas for $`\gamma +(1+1)`$-jets a pair of final state partons is experimentally unresolved. The exact criteria for combining two partons into one jet will be introduced when we present our results. Following the customary experimental procedure, the resolution constraints will be based on the cone algorithm. In the calculation of the cross section for $`\gamma +(1+1)`$-jets we encounter the well-known infrared and collinear singularities. They appear for the processes (2) and (3) in those phase space regions where two partons are degenerate to one parton, i.e. when one of the partons becomes soft or two partons become collinear to each other. The singularities are assigned either to the initial state (ISR) or to the final state (FSR). Contributions involving the product of an ISR and a FSR factor are separated by partial fractioning. The FSR singularities cancel against singularities from the virtual corrections to the LO process (1). For the ISR singularities, this cancellation is incomplete and the remaining singular contributions have to be factorized and absorbed into the renormalized parton distribution functions (PDF's) of the proton. To accomplish this procedure, the singularities are isolated in an analytic calculation with the help of dimensional regularization. This is difficult for the complete cross sections of the processes (2) and (3). After partial fractioning, the phase-space slicing method is used to separate the singular regions in the 4-particle phase space and to determine in these regions the approximated matrix elements and phase space factors. In those regions only, the calculation is performed analytically. For this purpose a slicing cut $`y_0^J`$ is applied to the scaled invariant masses $`y_{ij}`$, where $`y_{ij}=(p_i+p_j)^2/W^2`$ with $`W^2=(p_P+p_1-p_2-p_5)^2`$. $`y_0^J`$ must be chosen small enough, so that terms of the order O($`y_0^J`$), which are discarded due to the singular approximation, are so small that an accuracy of a few percent can be achieved for the final result. To be somewhat more explicit, let us assume that by partial fractioning the contribution proportional to the pole term $`1/y_{46}`$ has been isolated in the matrix element $`|M|^2`$ for the process (2). In the infrared region $`p_6\rightarrow 0`$ the two invariants $`y_{46}`$ and $`y_{36}`$ vanish. Then the integration over these two variables is performed: ($`i`$) over the singular region (S), $`y_{46}<y_0^J`$, $`y_{36}>0`$, analytically with $`4-2ϵ`$ dimensions and using the singular approximation; ($`ii`$) over the finite region (F), $`y_{46}>y_0^J`$, $`y_{36}<y_0^J`$, numerically without any approximation to $`|M|^2`$ and in 4 dimensions; and ($`iii`$) over the explicit two-parton region (R), $`y_{46}>y_0^J`$, $`y_{36}>y_0^J`$, also numerically. This separation yields two contributions: the parton-level $`\gamma +(1+1)`$-jets contribution, which comes from the integration over the regions (S) and (F), and the parton-level $`\gamma +(2+1)`$-jets contribution, which corresponds to the integration over the region (R). All the remaining phase space integrations are performed numerically with the usual Monte Carlo routines.
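The region bookkeeping of the slicing is simple to state in code. The toy classifier below is our own illustrative sketch of the S/F/R assignment just described (it is not the authors' program):

```python
def slicing_region(y46, y36, y0J=1e-4):
    """Assign a four-particle phase-space point to the slicing regions for the
    1/y46 pole term, using the notation of the text."""
    if y46 < y0J:
        return "S"   # singular sliver: integrated analytically in 4-2eps dimensions
    if y36 < y0J:
        return "F"   # finite sliver: integrated numerically, no approximation
    return "R"       # resolved region: parton-level gamma + (2+1) jets

print(slicing_region(5e-5, 0.3), slicing_region(0.3, 5e-5), slicing_region(0.3, 0.2))
```

Only the sum of the three pieces is physical; the classifier merely mirrors how the Monte Carlo separates them before the $`y_0^J`$ dependence cancels.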
For the physical cross sections defined in the next section, which are obtained by adding the contributions from the regions S, F and R, of course, and after adding the virtual contributions and performing the subtraction of the remaining ISR collinear singularities, the dependence on the slicing parameter $`y_0^J`$ cancels. This has been checked in . This means that the cut-off $`y_0^J`$ is purely technical. In addition, the matrix elements $`|M|^2`$ for the processes (2) and (3) have photonic infrared and collinear singularities, i.e. due to soft or collinear photons. The infrared singularity cannot occur since we require a sufficiently large photon energy $`E_\gamma =|\stackrel{}{p_5}|`$. But collinear singularities are present in general. In the earlier work these $`q`$-$`\gamma `$ collinear contributions were eliminated by an isolation cut on the photon of the form $`y_{5i}>y_0^\gamma `$ with a sufficiently large isolation parameter $`y_0^\gamma `$, which was considered as a physical cut. In this approach, the photon was considered as a special parton, which was always isolated from all other partons in the initial and final state. Such a photon isolation is very difficult to impose experimentally, since it refers to a separation of the photon from partons, whereas in the experiment only hadrons are measured directly, which are recombined to jets. In addition, it was found there that the results, in particular for the gluon-initiated process (3), depend strongly on the isolation cut $`y_0^\gamma `$. In (3) the photon can become collinear to two final state partons, namely $`q`$ and $`\overline{q}`$, which explains the stronger dependence compared to the process (2), where there is only one quark (or antiquark) in the final state. Although under the kinematical conditions assumed there the contribution of the process (3) was only some fraction of the total cross section for $`\gamma +(1+1)`$-jets, the $`y_0^\gamma `$ dependence of the final result was undesirable. In a more systematic treatment the $`y_0^\gamma `$ dependence can be avoided by adding contributions from the quark-to-photon fragmentation function (FF). In order to achieve this we include the contributions to $`|M|^2`$ from (2) and (3), where $`y_{5i}<y_0^\gamma `$ with $`i=4`$ in (2) and $`i=4`$ and 6 in (3). This leads to collinear divergent contributions which are regulated by dimensional regularization. The divergent part is absorbed into the bare photon FF to yield the renormalized FF denoted by $`D_{q\gamma }`$. The additional fragmentation contribution, which includes the contributions from the region $`y_{5i}<y_0^\gamma `$ ($`i=3,4`$), has, for example, for the process (2) the following form $`M_{\gamma ^{*}q\rightarrow qg\gamma }^2=M_{\gamma ^{*}q\rightarrow qg}^2D_{q\gamma }(z)`$ (4) where $`D_{q\gamma }(z)`$ in (4) is given by $`D_{q\gamma }(z)=D_{q\gamma }(z,\mu _F^2)+{\displaystyle \frac{\alpha e_q^2}{2\pi }}\left(P_{q\gamma }(z)\mathrm{ln}{\displaystyle \frac{z(1-z)y_0^\gamma W^2}{\mu _F^2}}+z\right).`$ (5) $`P_{q\gamma }(z)`$ is the LO quark-to-photon splitting function $$P_{q\gamma }(z)=\frac{1+(1-z)^2}{z}$$ (6) and $`e_q`$ is the electric quark charge. $`D_{q\gamma }(z,\mu _F^2)`$ stands for the non-perturbative FF of the transition $`q\rightarrow \gamma `$ at the factorization scale $`\mu _F`$. This function will be specified in the next section when we present our results. The second term in (5), if substituted in (4), is the finite part of the result of the integration over the collinear region $`y_{5i}<y_0^\gamma `$.
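The perturbative piece of (5) is elementary to evaluate. The sketch below is our own illustration of eqs. (5)-(6); the non-perturbative part is left as a user-supplied callable (we do not reproduce the fitted function here), and the fine-structure value and test numbers are assumptions for the example.

```python
import numpy as np

ALPHA = 1.0 / 137.036  # fine-structure constant (illustrative value)

def P_qgamma(z):
    """LO quark-to-photon splitting function, eq. (6)."""
    return (1.0 + (1.0 - z) ** 2) / z

def D_qgamma(z, eq, W2, muF2, y0_gamma=1e-5, D_np=lambda z, muF2: 0.0):
    """Quark-to-photon FF of eq. (5): a non-perturbative input D_np plus the
    finite remainder of the collinear region y_{5i} < y0_gamma."""
    pert = ALPHA * eq ** 2 / (2.0 * np.pi) * (
        P_qgamma(z) * np.log(z * (1.0 - z) * y0_gamma * W2 / muF2) + z)
    return D_np(z, muF2) + pert

# illustrative numbers: an up quark, W^2 = 10^4 GeV^2, muF^2 = 40 GeV^2
# note the large negative y0_gamma logarithm: it cancels against the
# numerically integrated region y_{5i} > y0_gamma, as stated in the text
print(D_qgamma(z=0.8, eq=2.0 / 3.0, W2=1.0e4, muF2=40.0))
```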
As will be explicitly shown in , the $`y_0^\gamma `$ dependence in (5) cancels the dependence of the numerically evaluated $`\gamma +(1+1)`$-jet cross section restricted to the region $`y_{5i}>y_0^\gamma `$, investigated in . The variable $`z`$ stands for the fraction of the photon energy in terms of the energy of the quark that emits the photon. Suppose the photon is emitted from the final state quark with 4-momentum $`p_4^{\prime }=p_4+p_5`$; then $`z`$ can be related to the invariants $`y_{35}`$ and $`y_{34}`$ $$z=\frac{y_{35}}{y_{34^{\prime }}}=\frac{y_{35}}{y_{34}+y_{35}}$$ (7) The fragmentation contribution is proportional to the cross section for $`\gamma ^{*}q\rightarrow qg`$, which is of O($`\alpha _s`$) and well known. It must be convoluted with the function in (5) as indicated in (4) to obtain the contribution to the cross section for $`\gamma ^{*}q\rightarrow qg\gamma `$ at $`O(\alpha \alpha _s)`$. Equivalent formulas are used to calculate the fragmentation contributions to the channel (3) and in the case where the quark in the initial and final state is replaced by an antiquark in (2). ## 3 Results The results presented in this section are obtained for energies and kinematical cuts appropriate for the HERA experiments. The energies of the incoming electron (positron) and proton are $`E_e=27.5GeV`$ and $`E_P=820GeV`$, respectively. The cuts on the usual DIS variables are $`Q^2\ge 10GeV^2,W>10GeV,`$ (8) $`10^{-4}\le x\le 0.5,0.05\le y\le 0.95.`$ To eliminate the background from lepton radiation we require $`90^{\circ }<\theta _\gamma <173^{\circ },\theta _{\gamma e}\ge 10^{\circ }`$ (9) where $`\theta _\gamma `$ is the emission angle of the photon measured with respect to the momentum of the incoming electron in the HERA laboratory frame. The cut on $`\theta _{\gamma e}=\angle (e(p_2),\gamma (p_5))`$ suppresses leptonic radiation from the final-state electron. The PDF's of the proton are taken from MRST and $`\alpha _s`$ is calculated from the two-loop formula with the same $`\mathrm{\Lambda }`$ value ($`\mathrm{\Lambda }_{\overline{MS}}(n_f=4)=300MeV`$) as used in the MRST parametrization of the proton PDF. The scale in $`\alpha _s`$ and the factorization scale are equal and fixed to $`Q^2`$. We are interested in the differential two-particle inclusive cross section $`E_\gamma E_Jd\sigma /d^3p_\gamma d^3p_J`$ at NLO (up to $`O(\alpha \alpha _s)`$), where $`(E,\stackrel{}{p})`$ represents the four-vector momentum of the $`\gamma `$ or jet. The $`\gamma +(1+1)`$-jet cross section receives contributions from leading and next-to-leading order and the $`\gamma +(2+1)`$-jet cross section from leading order only. In the latter case, only $`\gamma +2`$-parton-level jets contribute, and each parton, including the photon, builds a jet on its own. The evaluation of the $`\gamma +(1+1)`$-jet cross section is based on two separate contributions, a set of two-body contributions, i.e. $`\gamma +1`$ parton-level jet, and a set of three-body contributions, i.e. $`\gamma +2`$ parton-level jets. In this definition of parton-level jets the remnant jet $`\mathrm{"}+1\mathrm{"}`$ is not counted, whereas the photon is considered also as a parton, like $`q,\overline{q}`$ and $`g`$. Each set of contributions is completely finite, as all infrared and collinear singularities have been canceled or absorbed into the proton PDF or the quark-to-photon FF. Each contribution to the $`\gamma +(1+1)`$-jet cross section depends separately on the slicing parameter $`y_0^J`$. The analytic contributions are valid only for very small $`y_0^J`$.
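The event selection of eqs. (8)-(9) is straightforward to encode. A small filter, with variable names of our choosing, might look as follows:

```python
def passes_cuts(Q2, W, x, y, theta_gamma_deg, theta_gamma_e_deg):
    """Selection of eqs. (8)-(9): DIS cuts plus the anti-lepton-radiation cuts
    on the photon angles, defined in the HERA laboratory frame."""
    return (Q2 >= 10.0 and W > 10.0
            and 1e-4 <= x <= 0.5 and 0.05 <= y <= 0.95
            and 90.0 < theta_gamma_deg < 173.0
            and theta_gamma_e_deg >= 10.0)

# illustrative event (values consistent with the kinematics sketch above)
print(passes_cuts(Q2=16.8, W=158.0, x=6.7e-4, y=0.28,
                  theta_gamma_deg=120.0, theta_gamma_e_deg=25.0))  # True
```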
Separately, the two contributions have no physical meaning. For the contributions to the $`\gamma +(1+1)`$-jet cross section coming from $`\gamma +2`$ parton-level jets, a slicing cut $`y_0^\gamma `$ for the photon is introduced. After adding the photon fragmentation contribution this sample becomes independent of $`y_0^\gamma `$, i.e. $`y_0^\gamma `$ also has the status of a technical cut like $`y_0^J`$. In the $`\gamma +1`$ parton-level jet event sample the photon is isolated from $`q`$ and $`\overline{q}`$ by requiring a non-zero transverse momentum of the photon in the $`\gamma ^{*}p`$ center-of-mass frame. The two technical cuts $`y_0^J`$ and $`y_0^\gamma `$ serve only to distinguish the phase space regions where the integrations have been done analytically in arbitrary dimensions from those where they have been done numerically in four dimensions. These two parameters must be chosen sufficiently small to justify the neglect of terms proportional to $`y_0^J`$ and $`y_0^\gamma `$, respectively, in the analytical part of the calculations. We found that $`y_0^J=10^{-4}`$ and $`y_0^\gamma =10^{-5}`$ are sufficient to fulfill this requirement. In order to comply with the jet definitions in future analyses of experimental measurements, the partons and the photon in the $`\gamma +2`$ parton-level jet sample are recombined to $`\gamma +(1+1)`$-jets using the cone algorithm of the Snowmass convention . In this recombination scheme the photon is treated like any other parton (so-called democratic algorithm). In the $`\gamma ^{*}p`$ center-of-mass frame, two partons $`i`$ and $`j`$ are combined into a jet $`J`$ if they obey the cone constraint $`R_{i,J}<R`$, where $$R_{i,J}=\sqrt{(\eta _i-\eta _J)^2+(\varphi _i-\varphi _J)^2}.$$ (10) $`\eta _J`$ and $`\varphi _J`$ are the rapidity and azimuthal angle of the recombined jet. These variables are obtained as the $`p_T`$-weighted averages of the corresponding variables of the recombined partons $`i`$ and $`j`$. The $`p_T`$ of jet $`J`$ is the sum of $`p_{T,i}`$ and $`p_{T,j}`$. We choose $`R=1`$. In some cases, an ambiguity may occur, when two partons $`i`$ and $`j`$ qualify both as two individual jets $`i`$ and $`j`$ and as a combined jet $`J`$. In this case we count only the combined jet $`J`$ to avoid double counting. The rapidity is always defined positive in the direction of the proton remnant momentum. The azimuthal angle is defined with respect to the scattering plane given by the momentum of the beam and the scattered electron. One of the recombined jets may be the photon jet. To qualify a jet as a photon jet we restrict the hadronic energy in this jet by requiring $$z_\gamma =\frac{p_{T,\gamma }}{p_{T,\gamma }+p_{T,had}}=1-ϵ_{had}>z_{cut}.$$ (11) $`p_{T,\gamma }`$ and $`p_{T,had}`$ denote the transverse momenta of the photon and the parton producing hadrons in this jet, respectively. For our predictions we choose $`ϵ_{had}\le ϵ_{had}^0=0.1`$ . Eventually the parameter $`ϵ_{had}^0`$ (or $`z_{cut}`$) must be chosen in accordance with the experimental analysis. Our results depend also on the choice of the quark-to-photon fragmentation function. The factorization scale dependent quark-to-photon FF of $`O(\alpha )`$ is taken from . It is the sum of two contributions, the solution of the evolution equation at this order and an initial FF at some initial scale $`\mu _0`$. The initial FF and the initial scale have been fitted to the ALEPH $`\gamma +1`$-jet data .
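For concreteness, here is a minimal two-parton sketch of the Snowmass recombination (10) and the photon-jet criterion (11); the function names and the toy momenta are ours.

```python
import numpy as np

def snowmass_combine(pt, eta, phi):
    """p_T-weighted recombination of two partons (Snowmass convention)."""
    ptJ = pt[0] + pt[1]
    etaJ = (pt[0] * eta[0] + pt[1] * eta[1]) / ptJ
    phiJ = (pt[0] * phi[0] + pt[1] * phi[1]) / ptJ
    return ptJ, etaJ, phiJ

def cone_distance(eta_i, phi_i, etaJ, phiJ):
    """R_{i,J} of eq. (10)."""
    return np.hypot(eta_i - etaJ, phi_i - phiJ)

def is_one_jet(pt, eta, phi, R=1.0):
    """Two partons form a single jet if each lies within R of the combined axis."""
    ptJ, etaJ, phiJ = snowmass_combine(pt, eta, phi)
    return all(cone_distance(eta[k], phi[k], etaJ, phiJ) < R for k in (0, 1))

def is_photon_jet(pt_gamma, pt_had, eps_had0=0.1):
    """Photon-jet criterion of eq. (11): z_gamma = 1 - eps_had > z_cut."""
    z_gamma = pt_gamma / (pt_gamma + pt_had)
    return z_gamma > 1.0 - eps_had0

# a photon accompanied by a soft, nearly collinear parton: one photon jet
pt, eta, phi = [6.0, 0.5], [0.2, 0.5], [0.1, -0.2]
print(is_one_jet(pt, eta, phi), is_photon_jet(pt[0], pt[1]))  # True True
```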
The factorization scale dependent quark-to-photon fragmentation function also gives a good description of the inclusive photon distribution as measured by OPAL . With these definitions it is clear that in NLO the final state may consist of two or three jets, where one jet is always a photon jet. The three-jet sample, equivalent to $`\gamma +(2+1)`$-jets in the notation of the previous sections, consists of all $`\gamma +(2+1)`$ parton level jets which do not fulfill the cone constraint (10). In Figs. 1 and 2 we show our results for the $`p_T`$ and $`\eta `$ dependence of the cross sections, $`d\sigma /dp_T`$ and $`d\sigma /d\eta `$, concerning the photon and the jet with the largest $`p_T`$. In each figure we have plotted three curves: (i) the cross section in LO (dotted curve), which has only $`\gamma +(1+1)`$-jets and is independent of the jet defining parameters $`R`$ and $`ϵ_{had}^0`$, (ii) the NLO cross section for $`\gamma +(1+1)`$-jets (dashed curve) and (iii) the sum of the NLO cross sections for $`\gamma +(1+1)`$-jets and $`\gamma +(2+1)`$-jets (full curve). Specifically, in Fig. 1a we present $`d\sigma /dp_{T,\gamma }`$, the transverse momentum dependence of the three cross sections (i), (ii) and (iii) for $`p_{T,\gamma }\ge 5GeV`$. All other variables, in particular $`\eta _J`$, $`\eta _\gamma `$ and $`p_{T,J}`$, are integrated over the kinematically allowed ranges. We see that all three cross sections have a similar shape. The sum of the $`\gamma +(1+1)`$- and $`\gamma +(2+1)`$-jets cross sections is only slightly larger than the $`\gamma +(1+1)`$-jets cross section. Neither cross section differs very much from the LO cross section, indicating that the NLO corrections are not very large. Of course, this is a consequence of our choice for the cone radius $`R`$. In Fig. 1b the plot of $`d\sigma /dp_{T,J}`$ for the jet with the largest $`p_T`$ is shown. The qualitative behaviour of the three cross sections (i)-(iii) is similar to that in Fig. 1a. For the $`\eta `$ distributions we integrate over $`p_{T,\gamma }\ge 5GeV`$ and $`p_{T,J}\ge 6GeV`$. The choice of two different values for the minimal $`p_T`$'s of the photon and the jet is necessary to avoid the otherwise present infrared sensitivity of the NLO predictions. This sensitivity is known from similar calculations of dijet cross sections in $`ep`$ collisions and must be avoided. The cross section $`d\sigma /d\eta _\gamma `$ is plotted in Fig. 2a, again for the three cases (i), (ii) and (iii). The shapes of the three curves are similar. Here we have integrated over the full kinematic range of the variable $`\eta _J`$. Figure 2b contains the predictions for $`d\sigma /d\eta _J`$, where $`\eta _J`$ is the rapidity of the jet with the largest $`p_T`$. In Fig. 2b we observe that the full curve, which represents $`d\sigma /d\eta _J`$ for the sum of the two jet cross sections, is shifted somewhat more towards $`\eta _J>0`$ compared to the NLO $`\gamma +(1+1)`$-jets cross section. The LO cross section (dotted curve) peaks more in the backward direction than the other two. Compared to $`d\sigma /d\eta _\gamma `$, shown in Fig. 2a, the $`\eta _J`$ distribution for the jet peaks at somewhat smaller $`\eta _J`$. For completeness we also give the results for the various cross sections in Table 1, separated into the contributions S, R (for incoming quarks or antiquarks and incoming gluons) and F, as described in the previous section, and the fragmentation contribution denoted by D, as defined in Eq. (5).
About $`12\%`$ of the total NLO cross section is due to $`\gamma +(2+1)`$-jet final states. The $`\gamma +(1+1)`$-jets cross section is reduced by about $`9\%`$ by the NLO corrections. ## 4 Concluding Remarks We have presented a NLO calculation for the production of photons accompanied by jets in deep inelastic electron proton scattering, taking into account the contribution from quark-to-photon fragmentation. This improves on a previous calculation, which suffered from the presence of parton-level cutoff parameters. The present consistent treatment allows for a direct comparison of our theoretical predictions with experimental measurements without being sensitive to uncertainties from unphysical cutoff parameters. We expect that the measurement of photon plus jet production at HERA will contribute to testing perturbative QCD. Moreover, our results add another piece to the set of NLO predictions of the standard model needed in searches for new physics. The calculation covers the range of large $`Q^2`$ up to several $`10^3GeV^2`$. At even larger momentum transfers additional contributions from $`Z`$ exchange become as important as the pure $`\gamma `$ exchange to which the present work was restricted. ### Acknowledgements A. G. would like to thank A. Wagner for financial support during her stay at DESY, where part of this work has been carried out.
The phenomenon of frustrated total internal reflection illustrated in Figure 1 has been the subject of a considerable amount of research (see and references therein). The explicit expression for the transit time in frustrated total internal reflection has been obtained by Ghatak and Banerjee . This expression, inferred from the stationary phase analysis, has the form $$\tau =\frac{2}{\left(k_{1z}^2+K^2\right)k_{1z}K}\left(\frac{k_1}{v_1}K^2+\frac{k_2}{v_2}k_{1z}^2\right)$$ (1) for $`Kd\gg 1`$. Here $`k_1=\frac{\omega }{c}n_1,k_2=\frac{\omega }{c}n_2,k_{1x}=k_1\mathrm{sin}\theta _i,k_{1z}=k_1\mathrm{cos}\theta _i,K=\sqrt{k_{1x}^2-k_2^2},`$ $`k_1`$ and $`k_2`$ are the wavenumbers in regions I and II, $`K`$ is the evanescent-wave wavenumber, $`v_1`$ and $`v_2`$ are the group velocities in regions I and II, $`n_1`$ and $`n_2`$ are the refractive indexes, $`\theta _i`$ is the incidence angle, $`\omega `$ is the frequency of the incoming wave, and $`d`$ is the barrier width. (Figure 1) Another expression for the transit time has recently been proposed by Jakiel, Olkhovsky and Recami . This expression, inferred from the analogy between photon and nonrelativistic-particle tunneling, has the form $$\tau =\frac{2}{cK}$$ (2) for $`Kd\gg 1`$. Note that formula (1) is valid for all available values of the parameters ($`n_1,\theta _i`$). The formula (2) is valid only in the vicinity of the singular point ($`n_1=\sqrt{2},\theta _i=\frac{\pi }{4}`$) because $$\frac{2}{cK}=\frac{1}{K}Res_{\left(n_1=\sqrt{2},\theta _i=\frac{\pi }{4}\right)}\left[\frac{2}{\left(k_{1z}^2+K^2\right)k_{1z}K}\left(\frac{k_1}{v_1}K^2+\frac{k_2}{c}k_{1z}^2\right)\right].$$ Therefore the expression (2) is a trivial consequence of formula (1). Superluminal photon tunneling raises the problem of Einstein causality. To elucidate this problem we proceed to the analysis of evanescent-wave propagation. According to , the evanescent-wave wavenumber $`K`$ satisfies the equation $$k_{1x}^2-K^2-k_2^2=0.$$ This equation is invariant under the group $`SO(1,2)`$, which is a subgroup of the group $`SO(2,2)`$. The group $`SO(2,2)`$ differs from the Lorentz group $`SO(3,1)`$. Therefore evanescent-wave propagation is incompatible with Einstein causality. Acknowledgement The author would like to thank V. Agranovich for discussions.
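The claimed relation between (1) and (2) is easy to probe numerically. The sketch below compares the two transit times as the incidence angle moves away from the singular point, assuming non-dispersive media ($`v=c/n`$) and $`n_2=1`$; the frequency and angles are illustrative choices of ours.

```python
import numpy as np

c = 299792458.0            # m/s
omega = 2 * np.pi * 3e14   # optical angular frequency (illustrative)

def tau_eq1(n1, n2, theta):
    """Transit time of eq. (1) (Kd >> 1), with non-dispersive group velocities."""
    k1, k2 = omega * n1 / c, omega * n2 / c
    k1z = k1 * np.cos(theta)
    K = np.sqrt((k1 * np.sin(theta)) ** 2 - k2 ** 2)  # evanescent wavenumber
    v1, v2 = c / n1, c / n2
    return 2.0 / ((k1z**2 + K**2) * k1z * K) * (k1 / v1 * K**2 + k2 / v2 * k1z**2)

def tau_eq2(n1, n2, theta):
    """Transit time of eq. (2)."""
    k1, k2 = omega * n1 / c, omega * n2 / c
    K = np.sqrt((k1 * np.sin(theta)) ** 2 - k2 ** 2)
    return 2.0 / (c * K)

n1, n2 = np.sqrt(2.0), 1.0
for deg in (45.5, 47.0, 55.0, 70.0):
    th = np.radians(deg)
    print(deg, tau_eq1(n1, n2, th) / tau_eq2(n1, n2, th))
```

The ratio approaches 1 just above 45 degrees and grows steadily away from it (about 1.7 at 55 degrees in this setup), consistent with (2) holding only near the singular point.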
# UCLA/99/TEP/3 Columbia/99/Math LAX PAIRS AND SPECTRAL CURVES FOR CALOGERO-MOSER AND SPIN CALOGERO-MOSER SYSTEMS (Research supported in part by the National Science Foundation under grants PHY-95-31023 and DMS-98-00783.) Eric D'Hoker<sup>1</sup> and D.H. Phong<sup>2</sup> <sup>1</sup> Department of Physics University of California, Los Angeles, CA 90024 Institute for Theoretical Physics University of California, Santa Barbara, CA 93106 <sup>2</sup> Department of Mathematics Columbia University, New York, NY 10027 Abstract We summarize recent results on the construction of Lax pairs with spectral parameter for the twisted and untwisted elliptic Calogero-Moser systems associated with arbitrary simple Lie algebras, their scaling limits to Toda systems, and their role in Seiberg-Witten theory. We extend part of this work by presenting a new parametrization for the spectral curves for elliptic spin Calogero-Moser systems associated with $`SL(N)`$. Contribution to the issue of “Regular and Chaotic Dynamics” dedicated to Professor J. Moser on the occasion of his 70-th birthday I. INTRODUCTION Calogero-Moser systems are Hamiltonian systems with an amazingly rich structure. Recently, another remarkable property of these systems has been brought to light, namely their intimate connection with exact solutions of supersymmetric gauge theories. The $`𝒩=2`$ supersymmetric $`SU(N)`$ gauge theory with a hypermultiplet in the adjoint representation was the first gauge theory to be linked with elliptic Calogero-Moser systems. In their 1995 work, based on several consistency checks, Donagi and Witten had proposed that the Seiberg-Witten spectral curves for the low-energy exact solution of this theory were given by the spectral curves of an $`SU(N)`$ Hitchin system. Krichever in unpublished work, and Gorsky and Nekrasov and Martinec , recognized the $`SU(N)`$ Hitchin spectral curves as identical to the spectral curves for elliptic $`SU(N)`$ Calogero-Moser systems. That the $`SU(N)`$ elliptic Calogero-Moser curves do provide the Seiberg-Witten solution of the $`SU(N)`$ gauge theory with matter in the adjoint representation was established by the authors in . In particular, it was shown in that the resulting prepotential has the correct logarithmic singularities predicted by field theoretic perturbative calculations, and that it satisfies a renormalization group equation which determines explicitly instanton contributions to any order. The major problem in Seiberg-Witten theory is to determine the spectral curves, and hence the integrable models, corresponding to an arbitrary asymptotically free or conformally invariant $`𝒩=2`$ supersymmetric gauge theory with gauge algebra $`𝒢`$, and matter hypermultiplets in a representation $`R`$ of $`𝒢`$. For reviews, see e.g. . It has been known now for a long time, thanks to the work of Olshanetsky and Perelomov , that Calogero-Moser systems can be defined for any simple Lie algebra. (Other models associated to Lie algebras include the Toda systems, of which more will be said below, and the Ruijsenaars-Schneider systems, whose role in gauge theories is still obscure.) Olshanetsky and Perelomov also showed that the Calogero-Moser systems for classical Lie algebras were integrable, although the existence of a spectral curve (or Lax pair with spectral parameter), as well as the case of exceptional Lie algebras, remained open.
Thus several immediate questions were: • Does the elliptic Calogero-Moser system for a general Lie algebra $`𝒢`$ admit a Lax pair with spectral parameter? • Does it correspond to the $`𝒩=2`$ supersymmetric gauge theory with gauge algebra $`𝒢`$ and a hypermultiplet in the adjoint representation? • Can this correspondence be verified in the limiting cases when the mass $`m`$ of the hypermultiplet tends to $`0`$ and the gauge theory acquires an $`𝒩=4`$ supersymmetry and becomes exact, and in the limit $`m\rightarrow \mathrm{\infty }`$, when the hypermultiplet decouples and the theory reduces to pure $`𝒩=2`$ Yang-Mills? The answers to these questions turn out to be the following. • The elliptic Calogero-Moser systems defined by an arbitrary simple Lie algebra $`𝒢`$ do admit Lax pairs with spectral parameters. (In the case of $`E_8`$, we need to assume the existence of a cocycle). • The correspondence between elliptic $`𝒢`$ Calogero-Moser systems and $`𝒩=2`$ supersymmetric $`𝒢`$ gauge theories with matter in the adjoint representation is only correct when the Lie algebra $`𝒢`$ is simply-laced. When $`𝒢`$ is not simply-laced, we require new integrable models, namely the twisted elliptic Calogero-Moser systems introduced in . • The new twisted elliptic Calogero-Moser systems also admit a Lax pair with spectral parameter, except possibly in the case $`𝒢=G_2`$ . • In the scaling limit $`m=Mq^{-\frac{1}{2}\delta }\rightarrow \mathrm{\infty }`$, $`M`$ fixed, the twisted (respectively untwisted) elliptic $`𝒢`$ Calogero-Moser systems tend to the Toda system for $`(𝒢^{(1)})^{\vee }`$ (respectively $`𝒢^{(1)}`$) for $`\delta =\frac{1}{h_𝒢^{\vee }}`$ (respectively $`\delta =\frac{1}{h_𝒢}`$). Here $`h_𝒢`$ and $`h_𝒢^{\vee }`$ are the Coxeter and the dual Coxeter numbers of $`𝒢`$ . The main purpose of this paper is to review some of these developments. Although the case of the adjoint representation for arbitrary gauge algebras has now been solved, the correspondence between gauge theories and integrable models is still far from complete. In particular, one can wonder about the eventual role, if any, of other generalizations of elliptic Calogero-Moser systems, such as the Ruijsenaars-Schneider systems or the spin Calogero-Moser systems . Such questions require a better understanding of the spectral curves of these systems, and particularly of their parametrizations. Thus we have taken this opportunity to describe also a new parametrization for the spectral curves of spin Calogero-Moser systems. This new parametrization is suggestive of the order parameters for the $`SU(N)`$ gauge theory, and may be valuable in future developments. See also for recent developments. II. TWISTED AND UNTWISTED CALOGERO-MOSER SYSTEMS The $`SU(N)`$ Elliptic Calogero-Moser System The basic system in this paper is the elliptic Calogero-Moser system defined by the Hamiltonian $$H(x,p)=\frac{1}{2}\sum _{i=1}^Np_i^2-\frac{1}{2}m^2\sum _{i\ne j}\wp (x_i-x_j)$$ (2.1) Here $`m`$ is a mass parameter, and $`\wp (z)`$ is the Weierstrass $`\wp `$-function, defined on a torus $`𝐂/(2\omega _1𝐙+2\omega _2𝐙)`$. As usual, we denote by $`\tau =\omega _2/\omega _1`$ the modulus of the torus, and set $`q=e^{2\pi i\tau }`$. The well-known trigonometric and rational limits with respective potentials $$-\frac{1}{2}m^2\sum _{i\ne j}\frac{1}{4\mathrm{sh}^2\frac{x_i-x_j}{2}}\mathrm{and}-\frac{1}{2}m^2\sum _{i\ne j}\frac{1}{(x_i-x_j)^2}$$ arise in the limits $`\omega _1=i\pi ,\omega _2\rightarrow \mathrm{\infty }`$ and $`\omega _1,\omega _2\rightarrow \mathrm{\infty }`$.
All these systems have been shown to be completely integrable in the sense of Liouville, i.e. they all admit a complete set of integrals of motion which are in involution . Our considerations require however a notion of integrability which is in some sense more stringent, namely a Lax pair $`L(z)`$, $`M(z)`$ with spectral parameter $`z`$. Such a Lax pair was obtained by Krichever in 1980. He showed that the Hamiltonian system (2.1) is equivalent to the Lax equation $`\dot{L}(z)=[L(z),M(z)]`$, with $`L(z)`$ and $`M(z)`$ given by the following $`N\times N`$ matrices $$\begin{array}{ccc}\hfill L_{ij}(z)=& p_i\delta _{ij}-m(1-\delta _{ij})\mathrm{\Phi }(x_i-x_j,z)\hfill & \\ \hfill M_{ij}(z)=& m\delta _{ij}\sum _{k\ne i}\wp (x_i-x_k)-m(1-\delta _{ij})\mathrm{\Phi }^{\prime }(x_i-x_j,z).\hfill & (2.2)\hfill \end{array}$$ The function $`\mathrm{\Phi }(x,z)`$ is defined by $$\mathrm{\Phi }(x,z)=\frac{\sigma (z-x)}{\sigma (z)\sigma (x)}e^{x\zeta (z)},$$ (2.3) where $`\sigma (z)`$, $`\zeta (z)`$ are the usual Weierstrass $`\sigma `$ and $`\zeta `$ functions on the torus $`𝐂/(2\omega _1𝐙+2\omega _2𝐙)`$. The function $`\mathrm{\Phi }(x,z)`$ satisfies the key functional equation $$\mathrm{\Phi }(x,z)\mathrm{\Phi }^{\prime }(y,z)-\mathrm{\Phi }(y,z)\mathrm{\Phi }^{\prime }(x,z)=(\wp (x)-\wp (y))\mathrm{\Phi }(x+y,z).$$ (2.4) It is well-known that functional equations of this form are required for the Hamilton equations of motion to be equivalent to the Lax equation $`\dot{L}(z)=[L(z),M(z)]`$ with a Lax pair of the form (2.2). Often, solutions had been obtained under additional parity assumptions in $`x`$ (and $`y`$), which prevent the existence of a spectral parameter. The solution $`\mathrm{\Phi }(x,z)`$ with spectral parameter $`z`$ is obtained by dropping such parity assumptions for general $`z`$. It is a relatively recent result of Braden and Buchstaber that, conversely, the functional equation (2.4) essentially determines $`\mathrm{\Phi }(x,z)`$. Calogero-Moser Systems defined by Lie Algebras As Olshanetsky and Perelomov realized very early on, the Hamiltonian system (2.1) is only one example of a whole series of Hamiltonian systems associated with each simple Lie algebra. More precisely, given any simple Lie algebra $`𝒢`$, Olshanetsky and Perelomov introduced the system with Hamiltonian $$H(x,p)=\frac{1}{2}\sum _{i=1}^rp_i^2-\frac{1}{2}\sum _{\alpha \in (𝒢)}m_{|\alpha |}^2\wp (\alpha x),$$ (2.5) where $`r`$ is the rank of $`𝒢`$, and $`(𝒢)`$ denotes the set of roots of $`𝒢`$. The $`m_{|\alpha |}`$ are mass parameters. To preserve the invariance of the Hamiltonian (2.5) under the Weyl group, the parameters $`m_{|\alpha |}`$ depend only on the length $`|\alpha |`$ of the root $`\alpha `$, and not on the root $`\alpha `$ itself. In the case of $`A_{N-1}=SU(N)`$, it is common practice to use $`N`$ pairs of dynamical variables $`(x_i,p_i)`$, since the roots of $`A_{N-1}`$ lie conveniently on a hyperplane in $`𝐂^N`$. The dynamics of the system are unaffected if we shift all $`x_i`$ by a constant, and the number of degrees of freedom is effectively $`N-1=r`$. Now the roots of $`SU(N)`$ are given by $`\alpha =e_i-e_j`$, $`1\le i,j\le N`$, $`i\ne j`$. Thus we recognize the original elliptic Calogero-Moser system as the special case of (2.5) corresponding to $`A_{N-1}`$. As in the original case, the elliptic systems (2.5) admit rational and trigonometric limits.
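The isospectral character of the Lax formulation is easy to see numerically in the rational limit. The sketch below is our own illustration, not code from the paper: it uses the standard rational Lax matrix $`L_{jk}=p_j\delta _{jk}+(1-\delta _{jk})ig/(x_j-x_k)`$ (the repulsive sign convention, i.e. imaginary $`m`$ in (2.1)), evolves a three-particle system under the corresponding Hamiltonian flow, and checks that the eigenvalues of $`L`$ do not move.

```python
import numpy as np

def lax_L(x, p, g):
    """Rational Calogero-Moser Lax matrix (Hermitian, so real eigenvalues)."""
    N = len(x)
    L = np.diag(p).astype(complex)
    for j in range(N):
        for k in range(N):
            if j != k:
                L[j, k] = 1j * g / (x[j] - x[k])
    return L

def rhs(x, p, g):
    """Hamilton's equations for H = (1/2) tr L^2 (repulsive 1/r^2 potential)."""
    dp = np.array([2 * g**2 * sum(1.0 / (x[j] - x[k])**3
                                  for k in range(len(x)) if k != j)
                   for j in range(len(x))])
    return p.copy(), dp

def rk4_step(x, p, g, h):
    k1x, k1p = rhs(x, p, g)
    k2x, k2p = rhs(x + h/2*k1x, p + h/2*k1p, g)
    k3x, k3p = rhs(x + h/2*k2x, p + h/2*k2p, g)
    k4x, k4p = rhs(x + h*k3x, p + h*k3p, g)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            p + h/6*(k1p + 2*k2p + 2*k3p + k4p))

x = np.array([-1.0, 0.1, 1.3]); p = np.array([0.4, -0.2, -0.1]); g = 0.5
ev0 = np.sort(np.linalg.eigvalsh(lax_L(x, p, g)))
for _ in range(2000):
    x, p = rk4_step(x, p, g, 1e-3)
ev1 = np.sort(np.linalg.eigvalsh(lax_L(x, p, g)))
print(ev0, ev1)   # isospectral flow: the two spectra agree to integrator accuracy
```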
Olshanetsky and Perelomov succeeded in constructing a Lax pair for all these systems in the case of classical Lie algebras, albeit without spectral parameter. Twisted Calogero-Moser Systems defined by Lie Algebras It turns out that the Hamiltonian systems (2.5) are not the only natural extensions of the basic elliptic Calogero-Moser system. A subtlety arises for simple Lie algebras $`𝒢`$ which are not simply-laced, i.e., algebras which admit roots of uneven length. This is the case for the algebras $`B_n`$, $`C_n`$, $`G_2`$, and $`F_4`$ in Cartan's classification. For these algebras, the following twisted elliptic Calogero-Moser systems were introduced by the authors in $$H_𝒢^{twisted}=\frac{1}{2}\sum _{i=1}^rp_i^2-\frac{1}{2}\sum _{\alpha \in (𝒢)}m_{|\alpha |}^2\wp _{\nu (\alpha )}(\alpha x).$$ (2.6) Here the function $`\nu (\alpha )`$ depends only on the length of the root $`\alpha `$. If $`𝒢`$ is simply-laced, we set $`\nu (\alpha )=1`$ identically. Otherwise, for $`𝒢`$ non simply-laced, we set $`\nu (\alpha )=1`$ when $`\alpha `$ is a long root, $`\nu (\alpha )=2`$ when $`\alpha `$ is a short root and $`𝒢`$ is one of the algebras $`B_n`$, $`C_n`$, or $`F_4`$, and $`\nu (\alpha )=3`$ when $`\alpha `$ is a short root and $`𝒢=G_2`$. The twisted Weierstrass function $`\wp _\nu (z)`$ is defined by $$\wp _\nu (z)=\sum _{\sigma =0}^{\nu -1}\wp (z+2\omega _a\frac{\sigma }{\nu }),$$ (2.7) where $`\omega _a`$ is any of the half-periods $`\omega _1`$, $`\omega _2`$, or $`\omega _1+\omega _2`$. Thus the twisted and untwisted Calogero-Moser systems coincide for $`𝒢`$ simply laced. The original motivation for twisted Calogero-Moser systems was based on their scaling limits (which will be discussed in the next section) . Another motivation based on the symmetries of Dynkin diagrams was proposed subsequently by Bordner, Sasaki, and Takasaki . III. SCALING LIMITS OF CALOGERO-MOSER SYSTEMS Results of Inozemtsev for $`A_n`$ For the standard elliptic Calogero-Moser systems corresponding to $`A_{N-1}`$, Inozemtsev has shown in the 1980's that in the scaling limit $$\begin{array}{ccc}\hfill m& =Mq^{-\frac{1}{2N}},q\rightarrow 0\hfill & (3.1)\hfill \\ \hfill x_i& =X_i-2\omega _2\frac{i}{N},1\le i\le N\hfill & (3.2)\hfill \end{array}$$ where $`M`$ is kept fixed, the elliptic $`A_{N-1}`$ Calogero-Moser Hamiltonian tends to the following Hamiltonian $$H_{Toda}=\frac{1}{2}\sum _{i=1}^Np_i^2-\frac{1}{2}\left(\sum _{i=1}^{N-1}e^{X_{i+1}-X_i}+e^{X_1-X_N}\right)$$ (3.3) The roots $`e_i-e_{i+1}`$, $`1\le i\le N-1`$, and $`e_N-e_1`$ can be recognized as the simple roots of the affine algebra $`A_{N-1}^{(1)}`$. (For basic facts on affine algebras, we refer to ). Thus (3.3) can be recognized as the Hamiltonian of the Toda system defined by $`A_{N-1}^{(1)}`$. Scaling Limits based on the Coxeter Number The key feature of the above scaling limit is the collapse of the sum over the entire root lattice of $`A_{N-1}`$ in the Calogero-Moser Hamiltonian to the sum over only simple roots in the Toda Hamiltonian for the Kac-Moody algebra $`A_{N-1}^{(1)}`$. Our task is to extend this mechanism to general Lie algebras. For this, we consider the following generalization of the preceding scaling limit $$\begin{array}{ccc}\hfill m& =Mq^{-\frac{1}{2}\delta },\hfill & (3.4)\hfill \\ \hfill x& =X-2\omega _2\delta \rho ^{\vee },\hfill & (3.5)\hfill \end{array}$$ Here $`x=(x_i)`$, $`X=(X_i)`$ and $`\rho ^{\vee }`$ are $`r`$-dimensional vectors.
The vector $`x`$ is the dynamical variable of the Calogero-Moser system. The parameters $`\delta `$ and $`\rho ^{\vee }`$ depend on the algebra $`𝒢`$ and are yet to be chosen. As for $`M`$ and $`X`$, they have the same interpretation as earlier, namely as respectively the mass parameter and the dynamical variables of the limiting system. Setting $`\omega _1=i\pi `$, the contribution of each root $`\alpha `$ to the Calogero-Moser potential can be expressed as $$m^2\wp (\alpha x)=\frac{1}{2}M^2\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}\frac{e^{-2\delta \omega _2}}{\mathrm{ch}(\alpha x-2n\omega _2)-1}$$ (3.6) It suffices to consider positive roots $`\alpha `$. We shall also assume that $`0\le \delta \alpha \rho ^{\vee }\le 1`$. The contributions of the $`n=0`$ and $`n=1`$ summands in (3.6) are proportional to $`e^{-2\omega _2(\delta -\delta \alpha \rho ^{\vee })}`$ and $`e^{-2\omega _2(\delta -1+\delta \alpha \rho ^{\vee })}`$ respectively. Thus the existence of a finite scaling limit requires that $$\delta \le \delta \alpha \rho ^{\vee }\le 1-\delta .$$ (3.7) Let $`\alpha _i`$, $`1\le i\le r`$ be a basis of simple roots for $`𝒢`$. If we want all simple roots $`\alpha _i`$ to survive in the limit, we must require that $$\alpha _i\rho ^{\vee }=1,1\le i\le r.$$ (3.8) This condition characterizes the vector $`\rho ^{\vee }`$ as the level vector. Next, the second condition in (3.7) can be rewritten as $`\delta \{1+max_\alpha (\alpha \rho ^{\vee })\}\le 1`$. But $$h_𝒢=1+max_\alpha (\alpha \rho ^{\vee })$$ (3.9) is precisely the Coxeter number of $`𝒢`$, and we must have $`\delta \le \frac{1}{h_𝒢}`$. Thus when $`\delta <\frac{1}{h_𝒢}`$, the contributions of all the roots except for the simple roots of $`𝒢`$ tend to $`0`$. On the other hand, when $`\delta =\frac{1}{h_𝒢}`$, the highest root $`\alpha _0`$ realizing the maximum over $`\alpha `$ in (3.9) survives. Since $`\alpha _0`$ is the additional simple root for the affine Lie algebra $`𝒢^{(1)}`$, we arrive in this way at the following theorem, which was proved in . Theorem 1. Under the limit (3.4-3.5), with $`\delta =\frac{1}{h_𝒢}`$, and $`\rho ^{\vee }`$ given by the level vector, the Hamiltonian of the elliptic Calogero-Moser system for the simple Lie algebra $`𝒢`$ tends to the Hamiltonian of the Toda system for the affine Lie algebra $`𝒢^{(1)}`$. Scaling Limit based on the Dual Coxeter Number If the Seiberg-Witten spectral curve of the $`𝒩=2`$ supersymmetric gauge theory with a hypermultiplet in the adjoint representation is to be realized as the spectral curve for a Calogero-Moser system, the parameter $`m`$ in the Calogero-Moser system should correspond to the mass of the hypermultiplet. In the gauge theory, the dependence of the coupling constant on the mass $`m`$ is given by $$\tau =\frac{i}{2\pi }h_𝒢^{\vee }\mathrm{ln}\frac{m^2}{M^2}\Rightarrow m=Mq^{-\frac{1}{2h_𝒢^{\vee }}}$$ (3.10) where $`h_𝒢^{\vee }`$ is the quadratic Casimir of the Lie algebra $`𝒢`$. This shows that the correct physical limit, expressing the decoupling of the hypermultiplet as it becomes infinitely massive, is given by (3.4), but with $`\delta =\frac{1}{h_𝒢^{\vee }}`$. To establish a closer parallel with our preceding discussion, we recall that the quadratic Casimir $`h_𝒢^{\vee }`$ coincides with the dual Coxeter number of $`𝒢`$, defined by $$h_𝒢^{\vee }=1+max_\alpha (\alpha ^{\vee }\rho ),$$ (3.11) where $`\alpha ^{\vee }=\frac{2\alpha }{\alpha ^2}`$ is the coroot associated to $`\alpha `$, and $`\rho =\frac{1}{2}\sum _{\alpha >0}\alpha `$ is the well-known Weyl vector. For simply laced Lie algebras $`𝒢`$ (ADE algebras), we have $`h_𝒢=h_𝒢^{\vee }`$, and the preceding scaling limits apply.
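Before turning to the non simply-laced case, the collapse mechanism of eqs. (3.6)-(3.9) can be checked by hand for $`A_2`$. The script below is our own bookkeeping sketch: it tabulates, for each positive root, the $`q`$-exponents of the two leading summands in (3.6) under the limit (3.4)-(3.5); an exponent of zero means the corresponding term survives.

```python
import numpy as np

# positive roots of A_2 in the e_i basis and the level vector rho_v
roots = [np.array(v) for v in ([1, -1, 0], [0, 1, -1], [1, 0, -1])]
rho_v = np.array([1, 0, -1])   # alpha_i . rho_v = 1 for both simple roots
h = 3                          # Coxeter number of A_2
delta = 1.0 / h

for a in roots:
    level = a @ rho_v
    e_near = delta * level - delta        # q-exponent of the n = 0 summand
    e_far = 1.0 - delta - delta * level   # q-exponent of the neighbouring cell
    print(a, "level", level, "exponents", round(e_near, 3), round(e_far, 3))
# the simple roots (level 1) survive through the first term; at delta = 1/h
# the highest root (level h-1 = 2) survives through the second term,
# reproducing the affine Toda potential of Theorem 1
```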
However, for non simply-laced algebras ($`B_n`$, $`C_n`$, $`G_2`$, $`F_4`$), we have $`h_𝒢>h_𝒢^{\vee }`$, and our earlier considerations show that the untwisted elliptic Calogero-Moser Hamiltonians do not tend to a finite limit under (3.10), $`q\rightarrow 0`$, $`M`$ kept fixed. This is why the twisted Hamiltonian systems (2.6) have to be introduced. The twisting produces precisely an improvement in the asymptotic behavior of the potential which allows a finite, non-trivial limit. More precisely, we can write $$m^2\wp _\nu (x)=\frac{c_\nu }{2}\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}\frac{m^2}{\mathrm{ch}\nu (x-2n\omega _2)-1},$$ (3.12) where $`c_\nu =\nu ^2`$. Setting $`x=X-2\omega _2\delta ^{\prime }\rho `$, we obtain the following asymptotics $$m^2\wp _\nu (x)=c_\nu M^2\{\begin{array}{cc}e^{2\omega _2(\delta ^{\prime }\alpha ^{\vee }\rho -\delta ^{\prime })-\alpha ^{\vee }X}+e^{2\omega _2(1-\delta ^{\prime }\alpha ^{\vee }\rho -\delta ^{\prime })+\alpha ^{\vee }X},\hfill & \text{if }\alpha \text{ is long;}\hfill \\ e^{2\omega _2(\delta ^{\prime }\alpha ^{\vee }\rho -\delta ^{\prime })-\alpha ^{\vee }X},\hfill & \text{if }\alpha \text{ is short.}\hfill \end{array}$$ (3.13) This leads to the following theorem Theorem 2. Under the limit $`x=X+2\omega _2\frac{1}{h_𝒢^{\vee }}\rho `$, $`m=Mq^{-\frac{1}{2h_𝒢^{\vee }}}`$, with $`\rho `$ the Weyl vector and $`q\rightarrow 0`$, the Hamiltonian of the twisted elliptic Calogero-Moser system for the simple Lie algebra $`𝒢`$ tends to the Hamiltonian of the Toda system for the affine Lie algebra $`(𝒢^{(1)})^{\vee }`$. This suggests that the twisted Calogero-Moser system is the integrable model solving the N=2 supersymmetric gauge theory with gauge algebra $`𝒢`$ since, in view of the work of Martinec and Warner , it is the Toda system for $`(𝒢^{(1)})^{\vee }`$ which solves the corresponding pure Yang-Mills theory. So far we have discussed only the scaling limits of the Hamiltonians. However, similar arguments show that the Lax pairs constructed below also have finite, non-trivial scaling limits whenever this is the case for the Hamiltonians. The spectral parameter $`z`$ should scale as $`e^z=Zq^{\frac{1}{2}}`$, with $`Z`$ fixed. The parameter $`Z`$ can be identified with the loop group parameter for the resulting affine Toda system. IV. LAX PAIRS FOR CALOGERO-MOSER SYSTEMS The General Ansatz Let the rank of $`𝒢`$ be $`n`$, and $`d`$ be its dimension. Let $`\mathrm{\Lambda }`$ be a representation of $`𝒢`$ of dimension $`N`$, of weights $`\lambda _I`$, $`1\le I\le N`$. Let $`u_I\in 𝐂^N`$ be the weights of the fundamental representation of $`GL(N,𝐂)`$. Project orthogonally the $`u_I`$'s onto the $`\lambda _I`$'s as $$su_I=\lambda _I+v_I,\lambda _I\perp v_J.$$ (4.1) It is easily verified that $`s^2`$ is the second Dynkin index. Then $$\alpha _{IJ}=\lambda _I-\lambda _J$$ (4.2) is a weight of $`\mathrm{\Lambda }\otimes \mathrm{\Lambda }^{*}`$ associated to the root $`u_I-u_J`$ of $`GL(N,𝐂)`$. The Lax pairs for both untwisted and twisted Calogero-Moser systems will be of the form $$L=P+X,M=D+Y,$$ (4.3) where the matrices $`P,X,D`$, and $`Y`$ are given by $$X=\sum _{I\ne J}C_{IJ}\mathrm{\Phi }_{IJ}(\alpha _{IJ},z)E_{IJ},Y=\sum _{I\ne J}C_{IJ}\mathrm{\Phi }_{IJ}^{\prime }(\alpha _{IJ},z)E_{IJ}$$ (4.4) and by $$P=ph,D=d(h\oplus \stackrel{~}{h})+\mathrm{\Delta }.$$ (4.5) Here $`h`$ is in a Cartan subalgebra $`_𝒢`$ for $`𝒢`$, $`\stackrel{~}{h}`$ is in the Cartan-Killing orthogonal complement of $`_𝒢`$ inside a Cartan subalgebra $``$ for $`GL(N,𝐂)`$, and $`\mathrm{\Delta }`$ is in the centralizer of $`_𝒢`$ in $`GL(N,𝐂)`$.
The functions $`\mathrm{\Phi }_{IJ}(x,z)`$ and the coefficients $`C_{IJ}`$ are yet to be determined. We begin by stating the necessary and sufficient conditions for the pair $`L(z)`$, $`M(z)`$ of (4.3) to be a Lax pair for the (twisted or untwisted) Calogero-Moser systems. For this, it is convenient to introduce the following notation $$\begin{array}{ccc}\hfill \mathrm{\Phi }_{IJ}& =\mathrm{\Phi }_{IJ}(\alpha _{IJ}x,z)\hfill & \\ \hfill \wp _{IJ}^{\prime }& =\mathrm{\Phi }_{IJ}(\alpha _{IJ}x,z)\mathrm{\Phi }_{JI}^{\prime }(-\alpha _{IJ}x,z)-\mathrm{\Phi }_{IJ}(-\alpha _{IJ}x,z)\mathrm{\Phi }_{JI}^{\prime }(\alpha _{IJ}x,z).\hfill & (4.6)\hfill \end{array}$$ Then the Lax equation $`\dot{L}(z)=[L(z),M(z)]`$ implies the Calogero-Moser system if and only if the following three identities are satisfied $$\sum _{I\ne J}C_{IJ}C_{JI}\wp _{IJ}^{\prime }\alpha _{IJ}=s^2\sum _{\alpha \in (𝒢)}m_{|\alpha |}^2\wp _{\nu (\alpha )}^{\prime }(\alpha x)$$ (4.7) $$\sum _{I\ne J}C_{IJ}C_{JI}\wp _{IJ}^{\prime }(v_I-v_J)=0$$ (4.8) $$\begin{array}{ccc}\hfill \sum _{K\ne I,J}C_{IK}C_{KJ}(\mathrm{\Phi }_{IK}\mathrm{\Phi }_{KJ}^{\prime }-\mathrm{\Phi }_{IK}^{\prime }\mathrm{\Phi }_{KJ})& =sC_{IJ}\mathrm{\Phi }_{IJ}d(v_I-v_J)+\sum _{K\ne I,J}\mathrm{\Delta }_{IK}C_{KJ}\mathrm{\Phi }_{KJ}\hfill & \\ & -\sum _{K\ne I,J}C_{IK}\mathrm{\Phi }_{IK}\mathrm{\Delta }_{KJ}\hfill & (4.9)\hfill \end{array}$$ The following theorem was established in : Theorem 3. A representation $`\mathrm{\Lambda }`$, functions $`\mathrm{\Phi }_{IJ}`$, and coefficients $`C_{IJ}`$ with a spectral parameter $`z`$ satisfying (4.7-4.9) can be found for all twisted and untwisted elliptic Calogero-Moser systems associated with a simple Lie algebra $`𝒢`$, except possibly in the case of twisted $`G_2`$. In the case of $`E_8`$, we have to assume the existence of a $`\pm 1`$ cocycle. Lax Pairs for Untwisted Calogero-Moser Systems We now describe some important features of the Lax pairs we obtain in this manner. • In the case of the untwisted Calogero-Moser systems, we can choose $`\mathrm{\Phi }_{IJ}(x,z)=\mathrm{\Phi }(x,z)`$, $`\wp _{IJ}(x)=\wp (x)`$ for all $`𝒢`$. • $`\mathrm{\Delta }=0`$ for all $`𝒢`$, except for $`E_8`$. • For $`A_n`$, the Lax pair (2.2-2.3) corresponds to the choice of the fundamental representation for $`\mathrm{\Lambda }`$. A different Lax pair can be found by taking $`\mathrm{\Lambda }`$ to be the antisymmetric representation. • For the $`BC_n`$ system, the Lax pair is obtained by imbedding $`B_n`$ in $`GL(N,𝐂)`$ with $`N=2n+1`$. When $`z=\omega _a`$ (half-period), the Lax pair obtained this way reduces to the Lax pair obtained by Olshanetsky and Perelomov . • For the $`B_n`$ and $`D_n`$ systems, additional Lax pairs with spectral parameter can be found by taking $`\mathrm{\Lambda }`$ to be the spinor representation. • For $`G_2`$, a first Lax pair with spectral parameter can be obtained by the above construction with $`\mathrm{\Lambda }`$ chosen to be the $`\mathrm{𝟕}`$ of $`G_2`$. A second Lax pair with spectral parameter can be obtained by restricting the 8 of $`B_3`$ to the $`\mathrm{𝟕}\oplus \mathrm{𝟏}`$ of $`G_2`$. • For $`F_4`$, a Lax pair can be obtained by taking $`\mathrm{\Lambda }`$ to be the $`\mathrm{𝟐𝟔}\oplus \mathrm{𝟏}`$ of $`F_4`$, viewed as the restriction of the 27 of $`E_6`$ to its $`F_4`$ subalgebra. • For $`E_6`$, $`\mathrm{\Lambda }`$ is the 27 representation. • For $`E_7`$, $`\mathrm{\Lambda }`$ is the 56 representation.
• For $`E_8`$, a Lax pair with spectral parameter can be constructed with $`\mathrm{\Lambda }`$ given by the 248 representation, if coefficients $`c_{IJ}=\pm 1`$ exist with the following cocycle conditions $$\begin{array}{ccc}\hfill c(\lambda ,\lambda -\delta )c(\lambda -\delta ,\mu )=& c(\lambda ,\mu +\delta )c(\mu +\delta ,\mu )\hfill & \\ & \mathrm{when}\delta \lambda =-\delta \mu =1,\lambda \mu =0\hfill & \\ \hfill c(\lambda ,\mu )c(\lambda -\delta ,\mu )=& c(\lambda ,\lambda -\delta )\hfill & \\ & \mathrm{when}\delta \lambda =\lambda \mu =1,\delta \mu =0\hfill & \\ \hfill c(\lambda ,\mu )c(\lambda ,\lambda -\mu )=& c(\lambda -\mu ,\mu )\hfill & \\ & \mathrm{when}\lambda \mu =1.\hfill & (4.10)\hfill \end{array}$$ The matrix $`\mathrm{\Delta }`$ in the Lax pair is then the $`8\times 8`$ matrix given by $$\begin{array}{ccc}\hfill \mathrm{\Delta }_{ab}=& \sum _{\delta \beta _a=1,\delta \beta _b=1}\frac{m_2}{2}\left(c(\beta _a,\delta )c(\delta ,\beta _b)+c(\beta _a,\beta _a-\delta )c(\beta _a-\delta ,\beta _b)\right)\wp (\delta x)\hfill & \\ & -\sum _{\delta \beta _a=1,\delta \beta _b=-1}\frac{m_2}{2}\left(c(\beta _a,\delta )c(\delta ,\beta _b)+c(\beta _a,\beta _a-\delta )c(\beta _a-\delta ,\beta _b)\right)\wp (\delta x)\hfill & \\ \hfill \mathrm{\Delta }_{aa}=& \sum _{\beta _a\delta =1}m_2\wp (\delta x)+2m_2\wp (\beta _ax),\hfill & (4.11)\hfill \end{array}$$ where $`\beta _a`$, $`1\le a\le 8`$, is a maximal set of 8 mutually orthogonal roots. We note that recently Lax pairs of root type have been considered , which correspond, in the above Ansatz (4.3-4.5), to $`\mathrm{\Lambda }`$ equal to the adjoint representation of $`𝒢`$ and the coefficients $`C_{IJ}`$ vanishing for $`I`$ or $`J`$ associated with zero weights. This construction yields another Lax pair for the case $`E_8`$. Spectral curves for certain gauge theories with matter in the adjoint representation have also been proposed in and , based on branes and M-theory. Lax Pairs for Twisted Calogero-Moser Systems Recall that the twisted and untwisted Calogero-Moser systems differ only for non-simply laced Lie algebras, namely $`B_n`$, $`C_n`$, $`G_2`$ and $`F_4`$. These are the only algebras we discuss in this paragraph. The construction (4.3-4.9) then gives Lax pairs for all of them, with the possible exception of twisted $`G_2`$. Unlike the case of untwisted Lie algebras, however, the functions $`\mathrm{\Phi }_{IJ}`$ have to be chosen with care, and differ for each algebra.
More specifically, • For $`B_n`$, the Lax pair is of dimension $`N=2n`$, admits two independent couplings $`m_1`$ and $`m_2`$, and $$\mathrm{\Phi }_{IJ}(x,z)=\{\begin{array}{cc}\mathrm{\Phi }(x,z),\hfill & \text{if }I-J\ne 0,\pm n\hfill \\ \mathrm{\Phi }_2(\frac{1}{2}x,z),\hfill & \text{if }I-J=\pm n\hfill \end{array}.$$ (4.12) Here a new function $`\mathrm{\Phi }_2(x,z)`$ is defined by $$\mathrm{\Phi }_2(\frac{1}{2}x,z)=\frac{\mathrm{\Phi }(\frac{1}{2}x,z)\mathrm{\Phi }(\frac{1}{2}x+\omega _1,z)}{\mathrm{\Phi }(\omega _1,z)}$$ (4.13) • For $`C_n`$, the Lax pair is of dimension $`N=2n+2`$, admits one independent coupling $`m_2`$, and $$\mathrm{\Phi }_{IJ}(x,z)=\mathrm{\Phi }_2(x+\omega _{IJ},z),$$ where the $`\omega _{IJ}`$ are given by $$\omega _{IJ}=\{\begin{array}{cc}0,\hfill & \text{if }I-J=1,2,\mathrm{},2n+1\text{;}\hfill \\ \omega _2,\hfill & \text{if }1\le I\le 2n,J=2n+2\text{;}\hfill \\ -\omega _2,\hfill & \text{if }1\le J\le 2n,I=2n+2\text{.}\hfill \end{array}$$ (4.14) • For $`F_4`$, the Lax pair is of dimension $`N=24`$, admits two independent couplings $`m_1`$ and $`m_2`$, and $$\mathrm{\Phi }_{\lambda \mu }(x,z)=\{\begin{array}{cc}\mathrm{\Phi }(x,z),\hfill & \text{if }\lambda \mu =0\text{;}\hfill \\ \mathrm{\Phi }_1(x,z),\hfill & \text{if }\lambda \mu =\frac{1}{2}\text{;}\hfill \\ \mathrm{\Phi }_2(\frac{1}{2}x,z),\hfill & \text{if }\lambda \mu =1\text{.}\hfill \end{array}$$ (4.15) where the function $`\mathrm{\Phi }_1(x,z)`$ is defined by $$\mathrm{\Phi }_1(x,z)=\mathrm{\Phi }(x,z)-e^{\pi i\zeta (z)+\eta _1z}\mathrm{\Phi }(x+\omega _1,z)$$ (4.16) Here it is more convenient to label the entries of the Lax pair directly by the weights $`\lambda =\lambda _I`$ and $`\mu =\lambda _J`$ instead of $`I`$ and $`J`$. • For $`G_2`$, candidate Lax pairs can be defined in the 6 and 8 representations of $`G_2`$, but it is still unknown whether elliptic functions $`\mathrm{\Phi }_{IJ}(x,z)`$ exist which satisfy the required identities. V. CALOGERO-MOSER AND SPIN CALOGERO-MOSER SPECTRAL CURVES A Lax pair $`L(z),M(z)`$ with spectral parameter gives rise to a spectral curve $`\mathrm{\Gamma }`$ defined by $$\mathrm{\Gamma }=\{(k,z);R(k,z)\equiv det(kI-L(z))=0\}$$ (5.1) Since the matrix $`L(z)`$ is expressed in terms of the dynamical variables of the Calogero-Moser system, the family of spectral curves $`\mathrm{\Gamma }`$ can be parametrized by constants of motion of the system. However, to make contact with supersymmetric gauge theories, it is important to find parametrizations of the spectral curves in terms of the order parameters of the gauge theory. This problem was solved for the $`A_{N-1}`$ Calogero-Moser systems in . Here we extend the solution given there to the more general class of $`SL(N,𝐂)`$ spin Calogero-Moser systems. The $`SL(N,𝐂)`$ spin Calogero-Moser system introduced in is the system with Hamiltonian $$H=\frac{1}{2}\sum _{i=1}^Np_i^2-\frac{1}{2}m^2\sum _{i\ne j}(b_i^{}a_j)(b_j^{}a_i)V(x_i-x_j)$$ (5.2) The terms $`a_i=(a_i)_\alpha `$, $`b_i=(b_i)^\alpha `$ are respectively $`l`$-dimensional vectors and $`l`$-dimensional covectors, and $`b_i^{}a_j`$ is their scalar product. The system (5.2) admits a Lax pair $`L(z)`$, $`M(z)`$ which is a generalization of (2.2). In particular, $`L(z)`$ is given by $$L_{ij}(z)=p_i\delta _{ij}-m(1-\delta _{ij})f_{ij}\mathrm{\Phi }(x_i-x_j,z)$$ (5.3) with $$f_{ij}=b_i^{}a_j,f_{ii}=m.$$ (5.4) Krichever et al.
Krichever et al. have shown that the corresponding family of spectral curves $`\mathrm{\Gamma }`$ is an $`Nl-\frac{1}{2}l(l-1)`$-dimensional family of Riemann surfaces of genus $`g=Nl+1-\frac{1}{2}l(l+1)`$. The defining equation $`R(k,z)=0`$ can be expressed in the form $$R(k,z)=k^N+\sum _{i=0}^{N-1}r_i(z)k^i,$$ where $`r_i(z)`$, $`0\le i\le N-1`$, is an elliptic function with a pole of order $`N-i`$. Since elliptic functions can be expanded linearly in terms of $`\wp (z)`$ and its derivatives, the family of spectral curves $`\mathrm{\Gamma }`$ can be parametrized by the coefficients of $`r_i(z)`$ in such an expansion. The number of these coefficients exceeds $`Nl-\frac{1}{2}l(l-1)`$, however, and Krichever et al. show that the correct number of parameters can be obtained by imposing linear constraints on the coefficients. We now present a different parametrization of the spectral curves of the spin Calogero-Moser systems, motivated by the order parameters of $`N=2`$ supersymmetric $`SU(N)`$ gauge theories. As in , we introduce the functions $`h_n(z)`$ by $$h_n(z)=\frac{\partial _z^n\theta _1(\frac{z}{2\omega _1}|\tau )}{\theta _1(\frac{z}{2\omega _1}|\tau )},\qquad n\in 𝐍,$$ $`(5.5)`$ and set $$f(k,z)=R(k+mh_1(z),z).$$ $`(5.6)`$

Theorem 4. The function $`f(k,z)`$ can be expressed as $$f(k,z)=\sum _{p=1}^{l}\partial _z^{p-1}\left(\frac{\theta _1(\frac{1}{2\omega _1}(z-m\frac{\partial }{\partial k})|\tau )}{\theta _1(\frac{z}{2\omega _1}|\tau )}H_p(k)\right),$$ $`(5.7)`$ where $`H_p(k)`$ is a polynomial in $`k`$ of degree $`N-p+1`$, for $`1\le p\le l`$. The polynomial $`H_1(k)`$ is monic because $`R(k,z)`$ and $`f(k,z)`$ are. As for the polynomials $`H_p(k)`$ with $`p>1`$, their terms of order $`k^0`$ do not contribute in (5.7) and may be taken to be $`0`$. Thus we note that the total number of parameters for the $`l`$ polynomials $`H_p(k)`$ is $`\sum _{p=1}^l(N-p+1)=lN-\frac{1}{2}l(l-1)`$, which is indeed the dimension of the family of spectral curves $`\mathrm{\Gamma }`$ for the $`SL(N,𝐂)`$ spin Calogero-Moser system.

Proof of Theorem 4. It is easily seen from the transformation properties of $`h_1(z)`$ that the transformation properties for $`f(k,z)`$ are $$f(k,z+2\omega _1)=f(k,z),\qquad f(k,z+2\omega _2)=f(k-\beta m,z),\qquad \beta =\frac{i\pi }{\omega _1}.$$ $`(5.8)`$ Furthermore, the function $`f(k,z)`$ has poles at $`z=0`$, with the residue a polynomial in $`k`$ of degree $`N-p`$ at a pole of order $`p`$. Now the functions $`h_n(z)`$ satisfy the monodromy conditions $$h_n(z+2\omega _1)=h_n(z),\qquad h_n(z+2\omega _2)=\sum _{p=0}^{n}\binom{n}{p}\beta ^{n-p}h_p(z).$$ $`(5.9)`$ It follows that the monodromies for their derivatives are $$\partial _z^sh_n(z+2\omega _1)=\partial _z^sh_n(z),\qquad \partial _z^sh_n(z+2\omega _2)=\sum _{p=1}^{n}\binom{n}{p}\beta ^{n-p}\partial _z^sh_p(z).$$ $`(5.10)`$ (The $`p=0`$ term in the second identity does not contribute since $`h_0(z)=1`$.) Note also that $`\partial _zh_1(z)=\partial _z^2\mathrm{log}\theta _1(\frac{z}{2\omega _1}|\tau )`$, which equals $`-\wp (z)`$ up to an additive constant, is doubly periodic. Thus we may set $$f(k,z)=\sum _{p=1}^{l}\sum _{n=0}^{N-p+1}Q_{p,N-p+1-n}(k)\,\partial _z^{p-1}h_n(z).$$ $`(5.11)`$ Next, we translate the monodromy transformations for $`f(k,z)`$ in terms of the polynomials $`Q_{p,N-p+1-n}(k)`$.
We may write $$\begin{array}{cc}\hfill f(k,z)=& \sum _{n=0}^{N}h_n(z)Q_{1,N-n}(k)+\sum _{p=2}^{l}\sum _{n=1}^{N-p+1}\partial _z^{p-1}h_n(z)Q_{p,N-p+1-n}(k),\hfill \\ \hfill f(k,z+2\omega _2)=& \sum _{n=0}^{N}\sum _{s=0}^{n}\binom{n}{s}\beta ^{n-s}h_s(z)Q_{1,N-n}(k)+\sum _{p=2}^{l}\sum _{n=1}^{N-p+1}\sum _{s=1}^{n}\binom{n}{s}\beta ^{n-s}\partial _z^{p-1}h_s(z)Q_{p,N-p+1-n}(k).\hfill \end{array}$$ $`(5.12)`$ But the functions $`h_n(z)`$, $`0\le n\le N`$, and $`\partial _z^{p-1}h_n(z)`$, $`1\le n\le N`$, $`2\le p\le l`$, are linearly independent. Thus we may equate coefficients and obtain for $`Q_{1,N-s}(k)`$ the relation $$Q_{1,N-s}(k-\beta m)=\sum _{n=0}^{N}\binom{n}{s}\beta ^{n-s}Q_{1,N-n}(k).$$ $`(5.13)`$ Changing $`N-s\to p`$ and $`N-n\to n`$, this can be rewritten as $$Q_{1,p}(k-\beta m)=\sum _{n=0}^{N}\binom{N-n}{p-n}\beta ^{p-n}Q_{1,n}(k).$$ $`(5.14)`$ This is a relation of the form studied in , Eq. (3.9). We recall briefly the argument: equation (5.14) is equivalent to the equation $`H(t+\beta ,k+\beta m)=H(t,k)`$, where $`H(t,k)=\sum _{p=0}^{N}t^{N-p}Q_{1,p}(k)`$ is the generating function. Since $`H(t,k)`$ is a polynomial in both $`t`$ and $`k`$, this means that $`H(t,k)=H(0,k-tm)`$ depends only on $`k-tm`$. Setting $`H_1(k)=H(0,k)`$, it follows easily that $$Q_{1,N-n}(k)=\frac{(-m)^n}{n!}H_1^{(n)}(k).$$ $`(5.15)`$ Next, we solve for the higher order terms $`Q_{p,N-p+1-s}(k)`$. They satisfy $$Q_{p,N-p+1-s}(k-\beta m)=\sum _{n=1}^{N-p+1}\binom{n}{s}\beta ^{n-s}Q_{p,N-p+1-n}(k).$$ $`(5.16)`$ This is again a relation of the form (5.14), with $`N`$ replaced by $`N-p+1`$. Thus there is again a polynomial $`H_p(k)`$, of degree $`N-p+1`$, so that $$Q_{p,N-p+1-s}(k)=\frac{(-m)^s}{s!}H_p^{(s)}(k).$$ $`(5.17)`$ Substituting in (5.11), and noting that $$\frac{\theta _1(\frac{1}{2\omega _1}(z-m\frac{\partial }{\partial k})|\tau )}{\theta _1(\frac{z}{2\omega _1}|\tau )}=\sum _{n=0}^{\mathrm{\infty }}h_n(z)\frac{(-m)^n}{n!}\left(\frac{\partial }{\partial k}\right)^n,$$ $`(5.18)`$ we obtain the desired expression (5.7). Evidently, the coefficients of the polynomials $`H_p(k)`$ (or equivalently, their zeroes) are integrals of motion of the spin Calogero-Moser system. It would be valuable to express them directly in terms of the dynamical variables $`(p_i,x_i)`$ of the system. For the $`SU(N)`$ Calogero-Moser system, this problem was solved in . Finally, we would like to note also that in the simpler case of the $`SU(N)`$ Calogero-Moser system, an alternative derivation of the parametrization in is now available . It would be interesting to explore generalizations of this new derivation as well.
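The closed form (5.15) and the monodromy relation (5.13) can be checked against one another symbolically. The following sketch (sympy; my own illustration, not part of the original text) verifies their consistency for $`N=3`$ and a generic monic $`H_1`$:

```python
import sympy as sp

k, beta, m = sp.symbols('k beta m')
N = 3
a = sp.symbols('a0:3')
H1 = k**N + sum(a[i] * k**i for i in range(N))   # generic monic H_1 of degree N

# Eq. (5.15): Q_{1,N-n}(k) = (-m)^n / n! * H_1^(n)(k)
Q = [(-m)**n / sp.factorial(n) * sp.diff(H1, k, n) for n in range(N + 1)]

# Eq. (5.13): Q_{1,N-s}(k - beta*m) = sum_n binom(n,s) beta^(n-s) Q_{1,N-n}(k)
for s in range(N + 1):
    lhs = Q[s].subs(k, k - beta * m)
    rhs = sum(sp.binomial(n, s) * beta**(n - s) * Q[n] for n in range(s, N + 1))
    assert sp.expand(lhs - rhs) == 0
print("Eqs. (5.13) and (5.15) are consistent for N = 3")
```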
REFERENCES

Donagi, R. and E. Witten, “Supersymmetric Yang-Mills and integrable systems”, Nucl. Phys. B 460 (1996) 288-334, hep-th/9510101.
Gorsky, A. and N. Nekrasov, “Elliptic Calogero-Moser systems from two-dimensional current algebra”, hep-th/9401021; N. Nekrasov, “Holomorphic bundles and many-body systems”, Comm. Math. Phys. 180 (1996) 587; Martinec, E., “Integrable structures in supersymmetric gauge and string theory”, hep-th/9510204.
D’Hoker, E. and D.H. Phong, “Calogero-Moser systems in $`SU(N)`$ Seiberg-Witten theory”, Nucl. Phys. B 513 (1998) 405-444, hep-th/9709053.
Krichever, I.M. and D.H. Phong, “Symplectic forms in the theory of solitons”, hep-th/9708170, to appear in Surveys in Differential Geometry, Vol. III.
Lerche, W., “Introduction to Seiberg-Witten theory and its stringy origins”, Proceedings of the Spring School and Workshop on String Theory, ICTP, Trieste (1996), hep-th/9611190, Nucl. Phys. Proc. Suppl. B 55 (1997) 83.
Marshakov, A., “On integrable systems and supersymmetric gauge theories”, Theor. Math. Phys. 112 (1997) 791-826, hep-th/9702083.
Marshakov, A., A. Mironov, and A. Morozov, “WDVV-like equations in N=2 SUSY Yang-Mills theory”, Phys. Lett. B 389 (1996) 43, hep-th/9607109; “More evidence for the WDVV equations in N=2 SUSY Yang-Mills theory”, hep-th/9701123.
Olshanetsky, M.A. and A.M. Perelomov, “Completely integrable Hamiltonian systems connected with semisimple Lie algebras”, Inventiones Math. 37 (1976) 93-108.
Olshanetsky, M.A. and A.M. Perelomov, “Classical integrable finite-dimensional systems related to Lie algebras”, Phys. Rep. 71 C (1981) 313-400.
D’Hoker, E. and D.H. Phong, “Calogero-Moser Lax pairs with spectral parameter for general Lie algebras”, Nucl. Phys. B 530 (1998) 537-610, hep-th/9804124.
D’Hoker, E. and D.H. Phong, “Calogero-Moser and Toda systems for twisted and untwisted affine Lie algebras”, Nucl. Phys. B 530 (1998) 611-640, hep-th/9804125.
D’Hoker, E. and D.H. Phong, “Spectral curves for super Yang-Mills with adjoint hypermultiplet for general Lie algebras”, Nucl. Phys. B 534 (1998) 697-719, hep-th/9804126.
Krichever, I.M. and A. Zabrodin, “Spin generalization of the Ruijsenaars-Schneider model, non-abelian 2D Toda chain, and representations of the Sklyanin algebra”, hep-th/9505039.
Ruijsenaars, S.N.M. and H. Schneider, “A new class of integrable systems and its relation to soliton equations”, Ann. Phys. (NY) 170 (1986) 370-405; S.N.M. Ruijsenaars, Comm. Math. Phys. 110 (1987) 191; see also A. Gorsky and N. Nekrasov, “Relativistic Calogero-Moser Model as a gauged WZW model”, Nucl. Phys. B 436 (1995) 582, hep-th/9401017; H.W. Braden and R. Sasaki, Prog. Theor. Phys. 97 (1997) 1003.
Krichever, I.M., O. Babelon, E. Billey, and M. Talon, “Spin generalization of the Calogero-Moser system and the matrix KP equation”, Amer. Math. Soc. Transl. 170 (1995) 83-119; J.A. Minahan and A.P. Polychronakos, “Integrable systems for particles with internal degrees of freedom”, Phys. Lett. B 302 (1993) 265; “Interacting Fermion Systems from Two Dimensional QCD”, Phys. Lett. B 326 (1994) 288, hep-th/9309044; A.P. Polychronakos, “Exchange Operator Formalism for Integrable Systems of Particles”, Phys. Rev. Lett. 69 (1992) 703, hep-th/9202057; “Generalized Statistics in one dimension”, hep-th/9902157.
Calogero, F., “Exactly solvable one-dimensional many-body problems”, Lett. Nuovo Cim. 13 (1975) 411-416.
Moser, J., “Three integrable Hamiltonian systems connected with isospectral deformations”, Advances Math. 16 (1975) 197.
Krichever, I.M., “Elliptic solutions of the Kadomtsev-Petviashvili equation and integrable systems of particles”, Funct. Anal. Appl. 14 (1980) 282-290.
Braden, H.W. and V.M. Buchstaber, “The general analytic solution of a functional equation of addition type”, SIAM J. Math. Anal. 28 (1997) 903-923.
Bordner, A., R. Sasaki, and K. Takasaki, “Calogero-Moser systems II: symmetries and foldings”, hep-th/9809068; A. Bordner and R. Sasaki, “Calogero-Moser systems III: Elliptic Potentials and Twisting”, hep-th/9812232.
Inozemtsev, I., “Lax representation with spectral parameter on a torus for integrable particle systems”, Lett. Math. Phys. 17 (1989) 11-17.
Inozemtsev, I., “The finite Toda lattices”, Comm. Math. Phys. 121 (1989) 628-638.
Goddard, P. and D. Olive, “Kac-Moody and Virasoro algebras in relation to quantum physics”, International J. Mod. Phys. A, Vol. I (1986) 303-414.
Martinec, E. and N. Warner, “Integrable systems and supersymmetric gauge theories”, Nucl. Phys. B 459 (1996) 97-112, hep-th/9509161.
Bordner, A., E. Corrigan, and R. Sasaki, “Calogero-Moser systems: a new formulation”, hep-th/9805106.
Uranga, A.M., “Towards mass deformed N=4 $`SO(N)`$ and $`Sp(K)`$ gauge theories from brane configurations”, Nucl. Phys. B 526 (1998) 241-277, hep-th/9803054.
Yokono, T., “Orientifold four plane in brane configurations and N=4 $`USp(2N)`$ and $`SO(2N)`$ theory”, Nucl. Phys. B 532 (1998) 210-226, hep-th/9803123.
D’Hoker, E. and D.H. Phong, “Order parameters, free fermions, and conservation laws for Calogero-Moser systems”, hep-th/9808156, to appear in Asian J. Math.
Vaninsky, K., “On explicit parametrization of spectral curves for Moser-Calogero particles and its applications”, December 1998 preprint.
Nekrasov, N., Nucl. Phys. B 531 (1998) 323; H.W. Braden, A. Marshakov, A. Mironov and A. Morozov, “The Ruijsenaars-Schneider Model in the Context of Seiberg-Witten Theory”, hep-th/9902205.
# Particle Physics from Stars

## 1 INTRODUCTION

Astrophysical and cosmological arguments and observations have become part of the main-stream methodology to obtain empirical information on existing or hypothetical elementary particles and their interactions. The “heavenly laboratories” are complementary to accelerator and non-accelerator experiments, notably at the “low-energy frontier” of particle physics, which includes the physics of neutrinos and other weakly interacting low-mass particles such as the hypothetical axions, novel long-range forces, and so forth. The present review is dedicated to stars as particle-physics laboratories, or more precisely, to what can be learned about weakly interacting low-mass particles from the observed properties of stars. The prime argument is that a hot and dense stellar plasma emits low-mass weakly interacting particles in great abundance. They subsequently escape from the stellar interior directly, without further interactions, and thus provide a local energy sink for the stellar medium. The astronomically observable impact of this phenomenon provides some of the most powerful limits on the properties of neutrinos, axions, and the like. Once the particles have escaped they can decay on their long way to Earth, allowing one to derive interesting limits on radiative decay channels from the absence of unexpected x- or $`\gamma `$-ray fluxes from the Sun or other stars. Finally, the weakly interacting particles can be directly detected at Earth, thus far only the neutrinos from the Sun and supernova (SN) 1987A, allowing one to extract important information on their properties. The material covered here has been reviewed in 1990, with a focus on axion limits, by Turner and by Raffelt , and a very brief “Mini-Review” was included in the 1998 edition of the Review of Particle Physics . My 1996 book Stars as Laboratories for Fundamental Physics treats these topics in much greater detail than is possible here. The present chapter is intended as a compact, up-to-date, and easily accessible source for the most important results and methods. The subject of “Particle Physics from Stars” is broader than both my expertise and the space available here. I will not touch on the solar neutrino problem and its oscillation interpretation—this is a topic unto itself and has been extensively reviewed by other authors, for example . The high densities encountered in neutron stars make them ideal for studies and speculations concerning novel phases of nuclear matter (e.g. meson condensates or quark matter), an area covered by two recent books . Quark stars are also the subject of an older review and are covered in the proceedings of two topical conferences . Certain grand unified theories predict the existence of primordial magnetic monopoles. They would get trapped in stars and then catalyze the decay of nucleons by the Rubakov-Callan effect. The ensuing anomalous energy release is constrained by the properties of stars, in particular neutron stars and white dwarfs, a topic that was reviewed a long time ago . It was re-examined, and the limits improved, in the wake of the discovery of the faintest white dwarf ever detected, which puts restrictive limits on an anomalous internal heat source . Weakly interacting massive particles (WIMPs), notably in the guise of the supersymmetric neutralinos, are prime candidates for the cosmic dark matter. Some of them would get trapped in stars, annihilate with each other, and produce a secondary flux of high-energy neutrinos.
The search for such fluxes from the Sun and the center of the Earth by present-day and future neutrino telescopes is the “indirect method” to detect galactic particle dark matter, an approach which is competitive with direct laboratory searches—see for a review. Returning to the topics which are covered here, Sections 2–4 are devoted to a discussion of the main stellar objects that have been used to constrain low-mass particles, viz. the Sun, globular-cluster stars, compact stars, and SN 1987A. In Sections 5–7 the main constraints on neutrinos, axions, and novel long-range forces are summarized. Section 8 is given over to brief concluding remarks.

## 2 THE SUN

### 2.1 Basic Energy-Loss Argument

The Sun is the best-known star and thus a natural starting point for our survey of astrophysical particle laboratories. It is powered by hydrogen burning, which amounts to the net reaction $`4p+2e^{-}\to {}^{4}\mathrm{He}+2\nu _e+26.73\mathrm{MeV}`$, giving rise to a measured $`\nu _e`$ flux which now provides one of the most convincing indications for neutrino oscillations . Instead of neutrinos from nuclear processes we focus here on particle fluxes which are produced in thermal plasma reactions. The photo-neutrino process $`\gamma +e^{-}\to e^{-}+\nu \overline{\nu }`$ is a case in point, as is the production of gravitons from electron bremsstrahlung. The solar energy loss from such standard processes is small, but it may be large for new particles. To be specific we consider axions (Sec. 6), which arise in a variety of reactions, and in particular by the Primakoff process in which thermal photons mutate into axions in the electric field of the medium’s charged particles (Fig. 1). In Sec. 6.2.1 we will discuss direct search experiments for solar axions, while here we focus on what is the main topic of this review, the backreaction of a new energy loss on stars. The Sun is a normal star which supports itself against gravity by thermal pressure, as opposed to degenerate stars like white dwarfs which are supported by electron degeneracy pressure. If one pictures the Sun as a self-gravitating monatomic gas in hydrostatic equilibrium, the “atoms” obey the virial theorem $`E_{\mathrm{kin}}=-\frac{1}{2}E_{\mathrm{grav}}`$. The most important consequence of this relationship is that extracting energy from such a system, i.e. reducing the total energy $`E_{\mathrm{kin}}+E_{\mathrm{grav}}`$, leads to contraction and to an increase of $`E_{\mathrm{kin}}`$. Therefore, all else being equal, axion losses lead to contraction and heating. The nuclear energy generation rate scales with a high power of the temperature. Therefore, the heating implied by the new energy loss causes increased nuclear burning—the star finds a new equilibrium configuration where the new losses are compensated by an increased rate of energy generation. The main lesson is that the new energy loss does not “cool” the star; it leads to heating and an increased consumption of nuclear fuel. The Sun, where energy is transported from the central nuclear furnace by radiation, actually overcompensates the losses and brightens, while it would dim if the energy transfer were by convection. Either behavior is understood by a powerful “homology argument” where the nonlinear interplay of the equations of stellar structure is represented in a simple analytic fashion .
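To make the virial bookkeeping behind this argument explicit (a minimal restatement of the relations just quoted, not new input): for the monatomic gas one has $$E_{\mathrm{tot}}=E_{\mathrm{kin}}+E_{\mathrm{grav}}=-E_{\mathrm{kin}},$$ so a new loss $`\mathrm{\Delta }E_{\mathrm{tot}}<0`$ forces $`\mathrm{\Delta }E_{\mathrm{kin}}=-\mathrm{\Delta }E_{\mathrm{tot}}>0`$: the star contracts and heats. Because the nuclear generation rate rises steeply with temperature, the burning rate then increases until it balances the new loss.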
The solar luminosity is well measured, yet this brightening effect is not observable because all else need not be equal. The present-day luminosity of the Sun depends on its unknown initial helium mass fraction $`Y`$; in a solar model $`Y`$ has to be adjusted such that $`L_\odot =3.85\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$ is reproduced after $`4.6\times 10^9`$ years of nuclear burning. For solar models with axion losses the required presolar helium abundance $`Y`$ as a function of the axion-photon coupling constant $`g_{a\gamma }`$ (in the following often expressed as $`g_{10}=g_{a\gamma }/10^{-10}\mathrm{GeV}^{-1}`$) is shown in Table 1. The axion luminosity $`L_a`$ is also given as well as the central helium abundance $`Y_c`$, density $`\rho _c`$, and temperature $`T_c`$ of the present-day Sun. Even axion losses as large as $`L_\odot `$ can be accommodated by reducing the presolar helium mass fraction from about 27% to something like 23% . The “standard Sun” has completed about half of its hydrogen-burning phase. Therefore, the anomalous energy losses cannot exceed approximately $`L_\odot `$ or else the Sun could not have reached its observed age. Indeed, for $`g_{10}=30`$ no consistent present-day Sun could be constructed for any value of $`Y`$ . The emission rate of other hypothetical particles would have a different temperature and density dependence than the Primakoff process, yet the general conclusion remains the same: a novel energy loss must not exceed approximately $`L_\odot `$.

### 2.2 Solar Neutrino Measurements

This crude limit is improved by the solar neutrino flux which has been measured in five different observatories with three different spectral response characteristics, i.e. by the absorption on chlorine, gallium, and by the water Cherenkov technique. The axionic solar models produce larger neutrino fluxes; in Table 1 we show the expected detection rates for the Cl, Ga, and H₂O experiments relative to the standard case. For $`g_{10}\lesssim 10`$ one can still find oscillation solutions to the observed $`\nu _e`$ deficit, but larger energy-loss rates appear to be excluded . Once the neutrino oscillation hypothesis has been more firmly established and the mixing parameters are better known, the neutrino measurements may be used to pin down the central solar temperature, allowing one to constrain novel energy losses with greater precision. For now it appears safe to conclude that the Sun does not emit more than a few tenths of $`L_\odot `$ in new forms of radiation.

### 2.3 Helioseismology

Over the past few years the precision measurements of the solar p-mode frequencies have provided a more reliable way to study the solar interior. For example, the convective surface layer is found to reach down to 0.710–$`0.716R_\odot `$ , and the helium content of these layers to exceed 0.238 . Gravitational settling has reduced the surface helium abundance by about 0.03 so that the presolar value must have been at least 0.268, in good agreement with standard solar models. The reduced helium content required of the axionic solar models in Table 1 disagrees significantly with this lower limit for $`g_{10}\gtrsim 10`$. One may also invert the p-mode measurements to construct a “seismic model” of the solar sound-speed profile, e.g. . All modern standard solar models agree well with the seismic model within its uncertainties (shaded band in Fig. 2), which mostly derive from the inversion method itself, not the measurements. The difference between the sound-speed profile of a standard solar model and those of models including axion losses is also shown in Fig. 2.
For $`g_{10}\gtrsim 10`$ the difference is larger than the uncertainties of the seismic model, implying a limit $$g_{a\gamma }\lesssim 10\times 10^{-10}\mathrm{GeV}^{-1}.$$ (1) Other cases may be different in detail, but it appears safe to assume that any new energy-loss channel must not exceed something like 10% of $`L_\odot `$.

### 2.4 “Strongly” Interacting Particles

Thus far we have assumed that the new particles couple so weakly that they escape from the stellar interior without further interactions, in analogy to neutrinos or gravitons. They emerge from the entire stellar volume, i.e. their emission amounts to a local energy sink for the stellar plasma. But what if the particles interact so strongly that their mean free path is less than the solar radius? The impact of such particles on a star compares to that of photons, which are also “trapped” by their “strong” interaction. Their continuous thermal production and re-absorption amounts to the net transfer of energy from regions of higher temperature to cooler ones. In the Sun this radiative form of energy transfer is more important than conduction by electrons or convection, except in the outer layers. A particle which interacts more weakly than photons is more effective because it travels a larger distance before re-absorption—the ability to transfer energy is proportional to the mean free path. The properties of the Sun roughly confirm the standard photon opacities, so that a new particle would have to interact more strongly than photons to be allowed . Therefore, contrary to what is sometimes stated in the literature, a new particle is by no means allowed just because its mean free path is less than the stellar dimensions. The impact of a new particle is maximal when its mean free path is of order the stellar radius. Of course, usually one is interested in very weakly interacting particles so that this point is moot.

## 3 LIMITS ON STELLAR ENERGY LOSSES

### 3.1 Globular-Cluster Stars

#### 3.1.1 Evolution of Low-Mass Stars

The discussion in the previous section suggests that the emission of new weakly interacting particles from stars primarily modifies the time scale of evolution. For the Sun this effect is less useful to constrain particle emission than, say, the modified p-mode frequencies or the direct measurement of the neutrino fluxes. However, the observed properties of other stars provide far more restrictive limits on their evolutionary time scales so that anomalous modes of energy loss can be far more tightly constrained. We begin with globular-cluster stars which, together with SN 1987A, are the most successful example of astronomical observations that provide nontrivial limits on the properties of elementary particles. Our galaxy has about 150 globular clusters such as M3 (Fig. 3) which are gravitationally bound systems of up to a million stars. In Fig. 4 the stars of the cluster M3 are arranged according to their color or surface temperature (horizontal axis) and brightness (vertical axis) in the usual way, leading to a characteristic pattern which allows for rather precise tests of the theory of stellar evolution, and notably for quantitative measurements of certain evolutionary time scales. Globular clusters are the oldest objects in the galaxy and thus almost as old as the universe. The stars in a given cluster all formed at about the same time with essentially the same chemical composition, differing primarily in their mass.
Because more massive stars evolve faster, present-day globular-cluster stars are somewhat below $`1\mathcal{M}_\odot `$ so that we are concerned with low-mass stars ($`\lesssim 2\mathcal{M}_\odot `$). (The letter $`\mathcal{M}`$ denotes stellar masses, with $`1\mathcal{M}_\odot =2\times 10^{33}\mathrm{g}`$ the solar mass. The letter $`M`$ is traditionally reserved for the absolute stellar brightness in magnitudes, mag. The total or bolometric brightness is defined as $`M_{\mathrm{bol}}=4.74-2.5\mathrm{log}_{10}(L/L_\odot )`$, with the solar luminosity $`L_\odot =3.85\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$.) Textbook expositions of stellar structure and evolution are . Stars begin their life on the main sequence (MS) where they burn hydrogen in their center. Different locations on the MS in a color-magnitude diagram like Fig. 4 correspond to different masses, with more massive stars shining more brightly. When central hydrogen is exhausted the star develops a degenerate helium core, with hydrogen burning in a shell. Curiously, the stellar envelope expands, leading to a large surface area and thus a low surface temperature (red color)—they become “red giants.” The luminosity is governed by the gravitational potential at the edge of the growing helium core so that these stars become ever brighter: they ascend the red-giant branch (RGB). The higher a star on the RGB, the more massive and compact its helium core. The core grows until about $`0.5\mathcal{M}_\odot `$ when it has become dense and hot enough to ignite helium. The ensuing core expansion reduces the gravitational potential at its edge and thus lowers the energy production rate in the hydrogen shell source, dimming these stars. Helium ignites at a fixed core mass, but the envelope mass differs due to varying rates of mass loss on the RGB, leading to different surface areas and thus surface temperatures. These stars thus occupy the horizontal branch (HB) in the color-magnitude diagram. In Fig. 4 the HB turns down on the left (blue color) where much of the luminosity falls outside the V filter; in terms of the total or “bolometric” brightness the HB is truly horizontal. Finally, when helium is exhausted, a degenerate carbon-oxygen core develops, leading to a second ascent on what is called the asymptotic giant branch (AGB). These low-mass stars cannot ignite their carbon-oxygen core—they become white dwarfs after shedding most of their envelope. The advanced evolutionary phases are fast compared with the MS duration, which is about $`10^{10}\mathrm{yr}`$ for stars somewhat below $`1\mathcal{M}_\odot `$. For example, the ascent on the upper RGB and the HB phase each take around $`10^8\mathrm{yr}`$. Therefore, the distributions of stars along the RGB and beyond can be taken as an “isochrone” for the evolution of a single star, i.e. a time-series of snapshots for the evolution of a single star with a fixed initial mass. Put another way, the number distribution of stars along the different branches is a direct measure of the duration of the advanced evolutionary phases. The distribution along the MS is different in that it measures the distribution of initial masses.

#### 3.1.2 Core Mass at Helium Ignition

Anomalous energy losses modify this picture in measurable ways. We first consider an energy-loss mechanism which is more effective in the degenerate core of a red giant before helium ignition than on the HB so that the post-RGB evolution is standard.
Since an RGB-star’s helium core is supported by degeneracy pressure there is no feedback between energy loss and pressure: the core is actually cooled. Helium burning ($`3{}^{4}\mathrm{He}\to {}^{12}\mathrm{C}`$) depends very sensitively on temperature and density so that the cooling delays the ignition of helium, leading to a larger core mass $`\mathcal{M}_c`$, with several observable consequences. First, the brightness of a red giant depends on its core mass so that the RGB would extend to larger luminosities, causing an increased brightness difference $`\mathrm{\Delta }M_{\mathrm{HB}}^{\mathrm{tip}}`$ between the HB and the RGB tip. Second, an increased $`\mathcal{M}_c`$ implies an increased helium-burning core on the HB. For a certain range of colors these stars are pulsationally unstable and are then called RR Lyrae stars. From the measured RR Lyrae luminosity and pulsation period one can infer $`\mathcal{M}_c`$ on the basis of their so-called mass-to-light ratio $`A`$. Third, the increased $`\mathcal{M}_c`$ increases the luminosity of RR Lyrae stars so that absolute determinations of their brightness $`M_{\mathrm{RR}}`$ allow one to constrain the range of possible core masses. Fourth, the number ratio $`R`$ of HB stars vs. RGB stars brighter than the HB is modified. These observables also depend on the measured cluster metallicity as well as the unknown helium content, which is usually expressed in terms of $`Y_{\mathrm{env}}`$, the envelope helium mass fraction. Since globular clusters formed shortly after the big bang, their initial helium content must be close to the primordial value of 22–25%. $`Y_{\mathrm{env}}`$ should be close to this number because the initial mass fraction is somewhat depleted by gravitational settling, and somewhat increased by convective dredge-up of processed, helium-rich material from the inner parts of the star. An estimate of $`\mathcal{M}_c`$ from a global analysis of these observables except $`A`$ was performed in and re-analysed in , $`A`$ was used in , and an independent analysis using all four observables in . In Fig. 5 we show the allowed core mass excess $`\delta \mathcal{M}_c`$ and envelope helium mass fraction $`Y_{\mathrm{env}}`$ from the analyses ; references to the original observations are found in these papers. Figure 5 suggests that, within the given uncertainties, the different observations overlap at the standard core mass ($`\delta \mathcal{M}_c=0`$) and at an envelope helium abundance $`Y_{\mathrm{env}}`$ which is compatible with the primordial helium abundance. Of course, the error bands do not have a simple interpretation because they combine observational and estimated systematic errors, which involve some subjective judgement by the authors. The difference between the two panels of Fig. 5 gives one a sense of how sensitive the conclusions are to these more arbitrary aspects of the analysis. As a nominal limit it appears safe to adopt $`|\delta \mathcal{M}_c|\lesssim 0.025\mathcal{M}_\odot `$ or $`|\delta \mathcal{M}_c|/\mathcal{M}_c\lesssim 5\%`$; how much additional “safety-margin” one wishes to include is a somewhat arbitrary decision which is difficult to make objective in the sense of a statistical confidence level.
In it was shown that this limit can be translated into an approximate limit on the average anomalous energy-loss rate $`ϵ_x`$ of a helium plasma, $$ϵ_x\lesssim 10\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}\text{ at }T\approx 10^8\mathrm{K},\rho \approx 2\times 10^5\mathrm{g}\mathrm{cm}^{-3}.$$ (2) The density represents the approximate average of a red-giant core before helium ignition; the value at its center is about $`10^6\mathrm{g}\mathrm{cm}^{-3}`$. The main standard-model neutrino emission process is plasmon decay $`\gamma \to \nu \overline{\nu }`$ with a core average of about $`4\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}`$. Therefore, Eq. (2) means that a new energy-loss channel must be less effective than a few times the standard neutrino losses.

#### 3.1.3 Helium-Burning Lifetime of Horizontal-Branch Stars

We now turn to an energy-loss mechanism which becomes effective in a nondegenerate medium, i.e. we imagine that the core expansion after helium ignition “switches on” an energy-loss channel that was negligible on the RGB. Therefore, the pre-HB evolution is taken to be standard. As in the case of the Sun (Sec. 2.1) there will be little change in the HB stars’ brightness; rather, they will consume their nuclear fuel faster and thus begin to ascend the AGB sooner. The net observable effect is a reduction of the number of HB relative to RGB stars. From the measured HB/RGB number ratios in 15 globular clusters and with plausible assumptions about the uncertainties of other parameters one concludes that the duration of helium burning agrees with stellar-evolution theory to within about 10% . This implies that the new energy loss of the helium core should not exceed about 10% of its standard energy production rate. Therefore, the new energy-loss rate at average core conditions is constrained by $$ϵ_x\lesssim 10\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}\text{ at }T\approx 0.7\times 10^8\mathrm{K},\rho \approx 0.6\times 10^4\mathrm{g}\mathrm{cm}^{-3}.$$ (3) This limit is slightly more restrictive than the often-quoted “red-giant bound,” corresponding to $`ϵ_x\lesssim 100\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}`$ at $`T=10^8\mathrm{K}`$ and $`\rho =10^4\mathrm{g}\mathrm{cm}^{-3}`$. It was based on the helium-burning lifetime of the “clump giants” in open clusters . They have fewer stars, leading to statistically less significant limits. The “clump giants” are the physical equivalent of HB stars, except that they occupy a common location at the base of the RGB, the “red-giant clump.”

#### 3.1.4 Applications

After the energy-loss argument has been condensed into the simple criteria of Eqs. (2) and (3) it can be applied almost mechanically to a variety of cases. The main task is to identify the dominant emission process for the new particles and to calculate the energy-loss rate $`ϵ_x`$ for a helium plasma at the conditions specified in Eqs. (2) or (3). The most important limits will be discussed in the context of specific particle-physics hypotheses in Secs. 5–7. Here we just mention that these and similar arguments were used to constrain neutrino electromagnetic properties , axions , paraphotons , the photo-production cross section on $`{}^{4}\mathrm{He}`$ of new bosons , the Yukawa couplings of new bosons to baryons or electrons , and supersymmetric particles . One may also calculate numerical evolution sequences including new energy losses . Comparing the results from such studies with what one finds from Eqs. (2) and (3) reveals that, in view of the overall theoretical and observational uncertainties, it is indeed enough to use these simple criteria .
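In the spirit of this “almost mechanical” application, the bookkeeping can be phrased in a few lines of code. The following sketch is my own illustration rather than anything from the original text: the benchmark points and the $`10\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}`$ ceilings are those of Eqs. (2) and (3), while the emissivity passed in at the end is a toy placeholder for a real calculation.

```python
def excluded(eps_x):
    """True if the emissivity eps_x(T [K], rho [g/cm^3]), in erg/g/s,
    violates the globular-cluster criteria of Eq. (2) or Eq. (3)."""
    benchmarks = [
        (1.0e8, 2.0e5),   # red-giant core before helium ignition, Eq. (2)
        (0.7e8, 0.6e4),   # average horizontal-branch core, Eq. (3)
    ]
    ceiling = 10.0        # erg g^-1 s^-1 in both cases
    return any(eps_x(T, rho) > ceiling for T, rho in benchmarks)

# toy emissivity with an arbitrary T^4/rho scaling -- a placeholder only
print(excluded(lambda T, rho: 1.0e-29 * T**4 / rho))
```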
### 3.2 White Dwarfs

White dwarfs are another case where astronomical observations provide useful limits on new stellar energy losses. These compact objects are the remnants of stars with initial masses of up to several $`\mathcal{M}_\odot `$ . For low-mass progenitors the evolution proceeds as described in Sec. 3.1.1. When they ascend the asymptotic giant branch they eventually shed most of their envelope mass. The degenerate carbon-oxygen core, having reached something like $`0.6\mathcal{M}_\odot `$, never ignites. Its subsequent evolution is simply one of cooling, first dominated by neutrino losses throughout its volume, later by surface photon emission. The cooling speed can be observationally inferred from the “luminosity function,” i.e. the white-dwarf number density per brightness interval. As white dwarfs are intrinsically dim they are observed only in the solar neighborhood, out to perhaps $`100\mathrm{pc}`$ ($`1\mathrm{pc}=3.26\mathrm{lyr}`$), which is far less than the thickness of the galactic disk. The measured luminosity function (Fig. 6) reveals that there are few bright white dwarfs and many faint ones. The dotted line represents Mestel’s cooling law , an analytic treatment based on surface photon cooling. The observed luminosity function dips at the bright end, a behavior ascribed to neutrino emission which quickly “switches off” as the star cools. The luminosity function drops sharply at the faint end. Even the oldest white dwarfs have not yet cooled any further, implying that they were born 8–12 Gyr ago, in good agreement with the estimated age of the galaxy. Therefore, a novel cooling agent cannot be much more effective than the surface photon emission. This conclusion also follows from the agreement of the implied birthrate with independent estimates. The shape of the luminosity function can be deformed for an appropriate temperature dependence of the particle emission rate, e.g. enhancing the “neutrino dip” at the bright end. Finally, white dwarfs in a certain range of surface temperatures are pulsationally unstable and are then called ZZ Ceti stars. The pulsation period of a few minutes depends on the luminosity, the period decrease thus on the cooling speed. For G117–B15A the period change was measured , implying a somewhat large cooling rate. While this discrepancy may be worrisome, probably these measurements should be taken as an approximate confirmation of the predicted white-dwarf cooling speed. White dwarfs were used to constrain the axion-electron coupling . It was also noted that the somewhat large period decrease of G117–B15A could be ascribed to axion cooling . Finally, a limit on the neutrino magnetic dipole moment was derived . A detailed review of these limits is provided in ; they are somewhat weaker than those from globular-cluster stars, but on the same general level. Therefore, white-dwarf cooling essentially corroborates some of the globular-cluster limits, but does not improve on them.

### 3.3 Old Neutron Stars

Neutron stars are the compact remnants of stars with initial masses beyond about $`8\mathcal{M}_\odot `$. After their formation in a core-collapse supernova (Sec. 4) they evolve by cooling, a process that is sped up by any new energy-loss channel. Neutron-star cooling can now be observed by satellite-borne x-ray measurements of the thermal surface emission of several old pulsars—a recent review is .
Limits on axions were derived in , on neutrino magnetic dipole moments in . These bounds are much weaker than those from SN 1987A or globular clusters. Turning this around, anomalous cooling effects by particle emission are probably not important in old neutron stars, leaving them as laboratories for many of the other uncertain bits of input physics such as the existence of new phases of nuclear matter . If a neutron star converts into a strange-matter star an axion burst emerges , but for now this effect has not provided new empirical information on axion properties.

## 4 SUPERNOVAE

### 4.1 SN 1987A Neutrino Observations

When the explosion of the star Sanduleak $`-69\mathrm{\hspace{0.17em}202}`$ was detected on 23 February 1987 in the Large Magellanic Cloud, a satellite galaxy of our Milky Way at a distance of about 50 kpc (165,000 lyr), it became possible for the first time to measure the neutrino emission from a nascent neutron star, turning this supernova (SN 1987A) into one of the most important stellar particle-physics laboratories . A type II supernova explosion is physically the implosion of an evolved massive star ($`\gtrsim 8\mathcal{M}_\odot `$) which has become an “onion-skin structure” with several burning shells surrounding a degenerate iron core. It cannot gain further energy by fusion so that it becomes unstable when it has reached the limiting mass (Chandrasekhar mass) of 1–$`2\mathcal{M}_\odot `$ that can be supported by electron degeneracy pressure. The ensuing collapse is intercepted when the equation of state stiffens at around nuclear density ($`3\times 10^{14}\mathrm{g}\mathrm{cm}^{-3}`$), corresponding to a core size of a few tens of kilometers. At temperatures of tens of MeV this compact object is opaque to neutrinos. The gravitational binding energy of the newborn neutron star (“proto neutron star”) of about $`3\times 10^{53}\mathrm{erg}`$ is thus radiated over several seconds from the “neutrino sphere.” Crudely put, the collapsed SN core cools by thermal neutrino emission from its surface. The neutrino signal from SN 1987A (Fig. 7) was observed by the $`\overline{\nu }_ep\to ne^+`$ reaction in several detectors . The number of events, their energies, and the distribution over several seconds correspond well to theoretical expectations and thus have been taken as a confirmation of the standard picture that a compact remnant formed which emitted its energy by quasi-thermal neutrino emission. Detailed statistical analyses of the data were performed in . The signal does show a number of “anomalies.” The average $`\overline{\nu }_e`$ energies inferred from the Irvine-Michigan-Brookhaven (IMB) and Kamiokande observations are quite different . The large time gap of 7.3 s between the first 8 and the last 3 Kamiokande events looks worrisome . The distribution of the final-state positrons from the $`\overline{\nu }_ep\to ne^+`$ capture reaction should be isotropic, but is found to be significantly peaked away from the direction of the SN . In the absence of other explanations, these features have been blamed on statistical fluctuations in the sparse data.

### 4.2 Signal Dispersion

A dispersion of the neutrino burst can be caused by a time-of-flight delay from a nonvanishing neutrino mass .
The arrival time from SN 1987A at a distance $`D`$ would be delayed by $$\mathrm{\Delta }t=2.57\mathrm{s}\left(\frac{D}{50\mathrm{kpc}}\right)\left(\frac{10\mathrm{MeV}}{E_\nu }\right)^2\left(\frac{m_\nu }{10\mathrm{eV}}\right)^2.$$ (4) As the $`\overline{\nu }_e`$ were registered within a few seconds and had energies in the 10 MeV range, $`m_{\nu _e}`$ is limited to around 10 eV. Detailed analyses reveal that the pulse duration is consistently explained by the intrinsic SN cooling time and that $`m_{\nu _e}\lesssim 20\mathrm{eV}`$ is implied as something like a 95% CL limit .
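A minimal numerical sketch of Eq. (4) (my own illustration; the example masses are arbitrary choices):

```python
def delta_t(D_kpc=50.0, E_MeV=10.0, m_eV=10.0):
    """Time-of-flight delay in seconds, evaluating Eq. (4)."""
    return 2.57 * (D_kpc / 50.0) * (10.0 / E_MeV)**2 * (m_eV / 10.0)**2

for m in (5.0, 10.0, 20.0):
    print(f"m_nu = {m:4.1f} eV: delay {delta_t(m_eV=m):5.2f} s at 10 MeV")
```

A 20 eV mass already smears a 10 MeV burst by about 10 s, longer than the observed few-seconds signal, which is the origin of the limit just quoted.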
The apparent absence of a time-of-flight dispersion effect of the $`\overline{\nu }_e`$ burst was also used to constrain a “millicharge” of these particles (they would be deflected in the galactic magnetic field) , a quantum field theory with a fundamental length scale , and deviations from the Lorentzian rule of adding velocities . Limits on new long-range forces acting on the neutrinos seem to be invalidated in the most interesting case of a long-range leptonic force by screening from the cosmic background neutrinos . The SN 1987A observations confirm that the visual SN explosion occurs several hours after the core-collapse and thus after the neutrino burst. Again, there is no apparent time-of-flight delay of the relative arrival times between the neutrino burst and the onset of the optical light curve, allowing one to confirm the equality of the relativistic limiting velocity for these particle types to within $`2\times 10^{-9}`$ . Moreover, the Shapiro time delay of the neutrinos in the gravitational field of the galaxy agrees with that of photons to within about $`4\times 10^{-3}`$ , constraining certain alternative theories of gravity .

### 4.3 Energy-Loss Argument

The late events in Kamiokande and IMB reveal that the signal duration was not anomalously short. Very weakly interacting particles would freely stream from the inner core, removing energy which otherwise would power the late-time neutrino signal. Therefore, its observed duration can be taken as evidence against such novel cooling effects. This argument has been advanced to constrain axion-nucleon couplings , majorons , supersymmetric particles , and graviton emission in quantum-gravity theories with higher dimensions . It has also been used to constrain right-handed neutrinos interacting by a Dirac mass term , mixed with active neutrinos , interacting through right-handed currents , a magnetic dipole moment , or an electric form factor . Many of these results will be reviewed in Secs. 5–7 in the context of specific particle-physics hypotheses. Here we illustrate the general argument with axions (Sec. 6), which are produced by nucleon bremsstrahlung $`NN\to NNa`$ so that the energy-loss rate depends on the axion-nucleon Yukawa coupling $`g_{aN}`$. In Fig. 8 we show the expected neutrino-signal duration as a function of $`g_{aN}`$. With increasing $`g_{aN}`$, corresponding to an increasing energy-loss rate, the signal duration drops sharply. For a sufficiently large $`g_{aN}`$, however, axions no longer escape freely; they are trapped and thermally emitted from the “axion sphere” at unit optical depth. Beyond some coupling strength axions are less important than neutrinos and cannot be excluded. However, particles which are on the “strong interaction” side of this argument need not be allowed. They could be important for the energy transfer during the infall phase and they could produce events in the neutrino detectors. For example, “strongly coupled” axions in a large range of $`g_{aN}`$ are actually excluded because they would have produced too many events by their absorption on $`{}^{16}\mathrm{O}`$. Likewise, particles on the free-streaming side can cause excess events in the neutrino detectors. For example, right-handed neutrinos escaping from the inner core could become “visible” by decaying into left-handed states or by spin-precessing in the galactic magnetic field if they have a dipole moment. Returning to the general argument, one can estimate a limit on the energy-loss rate on the free-streaming side by the simple criterion that the new channel should be less effective than the standard neutrino losses, corresponding to $$ϵ_x\lesssim 10^{19}\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}\text{ at }\rho =3\times 10^{14}\mathrm{g}\mathrm{cm}^{-3},T=30\mathrm{MeV}.$$ (5) The density is the core average, the temperature an average during the first few seconds. Some authors find higher temperatures, but for a conservative limit it is preferable to stick to a value at the lower end of the plausible range. At these conditions the nucleons are partially degenerate while the electrons are highly degenerate. Several detailed numerical studies reveal that this simple criterion corresponds to approximately halving the neutrino signal duration . A simple analytic treatment is far more difficult on the trapping side; see for an example in the context of axions. The SN 1987A energy-loss argument tends to be most powerful at constraining new particle interactions with nucleons. Therefore, it is necessary to calculate the interaction rate with a hot and dense nuclear medium that is dominated by many-body effects. Besides the sparse data, the theoretical treatment of the emission rate is the most problematic aspect of this entire method.
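Rough arithmetic shows why the ceiling of Eq. (5) amounts to “as effective as the standard neutrino losses”: integrated over the collapsed core it drains the $`3\times 10^{53}\mathrm{erg}`$ reservoir of Sec. 4.1 in seconds. A sketch (my own numbers for the core mass, chosen within the 1–$`2\mathcal{M}_\odot `$ Chandrasekhar range quoted above):

```python
M_SUN = 2.0e33                  # g (footnote in Sec. 3.1.1)
eps_x = 1.0e19                  # erg/g/s, the ceiling of Eq. (5)
E_bind = 3.0e53                 # erg, binding energy radiated in neutrinos
for M_core in (1.0, 1.4, 2.0):  # assumed core masses in solar units
    L_x = eps_x * M_core * M_SUN
    print(f"M_core = {M_core} Msun: L_x = {L_x:.1e} erg/s, "
          f"drains {E_bind:.0e} erg in {E_bind / L_x:.0f} s")
```

This is consistent with the numerical finding that saturating Eq. (5) roughly halves the neutrino signal duration.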
### 4.4 Radiative Neutrino Decays

If neutrinos have masses one expects that the heavier ones are unstable and decay radiatively as $`\nu \to \nu ^{}\gamma `$. SN 1987A is thought to have emitted similar fluxes of neutrinos and antineutrinos of all flavors so that one would have expected a burst of $`\gamma `$-rays in coincidence with the neutrinos. No excess counts were observed in the gamma-ray spectrometer (GRS) on the Solar Maximum Mission (SMM) satellite , leading to restrictive limits on neutrino decays . The GRS happened to go into calibration mode about 223 s after the neutrino burst, but for low-mass neutrinos ($`m_\nu \lesssim 40\mathrm{eV}`$) the entire $`\gamma `$-ray burst would have been captured, leading to a radiative decay limit of $$\tau _\gamma /m_\nu \gtrsim 0.8\times 10^{15}\mathrm{s}/\mathrm{eV}.$$ (6) For higher-mass neutrinos the photon burst would have been stretched beyond the GRS window. The first $`\gamma `$-rays from decays near the SN would arrive in coincidence with the $`\overline{\nu }_e`$ burst, but the $`\gamma `$-burst duration would be given by something like Eq. (4). As a further complication, such higher-mass neutrinos violate the cosmological mass limit unless they decay sufficiently fast and thus nonradiatively. Put another way, one must simultaneously worry about radiative and nonradiative decay channels—a detailed discussion is in . Comparable limits in the higher-mass range were also derived from $`\gamma `$-ray data of the Pioneer Venus Orbiter (PVO) . For $`m_\nu \gtrsim 0.1\mathrm{MeV}`$, decay photons still arrive years after SN 1987A. In 1991 the COMPTEL instrument aboard the Compton Gamma Ray Observatory looked at the SN 1987A remnant for about $`0.68\times 10^6\mathrm{s}`$, providing the most restrictive limits in this mass range . For $`m_\nu \gtrsim 2m_e\approx 1.2\mathrm{MeV}`$, which is only possible for $`\nu _\tau `$ with an experimental mass limit of about 18 MeV, the dominant radiative decay channel is $`\nu _\tau \to \nu _ee^+e^{}`$. From SN 1987A one would still expect $`\gamma `$-rays from the bremsstrahlung process $`\nu _\tau \to \nu _ee^+e^{}\gamma `$, leading to interesting limits . The decay positrons from past galactic SNe would be trapped by the galactic magnetic fields and thus linger for up to $`10^5\mathrm{yr}`$. Independently of SN 1987A, measurements of the galactic positron flux thus provide limits on neutrino decays with final-state positrons .

### 4.5 Explosion Energetics

The standard scenario of a type II SN explosion has it that a shock wave forms near the edge of the core when its collapse halts at nuclear density and that this shock wave ejects the mantle of the progenitor star. However, in typical numerical calculations the shock wave stalls so that this “prompt explosion” scenario does not seem to work. In the “delayed explosion” picture the shock wave is revived by neutrino heating, perhaps in conjunction with convection, but even then it appears difficult to obtain a successful or sufficiently energetic explosion. Therefore, one may speculate that nonstandard modes of energy transfer play an important role. An example is Dirac neutrinos with a magnetic dipole moment of order $`10^{-12}\mu _\mathrm{B}`$ (Bohr magnetons). The right-handed (sterile) components would arise in the deep inner core by helicity-flipping collisions and escape. They precess back into interacting states in the large magnetic fields outside the SN core and heat the shock region; their interaction cross section would be relatively large because of their large inner-core energies . Certainly it is important not to deposit too much energy in the mantle and envelope of the star. 99% of the gravitational binding energy of the neutron star goes into neutrinos, about 1% into the kinetic energy of the explosion, and about 0.01% into the optical supernova. Therefore, neutrinos or other particles emitted from the core must not decay radiatively within the progenitor’s envelope radius of about 100 s (in light travel time) or else too much energy lights up .

### 4.6 Neutrino Spectra and Neutrino Oscillations

Neutrino oscillations can have several interesting ramifications in the context of SN physics because the temporal and spectral characteristics of the emission process depend on the neutrino flavor . The simplest case is that of the “prompt $`\nu _e`$ burst” which represents the deleptonization of the outer core layers at about 100 ms after bounce when the shock wave breaks through the edge of the collapsed iron core. This “deleptonization burst” propagates through the mantle and envelope of the progenitor star so that resonant oscillations take place for a large range of mixing parameters between $`\nu _e`$ and some other flavor, notably for most of those values where the MSW effect operates in the Sun . In a water Cherenkov detector this burst is visible as $`\nu _e`$-$`e`$ scattering, which is forward peaked, but one would have expected only a fraction of an event from SN 1987A.
The first event in Kamiokande may be attributed to this signal, but this interpretation is statistically insignificant. During the next few hundred milliseconds the shock wave stalls at a few hundred kilometers above the core and needs rejuvenating. The efficiency of neutrino heating can be increased by resonant flavor oscillations which swap the $`\nu _e`$ flux with, say, the $`\nu _\tau `$ one. Therefore, what passes through the shock wave as a $`\nu _e`$ was born as a $`\nu _\tau `$ at the proto neutron star surface. It has on average higher energies and thus is more effective at transferring energy. In Fig. 9 the shaded range of mixing parameters is where supernovae are helped to explode, assuming a “normal” neutrino mass spectrum with $`m_{\nu _e}<m_{\nu _\tau }`$ . Below the shaded region the resonant oscillations take place beyond the shock wave and thus do not affect the explosion. The logic of this scenario depends on deviations from strictly thermal neutrino emission at some blackbody “neutrino sphere.” The neutrino cross sections are very energy dependent and different for different flavors so that the concept of a neutrino sphere is rather crude—the spectra are neither thermal nor equal for the different flavors . The dominant opacity source for $`\nu _e`$ is the process $`\nu _e+n\to p+e^{}`$, for $`\overline{\nu }_e`$ it is $`\overline{\nu }_e+p\to n+e^+`$, while for $`\nu _{\mu ,\tau }`$ and $`\overline{\nu }_{\mu ,\tau }`$ it is neutral-current scattering on nucleons. Therefore, unit optical depth is at the largest radius (and lowest medium temperature) for $`\nu _e`$, and deepest (highest temperature) for $`\nu _{\mu ,\tau }`$ and $`\overline{\nu }_{\mu ,\tau }`$. In typical calculations one finds a hierarchy $`E_{\nu _e}:E_{\overline{\nu }_e}:E_{\mathrm{others}}\approx \frac{2}{3}:1:\frac{5}{3}`$ with $`E_{\overline{\nu }_e}=14`$–17 MeV . The SN 1987A observations imply a somewhat lower range of $`E_{\overline{\nu }_e}\approx 7`$–14 MeV . It should be noted that, pending a more detailed numerical confirmation , the difference between the $`\overline{\nu }_e`$ and $`\nu _{\mu ,\tau }`$ or $`\overline{\nu }_{\mu ,\tau }`$ average energies appears to be smaller than commonly assumed , but there is no doubt that the $`\nu _e`$ spectrum is softer than the others. Still, the quantitative import of flavor oscillations depends on details of the neutrino spectra formation process in those SN core layers where the diffusion approximation for the neutrino transport is no longer valid, yet neutrinos are still trapped. A few seconds after core bounce the shock wave has long since taken off, leaving behind a relatively dilute “hot bubble” above the neutron-star surface. This region is one suspected site for r-process heavy-element synthesis, which requires a neutron-rich environment . The neutron-to-proton ratio, which is governed by the $`\beta `$ reactions $`\nu _e+n\to p+e^{}`$ and $`\overline{\nu }_e+p\to n+e^+`$, is shifted to a neutron-rich phase if $`E_{\nu _e}<E_{\overline{\nu }_e}`$ as for standard neutrino spectra. Resonant oscillations can again swap the $`\nu _e`$ flux with another one, inverting this hierarchy of energies. In the hatched range of mixing parameters shown in Fig. 9 the r-process would be disturbed . On the other hand, $`\nu _e\to \nu _s`$ oscillations into a sterile neutrino could actually help the r-process by removing some of the neutron-stealing $`\nu _e`$ .
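The average-energy hierarchy quoted above is easy to tabulate; the following lines (my own arithmetic on the numbers in the text) anchor the $`\frac{2}{3}:1:\frac{5}{3}`$ ratios to the calculated $`E_{\overline{\nu }_e}=14`$–17 MeV range:

```python
# E_nu_e : E_anti_nu_e : E_others = 2/3 : 1 : 5/3 (Sec. 4.6)
for E_anti in (14.0, 17.0):   # MeV, calculated range for anti-nu_e
    print(f"E_anti_nu_e = {E_anti:4.1f} MeV -> "
          f"E_nu_e = {2 * E_anti / 3:4.1f} MeV, "
          f"E_others = {5 * E_anti / 3:4.1f} MeV")
```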
A large body of recent literature was devoted to explaining the large kick velocities of the observed radio pulsars as a “neutrino rocket effect.” The required few-percent anisotropy of the SN neutrino emission was attributed to an intricate interplay between the magnetic-field induced neutrino dispersion relation and resonant oscillations . However, due to a conceptual error the effect was vastly overestimated so that the pulsar kicks do not seem to be related to neutrino oscillations in any obvious way. If the mixing angle between $`\nu _e`$ and some other flavor is large, the $`\overline{\nu }_e`$ flux from a SN contains a significant fraction of oscillated states that were born as $`\overline{\nu }_\mu `$ or $`\overline{\nu }_\tau `$ and thus should have higher average energies. The measured SN 1987A event energies are already somewhat low, a discrepancy so strongly exacerbated by oscillations that a large-mixing-angle solution of the solar neutrino deficit poses a problem . This conclusion, however, depends on the standard predictions for the average neutrino energies, which may not hold up to closer scrutiny as mentioned above.

## 5 LIMITS ON NEUTRINO PROPERTIES

### 5.1 Masses and Mixing

Astrophysics and cosmology play a fundamental role for neutrino physics as the properties of stars and the universe at large provide some of the most restrictive limits on nonstandard properties of these elusive particles. Therefore, it behoves us to summarize what the astrophysical arguments introduced in the previous sections teach us about neutrinos. Unfortunately, stars do not tell us very much about neutrino masses, the holy grail of neutrino physics. The current discourse centers on the interpretation of the solar and atmospheric neutrino anomalies and the LSND experiment which all provide very suggestive evidence for neutrino oscillations. Solar neutrinos imply a $`\mathrm{\Delta }m_\nu ^2`$ of about $`10^{-5}\mathrm{eV}^2`$ (MSW solutions) or $`10^{-10}\mathrm{eV}^2`$ (vacuum oscillations), atmospheric neutrinos $`10^{-3}`$–$`10^{-2}\mathrm{eV}^2`$, and the LSND experiment 0.3–$`8\mathrm{eV}^2`$. Taken together, these results require a fourth flavor, a sterile neutrino, which is perhaps the most spectacular implication of these experiments, but of course also the least secure. Core-collapse supernovae appear to be the only case in stellar astrophysics, apart from the solar neutrino flux, where neutrino oscillations can be important. However, Fig. 9 reveals that the experimentally favored mass differences negate a role of neutrino oscillations for the explosion mechanism or r-process nucleosynthesis, except perhaps when sterile neutrinos exist . Oscillations affect the interpretation of the SN 1987A signal and that of a future galactic SN . However, as discussed in Sec. 4.6, the main challenge is to develop a quantitatively more accurate understanding of supernovae as neutrino sources before relying on relatively fine points of the neutrino spectral characteristics to learn about neutrino mixing parameters. Oscillation experiments reveal only mass differences so that one still needs to worry about the absolute neutrino mass scale. The absence of anomalous SN 1987A signal dispersion (Sec. 4.2) gives a limit $$m_{\nu _e}\lesssim 20\mathrm{eV},$$ (7) somewhat weaker than current laboratory bounds.
A high-statistics observation of a galactic SN by a detector like Superkamiokande could improve this limit to about $`3\mathrm{eV}`$ by using the fast rise-time of the neutrino burst as a measure of dispersion effects . If the neutrino mass differences are indeed very small, this limit carries over to the other flavors. One can derive an independent mass limit on $`\nu _\mu `$ and $`\nu _\tau `$ in the range of a few $`10\mathrm{eV}`$ if one identifies a neutral-current signature in a water Cherenkov detector , or if a future neutral-current detector provides an additional measurement . The SN 1987A energy-loss argument (Sec. 4.3) provides a limit on a neutrino Dirac mass of $$m_\nu (\mathrm{Dirac})\lesssim 30\,\mathrm{keV}.$$ (8) It is based on the idea that trapped Dirac neutrinos produce their sterile component with a probability of about $`(m_\nu /2E_\nu )^2`$ in collisions and thus feed energy into an invisible channel. This result was important in the discourse on Simpson’s 17 keV neutrino which is now only of historical interest . ### 5.2 Dipole and Transition Moments #### 5.2.1 Electromagnetic Form Factors Neutrino electromagnetic interactions would provide for a great variety of astrophysical implications. In the vacuum, the most general neutrino interaction with the electromagnetic field is $$\mathcal{L}_{\mathrm{int}}=-F_1\overline{\psi }\gamma _\mu \psi A^\mu -G_1\overline{\psi }\gamma _\mu \gamma _5\psi \,\partial _\nu F^{\mu \nu }-\frac{1}{2}\overline{\psi }\sigma _{\mu \nu }(F_2+G_2\gamma _5)\psi F^{\mu \nu },$$ (9) where $`\psi `$ is the neutrino field, $`A^\mu `$ the electromagnetic vector potential, and $`F^{\mu \nu }`$ the field-strength tensor. The form factors are functions of $`Q^2`$ with $`Q`$ the energy-momentum transfer. In the $`Q^2\rightarrow 0`$ limit $`F_1`$ is the electric charge, $`G_1`$ an anapole moment, $`F_2`$ a magnetic, and $`G_2`$ an electric dipole moment. If neutrinos are electrically strictly neutral, corresponding to $`F_1(0)=0`$, they still have a charge radius, usually defined as $`\langle r^2\rangle =6\,\partial F_1(Q^2)/e\,\partial Q^2|_{Q^2=0}`$. This form factor provides for a contact interaction, not for a long-range force, and as such modifies processes with $`Z^0`$ exchange . As astrophysics provides no precision test for the effective strength of neutral-current interactions, this form factor is best probed in laboratory experiments . Likewise, the anapole interaction vanishes in the $`Q^2\rightarrow 0`$ limit and thus represents a modification to the standard neutral-current interaction, with no apparent astrophysical consequences. The most interesting possibilities are magnetic and electric dipole and transition moments. If the standard model is extended to include neutrino Dirac masses, the magnetic dipole moment is $`\mu _\nu =3.20\times 10^{-19}\mu _\mathrm{B}m_\nu /\mathrm{eV}`$ where $`\mu _\mathrm{B}=e/2m_e`$ is the Bohr magneton . An electric dipole moment $`ϵ_\nu `$ violates CP, and both are forbidden for Majorana neutrinos. Including flavor mixing implies electric and magnetic transition moments for both Dirac and Majorana neutrinos, but they are even smaller due to GIM cancellation. These values are far too small to be of any experimental or astrophysical interest. Significant neutrino electromagnetic form factors require a more radical extension of the standard model, for example the existence of right-handed currents. #### 5.2.2 Plasmon Decay in Stars Dipole or transition moments allow for several interesting processes (Fig. 10). 
For the purpose of deriving limits, the most important case is $`\gamma \rightarrow \nu \overline{\nu }`$ which is kinematically possible in a plasma because the photon acquires a dispersion relation which roughly amounts to an effective mass. Even without anomalous couplings, the plasmon decay proceeds because the charged particles of the medium induce an effective neutrino-photon interaction. Put another way, even standard neutrinos have nonvanishing electromagnetic form factors in a medium . The standard plasma process dominates the neutrino production in white dwarfs or the cores of globular-cluster red giants. The plasma process was first used in to constrain neutrino electromagnetic couplings. Numerical implementations of the nonstandard rates in stellar-evolution calculations are . The helium-ignition argument in globular clusters (Sec. 3.1.2), equivalent to Eq. (2), implies a limit $$\mu _\nu \lesssim 3\times 10^{-12}\mu _\mathrm{B},$$ (10) applicable to magnetic and electric dipole and transition moments for Dirac and Majorana neutrinos. Of course, the final-state neutrinos must be lighter than the photon plasma mass which is around 10 keV for the relevant conditions. The corresponding laboratory limits are much weaker . The most restrictive bound is $`\mu _{\nu _e}<1.8\times 10^{-10}\mu _\mathrm{B}`$ at 90% CL from a measurement of the $`\overline{\nu }_e`$-$`e`$-scattering cross section involving reactor sources. A significant improvement should become possible with the MUNU experiment , but it is unlikely that the globular-cluster limit can be reached anytime soon. #### 5.2.3 Radiative Decay A neutrino mass eigenstate $`\nu _i`$ may decay to another one $`\nu _j`$ by the emission of a photon, where the only contributing form factors are the magnetic and electric transition moments. The inverse radiative lifetime is found to be $$\tau _\gamma ^{-1}=\frac{|\mu _{ij}|^2+|ϵ_{ij}|^2}{8\pi }\left(\frac{m_i^2-m_j^2}{m_i}\right)^3=5.308\,\mathrm{s}^{-1}\left(\frac{\mu _{\mathrm{eff}}}{\mu _\mathrm{B}}\right)^2\left(\frac{m_i^2-m_j^2}{m_i^2}\right)^3\left(\frac{m_i}{\mathrm{eV}}\right)^3,$$ (11) where $`\mu _{ij}`$ and $`ϵ_{ij}`$ are the transition moments while $`|\mu _{\mathrm{eff}}|^2\equiv |\mu _{ij}|^2+|ϵ_{ij}|^2`$. Radiative neutrino decays have been constrained from the absence of decay photons of reactor $`\overline{\nu }_e`$ fluxes , the solar $`\nu _e`$ flux , and the SN 1987A neutrino burst . For $`m_\nu \equiv m_i\gg m_j`$ these limits can be expressed as $$\frac{\mu _{\mathrm{eff}}}{\mu _\mathrm{B}}\lesssim \{\begin{array}{cc}0.9\times 10^{-1}\text{ }(\mathrm{eV}/m_\nu )^2\hfill & \text{Reactor (}\overline{\nu }_e\text{),}\hfill \\ 0.5\times 10^{-5}\text{ }(\mathrm{eV}/m_\nu )^2\hfill & \text{Sun (}\nu _e\text{),}\hfill \\ 1.5\times 10^{-8}\text{ }(\mathrm{eV}/m_\nu )^2\hfill & \text{SN 1987A (all flavors),}\hfill \\ 1.0\times 10^{-11}\text{ }(\mathrm{eV}/m_\nu )^{9/4}\hfill & \text{Cosmic background (all flavors).}\hfill \end{array}$$ (12) In this form the SN 1987A limit applies for $`m_\nu \lesssim 40\,\mathrm{eV}`$ as explained in Sec. 4.4. The decay of cosmic background neutrinos would contribute to the diffuse photon backgrounds, excluding the shaded areas in Fig. 11. They are approximately delineated by the dashed line, corresponding to the bottom line in Eq. (12). More restrictive limits obtain for certain masses above 3 eV from the absence of emission features from several galaxy clusters . 
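For a quick feel of the numbers in Eq. (11), a small numerical sketch (the chosen moment and mass are merely illustrative):

```python
def inverse_radiative_lifetime(mu_eff_muB, m_i_eV, m_j_eV=0.0):
    """Inverse lifetime (1/s) from Eq. (11); mu_eff in Bohr magnetons, masses in eV."""
    phase = ((m_i_eV**2 - m_j_eV**2) / m_i_eV**2) ** 3
    return 5.308 * mu_eff_muB**2 * phase * m_i_eV**3

# Example: a moment at the globular-cluster level of Eq. (10) and a 1 eV neutrino
rate = inverse_radiative_lifetime(3e-12, 1.0)
print(1.0 / rate, "s")   # ~2e22 s, vastly longer than the age of the universe (~4e17 s)
```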
For low-mass neutrinos the $`m_\nu ^3`$ phase-space factor in Eq. (11) is so punishing that the globular-cluster limit is the most restrictive one for $`m_\nu `$ below a few eV. This is precisely the mass range which today appears favored from neutrino oscillation experiments. Turning this around, the globular-cluster limit implies that radiative decays of low-mass neutrinos do not seem to have observable consequences. For masses above about 30 eV one must invoke fast invisible decays in order to avoid a conflict with the cosmological mass limit. In this case radiative decay limits involve the total lifetime as another parameter; the SN 1987A limits have been interpreted in this sense in . #### 5.2.4 Cherenkov Effect Another form of “radiative decay” is the Cherenkov effect $`\nu \rightarrow \nu +\gamma `$, which involves the same initial- and final-state neutrino. This process is kinematically allowed for photons with $`\omega ^2-𝐤^2<0`$, which obtains in certain media or in external magnetic fields. The neutrino may have an anomalous dipole moment, but there is also a standard-model photon coupling induced by the medium or the external field. Thus far it does not look as if the neutrino Cherenkov effect has any strong astrophysical significance (see for a review of the literature). #### 5.2.5 Spin-Flip Scattering The magnetic or electric dipole interaction couples neutrino fields of opposite chirality. In the relativistic limit this implies that a neutrino flips its helicity in an “electromagnetic collision,” which in the Dirac case produces the sterile component. The active states are trapped in a SN core so that spin-flip collisions open an energy-loss channel in the form of sterile states. Conversely, the SN 1987A energy-loss argument (Sec. 4.3) allows one to derive a limit , $$\mu _\nu (\mathrm{Dirac})\lesssim 3\times 10^{-12}\mu _\mathrm{B},$$ (13) for both electric and magnetic dipole and transition moments. It is the same as the globular-cluster limit Eq. (10), which however includes the Majorana case. Spin-flip collisions would also populate the sterile Dirac components in the early universe and thus increase the effective number of thermally excited neutrino degrees of freedom. Full thermal equilibrium is attained for $`\mu _\nu (\mathrm{Dirac})\gtrsim 60\times 10^{-12}\mu _\mathrm{B}`$ . In view of the SN 1987A and globular-cluster limits this result assures us that big-bang nucleosynthesis remains undisturbed. #### 5.2.6 Spin and Spin-Flavor Precession Neutrinos with magnetic or electric dipole moments spin-precess in external magnetic fields . For example, solar neutrinos can precess into sterile and thus undetectable states in the Sun’s magnetic field . The same applies to SN neutrinos in the galactic magnetic field, where an important effect obtains for $`\mu _\nu \gtrsim 10^{-12}\mu _\mathrm{B}`$. Moreover, the high-energy sterile states emitted by spin-flip collisions from the inner SN core could precess back into active ones and cause events with anomalously high energies in SN neutrino detectors, an effect which probably requires $`\mu _\nu (\mathrm{Dirac})\lesssim 10^{-12}\mu _\mathrm{B}`$ from the SN 1987A signal . For the same general $`\mu _\nu `$ magnitude one may expect an anomalous rate of energy transfer to the shock wave in a SN, helping with the explosion (Sec. 4.5). 
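The quoted $`\mu _\nu \gtrsim 10^{-12}\mu _\mathrm{B}`$ threshold for galactic spin precession can be checked at the order-of-magnitude level: for a transverse field, the precession angle in natural units is roughly $`\varphi =2\mu _\nu BL`$. A sketch with illustrative numbers (field strength and path length are assumptions):

```python
# Order-of-magnitude precession angle phi = 2 mu B L (natural units)
# for a neutrino crossing the galactic magnetic field.
MU_B = 2.96e-7        # Bohr magneton e/2m_e in 1/eV (natural units)
TESLA = 195.0         # 1 Tesla expressed in eV^2 (natural units)
METER = 5.07e6        # 1 meter in 1/eV

mu = 1e-12 * MU_B                    # dipole moment at the Eq. (13) level
B = 1e-10 * TESLA                    # ~1 microgauss, typical galactic field
L = 10 * 3.086e19 * METER            # ~10 kpc path length
print(2 * mu * B * L)                # ~18 rad: the precession is indeed sizable
```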
In a medium the refractive energy shift for active neutrinos relative to sterile ones creates a barrier to the spin precession . The mass difference has the same effect if the precession is between different flavors through a transition moment . However, the mass and refractive terms may cancel, leading to resonant spin-flavor oscillations in the spirit of the MSW effect . This mechanism can explain all solar neutrino data , but requires rather large toroidal magnetic fields in the Sun since the neutrino magnetic (transition) moments have to obey the globular-cluster limit of Eq. (10). For Majorana neutrinos, the spin-flavor precession amounts to transitions between neutrinos and antineutrinos so that the observation of antineutrinos from the Sun would be a diagnostic for this effect . Large magnetic fields exist in SN cores so that spin-flavor precession could play an important role there, with possible consequences for the explosion mechanism, r-process nucleosynthesis, or the measurable neutrino signal . The downside of this richness of phenomena is that there are so many unknown parameters (electromagnetic neutrino properties, masses, mixing angles) as well as the unknown magnetic field strength and distribution that it is difficult to come up with reliable limits or requirements on neutrino properties. The SN phenomenon is probably too complicated to serve as a laboratory to pin down electromagnetic neutrino properties, but it clearly is an environment where these properties could have far-reaching consequences. ### 5.3 Millicharged Particles It is conceivable that neutrinos carry small electric charges if charge conservation is not exact or if the families are not sequential . Moreover, new particles with small electric charges are motivated in certain models with a “mirror sector” and a slightly broken mirror symmetry . Therefore, it is interesting to study the experimental, astrophysical, and cosmological bounds on “millicharged” particles . A model-independent $`\nu _e`$ charge limit arises from the absence of dispersion of the SN 1987A neutrino signal in the galactic magnetic field $$e_{\nu _e}\lesssim 3\times 10^{-17}e.$$ (14) If charge conservation holds in neutron decay, $`e_{\nu _e}\lesssim 3\times 10^{-21}e`$ results, based on a limit for the neutron charge of $`e_n=(-0.4\pm 1.1)\times 10^{-21}e`$ and on the neutrality of matter which was found to be $`e_p+e_e=(0.8\pm 0.8)\times 10^{-21}e`$ . The measured $`\nu _\mu `$-$`e`$ cross section implies $`e_{\nu _\mu }\lesssim 10^{-9}e`$ . Generic millicharged particles (charge $`e_x`$, mass $`m_x`$) could appear as virtual states and would thus modify the Lamb shift unless $`e_x<0.11em_x/\mathrm{MeV}`$ . A number of limits follow from a host of previous accelerator experiments and a recent dedicated search at SLAC —see Fig. 12. Millicharged particles are produced by the plasmon decay process and thus drain energy from stars. In globular clusters, the emission rate is almost the same for HB stars and red giants before helium ignition, in contrast with the magnetic-dipole case. Therefore, Eqs. (2) and (3) give an almost identical limit $$e_x\lesssim 2\times 10^{-14}e,$$ (15) applicable for $`m_x`$ below a few keV. 
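The charge-conservation bound quoted above amounts to simple error propagation on $`e_{\nu _e}=e_n-(e_p+e_e)`$ for neutron decay $`n\rightarrow p+e^{-}+\overline{\nu }_e`$; a minimal sketch using the measured values (the quadrature combination of errors is our simplifying assumption):

```python
# Charge bookkeeping for neutron beta decay: e_nu = e_n - (e_p + e_e),
# in units of the elementary charge e, with (value, 1-sigma error) pairs.
e_n = (-0.4e-21, 1.1e-21)          # neutron charge
e_pe = (0.8e-21, 0.8e-21)          # proton + electron charge

value = e_n[0] - e_pe[0]
error = (e_n[1]**2 + e_pe[1]**2) ** 0.5
print(f"e_nu = ({value:.1e} +/- {error:.1e}) e")   # consistent with |e_nu| <~ 3e-21 e
```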
The SN 1987A energy-loss argument extends the exclusion range to about 10 MeV for $`10^{-9}e\lesssim e_x\lesssim 10^{-7}e`$ . The usual big-bang nucleosynthesis (BBN) limit on the effective number of neutrino species $`N_{\mathrm{eff}}`$ provides another constraint. A millicharged neutrino is of Dirac nature so that its right-handed component adds one effective species. If the millicharged particles are not neutrinos, then depending on their spin $`N_{\mathrm{eff}}`$ may increase even more. If BBN excludes one extra species one finds $`e_x\lesssim 3\times 10^{-9}e`$ for $`m_x\lesssim 1\,\mathrm{MeV}`$ . More stringent limits apply in certain models where the millicharged particles are associated with a shadow sector . Further regions in Fig. 12 are excluded to avoid “overclosing” the universe by the new particles . However, because their relic density depends on their annihilation cross section, it is necessary to specify a model. It is hard to imagine new particles which interact solely through their small electric charge! ### 5.4 Nonstandard Weak Interactions #### 5.4.1 Right-Handed Currents Right-handed (r.h.) weak interactions may exist on some level, e.g. in left-right symmetric models where the r.h. gauge bosons differ from the standard ones by their mass. In the low-energy limit relevant for stars one may account for the new couplings by a r.h. Fermi constant $`ϵG_\mathrm{F}`$ where $`ϵ`$ is a small dimensionless parameter. In left-right symmetric models one finds explicitly for charged-current processes $`ϵ_{\mathrm{CC}}^2=\zeta ^2+[m(W_L)/m(W_R)]^2`$ where $`m(W_{L,R})`$ are the l.h. and r.h. gauge boson masses and $`\zeta `$ is the left-right mixing parameter . Assuming that neutrinos are Dirac particles, a SN core loses energy into r.h. states as an “invisible channel” by the process $`e+p\rightarrow n+\nu _{e,R}`$. The SN 1987A energy-loss argument (Sec. 4.3) then requires $`ϵ_{\mathrm{CC}}\lesssim 10^{-5}`$ . Laboratory experiments yield a weaker limit of order $`ϵ_{\mathrm{CC}}\lesssim 3\times 10^{-2}`$ , but do not depend on the assumed existence of r.h. neutrinos. For neutral currents the dominant emission process is $`NN\rightarrow NN\nu _R\overline{\nu }_R`$ which is subject to saturation effects as in the case of axion emission . One then finds $`ϵ_{\mathrm{NC}}\lesssim 3\times 10^{-3}`$ , somewhat less restrictive than the original limits of ; see also . This bound is also somewhat less restrictive than $`ϵ_{\mathrm{NC}}\lesssim 10^{-3}`$ found from big-bang nucleosynthesis . #### 5.4.2 Secret Neutrino Interactions and Majorons The neutrino-neutrino cross section is not known experimentally. It could be anomalously large if neutrino Majorana masses were to arise from a suitable majoron model . “Secret” neutrino-neutrino interactions were constrained by the fact that the SN 1987A neutrino signal was not depleted by collisions with cosmic background neutrinos . Supernova physics with majorons and SN 1987A limits were discussed in . There is little doubt that majoron models will have an important impact on SN physics for neutrino-majoron Yukawa couplings in the $`10^{-6}`$–$`10^{-3}`$ range. The existing literature, however, is too confusing for this author to come up with a clear synthesis of what SN physics implies for majoron models. 
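For orientation, the bound $`ϵ_{\mathrm{CC}}\lesssim 10^{-5}`$ translates into a right-handed gauge boson mass if one assumes vanishing left-right mixing ($`\zeta =0`$, a simplifying assumption); a minimal sketch:

```python
# eps_CC^2 = zeta^2 + [m(W_L)/m(W_R)]^2, here with zeta = 0.
m_WL = 80.4          # GeV, the standard W mass
eps_CC = 1e-5

m_WR = m_WL / eps_CC**0.5
print(f"m(W_R) >~ {m_WR/1e3:.0f} TeV")   # ~25 TeV
```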
#### 5.4.3 Flavor-Changing Neutral Currents In certain models the neutrino neutral current has an effective flavor-changing component. Neutrinos propagating in matter then have medium-induced mixings and thus can oscillate even if they are strictly massless . Naturally, this phenomenon can be important for the oscillation of solar and supernova neutrinos. ## 6 AXIONS AND OTHER PSEUDOSCALARS ### 6.1 Interaction Structure New spontaneously broken global symmetries imply the existence of Nambu-Goldstone bosons that are massless and as such present the most natural case (besides neutrinos) for using stars as particle-physics laboratories. Massless scalars would lead to new long-range forces (Sect. 7) so that we may focus here on pseudoscalars. The most prominent example are axions which were proposed more than twenty years ago as a solution to the strong CP problem ; for reviews see and for the latest developments the proceedings of a topical conference . We use axions as a generic example—it will be obvious how to extend the following results and discussions to other cases. Actually, axions are only “pseudo Nambu-Goldstone bosons” in that the spontaneously broken chiral Peccei-Quinn symmetry $`U_{\mathrm{PQ}}(1)`$ is also explicitly broken, providing these particles with a small mass $$m_a=0.60\,\mathrm{eV}\frac{10^7\,\mathrm{GeV}}{f_a}.$$ (16) Here, $`f_a`$ is the Peccei-Quinn scale, an energy scale which is related to the vacuum expectation value of the field that breaks $`U_{\mathrm{PQ}}(1)`$. The properties of Nambu-Goldstone bosons are always related to such a scale which is the main quantity to be constrained by astrophysical arguments, while Eq. (16) is specific to axions and allows one to express limits on $`f_a`$ in terms of $`m_a`$. In order to calculate the axionic energy-loss rate from stellar plasmas one needs to specify the interaction with the medium constituents. The interaction with a fermion $`j`$ (mass $`m_j`$) is generically $$\mathcal{L}_{\mathrm{int}}=\frac{C_j}{2f_a}\overline{\mathrm{\Psi }}_j\gamma ^\mu \gamma _5\mathrm{\Psi }_j\,\partial _\mu a\text{ or }-i\frac{C_jm_j}{f_a}\overline{\mathrm{\Psi }}_j\gamma _5\mathrm{\Psi }_ja,$$ (17) where $`\mathrm{\Psi }_j`$ is the fermion and $`a`$ the axion field and $`C_j`$ is a model-dependent coefficient of order unity. The combination $`g_{aj}\equiv C_jm_j/f_a`$ plays the role of a Yukawa coupling and $`\alpha _{aj}\equiv g_{aj}^2/4\pi `$ acts as an “axionic fine structure constant.” The derivative form of the interaction is more fundamental in that it is invariant under $`a\rightarrow a+a_0`$ and thus respects the Nambu-Goldstone nature of these particles. The pseudoscalar form is usually equivalent, but one has to be careful when calculating processes where two Nambu-Goldstone bosons are attached to one fermion line, for example an axion and a pion attached to a nucleon . The dimensionless couplings $`C_i`$ depend on the detailed implementation of the Peccei-Quinn mechanism. Limiting our discussion to “invisible axion models” where $`f_a`$ is much larger than the scale of electroweak symmetry breaking, it is conventional to distinguish between models of the DFSZ type (Dine, Fischler, Srednicki , Zhitnitskiĭ ) and of the KSVZ type (Kim , Shifman, Vainshtein, Zakharov ). In KSVZ models, axions have no tree-level couplings to the standard quarks or leptons, yet axions couple to nucleons by their generic mixing with the neutral pion. 
The latest analysis gives numerically $$C_p=-0.34,C_n=0.01$$ (18) with a statistical uncertainty of about $`\pm 0.04`$ and an estimated systematic uncertainty of roughly the same magnitude. The tree-level couplings to standard quarks and leptons in the DFSZ model depend on an angle $`\beta `$ which measures the ratio of vacuum expectation values of two Higgs fields. One finds $$C_e=\frac{1}{3}\mathrm{cos}^2\beta ,C_p=-0.07-0.46\mathrm{cos}^2\beta ,C_n=-0.15+0.38\mathrm{cos}^2\beta ,$$ (19) with similar uncertainties as in the KSVZ case. The CP-conserving interaction between photons and pseudoscalars is commonly expressed in terms of an inverse energy scale $`g_{a\gamma }`$ according to $$\mathcal{L}_{\mathrm{int}}=-\frac{1}{4}g_{a\gamma }F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }a=g_{a\gamma }𝐄\cdot 𝐁\,a,$$ (20) where $`F`$ is the electromagnetic field-strength tensor and $`\stackrel{~}{F}`$ its dual. For axions $$g_{a\gamma }=\frac{\alpha }{2\pi f_a}C_\gamma ,C_\gamma =\frac{E}{N}-1.92\pm 0.08,$$ (21) where $`E/N`$ is the ratio of the electromagnetic over the color anomaly, a model-dependent ratio of small integers. In the DFSZ model or grand unified models one has $`E/N=8/3`$, for which $`C_\gamma \approx 0.75`$, but one can also construct models with $`E/N=2`$, which significantly reduces the axion-photon coupling . The value of $`C_\gamma `$ in a great variety of cases was reviewed in . ### 6.2 Limits on the Interaction Strength #### 6.2.1 Photons The axion interaction with fermions or photons allows for numerous reactions which can produce axions in stars, which may imply limits on the axion coupling strength. Beginning with photons, pseudoscalars interact according to the Lagrangian of Eq. (20) which allows for the decay $`a\rightarrow 2\gamma `$. In stellar plasmas the photon-axion interaction also makes possible the Primakoff conversion $`\gamma \rightarrow a`$ in the electric fields of electrons and nuclei —see Fig. 1. For low-mass pseudoscalars the emission rate was calculated for various degrees of electron degeneracy in , superseding an earlier calculation where screening effects had been ignored . The helioseismological constraint on solar energy losses then leads to Eq. (1) as a bound on $`g_{a\gamma }`$. Figure 13 shows this constraint (“Sun”) in the context of other bounds; similar plots are found in . For axions the relationship between $`g_{a\gamma }`$ and $`m_a`$ is indicated by the heavy solid line, assuming $`E/N=8/3`$. One may also search directly for solar axions. One method (“helioscope”) is to direct a dipole magnet toward the Sun, allowing solar axions to mutate into x-rays by the inverse Primakoff process . A pilot experiment was not sensitive enough , but the exposure time was significantly increased in a new experiment in Tokyo where a dipole magnet was gimballed like a telescope so that it could follow the Sun . The resulting limit $`g_{a\gamma }\lesssim 6\times 10^{-10}\,\mathrm{GeV}^{-1}`$ is more restrictive than Eq. (1). Another helioscope project was begun in Novosibirsk several years ago , but its current status has not been reported for some time. An intriguing project (SATAN) at CERN would use a decommissioned LHC test magnet that could be mounted on a turning platform to achieve reasonable periods of alignment with the Sun . This setup could begin to compete with the globular-cluster limit of Eq. (22). The axion-photon transition in a macroscopic magnetic field is analogous to neutrino oscillations and thus depends on the particle masses . 
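Equations (16) and (21) fix the bookkeeping between $`f_a`$, $`m_a`$, and $`g_{a\gamma }`$; a small conversion sketch (the chosen $`f_a`$ is purely illustrative):

```python
# Converting the Peccei-Quinn scale f_a into the axion mass of Eq. (16)
# and the photon coupling of Eq. (21); E/N = 8/3 gives C_gamma ~ 0.75.
ALPHA = 1 / 137.036
PI = 3.141592653589793

def axion_mass_eV(f_a_GeV):
    return 0.60 * 1e7 / f_a_GeV                  # Eq. (16)

def g_agamma_per_GeV(f_a_GeV, E_over_N=8.0/3.0):
    C_gamma = E_over_N - 1.92                    # Eq. (21), central value
    return ALPHA / (2 * PI * f_a_GeV) * C_gamma

f_a = 1e10                       # GeV, illustrative
print(axion_mass_eV(f_a))        # ~6e-4 eV
print(g_agamma_per_GeV(f_a))     # ~9e-14 GeV^-1
```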
For a large mass difference the transition is suppressed by the momentum mismatch of particles with equal energies. Therefore, the Tokyo limit applies only for $`m_a\lesssim 0.03\,\mathrm{eV}`$. In a next step one will fill the helioscope with a pressurized gas, giving the photon a dispersive mass to overcome the momentum mismatch. An alternative method is “Bragg diffraction,” which uses the strong electric field of a crystal lattice which has large Fourier components for the required momentum transfer . The experiment has been performed using Ge detectors which were originally built to search for neutrinoless double-beta decay and for WIMP dark matter; the crystal serves simultaneously as a Primakoff “transition agent” and as an x-ray detector. A first limit of the SOLAX Experiment of $`g_{a\gamma }\lesssim 27\times 10^{-10}\,\mathrm{GeV}^{-1}`$ is not yet compatible with Eq. (1) and thus not self-consistent. In the future one may reach this limit, but prospects to go much further appear dim . The Primakoff conversion of stellar axions can also proceed in the magnetic fields of sunspots or in the galactic magnetic field so that one might expect anomalous x- or $`\gamma `$-ray fluxes from the Sun , the red supergiant Betelgeuse , or SN 1987A . Observations of SN 1987A yield $`g_{a\gamma }\lesssim 0.1\times 10^{-10}\,\mathrm{GeV}^{-1}`$ for nearly massless pseudoscalars with $`m_a\lesssim 10^{-9}\,\mathrm{eV}`$. A similar limit obtains from the isotropy of the cosmic x-ray background which would be modified by the conversion to axions in the galactic magnetic field . Axion-photon conversion in the magnetic fields of stars, the galaxy, or the early universe were also studied in , but no additional limits emerged. The existence of massless pseudoscalars would cause a photon birefringence effect in pulsar magnetospheres, leading to a differential time delay between photons of opposite helicity and thus to $`g_{a\gamma }\lesssim 0.5\times 10^{-10}\,\mathrm{GeV}^{-1}`$ . A laser beam in a laboratory magnetic field would also be subject to vacuum birefringence , adding to the QED Cotton-Mouton effect. First pilot experiments did not reach the QED level. Two vastly improved current projects are expected to get there , but they will stay far away from the “axion line” in Fig. 13. With a laser beam in a strong magnet one can also search for Primakoff axion production and subsequent back-conversion, but a pilot experiment naturally did not have the requisite sensitivity . The exclusion range of current laser experiments is schematically indicated in Fig. 13. The most important limit on the photon coupling of pseudoscalars derives from the helium-burning lifetime of HB stars in globular clusters, i.e. from Eq. (3), $$g_{a\gamma }\lesssim 0.6\times 10^{-10}\,\mathrm{GeV}^{-1}.$$ (22) For $`m_a\gtrsim 10\,\mathrm{keV}`$ this limit quickly degrades as the emission is suppressed when the particle mass exceeds the stellar temperature. For a fixed temperature, the Primakoff energy-loss rate decreases with increasing density so that Eq. (2) implies a less restrictive constraint. Equation (22) was first stated in , superseding the slightly less restrictive but often-quoted “red-giant bound” of —see the discussion after Eq. (3). The axion relation Eq. 
(21) leads to $$m_aC_\gamma \lesssim 0.3\,\mathrm{eV}\text{ and }f_a/C_\gamma \gtrsim 2\times 10^7\,\mathrm{GeV}.$$ (23) In the DFSZ model and grand unified models, $`C_\gamma \approx 0.75`$ so that $`m_a\lesssim 0.4\,\mathrm{eV}`$ and $`f_a\gtrsim 1.5\times 10^7\,\mathrm{GeV}`$ (Fig. 14). For models in which $`E/N=2`$ and thus $`C_\gamma `$ is very small, the bounds are significantly weaker. On the basis of their two-photon coupling alone, pseudoscalars can reach thermal equilibrium in the early universe. Their subsequent $`a\rightarrow 2\gamma `$ decays would contribute to the cosmic photon backgrounds , excluding a non-trivial $`m_a`$-$`g_{a\gamma }`$-range (Fig. 13). Some of the pseudoscalars would end up in galaxies and clusters of galaxies. Their decay would produce an optical line feature that was not found , leading to the “telescope” limits in Fig. 13. For axions, the telescope limits exclude an approximate mass range 4–14 eV even for a small $`C_\gamma `$. Axions with a mass in the $`\mu \mathrm{eV}`$ ($`10^{-6}\,\mathrm{eV}`$) range could be the dark matter of the universe (Sec. 6.3). The Primakoff conversion in a microwave cavity placed in a strong magnetic field (“haloscope”) allows one to search for galactic dark-matter axions . Two pilot experiments and first results from a full-scale search already exclude a range of coupling strength shown in Fig. 13. The new generation of full-scale experiments should cover the dotted area in Fig. 13, perhaps leading to the discovery of axion dark matter. #### 6.2.2 Electrons Pseudoscalars which couple to electrons are produced by the Compton process $`\gamma +e^{-}\rightarrow e^{-}+a`$ and by the electron bremsstrahlung process $`e^{-}+(A,Z)\rightarrow (A,Z)+e^{-}+a`$ . A standard solar model yields an axion luminosity of $`L_a=\alpha _{ae}\,6.0\times 10^{21}L_\odot `$ where $`\alpha _{ae}`$ is the axion electron “fine-structure constant” as defined after Eq. (17). The helioseismological constraint $`L_a\lesssim 0.1L_\odot `$ of Sec. 2.3 implies $`\alpha _{ae}\lesssim 2\times 10^{-23}`$. White-dwarf cooling gives $`\alpha _{ae}\lesssim 1.0\times 10^{-26}`$, while the most restrictive limit is from the delay of helium ignition in low-mass red-giants in the spirit of Eq. (2) $$\alpha _{ae}\lesssim 0.5\times 10^{-26}\text{ or }g_{ae}\lesssim 2.5\times 10^{-13}.$$ (24) For $`m_a\gtrsim T\approx 10\,\mathrm{keV}`$ this limit quickly degrades because the emission from a thermal plasma is suppressed. With Eq. (17) one finds for axions $$m_aC_e\lesssim 0.003\,\mathrm{eV}\text{ and }f_a/C_e\gtrsim 2\times 10^9\,\mathrm{GeV}.$$ (25) In KSVZ-type models $`C_e=0`$ at tree level so that no interesting limit obtains. In the DFSZ model $`m_a\mathrm{cos}^2\beta \lesssim 0.01\,\mathrm{eV}`$ and $`f_a/\mathrm{cos}^2\beta \gtrsim 0.7\times 10^9\,\mathrm{GeV}`$. Since $`\mathrm{cos}^2\beta `$ can be very small, there is no generic limit on $`m_a`$. #### 6.2.3 Nucleons The axion-nucleon coupling strength is primarily constrained by the SN 1987A energy-loss argument . 
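Before turning to the emission-rate details, Eq. (17) already fixes how a bound on $`g_{aN}`$ maps onto $`f_a`$ and $`m_a`$; a sketch of the arithmetic behind Eqs. (26)–(27) below (round nucleon mass, couplings taken at the quoted edges):

```python
# With g_aN = C_N m_N / f_a from Eq. (17) and m_a from Eq. (16),
# a bound on g_aN translates into f_a/C_N and m_a*C_N.
M_N_GEV = 0.94                                   # nucleon mass, rounded

def f_a_over_CN_GeV(g_aN):
    return M_N_GEV / g_aN

def m_a_times_CN_eV(g_aN):
    return 0.60 * 1e7 / f_a_over_CN_GeV(g_aN)    # Eq. (16)

for g in (3e-10, 3e-7):                          # edges of the excluded range
    print(g, f_a_over_CN_GeV(g), m_a_times_CN_eV(g))
# ~3e9 GeV and ~0.002 eV at one edge, ~3e6 GeV and ~2 eV at the other.
```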
The main problem is to estimate the axion emission rate reliably. In the early papers it was based on a somewhat naive calculation of the bremsstrahlung process $`NN\rightarrow NNa`$, using quasi-free nucleons that interact perturbatively through a one-pion exchange potential. Assuming an equal axion coupling $`g_{aN}`$ to protons and neutrons this treatment leads to the $`g_{aN}`$-dependent shortening of the SN 1987A neutrino burst of Fig. 8. However, in a dense medium the bremsstrahlung process likely saturates, reducing the naive emission rate by as much as an order of magnitude . With this correction, and assuming that the neutrino burst was not shortened by more than half, one reads from Fig. 8 an excluded range $$3\times 10^{-10}\lesssim g_{aN}\lesssim 3\times 10^{-7}.$$ (26) With Eq. (17) this implies an exclusion range $$0.002\,\mathrm{eV}\lesssim m_aC_N\lesssim 2\,\mathrm{eV}\text{ and }3\times 10^6\,\mathrm{GeV}\lesssim f_a/C_N\lesssim 3\times 10^9\,\mathrm{GeV}.$$ (27) For KSVZ axions the coupling to neutrons disappears while $`C_p\approx -0.34`$. With a proton fraction of about 0.3 one estimates an effective $`C_N\approx 0.2`$ so that $$0.01\,\mathrm{eV}\lesssim m_a\lesssim 10\,\mathrm{eV}\text{ and }0.6\times 10^6\,\mathrm{GeV}\lesssim f_a\lesssim 0.6\times 10^9\,\mathrm{GeV}$$ (28) is excluded. In a detailed numerical study the values for $`C_n`$ and $`C_p`$ appropriate for the KSVZ model and for the DFSZ model with different choices of $`\mathrm{cos}^2\beta `$ were implemented . For KSVZ axions one finds a limit $`m_a\lesssim 0.008\,\mathrm{eV}`$, while it varies between about 0.004 and $`0.012\,\mathrm{eV}`$ for DFSZ axions, depending on $`\mathrm{cos}^2\beta `$. In view of the large overall uncertainties it is probably good enough to remember $`m_a\lesssim 0.01\,\mathrm{eV}`$ as a generic limit (Fig. 14). Axions on the “strong interaction side” of the exclusion range Eq. (26) would have produced excess counts in the neutrino detectors by their absorption on oxygen if $`1\times 10^{-6}\lesssim g_{aN}\lesssim 1\times 10^{-3}`$ . For KSVZ axions this crudely translates into $`20\,\mathrm{eV}\lesssim m_a\lesssim 20\,\mathrm{keV}`$ as an exclusion range (Fig. 14). #### 6.2.4 Hadronic Axion Window This limit as well as the “trapping side” of the energy-loss argument have not been studied in as much detail because the relevant $`m_a`$ range is already excluded by the globular-cluster argument (Fig. 14) which, however, depends on the axion-photon interaction which would nearly vanish in models with $`E/N=2`$. In this case a narrow gap of allowed axion masses in the neighborhood of 10 eV may exist between the two SN arguments (“hadronic axion window”). In this region one can derive interesting limits from globular-cluster stars where axions can be emitted by nuclear processes, causing a metallicity-dependent modification of the core mass at helium ignition . It is intriguing that in this window axions could play a cosmological role as a hot dark matter component . 
Usually, of course, axions are a cold dark matter candidate. Moreover, in this window it may be possible to detect a $`14.4\,\mathrm{keV}`$ monochromatic solar axion line which is produced by transitions between the first excited and ground state of $`{}^{57}\mathrm{Fe}`$. In the laboratory one can then search for axion absorption which would give rise to x-rays as $`{}^{57}\mathrm{Fe}`$ de-excites . A recent pilot experiment did not have enough sensitivity to find axions , but a vastly improved detector is now in preparation in Tokyo (private communication by S. Moriyama and M. Minowa). ### 6.3 Cosmological Limits The astrophysical axion mass limits are particularly interesting when juxtaposed with the cosmological ones which we thus briefly review. For $`f_a\gtrsim 10^8\,\mathrm{GeV}`$ cosmic axions never reach thermal equilibrium in the early universe. They are produced by a nonthermal mechanism that is intimately intertwined with their Nambu-Goldstone nature and that implies that their contribution to the cosmic density is proportional to $`f_a^{1.175}`$ and thus to $`m_a^{-1.175}`$. The requirement not to “overclose” the universe with axions thus leads to a lower mass limit. One must distinguish between two generic cosmological scenarios. If inflation occurred after the Peccei-Quinn symmetry breaking or if $`T_{\mathrm{reheat}}<f_a`$, the initial axion field takes on a constant value $`a_\mathrm{i}=f_a\mathrm{\Theta }_\mathrm{i}`$ throughout the universe, where $`0\le \mathrm{\Theta }_\mathrm{i}<\pi `$ is the initial “misalignment” of the QCD $`\mathrm{\Theta }`$ parameter . If $`\mathrm{\Theta }_\mathrm{i}\approx 1`$ one obtains a critical density in axions for $`m_a\approx 1\,\mu \mathrm{eV}`$, but since $`\mathrm{\Theta }_\mathrm{i}`$ is unknown there is no strict cosmological limit on $`m_a`$. However, the possibility to fine-tune $`\mathrm{\Theta }_\mathrm{i}`$ is limited by inflation-induced quantum fluctuations which in turn lead to temperature fluctuations of the cosmic microwave background . In a broad class of inflationary models one thus finds an upper limit to $`m_a`$ where axions could be the dark matter. According to the most recent discussion it is about $`10^{-3}\,\mathrm{eV}`$ (Fig. 14). If inflation did not occur at all or if it occurred before the Peccei-Quinn symmetry breaking with $`T_{\mathrm{reheat}}>f_a`$, cosmic axion strings form by the Kibble mechanism . Their motion is damped primarily by axion emission rather than gravitational waves. After axions acquire a mass at the QCD phase transition they quickly become nonrelativistic and thus form a cold dark matter component. Unknown initial conditions no longer enter, but details of the string mechanism are sufficiently complicated to prevent an exact prediction of the axion density. On the basis of Battye and Shellard’s treatment and assuming that axions are the cold dark matter of the universe one finds a plausible mass range of $`m_a=\text{6–2500}\,\mu \mathrm{eV}`$ . Sikivie et al. predict somewhat fewer axions, allowing for somewhat smaller masses if axions are the dark matter. Either way, the ongoing full-scale search experiments for galactic dark matter axions (Sec. 6.2.1 and Fig. 13) in Livermore (U.S. Axion Search ) and in Kyoto (CARRACK ) aim at a cosmologically well-motivated range of axion masses (Fig. 14). ## 7 LONG-RANGE FORCES ### 7.1 Fifth Force New low-mass scalar or vector bosons would mediate long-range forces between macroscopic bodies. 
This is in contrast with pseudoscalars which couple to the spin and thus produce no long-range force between unpolarized bodies except for a residual force from two-boson exchange . In stars, a new long-range force has two different consequences. First, it modifies the effect of gravity. Second, it drains the star of energy, for the quanta of the new force are massless, or nearly so, and thus arise in thermal reactions. Thermal graviton emission is a case in point . However, the graviton luminosity is very small, about $`10^{-19}L_\odot `$ for the Sun. Naturally, the coherent large-scale force is the most important aspect of gravity in stars! This conclusion carries over to new forces, notably a putative “fifth force.” According to experiment, a fifth force has to be much weaker than gravity so that possible modifications of stellar structure or the solar p-mode frequencies are too small to be observable. Likewise, modifications of fundamental coupling constants near pulsars or scalar boson emission by the Hulse-Taylor binary pulsar are negligible effects. However, there are no experimental fifth-force limits below about the centimeter scale, corresponding to boson masses exceeding about $`10^{-3}\,\mathrm{eV}`$, where the most restrictive bounds arise from the energy loss of stars. The Yukawa coupling $`g_S`$ ($`g_V`$) of scalar (vector) bosons $`\varphi `$ to electrons has been constrained by the bremsstrahlung process $`e^{-}+{}^{4}\mathrm{He}\rightarrow {}^{4}\mathrm{He}+e^{-}+\varphi `$ which leads with Eq. (3) to limits of $`g_S\lesssim 1.3\times 10^{-14}`$ and $`g_V\lesssim 0.9\times 10^{-14}`$ . The Yukawa coupling to baryons has been constrained by the Compton process $`\gamma +{}^{4}\mathrm{He}\rightarrow {}^{4}\mathrm{He}+\varphi `$, leading to $`g_S\lesssim 4.3\times 10^{-11}`$ and $`g_V\lesssim 3.0\times 10^{-11}`$ . ### 7.2 Leptonic and Baryonic Gauge Interactions It has been speculated that lepton and baryon number could play the role of gauge charges . One consequence would be the existence of long-range leptonic and baryonic forces. The globular-cluster limits of the previous section translate into $`e_L\lesssim 1\times 10^{-14}`$ and $`e_B\lesssim 3\times 10^{-11}`$ on the leptonic and baryonic gauge charges. Tests of the equivalence principle on solar-system scales constrain a composition-dependent fifth force, leading to something like $`e_{L,B}\lesssim 10^{-23}`$ . The cosmic background neutrinos would screen leptonic forces over large distances, but the solar-system limit on $`e_L`$ remains unaffected . On the other hand, the SN 1987A neutrino burst would not suffer dispersion in the leptonic field of the galaxy because it is shielded by the cosmic background neutrinos . Leptonic forces contribute to the neutrino self-energy, modifying matter-induced neutrino oscillations in the Sun and supernovae . ### 7.3 Time-Variation of Newton’s Constant Astrophysics and cosmology are natural laboratories for testing all conceivable deviations from the standard theory of gravitation . One hypothesis, which goes back to Dirac’s large numbers hypothesis , holds that the value of Newton’s constant $`G_\mathrm{N}`$ evolves in time. The present-day rate of change can be measured by a precision study of the orbits of celestial bodies. 
In the solar system data come from laser ranging of the Moon and radar ranging of the planets, notably by the Viking landers on Mars . The increase of the length of day from 1663–1972 caused by tidal forces in the Earth-Moon system is consistent with a constant $`G_\mathrm{N}`$ , although some controversial claims for a decreasing $`G_\mathrm{N}`$ have been raised . Beginning in 1974, very precise orbital data exist for the Hulse-Taylor binary pulsar PSR 1913+16 . A weaker but less model-dependent bound arises from the spin-down rate of the pulsar PSR 0655+64 . Finally, the long-time stability of galaxy clusters limits a decreasing $`G_\mathrm{N}`$ . The bounds from these methods are summarized in Table 2. Very intriguing limits follow from the properties of the Sun. Paleontological evidence for its past luminosity, which scales approximately as $`G_\mathrm{N}^7M^5`$, provides limits on previous values of $`G_\mathrm{N}`$ . Solar models with a time-varying $`G_\mathrm{N}`$ were studied in the 1960s and 1970s , but truly interesting limits arose only recently from helioseismological observations . A limit $`|\dot{G}_\mathrm{N}/G_\mathrm{N}|\lesssim 2\times 10^{-12}\,\mathrm{yr}^{-1}`$ was derived by a comparison of the measured small-spacing p-mode frequency differences with those calculated from solar models with a time-varying $`G_\mathrm{N}`$ . The authors believe that the uncertainty is dominated by the observational errors while the prime systematic uncertainty is the exact solar age. A more conservative approach was used in deriving limits on a new solar energy loss mechanism (Sec. 2.3). The helioseismological analysis of provides the most restrictive limit on $`\dot{G}_\mathrm{N}/G_\mathrm{N}`$, hopefully stimulating other groups to re-assess the bound independently. A large effect is expected on the oldest stars which “integrate” $`G_\mathrm{N}(t)`$ into the distant past. White-dwarf cooling is a case in point . Under reasonable assumptions for the galactic age, the faint end of the luminosity function prefers a negative value for $`\dot{G}_\mathrm{N}/G_\mathrm{N}`$ around $`-10`$ to $`-30\times 10^{-12}\,\mathrm{yr}^{-1}`$ . Very recently this case has been re-examined in greater detail. The observational uncertainty of the faint end of the luminosity function and the uncertainty of the galactic age preclude a clear limit on $`\dot{G}_\mathrm{N}/G_\mathrm{N}`$. However, it is remarkable that even values as small as $`10^{-14}\,\mathrm{yr}^{-1}`$ seem to make a noticeable difference for the cooling behavior of the oldest white dwarfs. Globular clusters are another important case because a different $`G_\mathrm{N}`$ in the past changes their apparent age based on the brightness of the main-sequence turn-off . A comparison with the expansion age of the universe brackets the allowed rate-of-change to the interval shown in Table 2. A very sensitive limit arises from the observed masses of several old pulsars which measure the value of $`G_\mathrm{N}`$ at their time of formation in a SN explosion . The mass of a SN core at the time of collapse depends on its Chandrasekhar value which in turn scales as $`G_\mathrm{N}^{-3/2}`$. The limits shown in Table 2 are difficult to compare on an equal footing as they involve vastly different ways of dealing with statistical and systematic uncertainties. 
However, it looks fair to conclude that $`|\dot{G}_\mathrm{N}/G_\mathrm{N}|`$ cannot exceed a few $`10^{-12}\,\mathrm{yr}^{-1}`$ and that stars play an important role in this discourse. A similar bound arises from the cosmic expansion rate at the time of big-bang nucleosynthesis as measured by the primordial light-element abundances . It implies that three minutes after the big bang $`G_\mathrm{N}`$ agreed with its present-day value to within a few tens of percent. A comparison of this limit with those of Table 2 requires a specific assumption about the functional dependence of $`G_\mathrm{N}(t)`$. ### 7.4 Equivalence Principle The general relativistic equivalence principle implies that the space-time trajectories of relativistic particles are independent of internal degrees of freedom such as spin or flavor, and independent of the particle type (e.g. photon, neutrino). Several astronomical observations allow tests of this prediction. Limits on a gravitationally induced birefringence effect for photon propagation have been derived from the absence of depolarization of the Zeeman components of spectral lines emitted in magnetically active regions of the Sun . Observations of the light deflection by the Sun could soon become interesting . The depolarization effect on distant radio galaxies already provides very restrictive limits , as do pulsar observations . One may also test for the equality of the Shapiro time delay between different particles which propagate through the same gravitational field. The absence of an anomalous shift between the SN 1987A photon and neutrino arrival times (Sec. 4.2) gave limits on violations of the equivalence principle . The observation of a future galactic SN could provide independent arrival information for $`\overline{\nu }_e`$ and $`\nu _e`$ and thus provide another such test . A violation of the equivalence principle could manifest itself by a relative shift of the energies of different neutrino flavors in a gravitational field. For a given momentum $`p`$ the matrix of energies in flavor space (relativistic limit) is $`E=p+M^2/2p+2p\varphi (𝐫)(1+F)`$ where $`M^2`$ is the squared neutrino mass matrix, $`\varphi (𝐫)`$ the Newtonian gravitational potential, and $`F`$ a matrix of dimensionless constants which parametrize the violation of the equivalence principle. $`F\ne 0`$ can lead to neutrino oscillations in analogy to the standard vacuum oscillations which are caused by the matrix $`M^2`$ . Values for $`F_{ij}`$ in the general $`10^{-14}`$–$`10^{-17}`$ range could account for the solar neutrino problem. ### 7.5 Photon Mass While it is usually taken for granted that photons are strictly massless, this theoretical expectation still needs to be tested experimentally. Some of the most restrictive constraints are related to the long-range nature of static electric or magnetic fields. The best laboratory limit of $`m_\gamma \lesssim 10^{-14}\,\mathrm{eV}`$ derives from a test of Coulomb’s law—see for a review. In the astrophysical domain, the dispersion of the pulsed signal of radio pulsars is not a very sensitive diagnostic as the interstellar medium mimics a photon mass corresponding to a plasma frequency of order $`10^{-11}\,\mathrm{eV}`$. The spatial variation of magnetic fields of celestial bodies is far more sensitive. 
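A photon mass $`m_\gamma `$ would cut off static fields beyond the reduced Compton wavelength $`\lambda =\mathrm{\hbar }/m_\gamma c`$; a small sketch converting the laboratory bound above (and, for comparison, the Small Magellanic Cloud bound discussed below) into ranges:

```python
# Yukawa range lambda = hbar*c / (m_gamma c^2) implied by photon-mass bounds.
HBARC_EV_M = 1.973e-7     # hbar*c in eV*m

def range_m(m_gamma_eV):
    return HBARC_EV_M / m_gamma_eV

for label, m in [("Coulomb-law test", 1e-14),
                 ("SMC argument (below)", 1e-27)]:
    print(label, range_m(m), "m")
# The SMC bound corresponds to ~2e20 m ~ 6 kpc, of the order of the ~3 kpc
# field scale invoked in that argument.
```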
Jupiter’s magnetic field as measured by Pioneer-10 yields $`m_\gamma \lesssim 0.6\times 10^{-15}\,\mathrm{eV}`$ while the Earth’s field gives $`m_\gamma \lesssim 0.8\times 10^{-15}\,\mathrm{eV}`$ . If the photon has a mass, $`Am_\gamma ^2`$ is an observable quantity where $`A`$ is the vector potential corresponding to known magnetic fields. A recent laboratory experiment discloses $`Am_\gamma ^2\lesssim 0.8\times 10^{-22}\,\mathrm{T}\,\mathrm{m}\,\mathrm{eV}^2`$ . The galactic magnetic field implies $`A\approx 2\times 10^9\,\mathrm{T}\,\mathrm{m}`$ (Tesla-meter) so that $`m_\gamma \lesssim 2\times 10^{-16}\,\mathrm{eV}`$ while a cluster-level field corresponds to $`A\approx 10^{12}\,\mathrm{T}\,\mathrm{m}`$, providing $`m_\gamma \lesssim 10^{-17}\,\mathrm{eV}`$. Even more restrictive limits obtain from astrophysical objects in which magnetic fields, and hence the Maxwellian form of electrodynamics, play a key role at maintaining equilibrium or creating long-lived stable structures . The most restrictive case is based on an argument about the magneto-gravitational equilibrium of the gas in the Small Magellanic Cloud. The argument requires that the range of the interaction exceeds the characteristic field scale of about 3 kpc . The resulting limit $`m_\gamma \lesssim 10^{-27}\,\mathrm{eV}`$, if correct, is surprisingly close to $`10^{-33}\,\mathrm{eV}`$ where the photon Compton wavelength would exceed the radius of the observable universe and thus would cease to have any observable consequences. ### 7.6 Multibody Neutrino Exchange Two-neutrino exchange between fermions gives rise to a long-range force. A neutrino may also pass around several fermions, so to speak, producing a much smaller potential. In a thought-provoking paper it was claimed that this multibody neutrino exchange could be a huge effect in neutron stars, essentially because combinatorial factors among many neutrons win out against the smallness of the potential . To stabilize neutron stars, it was claimed, the long-range nature of neutrino exchange had to be suppressed by a nonvanishing mass exceeding about $`0.4\,\mathrm{eV}`$ for all flavors. In an interesting series of papers it was shown, however, that a proper resummation of a seemingly divergent series of terms leads to a well-behaved and small “neutron-star self-energy” , invalidating the claim of a lower neutrino mass limit. ## 8 CONCLUSION Stellar-evolution theory together with astronomical observations, the SN 1987A neutrino burst, and certain x- and $`\gamma `$-ray observations provide a number of well-developed arguments to constrain the properties of low-mass particles. The most successful examples are globular-cluster stars where the “energy-loss argument” was condensed into the simple criteria of Eqs. (2) and (3) and SN 1987A where it was summarized by Eq. (5). New particle-physics conjectures must first pass these and other simple astrophysical standard tests before being taken too seriously. A showcase example for the interplay between astrophysical limits with laboratory experiments and cosmological arguments is provided by the axion hypothesis. The laboratory and astrophysical limits push the Peccei-Quinn scale to such high values that it appears almost inevitable that axions, if they exist at all, play an important role as a cold dark matter component. 
This makes the direct search for galactic axion dark matter a well-motivated effort. Other important standard limits pertain to neutrino electromagnetic form factors—laboratory experiments will have a difficult time catching up. The globular-cluster limit was based on relatively old observational data. A plot like Fig. 5 could be made more significant with dedicated CCD observations of globular clusters and improved theoretical interpretations. Assuming that such an effort produces internally consistent results, the statistical significance would improve, but I would not expect a vast gain for, say, the neutrino magnetic-moment limit as there always remain irreducible systematic uncertainties. Shockingly, SN 1987A as a particle-physics laboratory is based on no more than two dozen measured neutrinos. The observation of a future galactic SN with a large detector like Superkamiokande or a future observatory such as OMNIS would provide a high-statistics neutrino light curve and thus a sound empirical basis for SN theory in general and for particle-physics interests in particular. Alas, galactic supernovae happen only once every few decades, perhaps only once per century. Thus, while the neutrinos from the next galactic SN surely are on their way, it could be a long wait until they arrive. Most of the theoretical background relevant to this field could not be touched upon in this brief overview. The physics of weakly coupled particles in stars is a nice playing field for “particle physics in media” which involves field theory at finite temperature and density (FTD), many-body effects, particle dispersion and reactions in magnetic fields and media, oscillations of trapped neutrinos, and so forth. It is naturally in the context of SN theory where such issues are of particular interest, but even the plasmon decay $`\gamma \rightarrow \nu \overline{\nu }`$ in normal stars or the MSW effect in the Sun are interesting cases. Particle physics in media and its astrophysical and cosmological applications is a fascinating topic in its own right which well deserves a dedicated review. Much more information of particle-physics interest may be written in the sky than has been deciphered as yet. Other objects or phenomena should be considered, perhaps other kinds of conventional stars, perhaps more exotic phenomena such as $`\gamma `$-ray bursts. The particle-physics lessons to be learned from them are left to be reviewed in a future report! ## ACKNOWLEDGMENTS This work was supported, in part, by the Deutsche Forschungsgemeinschaft under grant No. SFB-375.
# Multifractal Scaling in the Bak-Tang-Wiesenfeld Sandpile and Edge Events ## Abstract An analysis of moments and spectra shows that, while the distribution of avalanche areas obeys finite size scaling, that of toppling numbers is universally characterized by a full, nonlinear multifractal spectrum. Rare, large avalanches dissipating at the border influence the statistics very sensibly. Only once they are excluded from the sample, the conditional toppling distribution for given area simplifies enough to show also a well defined, multifractal scaling. The resulting picture brings to light unsuspected, novel physics in the model. PACS numbers: 64.60.Lx, 64.60.Ak, 05.40.+j, 05.65.+b Finite size scaling (FSS) is a widely adopted framework for the description of finite, large systems near criticality. In the last decade, after the work of Bak et al., much attention has been devoted to a class of models in which criticality is spontaneously generated by the dynamics itself, without the necessity of tuning parameters. This self-organized criticality (SOC) has been advocated as a paradigm for a wide range of phenomena, from earthquakes to interface depinning, economics and biological evolution . The prototype model of SOC is the two dimensional (2D) Bak, Tang, and Wiesenfeld sandpile (BTW) , which represents a system driven by a slow external influx, dissipated at the borders through a local, nonlinear mechanism. In spite of its apparent simplicity and relative analytical tractability, the 2D BTW resisted, so far, all theoretical attempts to fully and exactly characterize its scaling. These attempts were essentially all based on the FSS ansatz. Numerical approaches, also based on FSS, led to rather scattered and sometimes contradictory numerical results , which hardly reconcile with existing theoretical conjectures. Thus, with its intriguing intractability, BTW scaling remains a formidable challenge for nonequilibrium statistical mechanics and it is very important to check if FSS works in this context. In this Letter we apply a new strategy of data collection and interpretation, in order to determine to what extent the FSS ansatz can be applied, or rather has to be modified, for a correct description of the 2D BTW. Our results are striking and largely unexpected: while compelling evidence is obtained that the probability distribution functions (pdf) of some quantities obey FSS, for other magnitudes, whose fractal dimensions can widely fluctuate within the nonlinear dynamics, this is definitely not the case. Following our protocol of analysis, we demonstrate that the well known difficulties in the description of the BTW are due to unexpected, very nonstandard features of its dynamical behavior. In the BTW, relations between different key quantities do not reduce to standard power laws, as in FSS, and are substantially influenced by the infrared cutoff given by the size of the system. The peculiar fluctuations characterizing intermittent dissipation at the cutoff scale provide a dynamical mechanism for unusual deviations from finite size, and even multifractal, scaling. We consider the BTW on a square lattice box $`\mathrm{\Lambda }`$; to each site $`i`$ we associate an integer height $`z_i>0`$, the number of grains. When $`z_i`$ exceeds a threshold $`z_c=4`$, site $`i`$ topples: $`z_i\rightarrow z_i-4`$, while for the nearest neighbors $`j`$ of $`i`$, $`z_j\rightarrow z_j+1`$. At the boundary less than $`4`$ neighbor sites are upgraded, with consequent grain dissipation. 
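As an aside, the rule just stated translates directly into code; a minimal sketch (Python; the transient length and sample size are illustrative, far smaller than the simulations reported below). For each added grain it returns the number of topplings and of distinct toppled sites, i.e. the avalanche quantities defined next:

```python
import random

def avalanche(z, L, z_c=4):
    """Add one grain at a random site of the L x L table z and relax.
    Returns (s, a): total topplings and number of distinct toppled sites."""
    x0, y0 = random.randrange(L), random.randrange(L)
    z[x0][y0] += 1
    stack, toppled, s = [(x0, y0)], set(), 0
    while stack:
        x, y = stack.pop()
        while z[x][y] > z_c:                      # site unstable: topple it
            z[x][y] -= 4
            s += 1
            toppled.add((x, y))
            for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                if 0 <= nx < L and 0 <= ny < L:   # grains leaving the box are lost
                    z[nx][ny] += 1
                    if z[nx][ny] > z_c:
                        stack.append((nx, ny))
    return s, len(toppled)

# Drive the pile into its steady state, then sample avalanches:
L = 64
z = [[0] * L for _ in range(L)]
for _ in range(20 * L * L):                       # transient (illustrative length)
    avalanche(z, L)
data = [avalanche(z, L) for _ in range(10000)]
```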
Further instabilities can be created by the first toppling. An avalanche is the set of the $`s`$ topplings necessary to reach a stable system configuration after the addition of one grain ($`z_k\rightarrow z_k+1`$ at some randomly chosen $`k\in \mathrm{\Lambda }`$); $`a`$ is the number of lattice sites toppling at least once during the avalanche. A sequence of avalanches is created by successive random additions. After sufficiently many grains, thanks to dissipation at the borders, the sandpile reaches a steady state. We analyzed up to $`10^8`$ avalanches in this state for $`L=128,256,512`$ and $`1024`$. The pdf's for $`a`$ and $`s`$ do not reveal characteristic scales intermediate between the lattice spacing and the linear pile size $`L`$. FSS postulates for them the forms $`P_s(s,L)`$ $`=`$ $`s^{-\tau _s}F_s\left(s/L^{D_s}\right)`$ (1) $`P_a(a,L)`$ $`=`$ $`a^{-\tau _a}F_a\left(a/L^{D_a}\right),`$ (2) and usually assumes that different quantities characterizing an avalanche are simply related by power laws, determined by sharply peaked conditional pdf's. For example, one expects that, given $`a`$, $`s\sim a^\gamma `$, with $`D_s=\gamma D_a`$, and $`D_s`$ ($`D_a`$) representing the fractal dimension of $`s`$ ($`a`$). Without sticking to the FSS form (1), scaling of $`P_s`$ is most generally described by the multifractal spectrum: $$f\left(\alpha \right)=\frac{\mathrm{log}\left(\int _{L^\alpha }^{\infty }P_s(s,L)\,ds\right)}{\mathrm{log}(L)}.$$ (3) $`f`$ is the Legendre transform of the moment scaling function $`\sigma \left(q\right)`$ defined by: $$\langle s^q\rangle _L=\int P_s(s,L)s^q\,ds\sim L^{\sigma \left(q\right)},$$ (4) i.e. $`\sigma \left(q\right)=\mathrm{sup}_\alpha [q\alpha +f(\alpha )]`$. Analogous definitions apply to the spectrum $`g(\beta )`$ ($`\beta =\mathrm{log}a/\mathrm{log}L`$) and the moment exponent $`\rho (q)`$ of $`P_a`$. If Eqs. (1) hold, $`f\left(\alpha \right)=-(\tau _s-1)\alpha `$ for $`0<\alpha <D_s`$ and $`f=-\infty `$ for $`\alpha >D_s`$. Consistently $`\sigma \left(q\right)=D_s(q-\tau _s+1)`$ for $`q>\tau _s-1`$ and $`\sigma \left(q\right)=0`$ for $`q<\tau _s-1`$. Corresponding expressions for $`a`$-quantities hold if FSS is valid. So, within FSS, both $`f`$ and $`\sigma `$ are piece-wise linear functions of their arguments. A reliable way to establish whether Eqs. (1) hold is by checking the above linearity of $`\sigma `$ and $`\rho `$ in significant ranges of $`q`$. Being extrapolated for $`L\rightarrow \infty `$ from finite-$`L`$ moments, $`\rho `$ and $`\sigma `$ provide a very asymptotic characterization of the pdf's. While for $`P_a`$ a constant gap $`\mathrm{\Delta }\rho (q)=\rho (q+1)-\rho (q)\simeq 2.02\pm 0.03\simeq D_a`$ sets in already at $`q=1`$ (Table I), for $`P_s`$ one finds $`\sigma ^{\prime }(1)\simeq 2.5`$, with $`\mathrm{\Delta }\sigma `$ steadily increasing from $`\mathrm{\Delta }\sigma (1)\simeq 2.70`$ to $`\mathrm{\Delta }\sigma (8)\simeq 2.92`$. $`\mathrm{\Delta }\sigma (1)\simeq 2.7`$ was also found in Ref. , based on large-$`L`$ data. Thus, unlike for $`P_a`$, and in violation of FSS, for $`P_s`$ there is clearly no constant gap in the range $`q\geq 1`$. The gap tends to rise to $`3.0`$ for increasing $`q`$. Therefore, we should also expect $`f(\alpha )>-\infty `$ as long as $`\alpha \lesssim 3.0`$. Fig. 1 reports $`g`$ and $`f`$ as obtained by the data collapse technique in Ref. . The linear form $`g\left(\beta \right)=-(\tau _a-1)\beta `$ is well verified for $`\beta \lesssim 1.5`$, with an estimated slope $`-0.19\pm 0.01`$. In the region $`\beta >1.5`$, the collapse gets worse.
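Before discussing the spectra further, it may be useful to sketch how the moment scaling function $`\sigma (q)`$ of Eq. (4) and the gaps $`\mathrm{\Delta }\sigma (q)`$ of Table I can be extracted in practice. The outline below is a sketch only, not the analysis code behind this Letter; it assumes a dictionary mapping each lattice size $`L`$ to an array of avalanche sizes $`s`$ collected in the steady state, e.g. with the simulation sketch above:

```python
# Estimate sigma(q) from <s^q>_L ~ L^sigma(q), Eq. (4), by a log-log fit
# across lattice sizes, then form the gaps Delta-sigma(q) as in Table I.
import numpy as np

def sigma_of_q(samples_by_L, qs):
    Ls = np.array(sorted(samples_by_L))
    sigma = []
    for q in qs:
        logmom = [np.log(np.mean(np.asarray(samples_by_L[L], float) ** q))
                  for L in Ls]
        slope, _ = np.polyfit(np.log(Ls), logmom, 1)
        sigma.append(slope)
    return np.array(sigma)

qs = np.arange(1.0, 9.0)            # q = 1, ..., 8 as in Table I
# sigma = sigma_of_q(samples_by_L, qs)
# gaps = np.diff(sigma)             # a constant gap signals FSS; a drifting
#                                   # gap, as found here for P_s, violates it
```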
One expects $`D_a=2`$ and, in fact, the poor $`g`$-collapse for $`\beta \simeq 2`$ is consistent with the infinite discontinuity of a FSS spectrum with $`D_a=2`$: curves for various $`L`$ smooth out such a discontinuity to different degrees, and underestimate $`g`$ for $`\beta \lesssim 2`$. Assuming a linear $`g`$ in the whole domain $`0\leq \beta \leq 2`$, and $`\tau _a-1=1/5`$ exactly, as suggested by the estimated initial slope, $`\langle a\rangle _L`$ should scale with $`\rho (1)=\mathrm{sup}_\beta \left[g(\beta )+\beta \right]=-2/5+2=1.6`$, in nice agreement with our determination $`\rho (1)=1.59\pm 0.02`$. Evidence of FSS for $`P_a`$ comes also from the fact that a standard FSS collapse ($`\tau _a=6/5`$; $`D_a=2`$) works very well (Fig. 1, inset). For $`\alpha \lesssim 2`$ the collapsed $`f`$ is very close to linear and overlaps with the expected $`g`$ ($`f\left(2\right)\simeq -0.39`$). Thus, an acceptable FSS form of $`P_s`$ should assume $`\tau _s=\tau _a`$, in order to be consistent with the well collapsed, initial part of the plots. Within such an assumption, $`D_s=2.5`$ would be imposed by the exact result $`\sigma (1)=2`$ (we find $`\sigma \left(1\right)=1.99\pm 0.02`$). Indeed, $`\mathrm{sup}_\alpha [f(\alpha )+\alpha ]=2`$ in such a case, the $`\mathrm{sup}`$ being attained at $`\alpha =D_s=2.5`$. The hypothetical linear spectrum should therefore have support in $`0\leq \alpha \leq 2.5`$, and satisfy $`f\left(5/2\right)=-1/2`$. In Ref. , a relatively limited analysis of $`\sigma `$ suggested $`\mathrm{\Delta }\sigma (q)\simeq 2.5`$ for $`q\gtrsim 1`$. Such a constant gap would leave room for an FSS approximation of $`P_s`$, with $`\tau _s\simeq 6/5`$ and $`D_s\simeq 2.5`$, and a linear $`\sigma `$ deviating from the measured one possibly only at very low $`q`$'s. According to Eq. (1), for such an effective $`P_s`$ one would also have $`D_s(2-\tau _s)\simeq 2=\sigma (1)`$. However, this does not agree with the plots, which show that $`f>-\infty `$ at least up to $`\alpha \simeq 3.0`$. Indeed, curves for various $`L`$ collapse rather well for $`\alpha \lesssim 3.0`$, and clearly suggest a support of $`f`$ asymptotically bounded by $`\alpha \simeq 3.0`$, consistent with the trend of $`\mathrm{\Delta }\sigma `$ (Table I). This bound follows also from the leftward trend of the curves for increasing $`L`$ in the region $`\alpha \gtrsim 3`$, where the collapse gets worse. Even an approximate $`\tau _s`$ exponent, so extensively discussed in the last decade, cannot be simultaneously consistent with the initial slope $`-1/5`$ of $`f`$, $`\mathrm{\Delta }\sigma (\infty )\simeq 3`$ and $`\sigma (1)=2`$. Full consistency, in particular with $`\sigma (1)=2`$, can be recovered by assuming that $`f`$ is indeed linear, with the same slope $`-1/5`$ as $`g`$, up to $`\alpha =2.5`$ ($`f(2.5)=-1/2`$), but has a nonlinear continuous drop in the range $`2.5<\alpha <3.0`$. The slight underestimation by the plot for $`\alpha \gtrsim 2.5`$ ($`f(5/2)\simeq -0.57`$) should again be imputed to roundoff and slower $`L`$-convergence near the major bending of $`f`$ . The striking difference described above between $`P_s`$ and $`P_a`$ suggests that the conditional pdf, $`C`$, such that $$P_s(s,L)=\int da\,C(s|a,L)P_a(a,L),$$ (5) should have an unusual structure. Within FSS one would assume $`C\propto \delta (s-a^\gamma )`$ for $`a<L^2`$ and $`L\rightarrow \infty `$. Such a $`C`$ would lead to $`\gamma =(\tau _s-1)/(\tau _a-1)`$. Thus, $`\tau _s=\tau _a`$ would imply $`\gamma =1`$, while $`D_s=2.5`$ would give $`\gamma =D_s/D_a=1.25`$, already contradicting FSS. In fact $`C`$ is a complex, broad pdf. Its properties elucidate and confirm our conclusions for $`P_a`$ and $`P_s`$. Fig.
2 reports values of $`\alpha =\mathrm{log}s/\mathrm{log}L`$ vs. $`\beta =\mathrm{log}a/\mathrm{log}L`$ for $`L=1024`$ avalanches. Ratios $`\gamma =\alpha /\beta `$ range between $`1`$ ($`s\sim a`$) and $`1.25`$, and their spread is not modified appreciably by sampling data for progressively larger $`L`$'s. For $`\beta \simeq 2`$ also some $`\alpha /\beta >1.25`$ are found, which are too rare to be displayed in Fig. 2. If a relation $`s\sim a^\gamma `$ held, the data should coalesce into a straight line with slope $`\gamma `$. To the contrary, the points are quite spread out and form an open angle, rather than a narrow strip, as is instead the case, e.g., for the radius of gyration, $`r`$, of the surface covered by the avalanche (Fig. 2). $`C`$-moments can be assumed to scale as $`\langle s^q\rangle _a=\int ds\,C(s|a,L)s^q\sim a^{\kappa (\beta ,q)}=L^{\beta \kappa (\beta ,q)}`$. One would hope exponents like $`\kappa `$ to be well defined, i.e. independent of $`\beta `$, for $`L\rightarrow \infty `$. Furthermore, with an FSS $`C\propto \delta (s-a^\gamma )`$, one should find $`\beta \kappa (\beta ,q)/q=\beta \gamma `$, independent of $`q`$. In Fig. 3 we plot $`\mathrm{log}(\langle s^q\rangle _a^{1/q})/\mathrm{log}L`$ versus $`\beta `$ for various $`q`$ and $`L=1024`$. The curves, which correspond to $`\beta \kappa (\beta ,q)/q`$ asymptotically, do not overlap. Moreover, for each $`q`$ the plots have a pronounced curvature, especially for $`\beta \gtrsim 1.7`$. This curvature does not decrease appreciably, or increase signalling crossovers, if one considers progressively larger $`L`$'s. Thus, in spite of the sizable spread of the various curves in Fig. 3 (which also persists for increasing $`L`$), the complex scaling of $`C`$-moments cannot even be classified as multifractal. Indeed, in that case one should still require $`\beta \kappa /q`$ to be linear in $`\beta `$. On the other hand, the $`\beta `$-nonlinearity described above is essential in view of our conclusions on $`g`$. Since $`\langle s^q\rangle _L=\int da\,P_a(a,L)\langle s^q\rangle _a`$, we must have $`\sigma (q)=\mathrm{sup}_\beta [g(\beta )+\beta \kappa (q,\beta )]`$. With $`g=-\beta /5`$, as argued above, the $`\mathrm{sup}`$ for $`q=1`$ should fall at $`\beta =2`$ (Fig. 3), and be equal to $`-2/5+2\kappa (1,2)`$. Thus, one must find $`\kappa (1,2)=1.2`$. If $`\kappa (1,\beta )`$ remained constant at its initial value $`1.04\pm 0.02`$ in the whole $`\beta `$-range, we could certainly not find $`\sigma (1)=2`$. Our determinations consistently extrapolate to $`\kappa (1,2)\simeq 1.20`$ (Fig. 3). The $`\beta `$-dependence of $`\kappa `$ is crucial for the consistency of the overall scaling properties and is determined by large dissipating avalanches. These are intermittent, edge events, which guarantee grain conservation at stationarity. Remarkably, if only nondissipative avalanches are sampled, one obtains a different conditional pdf, $`C_0`$, whose $`\kappa _0`$ is now independent of $`\beta `$ (Fig. 3). Thus, for $`C_0`$ it makes sense to discuss a multifractal spectrum $$\beta h(\gamma )=\frac{\mathrm{log}\left(\int _{L^{\beta \gamma }}^{\infty }ds\,C_0(s|a,L)\right)}{\mathrm{log}(L)}.$$ (6) As a function of $`\gamma `$, $`h`$ measures the density of points ($`\alpha =\beta \gamma ,\beta `$) along a narrow vertical strip at fixed $`\beta `$ in the plot corresponding to that in Fig. 3. Fig. 4 reports a data collapse of $`h`$-curves at $`\beta =1.6`$. Even if the collapse suffers from poor $`L`$-asymptoticity, all curves cross nicely at a point corresponding to $`\gamma \simeq 1.25`$, where their trend for increasing $`L`$ inverts itself. Collapses at different $`\beta `$'s give very similar results.
Thus, $`\gamma \simeq 1.25`$ qualifies as a maximum exponent for nondissipating, bulk avalanches. An independent, accurate determination based on rank ordering statistics gave $`\gamma =1.28\pm 0.03`$. $`\gamma =5/4`$ was conjectured as a unique exponent for all avalanches in Ref. . Different values, sometimes larger than $`1.25`$, were conjectured in the recent literature on the basis of FSS and numerical results. Figs. 3 and 4 show that the determination of $`\gamma `$ is not even a well defined task, and can be discussed, within a multifractal framework, only once edge avalanches are eliminated. Indeed, for nondissipative bulk avalanches, $`\gamma `$ has a $`\beta `$-independent, broad, nonlinear spectrum, $`h`$, with $`1`$ as the most probable, and $`1.25`$ as the extreme, most rare value. The above analysis identifies the inadequacies of the scaling theory in Ref. . Crucial to that approach was the assumption of a relation, $`n\sim a^{(2\gamma -2)/2}=a^{1/4}`$, satisfied by $`n`$, the maximum number of topplings in an avalanche (realized at the injection point), and $`a`$, seen as the area of the first in a succession of waves . We found that also the $`n`$-pdf at given $`a`$ is broad, and $`1/4`$ is only the approximate maximum value of the exponent realized by bulk sandpile dynamics. The constant gap $`\mathrm{\Delta }\sigma \simeq 2.5`$ at high $`q`$ postulated in Ref. would have been compatible with a narrow, or even point-like, spectrum for dissipating avalanches ($`\alpha \simeq 2.5`$). To the contrary, it turns out that also dissipating avalanches are multifractal ($`2.0\lesssim \alpha \lesssim 3.0`$), consistent with our more correct estimate of $`\mathrm{\Delta }\sigma `$ at high $`q`$. Moreover, a frequency $`\sim L^{-\zeta }`$ with $`\zeta =\frac{1}{2}`$ is precisely associated with dissipating avalanches with $`\alpha \simeq 2.50`$. In summary, we showed how the long-standing puzzle of 2D BTW scaling finds a solution in a strong violation of FSS by both $`P_s`$ and $`C`$. While $`P_a`$ obeys FSS, with $`D_a=2`$ and $`\tau _a=6/5`$ as the most plausible exponents, nonlinear multifractal spectra are needed to characterize $`P_s`$ and $`C_0`$. Our results throw light on the intriguing difficulty of this model. In spite of the several exactly known steady state properties, the belief that the 2D BTW should be “easily” solvable appears unjustified. The unusual scaling pattern discovered here, which is not found in simpler systems, like directed sandpiles , constitutes genuine novel physics and enhances the paradigmatic role of the BTW for statistical mechanics out of equilibrium. The most striking feature of this pattern is the bending of the curves in Fig. 3, determined by the effect of intermittent, edge avalanches, and essential in order to fulfill the exact constraint of local, Laplacian conservation at stationarity ($`\sigma (1)=2`$). In fact the very presence of edge avalanches allows the existence of a peculiar, bulk multifractal scaling of $`C_0`$. The first moment of the $`s`$-pdf of nondissipative avalanches scales as $`L^{1.8}`$, rather than $`L^2`$, as conservation imposes in the case of $`P_s`$. The role played here by intermittent edge avalanches has striking analogies with intermittent phenomena in fully developed turbulence, where they cause deviations from pure Kolmogorov scaling. Results of the present analysis have been used most recently to elucidate quantitative connections between BTW scaling and the notoriously difficult problem of turbulence in 3D . We acknowledge partial support from the European Network Contract No.
ERBFMRXCT980183. We are grateful to D. Dhar for useful criticism.
no-problem/9903/quant-ph9903096.html
ar5iv
text
# Adiabatic population transfer via multiple intermediate states ## I Introduction Stimulated Raman adiabatic passage (STIRAP) is an established technique for efficient population transfer in three-state systems in $`\mathrm{\Lambda }`$ or ladder configurations. In the original STIRAP, the population is transferred adiabatically from an initial state $`\psi _i`$ to a final target state $`\psi _f`$ via an intermediate state $`\psi _{int}`$ by means of two partly overlapping laser pulses, a pump pulse $`\mathrm{\Omega }_P(t)`$ linking states $`\psi _i`$ and $`\psi _{int}`$, and a Stokes pulse $`\mathrm{\Omega }_S(t)`$ linking states $`\psi _{int}`$ and $`\psi _f`$. By applying the Stokes pulse before the pump pulse (counterintuitive pulse order) and maintaining adiabatic-evolution conditions, one ensures population transfer from the initial state into the final state, with negligible population in the intermediate state at any time. This is so because the transfer is realized via an adiabatic state $`\phi _D(t)`$ (instantaneous eigenstate of the Hamiltonian), the so-called dark state, which is a linear superposition of states $`\psi _i`$ and $`\psi _f`$ only. In the ideal limit, unit transfer efficiency is guaranteed and the process is robust against moderate changes in the laser parameters. Various aspects of STIRAP have been studied in detail theoretically and experimentally . Among them are the effects of intermediate-state detuning and loss rate , two-photon detuning , nonadiabatic effects , multiple intermediate and final states . The success of STIRAP has encouraged its extension in various directions, such as population transfer in chainwise connected multistate systems and population transfer via continuum . In the present paper, we examine the possibility of achieving complete adiabatic population transfer in the case when the single intermediate state in STIRAP is replaced by $`N`$ states, each of which is coupled to the initial state $`\psi _i`$ with a coupling proportional to the pump field and to the final state $`\psi _f`$ with a coupling proportional to the Stokes field, thus forming a parallel multi-$`\mathrm{\Lambda }`$ system. In the first place, our work is motivated by the possibility that appreciable single-photon couplings to more than one intermediate state can exist in a realistic physical situation, for example, when the pump and Stokes lasers are tuned to a highly excited state in an atom, or in the case of population transfer in molecules. Such couplings may be present because, while very sensitive to the two-photon resonance , STIRAP is relatively insensitive to the single-photon detuning from the intermediate state , the detuning tolerance range being proportional to the squared pump and Stokes Rabi frequencies. It is therefore important to know if STIRAP-like transfer can take place in such systems. A second example of multi-$`\mathrm{\Lambda }`$ systems can be found in population transfer in multistate chains. It has been shown that when the couplings between the intermediate states are constant the multistate chain is mathematically equivalent to a multi-$`\mathrm{\Lambda }`$ system, in which the initial state $`\psi _i`$ is coupled simultaneously to $`N-2`$ dressed states which are in turn coupled to the final state $`\psi _f`$ .
It has been demonstrated that in certain domains of interaction parameters (Rabi frequencies and detunings), adiabatic population transfer in these multistate chains (respectively, in the equivalent multi-$`\mathrm{\Lambda }`$ systems) can take place, while in other domains it cannot. A third motivation for the present work is population transfer via continuum . In their pioneering work on this process , Carroll and Hioe have replaced the single discrete intermediate state in STIRAP by a quasicontinuum consisting of an infinite number of equidistant discrete states with energies going from $`-\infty `$ to $`+\infty `$. Moreover, each of these states was coupled with the same coupling $`\mathrm{\Omega }_P(t)`$ to the initial state $`\psi _i`$ and with the same coupling $`\mathrm{\Omega }_S(t)`$ to the final state $`\psi _f`$. Under these conditions Carroll and Hioe have shown that complete population transfer is achieved in the adiabatic limit with counterintuitively ordered pulses. It has been shown later that the Carroll-Hioe quasicontinuum is too simplified and symmetric, and that a real continuum has properties, such as a nonzero Fano parameter, which prevent complete population transfer . It is interesting to find out which of the simplifying assumptions in the Carroll-Hioe model (equidistant states, going to infinity both upwards and downwards, equal pump couplings and equal Stokes couplings) makes the dark state in the original STIRAP remain a zero-eigenvalue eigenstate of the multi-$`\mathrm{\Lambda }`$ Hamiltonian. In this paper, we consider the general asymmetric case of unequal couplings and a finite number of unevenly distributed intermediate states. Besides the Carroll-Hioe model , our paper generalizes the results of Coulston and Bergmann who were the first to consider the effects of multiple intermediate states in the simplest case of $`N=2`$ states and equal couplings $`\mathrm{\Omega }_P(t)`$ to state $`\psi _i`$ and equal couplings $`\mathrm{\Omega }_S(t)`$ to state $`\psi _f`$. We shall find the conditions for the existence of the dark state in multi-$`\mathrm{\Lambda }`$ systems as well as the conditions for the existence of a more general adiabatic-transfer state, which still links states $`\psi _i`$ and $`\psi _f`$ adiabatically but is allowed to contain transient contributions from the intermediate states. Our paper is organized as follows. In Sec. II we present the basic equations and definitions and review the standard three-state STIRAP. In Sec. III we discuss the case when all intermediate states are off single-photon resonance. In Sec. IV we consider the case when one of the intermediate states is on single-photon resonance and in Sec. V the case of degenerate resonant states. In Sec. VI we use the adiabatic-elimination approximation to gain further insight into the process. Finally, in Sec. VII we summarize the conclusions. ## II Basic equations and definitions ### A Basic STIRAP The probability amplitudes of the three states in STIRAP satisfy the Schrödinger equation ($`\hbar =1`$), $$i\frac{d}{dt}𝐜(t)=𝐇(t)𝐜(t),$$ (1) where $`𝐜(t)=[c_i(t),c_{int}(t),c_f(t)]^T`$.
In the rotating-wave approximation , the Hamiltonian is given by $$𝐇(t)=\left[\begin{array}{ccc}0& \mathrm{\Omega }_P(t)& 0\\ \mathrm{\Omega }_P(t)& \mathrm{\Delta }& \mathrm{\Omega }_S(t)\\ 0& \mathrm{\Omega }_S(t)& 0\end{array}\right].$$ (2) The time-varying Rabi frequencies $`\mathrm{\Omega }_P(t)`$ and $`\mathrm{\Omega }_S(t)`$ are given by products of the corresponding transition dipole moments and electric-field amplitudes. States $`\psi _i`$ and $`\psi _f`$ are assumed to be on two-photon resonance, while the intermediate state $`\psi _{int}`$ may be off single-photon resonance by a detuning $`\mathrm{\Delta }`$. The system is initially in state $`\psi _i`$, $`c_i(-\infty )=1,c_{int}(-\infty )=c_f(-\infty )=0`$, and the quantities of interest are the populations, and particularly the population of state $`\psi _f`$ at $`t\rightarrow +\infty `$, $`P_f(\infty )=|c_f(\infty )|^2`$. Throughout this paper, we assume that the two pulses are ordered counterintuitively, i.e., the Stokes pulse precedes the pump pulse, $$\underset{t\rightarrow -\infty }{\mathrm{lim}}\frac{\mathrm{\Omega }_P(t)}{\mathrm{\Omega }_S(t)}=0,\qquad \underset{t\rightarrow +\infty }{\mathrm{lim}}\frac{\mathrm{\Omega }_S(t)}{\mathrm{\Omega }_P(t)}=0,$$ (3) but we do not impose any restrictions on the particular time dependences in our analysis. In the numerical examples, we assume Gaussian shapes, $$\mathrm{\Omega }_P(t)=\mathrm{\Omega }_0e^{-(t-\tau )^2/T^2},\qquad \mathrm{\Omega }_S(t)=\mathrm{\Omega }_0e^{-(t+\tau )^2/T^2},$$ (4) where $`T`$ is the pulse width, $`2\tau `$ is the time delay between the pulses, and we take $`\tau =0.5T`$ everywhere. Furthermore, we choose the peak Rabi frequency $`\mathrm{\Omega }_0`$ to define the frequency and time scales. The essence of STIRAP is explained in terms of the so-called dark (or trapped) state $`\phi _D(t)`$, which is a zero-eigenvalue eigenstate of $`𝐇(t)`$, $$\phi _D(t)=\frac{\mathrm{\Omega }_S(t)}{\mathrm{\Omega }(t)}\psi _i-\frac{\mathrm{\Omega }_P(t)}{\mathrm{\Omega }(t)}\psi _f,$$ (5) where $`\mathrm{\Omega }(t)`$ is the mean-square Rabi frequency, $$\mathrm{\Omega }(t)=\sqrt{\mathrm{\Omega }_S^2(t)+\mathrm{\Omega }_P^2(t)}.$$ (6) For counterintuitively ordered pulses, Eq. (3), we have $`\phi _D(-\infty )=\psi _i`$ and $`\phi _D(+\infty )=\psi _f`$ and hence, the dark state connects adiabatically states $`\psi _i`$ and $`\psi _f`$. By maintaining adiabatic evolution (a condition which amounts to requiring that the pulse width $`T`$ is large or that the pulse areas are much larger than $`\pi `$), one can force the system to remain in the dark state and achieve complete population transfer from $`\psi _i`$ to $`\psi _f`$. Moreover, since $`\phi _D(t)`$ does not involve the intermediate state $`\psi _{int}`$, the latter is not populated in the adiabatic limit, even transiently, and hence, its properties, including decay, do not affect the transfer efficiency. ### B Multi-$`\mathrm{\Lambda }`$ STIRAP #### 1 The system In the multi-$`\mathrm{\Lambda }`$ generalization of STIRAP the single intermediate state is replaced by $`N`$ intermediate states $`\psi _1,\psi _2,\dots ,\psi _N`$, each of which is coupled to both states $`\psi _i`$ and $`\psi _f`$, as shown in Fig. 1. The system is again assumed to be initially in state $`\psi _i`$, $`c_i(-\infty )=1,`$ (8) $`c_f(-\infty )=c_1(-\infty )=\mathrm{\cdots }=c_N(-\infty )=0,`$ (9) and the objective is to transfer the population to state $`\psi _f`$.
The column vector of the probability amplitudes in the Schrödinger equation (1) is given by $`𝐜(t)=[c_i(t),c_1(t),\dots ,c_N(t),c_f(t)]^T`$ and the Hamiltonian reads $$𝐇=\left[\begin{array}{ccccccc}0& \mathrm{\Omega }_{P,1}& \mathrm{\Omega }_{P,2}& \mathrm{\cdots }& \mathrm{\Omega }_{P,N-1}& \mathrm{\Omega }_{P,N}& 0\\ \mathrm{\Omega }_{P,1}& \mathrm{\Delta }_1& 0& \mathrm{\cdots }& 0& 0& \mathrm{\Omega }_{S,1}\\ \mathrm{\Omega }_{P,2}& 0& \mathrm{\Delta }_2& \mathrm{\cdots }& 0& 0& \mathrm{\Omega }_{S,2}\\ \mathrm{\vdots }& \mathrm{\vdots }& \mathrm{\vdots }& \mathrm{\ddots }& \mathrm{\vdots }& \mathrm{\vdots }& \mathrm{\vdots }\\ \mathrm{\Omega }_{P,N-1}& 0& 0& \mathrm{\cdots }& \mathrm{\Delta }_{N-1}& 0& \mathrm{\Omega }_{S,N-1}\\ \mathrm{\Omega }_{P,N}& 0& 0& \mathrm{\cdots }& 0& \mathrm{\Delta }_N& \mathrm{\Omega }_{S,N}\\ 0& \mathrm{\Omega }_{S,1}& \mathrm{\Omega }_{S,2}& \mathrm{\cdots }& \mathrm{\Omega }_{S,N-1}& \mathrm{\Omega }_{S,N}& 0\end{array}\right].$$ (10) States $`\psi _i`$ and $`\psi _f`$ are again assumed to be on two-photon resonance while each intermediate state $`\psi _k`$ may be off single-photon resonance by a detuning $`\mathrm{\Delta }_k`$. The couplings $`\mathrm{\Omega }_{P,k}(t)`$ of the intermediate states $`\psi _k`$ to the initial state $`\psi _i`$ are proportional to the pump field, while the couplings $`\mathrm{\Omega }_{S,k}(t)`$ of the intermediate states to the final state $`\psi _f`$ are proportional to the Stokes field, $$\mathrm{\Omega }_{P,k}(t)=\alpha _k\mathrm{\Omega }_P(t),\qquad \mathrm{\Omega }_{S,k}(t)=\beta _k\mathrm{\Omega }_S(t).$$ (11) The dimensionless numbers $`\alpha _k`$ and $`\beta _k`$ characterize the relative strengths of the couplings while $`\mathrm{\Omega }_P(t)`$ and $`\mathrm{\Omega }_S(t)`$ are suitably chosen “units” of pump and Stokes Rabi frequencies. We fix these “units” by choosing $`\alpha _1=\beta _1=1`$, which means that $`\mathrm{\Omega }_P(t)`$ and $`\mathrm{\Omega }_S(t)`$ are the pump and Stokes Rabi frequencies for state $`\psi _1`$, $`\mathrm{\Omega }_P(t)\equiv \mathrm{\Omega }_{P,1}(t)`$ and $`\mathrm{\Omega }_S(t)\equiv \mathrm{\Omega }_{S,1}(t)`$, and the constants $`\alpha _k`$ and $`\beta _k`$ are the relative strengths of the pump and Stokes couplings of state $`\psi _k`$ with respect to those for state $`\psi _1`$. In real physical systems these constants contain Clebsch-Gordan coefficients and Franck-Condon factors. Moreover, without loss of generality we assume that $`\mathrm{\Omega }_P(t)`$, $`\mathrm{\Omega }_S(t)`$, $`\alpha _k`$, and $`\beta _k`$ are all positive. #### 2 Adiabatic-transfer state As has been emphasized in , a necessary condition for complete adiabatic population transfer in multistate systems is the existence of an adiabatic-transfer (AT) state $`\phi _T(t)`$, which is defined as a nondegenerate eigenstate of $`𝐇(t)`$ \[if the eigenstate $`\phi _T(t)`$ is degenerate, there will be resonant population transfer to the other such eigenstate(s) with a transition probability $`\mathrm{sin}^2\frac{1}{2}A`$, where $`A`$ is the pulse area of the nonadiabatic coupling between these eigenstates, with inevitable population loss; $`A`$ is independent of the pulse width $`T`$ and hence does not vanish in the adiabatic limit $`T\rightarrow \infty `$, unless the nonadiabatic coupling is identically equal to zero\] having the property $$\phi _T(t)=\{\begin{array}{cc}\psi _i,\hfill & t\rightarrow -\infty \hfill \\ \psi _f,\hfill & t\rightarrow +\infty \hfill \end{array},$$ (12) up to insignificant phase factors.
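As an illustration of the system just defined, Eq. (1) with the Hamiltonian (10) and the Gaussian pulses of Eq. (4) can be integrated numerically. The sketch below (Python; the coupling ratios and detunings are hypothetical examples, not parameters from the figures) propagates the amplitudes and prints the final-state population:

```python
# Sketch: propagate i dc/dt = H(t) c for the multi-Lambda system of Eq. (10),
# with the counterintuitively ordered Gaussian pulses of Eq. (4).
import numpy as np
from scipy.integrate import solve_ivp

def hamiltonian(t, alpha, beta, Delta, Omega0=1.0, T=50.0):
    tau = 0.5 * T                                  # pulse delay, as in the text
    Op = Omega0 * np.exp(-((t - tau) / T) ** 2)    # pump (arrives second)
    Os = Omega0 * np.exp(-((t + tau) / T) ** 2)    # Stokes (arrives first)
    N = len(alpha)
    H = np.zeros((N + 2, N + 2))
    H[0, 1:N + 1] = H[1:N + 1, 0] = alpha * Op     # psi_i -- psi_k couplings
    H[-1, 1:N + 1] = H[1:N + 1, -1] = beta * Os    # psi_k -- psi_f couplings
    H[np.arange(1, N + 1), np.arange(1, N + 1)] = Delta
    return H

alpha = np.array([1.0, 0.5])       # alpha_1 = beta_1 = 1 by the convention above
beta  = np.array([1.0, 2.0])
Delta = np.array([1.0, 2.0])       # illustrative detunings, in units of Omega_0
rhs = lambda t, c: -1j * hamiltonian(t, alpha, beta, Delta) @ c
c0 = np.zeros(len(alpha) + 2, complex)
c0[0] = 1.0                        # system starts in psi_i, Eqs. (8)-(9)
sol = solve_ivp(rhs, (-250.0, 250.0), c0, max_step=1.0, rtol=1e-8, atol=1e-10)
print("P_f =", abs(sol.y[-1, -1]) ** 2)   # close to 1 for adiabatic parameters
```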
The dark state (5), which is a coherent superposition of states $`\psi _i`$ and $`\psi _f`$ only, is the simplest example of such a state. Under certain (quite restrictive) conditions it is an eigenstate of the multi-$`\mathrm{\Lambda }`$ system too, but in the general case the Hamiltonian (10) does not have such an eigenstate. Under some more relaxed conditions, however, $`𝐇(t)`$ has an eigenstate with the more general properties (12) (which allow for nonzero transient contributions from the intermediate states), and we derive these conditions below. ## III The off-resonance case If all single-photon detunings are nonzero, $`\mathrm{\Delta }_k\ne 0`$ ($`k=1,2,\dots ,N`$), we have $`det𝐇=\mathrm{\Omega }_P^2\mathrm{\Omega }_S^2𝒟\left(𝒮_{\alpha ^2}𝒮_{\beta ^2}-𝒮_{\alpha \beta }^2\right)`$ (14) $`=\mathrm{\Omega }_P^2\mathrm{\Omega }_S^2\sum _{k=1}^{N}\sum _{l=k+1}^{N}𝒟_{kl}(\alpha _k\beta _l-\alpha _l\beta _k)^2,`$ (15) where $$𝒮_{\alpha ^2}=\sum _{k=1}^{N}\frac{\alpha _k^2}{\mathrm{\Delta }_k},\qquad 𝒮_{\beta ^2}=\sum _{k=1}^{N}\frac{\beta _k^2}{\mathrm{\Delta }_k},\qquad 𝒮_{\alpha \beta }=\sum _{k=1}^{N}\frac{\alpha _k\beta _k}{\mathrm{\Delta }_k},$$ (16) $$𝒟=\prod _{k=1}^{N}\mathrm{\Delta }_k,\qquad 𝒟_n=\prod _{k\ne n}\mathrm{\Delta }_k,\qquad 𝒟_{mn}=\prod _{k\ne m,n}\mathrm{\Delta }_k.$$ (17) Hence, $`det𝐇\ne 0`$ in the general case. This means that, unlike in STIRAP ($`N=1`$), $`𝐇(t)`$ does not necessarily have a zero eigenvalue. We shall consider first in Sec. III A the case when a zero eigenvalue exists (with the anticipation, in analogy with STIRAP, that the corresponding eigenstate is the desired AT state) and then in Sec. III B the case when it does not exist. ### A A zero eigenvalue #### 1 Condition for a zero eigenvalue The condition for a zero eigenvalue is given by $$𝒮_{\alpha ^2}𝒮_{\beta ^2}-𝒮_{\alpha \beta }^2=\sum _{k=1}^{N}\sum _{l=k+1}^{N}\frac{(\alpha _k\beta _l-\alpha _l\beta _k)^2}{\mathrm{\Delta }_k\mathrm{\Delta }_l}=0.$$ (18) Obviously, this condition depends only on the relative coupling strengths and the detunings, but neither on time nor on laser intensities. Hence, it remains unchanged as the adiabatic limit is approached. The eigenstate corresponding to the zero eigenvalue most generally reads $$\phi _0(t)=a_i(t)\psi _i+a_f(t)\psi _f+\sum _{k=1}^{N}a_k(t)\psi _k.$$ (19) The amplitudes of the intermediate states are given by $$a_k(t)=-\frac{\mathrm{\Omega }_{P,k}(t)}{\mathrm{\Delta }_k}a_i(t)-\frac{\mathrm{\Omega }_{S,k}(t)}{\mathrm{\Delta }_k}a_f(t),$$ (20) with $`k=1,2,\dots ,N`$. Obviously, they may be nonzero at finite times but vanish at $`\pm \infty `$, $`a_k(\pm \infty )=0`$, because both $`\mathrm{\Omega }_{P,k}(t)`$ and $`\mathrm{\Omega }_{S,k}(t)`$ vanish at infinity. The amplitudes of the initial and final states satisfy both equations $`𝒮_{\alpha ^2}\mathrm{\Omega }_P(t)a_i(t)+𝒮_{\alpha \beta }\mathrm{\Omega }_S(t)a_f(t)=0,`$ (22) $`𝒮_{\alpha \beta }\mathrm{\Omega }_P(t)a_i(t)+𝒮_{\beta ^2}\mathrm{\Omega }_S(t)a_f(t)=0,`$ (23) which are linearly dependent because of Eq. (18). We are going to consider two cases: when each term in the sum (18) is zero and when the individual terms may be nonzero but the total sum vanishes.
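Condition (18), and the related conditions (27) and (37) met below, involve only the three $`𝒮`$-sums, so they are straightforward to check numerically for given couplings and detunings; a small sketch with hypothetical parameters:

```python
# Evaluate the S-sums of Eq. (16) and test the zero-eigenvalue condition (18).
import numpy as np

def s_sums(alpha, beta, Delta):
    Sa  = np.sum(alpha ** 2 / Delta)      # S_{alpha^2}
    Sb  = np.sum(beta ** 2 / Delta)       # S_{beta^2}
    Sab = np.sum(alpha * beta / Delta)    # S_{alpha beta}
    return Sa, Sb, Sab

alpha = np.array([1.0, 0.5])
beta  = np.array([1.0, 2.0])
Delta = np.array([0.5, -0.5])             # hypothetical detunings
Sa, Sb, Sab = s_sums(alpha, beta, Delta)
print("zero eigenvalue, Eq. (18):", np.isclose(Sa * Sb - Sab ** 2, 0.0))
print("all three sums nonzero  :", all(abs(x) > 1e-12 for x in (Sa, Sb, Sab)))
```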
#### 2 Proportional couplings When $$\frac{\alpha _1}{\beta _1}=\frac{\alpha _2}{\beta _2}=\mathrm{\cdots }=\frac{\alpha _N}{\beta _N}=1,$$ (24) each term in Eq. (18) vanishes and the zero eigenvalue exists regardless of the detunings. Condition (24), which is essentially a condition on the transition dipole moments, means that for each intermediate state $`\psi _k`$, the ratio $`\mathrm{\Omega }_{P,k}(t)/\mathrm{\Omega }_{S,k}(t)`$ between the couplings to states $`\psi _i`$ and $`\psi _f`$ is the same and does not depend on $`k`$. It follows from Eq. (24) that $`𝒮_{\alpha ^2}=𝒮_{\beta ^2}=𝒮_{\alpha \beta }\equiv 𝒮`$ and we find from Eq. (22) that $`\mathrm{\Omega }_P(t)a_i(t)+\mathrm{\Omega }_S(t)a_f(t)=0`$, provided $`𝒮\ne 0`$. Then it follows from Eq. (20) that all intermediate-state amplitudes are zero, $`a_k(t)=0`$. It is easily seen that the zero-eigenvalue eigenstate coincides with the dark state (5) and in the adiabatic limit it transfers the population from state $`\psi _i`$ to state $`\psi _f`$, bypassing the intermediate states. Hence, when condition (24) is fulfilled the multi-$`\mathrm{\Lambda }`$ system behaves very much like the single-$`\mathrm{\Lambda }`$ system in STIRAP. This confirms and generalizes the conclusions in that complete population transfer is possible when all $`\alpha _k`$ and $`\beta _k`$ are equal. #### 3 Arbitrary couplings Suppose now that condition (24) is not fulfilled while condition (18) still holds. This can be achieved by changing the two laser frequencies simultaneously while maintaining the two-photon resonance, which corresponds to adding a common detuning $`\mathrm{\Delta }`$ to all single-photon detunings, $`\mathrm{\Delta }_k\rightarrow \mathrm{\Delta }_k+\mathrm{\Delta }`$ ($`k=1,2,\dots ,N`$). For $`N`$ intermediate states, there are $`N-2`$ values of $`\mathrm{\Delta }`$ for which condition (18) is satisfied. In the zero-eigenvalue eigenstate (19) the amplitudes of the intermediate states (20) are generally nonzero. However, if $`𝒮_{\alpha ^2}`$, $`𝒮_{\beta ^2}`$, and $`𝒮_{\alpha \beta }`$ are nonzero we have $`|a_i(-\infty )|=1`$ and $`|a_f(+\infty )|=1`$, and hence, this eigenstate is an AT state, as defined by Eq. (12). #### 4 The case of vanishing $`𝒮_{\alpha ^2}`$, $`𝒮_{\beta ^2}`$, and $`𝒮_{\alpha \beta }`$ Let us now suppose that some of the sums $`𝒮_{\alpha ^2}`$, $`𝒮_{\beta ^2}`$ and $`𝒮_{\alpha \beta }`$ are equal to zero. If $`𝒮_{\alpha ^2}\ne 0`$ and $`𝒮_{\beta ^2}=0`$ (which also requires that $`𝒮_{\alpha \beta }=0`$) then it follows from Eq. (22) that $`a_i(t)\equiv 0`$. Hence, we have $`|a_f(\pm \infty )|=1`$ and $`\phi _0(\pm \infty )=\psi _f`$ (up to an irrelevant phase factor) and there is no AT state. A similar conclusion holds when $`𝒮_{\alpha ^2}=0`$ and $`𝒮_{\beta ^2}\ne 0`$: then $`a_f(t)\equiv 0`$, $`|a_i(\pm \infty )|=1`$, and $`\phi _0(\pm \infty )=\psi _i`$. The case when $`𝒮_{\alpha ^2}=𝒮_{\beta ^2}=𝒮_{\alpha \beta }=0`$ is different because then, as can easily be shown, the Hamiltonian has two zero eigenvalues. Consequently, there will be resonant transitions between the corresponding degenerate eigenstates, even in the adiabatic limit, which again prohibit complete STIRAP-like population transfer. We shall illustrate this conclusion by calculating the final-state population for proportional couplings (24). Due to the degeneracy, we have some freedom in choosing the corresponding pair of orthonormal eigenstates.
As one of them, it is convenient to take state (5), $`\phi _0^{(1)}(t)=\phi _D(t)`$, because $`\phi _D(-\infty )=\psi _i`$ and $`\phi _D(+\infty )=\psi _f`$. The other zero-eigenvalue adiabatic state can be determined by Gram-Schmidt orthogonalization and is $$\phi _0^{(2)}(t)=\nu (t)\left[\frac{\mathrm{\Omega }_P(t)}{\mathrm{\Omega }(t)}\psi _i+\frac{\mathrm{\Omega }_S(t)}{\mathrm{\Omega }(t)}\psi _f-\mathrm{\Omega }(t)\sum _{k=1}^{N}\frac{\alpha _k}{\mathrm{\Delta }_k}\psi _k\right],$$ (25) with $`\nu (t)=\left[1+\mathrm{\Omega }^2(t)\sum _{k=1}^N\alpha _k^2/\mathrm{\Delta }_k^2\right]^{-1/2}`$, where $`\mathrm{\Omega }(t)`$ is given by Eq. (6). In the adiabatic limit, the population of state $`\psi _f`$, which is equal to the probability of remaining in the adiabatic state $`\phi _0^{(1)}`$, is given by $$P_f\approx \mathrm{cos}^2\left[\int _{-\infty }^{\infty }\dot{\vartheta }(t)\nu (t)\,dt\right],$$ (26) where $`\dot{\vartheta }(t)\nu (t)=|\langle \dot{\phi }_0^{(1)}(t)|\phi _0^{(2)}(t)\rangle |`$ is the nonadiabatic coupling between the two degenerate states and $`\mathrm{tan}\vartheta (t)=\mathrm{\Omega }_P(t)/\mathrm{\Omega }_S(t)`$. In conclusion, when $`𝐇(t)`$ has a zero eigenvalue STIRAP-like transfer is possible only if $$𝒮_{\alpha ^2}\ne 0,\qquad 𝒮_{\beta ^2}\ne 0,\qquad 𝒮_{\alpha \beta }\ne 0.$$ (27) #### 5 Examples In Fig. 2, the final-state population $`P_f`$ in the case of $`N=3`$ intermediate states is plotted against the pulse width $`T`$ for four combinations of coupling strengths and detunings. The solid curve is for a case when condition (24) is satisfied and the zero-eigenvalue eigenstate is the dark state (5). The dotted curve is for a case when condition (24) is not satisfied but conditions (18) and (27) are, and the zero-eigenvalue eigenstate is an AT state, Eq. (12). In both cases, the final-state population $`P_f`$ approaches unity as the pulse width increases and the excitation becomes increasingly adiabatic. The dashed curve is for a case when $`𝒮_{\alpha ^2}=𝒮_{\beta ^2}=𝒮_{\alpha \beta }=0`$; then, as the adiabatic limit is approached, $`P_f`$ tends to the constant value $`P_f\approx 0.442`$, predicted by Eq. (26). Finally, the dashed-dotted curve is for a case when $`𝒮_{\beta ^2}=𝒮_{\alpha \beta }=0`$ and $`𝒮_{\alpha ^2}\ne 0`$; then, in agreement with our analysis, $`P_f`$ tends to zero in the adiabatic limit. In Fig. 3, we show the time evolutions of the populations in the case of $`N=3`$ intermediate states under almost adiabatic conditions. The upper plot is for the solid-curve case in Fig. 2 when the zero-eigenvalue eigenstate is the dark state (5); consequently, the intermediate states remain virtually unpopulated. The lower plot is for the dotted-curve case in Fig. 2 when the zero-eigenvalue eigenstate is an AT state with nonzero components from the intermediate states; consequently, these states acquire some transient populations. ### B No zero eigenvalue When condition (18) is not satisfied, $`det𝐇\ne 0`$ and the Hamiltonian (10) does not have a zero eigenvalue. This fact alone does not mean much because any chosen eigenvalue can be made zero by shifting the zero energy level with an appropriate time-dependent phase transformation. More importantly, an AT state $`\phi _T`$, as defined by Eq. (12), may or may not exist, and the conditions for its existence are derived below. The derivation is similar in spirit to that for multistate chains in . By setting $`\mathrm{\Omega }_P=\mathrm{\Omega }_S=0`$ in Eq.
(10), we find that there are two eigenvalues which vanish as $`t\rightarrow \pm \infty `$ (although they are nonzero at finite times), while the others tend to the (nonzero) detunings $`\mathrm{\Delta }_k`$. At $`\pm \infty `$, each of the two eigenstates corresponding to the vanishing eigenvalues is equal to either state $`\psi _i`$ or state $`\psi _f`$ or a superposition of them. Obviously, if an AT state exists, its eigenvalue $`\lambda _T`$ should be one of these eigenvalues. Hence, we have to find the asymptotic behaviors of these eigenvalues and the corresponding eigenstates, i.e., we need to determine how the degeneracy of these eigenvalues is lifted by the laser fields. We note here that only one eigenvalue vanishes when $`\mathrm{\Omega }_P=0`$ and $`\mathrm{\Omega }_S\ne 0`$, which happens at certain early times, or when $`\mathrm{\Omega }_S=0`$ and $`\mathrm{\Omega }_P\ne 0`$, which happens at certain late times. #### 1 Early-time eigenvalues Let us first consider the case of early times ($`t\rightarrow -\infty `$). It follows from the above remarks that as soon as the pulse $`\mathrm{\Omega }_S`$ arrives, one of the vanishing eigenvalues, $`\lambda _l^{-}`$ (the “large” one), departs from zero, while the other, $`\lambda _s^{-}`$ (the “small” one), remains zero until the pulse $`\mathrm{\Omega }_P`$ arrives later. Since $`\lambda _s^{-}`$ vanishes when $`\mathrm{\Omega }_P=0`$ and $`\lambda _l^{-}`$ vanishes when $`\mathrm{\Omega }_S=0`$, $`\lambda _s^{-}`$ should be proportional to some power of $`\mathrm{\Omega }_P`$ and $`\lambda _l^{-}`$ to some power of $`\mathrm{\Omega }_S`$. Since at those times $`\mathrm{\Omega }_P/\mathrm{\Omega }_S\rightarrow 0`$, the relation $`\left|\lambda _s^{-}\right|\ll \left|\lambda _l^{-}\right|`$ holds; hence, the names “small” and “large”. To determine $`\lambda _s^{-}`$, we consider the eigenvalue equation, $$det(𝐇-\lambda \mathrm{𝟏})=h_0+h_1\lambda +\mathrm{\cdots }+h_{N+2}\lambda ^{N+2}=0,$$ (28) as an implicit definition of the functional dependence of $`\lambda _s^{-}`$ on $`\mathrm{\Omega }_P`$. Note that $`h_0\equiv det𝐇`$. Since all $`h_k`$ depend on $`\mathrm{\Omega }_P`$ only via $`\mathrm{\Omega }_P^2`$, $`\lambda _s^{-}`$ has a Taylor expansion in terms of $`\mathrm{\Omega }_P^2`$. We differentiate Eq. (28) with respect to $`\mathrm{\Omega }_P^2`$, set $`\mathrm{\Omega }_P^2=0`$ and $`\lambda _s^{-}(\mathrm{\Omega }_P^2=0)=0`$, and obtain $$h_0^{\prime }(0)+h_1(0)\lambda _s^{-\prime }(0)=0,$$ (29) where a prime denotes $`d/d\mathrm{\Omega }_P^2`$ and $`h_0^{\prime }(0)=\mathrm{\Omega }_S^2𝒟(𝒮_{\alpha ^2}𝒮_{\beta ^2}-𝒮_{\alpha \beta }^2),`$ (31) $`h_1(0)=\mathrm{\Omega }_S^2𝒟𝒮_{\beta ^2}.`$ (32) From here we find $`\lambda _s^{-\prime }(0)`$, replace it in the Taylor expansion of $`\lambda _s^{-}\left(\mathrm{\Omega }_P^2\right)`$, and keeping the lowest-order nonzero term only, we obtain $$\lambda _s^{-}\approx -\frac{𝒮_{\alpha ^2}𝒮_{\beta ^2}-𝒮_{\alpha \beta }^2}{𝒮_{\beta ^2}}\mathrm{\Omega }_P^2.$$ (33) In order to find the dependence of $`\lambda _l^{-}`$ on $`\mathrm{\Omega }_S^2`$, we set $`\mathrm{\Omega }_P=0`$ in Eq.
(28), divide by $`\lambda `$ (which amounts to removing the root $`\lambda _s^{-}`$), differentiate with respect to $`\mathrm{\Omega }_S^2`$, set $`\mathrm{\Omega }_S^2=0`$ and $`\lambda _l^{-}(\mathrm{\Omega }_S^2=0)=0`$, and find $$\lambda _l^{-}\approx -𝒮_{\beta ^2}\mathrm{\Omega }_S^2.$$ (34) #### 2 Late-time eigenvalues In a similar way, we find that at late times, when $`\mathrm{\Omega }_S/\mathrm{\Omega }_P\rightarrow 0`$, the two vanishing eigenvalues behave as $$\lambda _s^{+}\approx -\frac{𝒮_{\alpha ^2}𝒮_{\beta ^2}-𝒮_{\alpha \beta }^2}{𝒮_{\alpha ^2}}\mathrm{\Omega }_S^2,$$ (35) $$\lambda _l^{+}\approx -𝒮_{\alpha ^2}\mathrm{\Omega }_P^2.$$ (36) #### 3 Connectivity and AT condition It is easy to verify that the eigenstates corresponding to $`\lambda _s^{-}`$ and $`\lambda _l^{+}`$ coincide with state $`\psi _i`$, while those corresponding to $`\lambda _l^{-}`$ and $`\lambda _s^{+}`$ coincide with state $`\psi _f`$. Hence, the AT state $`\phi _T`$, if it exists, must have an eigenvalue that coincides with $`\lambda _s^{-}`$ at early times and with $`\lambda _s^{+}`$ at late times. It should be emphasized that $`\lambda _s^{-}`$ and $`\lambda _s^{+}`$ do not necessarily correspond to the same eigenvalue and it may happen that $`\lambda _s^{-}`$ is linked to $`\lambda _l^{+}`$ rather than $`\lambda _s^{+}`$; then an AT state does not exist. In any case, the upper (the lower) of the two eigenvalues at $`-\infty `$ is connected to the upper (the lower) of the two eigenvalues at $`+\infty `$. Since $`\left|\lambda _l^{-}\right|\gg \left|\lambda _s^{-}\right|`$ and $`\left|\lambda _l^{+}\right|\gg \left|\lambda _s^{+}\right|`$, the linkage is determined by the signs of the “large” eigenvalues $`\lambda _l^{-}`$ and $`\lambda _l^{+}`$. If they have the same signs, they will be both above (or below) $`\lambda _s^{-}`$ and $`\lambda _s^{+}`$ and hence, the desired linkages $`\lambda _l^{-}\leftrightarrow \lambda _l^{+}`$ and $`\lambda _s^{-}\leftrightarrow \lambda _s^{+}`$ will take place. If $`\lambda _l^{-}`$ and $`\lambda _l^{+}`$ have opposite signs they cannot be connected, because such an eigenvalue would cross the one linking $`\lambda _s^{-}`$ and $`\lambda _s^{+}`$, which is impossible. Thus, from this analysis and Eqs. (34) and (36) we conclude that the necessary and sufficient condition for the existence of an adiabatic-transfer state is $$𝒮_{\alpha ^2}𝒮_{\beta ^2}>0.$$ (37) It is easy to see that condition (18) for the existence of a zero eigenvalue, along with condition (27) for the existence of an AT state in this case, agrees with condition (37). Indeed, we have $`𝒮_{\alpha ^2}𝒮_{\beta ^2}=𝒮_{\alpha \beta }^2>0`$. Moreover, the zero eigenvalue is reproduced correctly by Eqs. (33) and (35). #### 4 The case of vanishing $`𝒮_{\alpha ^2}`$ and $`𝒮_{\beta ^2}`$ The derivation of the AT condition (37) suggests that both sums $`𝒮_{\alpha ^2}`$ and $`𝒮_{\beta ^2}`$ should be nonzero. Let us examine the case when one of them is zero, e.g., $`𝒮_{\beta ^2}=0`$. By going through the derivation that leads to Eq. (33) we find that now, as evident from Eq. (32), we have $`h_1(0)=0`$. It follows from Eq. (28) that now two, rather than one, eigenvalues vanish when $`\mathrm{\Omega }_P=0`$ and $`\mathrm{\Omega }_S\ne 0`$ at early times (because then $`h_0\equiv det𝐇=0`$ too). Hence, in contrast to the case of $`𝒮_{\beta ^2}\ne 0`$, the arrival of the Stokes pulse $`\mathrm{\Omega }_S(t)`$ does not make one of these eigenvalues depart from zero but rather their degeneracy is lifted only with the arrival of the pump pulse $`\mathrm{\Omega }_P(t)`$ later.
The implication is that the initial state $`\psi _i`$ cannot be identified with a single adiabatic state at $`t\rightarrow -\infty `$ but is rather equal to a superposition of two adiabatic states \[the initial state $`\psi _i`$ is associated with a single adiabatic state at $`-\infty `$ only when the arrival of the pump pulse lifts the degeneracy of one and only one eigenvalue; similarly, the final state $`\psi _f`$ is associated with a single adiabatic state at $`+\infty `$ only when the vanishing Stokes pulse restores the degeneracy of one and only one eigenvalue\]; hence, there is no AT state. A similar conclusion holds in the case when $`𝒮_{\alpha ^2}=0`$: then the final state $`\psi _f`$ cannot be identified with a single adiabatic state at $`t\rightarrow +\infty `$. Therefore, for $`𝒮_{\alpha ^2}=0`$ or $`𝒮_{\beta ^2}=0`$ an AT state does not exist, as follows formally from Eq. (37). The situation with the mixed sum $`𝒮_{\alpha \beta }`$ is different. In the above derivation the condition $`𝒮_{\alpha \beta }\ne 0`$ was not required anywhere, which means that an AT state exists even for $`𝒮_{\alpha \beta }=0`$. We will return to this problem in Sec. VI. #### 5 Examples In Fig. 4 we have plotted the time evolutions of the eigenvalues (upper row of figures) in the case of $`N=2`$ intermediate states for two combinations of detunings and the same set of coupling strengths. The solid curves are calculated numerically and the dashed curves show our asymptotic approximations (33)-(36). The bottom row of figures shows the components of the eigenstate which is equal to bare state $`\psi _i`$ initially; it corresponds to the eigenvalue whose asymptotics at early times is described by $`\lambda _s^{-}`$. Hence, the squared components of this eigenstate give the populations of the four bare states in the adiabatic limit. As we can see in the left column of figures, there is an eigenvalue whose asymptotics is given by $`\lambda _s^{-}`$ at early times and by $`\lambda _s^{+}`$ at late times; this is so because condition (37) is satisfied in this case. The corresponding eigenstate is an AT state, as evident from the bottom left figure, because it is equal to state $`\psi _i`$ initially and to state $`\psi _f`$ finally. For the case shown in the right column of figures, there is no AT eigenvalue because the asymptotic behaviors $`\lambda _s^{-}`$ and $`\lambda _s^{+}`$ are related to two different eigenvalues; this is so because condition (37) is not satisfied in this case. Consequently, there is no AT eigenstate, as evident from the bottom right figure, because the shown eigenstate is equal to state $`\psi _i`$ both initially and finally. In Fig. 5 we have plotted the final-state population $`P_f`$ as a function of the pulse width $`T`$ in the case of $`N=2`$ intermediate states for the same two sets of interaction parameters as in Fig. 4. The solid and dashed curves in Fig. 5 correspond to the left and right columns in Fig. 4, respectively. As follows from Eq. (37), an AT state exists for the solid curve and does not exist for the dashed curve. Indeed, as seen in the figure, as $`T`$ increases, the final-state population $`P_f`$ approaches unity for the solid curve and zero for the dashed curve. In Fig. 6 the final-state population $`P_f`$ is plotted as a function of the single-photon detuning $`\mathrm{\Delta }`$ of the pump and Stokes fields from the lowest intermediate state.
We have taken two intermediate states $`\psi _1`$ and $`\psi _2`$ with $`\mathrm{\Delta }_1=\mathrm{\Delta }`$, $`\mathrm{\Delta }_2=\mathrm{\Omega }_0+\mathrm{\Delta }`$. The coupling strengths are taken the same as in Figs. 4 and 5. In the region where an AT state does not exist, $`-0.8\mathrm{\Omega }_0\lesssim \mathrm{\Delta }\lesssim -0.2\mathrm{\Omega }_0`$ \[calculated from Eq. (37)\], the transfer efficiency is low, while outside it the transfer efficiency is almost unity, as a result of the existence of an AT state. The left and right columns of plots in Fig. 4 (respectively, the solid and dashed curves in Fig. 5) correspond to detunings $`\mathrm{\Delta }=0.5\mathrm{\Omega }_0`$ and $`\mathrm{\Delta }=-0.5\mathrm{\Omega }_0`$, respectively. The inset shows how the transfer efficiency eventually decreases at large detunings which, as in STIRAP , is due to deteriorating adiabaticity. In Fig. 7, we have plotted the final-state population $`P_f`$ as a function of the single-photon detuning $`\mathrm{\Delta }`$ of the pump and Stokes fields from the lowest intermediate state for the case of $`N=5`$ equidistant intermediate states and randomly taken coupling strengths $`\alpha _k`$ and $`\beta _k`$. As Fig. 7 shows, there are three distinct domains of single-photon detunings: $`\mathrm{\Delta }\lesssim -4\mathrm{\Omega }_0`$, $`-4\mathrm{\Omega }_0\lesssim \mathrm{\Delta }\lesssim 0`$, and $`\mathrm{\Delta }\gtrsim 0`$. For $`\mathrm{\Delta }\lesssim -4\mathrm{\Omega }_0`$ and $`\mathrm{\Delta }\gtrsim 0`$, we have $`P_f\approx 1`$, whereas for $`-4\mathrm{\Omega }_0\lesssim \mathrm{\Delta }\lesssim 0`$, there are alternating regions of high and low transfer efficiency. This behavior is easily explained by the AT condition (37). As $`\mathrm{\Delta }`$ changes, we pass through the zero points of the sums $`𝒮_{\alpha ^2}(\mathrm{\Delta })`$ and $`𝒮_{\beta ^2}(\mathrm{\Delta })`$, thus going from an interval where these sums have the same sign (where $`P_f\approx 1`$) to an interval where they have opposite signs (where $`P_f\approx 0`$) and vice versa. Obviously, for sufficiently large and negative $`\mathrm{\Delta }`$, both $`𝒮_{\alpha ^2}(\mathrm{\Delta })`$ and $`𝒮_{\beta ^2}(\mathrm{\Delta })`$ are always negative and condition (37) is satisfied, which ensures the existence of an AT state and STIRAP-like unit transfer efficiency. Similarly, for sufficiently large and positive $`\mathrm{\Delta }`$, both $`𝒮_{\alpha ^2}(\mathrm{\Delta })`$ and $`𝒮_{\beta ^2}(\mathrm{\Delta })`$ are always positive and we have unit transfer efficiency there too. The conclusion is that whenever the pump and Stokes lasers are tuned below or above all intermediate states, STIRAP-like transfer is always guaranteed in the adiabatic limit. When the lasers are tuned within the manifold of intermediate states such a transfer may or may not take place, depending on whether the AT condition (37) is satisfied or not. Furthermore, tuning the lasers below or above all states appears to be even more reasonable than tuning on resonance with an intermediate state because adiabaticity is achieved more easily, as the curves for $`\mathrm{\Omega }_0T=20`$ in Figs. 6 and 7 show. Moreover, as follows from Eq. (20), the transient intermediate-state populations $`P_k(t)`$ decrease with the detuning as $`\mathrm{\Delta }_k^{-2}`$.
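The domain structure just described follows directly from condition (37), so the high- and low-efficiency windows of Figs. 6 and 7 can be anticipated without solving the dynamics. A sketch (the couplings below are placeholders standing in for the random values used for Fig. 7):

```python
# Scan the AT-state condition (37) over the common detuning Delta for
# N = 5 equidistant intermediate states, as in Fig. 7.
import numpy as np

alpha = np.array([1.0, 0.7, 1.3, 0.4, 0.9])   # hypothetical random couplings
beta  = np.array([1.0, 1.1, 0.6, 1.5, 0.8])
offsets = np.arange(5.0)      # states at Delta, Delta+1, ... (units of Omega_0)
for D in np.linspace(-6.0, 2.0, 17):
    Dk = offsets + D
    if np.any(Dk == 0.0):                     # on resonance; see Sec. IV below
        print(f"{D:+.2f}: resonant with an intermediate state")
        continue
    Sa = np.sum(alpha ** 2 / Dk)
    Sb = np.sum(beta ** 2 / Dk)
    print(f"{D:+.2f}: AT state {'exists' if Sa * Sb > 0 else 'absent'}")
```

For $`\mathrm{\Delta }\lesssim -4\mathrm{\Omega }_0`$ and $`\mathrm{\Delta }\gtrsim 0`$ both sums have the same sign, reproducing the outer high-efficiency domains.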
## IV A resonant intermediate state ### A A zero eigenvalue If a certain detuning is equal to zero, $`\mathrm{\Delta }_n=0`$, we have $`det𝐇=\mathrm{\Omega }_P^2\mathrm{\Omega }_S^2\sum _{k\ne n}𝒟_{nk}(\alpha _k\beta _n-\alpha _n\beta _k)^2`$ (39) $`=\mathrm{\Omega }_P^2\mathrm{\Omega }_S^2𝒟_n\left[\alpha _n^2𝒮_{\beta ^2}^{(n)}-2\alpha _n\beta _n𝒮_{\alpha \beta }^{(n)}+\beta _n^2𝒮_{\alpha ^2}^{(n)}\right],`$ (40) where the $`𝒮^{(n)}`$-sums are defined as the $`𝒮`$-sums but without the $`n`$-th terms, $$𝒮_{\alpha ^2}^{(n)}=\sum _{k\ne n}\frac{\alpha _k^2}{\mathrm{\Delta }_k},\qquad 𝒮_{\beta ^2}^{(n)}=\sum _{k\ne n}\frac{\beta _k^2}{\mathrm{\Delta }_k},\qquad 𝒮_{\alpha \beta }^{(n)}=\sum _{k\ne n}\frac{\alpha _k\beta _k}{\mathrm{\Delta }_k}.$$ (41) The intermediate-state amplitudes, except $`a_n(t)`$, are given by Eq. (20). The equation for $`a_n(t)`$ is replaced by $$\mathrm{\Omega }_{P,n}(t)a_i(t)+\mathrm{\Omega }_{S,n}(t)a_f(t)=0,$$ (42) while Eqs. (22) and (23) are replaced by $`-𝒮_{\alpha ^2}^{(n)}\mathrm{\Omega }_P(t)a_i(t)-𝒮_{\alpha \beta }^{(n)}\mathrm{\Omega }_S(t)a_f(t)+\alpha _na_n(t)=0,`$ (44) $`-𝒮_{\alpha \beta }^{(n)}\mathrm{\Omega }_P(t)a_i(t)-𝒮_{\beta ^2}^{(n)}\mathrm{\Omega }_S(t)a_f(t)+\beta _na_n(t)=0.`$ (45) We can again consider the two possibilities: when condition (24) is fulfilled and when it is not. In the case of proportional couplings (24), it can readily be shown that the zero-eigenvalue adiabatic state is given again by the dark state (5). Moreover, now condition (27) is not required and the dark state $`\phi _D(t)`$ is a zero-eigenvalue eigenstate of $`𝐇(t)`$ even when $`𝒮_{\alpha ^2}^{(n)}`$, $`𝒮_{\beta ^2}^{(n)}`$, and $`𝒮_{\alpha \beta }^{(n)}`$ vanish \[note that, unlike the off-resonance case with $`𝒮_{\alpha ^2}=𝒮_{\beta ^2}=𝒮_{\alpha \beta }=0`$ (Sec. III A 4), no additional zero eigenvalues exist for $`\mathrm{\Delta }_n=0`$ when $`𝒮_{\alpha ^2}^{(n)}=𝒮_{\beta ^2}^{(n)}=𝒮_{\alpha \beta }^{(n)}=0`$\]. In the case of arbitrary couplings, the sum (39) can vanish only by accident because we are not allowed to “scan” the pump and Stokes laser frequencies across the intermediate states as we would violate the assumed single-photon resonance condition $`\mathrm{\Delta }_n=0`$. If this happens it can easily be shown that, again, the zero-eigenvalue eigenstate is an AT state, the only difference from the off-resonance case (Sec. III A 3) being that now the AT state does not contain a component from the resonant bare state, $`a_n(t)=0`$. In both cases, we have complete population transfer in the adiabatic limit. ### B Nonzero eigenvalue For $`det𝐇\ne 0`$, we follow the same approach as for nonzero detunings (Sec. III B). By setting $`\mathrm{\Omega }_P=\mathrm{\Omega }_S=0`$ in Eq. (10) we find that for $`\mathrm{\Delta }_n=0`$, there are three, rather than two, vanishing eigenvalues. Hence, we have to establish how the new, third, zero eigenvalue affects the AT state. #### 1 Early-time eigenvalues It is readily seen from Eq. (10) that only one eigenvalue vanishes for $`\mathrm{\Omega }_P=0`$ and $`\mathrm{\Omega }_S\ne 0`$. This means that as soon as the Stokes pulse $`\mathrm{\Omega }_S(t)`$ arrives, the degeneracy of two of the eigenvalues, $`\lambda _{l,1}^{-}`$ and $`\lambda _{l,2}^{-}`$, is lifted and they depart from zero, while the third eigenvalue $`\lambda _s^{-}`$ stays zero until the pulse $`\mathrm{\Omega }_P(t)`$ arrives later.
Hence, there are two “large” eigenvalues and one “small” eigenvalue. The “small” eigenvalue $`\lambda _s^{-}(\mathrm{\Omega }_P^2)`$ can be determined in the same manner as for nonzero detunings. We have $`h_0^{\prime }(0)=\mathrm{\Omega }_S^2𝒟_n\left[\alpha _n^2𝒮_{\beta ^2}^{(n)}-2\alpha _n\beta _n𝒮_{\alpha \beta }^{(n)}+\beta _n^2𝒮_{\alpha ^2}^{(n)}\right],`$ (47) $`h_1(0)=\mathrm{\Omega }_S^2\beta _n^2𝒟_n.`$ (48) Using Eq. (29), we find $`\lambda _s^{-\prime }(0)`$, replace it in the Taylor expansion of $`\lambda _s^{-}\left(\mathrm{\Omega }_P^2\right)`$, and obtain $$\lambda _s^{-}\approx -\frac{1}{\beta _n^2}\left[\alpha _n^2𝒮_{\beta ^2}^{(n)}-2\alpha _n\beta _n𝒮_{\alpha \beta }^{(n)}+\beta _n^2𝒮_{\alpha ^2}^{(n)}\right]\mathrm{\Omega }_P^2.$$ (49) In order to determine the other two eigenvalues $`\lambda _{l,1}^{-}`$ and $`\lambda _{l,2}^{-}`$, which depart from zero with $`\mathrm{\Omega }_S`$, we set $`\mathrm{\Omega }_P=0`$ in Eq. (28) and divide by $`\lambda `$ (thus removing the root $`\lambda _s^{-}`$). Keeping the terms of lowest order with respect to $`\mathrm{\Omega }_S`$ and $`\lambda `$, we find that $`-\mathrm{\Omega }_S^2\beta _n^2+\lambda ^2\approx 0`$, and we identify $`\lambda _{l,1}^{-}`$ and $`\lambda _{l,2}^{-}`$ as the two roots of this equation, $$\lambda _{l,1}^{-}\approx -\beta _n\mathrm{\Omega }_S,\qquad \lambda _{l,2}^{-}\approx \beta _n\mathrm{\Omega }_S.$$ (50) #### 2 Late-time eigenvalues In a similar fashion, we find that for $`t\rightarrow +\infty `$, the three vanishing eigenvalues behave as $$\lambda _s^{+}\approx -\frac{1}{\alpha _n^2}\left[\alpha _n^2𝒮_{\beta ^2}^{(n)}-2\alpha _n\beta _n𝒮_{\alpha \beta }^{(n)}+\beta _n^2𝒮_{\alpha ^2}^{(n)}\right]\mathrm{\Omega }_S^2,$$ (51) $$\lambda _{l,1}^{+}\approx -\alpha _n\mathrm{\Omega }_P,\qquad \lambda _{l,2}^{+}\approx \alpha _n\mathrm{\Omega }_P.$$ (52) #### 3 Connectivity It is straightforward to show that the eigenstate associated with $`\lambda _s^{-}`$ tends to state $`\psi _i`$ as $`t\rightarrow -\infty `$ and the eigenstate associated with $`\lambda _s^{+}`$ tends to state $`\psi _f`$ as $`t\rightarrow +\infty `$. The eigenstates corresponding to the “large” eigenvalues tend to superpositions of states $`\psi _f`$ and $`\psi _n`$ initially and to superpositions of states $`\psi _i`$ and $`\psi _n`$ finally. Hence, if $`\lambda _s^{-}`$ and $`\lambda _s^{+}`$ correspond to the same eigenvalue, the corresponding eigenstate will be the desired AT state $`\phi _T(t)`$. We have seen above that in the general off-resonance case, this may or may not take place. In the present case of a single-photon resonance, however, this is always the case. To show this we first note that the eigenvalues which do not vanish at $`\pm \infty `$ do not interfere in the linkages between the vanishing eigenvalues, because each of the nonvanishing eigenvalues $`\lambda _k(t)`$ tends to the corresponding detuning $`\mathrm{\Delta }_k`$ at both $`\pm \infty `$. Hence, the eigenvalues that are above (below) the three vanishing eigenvalues at $`-\infty `$ remain above (below) them at $`+\infty `$ as well. Let us now consider the linkages between the three vanishing eigenvalues. Insofar as $`\mathrm{\Omega }_P/\mathrm{\Omega }_S\rightarrow 0`$ as $`t\rightarrow -\infty `$, we have $`\lambda _{l,1}^{-}<\lambda _s^{-}<\lambda _{l,2}^{-}`$. Also, since $`\mathrm{\Omega }_S/\mathrm{\Omega }_P\rightarrow 0`$ as $`t\rightarrow +\infty `$, we have $`\lambda _{l,1}^{+}<\lambda _s^{+}<\lambda _{l,2}^{+}`$. This means that the linkages $`\lambda _{l,1}^{-}\leftrightarrow \lambda _{l,1}^{+}`$, $`\lambda _s^{-}\leftrightarrow \lambda _s^{+}`$, and $`\lambda _{l,2}^{-}\leftrightarrow \lambda _{l,2}^{+}`$ take place, and therefore the AT state $`\phi _T(t)`$ always exists when the lasers are tuned to resonance with an intermediate state, which is indeed seen in Figs. 6 and 7.
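The connectivity argument can also be verified numerically by diagonalizing $`𝐇(t)`$ on a time grid and following the branches of the instantaneous eigenvalues; a sketch (reusing the hamiltonian() helper from the propagation sketch in Sec. II, with illustrative parameters):

```python
# Track the instantaneous eigenvalues of H(t) through the pulse sequence;
# the three branches vanishing at t -> -inf/+inf can then be compared with
# the asymptotic forms (49)-(52).
import numpy as np
# hamiltonian(t, alpha, beta, Delta) as defined in the earlier sketch

alpha = np.array([1.0, 0.5]); beta = np.array([1.0, 2.0])
Delta = np.array([0.0, 1.0])          # Delta_1 = 0: one resonant state
ts = np.linspace(-250.0, 250.0, 2001)
eigs = np.array([np.linalg.eigvalsh(hamiltonian(t, alpha, beta, Delta))
                 for t in ts])        # eigenvalues sorted at each instant
# Plotting the columns of eigs against ts displays the linkages
# lambda_{l,1}^- <-> lambda_{l,1}^+, lambda_s^- <-> lambda_s^+, etc.
```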
### C Examples

In Fig. 8 we have plotted the time evolution of the three eigenvalues that vanish at $`\pm \infty `$ (upper row of figures) in the case of $`N=2`$ intermediate states for two sets of coupling strengths and the same detunings, $`\mathrm{\Delta }_1=0,\mathrm{\Delta }_2=\mathrm{\Omega }_0`$, i.e., the lower intermediate state is on single-photon resonance. The top left figure is for proportional coupling strengths (24), which give rise to a zero eigenvalue $`\lambda _D`$ and, correspondingly, to a dark state $`\phi _D(t)`$. The top right figure is for a case when Eq. (24) is not satisfied and there is no zero eigenvalue. Note the pair of eigenvalues $`\lambda _{l,1}^{-}`$ and $`\lambda _{l,2}^{-}`$, which depart from zero in opposite directions with the arrival of the Stokes pulse at early times, and the pair of eigenvalues $`\lambda _{l,1}^{+}`$ and $`\lambda _{l,2}^{+}`$, which vanish with the disappearance of the pump pulse at late times. The bottom row of figures shows the components of the eigenstate that is equal to the bare state $`\psi _i`$ initially; it corresponds to the zero eigenvalue for the left figure and to the eigenvalue whose asymptotics at early times is described by $`\lambda _s^{-}`$ for the right figure. The squared components of this eigenstate give the populations of the four bare states in the adiabatic limit. As predicted by our analysis, the AT state is seen to exist in both cases. The difference is that for the left column of figures, the AT state is the dark state (5), which does not contain components from the intermediate states, while for the right column of figures, the AT state contains such components. In Fig. 9, we have plotted the final-state population $`P_f`$ as a function of the pulse width $`T`$ in the case of $`N=2`$ intermediate states for the same two sets of interaction parameters as in Fig. 8. The solid and dashed curves in Fig. 9 correspond to the left and right columns in Fig. 8, respectively. In agreement with our conclusions, an AT state exists in both cases and the final-state population $`P_f`$ approaches unity as $`T`$ increases.

## V Degenerate resonant intermediate states

Let us suppose now that $`N_0`$ detunings vanish, i.e., that there are $`N_0`$ degenerate resonant intermediate states, and let us assume without loss of generality that these states are $`\psi _1,\psi _2,\dots ,\psi _{N_0}`$. If two of the detunings are equal to zero, $`\mathrm{\Delta }_1=\mathrm{\Delta }_2=0`$, we have $$det𝐇=\mathrm{\Omega }_P^2\mathrm{\Omega }_S^2𝒟_{12}(\alpha _1\beta _2-\alpha _2\beta _1)^2,$$ (53) and hence, a zero eigenvalue exists when $`\alpha _1/\beta _1=\alpha _2/\beta _2`$. If three or more detunings are equal to zero then $`det𝐇\equiv 0`$, and a zero eigenvalue always exists, with no restrictions on the interaction parameters.

### A Proportional couplings

In the zero-eigenvalue eigenstates, the components $`a_i(t)`$ and $`a_f(t)`$ of states $`\psi _i`$ and $`\psi _f`$ satisfy the equations $$\alpha _n\mathrm{\Omega }_P(t)a_i(t)+\beta _n\mathrm{\Omega }_S(t)a_f(t)=0,$$ (54) with $`n=1,2,\dots ,N_0`$. A nonzero solution for $`a_i(t)`$ and $`a_f(t)`$ requires that $$\frac{\alpha _1}{\beta _1}=\frac{\alpha _2}{\beta _2}=\dots =\frac{\alpha _{N_0}}{\beta _{N_0}}=1.$$ (55) Otherwise, a zero-eigenvalue eigenstate cannot be an AT state.
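The determinant formula (53) can be tested in the same way; in the sketch below (ours, with the same Hamiltonian convention as in the previous snippet), $`det𝐇`$ is compared with the right-hand side of Eq. (53) for proportional and for arbitrary couplings:

```python
# Numerical test of Eq. (53) for two degenerate resonant states
# (our sketch; all values are illustrative).
import numpy as np

def hamiltonian(Wp, Ws, alpha, beta, delta):
    N = len(delta)
    H = np.zeros((N + 2, N + 2))
    for k in range(N):
        H[0, k + 1] = H[k + 1, 0] = alpha[k] * Wp
        H[k + 1, N + 1] = H[N + 1, k + 1] = beta[k] * Ws
        H[k + 1, k + 1] = delta[k]
    return H

Wp, Ws = 0.7, 1.3
delta = np.zeros(2)                     # Delta_1 = Delta_2 = 0
cases = {"proportional": (np.array([0.6, 0.9]), np.array([0.4, 0.6])),
         "arbitrary":    (np.array([0.6, 0.9]), np.array([0.4, 0.5]))}
for label, (alpha, beta) in cases.items():
    H = hamiltonian(Wp, Ws, alpha, beta, delta)
    rhs = Wp**2 * Ws**2 * (alpha[0]*beta[1] - alpha[1]*beta[0])**2
    # det H matches Eq. (53); it vanishes iff alpha1/beta1 = alpha2/beta2
    print(label, np.linalg.det(H), rhs)
```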
If relation (55) is fulfilled, the amplitudes of the degenerate states are linearly dependent (this is so because the function $`f(t)=\alpha _nc_1(t)-\alpha _1c_n(t)`$ satisfies the differential equation $`df(t)/dt=0`$ with the initial condition $`f(-\infty )=0`$; hence, $`f(t)=0`$): $$c_n(t)=c_1(t)\frac{\alpha _n}{\alpha _1},(n=2,3,\dots ,N_0).$$ (56) This relation allows us to replace, in the Schrödinger equation (1), the probability amplitudes of the degenerate states by an effective amplitude given by $$c_{\mathrm{eff}}(t)=\mu c_1(t),$$ (57) with pump and Stokes Rabi frequencies given by $$\mathrm{\Omega }_{P,\mathrm{eff}}(t)=\mu \mathrm{\Omega }_P(t),\mathrm{\Omega }_{S,\mathrm{eff}}(t)=\mu \mathrm{\Omega }_S(t),$$ (58) where $`\mu =(1/\alpha _1)\sqrt{\alpha _1^2+\alpha _2^2+\dots +\alpha _{N_0}^2}`$. Thus the original problem with $`N_0`$ resonant states is reduced to an equivalent problem involving a single resonant state. As we pointed out in Sec. IV, an AT state $`\phi _T(t)`$ always exists in this case. Moreover, in the case when all couplings (and not only those for the degenerate states) are proportional, the AT state is the dark state $`\phi _D(t)`$.

### B Arbitrary couplings

If Eq. (55) is not fulfilled, the zero-eigenvalue eigenstate(s) cannot be an AT state because the components $`a_i(t)`$ and $`a_f(t)`$ from states $`\psi _i`$ and $`\psi _f`$ vanish. There still might be a possibility that one of the two nonzero eigenvalues, which vanish at $`\pm \infty `$, corresponds to an AT state. We shall show, however, that this is not the case. For $`N_0`$ degenerate resonant states, it can readily be shown that the number of zero eigenvalues is $`N_0-2`$ for $`\mathrm{\Omega }_P(t)\ne 0`$ and $`\mathrm{\Omega }_S(t)\ne 0`$, $`N_0`$ for $`\mathrm{\Omega }_P(t)=0`$ and $`\mathrm{\Omega }_S(t)\ne 0`$ \[or for $`\mathrm{\Omega }_P(t)\ne 0`$ and $`\mathrm{\Omega }_S(t)=0`$\], and $`N_0+2`$ for $`\mathrm{\Omega }_P(t)=\mathrm{\Omega }_S(t)=0`$. This means that when the Stokes pulse arrives at early times, it lifts the degeneracy of two of the $`N_0+2`$ zero eigenvalues. When the pump pulse arrives later, it lifts the degeneracy of another pair of the remaining $`N_0`$ eigenvalues. The reverse process occurs at large positive times. The remaining $`N_0-2`$ zero eigenvalues stay degenerate all the time. The implication is that the initial state $`\psi _i`$ and the final state $`\psi _f`$ are given by superpositions of eigenstates both at $`-\infty `$ and $`+\infty `$, which means that an AT state does not exist.

## VI Adiabatic elimination of the off-resonance states

### A The off-resonance case

An insight into the population transfer process can be obtained from the adiabatic-elimination approximation. When the single-photon detuning $`\mathrm{\Delta }_k`$ of a given intermediate state $`\psi _k`$ is large compared to the couplings $`\mathrm{\Omega }_{P,k}`$ and $`\mathrm{\Omega }_{S,k}`$ of this state to states $`\psi _i`$ and $`\psi _f`$, this state can be eliminated adiabatically by setting $`dc_k/dt=0`$ and expressing $`c_k`$ from the resulting algebraic equation.
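The elimination step itself can be illustrated numerically. The sketch below (ours) integrates the Schrödinger equation for one far-detuned intermediate state and compares the exact amplitude $`c_k(t)`$ with the algebraic estimate that follows from $`dc_k/dt=0`$; the Hamiltonian convention is the one used in the earlier sketches, and the pulse parameters are illustrative:

```python
# Check of the adiabatic-elimination step for a far-detuned state
# (our sketch; pulse shapes and parameters are illustrative).
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta = 0.7, 0.9, 20.0
T, tau, W0 = 1.0, 0.5, 4.0
Wp = lambda t: W0 * np.exp(-((t - tau) / T) ** 2)   # pump (delayed)
Ws = lambda t: W0 * np.exp(-((t + tau) / T) ** 2)   # Stokes (first)

def H(t):
    return np.array([[0.0, alpha * Wp(t), 0.0],
                     [alpha * Wp(t), delta, beta * Ws(t)],
                     [0.0, beta * Ws(t), 0.0]])

def rhs(t, c):
    return -1j * H(t) @ c

c0 = np.array([1, 0, 0], dtype=complex)
sol = solve_ivp(rhs, (-4, 4), c0, t_eval=np.linspace(-4, 4, 9),
                rtol=1e-8, atol=1e-10)
for t, c in zip(sol.t, sol.y.T):
    # setting dc_k/dt = 0 in i*dc/dt = H c gives the estimate below
    ck_est = -(alpha * Wp(t) * c[0] + beta * Ws(t) * c[2]) / delta
    print(f"t={t:+.1f}  |c_k|={abs(c[1]):.4f}  estimate={abs(ck_est):.4f}")
```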
By adiabatically eliminating all intermediate states, the general $`(N+2)`$-state problem is reduced to an effective two-state problem for the initial and final states, $$i\frac{d}{dt}\left[\begin{array}{c}c_i\\ c_f\end{array}\right]\approx -\left[\begin{array}{cc}\mathrm{\Omega }_P^2𝒮_{\alpha ^2}& \mathrm{\Omega }_P\mathrm{\Omega }_S𝒮_{\alpha \beta }\\ \mathrm{\Omega }_P\mathrm{\Omega }_S𝒮_{\alpha \beta }& \mathrm{\Omega }_S^2𝒮_{\beta ^2}\end{array}\right]\left[\begin{array}{c}c_i\\ c_f\end{array}\right].$$ (59) The “detuning” in this two-state problem is $`\mathrm{\Delta }_{\mathrm{eff}}(t)=\mathrm{\Omega }_S^2(t)𝒮_{\beta ^2}-\mathrm{\Omega }_P^2(t)𝒮_{\alpha ^2}`$. Obviously, if $`𝒮_{\alpha ^2}𝒮_{\beta ^2}>0`$, $`\mathrm{\Delta }_{\mathrm{eff}}(t)`$ has different signs at $`\pm \infty `$ and the transition is of level-crossing type, while if $`𝒮_{\alpha ^2}𝒮_{\beta ^2}<0`$, $`\mathrm{\Delta }_{\mathrm{eff}}(t)`$ has the same sign at $`\pm \infty `$ and there is no crossing. Hence, in the adiabatic limit, the transition probability from state $`\psi _i`$ to state $`\psi _f`$ will be unity for $`𝒮_{\alpha ^2}𝒮_{\beta ^2}>0`$ and zero for $`𝒮_{\alpha ^2}𝒮_{\beta ^2}<0`$, in agreement with the AT condition (37). The “coupling” in the effective two-state problem (59) is $`\mathrm{\Omega }_{\mathrm{eff}}(t)=\mathrm{\Omega }_P(t)\mathrm{\Omega }_S(t)𝒮_{\alpha \beta }`$. Obviously, it vanishes for $`𝒮_{\alpha \beta }=0`$, which suggests that there is no transition from state $`\psi _i`$ to state $`\psi _f`$, both for $`𝒮_{\alpha ^2}𝒮_{\beta ^2}>0`$ and $`𝒮_{\alpha ^2}𝒮_{\beta ^2}<0`$. However, we know from Sec. III A 4 that this prediction is incorrect and that an AT state exists even in this case, as long as $`𝒮_{\alpha ^2}𝒮_{\beta ^2}>0`$. This somewhat surprising discrepancy derives from the fact that for $`𝒮_{\alpha \beta }=0`$, the effective coupling between $`\psi _i`$ and $`\psi _f`$ is so small that it is lost in the course of the approximation. Hence, this approximation provides a useful hint for the least favorable combination of parameters, which results in the weakest effective coupling between $`\psi _i`$ and $`\psi _f`$; consequently, the adiabatic limit is approached most slowly in this case. The adiabatic-elimination approximation also allows us to estimate how quickly the adiabatic limit is approached when the AT state exists. Then, as we noted above, we have a level-crossing transition, the probability for which can be roughly described by the Landau-Zener formula, $$P_f\approx 1-e^{-\pi \mathrm{\Omega }_{\mathrm{eff}}^2(t_c)/\dot{\mathrm{\Delta }}_{\mathrm{eff}}(t_c)},$$ (60) where $`t_c`$ is the crossing point. For the Gaussian shapes (4) we have $`t_c=(T^2/8\tau )\mathrm{ln}(𝒮_{\beta ^2}/𝒮_{\alpha ^2})`$ and $$\frac{\mathrm{\Omega }_{\mathrm{eff}}^2(t_c)}{\dot{\mathrm{\Delta }}_{\mathrm{eff}}(t_c)}=(\mathrm{\Omega }_0T)^2\xi ,$$ (61) with $$\xi =\frac{T}{4\tau }\frac{𝒮_{\alpha \beta }^2}{\sqrt{𝒮_{\alpha ^2}𝒮_{\beta ^2}}}\mathrm{exp}\left[-\frac{2\tau ^2}{T^2}-\frac{T^2}{32\tau ^2}\left(\mathrm{ln}\frac{𝒮_{\beta ^2}}{𝒮_{\alpha ^2}}\right)^2\right].$$ (62) The larger this parameter, the faster the adiabatic limit is approached. We thus conclude that from the adiabaticity viewpoint, the most favorable case is when $`𝒮_{\alpha ^2}=𝒮_{\beta ^2}`$ and the ratio $`𝒮_{\alpha \beta }^2/\sqrt{𝒮_{\alpha ^2}𝒮_{\beta ^2}}`$ is large. Not surprisingly, the Landau-Zener parameter (61) is also proportional to $`(\mathrm{\Omega }_0T)^2`$, which is essentially the squared pulse area.
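For concreteness, the sketch below (ours) evaluates the adiabaticity parameter $`\xi `$ of Eq. (62) and the corresponding Landau-Zener estimate (60) for invented values of the $`𝒮`$-sums:

```python
# Numerical evaluation of the adiabaticity estimate, Eqs. (60)-(62)
# (our sketch; the S-sums, pulse delay and amplitude are invented).
import numpy as np

def xi(T, tau, S_a2, S_b2, S_ab):
    """Parameter of Eq. (62); meaningful for S_a2 * S_b2 > 0."""
    return (T / (4 * tau)) * S_ab**2 / np.sqrt(S_a2 * S_b2) * np.exp(
        -2 * tau**2 / T**2
        - (T**2 / (32 * tau**2)) * np.log(S_b2 / S_a2)**2)

W0, tau = 1.0, 1.0
for T in (2.0, 5.0, 10.0):
    x = xi(T, tau, S_a2=0.12, S_b2=0.08, S_ab=0.09)
    Pf = 1.0 - np.exp(-np.pi * (W0 * T)**2 * x)      # Eqs. (60) and (61)
    print(f"T = {T:4.1f}  xi = {x:.4f}  P_f ~ {Pf:.4f}")
# Larger xi (and larger pulse area W0*T) drives P_f toward unity.
```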
In Fig. 10 the final-state population $`P_f`$ is plotted against the pulse width $`T`$ in the case of $`N=3`$ intermediate states for three combinations of detunings. The coupling strengths are the same in all cases. The parameters for the dotted curve are chosen so that $`𝒮_{\alpha \beta }=0`$, while those for the other curves ensure that $`𝒮_{\alpha \beta }\ne 0`$. The parameter (62) is $`\xi =0`$ for the dotted curve, $`\xi \approx 0.326`$ for the dashed curve, and $`\xi \approx 1.367`$ for the solid curve. As a result, the adiabatic limit is approached most slowly for the dotted curve, more quickly for the dashed curve, and most quickly for the solid curve.

### B The on-resonance case

When a certain intermediate state $`\psi _n`$ is on single-photon resonance, $`\mathrm{\Delta }_n=0`$, it cannot be eliminated adiabatically. By eliminating all other intermediate states, the general $`(N+2)`$-state problem is reduced to an effective three-state problem, $$i\frac{d}{dt}\left[\begin{array}{c}c_i\\ c_n\\ c_f\end{array}\right]\approx \left[\begin{array}{ccc}-\mathrm{\Omega }_P^2𝒮_{\alpha ^2}^{(n)}& \alpha _n\mathrm{\Omega }_P& -\mathrm{\Omega }_P\mathrm{\Omega }_S𝒮_{\alpha \beta }^{(n)}\\ \alpha _n\mathrm{\Omega }_P& 0& \beta _n\mathrm{\Omega }_S\\ -\mathrm{\Omega }_P\mathrm{\Omega }_S𝒮_{\alpha \beta }^{(n)}& \beta _n\mathrm{\Omega }_S& -\mathrm{\Omega }_S^2𝒮_{\beta ^2}^{(n)}\end{array}\right]\left[\begin{array}{c}c_i\\ c_n\\ c_f\end{array}\right],$$ (63) where the $`𝒮^{(n)}`$ sums are defined by Eqs. (41). Comparison with the standard three-state STIRAP shows that the off-resonant states induce dynamic Stark shifts of states $`\psi _i`$ and $`\psi _f`$, which result in a nonzero two-photon detuning. Moreover, the off-resonant states induce a direct coupling between states $`\psi _i`$ and $`\psi _f`$. Careful examination of Eq. (63) shows that the AT state (12) always exists, but it involves, in general, a nonzero contribution from the intermediate state $`\psi _n`$. This contribution vanishes when the proportionality condition (24) is fulfilled; then $`𝒮_{\alpha ^2}^{(n)}=𝒮_{\beta ^2}^{(n)}=𝒮_{\alpha \beta }^{(n)}`$ and the AT state is the dark state, $`\phi _T(t)\equiv \phi _D(t)`$. In particular, if $`𝒮_{\alpha ^2}^{(n)}=𝒮_{\beta ^2}^{(n)}=𝒮_{\alpha \beta }^{(n)}=0`$, the multi-$`\mathrm{\Lambda }`$ system behaves exactly like STIRAP.

## VII Discussion and conclusions

We have presented an analytic study, supported by numerical examples, of adiabatic population transfer from an initial state $`\psi _i`$ to a final state $`\psi _f`$ via $`N`$ intermediate states by means of two delayed and counterintuitively ordered laser pulses. Thus this paper generalizes the original STIRAP, operating in a single three-state $`\mathrm{\Lambda }`$-system, to a multistate system involving $`N`$ parallel $`\mathrm{\Lambda }`$-transitions. The analysis has shown that the dark state $`\phi _D(t)`$, which is a linear combination of $`\psi _i`$ and $`\psi _f`$ and transfers the population between them in STIRAP, remains a zero-eigenvalue eigenstate of the Hamiltonian (10) only when condition (24) is fulfilled. Hence, in this case the multi-$`\mathrm{\Lambda }`$ system behaves very similarly to the single $`\mathrm{\Lambda }`$-system in STIRAP. Condition (24), which is essentially a relation between the transition dipole moments, requires that for each intermediate state $`\psi _k`$, the ratio $`\mathrm{\Omega }_{P,k}(t)/\mathrm{\Omega }_{S,k}(t)`$ between the couplings to states $`\psi _i`$ and $`\psi _f`$ is the same and does not depend on $`k`$.
Moreover, this condition ensures the existence of the dark state both in the case when all intermediate states are off single-photon resonance and when one or more states are on resonance. When condition (24) is not fulfilled, the dark state $`\phi _D(t)`$ does not exist, but a more general adiabatic-transfer state $`\phi _T(t)`$, which links adiabatically the initial and final states $`\psi _i`$ and $`\psi _f`$, may exist under certain conditions. Unlike $`\phi _D(t)`$, state $`\phi _T(t)`$ contains contributions from the intermediate states, which therefore acquire transient populations during the transfer. We have shown that when one and only one intermediate state is on resonance, the AT state always exists. When more than one intermediate state is on resonance, the AT state exists only when the proportionality relation (24) is fulfilled, at least for the degenerate states. In the off-resonance case, the condition for existence of $`\phi _T(t)`$ is given by Eq. (37), which is a condition on the single-photon detunings and the relative coupling strengths. It follows from this condition that when the pump and Stokes frequencies are scanned across the intermediate states (while maintaining the two-photon resonance), the final-state population $`P_f`$ passes through $`N`$ regions of high transfer efficiency (unity in the adiabatic limit) and $`N-1`$ regions of low efficiency (zero in the adiabatic limit). Each of the low-efficiency regions is situated between two adjacent intermediate states, while each intermediate state is within a region of high efficiency, as shown in Figs. 6 and 7. Our results suggest that it is most appropriate to tune the pump and Stokes lasers either just below or just above all intermediate states because there, firstly, the AT state always exists; secondly, the adiabatic regime is achieved more quickly; thirdly, the transfer is more robust against variations in the laser parameters; and fourthly, the transient intermediate-state populations, which are proportional to $`\mathrm{\Delta }_k^{-2}`$, can easily be suppressed.

## Acknowledgments

This work has been supported financially by the Academy of Finland.
# Numerical simulation of prominence oscillations

## 1 Introduction

Solar prominence oscillations have been the subject of both observational and theoretical papers for the past 35 years. One of the first studies (Ramsey & Smith 1966) concerned observations of global oscillations of disk filaments with periods of $`6^m`$ to $`40^m`$, that were interpreted by Hyder (1966) as predominantly vertical motions and by Kleczek & Kuperus (1969) as predominantly horizontal motions. Tsubaki (1988) published a review on oscillation studies of limb prominences. Most of these oscillations pertain to Doppler shifts in the spectra of part of a prominence. The associated mass flows are essentially parallel to the photosphere, both longitudinal and transverse to the prominence main axis. The observed periods range from $`160^s`$ to $`82^m`$, with velocity amplitudes in the range of 0.2–3 km/s. Zhang et al. (1991), Zhang & Engvold (1991) and Thompson & Schmieder (1991) studied disk filaments and found locally periods of $`2.5^m`$–$`22^m`$, with velocity amplitudes of 0.5–1.25 km/s. As in the observations by Ramsey & Smith these oscillations may have both horizontal and vertical components. Since Tsubaki’s review more limb studies have been performed by Mashnich & Bashkirtsev (1990), Suematsu et al. (1990), Bashkirtsev & Mashnich (1993), Mashnich et al. (1993), Balthasar et al. (1993), Balthasar & Wiehr (1994), Park et al. (1995), Sütterlin et al. (1997) and Molowny-Horas et al. (1997). These observations all confirm and extend previous results. There is now enough evidence to suggest the following tentative classification for prominence oscillations (see also Bashkirtsev & Mashnich 1993).

* very short periods: $`P\lesssim 30^s`$ (Balthasar et al. 1993), perhaps due to fast waves propagating along flux tubes (Roberts et al. 1984).
* short periods: $`P=3^m`$–$`10^m`$, at least some of which are related to photospheric or chromospheric forcing with periods of $`3^m`$ and $`5^m`$ (Balthasar et al. 1986, Zhang et al. 1991).
* intermediate periods: $`P=10^m`$–$`40^m`$, which are probably genuine eigenmodes of the prominence (Ramsey & Smith 1966, Balthasar et al. 1988, Bashkirtsev & Mashnich 1993). Many prominences show nearly the same oscillation period each time they are perturbed.
* long periods: ($`P=40^m`$–$`114^m`$), which may be related to chromospheric forcing (Balthasar et al. 1988).

We point out that the longest data set is about 7 hours long and the best temporal resolution is a couple of seconds. In particular the observed $`10^m`$–$`40^m`$ eigenmodes are interesting as they are damped and thus lose energy, perhaps due to some interaction with the ambient corona (Ramsey & Smith 1966, Kleczek & Kuperus 1969). Typically, the quality factor $`Q=\pi T_{\mathrm{damp}}/P<6`$, with $`T_{\mathrm{damp}}`$ the damping time of the oscillation. This implies that three to four oscillations can be observed after the impulsive perturbation of the prominence. Understanding the damping mechanisms will give more insight into prominence dynamics and may yield an extra diagnostic tool for prominence and ambient coronal plasma parameters. Prominence oscillations have been theoretically modelled by many authors. Some model the prominence as a harmonic oscillator with an ad-hoc damping term: Hyder (1966) used viscous effects, Kleczek & Kuperus (1969) used emission of sound waves, van den Oord & Kuperus (1992), Schutgens (1997a) and van den Oord et al. (1998) used emission of Alfvén waves.
Other authors construct a simple MHD equilibrium and study oscillations thereof using the linear adiabatic MHD equations (Oliver et al. 1992, 1993, Oliver & Ballester 1995, 1996, Joarder & Roberts 1992ab, 1993, Joarder et al. 1997). Generally the latter approach yields marginally stable oscillations (no damping), although Joarder & Roberts (1992a) and Joarder et al. (1997) claim that leaky waves are possible solutions to their equations. Apparently, little effort has gone into studying the damping mechanisms themselves. Schutgens (1997ab) and van den Oord et al. (1998) recently studied the global equilibrium of prominences treating the evolution of the magnetic field in a self-consistent way. The equation of motion for the prominence (approximated as a line current) was solved simultaneously with the Maxwell equations for the electro-magnetic fields. Their results show that the Alfvén travel time between prominence and photosphere $`\tau `$ is an important time scale of the system. In particular, van den Oord et al. found that it is only possible to obtain stable prominence equilibria by taking damping mechanisms into account. Hence, damping is not just necessary to describe prominence oscillations quantitatively correctly, but it is an essential ingredient of a prominence equilibrium. In this paper, we study prominence oscillations, and in particular the effect of the ambient coronal plasma on the prominence motion. We use the Versatile Advection Code (VAC) to solve numerically the isothermal MHD equations in two dimensions. VAC has been developed by G. Tóth (1996, 1997) and is capable of solving a variety of hydrodynamical and magneto-hydrodynamical problems in one, two and three dimensions using a host of numerical methods. In Sect. 2 a simple analytical model for prominence equilibrium and dynamics is discussed that will be used for comparison with our numerical results. In Sect. 3 we describe the isothermal equations that were solved numerically, the methods used, the grid structure and the initial conditions. In Sect. 4 the simulations are described. Using the analytical model mentioned before, these simulations are interpreted in terms of the physical processes involved. A summary and the conclusions can be found in Sect. 5. All dimensional variables in this paper are in rational MKSA units, unless stated differently.

## 2 The line current approximation

We briefly recapitulate the line current approximation for prominence equilibrium and dynamics. This approximation will serve as a guideline for discussing our numerical results. The prominence is approximated by a straight, infinitely thin and long, line current $`I_0>0`$ at a height $`y_0`$ parallel to the photosphere. Along the prominence we assume invariance. The effect of the massive photosphere on the prominence magnetic field is modelled through a mirror current $`-I_0`$ at depth $`y_0`$ below the surface of the Sun (Kuperus & Raadu 1974, van Tend & Kuperus 1978, Kaastra 1985, Schutgens 1997a). The momentum equations governing the global prominence dynamics are (ignoring gravity) $`\sigma \ddot{x}`$ $`=`$ $`-I_0\left[B_{\mathrm{mir}}^y(x,y)+B_{\mathrm{cor}}^y(x,y)\right]-\nu _x\dot{x},`$ $`\sigma \ddot{y}`$ $`=`$ $`I_0\left[B_{\mathrm{mir}}^x(x,y)+B_{\mathrm{cor}}^x(x,y)\right]-\nu _y\dot{y},`$ (1) where $`\sigma `$ is the longitudinal mass density of the oscillating structure, $`𝑩_{\mathrm{cor}}`$ is the coronal arcade field in which the prominence is located and $`𝑩_{\mathrm{mir}}`$ is the field due to the mirror current.
The interaction between the moving filament and the ambient coronal plasma gives rise to viscous effects (Hyder 1966) and emission of magneto-acoustic waves (Kleczek & Kuperus 1969, van den Oord & Kuperus 1992) that act as damping mechanisms. These are heuristically modelled through $`\nu _x`$ and $`\nu _y`$, damping constants that are in reality determined by the flow field around the prominence. Since viscosity of the coronal plasma is negligible, we concentrate on the emission of magneto-acoustic waves. An approximation for the damping constants can be found by considering linear motion with constant velocity $`v`$ of a solid body through a homogeneous plasma. If the cross section of the body perpendicular to its motion is $`A`$, the body transfers $`2\rho _{\mathrm{cor}}cAv`$ momentum per unit time onto the plasma (Landau & Lifschitz 1989, p. 256). Here $`c`$ is the characteristic wave speed of the plasma with density $`\rho _{\mathrm{cor}}`$. A factor $`2`$ is added since both the front- and backside of the object transfer momentum. Hence, the damping constants have the form $$\nu =2\rho _{\mathrm{cor}}cA.$$ (2) The coronal arcade is generated by a magnetic line dipole $`M_\mathrm{d}>0`$, a depth $`H_\mathrm{d}`$ below the photosphere $`B_{\mathrm{cor}}^x(x,y)`$ $`=`$ $`{\displaystyle \frac{\mu _0M_\mathrm{d}}{\pi }}{\displaystyle \frac{x^2-(y+H_\mathrm{d})^2}{\left(x^2+(y+H_\mathrm{d})^2\right)^2}},`$ $`B_{\mathrm{cor}}^y(x,y)`$ $`=`$ $`{\displaystyle \frac{2\mu _0M_\mathrm{d}}{\pi }}{\displaystyle \frac{x(y+H_\mathrm{d})}{\left(x^2+(y+H_\mathrm{d})^2\right)^2}}.`$ (3) The mirror current’s field at the location of the filament, in the quasi-stationary approach, is given by $`B_{\mathrm{mir}}^x(x,y)`$ $`=`$ $`{\displaystyle \frac{\mu _0}{4\pi }}{\displaystyle \frac{I_0}{y}},`$ $`B_{\mathrm{mir}}^y(x,y)`$ $`=`$ $`0.`$ (4) The fact that $`B_{\mathrm{mir}}^y=0`$ is a direct consequence of the mirror current mirroring the motion of the filament (to conserve the photospheric flux) and the quasi-stationary field assumption. When this assumption is dropped and the magnetic fields evolve dynamically according to Maxwell’s laws, the expressions for $`𝑩_{\mathrm{mir}}`$ become far more complicated and in particular $`B_{\mathrm{mir}}^y\ne 0`$ (Schutgens 1997a, van den Oord et al. 1998). Assuming quasi-stationary field evolution ($`v_\mathrm{A}\to \infty `$), prominences are in stable equilibrium provided they are on the symmetry axis of the arcade ($`x=0`$) and at a height $`y_0<H_\mathrm{d}`$. Furthermore, the current should attain the value $$I_0=\frac{4y_0M_\mathrm{d}}{(y_0+H_\mathrm{d})^2}.$$ (5) Note that this prominence equilibrium has an inverse polarity topology (see also Fig. 5). In fact, it corresponds to a Kuperus-Raadu prominence (see van Tend & Kuperus 1978). If one linearizes around this equilibrium, the equations of vertical and horizontal motion decouple (due to the symmetry of the coronal field) and both have the form of the familiar damped harmonic oscillator.
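This equilibrium is easy to check numerically. The sketch below (ours) evaluates the fields (3)–(4) and the equilibrium current (5); the dipole parameters are those adopted later in Sect. 4, while the height $`y_0`$ is our choice, picked to reproduce the current quoted there. The finite-difference gradients at the end confirm that both force components are restoring for $`y_0<H_\mathrm{d}`$:

```python
# Numerical check of the line-current equilibrium, Eqs. (3)-(5).
# (Our sketch; Md and Hd are the Sect. 4 values, y0 is our choice,
# consistent with the current I0 quoted there.)
import numpy as np

mu0 = 4e-7 * np.pi
Md, Hd = 1e20, 4e8              # line-dipole strength and depth
y0 = 3e7                        # assumed prominence height [m]
I0 = 4 * y0 * Md / (y0 + Hd) ** 2               # Eq. (5)

def B_cor(x, y):                                # Eq. (3)
    u = y + Hd
    r2 = x ** 2 + u ** 2
    Bx = (mu0 * Md / np.pi) * (x ** 2 - u ** 2) / r2 ** 2
    By = (2 * mu0 * Md / np.pi) * x * u / r2 ** 2
    return Bx, By

def B_mir_x(y):                                 # Eq. (4)
    return mu0 * I0 / (4 * np.pi * y)

def force(x, y):                                # per unit length, Eq. (1)
    Bx, By = B_cor(x, y)
    return -I0 * By, I0 * (B_mir_x(y) + Bx)

print(f"I0 = {I0:.2e} A")                       # ~6.5e10 A, cf. Sect. 4
print("net force at (0, y0):", force(0.0, y0))  # both components ~ 0

eps = 1.0                                       # finite-difference step [m]
dFx = (force(eps, y0)[0] - force(-eps, y0)[0]) / (2 * eps)
dFy = (force(0.0, y0 + eps)[1] - force(0.0, y0 - eps)[1]) / (2 * eps)
print("restoring?", dFx < 0, dFy < 0)           # True, True for y0 < Hd
```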
The solutions are oscillations, characterized by frequencies $`\omega `$ and damping rates $`\delta `$: $`\omega _x`$ $`=`$ $`\left(\mathrm{\Omega }_x^2-{\displaystyle \frac{1}{4}}\left({\displaystyle \frac{\nu _x}{\sigma }}\right)^2\right)^{\frac{1}{2}},\mathrm{\Omega }_x^2={\displaystyle \frac{8\mu _0M_\mathrm{d}^2}{\pi \sigma }}{\displaystyle \frac{y_0}{(y_0+H_\mathrm{d})^5}},`$ (6) $`\delta _x`$ $`=`$ $`{\displaystyle \frac{\nu _x}{2\sigma }},`$ (7) $`\omega _y`$ $`=`$ $`\left(\mathrm{\Omega }_y^2-{\displaystyle \frac{1}{4}}\left({\displaystyle \frac{\nu _y}{\sigma }}\right)^2\right)^{\frac{1}{2}},\mathrm{\Omega }_y^2={\displaystyle \frac{4\mu _0M_\mathrm{d}^2}{\pi \sigma }}{\displaystyle \frac{H_\mathrm{d}-y_0}{(y_0+H_\mathrm{d})^5}},`$ (8) $`\delta _y`$ $`=`$ $`{\displaystyle \frac{\nu _y}{2\sigma }}.`$ (9) where $`\mathrm{\Omega }_x`$ and $`\mathrm{\Omega }_y`$ are the quasi-stationary frequencies of the system without damping. If one drops the quasi-stationary assumption, the same equilibrium is found, but its stability then also depends on the value of the coronal Alfvén speed. In general, the solutions to the equations, which are again decoupled, are damped or growing (!) harmonic oscillations (Schutgens 1997ab, van den Oord et al. 1998).

## 3 Numerical methods

We solve the time-dependent ideal isothermal MHD equations in two dimensions using the Versatile Advection Code (VAC) developed by one of us (G. Tóth). Descriptions of this numerical code can be found in Tóth (1996, 1997). The photosphere coincides with the $`y=0`$ plane. In conservative form the equations for density $`\rho `$, mass flux $`\rho 𝒗`$ and magnetic field $`𝑩`$ are ($`p`$ is the thermal pressure) $`\partial _t\rho `$ $`+\nabla \cdot \left(𝒗\rho \right)`$ $`=0,`$ $`\partial _t(\rho 𝒗)`$ $`+\nabla \cdot \left(𝒗\rho 𝒗-𝑩𝑩\right)+\nabla (p+B^2/2)`$ $`=0,`$ $`\partial _t𝑩`$ $`+\nabla \cdot \left(𝒗𝑩-𝑩𝒗\right)`$ $`=0.`$ (10) Note that we ignore gravity (see Sect. 5 for a discussion). These equations must be solved together with an equation of state $`p=c_\mathrm{s}^2\rho `$ and the condition that $`\nabla \cdot 𝑩=0`$. Here $`c_\mathrm{s}`$ is the sound speed, a free parameter of the system, which is constant throughout the numerical domain. The magnetic field unit is chosen such that the current density satisfies $`𝑱=\nabla \times 𝑩`$ (i.e. $`\mu _0=1`$), all other units are rational MKSA. The equations are discretized on the same grid and solved using a FCT (Flux Corrected Transport) scheme. Since there are no discontinuities in the solution, FCT performs well. Similar results can be obtained using a TVD (Total Variation Diminishing) Lax-Friedrichs scheme but, for the problem at hand, FCT is more efficient. For a comparison of different methods see Tóth & Odstrčil (1996). A projection scheme (Brackbill & Barnes 1980) is used to keep the magnetic divergence small. Typically $`|\nabla \cdot 𝑩|<10^{-3}B/L`$, where $`B`$ and $`L`$ are characteristic strength and length scale of the magnetic field. We use a Cartesian grid of $`190\times 155`$ cells ($`3\times \mathrm{\hspace{0.17em}10}^9`$ m by $`1.5\times \mathrm{\hspace{0.17em}10}^9`$ m) that is strongly distorted, with the highest resolution at the location of the prominence and a much smaller resolution near the coronal edges of the computational domain (see Fig. 1). In a region surrounding the actual prominence the grid spacing is constant. Typically, the grid spacing increases by 10% from cell to cell outside this region. As a consequence the difference in resolution at the prominence and at the far coronal edges can amount to a factor 100 (see Fig. 2).
Near the boundaries the grid spacing is again kept constant. The boundary conditions are implemented using two layers of ghost cells around the physical part of the grid. The actual boundary is located between the inner ghost cells and the outermost cells of the physical grid. The solution on the physical part of the grid can be advanced using fluxes calculated from the ghost and physical cells around the boundary. The prescription for the ghost cells depends on the specific boundary condition and may be constant or depend on the solution in the adjacent physical cells. The photosphere is much denser than the corona and is therefore strongly reflecting: there should be no mass flux or energy flux across it. We therefore choose the plasma density symmetric around the photospheric boundary ($`\rho ^{\mathrm{ghost}}=\rho ^{\mathrm{physical}}`$) and vertical momentum anti-symmetric ($`\rho v_y^{\mathrm{ghost}}=-\rho v_y^{\mathrm{physical}}`$). The other momentum component $`\rho v_x`$ is chosen anti-symmetric as well, since we assume no flows along the photosphere (‘no slip’ condition). The magnetic field is tied to the dense photosphere and the photospheric flux ($`B_y`$) is conserved. Magnetic waves should be reflected. The initial field solution is stored in computer memory and subtracted from the advanced solution. The ‘linearized’ field solution thus obtained is chosen to be symmetric or anti-symmetric in the photospheric boundary: $`B_x^{\mathrm{ghost}}=B_x^{\mathrm{ghost},0}+(B_x^{\mathrm{physical}}-B_x^{\mathrm{physical},0})`$ and $`B_y^{\mathrm{ghost}}=B_y^{\mathrm{ghost},0}-(B_y^{\mathrm{physical}}-B_y^{\mathrm{physical},0})`$. In this way, one obtains a good reflection of waves for small perturbations. The other (coronal) boundaries do not coincide with a physical boundary and should be as open as possible. Since any choice of boundary condition will always generate some reflection, which we want to avoid, we decided to place these boundaries at large distances from the prominence so that twice the wave crossing time (prominence–coronal boundary) is longer than the simulation time. Also, the coarsening of the grid damps the outward moving waves and the reflection is minimized. At the location of the coronal boundaries we prescribe fixed $`𝑩^{\mathrm{ghost}}`$, but copy $`\rho `$ and $`\rho 𝒗`$ from the physical part of the grid to the ghost cells. The initial configuration is a superposition of three different analytical MHD equilibria. The global coronal structure is a potential arcade, given by Eq. (3). Since we ignore gravity, the plasma density in this arcade is constant. To this equilibrium we add the fields and plasma densities of two current carrying flux tubes, one above, the other below the photosphere. Both flux tubes are located on the polarity inversion line of the arcade. The flux tube at a height $`y_0`$ above the photosphere represents the prominence. Its total axial current is $`I_0`$. The flux tube $`y_0`$ below the photosphere represents the mirror current and has a total current $`-I_0`$. The flux tube equilibrium is derived starting from a current profile (see also Forbes 1990) $$j_z(r)=\{\begin{array}{cc}j_0\hfill & r\le r_0-\mathrm{\Delta }r_0,\\ j_0\mathrm{sin}^2\left(\frac{\pi }{2}\frac{r-r_0}{\mathrm{\Delta }r_0}\right)\hfill & r_0-\mathrm{\Delta }r_0<r\le r_0,\\ 0\hfill & r>r_0.\end{array}$$ (11) The radius of the tube $`r_0`$ is larger than $`\mathrm{\Delta }r_0`$, the size of the region in which the current density drops off to zero; the quadratures that follow (Eqs. (12)–(15) below) can be carried out directly over this profile, as sketched next.
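A minimal sketch (ours; the parameter values are those of Sect. 4, and the nested quadrature is crude but adequate):

```python
# Radial quadratures over the current profile of Eq. (11),
# anticipating Eqs. (12)-(15) below. (Our sketch; parameters from
# Sect. 4 of the text.)
import numpy as np
from scipy.integrate import quad

mu0 = 4e-7 * np.pi
r0, dr0, j0 = 1.5e7, 1.0e7, 2e-4        # m, m, A/m^2
cs = 128.5e3                            # isothermal sound speed [m/s]

def jz(r):                              # Eq. (11)
    if r <= r0 - dr0:
        return j0
    if r <= r0:
        return j0 * np.sin(0.5 * np.pi * (r - r0) / dr0) ** 2
    return 0.0

I0 = 2 * np.pi * quad(lambda r: r * jz(r), 0, r0)[0]            # Eq. (12)

def B_phi(r):                                                   # Eq. (13)
    return mu0 / r * quad(lambda rp: rp * jz(rp), 0, min(r, r0))[0]

def p(r):                                                       # Eq. (14)
    return -quad(lambda rp: B_phi(rp) * jz(rp), r0, r)[0]

sigma0 = 2 * np.pi * quad(lambda r: r * p(r) / cs**2, 0, r0)[0]  # Eq. (15)
print(f"I0 = {I0:.2e} A,  sigma0 = {sigma0:.2e} kg/m")
# Expected: I0 ~ 6.5e10 A and sigma0 ~ 1.3e4 kg/m, cf. Sect. 4.
```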
The total axial current in the flux tube is $$I_0=2\pi {\displaystyle \int _0^{r_0}}r^{\prime }j_z(r^{\prime })dr^{\prime }$$ (12) and the azimuthal field component follows from Stokes’ theorem $$B_\varphi (r)=\{\begin{array}{cc}\frac{\mu _0}{r}{\displaystyle \int _0^r}r^{\prime }j(r^{\prime })dr^{\prime }\hfill & r\le r_0,\\ \frac{\mu _0I_0}{2\pi r}\hfill & r>r_0.\end{array}$$ (13) The current is confined to $`r<r_0`$. Beyond this range, the field is potential. The pinching field $`B_\varphi `$ has to be balanced by gas pressure $$p(r)=-{\displaystyle \int _{r_0}^r}B_\varphi (r^{\prime })j(r^{\prime })dr^{\prime },$$ (14) where it is assumed that the flux tube proper carries no mass outside $`r>r_0`$. The longitudinal mass density of the flux tube equals $$\sigma _0=2\pi {\displaystyle \int _0^{r_0}}r^{\prime }p(r^{\prime })/c_\mathrm{s}^2dr^{\prime },$$ (15) while the total longitudinal mass density of the prominence is (due to the superposition of equilibria) $$\sigma =\sigma _0+\pi \rho _{\mathrm{cor}}r_0^2.$$ (16) In a global sense, equilibrium is obtained when Eq. (5) is satisfied. However, especially near the outer edge of the prominence no force balance exists. It is necessary to let the initial configuration relax to a numerical equilibrium before studying oscillations. As it turns out, force imbalance is mostly due to the gradients in density and field within the flux tube (i.e. numerical discretization errors) and a stable equilibrium is readily obtained. To prevent the flows at the outer edge of the prominence from becoming too large during the relaxation phase, and causing an instability, an artificial damping term $`\partial _t\rho 𝒗=-\alpha \rho 𝒗`$ is used. This term is switched off during subsequent oscillation studies. Once a numerical equilibrium has been found, we perturb it and study the resulting oscillations. In all cases, the perturbation is caused by instantaneously adding momentum to the prominence. To check the validity of our results, we performed oscillation simulations for the same physical parameters, but different numerical parameters. In particular we changed the grid resolution ($`\mathrm{\Delta }x=5\times 10^5`$ or $`10^6`$ m inside the prominence), the amount of stretching of the grid (0%, 5% and 10%), the duration of the relaxation ($`T=\mathrm{9\hspace{0.17em}000}`$ or $`T=\mathrm{20\hspace{0.17em}000}`$ s) and the constraint on $`|\nabla \cdot 𝑩|`$ ($`<10^{-4},10^{-3}B/L`$) and found that the results are similar. Using $`0\%`$ stretching implies that the grid is rather small in its physical size, due to computational limitations. Reflection at the coronal boundaries will then limit the usefulness of the simulations to the first $`4000`$ s for $`\rho _{\mathrm{cor}}=10^{-12}`$ kg/m<sup>3</sup>. In addition, the effect of the photospheric boundary condition was studied. A perfectly reflecting photosphere does not allow a net momentum or energy flux. Also, the magnetic flux distribution is constant. The implementation of the photospheric boundary conditions automatically ensures zero momentum flux and constant magnetic flux. However, since the magnetic field is not symmetric in $`y=0`$, a small Poynting flux is present. Consider a box-shaped surface around the prominence. The coronal ‘walls’ of this surface each have a shortest distance to the prominence center of $`2.5\times 10^7`$ m (see Fig. 6) and reach down to the photosphere, along which we close the box. The total time-averaged Poynting flux through the photosphere is typically $`10^{-3}`$–$`10^{-2}`$ times the coronal flux, for vertical oscillations.
For horizontal oscillations, the coronal flux is small in essence (due to the anti-symmetry) and provides no reasonable yardstick (of course one could still use the flux through a single ‘wall’, but this is bound to lead to similar results as for vertical oscillations). Furthermore, we point out that the equilibrium height of the prominence during oscillation studies does not change by more than 0.06%, and often even less. We therefore feel that the photospheric boundary is sufficiently well represented in our numerical boundary conditions.

## 4 Oscillation studies

Before we study oscillations in prominences, we first have to compute a stable, numerical prominence equilibrium. For the coronal arcade, we choose the following parameter values: $`H_\mathrm{d}=4\times \mathrm{\hspace{0.17em}10}^8`$ m, $`M_\mathrm{d}=10^{20}`$ Am and plasma density $`\rho _{\mathrm{cor}}=10^{-12}`$ kg/m<sup>3</sup>. The two flux tubes are given by: $`r_0=1.5\times \mathrm{\hspace{0.17em}10}^7`$ m, $`\mathrm{\Delta }r_0=10^7`$ m and $`j_0=2\times \mathrm{\hspace{0.17em}10}^{-4}`$ A/m<sup>2</sup>. Hence the longitudinal mass density of the flux tube proper is $`\sigma _0=1.28\times \mathrm{\hspace{0.17em}10}^4`$ kg/m, and the density of the prominence is $`\sigma =1.35\times \mathrm{\hspace{0.17em}10}^4`$ kg/m. The total current is $`I_0=6.5\times \mathrm{\hspace{0.17em}10}^{10}`$ A. Such a prominence is expected to oscillate with a horizontal period of 2777 s and a vertical period of 1118 s. The sound speed was chosen $`c_\mathrm{s}=128.5`$ km/s, typical of the corona at a temperature $`T=10^6`$ K. We now let the initial state as defined in the previous section relax to a numerical equilibrium. The artificial damping term ($`\partial _t\rho 𝒗=-\alpha \rho 𝒗`$) was used to prevent numerical instabilities. We choose $`\alpha =0.1`$ for the first $`10^4`$ seconds of the relaxation, and $`\alpha =0.01`$ during the latter $`10^4`$ seconds. Figure 3 shows the relaxation of the volume averaged horizontal and vertical momentum. At the end of the relaxation the flows in the larger part of the arcade are typically less than 5 m/s. Along the boundary of the prominence body a rather irregular flow field exists with velocities of 175 m/s at most. The prominence still moves downwards at a systematic speed of some 1 m/s (see Fig. 4). Considering both the magnitude of the velocity perturbation applied later and the total simulation time scale, we consider these residual flows to be unimportant. The agreement between simulations starting from different relaxations (with a duration of either $`\mathrm{9\hspace{0.17em}000}`$ or $`\mathrm{20\hspace{0.17em}000}`$ s) confirms this. The numerical equilibrium does not deviate much from the initial state. The resulting field topology around the prominence is shown in Fig. 5. The prominence itself is a region of high plasma-$`\beta `$, surrounded by a region of magnetically dominated plasma (see Fig. 6). At large distances from the prominence the plasma-$`\beta `$ becomes larger than unity again, as the magnetic field strength decreases while the coronal density is constant (due to the absence of gravity). The Alfvén speed is also shown in Fig. 6. It increases as one approaches the prominence from the corona, once inside it falls off rapidly. The sound speed, of course, is the same everywhere (it is a free parameter of Eq. (10)). The obtained numerical equilibrium is used to derive a series of numerical equilibria by adding (or subtracting) a constant value from the density in each cell.
Since this does not create any additional forces (the force due to gas pressure is $`-c_\mathrm{s}^2\nabla \rho `$) a new equilibrium is found. In the resulting equilibria we disturb the inner part of the prominence (all plasma within $`1.25\times \mathrm{\hspace{0.17em}10}^7`$ m from the center of mass) with a velocity perturbation of 10 km/s, at an angle of $`45^{\circ }`$ to the photosphere. The evolution of the system is studied for $`10^4`$ s, in some cases even for $`2\times \mathrm{\hspace{0.17em}10}^4`$ s. In Fig. 7 the evolution of the volume averaged momentum is shown for all eight cases considered. Initially momentum is concentrated in the prominence, but it is redistributed throughout the corona in the subsequent evolution. A global oscillation is apparent, whose properties depend strongly on coronal density. For one particular case, the flow field is shown in Figs. 8 and 9. Although only the velocity is shown, the prominence stands out clearly in most graphs. It seems to move through the coronal plasma as a rigid body. The largest velocities are usually found outside the prominence. The coronal plasma ‘washes around the filament’. In particular, we studied the motion of the prominence. This was done by computing every so many time steps the center of longitudinal mass and current density. To rule out the contribution of the coronal part of the grid, only densities above a certain threshold were used. As the contrast between typical coronal and prominence current densities is larger than the contrast between typical coronal and prominence mass densities, the former provides a better estimate of the location of the prominence. Nevertheless, the results are always similar. In essence, the motion of the filament can be described by two decoupled damped harmonic oscillators. The horizontal resp. vertical displacement of the center of longitudinal mass or current density of the prominence may be fitted to $`A\mathrm{e}^{-t/T_{\mathrm{damp}}}\mathrm{sin}(2\pi t/P)`$ (a minimal version of this fit is sketched at the end of this paragraph). The resulting periods and damping times for the horizontal and vertical oscillations are listed in Table 1. Sometimes strong transient effects at the start of the simulation and noise at later times (when the velocities are smaller) cause deviations from a simple damped harmonic oscillator. The horizontal oscillations yield, in general, better fits. Typically the low density simulations lead to better fits than the high density simulations. For horizontal motions we used the cases 1–6, for vertical motions we used the cases 1–5 (see Table 1). No reliable vertical time scales could be obtained for the last two cases due to strong transients and noise. Comparing the results with simulations where a purely vertical or horizontal perturbation was applied shows that the horizontal and vertical motions of the filament are actually decoupled. This is understandable as the filament is located on the symmetry line of the coronal field (see also van den Oord et al. 1998). At the same time this suggests that non-linear effects are not important. Further evidence for the linearity of the numerical results is found in the good fits of the prominence oscillation curves to a damped harmonic oscillator. However, simulations with smaller or larger perturbations (3 km/s or 20 km/s instead of 10 km/s) yield different damping times for the horizontal motions (see Fig. 10). For all three simulations the horizontal periods differ by only $`0.1\%`$, while the horizontal damping times differ by $`30`$–$`60\%`$!
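A minimal version of the damped-sinusoid fit referred to above (our sketch; the synthetic trace merely stands in for a measured displacement curve, and the fitted phase is our addition):

```python
# Damped-sinusoid fit of the type used for Table 1 (our sketch;
# all numbers below are synthetic stand-ins, not simulation output).
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, T_damp, P, phi):
    return A * np.exp(-t / T_damp) * np.sin(2 * np.pi * t / P + phi)

rng = np.random.default_rng(1)
t = np.linspace(0, 1e4, 400)                       # time [s]
x = damped_sine(t, 5e5, 4e3, 2.8e3, 0.3) + rng.normal(0, 2e4, t.size)

popt, _ = curve_fit(damped_sine, t, x, p0=(5e5, 3e3, 3e3, 0.0))
A, T_damp, P, phi = popt
print(f"P = {P:.0f} s, T_damp = {T_damp:.0f} s, Q = {np.pi*T_damp/P:.1f}")
```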
Apparently, the horizontal damping mechanism depends non-linearly on the flow speeds. We have no explanation for this result, but believe it to be genuine. Extensive tests with higher spatial and temporal resolution argue strongly against the possibility of a numerical effect. It is a strong indication that the damping mechanisms for horizontal and vertical motion differ, at least in their dependencies on the flow field. The horizontal periods show a strong dependence on the coronal density, while the vertical periods are almost constant (Fig. 11). The quality factors (Fig. 12) of the horizontal oscillation are typically larger than four, and hence damping contributes little to the oscillation frequency (see Eq. (6)). The dotted lines (in Fig. 11) represent the undamped quasi-stationary period for a prominence with a longitudinal mass density of $`\sigma =\sigma _0+\pi r_0^2\rho _{\mathrm{cor}}`$. Clearly this is a bad fit to the numerical results for horizontal oscillations. Let us assume that, due to the magnetic structure, the actual oscillating body has a radius $`r_{\mathrm{eff}}>r_0`$. The actual oscillating body now has an effective longitudinal mass density $`\sigma _{\mathrm{eff}}=\sigma _0+\pi r_{\mathrm{eff}}^2\rho _{\mathrm{cor}}`$. From Eq. (6) we obtain $$\frac{1}{\rho _{\mathrm{cor}}}\left(\frac{2\mu _0M_\mathrm{d}^2}{\pi ^3}\frac{y_0}{(y_0+H_\mathrm{d})^5}P_x^2-\sigma _0\right)=\pi r_{\mathrm{eff}}^2$$ (17) the left-hand-side of which can be fitted to a function of the form $`C\rho _{\mathrm{cor}}^\gamma `$. We find that $`r_{\mathrm{eff}}=5.0\times \mathrm{\hspace{0.17em}10}^7(\rho _{\mathrm{cor}}/10^{-12})^{0.04}`$ m, weakly dependent on the coronal density. The typical radius $`r_{\mathrm{eff}}=5.0\times \mathrm{\hspace{0.17em}10}^7`$ m agrees well with the location of the magnetic surface across which the connectivity of the field lines changes from closed field lines in the corona to field lines anchored in the photosphere. An even better approximation to this surface is obtained by taking into account the effect of induced mass as described by Landau & Lifschitz (1989, p. 29; see also Lamb 1945). For an incompressible, potential hydrodynamical flow the effective mass is the sum of the mass of the actual oscillating body ($`\sigma _0+\pi r_0^2\rho _{\mathrm{cor}}`$) plus the mass of the fluid displaced by the body. In that case $`\sigma _{\mathrm{eff}}=\sigma _0+2\pi r_{\mathrm{eff}}^2\rho _{\mathrm{cor}}`$ (note the factor 2!) and we find $`r_{\mathrm{eff}}=3.5\times \mathrm{\hspace{0.17em}10}^7(\rho _{\mathrm{cor}}/10^{-12})^{0.04}`$ m. In Fig. 13, we have plotted the ratio of vertical damping time to horizontal damping time. From Eqs. (2), (7) and (9) this ratio is proportional to $`c_x/c_y`$. Here $`c_x`$ is the typical wave speed for perturbations travelling parallel to the photosphere and $`c_y`$ is the typical wave speed for perturbations travelling perpendicular to the photosphere. For simplicity, we assume that the cross sections for both directions have the same $`\rho _{\mathrm{cor}}`$ dependence, but given the azimuthal invariance of the prominence flux tube this is probably a fair approximation. Now $`c_x`$ and $`c_y`$ are equal to either the slow cusp speed $`c_\mathrm{T}`$, the Alfvén speed $`c_\mathrm{A}`$ or the fast speed $`c_\mathrm{f}`$. In a low plasma-$`\beta `$ environment, the cusp speed equals the sound speed $`c_\mathrm{s}`$, while the fast speed equals the Alfvén speed $`c_\mathrm{A}`$.
Hence $`c_x/c_y\propto \rho _{\mathrm{cor}}^{0.5},1`$ or $`\rho _{\mathrm{cor}}^{-0.5}`$ depending on whether slow, Alfvén or fast wave emission prevails in a certain direction. From the data, we find $`c_x/c_y\propto \rho _{\mathrm{cor}}^{0.53}`$, which strongly suggests that slow wave emission damps horizontal motions, while Alfvén or fast wave emission damps vertical motions. Since, in the latter case, the prominence moves perpendicular to the field lines, fast waves are more likely. The conclusions regarding damping mechanisms can be substantiated further by comparing the relevant time scales for horizontal and vertical oscillations independently to Eqs. (7) and (9) (see Fig. 14). For the mass, we use the effective longitudinal mass density $`\sigma _{\mathrm{eff}}`$ as obtained previously. From Eq. (2) and either Eq. (7) or (9) we find $$\frac{\sigma _{\mathrm{eff}}}{\rho _{\mathrm{cor}}T_{\mathrm{damp}}}=Ac,$$ (18) the left-hand-side of which can be fitted to $`C\rho _{\mathrm{cor}}^\gamma `$. Here $`A`$ is the cross-section perpendicular to the direction of motion. For horizontal oscillations we find $`c_x\propto \rho _{\mathrm{cor}}^{0.06\pm 0.06}`$, and $`A\approx 2\times \mathrm{\hspace{0.17em}10}^7`$ m<sup>2</sup>. This suggests excitation of slow waves by horizontal motion of the prominence. For vertical oscillations $`c_y\propto \rho _{\mathrm{cor}}^{-0.54\pm 0.04}`$, which suggests excitation of fast waves by vertical motion of the prominence. Since the Alfvén speed varies in space, it is not possible to derive a value of $`A`$ from the product $`Ac`$. The cross-section $`A`$ for the horizontal damping mechanism yields a dimension for the oscillating body smaller than the extended size $`r_{\mathrm{eff}}`$ as obtained from the horizontal periods. This may be due to field line curvature. Since the size of the oscillating ‘solid’ body is determined by the field topology, Lorentz forces cause the waves that carry away momentum. However, Lorentz forces only act perpendicular to, not parallel to, the field lines. Thus only perturbations of the magnetic field with a strong vertical component can excite horizontally travelling waves.

## 5 Summary and conclusions

We have made a numerical investigation of prominence oscillations, by solving the isothermal MHD equations in two dimensions. First we computed a prominence equilibrium that is very similar to the Kuperus-Raadu (inverse polarity) topology. However, in our numerical equilibrium the prominence is not infinitely thin, but instead well resolved. From this equilibrium, we derived other equilibria with different coronal plasma densities. We then perturbed the system by instantaneously adding momentum to the prominence mass and followed the ensuing oscillations. The dependence of the characteristic time scales (periods and damping times) on the coronal plasma density was analyzed in terms of a solid body moving through a fluid. To our knowledge this is the first attempt at numerically simulating prominence oscillations. In our numerical model, we ignored the effects of gravity and thermodynamics, for the sake of clarity and practicality. Also, up to now the numerical boundary conditions we are using do not seem to allow a stable gravitationally stratified corona. We believe, however, that gravity and thermodynamics do not contribute significantly to the physics of the system. Gravity is of small consequence for the equilibrium of a Kuperus-Raadu prominence, as detailed by van Tend & Kuperus (1978).
Force balance is due to two Lorentz forces: one due to the coronal magnetic arcade, the other due to the photospheric flux conservation. The gravitational force can be ignored when describing global equilibrium. Furthermore, the scale height of a corona of $`T=10^6`$ K is $`3\times \mathrm{\hspace{0.17em}10}^8`$ m, which is larger than the typical vertical size of prominences. The inclusion of gravity would change the appearance of the prominence into a slab, with prominence matter accumulating in the pre-existing (!) dips in the magnetic field lines. The magnetic field configuration would hardly change. Likewise, the absence of a thermally structured corona and prominence seems of only minor influence. The most important effect would be that a cool prominence will be heavier than a prominence of coronal temperatures. In our model this is balanced by the absence of a longitudinal field. As a consequence, the pinching effect of the azimuthal prominence field has to be balanced by gas pressure only (in real prominences the longitudinal field pressure contributes significantly). The total longitudinal mass density of our prominence ($`1.3\times 10^4`$ kg/m) agrees well with that of a slab of height $`5\times \mathrm{\hspace{0.17em}10}^7`$ m, width $`6\times \mathrm{\hspace{0.17em}10}^6`$ m and density $`5\times \mathrm{\hspace{0.17em}10}^{-11}`$ kg/m<sup>3</sup>. Oliver & Ballester (1996) studied the influence of the prominence-corona transition region (characterized by strong temperature gradients) and found that it mainly influences the prominence internal oscillations, but not the global oscillations. We point out that the inclusion of both gravity and thermodynamics would give the prominence proper the appearance of a cool slab, suspended in the dips of the field lines belonging to the prominence current. Presumably this will not change the global oscillation discussed in this paper, since that mode is determined by the overall field structure. The results indicate that, for typical coronal densities ($`\rho _{\mathrm{cor}}=10^{-13}`$–$`10^{-12}`$ kg/m<sup>3</sup>), the prominence structure can indeed be viewed as a solid body moving through a fluid. However, the mass of this solid body is determined by the magnetic topology, not the prominence proper. In particular, the mass of the body seems to be determined by coronal field lines that enclose the prominence proper. In a low plasma-$`\beta `$ environment this is to be expected. As a consequence, the total mass of the oscillating solid body is larger than the mass of the prominence proper. Due to the symmetry of the system, horizontal and vertical prominence oscillations decouple. These oscillations can each be interpreted as the motion of a damped harmonic oscillator. The horizontal periods and the horizontal and vertical damping times can be explained by assuming that the actual oscillating structure is larger than the prominence proper, due to the magnetic field topology. Only the vertical periods do not agree with this model. They are nearly constant for different values of the coronal density and are best modelled by the oscillation of the prominence proper in a corona with vanishing plasma density ($`\rho _{\mathrm{cor}}\to 0`$). For realistic coronal densities, the horizontal periods change by $`3`$–$`15\%`$ at most when taking the actual oscillating structure into account.
However, for lower prominence longitudinal mass densities $`\sigma `$ in a stronger coronal background field (and hence larger prominence currents $`I_0`$), the effect will be much more pronounced. Vertical oscillations lead to the emission of fast waves that carry momentum away from the prominence and damp the oscillation. Horizontal oscillations, on the other hand, lead to the emission of slow waves. These will damp the horizontal oscillation, but less effectively than fast waves ($`Q_x>Q_y`$). The difference in wave emission between horizontal and vertical oscillations can be understood in terms of the coronal arcade in which the prominence is embedded. Due to the large scale height, the arcade field is close to horizontal near the prominence. Waves that travel in the vertical direction (up or down) therefore travel more or less perpendicular to the field lines and must be fast waves. Waves that travel in the horizontal direction travel along the field lines. They could be either Alfvén waves or magneto-acoustic waves. As regards excitation of the waves, it is obvious that the vertical motion of a prominence across field lines that are nearly horizontal will compress both gas and magnetic field and thus set off fast waves. The excitation of the magneto-acoustic slow waves (for our analysis strongly suggests they are slow waves) is not as well understood. But apparently the prominence acts as a piston during horizontal motions along magnetic field lines of the arcade. We surmise that for a smaller scale height of the arcade the slow waves might be replaced by fast waves.

## Acknowledgments

N.A.J. Schutgens was financially supported by the Netherlands Organisation for Scientific Research (NWO) under grant nr. 781-71-047. He gratefully acknowledges stimulating discussions with Max Kuperus and Bert van den Oord. The Versatile Advection Code (VAC) was developed by G. Tóth as part of the project on ‘Parallel Computational Magneto-Fluid Dynamics’, funded by the Netherlands Organisation for Scientific Research (NWO) Priority Program on Massively Parallel Computing, while he was working at the Astronomical Institute of Utrecht University. G. Tóth currently receives a post-doctoral fellowship (D25519) from the Hungarian Science Foundation (OTKA).
# Breakdown of the resistor model of CPP-GMR in magnetic multilayered nanostructures

## Abstract

We study the effect on CPP GMR of changing the order of the layers in a multilayer. Using a tight-binding simple cubic two band model ($`s`$-$`d`$), magneto-transport properties are calculated in the zero-temperature, zero-bias limit, within the Landauer-Büttiker formalism. We demonstrate that for layers of different thicknesses formed from a single magnetic metal and multilayers formed from two magnetic metals, the GMR ratio and its dependence on disorder is sensitive to the order of the layers. This effect disappears in the limit of large disorder, where the results of the widely-used Boltzmann approach to transport are restored. PACS: 73.23.-b, 75.70.-i, 75.70.Pa

Giant magnetoresistance (GMR) in transition metal magnetic multilayers is a spin filtering effect which arises when the magnetizations of adjacent layers switch from an anti-parallel (AP) to a parallel (P) alignment. The resistance in the anti-aligned state is typically higher than the resistance with parallel alignment, the difference being as large as 100%. This sensitive coupling between magnetism and transport allows the development of magnetic field sensors with sensitivity far beyond that of conventional anisotropic magnetoresistance (AMR) devices. In the most common experimental setup, the current flows in the plane of the layers (CIP), and the resistance is measured with conventional multi-probe techniques. Measurements in which the current flows perpendicular to the planes (CPP) are more delicate because of the small resistances involved. Despite these difficulties, the use of superconducting contacts, sophisticated lithographic techniques, and electrodeposition makes such measurements possible (for recent reviews see references). A widely adopted theoretical approach to GMR is based on the semi-classical Boltzmann equation within the relaxation time approximation. This model has been developed by Valet and Fert, and has the great advantage that the same formalism describes both CIP and CPP experiments. In the limit that the spin diffusion length $`l_{\mathrm{s}f}`$ is much larger than the layer thicknesses (i.e. in the infinite spin diffusion length limit), this model reduces to a classical two current resistor network, with additional possibly spin-dependent scattering at the interfaces. Despite the undoubted success of this description, recent experiments have drawn attention to the possibility of new features which lie outside the theory. Two important and central predictions of this model are that the CPP GMR ratio is independent of the number of bilayers in the case that the total multilayer length is not constrained to be constant, and furthermore is independent of the order of the magnetic layers in the case of different magnetic species. An apparent violation of the first prediction has been observed in CIP and CPP measurements, and of the second prediction in CPP measurements. However a convincing theoretical explanation is lacking. The aim of this letter is to provide a quantitative description of the breakdown of the resistor model in diffusive CPP multilayers in the limit of infinite spin-relaxation length. To illustrate this breakdown, consider a multilayer consisting of two independent building blocks, namely a (N/M) and a (N/M′) bilayer, where M and M′ represent magnetic layers of different materials or of the same material but with different thicknesses and N represents normal metal ‘spacer’ layers.
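The order-independence is immediate in the two-current resistor picture: each spin channel is a plain series sum, so permuting the same set of layers and moment orientations cannot change either channel resistance. A minimal sketch (ours; all resistance values are made up for illustration):

```python
# Two-current series-resistor model of CPP transport (our sketch;
# the per-layer channel resistances below are made up). Reordering
# the layers leaves the series sums, and hence GMR, unchanged.
import random

# (majority, minority) resistance of each layer; spacers are
# spin-independent, a reversed magnetic moment swaps the channels.
P_stack  = [(1.0, 1.0), (0.5, 3.0), (1.0, 1.0), (0.8, 2.0)] * 4
AP_stack = [(1.0, 1.0), (0.5, 3.0), (1.0, 1.0), (2.0, 0.8)] * 4

def cpp_R(stack):
    up = sum(r_up for r_up, r_dn in stack)
    dn = sum(r_dn for r_up, r_dn in stack)
    return 1.0 / (1.0 / up + 1.0 / dn)   # the two channels in parallel

def gmr(p, ap):
    return (cpp_R(ap) - cpp_R(p)) / cpp_R(p)

order = list(range(len(P_stack)))
random.seed(1)
random.shuffle(order)
print(gmr(P_stack, AP_stack))
print(gmr([P_stack[i] for i in order], [AP_stack[i] for i in order]))
# Both prints agree: within this model the stacking order is irrelevant.
```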
From an experimental point of view M and M′ must possess different coercive fields, in order to allow AP alignment. In one experiment this is achieved by using Co and Ni<sub>84</sub>Fe<sub>16</sub> respectively as the two magnetic layers, with Ag as the non-magnetic spacer, while in another both magnetic layers are Co (with Cu as spacer) but with different thicknesses (respectively 1 nm and 6 nm). Two kinds of multilayer can be deposited. The first, which we call type I, consists of a (N/M/N/M′)$`\times \mu `$ sequence, where the species M and M′ are separated by an N layer and the group of four layers is repeated $`\mu `$ times. The second, which we call type II, consists of a (N/M)$`\times \mu `$(N/M′)$`\times \mu `$ sequence, where the multilayers (N/M)$`\times \mu `$ and (N/M′)$`\times \mu `$ are arranged in series. If the coercive fields of M ($`H_M`$) and M′ ($`H_{M^{\prime }}`$) are different (e.g. $`H_M<H_{M^{\prime }}`$) and if N is long enough to decouple adjacent magnetic layers, the AP configuration can be achieved in both type I and type II multilayers by applying a magnetic field $`H`$ whose intensity is $`H_M<H<H_{M^{\prime }}`$. The AP configuration is topologically different in the two cases, because in type I multilayers it consists of AP alignment of adjacent magnetic layers (conventional AP alignment), while in type II multilayers it consists of the AP alignment between the (N/M)$`\times \mu `$ and (N/M′)$`\times \mu `$ portions of the multilayer, within which the alignment is parallel (see figure 1a and figure 1b). From the point of view of a resistor network description of transport, the two configurations are equivalent, because they possess the same number of magnetic and non-magnetic layers, and the same number of N/M and N/M′ interfaces. Hence the GMR ratio must be the same. In contrast, the GMR ratio of type I multilayers is found experimentally to be larger than that of type II multilayers, and the difference between the two GMR ratios increases with the number of bilayers. Moreover, the GMR ratio of both type I and type II multilayers is observed to increase with the number of bilayers, which again lies outside the resistor network model. 
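The resistor-network equivalence of the two stacking orders is easy to make concrete. The sketch below is not from the letter itself: all resistance values are invented placeholders, and interface resistances are omitted (including them changes nothing, since both types contain the same number of each interface). It evaluates the two-current series model for both types:

```python
# Two-current series-resistor model of CPP transport (the Valet-Fert
# limit described above). The two spin channels conduct in parallel;
# within each channel the layer resistances add in series.

def channel_resistance(layers, spin):
    """Series resistance of one spin channel. Each layer is
    (r_majority, r_minority, moment), with moment = +1 or -1 for
    magnetic layers and 0 for non-magnetic spacers."""
    total = 0.0
    for r_maj, r_min, moment in layers:
        total += r_maj if (moment == 0 or spin == moment) else r_min
    return total

def conductance(layers):
    return 1.0 / channel_resistance(layers, +1) + 1.0 / channel_resistance(layers, -1)

def gmr(layers_p, layers_ap):
    """GMR ratio defined as (R_AP - R_P) / R_P."""
    return conductance(layers_p) / conductance(layers_ap) - 1.0

N = (1.0, 1.0, 0)                  # spacer (illustrative values)
def M(m):  return (0.5, 3.0, m)    # magnetic layer M with moment m
def Mp(m): return (0.8, 5.0, m)    # magnetic layer M' (different material)

mu = 4
type1_p  = [lay for _ in range(mu) for lay in (N, M(+1), N, Mp(+1))]
type1_ap = [lay for _ in range(mu) for lay in (N, M(+1), N, Mp(-1))]
type2_p  = [lay for _ in range(mu) for lay in (N, M(+1))] + \
           [lay for _ in range(mu) for lay in (N, Mp(+1))]
type2_ap = [lay for _ in range(mu) for lay in (N, M(+1))] + \
           [lay for _ in range(mu) for lay in (N, Mp(-1))]

print(gmr(type1_p, type1_ap))   # type I and type II give identical GMR:
print(gmr(type2_p, type2_ap))   # same multiset of series resistances
```

Because the P (and likewise the AP) stacks of the two types contain exactly the same multiset of layer resistances, the series sums agree channel by channel, which is precisely why the resistor model cannot distinguish the two stacking orders. 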
In this Letter we demonstrate for the first time that a description which incorporates phase-coherent transport over long length scales can account for such experiments. To illustrate this we have simulated type I and type II multilayers using a Co/Cu system with different thicknesses for the Co layers, namely $`t_{\mathrm{Cu}}=10`$ AP, $`t_{\mathrm{Co}}=10`$ AP, $`t_{\mathrm{Co}^{\prime }}=40`$ AP (atomic planes). The technique for computing transport properties is based on a three dimensional simple cubic tight-binding model with nearest neighbor couplings and two degrees of freedom per atomic site. The general spin-dependent Hamiltonian is $$H^\sigma =\sum _{i,\alpha }ϵ_i^{\alpha \sigma }c_{\alpha i}^{\sigma \dagger }c_{\alpha i}^\sigma +\sum _{i,j,\alpha \beta }\gamma _{ij}^{\alpha \beta \sigma }c_{\beta j}^{\sigma \dagger }c_{\alpha i}^\sigma ,$$ (1) where $`\alpha `$ and $`\beta `$ label the two orbitals (which for convenience we call $`s`$ and $`d`$), $`i,j`$ denote the atomic sites and $`\sigma `$ the spin. $`ϵ_i^{\alpha \sigma }`$ is the on-site energy, which can be written as $`ϵ_i^{\alpha \sigma }=ϵ_0^\alpha +\sigma h\delta _{\alpha d}`$ with $`h`$ the exchange energy and $`\sigma =-1`$ ($`\sigma =+1`$) for majority (minority) spins. In equation (1), $`\gamma _{ij}^{\alpha \beta \sigma }=\gamma _{ij}^{\alpha \beta }`$ is the hopping between the orbitals $`\alpha `$ and $`\beta `$ at sites $`i`$ and $`j`$, and $`c_{\alpha i}^\sigma `$ ($`c_{\alpha i}^{\sigma \dagger }`$) is the annihilation (creation) operator for an electron at the atomic site $`i`$ in an orbital $`\alpha `$ with a spin $`\sigma `$. $`h`$ vanishes in the non-magnetic metal, and $`\gamma _{ij}^{\alpha \beta }`$ is zero if $`i`$ and $`j`$ do not correspond to nearest neighbor sites. Hybridization between the $`s`$ and $`d`$ orbitals is taken into account by the non-vanishing term $`\gamma ^{sd}`$. We have chosen to consider two orbitals per site in order to give an appropriate description of the density of states of transition metals and to take into account inter-band scattering occurring at interfaces between different materials. The DOS of a transition metal consists of a narrow band (mainly $`d`$-like) embedded in a broader band (mainly $`sp`$-like). This feature can be reproduced in the above two band model, as shown in the reference where the appropriate choices of $`\gamma _{ij}^{\alpha \beta }`$ and $`ϵ_i^\alpha `$ for Cu and Co are discussed. We analyze the simplest generic model of disorder, introduced by Anderson within the framework of localization theory, which consists of adding a random potential $`V_i`$ to each on-site energy, with a uniform distribution of width $`W`$ ($`-W/2\le V_i\le W/2`$), centered on $`V_i=0`$: $$\stackrel{~}{ϵ}_i^{\alpha \sigma }=ϵ_i^{\alpha \sigma }+V_i.$$ (2) The conductances and GMR ratios are calculated within the Landauer-Büttiker theory of transport, using a technique already presented elsewhere. 
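As a minimal illustration of equations (1) and (2), the following sketch (not part of the letter; the parameter values are placeholders rather than the fitted Cu/Co parameters of the reference) builds one spin channel of the disordered two-band Hamiltonian on a short chain and diagonalizes it:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_band_chain(n_sites, eps_s, eps_d, g_ss, g_dd, g_sd, h, sigma, W):
    """One spin channel of the s-d Hamiltonian (1) on a 1D chain.

    Basis ordering: (site 0 s, site 0 d, site 1 s, site 1 d, ...).
    sigma = -1 (majority) or +1 (minority); the exchange splitting h acts
    on the d orbital only. W is the width of the Anderson distribution of
    equation (2); the same realisation V_i is applied to both orbitals of
    site i (one possible reading of the text).
    """
    H = np.zeros((2 * n_sites, 2 * n_sites))
    for i in range(n_sites):
        V = rng.uniform(-W / 2, W / 2)
        H[2 * i, 2 * i] = eps_s + V
        H[2 * i + 1, 2 * i + 1] = eps_d + sigma * h + V
        if i + 1 < n_sites:  # nearest-neighbour hoppings, including s-d
            H[2 * i, 2 * i + 2] = H[2 * i + 2, 2 * i] = g_ss
            H[2 * i + 1, 2 * i + 3] = H[2 * i + 3, 2 * i + 1] = g_dd
            H[2 * i, 2 * i + 3] = H[2 * i + 3, 2 * i] = g_sd
            H[2 * i + 1, 2 * i + 2] = H[2 * i + 2, 2 * i + 1] = g_sd
    return H

# Narrow d band (small g_dd) embedded in a broad s band (large g_ss):
H_min = two_band_chain(200, eps_s=0.0, eps_d=-1.0, g_ss=-1.0, g_dd=-0.1,
                       g_sd=0.2, h=0.7, sigma=+1, W=0.6)
print(np.linalg.eigvalsh(H_min)[:5])  # spectrum of one disordered sample
```

The narrow $`d`$ band produced by the small $`d`$-$`d`$ hopping is the feature that makes the spin-independent Anderson potential effectively spin-dependent, as discussed below. 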
In figure 2 we present the mean GMR ratio for type I (type II) multilayers GMR<sub>I</sub> (GMR<sub>II</sub>) and the difference between the GMR ratios of type I and type II multilayers ΔGMR=GMR<sub>I</sub>−GMR<sub>II</sub>, as a function of $`\mu `$ for different values of the on-site random potential. The average has been taken over 10 different random configurations, except for very strong disorder where we have considered 60 random configurations. In the figure we display the standard deviation of the mean only for ΔGMR, because for GMR<sub>I</sub> and GMR<sub>II</sub> it is negligible on the scale of the symbols. It is clear that type I multilayers possess a larger GMR ratio than type II multilayers, and that both the GMR ratios and their difference increase for large $`\mu `$. These features are in agreement with experiments and cannot be explained within the standard Boltzmann description of transport. The increase of the GMR ratio as a function of the number of bilayers is a consequence of the enhancement of the spin asymmetry of the current due to disorder. In fact, despite the Anderson potential being spin-independent, it will be more effective on the $`d`$ band than on the $`s`$ band, because the former possesses a smaller bandwidth. Since the minority spin sub-band is dominated by the $`d`$-electrons and the majority by the $`s`$-electrons, the disorder will suppress the conductance more strongly in the minority band than in the majority. Moreover, since the transport is phase-coherent, the asymmetry builds up with the length, resulting in a length-dependent increase of the GMR ratio. The different GMR ratios of type I and type II multilayers can be understood by considering the inter-band scattering. Both multilayers possess the same conductance in the P alignment, while the conductance of type I multilayers in the AP alignment is smaller than that of type II. The inter-band scattering is very strong when an electron phase-coherently crosses a region where the magnetizations have opposite orientations; this occurs in each (N/M/N/M′) cell for type I multilayers, while only in the central cell for type II multilayers (see figures 1a and 1b). Hence the contribution to the resistance in the AP alignment due to inter-band scattering is larger in type I than in type II multilayers. Finally, when the elastic mean free path is comparable with a single Co/Cu cell one expects the resistor model to become valid. To illustrate this feature, figure 2 shows that in the case of very large disorder ($`W=1.5`$ eV), ΔGMR vanishes within a standard deviation, as predicted by the Valet and Fert theory. As a second example in which the dependence of the GMR ratio on disorder changes when the multilayer geometry is varied, consider the system whose AP alignment is sketched in figures 1c and 1d. In this case M and M′ are different materials chosen in such a way that the minority (majority) band of M possesses a good alignment with the majority (minority) band of M′. Moreover, the thickness of the N layers has been chosen in order to allow an AP alignment of the magnetizations of adjacent magnetic layers in both type I and type II multilayers. In this case both type I and type II multilayers exhibit conventional P and AP alignments, but their potential profiles are quite different. In figure 3 we present a schematic view of the potential profiles of type I and type II multilayers, for both spins, in the P and AP configurations. A high barrier corresponds to strong scattering and a small barrier corresponds to weak scattering. The dashed line represents the effective potential for material M and the continuous line that for material M′. Figure 3 illustrates that type I multilayers possess a high-transmission spin channel in the AP alignment, and hence the resulting GMR ratio will be negative. In contrast, type II multilayers do not possess a high-transmission channel (there are large barriers for all spins in both the P and AP configurations) and the sign of the GMR ratio will depend on details of the band structure of M and M′. Consider the effects of disorder on these two kinds of multilayers. Using the same heuristic arguments as above, we expect that the GMR ratio of type I multilayers will increase (become more negative) as disorder increases, in the case of disorder that changes the spin asymmetry of the current. This is a consequence of the fact that, in common with the conventional single-magnetic-element multilayers, one of the spin sub-bands in the AP alignment is dominated by weakly scattered $`s`$-electrons (small barriers), which are only weakly affected by disorder. It is clear that this system is entirely equivalent to the conventional single-magnetic-element multilayers discussed above. In contrast, for type II multilayers there are no spin sub-bands entirely dominated by the weakly scattering (small barrier) $`s`$-electrons, and all spins in either the P or the AP configuration will undergo scattering by the same number of high barriers. In this case the effect of disorder will be to increase all the resistances, and this will result in a suppression of the GMR. 
Moreover, it is important to note that in the completely diffusive regime, where the resistances of the different materials may be added in series, the GMR ratio will vanish if $`R_{\mathrm{M}}^{(\mathrm{})}R_{\mathrm{M}^{\prime }}^{(\mathrm{})}`$, where $`R_{\mathrm{A}}^{(\mathrm{})}`$ is the spin-dependent (spin-up, spin-down) resistance of the material A. To verify this prediction we have simulated both type I and type II multilayers using the parameters corresponding to Co and Fe<sub>72</sub>V<sub>28</sub> of the reference, respectively for M and M′, and corresponding to Cu for N. This choice was motivated by the fact that a reverse CPP-GMR has been obtained for (Fe<sub>72</sub>V<sub>28</sub>/Cu/Co/Cu)$`\times \mu `$ multilayers. The GMR ratio for type I and type II multilayers is shown in figure 4, which illustrates the remarkable result that the GMR ratio of type I multilayers increases with disorder, while for type II structures it decreases. As explained above, this is due to an enhanced asymmetry between the conductances in the P and AP alignments for type I multilayers, and to a global increase of all the resistances for type II multilayers. As far as we know there are no experimental studies of the geometry-dependent effect described above, and further investigation would be of interest in order to clarify the rôle of disorder in magnetic multilayers. Despite the fact that GMR was discovered more than ten years ago, it continues to present fascinating insights into transport in magnetic heterostructures. In this Letter we have addressed a new issue which lies outside the widely-adopted Boltzmann description of GMR, namely that changing the order of the layers in magnetic multilayers can significantly alter the magnetoresistance. We have shown that this effect is a consequence of phase coherence on a length scale greater than the layer thicknesses. Acknowledgments: The authors want to thank D. Bozec, C. Marrows, B. Hickey and M. Howson from the University of Leeds for their suggestions and for permission to discuss results not yet published. This work is supported by the EPSRC, the EU TMR Programme and the DERA.
# Some decay modes of the $`1^{-+}`$ hybrid meson in QCD sum rules revisited Shi-Lin Zhu Institute of Theoretical Physics Academia Sinica, P.O.Box 2735 Beijing 100080, China FAX: 086-10-62562587 TEL: 086-10-62569358 E-MAIL: zhusl@itp.ac.cn ## Abstract The pionic coupling constants in the decays of the $`1^{-+}`$ hybrid meson are calculated. The double Borel transformation is invoked and the continuum contribution is subtracted. The decay widths of the processes $`1^{-+}\to \rho \pi ,f_1\pi ,\pi \gamma `$ are around $`40,100,0.3`$ MeV respectively. Comparison is made with previous calculations using three point correlation functions. PACS number: 12.39.Mk Keywords: hybrid meson There is increasing experimental evidence for a $`J^{PC}=1^{-+}`$ hybrid meson. The E852 and Crystal Barrel collaborations reported resonances with mass and width $`1370\pm 16_{-30}^{+50}`$ MeV, $`385\pm 40_{-105}^{+65}`$ MeV and $`1400\pm 20\pm 20`$ MeV, $`310\pm 50_{-30}^{+50}`$ MeV respectively, both in the $`\eta \pi `$ channel. Beladidze et al. in the VES experiment at IHEP reported a broad signal in the $`\eta \pi ^{-}`$ state. Very recently the E852 collaboration observed a $`J^{PC}=1^{-+}`$ exotic state with a mass of $`1593\pm 8_{-47}^{+29}`$ MeV and a width of $`168\pm 20_{-12}^{+150}`$ MeV in the $`\rho \pi `$ channel in the reaction $`\pi ^{-}p\to \pi ^+\pi ^{-}\pi ^{-}p`$ at $`18`$ GeV. We have calculated the binding energy and decay modes of heavy hybrid mesons with a heavy quark in the framework of heavy quark effective theory using the light cone QCD sum rule technique. In this work we extend the same formalism to calculate the decay widths of the processes $`1^{-+}\to \rho \pi ,f_1\pi ,\pi \gamma `$. Denote the isovector $`J^{PC}=1^{-+}`$ hybrid meson by $`\stackrel{~}{\rho }`$. The interpolating current for $`\stackrel{~}{\rho }`$ reads $$J_\mu (x)=\overline{u}(x)g_s\gamma ^\nu G_{\mu \nu }^a(x)\frac{\lambda ^a}{2}d(x).$$ (1) The overlapping amplitude $`f_{\stackrel{~}{\rho }}`$ is defined as $$\langle 0|J_\mu (0)|\stackrel{~}{\rho }\rangle =\sqrt{2}f_{\stackrel{~}{\rho }}m_{\stackrel{~}{\rho }}^3ϵ_\mu ,$$ (2) where $`ϵ_\mu `$ is the $`\stackrel{~}{\rho }`$ polarization vector. The decay amplitude for the p-wave decay process $`\stackrel{~}{\rho }\to \rho \pi `$ is $$M(\stackrel{~}{\rho }\to \rho \pi )=iϵ_{\mu \alpha \sigma \beta }ϵ^\mu e^\alpha q^\sigma p^\beta g_1,$$ (3) where $`e_\mu `$ is the polarization vector of the rho meson. For the decay $`\stackrel{~}{\rho }\to f_1(1285)\pi `$, there exist two independent coupling constants, corresponding to S-wave and D-wave decays. Since the D-wave decay width is much smaller than the S-wave width, we shall consider only the sum rule for the S-wave decay coupling constant. The decay amplitude is: $$M(\stackrel{~}{\rho }\to f_1\pi )=(\eta \cdot ϵ)g_2+\dots ,$$ (4) where $`\eta _\mu `$ is the polarization vector of the $`f_1`$ meson. We consider the correlators $$i\int d^4xe^{ipx}\langle \pi (q)|T\left(J_\rho ^\alpha (x)J_\mu ^{\dagger }(0)\right)|0\rangle =iϵ_{\mu \alpha \sigma \beta }q^\sigma p^\beta G_1(p^2,p^{\prime 2}),$$ (5) $$i\int d^4xe^{ipx}\langle \pi (q)|T\left(J_{f_1}^\alpha (x)J^{\mu \dagger }(0)\right)|0\rangle =g^{\mu \alpha }G_2(p^2,p^{\prime 2})+\dots ,$$ (6) where $`p^{\prime }=p-q`$, $`J_\rho ^\alpha (x)=\frac{1}{\sqrt{2}}[\overline{u}\gamma ^\alpha u(x)-\overline{d}\gamma ^\alpha d(x)]`$, $`J_{f_1}^\alpha (x)=\frac{1}{\sqrt{2}}[\overline{u}\gamma ^\alpha \gamma _5u(x)+\overline{d}\gamma ^\alpha \gamma _5d(x)]`$, $`\langle 0|J_\rho ^\alpha |\rho \rangle =f_\rho e^\alpha `$, and $`\langle 0|J_{f_1}^\alpha |f_1\rangle =f_{f_1}\eta ^\alpha `$. 
Since the steps to derive the sum rules for the coupling constants $`g_{1,2}`$ are very similar to those in previous work, we omit the details and present the final results directly: $$\sqrt{2}f_{\stackrel{~}{\rho }}m_{\stackrel{~}{\rho }}^3f_\rho m_\rho g_1e^{-(\frac{m_\rho ^2}{M_1^2}+\frac{m_{\stackrel{~}{\rho }}^2}{M_2^2})}=\sqrt{2}f_\pi \{[\mathrm{\Phi }_{\perp }(u_0)-\stackrel{~}{\mathrm{\Phi }}_{\perp }(u_0)+\stackrel{~}{\mathrm{\Phi }}_{\parallel }(u_0)]M^2+\frac{1}{36}\langle 0|g_s^2G^2|0\rangle \varphi _\pi (u_0)\},$$ (7) $$\sqrt{2}f_{\stackrel{~}{\rho }}m_{\stackrel{~}{\rho }}^3f_{f_1}m_{f_1}g_2e^{-(\frac{m_{f_1}^2}{M_1^2}+\frac{m_{\stackrel{~}{\rho }}^2}{M_2^2})}=\frac{f_\pi }{\sqrt{2}}\{[\mathrm{\Phi }_{\perp }^{\prime }(u_0)-2\stackrel{~}{\mathrm{\Phi }}_{\perp }^{\prime }(u_0)]M^4+\frac{1}{36}\langle 0|g_s^2G^2|0\rangle \varphi _\pi ^{\prime }(u_0)M^2\},$$ (8) where $`u_0=\frac{M_1^2}{M_1^2+M_2^2}`$, $`M^2\frac{M_1^2M_2^2}{M_1^2+M_2^2}`$, and $`M_1^2`$, $`M_2^2`$ are the Borel parameters. The definitions of the pion wave functions can be found in the references, and $`\mathrm{\Phi }^{\prime }(u)=\frac{d\mathrm{\Phi }(u)}{du}`$ etc. The sum rule is asymmetric in the Borel parameters $`M_1^2`$ and $`M_2^2`$, since the hybrid meson is heavier than the rho or $`f_1(1285)`$ meson. For simplicity, we have given the expressions after integration of the double spectral density in the interval $`(0,\mathrm{\infty })`$ for the right hand side of (7) and (8). The subtraction of the continuum contribution is discussed in the references, and is crucial for the numerical analysis. The values of the input parameters are $`f_\pi =0.132`$ GeV, $`m_{\stackrel{~}{\rho }}=1.6`$ GeV, $`f_{\stackrel{~}{\rho }}=0.026`$ GeV, $`m_\rho =0.77`$ GeV, $`f_\rho =0.22`$ GeV, $`m_{f_1}=1.285`$ GeV, $`f_{f_1}=0.24`$ GeV. We have used the mass sum rules of $`f_1(1285)`$ to obtain $`f_{f_1}`$. Moreover, we use $`\delta =0.18`$ GeV<sup>2</sup> instead of $`\delta =0.2`$ GeV<sup>2</sup> as in previous work. Let $`M_1^2=2\beta m_{\rho ,f_1}^2`$, $`M_2^2=2\beta m_{\stackrel{~}{\rho }}^2`$, where $`\beta `$ is a dimensionless scale parameter. Then we have $`u_0=\frac{m_{\rho ,f_1}^2}{m_{\rho ,f_1}^2+m_{\stackrel{~}{\rho }}^2}`$, $`M^2=\frac{2m_{\rho ,f_1}^2m_{\stackrel{~}{\rho }}^2}{m_{\stackrel{~}{\rho }}^2+m_{\rho ,f_1}^2}\beta `$. The sum rules (7) and (8) are stable under reasonable variation of the Borel parameter $`M^2`$ and the continuum threshold $`s_0`$. In order to avoid possible contamination from the radially excited states $`\rho (1450)`$ and $`f_1(1420)`$ we choose the continuum threshold $`s_0=(2.2\pm 0.2)`$ GeV<sup>2</sup>. Numerically we have $$g_1=(2.6\pm 1.2)\text{GeV}^{-1},$$ (9) $$g_2=(5\pm 2)\text{GeV}.$$ (10) The central values correspond to $`\beta =1.2`$ and $`s_0=2.2`$ GeV<sup>2</sup>. The errors refer to the variations with $`M^2`$, the uncertainty of $`s_0`$, the uncertainty of the pion wave functions, and the inherent uncertainty of the light cone QCD sum rule approach. In particular, the sum rule for $`g_2`$ involves the first derivative of the pion wave functions, so it is less reliable than that for $`g_1`$. The coupling constant $`|g_1|`$ was first calculated to be around $`27`$ GeV<sup>-1</sup> with $`m_{\stackrel{~}{\rho }}=1.3`$ GeV using three-point correlation functions at the symmetric point $`p^2=q^2=p^{\prime 2}`$. Later the hybrid mass and vertex sum rules were reanalysed, leading to $`g_1=9`$–$`10`$ GeV<sup>-1</sup> and $`7.7`$ GeV<sup>-1</sup> with $`m_{\stackrel{~}{\rho }}=1.5`$ GeV. 
The sum rules calculated at the symmetric point receive large contamination from the higher resonances and the continuum contribution, since only a single Borel transformation can be invoked, which renders their predictions less reliable. In order to illustrate this point more clearly we let $`s_0\to \mathrm{\infty }`$, i.e., with the continuum contribution unsubtracted. In this case we arrive at $`g_1=(5.2\pm 2.0)`$ GeV<sup>-1</sup>, which is numerically close to the value $`g_1=7.7`$ GeV<sup>-1</sup> obtained there. In other words, the continuum contributes as much as the ground state, so its subtraction is crucial for a reliable extraction of the coupling constant. The formulas for the decay widths are $$\mathrm{\Gamma }(\stackrel{~}{\rho }^{-}\to \rho ^{-}\pi ^0+\rho ^0\pi ^{-})=\frac{g_1^2}{12\pi }|\vec{q}_\pi |^3,$$ (11) $$\mathrm{\Gamma }(\stackrel{~}{\rho }^{-}\to f_1\pi ^{-})=\frac{g_2^2}{24\pi }\frac{|\vec{q}_\pi |}{m_{\stackrel{~}{\rho }}^2}(3+\frac{|\vec{q}_\pi |^2}{m_{f_1}^2}),$$ (12) where $`|\vec{q}_\pi |`$ is the pion decay momentum. Numerically, $$\mathrm{\Gamma }(\stackrel{~}{\rho }\to \rho \pi )=(40\pm 20)\text{MeV},$$ (13) $$\mathrm{\Gamma }(\stackrel{~}{\rho }\to f_1\pi )=(100\pm 50)\text{MeV}.$$ (14) We may further assume vector dominance to relate the coupling constant for the process $`\stackrel{~}{\rho }\to \gamma \pi `$ to $`g_1`$, $`g_{\stackrel{~}{\rho }\gamma \pi }=\frac{e}{2\gamma _\rho }g_1\approx 0.15`$ GeV<sup>-1</sup>, where $`\gamma _\rho =2.56`$. In this way we can estimate $`\mathrm{\Gamma }(\stackrel{~}{\rho }\to \gamma \pi )\approx g_{\stackrel{~}{\rho }\gamma \pi }^2\frac{m_{\stackrel{~}{\rho }}^3}{96\pi }\approx 300`$ keV. The width of the $`\rho \pi `$ decay channel from the present calculation is much smaller than the values from the vertex sum rules, which are $`600`$ MeV and $`250`$ MeV. 
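The quoted widths follow directly from equations (11) and (12); the short script below is an independent arithmetic check (not part of the original analysis) that reproduces the central values of (13) and (14):

```python
import math

def decay_momentum(M, m1, m2):
    """Momentum |q| of either daughter in the two-body decay M -> m1 m2 (GeV)."""
    return math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)

m_hyb, m_rho, m_f1, m_pi = 1.6, 0.77, 1.285, 0.14  # masses in GeV
g1, g2 = 2.6, 5.0  # central values of equations (9) and (10)

q_rho = decay_momentum(m_hyb, m_rho, m_pi)
q_f1 = decay_momentum(m_hyb, m_f1, m_pi)

width_rho = g1**2 / (12 * math.pi) * q_rho**3                 # equation (11)
width_f1 = g2**2 / (24 * math.pi) * q_f1 / m_hyb**2 \
           * (3 + q_f1**2 / m_f1**2)                          # equation (12)

print(f"Gamma(rho pi) = {1e3 * width_rho:.0f} MeV")  # ~40 MeV, cf. eq. (13)
print(f"Gamma(f1 pi)  = {1e3 * width_f1:.0f} MeV")   # ~100 MeV, cf. eq. (14)
```

Running this gives roughly 40 MeV and 100 MeV for the central couplings, in agreement with equations (13) and (14). 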
One might take a step further and try to extend the same formalism to the decay process $`1^{-+}\to b_1(1235)\pi `$. However, the $`b_1(1235)`$ mass sum rule is not stable. We do not consider this mode in this work. In summary, we have updated the QCD sum rule predictions for the pionic coupling constants in the light exotic meson decays and estimated the widths of some decay modes. Acknowledgments: This project was supported by the Natural Science Foundation of China.
# Ram pressure stripping of spiral galaxies in clusters ## 1 Introduction There is a long-standing debate concerning the effect of environment on galaxy morphology. The pioneering work of Butcher & Oemler (1978, 1984) first demonstrated that distant clusters contained a far higher fraction of blue galaxies than their local counter-parts. Subsequent work has established that many of the red galaxies in these clusters have spectral signatures of recent star formation (Dressler & Gunn, 1983, Couch & Sharples 1987, van Dokkum et al., 1998, Poggianti et al., 1998). The most recent advances have been made with the Hubble Space Telescope, which allows the morphology of the distant galaxies to be directly compared with the properties of their nearby counter-parts. The studies of Dressler et al. (1997) and Couch et al. (1998) suggest that the predominant evolutionary effects are that the distant clusters have a substantial deficit of S0 systems compared to nearby systems, and at lower luminosities they contain primarily Sc-Sd spirals compared with the large population of dwarf spheroidals in present day clusters. Many authors have suggested that the predominance of early-type S0 galaxies in local clusters is due to a mechanism that suppresses star formation in these environments, leading to a transformation of galaxy morphology. Comparison of the galaxy population of local and distant clusters provides the strongest evidence for this. This leads to the natural conclusion that the primary effect of the cluster environment is to transform luminous spiral galaxies into S0 types through suppression of their star formation. A key ingredient in the explanation of the Butcher-Oemler effect is the rate at which ‘fresh’ galaxies are supplied from the field into the cluster environment. The differences in the fractions of blue, or actively star forming, galaxies between the local and distant clusters may result either from an increase in the general level of star formation activity at higher redshift (e.g., Lilly et al. 1996, Cowie et al. 1997), or might result from a different level of infall between local and distant clusters (Bower, 1991, Kauffmann 1996). Several mechanisms have been proposed that may be capable of explaining the transformation of galaxy morphology in dense environments. Ram-pressure stripping has been a long-standing possibility, dating from the analytic work of Gunn & Gott (1972); it is a mechanism that has been cited in over 200 published abstracts. As a galaxy orbits through the cluster, it experiences a wind due to its motion relative to the diffuse gaseous intra-cluster medium (ICM). Although the ICM is tenuous, the rapid motion of the galaxy causes a large pressure front to build up in front of the galaxy. Depending on the binding energy of the galaxy’s own interstellar medium, the ICM will either be forced to flow around the galaxy, or will blow through the galaxy removing some or all of the diffuse interstellar medium. Mechanisms related to ram-pressure stripping are thermal evaporation of the interstellar medium (Cowie & Songaila 1977) and viscous stripping of galaxy disks (Nulsen 1982). These occur even when the ram-pressure is insufficient to strip the gas disk directly: turbulence in the gas flowing around the galaxy entrains interstellar medium, resulting in its depletion. If ram-pressure or viscous stripping is effective at removing gas, then cluster spirals should have truncated disks deficient in HI. Observational evidence for this is marginal. 
Some galaxies show clear evidence for stripping, e.g. NGC 4522 (Kenney & Koopmann 1998), UGC 6697 (Nulsen 1982) or several Virgo cluster galaxies (Cayatte et al. 1994). However, a larger survey of 67 cluster galaxies showed no evidence of these effects (Mould et al. 1995). Interactions between galaxies are another possible agent for promoting morphological transformation. However, strong interactions that lead to galaxy merging are unlikely to be an effective mechanism in virialised clusters of galaxies, since the relative velocity of galaxies is too high for such encounters to be frequent (e.g. Ghigna et al. 1998). Moore et al. (1996) examined the effects of rapid gravitational encounters between galaxies or with the lumpy potential structure of clusters. This mechanism has been termed galaxy ‘harassment’ and is highly effective at transforming fainter Sc-Sd galaxies to dSph’s and even tidally shredding LSB galaxies. Although this mechanism can account for the observed evolution of lower luminosity galaxies in clusters, the concentrated potentials of luminous Sa-Sb galaxies help to maintain their stability (Moore et al. 1999), although their disks are substantially thickened. A final mechanism that should not be over-looked is the truncation of star formation through the removal of the hot gas reservoir that is thought to surround galaxies (Larson et al. 1980, Benson et al. 1999). In clusters, any hot diffuse material originally trapped in the potential of the galaxy’s halo becomes part of the overall ICM. The galaxy (with the possible exception of the central dominant galaxy) cannot replenish this reservoir and thus is doomed to slowly exhaust the material available for star formation. This mechanism is the only environmental mechanism currently embedded in hierarchical galaxy formation codes (e.g., Kauffmann & Charlot 1998, Baugh et al. 1998); however, it appears unable to adequately reproduce the star formation histories of real cluster galaxies, since the spectroscopic studies require that star formation is suppressed on far shorter timescales (e.g., Barger et al. 1996, Poggianti et al. 1998). In this paper, we revisit ram-pressure stripping as a mechanism for the removal of gas from cluster galaxies and thus for rapidly suppressing the star-formation rate. In particular, we use fully 3 dimensional SPH simulations to compare with the analytic estimate of Gunn & Gott, and to investigate the effect of differing galaxy infall velocities, inclinations and cluster gas densities. Our main motivation is to investigate whether ram pressure could be effective in clusters less rich than the Coma cluster, and to determine whether the stripping effect is limited only to the outer part of the disk or whether the effects can propagate inwards. It is also of interest to determine the timescale on which the stripping should occur, since rapid truncation of star formation appears to be one of the key signatures of the spectroscopic Butcher-Oemler effect. Relatively little numerical work has been performed to examine the effects of ram pressure, especially in three dimensions (c.f. Farouki & Shapiro 1980). Balsara et al. (1994) performed several high resolution simulations of the gas stripping process using an Eulerian code, but again using a restricted 2-dimensional version. As well as being able to study the rich structure of the gas ablation process, these authors found that the galaxy could accrete gas from the downstream side of the flow. We note that Kundic et al. 
(1993) also reported a preliminary investigation of this problem using a smoothed particle hydrodynamic (SPH) code. The structure of this paper is as follows. In Section 2 we discuss the parameters of the galaxy model that we shall use and make predictions for the radius beyond which gas will be stripped via ram pressure. Techniques and results of numerical SPH simulations are presented in Section 3, which are discussed in Section 4, along with the shortcomings of ram pressure stripping as the mechanism behind explaining the Butcher-Oemler effect. ## 2 The galaxy model We construct an equilibrium galaxy model designed to represent the Milky Way, using the techniques described by Hernquist (1993). The model has a stellar and a gaseous disk, halo and bulge components. The bulge is spherical and has a mass density profile of the form: $$\rho _b(r)=\frac{M_b}{2\pi r_b^2}\frac{1}{r(1+r/r_b)^3},$$ (1) where $`r_b`$ is the scale length and $`M_b`$ is the mass. The model has a dark matter halo with density given by the following truncated profile: $$\rho _h(r)=\frac{M_h}{2\pi ^{3/2}}\frac{\alpha }{r_tr_h^2}\frac{\mathrm{exp}(-r^2/r_t^2)}{(1+r^2/r_h^2)}$$ (2) where $`r_h`$ is the core radius, $`r_t`$ is the truncation radius and $`M_h`$ is the mass. The mass normalisation requires that the constant: $$\alpha =1/\{1-\pi ^{1/2}q\mathrm{exp}(q^2)[1-\mathrm{erf}(q)]\}$$ (3) where $`\mathrm{erf}(q)`$ is the error function and $`q=r_h/r_t`$. The disk is axisymmetric and is composed of both stars and gas. Its mass density profile is an exponential of the form: $$\rho _d(R,z)=\frac{M_d}{4\pi R_d^2z_d}\mathrm{exp}(-R/R_d)\mathrm{sech}^2(z/z_d)$$ (4) where $`R_d`$, $`z_d`$ and $`M_d`$ are the cylindrical scale length, the vertical thickness and the mass, respectively. We will replace the subscript $`d`$ by $`s`$ to refer to the stellar disk, or $`g`$ to refer to the gaseous disk. The characteristic length scales of each component are listed in Table 1, and we plot the contribution to the rotational velocity of the disk provided by each component in Figure 1. Dark matter begins to dominate the baryonic components beyond $`10`$ kpc, and the maximum rotational velocity of the disk is $`220\mathrm{km}\mathrm{s}^{-1}`$. Adopting a B band mass to light ratio of 2, the central surface brightness of the model galaxy is $`21`$ mags arcsec<sup>-2</sup>. ## 3 Analytic solution We have applied the ideas of Gunn & Gott (1972) to the galaxy model in order to obtain analytic estimates of the radius beyond which the gas will be stripped, $`R_{stp}`$, when the galaxy is moving through the intracluster medium (ICM). These authors stated that the gaseous disk will be removed if the ram pressure of the ICM is greater than the restoring gravitational force per unit area provided by the galaxy’s disk (c.f. Sarazin 1986). In this case, the ram pressure is $`P=\rho _{icm}v^2`$, where $`v`$ is the velocity of the galaxy with respect to the ICM and $`\rho _{icm}`$ is the gas density of the ICM. The restoring gravitational acceleration of a particle orbiting in the galaxy is $`\partial \varphi /\partial z`$, where $`z`$ is the coordinate perpendicular to the disk plane. The total gravitational potential, $`\varphi `$, of the galaxy can be obtained by solving the Poisson equation $`\nabla ^2\varphi (R,z)=4\pi G\rho (R,z)`$ for each component separately and summing. 
In the case of a face-on passage, we have for the bulge, $$\frac{\partial \varphi _b}{\partial z}(R,z)=\frac{GM_b}{(r+r_b)^2}\frac{z}{r}$$ (5) and for the halo, $$\frac{\partial \varphi _h}{\partial z}(R,z)=\frac{2\alpha GM_h}{\pi ^{1/2}r^2}\frac{z}{r}\int _0^{r/r_t}\frac{x^2\mathrm{exp}(-x^2)}{x^2+q^2}𝑑x.$$ (6) The analytical solution of the Poisson equation for the disk is not so straightforward as for the spherical components. Binney & Tremaine (1987) use separation of variables to solve this problem for the case of an infinitely thin disk with a surface density $$\sigma _d(R)=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\rho _d(R,z)𝑑z=\frac{M_d}{2\pi R_d^2}\mathrm{exp}(-R/R_d).$$ (7) We have adopted this approximation and used their formula (2-167) in order to compute the restoring gravitational acceleration of the disk: $$\frac{\partial \varphi _d}{\partial z}(R,z)=GM_d\int _0^{\mathrm{\infty }}\frac{J_0(kR)\mathrm{exp}(-k|z|)}{[1+(kR_d)^2]^{3/2}}k𝑑k,$$ (8) where $`J_0(x)=\pi ^{-1}\int _0^\pi \mathrm{cos}(x\mathrm{sin}\theta )𝑑\theta `$ is the Bessel function of the first kind of order zero. The stripping radius, $`R_{stp}`$, can then be computed by solving the equation $$\frac{\partial \varphi }{\partial z}(R,z)\sigma _g(R)=\rho _{icm}v^2,$$ (9) where the left hand side is the total restoring gravitational force per unit area of the model and the right hand side is the ram pressure. At a given radius, $`R`$, the restoring gravitational force per unit mass is a function of the coordinate $`z`$. This force is maximum at $`z=0`$; therefore, in order to completely remove the gas from the galaxy, the ram pressure must be greater than this value. In Figure 2 we plot both sides of Equation 9 (solid lines), for the parameters quoted in Table 1, which were chosen to represent observational values for an Sb type spiral galaxy like the Milky Way. We also show the contribution of each component to the maximum restoring force per unit mass as a function of the radius. The dots show the value of the total gravitational force per unit area computed directly from all particles of the N-body realisation. The agreement between the analytical estimates (solid curved line) and the envelope of values computed from the particles (dots) demonstrates the validity of using Equation 8. The ram-pressure values for 2 different ICM densities and relative velocities are shown as horizontal lines in Figure 2. These may be representative of galaxies passing through the cores of the Coma and Virgo clusters. The predicted radius of the final stripped gas disk, $`R_{stp}`$, is given by the intersection of the horizontal line with the total restoring force per unit area (solid curved line), roughly 3 kpc and 10 kpc for the values illustrated here. For $`R<R_{stp}`$ the restoring gravitational force per unit mass is greater than the ram-pressure and the gas remains bound to the galaxy. On the contrary, for $`R>R_{stp}`$ the ram-pressure overcomes the gravitational force and the gas can be stripped. Figure 2 demonstrates that the main contribution to the total restoring gravitational force comes from the stellar disk, although in the central parts of the galaxy the bulge becomes important. For this model, the bulge contributes 30% of the disk mass and dominates the vertical potential in the central 2 kpc. The halo provides a negligible contribution within 10 kpc, but begins to dominate on scales $`>20`$ kpc. 
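Evaluating equation (8) at $`z=0`$ gives $`2\pi G\mathrm{\Sigma }_d(R)`$, so the dominant (stellar-disk) term of the criterion in Equation 9 can be solved numerically with a few lines. The sketch below is illustrative only: the paper's Table 1 is not reproduced in this excerpt, so the disk masses are assumed Milky-Way-like values, and the bulge and halo terms are neglected (which underestimates $`R_{stp}`$ in the inner few kpc where the bulge dominates):

```python
import math

G = 4.301e-6                       # kpc (km/s)^2 / Msun
KPC_CM, MSUN_G = 3.0857e21, 1.989e33

def sigma_exp(R, M, R_d):
    """Surface density of an exponential disk (equation 7), Msun/kpc^2."""
    return M / (2 * math.pi * R_d**2) * math.exp(-R / R_d)

def stripping_radius(M_star, M_gas, R_d, rho_icm, v):
    """Solve 2 pi G Sigma_s(R) Sigma_g(R) = rho_icm v^2 by bisection.

    The left side is equation (8) at z = 0 times sigma_g, i.e. the
    stellar-disk term of equation (9). rho_icm in Msun/kpc^3, v in km/s,
    radii in kpc. The gas disk is assumed to share the stellar R_d."""
    ram = rho_icm * v**2
    restoring = lambda R: 2 * math.pi * G * sigma_exp(R, M_star, R_d) * sigma_exp(R, M_gas, R_d)
    if restoring(0.0) < ram:
        return 0.0                 # the whole disk would be stripped
    lo, hi = 0.0, 30.0 * R_d
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if restoring(mid) > ram else (lo, mid)
    return 0.5 * (lo + hi)

# Milky-Way-like disk: the masses are illustrative assumptions,
# R_d = 3.5 kpc as in the text below.
M_star, M_gas, R_d = 4e10, 1e10, 3.5
rho_C = 5.64e-27 * KPC_CM**3 / MSUN_G   # central Coma density in Msun/kpc^3

print(stripping_radius(M_star, M_gas, R_d, 0.1 * rho_C, 1000.0))  # ~9 kpc, Virgo-like
print(stripping_radius(M_star, M_gas, R_d, rho_C, 3000.0))        # ~1.6 kpc, Coma-core-like
```

With these assumptions the Virgo-like case returns $`R_{stp}\approx 9`$ kpc and the Coma-core case $`\approx 1.6`$ kpc, consistent with the roughly 10 kpc and 3 kpc read off Figure 2 once the bulge contribution is included. 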
One interesting and straightforward application of this model is the comparison of $`R_{stp}`$ with the $`H_I`$ observational data available for galaxies in clusters. Cayatte et al. (1994) analysed the surface brightness of 17 bright spirals in the Virgo cluster. They divided the sample into 4 subsamples according to the shape of the surface brightness profile and concluded that ram-pressure is the main cause of gas removal in subsample III (3 galaxies). They also present a list of isophotal $`H_I`$ and optical diameters for these galaxies. We solved Equation 9 for a ram-pressure corresponding to $`\rho _{icm}=0.1\rho _C`$ and $`v=1000\mathrm{km}\mathrm{s}^{-1}`$, with different values of the galaxy’s scale length $`R_d`$. In order to compare directly with the observations we define an “optical” radius $`R_o`$ as the size of the stellar disk. Then, we scale linearly from $`R_d`$ to $`R_o`$. For $`R_d=3.5`$ kpc this value is $`R_o=24`$ kpc, and at this position the density has decreased by a factor of $`10^3`$. In Figure 3 we show the ratio of the stripping radius to the “optical” radius, $`R_{stp}/R_o`$, as a function of $`R_o`$ for the model (solid line). The filled circles show the observed radii obtained by Cayatte et al. (1994) for their galaxies, and we find reasonable agreement between the model and the data. ## 4 Numerical simulations Hydro-dynamical simulations of ram-pressure stripping require enough spatial resolution to follow the interaction between the “cold” disk gas and the hot ICM. If the ICM particles are too massive, then they will punch holes in the gas disk like bullets and the flux of particles against the disk will be dominated by shot noise. With current computational resources, simulations that attempt to capture the full cosmological context of the formation of disks and their subsequent evolution within a cluster environment will be completely dominated by numerical effects. Ideally, gas particles will flow onto the disk, imparting a significant fraction of their momentum as their motion is halted by the disk gas. Too hot to accrete onto the disk, they will be forced to flow around and re-join the ICM. To achieve this spatial resolution, the ICM particles should have a mass that is at least as small as that of the disk gas particles. The limitation in the number of gas particles that SPH codes can handle makes it impossible to follow the evolution of a galaxy through an entire cluster. For example, the mass of gas inside the Abell radius $`r_A=1.5h^{-1}\mathrm{Mpc}`$ for the Coma cluster is $`M_{icm}=5\times 10^{13}h^{-5/2}M_{\mathrm{\odot }}`$ (White et al. 1993). On the other hand, the $`H_I`$ component of a massive galaxy is $`M_g\approx 10^{10}M_{\mathrm{\odot }}`$ (Canizares et al. 1986, Young et al. 1989), so that the ratio between the number of gas particles in the ICM and the galaxy would be $`\approx 10^4`$. With just $`10^5`$ SPH particles, a galaxy passing pericenter will encounter of order 10-100 gas particles. These will detonate the disk like nuclear explosions, leaving large holes and creating a large artificial drag. To suppress this effect, a minimum ICM gas mass equal to the disk particle mass is necessary, requiring $`N\approx 10^7`$ to $`10^8`$ gas particles for the ICM. To avoid this problem we simulate only the passage of the galaxy through the cluster core, where $`\rho _{icm}`$ and $`v`$ are maximum and the ram-pressure stripping is most effective. We represent the ICM as a flow of particles along a cylinder of radius $`R_{cyl}=30`$ kpc and thickness $`z_{cyl}=10`$ kpc. The axis of the cylinder is oriented in the $`z`$-direction, perpendicular to the plane of the galaxy in the face-on case. 
We also carry out simulations in which the galaxy is passing edge-on and inclined at $`45^o`$ to the direction of motion through the ICM, for which we use a box of size $`60\mathrm{kpc}\times 60\mathrm{kpc}\times 10\mathrm{kpc}`$. Initially, we randomly distributed $`N_{ICM}=16000`$ gas particles inside the cylinder ($`N_{ICM}=20000`$ for the box) with a density $`\rho _{icm}`$ and a temperature $`T`$. We have chosen the temperature $`T=8`$ keV and the density to range from the central density of a cluster like Coma, $`\rho _{icm}=\rho _C5.64\times 10^{-27}h_{50}^{1/3}\mathrm{g}\mathrm{cm}^{-3}`$ (Briel, Henry & Bohringer 1992), to the density of a cluster like Virgo, $`\rho _{icm}=0.1\rho _C`$. (Throughout this paper we have adopted a value of the Hubble constant of $`H_o=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$.) In order to represent the passage of the galaxy through the ICM we give all gas particles in the cylinder an initial velocity $`v`$. We also carried out a test simulation in which we increased the density of ICM particles from an initial value of zero, such that the galaxy feels a gradual increase in pressure, rather than a sudden shock. The final stripping radius was the same as in the case of an instantaneous wave of particles of the full density. This allows us to save a substantial fraction of the computational time. We have used the TREE-SPH code developed and kindly made available by Navarro & White (1993). We have modified this code in order to include periodic boundary conditions for ICM gas particles that leave the cylinder or the box. Each particle that leaves the cylinder or the box at $`z=z_{cyl}/2`$ is re-entered at $`z=-z_{cyl}/2`$. We also apply reflecting boundary conditions for particles that leave the cylinder edges at $`x^2+y^2=R_{cyl}^2`$. 
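The boundary scheme just described can be summarised in code. The following is a schematic paraphrase only: the authors' actual TREE-SPH modification is not shown in the paper, and the direction of the re-entry face is handled here by a symmetric periodic wrap:

```python
import numpy as np

def apply_cylinder_boundaries(pos, vel, R_cyl=30.0, z_cyl=10.0):
    """Boundary conditions for ICM particles in the cylinder (kpc).

    Periodic in z: a particle leaving one face re-enters at the other.
    Reflecting on the curved wall x^2 + y^2 = R_cyl^2: the radial
    velocity component is flipped and the position mirrored inside.
    A sketch of the scheme described in the text, not the authors' code.
    """
    # Periodic faces at z = +/- z_cyl / 2.
    pos[:, 2] = (pos[:, 2] + z_cyl / 2) % z_cyl - z_cyl / 2

    # Reflecting curved wall.
    r = np.hypot(pos[:, 0], pos[:, 1])
    out = r > R_cyl
    if np.any(out):
        nx, ny = pos[out, 0] / r[out], pos[out, 1] / r[out]  # outward normal
        v_rad = vel[out, 0] * nx + vel[out, 1] * ny
        vel[out, 0] -= 2 * v_rad * nx        # reverse the radial component
        vel[out, 1] -= 2 * v_rad * ny
        pos[out, 0] = (2 * R_cyl - r[out]) * nx   # mirror position inside
        pos[out, 1] = (2 * R_cyl - r[out]) * ny
    return pos, vel
```
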
In Table 2 we list the main characteristics of the simulations. The code has individual timesteps that are typically $`10^4`$ years, and we run each simulation for more than $`10^8`$ years. In Figures 4 and 5 we show the projected distribution of disk gas particles at the final output ($`xy`$ plane and $`xz`$ plane, respectively) for four of the simulations. Run A is the model galaxy in isolation; runs F, H and I are face-on, edge-on and inclined $`45^o`$ to the direction of motion. At the final time, the distribution of stars and dark matter particles remains very similar to the initial conditions, whereas the gas distribution is strongly modified by the ram-pressure. In Figure 6 we show the evolution of the radius $`R_{stp}`$, and the fraction of gas mass that remains inside this radius. We estimate $`R_{stp}`$ as the radius of the most distant gaseous disk particle from the center, and the mass is calculated using the disk particles inside a cylinder of radius $`R_{stp}`$ and thickness of 1 kpc. The dotted and solid curves correspond to simulations B to G (face-on) from top to bottom, respectively, i.e., monotonically increasing the amount of ram-pressure. The short dashed line corresponds to simulation H (edge-on) and the long dashed line to simulation I (inclined $`45^o`$). In Figure 7 we have plotted the stripping radius, $`R_{stp}`$, as a function of the velocity $`v`$ of the ICM. The curves show the analytical solution of Equation 9 for different values of the density: $`\rho _{ICM}=0.1\rho _C`$ (dotted line), $`\rho _{ICM}=\rho _C`$ (solid line) and $`\rho _{ICM}=10.0\rho _C`$ (dashed line). For each ICM density, the upper curve shows the solution taking into account the restoring gravitational force of all components of the galaxy, and the lower corresponds to including just the stellar disk. The solid squares are the values measured from the numerical simulations of the face-on passages, whilst the open square denotes the edge-on simulation. The open triangle shows the $`45^o`$ simulation, which is intermediate between face-on and edge-on. We also show as solid circles the observed radii from Cayatte et al. (1994) for 3 galaxies in the Virgo cluster, assigned a velocity of $`1200\mathrm{km}\mathrm{s}^{-1}`$. ## 5 Discussion The motivation for this paper was to examine the effectiveness of ram pressure stripping at removing the reservoir of cold gas from spiral galaxies. In particular, could ram pressure be the key mechanism behind the Butcher-Oemler effect by rapidly truncating star formation in cluster galaxies? In the introduction, we outlined how a simple explanation of the Butcher-Oemler effect might work. New galaxies are supplied to the dense cluster environment from the field. The evolution of the rate of this supply is well described by numerical models for the evolution of gravitational structure, or their analytical approximations (e.g. Kauffmann 1996). Another important factor is the level of star formation activity in the galaxies before they feel the influence of the cluster. It is generally believed that star formation levels are higher in the intermediate redshift universe than locally (Lilly et al. 1996, Cowie et al. 1997, Steidel et al. 1998). The second ingredient of the explanation is the effect of the cluster environment on the evolution of the galaxies’ star formation rates. A general decline is expected since galaxies in the cluster will gradually consume the gas in their disks, and the possible sources of replenishment, such as HVC’s or gas rich satellites, will be stripped away. However, a slow decline is not adequate to explain the strong Balmer absorption line spectra frequently seen in the cluster galaxies (Couch & Sharples, 1987, Barger et al., 1995). In order to match the strength of such lines, a sudden decrease in the star formation rate is required (Poggianti & Barbaro, 1996). In the more extreme cases, the line strength can only be matched if the truncation is preceded by a burst of star formation; a burst would make the age distribution of the weaker lined systems easier to understand as well. Galaxy harassment could provide the mechanism to initiate a burst of star formation once a galaxy enters the cluster environment. It is also very efficient at causing instabilities that drive large amounts of gas to the central regions of spirals (Lake et al. 1998), although these processes are less efficient in luminous spirals (Moore et al. 1999). The numerical experiments of this paper put us in the position to assess the plausibility of ram-pressure stripping as the truncation mechanism. Initially, this scenario seems promising. In the Coma cluster environment, the wind due to the ICM causes a substantial reduction in the size of gaseous disks. Indeed, Bothun & Dressler (1986) find several star-bursting, HI deficient spirals in the core of the Coma cluster. The timescale for this is very rapid and is shorter than the time taken to cross the cluster core. However, beyond this superficial success, a number of problems remain to be addressed: * The largest deficit of this model is that in no case is the gas disk completely removed. 
A substantial portion of the cold gas remains sufficiently bound to the stellar disk such that the external medium prefers to flow around the system. In the most extreme case, of a galaxy passing through the core of the Coma cluster at $`3000\mathrm{km}\mathrm{s}^{-1}`$, the disk is truncated at $`1.5`$ disk scale lengths. We can estimate the corresponding reduction in the star formation rate using the Schmidt star formation law (Schmidt 1959, Kennicutt 1989) to calculate the contribution to the overall star formation rate at each radius. For the unfortunate galaxy mentioned above, the star formation rate will be reduced by a factor 2 (a numerical version of this estimate is sketched after this list). In lower density environments, the effect is much weaker: stripping the disk beyond 3 scale lengths reduces the star formation rate by only 10%. Fujita & Nagashima (1998) recently examined the colour evolution of spiral galaxies that have suffered ram-pressure stripping, with similar conclusions. * Several authors find that the star-formation rate in cluster galaxies is significantly reduced between the field and the cluster center (Dressler et al. 1997, Balogh et al. 1998, Poggianti et al. 1998). Even when the diffuse gaseous material is stripped from the disk, additional gas will remain in the form of dense molecular clouds. These cannot be removed by the ram-pressure force since they are so small and dense. In local Sa-Sc galaxies, the mass of molecular gas can equal the atomic gas fraction (Young & Scoville, 1991). * Comparison of the stripped gas fractions for galaxies in the cores of the Coma and Virgo clusters shows that ram-pressure is only a significant force in the densest regions. In contrast, the data of Balogh et al. (1998), and of Morris et al. (1998), suggest that the influence of the environment extends out to as much as twice the cluster virial radius. Some of this effect probably comes from galaxies that are embedded in groups and poor clusters that are part of the large-scale structure around the cluster. Secondly, galactic orbits in clusters that form in a hierarchical universe are fairly radial. Ghigna et al. (1998) demonstrate that 20% of cluster galaxies orbit with apocenter to pericenter ratios larger than 10:1. Thus, 20% of galaxies that have orbited through the core of the Coma cluster may be found at, or beyond, the virial radius. Nevertheless, to fully explain this effect we require a mechanism that is effective in environments less dense than the core of the Coma cluster. * Finally, we note that our simulations provide no explanation linking the stripping of gas with a burst of star formation, as is required to explain the most extreme absorption line spectra. The lack of such a link most likely results from physical processes that have been omitted from our simulations. For instance, in the edge-on case, the effect of the wind is to substantially compress the leading edge of the disk. It is quite plausible that this compression could lead to an increase in the collision rate of molecular clouds, leading to a substantial enhancement in the star formation rate. Fujita (1998) discusses the proposed mechanisms for inducing star-bursts in cluster galaxies, concluding that galaxy-harassment is the most viable candidate. This discussion suggests that simple ram-pressure stripping does not adequately explain the sharp decline of star formation seen in Butcher-Oemler galaxies. 
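The factor-of-2 estimate in the first point above can be reproduced in closed form. In the sketch below the Schmidt index $`n`$ is left as a free parameter, since the value adopted in the paper is not stated in this excerpt; $`n\approx 1`$ reproduces the quoted factor of 2:

```python
import math

def sfr_fraction(r_trunc, n=1.0):
    """Fraction of the total star formation that survives when an
    exponential gas disk is truncated at r_trunc (in units of the scale
    length), for a Schmidt law Sigma_SFR ~ Sigma_gas^n. Closed form of
    the integral of r*exp(-n*r) from 0 to r_trunc over the same
    integral taken to infinity."""
    return 1.0 - (1.0 + n * r_trunc) * math.exp(-n * r_trunc)

# Truncation at 1.5 scale lengths (the extreme Coma-core case above):
for n in (1.0, 1.5):
    factor = 1.0 / sfr_fraction(1.5, n=n)
    print(f"Schmidt index n={n}: star formation reduced by a factor {factor:.1f}")
# n = 1.0 gives a factor ~2.3, close to the quoted factor of 2; steeper
# Schmidt laws concentrate star formation at small radii and give
# smaller reduction factors (n = 1.5 gives ~1.5).
```
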
One possibility is that our models need to be generalised to explicitly include the effects of star formation and galaxy harassment. This will tend to make galaxies more susceptible to the ram pressure of the ICM. First, because the molecular clouds that are disrupted by star formation will not be able to re-form if the diffuse material has already been removed from the disk. Secondly, tidal shocks via galaxy harassment may tend to make the disk structure more diffuse (and therefore more susceptible to stripping); this process could be particularly important if the effect of the stripping were to promote a burst of star formation. We note that the restricted Eulerian treatment of this problem by Balsara et al. (1994) found that cooling gas may accrete back into the galaxy. We do not observe this phenomenon, but this is due to our resolution in low density regions, which are better resolved using grid based techniques. We are addressing this problem using higher resolution simulations performed using parallel SPH and Eulerian codes. ## 6 Conclusions We analyze the ram-pressure stripping process of a spiral galaxy passing through the ICM using hydro-dynamical simulations and we conclude that: * Ram pressure stripping is an effective mechanism at depleting gas from cluster spirals. The radius to which gas is removed can be calculated by equating the ram pressure force $`\rho v^2`$ to the restoring force provided by the disk, as originally suggested by Gunn & Gott (1972). * Bulges provide an additional gravitational force that dominates the holding force in the central few kpc. Even a Milky Way type spiral crossing the core of the Coma cluster at $`3000\mathrm{km}\mathrm{s}^{-1}`$ will retain gas within the central region. * The time-scale for gas to be removed is very short, $`\sim 10^7`$ years, a fraction of a crossing time, whereas the timescale for gravitational interactions (galaxy harassment) to affect morphology and induce star formation is of order a cluster crossing time. * Disks moving through clusters with orbital inclination edge-on to the direction of motion lose about 50% less gas than in a full face-on encounter with the ICM. * Observations of the $`H_I`$ distribution in cluster spirals show evidence for truncated disks by an amount roughly in accordance with analytic expectations. * Ram pressure stripping alone does not provide the physical mechanism behind the origin of the Butcher-Oemler effect. ## 7 Acknowledgments MGA would like to acknowledge support from Fundación Antorchas, Argentina and the British Council. BM is supported by a Royal Society University Research Fellowship.
# Polynomial Relations Among Characters coming from Quantum Affine Algebras ## 1 Introduction ### 1.1 Motivation The Jacobi-Trudi (or Giambelli) formula tells us that the Schur function of an arbitrary partition can be realized as the determinant of a matrix whose entries are homogeneous (or elementary) symmetric functions. In the language of representations of $`𝔤𝔩_n`$ indexed by Young diagrams, this says that the character of an arbitrary representation is a determinant of a matrix whose entries are characters for Young diagrams with a single row (or column). Now look at representations corresponding to rectangular Young diagrams. The matrix coming from an $`\ell \times (m+1)`$ rectangle contains as minors the matrices corresponding to rectangles of sizes $`(\ell -1)\times m`$, $`(\ell +1)\times m`$, $`\ell \times (m-1)`$, and $`\ell \times m`$ in two different ways. The three-term Plücker relation then yields the following identity: $$Q_m(\ell )^2=Q_{m-1}(\ell )Q_{m+1}(\ell )+Q_m(\ell -1)Q_m(\ell +1)$$ (1) where $`Q_m(\ell )`$ is the character associated to the $`\ell \times m`$ rectangular Young diagram. This beautiful identity is not as well-known as it ought to be. 
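Since both sides of equation (1) are symmetric polynomials, the identity is easy to check numerically. The following sketch (illustrative, not part of the paper) evaluates rectangular Schur polynomials through the bialternant formula at a random point:

```python
import numpy as np

def schur(partition, x):
    """Schur polynomial s_partition(x_1, ..., x_n) via the bialternant
    formula det(x_i^(lambda_j + n - j)) / det(x_i^(n - j))."""
    n = len(x)
    lam = list(partition) + [0] * (n - len(partition))
    num = np.linalg.det([[xi ** (lam[j] + n - 1 - j) for j in range(n)] for xi in x])
    den = np.linalg.det([[xi ** (n - 1 - j) for j in range(n)] for xi in x])
    return num / den

def Q(m, ell, x):
    """Character of the ell x m rectangular Young diagram; Q = 1 when the
    rectangle is empty."""
    return schur([m] * ell, x) if m > 0 and ell > 0 else 1.0

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 1.5, size=5)   # a random point for gl_5
m, ell = 4, 3
lhs = Q(m, ell, x) ** 2
rhs = Q(m - 1, ell, x) * Q(m + 1, ell, x) + Q(m, ell - 1, x) * Q(m, ell + 1, x)
print(lhs, rhs, abs(lhs - rhs) / lhs)  # the two sides agree to rounding error
```
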
The representations whose Young diagrams are a single column are the fundamental representations of $`𝔤𝔩_n`$, and one might hope that a similar picture could be constructed starting with the fundamental representations of other Lie algebras. Unfortunately, one can easily check that determinants filled in with those fundamental characters do not give characters of actual representations. Variants on the Jacobi-Trudi identity for other groups do exist (see Appendix A.3 of \[FH\]), but they do not behave well with respect to taking minors, and so do not yield analogs of equation (1). We can hope for a more satisfactory generalization, though: perhaps we could start with some other representations of $`𝔤`$, not necessarily irreducible, and build a set of representations made of their determinants which do satisfy relations like equation (1). In 1987, Kirillov and Reshetikhin investigated certain representations of a recently-defined quantum deformation of the universal enveloping algebra of $`𝔤`$. They conjectured that the analogs of fundamental representations for this algebra satisfied a generalization of equation (1). The representations were $`𝔤`$-modules as well, so they formed a good generalization of the complete $`𝔤𝔩_n`$ picture. In the present paper, we reverse this process. Beginning with the desire to generalize the $`𝔤𝔩_n`$ picture to types $`B`$, $`C`$ and $`D`$ and retain certain properties, we show that the Kirillov-Reshetikhin solution is in fact the only one, regardless of its interpretation in terms of quantum deformations. ### 1.2 Background Let $`𝔤`$ be a complex finite-dimensional simple Lie algebra, $`\widehat{𝔤}`$ its corresponding affine Lie algebra. Because of the inclusion of quantum enveloping algebras $`U_q(𝔤)\subset U_q(\widehat{𝔤})`$, any finite-dimensional representation of $`U_q(\widehat{𝔤})`$ is a direct sum of irreducible representations of $`U_q(𝔤)`$. Here we are particularly interested in the representations of $`U_q(\widehat{𝔤})`$ whose highest weights are multiples of one of the fundamental weights $`\omega _1,\dots ,\omega _n`$ of $`𝔤`$, $`n=rank(𝔤)`$. Unfortunately, there is presently no character formula known for these modules in general. The decomposition into $`U_q(𝔤)`$-modules has been explored in \[KR\] and \[ChP\], and recently by the author in \[K\]. Let $`Q_m(\ell )`$ denote the character of a certain $`U_q(\widehat{𝔤})`$-module with highest weight $`m\omega _{\ell }`$, where $`\ell =1,\dots ,n`$ and $`m`$ is a nonnegative integer (see section 2.1 for precise definitions). Based on a conjectural formula for the values of the $`Q_m(\ell )`$, these characters appear to satisfy certain remarkable polynomial identities. When $`𝔤`$ is simply-laced, the identities have the form $$Q_m(\ell )^2=Q_{m-1}(\ell )Q_{m+1}(\ell )+\prod _{\ell ^{\prime }\sim \ell }Q_m(\ell ^{\prime })$$ (2) for each $`\ell =1,\dots ,n`$ and $`m\ge 1`$. The product is taken over all $`\ell ^{\prime }`$ adjacent to $`\ell `$ in the Dynkin diagram of $`𝔤`$. When $`𝔤=𝔤𝔩_n`$ this is just equation (1); the relations in full generality are written down in Section 2.2. Using these relations, it is possible to write any character $`Q_m(\ell )`$ in terms of the characters $`Q_1(\ell )`$ of the fundamental representations of $`U_q(\widehat{𝔤})`$. The main result of this paper is that, for classical Lie algebras $`𝔤`$, these equations have only one solution where $`Q_m(\ell )`$ is the character of a $`U_q(𝔤)`$-module with highest weight $`m\omega _{\ell }`$. By this condition, we mean we require that $`Q_m(\ell )`$ is a positive integer linear combination of irreducible $`U_q(𝔤)`$-characters whose highest weights sit under $`m\omega _{\ell }`$. We use the polynomial relations to write some of the multiplicities with which the smaller representations appear in $`Q_m(\ell )`$ in terms of the multiplicities in the characters $`Q_1(\ell )`$. The resulting inequalities determine all of the multiplicities. The author is grateful to N. Yu. Reshetikhin for suggestion of the problem and words of wisdom. The research was partly supported by an Alfred P. Sloan Doctoral Dissertation Fellowship, and partly conducted while visiting the Research Institute for Mathematical Sciences (RIMS), Kyoto, Japan, thanks to the generosity of T. Miwa. ## 2 Polynomial relations ### 2.1 Definitions We let $`𝔤`$ be a finite-dimensional complex simple Lie algebra of rank $`n`$, and $`\widehat{𝔤}`$ be its corresponding affine Lie algebra. We will concentrate on the classical families $`A_n`$, $`B_n`$, $`C_n`$ and $`D_n`$. Choose simple roots $`\alpha _1,\dots ,\alpha _n`$ and fundamental weights $`\omega _1,\dots ,\omega _n`$ of $`𝔤`$. We will study certain finite-dimensional representations $`W_m(\ell )`$ of $`U_q(\widehat{𝔤})`$, where $`m=0,1,2,\dots `$ and $`\ell =1,\dots ,n`$. Since $`U_q(𝔤)`$ appears as a Hopf subalgebra of $`U_q(\widehat{𝔤})`$, we can talk about weights and characters of $`U_q(\widehat{𝔤})`$ modules by restricting our attention to the $`U_q(𝔤)`$ action. From this point of view, $`W_m(\ell )`$ has highest weight $`m\omega _{\ell }`$. The structure as a $`U_q(\widehat{𝔤})`$ module is determined by Drinfeld polynomials $`P_1(z),\dots ,P_n(z)`$ instead of weights; the polynomials for $`W_m(\ell )`$ are $$P_{\ell }(z)=\prod _{i=1}^{m}\left(z+\frac{(\alpha _{\ell },\alpha _{\ell })}{4}(m+1-2i)\right),\qquad P_k(z)=1\text{ for }k\ne \ell .$$ Chari and Pressley have also developed the notion of a $`U_q(\widehat{𝔤})`$ module being a “minimal affinization” of an irreducible $`U_q(𝔤)`$ module; see \[ChP\] for details. 
In this language, our representation $`W_m(\ell )`$ is the unique minimal affinization of the irreducible representation of $`U_q(𝔤)`$ with highest weight $`m\omega _{\ell }`$. Let $`Q_m(\ell )`$ denote the character of $`W_m(\ell )`$ viewed as a representation of $`U_q(𝔤)`$. If $`m=0`$ then $`W_m(\ell )`$ is the trivial representation and $`Q_m(\ell )=1`$. The objects $`W_1(\ell )`$ and $`Q_1(\ell )`$ are called the fundamental representations and characters. Finally, let $`V(\lambda )`$ denote the character of the irreducible representation of $`U_q(𝔤)`$ with highest weight $`\lambda `$. We will write characters $`Q_m(\ell )`$ as sums $`\sum m_\lambda V(\lambda )`$. Determining the integers $`m_\lambda `$ is of interest in part because they are closely related to solutions of certain Bethe equations; this is the subject of \[KR\] and \[K\]. We will refer to the coefficients $`m_\lambda `$ as the multiplicity of $`V(\lambda )`$ in the sum.

### 2.2 Relations

The characters $`Q_m(\ell )`$ when $`𝔤`$ is of type $`A_n`$ satisfy equation (1), known to mathematical physicists as the “discrete Hirota relations.” A conjectured generalization of these relations appears in \[KR\] for the classical Lie algebras, and appears as the “$`Q`$-system” in \[KNS\] for the exceptional cases as well. While we are only interested in the classical cases, we will give the relations in full generality. For every positive integer $`m`$ and for $`\ell =1,\ldots ,n`$,

$$Q_m(\ell )^2=Q_{m+1}(\ell )Q_{m-1}(\ell )+\prod _{\ell ^{\prime }\sim \ell }𝒬(m,\ell ,\ell ^{\prime })$$ (3)

The product is over all $`\ell ^{\prime }`$ adjacent to $`\ell `$ in the Dynkin diagram of $`𝔤`$, and the contribution $`𝒬(m,\ell ,\ell ^{\prime })`$ from $`\ell ^{\prime }`$ is determined by the relative lengths of the roots $`\alpha _{\ell }`$ and $`\alpha _{\ell ^{\prime }}`$, as follows:

$$𝒬(m,\ell ,\ell ^{\prime })=\{\begin{array}{cc}Q_m(\ell ^{\prime })\hfill & \text{if }(\alpha _{\ell },\alpha _{\ell })=(\alpha _{\ell ^{\prime }},\alpha _{\ell ^{\prime }})\hfill \\ Q_{km}(\ell ^{\prime })\hfill & \text{if }(\alpha _{\ell },\alpha _{\ell })=k(\alpha _{\ell ^{\prime }},\alpha _{\ell ^{\prime }})\hfill \\ \prod _{i=0}^{k-1}Q_{\lfloor \frac{m+i}{k}\rfloor }(\ell ^{\prime })\hfill & \text{if }k(\alpha _{\ell },\alpha _{\ell })=(\alpha _{\ell ^{\prime }},\alpha _{\ell ^{\prime }})\hfill \end{array}$$ (4)

where $`\lfloor x\rfloor `$ is the greatest integer not exceeding $`x`$. We note that in the classical cases, the product differs from the simplified version in equation (2) only when:

$$\begin{array}{ccc}\hfill 𝔤=𝔰𝔬(2n+1),& \ell =n-1:\hfill & Q_m(n-2)Q_{2m}(n)\hfill \\ & \ell =n:\hfill & Q_{\lfloor \frac{m}{2}\rfloor }(n-1)Q_{\lfloor \frac{m+1}{2}\rfloor }(n-1)\hfill \\ \hfill 𝔤=𝔰𝔭(2n),& \ell =n-1:\hfill & Q_m(n-2)Q_{\lfloor \frac{m}{2}\rfloor }(n)Q_{\lfloor \frac{m+1}{2}\rfloor }(n)\hfill \\ & \ell =n:\hfill & Q_{2m}(n-1)\hfill \end{array}$$

The structure of the product is easily represented graphically, with a vertex for each character $`Q_m(\ell )`$ and an arrow from $`Q_m(\ell )`$ pointing at each term of $`𝒬(m,\ell ,\ell ^{\prime })`$; see Figure 1 for $`𝔤`$ of type $`B_4`$ and $`C_4`$. The corresponding picture for $`G_2`$ is similarly pleasing. Finally, we can solve equation (3) to get a recurrence relation:

$$Q_m(\ell )=\frac{Q_{m-1}(\ell )^2-\prod _{\ell ^{\prime }\sim \ell }𝒬(m-1,\ell ,\ell ^{\prime })}{Q_{m-2}(\ell )}$$ (5)

Note that the recurrence is well-founded: repeated use eventually writes everything in terms of the fundamental characters $`Q_1(\ell )`$. This is just the statement that iteration of “move down, then follow any arrow” in Figure 1 will eventually lead you from any point to one on the bottom row. In fact, $`Q_m(\ell )`$ is always a polynomial in the fundamental characters, though from looking at the recurrence it is only clear that it is a rational function. A Jacobi-Trudi style formula for writing the polynomial directly was given in \[KNH\]. The reason that characters of representations of quantum affine algebras are solutions to a discrete integrable system is still a bit of a mystery.
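For type $`A_n`$, where all roots have the same length and $`𝒬(m,\ell ,\ell ^{\prime })=Q_m(\ell ^{\prime })`$, the recurrence (5) is easy to run symbolically. A minimal sketch, assuming the standard type-$`A`$ boundary convention $`Q_m(0)=Q_m(n+1)=1`$ (the symbol names are our own):

```python
import functools
import sympy as sp

n = 3                                           # type A_3
Q1 = {l: sp.Symbol(f"Q1_{l}") for l in range(1, n + 1)}

@functools.cache
def Q(m, l):
    """Q_m(l) for type A_n via the recurrence (5)."""
    if l == 0 or l == n + 1:                    # boundary convention (assumed)
        return sp.Integer(1)
    if m == 0:
        return sp.Integer(1)
    if m == 1:
        return Q1[l]
    num = Q(m - 1, l) ** 2 - Q(m - 1, l - 1) * Q(m - 1, l + 1)
    return sp.cancel(num / Q(m - 2, l))         # the division is exact

print(sp.expand(Q(2, 2)))   # Q1_2**2 - Q1_1*Q1_3
print(sp.expand(Q(3, 2)))   # Q1_2**3 - 2*Q1_1*Q1_2*Q1_3 + Q1_1**2 + Q1_3**2 - Q1_2
```

That `sympy.cancel` always returns a polynomial, never a genuine rational function, is exactly the polynomiality phenomenon remarked on above.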
## 3 Main Theorem

### 3.1 Statement of the Main Theorem

The result of \[KR\] was to conjecture a combinatorial formula for all the multiplicities $`Z(m,\ell ,\lambda )`$ in the decomposition $`Q_m(\ell )=\sum _\lambda Z(m,\ell ,\lambda )V(\lambda )`$. We will refer to these proposed characters as “combinatorial characters” of the representations $`W_m(\ell )`$, although the conjecture that they are characters of some $`U_q(\widehat{𝔤})`$ module is unproven.

###### Theorem 1 (Kirillov-Reshetikhin)

Let $`𝔤`$ be of type $`A`$, $`B`$, $`C`$ or $`D`$. The combinatorial characters of $`W_m(\ell )`$ are the unique solution to equations (3) and (4) with the initial data

$$\begin{array}{ccccc}A_n:\hfill & Q_1(\ell )\hfill & =& V(\omega _{\ell })\hfill & 1\le \ell \le n\hfill \\ B_n:\hfill & Q_1(\ell )\hfill & =& V(\omega _{\ell })+V(\omega _{\ell -2})+V(\omega _{\ell -4})+\mathrm{\cdots }\hfill & 1\le \ell \le n-1\hfill \\ & Q_1(n)\hfill & =& V(\omega _n)\hfill & \\ C_n:\hfill & Q_1(\ell )\hfill & =& V(\omega _{\ell })\hfill & 1\le \ell \le n\hfill \\ D_n:\hfill & Q_1(\ell )\hfill & =& V(\omega _{\ell })+V(\omega _{\ell -2})+V(\omega _{\ell -4})+\mathrm{\cdots }\hfill & 1\le \ell \le n-2\hfill \\ & Q_1(\ell )\hfill & =& V(\omega _{\ell })\hfill & \ell =n-1,n\hfill \end{array}$$

(each sum terminating at $`V(\omega _1)`$ or $`V(\omega _0)=V(0)`$ according to the parity of $`\ell `$). The solutions $`Q_m(\ell )`$ to this recurrence are all characters of $`U_q(𝔤)`$, and the decomposition into irreducible representations is described combinatorially in terms of generalized “rigged configurations.” The explicit combinatorial formula for this solution, as given in the paper, is computationally intractable. An effective algorithm for computing this solution to the recurrence relations was given by the author in \[K\]. The main result of this paper is that the specification of initial data in Theorem 1 is unnecessary.

###### Theorem 2

Let $`𝔤`$ be of type $`A`$, $`B`$, $`C`$ or $`D`$. The combinatorial characters of the representations $`W_m(\ell )`$ are the only solutions to equations (3) and (4) such that $`Q_m(\ell )`$ is a character of a representation of $`U_q(𝔤)`$ with highest weight $`m\omega _{\ell }`$, for every nonnegative integer $`m`$ and $`1\le \ell \le n`$.

We need only prove that any choice of initial data other than that in Theorem 1 would result in some $`Q_m(\ell )`$ which is not a character of a representation of $`U_q(𝔤)`$. The values $`Q_m(\ell )`$ are always virtual $`U_q(𝔤)`$-characters, but in all other cases, some contain representations occurring with negative multiplicity. As an immediate consequence, we have:

###### Corollary 3

If the characters $`Q_m(\ell )`$ of the representations $`W_m(\ell )`$ obey the recurrence relations in equations (3) and (4), then they must be given by the formula for combinatorial characters in \[KR\].

The technique of proof is as follows.
The possible choices of initial data are limited by the requirement that $`Q_1(\ell )`$ be a representation with highest weight $`\omega _{\ell }`$. That is, $`Q_1(\ell )`$ must decompose into irreducible $`U_q(𝔤)`$-modules as

$$Q_1(\ell )=V(\omega _{\ell })+\sum _{\lambda <\omega _{\ell }}m_\lambda V(\lambda )$$

Note that we require that $`V(\omega _{\ell })`$ occur in $`Q_1(\ell )`$ exactly once. Furthermore, we require that for every other component $`V(\lambda )`$ that appears, $`\lambda <\omega _{\ell }`$, i.e. that $`\omega _{\ell }-\lambda `$ is a nonzero linear combination of simple roots with nonnegative integer coefficients. We proceed with a case-by-case proof. For each series, we find explicit multiplicities of irreducible representations occurring in $`Q_m(\ell )`$ which would be negative for any choice of $`Q_1(\ell )`$ other than that of Theorem 1. The calculations for series $`B`$, $`C`$ and $`D`$ are found in sections 3.2, 3.3 and 3.4, respectively. When $`𝔤`$ is of type $`A_n`$, no computations are necessary, because every fundamental weight is minuscule: there are no $`\lambda <\omega _{\ell }`$ to worry about, and no other choices of initial data to rule out. In fact, $`Q_m(\ell )`$ is just $`V(m\omega _{\ell })`$ for all $`m`$ and $`\ell `$, and moreover every $`U_q(𝔤)`$ module is also acted upon by $`U_q(\widehat{𝔤})`$, by means of the evaluation representation.

### 3.2 Series $`B_n`$

Let $`𝔤`$ be of type $`B_n`$. Let $`V_i`$ stand for $`V(\omega _i)`$ for $`1\le i\le n-1`$, and $`V_{sp}`$ for the character of the spin representation with highest weight $`\omega _n`$. For convenience, let $`\omega _0=0`$ and $`V_0`$ denote the character of the trivial representation. Finally, we denote by $`V_n`$ the character of the representation with highest weight $`2\omega _n`$, which behaves like the fundamental representations. There are no dominant weights $`\lambda <\omega _n`$, so $`Q_1(n)=V_{sp}`$. The only weights $`\lambda <\omega _a`$ are $`0,\omega _1,\ldots ,\omega _{a-1}`$ for $`1\le a\le n-1`$, so we write

$$Q_1(a)=V_a+\sum _{b=0}^{a-1}M_{a,b}V_b$$ (6)

Our goal is to prove that the only possible values for the multiplicities are

$$M_{a,b}=\{\begin{array}{cc}1,\hfill & a-b\text{ even}\hfill \\ 0,\hfill & a-b\text{ odd}\hfill \end{array}$$ (7)

We will show these values are necessary inductively; the proof for each $`M_{a,b}`$ will assume the result for all $`M_{c,d}`$ with $`\lceil \frac{c-d}{2}\rceil <\lceil \frac{a-b}{2}\rceil `$ as well as those with $`\lceil \frac{c-d}{2}\rceil =\lceil \frac{a-b}{2}\rceil `$ and $`c+d>a+b`$. (Here $`\lceil x\rceil `$ is the least integer greater than or equal to $`x`$.) This amounts to working in the following order: First follow the diagonal from $`M_{n-1,n-2}`$ to $`M_{1,0}`$, then the one from $`M_{n-1,n-4}`$ to $`M_{3,0}`$, etc., ending in the top right corner with $`M_{n-1,0}`$ or $`M_{n-2,0}`$, depending on the parity of $`n`$. We show that equation (7) must hold for $`M_{a,b}`$, assuming it holds for all $`M_{c,d}`$ which appear earlier in this ordering, by the following calculations:

1. For $`M_{n-1,b}`$ where $`n-1-b`$ is odd, the multiplicity of $`V(\omega _b+\omega _n)`$ in $`Q_3(n)`$ is $`1-2M_{n-1,b}`$,

2. For other $`M_{a,b}`$ where $`a-b`$ is odd, the multiplicity of $`V(\omega _{a+2}+\omega _b)`$ in $`Q_2(a+1)`$ is $`-M_{a,b}`$,

3. For $`M_{a,b}`$ where $`a-b`$ is even: * The multiplicity of $`V(\omega _a+\omega _b)`$ in $`Q_2(a)`$ is $`2M_{a,b}-1`$, and * The multiplicity of $`V(\omega _{a+2}+\omega _b)`$ in $`Q_2(a+1)`$ is $`1-M_{a,b}`$.
Since all $`M_{a,b}`$ and all multiplicities are nonnegative integers, we must have $`M_{a,b}=0`$ to satisfy the first two cases and $`M_{a,b}=1`$ to satisfy the third. The calculations to prove these claims depend on the ability to tensor together the $`U_q(𝔤)`$-modules whose characters form $`Q_m(\ell )`$. A complete algorithm for decomposing these tensors is given in terms of crystal bases in \[N\]. For the current case, though, it happens that the only tensors we need to take are of fundamental representations. Simple explicit formulas for these decompositions had been given in \[KN\] before the advent of crystal base technology.

1. $`M_{n-1,b}`$, $`n-1-b`$ odd: We want to find the multiplicity of $`V(\omega _b+\omega _n)`$ in $`Q_3(n)`$. Recursing through the polynomial relations, we find that

$$Q_3(n)=Q_1(n)^3-2Q_1(n)Q_1(n-1)=Q_1(n)\left[Q_1(n)^2-2Q_1(n-1)\right]$$

Assuming equation (7) for $`M_{n-1,b^{\prime }}`$ for $`b^{\prime }>b`$ and recalling that $`Q_1(n)^2=V_{sp}^2=V_n+V_{n-1}+\mathrm{\cdots }+V_0`$, we need to compute the product

$$V_{sp}\left[V_n-V_{n-1}+V_{n-2}-\mathrm{\cdots }-V_{b+1}+(1-2M_{n-1,b})V_b-\mathrm{\cdots }\right]$$

Since $`V_{sp}V_k=\sum _{i=0}^{k}V(\omega _i+\omega _n)`$, we find that the multiplicity of $`V(\omega _b+\omega _n)`$ in the product is the desired $`1-2M_{n-1,b}`$.

2. $`M_{a,b}`$, $`a-b`$ odd, $`a\le n-2`$: This calculation is typical of many of the ones that will follow, and will be written out in more detail. We want to know the multiplicity of $`V(\omega _{a+2}+\omega _b)`$ in $`Q_2(a+1)`$. When $`a\le n-3`$, we have

$$Q_2(a+1)=Q_1(a+1)^2-Q_1(a+2)Q_1(a)$$

Assuming inductively that equation (7) holds for all $`M_{c,d}`$ that precede $`M_{a,b}`$ in our ordering, we have

$`Q_1(a+1)`$ $`=`$ $`V_{a+1}+V_{a-1}+\mathrm{\cdots }+V_b+M_{a+1,b-1}V_{b-1}+\mathrm{\cdots }`$

$`Q_1(a+2)`$ $`=`$ $`V_{a+2}+V_a+\mathrm{\cdots }+V_{b+1}+M_{a+2,b}V_b+\mathrm{\cdots }`$

$`Q_1(a)`$ $`=`$ $`V_a+V_{a-2}+\mathrm{\cdots }+V_{b+1}+M_{a,b}V_b+\mathrm{\cdots }`$

To compute $`Q_1(a+1)^2-Q_1(a+2)Q_1(a)`$, we note that the $`V_sV_t`$ term in $`Q_1(a+1)^2`$ and the $`V_{s+1}V_{t-1}`$ term in $`Q_1(a+2)Q_1(a)`$ are almost identical: when $`s>t`$, for example, the difference is just $`\sum _{i=0}^{t}V(\omega _i+\omega _{s-t-2+i})`$. In our case, the only $`V(\omega _{a+2}+\omega _b)`$ term that does not cancel out is the one contributed by $`M_{a,b}V_{a+2}V_b`$, and the multiplicity of $`V(\omega _{a+2}+\omega _b)`$ is $`-M_{a,b}`$. When $`a=n-2`$ the polynomial relations instead look like

$$Q_2(n-1)=Q_1(n-1)^2-Q_1(n)^2Q_1(n-2)+Q_1(n-1)Q_1(n-2)$$

The $`V_n+V_{n-2}+\mathrm{\cdots }`$ terms of $`Q_1(n)^2`$ behave just like the $`Q_1(a+2)`$ term above. The extra terms from $`Q_1(n-2)\left[Q_1(n-1)-V_{n-1}-V_{n-3}-\mathrm{\cdots }\right]`$ make no net contribution, as can be seen by checking highest weights.

3. $`M_{a,b}`$, $`a-b`$ even: Calculating the multiplicity of $`V(\omega _{a+2}+\omega _b)`$ in $`Q_2(a+1)`$ is similar to the above; the trick of canceling $`V_sV_t`$ with $`V_{s+1}V_{t-1}`$ works again. The only terms remaining are $`+1`$ from $`V_{a+1}V_{b+1}`$ and the same $`-M_{a,b}`$ from $`V_{a+2}M_{a,b}V_b`$ as above, so the multiplicity is $`1-M_{a,b}`$. Likewise, calculating the multiplicity of $`V(\omega _a+\omega _b)`$ in $`Q_2(a)`$ we find two contributions of $`M_{a,b}`$ from $`M_{a,b}V_aV_b`$ (in either order) in $`Q_1(a)^2`$, and a contribution of $`-1`$ from $`V_{a+1}V_{b+1}`$ in $`Q_1(a+1)Q_1(a-1)`$, so the multiplicity is $`2M_{a,b}-1`$.
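As a small independent sanity check, the relations can be evaluated at the identity, where each character becomes the dimension of the corresponding module. A sketch for $`B_2=𝔰𝔬(5)`$ (the Weyl dimension formula and root data below are standard; the irreducible splittings quoted in the comments are our own bookkeeping, not taken from \[KR\]):

```python
import numpy as np

# Positive roots and rho for B_2 = so(5), in orthogonal coordinates.
POS_ROOTS = [np.array(v, float) for v in [(1, -1), (0, 1), (1, 0), (1, 1)]]
RHO = np.array([1.5, 0.5])

def weyl_dim(lam):
    """Weyl dimension formula: prod over positive roots a of (lam+rho,a)/(rho,a)."""
    lam = np.asarray(lam, float)
    d = 1.0
    for a in POS_ROOTS:
        d *= np.dot(lam + RHO, a) / np.dot(RHO, a)
    return round(d)

w1, w2 = np.array([1.0, 0.0]), np.array([0.5, 0.5])   # fundamental weights
q1 = weyl_dim(w1)       # dim Q_1(1) = dim V(w1) = 5   (Theorem 1 initial data)
qsp = weyl_dim(w2)      # dim Q_1(2) = dim V_sp  = 4

# Node 2 is short, so Q(m,2,1) = Q_{floor(m/2)}(1) Q_{floor((m+1)/2)}(1):
q2_2 = qsp**2 - q1               # 11 = 10 + 1 = dim V(2 w2) + dim V(0)
q3_2 = qsp**3 - 2 * qsp * q1     # 24 = 20 + 4 = dim V(3 w2) + dim V(w2)
print(q2_2, weyl_dim(2 * w2) + 1)
print(q3_2, weyl_dim(3 * w2) + weyl_dim(w2))
```

Both pairs of printed numbers agree, so the dimensions produced by the recurrence can indeed be written as nonnegative combinations of dimensions of irreducibles sitting under $`m\omega _2`$, as Theorem 2 requires.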
### 3.3 Series $`C_n`$

Let $`𝔤`$ be of type $`C_n`$. We let $`V_i`$ stand for $`V(\omega _i)`$ for $`1\le i\le n`$. The only dominant weights $`\lambda <\omega _a`$ for $`1\le a\le n`$ are $`\lambda =\omega _b`$ for $`0\le b<a`$ and $`a-b`$ even, where $`\omega _0=0`$. (If $`a-b`$ is odd, then $`\omega _a`$ and $`\omega _b`$ lie in different translates of the root lattice, so are incomparable.) So we write

$$Q_1(a)=V_a+\sum _{i=1}^{\lfloor a/2\rfloor }M_{a,a-2i}V_{a-2i}$$ (8)

We will prove that in fact $`M_{a,b}=0`$ for all $`a`$ and $`b`$. Again we choose a convenient order to investigate the multiplicities: first look at $`M_{a,a-2}`$ for $`a=n,n-1,\ldots ,2`$, and then all $`M_{a,b}`$ with $`a-b=4,6,8,\ldots `$. This time the multiplicities acting as witnesses are:

1. For $`M_{a,a-2}`$, the multiplicity of $`V(\omega _{a-1}+2\omega _{a-2})`$ in $`Q_3(a-1)`$ is $`1-2M_{a,a-2}`$,

2. For $`M_{a,b}`$ for $`a-b\ge 4`$, the multiplicity of $`V(\omega _{a-2}+\omega _b)`$ in $`Q_2(a-1)`$ is $`-M_{a,b}`$.

Performing these computations requires the ability to tensor more general representations of $`𝔤`$ than were needed in the $`B_n`$ case. For this we use the generalization of the Littlewood-Richardson rule to all classical Lie algebras given in \[N\], which we summarize briefly in an Appendix.

1. $`M_{a,a-2}`$: We want to calculate the multiplicity of $`V(\omega _{a-1}+2\omega _{a-2})`$ in $`Q_3(a-1)`$. First we write $`Q_3(a-1)`$ as a sum of terms of the form $`Q_1(x)Q_1(y)Q_1(z)`$, which we denote as $`(x;y;z)`$ for brevity. When $`2\le a-1\le n-2`$, we have

$`Q_3(a-1)`$ $`=`$ $`(a-1;a-1;a-1)-2(a;a-1;a-2)`$ $`-(a+1;a-1;a-3)+(a;a;a-3)+(a+1;a-2;a-2)`$

When $`a-1`$ is one of $`1,2`$ or $`n-1`$, the above decomposition still holds, if we set $`Q_1(0)=1`$ and $`Q_1(-1)=Q_1(n+1)=0`$. We want to find the multiplicity of $`V(\omega _{a-1}+2\omega _{a-2})`$ in each of these terms. First, $`V(\omega _{a-1}+2\omega _{a-2})`$ occurs with multiplicity 3 in the $`V_{a-1}^3`$ component of $`Q_1(a-1)^3`$. We calculate this number using the crystal basis technique for tensoring representations. Beginning with the Young diagram of $`V_{a-1}`$, we must choose a tableau $`1,2,\ldots ,a-2,p`$ from the second tensor factor, where $`p`$ must be one of $`a-1`$, $`a`$, or $`\overline{a-1}`$. Then the choice of tableau from the third tensor component must be the same but replacing $`p`$ with $`\overline{p}`$. Similarly, the $`V_aV_{a-1}V_{a-2}`$ component of the $`(a;a-1;a-2)`$ term produces $`V(\omega _{a-1}+2\omega _{a-2})`$ with multiplicity 1, corresponding to the choice of the tableau $`1,2,\ldots ,a-2,\overline{a}`$ from the crystal of $`V_{a-1}`$. We see that the remaining three terms cannot contribute by looking at tableaux in the same way. Second, $`V(\omega _{a-1}+2\omega _{a-2})`$ occurs in the $`M_{a,a-2}V_{a-2}V_{a-1}V_{a-2}`$ piece of $`(a;a-1;a-2)`$ and the $`M_{a+1,a-1}V_{a-1}V_{a-2}V_{a-2}`$ piece of $`(a+1;a-2;a-2)`$ as the highest weight component. Our inductive hypothesis, however, assumes that $`M_{a+1,a-1}=0`$, and we start the induction with $`a=n`$, where the $`(a+1;a-2;a-2)`$ term vanishes entirely. Totaling these results, we find that the net multiplicity is $`1-2M_{a,a-2}`$, and conclude that $`M_{a,a-2}=0`$.

2. $`M_{a,b}`$ for $`a-b\ge 4`$: We want to calculate the multiplicity of $`V(\omega _{a-2}+\omega _b)`$ in $`Q_2(a-1)`$. For any $`2\le a-1\le n-1`$, we have

$`Q_2(a-1)`$ $`=`$ $`Q_1(a-1)^2-Q_1(a)Q_1(a-2)`$

$``$ $`=`$ $`(V_{a-1}+\mathrm{\cdots })(V_{a-1}+\mathrm{\cdots })-(V_a+M_{a,b}V_b+\mathrm{\cdots })(V_{a-2}+\mathrm{\cdots })`$

where every omitted term is either already known to be 0 by induction, or else has highest weight less than $`\omega _b`$, so cannot contribute.
As in the $`B_n`$ case, the $`V_{a-1}^2`$ and $`V_aV_{a-2}`$ terms nearly cancel one another’s contributions: their difference is just $`\sum _{k=0}^{a-1}V(2\omega _k)`$. Since $`V(\omega _{a-2}+\omega _b)`$ occurs in $`V_{a-2}V_b`$ with multiplicity 1, the net multiplicity in $`Q_2(a-1)`$ is $`-M_{a,b}`$, and we conclude that $`M_{a,b}=0`$.

### 3.4 Series $`D_n`$

Let $`𝔤`$ be of type $`D_n`$. This time we let $`V_i`$ stand for $`V(\omega _i)`$ for $`1\le i\le n-2`$, and use $`V_{n-1}`$ for the character of the representation with highest weight $`\omega _{n-1}+\omega _n`$. We will not need to explicitly use the characters of the two spin representations individually, only their product, $`V_{n-1}+V_{n-3}+\mathrm{\cdots }`$. There are no dominant weights under $`\omega _{n-1}`$ or $`\omega _n`$, and so no work to do on $`Q_1(n-1)`$ or $`Q_1(n)`$. For $`1\le a\le n-2`$, the only dominant weights $`\lambda <\omega _a`$ are $`\lambda =\omega _b`$ for $`0\le b<a`$ and $`a-b`$ even; again $`\omega _0=0`$. (If $`a-b`$ is odd, then $`\omega _a`$ and $`\omega _b`$ lie in different translates of the root lattice, so are incomparable.) So we write

$$Q_1(a)=V_a+\sum _{i=1}^{\lfloor a/2\rfloor }M_{a,a-2i}V_{a-2i}$$ (9)

We will show that in fact $`M_{a,b}=1`$ for all $`a`$ and $`b`$. Again the proof is by induction; to show $`M_{a,b}=1`$ we will assume $`M_{c,d}=1`$ as long as either $`c-d<a-b`$ or $`c-d=a-b`$ and $`c>a`$. (This is the same ordering used for the $`B_n`$ series after dropping the $`M_{a,b}`$ with $`a-b`$ odd.) Our witnesses this time are:

* The multiplicity of $`V(2\omega _b)`$ in $`Q_2(a-1)`$ is $`1-M_{a,b}`$, and

* The multiplicity of $`V(\omega _a+\omega _b)`$ in $`Q_2(a)`$ is $`2M_{a,b}-1`$.

We must therefore conclude that $`M_{a,b}=1`$. Since we only need to tensor fundamental representations together, the explicit formulas given in \[KN\] are enough to carry out these calculations. For any $`\ell \le n-3`$, the polynomial relations give us

$$Q_2(\ell )=Q_1(\ell )^2-Q_1(\ell +1)Q_1(\ell -1)$$

The multiplicity of $`V(2\omega _b)`$ in $`Q_2(a-1)`$ is easily calculated directly, since $`V(2\omega _b)`$ appears in $`V_rV_s`$ if and only if $`r=s\ge b`$, and then it appears with multiplicity one. The $`Q_1(a-1)^2`$ term therefore contains $`V(2\omega _b)`$ exactly $`(a-b)/2`$ times, while the $`Q_1(a)Q_1(a-2)`$ term subtracts off $`M_{a,b}-1+(a-b)/2`$ of them. Thus the net multiplicity is $`1-M_{a,b}`$. (A tiny counting script below makes this bookkeeping explicit.) To calculate the multiplicity of $`V(\omega _a+\omega _b)`$ in $`Q_2(a)`$ for $`a\le n-3`$, we once again use the trick of canceling the contribution from the $`V_sV_t`$ term of $`Q_1(a)^2`$ with the $`V_{s+1}V_{t-1}`$ term of $`Q_1(a+1)Q_1(a-1)`$. The cancellation requires more attention this time, since $`V(\omega _a+\omega _b)`$ occurs with multiplicity two in $`V_sV_t`$ when $`a+b>2n-s-t`$. In the end, the only terms that do not cancel are the contributions of $`M_{a,b}`$ from $`V_aV_b`$ and $`V_bV_a`$ in $`Q_1(a)^2`$ and of $`-1`$ from $`V_{b+1}V_{a-1}`$ in $`Q_1(a+1)Q_1(a-1)`$. Thus the net multiplicity is $`2M_{a,b}-1`$. Finally, if $`a=n-2`$ the polynomial relations change to

$$Q_2(n-2)=Q_1(n-2)^2-Q_1(n-1)Q_1(n)Q_1(n-3)$$

This change does not require any new work, though: $`Q_1(n-1)Q_1(n)`$ is just the product of the two spin representations, which decomposes as $`V_{n-1}+V_{n-3}+\mathrm{\cdots }`$. Since this is exactly what we wanted $`Q_1(\ell +1)`$ to look like in the above argument, the preceding calculation still holds.
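The $`V(2\omega _b)`$ count above is purely combinatorial, so it can be checked mechanically. A minimal sketch (the encoding is our own: each $`Q_1(c)`$ is a dictionary of multiplicities, the inductive-hypothesis coefficients are set to 1, the undetermined $`M_{a,b}`$ is left free, and the indices assume $`n`$ large enough that $`a\le n-2`$):

```python
def q1(c, M=None, b=None):
    """Multiplicities {index: mult} of Q_1(c) = V_c + V_{c-2} + ...; all
    coefficients are 1 under the inductive hypothesis, except that the
    coefficient of V_b may be overridden by an undetermined M."""
    coeffs = {d: 1 for d in range(c, -1, -2)}
    if b is not None:
        coeffs[b] = M
    return coeffs

def mult_v2wb(c1, c2, b):
    """Multiplicity of V(2 w_b) in a product of two Q_1's, using the rule
    from the text: V(2 w_b) sits in V_r V_s exactly when r = s >= b."""
    return sum(m * c2.get(r, 0) for r, m in c1.items() if r >= b)

a, b, M = 8, 2, 3          # any a - b even; M plays the unknown M_{a,b}
net = (mult_v2wb(q1(a - 1), q1(a - 1), b)
       - mult_v2wb(q1(a, M=M, b=b), q1(a - 2), b))
print(net, 1 - M)          # net multiplicity is 1 - M, forcing M_{a,b} = 1
```

Running this for various $`a`$, $`b`$ reproduces the counts $`(a-b)/2`$ and $`M_{a,b}-1+(a-b)/2`$ quoted in the argument.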
## Appendix: Littlewood-Richardson Rule for $`C_n`$

This is a brief summary of a generalization of the Littlewood-Richardson rule to Lie algebras of type $`C_n`$, as given in \[N\]. For our purposes, we only need the ability to tensor an arbitrary representation with one of the fundamental representations with highest weights $`\omega _1,\ldots ,\omega _n`$. The representation with highest weight $`\sum _{k=1}^{n}a_k\omega _k`$ is represented by a Young diagram $`Y`$ with $`a_k`$ columns of height $`k`$. For a fundamental representation $`V_k`$, we create Young tableaux from our column of height $`k`$ by filling in the boxes with $`k`$ distinct symbols $`i_1,\ldots ,i_k`$ chosen in order from the sequence $`1,2,\ldots ,n,\overline{n},\ldots ,\overline{2},\overline{1}`$ in all possible ways, as long as if $`i_a=p`$ and $`i_b=\overline{p}`$ then $`a+(k-b+1)\le p`$. These tableaux label the vertices of the crystal graph of the representation $`V_k`$. Given a Young diagram $`Y`$, the symbols $`1,2,\ldots ,n`$ act on it by adding one box to the first, second,…,$`n`$th row, and the symbols $`\overline{1},\overline{2},\ldots ,\overline{n}`$ act by removing one, provided the addition or removal results in a diagram whose rows are still nonincreasing in length. The result of the action of the symbol $`i_a`$ on $`Y`$ is denoted $`Yi_a`$. Then the tensor product $`VV_k`$, where $`V`$ has Young diagram $`Y`$, decomposes as the sum of all representations with diagrams $`(((Yi_1)i_2)\mathrm{\cdots }i_k)`$, where $`i_1,\ldots ,i_k`$ range over all tableaux of $`V_k`$ such that each of the actions results in a diagram whose rows are nonincreasing in length.
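The rule as stated is directly executable. Below is a minimal sketch implementing exactly the procedure above for $`C_2=𝔰𝔭(4)`$ (the data layout is our own choice: row-length tuples for diagrams, negative integers for barred symbols); it reproduces, e.g., $`V_1V_1=V(2\omega _1)+V(\omega _2)+V(0)`$ and $`V_2V_1=V(\omega _1+\omega _2)+V(\omega _1)`$.

```python
from itertools import combinations

N = 2                                       # rank: C_2 = sp(4)
# Alphabet 1 < 2 < ... < n < nbar < ... < 2bar < 1bar; pbar encoded as -p.
ALPHABET = list(range(1, N + 1)) + [-p for p in range(N, 0, -1)]

def admissible_columns(k):
    """Height-k columns: order-increasing symbols i_1 < ... < i_k with the
    constraint: if i_a = p and i_b = pbar then a + (k - b + 1) <= p."""
    cols = []
    for col in combinations(ALPHABET, k):   # combinations respect ALPHABET order
        ok = True
        for a_pos, x in enumerate(col, start=1):
            for b_pos, y in enumerate(col, start=1):
                if x > 0 and y == -x and a_pos + (k - b_pos + 1) > x:
                    ok = False
        if ok:
            cols.append(col)
    return cols

def act(rows, sym):
    """Symbol j adds a box to row j, jbar removes one; returns None when the
    row lengths stop being nonincreasing (or a row would go negative)."""
    new = list(rows)
    new[abs(sym) - 1] += 1 if sym > 0 else -1
    if new[-1] < 0 or any(new[i] < new[i + 1] for i in range(N - 1)):
        return None
    return tuple(new)

def tensor_with_fundamental(rows, k):
    """V(rows) x V_k as a multiset {row lengths of the result: multiplicity}."""
    out = {}
    for col in admissible_columns(k):
        y = tuple(rows)
        for sym in col:
            y = act(y, sym)
            if y is None:
                break
        if y is not None:
            out[y] = out.get(y, 0) + 1
    return out

print(len(admissible_columns(2)))           # 5 = dim V_2 for sp(4)
print(tensor_with_fundamental((1, 0), 1))   # {(2,0):1, (1,1):1, (0,0):1}
print(tensor_with_fundamental((1, 1), 1))   # {(2,1):1, (1,0):1}
```

Note that rows beyond the rank are never created, because only rows $`1,\ldots ,n`$ can receive a box; paths whose intermediate diagrams fail to be partitions are simply discarded, matching the text.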
# STOCHASTIC BACKGROUNDS OF GRAVITATIONAL WAVES FROM COSMOLOGICAL POPULATIONS OF ASTROPHYSICAL SOURCES

## 1 Introduction

Stochastic backgrounds of gravitational waves are interesting sources for the interferometric detectors that will soon start to operate. Their production is a robust prediction of any model which attempts to describe the evolution of the Universe at primordial epochs. However, bursts of gravitational radiation emitted by a large number of unresolved and uncorrelated astrophysical sources generate a stochastic background at more recent epochs, immediately following the onset of galaxy formation. Thus, astrophysical backgrounds might overwhelm the primordial ones, and their investigation provides important constraints on the detectability of signals coming from the very early Universe. The main characteristics of the gravitational backgrounds produced by cosmological populations of astrophysical sources depend both on the emission properties of each single source and on the source rate evolution with redshift. Extra-galactic backgrounds are proved to be mainly contributed by sources at redshifts $`z\sim 1-2`$, and their formation rate can not be simply extrapolated from its local value but must account for the evolution of the overall galactic population. The model we have adopted for the redshift evolution of the source rate of formation is described in Section 2 and is based on the star formation history derived from UV-optical observations of star forming galaxies out to redshifts of $`4-5`$. The gravitational wave sources for which the extra-galactic background contributions have been investigated so far are white dwarf binary systems during the early in-spiral phase and core-collapse SNae. In particular, we have considered the gravitational waves emitted during the core-collapse to a black hole and the gravitational radiation emitted by newly formed, rapidly rotating, hot neutron stars with an instability in their r-modes. The first choice was motivated by the results of numerical simulations of core-collapses: unlike the case of a core-collapse to a neutron star, the gravitational wave emission spectrum produced during a core-collapse to a black hole is rather generic, in the sense that it is sufficiently independent of the initial conditions and of the equation of state of the collapsing star. The second kind of sources were considered because of their high efficiency in producing gravitational signals: though preliminary, the investigations of the r-mode instabilities in highly rotating young neutron stars have proved that a considerable fraction of the star's initial rotational energy is converted into gravitational waves, making the process very interesting for gravitational wave detection. A brief description of the characteristics of the source emission spectra is given in Section 3. Finally, in Section 4 we derive the spectra of the corresponding backgrounds, explore the parameter space and discuss their detectability.

## 2 The source formation rate

In the last few years, the extraordinary advances attained in observational cosmology have led to the possibility of identifying actively star forming galaxies at increasing cosmological look-back times.
Using the rest-frame UV-optical luminosity as an indicator of the star formation rate and integrating over the overall galaxy population, the data obtained with the Hubble Space Telescope (HST), Keck and other large telescopes, together with the completion of several large redshift surveys, have enabled, for the first time, the derivation of coherent models for the star formation rate evolution throughout the Universe. A collection of some of the data obtained at different redshifts together with a proposed fit is shown in Figure 1. Because dust extinction can lead to an underestimate of the real UV-optical emission and, ultimately, of the real star formation activity, the data shown in Fig. 1 have been corrected upwards according to the factors implied by the Calzetti dust extinction law. Although the strong luminosity evolution observed between redshift 0 and 1-2 is believed to be quite firmly established, the amount of dust correction to be applied at intermediate redshift (thus the amplitude of the curve at $`z\sim 1-2`$) as well as the behaviour of the star formation rate at high redshift is still relatively uncertain. In particular, the decline of the star formation rate density implied by the $`<z>\sim 4`$ point of the Hubble Deep Field (HDF, see Fig. 1) is now contradicted by the star formation rate density derived from a new sample of Lyman break galaxies with $`<z>=4.13`$ which, instead, seems to indicate that the star formation rate density remains substantially constant at $`z>1-2`$. It has been suggested that this discrepancy might be caused by problems of sample variance in the HDF point at $`<z>=4`$. Thus, we have up-dated the star formation rate model that we have previously considered in the analysis, even though the gravitational wave backgrounds are almost insensitive to the behaviour of the star formation rate at $`z>1-2`$ because the contribution of very distant sources is very weak. Conversely, if a larger dust correction factor were applied at intermediate redshifts, it would result in a similar amplification of the gravitational background spectra. From the star formation history plotted in Figure 1, it is possible to infer the formation rate (number of objects formed per unit time) of a particular population of gravitational wave sources (remnants) by integrating the star formation rate density over the comoving volume element out to redshift $`z`$ and considering only those progenitors with masses falling in the correct dynamical range for the remnant to form, i.e.,

$$R(z)=\int _0^zdz^{\prime }\frac{\dot{\rho }_{\star }(z^{\prime })}{1+z^{\prime }}\frac{dV}{dz^{\prime }}\int _{\mathrm{\Delta }M}dM^{\prime }\mathrm{\Phi }(M^{\prime }),$$ (1)

where the factor $`(1+z)^{-1}`$ takes into account the dilution due to cosmic expansion and $`\mathrm{\Phi }(M)`$ is the initial mass function (IMF) chosen to be of Salpeter type, $`\mathrm{\Phi }(M)\propto M^{-(1+x)}`$ with $`x=1.7`$.
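Equation (1) is straightforward to evaluate numerically. The sketch below does so for an Einstein-de Sitter cosmology with $`H_0=50`$ km s$`{}^{-1}`$ Mpc$`{}^{-1}`$; the analytic form used for $`\dot{\rho }_{\star }(z)`$ is only an illustrative stand-in for the fit of Figure 1, so the printed numbers are order-of-magnitude checks against the total rates quoted in equation (2) below, not the paper's actual values.

```python
import numpy as np
from scipy.integrate import quad

# Einstein-de Sitter cosmology with H0 = 50 km/s/Mpc (h = 0.5).
C_KMS, H0 = 2.998e5, 50.0                                   # km/s, km/s/Mpc
d_c = lambda z: 2 * C_KMS / H0 * (1 - 1 / np.sqrt(1 + z))   # comoving distance, Mpc
dV_dz = lambda z: 4 * np.pi * C_KMS * d_c(z)**2 / (H0 * (1 + z)**1.5)   # Mpc^3

# Illustrative stand-in for the Fig. 1 fit (Msun yr^-1 Mpc^-3): rises to
# z ~ 1-2 and then stays roughly flat, as described in the text.
sfr = lambda z: 0.15 * (1 + z)**3.4 / (1 + ((1 + z) / 2.5)**5.2)

# Salpeter-type IMF Phi(M) ~ M^-(1+x), x = 1.7, normalized by mass on
# [0.1, 125] Msun so the M-integral counts progenitors per Msun formed.
x, m_lo, m_hi = 1.7, 0.1, 125.0
A = (x - 1) / (m_lo**(1 - x) - m_hi**(1 - x))
n_per_msun = lambda m1, m2: A / x * (m1**-x - m2**-x)

def rate(z_max, m1, m2):
    """Eq. (1): remnant formation rate (events per year) out to z_max."""
    vol = quad(lambda z: sfr(z) / (1 + z) * dV_dz(z), 0.0, z_max)[0]
    return vol * n_per_msun(m1, m2)

yr = 3.156e7                                                # seconds per year
print("R_BH ~ %.1f /s" % (rate(20, 25, 125) / yr))          # black hole progenitors
print("R_NS ~ %.1f /s" % (rate(20, 8, 20) / yr))            # neutron star progenitors
```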
Stellar evolution models have shown that single stars with masses $`\ge 8M_{\odot }`$ pass through all phases of nuclear burning and end up as core-collapse supernovae leading to a neutron star or a black hole remnant. While there seems to be a general agreement that progenitors with masses in the range $`8M_{\odot }\lesssim M\lesssim 20M_{\odot }`$ leave neutron star remnants, the value of the minimum progenitor mass which leads to a black hole remnant is still uncertain, mainly because of the unknown amount of fall-back of material during the supernova explosion. In our analysis, a reference interval of $`25M_{\odot }\lesssim M\lesssim 125M_{\odot }`$ was considered, but we have also investigated the effects of choosing a lower limit of $`20M_{\odot }`$ or $`30M_{\odot }`$ as well as an upper limit of $`60M_{\odot }`$. The rate of core-collapse SNae predicted for three cosmological background models is shown in Figure 2 as a function of redshift. The main difference between the three cosmologies is introduced by the geometrical effect of the comoving volume and is significant at $`z\gtrsim 1-2`$. This implies that the gravitational backgrounds, which are mainly contributed by sources at $`z\lesssim 1-2`$, are almost insensitive to the cosmological parameters. The total black hole formation rate $`R_{BH}`$ and neutron star formation rate $`R_{NS}`$ predicted by our model are

$$R_{BH}=3.3-4.7\text{ s}^{-1}\mathrm{\hspace{2em}}R_{NS}=13.6-19.3\text{ s}^{-1}$$ (2)

depending on the cosmological background model considered. The value predicted by our model for the local core-collapse SNa rate is in good agreement with the available observations.

## 3 The single source emission spectra

The emission spectrum that we have adopted as our model for the gravitational waves radiated by a core collapsing to a black hole was obtained from a fully non-linear numerical simulation of the Einstein+hydrodynamic equations for an axisymmetric core collapse. The main properties of the spectrum are shown in Fig. 3 for the collapse of a $`1.5M_{\odot }`$ naked core to a black hole at a distance of $`15`$ Mpc and for three assigned values of the angular momentum. The relevant quantity is the rotational parameter $`a=J/(GM_{core}^2/c)`$. In fact, there is a maximum in the emission at a frequency which depends on the value of $`a`$ and whose amplitude, for values of $`a`$ in the range $`0.2<a<0.8`$, scales as $`a^4`$. This peak is located at a frequency which is very close to the frequency of the lowest $`m=0`$ quasi-normal mode. This means that a substantial fraction of the energy will be emitted after the black hole has formed: it will oscillate in its quasi-normal modes until its residual mechanical energy is radiated away in gravitational waves. For high values of the rotational parameter, the geometry of the collapse is different as the star becomes flattened into the equatorial plane and then bounces vertically, but still continues to collapse inward until the black hole is formed. In this case, a low frequency component appears, with an amplitude which may become comparable to that of the peak corresponding to the quasi-normal modes. In general, the efficiency of this axisymmetric core-collapse to a black hole is $`\mathrm{\Delta }E_{GW}/M_{core}c^2\simeq 7\times 10^{-4}`$. It should be remembered that less symmetric configurations may result in a more efficient production of gravitational waves. A number of investigations of relativistic rotating stars has recently led to the discovery of a new class of instability modes, called the r-modes. These modes are characterized by having the Coriolis force as the restoring force and thus they are relevant only for rotating stars. Even though the analyses carried out so far are still preliminary and are based on several approximations, these modes, whose instability is driven by gravitational radiation, appear to efficiently radiate in gravitational waves a large part of the initial rotational energy in a relatively small time interval.
A preliminary estimate of the corresponding emission spectrum was recently obtained for a polytropic neutron star model with a $`1.4M_{\odot }`$ mass and a radius of $`12.53`$ km. We have adopted their proposed spectrum as our model for the single source emission in order to estimate the gravitational background produced by young, hot, rapidly rotating neutron stars through the r-mode instability. The evolution of the angular momentum of the star is determined by the emission of gravitational waves, which couple to the r-modes through the current multipoles, primarily that with $`l=m=2`$. For this mode, the frequency of the emitted gravitational radiation is $`\nu =(2/3\pi )\mathrm{\Omega }`$. The star is assumed to be initially rotating at its maximum spin rate, i.e., at its Keplerian value $`\mathrm{\Omega }_K`$, which corresponds to a gravitational wave frequency of $`1400`$ Hz for the star model considered. The evolution of $`\mathrm{\Omega }`$ during the phase in which the amplitude of the mode is small can be determined from the standard multipole expression for angular momentum loss, and from the energy loss due to the gravitational emission and to the dissipative effects induced by the bulk and shear viscosity. In this phase, $`\mathrm{\Omega }`$ is nearly constant and the instability grows exponentially. After a short time, the amplitude of the mode becomes close to unity and non-linear effects saturate and halt further growth of the mode. This phase lasts for approximately 1 yr, during which the star loses angular momentum, radiating approximately $`2/3`$ of its initial rotational energy in gravitational waves, up to a point where the angular velocity reaches a critical value, $`\mathrm{\Omega }_c`$. This value can be determined by solving the equation $`1/\tau (\mathrm{\Omega }_c)=0`$, where $`\tau `$ is the total dissipation time-scale, which can be decomposed as a sum of the damping times associated with the gravitational emission and with the shear and bulk viscosity. $`\tau (\mathrm{\Omega }_c)`$ is clearly a function of the temperature of the star, and it has been shown that the r-mode instability operates only in hot neutron stars ($`10^{10}\text{ K}\gtrsim T\gtrsim 10^9`$ K). Above $`10^{10}`$ K bulk viscosity kills the r-mode instability, whereas below $`10^9`$ K superfluidity and other non-perfect fluid effects become important and the damping due to viscosity dominates with respect to the destabilizing effect of the gravitational radiation. For the star model considered, $`\mathrm{\Omega }_c\simeq 566`$ Hz, which corresponds to a final spin period of $`11`$ ms and to $`\nu _{min}\simeq 120`$ Hz. Below this critical value, viscous forces and gravitational radiation damp out the energy left in the mode, and the star slowly reaches its final equilibrium configuration. The qualitative picture that arises from this simple model is believed to be sufficiently reliable, even though various uncertainties and approximations might affect the quantitative results for the initial rotation of the star after collapse, for the spin-down time-scales, as well as for the final rotation period. However, in this framework the expression of the energy spectrum can be approximated as follows,

$$\frac{dE_{GW}}{d\nu }\simeq \frac{4}{3}E_K\frac{\nu }{\nu _{max}^2}\mathrm{\hspace{2em}}\text{for}\mathrm{\hspace{0.5em}}\nu _{min}\le \nu \le \nu _{max}$$ (3)

where $`E_K`$ indicates the initial rotational energy.
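As a consistency check, integrating (3) over the emission band recovers the fraction of rotational energy quoted above:

$$\int _{\nu _{min}}^{\nu _{max}}\frac{4}{3}E_K\frac{\nu }{\nu _{max}^2}d\nu =\frac{2}{3}E_K\left(1-\frac{\nu _{min}^2}{\nu _{max}^2}\right)\simeq 0.66E_K,$$

since $`(\nu _{min}/\nu _{max})^2=(120/1400)^2\simeq 0.007`$; this is consistent with the statement that roughly $`2/3`$ of the initial rotational energy is radiated away.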
Thus, the mean flux emitted by this source can be written as,

$$f(\nu )=\frac{1}{4\pi d^2}\left(\frac{dE_{GW}}{d\nu }\right).$$ (4)

## 4 The stochastic backgrounds

In order to evaluate the spectral energy density, $`dE/dtdSd\nu `$, of the stochastic backgrounds produced by the radiation emitted during an axisymmetric black hole collapse and by the spin-down radiation from newly born neutron stars, we need to convolve the differential rate of sources, $`dR(z)`$, with the flux emitted by a single source at redshift $`z`$ as it would be observed today. This means that we account for the luminosity distance damping of the flux emitted by a single source and we redshift the emission frequencies. The corresponding values of the closure energy densities of gravitational waves can be obtained as follows,

$$\mathrm{\Omega }_{GW}(\nu _{obs})=\frac{\nu _{obs}}{c^3\rho _{\text{cr}}}\frac{dE}{dtdSd\nu },$$ (5)

where $`\rho _{\text{cr}}=3H_0^2/8\pi G`$, and are shown in Fig. 4. These figures have been obtained for a flat cosmological background model with zero cosmological constant and with a Hubble constant of $`H_0=100h=50\text{ km}\text{ s}^{-1}\text{ Mpc}^{-1}`$ (i.e. $`h=0.5`$). As previously mentioned, the effect of a varying cosmological background is negligible on the final properties of the stochastic backgrounds. In fact, the amplification of the rate at high redshifts shown in Fig. 2 for an open model and a model with a cosmological constant is mostly suppressed by the inverse squared luminosity distance dependence of the single source spectrum for the same models. The closure density of the black hole collapse background is shown in the left panel of Fig. 4 for three values of the rotational parameter. Since we do not know the distribution of angular momenta, for each curve all the sources of the ensemble were assumed to have the same value of $`a`$. Depending on this value, the closure density has a maximum amplitude in the range $`10^{-9}-10^{-10}`$ at frequencies between $`2-3`$ kHz. Even though the final properties of the background depend on the model that we have assumed as being representative of the process of gravitational collapse to a black hole, the relevant features of the energy spectrum we use to model each single event are likely to reasonably represent a generic situation. As for the dependence on the formation rate of black holes, the uncertainties which affect the evolution of the star formation rate at high redshifts are completely irrelevant, whereas variations induced by different lower and upper mass cut-offs of the progenitor mass range are limited to a factor $`\lesssim 2`$. As shown in the right panel of Fig. 4, the closure density for the neutron star background has a larger amplitude than the previous case and the main part of the signal is concentrated at lower frequencies. In fact, it is characterized by a wide maximum, ranging from $`0.7-1`$ kHz, with an amplitude of a few $`10^{-8}`$. Allowing for variations in $`\nu _{min}`$ and $`\nu _{max}`$ does not substantially alter the main features of the background, although some quantitative differences appear both in the small and large frequency part of the signal (see Figs. 6 and 8).
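Equations (3)-(5) can be chained into a direct numerical estimate of the r-mode background. The sketch below assumes an Einstein-de Sitter cosmology, a Gaussian toy model for $`dR/dz`$ normalized to the total $`R_{NS}`$ of equation (2), and an assumed initial rotational energy $`E_K=10^{52}`$ erg; none of these ingredients is taken from the paper's actual computation, so only the order of magnitude of the output is meaningful.

```python
import numpy as np
from scipy.integrate import quad

C = 2.998e10                                   # cm/s
H0 = 50 * 1.0e5 / 3.086e24                     # 50 km/s/Mpc in 1/s
RHO_CR = 3 * H0**2 / (8 * np.pi * 6.674e-8)    # critical density, g/cm^3
NU_MIN, NU_MAX = 120.0, 1400.0                 # Hz, r-mode band (Section 3)
E_K = 1.0e52                                   # erg; assumed rotational energy

def dE_dnu(nu):                                # eq. (3)
    return (4 / 3) * E_K * nu / NU_MAX**2 if NU_MIN <= nu <= NU_MAX else 0.0

def dR_dz(z):                                  # toy rate density, events/s per dz
    return 17.0 * np.exp(-((z - 1.5) ** 2)) / np.sqrt(np.pi)

def d_lum(z):                                  # EdS luminosity distance, cm
    return (1 + z) * 2 * C / H0 * (1 - 1 / np.sqrt(1 + z))

def omega_gw(nu_obs):                          # eqs. (4)-(5), emitted nu=(1+z)nu_obs
    spec = lambda z: (dR_dz(z) * (1 + z) * dE_dnu((1 + z) * nu_obs)
                      / (4 * np.pi * d_lum(z) ** 2))
    return nu_obs / (C**3 * RHO_CR) * quad(spec, 0.0, 10.0, limit=200)[0]

print(omega_gw(700.0))   # ~1e-7 with these toy inputs: the right ballpark
```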
The neutron star background allows a clear inspection of the impact of the star formation rate evolution on its final properties. In fact, in this case all the sources have been assumed to have the same mass and thus elements of the ensemble at the same redshift have exactly the same emission properties. Therefore, it is easier to distinguish the effects of the source rate evolution from those of the spectrum of each single event. The right panel of Fig. 5 shows the spectrum of the neutron star background. The maximum amplitude occurs around $`700`$ Hz. This means that the most significant contribution to the background signal comes from neutron stars at their maximum spin rate ($`1400`$ Hz, for our model) which are formed at redshifts $`z\sim 1-2`$, where the star formation rate reaches its maximum value before entering its high redshift plateau. Similarly, if one takes into account that the mean value of the core mass which collapses is around $`4-5M_{\odot }`$, the corresponding maximum in the contribution of a mean single source occurs at rest-frame frequencies in the range $`2-3`$ kHz. From the left panel of Fig. 5 it is possible to see that the maximum amplitude in the black hole background spectra corresponds to frequencies of $`1-2`$ kHz, depending on the value of the rotational parameter. Thus, the relevant contribution to the final black hole background signal comes from those sources which are formed around $`z\sim 1-2`$. Moreover, it is important to note that for sources, such as the ones we have described, which emit gravitational waves at rest-frame frequencies $`\nu \gtrsim 100`$ Hz, at frequencies of $`1-100`$ Hz, where cross-correlation between terrestrial interferometers can be accomplished, the stochastic background signal is entirely produced at $`0<z<1-2`$. We can conclude that a reliable estimate of astrophysical backgrounds cannot set aside the important effect of the star formation rate evolution. Finally, it is possible to show that the first generation of interferometers will not reach the sensitivities required to observe these backgrounds. In fact, the relevant part of the signal is at relatively high frequencies where, at their actual sites, the interferometers that will soon start to operate cannot be cross-correlated. For the first generation of interferometers, the best signal-to-noise ratio is obtained by cross-correlating VIRGO and GEO600 optimally oriented. Assuming one year of integration, $`\text{S}/\text{N}\simeq 2\times 10^{-3}`$. For the same integration time, two LIGO interferometers with advanced sensitivities give $`\text{S}/\text{N}\simeq 1.23`$ at their actual sites and $`\text{S}/\text{N}\simeq 15`$ if they were at a distance of $`300`$ km. Though the signal-to-noise ratios calculated for interferometer-bar pairs, such as VIRGO-NAUTILUS or GEO600-NAUTILUS, are still very low, two hollow spheres with $`\sqrt{S_n(200\text{Hz})}\simeq 10^{-24}`$ placed at the same site would reach, in one year of integration, a signal-to-noise ratio $`\text{S}/\text{N}\simeq 1`$. So far, the stochastic backgrounds we have described were considered to be continuous. This is always the case for the background produced by the spin-down radiation emitted by rapidly rotating neutron stars, as the signal from each single source is emitted in a relatively long time interval, of the order of 1 yr (see Section 3). Thus, these signals can superimpose and do form a continuous background. Conversely, the background produced by core collapses to black holes has a shot-noise character. In fact, the typical duration of the gravitational signal emitted by each source is much shorter than in the previous case, of the order of a ms.
Thus, the contributions from the elements of the ensemble do not superimpose but rather generate a shot-noise background, characterized by a succession of isolated bursts with a mean separation of the order of $`0.1`$ seconds, much longer than the typical duration of each burst. The peculiar statistical character of this background might be exploited to design a detection algorithm tailored to it.
## 1 Introduction

The investigation of the internal structure of jets gives insight into the transition between a parton produced in a hard process and the experimentally observable spray of hadrons. The internal structure of a jet is expected to depend mainly on the type of primary parton, quark or gluon, from which it originated and to a lesser extent on the particular hard scattering process. A useful representation of the jet's internal structure is given by the jet shape. At sufficiently high jet energy, where fragmentation effects become negligible, the jet shape should be calculable by perturbative QCD. Measurements of the jet shape provide a stringent test of pQCD calculations beyond leading order. Gluon jets are predicted to be broader than quark jets due to the larger colour charge of the gluon. The dependence of the structure of quark and gluon jets on the production process can be investigated by comparing measurements of the jet shape in different reactions in which the final-state jets are predominantly quark or gluon initiated. Measurements of the jet shape were made in $`\overline{p}p`$ collisions at $`\sqrt{s}=1.8`$ TeV and in $`e^+e^{-}`$ interactions at LEP1. It was observed that the jets in $`e^+e^{-}`$ are significantly narrower than those in $`\overline{p}p`$, and most of this difference was ascribed to the different mixtures of quark and gluon jets in the two production processes. At HERA, measurements have been presented of the jet shape in quasi-real photon proton collisions (photoproduction) and in neutral- and charged-current deep inelastic scattering (DIS). In photoproduction, the jets were observed to become broader as the jet pseudorapidity ($`\eta ^{jet}`$) increases, in agreement with the predicted increase in the fraction of final-state gluon jets. In DIS, the jet shapes in neutral- and charged-current processes were found to be very similar. The jet shapes in DIS were observed to be similar to those in $`e^+e^{-}`$ interactions and narrower than those in $`\overline{p}p`$ collisions. Since the jets in $`e^+e^{-}`$ interactions and $`e^+p`$ DIS are predominantly quark initiated in both cases, the similarity in the jet shapes indicates that the pattern of QCD radiation within a quark jet is to a large extent independent of the hard scattering process in these reactions. New measurements of the jet shape using the $`k_T`$-cluster algorithm in photoproduction and DIS at HERA provide an improved test of pQCD calculations and are presented here. During 1994-1997 HERA operated with positrons of energy $`E_e=27.5`$ GeV colliding with protons of energy $`E_p=820`$ GeV.

## 2 Measurement of the jet shape in photoproduction

At HERA, quasi-real photon proton collisions are studied via $`ep`$ scattering at low four-momentum transfers ($`Q^2\simeq 0`$, where $`Q^2`$ is the virtuality of the exchanged photon). Jets are searched for in the pseudorapidity ($`\eta `$)-azimuth ($`\phi `$) plane of the laboratory frame using the inclusive $`k_T`$-cluster algorithm. The jet variables are defined according to the Snowmass convention. The inclusive sample of jets with transverse energy $`E_T^{jet}>17`$ GeV and $`-1<\eta ^{jet}<2`$ has been studied.
The differential jet shape is defined as the average fraction of the jet's transverse energy that lies inside an annulus in the $`\eta \phi `$ plane of inner (outer) radius $`r-\mathrm{\Delta }r/2`$ ($`r+\mathrm{\Delta }r/2`$) concentric with the jet axis:

$$\rho (r)\equiv \frac{1}{N_{jets}}\frac{1}{\mathrm{\Delta }r}\sum _{jets}\frac{E_T(r-\mathrm{\Delta }r/2,r+\mathrm{\Delta }r/2)}{E_T(0,1)},$$ (1)

where $`E_T(r-\mathrm{\Delta }r/2,r+\mathrm{\Delta }r/2)`$ is the transverse energy within the given annulus and $`N_{jets}`$ is the total number of jets in the sample. The differential jet shape has been measured for $`r`$ values varying from $`0.05`$ to $`0.95`$ in $`\mathrm{\Delta }r=0.1`$ increments. The differential jet shape has been measured using the ZEUS uranium-scintillator calorimeter and corrected to the hadron level. The measurements are given in the kinematic region defined by $`Q^2<1`$ GeV² (with a median of $`Q^2\simeq 10^{-3}`$ GeV²) and photon-proton centre-of-mass energies between 134 and 277 GeV. The measured differential jet shapes for different regions in $`\eta ^{jet}`$ are shown in Figure 1 (black dots). It is observed that the jet broadens as $`\eta ^{jet}`$ increases, in agreement with our previous observation using an iterative cone algorithm with radius $`R=1`$. The predicted jet shapes at the hadron level from a leading-logarithm parton-shower Monte Carlo calculation using PYTHIA are compared to the measurements in the left-hand side of Figure 1. The calculations include initial- and final-state parton radiation, and the fragmentation into hadrons is performed using the Lund string model. The measured jet shapes are found to be well described by the predictions (solid histogram). The jet shapes, as predicted by PYTHIA, for quark (dot-dashed histogram) and gluon (dashed histogram) jets are also shown in Figure 1: the broadening of the jets in the data as $`\eta ^{jet}`$ increases is consistent with an increasing fraction of gluon jets. It has been shown that the inclusive $`k_T`$-cluster algorithm provides, at present, the best jet algorithm from the theoretical point of view, since the problem of overlapping jets, which affects e.g. the iterative cone algorithm, is avoided. To quantify the effects of the specific jet algorithm on the jet shape, the measurements have been repeated using the iterative cone algorithm with radius $`R=1`$. The results (open circles) are compared to those using the $`k_T`$-cluster algorithm (black dots) in the right-hand side of Figure 1: the measured jet shapes differ by less than 10% in the region $`r<0.6`$. For larger values of $`r`$ differences are expected, since in the case of the iterative cone algorithm only those particles within a cone concentric with the jet axis are assigned to the jet, while in the $`k_T`$ case no such restriction is imposed. Thus, in spite of the differences between the two algorithms, the jet shapes are observed to be very similar in the region $`r<0.6`$, and demand pQCD calculations which are able to reproduce the features of the specific jet algorithm with an accuracy better than 10%. Next-to-leading order QCD calculations of the jet shape with the $`k_T`$-cluster algorithm, which are not available at present, are needed to meet such a requirement.
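Definition (1) translates directly into a per-jet computation; averaging the result over the jet sample then gives $`\rho (r)`$. A minimal sketch (the tower-array layout and function name are our own; a real analysis would feed in calorimeter cells or hadrons from the detector or Monte Carlo record):

```python
import numpy as np

def differential_jet_shape(towers, jet_eta, jet_phi, dr=0.1):
    """Per-jet version of eq. (1): towers is an (N, 3) array of
    (eta, phi, Et); returns annulus midpoints r and rho(r)."""
    eta, phi, et = np.asarray(towers, float).T
    dphi = np.mod(phi - jet_phi + np.pi, 2 * np.pi) - np.pi   # wrap to (-pi, pi]
    r = np.hypot(eta - jet_eta, dphi)                         # distance in eta-phi
    et_unit = et[r < 1.0].sum()                               # E_T(0, 1)
    edges = np.arange(0.0, 1.0 + dr, dr)
    rho = np.array([et[(r >= lo) & (r < hi)].sum() / (et_unit * dr)
                    for lo, hi in zip(edges[:-1], edges[1:])])
    return edges[:-1] + dr / 2, rho          # sum(rho) * dr == 1 by construction

# Toy check: one hard core tower plus two softer towers around the jet axis.
towers = [(0.0, 0.0, 10.0), (0.3, 0.2, 1.5), (-0.6, 0.4, 0.8)]
print(differential_jet_shape(towers, 0.0, 0.0))
```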
## 3 Measurement of the jet shape in deep inelastic scattering

Measurements have been made of the internal jet structure in a sample of inclusive dijet neutral-current DIS events, $`e^++p\rightarrow e^++\mathrm{jet}+\mathrm{jet}+\mathrm{X}`$, in the kinematic region defined by $`10<Q^2\lesssim 120`$ GeV² and $`2\times 10^{-4}\lesssim x_{Bj}\lesssim 8\times 10^{-3}`$. Jets are searched for in the $`\eta \phi `$ plane of the Breit frame using the inclusive $`k_T`$-cluster algorithm. The jet variables are defined according to the Snowmass convention. The sample of inclusive dijet events with transverse energy (with respect to the direction of the virtual photon in the Breit frame) $`E_T^{jet}(\mathrm{Breit})>5`$ GeV and $`-1<\eta ^{jet}(\mathrm{Lab})<2`$ has been investigated. In this analysis the internal structure of a jet is studied in terms of the integrated jet shape, $`\mathrm{\Psi }(r)`$, which is defined as the average fraction of the jet's transverse energy that lies inside a subcone in the $`\eta \phi `$ plane of radius $`r`$ concentric with the jet axis:

$$\mathrm{\Psi }(r)\equiv \frac{1}{N_{jets}}\sum _{jets}\frac{E_T(r)}{E_T^{jet}(\mathrm{Breit})},$$ (2)

where $`E_T(r)`$ is the transverse energy within the subcone of radius $`r`$ and $`N_{jets}`$ is the total number of jets in the sample. The measured integrated jet shapes are shown in Figure 2 for two ranges in $`E_T^{jet}(\mathrm{Breit})`$ and three regions in $`\eta ^{jet}(\mathrm{Breit})`$ (negative $`\eta ^{jet}(\mathrm{Breit})`$ corresponds to the virtual-photon hemisphere). The jets are observed to be more collimated as $`E_T^{jet}(\mathrm{Breit})`$ increases. On the other hand, the jets become broader as $`\eta ^{jet}(\mathrm{Breit})`$ increases, and this effect is more pronounced at lower $`E_T^{jet}(\mathrm{Breit})`$. The measured dependence of the jet shape on $`E_T^{jet}(\mathrm{Breit})`$ and $`\eta ^{jet}(\mathrm{Breit})`$ is roughly reproduced by the predictions of various QCD-based models (not shown here). However, studies based on these models show that in the region of $`E_T^{jet}(\mathrm{Breit})`$ considered in this analysis the jet shape is strongly influenced by hadronization. Thus, measurements at higher $`E_T^{jet}`$ are needed to test pQCD calculations. The measurements have been repeated using a version of the iterative cone algorithm which allows improved pQCD calculations of the jet shape. The measured jet shapes with the $`k_T`$-cluster and the iterative cone algorithms are observed to be very similar in the region $`E_T^{jet}(\mathrm{Breit})>8`$ GeV and $`\eta ^{jet}(\mathrm{Breit})<2.2`$. For lower $`E_T^{jet}(\mathrm{Breit})`$ or higher $`\eta ^{jet}(\mathrm{Breit})`$ the jets identified with the cone algorithm are broader. From this comparison and that in photoproduction (with $`E_T^{jet}>17`$ GeV), it is concluded that the effects of the specific jet algorithm decrease rapidly as $`E_T^{jet}`$ increases. The measurements of jet shapes with the $`k_T`$-cluster algorithm at high $`E_T^{jet}`$ ($`E_T^{jet}>17`$ GeV) constitute a challenge to pQCD calculations.

Acknowledgements: I would like to thank the organizers for the superb location of the conference. The help of my colleagues from the H1 and ZEUS Collaborations and, in particular, from Claudia Glasman, in the preparation of the material reported here is gratefully appreciated.
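As a computational footnote to eq. (2) above: the integrated shape is the cumulative companion of the differential shape of Section 2, and a per-jet sketch follows (same assumed tower layout as before; the denominator here approximates $`E_T^{jet}`$ by the summed $`E_T`$ within $`r<1`$ of the axis, whereas the measurement uses the jet transverse energy returned by the $`k_T`$ algorithm):

```python
import numpy as np

def integrated_jet_shape(towers, jet_eta, jet_phi,
                         radii=np.arange(0.1, 1.01, 0.1)):
    """Per-jet version of eq. (2): Psi(r) = E_T(<r) / E_T^jet; average the
    result over the jet sample to obtain the measured Psi(r)."""
    eta, phi, et = np.asarray(towers, float).T
    dphi = np.mod(phi - jet_phi + np.pi, 2 * np.pi) - np.pi
    r = np.hypot(eta - jet_eta, dphi)
    et_jet = et[r < 1.0].sum()               # approximation to E_T^jet
    return np.array([et[r < ri].sum() / et_jet for ri in radii])
```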
# A Brief History of AGN ## 1 INTRODUCTION Although emission lines in the nuclei of galaxies were recognized at the beginning of the twentieth century, a half century more would pass before active galactic nuclei (AGN) became a focus of intense research effort. The leisurely pace of optical discoveries in the first half of the century gave way to the fierce competition of radio work in the 1950s. The race has never let up. Today, AGN are a focus of observational effort in every frequency band from radio to gamma rays. Several of these bands involve emission lines as well as continuum. AGN theory centers on extreme gravity and black holes, among the most exotic concepts of modern astrophysics. Ultrarelativistic particles, magnetic fields, hydrodynamics, and radiative transfer all come into play. In addition, AGN relate to the question of galactic evolution in general. For most of the time since the recognition of quasar redshifts in 1963, these objects have reigned as the most luminous and distant objects in the Universe. Their use as probes of intervening matter on cosmic scales adds a further dimension to the importance of AGN. For all these reasons, the enormous effort to describe and explain AGN in all their variety and complexity is quite natural. We are far from having a detailed and certain understanding of AGN. However, the working hypothesis that they involve at their core a supermassive black hole producing energy by accretion of gas has little serious competition today. If this picture is confirmed, then the past decade may be seen as a time when AGN research shifted from guessing the nature of AGN to trying to prove it. Although the story is not finished, this seems a good time to take stock of the progress that has been made. The present short summary is intended to give students of AGN an account of some of the key developments in AGN research. The goal is to bring the story to the point where a contemporary review of some aspect of AGN might begin its detailed discussion. Thus, various threads typically are followed to a significant point in the 1980s. I have attempted to trace the important developments without excessive technical detail, relying on published sources, my own recollections, and conversations with a number of researchers. The focus is on the actual active nucleus. Fascinating aspects such as intervening absorption lines, statistical surveys, and links to galactic evolution receive relatively little discussion. The volume of literature is such that only a tiny fraction of the important papers can be cited. ## 2 BEGINNINGS Early in the twentieth century, Fath (1909) undertook at Lick Observatory a series of observations aimed at clarifying the nature of the “spiral nebulae”. A major question at the time was whether spirals were relatively nearby, gaseous objects similar to the Orion nebula, or very distant collections of unresolved stars. Fath’s goal was to test the claim that spirals show a continuous spectrum consistent with a collection of stars, rather than the bright line spectrum characteristic of gaseous nebulae. He constructed a spectrograph designed to record the spectra of faint objects, mounted it on the 36-inch Crossley reflector, and guided the long exposures necessary to obtain photographic spectra of these objects. For most of his objects, Fath found a continuous spectrum with stellar absorption lines, suggestive of an unresolved collection of solar type stars. 
However, in the case of NGC 1068, he observed that the “spectrum is composite, showing both bright and absorption lines”. The six bright lines were recognizable as ones seen in the spectra of gaseous nebulae. The bright and dark lines of NGC 1068 were confirmed by Slipher (1917) with spectra taken in 1913 at Lowell Observatory. In 1917, he obtained a spectrum with a narrow spectrograph slit, and found that the emission lines were not images of the slit but rather “small disks”, i.e., the emission was spread over a substantial range of wavelengths. (However, he rejected an “ordinary radial velocity interpretation” of the line widths.) During the following years, several astronomers noted the presence of nuclear emission lines in the spectra of some spiral nebulae. For example, Hubble (1926) mentioned that the relatively rare spirals with stellar nuclei show a planetary nebula type spectrum, notably NGC 1068, 4051, and 4151.

The systematic study of galaxies with nuclear emission lines began with the work of Seyfert (1943). Seyfert obtained spectrograms of 6 galaxies with nearly stellar nuclei showing emission lines superimposed on a normal G-type (solar-type) spectrum: NGC 1068, 1275, 3516, 4051, 4151, and 7469. The two brightest (NGC 1068, 4151) showed “all the stronger emission lines … in planetary nebulae like NGC 7027.” Seyfert attributed the large widths of the lines to Doppler shifts, reaching up to 8,500 $`\mathrm{km}\ \mathrm{s}^{-1}`$ for the hydrogen lines of NGC 3516 and 7469. The emission-line profiles differed from line to line and from object to object, but two patterns were to prove typical of this class of galaxy. The forbidden and permitted lines in NGC 1068 had roughly similar profiles with widths of $`\sim `$3000 $`\mathrm{km}\ \mathrm{s}^{-1}`$. In contrast, NGC 4151 showed relatively narrow forbidden lines, and corresponding narrow cores of the permitted lines; but the hydrogen lines had very broad (7500 $`\mathrm{km}\ \mathrm{s}^{-1}`$) wings that were absent from the profiles of the forbidden lines. Seyfert contrasted these spectra with the narrow emission lines of the diffuse nebulae (H II regions) seen in irregular galaxies and in the arms of spiral galaxies. Galaxies with high excitation nuclear emission lines are now called “Seyfert galaxies”. However, Seyfert’s paper was not enough to launch the study of AGN as a major focus of astronomers’ efforts. The impetus for this came from a new direction – the development of radio astronomy.

Jansky (1932), working at the Bell Telephone Laboratories, conducted a study of the sources of static affecting trans-Atlantic radio communications. Using a rotatable antenna and a short-wave receiver operating at a wavelength of 14.6 m, he systematically measured the intensity of the static arriving from all directions throughout the day. From these records, he identified three types of static: (1) static from local thunderstorms, (2) static from distant thunderstorms, and (3) “a steady hiss type static of unknown origin”. The latter seemed to be somehow associated with the sun (Jansky 1932). Continuing his measurements throughout the year, Jansky (1933) observed that the source of the static moved around in azimuth every 24 hours, and the time and direction of maximum changed gradually throughout the year in a manner consistent with the earth’s orbital motion around the sun. He inferred that the radiation was coming from the center of the Milky Way galaxy.
After further study of the data, Jansky (1935) concluded that the radiation came from the entire disk of the Milky Way, being strongest in the direction of the Galactic center. Few professional astronomers took serious note of Jansky’s work, and it fell to an engineer, working at home in his spare time, to advance the subject of radio astronomy. Reber (1940a,b) built a 31 foot reflector in his backyard near Chicago. He published a map of the radio sky at 160 MHz showing several local maxima, including one in the constellation Cygnus that would prove important for AGN studies (Reber 1944). He also noted that the ratio of radio radiation to optical light was vastly larger for the Milky Way than the sun.

With the end of World War II, several groups of radio engineers turned their efforts to the study of radio astronomy. Notable among these were the groups at Cambridge and Manchester in England and at CSIRO in Australia. The study of discrete sources began with the accidental discovery of a small, fluctuating source in Cygnus by Hey, Parsons, and Phillips (1946) in the course of a survey of the Milky Way at 60 MHz. With their 6 degree beam, they set an upper limit of 2 degrees on the angular diameter of the source. The intensity fluctuations, occurring on a time scale of seconds, were proved a few years later to originate in the earth’s ionosphere; but at first they served to suggest that the radiation “could only originate from a small number of discrete sources”. The discrete nature of the Cygnus source was confirmed by Bolton and Stanley (1948), who used a sea-cliff interferometer to set an upper limit of 8 arcmin to the width of the source. These authors deduced a brightness temperature of more than $`4\times 10^6`$ K at 100 MHz and concluded that a thermal origin of the noise was “doubtful”. Bolton (1948) published a catalog of 6 discrete sources and introduced the nomenclature Cyg A, Cas A, etc. Ryle and Smith (1948) published results from a radio interferometer at Cambridge analogous to the optical interferometer used by Michelson at Mt. Wilson to measure stellar diameters. Observing at 80 MHz, they set an upper limit of 6 arcmin to the angular diameter of the source in Cygnus.

Optical identifications of discrete sources (other than the sun) were finally achieved by Bolton, Stanley, and Slee (1949). Aided by more accurate positions from sea-cliff observations, they identified Taurus A with the Crab Nebula supernova remnant (M 1); Virgo A with M 87, a large elliptical galaxy with an optical jet; and Centaurus A with NGC 5128, an elliptical galaxy with a prominent dust lane. The partnership of optical and radio astronomy was underway.

The early 1950s saw progress in radio surveys, position determinations, and optical identifications. A class of sources fairly uniformly distributed over the sky was shown by the survey by Ryle, Smith, and Elsmore (1950) based on observations with the Cambridge interferometer. Smith (1951) obtained accurate positions of four discrete sources, Tau A, Vir A, Cyg A, and Cas A. Smith’s positions enabled Baade and Minkowski (1954) to make optical identifications of Cas A and Cyg A in 1951 and 1952. At the position of Cyg A, they found an object with a distorted morphology, which they proposed was two galaxies in collision. Baade and Minkowski found emission lines of \[Ne V\], \[O II\], \[Ne III\], \[O III\], \[O I\], \[N II\], and H$`\alpha `$, with widths of about 400 $`\mathrm{km}\ \mathrm{s}^{-1}`$.
The redshift of 16,830 $`\mathrm{km}\ \mathrm{s}^{-1}`$ implied a large distance, 31 Mpc, for the assumed Hubble constant of $`\mathrm{H}_0=540\ \mathrm{km}\ \mathrm{s}^{-1}\ \mathrm{Mpc}^{-1}`$. The large distance of Cyg A implied an enormous luminosity, $`8\times 10^{42}`$ $`\mathrm{erg}\ \mathrm{s}^{-1}`$ in the radio, larger than the optical luminosity of $`6\times 10^{42}`$ $`\mathrm{erg}\ \mathrm{s}^{-1}`$. (Of course, these values are larger for a modern value of H<sub>0</sub>.)

This period also saw progress in the measurement of the structure of radio sources. Hanbury Brown, Jennison, and Das Gupta (1952) reported results from the new intensity interferometer developed at Jodrell Bank, including a demonstration that Cyg A was elongated, with dimensions roughly 2 arcmin by 0.5 arcmin. Interferometer measurements of Cyg A by Jennison and Das Gupta (1952) showed two equal components separated by 1.5 arcmin that straddled the optical image, a puzzling morphology that proved to be common for extragalactic radio sources. Radio sources were categorized as ‘Class I’ sources, associated with the plane of the Milky Way, and ‘Class II’ sources, isotropically distributed and possibly mostly extragalactic (e.g., Hanbury Brown 1959). Some of the latter had very small angular sizes, encouraging the view that many were “radio stars” in our Galaxy. Morris, Palmer, and Thompson (1957) published upper limits of 12 arcsec on the size of 3 class II sources, implying brightness temperatures in excess of $`2\times 10^7`$ K. They suggested that these were extragalactic sources of the Cyg A type.

Theoretically, Whipple and Greenstein (1937) attempted to explain the Galactic radio background measured by Jansky in terms of thermal emission by interstellar dust, but the expected dust temperatures were far too low to give the observed radio brightness. Reber (1940a) considered free-free emission by ionized gas in the interstellar medium. This process was considered more accurately by Henyey and Keenan (1940) and Townes (1947), who realized that Jansky’s brightness temperature of $`\sim 10^5`$ K could not be reconciled with thermal emission from interstellar gas believed to have a temperature of $`\sim 10,000`$ K. Alfvén and Herlofson (1950) proposed that “radio stars” involve cosmic ray electrons in a magnetic field emitting by the synchrotron process. This quickly led Kiepenheuer (1950) to explain the Galactic radio background in terms of synchrotron emission by cosmic rays in the general Galactic magnetic field. He showed order-of-magnitude agreement between the observed and predicted intensities, supported by a more careful calculation by Ginzburg (1951). The synchrotron explanation became accepted for extragalactic discrete sources by the end of the 1950’s. The theory indicated enormous energies, up to $`10^{60}`$ ergs for the “double lobed” radio galaxies (Burbidge 1959). The confinement of the plasma in these lobes would later be attributed to ram pressure as the material tried to expand into the intergalactic medium (De Young and Axford 1967). A mechanism for production of bipolar flows to power the lobes was given by the “twin exhaust model” of Blandford and Rees (1974).

The third Cambridge (3C) survey at 159 MHz (Edge et al. 1959) was followed by the revised 3C survey at 178 MHz (Bennett 1962). Care was taken to minimize the confusion problems of earlier surveys, and many radio sources came to be known by their 3C numbers.
These and the surveys that soon followed provided many accurate radio positions as the search for optical identifications accelerated. (AGN were also discovered in optical searches based on morphological “compactness” \[Zwicky 1964\] and strong ultraviolet continuum \[Markarian 1967\], and later in infrared and X-ray surveys.) Source counts as a function of flux density (“log N – log S”) showed a steeper increase in numbers with decreasing flux density than expected for a homogeneous, nonevolving universe with Euclidean geometry (e.g., Mills, Slee, and Hill 1958; Scott and Ryle 1961). This was used to argue against the “steady state” cosmology (Ryle and Clark 1961), although some disputed such a conclusion (e.g., Hoyle and Narlikar 1961).

## 3 THE DISCOVERY OF QUASARS

Minkowski’s studies of radio galaxies culminated with the identification of 3C 295 with a member of a cluster of galaxies at the unprecedented redshift of 0.46 (Minkowski 1960). Allan Sandage of the Mt. Wilson and Palomar Observatories and Maarten Schmidt of the California Institute of Technology (Caltech) then took up the quest for optical identifications and redshifts of radio galaxies. Both worked with Thomas A. Matthews, who obtained accurate radio positions with the new interferometer at the Owens Valley Radio Observatory operated by Caltech. In 1960, Sandage obtained a photograph of 3C 48 showing a $`16^m`$ stellar object with a faint nebulosity. The spectrum of the object showed broad emission lines at unfamiliar wavelengths, and photometry showed the object to be variable and to have an excess of ultraviolet emission compared with normal stars. Several other apparently star-like images coincident with radio sources were found to show strange, broad emission lines. Such objects came to be known as quasi-stellar radio sources (QSRS), quasi-stellar sources (QSS), or quasars. Sandage reported the work on 3C 48 in an unscheduled paper at the December, 1960, meeting of the AAS (summarized by the editors of Sky and Telescope \[Matthews et al. 1961\]). There was a “remote possibility that it may be a distant galaxy of stars” but “general agreement” that it was “a relatively nearby star with most peculiar properties.”

The breakthrough came on February 5, 1963, as Schmidt was pondering the spectrum of the quasar 3C 273. An accurate position had been obtained in August, 1962 by Hazard, Mackey, and Shimmins (1963), who used the 210 foot antenna at the Parkes station in Australia to observe a lunar occultation of 3C 273. From the precise time and manner in which the source disappeared and reappeared, they determined that the source had two components. 3C 273A had a fairly typical class II radio spectrum, $`F_\nu \propto \nu ^{-0.9}`$; and it was separated by 20 seconds of arc from component ‘B’, which had a size less than 0.5 arcsec and a “most unusual” spectrum, $`f_\nu \propto \nu ^{0.0}`$. Radio positions B and A, respectively, coincided with those of a 13<sup>m</sup> star-like object and with a faint wisp or jet pointing away from the star. At first suspecting the stellar object to be a foreground star, Schmidt obtained spectra of it at the 200-inch telescope in late December, 1962. The spectrum showed broad emission lines at unfamiliar wavelengths, different from those of 3C 48. Clearly, the object was no ordinary star. Schmidt noticed that four emission lines in the optical spectrum showed a pattern of decreasing strength and spacing toward the blue, reminiscent of the Balmer series of hydrogen.
He found that the four lines agreed with the expected wavelengths of H$`\beta `$, H$`\gamma `$, H$`\delta `$, and H$`ϵ`$ with a redshift of z = 0.16. This redshift in turn allowed him to identify a line in the ultraviolet part of the spectrum with Mg II $`\lambda `$2798. Schmidt consulted with his colleagues, Jesse L. Greenstein and J. B. Oke. Oke had obtained photoelectric spectrophotometry of 3C 273 at the 100-inch telescope, which revealed an emission line in the infrared at $`\lambda `$7600. With the proposed redshift, this feature agreed with the expected wavelength of H$`\alpha `$. Greenstein’s spectrum of 3C 48 could be interpreted with a redshift of z = 0.37, supported by the presence of Mg II in both objects. The riddle of the spectrum of quasars was solved. These results were published in Nature six weeks later in adjoining papers by Hazard et al. (1963); Schmidt (1963); Oke (1963); and Greenstein and Matthews (1963).

The objects might be galactic stars with a very high density, giving a large gravitational redshift. However, this explanation was difficult to reconcile with the widths of the emission lines and the presence of forbidden lines. The “most direct and least objectionable” explanation was that the objects were extragalactic, with redshifts reflecting the Hubble expansion. The redshifts were large but not unprecedented; that of 3C 48 was second only to that of 3C 295. The radio luminosities of the two quasars were comparable with those of Cyg A and 3C 295. However, the optical luminosities were staggering, “10 - 30 times brighter than the brightest giant ellipticals”; and the radio surface brightness was larger than for the radio galaxies. The redshift of 3C 273 implied a velocity of 47,400 $`\mathrm{km}\ \mathrm{s}^{-1}`$ and a distance of about 500 Mpc (for $`\mathrm{H}_0\approx 100\ \mathrm{km}\ \mathrm{s}^{-1}\ \mathrm{Mpc}^{-1}`$). The nuclear region would then be less than 1 kpc in diameter. The jet would be about 50 kpc away, implying a timescale greater than $`10^5`$ years and a total energy radiated of at least $`10^{59}`$ ergs.

Before the redshift of 3C 273 was announced, Matthews and Sandage (1963) had submitted a paper identifying 3C 48, 3C 196 and 3C 286 with stellar optical objects. They explored the popular notion that these objects were some kind of Galactic star, arguing from their isotropic distribution on the sky and lack of observed proper motion that the most likely distance from the sun was about 100 pc. The objects had peculiar colors, and 3C 48 showed light variations of 0.4 mag. In a section added following the discovery of the redshifts of 3C 273 and 3C 48, they pointed out that the size limit of $`\sim `$0.15 pc implied by the optical light variations was important in the context of the huge distance and luminosity implied by taking the redshift to result from the Hubble expansion.

A detailed analysis of 3C 48 and 3C 273 was published by Greenstein and Schmidt (1964). They considered explanations of the redshift involving (1) rapid motion of objects in or near the Milky Way, (2) gravitational redshifts, and (3) cosmological redshifts. If 3C 273 had a transverse velocity comparable with the radial velocity implied by its redshift, the lack of an observed proper motion implied a distance of at least 10 Mpc (well beyond the nearest galaxies). The corresponding absolute magnitude was closer to the luminosity of galaxies than stars. The four quasars with known velocities were all receding; and accelerating a massive, luminous object to an appreciable fraction of the speed of light seemed difficult.
Regarding gravitational redshifts, Greenstein and Schmidt argued that the widths of the emission lines required the line-emitting gas to be confined to a small fractional radius around the massive object producing the redshift. The observed symmetry of the line profiles seemed unnatural in a gravitational redshift model. For a 1 $`\mathrm{M}_{\odot }`$ object, the observed H$`\beta `$ flux implied an electron density $`N_e\sim 10^{19}`$ cm<sup>-3</sup>, incompatible with the observed presence of forbidden lines in the spectrum. The emission-line constraint, together with a requirement that the massive object not disturb stellar orbits in the Galaxy, required a mass of order $`10^9`$ $`\mathrm{M}_{\odot }`$. The stability of such a “supermassive star” seemed doubtful in the light of theoretical work by Hoyle and Fowler (1963a), who had examined such objects as possible sources for the energy requirements of extragalactic radio sources.

Adopting the cosmological explanation of the redshift, Greenstein and Schmidt derived radii for a uniform spherical emission-line region of 11 and 1.2 pc for 3C 48 and 3C 273, respectively. This was based on the H$`\beta `$ luminosities and electron densities estimated from the H$`\beta `$, \[O II\], and \[O III\] line ratios. Invoking light travel time constraints based on the observed optical variability (Matthews and Sandage 1963; Smith and Hoffleit 1963), they proposed a model in which a central source of optical continuum was surrounded by the emission-line region, and a still larger radio emitting region. They suggested that a central mass of order $`10^9`$ $`\mathrm{M}_{\odot }`$ might provide adequate energy for the lifetime of $`10^6`$ yr implied by the jet of 3C 273 and the nebulosity of 3C 48. This mass was about right to confine the line-emitting gas, which would disperse quickly if it expanded at the observed speeds of 1000 $`\mathrm{km}\ \mathrm{s}^{-1}`$ or more. Noting that such a mass would correspond to a Schwarzschild radius of $`10^{-4}`$ pc, they observed that “It would be important to know whether continued energy and mass input from such a ‘collapsed’ region are possible”. Finally, they noted that there could be galaxies around 3C 48 and 3C 273 hidden by the glare of the nucleus. Many features of this analysis are recognizable in current thinking about AGN.

The third and fourth quasar redshifts were published by Schmidt and Matthews (1964), who found z = 0.425 and 0.545 for 3C 47 and 3C 147, respectively. Schmidt (1965) published redshifts for 5 more quasars. For 3C 254, a redshift z = 0.734, based on several familiar lines, allowed the identification of C III\] $`\lambda `$1909 for the first time. This in turn allowed the determination of redshifts of 1.029 and 1.037 from $`\lambda `$1909 and $`\lambda `$2798 in 3C 245 and CTA 102, respectively. (CTA is a radio source list from the Caltech radio observatory.) For 3C 287, a redshift of 1.055 was found from $`\lambda `$1909, $`\lambda `$2798, and another first, C IV $`\lambda `$1550. Finally, a dramatically higher redshift of 2.012 was determined for 3C 9 on the basis of $`\lambda `$1550 and the first detection of the Lyman $`\alpha `$ line of hydrogen at $`\lambda `$1215. The redshifts were large enough that the absolute luminosities depended significantly on the cosmological model used.

Sandage (1965) reported the discovery of a large population of radio-quiet objects that otherwise appeared to resemble quasars. Matthews and Sandage (1963) had found that quasars showed an “ultraviolet excess” when compared with normal stars on a color-color (U-B, B-V) diagram.
This led to a search technique in which exposures in U and B were recorded on the same photographic plate, with a slight positional offset, allowing rapid identification of objects with strong ultraviolet continua. Sandage noticed a number of such objects that did not coincide with known radio sources. These he called “interlopers”, “blue stellar objects” (BSO), or “quasi-stellar galaxies” (QSG).<sup>1</sup> (<sup>1</sup>Here we adopt the now common practice of using the term “quasi-stellar object” (QSO) to refer to these objects regardless of radio luminosity \[Burbidge and Burbidge 1967\].) Sandage found that at magnitudes fainter than 15, the UV-excess objects populated the region occupied by quasars on the color-color diagram, whereas brighter objects typically had the colors of main sequence stars. The number counts of the BSOs as a function of apparent magnitude also showed a change of slope at $`15^m`$, consistent with an extragalactic population of objects at large redshift. Spectra showed that many of these objects indeed had spectra with large redshifts, including z = 1.241 for BSO 1. Sandage estimated that the QSGs outnumbered the radio-loud quasars by a factor $`\sim 500`$, but this was reduced by later work (e.g., Kinman 1965; Lynds and Villere 1965).

The large redshifts of QSOs immediately made them potential tools for the study of cosmological questions. The rough similarity of the emission-line strengths of QSOs to those observed, or theoretically predicted, for planetary nebulae suggested that the chemical abundances were roughly similar to those in our Galaxy (Shklovskii 1964; Osterbrock and Parker 1966). Thus these objects, suspected by many astronomers to lie in the nuclei of distant galaxies, had reached fairly “normal” chemical compositions when the Universe was considerably younger than today. The cosmological importance of redshifts high enough to make L$`\alpha `$ visible was quickly recognized. Hydrogen gas in intergalactic space would remove light from the quasar’s spectrum at the local cosmological redshift, and continuously distributed gas would erase a wide band of continuum to the short wavelength side of the L$`\alpha `$ emission line (Gunn and Peterson 1965; Scheuer 1965). Gunn and Peterson set a tight upper limit to the amount of neutral hydrogen in intergalactic space, far less than the amount that would significantly retard the expansion of the Universe.

The study of discrete absorption features in quasar spectra also began to develop. An unidentified sharp line was observed in the spectrum of 3C 48 by Greenstein and Schmidt (1964). Sandage (1965) found that the $`\lambda `$1550 emission line of BSO 1 was “bisected by a sharp absorption feature”. The first quasar found with a rich absorption spectrum was 3C 191 (Burbidge, Lynds, and Burbidge 1966; Stockton and Lynds 1966). More than a dozen sharp lines were identified, including L$`\alpha `$ and lines of C II, III, and IV and Si II, III, and IV. A rich set of narrow absorption lines was also observed in the spectrum of PKS 0237-23, whose emission-line redshift, z = 2.223, set a record at the time. Arp, Bolton, and Kinman (1967) and Burbidge (1967a) respectively proposed absorption-line redshifts of z = 2.20 and 1.95 for this object, but each value left many lines without satisfactory identifications. It turned out that both redshifts were present (Greenstein and Schmidt 1967). All these absorption systems had z<sub>abs</sub> $`<`$ z<sub>em</sub>.
They could be interpreted as intervening clouds imposing absorption spectra at the appropriate cosmological redshift, as had been anticipated theoretically (Bahcall and Salpeter 1965). Alternatively, they might represent material expelled from the quasar, whose outflow velocity is subtracted from the cosmological velocity of the QSO. However, PKS 0119-04 was found to have z<sub>abs</sub> $`>`$ z<sub>em</sub>, implying material that was in some sense falling into the QSO from the near side with a relative velocity of 10<sup>3</sup> $`\mathrm{km}\ \mathrm{s}^{-1}`$ (Kinman and Burbidge 1967). Today, a large fraction of the narrow absorption lines with z<sub>abs</sub> substantially less than z<sub>em</sub> are believed to result from intervening material. This includes the so-called “Lyman alpha forest” of closely spaced, narrow L$`\alpha `$ lines that punctuate the continuum to the short wavelength side of the L$`\alpha `$ emission line, especially in high redshift QSOs. The study of intervening galaxies and gas clouds by means of absorption lines in the spectra of background QSOs is now a major branch of astrophysics.

A different kind of absorption was discovered in the spectrum of PHL 5200 by Lynds (1967). This object showed broad absorption bands on the short wavelength sides of the L$`\alpha `$, N V $`\lambda `$1240, and C IV $`\lambda `$1550 emission lines, with a sharp boundary between the emission and absorption. Lynds interpreted this in terms of an expanding shell of gas around the central object. Seen in about 10 percent of radio-quiet QSOs (Weymann et al. 1991), these broad absorption lines (BALs) are among the many dramatic but poorly understood aspects of AGN.

The huge luminosity of QSOs, rapid variability, and implied small size caused some astronomers to question the cosmological nature of the redshifts. Terrell (1964) considered the possibility that the objects were ejected from the center of our galaxy. Upper limits on the proper motion of 3C 273, together with a Doppler interpretation of the redshift, then implied a distance of at least 0.3 Mpc and an age of at least 5 million years. Arp (1966), pointing to close pairs of peculiar galaxies and QSOs on the sky, argued for noncosmological redshifts that might result from ejection from the peculiar galaxies at high speeds, or from an unknown cause. Setti and Woltjer (1966) noted that ejection from the Galactic center would imply for the QSO population an explosion with energy of at least $`10^{60}`$ ergs, and more if ejected from nearby radio galaxies such as Cen A as suggested by Hoyle and Burbidge (1966). Furthermore, Doppler boosting would cause us to see more blueshifts than redshifts if the objects were ejected from nearby galaxies (Faulkner, Gunn, and Peterson 1966). Further evidence for cosmological redshifts was provided by Gunn (1971), who showed that two clusters of galaxies containing QSOs had the same redshifts as the QSOs. Also, Kristian (1973) showed that the “fuzz” surrounding the quasistellar image of a sample of QSOs was consistent with the presence of a host galaxy.

## 4 CHARTING THE TERRAIN

At this stage, a number of properties of AGN were recognized. Most astronomers accepted the cosmological redshift of QSOs, and the parallel between Seyfert galaxies and QSOs suggested a common physical phenomenon. Questions included the nature of the energy source, the nature of the continuum source and emission-line regions, and the factors that produce an AGN in some galaxies and not others.
### 4.1 Emission Lines

The basic parameters of the region of gas emitting the narrow emission lines were fairly quickly established. In one of the first physical analyses of “emission nuclei” in galaxies, Woltjer (1959) derived a density $`\mathrm{N}_\mathrm{e}\sim 10^4\ \mathrm{cm}^{-3}`$ and temperature $`T\sim 20,000`$ K from the \[S II\] and \[O III\] line ratios of Seyfert galaxies. The region emitting the narrow lines was just resolved for the nearest Seyfert galaxies, giving a diameter of order 100 pc (e.g., Walker 1968; Oke and Sargent 1968). Oke and Sargent derived a mass of $`10^5\ \mathrm{M}_{\odot }`$ and a small volume filling factor for the narrow-line gas in NGC 4151. Burbidge, Burbidge, and Prendergast (1958) found that the nuclear emission lines of NGC 1068 were much broader than could be accounted for by the rotation curve of the galaxy, and concluded that the material was in a state of expansion.

A key question was why, in objects showing broad wings, these were seen on the permitted lines but not the forbidden lines. (Seyfert galaxies with broad wings came to be called “Seyfert 1” or “Sy 1” and those without them “Sy 2” \[Khachikian and Weedman 1974\].) Were these wings emitted by the same gas that emits the narrow lines? Woltjer (1959) postulated a separate region of fast moving, possibly gravitationally bound gas to produce the broad Balmer line wings of Seyfert galaxies. Souffrin (1969a) adopted such a model in her analysis of NGC 3516 and NGC 4151. Alternatively, broad Balmer line wings might be produced by electron scattering (Burbidge et al. 1966). Oke and Sargent (1968) supported this possibility for NGC 4151. Their analysis of the emission-line region gave an electron scattering optical depth $`\tau _e\sim 0.1`$. Multiple scattering of Balmer line photons by the line opacity might increase the effective electron scattering probability, explaining the presence of wings only on the permitted lines. However, analysis of electron scattering profiles by other authors (e.g., Weymann 1970) indicated the need for a dense region only a tiny fraction of a light year across. Favoring mass motions were the irregular broad line profiles in some objects (Anderson 1971), which demonstrated the presence of bulk velocities of the needed magnitude. In addition, Shklovskii (1964) had argued for an electron scattering optical depth $`\tau _{es}<1`$ in 3C 273 to avoid excessive smoothing of the continuum light variations.

The picture of broad lines from a small region of dense, fast moving clouds (“Broad Line Region” or BLR) and narrow lines from a larger region of slower moving, less dense clouds (“Narrow Line Region” or NLR) found support from photoionization models (Shields 1974). Early workers (e.g., Seyfert 1943) had noted that the narrow line intensities resembled those of planetary nebulae, and photoionization was an obvious candidate for the energy input to the emitting gas for both the broad and narrow lines. For 3C 273, Shklovskii (1964) noted that the kinetic energy of the emission-line gas could power the line emission only for a very short time, whereas the extrapolated power in ionizing ultraviolet radiation was in rough agreement with the emission-line luminosities. Osterbrock and Parker (1965) argued against photoionization because of the observed weakness of the Bowen O III fluorescence lines.
Also eliminating thermal collisional ionization because of the observed wide range of ionization stages, they proposed ionization and heating by fast protons resulting from high velocity cloud collisions. Souffrin (1969b) rejected this on the basis of thermal equilibrium considerations, and argued along with Williams and Weymann (1968) that thermal collisional ionization was inconsistent with observed temperatures. Noting that an optical-ultraviolet continuum of roughly the needed power is observed, and that the thermal equilibrium gives roughly the observed temperature, Souffrin concluded that a nonthermal ultraviolet continuum was “the only important source of ionization”. Searle and Sargent (1968) likewise noted that the equivalent widths of the broad H$`\beta `$ emission lines were similar among AGN over a wide range of luminosity and were consistent with an extrapolation of the observed “nonthermal” continuum as a power law to ionizing frequencies.

Detailed models of gas clouds photoionized by a power-law continuum were calculated with the aid of electronic computers, with application to the Crab nebula, binary X-ray sources, and AGN (Williams 1967; Tarter and Salpeter 1969; Davidson 1972; MacAlpine 1972). Such models showed that photoionization can account for the intensities of the strongest optical and ultraviolet emission lines. In particular, the penetrating high frequency photons can explain the simultaneous presence of very high ionization stages and strong emission from low ionization stages, in the context of a “nebula” that is optically thick to the ionizing continuum. Photoionization quickly became accepted as the main source of heating and ionization in the emission-line gas.

Attention then focussed on improving photoionization models and understanding the geometry and dynamics of the gas emitting the broad lines. It was clear that the emitting gas had only a tiny volume filling factor, and one possible geometry was the traditional nebular picture of clouds or “filaments” scattered through the BLR volume. Photoionization models typically assumed a slab geometry representing the ionized face of a cloud that was optically thick to the Lyman continuum. Model parameters included the density and chemical composition of the gas and the intensity and energy distribution of the incident ionizing continuum. Various line ratios, such as C III\]/C IV, were used to constrain the “ionization parameter”, i.e., the ratio of ionizing photon density to gas density. Chemical abundances were assumed to be approximately solar but were hard to determine because the high densities prevented a direct measurement of the electron temperature from available line ratios.

A challenge for photoionization models was the discovery that the L$`\alpha `$/H$`\alpha `$ ratio was an order of magnitude smaller than the value $`\sim 50`$ predicted by photoionization models at the time (Baldwin 1977a; Davidsen, Hartig, and Fastie 1977). This stimulated models with an improved treatment of radiative transfer in optically thick hydrogen lines (e.g., Kwan and Krolik 1979). These models found strong Balmer line emission from a “partially ionized zone” deep in the cloud, heated by penetrating X-rays, from which Lyman line emission was unable to escape. The models still did not do a perfect job of explaining the observed ratios (e.g., Lacy et al. 1982) of the Paschen, Balmer, and Lyman lines.
Models by Collin-Souffrin, Dumont, and Tully (1982) and Wills, Netzer, and Wills (1985) suggested the need for densities as high as $`N_e\sim 10^{11}\ \mathrm{cm}^{-3}`$ to explain the H$`\alpha `$/H$`\beta `$ ratio. The X-ray heated region also was important for the formation of the strong Fe II multiplet blends observed in the optical and ultraviolet. Theoretical efforts by several authors culminated in models involving thousands of Fe lines, with allowance for the fluorescent interlocking of different lines (Wills et al. 1985). These models enjoyed some success in explaining the relative line intensities, but the total energy in the Fe II emission was less than observed. Although some of this discrepancy might involve the iron abundance, Collin-Souffrin et al. (1980) proposed a separate Fe II emitting region with a high density ($`N_e\sim 10^{11}\ \mathrm{cm}^{-3}`$) heated by some means other than photoionization. This region might be associated with an accretion disk. The Fe II emission and the Balmer continuum emission that combine to form the 3000 Å “little bump” still are not fully explained, nor is the tendency for radio-loud AGN to have weaker Fe II and steeper Balmer decrements than radio-quiet objects (Osterbrock 1977).

A tendency for the equivalent width of the C IV emission line to decrease with increasing luminosity was found by Baldwin (1977b). Explanations of this involved a possible decrease, with increasing luminosity, in the ionization parameter and in the “covering factor”, i.e., the fraction ($`\mathrm{\Omega }/4\pi `$) of the ionizing continuum intercepted by the BLR gas (Mushotzky and Ferland 1984). The ionization parameter was also the leading candidate to explain the difference in ionization level between classical Seyfert galaxies and the “low ionization nuclear emission regions” or “LINERs” (Heckman 1980; Ferland and Netzer 1983; Halpern and Steiner 1983).

The geometry and state of motion of the BLR gas has been a surprisingly stubborn problem. If the BLR were a swarm of clouds, they might be falling in (possibly related to the accretion supply), orbiting, or flying out. Alternatively, the gas might be associated with an accretion disk irradiated by the ionizing continuum (e.g., Shields 1977; Collin-Souffrin 1987). Except for the BAL QSOs, there was little evidence for blueshifted absorption analogous to the P Cygni type line profiles of stars undergoing vigorous mass loss. The approximate symmetry of optically thick lines such as L$`\alpha `$ and H$`\alpha `$ suggested that the motion was circular or random rather than predominantly radial (e.g., Ferland, Netzer, and Shields 1979). However, for orbiting (or infalling) gas, the line widths implied rather large masses for the central object, given prevailing estimates of the BLR radius. In addition, gas in Keplerian orbit seemed likely to give a double peaked line profile or to have other problems (Shields 1978a). In the face of these conflicting indications, the most common assumption was that the gas took the form of clouds flying outward from the central object. The individual clouds would disperse quickly unless confined by some intercloud medium, and a possible physical model was provided by the two-phase medium discussed by Krolik, McKee, and Tarter (1981).
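Since the ionization parameter recurs throughout this discussion, it may help to write down one common convention for it (conventions differ between authors; this dimensionless form is an illustrative choice, not necessarily the one used in the papers cited above):

$$U\equiv \frac{Q(\mathrm{H})}{4\pi r^2c\,n_\mathrm{H}},$$

where $`Q(\mathrm{H})`$ is the rate of hydrogen-ionizing photons emitted by the central source, $`r`$ is the distance of the cloud from that source, and $`n_\mathrm{H}`$ is the hydrogen density of the cloud. Larger $`U`$ shifts the ionization balance toward higher stages, which is why ratios such as C III\]/C IV constrain it.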
Radiation pressure of the ionizing continuum, acting on the bound-free opacity of the gas, seemed capable of producing the observed velocities and giving a natural explanation of the “logarithmic” shape of the observed line profiles (Mathews 1974; Blumenthal and Mathews 1975). Interpretation of the line profiles was complicated by the recognition of systematic offsets in velocity between the high and low ionization lines (Gaskell 1982; Wilkes and Carswell 1982; Wilkes 1984).

A powerful new tool was provided by the use of “echo mapping” or “reverberation mapping” of the BLR. Echo mapping relies on the time delays between the continuum and line variations caused by the light travel time across the BLR (Blandford and McKee 1982). Early results showed that the BLR is smaller and denser than most photoionization models had indicated (Ulrich et al. 1984; Peterson et al. 1985). Masses of the central object, by this time assumed to be a black hole, could be derived with increased confidence. The smaller radii implied smaller masses that seemed reasonable in the light of other considerations, and the idea of gravitational motions for the BLR gained in popularity. This was supported by the rough tendency of the line profiles to vary symmetrically, consistent with “chaotic” or circular motions (e.g., Ulrich et al. 1984).

### 4.2 Energy Source

The question of the ultimate energy source for AGN stimulated creativity even before the discovery of QSO redshifts. The early concept of radio galaxies as galaxies in collision gave way to the recognition of galactic nuclei as the sites of concentrated, violent activity. Burbidge (1961) suggested that a chain reaction of supernovae (SN) could occur in a dense star cluster in a galactic nucleus. Shock waves from one SN would compress neighboring stars, triggering them to explode in turn. Cameron (1962) considered a coeval star cluster leading to a rapid succession of SN as the massive stars finished their short lives. Spitzer and Saslaw (1966), building on earlier suggestions, developed another model involving a dense star cluster. The cluster core would evolve to higher star densities through gravitational “evaporation”, and this would lead to frequent stellar collisions and tidal encounters, liberating large amounts of gas. Additional ideas involving dense star clusters included pulsar swarms (Arons, Kulsrud, and Ostriker 1975) and starburst models (Terlevich and Melnick 1985).

Hoyle and Fowler (1963a,b) discussed the idea of a supermassive star (up to $`10^8\ \mathrm{M}_{\odot }`$) as a source of gravitational and thermonuclear energy. In addition to producing large amounts of energy per unit mass, all these models seemed capable of accelerating particles to relativistic energies and producing gas clouds ejected at speeds of $`\sim 5000\ \mathrm{km}\ \mathrm{s}^{-1}`$, suggestive of the broad emission-line wings of Seyfert galaxies. In this regard, Hoyle and Fowler (1963a) suggested that “a magnetic field could be wound toroidally between the central star and a surrounding disk.” The field could store a large amount of energy, leading to powerful “explosions” and jets like that of M87. Hoyle and Fowler (1963b) suggested that “only through the contraction of a mass of $`10^7`$–$`10^8\ \mathrm{M}_{\odot }`$ to the relativistic limit can the energies of the strongest sources be obtained.” Soon after, Salpeter (1964) and Zeldovich (1964) proposed the idea of QSO energy production from accretion onto a supermassive black hole.
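The energy yield of such accretion, which the text below quotes as $`0.057c^2`$, is a standard general-relativistic result worth a one-line sketch (a textbook estimate, not taken from the papers just cited): a test particle in the innermost stable circular orbit of a nonrotating black hole has specific energy

$$\frac{E_{\mathrm{ISCO}}}{mc^2}=\sqrt{\frac{8}{9}}\approx 0.943,$$

so the binding energy released in spiraling down to that orbit is $`(1-\sqrt{8/9})c^2\approx 0.057c^2`$ per unit rest mass — roughly ten times the $`0.007c^2`$ available from hydrogen fusion.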
For material gradually spiraling to the innermost stable orbit of a nonrotating black hole at $`r=6GM/c^2`$, the energy released per unit mass would be $`0.057c^2`$, enough to provide the energy of a luminous QSO from a reasonable mass. Salpeter imagined some kind of turbulent transport of angular momentum, allowing the matter to move closer to the hole, which would grow in mass during the accretion process. The black hole model received limited attention until Lynden-Bell (1969) argued that dead quasars in the form of “collapsed bodies” (black holes) should be common in galactic nuclei, given the lifetime energy output of quasars and their prevalence at earlier times in the history of the universe. Quiescent ones might be detectable through their effect on the mass-to-light ratio of nearby galactic nuclei. Lynden-Bell explored the thermal radiation and fast particle emission to be expected in a disk of gas orbiting the hole, with energy dissipation related to magnetic and turbulent processes. For QSO luminosities, the disk would have a maximum effective temperature of $`\sim 10^5`$ K, possibly leading to photoionization and broad line emission. He remarked that “with different values of the \[black hole mass and accretion rate\] these disks are capable of providing an explanation for a large fraction of the incredible phenomena of high energy astrophysics, including galactic nuclei, Seyfert galaxies, quasars and cosmic rays.”

Further evidence for relativistic conditions in AGN came from other theoretical arguments. Hoyle, Burbidge, and Sargent (1966) noted that relativistic electrons emitting optical and infrared synchrotron radiation would also Compton scatter ambient photons, boosting their energy by large factors. This would lead to “repeated stepping up of the energies of quanta”, yielding a divergence that came to be known as the “inverse Compton catastrophe”. This would be attended by rapid quenching of the energy of the electrons. They argued that this supported the idea of noncosmological redshifts. In response, Woltjer (1966) invoked a model with electrons streaming radially on field lines, which could greatly reduce Compton losses. He further noted that because “the relativistic electrons and the photons they emit both move nearly parallel to the line of sight, the time scale of variations in emission can be much shorter than the size of the region divided by the speed of light.” The emission would also likely be anisotropic, reducing the energy requirements for individual objects.

### 4.3 Superluminal Motion

Dramatic confirmation of the suspected relativistic motions came from the advancing technology of radio astronomy. Radio astronomers using conventional interferometers had shown that many sources had structure on a sub-arcsec scale. Scintillation of the radio signal from some AGN, caused by the interplanetary medium of our solar system, also implied sub-arcsec dimensions (Hewish, Scott, and Wills 1964). The compact radio sources in some AGN showed flat spectrum components and variability on timescales of months (Dent 1965; Sholomitsky 1965). The variability suggested milliarcsec dimensions on the basis of light travel time arguments. The spectral shape and evolution found explanation in terms of multiple, expanding components that were optically thick to synchrotron self-absorption, which causes a low frequency cutoff in the emitted continuum (Pauliny-Toth and Kellermann 1966, and references therein).
Such models had interesting theoretical consequences, including angular sizes (for cosmological redshifts) as small as $`10^{-3}`$ arcsec, and large amounts of energy in relativistic electrons, far exceeding the energy in the magnetic field. These inferences made clear the need for angular resolution finer than was practical with conventional radio interferometers connected by wires or microwave links. This was achieved by recording the signal from the two antennas separately on magnetic tape, and correlating the recorded signals later by analog or digital means. This technique came to be known as “very long baseline interferometry” (VLB, later VLBI). After initial difficulties finding “fringes” in the correlated signal, competing groups in Canada and the United States succeeded in observing several AGN in the spring of 1967, over baselines of roughly 200 km (see Cohen et al. 1968). The U.S. experiments typically used the 140 foot antenna at the National Radio Astronomy Observatory in Green Bank, West Virginia, in combination with increasingly remote antennas in Maryland, Puerto Rico, Massachusetts, California, and Sweden. The latter gave an angular resolution of 0.0006 arcsec. Within another year, observations were made between Owens Valley, California, and Parkes, Australia, a baseline exceeding 10,000 km or 80 percent of the earth’s diameter. A number of AGN showed components unresolved on a scale of $`10^{-3}`$ arcsec.

On October 14 and 15, 1970, Knight et al. (1971) observed quasars at 7840 MHz with the Goldstone, California - Haystack, Massachusetts “Goldstack” baseline. 3C 279 showed fringes consistent with a symmetrical double source separated by $`(1.55\pm 0.03)\times 10^{-3}`$ arcsec. Later observations on February 14 and 26, 1971, by Whitney et al. (1971) showed a double source structure at the same position angle, but separated by a distinctly larger angle of $`(1.69\pm 0.02)\times 10^{-3}`$ arcsec. Given the distance implied by the redshift of 0.538, this rate of angular separation corresponded to a linear separation rate of ten times the speed of light! Cohen et al. (1971), also using Goldstack data, observed “superlight expansion” in 3C 273 and 3C 279.

Whitney et al. and Cohen et al. considered a number of interpretations of their observations, including multiple components that blink on and off (the “Christmas tree model”) and noncosmological redshifts. However, most astronomers quickly leaned toward an explanation involving motion of emitting clouds ejected from the central object at speeds close to, but not exceeding, the speed of light. Rees (1966) had calculated the appearance of relativistically expanding sources, and apparent expansion speeds faster than that of light were predicted. A picture emerged in which a stationary component was associated with the central object, and clouds were ejected at intervals of several years along a fairly stable axis. (Repeat ejections were observed in the course of time by VLBI experiments.) If this ejection occurred in both directions, it could supply energy to the extended double sources. The receding components would be greatly dimmed by special relativistic effects, while the approaching components were brightened. The two observed components are then associated with the central object and the approaching cloud, respectively. The fact that the two observed components had roughly equal luminosities found an explanation in the relativistic jet model of Blandford and Königl (1979).
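The apparent transverse speed in this picture follows from a simple light-travel-time argument (a standard result of the kind calculated by Rees 1966, quoted here without derivation): for a blob moving with true speed $`\beta c`$ at an angle $`\theta `$ to the line of sight,

$$\beta _{\mathrm{app}}=\frac{\beta \sin \theta }{1-\beta \cos \theta },$$

which exceeds unity for sufficiently fast, well-aligned motion. For example, $`\beta =0.998`$ and $`\theta =10^{\circ }`$ give $`\beta _{\mathrm{app}}\approx 10`$, comparable to the separation rate reported for 3C 279.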
Apparent superluminal motion has now been seen in a number of quasars and radio galaxies, and a possibly analogous phenomenon has been observed in connection with black hole systems of stellar mass in our Galaxy (Mirabel and Rodriguez 1994).

### 4.4 X-rays from AGN

On June 18, 1962, an Aerobee sounding rocket blasted skyward from the White Sands proving ground in New Mexico. It carried a Geiger counter designed to detect astronomical sources of X-rays. The experiment, carried out by Giacconi et al. (1962), discovered an X-ray background and a “large peak” in a 10 degree error box near the Galactic center and the constellation Scorpius. A rocket experiment by Bowyer et al. (1964) also found an isotropic background, confirmed the Scorpius source, and detected X-rays from the Crab nebula. Friedman and Byram (1967) identified X-rays from the active galaxy M 87. A rocket carrying collimated proportional counters sensitive in the 1 to 10 keV energy range found sources coincident with 3C 273, NGC 5128 (Cen A), and M87 (Bowyer, Lampton, and Mack 1970). The positional error box for 3C 273 was small enough to give a probability of less than $`10^{-3}`$ of a chance coincidence. The X-ray luminosity, quoted as $`10^{46}\ \mathrm{erg}\ \mathrm{s}^{-1}`$, was comparable with the quasar’s optical luminosity.

The first dedicated X-ray astronomy satellite, Uhuru, was launched in 1970. Operating until 1973, it made X-ray work a major branch of astronomy. X-rays were reported from the Seyfert galaxies NGC 1275 and NGC 4151 (Gursky et al. 1971). The spectrum of NGC 5128 was consistent with a power law of energy index $`\alpha =0.7`$, where $`\mathrm{L}_\nu \propto \nu ^{-\alpha }`$; and there was low energy absorption corresponding to a column density of $`9\times 10^{22}\ \mathrm{atoms}\ \mathrm{cm}^{-2}`$, possibly caused by gas in the nucleus (Tucker et al. 1973).

Early variability studies were hampered by the need to compare results from different experiments, but Winkler and White (1975) found a large change in the flux from Cen A in only 6 days from OSO-7 data. Using Ariel V observations of NGC 4151, Ives et al. (1976) found a significant increase in flux from earlier Uhuru measurements. Marshall et al. (1981), using Ariel V data on AGN gathered over a 5 year period, found that roughly half of the sources varied by up to a factor of 2 on times less than or equal to a year. A number of sources varied in times of 0.5 to 5 days. Marshall et al. articulated the importance of X-ray variability observations, which show that the X-rays “arise deep in the nucleus” and “relate therefore to the most fundamental aspect of active galaxies, the nature of the central ‘power house’.”

Strong X-ray emission as a characteristic of Sy 1 galaxies was established by Martin Elvis and his coworkers from Ariel V data (Elvis et al. 1978). This work increased to 15 the number of known Seyfert X-ray sources, of which at least three were variable. Typical luminosities were $`10^{42.5}`$ to $`10^{44.5}\ \mathrm{erg}\ \mathrm{s}^{-1}`$. The X-ray power correlated with the infrared and optical continuum and the H$`\alpha `$ line. Seyfert galaxies evidently made a significant contribution to the X-ray background, and limits could be set on the evolution of Seyfert galaxy number densities and X-ray luminosities in order that they not exceed the observed background. Elvis et al. considered thermal bremsstrahlung ($`\sim 10^7`$ K), synchrotron, and synchrotron self-Compton models of the X-ray emission.
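The causal argument behind such variability claims can be made concrete with a back-of-envelope estimate (illustrative numbers, not figures from the papers above): a source that varies coherently on a timescale $`\mathrm{\Delta }t`$ can be no larger than

$$R\lesssim c\,\mathrm{\Delta }t,$$

so the 6-day change seen in Cen A limits the X-ray emitting region to $`R\lesssim 1.6\times 10^{16}`$ cm, about $`5\times 10^{-3}`$ pc — minuscule compared with the kiloparsec scales of the host galaxies.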
HEAO-1, the first of the High Energy Astronomy Observatories, was an X-ray facility that operated from 1977 to 1979. It gathered data on a sufficient sample of objects to allow comparisons of different classes of AGN and to construct a log N-log S diagram and an improved luminosity function. HEAO-1 provided broad-band X-ray spectral information for a substantial set of AGN, showing spectral indices $`\alpha \approx 0.7`$, with rather little scatter, and absorbing columns $`<5\times 10^{22}\ \mathrm{cm}^{-2}`$ (Mushotzky et al. 1980). The Einstein Observatory (HEAO-2) featured grazing incidence focusing optics allowing detection of sources as faint as $`10^{-7}`$ the intensity of the Crab nebula. Tananbaum et al. (1979) used Einstein data to study QSOs as a class of X-ray emitters. Luminosities of $`10^{43}`$ to $`10^{47}\ \mathrm{erg}\ \mathrm{s}^{-1}`$ (0.5 to 4.5 keV) were found. OX 169 varied substantially in under 10,000 s, indicating a small source size. This suggested a black hole mass not greater than $`2\times 10^8\ \mathrm{M}_{\odot }`$, if the X-rays came from the inner portion of an accretion flow. By this time, strong X-ray emission was established as a characteristic of all types of AGN and a valuable diagnostic of their innermost workings.

### 4.5 The Continuum

Today, the word “continuum” in the context of AGN might bring to mind anything from radio to gamma ray frequencies. However, in the early days of QSO studies, the term generally meant the optical continuum, extending to the ultraviolet and infrared as observations in these bands became available. Techniques of photoelectric photometry and spectrum scanning were becoming established as QSO studies began. The variability of QSOs, including 3C 48 and 3C 273 (e.g., Sandage 1963), was known and no doubt contributed to astronomers’ initial hesitation to interpret QSO spectra in terms of large redshifts. In his contribution to the four discovery papers on 3C 273, Oke (1963) presented spectrophotometry showing a continuum slope $`L_\nu \propto \nu ^{+0.3}`$ in the optical, becoming redder toward the near infrared. He noted that the energy distribution did not resemble a black body, and inferred that there must be a substantial contribution of synchrotron radiation.

A key issue for continuum studies has been the relative importance of thermal and nonthermal emission processes in various wavebands. Early work tended to assume synchrotron radiation, or “nonthermal emission”, in the absence of strong evidence to the contrary. The free-free and bound-free emission from the gas producing the observed emission lines was generally a small contribution. The possibility of thermal emission from very hot gas was considered for some objects such as the flat blue continuum of 3C 273 (e.g., Oke 1966). The energy distributions tend to slope up into the infrared; and for thermal emission from optically thin gas, this would have required a rather low temperature and an excessive Balmer continuum jump. This left the possibilities of nonthermal emission or thermal emission from warm dust, presumably heated by the ultraviolet continuum. Observational indicators of thermal or nonthermal emission include broad features in the energy distribution, variability, and polarization. For the infrared, one also has correlations with reddening, the silicate absorption and emission features, and possible angular resolution of the source (Edelson et al. 1988). For some objects, rapid optical variability implied brightness temperatures that clearly required a nonthermal emission mechanism.
For example, Oke (1967) observed day-to-day changes of 0.25 and 0.1 mag for 3C 279 and 3C 446, respectively. For many objects, the energy distributions were roughly consistent with a power law of slope near $`\nu ^{-1.2}`$. Power laws of similar slopes were familiar from radio galaxies and the Crab nebula, where the emission extended through the optical band. These spectra were interpreted in terms of synchrotron radiation with power-law energy distributions for the radiating, relativistic electrons. Such a power-law energy distribution was also familiar from studies of cosmic rays, and thus power laws seemed natural in the context of high energy phenomena like AGN. In addition to simple synchrotron radiation, there might be a hybrid process involving synchrotron emission in the submillimeter and far infrared, with some of these photons boosted to the optical by “inverse” Compton scattering (Shklovskii 1965). The idea of a nonthermal continuum in the optical, whose high frequency extrapolation provided the ionizing radiation for the emission-line regions, was widely held for many years. This was invoked not only for QSOs but also for Seyfert galaxies, where techniques such as polarization were used to separate the “nonthermal” and galaxy components (e.g., Visvanathan and Oke 1968).

Infrared observations were at first plagued by low sensitivity and inadequate telescope apertures. Measurements of 3C 273 in the K filter (2.2 $`\mu \mathrm{m}`$), published by Johnson (1964) and Low and Johnson (1965), showed a continuum steeply rising into the infrared. Infrared radiation from NGC 1068 was observed by Pacholczyk and Wisniewski (1967), also with a flux density ($`F_\nu `$) strongly rising to the longest wavelength observed (“N” band, or 10 $`\mu \mathrm{m}`$). The infrared radiation dominated the power output of this object. Becklin et al. (1973) found that much of the 10 $`\mu \mathrm{m}`$ emission from NGC 1068 came from a resolved source 1 arcsec (90 pc) across and concluded that most of the emission was not synchrotron emission. In contrast, variability of the 10 $`\mu \mathrm{m}`$ emission from 3C 273 (e.g., Rieke and Low 1972) pointed to a strong nonthermal component. Radiation from hot dust has a minimum source size implied by the black body limit on the surface brightness, and this is more stringent for longer wavelengths radiated by cooler dust. This in turn implies a minimum variability timescale as a function of wavelength. The near infrared emission of NGC 1068 was found to be strongly polarized (Knacke and Capps 1974).

Improving infrared technology, and optical instruments such as the multichannel spectrometer on the 200-inch telescope (Oke 1969), led to larger and better surveys of the AGN continuum. Oke, Neugebauer, and Becklin (1970) reported observations of 28 QSOs from 0.3 to 2.2 $`\mu \mathrm{m}`$. The energy distributions were similar in radio-loud and radio-quiet QSOs. They found that the energy distributions could generally be described as a power law (index -0.2 to -1.6 for $`F_\nu \propto \nu ^\alpha `$) and that they remained “sensibly unchanged” during the variations of highly variable objects. Penston et al. (1974) studied the continuum from 0.3 to 3.4 $`\mu \mathrm{m}`$ in 11 bright Seyfert galaxies. All turned up toward the infrared, and consideration of the month-to-month variability pointed to different sources for the infrared and optical continua.
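To put a number on the black-body size limit mentioned above (an illustrative estimate with assumed values, not figures from the papers cited): dust radiating a luminosity $`L`$ as a black body of temperature $`T`$ must have a radius of at least

$$R_{\mathrm{min}}=\left(\frac{L}{4\pi \sigma T^4}\right)^{1/2},$$

so $`L=10^{45}\ \mathrm{erg}\ \mathrm{s}^{-1}`$ at the grain sublimation temperature $`T\approx 1500`$ K gives $`R_{\mathrm{min}}\approx 5\times 10^{17}`$ cm ($`\approx 0.2`$ pc), and hence a minimum variability timescale $`R_{\mathrm{min}}/c`$ of order 200 days. Cooler dust, radiating at longer wavelengths, forces still larger radii and slower variability.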
From an extensive survey of Seyfert galaxies, Rieke (1978) concluded that strong infrared emission was a “virtually universal” feature, and that the energy distributions in general did not fit a simple power law. The amounts of dust required were roughly consistent with the expected dust in the emission-line gas of the active nucleus and the surrounding interstellar medium. A consensus emerged that the infrared emission of Seyfert 2’s was thermal dust emission, but the situation for Seyfert 1’s was less clear (e.g., Neugebauer et al. 1976, Stein and Weedman 1976). From a survey of the optical and infrared energy distribution of QSOs, Neugebauer et al. (1979) concluded that the slope was steeper in the 1-3 $`\mu \mathrm{m}`$ band than in the 0.3-1 $`\mu \mathrm{m}`$ band, and that an apparent broad bump around 3 $`\mu \mathrm{m}`$ might be dust emission. Neugebauer et al. (1987) obtained energy distributions from 0.3 to 2.2 $`\mu \mathrm{m}`$ for the complete set of quasars in the Palomar-Green (PG) survey (Green, Schmidt, and Liebert 1986) as well as some longer wavelength observations. A majority of objects could be fit with two power laws ($`\alpha \approx -1.4`$ at lower frequencies, $`\alpha \approx -0.2`$ at higher frequencies) plus a “3000 Å bump”. Measurements at shorter and longer wavelengths were facilitated by the International Ultraviolet Explorer (IUE) and the Infrared Astronomical Satellite (IRAS), launched in 1978 and 1983, respectively. Combining such measurements with ground based data, Edelson and Malkan (1986) studied the spectral energy distribution of AGN over the wavelength range 0.1-100 $`\mu \mathrm{m}`$. The 3-5 $`\mu \mathrm{m}`$ “bump” was present in most Seyferts and QSOs, involving up to 40 percent of the luminosity between 2.5 and 10 $`\mu \mathrm{m}`$. All Sy 1 galaxies without large reddening appeared to require a hot thermal component, identified with the increasingly popular concept of emission from an accretion disk. Edelson and Malkan (1987) used IRAS observations to study the variability of AGN in the far infrared. The high polarization objects varied up to a factor 2 in a few months, but no variations greater than 15 percent were observed for “normal” quasars or Seyfert galaxies. The former group was consistent with a class of objects known as “blazars” that are dominated at all wavelengths by a variable, polarized nonthermal continuum. Blazars were found to be highly variable at all wavelengths, but most AGN appeared to be systematically less variable in the far infrared than at higher frequencies. This supported the idea of thermal emission from dust in the infrared. This was further supported by observations at submillimeter wavelengths that showed a very steep decline in flux longward of the infrared peak at around 100 $`\mu \mathrm{m}`$. For example, an upper limit on the flux from NGC 4151 at 438 $`\mu \mathrm{m}`$ (Edelson et al. 1988) was so far below the measured flux at 155 $`\mu \mathrm{m}`$ as to require a slope steeper than $`\nu ^{+2.5}`$, the steepest that can be obtained from a self-absorbed synchrotron source without special geometries. Dust emission could explain a steeper slope because of the decreasing efficiency of emission toward longer wavelengths. Sanders et al. (1989) presented measurements of 109 QSOs from 0.3 nm to 6 cm ($`10^{10}`$-$`10^{18}`$ Hz). The gross shape of the energy distributions was quite similar for most objects, excepting the flat spectrum radio loud objects such as 3C 273.
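The submillimeter argument above amounts to a two-point spectral index. A minimal sketch, with hypothetical fluxes standing in for the actual NGC 4151 measurements (which are not reproduced here), shows how a detection at 155 $`\mu \mathrm{m}`$ and an upper limit at 438 $`\mu \mathrm{m}`$ translate into a lower limit on the slope:

```python
import numpy as np

def spectral_index(f1, nu1, f2, nu2):
    """Two-point spectral index alpha, defined by F_nu ~ nu^alpha."""
    return np.log(f1 / f2) / np.log(nu1 / nu2)

c_um = 2.998e14                      # speed of light in micron Hz
nu_155, nu_438 = c_um / 155.0, c_um / 438.0
f_155, f_438_limit = 8.0, 0.4        # Jy; illustrative numbers only
alpha_min = spectral_index(f_155, nu_155, f_438_limit, nu_438)
print(f"alpha > {alpha_min:+.2f}  (self-absorbed synchrotron allows at most +2.5)")
```

Any upper limit a factor of about 15 or more below the 155 $`\mu \mathrm{m}`$ flux forces the slope past the synchrotron self-absorption bound, which is the logic of the argument quoted above.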
This typical energy distribution could be fit by a hot accretion disk at shorter wavelengths and heated dust at longer wavelengths. Warping of the disk at larger radii was invoked to give the needed amount of reprocessed radiation as a function of radius. As noted by Rees et al. (1969) and others, the rather steep slope in the infrared, giving rise to an apparent minimum in the flux around 1 $`\mu \mathrm{m}`$, could be explained naturally by the fact that grains evaporate if heated to temperatures above about 1500 K. Sanders et al. saw “no convincing evidence for energetically significant nonthermal radiation” in the wavelength range 3 nm to 300 $`\mu \mathrm{m}`$ in the continua of radio quiet and steep-spectrum radio-loud quasars. This paper marked the culmination of a gradual shift of sentiment from nonthermal to thermal explanations for the continuum of non-blazar AGN. The blazar family comprised “BL Lac objects” and “Optically Violent Variable” (OVV) QSOs. BL Lac objects, named after the prototype object earlier listed in catalogs of variable stars, had a nonthermal continuum but little or no line emission. OVVs have the emission lines of QSOs. These objects all show a continuum that is fairly well described as a power law extending from X-ray to infrared frequencies. They typically show rapid (sometimes day-to-day) variability and strong, variable polarization. The continuum in blazars is largely attributed to nonthermal processes (synchrotron emission and inverse Compton scattering). 3C 273 seems to be a borderline OVV (Impey, Malkan, and Tapia 1989). The need for relativistic motions, described above, arises in connection with this class of objects. A comprehensive study of the energy distributions of blazars from $`10^8`$ to $`10^{18}`$ Hz was given by Impey and Neugebauer (1988). Bolometric luminosities ranged from $`10^9`$ to $`10^{14}\mathrm{L}_{\odot }`$, dominated by the 1 to 100 $`\mu \mathrm{m}`$ band. There was evidence for a thermal infrared component in many of the less luminous objects, and an ultraviolet continuum bump associated with the presence of emission lines. When gamma rays are observed from AGN (e.g., Swanenburg et al. 1978), they appear to be associated with the beamed nonthermal continuum. The relationship of blazars to “normal” AGN is a key question in the effort to unify the diverse appearance of AGN. IRAS revealed a large population of galaxies whose luminosity was strongly dominated by the far infrared (Soifer, Houck, and Neugebauer 1987). (Rieke had found early indications of a class of ultraluminous infrared galaxies.) The infrared emission is thermal emission from dust, energized in many cases by star formation but in some cases by an AGN. One suggested scenario was that some event, possibly a galactic merger, injected large quantities of gas and dust into the nucleus. This fueled a luminous episode of accretion onto a black hole, at first enshrouded by the dusty gas, whose dissipation revealed the AGN at optical and ultraviolet wavelengths (Sanders et al. 1988). ### 4.6 The Black Hole Paradigm The intriguing paper by Lynden-Bell (1969) still did not launch a widespread effort to understand AGN in terms of accretion disks around black holes. Further impetus came from the discovery of black holes of stellar mass in our Galaxy. Among the objects discovered by Uhuru and other early X-ray experiments were sources involving binary star systems with a neutron star or black hole.
“X-ray pulsars” emitted regular pulses of X-rays every few seconds as the neutron star turned on its axis. The X-ray power was essentially thermal emission from gas transferred from the companion star, impacting on the neutron star with sufficient velocity to produce high temperatures. Another class of source, exemplified by Cyg X-1, showed no periodic variations but a rapid flickering (Oda et al. 1971) indicating a very small size. Analysis of the orbit gave a mass too large to be a neutron star or white dwarf, and the implication was that the system contained a black hole (Webster and Murdin 1972; Tananbaum et al. 1972). The X-ray emission was attributed to gas from the companion O-star heated to very high temperatures as it spiraled into the black hole by way of a disk (Thorne and Price 1975). Galactic X-ray sources, along with cataclysmic variable stars, protostars, and AGN, stimulated efforts to develop the theory of accretion disks. In many cases, the disk was expected to be geometrically thin, and the structure in the vertical and radial directions could be analyzed separately. A key uncertainty was the mechanism by which angular momentum is transported outward as matter spirals inward. In a highly influential paper, Shakura and Sunyaev (1973) analyzed disks in terms of a dimensionless parameter $`\alpha `$ that characterized the stresses that led to angular momentum transport and local energy release. General relativistic corrections were added by Novikov and Thorne (1973). This “$`\alpha `$-model” remains the standard approach to disk theory, and only recently have detailed mechanisms for dissipation begun to gain favor (Balbus and Hawley 1991). The $`\alpha `$-model gave three radial zones characterized by the relative importance of radiation pressure, gas pressure, electron scattering, and absorption opacity. The power producing regions of AGN disks would fall in the “inner” zone dominated by radiation pressure and electron scattering. Electron scattering would dominate in the atmosphere as well as the interior, and modify the local surface emission from an approximate black body spectrum. The “inner” disk zone suffers both thermal and viscous instabilities (Pringle 1976; Lightman and Eardley 1974), but the ultimate consequence of these was unclear. A model in which the ions and electrons had different, very high temperatures was proposed for Cyg X-1 by Eardley, Lightman, and Shapiro (1975). This led to models of “ion supported tori” for AGN (Rees et al. 1982). The related idea of “advection dominated accretion disks” or “ADAFs” (Narayan and Yi 1994) recently has attracted attention. A key question was, do expected physical processes in disks explain the phenomena observed in AGN? In broad terms, this involved producing the observed continuum and, at least in some objects, generating relativistic jets, presumably along the rotation axis. Shields (1978b) proposed that the flat blue continuum of 3C 273 was thermal emission from the surface of an accretion disk around a black hole. For a mass $`10^9\mathrm{M}_{\odot }`$ and accretion rate $`3\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$, the size and temperature of the inner disk was consistent with the observed blue continuum. This component dominated an assumed nonthermal power law, which would explain the infrared upturn and the X-rays. Combining optical, infrared, and ultraviolet observations, Malkan (1983) successfully fitted the continua of a number of QSOs with accretion disk models.
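The disk hypothesis for 3C 273 can be made concrete with the standard thin-disk effective temperature profile, $`T_{\mathrm{eff}}(R)=\{3GM\dot{M}[1-(R_{\mathrm{in}}/R)^{1/2}]/8\pi \sigma R^3\}^{1/4}`$. The sketch below evaluates it for the mass and accretion rate quoted above, with an assumed Schwarzschild inner edge at $`6GM/c^2`$ and no relativistic corrections; it is a Newtonian toy, not the detailed models actually fitted by Shields or Malkan.

```python
import numpy as np

G, c, sigma_sb = 6.674e-8, 2.998e10, 5.670e-5   # cgs
msun, yr = 1.989e33, 3.156e7

def t_eff(r, m_bh, mdot, r_in):
    """Thin-disk effective temperature (Newtonian, zero-torque inner edge)."""
    return (3 * G * m_bh * mdot / (8 * np.pi * sigma_sb * r**3)
            * (1 - np.sqrt(r_in / r))) ** 0.25

m_bh = 1e9 * msun            # mass quoted for 3C 273 above
mdot = 3.0 * msun / yr       # accretion rate quoted above
r_in = 6 * G * m_bh / c**2   # innermost stable orbit, non-rotating hole

r = np.geomspace(1.01 * r_in, 1e3 * r_in, 2000)
temp = t_eff(r, m_bh, mdot, r_in)
i = np.argmax(temp)
print(f"T_max ~ {temp[i]:.1e} K at R ~ {r[i] / r_in:.2f} R_in")
print(f"Wien peak ~ {0.29 / temp[i] * 1e8:.0f} Angstrom")
```

A maximum of a few times $`10^4`$ K places the peak in the ultraviolet, with the optical side of the spectrum rising toward the blue, which is the qualitative behavior the disk interpretation was invoked to explain.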
Czerny and Elvis (1987) suggested that the soft X-ray excess of some AGN could be the high frequency tail of the thermal disk component or “Big Blue Bump”, which appeared to dominate the luminosity of some objects. Problems confronted the simple picture of thermal emission from a disk radiating its locally produced energy. Correlated continuum variations at different wavelengths in the optical and ultraviolet were observed on timescales shorter than the expected timescale for viscous or thermal processes to modify the surface temperature distribution in an AGN disk (e.g., Clavel, Wamsteker, and Glass 1989; Courvoisier and Clavel 1991). This suggested that reprocessing of X-rays incident on the disk made a substantial contribution to the optical and ultraviolet continuum (Collin-Souffrin 1991). Also troublesome was the low optical polarization observed in normal QSOs, typically one percent or less. The polarization generally is oriented parallel to the disk axis, when this can be inferred from jet structures (Stockman, Angel, and Miley 1979). Except for face-on disks, electron scattering in disk atmospheres should produce strong polarization oriented perpendicular to the axis. Yet another problem was the prediction of strong Lyman edge absorption features, given effective temperatures similar to those of O stars (Kolykhalov and Sunyaev 1984). These issues remain under investigation today. The question of fueling a black hole in a galactic nucleus has been difficult. Accretion rates of only a few solar masses a year suffice to power a luminous quasar, and even a billion solar masses is a small fraction of the mass of a QSO host galaxy. However, the specific angular momentum of gas orbiting a black hole at tens or hundreds of gravitational radii is tiny compared to that of gas moving with normal speeds even in the central regions of a galaxy. The angular momentum must be removed if the gas is to feed the black hole. Moreover, some galaxies with massive central black holes are not currently shining. Indeed, the rapid increase in the number of quasars with increasing look back time (Schmidt 1972) implies that there are many dormant black holes in galactic nuclei. What caused some to blaze forth as QSOs while others are inert? A fascinating possibility was the tidal disruption of stars orbiting close to the black hole (Hills 1975). However, the rate at which new stars would have their orbits evolve into disruptive ones appeared to be too slow to maintain a QSO luminosity (Frank and Rees 1976). The probability of an AGN in a galaxy appeared to be enhanced if it was interacting with a nearby galaxy (Adams 1977; Dahari 1984), which suggested that tidal forces could induce gas to sink into the galactic nucleus. There, unknown processes might relieve it of its angular momentum and allow it to sink closer and closer to the black hole. The growing acceptance of the black hole model resulted, not from any one compelling piece of evidence, but rather from the accumulation of observational and theoretical arguments suggestive of black holes and from the lack of viable alternatives (Rees 1984). ### 4.7 Unified Models After the discovery of QSOs, the widely different appearances of different AGN became appreciated. The question arose, what aspects of this diversity might result from the observer’s location relative to the AGN? A basic division was between radio loud and radio quiet objects.
Since the extended radio sources radiate fairly isotropically, their presence or absence could not be attributed to orientation. Furthermore, radio loud objects seemed to be associated with elliptical galaxies, and radio quiet AGN with spiral galaxies. The huge range of luminosities from Seyferts to QSOs clearly was largely intrinsic. However, some aspects could be a function of orientation. Blandford and Rees (1978) proposed that BL Lac objects were radio galaxies viewed down the axis of a relativistic jet. Relativistic beaming caused the nonthermal continuum to be very bright when so viewed, and the emission lines (emitted isotropically) would be weak in comparison. The same object, viewed from the side, would have normal emission-line equivalent widths, and the radio structure would be dominated by the extended lobes rather than the core. A key breakthrough occurred as a result of advances in the techniques of spectropolarimetry. Rowan-Robinson (1977) had raised the possibility that the BLR of Seyfert 2 galaxies was obscured by dust, rather than being truly absent. Using a sensitive spectropolarimeter on the 120-inch Shane telescope at Lick Observatory, Antonucci and Miller (1985) found that the polarized flux of NGC 1068, the prototype Seyfert 2, had the appearance of a normal Seyfert 1 spectrum. This was interpreted in terms of a BLR and central continuum source obscured from direct view by an opaque, dusty torus. Electron scattering material above the nucleus near the axis of the torus scattered the nuclear light to the observer, polarizing it in the process. This allowed Seyfert 2’s to have a detectable but unreddened continuum. However, the broad lines had escaped notice because the scattered light was feeble compared with the narrow lines from the NLR, which was outside the presumed obscuring torus. The same object, viewed face on, would be a Seyfert 1. Such a picture had also been proposed by Antonucci (1984) for the broad line radio galaxy 3C 234. Various forms of toroidal geometry had been anticipated by Osterbrock (1978) and others, and the idea received support from the discovery of “ionization cones” in the nuclei of some AGN (Pogge 1988). Orientation indicators were developed involving the ratio of the core and extended radio luminosities (Orr and Browne 1982; Wills and Browne 1986). The concepts of a beamed nonthermal continuum and an obscuring, equatorial torus remain fundamental to current efforts to unify AGN. Consideration of the obscuring torus supports the idea that the X-ray background is produced mostly by AGN (Setti and Woltjer 1989). ## 5 THE VIEW FROM HERE The efforts described above led to many of the observational and theoretical underpinnings of our present understanding of AGN. The enormous effort devoted to AGN in recent years has led to many further discoveries and posed exciting challenges. Massive international monitoring campaigns (Peterson 1993) have revealed that the BLR is ionization stratified with respect to radius, that the BLR radius increases with luminosity, and that the gas is not predominantly in a state of radial flow inwards or outwards. This suggests the likelihood of orbiting material. Models involving a mix of gas with a wide range of densities and radii may give a natural explanation of AGN line ratios (Baldwin et al. 1995). Chemical abundances in QSOs have been analyzed in the context of galactic chemical evolution (Hamann and Ferland 1993).
Recent theoretical work indicates that the observed, centrally peaked line profiles can be obtained from a wind leaving the surface of a Keplerian disk (Murray and Chiang 1997). Efforts to understand the broad absorption lines (BALs) of QSOs have intensified in recent years. The geometry and acceleration mechanism are still unsettled, although disk winds may be involved here too (Murray et al. 1995). Partial coverage of the continuum source by the absorbing clouds complicates the effort to determine chemical abundances (e.g., Arav 1997). The black hole model has gained support from indirect evidence for massive black holes in the center of the Milky Way and numerous nearby galaxies (see Rees 1997). This includes the remarkable “H<sub>2</sub>O megamaser” VLBI measurements of the Seyfert galaxy NGC 4258 (Miyoshi et al. 1995), which give strong evidence for a black hole of mass $`4\times 10^7\mathrm{M}_{\odot }`$. X-ray observations suggest reflection of X-rays incident on an accretion disk (Pounds et al. 1989), and extremely broad Fe K$`\alpha `$ emission lines may give a direct look at material orbiting close to the black hole (Tanaka et al. 1998). These results reinforce the black hole picture, but much remains to be done to understand the physical processes at work in AGN. In spite of much good work, the origin and fueling of the hole, the physics of the disk, and the jet production mechanism still are not well understood. The nature of the AGN continuum remains unsettled; for example, the contribution of the disk to the optical and ultraviolet continuum is still debated (Koratkar and Blaes 1999). The primary X-ray emission mechanism and the precise role of thermal and nonthermal emission in the infrared remain unclear (Wilkes 1999). Blazars have proved to be strong $`\gamma `$-ray sources, with detections up to TeV energies (Punch et al. 1992). Radio emission was key to the discovery of quasars, and radio techniques have seen great progress. The Very Large Array in New Mexico has produced strikingly detailed maps of radio sources, and shown the narrow channels of energy from the nucleus to the extended lobes. Maps of “head-tail” sources in clusters of galaxies show the interplay between the active galaxy and its environment. The Very Long Baseline Array (VLBA) will yield improved measurements of structures on light-year scales in QSOs and provide insights into relativistic motions in AGN. Likewise, new orbiting X-ray observatories promise great advances in sensitivity and spectral resolution. The Hubble Deep Field and other deep galaxy surveys have led to the measurement of redshifts for galaxies as high as those of QSOs. This is already stimulating increased efforts to understand the interplay between AGN and the formation and evolution of galaxies. The decline of AGN as an active subject of research is nowhere in sight. ## 6 BIBLIOGRAPHY In addition to the primary literature, I have drawn on a number of reviews, books, and personal communications. For the early work in radio astronomy, the books by Sullivan (1982, 1984) were informative and enjoyable; the former conveniently reproduces many of the classic papers. The book by Burbidge and Burbidge (1967) was an invaluable guide. A brief summary of early studies is contained in the introduction to Osterbrock’s (1989) book. The Conference on Seyfert Galaxies and Related Objects (Pacholczyk and Weymann 1968) makes fascinating reading today.
The status of AGN research in the late 1970s is indicated by the Pittsburgh Conference on BL Lac Objects (Wolfe 1978). Many aspects of AGN are discussed in the volume in honor of Professor Donald E. Osterbrock (Miller 1985), which remains of interest both from an historical and a modern perspective. Review articles that especially influenced this work include those by Bregman (1990) on the continuum; Mushotzky, Done, and Pounds (1993) and Bradt, Ohashi, and Pounds (1992) on X-rays; and Stein and Soifer (1983) on dust in galaxies. Historical details of the discovery of QSO redshifts are given by Schmidt (1983, 1990); and an historical account of early AGN studies is given in the introduction to the volume by Robinson et al. (1964). A comprehensive early review of AGN was given by Burbidge (1967b). A review of superluminal radio sources is given by Kellermann (1985), and the emission-line regions are reviewed by Osterbrock and Mathews (1986). A succinct review of important papers in the history of AGN research is given by Trimble (1992). Recent books on AGN include those of Krolik (1999), Peterson (1997), and Robson (1996). Many interesting articles are contained in the volume edited by Arav et al. (1997). Recent technical reviews include those by Koratkar and Blaes (1999) on the disk continuum; Antonucci (1993) and Urry and Padovani (1995) on unified models; Lauroesch et al. (1996) on absorption lines and chemical evolution; Ulrich, Maraschi, and Urry (1997) on variability; and Hewett and Foltz (1994) on quasar surveys. The author is indebted to many colleagues for valuable communications and comments on the manuscript, including Stu Bowyer, Geoff and Margaret Burbidge, Marshall Cohen, Suzy Collin, Martin Elvis, Jesse Greenstein, Ken Kellermann, Matt Malkan, Bill Mathews, Richard Mushotzky, Gerry Neugebauer, Bev Oke, Martin Rees, George Rieke, Maarten Schmidt, Woody Sullivan, Marie-Helene Ulrich, and Bev and Derek Wills. Don Osterbrock was especially supportive and helpful. This article was written in part during visits to the Department of Space Physics and Astronomy, Rice University; Lick Observatory; and the Institute for Theoretical Physics, University of California, Santa Barbara. The hospitality of these institutions is gratefully acknowledged. This work was supported in part by The Texas Higher Education Coordinating Board.
## 1 Introduction As the supernova ejecta expand they become optically thin in the continuum after $`\sim 200`$ days, and it is possible to directly probe the interior of the ejecta. The main purpose of our modeling of the spectra and line fluxes is to put constraints on the nucleosynthesis taking place in the progenitor, as well as during the explosion itself. See Kozma & Fransson (1998b) for a detailed discussion. There are several observational indications that mixing occurred in the explosion of SN 1987A. One is the early emergence of X-rays (Dotani et al. 1987; Sunyaev et al. 1987) and $`\gamma `$-rays (Matz et al. 1988). The effects of mixing can also be seen in the observed line profiles at late times (Stathakis et al. 1991; Spyromilio, Stathakis, & Meurer 1993; Hanuschik et al. 1993). By modeling line profiles we are able to study the distribution of mass of different elements. This mass distribution gives us information on the hydrodynamics taking place in the explosion (e.g., Müller, Fryxell, & Arnett 1991; Herant & Benz 1992; Herant, Benz, & Colgate 1992). In modeling the line profiles we can also see the importance of including all the different composition regions, as well as time dependence, in our calculations. The modeled emission gives information on the energy source powering the ejecta. Possible energy sources at late times are radioactive isotopes, a central compact object, and circumstellar interaction. The radioactive isotopes formed in the explosion, which power the ejecta at subsequent times, are <sup>56</sup>Ni, <sup>57</sup>Ni, and <sup>44</sup>Ti. After $`\sim 1700`$ days <sup>44</sup>Ti is the dominant isotope. The formation of this isotope is sensitive to the occurrence of $`\alpha `$-rich freeze-out (e.g., Woosley & Hoffman 1991; Timmes et al. 1996). The amount of this element in the ejecta therefore directly probes the explosion mechanism itself. Although our model is quite general and may be applied to any supernova, as long as it is not dominated by circumstellar contribution, we will here concentrate on the results from our study of SN 1987A. ## 2 Model Our modeling of SN 1987A is described in detail in Kozma & Fransson (1998a). Here we will just give a short summary. The explosion model we use here for our abundances is the 10H model (Woosley & Weaver 1986; Woosley 1988). Our density and velocity structure is based on hydrodynamical calculations (e.g., Herant & Benz 1992) and on line profiles (Phillips et al. 1990; Meikle et al. 1993). Our treatment of the thermalization of the non-thermal $`\gamma `$-rays and positrons from radioactive decays is described in Kozma & Fransson (1992), and is based on the Spencer-Fano equation (Spencer & Fano 1954). We solve the thermal and ionization balances time dependently, as well as the level populations of the most important ions. The total number of transitions included in our calculations is $`\sim 6400`$. We take dust absorption into account by assuming optically thick clumps with a constant covering factor of 0.40 (Lucy et al. 1991; Wooden et al. 1993) after 600 days, and a linear increase from 0 to 0.40 in the time interval 350 to 600 days. ## 3 Uncertainties One of the uncertainties in our modeling is our treatment of line blanketing. The Sobolev approximation is used for the line transfer, which in general is a good approximation for the high velocity expansions in supernovae.
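As an aside, the Sobolev optical depth that underlies this approximation takes a particularly simple form for homologous expansion ($`v=r/t`$, so $`dv/dr=1/t`$): $`\tau _S=(\pi e^2/m_ec)f\lambda n_lt`$. The sketch below evaluates it for assumed, purely illustrative inputs (an H$`\alpha `$-like transition and an arbitrary lower-level density, neither taken from our models):

```python
SIGMA0 = 0.02654   # pi e^2 / (m_e c) in cm^2 Hz (cgs)
DAY = 86400.0

def tau_sobolev(f_lu, lam_cm, n_lower, t_days):
    """Sobolev optical depth for homologous expansion (dv/dr = 1/t)."""
    return SIGMA0 * f_lu * lam_cm * n_lower * t_days * DAY

# illustrative: H-alpha (f ~ 0.64), an assumed n = 2 population of
# 1e4 cm^-3, at an ejecta age of 800 days
print(f"tau_S ~ {tau_sobolev(0.641, 6563e-8, 1e4, 800):.1e}")
```

Even modest level populations keep such lines optically thick for years, which is why the Sobolev treatment remains adequate long after the continuum has become thin.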
However, it does not take into account any interaction between different lines or different regions in the ejecta. Especially in the UV, where there are a lot of resonance lines, such interaction is expected to be important. The importance of UV-scattering decreases with time as the matter thins. The blocking of the UV-lines affects the emergent spectrum by degrading the UV-photons to photons of lower energies. It also changes the UV-field within the ejecta, which affects the ionization balance mainly for ions with low ionization potentials and low abundances, such as Na I, Mg I, Si I, Ca I, Fe I. Another aspect of supernova evolution, which we do not include, is the formation of molecules. In SN 1987A CO and SiO were observed already 112 days after the explosion (Oliva, Moorwood, & Danziger 1987; Spyromilio et al. 1988). Modeling of CO formation in SN 1987A has been done by e.g. Gearhart, Wheeler, & Swartz (1999), who find a CO-mass of $`\sim 10^{-4}\mathrm{M}_{\odot }`$ at 200 days. As molecules can be strong coolers, they can greatly influence the temperature at their formation sites. In order to form CO both carbon and oxygen should be abundant. The most favorable site for this to happen is therefore in the oxygen-carbon rich regions. The most favorable place for SiO to form is in the transition region between the silicon-rich and oxygen-rich regions. The regions suitable for molecule formation have little mass, which explains the low mass of molecules in the ejecta. The importance of the H<sub>2</sub>-molecule has been discussed by Culhane & McCray (1995). They calculate the abundance of H<sub>2</sub> theoretically as it has not been observed. They also discuss how the resonance scattering in the Lyman and Werner bands may affect the emergent UV-spectra. There are several observational indications of dust formation in the ejecta after $`\sim 400`$ - 500 days (Roche et al. 1989; Lucy et al. 1991; Meikle et al. 1993). Dust absorbs much of the harder radiation, thermalizes it, and reradiates it in the IR. The dust absorption affects both the line fluxes and profiles. We treat dust absorption in a simplified way in our models, as mentioned above. Our treatment of the dust assumes it to be located in optically thick clumps. This is a reasonable assumption as the effects of dust absorption are seen in both optical and IR lines (Lucy et al. 1991), consistent with optically thick clumps. However, we do not include dust cooling in our calculations. When we compare our modeled spectra (Figure 2) and light curves (Figure 4) with observations we find a generally good agreement. We therefore believe that our temperature structure describes the conditions in the ejecta well, and that dust cooling is not significant in the line emitting regions. In the decay of <sup>44</sup>Ti $`\rightarrow `$ <sup>44</sup>Sc + $`e^+`$ $`\rightarrow `$ <sup>44</sup>Ca + $`\gamma `$ the positrons dominate the non-thermal energy input at late times. In our model we assume complete trapping of these positrons in the iron-rich regions, containing the newly synthesized iron. Chugai et al. (1997) find that the intensity of the Fe II lines can be explained only if trapping of the positrons from <sup>44</sup>Ti is efficient in the iron-rich parts of the ejecta. Again, based on the fact that we find such good agreement between modeled and observed fluxes (Figures 2 and 4), assuming full trapping of positrons, we believe this to be a good assumption. A leakage of positrons would result in a much faster decline of the light curves than observed.
## 4 Temperature and ionization evolution The temperatures in the different composition regions in the supernova ejecta evolve differently, depending on composition and density, as discussed in detail in Kozma & Fransson (1998a). The temperature in the inner, heavy-element regions decreases slowly with time until it reaches a temperature of $`\sim 2000`$ K. At that time far-IR lines replace optical lines as the dominant source of cooling and the temperature suddenly drops to $`\lesssim 500`$ K. The reason for this is a thermal instability setting in. The cooling by far-IR lines is insensitive to the temperature at temperatures much larger than $`\sim 500`$ K. This sudden drop in temperature is referred to as the IR-catastrophe (Axelrod 1980; Fransson & Chevalier 1989; Fransson 1994). The time for the onset of the IR-catastrophe varies for the different compositions. For example, in the iron-rich region the drop in temperature sets in at $`\sim 600`$ days (Figure 1). In Figure 1 one can also see the gradual transition from optical to near-IR, and to far-IR lines as the IR-catastrophe sets in. In the hydrogen- and helium-rich regions this IR-catastrophe is not seen. Here adiabatic cooling is more important, and the temperature decreases more slowly, as $`T\propto t^{-2}`$. In our modeling we find it important to include time dependence. After $`\sim 800`$ - 900 days the recombination and cooling time scales become longer than the radioactive decay time scales and the steady state assumption is no longer valid. This freeze-out effect is discussed in Fransson & Kozma (1993), and is crucial for modeling the bolometric light curve, as well as individual line fluxes and profiles. ## 5 Line fluxes Here we will discuss a couple of interesting points concerning our modeling of line fluxes and spectra. A thorough discussion of the line emission is given in Kozma & Fransson (1998b). In Figure 2 we compare a preliminary calculated spectrum with observations and find quite good agreement. In this calculation we have included line scattering in a simplified way. We find that in particular the \[Ca II\] $`\lambda \lambda `$7291,7324 lines and the IR-triplet are very sensitive to the assumption of line scattering. Radiative pumping of the calcium H, K lines by UV emission, mainly from Fe I, is very important at this time. Because of its importance for the oxygen mass determination we now discuss the \[O I\] $`\lambda \lambda `$6300,6364 lines in some detail. In these lines we can see the temperature evolution of the oxygen-rich regions (dotted line in Figure 3). Up to $`\sim 800`$ days these lines are formed by thermal excitation to the $`{}_{}{}^{1}D`$ level. At that time the IR-catastrophe sets in and the temperature drops rapidly. At later epochs the temperature is too low in these regions for thermal excitation to be of any importance, and non-thermal excitation is responsible for the line emission. We are able to model the thermal part of the light curve quite well. For the non-thermal part, however, we run into problems. There are HST observations of the \[O I\] $`\lambda \lambda `$6300,6364 lines up to day 3597, and we underproduce the line fluxes by up to a factor of almost 10. In Kozma & Fransson (1998b) we discuss in detail our modeling of these lines. We include contributions from the triplet levels to the $`{}_{}{}^{1}D`$ level, which are sensitive to the composition in the oxygen-rich regions.
We discuss the importance of photoionization, we try different filling factors, etc. Even if we push all our assumptions as far as we can in order to maximize the \[O I\] $`\lambda \lambda `$6300,6364 fluxes we are still not able to reproduce the observations. Because of this failure, we are now happy to report that the solution to this problem is to be found in blending of the \[O I\] $`\lambda \lambda `$6300,6364 lines with a fairly strong \[Fe I\] multiplet (Fransson, Kozma & Wang 1999). While this is unimportant for the thermal part, the low non-thermal flux is dominated by the \[Fe I\] lines. In Figure 3 we show a preliminary calculation of the 6300 Å-feature, including both the \[O I\] and \[Fe I\] emission (solid line), together with observations. The dotted line in Figure 3 shows the contribution only from the \[O I\] $`\lambda \lambda `$6300,6364 lines. This agreement therefore makes our determination of the oxygen mass in SN 1987A considerably more firm. In Kozma & Fransson (1998b) we find a value of $`\sim 1.9`$ $`\mathrm{M}_{\odot }`$ of oxygen-enriched gas. ## 6 Line profiles The line profiles provide a tool to probe the distribution of the different elements in the ejecta. In Kozma & Fransson (1998b) we compare our modeled line profiles of H$`\alpha `$, He I 2.058 $`\mu `$m, \[O I\] $`\lambda \lambda `$6300,6364 with observations and estimate the mass and distribution of hydrogen, helium and oxygen. In Fransson et al. (1999) we continue this work and model the H$`\alpha `$ line profile for different epochs up to 4000 days. We find that the hydrogen envelope, as reflected in the line wings and extending from 2000 to 6000 km s<sup>-1</sup>, becomes increasingly important with time. This is an effect of the freeze-out becoming more important in the outer, low density regions of the envelope. Our modeled line profiles of iron, e.g., \[Fe II\] 17.94 $`\mu `$m and 25.99 $`\mu `$m, also show line wings out to 6000 km s<sup>-1</sup> (Fransson & Kozma 1999), showing that the primordial iron within the hydrogen envelope contributes significantly, due to freeze-out. The contribution from primordial iron to the \[Fe II\] 25.99 $`\mu `$m line flux can be seen in Figure 5 (dash-dotted line). ## 7 <sup>44</sup>Ti-mass The energy input to the ejecta at late times is the decay of radioactive isotopes formed in the explosion. The three radioactive isotopes that subsequently dominate the energy input are <sup>56</sup>Co, <sup>57</sup>Co, and <sup>44</sup>Ti. The decay of <sup>56</sup>Co dominates up to $`\sim 800`$ - 900 days, thereafter <sup>57</sup>Co becomes an increasingly important energy source. After $`\sim 1700`$ days the energy input to the ejecta is dominated by $`\gamma `$-rays and positrons from the decay of <sup>44</sup>Ti. The amount of <sup>56</sup>Co in the ejecta is accurately determined from modeling of the bolometric light curve and is found to be $`\sim 0.07`$ $`\mathrm{M}_{\odot }`$ for SN 1987A. Also the amount of <sup>57</sup>Co can be inferred from the bolometric light curve if the effects of freeze-out are properly taken into account (Fransson & Kozma 1993). Other ways to estimate the <sup>57</sup>Co-mass are from studies of \[Fe II\] and \[Co II\] IR-lines (Varani et al. 1990; Danziger et al. 1991) and observation of the 122 keV <sup>57</sup>Co line (Kurfess et al. 1992). The inferred mass of <sup>57</sup>Ni expelled is $`\sim 0.0033`$ $`\mathrm{M}_{\odot }`$.
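The division of the energy input among these three isotopes can be sketched with a simple instantaneous-deposition estimate, $`L_i(t)=N_i(0)E_ie^{-t/\tau _i}/\tau _i`$. In the snippet below the per-decay energies and the <sup>44</sup>Ti lifetime are representative literature values rather than numbers from this paper, and $`\gamma `$-ray escape is ignored; including escape moves the first crossover earlier, toward the $`\sim 800`$ - 900 days quoted above, while the <sup>57</sup>Co-to-<sup>44</sup>Ti handover lands close to the $`\sim 1700`$ days cited.

```python
import numpy as np

MEV, MSUN, M_U, DAY = 1.602e-6, 1.989e33, 1.661e-24, 86400.0

# mass number, ejected mass (Msun), e-folding time (days), and mean
# gamma + positron kinetic energy per decay (MeV); the energies and
# the ~87 yr 44Ti lifetime are representative values, not from this paper
ISOTOPES = {
    "56Co": (56, 0.07,   111.3,   3.7),
    "57Co": (57, 0.0033, 392.0,   0.12),
    "44Ti": (44, 1.5e-4, 31800.0, 1.9),
}

def lum(name, t_days):
    """Instantaneous decay power, assuming full energy deposition."""
    a, m, tau, e_dec = ISOTOPES[name]
    n0 = m * MSUN / (a * M_U)          # initial number of nuclei
    return n0 / (tau * DAY) * np.exp(-t_days / tau) * e_dec * MEV

t = np.arange(200.0, 4000.0, 1.0)
for first, second in [("56Co", "57Co"), ("57Co", "44Ti")]:
    i = np.argmin(np.abs(lum(first, t) - lum(second, t)))
    print(f"{first} -> {second} crossover near day {t[i]:.0f}")
```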
The <sup>44</sup>Ti isotope is very interesting as it, together with <sup>57</sup>Co, is formed in the supernova explosion itself (e.g., Woosley & Hoffman 1991), and the amount formed is sensitive to the conditions prevailing there. There are several ways to try to estimate the mass of <sup>44</sup>Ti in the ejecta of SN 1987A. To model the bolometric light curve and compare it with observations is no longer a fruitful approach. Observational uncertainties of the bolometric light curve are large for times later than $`\sim 1200`$ days, as more and more of the emission emerges in the IR. For example, Bouchet et al. (1996) find that at 2172 days $`\sim 97`$ % of the energy is emitted in the IR, out of which only $`\sim 1`$ % can be directly observed. A better approach is to observe and model the different broad bands. We then directly avoid the observational uncertainties. In Kozma & Fransson (1999) we discuss our modeling of broad band photometry and by comparing the models with observations we reach a preliminary estimate of the <sup>44</sup>Ti-mass of $`(1.5\pm 1.0)\times 10^{-4}\mathrm{M}_{\odot }`$. In Figure 4 the light curves for the B and V bands are shown. For the three models we use a <sup>44</sup>Ti-mass of $`0.5\times 10^{-4},1.0\times 10^{-4}`$, and $`2.0\times 10^{-4}\mathrm{M}_{\odot }`$, respectively. Still another approach to estimate the mass of <sup>44</sup>Ti is to look at individual line fluxes. The line used for such an estimate has to be carefully chosen. For example, the strong and well-understood H$`\alpha `$ line turns out to be a bad choice. The H$`\alpha `$ emission has large contributions from the envelope regions at later times, where the freeze-out effect dominates (Kozma & Fransson 1999; Fransson et al. 1999). The H$`\alpha `$ emission is therefore very insensitive to the instantaneous energy input. A better choice would be the iron lines. At these late times they originate in the iron-rich region within the core, where freeze-out is much less important. There are, however, problems also with these lines. Many of the iron lines in the optical and IR are heavily blended, which makes it difficult to infer fluxes of individual lines. Another problem is the uncertainty of the atomic data, in addition to incomplete atomic models. The best individual line is \[Fe II\] 25.99 $`\mu `$m. It arises from collisional excitations, and the uncertainties in the atomic data related to the recombination cascade are avoided. Figure 5 shows the calculated flux for the \[Fe II\] 25.99 $`\mu `$m line for SN 1987A. The three <sup>44</sup>Ti-masses used are $`0.5\times 10^{-4}`$, $`1.0\times 10^{-4}`$, and $`2.0\times 10^{-4}`$ $`\mathrm{M}_{\odot }`$ (the three solid lines). As shown in this figure the flux is almost proportional to the <sup>44</sup>Ti-mass after $`\sim 2500`$ days. Also shown in Figure 5 is a steady state model (the dashed line), for the M(<sup>44</sup>Ti)=$`1.0\times 10^{-4}`$ $`\mathrm{M}_{\odot }`$ case. Steady state is a good approximation (for this line) for times earlier than $`\sim 1000`$ days, and times later than $`\sim 3000`$ days. For times in between, time dependence has to be included in order to model the line fluxes accurately. The reason for this can be understood from the origin of the iron flux. The dotted line in Figure 5 shows the contribution to the total line flux from the newly synthesized iron, in the iron-rich regions. The dash-dotted line, on the other hand, shows the contribution from the primordial iron within the hydrogen-rich regions.
Between $`\sim 1000`$ and $`\sim 2400`$ days the primordial iron dominates the line flux. But even at earlier times the primordial iron contributes significantly to the flux. This iron mostly resides in the low density hydrogen envelope, which is most sensitive to the freeze-out effect, and therefore to the assumption of time dependence. After $`\sim 3000`$ days the newly synthesized iron dominates the emission. The iron-rich regions are much less affected by freeze-out and the emission directly reflects the energy input. Therefore this line is a good tracer of the <sup>44</sup>Ti-mass. We find in our modeling that approximately half of the supernova’s emission at 4000 days emerges in this line. Therefore the \[Fe II\] 25.99 $`\mu `$m line flux is one of the most reliable ways to determine the <sup>44</sup>Ti-mass. ISO observations of the \[Fe II\] 25.99 $`\mu `$m line are reported in Lundqvist et al. (1999), giving an upper limit on the observed \[Fe II\] 25.99 $`\mu `$m flux on day 3999. These observations together with model calculations give an estimate of the upper limit of the <sup>44</sup>Ti-mass of $`1.4\times 10^{-4}\mathrm{M}_{\odot }`$. A third, and future, way to estimate M(<sup>44</sup>Ti) is to observe the $`\gamma `$-ray line at 1.156 MeV with instruments like INTEGRAL (Leising 1994). ## Acknowledgements I would like to thank Claes Fransson and Peter Lundqvist for stimulating discussions and for useful comments on the manuscript. I also thank the SINS team (P.I. R. Kirshner) for the use of data prior to publication. This research was supported by the Swedish Natural Science Research Council, the Swedish National Space Board and the Knut and Alice Wallenberg Foundation.
# The kinematics and the origin of the ionized gas in NGC 4036 ## 1 Introduction Stars and ionized gas provide independent probes of the mass distribution in a galaxy. The comparison between their kinematics is particularly important in dynamically hot systems (i.e. whose projected velocity dispersion is comparable to rotation). In fact in elliptical galaxies and bulges the ambiguities about orbital anisotropies can lead to considerable uncertainties in the mass modeling (e.g. Binney & Mamon 1982; Rix et al. 1997). The mass distributions inferred from stellar and gaseous kinematics are usually in good agreement for discs (where both tracers can be considered on nearly circular orbits), but often appear discrepant for bulges (e.g. Fillmore, Boroson & Dressler 1986; Kent 1988; Kormendy & Westpfahl 1989; Bertola et al. 1995b). There are several possibilities to explain these discrepant mass estimates in galactic bulges: 1. If bulges have a certain degree of triaxiality, depending on the viewing angle the gas on closed orbits can either move faster or slower than in the ‘corresponding’ axisymmetric case (Bertola, Rubin & Zeilinger 1989). Similarly, the predictions of the triaxial stellar models deviate from those in the axisymmetric case: whenever $`\sigma _{\mathrm{stars}}>\sigma _{\mathrm{axisym}}`$, then $`v_{\mathrm{stars}}<v_{\mathrm{axisym}}`$; 2. Most of the previous modeling assumes that the gas is dynamically cold and therefore rotates at the local circular speed on the galactic equatorial plane. If in bulges the gas velocity dispersion $`\sigma _{\mathrm{gas}}`$ is not negligible (e.g. Cinzano & van der Marel 1994, hereafter CvdM94; Rix et al. 1995; Bertola et al. 1995b), the gas rotates slower than the local circular velocity due to its dynamical pressure support. CvdM94 showed explicitly for the E4/S0a galaxy NGC 2974 that the gas and star kinematics agree when taking into account the gas velocity dispersion. Furthermore, if $`\sigma _{\mathrm{gas}}`$ is comparable to the observed streaming velocity, the spatial gas distribution can no longer be modeled as a disc; 3. Forces other than gravity (such as magnetic fields, interactions with stellar mass loss envelopes and the hot gas component) might act on the ionized gas (e.g. Mathews 1990). In this paper we pursue the second of these explanations by building for NGC 4036 dynamical models which take into account both the random motions and the three-dimensional spatial distribution of the ionized gas. NGC 4036 has been classified S0<sub>3</sub>(8)/Sa in RSA (Sandage & Tammann 1981) and S0<sup>-</sup> in RC3 (de Vaucouleurs et al. 1991). It is a member of the LGG 266 group, together with NGC 4041, IC 758, UGC 7009 and UGC 7019 (Garcia 1993). It forms a wide pair with NGC 4041 with a separation of $`17^{\prime }`$ corresponding to 143 kpc at their mean redshift distance of 29 Mpc (Sandage & Bedke 1994). In The Carnegie Atlas of Galaxies (hereafter CAG) Sandage & Bedke (1994) describe it as characterized by an irregular pattern of dust lanes threaded through the disc in an ‘embryonic’ spiral pattern indicating a mixed S0/Sa form (see Panel 60 in CAG). Its total $`V`$-band apparent magnitude is $`V_T=10.66`$ mag (RC3). This corresponds to a total luminosity $`L_V=4.2\times 10^{10}`$ $`\mathrm{L}_{V\odot }`$ at the assumed distance of $`d=30.2`$ Mpc. The total masses of neutral hydrogen and dust in NGC 4036 are $`M_{\mathrm{HI}}=1.7\times 10^9`$ $`\mathrm{M}_{\odot }`$ and $`M_{\mathrm{dust}}=4.4\times 10^5`$ $`\mathrm{M}_{\odot }`$ (Roberts et al. 1991). NGC 4036 is known to have emission lines from ionized gas (Bettoni & Buson 1987) and the mass of the ionized gas is $`M_{\mathrm{HII}}=7\times 10^4`$ $`\mathrm{M}_{\odot }`$ (see Sec. 4.1 for a discussion).
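As a consistency check, the quoted luminosity follows directly from $`V_T`$ and the adopted distance. A minimal sketch, assuming a solar absolute magnitude $`M_{V,\odot }=4.83`$ (our assumption, not a value from this paper):

```python
import math

V_T = 10.66        # total apparent V-band magnitude (RC3)
D_PC = 30.2e6      # adopted distance in parsec
M_V_SUN = 4.83     # assumed solar absolute V magnitude

m_v_abs = V_T - 5.0 * math.log10(D_PC / 10.0)   # absolute magnitude
l_v = 10.0 ** (-0.4 * (m_v_abs - M_V_SUN))      # in solar V luminosities
print(f"M_V = {m_v_abs:.2f},  L_V = {l_v:.1e} L_sun")   # recovers ~4.2e10
```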
The distance of NGC 4036 was derived as $`d=V_0/H_0`$ from the systemic velocity corrected for the motion of the Sun with respect to the centroid of the Local Group, $`V_0=1509\pm 50`$ $`\mathrm{km}\mathrm{s}^{-1}`$ (RSA), and assuming $`H_0=50`$ $`\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. At this distance the scale is 146 pc arcsec<sup>-1</sup>. This paper is organized as follows. In Sec. 2 we present the photometrical and spectroscopical observations of NGC 4036, the reduction of the data, and the analysis procedures to measure the surface photometry and the major-axis kinematics of stars and ionized gas. In Sec. 3 we describe the stellar dynamical model (based on the Jeans Equations), and we find the potential due to the stellar bulge and disc components starting from the observed surface brightness of the galaxy. In Sec. 4 we use the derived potential to study the dynamics of both the gaseous spheroid and disc components, assumed to be composed of collisionless cloudlets orbiting as test particles. In Sec. 5 we discuss our conclusions. ## 2 Observations and data analysis ### 2.1 Photometrical observations #### 2.1.1 Ground-based data We obtained an image of NGC 4036 of 300 s in the Johnson $`V`$-band at the 2.3-m Bok Telescope at Kitt Peak National Observatory on December 22, 1995. A front illuminated 2048$`\times `$2048 LICK2 Loral CCD with $`15\times 15\mu `$m<sup>2</sup> pixels was used as detector at the Ritchey-Chrétien focus, $`f/9`$. It yielded a flat field of view with a $`10\stackrel{\prime }{.}1`$ diameter. The image scale was $`0\stackrel{\prime \prime }{.}43`$ pixel<sup>-1</sup> after a $`3\times 3`$ pixel binning. The gain and the readout noise were 1.8 e<sup>-</sup> ADU<sup>-1</sup> and 8 e<sup>-</sup>, respectively. The data reduction was carried out using standard IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories which are operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation routines. The image was bias subtracted and then flat-field corrected. The cosmic rays were identified and removed. A Gaussian fit to field stars in the resulting image yielded a measurement of the seeing point spread function (PSF) FWHM of $`1\stackrel{\prime \prime }{.}7`$. The sky subtraction and elliptical fitting of the galaxy isophotes were performed by means of the Astronomical Images Analysis Package (AIAP) developed at the Osservatorio Astronomico di Padova (Fasano 1990). The sky level was determined by a polynomial fit to the surface brightness of the frame regions not contaminated by the galaxy light, and then subtracted out from the total signal. The isophote fitting was performed masking the bad columns of the frame and the bright stars of the field. Particular care was taken in masking the dust-affected regions along the major axis between $`5^{\prime \prime }`$ and $`20^{\prime \prime }`$. No photometric standard stars were observed during the night. For this reason the absolute calibration was made by scaling the total apparent $`V`$-band magnitude to $`V_\mathrm{T}=10.66`$ mag (RC3). Fig. 1 shows the $`V`$-band surface brightness ($`\mu _V`$), ellipticity ($`ϵ`$), major axis position angle (PA), and the $`\mathrm{cos}4\theta `$ ($`a_4`$) Fourier coefficient of the isophotes’ deviations from ellipses as functions of radius along the major axis. For $`r\lesssim 4^{\prime \prime }`$ the ellipticity is $`0.12`$. Between $`r\sim 4^{\prime \prime }`$ and $`r\sim 30^{\prime \prime }`$ it increases to $`0.61`$. It rises to $`0.63`$ at $`r\sim 62^{\prime \prime }`$ and then it decreases to $`0.56`$ at the farthest observed radius. The position angle ranges from $`98\mathrm{°}`$ to $`67\mathrm{°}`$ in the inner $`5^{\prime \prime }`$. Between $`r\sim 5^{\prime \prime }`$ and $`r\sim 10^{\prime \prime }`$ it increases to $`79\mathrm{°}`$ and then it remains constant. The $`\mathrm{cos}4\theta `$ coefficient ranges between $`+0.01`$ and $`-0.02`$ for $`r\lesssim 5^{\prime \prime }`$. Further out it peaks at $`+0.05`$ at $`r\sim 18^{\prime \prime }`$, and then it decreases to $`+0.03`$ for $`r>30^{\prime \prime }`$. The abrupt variation in position angle ($`\mathrm{\Delta }\mathrm{PA}\sim 30\mathrm{°}`$) observed inside $`5^{\prime \prime }`$ leads to an isophote twist that can be interpreted as due to a slight triaxiality of the inner regions of the stellar bulge. However, this variation has to be considered carefully because of the dust patterns revealed in these regions by HST imaging (see Fig. 2). These results are consistent with the previous photometric studies of Kent (1984) and Michard (1993), obtained in the $`r`$ and $`B`$ band respectively. Our measurements of ellipticity follow closely those by Michard (1993). Kent (1984) measured the ellipticity and position angle of the NGC 4036 isophotes in the $`r`$ band for $`9^{\prime \prime }\lesssim r\lesssim 114^{\prime \prime }`$. Out to $`r=78^{\prime \prime }`$ the $`r`$-band ellipticity is lower than our $`V`$-band one by $`0.04`$. The $`r`$-band position angle profile differs from ours only at $`r\sim 16^{\prime \prime }`$ ($`\mathrm{PA}_r=76\mathrm{°}`$) and for $`r>78^{\prime \prime }`$ ($`\mathrm{PA}_r=81\mathrm{°}`$). We found the $`\mathrm{cos}4\theta `$ isophote deviations from ellipses to have a radial profile in agreement with that by Michard (1993). #### 2.1.2 Hubble Space Telescope data In addition, we derived the ionized gas distribution in the nuclear regions of NGC 4036 by the analysis of two Wide Field Planetary Camera 2 (WFPC2) images which were extracted from the Hubble Space Telescope archive<sup>2</sup><sup>2</sup>2Observations with the NASA/ESA Hubble Space Telescope were obtained from the data archive at the Space Telescope Science Institute (STScI), operated by AURA under NASA contract NAS 5-26555.. We used a 300 s image obtained on August 08, 1994 with the F547M filter (principal investigator: Sargent GO-05419) and a 700 s image taken on May 15, 1997 with the F658N filter (principal investigator: Malkan GO-06785). The standard reduction and calibration of the images were performed at the STScI using the pipeline-WFPC2 specific calibration algorithms. Further processing with the IRAF STSDAS package involved cosmic-ray removal and the alignment of the images (which were taken with different position angles). The surface photometry of the F547M image was carried out using the STSDAS task ELLIPSE without masking the dust lanes. In Fig. 2 we plot the resulting ellipticity ($`ϵ`$) and major axis position angle (PA) of the isophotes as functions of radius along the major axis. The continuum-free image of NGC 4036 (Fig.
3) was obtained by subtracting the suitably scaled continuum-band F547M image from the emission-band F658N image. The mean scale factor for the continuum image was estimated by comparing the intensity of a number of 5$`\times `$5 pixel regions near the edges of the frames in the two bandpasses. These regions were chosen in the F658N image to be emission free. Our continuum-free image reveals that less than $`40\%`$ of the H$`\alpha `$+\[N II\] flux of NGC 4036 derives from a clumpy structure of about $`6^{\prime \prime }\times 2^{\prime \prime }`$. The center of this complex filamentary structure, which is embedded in a smooth emission pattern, coincides with the position of the maximum intensity of the continuum. ### 2.2 Spectroscopical observations A major-axis (PA$`=85\mathrm{°}`$) spectrum of NGC 4036 was obtained on March 30, 1989 with the Red Channel Spectrograph at the Multiple Mirror Telescope<sup>3</sup><sup>3</sup>3The MMT is a joint facility of the Smithsonian Institution and the University of Arizona. as a part of a larger sample of 8 S0 galaxies (Bertola et al. 1995b). The exposure time was 3600 s and the 1200 grooves mm<sup>-1</sup> grating was used in combination with a $`1\stackrel{\prime \prime }{.}25\times 180^{\prime \prime }`$ slit. It yielded a wavelength coverage of 550 Å between about 3650 Å and about 4300 Å with a reciprocal dispersion of 54.67 Å mm<sup>-1</sup>. The spectral range includes stellar absorption features, such as the Ca II H and K lines ($`\lambda \lambda `$3933.7, 3968.5 Å) and the Ca I g-band ($`\lambda `$4226.7 Å), and the ionized gas \[O II\] emission doublet ($`\lambda \lambda `$3726.2, 3728.9 Å). The instrumental resolution was derived by measuring the $`\sigma `$ of a sample of single emission lines distributed all over the spectral range of a comparison spectrum after calibration. We checked that the measured $`\sigma `$’s did not depend on wavelength, and we found a mean value $`\sigma =1.1`$ Å. It corresponds to a velocity resolution of $`88`$ $`\mathrm{km}\mathrm{s}^{-1}`$ at 3727 Å and $`83`$ $`\mathrm{km}\mathrm{s}^{-1}`$ at 3975 Å. The adopted detector was the 800$`\times `$800 Texas Instruments CCD, which has 15$`\times `$15 $`\mu `$m<sup>2</sup> pixel size. No binning or rebinning was done. Therefore each pixel of the frame corresponds to $`0.82`$ Å $`\times 0\stackrel{\prime \prime }{.}33`$. Some spectra of late-G and early-K giant stars were taken with the same instrumental setup for use as velocity and velocity dispersion templates in measuring the stellar kinematics. Comparison helium-argon lamp exposures were taken before and after every object integration. The seeing FWHM during the observing night was between $`1^{\prime \prime }`$ and $`1\stackrel{\prime \prime }{.}5`$. The data reduction was carried out with standard procedures from the ESO-MIDAS<sup>4</sup><sup>4</sup>4MIDAS is developed and maintained by the European Southern Observatory package. The spectra were bias subtracted, flat-field corrected, cleaned for cosmic rays and wavelength calibrated. The sky contribution in the spectra was determined from the edges of the frames and then subtracted. #### 2.2.1 Stellar kinematics The stellar kinematics was analyzed with the Fourier Quotient Method (Sargent et al. 1977) as applied by Bertola et al. (1984). The K4III star HR 5201 was taken as template. It has a radial velocity of $`2.7`$ $`\mathrm{km}\mathrm{s}^{-1}`$ (Evans 1967) and a rotational velocity of 10 $`\mathrm{km}\mathrm{s}^{-1}`$ (Bernacca & Perinotto 1970).
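A schematic illustration of the Fourier Quotient idea may help here. For a Gaussian broadening function, the transform of the galaxy spectrum is the transform of the template multiplied by $`\gamma e^{-2\pi ^2k^2s^2/N^2}e^{-2\pi ik\mu /N}`$, with $`\mu `$ and $`s`$ the shift and dispersion in pixels, so velocity and dispersion follow from the quotient of the two transforms over a restricted wavenumber window (cf. the $`[k_{\mathrm{min}},k_{\mathrm{max}}]`$ range described below). The toy below builds a synthetic template and galaxy, so the recovery is exact by construction, and fits the quotient in closed form; it is a sketch of the method, not the implementation actually used.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dv = 1024, 30.0                  # pixels; km/s per pixel (log-lambda grid)
v_true, sig_true = 150.0, 200.0     # km/s, to be recovered

# toy template: flat continuum with a few absorption lines
x = np.arange(N)
tpl = np.ones(N)
for cen in rng.uniform(200, 800, 8):
    tpl -= rng.uniform(0.1, 0.4) * np.exp(-0.5 * ((x - cen) / 2.5) ** 2)

# circular Gaussian broadening function (shift mu, width s, in pixels)
mu, s = v_true / dv, sig_true / dv
d = (x - mu + N / 2) % N - N / 2
b = np.exp(-0.5 * (d / s) ** 2)
b /= b.sum()

# continuum-subtracted "galaxy" transform = template transform times B_k
T = np.fft.fft(tpl - tpl.mean())
G = T * np.fft.fft(b)

# quotient: ln(G_k/T_k) is linear in k (phase) and k^2 (log amplitude)
k = np.arange(5, 101)   # narrower than the [5,200] window used on the
q = G[k] / T[k]         # real data, to stay above round-off in this toy
s_fit = np.sqrt(np.polyfit(k**2, -np.log(np.abs(q)), 1)[0]) * N / (np.pi * np.sqrt(2))
mu_fit = -np.polyfit(k, np.unwrap(np.angle(q)), 1)[0] * N / (2 * np.pi)
print(f"v = {mu_fit * dv:.1f} km/s (input {v_true}), "
      f"sigma = {s_fit * dv:.1f} km/s (input {sig_true})")
```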
No attempt was made to produce a master template by combining the spectra of different spectral types, as done by Rix & White (1992) and van der Marel et al. (1994). The template spectrum was averaged along the spatial direction to increase the signal-to-noise ratio ($`S/N`$). The galaxy spectrum was rebinned along the spatial direction until a ratio $`S/N\sim 10`$ was achieved at each radius. Then the spectra of the galaxy and the template star were rebinned to a logarithmic wavelength scale, continuum subtracted and endmasked. The least-squares fitting of the Gaussian-broadened spectrum of the template star to the galaxy spectrum was done in Fourier space over the restricted range of wavenumbers $`[k_{\mathrm{min}},k_{\mathrm{max}}]=[5,200]`$. In this way we rejected the low-frequency trends (corresponding to $`k<5`$) due to the residuals of continuum subtraction and the high-frequency noise (corresponding to $`k>200`$) due to the instrumental resolution. (The wavenumber range is important in particular in the Fourier fitting of lines with non-Gaussian profiles, see van der Marel & Franx 1993; CvdM94). The values obtained for the stellar radial velocity and velocity dispersion as a function of radius are given in Tab. 2. The table reports the galactocentric distance $`r`$ in arcsec (Col. 1), the heliocentric velocity $`V`$ (Col. 2) and its error $`\delta V`$ (Col. 3) in $`\mathrm{km}\mathrm{s}^{-1}`$, and the velocity dispersion $`\sigma `$ (Col. 4) and its error $`\delta \sigma `$ (Col. 5) in $`\mathrm{km}\mathrm{s}^{-1}`$. The values for the stellar $`\delta V`$ and $`\delta \sigma `$ are the formal errors from the fit in the Fourier space. The systemic velocity was subtracted from the observed heliocentric velocities and the profiles were folded about the centre, before plotting. We derive for the systemic heliocentric velocity a value $`V_{\odot }=1420\pm 15`$ $`\mathrm{km}\mathrm{s}^{-1}`$. Our determination is in agreement within the errors with $`V_{\odot }=1397\pm 27`$ $`\mathrm{km}\mathrm{s}^{-1}`$ (RC3) and $`V_{\odot }=1382\pm 50`$ $`\mathrm{km}\mathrm{s}^{-1}`$ (RSA), also derived from optical observations. The resulting rotation curve, velocity dispersion profile and rms velocity ($`\sqrt{v^2+\sigma ^2}`$) curve for the stellar component of NGC 4036 are shown in Fig. 4. The kinematical profiles are symmetric within the error bars with respect to the galaxy centre. For $`r\lesssim 2^{\prime \prime }`$ the rotation velocity increases almost linearly with radius up to $`\sim 100`$ $`\mathrm{km}\mathrm{s}^{-1}`$, remaining approximately constant between $`2^{\prime \prime }`$ and $`4^{\prime \prime }`$. Outwards it rises to the farthest observed radius. It is $`\sim 180`$ $`\mathrm{km}\mathrm{s}^{-1}`$ at $`9^{\prime \prime }`$, $`\sim 220`$ $`\mathrm{km}\mathrm{s}^{-1}`$ at $`21^{\prime \prime }`$ and $`\sim 260`$ $`\mathrm{km}\mathrm{s}^{-1}`$ at $`29^{\prime \prime }`$. The velocity dispersion is $`\sigma \sim 210`$ $`\mathrm{km}\mathrm{s}^{-1}`$ in the centre and at $`r\sim 4^{\prime \prime }`$, with a ‘local minimum’ of $`\sim 180`$ $`\mathrm{km}\mathrm{s}^{-1}`$ at $`r=2^{\prime \prime }`$. Further out it declines to values $`\lesssim 120`$ $`\mathrm{km}\mathrm{s}^{-1}`$. #### 2.2.2 Ionized gas kinematics To determine the ionized gas kinematics we studied the \[O II\] ($`\lambda \lambda `$3726.2, 3728.9 Å) emission doublet. In our spectrum the two lines are not resolved at any radius. We obtained smooth fits to the \[O II\] doublet using a two-step procedure.
In the first step the emission doublet was analyzed by fitting a double Gaussian to its line profile, fixing the ratio between the wavelengths of the two lines and assuming that both lines have the same dispersion. The intensity ratio of the two lines depends on the state (i.e. the electron density and temperature) of the gas (e.g. Osterbrock 1989). We found a mean value of \[O II\]$`\lambda 3726.2`$/\[O II\]$`\lambda 3728.9`$$`=0.8\pm 0.1`$ without any significant dependence on radius. The electron density derived from this intensity ratio of the \[O II\] lines is in agreement with that derived (at any assumed electron temperature, see Osterbrock 1989) from the intensity ratio of the \[S II\] lines (\[S II\]$`\lambda 6716.5`$/\[S II\]$`\lambda 6730.9`$=1.23) found by Ho, Filippenko & Sargent (1997). In the second step we fitted the line profile of the emission doublet keeping the intensity ratio of its two lines fixed at the above value. At each radius we derived the position, the dispersion and the uncalibrated intensity of each \[O II\] emission line, and their formal errors, from the best-fitting double Gaussian to the doublet plus a polynomial fitted to its surrounding continuum. The wavelength of the line centre was converted into a radial velocity, and then the heliocentric correction was applied. The line dispersion was corrected for the instrumental dispersion and then converted into a velocity dispersion. The measured kinematics of the gaseous component in NGC 4036 is given in Tab. 3. The table contains the galactocentric distance $`r`$ in arcsec (Col. 1), the heliocentric velocity $`V`$ (Col. 2) and its error $`\delta V`$ (Col. 3) in $`\mathrm{km}\mathrm{s}^{-1}`$, and the velocity dispersion $`\sigma `$ (Col. 4) and its errors $`\delta \sigma _+`$ and $`\delta \sigma _{-}`$ (Cols. 5 and 6) in $`\mathrm{km}\mathrm{s}^{-1}`$. The gas velocity errors $`\delta V`$ are the formal errors of the double-Gaussian fit to the \[O II\] doublet. The gas velocity-dispersion errors $`\delta \sigma _+`$ and $`\delta \sigma _{-}`$ also take into account the subtraction of the instrumental dispersion. The rotation curve, velocity-dispersion profile and rms velocity curve for the ionized-gas component of NGC 4036, resulting after folding about the centre, are shown in Fig. 5. The \[O II\]$`\lambda 3726.2`$ intensity profile as a function of radius is plotted in Fig. 6. The gas rotation tracks the stellar rotation remarkably well; the two are consistent with one another within the errors. The gas velocity dispersion has a central dip of $`\sim 160`$ $`\mathrm{km}\mathrm{s}^{-1}`$ with a maximum of $`\sim 220`$ $`\mathrm{km}\mathrm{s}^{-1}`$ at $`r\simeq 2^{\prime \prime }`$. It remains higher than 100 $`\mathrm{km}\mathrm{s}^{-1}`$ out to $`\simeq 4^{\prime \prime }`$ before decreasing to lower values. The velocity-dispersion profile appears to be less symmetric than the rotation curve: between $`2^{\prime \prime }`$ and $`5^{\prime \prime }`$ the velocity dispersion measured on the E side rapidly drops from its observed maximum to $`\sim 50`$ $`\mathrm{km}\mathrm{s}^{-1}`$, while on the W side it smoothly declines to $`\sim 140`$ $`\mathrm{km}\mathrm{s}^{-1}`$. Errors on the gas velocity dispersion increase at large radii as the gas velocity dispersion becomes comparable to the instrumental dispersion. #### 2.2.3 Comparison with the kinematical data by Fisher (1997) The major-axis kinematics we derived for the stars and gas of NGC 4036 are consistent within the errors with the measurements of Fisher (1997, hereafter F97).
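Stepping back to the measurement itself, the two-step doublet fit can be sketched in the same spirit (synthetic line parameters assumed):

```python
# Sketch of the two-step [O II] doublet fit (synthetic line parameters).
import numpy as np
from scipy.optimize import curve_fit

l1, l2 = 3726.2, 3728.9                 # rest wavelengths, Angstrom

def doublet(lam, I1, ratio, mu, sig, c0):
    """Two Gaussians with a common sigma and a fixed wavelength ratio."""
    g1 = I1 * np.exp(-0.5 * ((lam - mu) / sig) ** 2)
    g2 = (I1 / ratio) * np.exp(-0.5 * ((lam - mu * l2 / l1) / sig) ** 2)
    return g1 + g2 + c0                 # c0 stands in for the local continuum

np.random.seed(1)
lam = np.linspace(3735.0, 3775.0, 200)  # redshifted doublet region
data = doublet(lam, 1.0, 0.8, 3743.9, 1.6, 0.1) + np.random.normal(0, 0.02, 200)

# step 1: intensity ratio left free;   step 2: ratio frozen at its mean (0.8)
p1, _ = curve_fit(doublet, lam, data, p0=[1.0, 0.8, 3744.0, 1.5, 0.1])
p2, _ = curve_fit(lambda l, I1, mu, sig, c0: doublet(l, I1, 0.8, mu, sig, c0),
                  lam, data, p0=[1.0, 3744.0, 1.5, 0.1])
I1, mu, sig, c0 = p2
print(f"v = {299792.458 * (mu / l1 - 1.0):.0f} km/s, sigma = {sig:.2f} A")
```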
The only exception to this agreement is the difference of $`20\%`$$`30\%`$ between our stellar velocity dispersions and those of F97 in the central $`8^{\prime \prime }`$. In these regions F97 finds a flat velocity-dispersion profile with a plateau at $`\sigma \simeq 170`$ $`\mathrm{km}\mathrm{s}^{-1}`$. To measure the stellar kinematics he adopted the Fourier Fitting Method (van der Marel & Franx 1993) directly on the line-of-sight velocity distribution derived with the Unresolved Gaussian Decomposition Method (Kuijken & Merrifield 1993). For $`|r|\lesssim 8^{\prime \prime }`$ the NGC 4036 line profiles are asymmetric (displaying a tail opposite to the direction of rotation) and flat-topped, as results from the $`h_3`$ and $`h_4`$ radial profiles. The $`h_3`$ and $`h_4`$ parameters measure respectively the asymmetric and symmetric deviations of the line profile from a Gaussian (van der Marel & Franx 1993; Gerhard 1993). For NGC 4036 the $`h_3`$ term anticorrelates with $`v`$, rising to $`+0.1`$ on the approaching side and falling to $`-0.1`$ on the receding side. The $`h_4`$ term exhibits a negative value ($`-0.03`$). ## 3 Modeling the stellar kinematics ### 3.1 Modeling technique We built an axisymmetric bulge-disc dynamical model for NGC 4036 applying the Jeans modeling technique introduced by Binney, Davies & Illingworth (1990), developed by van der Marel, Binney & Davies (1990) and van der Marel (1991), and extended to two-component galaxies by CvdM94 and to galaxies with a DM halo by Cinzano (1995) and Corsini et al. (1998). For details the reader is referred to the above references. The main steps of the adopted modeling are (i) the calculation of the bulge and disc contributions to the potential from the observed surface brightness of NGC 4036; (ii) the solution of the Jeans Equations to obtain separately the bulge and disc dynamics in the total potential; and (iii) the projection of the derived dynamical quantities onto the sky plane, taking into account seeing effects, instrumental set-up and reduction technique, to compare the model predictions with the measured stellar kinematics. In the following each individual step is briefly discussed: 1. We model NGC 4036 with an infinitesimally thin exponential disc in its equatorial plane. The disc surface mass density is specified by the inclination $`i`$, the central surface brightness $`\mu _0`$, the scale length $`r_d`$ and a constant mass-to-light ratio $`(M/L)_d`$. The disc potential is calculated from the surface mass density as in Binney & Tremaine (1987). The limited extension of our kinematical data (measured out to $`r<30^{\prime \prime }\simeq 0.5R_{\mathrm{opt}}`$<sup>5</sup><sup>5</sup>5The optical radius $`R_{\mathrm{opt}}`$ is the radius encompassing $`83\%`$ of the total integrated light.) prevents us from disentangling the possible contribution of a dark matter halo from the assumed constant mass-to-light ratios. The surface brightness of the bulge is obtained by subtracting the disc contribution from the total observed surface brightness. The three-dimensional luminosity density of the bulge is obtained by deprojecting its surface brightness with an iterative method based on the Richardson-Lucy algorithm (Richardson 1972; Lucy 1974). Its three-dimensional mass density is derived by assuming a constant mass-to-light ratio $`(M/L)_b`$. The potential of the bulge is derived by solving the Poisson Equation with a multipole expansion (e.g. Binney & Tremaine 1987). 2.
The bulge and disc dynamics are derived by separately solving the Jeans Equations for each component in the total potential of the galaxy. For both components we assume a two-integral distribution function of the form $`f=f(E,L_z)`$. It implies that the vertical velocity dispersion $`\sigma _z^2`$ is equal to the second radial velocity moment $`\sigma _R^2`$ and that $`\overline{v_Rv_z}=0`$. Therefore the Jeans Equations become a closed set, which can be solved for the unknowns $`\overline{v_\varphi ^2}`$ and $`\sigma _R^2=\sigma _z^2`$. Other assumptions to close the Jeans Equations are also possible (e.g. van der Marel & Cinzano 1992). For the bulge we made the same hypotheses as Binney et al. (1990). A portion of the second velocity moment $`\overline{v_\varphi ^2}`$ is assigned to the bulge streaming velocity $`\overline{v_\varphi }`$ following Satoh’s (1980) prescription. For the disc we made the same hypotheses as Rix & White (1992) and CvdM94. The second radial velocity moment $`\sigma _R^2`$ in the disc is assumed to fall off exponentially with a scale length $`R_\sigma `$ from a central value $`\sigma _{d0}^2`$. The azimuthal velocity dispersion $`\sigma _\varphi ^2`$ in the disc is assumed to be related to $`\sigma _R^2`$ according to the relation from epicyclic theory \[cf. Eq. (3-76) of Binney & Tremaine (1987)\]. As pointed out by CvdM94, this relation may introduce systematic errors (Kuijken & Tremaine 1992; Evans & Collett 1993; Cuddeford & Binney 1994). The disc streaming velocity $`\overline{v_\varphi }`$ (i.e. the circular velocity corrected for the asymmetric drift) is determined by the Jeans equation for radial equilibrium. 3. We projected the dynamical quantities of both the bulge and the disc back onto the plane of the sky (at the given inclination angle) to find the line-of-sight projected streaming velocity and velocity dispersion. We assumed that both the bulge and the disc have a Gaussian line profile. At each radius their sum (normalized to the relative surface brightness of the two components) represents the model-predicted line profile. As in CvdM94, the predicted line profiles were convolved with the seeing PSF of the spectroscopic observations and sampled over the slit width and pixel size to mimic the observational spectroscopic setup. We mimicked the Fourier Quotient method for measuring the stellar kinematics by fitting the predicted line profiles with a Gaussian in Fourier space to derive the line-of-sight velocities and velocity dispersions for the comparison with the observed kinematics. The problems of comparing the true velocity moments with the Fourier Quotient results were discussed by van der Marel & Franx (1993) and by CvdM94. ### 3.2 Results for the stellar component #### 3.2.1 Seeing-deconvolution The modeling technique described in Sec. 3.1 derives the three-dimensional mass distribution from the three-dimensional luminosity distribution inferred from the observed surface photometry, fine-tuning the disc parameters to obtain the best fit to the kinematical data. We performed a seeing-deconvolution of the $`V`$-band image of NGC 4036 to take into account the seeing effects on the measured photometric quantities (surface-brightness, ellipticity and $`\mathrm{cos}4\theta `$ deviation profiles) used in the deprojection of the two-dimensional luminosity distribution. We obtained a restored NGC 4036 image through an iterative method based on the Richardson-Lucy algorithm (Richardson 1972; Lucy 1974), available in the IRAF package STSDAS.
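The core of the Richardson-Lucy iteration is compact enough to sketch here (a toy version, not the STSDAS implementation actually used):

```python
# A toy Richardson-Lucy deconvolution; only the iteration itself is shown.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=6):
    """Multiplicative R-L update; six iterations are adopted in the text."""
    est = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        est = est * fftconvolve(image / conv, psf_mirror, mode="same")
    return est

# example PSF: circular Gaussian (FWHM in pixels chosen for illustration)
y, x = np.mgrid[-12:13, -12:13]
sig = 4.0 / 2.355                      # FWHM of 4 pixels
psf = np.exp(-(x**2 + y**2) / (2.0 * sig**2))
psf /= psf.sum()
```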
We assumed the seeing PSF to be a circular Gaussian with a FWHM$`=1\stackrel{\prime \prime }{.}7`$. Noise amplification is a main drawback of all Richardson-Lucy iterative algorithms, and the number of iterations needed to get a good image restoration depends on the steepness of the surface-brightness profile (e.g. White 1994). After six iterations we noticed no further substantial change in the NGC 4036 surface-brightness profile, while the image was becoming too noisy; we therefore decided to stop at the sixth iteration. The surface-brightness, ellipticity and position-angle radial profiles of NGC 4036 after the seeing-deconvolution are displayed in Fig. 2, compared with the HST and the unconvolved ground-based photometry. The rises found at small radii in the surface brightness ($`\simeq 0.2`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$) and in the ellipticity ($`\simeq 0.05`$) are comparable in size to the values found by Peletier et al. (1990) for seeing effects in the centres of ellipticals. #### 3.2.2 Bulge-disc decomposition We performed a standard bulge-disc decomposition with a parametric fit (e.g. Kent 1985) in order to find a starting guess for the exponential-disc parameters to be used in the kinematical fit. We decomposed the seeing-deconvolved surface-brightness profile on both the major and the minor axis as the sum of an $`R^{1/4}`$ bulge, having surface-brightness profile $$\mu _b(r)=\mu _e+8.3268\left[\left(\frac{r}{r_e}\right)^{1/4}-1\right],$$ (1) plus an exponential disc, having surface-brightness profile $$\mu _d(r)=\mu _0+1.0857\left(\frac{r}{r_d}\right).$$ (2) We assumed that the minor-axis profile of each component is the same as the major-axis profile, but scaled by a factor $`1-ϵ =b/a`$. A least-squares fit of the photometric data provides $`\mu _e`$, $`r_e`$ and $`ϵ `$ of the bulge, $`\mu _0`$ and $`r_d`$ of the disc, and the galaxy inclination $`i`$ (see Tab. 1 for the results). Kent (1984) measured surface photometry of NGC 4036 in the $`r`$-band. He decomposed the major- and minor-axis profiles into an $`R^{1/4}`$ bulge and an exponential disc (Kent 1985). A rough but useful comparison between the bulge and disc parameters resulting from the two photometric decompositions is possible by a transformation of Kent’s data from the $`r`$\- to the $`V`$-band. From Kent’s surface-brightness profile along the major axis of NGC 4036 we derived its (extrapolated) total magnitude $`r_T=10.56\pm 0.02`$, corresponding to $`V_T-r_T=0.13\pm 0.11`$. Then we converted the $`\mu _0`$ and $`\mu _e`$ values from the $`r`$\- to the $`V`$-band (see Tab. 1). The differences between the best-fit parameters obtained from the two decompositions are lower than $`10\%`$, except for the bulge ellipticity (our value is $`64\%`$ of Kent’s). These discrepancies are consistent with the differences in the slope of the two surface-brightness profiles, with the different treatment of seeing effects \[Kent (1985) convolved the theoretical bulge and disc profiles with the observed Gaussian seeing profile\], and with the uncertainties of the conversion of Kent’s surface brightnesses from the $`r`$\- to the $`V`$-band. #### 3.2.3 Modeling results We looked for the disc parameters leading to the best fit of the observed stellar kinematics, using as starting guesses those resulting from our best-fit photometric decomposition, and exploring the ranges $`|\mu _0-18.7|\le 1`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$, $`|r_d-22\stackrel{\prime \prime }{.}1|\le 5^{\prime \prime }`$, and $`|i-74\stackrel{\mathrm{°}}{.}9|\le 5\mathrm{°}`$.
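A toy version of the parametric decomposition of Sec. 3.2.2, which provides such starting guesses, might look as follows (synthetic profile and assumed numbers; inclination and ellipticity are ignored here):

```python
# Toy bulge-disc decomposition with Eqs. (1)-(2): an r^1/4 bulge plus an
# exponential disc fitted to a synthetic major-axis profile.
import numpy as np
from scipy.optimize import curve_fit

def mu_model(r, mu_e, r_e, mu_0, r_d):
    """Surface brightness in mag/arcsec^2: fluxes add, not magnitudes."""
    I_b = 10.0 ** (-0.4 * (mu_e + 8.3268 * ((r / r_e) ** 0.25 - 1.0)))
    I_d = 10.0 ** (-0.4 * (mu_0 + 1.0857 * (r / r_d)))
    return -2.5 * np.log10(I_b + I_d)

np.random.seed(2)
r = np.linspace(1.0, 100.0, 60)        # arcsec
mu_obs = mu_model(r, 20.5, 15.0, 18.7, 22.1) + np.random.normal(0, 0.03, r.size)
p, _ = curve_fit(mu_model, r, mu_obs, p0=[20.0, 10.0, 19.0, 20.0])
print("mu_e, r_e, mu_0, r_d =", np.round(p, 2))
```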
For any exponential disc of fixed $`\mu _0`$, $`r_d`$ and $`i`$ (in the investigated range of values), we subtracted its surface brightness from the total seeing-deconvolved surface brightness. The residual surface brightness was considered to be contributed by the bulge. Since $`\mu _0`$, $`r_d`$ and $`i`$ are correlated, a bulge component with a surface-brightness profile consistent with that resulting from the photometric decomposition was obtained only by taking exponential discs characterized by large $`\mu _0`$ values in combination with large values of $`r_d`$ (i.e. larger but fainter discs), or by small $`\mu _0`$ and small $`r_d`$ (i.e. smaller but brighter discs). After subtracting the disc contribution from the total surface brightness, the three-dimensional luminosity density of the bulge was obtained after seven Richardson-Lucy iterations, starting from a fit to the actual bulge surface brightness with the projection of a flattened Jaffe (1983) profile. The residual surface brightness ($`\mathrm{\Delta }\mu =\mu _{\mathrm{model}}-\mu _{\mathrm{obs}}`$) after each iteration and the final three-dimensional luminosity-density profiles of the spheroidal component of NGC 4036 along the major, the minor and two intermediate axes are plotted for the kinematical best-fit model in Fig. 7 out to $`100^{\prime \prime }`$ (the stellar and ionized-gas kinematics are measured to $`30^{\prime \prime }`$). We applied the modeling technique described in Sec. 3.1 considering only models in which the bulge is an oblate isotropic rotator (i.e. $`k=1`$) and in which the bulge and the disc have the same mass-to-light ratio, $`(M/L)_b=(M/L)_d`$. The $`M/L`$ determines the velocity normalization and was chosen (for each combination of disc parameters) to optimize the fit to the rms velocity profile. The best-fit model to the observed major-axis stellar kinematics is obtained with a mass-to-light ratio $`M/L_V=3.42`$ $`\mathrm{M}_{\odot }/\mathrm{L}_{\odot V}`$ and with an exponential disc having a surface-brightness profile $$\mu _d(r)=19.3+1.0857\left(\frac{r}{22\stackrel{\prime \prime }{.}0}\right)\mathrm{mag}\mathrm{arcsec}^{-2},$$ (3) (represented by the dashed line in the upper panel of Fig. 1), a radial velocity-dispersion profile $$\sigma _R(r)=155e^{-r/27\stackrel{\prime \prime }{.}4}\mathrm{km}\mathrm{s}^{-1},$$ (4) (where the galactocentric distance $`r`$ is expressed in arcsec) and an inclination $`i=72\mathrm{°}`$. Fig. 8 shows the comparison of the rotation curve, the velocity-dispersion profile and the rms velocity curve predicted by the best-fit model (solid lines) with the observed stellar kinematics along the major axis of NGC 4036. The agreement is good. The derived $`V`$-band luminosities for bulge and disc are $`L_b=2.8\times 10^{10}`$ $`\mathrm{L}_{\odot V}`$ and $`L_d=1.4\times 10^{10}`$ $`\mathrm{L}_{\odot V}`$. They correspond to the masses $`M_b=9.8\times 10^{10}`$ $`\mathrm{M}_{\odot }`$ and $`M_d=4.8\times 10^{10}`$ $`\mathrm{M}_{\odot }`$. The total mass of the galaxy (bulge$`+`$disc) is $`M_T=14.5\times 10^{10}`$ $`\mathrm{M}_{\odot }`$. The disc-to-bulge and disc-to-total $`V`$-band luminosity ratios are $`L_d/L_b=0.58`$ and $`L_d/L_T=0.36`$. The disc-to-total luminosity ratio as a function of the galactocentric distance is plotted in Fig. 9. #### 3.2.4 Uncertainty ranges for the disc parameters The uncertainty ranges for the disc parameters are $`19.1\le \mu _0\le 19.5`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$, $`20^{\prime \prime }\le r_d\le 24^{\prime \prime }`$ and $`71\mathrm{°}\le i\le 73\mathrm{°}`$.
In Fig. 10 the continuous and the dotted lines show the kinematical profiles predicted for two discs with the same inclination ($`i=72\mathrm{°}`$) and the same total luminosity as the best-fit disc, but with a smaller ($`\mu _0=19.1`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$, $`r_d=20\stackrel{\prime \prime }{.}0`$) or a greater ($`\mu _0=19.5`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$, $`r_d=24\stackrel{\prime \prime }{.}0`$) scale length, respectively. In Fig. 11 the continuous and the dotted lines correspond to two discs with the same scale length ($`r_d=22\stackrel{\prime \prime }{.}0`$) as the best-fit disc, but with a lower ($`\mu _0=19.2`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$, $`i=71\mathrm{°}`$) or a higher ($`\mu _0=19.4`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$, $`i=73\mathrm{°}`$) inclination. Their total luminosities are respectively $`14\%`$ lower and $`16\%`$ higher than that of the best-fit disc. The fits of these models to the data are also acceptable, so we estimate a $`15\%`$ error in the determination of the disc luminosity and mass. The good agreement with the observations obtained with $`k=1`$, $`(M/L)_b=(M/L)_d`$ and without a dark matter halo led us not to investigate models with $`k\ne 1`$, $`(M/L)_b\ne (M/L)_d`$ or with radially increasing mass-to-light ratios; we therefore cannot exclude them. The $`V`$-band luminosity (after inclination correction) of the exponential disc corresponding to the best-fit model to the observed stellar kinematics is $`75\%`$ of that of the disc obtained from the parametric fit of the surface-brightness profiles. The differences between the disc parameters derived from the NGC 4036 photometry ($`\mu _0=18.7`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$, $`r_d=22\stackrel{\prime \prime }{.}1`$, $`i=74\stackrel{\mathrm{°}}{.}9`$) and from the stellar kinematics ($`\mu _0=19.3`$ $`\mathrm{mag}\mathrm{arcsec}^{-2}`$, $`r_d=22\stackrel{\prime \prime }{.}0`$, $`i=72\mathrm{°}`$) are expected. In fitting the NGC 4036 surface-brightness profiles the bulge is assumed (i) to be axisymmetric; (ii) to have an $`R^{1/4}`$ profile; and (iii) to have a constant axial ratio (i.e. its isodensity luminosity spheroids are similar concentric ellipsoids). We adopted this kind of representation for the bulge component only to find rough bounds on the exponential-disc parameters to be used in the kinematical fit. Bulges often have neither an $`R^{1/4}`$-law profile (e.g. Burstein 1979; Simien & Michard 1984) nor perfectly elliptical isophotes (e.g. Scorza & Bender 1990). Therefore, in modeling the stellar kinematics the three-dimensional luminosity density $`j_b`$ of the stellar bulge is assumed (i) to be oblate axisymmetric, but (ii) not to be parametrized by any analytical expression, (iii) nor to have isodensity luminosity spheroids with a constant axial ratio. The flattening of the spheroids increases with the galactocentric distance, as appears from the three-dimensional luminosity-density profiles plotted in Fig. 7 along different axes in the meridional plane of NGC 4036. This flattening produces an increase of the streaming motions for the bulge component, which is assumed to be an isotropic rotator. #### 3.2.5 Modeling results with the stellar kinematics by Fisher (1997) The bulge and disc parameters found for our best-fit model also reproduce the major-axis stellar kinematics by F97. Rather than choosing an ‘ad hoc’ wavenumber range to emulate the F97 analysis technique, the fit to the predicted line profiles was done in ordinary space, and not in Fourier space as was done to reproduce our Fourier Quotient measurements.
The good agreement of the resulting kinematical profiles with the F97 measurements is shown in Fig. 12. ## 4 Modeling of the ionized gas kinematics ### 4.1 Modeling technique At small radii both the ionized-gas velocity and velocity dispersion are comparable to the stellar values, for $`r\lesssim 9^{\prime \prime }`$ and $`r\lesssim 5^{\prime \prime }`$ respectively. This prevented us from modeling the ionized-gas kinematics by assuming that the gas had settled into a disc component, as done by CvdM94 for NGC 2974. Moreover, a change in the slope of the \[O II\]$`\lambda 3726.2`$ intensity radial profile (Fig. 6) is observed at $`r\simeq 8^{\prime \prime }`$; the gradient appears to be somewhat steeper towards the centre. The velocity-dispersion and intensity profiles of the ionized gas suggest that it is distributed in two components (see also the distinct central structure in the HST H$`\alpha `$$`+`$\[N II\] image, Fig. 3): a small inner spheroidal component and a disc. We therefore built a dynamical model for the ionized gas with a dynamically hot spheroidal component and a dynamically colder disc component. The total mass of the ionized gas $`M_{\mathrm{HII}}`$ is negligible, and the total potential is set only by the stellar component. The mass of the ionized gas can be derived from the H$`\alpha `$ luminosity using optical recombination-line theory (see Osterbrock 1989). For a given electron temperature $`T_\mathrm{e}`$ and density $`N_\mathrm{e}`$, the H II mass is given by $$M_{\mathrm{HII}}=\left(L_{\mathrm{H}\alpha }m_\mathrm{H}/N_\mathrm{e}\right)/\left(4\pi j_{\mathrm{H}\alpha }/N_\mathrm{e}N_\mathrm{p}\right)$$ (5) where $`L_{\mathrm{H}\alpha }`$ is the H$`\alpha `$ luminosity, $`m_\mathrm{H}`$ is the mass of the hydrogen atom, $`j_{\mathrm{H}\alpha }`$ is the H$`\alpha `$ emissivity, and $`N_\mathrm{p}`$ is the proton density (Tohline & Osterbrock 1976). The H$`\alpha `$ luminosity of NGC 4036, scaled to the adopted distance, is $`L_{\mathrm{H}\alpha }=5.6\times 10^{39}`$ erg s<sup>-1</sup> (Ho et al. 1997). The term $`4\pi j_{\mathrm{H}\alpha }/N_\mathrm{e}N_\mathrm{p}`$ is insensitive to changes of $`N_\mathrm{e}`$ over the range $`10^2`$–$`10^6`$ cm<sup>-3</sup>. It decreases by a factor $`3`$ for changes of $`T_\mathrm{e}`$ over the range $`5\times 10^3\mathrm{°}`$K – $`2\times 10^4\mathrm{°}`$K (Osterbrock 1989). For an assumed temperature $`T_\mathrm{e}=10^4\mathrm{°}`$K, the electron density is estimated to be $`N_\mathrm{e}=2\times 10^2`$ cm<sup>-3</sup> from the \[S II\] ratio found by Ho et al. (1997), implying $`M_{\mathrm{HII}}=7\times 10^4`$ $`\mathrm{M}_{\odot }`$. For the gaseous spheroid and disc we made two different sets of assumptions, based on two different physical scenarios for the gas cloudlets. #### 4.1.1 Long-lived gas cloudlets (model A) In a first set of models we described the gaseous component as a set of collisionless cloudlets in hydrostatic equilibrium. The small gaseous ‘spheroid’ is characterized by a density distribution and flattening different from those of the stars. Its major-axis luminosity profile was assumed to follow an $`R^{1/4}`$ law. Adopting Ryden’s (1992) analytical approximation for it, we obtained the three-dimensional luminosity density of the gaseous spheroid. The flattening $`q`$ of the gaseous spheroid was kept as a free parameter. To derive the kinematics of the gaseous spheroid we solved the Jeans Equations under the same assumptions made in Sec. 3 for the stellar spheroid. In particular, the streaming velocity $`\overline{v_\varphi }`$ of the gaseous bulge is derived from the second azimuthal velocity moment $`\overline{v_\varphi ^2}`$ using Satoh’s (1980) relation.
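As an aside, the H II mass of Eq. (5) is easy to verify numerically; the case-B emissivity value used below is a standard one assumed here, not given in the text:

```python
# Check of the H II mass from Eq. (5).  The case-B emissivity at
# T_e = 1e4 K, 4*pi*j_Ha/(N_e N_p) = 3.56e-25 erg cm^3 s^-1, is an assumed
# standard recombination-theory value.
L_Ha  = 5.6e39       # H-alpha luminosity, erg/s (Ho et al. 1997)
m_H   = 1.6726e-24   # hydrogen mass, g
N_e   = 2.0e2        # electron density, cm^-3
emiss = 3.56e-25     # 4*pi*j_Ha/(N_e N_p), erg cm^3 s^-1
M_sun = 1.989e33     # g
print(f"M_HII = {L_Ha*m_H/(N_e*emiss)/M_sun:.1e} M_sun")  # ~7e4, as in the text
```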
For the ionized gaseous disc we solved the Jeans Equations under assumptions similar to those made in Sec. 3 for the stellar disc. Specifically, we assumed that the gaseous disc (i) has an exponential luminosity profile; (ii) is infinitesimally thin; (iii) has an exponentially decreasing $`\sigma _R^2`$; (iv) has $`\sigma _z^2=\sigma _R^2`$; and (v) has $`\sigma _\varphi ^2`$ satisfying the epicyclic relation. #### 4.1.2 Gas cloudlets ‘just’ shed by the stars (model B) In a second set of models we assumed that the emission observed in the gaseous spheroid and disc arises from material that was recently shed from stars. Different authors (Bertola et al. 1984; Fillmore et al. 1986; Kormendy & Westpfahl 1989; Mathews 1990) suggested that the gas lost by stars (e.g. in planetary nebulae) is heated by shocks to the virial temperature of the galaxy within $`10^4`$ years, a time shorter than the typical dynamical time of the galaxy. Hence in this picture the ionized gas and the stars have the same true kinematics, while their observed kinematics are different due to the line-of-sight integration of their different spatial distributions. Differences between the radial profiles of the gas emissivity and the stellar luminosity may be explained if both the gas emission process and the efficiency of the thermalization process vary with the galactocentric distance. The three-dimensional luminosity density of the spheroid is derived as in model A, and the luminosity-density profile of the disc is assumed to be exponential. In both cases, the kinematics of the gaseous spheroid and disc were projected onto the sky plane to be compared with the observed ionized-gas kinematics. Assuming a Gaussian line profile for both gaseous components, we derived the total line profile (which depends on the relative flux of the two components). As for the stellar components, we convolved the line profiles obtained for the ionized gas with the seeing PSF and sampled them over the slit width and pixel size to mimic the observational setup. This procedure is particularly important for the modeling of the observed kinematics near the centre. As a last step (in mimicking the measuring technique of the gaseous kinematics) we fitted a single Gaussian to the resulting line profile, after taking the instrumental line profile into account. ### 4.2 Results for the gaseous component We decomposed the \[O II\]$`\lambda 3726.2`$ intensity profile as the sum of an $`R^{1/4}`$ gaseous spheroid and an exponential gaseous disc. A least-squares fit of the observed data was done for $`r>3^{\prime \prime }`$ to deal with seeing effects (Fig. 6). The gaseous spheroid turned out to be the dominant component out to $`r\simeq 8^{\prime \prime }`$, i.e. beyond the bright central emission seen in Fig. 3. We derived the effective radius of the gaseous spheroid $`r_{e,\mathrm{gas}}=0\stackrel{\prime \prime }{.}5\pm 0\stackrel{\prime \prime }{.}1`$, the scale length of the gaseous disc $`r_{d,\mathrm{gas}}=29\stackrel{\prime \prime }{.}8\pm 0\stackrel{\prime \prime }{.}9`$, and the ratio between the effective intensity of the spheroid and the central intensity of the disc $`I_{e,\mathrm{gas}}/I_{0,\mathrm{gas}}=718_{-153}^{+813}`$. The uncertainties on the resulting parameters were estimated by a separate decomposition on each side of the galaxy. With these parameters we applied the models for the gas kinematics described in Sec. 4.1.
Since in both models A and B the stellar density radial profile differs from the gas emissivity radial profile, it is interesting to check the relation between the three-dimensional stellar density $`\rho _{\mathrm{star}}(R)`$ and the three-dimensional gas emissivity $`\nu _{\mathrm{gas}}(R)`$. After deprojection we find that they are related by $$\nu _{\mathrm{gas}}(R)\propto \rho _{\mathrm{star}}^{2.3}(R)$$ (6) in the range of galactocentric distances between $`r\simeq 2^{\prime \prime }`$ and $`r\simeq 10^{\prime \prime }`$. In a fully ionized gas the recombination rate is proportional to the square of the gas density (e.g. Osterbrock 1989), a power law quite similar to Eq. 6. For models A and B the best fits to the observed gas kinematics are in both cases obtained with a spheroid flattening $`q=0.8`$; they are plotted in Fig. 13 and Fig. 14, respectively. The best result is obtained with model B. A simple estimate of the errors on the model due to the uncertainties of the bulge-disc decomposition of the radial profile of the \[O II\]$`\lambda 3726.2`$ emission line can be inferred by comparing the model predictions based on the separate decompositions of the two sides of the galaxy. We find a maximum difference of $`5\%`$ for $`4^{\prime \prime }<r<10^{\prime \prime }`$ between the gas velocities and velocity dispersions predicted using the two different bulge-disc decompositions of the \[O II\]$`\lambda 3726.2`$ intensity profile. For model A the assumption of Satoh’s (1980) relation fails in reproducing the observed gas kinematics for $`r\lesssim 6^{\prime \prime }`$, where the emission-line intensity profile is dominated by the gaseous spheroidal component. However, the $`R^{1/4}`$ extrapolation of the density profile in the inner $`3^{\prime \prime }`$ overestimates the density gradient in this region (as also appears from the HST image, Fig. 3), which could produce an excessive asymmetric-drift correction. #### 4.2.1 Modeling results with the gas kinematics by Fisher (1997) We also applied our models A and B to the ionized-gas kinematics and to the \[O III\]$`\lambda 5006.9`$ intensity radial profile measured by F97 along the major axis of NGC 4036. The best fits to the F97 data for models A and B are obtained with a spheroid flattening $`q=0.8`$; they are shown in Fig. 15 and Fig. 16, respectively. In this case the dynamical predictions of model B (even if they give better results than those of model A) are not able to reproduce the F97 kinematics in the radial range between $`3^{\prime \prime }`$ and $`10^{\prime \prime }`$. The differences between the predicted and the measured kinematics rise up to 80 $`\mathrm{km}\mathrm{s}^{-1}`$ in velocity and to 40 $`\mathrm{km}\mathrm{s}^{-1}`$ in velocity dispersion at $`r\simeq 6^{\prime \prime }`$. Nevertheless, for $`0^{\prime \prime }<r<10^{\prime \prime }`$ this model agrees better with the observations than if the gas is assumed to be on circular orbits (see the dotted lines in Fig. 16). In the latter case the maximum differences with respect to the measured kinematics are as large as 130 $`\mathrm{km}\mathrm{s}^{-1}`$ in velocity and 110 $`\mathrm{km}\mathrm{s}^{-1}`$ in velocity dispersion for $`r\simeq 4^{\prime \prime }`$. ### 4.3 Do drag forces affect the kinematics of the gaseous cloudlets? Considerable differences exist between the ionized-gas kinematics recently measured by F97 and the velocity and velocity-dispersion profiles predicted even by the best model in the bulge-dominated region between $`r\simeq 4^{\prime \prime }`$ and $`r\simeq 10^{\prime \prime }`$.
This suggests that other phenomena play a role in determining the dynamics of the spheroid gas (see item (iii) in Sect. 1). The discrepancy between model and observations could be explained by taking into account the drag interaction between the ionized gas and the hot component of the interstellar medium. In the scenario for the evolution of stellar ejecta in elliptical galaxies outlined by Mathews (1990), a portion of the gas shed by stars (e.g. in stellar winds or planetary nebulae) undergoes an orbital separation from its parent stars through the interaction with the ambient gas, after an expansion phase and the attainment of pressure equilibrium with the environment, and before its disruption by various instabilities. Indeed, to explain the luminosity of the optical emission lines measured for nearby ellipticals, Mathews (1990) estimated that the ionized gas ejected from the orbiting stars merges with the hot interstellar medium in at least $`t_{\mathrm{life}}\simeq 10^6`$ yr. This lifetime is sufficiently long to let the gaseous clouds (which start with the same kinematics as their parent stars) acquire a kinematical behaviour of their own, owing to the deceleration produced by the drag force of the diffuse interstellar medium. The lifetime of the ionized-gas nebulae is shorter than $`10^4`$–$`10^5`$ years if magnetic effects on the gas kinematics are ignored (as in our model B). To gain some qualitative insight into the effects of a drag force on the gas kinematics, we studied the case of a gaseous nebula moving in the spherical potential $$\mathrm{\Phi }(r)=\frac{4}{3}\pi G\rho r^2$$ (7) generated by a homogeneous mass distribution of density $`\rho `$; the nebula, starting on a circular orbit, is decelerated by a drag force $$\mathbf{F}_{\mathrm{drag}}=-\frac{k_{\mathrm{drag}}}{m}v^2\frac{\mathbf{v}}{v}$$ (8) where $`m`$ and $`\mathbf{v}`$ are the mass and the velocity of the gaseous cloud. Following Mathews (1990), the constant $`k_{\mathrm{drag}}`$ is given by $$k_{\mathrm{drag}}\simeq \frac{3}{4}\frac{n}{n_{\mathrm{eq}}}\frac{m}{a_{\mathrm{eq}}}$$ (9) where $`n`$ is the density of the interstellar medium, and $`n_{\mathrm{eq}}`$ and $`a_{\mathrm{eq}}`$ are respectively the density and the radius of the gaseous nebula when equilibrium is reached between the internal pressure of the cloud and the external pressure of the interstellar medium. The ratio $`n/n_{\mathrm{eq}}\simeq 10^{-3}`$ at any galactic radius, and therefore the ratio $`k_{\mathrm{drag}}/m`$ depends on the nebula radius $`a_{\mathrm{eq}}`$ (Mathews 1990). The equations of motion of the nebula, expressed in plane polar coordinates $`(r,\psi )`$ in which the centre of attraction is at $`r=0`$ and $`\psi `$ is the azimuthal angle in the orbital plane, are $$\ddot{r}-r\dot{\psi }^2=-\frac{4}{3}\pi G\rho r+\frac{k_{\mathrm{drag}}}{m}\dot{r}^2\qquad (r>0)$$ (10) $$r\ddot{\psi }+2\dot{\psi }\dot{r}=-\frac{k_{\mathrm{drag}}}{m}r^2\dot{\psi }^2\qquad (r>0)$$ (11) We numerically solved Eqs. 10 and 11 with the Runge-Kutta method (Press et al. 1986) to study the time dependence of the radial and tangential velocity components $`\dot{r}`$ and $`r\dot{\psi }`$ of the nebula. We fixed the potential by assuming a circular velocity of 250 $`\mathrm{km}\mathrm{s}^{-1}`$ at $`r=1`$ kpc. Following Mathews (1990), we took an equilibrium radius for the gaseous nebula $`a_{\mathrm{eq}}=0.37`$ pc. The results obtained for different times over which the drag force decelerates the gaseous clouds are shown in Fig. 17.
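A minimal sketch of such an integration (not the original code) may be useful. It assumes the harmonic potential implied by Eq. (7), normalized to the quoted circular velocity, the drag of Eq. (8) written in Cartesian form, and $`k_{\mathrm{drag}}/m`$ from Eq. (9) with $`n/n_{\mathrm{eq}}=10^{-3}`$ and $`a_{\mathrm{eq}}=0.37`$ pc:

```python
# RK4 integration of a drag-decelerated cloud orbit.  Units: pc, Myr.
import numpy as np

KMS = 1.0227                       # 1 km/s in pc/Myr
r0, vc = 1000.0, 250.0 * KMS       # initial radius and circular velocity
om2 = (vc / r0) ** 2               # harmonic potential: a_grav = -om2 * x
k_m = 0.75 * 1.0e-3 / 0.37         # k_drag/m from Eq. (9), in 1/pc

def deriv(s):
    x, v = s[:2], s[2:]
    a = -om2 * x - k_m * np.linalg.norm(v) * v   # gravity + drag, Eq. (8)
    return np.concatenate([v, a])

def rk4(s, dt):
    k1 = deriv(s); k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2); k4 = deriv(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

s, dt = np.array([r0, 0.0, 0.0, vc]), 0.01       # circular start, dt in Myr
for n in range(1, 201):                          # follow the cloud for 2 Myr
    s = rk4(s, dt)
    if n % 50 == 0:
        x, v = s[:2], s[2:]
        r = np.linalg.norm(x)
        v_r = np.dot(v, x) / r                   # radial velocity
        v_t = (x[0] * v[1] - x[1] * v[0]) / r    # tangential velocity
        print(f"t={n*dt:4.1f} Myr  r={r:6.1f} pc  "
              f"v_r={v_r/KMS:+6.1f}  v_t={v_t/KMS:6.1f} km/s")
```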
The integrations show that $`\ddot{\psi }<0`$ and $`\ddot{r}>0`$: the clouds spiral towards the galaxy centre, as expected. Moreover, the drag effects are greater for faster-starting clouds, and therefore negligible for the slowly moving clouds in the very inner region of NGC 4036. If the nebulae are homogeneously distributed in the gaseous spheroid, only the tangential component $`r\dot{\psi }`$ of their velocity contributes to the observed velocity. No contribution derives from the radial component $`\dot{r}`$ of their velocities: for each nebula moving towards the galaxy centre which is also approaching us, we expect to find along the line of sight a receding nebula which is falling to the centre from the same galactocentric distance with an opposite line-of-sight component of its $`\dot{r}`$. The radial components of the cloudlet velocities (typically 30–40 $`\mathrm{km}\mathrm{s}^{-1}`$, see Fig. 17) are nevertheless crucial to explain the velocity-dispersion profile and to understand how the difference between the observed velocity dispersions and the model B predictions arises. If the clouds are decelerated by the drag force, their orbits become more radially extended and the velocity ellipsoids acquire a radial anisotropy. This is a general effect, and it holds not only in our case, in which the clouds initially moved on circular orbits. So we expect (in the region of the gaseous spheroid) the observed velocity-dispersion profile to decrease more steeply than the one predicted by the isotropic model B. In fact, at $`5^{\prime \prime }\lesssim r\lesssim 10^{\prime \prime }`$ the F97 ionized-gas data show that the gas velocity dispersion already does not exceed 50 $`\mathrm{km}\mathrm{s}^{-1}`$, even though the rotation curve falls below the circular velocities inferred from the stellar kinematics. Given that we do not know the lifetime and the density of the cloudlets, we cannot make a definite prediction. The drag effects could explain the differences between the observed and predicted velocities if the decrease of the tangential velocities (shown in Fig. 17) is considered as an upper limit. A proper luminosity-weighted integration of the tangential velocities of the nebulae along the line of sight has to be carried out for the gaseous spheroid. Since the clouds are not all settled on a particular plane, the plotted values will be reduced by two distinct cosine-like terms, depending on the two angles fixing the position of any particular cloud. Moreover, in the general case the clouds do not all start at the local circular velocity, and at any given radius we find both ‘younger’ clouds (just shed from stars) and ‘older’ clouds, which come from more distant regions of the gaseous spheroid and are about to be thermalized. Due to the nature of the drag force (acting much more efficiently on fast-moving particles), it is easy to understand that the fast ‘younger’ clouds will soon leave their birthplaces, while the slow ‘older’ clouds will spend much more time crossing these regions. For instance, in the case of nebulae shed at the local circular velocity on a given plane (described by Eqs. 10 and 11 and shown in Fig. 17), at $`r=0.96`$ kpc (where the circular velocity is $`\simeq 230`$ $`\mathrm{km}\mathrm{s}^{-1}`$) we found that about half of the gas cloudlets come from regions between 0.97 and 1.1 kpc, with tangential velocities between 150 $`\mathrm{km}\mathrm{s}^{-1}`$ and 127 $`\mathrm{km}\mathrm{s}^{-1}`$.
This scenario is applicable only outside $`3^{\prime \prime }`$; whether it is applicable also inside the region of the central discrete structure revealed by the HST image in Fig. 3 is not clear. ## 5 Discussion and conclusions The modeling of the stellar and gas kinematics in NGC 4036 shows that the observed velocities of the ionized gas, moving in the gravitational potential determined from the stellar kinematics, cannot be explained without taking the gas velocity dispersion into account. In the inner regions of NGC 4036 the gas is clearly not moving at the circular velocity. This finding is in agreement with earlier results on other disc galaxies (Fillmore et al. 1986; Kent 1988; Kormendy & Westpfahl 1989) and on ellipticals (CvdM94). A much better match to the observed gas kinematics is found by assuming that the ionized gas consists of collisionless clouds, distributed in a spheroidal and a disc component, for which the Jeans Equations can be solved in the gravitational potential of the stars (i.e., model A in Sect. 4.1). Still better agreement with the observed gas kinematics is achieved by assuming that the ionized-gas emission comes from material which has recently been shed from the bulge stars (i.e., model B in Sect. 4.1). If this gas is heated to the virial temperature of the galaxy (ceasing to produce emission lines) within a time much shorter than the orbital time, it shares the same true kinematics as its parent stars. If this is the case, we would observe different kinematics for the ionized gas and the stars due only to their different spatial distributions. The number of emission-line photons produced per unit mass of lost gas may depend on the environment and may therefore vary with the galactocentric distance. Therefore the intensity radial profile of the emission lines of the ionized gas can be different from that of the stellar luminosity. The continuum-subtracted H$`\alpha `$$`+`$\[N II\] image of the nucleus of NGC 4036 (Fig. 3) confirms that, except for the complex emission structure inside $`3^{\prime \prime }`$, the distribution of the emission is smooth, as expected for the gaseous spheroidal component. In conclusion, the ‘slowly rising’ gas rotation curve in the inner region of NGC 4036 can be understood kinematically, at least in part. The difference between the circular-velocity curve (inferred from the stellar kinematics) and the rotation curve measured for the ionized gas is substantially due to the high velocity dispersion of the gas. This kinematical modeling leaves open the questions about the physical state (e.g. the lifetime of the emitting clouds) and the origin of the dynamically hot gas. We tested the hypothesis that the ionized gas is located in short-lived clouds shed by evolved stars (e.g. Mathews 1990), finding a reasonable agreement with our observational data. These clouds may be ionized by the parent stars, by shocks, or by the UV flux from hot stars (Bertola et al. 1995a). The comparison with the more recent and detailed gas data by F97 opens the possibility of further modeling improvements, if the drag effects on the gaseous cloudlets (due to the diffuse interstellar medium) are taken into account. These arguments indicate that the dynamically hot gas in NGC 4036 has an internal origin. This does not exclude the possibility that the gaseous disc is of external origin, as discussed for S0’s by Bertola, Buson & Zeilinger (1992). Spectra at higher spectral and spatial resolution are needed to understand the structure of the gas inside $`3^{\prime \prime }`$.
Two-dimensional spectra could further elucidate the nature of the gas. ## Acknowledgments We are indebted to Roeland van der Marel for providing his $`f(E,L_z)`$ modeling software, which became the basis of the program package used here. WWZ acknowledges the support of the Jubiläumsfonds der Oesterreichischen Nationalbank (grant 6323). This research made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, and of the Lyon-Meudon Extragalactic Database (LEDA), supplied by the LEDA team at the CRAL-Observatoire de Lyon (France). ## Appendix A Data tables The stellar (Tab. 2) and ionized-gas (Tab. 3) heliocentric velocities and velocity dispersions measured along the major axis of NGC 4036 are listed here.
# Relations between the $`K_{\ell 3}`$ and $`\tau \to K\pi \nu _\tau `$ decays ## Abstract We investigate the relations between the $`K_{\ell 3}`$ and $`\tau \to K\pi \nu _\tau `$ decays using the meson dominance approach. First, the experimental branching fractions (BF) for $`K_{e3}^\pm `$ and $`K_{e3}^0`$ are used to fix two normalization constants (isospin invariance is not assumed). Then, the BF of $`\tau ^{-}\to K^{\ast }(892)^{-}\nu _\tau `$ is calculated, in agreement with experiment. We further argue that the nonzero value of the slope parameter $`\lambda _0`$ of the $`K_{\mu 3}^\pm `$ and $`K_{\mu 3}^0`$ form factors $`f_0(t)`$ implies the existence of the $`\tau ^{-}\to K_0^{\ast }(1430)^{-}\nu _\tau `$ decay. We calculate its BF, together with the BF’s of the $`K_{\mu 3}^\pm `$, $`K_{\mu 3}^0`$, $`\tau ^{-}\to K^{-}\pi ^0\nu _\tau `$, and $`\tau ^{-}\to \overline{K}^0\pi ^{-}\nu _\tau `$ decays, as a function of the $`\lambda _0`$ parameter. At some value of $`\lambda _0`$, different for charged and neutral kaons, the calculated BF’s seem to match the existing data, and a prediction is obtained for the $`\tau \to K\pi \nu `$ decays going through the $`K_0^{\ast }(1430)^{-}`$ resonance. With a new generation of high-statistics, precise data on the $`K_{\ell 3}`$, i.e. $`K\to \pi \ell \nu _{\ell }`$, decays coming soon, it is possible to think about investigating the problems that were not fully resolved in the previous series of experiments, which ended approximately in the early eighties. One of the as yet undecided issues is that of the value, or even the sign, of the slope $`\lambda _0`$ in the linear parametrization of the form factor $`f_0`$, the definition of which we give below. Some $`K_{\mu 3}^\pm `$ experiments indicated a non-vanishing negative value, some a positive one. (We refer the reader to the literature for references and more details.) The situation was analysed by the Particle Data Group in 1982 and a recommended value of $`0.004\pm 0.007`$ was chosen. A very recent experiment, with its result of $`0.062\pm 0.024`$, influenced the recommended value, which has now become $`0.006\pm 0.007`$. The situation with the $`\lambda _0`$ parameter in the $`K_L^0\to \pi ^\pm \mu ^{\mp }\nu `$ ($`K_{\mu 3}^0`$) decay seems to be a little more definite, at least judging from the recommended value of $`0.025\pm 0.006`$ and from the fact that all the experiments in the period 1974–1981 agreed on the positive sign. In this note we speculate about the consequences which may stem from conclusively establishing a nonzero value of $`\lambda _0`$. Its purpose is not to compete with the elaborate calculations of the $`K_{\ell 3}`$ form factors or of the kaon production in $`\tau `$-lepton decays. Our aim is to show on a phenomenological basis, in a simple and transparent way, possible relations between the $`K_{\ell 3}`$ and $`\tau \to K\pi \nu _\tau `$ decays. We mainly argue that a nonzero value of the $`\lambda _0`$ parameter of the $`K_{\mu 3}`$ decays implies a nonzero decay fraction of the $`\tau ^{-}\to K_0^{\ast }\nu _\tau `$ decay. Judging from our results and the contemporary experimental upper limit, this decay may be observed soon. The tool we are going to use here is the meson dominance hypothesis (see earlier work and references therein).
If we believe in the validity of the standard electroweak model in the leptonic sector, we parametrize the matrix element of the $`K_{\ell 3}`$ decay in the form $$\mathcal{M}_{K_{\ell 3}}=C\left[f_+(t)p^\mu +f_{-}(t)q^\mu \right]\overline{u}\gamma _\mu (1-\gamma _5)v,$$ (1) where $`p`$ ($`q`$) is the sum (difference) of the four-momenta of the $`K`$ and $`\pi `$ mesons, $`t=q^2`$, and $`u`$ and $`v`$ are appropriately chosen Dirac spinors of the outgoing leptons. This relation defines, up to a normalization factor, the $`K_{\ell 3}`$ form factors $`f_+(t)`$ and $`f_{-}(t)`$. The normalization used most frequently is defined by $`C=G_F|V_{us}|/2`$ for the $`K_{\ell 3}^\pm `$ and $`C=G_F|V_{us}|/\sqrt{2}`$ for the $`K_{\ell 3}^0`$ decays. It is customary to introduce also the form factor $$f_0(t)=f_+(t)+\frac{t}{m_K^2-m_\pi ^2}f_{-}(t),$$ (2) which corresponds to the $`J=0`$ state of the $`K\pi `$ system, whereas $`f_+(t)`$ corresponds to its $`J=1`$ state. After integrating over the angular variables, the differential decay rate in $`t`$, which also has the meaning of the invariant mass squared of the $`\ell \nu `$ system, comes out as $$\frac{d\mathrm{\Gamma }_{K_{\ell 3}}}{dt}=\frac{C^2}{3(4\pi m_K)^3}\frac{\left(t-m_{\ell }^2\right)^2}{t^3}\lambda ^{1/2}(t,m_K^2,m_\pi ^2)\times \left[\left(2t+m_{\ell }^2\right)\lambda (t,m_K^2,m_\pi ^2)\left|f_+(t)\right|^2+3m_{\ell }^2\left(m_K^2-m_\pi ^2\right)^2\left|f_0(t)\right|^2\right],$$ (3) where $`\lambda (x,y,z)=x^2+y^2+z^2-2xy-2xz-2yz`$. The $`t`$-dependence of all form factors is usually studied experimentally in the linear approximation $$f(t)=f(0)\left(1+\lambda \frac{t}{m_\pi ^2}\right),$$ (5) although such an approximation was shown to be improper, at least for the $`f_+(t)`$ form factor of the $`K_{e3}^\pm `$ and $`K_{e3}^0`$ decays: big discrepancies were found among the $`\lambda _+`$’s from different experiments when a linear approximation was used, and the existence of a quadratic term in $`f_+(t)`$ was clearly demonstrated by the better fits obtained after its inclusion. There is a peculiarity of the present experimental situation which is worth mentioning. The $`\mu /e`$ universality requires that the form factors be equal for the $`K_{e3}`$ and $`K_{\mu 3}`$ decays. Assuming the validity of (5) we can express the $`R=K_{\mu 3}/K_{e3}`$ branching ratio as a function of two parameters, $`\lambda _+`$ and $`\lambda _0`$. Knowing the experimental values of the latter, we can evaluate $`R`$ and compare it with the experimental ratio. The $`K_{\ell 3}^\pm `$ data pass this consistency check without problems, whereas the contemporary recommended values of the $`K_{\ell 3}^0`$ form-factor slopes lead to a slightly lower ratio than the experimental one ($`0.676\pm 0.009`$ against $`0.701\pm 0.008`$). To restore consistency, one has to sacrifice the $`\mu /e`$ universality and allow a higher value of the $`\lambda _+`$ parameter in the $`K_{\mu 3}^0`$ decay. A remark is required at the very beginning about our treatment of the $`K_{\ell 3}`$ decays of neutral kaons. We will work with the $`K^0\to \pi ^{-}\ell ^+\nu _{\ell }`$ and $`\overline{K}^0\to \pi ^+\ell ^{-}\overline{\nu }_{\ell }`$ decays, despite the fact that what is really observed are the decays of the $`K_L^0`$ and $`K_S^0`$ mesons.
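The consistency check just described is easy to reproduce numerically. A sketch, assuming the linear parametrization (5) for both form factors and illustrative slope values (the overall constant $`C`$ cancels in the ratio):

```python
# R = Gamma(K_mu3)/Gamma(K_e3) from Eq. (3) with the linear form (5).
# Masses in GeV; the slope values plugged in at the end are illustrative.
from scipy.integrate import quad

mK, mpi = 0.493677, 0.134977          # K+-, pi0
mmu, me = 0.105658, 0.000511

def lam(x, y, z):
    return x*x + y*y + z*z - 2.0*(x*y + x*z + y*z)

def dGamma(t, ml, lp, l0):            # integrand of Eq. (3), C^2 dropped
    fp = 1.0 + lp*t/mpi**2            # f_+(t), Eq. (5)
    f0 = 1.0 + l0*t/mpi**2            # f_0(t), Eq. (5)
    L = lam(t, mK**2, mpi**2)
    return (t - ml**2)**2/t**3 * L**0.5 * (
        (2.0*t + ml**2)*L*fp**2 + 3.0*ml**2*(mK**2 - mpi**2)**2*f0**2)

def R(lp, l0):
    num, _ = quad(dGamma, mmu**2, (mK - mpi)**2, args=(mmu, lp, l0))
    den, _ = quad(dGamma, me**2, (mK - mpi)**2, args=(me, lp, l0))
    return num/den

print(f"R(K_mu3/K_e3) = {R(0.028, 0.020):.3f}")
```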
If we ignore the small violation of the $`CP`$ invariance, the decay rates of the two neutral-kaon decays introduced above are identical, and each of them is equal to the decay rate of $`K_L^0\to \pi ^\pm \ell ^{\mp }\nu `$, where summing over the two final states shown is understood. The same is true for $`K_S^0\to \pi ^\pm \ell ^{\mp }\nu `$. The assumption that the $`K_{\ell 3}`$ decay is dominated by the $`K^{\ast }(892)`$ pole, pictorially depicted in Fig. 1a, leads to the following matrix element: $$\mathcal{M}_{1a}=\frac{G_a}{m_V^2-t}\left(p^\mu -\frac{m_K^2-m_\pi ^2}{m_V^2}q^\mu \right)\overline{u}\gamma _\mu (1-\gamma _5)v,$$ (6) where $`m_V`$ is the $`K^{\ast \pm }(892)`$ mass and the (dimensionless) $`G_a`$ collects the coupling constants from all vertices. It also includes the $`V_{us}`$ element of the Cabibbo-Kobayashi-Maskawa matrix. As isospin invariance is badly broken in the $`K_{\ell 3}`$ decays, we have two independent constants: one for the $`K_{\ell 3}`$ decays of $`K^\pm `$, another for those of $`K^0`$ ($`\overline{K}^0`$). We do not need the explicit form of the $`G_a`$’s, because we will fix their values from the experimental values of the corresponding $`K_{e3}`$ decay rates. Nevertheless, in the notation of the meson dominance approach we have $$G_a^{(\pm )}=G_FV_{us}w_{K^{\ast }}m_V^2\frac{g_{K^{\ast \pm }K^\pm \pi ^0}}{g_\rho }$$ (7) and a similar relation for $`G_a^{(0)}`$. The connection with the standard notation is given by $`G_a^{(\pm )}/m_V^2=G_F\left|V_{us}\right|f_+^{K^+\pi ^0}(0)/2`$. For $`G_a^{(0)}`$, the factor of 2 is replaced by $`\sqrt{2}`$. Let us note that when writing (6) we took the propagator of the $`K^{\ast }`$ resonance in the free-vector-particle form $$iG_0^{\mu \nu }(q)=\frac{-g^{\mu \nu }+q^\mu q^\nu /m_V^2}{t-m_V^2+iϵ},$$ (8) where $`m_V`$ is the mass of the $`K^{\ast }(892)`$ resonance, as seen in hadronic production experiments. The absence of a non-infinitesimal imaginary part in the denominator is justified by $`t`$ being below the threshold of the $`K^{\ast }\to K\pi `$ decay channel. But the actual form of the propagator may differ from (8) even in the subthreshold region. The success in describing the $`K_{\ell 3}`$ form factors gives an a posteriori phenomenological argument in favor of the approximate validity of Eq. (8). If we fix, for simplicity, the normalization of the form factors by requiring $`f_+(0)=1`$, we find the following correspondence of (6) with the quantities entering Eq. (1): $$C=\frac{G_a}{m_V^2},$$ (9) $$f_+(t)=\frac{m_V^2}{m_V^2-t},$$ (10) $$f_{-}(t)=-\frac{m_K^2-m_\pi ^2}{m_V^2-t}.$$ (11) We also have $$f_0(t)=1.$$ (12) Inserting our $`C`$, $`f_+(t)`$, and $`f_0(t)`$ into the general formula (3), integrating over $`t`$, and comparing the result with the $`K_{e3}^\pm `$ ($`K_{e3}^0`$) decay rate calculated from the experimental values of the $`K^\pm `$ ($`K_L^0`$) lifetime and the $`K_{e3}^\pm `$ ($`K_{e3}^0`$) branching fraction, we arrive at $`G_a^{(\pm )2}=(1.037\pm 0.013)\times 10^{-12}`$ and $`G_a^{(0)2}=(1.974\pm 0.021)\times 10^{-12}`$. If the isospin invariance in the $`K^{\ast }K\pi `$ vertex were exact, the ratio of the former to the latter would be equal to 1/2. Before proceeding further with our form-factor issue, let us notice that the same overall coupling constants also govern the decays $`\tau ^{-}\to K^{-}\pi ^0\nu _\tau `$ and $`\tau ^{-}\to \overline{K}^0\pi ^{-}\nu _\tau `$, in which the $`K\pi `$ system is produced via the $`K^{\ast }`$ resonance, see Fig. 2a.
Let us first calculate their branching fractions using the $`G_a`$’s we have just determined. This will test the soundness of our approach and of the approximations made, and will give us confidence for calculations for which a comparison with data is as yet impossible. The main problem we face when attempting such a calculation is that of the propagators of the resonances. We are now above the threshold of the $`K\pi `$ system, $`s>(m_K+m_\pi )^2`$, where $`s`$ is the square of the four-momentum $`p`$ flowing through the $`K^{\ast }`$ resonance. As a consequence, the propagator acquires an important imaginary part and may differ substantially from the propagator of a free vector particle also in other respects. For example, it was proposed that the lowest-order $`W^\pm `$ ($`Z^0`$) renormalized propagator in the unitary gauge can be obtained, at least in the resonance region, by a simple modification of the free propagator (8): namely, by replacing the mass squared $`m_V^2`$ everywhere in Eq. (8) by $`m_V^2-im_V\mathrm{\Gamma }_V`$, with $`\mathrm{\Gamma }_V`$ being the resonance width. (For later developments and references to alternative approaches to the weak-gauge-boson propagators, see the subsequent literature.) For resonances with strong interactions such a simple prescription is not justified. Nevertheless, if $`s=p^2`$ is in close proximity to the resonant mass squared, we can write $$iG^{\mu \nu }(p)=\frac{-g^{\mu \nu }+\omega (s)p^\mu p^\nu /s}{s-m_V^2+im_V\mathrm{\Gamma }_V(s)},$$ (13) where $`\mathrm{\Gamma }_V(s)`$ is the $`s`$-dependent total width of the resonance, normalized by $`\mathrm{\Gamma }(m_V^2)=\mathrm{\Gamma }_V`$, and $`\omega (s)`$ is a complex function. It reflects the properties of the one-particle-irreducible bubble and is, in principle, calculable. There are different ways of treating it in practice. For example, when considering the $`a_1`$ resonance in the intermediate state, some authors eliminated its influence by choosing transverse vertices. Alternatively, various choices have been made in the literature. Very popular is the free-particle choice $`\omega (s)=s/m_V^2`$, recently used in several analyses. In experimental analyses a spin-zero propagator is often used even where this is not justified; this corresponds to $`\omega (s)=0`$. The same choice was made in an earlier calculation of the branching fraction of the $`\tau ^{-}\to K^{\ast }(892)^{-}\nu _\tau `$ decay. Fortunately, the $`K^{\ast }(892)`$ resonance is relatively narrow ($`\mathrm{\Gamma }_V\simeq 51`$ MeV) and we can hope that the systematic error connected with the propagator ambiguity is small. Nevertheless, to assess it we will calculate every quantity of interest twice: once with $`\omega =s/[m_V^2-im_V\mathrm{\Gamma }_V(s)]`$, then with $`\omega =0`$. This procedure yields an average and an estimate of its systematic error.
The differential rate of the $`\tau ^{-}\to K^{-}\pi ^0\nu _\tau `$ decay in the mass squared of the $`K\pi `$ system is given by the formula $$\frac{d\mathrm{\Gamma }_{\tau ^{-}\to K^{-}\pi ^0\nu _\tau }}{ds}=\frac{1}{6(4\pi m_\tau )^3}\frac{\left(m_\tau ^2-s\right)^2}{s^3}\lambda ^{1/2}(s,m_K^2,m_\pi ^2)\times \left[\left(2s+m_\tau ^2\right)\lambda (s,m_K^2,m_\pi ^2)\left|F_+(s)\right|^2+3m_\tau ^2\left(m_K^2-m_\pi ^2\right)^2\left|F_0(s)\right|^2\right],$$ (14) where $$F_+(s)=\frac{G_a^{(\pm )}}{s-m_V^2+im_V\mathrm{\Gamma }_V(s)}$$ (16) and $$F_0(s)=\frac{G_a^{(\pm )}\left[1-\omega (s)\right]}{s-m_V^2+im_V\mathrm{\Gamma }_V(s)}.$$ (17) The presence of $`F_0(s)`$ in (14) reflects the contribution of the off-mass-shell vector resonance $`K^{\ast }`$ to the $`J=0`$ channel. It would disappear if we chose $`\omega (s)\equiv 1`$, as seen from (17). After integrating (14) and using the experimental value of the $`\tau ^{-}`$ lifetime, we arrive at $`B(\tau ^{-}\to K^{-}\pi ^0\nu _\tau )=(3.9\pm 0.6)\times 10^{-3}`$. We proceed similarly to obtain $`B(\tau ^{-}\to \overline{K}^0\pi ^{-}\nu _\tau )=(7.1\pm 1.2)\times 10^{-3}`$. After summing these two branching fractions we get $$B(\tau ^{-}\to K^{\ast }(892)^{-}\nu _\tau )=(1.10\pm 0.18)\%.$$ (18) The experimental value is $`(1.28\pm 0.08)\%`$. Let us now return to the form factors. The salient feature of the one-vector-meson dominance model is the constant $`K_{\ell 3}`$ form factor $`f_0`$, which implies a vanishing parameter $`\lambda _0`$, defined in (5). There are at least two ways to accommodate a nonvanishing value of $`\lambda _0`$ in the meson dominance approach. One possibility is to add more strange vector resonances. The case of two vector resonances was considered early on: in addition to the well-established $`K^{\ast }(892)`$, it included the $`K^{\ast }(730)`$, which was abandoned later on. But the formulas of that work are general and could be used for the inclusion of the $`K^{\ast }(1410)`$ as well. Another way of modifying the meson dominance approach to the $`K_{\ell 3}`$ decay is to include the scalar resonance $`K_0^{\ast }(1430)`$. The advantage of this approach is that, as we will see, it does not modify the $`f_+(t)`$ form factor, which seems to be well described already with the $`K^{\ast }(892)`$ alone. The modification influences only the $`f_{-}(t)`$ and, consequently, the $`f_0(t)`$ form factors. This possibility was discussed early on, but at that time there was no known $`K`$-$`\pi `$ resonance with spin zero. To calculate the contribution to the $`K_{\ell 3}`$ matrix element from the Feynman diagram with the $`K_0^{\ast }(1430)^{-}`$ in the intermediate state (Fig. 1b), let us first define the weak decay constant of the $`K_0^{\ast }`$. As usual, it can be done by means of the matrix element of the vector part of the strangeness-changing quark current $$\langle 0|\overline{u}(0)\gamma ^\mu s(0)|p_{K_0^{\ast }}\rangle =if_{K_0^{\ast }}p^\mu .$$ (19) Then, the diagram in Fig. 1b yields $$\mathcal{M}_{1b}=\frac{G_b}{m_S^2-t}q^\mu \overline{u}\gamma _\mu (1-\gamma _5)v,$$ (20) where $`m_S`$ is the $`K_0^{\ast }(1430)`$ mass and $$G_b=\frac{G_F}{\sqrt{2}}V_{us}f_{K_0^{\ast }}g_{K_0^{\ast \pm }K^\pm \pi ^0}.$$ (21) Because (20) does not contain $`p^\mu `$, the constant $`C`$ and the form factor $`f_+(t)`$, as given in (9) and (10), will not change after adding (20) to (6).
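Before adding the scalar pole to the form factors, it may be useful to sketch numerically how the value quoted before Eq. (18) follows from Eq. (14). The p-wave running width, the lifetime value, and the restriction to the free-particle choice of $`\omega (s)`$ are assumptions of this sketch (the text averages over two $`\omega `$ choices):

```python
# Integration of Eq. (14) for tau- -> K- pi0 nu_tau, using (16) and (17).
import numpy as np
from scipy.integrate import quad

mtau, mK, mpi = 1.777, 0.493677, 0.134977     # GeV
mV, GV = 0.89166, 0.0508                      # K*(892) mass and width, GeV
Ga2 = 1.037e-12                               # G_a^(+-)^2 fixed from K_e3

def lam(x, y, z):
    return x*x + y*y + z*z - 2.0*(x*y + x*z + y*z)

def p_cm(s):                                  # K pi break-up momentum
    return np.sqrt(lam(s, mK**2, mpi**2)) / (2.0*np.sqrt(s))

def Gamma_V(s):                               # assumed p-wave running width
    return GV * (mV/np.sqrt(s)) * (p_cm(s)/p_cm(mV**2))**3

def dGamma(s):
    D = s - mV**2 + 1j*mV*Gamma_V(s)
    Fp = np.sqrt(Ga2) / D
    F0 = np.sqrt(Ga2) * (1.0 - s/(mV**2 - 1j*mV*Gamma_V(s))) / D
    L = lam(s, mK**2, mpi**2)
    return (mtau**2 - s)**2/s**3 * np.sqrt(L) * (
        (2.0*s + mtau**2)*L*abs(Fp)**2
        + 3.0*mtau**2*(mK**2 - mpi**2)**2*abs(F0)**2) / (6.0*(4.0*np.pi*mtau)**3)

G, _ = quad(dGamma, (mK + mpi)**2, mtau**2, points=[mV**2])
print(f"B = {G * 290.0e-15 / 6.582e-25:.2e}")  # x lifetime/hbar; cf. (3.9+-0.6)e-3
```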
The new $`f_-(t)`$ and $`f_0(t)`$ become $$f_-(t)=-\frac{m_K^2-m_\pi ^2}{m_V^2-t}+\frac{G_b}{G_a}\frac{m_V^2}{m_S^2-t},$$ (22) $$f_0(t)=1+\frac{G_b}{G_a}\frac{m_V^2}{m_K^2-m_\pi ^2}\frac{t}{m_S^2-t}.$$ (23) The parameter $`\lambda _0`$ now acquires the value $$\lambda _0=\frac{G_b}{G_a}\frac{m_V^2}{m_S^2}\frac{m_\pi ^2}{m_K^2-m_\pi ^2}.$$ (24) We see that a nonzero weak decay constant of $`K_0^*`$ leads to a deviation of the $`\lambda _0`$ parameter from zero. But to check whether a nonvanishing value of $`\lambda _0`$ is really caused by a $`K_0^*`$ in the intermediate state of the $`K_{\ell 3}`$ decay, we must look for other consequences of the weak interaction of $`K_0^*`$ and their consistency with the $`K_{\ell 3}`$ decay phenomenology. The most obvious candidate for such a program is the decay of the $`\tau ^-`$ lepton to a neutrino and $`K_0^*`$ or, to be more precise, to the $`K^-\pi ^0`$ system which originates from the strong decay of $`K_0^*`$. When calculating the branching fractions of the $`\tau ^-\to K^-\pi ^0\nu _\tau `$ and $`\tau ^-\to \overline{K}^0\pi ^-\nu _\tau `$ decays, we should include the possible interference between the $`K^*(892)^-`$ and $`K_0^*(1430)^-`$ channels, i.e., add coherently the diagrams (a) and (b) shown in Fig. 2. The resulting differential decay rate formula for $`\tau ^-\to K^-\pi ^0\nu _\tau `$ coincides with Eq. (14). The function $`F_+(s)`$ is again given by (16), because the scalar resonance cannot contribute to the $`J=1`$ channel, but $$F_0(s)=\frac{G_a^{(\pm )}\left[1-\omega (s)\right]}{s-m_V^2+im_V\mathrm{\Gamma }_V(s)}+\frac{G_b^{(\pm )}}{m_K^2-m_\pi ^2}\frac{s}{s-m_S^2+im_S\mathrm{\Gamma }_S(s)}.$$ (25) The changes needed to get a formula for the same quantity in $`\tau ^-\to \overline{K}^0\pi ^-\nu _\tau `$ are obvious. Now we have all the necessary formulas and constants prepared and can calculate the quantities of interest for various values of the slope parameter $`\lambda _0`$. The results are shown in Tab. I for the charged kaons and in Tab. II for the neutral kaons. Inspecting Tab. I we see that to get simultaneously the correct branching fractions of both the $`K_{\mu 3}^\pm `$ and $`\tau ^-\to K^-\pi ^0\nu _\tau `$ decays, we need to pick $`\lambda _0\approx 0.020`$. This is higher than the present recommended value $`(6\pm 7)\times 10^{-3}`$. But in view of the recent experiment with its $`0.062\pm 0.024`$, we do not consider the discrepancy between our value of $`\lambda _0`$ and the recommended one disastrous. Our value also agrees with $`\lambda _0=0.019`$ obtained on the basis of the Callan-Treiman relation , see . With reference to the experiment it should be said that $`\lambda _0>0.04`$ contradicts the estimate of the upper limit for non-$`K^*(892)^-`$ $`K^-\pi ^0`$ production in $`\tau ^-`$ decays. On the basis of $`\lambda _0\approx 0.020`$ we expect the branching fraction for producing the $`K^-\pi ^0`$ system in $`\tau ^-`$ decays via the scalar $`K_0^*(1430)^-`$ resonance to be $`\approx 2\times 10^{-4}`$. A similar analysis of the numbers in Tab. II points to a $`\lambda _0`$ for the $`K_{\mu 3}^0`$ decay somewhere around 0.030, which is in agreement with the recommended value , but higher than in the previous case. The higher value is required by the $`K_{\mu 3}^0`$ branching fraction. 
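As a small illustration of Eq. (24), one can invert it for the coupling ratio implied by the slope values just discussed (a sketch; the masses are PDG-like values of our own choosing, in GeV):

```python
# Inverting Eq. (24): the ratio G_b/G_a implied by a given slope lambda_0.
m_K, m_pi = 0.4937, 0.1350     # charged-kaon channel masses (illustrative)
m_V, m_S = 0.8917, 1.429       # K*(892) and K_0*(1430) masses

def Gb_over_Ga(lambda0):
    # invert  lambda0 = (G_b/G_a) (m_V^2/m_S^2) m_pi^2/(m_K^2 - m_pi^2)
    return lambda0 * (m_S**2/m_V**2) * (m_K**2 - m_pi**2)/m_pi**2

for lam0 in (0.020, 0.030):    # the charged- and neutral-kaon picks above
    print(f"lambda_0 = {lam0:.3f}  ->  G_b/G_a = {Gb_over_Ga(lam0):.2f}")
```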
As a consequence of this higher $`\lambda _0`$, the branching fraction for $`\overline{K}^0\pi ^-`$ production from the $`\tau ^-\to K_0^*(1430)^-\nu _\tau `$ decay, $`\approx 8\times 10^{-4}`$, is also higher than would correspond to $`K^-\pi ^0`$ production and isospin symmetry. On the basis of our estimates we expect the branching fraction of the $`\tau ^-\to K_0^*(1430)^-\nu _\tau `$ decay to be around 0.1%. In Fig. 3 we show the mass spectrum of the $`\overline{K}^0\pi ^-`$ system produced in the $`\tau ^-\to \overline{K}^0\pi ^-\nu _\tau `$ decays assuming $`\lambda _0=0.030`$. We concentrate on the $`K_0^*(1430)^-`$ mass region to show the different contributions to the final yield. The tail of the $`K^*(892)^-`$ resonance modifies the resonance shape significantly, whereas the interference between the two contributing intermediate states is negligible. We hope that in the near future the high-statistics and precise kaon decay data on one side, and data from the $`\tau `$-factories on the other, will make it possible to study the relations between the $`K_{\ell 3}`$ and $`\tau \to K\pi \nu `$ decays in more detail. Finally, we would like to comment on the role of the meson dominance model. It is clear that this approach cannot substitute for a more fundamental theory based on first principles. We cannot even say in advance for which processes it will offer a fair description and for which it will fail. But, in our opinion, it has an important role as a heuristic tool. In the cases where it succeeds, it shows which underlying quark diagrams are most important for understanding the dynamics of the process. Because of the necessity of convoluting the simple pure electroweak diagrams with QCD dynamics in order to form hadrons, the diagrams which seem to be important at the quark level may finally become unimportant, and vice versa. We discussed this aspect in some detail in , in connection with the $`K^+\to \pi ^+e^+e^-`$ decay. Also here, the ability of meson dominance to describe, with the same set of basic parameters, both the $`K_{\ell 3}`$ and $`\tau ^-\to K\pi \nu `$ decays hints that the most important mechanism of destroying strangeness is quark-antiquark annihilation into the $`W`$ boson. This picture differs completely from the usual notion, in which the non-strange quark is a spectator and proceeds through the process intact, whereas the strange quark converts to a non-strange one by emitting a $`W`$.

###### Acknowledgements.

The author is indebted to Dave Kraus and Julia Thompson for discussions. This work was supported by the U.S. Department of Energy under contract No. DOE/DE-FG02-91ER-40646 and by the Grant Agency of the Czech Republic under contract No. 202/98/0095. The hospitality of the CERN Theory Division, where a part of this work was done, is gratefully acknowledged.
# Self-organized criticality and interface depinning transitions

## I Introduction

Many systems respond to external perturbations by avalanches which behave intermittently with a power-law distribution of sizes. The paradigm of such self-organized critical (SOC) behavior is the so-called sandpile model . Under an infinitely slow drive it maintains a critical steady state, in which the internal dissipation balances the external drive. Candidates for such phenomena include granular piles , microfracturing processes , and earthquakes . Despite many theoretical and numerical investigations a thorough understanding of self-organized criticality is still lacking . Fundamental problems which need to be solved involve deriving a continuum theory which would, for instance, determine the upper critical dimension, above which mean-field theory applies . Similar behavior can be found in elastic interfaces driven through random media . They undergo a continuous (critical) depinning transition as the external driving force is varied. With increasing force one passes from a phase where the interface is pinned to a depinned phase where the interface moves with a constant velocity. Close to the critical point, the motion of the interface takes place in “bursts” with no characteristic size and the interface develops scaling described by critical exponents. These phenomena can be met in fluids driven through porous media , in domain walls in magnets (the Barkhausen effect) , in flux lines in type II superconductors , and in charge-density waves . In this paper we investigate the connections between self-organized criticality and depinning transitions . We first establish a generic, exact relation between sandpile models and driven interfaces which builds upon previous investigations of e.g. a charge-density wave model and a rice-pile model . Specifically, we discuss the Bak-Tang-Wiesenfeld (BTW) model and, as an example, a stochastic sandpile model through a mapping to a model for interface depinning with slightly different noise terms. The mapping enables one to understand the slow-drive criticality used in sandpile simulations in terms of standard concepts for driven interfaces. Using the continuum theory for interface depinning it follows for these sandpile models that the upper critical dimension $`d_c`$ is 4, and the relevant noise is of quenched type. The connection with interfaces allows us to establish a scaling relation for the correlation length exponent for sandpile models. In addition, we discuss in the interface representation sandpiles driven at fixed density, driven at boundaries, and extremal-drive criticality.

## II Sandpiles

The sandpile models are here defined as follows: to each site of a $`d`$-dimensional lattice (square in $`d=2`$) of size $`L^d`$ is associated a variable $`z_x`$ which counts the number of grains on that site. When the number of grains on a site exceeds a critical threshold, $`z_x>z_c`$, the site is active and it topples. This means that $`2d`$ grains are removed from that site and given to the $`2d`$ nearest neighbors (nn): $`z_x\to z_x-2d`$, $`z_{nn}\to z_{nn}+1`$ for all nn. Sandpiles are usually open, such that grains which topple out of the system are lost (in one dimension: $`z_0\equiv z_{L+1}\equiv 0`$). It is also possible, as discussed later, to use periodic boundary conditions. When there are no more active sites in the system, one grain is added to a randomly chosen site, $`z_x\to z_x+1`$. 
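These rules translate directly into a few lines of code; a minimal sketch in two dimensions (system size, seed, and the number of added grains are our own illustrative choices):

```python
# A minimal simulation of the BTW toppling rules just described:
# open boundaries, slow drive, avalanche sizes recorded per added grain.
import numpy as np
rng = np.random.default_rng(0)

L, d = 32, 2
z_c = 2*d - 1                          # BTW: a site topples when z > z_c
z = np.zeros((L, L), dtype=int)

def relax(z):
    """Topple all active sites until none remain; return avalanche size."""
    size = 0
    while True:
        active = np.argwhere(z > z_c)
        if len(active) == 0:
            return size
        for x, y in active:            # one parallel sweep
            z[x, y] -= 2*d
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < L and 0 <= v < L:
                    z[u, v] += 1       # grains leaving the lattice are lost

sizes = []
for _ in range(10000):                 # infinitely slow drive
    x, y = rng.integers(L, size=2)
    z[x, y] += 1
    sizes.append(relax(z))
print("mean avalanche size:", np.mean(sizes))  # grows toward the ~L^2 scaling
```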
The time and number of topplings until the system again contains no active sites define an avalanche and its internal lifetime and size. For the BTW model one has $`z_c=2d-1`$, whereas for stochastic sandpile models the threshold $`z_c`$ is not constant. Below we will focus on 1) the BTW model and 2) a stochastic model where the threshold $`z_c`$ is randomly chosen to be, for example, $`2d-1`$ or $`2d`$ after each toppling; i.e., $`P(z_c)`$, the probability distribution of the $`z_c`$’s, is any reasonable choice (i.e. decaying sufficiently fast). In terms of the internal avalanche time, the external drive is infinitely slow . After a transient, the system reaches a steady-state in which the slow drive and the dissipation of grains balance each other. The boundary conditions (BCs) are essential to obtain criticality and they are usually of the Dirichlet type, $`z\equiv 0`$, such that particles are dissipated to the outside . Alternatively, the SOC steady state can be reached by using bulk dissipation and, e.g., periodic BCs . In the SOC steady-state the probability to have avalanches of lifetime $`t`$ and size $`s`$ follows the power-law distributions $`p(t)=t^{-\tau _t}f_t(t/L^z)`$ and $`p(s)=s^{-\tau }f(s/L^D)`$, with $`s\sim t^{D/z}`$ and $`z(\tau _t-1)=D(\tau -1)`$ . Here the size scales as $`s\sim \ell ^D`$ and the (spatial) area as $`\ell ^d`$ (for compact avalanches), with $`\ell `$ the linear dimension. The fact that each added grain will perform of the order of $`L^2`$ topplings before leaving the system leads to the fundamental result $$\langle s\rangle \sim L^2$$ (1) independent of dimension . Thus, $`\tau =2-2/D`$ and $`\tau _t=1+(D-2)/z`$. Equation (1) yields $`\gamma /\nu =2`$, where $`\gamma `$ describes the divergence of the susceptibility (bulk response to a bulk field) near a critical point, $`\chi =\langle s\rangle \sim |\mathrm{\Delta }|^{-\gamma }`$, and $`\nu `$ is the (spatial) correlation length exponent, $`\xi \sim |\mathrm{\Delta }|^{-\nu }`$ . Here $`\mathrm{\Delta }=\zeta -\zeta _c`$ is the control parameter, $`\zeta =\langle z_x\rangle `$, and the critical value $`\zeta _c=\langle z_x\rangle _{\mathrm{SOC}}`$, where this average is taken in the slowly driven SOC steady-state with $`\mathrm{\Delta }=0`$ .

## III Interface depinning

For driven interfaces in random media critical scaling is obtained with a force $`F`$ close to a critical value $`F_c`$. Depinned interfaces move with a velocity $`v\sim f^\theta `$, with $`f=F-F_c\to 0`$. Pinned interfaces are blocked by pinning paths/manifolds which arise from the quenched disorder environment. Close to criticality, correlations scale as $`x^{2\chi }`$, with $`\chi `$ the roughness exponent, up to a correlation length $`\xi \sim |f|^{-\nu }`$. The characteristic time scale is $`\xi ^z`$, with $`z`$ the dynamic exponent, and it follows that $`\theta =\nu (z-\chi )`$ . Near the depinning transition, the simplest choice to describe the dynamics of the interface is the following continuum equation ('quenched Edwards-Wilkinson', or linear interface model, LIM) : $$\frac{\partial H}{\partial t}=\nabla ^2H+\eta (x,H)+F.$$ (2) Here, $`H(x,t)`$ measures the height of a given site $`x`$ at time $`t`$. The quenched noise $`\eta (x,H)`$ has correlations given by $`\langle \eta (x,H)\eta (x^{\prime },H^{\prime })\rangle =\delta ^d(x-x^{\prime })G(H-H^{\prime })`$, where $`G(H-H^{\prime })`$ decays rapidly, approximated by a delta function for random-field disorder. The critical exponents at the depinning transition have been calculated by $`ϵ`$-expansions and simulations . The upper critical dimension is $`d_c=4`$, above which mean-field theory applies . Below we will also discuss so-called columnar noise with $`G(H)\equiv 1`$ . 
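A discretized version of Eq. (2) is equally simple to simulate; the following sketch uses our own minimal choices (Gaussian random-field noise redrawn whenever a site advances, periodic boundaries) to show the vanishing of the velocity below threshold:

```python
# A minimal cellular-automaton discretization of the LIM, Eq. (2):
# H -> H + 1 wherever the discrete local force is positive; the quenched
# random-field noise eta(x, H) is redrawn each time a site advances.
import numpy as np
rng = np.random.default_rng(1)

L, T = 256, 2000

def velocity(F):
    H = np.zeros(L, dtype=int)
    eta = rng.normal(size=L)                        # eta(x, H(x))
    for _ in range(T):
        lap = np.roll(H, 1) + np.roll(H, -1) - 2*H  # periodic Laplacian
        move = lap + eta + F > 0
        H[move] += 1
        eta[move] = rng.normal(size=int(move.sum()))  # new height, new noise
    return H.mean() / T

for F in (0.0, 0.5, 1.0, 1.5):
    print(f"F = {F:.1f}  v = {velocity(F):.3f}")  # v ~ f^theta above F_c
```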
The interface equation (2) obeys an invariance (the statistical tilt symmetry) so that the static response scales as $`\chi (q,\omega =0)\sim q^{-2}`$, i.e., $$\gamma /\nu =2.$$ (3) For forces below $`F_c`$, the (bulk) response of the interface triggered by a small increase in $`F`$ scales as $`\chi _{\mathrm{bulk}}\equiv d\langle H\rangle /dF\sim (F_c-F)^{-\gamma }`$. Right at the critical point one can argue as follows : the roughness of the interface scales as $`\ell ^\chi `$ and, assuming that $`\mathrm{\Delta }\langle H\rangle `$ will scale in the same way, it follows that $$\gamma =1+\chi \nu .$$ (4) This yields $`\chi +1/\nu =2`$, i.e., there are only two independent exponents for depinning described by (2). The standard scaling relations are valid for interfaces with parallel dynamics: all sites with $`\partial H/\partial t>0`$ are updated in parallel. Note that interfaces with extremal (i.e., one unstable site at a time) and parallel drive have the same pinning paths. This manifests the Abelian character of the LIM in that the order in which active sites are advanced does not matter .

## IV Mapping of sandpile dynamics

Next we will show that the SOC critical behavior can be related exactly to the slowly driven depinning transition in an interface model. Thus, Eqs. (1) and (3) are equivalent, and Eq. (4) yields an expression for the correlation length exponent $`\nu `$ for sandpiles. The first step is to formulate the stopping of an avalanche in a SOC system as being due to a pinning path for an interface $`H(x,t)`$. This field is given in the continuum limit by $$H(x,t)=\int _0^t dt^{\prime }\rho (x,t^{\prime }),$$ (5) where the order parameter $`\rho (x,t)`$ is the activity (topplings) at site $`x`$ at time $`t`$, i.e., $`\langle \rho \rangle =\langle \dot{H}\rangle =v\sim f^\theta `$. In words: $`H(x,t)`$ counts the number of topplings at site $`x`$ up to time $`t`$. At the microscopic level this is an exact correspondence between a toppling and the interface advance. A toppling takes place when $`z_x>z_c`$, which by the relation $$z_x=z_c+\frac{\partial H}{\partial t},$$ (6) yields the dynamics $`\partial H/\partial t>0\Rightarrow H\to H+1`$, whereas $`H`$ is unchanged at the sites where no toppling takes place. The dynamics of sandpile models thus maps to discrete interface equations where an avalanche takes the interface $`H(x,t)`$ from one pinning path to the next in the quenched random medium . Since the interface counts topplings it does not move backwards, and thus Eq. (6) effectively reads $`\partial H/\partial t=\theta (z_x-z_c)`$, which is the standard discretization for depinning models . We are currently investigating the applicability of such discretization procedures to various models . Next, we express $`z_x`$ in terms of $`H(x,t)`$ for the specific models introduced above. The number of grains $`z_x`$ on site $`x`$ is $`z_x=N_{in}-N_{out}+F(x,t)`$, where $`N_{in}`$ is the number of grains added to this site from its $`2d`$ nearest neighbors (nn) and $`N_{out}`$ is the number of grains removed from this site due to topplings. The (external) driving force $`F(x,t)`$ counts the number of grains added from the outside. Since $`N_{in}=\mathrm{\Sigma }_{nn}H(x_{nn},t)`$ and $`N_{out}=2dH(x,t)`$ (for details and extensions to other models see ) we arrive at $$\frac{\partial H}{\partial t}=\nabla ^2H-z_c(x,H)+F(x,t),$$ (7) where $`\nabla ^2H`$ is the discrete Laplacian. The Dirichlet boundary conditions for $`z_x`$ become $`H\equiv 0`$ and the dynamics is parallel. Similar connections have been previously discussed for a charge-density wave model and for a boundary driven rice-pile model (see below). In the stochastic model, $`z_c(x,H)`$ is a random variable which changes after each toppling. 
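The bookkeeping in this mapping can be verified directly in a few lines; a sketch in $`d=1`$ (our own minimal implementation):

```python
# A direct check of the mapping behind Eq. (7) in d = 1: run a small BTW
# pile, let H count topplings and F count grains added at each site, and
# verify z_x = (discrete Laplacian of H)_x + F_x, with H = 0 outside.
import numpy as np
rng = np.random.default_rng(2)

L, z_c = 20, 1                        # 1d BTW: topple when z > z_c = 2d - 1
z = np.zeros(L, int)
H = np.zeros(L, int)
F = np.zeros(L, int)

for _ in range(500):
    i = rng.integers(L); z[i] += 1; F[i] += 1      # slow drive
    while (z > z_c).any():
        for x in np.flatnonzero(z > z_c):
            z[x] -= 2; H[x] += 1                   # one toppling
            if x > 0: z[x - 1] += 1
            if x < L - 1: z[x + 1] += 1            # edge grains are lost

Hpad = np.pad(H, 1)                   # Dirichlet BC: H = 0 at the boundary
lap = Hpad[:-2] + Hpad[2:] - 2*H
print(np.array_equal(z, lap + F))     # True: the bookkeeping of Eq. (7)
```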
Thus $`z_c(x,H)`$ acts like quenched random point-disorder, similar to $`\eta (x,H)`$ in Eq. (2). The BTW model has $`z_c`$ equal to a constant. The dissipation needed to reach the SOC state (loss of grains $`z_x`$) takes place through the BC of $`H\equiv 0`$. Using strong boundary pinning may thus give rise to the possibility of observing SOC experimentally in systems displaying a depinning transition. We emphasize that the mapping prescription can in principle be applied to any sandpile model. For other, more complicated, toppling rules additional terms like the “Kardar-Parisi-Zhang” nonlinearity $`|\nabla H|^2`$ may appear. On the internal (fast) time scale the driving force $`F(x,t)`$ does not act as a time-dependent noise but as columnar-type disorder. It counts all the grains added to the system by the slow drive, i.e. $`F(x,t)\to F(x,t)+1`$, and thus increases as a function of time in an uncorrelated fashion. In the opposite limit, when a grain is added (e.g.) each time step (“fast drive”), $`F(x,t)`$ would correspond to a time-dependent noise . Since $`H\equiv 0`$ at the boundary and $`F`$ increases as a function of time, the steady-state profile of $`H`$ will be close to a paraboloid or, in one dimension, a parabola (see also ). In the steady-state, just after an avalanche, the slowly increasing force $`F`$ is balanced by the negative curvature $`\nabla ^2H`$ of the paraboloid such that all sites are pinned ($`\partial H/\partial t\equiv 0`$). This illustrates that the interface effectively is driven by a force equal to the critical force $`F_c\equiv \zeta _c-\overline{z_c}`$, where $`\overline{z_c}`$ is the average of $`z_c(x,H)`$ in the steady state (for the BTW model trivially $`z_c=2d-1`$). Accordingly, the slow drive reaches the depinning critical point by adjusting the dissipation to the driving force such that the velocity (order parameter) is infinitesimal. The steady-state of the different sandpile models is described by an equation similar to Eq. (2). Thus the exponent relation (3) holds, and it is equivalent to Eq. (1), which describes the scaling of the average avalanche size (“susceptibility”). Assuming that a roughness exponent $`\chi `$ can be defined for sandpile models, one can argue that Eq. (4) is valid also for sandpiles. Furthermore, the upper critical dimension is $`d_c=4`$. Note that the ensuing noise will contain a columnar component due to the random drive $`F(x,t)`$. The one-dimensional BTW model has a critical force $`F_c=1-1=0`$, which corresponds to the critical point of the columnar-disorder interface model . In $`d>1`$, one has $`F_c<0`$, which in combination with the fact that the interface by definition cannot move backwards implies that the BTW model displays a more complicated behavior than the columnar models investigated in . Note also that avalanches in stochastic models will have a random structure due to the explicit point disorder, whereas avalanches in the BTW model show a more regular behavior . For the case of the boundary driven one-dimensional rice-pile models a similar mapping of the dynamics can be done, with an auxiliary field $`H(0,t)`$ and a drive implemented as $`H(0,t)\to H(0,t)+1`$ . The rice-pile models have Dirichlet BC at $`x=0`$ and Neumann BC (reflective) at $`x=L`$, which yields $`\langle s\rangle \sim L`$. In our picture the boundary drive is $`F(1,t)\to F(1,t)+1`$ and $`F(x>1,t)=0`$. Because of the Neumann BC \[$`H(L,t)=H(L+1,t)`$\] the steady state develops a parabolic profile with the left branch pointing up . 
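For concreteness, a sketch of a boundary-driven rice pile in slope variables (we follow standard Oslo-model conventions here: drive at the left wall, grains exiting at the right; these boundary details are our own assumption and differ slightly from the variant described above):

```python
# A boundary-driven rice-pile (Oslo-type) sketch in slope variables:
# random thresholds zc in {1, 2} are redrawn after every toppling.
import numpy as np
rng = np.random.default_rng(3)

L = 64
z = np.zeros(L, dtype=int)                 # local slopes
zc = rng.integers(1, 3, size=L)            # quenched random thresholds

sizes = []
for _ in range(20000):
    z[0] += 1                              # drive at the closed left wall
    s = 0
    while (z > zc).any():
        for x in np.flatnonzero(z > zc):
            s += 1
            if x < L - 1:
                z[x] -= 2; z[x + 1] += 1   # one grain hops to the right
            else:
                z[x] -= 1                  # grain leaves at the open end
            if x > 0:
                z[x - 1] += 1
            zc[x] = rng.integers(1, 3)     # new threshold after toppling
    sizes.append(s)
print("mean avalanche size:", np.mean(sizes), "~ L =", L)
```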
## V Various ensembles

We next consider the more straightforward cases in which sandpiles are studied with periodic boundary conditions (amounting to $`H(1)=H(L)`$ in one dimension). In such cases the SOC steady state can be tuned into by various approaches. It can be reached by using a carefully tuned bulk dissipation $`ϵ\sim L^{-2}`$ . In this case, periodic BCs are also the best, since the scaling of the system is then not a mixture of boundary and bulk scaling . As above, we arrive at $$\frac{\partial H}{\partial t}=\nabla ^2H-z_c(x,H)-ϵ(x,H)+F(x,t)$$ (8) with $`H`$ periodic. As in Eq. (7), the force $`F(x,t)`$ is columnar and increases on the slow time scale. The dissipation $`ϵ(x,H)`$ now takes into account all the grains removed before the site at $`x`$ topples. It increases with a (small) probability only when a site topples, and this means that $`ϵ`$ explicitly depends on $`H`$. Therefore, a dissipation event effectively corresponds to a shift in the $`z_c`$ value. Thus, one obtains that the BTW model with bulk dissipation contains a very weak point-disorder component (since the increases in $`\overline{F}`$ equal, in the statistical sense, the increases in $`ϵ`$). Though point-disorder is in general expected to be a relevant perturbation, in the infinite system size limit the Larkin length associated with the cross-over from columnar behavior diverges, and thus the avalanche behavior is not governed by the weak point disorder. By this argument the BTW models with or without bulk dissipation are equivalent to the same interface depinning equation (2), in accordance with simulations of the BTW and bulk dissipation models . Note that the boundary critical behavior of the BTW model depends on the specific boundary condition: Dirichlet BCs display a different behavior , whereas Neumann BCs (reflective) are similar to the bulk. In the case of periodic BCs and bulk dissipation, the $`H`$-field fluctuates around an average flat profile. The terms $`F(x,t)`$ and $`ϵ(x,H)`$ will balance each other in the steady state, with an average difference such that $`F_c=\zeta _c-\overline{z_c}<0`$. For larger dissipation rates the system moves away from the critical point and, in analogy to Eq. (3), the bulk susceptibility scales as $`\chi _{\mathrm{bulk}}\sim 1/ϵ\sim \xi _ϵ^{1/\nu _ϵ}`$, with $`\nu _ϵ=1/2`$ . The fixed density (or energy) drive previously used in simulations corresponds to a normal driven interface. Thus, the situation is such that $`H(x,t=0)=0`$, $`\zeta =L^{-d}\mathrm{\Sigma }_xF(x,0)`$ with $`F(x,t)=F(x,0)`$, periodic BCs, and $`ϵ(x,H)\equiv 0`$ such that no ’grains’ are lost. The control parameter $`\mathrm{\Delta }=\zeta -\zeta _c`$ ($`=F-F_c\equiv f`$) is varied and criticality is only obtained when $`\mathrm{\Delta }=0`$; note that choosing $`\zeta `$ corresponds to using a spatially dependent force $`F(x,0)`$ with $`\zeta =\langle F(x,0)\rangle `$. Here, the system is not generally in the SOC steady-state, but by letting the control parameter $`\mathrm{\Delta }\to 0`$ one reaches the critical point . The noise is set at the beginning of an avalanche at the columnar values $`F(x,0)`$. Depending on the exact nature of the initial configuration one may observe a different dynamic behavior, but the steady-state behavior should correspond to the slowly driven case . In “microcanonical” simulations one has dissipation operating on the slow time scale with exactly the same rate as $`F(x,t)`$. 
Thus microcanonical simulations correspond to fixed density simulations with a specific initial configuration: after each avalanche, the time is reset to zero, the force is replaced with $`F\to F+\nabla ^2H`$, and the forces at $`x^{\prime }`$ ($`x^{\prime \prime }`$) are increased (decreased) by one unit, where $`x^{\prime }`$ and $`x^{\prime \prime }`$ are randomly chosen sites. Finally the interface is initialized, $`H\equiv 0`$. Since the $`\nabla ^2H`$ term does not introduce correlations, this new starting condition is equivalent to the fixed density case but with the initial configuration chosen to be in the SOC steady state. Combining the scaling relations (3) and (4) it follows that $$2+d=D+1/\nu ,$$ (9) where $`D=d+\chi `$. In addition, the average area scales as $`\langle \ell ^d\rangle \sim L^{1/\nu }`$. These relations are also valid for sandpiles, and Eq. (9) provides estimates for $`\nu `$: in $`d=1`$, $`\nu \approx 1.30`$, and in $`d=2`$, $`\nu \approx 0.78`$. Numerical results yield $`\nu =1.25(5)`$ ($`d=1`$, stochastic model) and $`\nu =0.79(4)`$ ($`d=2`$, BTW model) . Note, however, that the estimates quoted for $`\nu `$ for sandpile models depend on the relation $`D=d+\chi `$, which means that the underlying assumption is that the roughness exponent $`\chi `$ can be defined for slowly driven sandpile models.

## VI Conclusions

In summary, we have started from the depinning equation (2) to discuss the continuum description of self-organized critical sandpile models. Thus, their upper critical dimension is $`d_c=4`$ and a scaling relation for the correlation length exponent $`\nu `$ is obtained. We find that the BTW model has columnar disorder $`F`$ on the avalanche time scale, whereas the stochastic models have explicitly point disorder included. Other models with slightly modified toppling rules (e.g., the Manna model ) may or may not belong to the same classes, depending on the noise terms arising from the mapping (this we are currently investigating further in ). The present approach shows that the relevant noise for sandpiles is ’quenched’. The physics of sandpiles is such that the random decisions or events (grain deposition, choices for thresholds) are frozen into the dynamics of a site as long as it is stable, and their memory decays only slowly as the activity goes on. A recent field theory for $`\rho (x,t)`$ used analogies from systems with absorbing states and assumed that the noise was Reggeon field-theory like (i.e., time-dependent and not quenched) . Physically, the effect which is not incorporated in such Gaussian correlations is that the pinning forces along the interface select a pinning path in the random media which stops the avalanche. The mapping between interface and sandpile dynamics allows one to characterize the sandpile universality classes by the quenched noise in the interface equations. It also allows one to gain novel insight about the previously introduced ways of reaching the depinning critical point: balancing the force with dissipation (slow drive, or self-organized criticality), tuning the average force (as for fixed density sandpiles), tuning the interface velocity (extremal drive criticality), and finally tuning the driving force. This becomes possible because of the diffusive character of interface or sandpile dynamics and because of the Abelian character of the linear interface equation. K. B. L. is supported by the Carlsberg Foundation.
String theories (at weak coupling) generically exhibit high-temperature instabilities due to a density of states growing exponentially with the energy. At the Hagedorn temperature , a winding state becomes tachyonic and the string theory enters a new phase with a non-zero value of this state. The existence of the Hagedorn temperature can be established by studying the mass formula, derived from the modular invariant partition function at finite temperature . Finite temperature is formally equivalent to a compactification of (euclidean) time on a circle with radius $`(2\pi T)^{-1}`$. Boundary conditions depending on statistics break supersymmetry and, because of modular invariance, modify GSO projections. To obtain information on the phases in the vicinity of a Hagedorn transition, one approach is to construct an effective field theory for the light and tachyonic states. In Ref. , this problem has been solved for $`N_4=4`$ strings ($`N_D`$ is the number of $`D`$-dimensional supersymmetries). An $`N_4=4`$ supergravity is defined by a parametrization of its scalar manifold and a choice of the gauging applied to the vector fields present in the Yang-Mills and supergravity multiplets. The gaugings related to, for instance, torus-compactified heterotic strings or Scherk-Schwarz supersymmetry breaking are known. The knowledge of the specific gauging corresponding to finite temperature allowed the construction of effective supergravity Lagrangians for the temperature instabilities of perturbative heterotic and type II strings and the study of the high-temperature phases . In five dimensions, heterotic strings on $`T^4\times S^1`$, IIA and IIB strings on $`K_3\times S^1`$ are related by $`S`$- and $`T`$-dualities. A non-perturbative extension of the perturbative description of strings at finite temperature, with a universal (duality-invariant) temperature modulus, should then display an interesting structure of thermal phases. These theories are effectively four-dimensional at finite temperature, and an effective Lagrangian description of the thermal phases would be a four-dimensional supergravity. This construction has been performed in Ref. (see also ), and the present contribution is a summary of this work. Our procedure is firstly to write the finite-temperature generalization of the $`N_4=4`$ non-perturbative BPS mass formula, taking into account the expected dualities and the various perturbative heterotic, IIA and IIB limits. Secondly, an effective supergravity is constructed by identifying the appropriate field content (potentially tachyonic states and the minimal set of necessary moduli), the parametrization of the scalar manifold and, most importantly, the gauging for $`N_4=4`$ BPS states at finite temperature. Notice that our analysis will be restricted to BPS states breaking half of the supersymmetries. 
In terms of the (heterotic) string coupling $`g_H`$ and the $`T^2`$ torus radii $`R`$ and $`R_6`$, the supersymmetric BPS mass formula is : $$\begin{array}{ccc}\hfill \mathcal{M}^2& =& \left[\frac{m}{R}+\frac{nR}{\alpha _H^{\prime }}+g_H^2\left(\frac{\stackrel{~}{m}^{\prime }}{R_6}+\frac{\stackrel{~}{n}^{\prime }R_6}{\alpha _H^{\prime }}\right)\right]^2+\left[\frac{m^{\prime }}{R_6}+\frac{n^{\prime }R_6}{\alpha _H^{\prime }}+g_H^2\left(\frac{\stackrel{~}{m}}{R}+\frac{\stackrel{~}{n}R}{\alpha _H^{\prime }}\right)\right]^2\hfill \\ & =& \frac{\left|m+ntu+i(m^{\prime }u+n^{\prime }t)+is\left[\stackrel{~}{m}+\stackrel{~}{n}tu-i(\stackrel{~}{m}^{\prime }u+\stackrel{~}{n}^{\prime }t)\right]\right|^2}{\alpha _H^{\prime }tu}.\hfill \end{array}$$ (1) The integers $`m,n,m^{\prime },n^{\prime }`$ are the four electric momentum and winding numbers for the four $`U(1)`$ charges from $`T^2`$ compactification. The numbers $`\stackrel{~}{m},\stackrel{~}{n},\stackrel{~}{m}^{\prime },\stackrel{~}{n}^{\prime }`$ are their magnetic non-perturbative partners, from the heterotic point of view. In the finite temperature case, the radius $`R`$ becomes the inverse temperature, $`R=(2\pi T)^{-1}`$, and the above mass formula is then modified to: $$\mathcal{M}_T^2=\left(\frac{m+Q^{\prime }+\frac{kp}{2}}{R}+kT_{p,q,r}R\right)^2-2T_{p,q,r}\delta _{|k|,1}\delta _{Q^{\prime },0},$$ (2) where we have set $`m^{\prime }=n^{\prime }=\stackrel{~}{m}=\stackrel{~}{n}=0`$ to retain only the lightest states and $`Q^{\prime }`$ is the (space-time) helicity charge. The integer $`k`$ is the common divisor of $`(n,\stackrel{~}{m}^{\prime },\stackrel{~}{n}^{\prime })\equiv k(p,q,r)`$ and $`T_{p,q,r}`$ is an effective string tension $$T_{p,q,r}=\frac{p}{\alpha _H^{\prime }}+\frac{q}{\lambda _H^2\alpha _H^{\prime }}+\frac{rR_6^2}{\lambda _H^2(\alpha _H^{\prime })^2},$$ with $`\lambda _H^2=g_H^2RR_6/\alpha _H^{\prime }`$ (six-dimensional heterotic string coupling). The integer $`\stackrel{~}{m}^{\prime }=kq`$ is the wrapping number of the heterotic five-brane around $`T^4\times S_R^1`$, while $`\stackrel{~}{n}^{\prime }=kr`$ corresponds to the same wrapping number after performing a T-duality along the $`S_{R_6}^1`$ direction, which is orthogonal to the five-brane. All winding numbers $`n,\stackrel{~}{m}^{\prime },\stackrel{~}{n}^{\prime }`$ are magnetic charges from the field theory point of view. Their masses are proportional to the temperature radius $`R`$ and are not thermally shifted. A nicer writing of the effective string tension $`T_{p,q,r}`$ is $$T_{p,q,r}=\frac{p}{\alpha _H^{\prime }}+\frac{q}{\alpha _{IIA}^{\prime }}+\frac{r}{\alpha _{IIB}^{\prime }},$$ (3) where $`\alpha _H^{\prime }=2\kappa ^2s`$, $`\alpha _{IIA}^{\prime }=2\kappa ^2t`$ and $`\alpha _{IIB}^{\prime }=2\kappa ^2u`$ when expressed in Planck units. The mass formula (2) possesses the same duality properties as the zero-temperature expression (1), and $`R`$, the inverse temperature, is a duality-invariant quantity. Eq. (2) gives the states and critical values of the temperature radius at which a tachyon appears. Each corresponds to the Hagedorn transition of a perturbative string, either heterotic, or IIA or IIB. It also contains new information (on critical values of $`\lambda _H`$ and/or $`R_6`$) since it also decides which tachyon arises first when $`T\sim 1/R`$ increases. We now want to construct an effective (four-dimensional) supergravity for the five-dimensional strings at finite temperature. To describe instabilities, we may truncate the $`N_4=4`$ spectrum and retain only the necessary moduli and potentially tachyonic states, as indicated by mass formula (2). The resulting truncated theory will have $`N_4=1`$ supersymmetry (the four gravitinos are treated identically by finite temperature effects) 
and include chiral multiplets only. The scalar manifold of a generic, unbroken, $`N_4=4`$ theory is $$\left(\frac{Sl(2,R)}{U(1)}\right)_S\times G/H,\qquad G/H=\left(\frac{SO(6,r+n)}{SO(6)\times SO(r+n)}\right)_{T_I,\varphi _A}.$$ (4) The manifold $`G/H`$ of the vector multiplets splits into a part that includes the $`6r`$ moduli $`T_I`$, and a second part with the infinite number $`n\to \mathrm{\infty }`$ of BPS states. We need to keep three moduli $`S`$, $`T`$ and $`U`$ (for the temperature radius $`R`$, the torus radius $`R_6`$ and the string coupling) and three pairs of winding states $`Z_A^\pm `$, $`A=1,2,3`$, to generate the instabilities, as indicated by mass formula (2). Thus, $`r=2`$ and $`n=6`$ in Eq. (4). The truncation from $`N_4=4`$ to $`N_4=1`$ proceeds then as for the untwisted sector of a $`Z_2\times Z_2`$ orbifold. The first $`Z_2`$ leaves $`N_4=2`$ unbroken and the manifold is $$\left(\frac{Sl(2,R)}{U(1)}\right)_S\times \left(\frac{Sl(2,R)}{U(1)}\right)_T\times \left(\frac{Sl(2,R)}{U(1)}\right)_U\times \left(\frac{SO(4,6)}{SO(4)\times SO(6)}\right)_{\varphi _A}.$$ The first three factors are vector multiplets ($`S`$, $`T`$, $`U`$), the last one a hypermultiplet component ($`Z_A^\pm `$). The second $`Z_2`$ truncation cuts the hypermultiplet component in two Kähler manifolds: $$\left(\frac{SO(2,3)}{SO(2)\times SO(3)}\right)_{Z_A^+}\times \left(\frac{SO(2,3)}{SO(2)\times SO(3)}\right)_{Z_A^-}.$$ The structure of the truncated scalar manifold and the Poincaré $`N_4=4`$ constraints on the scalar fields indicate that the Kähler potential can be written as $$K=-\mathrm{log}(S+\overline{S})-\mathrm{log}(T+\overline{T})-\mathrm{log}(U+\overline{U})-\mathrm{log}Y(Z_A^+,\overline{Z}_A^+)-\mathrm{log}Y(Z_A^-,\overline{Z}_A^-),$$ (5) with $`Y(Z_A^\pm ,\overline{Z}_A^\pm )=1-2Z_A^\pm \overline{Z}_A^\pm +(Z_A^\pm Z_A^\pm )(\overline{Z}_B^\pm \overline{Z}_B^\pm )`$. This Kähler function can be determined for instance by comparing the gravitino mass terms in the $`N_4=1`$ Lagrangian and in the $`Z_2\times Z_2`$ truncation of $`N_4=4`$ supergravity. The last piece to define the effective supergravity is the superpotential. Finding the correct gauging of $`N_4=4`$ supergravity for mass formula (2) requires some guesswork. We find: $$W=2\sqrt{2}\left[\frac{1}{2}(1-Z_A^+Z_A^+)(1-Z_B^-Z_B^-)+(TU-1)Z_1^+Z_1^-+SUZ_2^+Z_2^-+STZ_3^+Z_3^-\right].$$ (6) Eqs. (5) and (6) define an effective supergravity which includes in its solutions the thermal phases of five-dimensional $`N_4=4`$ heterotic, type IIA and type IIB strings, and respects all expected duality symmetries. The scalar potential is complicated but a closed expression can be worked out (the Kähler metric can be explicitly inverted). It is stationary at $`Z_A^\pm =0`$ (zero winding background value). This solution corresponds to the low-temperature phase. 
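A quick numerical check of (5) and (6) (a sketch with our own test values): at $`Z_A^\pm =0`$ one has $`e^K=1/(8stu)`$ and $`W=\sqrt{2}`$, so $`e^K|W|^2=1/(4stu)`$, which is the gravitino mass squared in Planck units quoted below in Eq. (10):

```python
# Check of the Kahler potential (5) and superpotential (6) at Z_A^± = 0:
# e^G = e^K |W|^2 should reduce to 1/(4 s t u), cf. Eq. (10) below.
import numpy as np

def eG(s, t, u):
    eK = 1.0/((2*s)*(2*t)*(2*u))   # Y(0, 0) = 1 for both Y factors
    W = 2*np.sqrt(2)*0.5           # only the (1/2)(1-0)(1-0) term survives
    return eK*abs(W)**2

s, t, u = 1.3, 0.7, 2.1            # arbitrary positive test values
print(np.isclose(eG(s, t, u), 1.0/(4*s*t*u)))   # True
```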
Tachyons only arise in directions $`Re(Z_A^++Z_A^-)\equiv z_A`$, and truncating further to these directions leads to a very simple potential: $$\begin{array}{ccc}\hfill V& =& V_1+V_2+V_3,\hfill \\ \hfill \kappa ^4V_1& =& \frac{4}{s}\left[(tu+\frac{1}{tu})H_1^4+\frac{1}{4}(tu-6+\frac{1}{tu})H_1^2\right],\hfill \\ \hfill \kappa ^4V_2& =& \frac{4}{t}\left[suH_2^4+\frac{1}{4}(su-4)H_2^2\right],\hfill \\ \hfill \kappa ^4V_3& =& \frac{4}{u}\left[stH_3^4+\frac{1}{4}(st-4)H_3^2\right].\hfill \end{array}$$ (7) In these expressions, $`H_A=z_A/(1-z_Bz_B)`$, $`s=ReS`$, $`t=ReT`$, $`u=ReU`$, heterotic temperature duality is $`tu\to (tu)^{-1}`$ and IIA–IIB duality is $`t,H_2\leftrightarrow u,H_3`$. Limited space permits us to only briefly review the analysis of the effective theory.

1) In four-dimensional Planck units, the temperature is duality-invariant: $$T=(2\pi R)^{-1},\qquad R^2=\kappa ^2stu.$$ (8)

2) The low-temperature phase, with zero background values of the winding states $`Z_A^\pm `$, is universal to the three strings. It is in some sense a self-dual phase, as can be seen for instance in the pattern of supersymmetry breaking. Since $$(𝒢_S^S)^{-1/2}𝒢_S=(𝒢_T^T)^{-1/2}𝒢_T=(𝒢_U^U)^{-1/2}𝒢_U=1$$ (9) (other derivatives of $`𝒢=K+\mathrm{log}|W|^2`$ vanish), the canonically normalized Goldstino is the sum of the fermionic partners of $`S`$, $`T`$ and $`U`$. And the gravitino mass is $$m_{3/2}^2=\kappa ^{-2}e^𝒢=\frac{1}{4\kappa ^2stu}=\frac{1}{4R^2}=(\pi T)^2=\frac{1}{2\alpha _H^{\prime }tu}=\frac{1}{2\alpha _{IIA}^{\prime }su}=\frac{1}{2\alpha _{IIB}^{\prime }st}.$$ (10) This phase certainly exists in the perturbative regime of each string.

3) The boundaries of the low-temperature phase are the values of $`s`$, $`t`$, and $`u`$ at which a winding state becomes tachyonic: $$\begin{array}{cccc}\hfill \mathrm{heterotic}\mathrm{tachyon}& \hfill Re(Z_1^++Z_1^-)& \mathrm{if}& (\sqrt{2}+1)^2>tu>(\sqrt{2}-1)^2,\hfill \\ \hfill \mathrm{type}\mathrm{IIA}\mathrm{tachyon}& \hfill Re(Z_2^++Z_2^-)& \mathrm{if}& su<4,\hfill \\ \hfill \mathrm{type}\mathrm{IIB}\mathrm{tachyon}& \hfill Re(Z_3^++Z_3^-)& \mathrm{if}& st<4.\hfill \end{array}$$ (11) The boundaries are then $`tu=(\sqrt{2}+1)^2`$, $`su=4`$ and $`st=4`$, or, in heterotic variables, $$R=\sqrt{\alpha _H^{\prime }}\frac{\sqrt{2}+1}{\sqrt{2}},\qquad R=2g_H^2R_6,\qquad R=2g_H^2\frac{\alpha _H^{\prime }}{R_6}=\frac{4\sqrt{2}\kappa ^2}{R_6},$$ (12) with $`\alpha _H^{\prime }=2\kappa ^2s`$. At these values, $`T`$ is a Hagedorn temperature.

4) Type II instabilities arise when $`su<4`$ (IIA) or $`st<4`$ (IIB). From the heterotic point of view, they are avoided as long as $$2\pi T<\frac{1}{4\sqrt{2}\kappa ^2}\mathrm{min}(R_6;\alpha _H^{\prime }/R_6).$$ (13) And type II instabilities are unavoidable if $$2\pi T>\frac{2^{1/4}}{4\kappa g_H}.$$ (14)

5) In the high-temperature heterotic phase, $`(\sqrt{2}+1)^2>tu>(\sqrt{2}-1)^2`$ and $`su>4`$, $`st>4`$. It cannot be reached (from low temperature) for any value of the radius $`R_6`$ if the (lowest) heterotic Hagedorn temperature verifies inequality (14), which translates into $$g_H^2>g_{\mathrm{crit}.}^2=\frac{\sqrt{2}+1}{2\sqrt{2}}\approx 0.8536.$$ (15) Only type II thermal instabilities exist in this heterotic strong-coupling regime and the value of $`R_6/\sqrt{\alpha _H^{\prime }}`$ decides whether the type IIA or IIB instability will have the lowest critical temperature. 
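These boundary values can be read off numerically from the quadratic terms of the potential (7); a small sketch (our own check, assuming nothing beyond Eq. (7) itself):

```python
# Tachyon boundaries (11) from the quadratic coefficients of Eq. (7):
# a direction H_A is unstable at the origin when its mass term is negative.
import numpy as np

def tachyons(s, t, u):
    return {
        "heterotic": t*u - 6 + 1.0/(t*u) < 0,  # roots at tu = (sqrt(2)±1)^2
        "IIA": s*u - 4 < 0,
        "IIB": s*t - 4 < 0,
    }

# (sqrt(2)+1)^2 and (sqrt(2)-1)^2 are indeed the roots of tu + 1/tu = 6:
for tu in ((np.sqrt(2) + 1)**2, (np.sqrt(2) - 1)**2):
    print(np.isclose(tu + 1.0/tu, 6.0))        # True, True

print((np.sqrt(2) + 1)/(2*np.sqrt(2)))         # g_crit^2 of Eq. (15), ~0.8536
print(tachyons(s=8.0, t=1.0, u=1.0))           # heterotic tachyon only
```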
If on the contrary the heterotic string is weakly coupled, $`g_H<g_{\mathrm{crit}.}`$, the high-temperature heterotic phase is reached for values of the radius $`R_6`$ verifying $$2\sqrt{2}g_H^2(\sqrt{2}-1)<\frac{R_6}{\sqrt{\alpha _H^{\prime }}}<\frac{1}{2\sqrt{2}g_H^2(\sqrt{2}-1)}.$$ (16) The large and small $`R_6`$ limits, with fixed $`g_H`$, again lead to type IIA or IIB instabilities.

6) In the high-temperature heterotic phase, after solving for $`Z_A^\pm `$, the potential becomes $$\kappa ^4V=-\frac{1}{s}\frac{(tu+\frac{1}{tu}-6)^2}{16(tu+\frac{1}{tu})}.$$ It has a stable minimum for fixed $`s`$ (for fixed $`\alpha _H^{\prime }`$). In units of $`\alpha _H^{\prime }`$, the temperature is fixed, $`tu=1=2R^2/\alpha _H^{\prime }=R^2/(\kappa ^2s)`$. The transition from the low-temperature vacuum is due to a condensation of the heterotic winding mode $`Re(Z_1^++Z_1^-)`$ or, equivalently, to a condensation of the NS five-brane in the type IIA picture. At $`tu=1`$, $`\kappa ^4V=-1/(2s)`$ and the heterotic dilaton $`s`$ runs away. The effective supergravity is solved by a background with a dilaton linear in a space-like direction. The fate of supersymmetry is interesting. Inequality (15) indicates that the high-T heterotic phase only exists for weakly-coupled heterotic strings, and by duality in non-perturbative type II regimes. Accordingly, supersymmetry breaking arises from $`s`$: only $`𝒢_S`$ is non-zero. It turns out, as observed in Ref. , that the spectrum of moduli and heterotic perturbative winding modes $`Z_1^\pm `$ is supersymmetric (boson-fermion mass degeneracy). But this degeneracy does not exist for the heterotic dyonic modes $`Z_2^\pm `$ and $`Z_3^\pm `$: $$\begin{array}{cccc}\hfill Z_2^\pm :& \hfill m_{bosons}^2& =& m_{fermions}^2\pm 2su\,m_{3/2}^2,\hfill \\ \hfill Z_3^\pm :& \hfill m_{bosons}^2& =& m_{fermions}^2\pm 2st\,m_{3/2}^2.\hfill \end{array}$$ The significance of this mass pattern is somewhat ambiguous. The residual space-time symmetry with a linear dilaton is three-dimensional, and local supersymmetry does not imply mass degeneracy in this dimension . This mass pattern however survives in the five-dimensional type IIA and IIB limits, indicating then broken supersymmetry.

7) The linear dilaton background leads to a central charge deficit $`\delta \widehat{c}=4`$ (superstring counting: $`\delta \widehat{c}=\frac{3}{2}\delta c`$). It can be described by a non-critical string with the corresponding central charge. The appropriate string background is a non-compact parafermionic space, in which the degeneracy of perturbative bosonic and fermionic fluctuations is ensured by $`N_4=2`$ (or $`N_3=4`$) supersymmetry. This agrees with the space-time solution with linear dilaton: the background is left invariant by half of the supercharges of the $`N_4=4`$ algebra.

Acknowledgements

I wish to thank the organizers of the 32nd International Symposium Ahrenshoop on the Theory of Elementary Particles. This work is supported by the European Union (contracts TMR-ERBFMRX-CT96-0045 and -0090), the Swiss Office for Education and Science and the Swiss National Science Foundation.
# Mesoscopic Cooperative Emission from a Disordered System

## I Introduction

The study of cooperative phenomena in optics was initiated by the pioneering work of Dicke. The underlying physics of the cooperative emission can be readily understood using a classical approach. Suppose that a large number $`N`$ of identical oscillators with frequency $`\omega _0`$ are confined within a small volume with characteristic size $`L\ll 2\pi c/\omega _0=\lambda _0`$, where $`\lambda _0`$ is the radiation wavelength; this is referred to as a “point” sample. If $`\tau `$ is the radiative lifetime of an isolated oscillator, then according to Dicke, the $`N`$ eigenmodes of the system of oscillators consist of one mode with a short lifetime $`\tau /N\ll \tau `$, and $`N-1`$ modes with lifetimes much longer than $`\tau `$ \[by a factor $`(\lambda _0/L)^2`$\]. Correspondingly, the emission spectrum of this system consists of superimposed broad (superradiant) and narrow (subradiant) bands. The intensity ratio of these bands is determined by the details of the excitation. This type of lifetime redistribution is caused by the interactions among the oscillators through their radiation fields. Certainly, the classical picture does not describe all aspects of the cooperative emission. In fact, the original work of Dicke primarily addressed the time evolution of the radiation emission, provided that at the initial moment, $`t=0`$, all the oscillators are coherently excited. For this situation, the classical picture helps in understanding that the radiation is released during a short time, $`\tau /N`$; understanding of the initial stages of the emission process (the delay time statistics) requires, however, a quantum description. The original treatment in Ref. also ignored the dipole–dipole interactions, which give rise to a spread in the oscillator frequencies (dephasing). The question whether or not this dephasing would completely destroy the cooperative emission is very non-trivial and was addressed in a number of later works. In the previous considerations of cooperative emission, it was assumed that all $`N`$ oscillators (atoms, molecules or excitons) have the same frequencies. Such a restriction was adequate for the experimental situation in both gases and single crystals. To the best of our knowledge, the only account of disorder in the frequencies of the oscillators was given in Ref. , which addressed the transient behavior of the cooperative emission from molecular aggregates. The case of $`J`$-aggregates corresponds to a symmetrical arrangement of oscillators in a circle. The authors treated the disorder within perturbation theory and averaged the second-order correction to the decay rates (the first-order correction vanishes upon averaging) with a Gaussian distribution. The advantage of the work in Ref. is that the nearest-neighbor dipole–dipole interactions were taken into account exactly. The drawback is in the perturbative approach, which rules out certain qualitative physical effects (see below). Whereas Ref. addressed a rather particular situation, the following general questions might be asked. Suppose that the oscillator frequencies are randomly distributed with a characteristic width $`\mathrm{\Omega }`$. Obviously, as $`\mathrm{\Omega }`$ increases, it would eventually destroy the cooperative features in the emission spectrum. Then what is the critical magnitude of $`\mathrm{\Omega }`$? 
How does this magnitude depend on the parameters of the system $`N`$, $`L`$, and $`\lambda _0`$? What is the structure of the emission spectrum when the disorder is smaller than critical? These questions are no longer purely academic, due to the recent advances in the field of laser action in $`\pi `$-conjugated polymers. Some experiments provide strong evidence for cooperative emission from an ensemble of excitons in these materials, for excitation intensities exceeding a certain characteristic threshold. On the other hand, it is well known that films of $`\pi `$-conjugated polymers are strongly disordered (in the absence of disorder, cooperative emission by a polymer chain was considered in Ref. ). They contain impurities and defects which break the polymer chains into segments of relatively short conjugation length, with a distribution depending on the film quality. This has a direct effect on the exciton energy, $`\hbar \omega `$, since it has been found that $`\hbar \omega `$ directly depends on the chain conjugation length. The questions formulated above are addressed in the present paper. We study here the effect of disorder on the cooperative emission spectrum of a system of classical oscillators. We consider the situation of incoherent excitation, which is most relevant to the experiment. In contrast to Ref. , we are interested in the nonaveraged (but universal) properties of the emission spectrum. In other words, our goal is to assess the mesoscopic aspects of the cooperative emission. By mesoscopic we mean that, in the presence of disorder, the emission spectrum of a large number of oscillators develops a fine structure. The actual shape of this spectral structure represents the fingerprints of the distribution of the oscillator frequencies and positions for a given realization. At the same time, the characteristic period and amplitude of the fine structure are determined by the net parameters of the system: $`N`$, $`L`$, and $`\mathrm{\Omega }`$. The paper is organized as follows. In Section II we derive the expression for the emission spectrum of a system of classical oscillators coupled by their radiation fields. In Section III we study in detail a simplified model in which the coupling among the oscillators is independent of distance. The eigenmodes of a “point” sample in the presence of disorder are analyzed in Section IV. The universal properties of the mesoscopic structure in the emission spectrum for small and large (but still smaller than $`\lambda _0`$) sizes $`L`$ are discussed in Sections V and VI, respectively. In Section VII the effect of the dipole–dipole interactions is addressed. The conclusions are given in Section VIII.

## II The Basic Equations

We consider a system of $`N`$ oscillators located at random points $`𝐫_i`$, with frequencies $`\omega _i`$ randomly distributed around a central frequency $`\omega _0`$ with a characteristic width $`\mathrm{\Omega }`$. Each oscillator is driven by the radiation field $`𝐄(𝐫,t)`$ produced by all oscillators. The equation of motion for the displacement $`u_i`$ of a given oscillator $`i`$ reads $$\ddot{u}_i+\omega _i^2u_i=\frac{e}{m}𝐧_i𝐄(𝐫_i,t),$$ (1) where $`e`$ and $`m`$ are the dipole characteristics (effective charge and mass) and $`𝐧_i`$ is a unit vector in the direction of the dipole moment. 
The current density associated with the oscillators’ motion can be written as $$𝐉(𝐫,t)=e\underset{i}{}𝐧_i\dot{u}_i\delta (𝐫-𝐫_i).$$ (2) The current $`𝐉`$ plays the role of a source, which generates the electric field $`𝐄(𝐫,t)`$ according to $$\mathrm{\Delta }𝐄-\frac{1}{c^2}\ddot{𝐄}=\frac{4\pi }{c^2}\dot{𝐉},$$ (3) where $`c`$ is the speed of light. Within the classical approach, the emission spectrum of the system should be calculated as follows. We assume that at the initial moment, $`t=0`$, all oscillators are excited with different phases $`\varphi _i`$, and that the radiation field at the initial moment is zero, $`𝐄(𝐫,0)=0`$. The evolution of $`𝐄`$ with time can then be obtained by solving Eqs. (1)–(3). After taking the limit $`r\to \mathrm{\infty }`$ and expanding the field into harmonics, the spectral intensity can be obtained as $`I(\omega )=|𝐄(\mathrm{\infty },\omega )|^2`$. To carry out this program, it is convenient to employ the Laplace transformation. The transformed functions $`\overline{u}_i(p)`$ and $`\overline{𝐄}(𝐫,p)`$ satisfy the following system of equations $$(\omega _i^2+p^2)\overline{u}_i(p)=\frac{e}{m}𝐧_i\overline{𝐄}(𝐫_i,p)+u_0(p\mathrm{cos}\varphi _i-\omega _i\mathrm{sin}\varphi _i),$$ (4) $$\mathrm{\Delta }\overline{𝐄}(𝐫,p)-\frac{p^2}{c^2}\overline{𝐄}(𝐫,p)=\frac{4\pi e}{c^2}\underset{i}{}𝐧_i\left[p^2\overline{u}_i(p)-u_0(p\mathrm{cos}\varphi _i-\omega _i\mathrm{sin}\varphi _i)\right]\delta (𝐫-𝐫_i),$$ (5) where $`u_0`$ and $`\varphi _i`$ parametrize the initial displacement and velocity of the $`i`$th oscillator. The solution of Eq. (5) for $`\overline{𝐄}(𝐫,p)`$ can be presented as a superposition of eigenmodes, $`𝐄_\nu (𝐫)`$, of the wave equation for the electromagnetic field, $$\mathrm{\Delta }𝐄_\nu (𝐫)+\frac{\omega _\nu ^2}{c^2}𝐄_\nu (𝐫)=0,$$ (6) where $`\omega _\nu `$ is the eigenfrequency. Assuming that the modes are normalized, $`\int d𝐫𝐄_\nu ^2(𝐫)=1`$, we obtain the following expression for $`\overline{𝐄}(𝐫,p)`$ $$\overline{𝐄}(𝐫,p)=-4\pi e\underset{i\nu }{}\left[p^2\overline{u}_i(p)-u_0(p\mathrm{cos}\varphi _i-\omega _i\mathrm{sin}\varphi _i)\right]\frac{𝐧_i𝐄_\nu (𝐫_i)}{\omega _\nu ^2+p^2}𝐄_\nu (𝐫).$$ (7) Substituting Eq. (7) into Eq. (4), we get a system of coupled equations for the amplitudes $`\overline{u}_i(p)`$ $$(\omega _i^2+p^2)\overline{u}_i(p)=-\frac{4\pi e^2}{m}\underset{j\nu }{}\frac{\left[𝐧_i𝐄_\nu (𝐫_i)\right]\left[𝐧_j𝐄_\nu (𝐫_j)\right]}{\omega _\nu ^2+p^2}\left[p^2\overline{u}_j(p)-u_0(p\mathrm{cos}\varphi _j-\omega _j\mathrm{sin}\varphi _j)\right]+u_0(p\mathrm{cos}\varphi _i-\omega _i\mathrm{sin}\varphi _i).$$ (9) To simplify Eq. (9), it is convenient to introduce new variables $`v_i(p)`$: $$v_i(p)=\frac{p^2\overline{u}_i(p)}{u_0}-(p\mathrm{cos}\varphi _i-\omega _i\mathrm{sin}\varphi _i).$$ (10) Then Eq. (9) takes the form $$(\omega _i^2+p^2)v_i+\underset{j}{}S_{ij}v_j=\omega _i^2(\omega _i\mathrm{sin}\varphi _i-p\mathrm{cos}\varphi _i),$$ (11) where the coefficients $$S_{ij}(p)=\frac{4\pi e^2p^2}{m}\underset{\nu }{}\frac{\left[𝐧_i𝐄_\nu (𝐫_i)\right]\left[𝐧_j𝐄_\nu (𝐫_j)\right]}{\omega _\nu ^2+p^2}$$ (12) describe the coupling between oscillators $`i`$ and $`j`$ via the radiation field. Let us now express the intensity, $`I(\omega )`$, in terms of the variables $`v_i(p)`$. The expression for $`\overline{𝐄}(𝐫,p)`$ follows from Eqs. 
(7) and (10): $$\overline{𝐄}(𝐫,p)=-4\pi e\underset{i\nu }{}v_i(p)\frac{𝐧_i𝐄_\nu (𝐫_i)}{\omega _\nu ^2+p^2}𝐄_\nu (𝐫).$$ (13) The Fourier transform of the electric field is obtained by replacing $`p`$ by $`i\omega `$ in Eq. (13). In the limit $`r\to \mathrm{\infty }`$, only the pole $`\omega _\nu =\omega `$ contributes to the sum over $`\nu `$, so that $$𝐄(𝐫,\omega )|_{r\to \mathrm{\infty }}\propto \underset{i\nu }{}v_i\left[𝐧_i𝐄_\nu (𝐫_i)\right]𝐄_\nu (𝐫)\delta (\omega _\nu ^2-\omega ^2).$$ (14) This corresponds to taking the continuum limit for the electromagnetic modes. The terms proportional to $`\left[𝐄_\nu (𝐫)𝐄_\mu (𝐫)\right]`$, which appear after calculating $`|𝐄(𝐫,\omega )|^2`$ from Eq. (14), oscillate rapidly if $`\mu \ne \nu `$. Therefore, only the terms with $`\mu =\nu `$ survive at large $`r`$. These terms contain products of the form $`\left[𝐧_i𝐄_\nu (𝐫_i)\right]\left[𝐧_j𝐄_\nu (𝐫_j)\right]`$. Note that the same products enter into the coupling coefficients, $`S_{ij}`$, defined by Eq. (12). This allows us to present the final expression for the spectral intensity in a compact form $$I(\omega )\propto \underset{ij}{}v_i(\mathrm{Im}S_{ij})v_j^{*},$$ (15) where $`v_i`$ and $`S_{ij}`$ are calculated at $`p=i\omega `$. We assume that the spread of the oscillator frequencies due to the disorder is much smaller than the central frequency, $`\mathrm{\Omega }\ll \omega _0`$. This means that the frequency dependence of the coupling constants is weak, so that $`S_{ij}(i\omega )`$ can be evaluated at $`\omega =\omega _0`$. The real part of $`S_{ij}`$, which comes from the principal value of the sum over modes in Eq. (12), diverges for $`i=j`$. This divergence is the manifestation of the Lamb shift, well known in quantum electrodynamics, and can be absorbed into $`\omega _i`$. At the same time, the imaginary part of $`S_{ii}`$, which results from the pole $`\omega _\nu =\omega _0`$, is finite. It determines the radiative lifetime, $`\tau `$, of an individual oscillator via the relation $`\mathrm{Im}S_{ii}(i\omega _0)=2\omega _0/\tau `$. For a single oscillator in vacuum, the modes $`𝐄_\nu `$ are simply plane waves, and the summation over $`\nu `$ in Eq. (12) recovers the textbook result $$\tau =\frac{3mc^3}{e^2\omega _0^2}.$$ (16) For $`i\ne j`$, the coupling $`S_{ij}`$ between two oscillators depends on the ratio $`r_{ij}/\lambda _0`$, where $`r_{ij}`$ is the distance between the oscillators, and $`\lambda _0=2\pi c/\omega _0`$ is the radiation wavelength. For $`r_{ij}\gg \lambda _0`$, both real and imaginary parts of $`S_{ij}`$ oscillate rapidly with $`r_{ij}`$, and the effect of coupling is negligibly small for a large ensemble of oscillators. For $`r_{ij}\ll \lambda _0`$, the real part of $`S_{ij}`$ represents the dipole–dipole interaction of the oscillators $`i`$ and $`j`$. It is convenient to present $`S_{ij}`$ in the form $$S_{ij}=\frac{2\omega _0}{\tau }(\beta _{ij}+i\alpha _{ij}),$$ (17) where $`\beta _{ij}`$ and $`\alpha _{ij}`$ are the dimensionless matrices of coupling between the oscillators, defined as $$\beta _{ij}=\left(\frac{\lambda _0}{2\pi r_{ij}}\right)^3\left[(𝐧_i𝐧_j)-\frac{3(𝐧_i𝐫_{ij})(𝐧_j𝐫_{ij})}{r_{ij}^2}\right],$$ (18) and $$\alpha _{ij}=𝐧_i𝐧_j-\frac{1}{5}\left(\frac{2\pi r_{ij}}{\lambda _0}\right)^2\left[(𝐧_i𝐧_j)-\frac{(𝐧_i𝐫_{ij})(𝐧_j𝐫_{ij})}{2r_{ij}^2}\right].$$ (19) Turning back to Eq. (11), we note that since the distribution of oscillator frequencies is relatively narrow, that is $`\mathrm{\Omega }\ll \omega _0`$, we can make some simplifications. 
Namely, for $`p=i\omega `$, the factor $`(\omega _i^2+p^2)`$ on the lhs can be replaced by $`2\omega _0(\omega _i-\omega )`$, and the rhs can be written as $`-i\omega _0^3e^{i\varphi _i}`$. Finally, after rescaling $`v_i`$ by the factor $`\omega _0^2`$, Eq. (11) takes the form $$(\omega _i-\omega )v_i+\frac{1}{\tau }\underset{j}{}(\beta _{ij}+i\alpha _{ij})v_j=-\frac{i}{2}e^{i\varphi _i}.$$ (20) Equation (20), together with Eqs. (15) and (17)–(19), allows us to calculate the spectral intensity $`I(\omega )`$ for any set of initial oscillator phases. For $`\alpha _{ij}=\beta _{ij}=0`$ ($`i\ne j`$), the eigenfrequencies of the system are simply the frequencies of the individual oscillators, and the emission spectrum represents a superposition of Lorentzian peaks centered at $`\omega _i`$. In the presence of nondiagonal coupling, the eigenfrequencies are those of cooperative eigenmodes which, in turn, are determined by the imaginary part of the coupling, $`\alpha _{ij}`$. In the experiment, the measured spectrum represents the result of averaging over many excitation pulses. In order to simulate the experimental situation, we will assume the phases $`\varphi _i`$ to be uncorrelated random numbers and average the result for the spectral intensity over all $`\varphi _i`$.

## III A Simple Model

In this section we consider a simplified situation, in which Eq. (20) with random frequencies $`\omega _i`$ can be solved exactly and the expression for the spectral intensity can be obtained in a closed form. Following Dicke, we disregard the dipole–dipole interactions by setting $`\beta _{ij}=0`$. Although this approximation is rather common, later on we will discuss it in more detail. Turning to $`\alpha _{ij}`$, we note that since $`L^2/\lambda _0^2\ll 1`$, the second term in Eq. (19) is a small correction to the first term. We therefore approximate the non-diagonal elements of $`\alpha _{ij}`$ by replacing $`r_{ij}^2/\lambda _0^2`$ with its average, $$\alpha _{ij}=\alpha 𝐧_i𝐧_j,\qquad \alpha _{ii}=1,$$ (21) where the coupling constant $`\alpha `$, with a typical value $`(1-\alpha )\sim L^2/\lambda _0^2\ll 1`$, is the same for all pairs. Note, however, that the disorder coming from random orientations of $`𝐧_i`$ is still included. Later we will use this model for the analysis of the system (20) with realistic $`\alpha _{ij}`$.

### A General solution

For the model coupling (21), the system of equations (20) takes the form $$\left[\omega _i-\omega +\frac{i}{\tau }(1-\alpha )\right]v_i+\frac{i}{\tau }\alpha 𝐧_i𝐬=-\frac{i}{2}e^{i\varphi _i},$$ (22) with the vector $`𝐬`$ defined as $$𝐬=\underset{i=1}{\overset{N}{}}v_i𝐧_i.$$ (23) A closed equation for $`𝐬`$ can be obtained by multiplying $`v_i`$, found from Eq. (22), by $`𝐧_i`$ and taking the sum over $`i`$. This yields $$𝐬+\frac{i\alpha }{\tau }\underset{i}{}\frac{𝐧_i(𝐧_i𝐬)}{\omega _i-\omega +i(1-\alpha )/\tau }=-\frac{i}{2}\underset{i}{}\frac{𝐧_ie^{i\varphi _i}}{\omega _i-\omega +i(1-\alpha )/\tau }.$$ (24) Solving Eqs. (22) and (24) for $`v_i`$ and substituting the result into Eq. 
(15), we obtain for the spectral intensity after some algebra
$$I(\omega )\propto \mathrm{Im}\left[f(\omega )-\frac{i\alpha }{\tau }\sum_{\mu \nu }g_\mu ^{-}\left(1+\frac{i\alpha }{\tau }F\right)_{\mu \nu }^{-1}g_\nu ^{+}\right],$$ (25)
where we introduced a function
$$f(\omega )=\sum_i\frac{1}{\omega _i-\omega +i(1-\alpha )/\tau },$$ (26)
a vector
$$g_\mu ^{\pm }(\omega )=\sum_i\frac{e^{\pm i\varphi _i}n_{i\mu }}{\omega _i-\omega +i(1-\alpha )/\tau },$$ (27)
and a tensor
$$F_{\mu \nu }(\omega )=\sum_i\frac{n_{i\mu }n_{i\nu }}{\omega _i-\omega +i(1-\alpha )/\tau },$$ (28)
where $`n_{i\mu }`$ are the components of $`\mathbf{n}_i`$.

### B Identical oscillators

Let us first consider the case of $`N`$ identical oscillators having the same frequencies $`\omega _i=\omega _0`$, and dipole moments all aligned in the same direction. Then we find from Eq. (28),
$$F_{\mu \nu }(\omega )=\delta _{\mu \nu }f(\omega )=\frac{\delta _{\mu \nu }N}{\omega _0-\omega +i(1-\alpha )/\tau },$$ (29)
and after averaging over the initial phases $`\varphi _i`$, we obtain from Eq. (25)
$$I(\omega )\propto \left[\frac{(N-1)(1-\alpha )/\tau }{(\omega _0-\omega )^2+(1-\alpha )^2/\tau ^2}+\frac{(1-\alpha +\alpha N)/\tau }{(\omega _0-\omega )^2+(1-\alpha +\alpha N)^2/\tau ^2}\right].$$ (30)
The emission spectrum is a superposition of a wide and a narrow Lorentzian with spectral widths $`\mathrm{\Gamma }\simeq N/\tau `$ and $`\gamma =(1-\alpha )/\tau `$, respectively. In accordance with the classical result, the eigenmodes of the system of $`N`$ identical oscillators coupled via their radiation field represent a single superradiant mode with short radiation time $`\tau /N`$, and $`N-1`$ subradiant modes with radiation time much longer than that for an isolated oscillator, $`\tau /(1-\alpha )\gg \tau `$. The superradiant mode is a symmetric superposition of oscillator states and is strongly coupled to the radiation field, whereas the coupling of the subradiant modes to the radiation field is suppressed. In this case, the frequencies of all $`N-1`$ subradiant modes are degenerate, and the spectrum consists of a single narrow peak of width $`\gamma `$ on top of a much broader band of width $`\mathrm{\Gamma }`$, as shown in Fig. 1. As can be seen, with decreasing $`1-\alpha `$, the height of the subradiant peak increases, whereas the amplitude of the superradiant band diminishes.

### C Random frequencies

Consider now the case when the oscillator frequencies are random, but orientational disorder is still absent, i.e. all dipoles are aligned in one direction. Again we have $`F_{\mu \nu }(\omega )=\delta _{\mu \nu }f(\omega )`$, with $`f(\omega )=f^{\prime }(\omega )+if^{\prime \prime }(\omega )`$ given by Eq. (26). Then a straightforward evaluation of Eq. (25) yields (after averaging over the phases)
$$I(\omega )\propto \left[\frac{\frac{\alpha }{\tau }f_1^{\prime }\left(1-\frac{\alpha }{\tau }f^{\prime \prime }\right)+\left(\frac{\alpha }{\tau }\right)^2f_1^{\prime \prime }f^{\prime }}{\left(1-\frac{\alpha }{\tau }f^{\prime \prime }\right)^2+\left(\frac{\alpha }{\tau }f^{\prime }\right)^2}-f^{\prime \prime }\right],$$ (31)
where the function $`f_1(\omega )=f_1^{\prime }(\omega )+if_1^{\prime \prime }(\omega )`$ is defined as
$$f_1(\omega )=\sum_i\frac{1}{\left[\omega _i-\omega +i(1-\alpha )/\tau \right]^2}.$$ (32)
In order to clarify the underlying physics, it is useful to express the spectral intensity in terms of the system eigenmodes.
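Since Eq. (31) is a closed-form expression, it can be evaluated directly. The following sketch is illustrative only: the parameter values are arbitrary assumptions, and $`I(\omega )`$ is computed up to the undetermined proportionality constant.

```python
import numpy as np

# Illustrative evaluation of the closed-form intensity, Eq. (31), for aligned
# dipoles and random frequencies, using f (Eq. 26) and f_1 (Eq. 32).
# Parameter values are arbitrary assumptions; I is defined up to a prefactor.
rng = np.random.default_rng(1)
N, tau, Omega, alpha = 40, 1.0, 100.0, 0.99
w_i = rng.uniform(-Omega, Omega, size=N)        # frequencies relative to omega_0

w = np.linspace(-1.2 * Omega, 1.2 * Omega, 20001)
den = w_i[None, :] - w[:, None] + 1j * (1.0 - alpha) / tau
f = (1.0 / den).sum(axis=1)                     # Eq. (26)
f1 = (1.0 / den**2).sum(axis=1)                 # Eq. (32)

a = alpha / tau
num = a * f1.real * (1.0 - a * f.imag) + a**2 * f1.imag * f.real
dnm = (1.0 - a * f.imag) ** 2 + (a * f.real) ** 2
I = num / dnm - f.imag                          # Eq. (31), up to a prefactor

print("grid points with I above half-maximum:", int((I > 0.5 * I.max()).sum()))
```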
The eigenfrequencies $`\tilde{\omega }_k`$ are determined by the equation:
$$1+\frac{i\alpha }{\tau }f(\tilde{\omega }_k)=0.$$ (33)
Then the intensity, Eq. (31), can simply be rewritten as
$$I(\omega )\propto \sum_k\frac{\tilde{\omega }_k^{\prime \prime }}{(\omega -\tilde{\omega }_k^{\prime })^2+\tilde{\omega }_k^{\prime \prime 2}},$$ (34)
where $`\tilde{\omega }_k^{\prime }=\mathrm{Re}\,\tilde{\omega }_k`$ is the eigenmode frequency and $`\tilde{\omega }_k^{\prime \prime }=\mathrm{Im}\,\tilde{\omega }_k`$ characterizes its width. Note that for $`\omega _i=\omega _0`$, we have $`N-1`$ degenerate eigenmodes with $`\tilde{\omega }_k^{\prime }=\omega _0`$, and Eq. (34) turns into Eq. (30).

### D Disorder in orientations

In the presence of the orientational disorder, the spectral intensity (25) depends, in principle, on the direction of each $`\mathbf{n}_i`$. However, for large $`N`$, one can replace the product $`n_{i\mu }n_{i\nu }`$ in Eq. (28) for $`F_{\mu \nu }`$ with its average,
$$\left\langle n_{i\mu }n_{i\nu }\right\rangle =\frac{1}{3}\delta _{\mu \nu }.$$ (35)
Thus, we have $`F_{\mu \nu }(\omega )=\frac{1}{3}\delta _{\mu \nu }f(\omega )`$, so that the expression for the spectral intensity is similar to Eq. (31) with the only difference that in the first term, the functions $`f(\omega )`$ and $`f_1(\omega )`$ are now multiplied by $`1/3`$. This results in a shrinkage of the superradiant emission band by the same factor. At the same time, the width of the subradiant peak increases by a factor of 3. Thus, the orientational disorder has no qualitative effect on the cooperative emission spectrum. The reason is that the coupling (21) is separable, that is, it depends on orientations via the product $`\mathbf{n}_i\cdot \mathbf{n}_j`$. Furthermore, for realistic $`\alpha _{ij}`$ given by Eq. (19), the main (first) term has the same separable form; therefore, the orientational disorder does not qualitatively affect the cooperative emission spectrum and will be disregarded in the rest of the paper.

### E Numerical results

In Fig. 2 we plot the normalized spectral intensity in the absence of coupling, i.e. $`\alpha =0`$, with increasing number of oscillators. Each spectrum is calculated with a computer-generated set of $`N`$ random frequencies $`\omega _i`$, which we have chosen, for simplicity, to be uniformly distributed in the interval $`(\omega _0-\mathrm{\Omega },\omega _0+\mathrm{\Omega })`$. For convenience, the spectra corresponding to different $`N`$ are normalized and shifted in the vertical direction. It can be seen that the peaks are resolved in the spectrum as long as the disorder, $`\mathrm{\Omega }`$, is larger than $`N/\tau `$. We also see that for sufficiently large $`N`$, the intensity peaks are washed out from the spectrum. In Figs. 3–6 we present the results for $`I(\omega )`$ calculated using Eq. (31) for several values of $`\alpha `$ close to 1. The striking feature of the emission spectrum is its mesoscopic character. In the presence of disorder, the narrow subradiant peak of Eq. (30) (see Fig. 1) is not smeared out due to a large spread in $`\omega _i`$, as in the case of uncoupled oscillators (see Fig. 2), but rather splits into a multitude of narrow peaks corresponding to the eigenmodes of the disordered system. Furthermore, although the curves are calculated with different random sets of frequencies, the overall pattern of the emission spectrum exhibits certain universal features. In particular, it can be seen by comparing Figs.
3–6 that with increasing $`N`$, the random structure of the spectrum undergoes several transformations, and that the characteristic $`N`$, at which the changes in the pattern occur, is sensitive to the proximity of $`\alpha `$ to 1. This indicates a rather non–trivial structure of the eigenmodes, which we address in the next section.

## IV Structure of eigenmodes

The eigenmodes of a system of $`N`$ oscillators coupled through their radiation field are determined by the homogeneous part of Eq. (20) (we set $`\beta _{ij}=0`$ in this section)
$$(\omega _i-\omega )v_i+\frac{i}{\tau }\sum_j\alpha _{ij}v_j=0.$$ (36)
Since the typical values of $`(1-\alpha _{ij})\sim r_{ij}^2/\lambda _0^2`$ are small, we split the second term in Eq. (36) into a sum of the main contribution, with $`\alpha _{ij}=1`$, and a correction proportional to $`(\alpha _{ij}-1)`$. Analogously to the consideration in the previous section, we rewrite Eq. (36) as
$$(\omega _i-\omega )v_i+\frac{i}{\tau }s(1+\sigma _i)=0,$$ (37)
with
$$s=\sum_jv_j,\qquad \sigma _i=\frac{1}{s}\sum_j(\alpha _{ij}-1)v_j.$$ (38)
Expressing $`v_i`$ from Eq. (37) and taking the sum over $`i`$, we obtain
$$1+\frac{i}{\tau }\sum_j\frac{1+\sigma _j}{\omega _j-\omega }=0.$$ (39)
The equation for $`\sigma _i`$ follows from substituting $`v_j`$, found from Eq. (37), into the definition of $`\sigma _i`$, Eq. (38),
$$\sigma _i+\frac{i}{\tau }\sum_j\frac{\alpha _{ij}-1}{\omega _j-\omega }(1+\sigma _j)=0.$$ (40)
The solutions of Eqs. (39) and (40) determine the complex frequencies of the eigenmodes, $`\tilde{\omega }_k=\tilde{\omega }_k^{\prime }+i\tilde{\omega }_k^{\prime \prime }`$.

### A “Point” sample

Let us first analyze the effect of disorder on a system with all $`\alpha _{ij}=1`$, corresponding to the limit of a “point” sample, i.e. $`(L/\lambda _0)^2\ll 1`$. With $`\sigma _i=0`$, the real and imaginary parts of Eq. (39) read
$$\frac{1}{\tau }\sum_j\frac{\omega _j-\omega ^{\prime }}{(\omega _j-\omega ^{\prime })^2+\omega ^{\prime \prime 2}}=0,$$ (41)
$$\frac{1}{\tau }\sum_j\frac{\omega ^{\prime \prime }}{(\omega _j-\omega ^{\prime })^2+\omega ^{\prime \prime 2}}=1.$$ (42)
This system of equations has two different solutions with a crossover between them governed by the parameter $`\mathrm{\Omega }\tau /N`$. For large disorder, $`\mathrm{\Omega }\gg N/\tau `$, it can be readily seen that only one term in each of Eqs. (41) and (42) contributes to the sum. In this case, the solutions are simply $`\omega =\omega _j+i/\tau `$, as if the oscillators were uncoupled. In fact, this conclusion could be anticipated. The above parameter represents the ratio of the mean frequency spacing (MFS) of oscillators, $`\mathrm{\Omega }/N`$, and the inverse lifetime of an individual oscillator, $`1/\tau `$; when the former is much larger than the latter, $`\mathrm{\Omega }/N\gg 1/\tau `$, the oscillators do not “feel” each other. In the opposite case of large $`N`$ (or weak disorder), $`N\gg \mathrm{\Omega }\tau `$, the analysis of Eqs. (41) and (42) is carried out as follows. First note that in Eq. (41), which determines the real parts of the eigenfrequencies, $`\tilde{\omega }_k^{\prime }`$, all the terms in the sum contribute now. Let us drop $`\omega ^{\prime \prime 2}`$ in the denominator of Eq. (41) (this step will be justified below).
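The two regimes described here are easy to check numerically: for a “point” sample, Eq. (36) with $`\alpha _{ij}=1`$ amounts to diagonalizing the non-Hermitian matrix $`M_{ij}=\omega _i\delta _{ij}+i/\tau `$, whose eigenvalues are the complex eigenfrequencies. The short sketch below uses assumed parameter values; it is an illustration, not the authors' code.

```python
import numpy as np

# Illustrative check of the "point"-sample regimes: the eigenmodes of Eq. (36)
# with alpha_ij = 1 are the eigenvalues of the non-Hermitian matrix
# M_ij = omega_i * delta_ij + (i/tau); Im(eigenvalue) is the mode width.
# Parameter values are assumptions chosen to sit in the regime N >> Omega*tau.
rng = np.random.default_rng(2)
N, tau, Omega = 200, 1.0, 5.0
w_i = rng.uniform(-Omega, Omega, size=N)

M = np.diag(w_i).astype(complex) + (1j / tau) * np.ones((N, N))
widths = np.sort(np.linalg.eigvals(M).imag)

print("superradiant width (expect ~ N/tau)          :", widths[-1])
print("typical subradiant width                     :", np.median(widths[:-1]))
print("estimate tau*Omega^2/N^2 (Eq. 43 below)      :", tau * Omega**2 / N**2)
```

One superradiant eigenvalue carries almost the entire radiative width, while the remaining $`N-1`$ widths are tiny, in line with the estimate derived next.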
Then we obtain that the solutions $`\tilde{\omega }_k^{\prime }`$ are given by the extrema of the polynomial $`P(\omega )=\prod_j(\omega _j-\omega )`$. These determine the frequencies of the $`N-1`$ subradiant modes. At the same time, in Eq. (42), which determines the imaginary parts of the eigenfrequencies, $`\tilde{\omega }_k^{\prime \prime }`$, all the terms in the sum are positive, so that one should keep only the term with $`\omega _j`$ closest to $`\tilde{\omega }_k^{\prime }`$. Since $`(\tilde{\omega }_k^{\prime }-\omega _j)\sim \mathrm{\Omega }/N`$ for this term, we obtain the following estimate for the width of the subradiant mode: $`\tilde{\omega }_k^{\prime \prime }\sim \gamma `$, where
$$\gamma \sim \tau \mathrm{\Omega }^2/N^2.$$ (43)
It can be seen that $`\gamma `$ is much smaller than the MFS (by the factor $`\mathrm{\Omega }\tau /N\ll 1`$). This justifies neglecting $`\omega ^{\prime \prime 2}`$ in the denominators of Eqs. (41) and (42). The superradiant solution of Eqs. (41) and (42) corresponds to the case $`\omega ^{\prime \prime }\gg \mathrm{\Omega }`$. Then we readily obtain $`\tilde{\omega }^{\prime }=N^{-1}\sum_j\omega _j`$ and $`\tilde{\omega }^{\prime \prime }=\mathrm{\Gamma }\simeq N/\tau `$. We see that, indeed, $`\mathrm{\Gamma }/\mathrm{\Omega }\sim N/\mathrm{\Omega }\tau \gg 1`$. Therefore, the superradiant band in the spectral intensity is not affected by the disorder. We therefore conclude that cooperative emission is not destroyed by disorder. The spectrum of the system consists of a single superradiant and $`N-1`$ subradiant eigenmodes. For large $`N/\mathrm{\Omega }\tau `$, the subradiant modes are well defined, since their spectral widths are much smaller than the MFS.

### B Limit of weak disorder

In this subsection, we address a nontrivial question about the fate of cooperative eigenmodes when the disorder in frequencies vanishes. In this limit, $`\mathrm{\Omega }\tau /N\to 0`$, all oscillator frequencies become equal, i.e. $`\omega _i\to \omega _0`$. In the absence of cooperative coupling, $`\alpha _{ij}=0`$ ($`i\ne j`$), the eigenfrequencies of the system are those of individual oscillators with the energy width much larger than the MFS, $`1/\tau \gg \mathrm{\Omega }/N`$, so that the spectrum of the system is degenerate. However, the situation is more complicated in the presence of cooperative coupling, $`\alpha _{ij}\ne 0`$. Consider the case of a “point sample”, $`\alpha _{ij}=1`$. In this case, the width of subradiant modes is given by Eq. (43). The important point is that although the MFS diminishes with decreasing $`\mathrm{\Omega }`$, the width $`\gamma `$ decreases even faster: $`\gamma /(\mathrm{\Omega }/N)\sim \mathrm{\Omega }\tau /N\to 0`$. In other words, in the presence of even a very weak disorder, the narrow subradiant peaks do not overlap. Therefore, the cooperative modes remain distinct even though the “bare” oscillator modes were already degenerate. In the case of general coupling, the width $`\gamma `$ of subradiant modes for small values of $`\mathrm{\Omega }\tau /N`$ will be determined by the fluctuations of $`\alpha _{ij}`$, as we will see below.

### C Fluctuations of $`\alpha _{ij}`$

Let us turn to the case with realistic coupling $`\alpha _{ij}`$. The eigenfrequencies $`\tilde{\omega }_k`$ should now be determined from Eq.
(36), which in component form reads
$$\frac{1}{\tau }\sum_j\frac{(\omega _j-\omega ^{\prime })(1+\sigma _j^{\prime })-\omega ^{\prime \prime }\sigma _j^{\prime \prime }}{(\omega _j-\omega ^{\prime })^2+\omega ^{\prime \prime 2}}=0,$$ (44)
$$\frac{1}{\tau }\sum_j\frac{\omega ^{\prime \prime }(1+\sigma _j^{\prime })+(\omega _j-\omega ^{\prime })\sigma _j^{\prime \prime }}{(\omega _j-\omega ^{\prime })^2+\omega ^{\prime \prime 2}}=1,$$ (45)
with $`\sigma _i(\omega )=\sigma _i^{\prime }(\omega )+i\sigma _i^{\prime \prime }(\omega )`$ satisfying Eq. (40), or in component form
$$\sigma _i^{\prime \prime }+\frac{1}{\tau }\sum_j(\alpha _{ij}-1)\frac{(\omega _j-\omega ^{\prime })(1+\sigma _j^{\prime })-\omega ^{\prime \prime }\sigma _j^{\prime \prime }}{(\omega _j-\omega ^{\prime })^2+\omega ^{\prime \prime 2}}=0,$$ (46)
$$\sigma _i^{\prime }-\frac{1}{\tau }\sum_j(\alpha _{ij}-1)\frac{\omega ^{\prime \prime }(1+\sigma _j^{\prime })+(\omega _j-\omega ^{\prime })\sigma _j^{\prime \prime }}{(\omega _j-\omega ^{\prime })^2+\omega ^{\prime \prime 2}}=0.$$ (47)
For $`\omega ^{\prime \prime }\ll \mathrm{\Omega }/N`$, the system (44)–(47) can be approximately solved in the same way as for a “point” sample. The corresponding condition will be derived in Section V. When evaluating the contribution to the lhs of Eq. (45) coming from the first term in the numerator, one should keep only one term in the sum with $`\omega _j`$ closest to $`\tilde{\omega }_k^{\prime }`$: $`(\omega _j-\tilde{\omega }_k^{\prime })\sim \mathrm{\Omega }/N`$. Then we obtain
$$\tilde{\omega }_k^{\prime \prime }\sim \frac{\tau \mathrm{\Omega }^2}{N^2}\left(1-\frac{1}{\tau }\sum_j\frac{\sigma _j^{\prime \prime }}{\omega _j-\tilde{\omega }_k^{\prime }}\right),$$ (48)
where we again dropped $`\omega ^{\prime \prime 2}`$ in the denominator. Since $`\sigma _i^{\prime },\sigma _i^{\prime \prime }\ll 1`$ (see Section V), the frequencies $`\tilde{\omega }_k^{\prime }`$ in Eq. (48) are the same as for the case $`\alpha _{ij}=1`$. Finding $`\sigma _i^{\prime \prime }`$ in the first order from Eq. (46), and substituting the result into Eq. (48), we obtain
$$\tilde{\omega }_k^{\prime \prime }\sim \frac{\tau \mathrm{\Omega }^2}{N^2}\left[1+\frac{1}{\tau ^2}\sum_{ij}\frac{\alpha _{ij}-1}{(\omega _i-\tilde{\omega }_k^{\prime })(\omega _j-\tilde{\omega }_k^{\prime })}\right].$$ (49)
The second term is the sought correction to the width of the subradiant modes. Remarkably, this term turns to zero if the matrix elements $`\alpha _{ij}`$ are replaced by their average $`\overline{\alpha }`$. Indeed, in this case the double sum in Eq. (49) would factorize into a product of two sums, each vanishing due to the fact that $`\tilde{\omega }_k^{\prime }`$ are the solutions of Eq. (41) (corresponding to $`\alpha _{ij}=1`$). Therefore, the widths of the subradiant modes are determined by the fluctuations, $`\delta \alpha _{ij}`$, of the coupling parameters $`\alpha _{ij}`$ rather than by the deviation of their average, $`\overline{\alpha }`$, from unity. It should be noted that this property is general: one can easily see by comparing Eqs. (44) and (45) to Eqs. (46) and (47) that for $`\alpha _{ij}=\mathrm{const}`$, we have $`\sigma _i^{\prime \prime }=0`$ and $`\sigma _i^{\prime }\ll 1`$, so that the eigenfrequencies $`\tilde{\omega }_k`$ are unaffected.

### D Discussion of the numerical results

We are now in a position to explain the spectra shown in Figs. 3–6.
For the model coupling, $`\alpha _{ij}=\alpha `$ ($`i\ne j`$), $`\alpha _{ii}=1`$, only the fluctuations in the diagonal elements are finite: $`\delta \alpha _{ij}=(1-\alpha )\delta _{ij}`$. Substituting this $`\delta \alpha _{ij}`$ into Eq. (49) \[instead of $`(\alpha _{ij}-1)`$\] and keeping only the term with $`(\tilde{\omega }_k^{\prime }-\omega _j)\sim \mathrm{\Omega }/N`$ in the remaining sum, we obtain
$$\gamma \sim \left[\frac{\tau \mathrm{\Omega }^2}{N^2}+\frac{1-\alpha }{\tau }\right].$$ (50)
The above expression indicates that after the cooperative modes have been formed (at $`N\gg \mathrm{\Omega }\tau `$), the system can be found in two different regimes characterized by the relative magnitude of the first and second terms in the rhs. For an intermediate number of oscillators, $`\mathrm{\Omega }\tau \ll N\ll \mathrm{\Omega }\tau (1-\alpha )^{-1/2}`$, the width decreases with increasing $`N`$, as can be seen by comparing the second and third curves from the bottom in each of Figs. 3–6 (note that the lowest curves with $`N=2`$ show no sign of cooperative emission). In this regime, the system behaves in the same way as a “point” sample. With increasing number of oscillators, the dependence on $`N`$ saturates, and the width is dominated by the fluctuations of $`\alpha _{ij}`$. Correspondingly, the change in the pattern of the peaks in Figs. 3–6, calculated for different values of $`(1-\alpha )`$, occurs at different $`N`$, as can be seen by comparing the next two curves in each figure. Note, however, that with further increase in $`N`$, the curves exhibit yet another change in pattern. Namely, the peaks get smeared out (the top two curves in each figure). This occurs when the value of $`(1-\alpha )/\tau `$ exceeds the MFS, $`\mathrm{\Omega }/N`$, which is inconsistent with the above analysis. The reason for such a discrepancy is that for large $`N`$, the model with coupling $`\alpha _{ij}`$ independent of the separation between oscillators becomes inadequate, as we mentioned above. For the correct description of the smearing of the peaks at large $`N`$, the spatial dependence of $`\alpha _{ij}`$ is crucial; this question is addressed in Section VI. Nevertheless, for $`N\ll \mathrm{\Omega }\tau /(1-\alpha )`$, this model describes accurately the mesoscopic features of the spectral intensity, as shown in the next section.

## V Strong Mesoscopics Regime

Let us now estimate the typical width of the radiation eigenmodes due to the fluctuations in $`\alpha _{ij}`$. Since the configurational average of the second term in Eq. (49) \[with $`\delta \alpha _{ij}`$ instead of $`(\alpha _{ij}-1)`$\] vanishes, we need to evaluate $`\left\langle (\tilde{\omega }_k^{\prime \prime })^2\right\rangle `$. Using the fact that only diagonal terms in the average $`\left\langle \delta \alpha _{ij}\delta \alpha _{i^{\prime }j^{\prime }}\right\rangle `$ survive and omitting the first term in Eq. (49), we write
$$\left\langle (\tilde{\omega }_k^{\prime \prime })^2\right\rangle \sim \left(\frac{\mathrm{\Omega }^2}{\tau N^2}\right)^2\sum_{ij}\frac{\left\langle (\delta \alpha _{ij})^2\right\rangle }{(\omega _i-\tilde{\omega }_k^{\prime })^2(\omega _j-\tilde{\omega }_k^{\prime })^2}.$$ (51)
The sum is dominated by the terms with $`(\omega _i-\tilde{\omega }_k^{\prime })\sim (\omega _j-\tilde{\omega }_k^{\prime })\sim \mathrm{\Omega }/N`$. Since the typical spatial separation between two oscillators with close frequencies is $`L`$, the separation fluctuations are of the same order.
Thus, the typical fluctuation of $`\alpha _{ij}`$ is $`\delta \alpha \equiv \sqrt{\left\langle (\delta \alpha _{ij})^2\right\rangle }\sim \left(L/\lambda _0\right)^2`$, and we finally obtain the typical width of a subradiant mode, $`\gamma \sim \sqrt{\left\langle (\tilde{\omega }_k^{\prime \prime })^2\right\rangle }`$, as
$$\gamma \sim \frac{\delta \alpha }{\tau }\sim \frac{1}{\tau }\left(\frac{L}{\lambda _0}\right)^2.$$ (52)
Comparing Eq. (52) to Eq. (49), we see that fluctuations in $`\alpha _{ij}`$ dominate the width $`\gamma `$ for $`N\gg \mathrm{\Omega }\tau (\lambda _0/L)`$. In order to characterize the fine structure in the emission spectrum, it is convenient to introduce the dimensionless parameter
$$\kappa =\frac{\mathrm{\Omega }\tau }{N}\left(\frac{\lambda _0}{L}\right)^2.$$ (53)
It represents the product of a small factor, $`\mathrm{\Omega }\tau /N`$, and a large one, $`(\lambda _0/L)^2`$, which characterize the disorder and the system size, respectively. In terms of $`\kappa `$, the condition for the formation of the cooperative modes, $`N/\mathrm{\Omega }\tau \gg 1`$, can be presented as $`\kappa \ll \left(\lambda _0/L\right)^2`$. Using Eq. (53), the width (52) can be expressed in terms of the MFS as
$$\gamma \sim \frac{1}{\kappa }\left(\frac{\mathrm{\Omega }}{N}\right).$$ (54)
This result applies when $`\kappa \ll \lambda _0/L`$. On the other hand, it was implicit in the above derivation (Section IV) that the typical $`\sigma _i^{\prime }`$ and $`\sigma _i^{\prime \prime }`$ are smaller than unity. The latter parameters can be estimated in a similar way from Eqs. (46) and (47) with the result: $`\sigma ^{\prime \prime }\sim \kappa ^{-1}`$ and $`\sigma ^{\prime }\sim \sigma ^{\prime \prime 2}\sim \kappa ^{-2}`$. Thus, the lower boundary for $`\kappa `$, at which Eq. (54) applies, is $`\kappa \sim 1`$. For $`\kappa \sim 1`$, all terms in Eqs. (44)–(47) become of the same order of magnitude, and for smaller $`\kappa `$ this system has no subradiant solutions, as discussed above. Since the MFS exceeds the width $`\gamma `$ within the entire domain $`1\ll \kappa \ll \lambda _0/L`$, the fine structure in the spectral intensity $`I(\omega )`$ is well pronounced. In other words, this domain corresponds to the strong mesoscopics regime. The opposite case $`\kappa \ll 1`$ is considered in the next section.

## VI Weak Mesoscopics Regime

In the domain $`\kappa \ll 1`$, the system cannot sustain eigenmodes that involve all $`N`$ oscillators. As a result, the eigenmodes become localized, in the sense that each eigenmode would comprise some $`N_c\ll N`$ oscillators and occupy a volume with characteristic size $`L_c\ll L`$. The magnitudes of $`L_c`$ and $`N_c`$ can be estimated from the following argument. Let us divide the system of oscillators into subsystems of increasingly smaller size. When the size of the subsystem becomes $`L_c`$, the system of equations (44) and (45), applied to a subsystem, first acquires a solution. This happens when the width $`\gamma _c\sim \tau ^{-1}(L_c/\lambda _0)^2`$, determined from Eq. (52) for a subsystem, becomes of the order of the MFS within a subsystem, i.e.
$$\frac{1}{\tau }\left(\frac{L_c}{\lambda _0}\right)^2\sim \frac{\mathrm{\Omega }}{N_c}.$$ (55)
Taking into account that $`N_c=N(L_c/L)^3`$, we find
$$L_c\sim \kappa ^{1/5}L,\qquad N_c\sim \kappa ^{3/5}N.$$ (56)
Substituting these results back into Eq. (55), we find for the eigenmode width
$$\gamma =\gamma _c\sim \frac{1}{\kappa ^{3/5}}\left(\frac{\mathrm{\Omega }}{N}\right)\gg \frac{\mathrm{\Omega }}{N}.$$ (57)
From Eq.
(56), we can also estimate how the relative amplitude of mesoscopic fluctuations in the spectral intensity $`I(\omega )`$ falls off with decreasing $`\kappa `$:
$$\frac{\delta I}{I}\sim \left(\frac{N_c}{N}\right)^{1/2}=\kappa ^{3/10}.$$ (58)
It is apparent that the smearing of the fine structure in the cooperative emission spectrum with decreasing $`\kappa `$ occurs rather slowly.

## VII Dipole-dipole interactions

In this section we study the effect of dipole-dipole interactions on the cooperative emission from a disordered system. Note that for a “point” sample with $`L\ll \lambda _0`$, the typical magnitude of the dipole–dipole interaction between two oscillators is much larger than their superradiant coupling, $`\beta _{ij}/\alpha _{ij}\sim (\lambda _0/L)^3\gg 1`$. The structure of the eigenmodes in the absence of superradiant coupling, given by Eq. (20) with $`\alpha _{ij}=0`$, was considered in several papers. Renormalization–group arguments of Ref. (see also Ref. ) suggest that all eigenmodes are delocalized. Numerical studies indicate a wide range of spatial scales in eigenmodes and, thus, seem to support this conclusion. In Ref. , the role of a general random-matrix perturbation in the spectrum of a multilevel system was studied analytically; the ensemble-averaged renormalization of the spectrum of the system was derived, which does not capture, however, the mesoscopic effects. Below we argue that finite disorder in combination with superradiant coupling leads to a certain “resistance” of the system to large, but zero on average, dipole–dipole terms because of the formation of cooperative modes. In the absence of superradiant coupling ($`\alpha _{ij}=0`$) the dipole–dipole interactions lead to shifts in the frequencies of individual oscillators. The resulting additional spread in $`\omega _i`$ is, in general, much larger than the “bare” spread $`\mathrm{\Omega }`$. This can be readily seen from the lowest–order correction to the frequency, $`\delta \omega _i`$, which has the form
$$\delta \omega _i=\frac{1}{\tau ^2}\sum_{j\ne i}\frac{\beta _{ij}^2}{\omega _i-\omega _j}$$ (59)
(since $`\beta _{ii}=0`$, the lowest–order correction to $`\omega _i`$ is quadratic). The main contribution to the sum comes from pairs of oscillators located closely in space, with $`r_{ij}\sim LN^{-1/3}`$ (nearest-neighbor interaction), so that
$$\beta _{ij}\sim N\left(\frac{\lambda _0}{L}\right)^3.$$ (60)
Since the typical frequency difference for such pairs is $`\mathrm{\Omega }`$, we obtain
$$\delta \omega _i\sim \frac{N^2}{\mathrm{\Omega }\tau ^2}\left(\frac{\lambda _0}{L}\right)^6.$$ (61)
On the other hand, the results obtained in the previous sections apply only if the additional disorder, caused by dipole–dipole interactions, does not affect the MFS. This requires the condition $`\delta \omega _i\ll \mathrm{\Omega }`$ to be met. Using Eq. (61), this condition could be rewritten as $`\lambda _0/L\ll (\mathrm{\Omega }\tau /N)^{1/3}`$. Since the formation of cooperative modes occurs only if $`\mathrm{\Omega }\tau /N\ll 1`$, one could draw the conclusion that neglecting the dipole–dipole interactions would be inconsistent with our basic assumption $`L\ll \lambda _0`$. The resolution of this apparent contradiction lies in the observation that, in the presence of superradiant coupling, i.e. $`\alpha _{ij}\ne 0`$, the true eigenmodes of the system are cooperative modes composed of a large number of oscillators.
Therefore, the relevant condition should involve the shifts, $`\delta \tilde{\omega }_k^{\prime }`$, of the eigenmode frequencies, rather than $`\delta \omega _i`$. To first order, $`\delta \tilde{\omega }_k^{\prime }`$ is given by an expression similar to the second term in the rhs of Eq. (49) \[with $`\beta _{ij}`$ instead of $`(\alpha _{ij}-1)`$\]. Since this term vanishes on average, as discussed above, the typical shift, $`\delta \tilde{\omega }^{\prime }\sim \sqrt{\left\langle (\delta \tilde{\omega }_k^{\prime })^2\right\rangle }`$, can be estimated from \[compare with Eq. (51)\]
$$\left\langle (\delta \tilde{\omega }_k^{\prime })^2\right\rangle \sim \left(\frac{\mathrm{\Omega }^2}{\tau N^2}\right)^2\sum_{ij}\frac{\left\langle (\beta _{ij})^2\right\rangle }{(\omega _i-\tilde{\omega }_k^{\prime })^2(\omega _j-\tilde{\omega }_k^{\prime })^2}.$$ (62)
There are two main contributions to the sum in the rhs. The first comes from the nearest–neighbor interaction with $`\beta _{ij}`$ given by Eq. (60). The second contribution originates from the pairs $`(i,j)`$ which are close in frequency; for such pairs, $`\beta _{ij}\sim (\lambda _0/L)^3`$. Both contributions turn out to be of the same order of magnitude, resulting in
$$\delta \tilde{\omega }^{\prime }\sim \frac{1}{\tau }\left(\frac{\lambda _0}{L}\right)^3.$$ (63)
This result is smaller than $`\delta \omega _i`$ in Eq. (61) by the factor $`N(N/\mathrm{\Omega }\tau )(\lambda _0/L)^3\gg 1`$. Such a dramatic difference illustrates the “resistance” of a coupled system of oscillators with disorder in frequencies to dipole–dipole interactions, as mentioned above. This property can also be qualitatively explained as follows. The dipole–dipole interaction between two subradiant modes can be viewed as an interaction between a mode and the electric field, $`\tilde{\mathbf{E}}(\mathbf{r})`$, created by the dipole moments of oscillators making up the other mode. Since the number of oscillators in a mode is large, their electric fields effectively cancel each other, so that the resulting net field, $`\tilde{\mathbf{E}}(\mathbf{r})`$, varies in space much more slowly than those of the individual oscillators. Note now that a slowly varying electric field couples only weakly to a subradiant mode. In fact, the suppression of the dipole–dipole interaction between subradiant modes has the same physical origin as their decoupling from the radiation field: had the electric field $`\tilde{\mathbf{E}}`$ been uniform, the cooperative modes would not interact at all with each other. This is the reason why the corrections to $`\tilde{\omega }_k^{\prime }`$ and $`\tilde{\omega }_k^{\prime \prime }`$ vanish on average, and consequently the typical $`\delta \tilde{\omega }^{\prime }`$ and $`\gamma `$ are determined by the fluctuations of $`S_{ij}`$. In contrast, the frequency shifts of individual oscillators are due to their interactions with the nearest neighbors, so that no cancellations occur. Thus we arrive at the condition $`\lambda _0/L\ll \left(\mathrm{\Omega }\tau \right)^{1/3}`$, or, in terms of the parameter $`\kappa `$,
$$\kappa \gg \frac{1}{N}\left(\frac{\lambda _0}{L}\right)^5.$$ (64)
This condition should be consistent with the condition for the formation of the cooperative modes, $`\kappa \ll (\lambda _0/L)^2`$. We see that both conditions are satisfied for sufficiently large $`N`$, i.e. $`N\gg (\lambda _0/L)^3`$. To account for the different mesoscopics regimes, it is convenient to present Eq. (64) in the form
$$N\gg \left(\frac{\lambda _0}{L}\right)^n.$$ (65)
Then $`n=3`$, 4, and 5 correspond to the “point” sample, strong mesoscopics, and weak mesoscopics regimes, respectively.
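As a compact summary of the parameter domains collected in Sections V–VII, the small helper below (illustrative only; the argument names are assumptions, and the order-of-magnitude boundaries are implemented as sharp inequalities) evaluates $`\kappa `$ from Eq. (53), reports the regime, and checks the requirement (65):

```python
# Illustrative helper (assumed argument names; the boundaries are
# order-of-magnitude conditions, implemented here as sharp inequalities).
# It evaluates kappa of Eq. (53), the regime domains quoted in the text,
# and the dipole-dipole consistency requirement (65).

def regime(N, Omega, tau, ratio):
    """ratio = lambda_0 / L, assumed >> 1."""
    kappa = (Omega * tau / N) * ratio**2
    if kappa > ratio**2:
        label, n = "uncoupled oscillators (no cooperative modes)", None
    elif kappa > ratio:
        label, n = '"point" sample', 3
    elif kappa > 1.0:
        label, n = "strong mesoscopics", 4
    else:
        label, n = "weak mesoscopics", 5
    ok = n is None or N > ratio**n          # condition (65) for the regime
    return kappa, label, ok

print(regime(N=1e7, Omega=1e4, tau=1.0, ratio=10.0))
# -> (0.1, 'weak mesoscopics', True): N = 1e7 >> (lambda_0/L)^5 = 1e5
```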
## VIII Conclusions

The main result of the present paper is that disorder in oscillator frequencies does not destroy the cooperative character of the emission from a “point” sample, as long as the MFS, $`\mathrm{\Omega }/N`$, is smaller than the linewidth of an individual oscillator, $`\tau ^{-1}`$. In the opposite case, when $`\mathrm{\Omega }/N\gg \tau ^{-1}`$, the spectrum represents a system of non–overlapping Lorentzians with width $`\tau ^{-1}`$. It is convenient to characterize the disorder in terms of the dimensionless parameter $`\kappa =\mathrm{\Omega }\tau \lambda _0^2/NL^2`$. Below we summarize our results for the characteristic width, $`\gamma `$, of the subradiant peaks (in units of $`\mathrm{\Omega }/N`$) for different domains of $`\kappa `$:
$$\gamma =\frac{\mathrm{\Omega }}{N}\mathrm{\Phi }(\kappa ,L/\lambda _0),$$ (66)
where the dimensionless function $`\mathrm{\Phi }`$ has the following asymptotes
$$\mathrm{\Phi }=\kappa \left(\frac{L}{\lambda _0}\right)^2,\quad \mathrm{for}\;\frac{\lambda _0}{L}\ll \kappa \ll \left(\frac{\lambda _0}{L}\right)^2,\quad \text{“point” sample},$$ (67)
$$\mathrm{\Phi }=\kappa ^{-1},\quad \mathrm{for}\;1\ll \kappa \ll \frac{\lambda _0}{L},\quad \text{strong mesoscopics},$$ (68)
$$\mathrm{\Phi }=\kappa ^{-3/5},\quad \mathrm{for}\;\kappa \ll 1,\quad \text{weak mesoscopics}.$$ (69)
For $`\kappa \gg (\lambda _0/L)^2`$, the spectrum corresponds to uncoupled oscillators. In Section III, we presented an exact solution of a model with the simplified (separable) coupling Eq. (21). This model describes accurately the first two (“point” sample and strong mesoscopics) regimes in Eqs. (67) and (68). It becomes, however, inadequate in the third (weak mesoscopics) regime, giving a $`\kappa ^{-1}`$ instead of the correct $`\kappa ^{-3/5}`$ dependence for the period of the mesoscopic structure in the cooperative emission spectrum. Throughout the paper we have considered a three-dimensional system of oscillators. When the oscillators are confined to a plane, only the results for $`\kappa \ll 1`$ should be modified. In this case, repeating the consideration of Section VI, we obtain $`\mathrm{\Phi }=\kappa ^{-1/2}`$. Also, for the relative magnitude of mesoscopic fluctuations, $`\left(\delta I/I\right)`$, instead of Eq. (58) we obtain $`\left(\delta I/I\right)\sim \kappa ^{1/4}`$. Note finally that in experiments, such as photoexcited excitons in polymer films, the number of oscillators $`N`$ is governed by the excitation intensity. Thus, for a given disorder, the crossover from the strong mesoscopics regime ($`\kappa >1`$) to the weak mesoscopics regime ($`\kappa <1`$) can simply be achieved by increasing the excitation intensity level.

###### Acknowledgements.

The authors are grateful to S. Mukamel and M. I. Stockman for helpful discussions. The work at Vanderbilt University was supported by NSF grant ECS-9703453, and the work at the University of Utah was supported by NSF grant DMR-9732820. M.E.R. was also supported by the Petroleum Research Fund under grant ACS-PRF# 34302-AC6.
# Effect of shape anisotropy on transport in a 2-d computational model: Numerical simulations showing experimental features observed in biomembranes

## 1 Introduction

The problem of packing of spheres plays a major role in the modeling of many physical systems and has been studied for more than four decades. Some of the early examples of the computer simulations of hard-sphere liquids suggest the existence of a first-order freezing transition. The problem of packing of spheres in two and three dimensions is of great interest. Recent investigations of such systems have focused on the study of the statistical geometry of dense sphere packing. Such studies are important in the understanding of the physical properties of many systems composed of a large number of particles. In this context, motivated by the study of transport across a two-dimensional structure of packed circular disks (a membrane), we pose a question: how does the packing change when the membrane is doped with objects of various shapes and sizes (e.g., spheres arranged rigidly in the form of rods of different lengths, or L, T, and X shapes; see Fig. 1)? In particular we investigate the effect of these shapes on the distribution of “voids”. The “anisotropy” in the interaction potential appears to play a key role in the induction of large voids. As pointed out by Sastri et al. , no algorithm is available to compute void statistics for the packing of shapes other than spheres. In this paper we propose a simple numerical algorithm to compute void statistics. Unlike a probabilistic algorithm (Monte Carlo), our algorithm is based on digitization and cell counting. The paper is organized as follows. In Sec. 2, we describe the model system. A definition of “void” and an algorithm to compute void statistics are given in Sec. 3. The results of numerical simulations and their relevance to lipid biomembranes are discussed in Sec. 4. We summarize the paper in Sec. 5.

## 2 The model system

The configuration space of the model system (membrane) is considered as a two-dimensional space with periodic (toroidal) boundary conditions. The constituents of the membrane are disks and dopants.

### 2.1 The basic model

We consider a membrane made up of only circular disks interacting pairwise via the Lennard-Jones potential:
$$V_{LJ}(r_{ij})=4ϵ\sum_{i=1}^{N}\sum_{j=i+1}^{N}\left[\left(\frac{\sigma }{r_{ij}}\right)^{12}-\left(\frac{\sigma }{r_{ij}}\right)^6\right]$$
where $`r_{ij}`$ is the distance between the centers of the $`i^{\mathrm{th}}`$ and $`j^{\mathrm{th}}`$ disks, $`\sigma `$ determines the range of the hard-core part of the potential and $`ϵ`$ signifies the depth of the attractive part. We choose the number of disks such that the area occupied by these disks is around $`70\%`$, which is less than that of the close-packed structure but still large enough to produce some closed voids.

### 2.2 The model with impurities

Further, we consider different shape-anisotropic combinations (dopants) consisting of $`\kappa `$ circular disks. We treat each of these combinations as a single rigid cluster. Several such dopants (impurities) are considered. Fig. 1 shows some of these impurities. The interaction between impurities and disks or other impurities is obtained by superposing the Lennard-Jones potentials corresponding to each of the constituent disks in the impurity.
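For concreteness, here is a minimal sketch (not the authors' code; all values are illustrative assumptions) of the total pairwise Lennard-Jones energy on the periodic square, with dopants entering through the same superposed pair terms:

```python
import numpy as np

# Minimal sketch (not the authors' code) of the total pairwise Lennard-Jones
# energy on the periodic square of Sec. 2. Dopants enter by superposition:
# every disk, free or part of a rigid cluster, contributes pair terms.
def lj_energy(pos, box, sigma=1.0, eps=1.0):
    """pos: (N, 2) array of disk centers; box: side of the periodic square."""
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)        # minimum-image convention
            r = np.hypot(d[0], d[1])
            E += 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return E

pos = np.random.default_rng(3).uniform(0.0, 20.0, size=(30, 2))
print("total LJ energy of a random configuration:", lj_energy(pos, box=20.0))
```

A Monte Carlo equilibration of the kind described below would then accept or reject trial moves of disks (and rigid moves of whole clusters) according to the change in this energy.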
We consider a membrane with circular disks and impurities amounting to $`10\%`$ of the total number of circular disks, such that the area occupied is still $`70\%`$. These membranes are brought to an equilibrium configuration by the Monte Carlo method at a fixed temperature. Fig. 2 and Fig. 3 show typical equilibrium configurations of a membrane without and with impurities, respectively (the impurity in Fig. 3 is a rod-shaped structure made up of five disks (Rod<sub>5</sub>); in general, Rod<sub>κ</sub> denotes a rod made up of $`\kappa `$ disks). In the simulation the temperature is so chosen that $`k_BT<4ϵ`$, where $`k_B`$ is the Boltzmann constant. The equilibrium is confirmed by simulated annealing.

## 3 Voids and an algorithm for void statistics

Now, we introduce the notion of an “$`r`$-void” in a membrane which is suitable for the description of transport across the membrane and, further, propose an algorithm to compute statistical quantities such as the number of voids in the membrane, the void size distribution, etc. We define an $`r`$-void as a closed area in a membrane devoid of disks or impurities, and big enough to accommodate a circular disk of radius $`r`$. Of course an $`r`$-void is also an $`r^{\prime }`$-void if $`r^{\prime }<r`$.

### 3.1 The algorithm to compute void statistics

To compute the void statistics for $`r`$-voids, we increase the radii of the disks forming the membrane (including the disks in the impurities, without altering the positions of the centers) by an amount $`r`$ (see Fig. 4). Then we digitize the entire membrane on a suitably chosen grid. The choice of grid size depends on the required accuracy and the typical sizes of the voids. The digitization of circular disks is carried out by the Bresenham circle-drawing algorithm , modified to incorporate periodic boundary conditions. The number of voids in the membrane is computed by flood-filling every closed void with a different color and then counting the number of colors. The sizes of various voids can be obtained by counting the number of grid cells filled by the corresponding color. The termination of the flood-fill algorithm is ensured since the voids are closed. In our case this condition is automatically fulfilled in view of the periodic boundary conditions. The geometric algorithms involving Voronoi polygons are mathematically satisfying and are expected to be accurate but would take much more computation time. On the other hand, as pointed out in , the probabilistic algorithm is time efficient but requires a very large sample size while dealing with small voids. Our algorithm is quite efficient as well as suitable even when there are small voids in the membrane. We further note that the algorithm can be easily generalized to higher dimensions. We expect that the efficiency of this algorithm can be further enhanced by the use of a multi-resolution adaptive grid. A short illustrative sketch of this procedure is given below.

## 4 Results and Discussions

The simulations were carried out for membranes of different compositions. Fig. 5 shows the graphs of the number of $`r`$-voids as a function of $`r`$ measured in units of the radius of the constituent disks. Curve (a) shows the void distribution in the absence of impurities. Curve (b) represents the void distribution in a membrane with rod-shaped impurities made up of two disks (Rod<sub>2</sub>). Curves (c) and (d) show the void distribution with L-shaped impurities made up of four disks (L<sub>4</sub>) and rod-like impurities made up of four disks (Rod<sub>4</sub>), respectively.
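Before turning to the discussion, here is the sketch of the Sec. 3.1 procedure promised above. It is an illustrative implementation, not the authors' code: instead of drawing circle boundaries with Bresenham's algorithm, it marks occupied grid cells by a direct distance test; the flood fill and cell counting follow the procedure described there.

```python
import numpy as np
from collections import deque

# Illustrative sketch of the void-statistics algorithm of Sec. 3.1 (not the
# authors' code). Disks are inflated by r and rasterized on an M x M periodic
# grid by a direct distance test (in place of Bresenham digitization);
# r-voids are then the connected empty regions found by flood fill.
def void_stats(centers, R, r, box, M=512):
    xs = (np.arange(M) + 0.5) * box / M
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    occ = np.zeros((M, M), dtype=bool)
    for cx, cy in centers:                      # disks inflated by r
        dx = np.abs(X - cx); dx = np.minimum(dx, box - dx)   # periodic wrap
        dy = np.abs(Y - cy); dy = np.minimum(dy, box - dy)
        occ |= dx**2 + dy**2 <= (R + r) ** 2
    sizes, seen = [], occ.copy()
    for sx, sy in zip(*np.nonzero(~occ)):
        if seen[sx, sy]:
            continue
        seen[sx, sy] = True
        queue, cells = deque([(sx, sy)]), 0     # flood fill one r-void
        while queue:
            x, y = queue.popleft(); cells += 1
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                nx %= M; ny %= M                # toroidal grid
                if not seen[nx, ny]:
                    seen[nx, ny] = True
                    queue.append((nx, ny))
        sizes.append(cells * (box / M) ** 2)    # void area from cell count
    return sizes                                # one entry per r-void

centers = np.random.default_rng(4).uniform(0.0, 20.0, size=(60, 2))
print("number of r-voids:", len(void_stats(centers, R=1.0, r=0.3, box=20.0)))
```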
It is clear from the graph that the number of large voids increases with an increase in the anisotropy of the impurity. Even though L<sub>4</sub> and Rod<sub>4</sub> occupy the same area, Rod<sub>4</sub>, being more anisotropic, induces a larger number of big voids than L<sub>4</sub>. This fact can be clearly seen in Fig. 5, curves (c) and (d). Moreover, Fig. 2 and Fig. 3 demonstrate that the voids are mostly found in the neighborhood of the centers of anisotropy. Further, to strengthen our claim that shape anisotropy induces voids, we compared two membranes. In one case we added rod impurities made up of two disks (Rod<sub>2</sub>) to the assembly of circular disks, and in the other case we added circular impurities of larger size, which occupied the same area as that of Rod<sub>2</sub>. We found that the former, being more anisotropic, induced larger and more numerous voids as compared to the latter, though they occupied the same area. Thus, reduced to the bare essentials, the anisotropy in the interaction potential of the constituents is seen to be responsible for the induction of large voids. Studied from the perspective of energy minimization: as the potential becomes direction-dependent, some positions of the constituents are preferred over others, and this induces large voids. These features show a remarkable similarity with the observations reported in certain biological experiments. These experiments deal with the size-dependent permeation of non-electrolytes across biological membranes. The effect of doping on the permeation of large molecules was studied in these experiments. The liposome membrane used in these experiments was made up of a mixture of two types of lipids (cardiolipins and phosphatidylcholine) in the proportion 1:10. The understanding of the enhancement of transport in doped membranes needed an algorithmic statement. The ingredients at the algorithmic level involved:

1. consideration of the structure as a strictly 2-dimensional assembly;
2. the cross sections of molecules being considered as constituents;
3. interactions of the constituents via the Lennard-Jones potential;
4. permeating particles being considered as hard disks.

The features reported in bear a similarity with the simulation carried out with Rod<sub>2</sub> as dopants. We have already seen in numerical simulations (see Fig. 5, curves (a) and (b)) that the Rod<sub>2</sub> type of impurities induced large voids in the membrane. The appearance of larger voids naturally enhances the transport of large particles. Thus an enhancement in the transport of large non-electrolytes like glucose, which was observed in the lipid mixture, can possibly be understood using our simple approach. Further, apart from the biological implications, the model discussed is general enough to incorporate the studies of transport in various weakly bound granular media.

## 5 Summary

We have presented a numerical algorithm to compute the entire void statistics in a two-dimensional membrane consisting of circular disks and dopants. We found that our simple two-dimensional model has shown results consistent with features observed in a complex biological system. The biological justification of the model and its implications are discussed elsewhere. Nevertheless, our model and the proposed numerical algorithm, which computes the void statistics in the model system, are quite general and use no specific features of any particular system.
Therefore it is possible to use this method effectively in various systems from diverse disciplines. The result that shape anisotropy induces large voids in mixtures may be used as a tool for achieving controlled selective permeability across such a system by merely changing the shape of the constituents of the mixture.

Acknowledgments: We thank N.V. Joshi, Deepak Dhar, H. E. Stanley and S.S. Manna for fruitful discussions.

Figure captions:

Fig. 1: Some examples of the impurities. Rod-type impurity made up of three circles (Rod<sub>3</sub>); L-type impurity made up of four circles (L<sub>4</sub>); X-type impurity made up of five circles (X<sub>5</sub>); T-type impurity made up of five circles (T<sub>5</sub>).

Fig. 2: Typical equilibrium configuration of a membrane without impurity. There are 556 circular disks used to form this membrane. The number is so chosen that the area occupied is $`70\%`$. The $`\sigma `$ in the Lennard-Jones potential is chosen as two times the radius of a circular disk.

Fig. 3: Typical equilibrium configuration of a membrane with impurity of type Rod<sub>5</sub>. The amount of impurity is in 1:10 proportion. All the other parameters are the same as in Fig. 2.

Fig. 4: The algorithm to compute void statistics. The radius of the circular disks (black disks) is $`R`$. These disks are expanded by an amount $`r`$, so that the region $`V`$ is the void for a particle of size $`r`$.

Fig. 5: The graphs of the number of $`r`$-voids as a function of $`r`$ measured in units of the radius of the constituents. (a) The void distribution without impurities. (b) The void distribution with impurity of type Rod<sub>2</sub>. (c) The void distribution with impurity of type L<sub>4</sub>. (d) The void distribution with impurity of type Rod<sub>4</sub>. Typically 10000 Monte Carlo steps are thrown away as thermalisation, and it is ensured that the total energy is minimized. The curves are averaged over 100 Monte Carlo steps.
# The Meaning of Elements of Reality and Quantum Counterfactuals – Reply to Kastner

## 1 Elements of Reality

Quantum theory teaches us that the concepts of “reality” developed on the basis of classical physics are not adequate for describing our world. A new language with appropriate concepts has not been developed yet, and this is probably the root of numerous controversies regarding the interpretation of the quantum formalism. It seems to me that philosophers of science can make a real contribution to the progress of quantum theory through developing an appropriate language. A necessary condition for the success of this endeavor is that physicists and philosophers try to understand each other. I hope that the resolution of the current controversy about the time-symmetrized quantum theory (TSQT) will contribute to such understanding. I took part in the development of the TSQT and I believe that this is an important and useful formalism. It has already helped us to find several peculiar quantum phenomena tested in laboratories around the world. In the framework of the TSQT I have used terms such as “elements of reality” in a sense which seems to be radically different from the concept of reality considered by philosophers and, apparently, this is the main reason for the current controversy. I define that there is an element of reality at time $`t`$ for an observable $`C`$, “$`C=c`$”, when it can be inferred with certainty that the result of a measurement of $`C`$, if performed, is $`c`$. Frequently, in such a situation it is said that the observable $`C`$ has the value $`c`$. It is important to stress that neither expression assumes an “ontological” meaning for $`c`$, the meaning according to which the system has some (hidden) variable with the value $`c`$. I do not try to restore the realistic picture of classical theory: in quantum theory observables do not possess values. The only meaning of the expressions “the element of reality $`C=c`$” and “$`C`$ has the value $`c`$” is the operational meaning: it is known with certainty that if $`C`$ is measured at time $`t`$, then the result is $`c`$. Clearly, my concept of elements of reality has its roots in the “elements of reality” from the Einstein, Podolsky, and Rosen paper (EPR) . There are numerous works analyzing the EPR elements of reality. My impression is that EPR were looking for an ontological concept and that their “criterion for elements of reality” is just a property of this concept. I had no intention to define such an ontological concept. I apologize for taking this name and using it in a very different sense, thus, apparently, misleading many readers. I hope to clarify my intentions here and I welcome suggestions for an alternative name for my concept which will avoid the confusion. I consider elements of reality as counterfactual statements. Even if at time $`t`$ the system undergoes an interaction with a measuring device which measures $`C`$, the truth of “$`C=c`$” is ensured not by the final reading of the pointer of this measurement, but by a counterfactual statement that if another measurement, with as short a duration as we want, is performed at time $`t`$, it invariably reads $`C=c`$.

## 2 The three-box example

The actual story: (i) A macroscopic number $`N`$ of particles (gas) were all prepared at $`t_1`$ in a superposition of being in three separated boxes:
$$|\psi _1\rangle =\frac{1}{\sqrt{3}}\left(|A\rangle +|B\rangle +|C\rangle \right),$$ (1)
with obvious notation: $`|A\rangle `$ is the state of a particle in box $`A`$, etc.
(ii) At a later time $`t_2`$ all the particles were found in another superposition (this is an extremely rare event):
$$|\psi _2\rangle =\frac{1}{\sqrt{3}}\left(|A\rangle +|B\rangle -|C\rangle \right).$$ (2)
(iii) In between, at time $`t`$, weak measurements of the number of particles in each box, which are, essentially, usual measurements of pressure in each box, have been performed. The readings of the measuring devices for the pressure in the boxes $`A`$, $`B`$ and $`C`$ were
$$p_A=p,\qquad p_B=p,\qquad p_C=-p,$$ (3)
where $`p`$ is the pressure which is expected to be in a box with $`N`$ particles. I am pretty certain that this “actual” story never took place because the probability for the successful post-selection (ii) is of the order of $`3^{-N}`$; for a macroscopic number $`N`$ it is too small for any real chance to see it happen. However, given that the post-selection (ii) does happen, I am safe to claim that (iii) is correct, i.e., the measurements of pressure at the intermediate time with very high probability yielded the results (3). The description of this example in the framework of the time-symmetrized quantum formalism is as follows. Each particle at time $`t`$ is described by the two-state vector
$$\langle \psi _2|\,|\psi _1\rangle =\frac{1}{3}\left(\langle A|+\langle B|-\langle C|\right)\left(|A\rangle +|B\rangle +|C\rangle \right).$$ (4)
The system of all particles (signified by index $`i`$) is described by the two-state vector
$$\langle \mathrm{\Psi }_2|\,|\mathrm{\Psi }_1\rangle =\frac{1}{3^N}\prod_{i=1}^{N}\left(\langle A|_i+\langle B|_i-\langle C|_i\right)\prod_{i=1}^{N}\left(|A\rangle _i+|B\rangle _i+|C\rangle _i\right).$$ (5)
The ABL formula for the probabilities of the results of the intermediate measurements yields, for each particle,
$$\mathbf{P}_A=1,\qquad \mathbf{P}_B=1,\qquad \mathbf{P}_A+\mathbf{P}_B+\mathbf{P}_C=1.$$ (6)
Or, using my definition, for each particle there are three elements of reality: the particle is inside box $`A`$, the particle is inside box $`B`$, and the particle is inside the boxes $`A`$, $`B`$ and $`C`$. A theorem in the TSQT (Ref. , p. 2325) says that a weak measurement, in a situation in which the result of a usual (strong) measurement is known with certainty, yields the same result. Thus, from (6) it follows:
$$(\mathbf{P}_A)_w=1,\qquad (\mathbf{P}_B)_w=1,\qquad (\mathbf{P}_A+\mathbf{P}_B+\mathbf{P}_C)_w=1.$$ (7)
Since for any variables $`(X+Y)_w=X_w+Y_w`$, we can deduce that $`(\mathbf{P}_C)_w=-1`$. Similarly, for the “number operators” such as $`𝒩_A\equiv \sum_{i=1}^{N}\mathbf{P}_A^{(i)}`$, where $`\mathbf{P}_A^{(i)}`$ is the projection operator on the box $`A`$ for a particle $`i`$, we obtain:
$$(𝒩_A)_w=N,\qquad (𝒩_B)_w=N,\qquad (𝒩_C)_w=-N.$$ (8)
In this rare situation the “weak measurement” need not be very weak: a usual measurement of pressure is a weak measurement of the number operator. Thus, the time-symmetrized formalism yields the surprising result (3): the pressure measurement in box $`C`$ is negative! Its value equals minus the pressure measured in the boxes $`A`$ and $`B`$. The analysis of “elements of reality” in this example, which are clearly counterfactual statements (in the actual world the measurements whose results are quoted in (6) have not been performed), yields a tangible fruit: a shortcut for the calculation of the expected outcome of an actual measurement. (This example answers the criticism of Mermin quoted by Kastner in the context of my work. According to this criticism the elements of reality I defined are “rubbish – they have nothing to do with anything”.) This outcome is surprising and paradoxical. Indeed, a usual device for measuring an observable which has only positive eigenvalues yields a negative value, the weak value in this rare pre- and post-selected situation.
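The numbers quoted in Eqs. (6)–(8) are easy to verify directly. The following short sketch (illustrative only; it writes the single-particle states in the basis $`(|A\rangle ,|B\rangle ,|C\rangle )`$) computes the ABL probabilities and the weak values:

```python
import numpy as np

# Quick check (illustrative) of Eqs. (6)-(8): ABL probabilities and weak
# values for the three-box states, written in the basis (|A>, |B>, |C>).
psi1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # pre-selected state, Eq. (1)
psi2 = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)  # post-selected state, Eq. (2)

for name, k in zip("ABC", range(3)):
    P = np.zeros((3, 3)); P[k, k] = 1.0          # projector on box A, B or C
    amp1 = psi2 @ P @ psi1                       # amplitude for outcome P = 1
    amp0 = psi2 @ (np.eye(3) - P) @ psi1         # amplitude for outcome P = 0
    prob = abs(amp1)**2 / (abs(amp1)**2 + abs(amp0)**2)   # ABL formula
    weak = amp1 / (psi2 @ psi1)                  # weak value of the projector
    print(f"P_{name}: ABL probability = {prob:.2f}, weak value = {weak:+.2f}")
# Prints probability 1.00 for boxes A and B (the elements of reality), and
# for box C a probability of 0.20 with weak value -1.00, matching the -N per
# particle in Eq. (8) and the negative pressure reading in Eq. (3).
```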
There are other paradoxical aspects discussed in relation to this example. The first paradoxical issue which was discussed is reminiscent of contextuality. Consider an observable $`X`$ which tells us the location of the particle: is it in box $`A`$, $`B`$, or $`C`$. The eigenstate of this observable corresponding to finding the particle in $`A`$ is identical to the eigenstate of the projection operator on $`A`$: $`|X=A\rangle =|\mathbf{P}_A=1\rangle `$. However, in this example there is no element of reality $`X=A`$ (if we measure $`X`$ by opening all boxes at time $`t`$ we have only the probability 1/3 to find the particle inside box $`A`$) in spite of the fact that $`\mathbf{P}_A=1`$ is an element of reality. Finally, the paradoxical aspect of the three-box example which was analyzed by Kastner I shall analyze in the next section.

## 3 Kastner’s analysis of the three-box example

In the three-box example there are two elements of reality for the same particle: “the particle is inside box $`A`$” and “the particle is inside box $`B`$”. Kastner considers this situation as a paradox which she resolves by rejecting the legitimacy of my concept of elements of reality. She does not mention at all my resolution of the “paradox”. Elements of reality are counterfactual statements. To be more explicit, “the particle is inside box $`A`$” means that if the particle is searched for in box $`A`$ (and if it is not searched for in box $`B`$!) then it is certain that the particle would be found in box $`A`$. Obviously, the two elements of reality cannot be considered together. Each element of reality assumes that the antecedent of the counterfactual statement which is the other element of reality is false. Thus, both elements of reality exist separately, but we should not conclude from this that there is an element of reality consisting of the union of the elements of reality: the antecedent “the particle is searched for in $`A`$ and it is not searched for in $`B`$, and the particle is searched for in $`B`$ and it is not searched for in $`A`$” is logically inconsistent. The fact that we cannot consider the union of elements of reality does not make the whole exercise empty. We still can consider consequences of all true elements of reality together. In particular, in the three-box example the consequences of the elements of reality (6) are the statements about the weak values (7), and weak measurements which yield these weak values can be performed together. Kastner finds the elements of reality “the particle is inside box $`A`$” and “the particle is inside box $`B`$” to be “highly peculiar and counterintuitive”. This is indeed so, especially because there is no element of reality “the particle is inside box $`A`$ and inside box $`B`$”, as explained above. This peculiar situation is an example of the failure of the “product rule” for pre- and post-selected elements of reality . From $`A=a`$ and $`B=b`$ it does not follow that $`AB=ab`$. The element of reality “the particle is inside box $`A`$ and inside box $`B`$” corresponds to the definite value of the product of projection operators: $`\mathbf{P}_A\mathbf{P}_B=1`$. But in the three-box example $`\mathbf{P}_A\mathbf{P}_B=0`$, in spite of the fact that $`\mathbf{P}_A=1`$ and $`\mathbf{P}_B=1`$. Kastner’s main objection is that the elements of reality “the particle is inside box $`A`$” and “the particle is inside box $`B`$” cannot be interpreted as applying to an individual system because “being found in box $`A`$ and being found in box $`B`$ are mutually exclusive states of affairs”. She does not take into account that “elements of reality” are just counterfactual statements.
She does not pay attention to the word “instead” in my writings, which she herself quotes in her paper: “If in the intermediate time it was searched for in box $`A`$, it has to be found there with probability one, and if, instead, it was searched for in box $`B`$, it has to be found there too with probability one…” To demonstrate that Kastner’s criticism is unfounded, let me repeat here an example of a pre-selected-only situation in which we attribute “mutually exclusive” properties to an individual system. Consider a system of two spin-$`\frac{1}{2}`$ particles prepared, at $`t_1`$, in a singlet state
$$|\mathrm{\Psi }\rangle =\frac{1}{\sqrt{2}}\left(|{\uparrow }\rangle _1|{\downarrow }\rangle _2-|{\downarrow }\rangle _1|{\uparrow }\rangle _2\right).$$ (9)
We can predict with certainty that the results of measurements of spin components of the two particles fulfill the following two relations:
$$\{\sigma _{1x}\}+\{\sigma _{2x}\}=0,$$ (10)
$$\{\sigma _{1y}\}+\{\sigma _{2y}\}=0,$$ (11)
where $`\{\sigma _{1x}\}`$ signifies the result of a measurement of the spin $`x`$ component of the first particle, etc. The relations (10), (11) cannot be tested together: the measurement of $`\sigma _{1x}`$ disturbs the measurement of $`\sigma _{1y}`$ and the measurement of $`\sigma _{2x}`$ disturbs the measurement of $`\sigma _{2y}`$ (not necessarily in the same way). According to the standard approach to quantum theory we accept that there are two matters of fact: “the outcomes of the spin $`x`$ components for the two particles have opposite values” and “the outcomes of the spin $`y`$ components for the two particles have opposite values”, in spite of the fact that the statements represent “mutually exclusive states of affairs”. If the spin $`x`$ components have been measured at time $`t`$, we know that the $`y`$ components of the spins were not measured at time $`t`$. Note that if they were measured at a later time, after the spin $`x`$ component measurement, then the outcomes might not fulfill Eq. (11). According to Kastner’s line of argumentation, the application of the statements (10), (11), which I named “generalized elements of reality” (because they are not just about the values of observables, but about relations between these values), to a single quantum system should also be rejected. However, physicists do not reject such statements. There are innumerable works analyzing counterfactuals related to incompatible measurements on a single system of correlated spin-$`\frac{1}{2}`$ particles. Similarly, Kastner’s argumentation is not valid for the three-box example.

## 4 Quantum counterfactuals

I will try here to clarify my statements which were criticized in Section 4 of Kastner’s paper . First, the meaning of the quotation from my work, “indeterminism is crucial for allowing non-trivial time-symmetric counterfactuals”, is just the following. Time-symmetric counterfactuals are related to time-symmetric background conditions, i.e. the state of the system is fixed both before and after the time about which the counterfactual statement is given. In a deterministic theory everything is fixed by conditions at a single time and, therefore, no novel (non-trivial) features can appear in the time-symmetric approach. In order to clarify the meaning of my continuation, “Lewis’s and other general philosophical analyses are irrelevant for the issue of counterfactuals in quantum theory”, let me quote Lewis’ “system of weights or priorities” for the similarity relation of counterfactual worlds:

> (1) It is of the first importance to avoid big, widespread, diverse violations of \[physical\] law.
> > (2) It is of the second importance to maximize the spatio-temporal region throughout which perfect match of particular facts prevails. > > (3) It is of the third importance to avoid even small, localized, simple violations of law. > > (4) It is of little or no importance to secure approximate similarity of particular fact, even in matters that concern us greatly. > > (Lewis, p. 47) These priorities might be helpful in the analysis of the truth value of a widely discussed counterfactual: “If Nixon had pressed the nuclear war button, the world would be very different”. The purpose of the priorities is to “resolve the vagueness of counterfactuals”. In the context of physics, however, the counterfactuals are not vague. (At least, I hope that the counterfactuals I have defined are not vague.) The truth value of quantum counterfactuals can be calculated from the equations of quantum theory. The above priorities cannot help in deciding the truth value of the counterfactual “the outcomes of the spin $`y`$ components measurement at time $`t`$ for the two particles have opposite values” in the world in which the two spin-$`\frac{1}{2}`$ particles were prepared, at $`t_1<t`$, in a singlet state (9) and the spin $`x`$ components were measured at time $`t`$ instead. Priorities (1) and (3) are not relevant because violations of physical laws are not considered. The counterfactual worlds are different from the actual world not because of “miracles”, i.e., violations of physical laws, but because different measurements on the system are considered. And the question of how it was decided which measurement to perform is not under discussion. Priorities (2) and (4) are not relevant because quantum theory fixes everything. In particular, there is a perfect match before the time of the measurement, $`t`$, and, in general, a perfect match cannot be arranged after $`t`$. We do not have freedom of interpretation in the framework of quantum counterfactuals after defining the similarity criteria. For the case of pre-selected counterfactuals the criterion is simply the identity of the quantum description of the system before the measurement, and this is not controversial. For time-symmetrized counterfactuals there is no consensus. I have my definition. Its advantage is that it yields the standard definition as a particular case in the pre-selected-only situation, and it allows us to analyze and derive useful results for pre- and post-selected quantum systems. I am aware of other proposals. Each proposal should be judged according to its consistency and usefulness for the purpose for which it has been defined. The success or failure of the various definitions of similarity criteria for counterfactuals in the exact sciences is measured not by maximizing priorities (1)-(4), but by their effectiveness in the framework of a particular theory. The priorities (1)-(4) are relevant outside the framework of the exact sciences, where we have no laws which determine unambiguously the truth values of counterfactual statements. Contrary to Kastner’s writing, I never claimed that Lewis’ theory is not applicable in an indeterministic universe. On the contrary, I have used Lewis’ framework of possible worlds for defining counterfactuals in quantum theory. I only claimed that most of Lewis’ analysis is irrelevant because counterfactuals in the context of quantum theory are of a very specific form, and the majority of aspects discussed in the general philosophical literature on counterfactuals are not present in the quantum case.
To make things even more clear I will add another quotation from Lewis’ writings, with an example of argumentation for which I cannot find any counterpart in the analysis of quantum counterfactuals: > Jim and Jack quarreled yesterday, and Jack is still hopping mad. We conclude that if Jim asked Jack for help today, Jack would not help him. But wait: Jim is a prideful fellow. He never would ask for help after such a quarrel; if Jim were to ask Jack for help today, there would have to have been no quarrel yesterday. In that case Jack would be his usual generous self. So if Jim asked Jack for help today, Jack would help him after all. … > > (Lewis, p. 33) Kastner continues by criticizing my definition of a time-symmetrized counterfactual regarding results of a measurement performed on a pre- and post-selected quantum system: > If it were that a measurement of an observable $`A`$ has been performed at time $`t`$, $`t_1<t<t_2`$, then the probability for $`A=a_i`$ would be equal to $`p_i`$, provided that the results of measurements performed on the system at times $`t_1`$ and $`t_2`$ are fixed. Her criticism regarding the “problematicity” of the fixing requirement is answered in another paper. The latter was also criticized by Kastner. She claims that fixing the results of measurements at $`t_1`$ and $`t_2`$ is “ad hoc gerrymandering” which relies on “accidental similarity of individual facts”. But these facts are the physical assumptions in the pre- and post-selected situations for the analysis of which the above concept of time-symmetrized counterfactuals has been introduced. Disregarding these facts is similar to deciding that there had been no quarrel between Jim and Jack even though the counterfactual statement starts with “Jim and Jack quarreled yesterday …”. Definitions in physics have no ambiguity which might allow such a free reading of the text. In the present paper Kastner criticizes the syntax of the definition, in particular, that it reflects “a confusion between the non-counterfactual and counterfactual usage of the ABL rule”. In fact, I feel very unsure about the grammatical correctness of the tenses in my definition. Also, I was not able to find an exact philosophical definition according to which one can decide whether a certain statement is “counterfactual”. However, it seems to me that the meaning of my definition is unambiguous and the name counterfactual is appropriate in the context of the situations to which this definition has been applied. For example, in the three-box example described above, the definition is applied when it is known that in the actual world the observable $`A`$ (e.g. $`𝐏_A`$) has not been measured. Kastner suggests two possible “usages” of my definition. The difference, apart from using various tenses (the difference between which is beyond my linguistic understanding), is that only the second one includes the word “instead”. This word is essential. According to my understanding it is implicit in every counterfactual statement, but maybe it is helpful to state it explicitly, modifying the definition to: > If it were that a measurement of an observable $`A`$ has been performed at time $`t`$, $`t_1<t<t_2`$, instead of whatever took place at time $`t`$ in the actual world, then the probability for $`A=a_i`$ would be equal to $`p_i`$, provided that the results of measurements performed on the system at times $`t_1`$ and $`t_2`$ are fixed. I hope this clarifies my definition and makes its meaning unambiguous, even though grammatically it might not be perfect.
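For reference, the probability entering this definition is given by the standard ABL expression (Aharonov, Bergmann, and Lebowitz),

$$p_i=\frac{|\langle \mathrm{\Psi }_2|𝐏_{A=a_i}|\mathrm{\Psi }_1\rangle |^2}{\sum _j|\langle \mathrm{\Psi }_2|𝐏_{A=a_j}|\mathrm{\Psi }_1\rangle |^2},$$

where $`|\mathrm{\Psi }_1\rangle `$ and $`|\mathrm{\Psi }_2\rangle `$ are the states fixed by the measurements at $`t_1`$ and $`t_2`$, and $`𝐏_{A=a_i}`$ is the projection on the eigenspace of $`A`$ with eigenvalue $`a_i`$.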
Again, Kastner’s arguments presented in her other paper, that this usage of my definition is “generally incorrect”, have been answered in detail elsewhere. Here I want only to comment on Kastner’s concluding sentence, in which she writes: “\[Vaidman’s\] definition, as it stands, is grammatically incorrect in a way that reflects its lack of clarity and rigor with respect to the physically crucial point concerning which measurement has actually taken place”. According to my definition of time-symmetrized counterfactuals, the measurement performed at time $`t`$ is not “the physically crucial point”; on the contrary, it plays no role in calculating the truth value of the counterfactual statement. I have noted this feature of my definition in the paper which Kastner criticized. The counterfactual statement is about the counterfactual world in which at time $`t`$ some action was performed instead of the measurement which was performed in the actual world. Thus, the question of which measurement was actually performed is clearly irrelevant. The result of the measurement in the actual world does not add any information either, because in the framework of standard quantum theory, to which the time-symmetrized formalism is applied, the results of the measurements at $`t_1`$ and $`t_2`$ (which are fixed by definition) yield a complete description of the system at time $`t`$. ## 5 What does it mean: probability of a history? I want to add a comment about a connection to the consistent histories approach advocated by Kastner and presented in the Appendix to her paper. Following Cohen, Kastner claims that the counterfactual usage of the ABL rule is valid only for cases corresponding to “consistent” histories. Since for my counterfactuals the ABL rule is always valid, I find this approach to be an unnecessary limitation which prevents one from seeing interesting results. In addition, I have to admit that I have never been able to understand the meaning of a basic concept in the consistent histories approach: the probability of a history. A particular history associates a set of values of observables with a sequential set of times. If the meaning of probability is the probability for this set to be the results of measurements of these observables at the appropriate times, then this is a well defined question in the framework of standard quantum theory. (The corresponding formula is given in the ABL paper.) Apparently, the meaning is something different. Indeed, in the example considered by Kastner, she uses the following expression: > “What is the probability that the system is in state $`C_k`$ at time $`t_1`$, given that it was preselected in state $`D`$ and post-selected in state $`F`$?” What is the meaning of “the system is in state $`C_k`$”? In this example the system (up to a known unitary transformation) is in state $`D`$. This is a standard quantum state evolving towards the future. In the framework of the TSQT one can also associate with the system at time $`t_1`$ the backward evolving state $`\langle F|`$, and say that the system is described by the two-state vector $`\langle F||D\rangle `$. However, from the text of Kastner’s paper it is obvious that she considers something different. She writes: “we consider a framework in which the system has some value $`C_k`$ associated with an arbitrary observable”. As I mentioned in Section 1, quantum observables do not possess values.
Thus, I cannot understand the meaning of Kastner’s sentence, “… we cannot use the ABL rule to calculate the probability of any particular value of either $`A`$ or $`B`$ at time $`t_1`$…”, because “probability of a value” is not defined. In this paper I have clarified the meaning of the concepts of the time-symmetrized quantum formalism: quantum counterfactuals and elements of reality (which are particular quantum counterfactuals). I have answered the recent criticism of these concepts by Kastner in this journal. Kastner has claimed that the three-box example is a paradox arising from an invalid counterfactual usage of the ABL rule. I have argued here that if one adopts my definition of quantum counterfactuals, the ABL rule is valid. The peculiarities of this example do not represent a true paradox, but rather the unusual features of pre- and post-selected elements of reality, such as the failure of the product rule. The current controversy can be added to the list of examples which led Bell to suggest abandoning the use of the word “measurement” in quantum theory. However, I do not think that abstaining from using problematic concepts is the most fruitful approach. I believe that physical and philosophical concepts which are vague and ambiguous should continue to be under discussion until the concepts and the structure of the physical theory are clear. I hope that the current discussion brings us closer to constructing solid foundations for quantum theory. This research was supported in part by grant 471/98 of the Basic Research Foundation (administered by the Israel Academy of Sciences and Humanities).
# The accelerated observer with back-reaction effects ## 1 Introduction Because of the equivalence principle, field theory in the presence of gravitational fields is related to that in accelerated systems. Indeed, subsequent to Hawking’s remarkable discovery that black holes behave as if they had an effective temperature of $`\hbar /8\pi Gm`$, with $`m`$ the mass of the black hole and $`G`$ Newton’s constant, it was found that a detector with uniform acceleration in the usual vacuum state of flat Minkowski space will be thermally excited to a temperature $`T=\hbar a/2\pi `$. It is therefore to be hoped that the study of the apparently simpler Unruh effect may shed light on the case of a curved space-time. In particular, in his original paper Unruh suggested a two-field model for a finite mass accelerated detector, which consisted of two scalars of differing mass. This corresponds to a detector of finite mass having two energy levels separated by a gap corresponding to the mass difference of the two fields. A point-like (infinite mass) monopole detector, again having a finite energy internal degree of freedom, was suggested, on the other hand, by DeWitt. It is clear that, if one considers a finite mass for the detector, one must not ignore the quantum mechanical smearing of the trajectory or the recoil back-reaction when a quantum is emitted (absorbed). For the former reason we have previously considered a massless neutral scalar field $`\phi `$ coupled to a finite mass quantum monopole detector described by a gaussian wave-packet, which was allowed to evolve according to an inverted harmonic oscillator potential, corresponding to constant acceleration. On examining the probability per unit time that the detector be excited by the absorption of scalar quanta, we observed that, on first considering the classical limit ($`\hbar \rightarrow 0`$) and then the point-like limit for the gaussian wave-packet, we reproduced the usual results (Unruh effect). If, however, one considered the point-like limit first, the detector decoupled from the scalar field. This rather surprising result is associated with the fact that, once quantum-mechanical evolution is considered, the Compton wavelength of the detector enters the theory. In our previous approach the scalar field only modified the internal degree of freedom and did not influence the motion or mass of the detector. The purpose of this letter is to include the back-reaction on the trajectory due to the emission (absorption) of massless scalar quanta (our motivation is an analogy with Bremsstrahlung, wherein a charged particle decelerates by emitting soft photons and follows a trajectory of changing energy and acceleration). This will be done by modifying our previous approach, in analogy with Unruh’s two-field model, in order to eliminate the need for the monopole moment (or internal degree of freedom) of the detector. We shall then describe a “detector” with changing acceleration and mass due to the emission (absorption) of scalar quanta, which will mimic black hole evaporation, thus generalizing the original Unruh effect. We use units such that $`c`$ and the Boltzmann constant are equal to one.
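To get a feeling for the scales involved, the sketch below restores SI units in $`T=\hbar a/2\pi `$ (i.e., $`T=\hbar a/2\pi ck_B`$); the acceleration value is purely illustrative.

```python
from math import pi
from scipy.constants import hbar, c, k as k_B

def unruh_temperature(a):
    """Unruh temperature in kelvin for proper acceleration a in m/s^2."""
    return hbar * a / (2 * pi * c * k_B)

# Even an enormous proper acceleration yields a sub-kelvin temperature:
print(unruh_temperature(1e20))  # ~0.4 K
```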
## 2 Two field model To illustrate our approach it will be sufficient (and easier) to consider a 2-dimensional Minkowski space-time with coordinates $`x^0`$, $`z`$ (for Minkowski time and space, respectively) and an effective Lagrangian density $`ℒ(z,x^0)`$ $`=`$ $`\int 𝑑\tau \underset{i=1}{\overset{2}{\sum }}\delta (x^0-x_i^0)`$ (1) $`\times \left[i\hbar \psi _i^{\ast }(z,\tau )\dot{\psi }_i(z,\tau )+\frac{\hbar ^2}{4m_i}\left|\frac{\partial }{\partial z}\psi _i(z,\tau )\right|^2-m_ia_i^2z_i^2|\psi _i(z,\tau )|^2\right]`$ $`+\frac{1}{2}\int 𝑑\tau \delta \left(x^0-\frac{1}{2}(x_1^0+x_2^0)\right)Q\left[\psi _2^{\ast }(z,\tau )\psi _1(z,\tau )+\psi _2(z,\tau )\psi _1^{\ast }(z,\tau )\right]\phi (z,\tau )`$ $`-\frac{1}{2}\eta ^{\mu \nu }\partial _\mu \phi \partial _\nu \phi ,`$ where $`x_i^0=a_i^{-1}\mathrm{sinh}a_i\tau `$, $`a_i`$ (positive) is the proper acceleration in the $`z`$ direction and a dot denotes differentiation with respect to the continuous proper time $`\tau `$ which parametrizes the (semi)classical trajectory followed by the observer $`\psi `$. The different $`\psi _i`$ could be regarded as different states of the observer (or detector). A few words on the origin of our Lagrangian are in order: in the classical limit the first term in $`ℒ`$ corresponds to the Lagrangian for an inverted harmonic oscillator, that is $`L_{cl}=-m\left(\dot{z}^2+a^2z^2\right),`$ (2) and the sign is chosen so that the corresponding Hamiltonian is equal to the (positive) particle (detector) mass (the opposite choice is made in ); the second term describes the interaction between the detector and the scalar field $`\phi `$ (whose Lagrangian is given by the last term). Furthermore, if one considers $`Q=Q(\tau )`$ as an operator acting on the Hilbert space of the detector’s internal energy states, with $`a_1=a_2`$ and $`m_1=m_2`$, one needs only one field $`\psi `$ ($`=\psi _1=\psi _2`$) and our previous results are reproduced. Instead we shall consider $`Q`$ a time independent c-number (coupling constant); therefore the interaction with a quantum $`\phi `$ is associated with the transition $`\psi _1\rightarrow \psi _2`$ corresponding to a change (which we shall always consider to be small) of acceleration and/or mass of the detector. From the first term in $`ℒ`$ one obtains $`\psi _i(z,\tau )`$ $`=`$ $`\left(\frac{\beta _i}{ib\sqrt{\pi }}\right)^{1/2}\left(\frac{1}{2b^2}-i\beta _i\mathrm{cosh}a_i\tau \right)^{-1/2}`$ (3) $`\times \mathrm{exp}\left\{i\beta _i\left[z^2\mathrm{cosh}a_i\tau +\frac{\alpha _i^2}{4b^2\beta _i^2}\left(a_i^{-2}\mathrm{cosh}a_i\tau -2za_i^{-1}-4z^2\beta _i^2b^4\mathrm{cosh}a_i\tau \right)\right]\right\}`$ $`\times \mathrm{exp}\left\{-\frac{\alpha _i^2}{2}\left(z-a_i^{-1}\mathrm{cosh}a_i\tau \right)^2\right\},`$ where $`\beta _i\equiv \frac{m_ia_i}{\hbar \mathrm{sinh}a_i\tau }`$ and $`\alpha _i\equiv \frac{2b\beta _i}{(1+4b^4\beta _i^2\mathrm{cosh}^2a_i\tau )^{1/2}}`$. This is a solution to the equation of motion (Schrödinger equation) with a gaussian wave-packet $`\psi _i(z,0)`$ of width $`b`$ as initial condition.
Since, as we have previously observed, the Unruh effect is obtained in the semiclassical limit ($`\hbar \rightarrow 0`$, $`\beta _i\rightarrow \mathrm{\infty }`$ and $`b`$ finite), it will be sufficient to use $`\psi _i(z,\tau )`$ $`\simeq `$ $`\frac{1}{\left(b\sqrt{\pi }\mathrm{cosh}a_i\tau \right)^{1/2}}\mathrm{exp}\left\{\frac{i}{\hbar }m_ia_iz^2\mathrm{tanh}a_i\tau -\frac{\left(z-a_i^{-1}\mathrm{cosh}a_i\tau \right)^2}{2b^2\mathrm{cosh}^2a_i\tau }\right\}.`$ (4) The energy of the above wave function, Eq. (4), may be evaluated, obtaining $`H_i=m_i\left[\frac{\hbar ^2}{4m_i^2}\langle \psi _i|\frac{\partial ^2}{\partial z^2}|\psi _i\rangle +a_i^2\langle \psi _i|z^2|\psi _i\rangle \right]=m_i,`$ (5) which coincides with the Hamiltonian computed along the classical trajectory $`z_i=a_i^{-1}\mathrm{cosh}a_i\tau `$, agrees with the rest mass of the detector and occurs in the imaginary part of the exponent of $`\psi _i`$ for $`\tau \ne 0`$ independently of the value of $`a_i`$. Let us now consider the probability for the detector to make a transition from $`a_1=a-\delta a/2`$, $`m_1=m-\delta m/2`$ to $`a_2=a+\delta a/2`$, $`m_2=m+\delta m/2`$, associated with a quantum $`\phi `$ of energy $`|\delta m|`$ ($`\delta m<0`$ for emission and $`\delta m>0`$ for absorption; we shall see later that $`\delta a`$ is related to $`\delta m`$). On using the interaction Lagrangian density (second term in Eq. (1)) one obtains $`P_{21}(\delta a,\delta m)`$ $`=`$ $`\frac{Q^2}{4\hbar ^2}\int _{\tau _1}^{\tau _2}𝑑\tau \int _{\tau _1}^{\tau _2}𝑑\tau ^{\prime }\int 𝑑z𝑑z^{\prime }\psi _1^{\ast }(z^{\prime },\tau ^{\prime })\psi _2(z^{\prime },\tau ^{\prime })\psi _1(z,\tau )\psi _2^{\ast }(z,\tau )`$ (6) $`\times \langle 0|\phi (z,x_c^0)\phi (z^{\prime },x_c^{0\prime })|0\rangle `$ $`=`$ $`\frac{Q^2}{4\hbar ^2}\int _0^Ld(\tau -\tau _1)\int _0^Ld(\tau ^{\prime }-\tau _1)\int 𝑑z𝑑z^{\prime }\psi _1^{\prime \ast }\psi _2^{\prime }\psi _1\psi _2^{\ast }`$ $`\times \left(-\frac{\hbar }{4\pi }\right)\mathrm{ln}\left[\left(z-z^{\prime }\right)^2-\left(x_c^0-x_c^{0\prime }-iϵ\right)^2\right],`$ where $`x_c^0(\tau )\equiv \left(x_1^0(\tau )+x_2^0(\tau )\right)/2`$, $`x_c^{0\prime }\equiv x_c^0(\tau ^{\prime })`$, $`\psi _i^{\prime }\equiv \psi _i(z^{\prime },\tau ^{\prime })`$. In evaluating $`P_{21}`$ it is convenient to examine the exponent of $`\psi _2^{\ast }\psi _1`$, $`\mathrm{ln}\psi _2^{\ast }\psi _1`$ $`=`$ $`-\frac{1}{2b^2}\left[\frac{\left(z-a_1^{-1}\mathrm{cosh}a_1\tau \right)^2}{\mathrm{cosh}^2a_1\tau }+\frac{\left(z-a_2^{-1}\mathrm{cosh}a_2\tau \right)^2}{\mathrm{cosh}^2a_2\tau }\right]`$ (7) $`+\frac{i}{\hbar }z^2\left(m_2a_2\mathrm{tanh}a_2\tau -m_1a_1\mathrm{tanh}a_1\tau \right)`$ $`-\frac{1}{2}\mathrm{ln}\left(b^2\pi \mathrm{cosh}a_1\tau \mathrm{cosh}a_2\tau \right).`$ On first considering the real part of the above, it is immediate to see that $`|\psi _2^{\ast }\psi _1|^2`$ is peaked at $`z_c=\frac{1}{2}\left(a_1^{-1}\mathrm{cosh}a_1\tau +a_2^{-1}\mathrm{cosh}a_2\tau \right)`$, up to $`𝒪[(\delta a)^2]`$ corrections. The imaginary part of Eq. (7) may then be evaluated at $`z=z_c`$, finally obtaining $`\frac{i}{2\hbar a}\left[\left(\delta m+m\frac{\delta a}{a}\right)\mathrm{sinh}2a\tau +2m\delta a\tau \right],`$ (8) again up to higher order corrections in $`\delta a`$, $`\delta m`$, and we have assumed that $`\delta a\tau `$ is small (this will constrain the coupling constant – we shall return to this).
The above imaginary part of the exponent is associated with the change of energy between the final (2) and initial state (1), which, from Eq. (5), is expected to be $`\delta m=m_2-m_1`$. Hence requiring that Eq. (8) be equal to $`-i\delta m\tau /\hbar `$ leads to $`\delta m=-m\frac{\delta a}{a},`$ (9) or $`m=\frac{f}{a},`$ (10) where $`f`$ is a positive constant. Thus, on demanding consistency (i.e., conservation of energy/momentum), we have obtained a relationship between mass and acceleration corresponding to the action of a constant force. The above approximations and Eq. (9) may be substituted into Eq. (6), and on introducing $`\tau =T+t/2`$, $`\tau ^{\prime }=T-t/2`$, $`T^{\prime }=T-\tau _1`$ one obtains $`P_{21}(\delta a)`$ $`=`$ $`\frac{Q^2}{8\pi \hbar }\int _0^L𝑑T^{\prime }\int _{-L+2|T^{\prime }-L/2|}^{+L-2|T^{\prime }-L/2|}𝑑te^{i\frac{f\delta a}{\hbar a^2}t}\mathrm{ln}\left[\frac{2}{a}\mathrm{sinh}\left(\frac{at}{2}-iϵ\right)\right]`$ (11) $`=`$ $`i\frac{Q^2a^2}{8\pi f\delta a}\int _0^L𝑑T^{\prime }\int _{-L^{\prime }}^{+L^{\prime }}𝑑t\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\frac{e^{i\frac{f\delta a}{\hbar a^2}t}}{\left(t-\frac{2\pi in}{a}-iϵ\right)},`$ which is of the desired form (see, e.g., ), and we have omitted an end point contribution in the integration by parts. One may evaluate the $`t`$ integral in Eq. (11) by closing the contour in the upper complex half plane ($`\delta a>0`$), thus including the poles at $`t=2\pi in/a`$, with $`n`$ a non-negative integer. One then obtains $`P_{21}(\delta a>0)\simeq \frac{Q^2a^2L}{4f\delta a}\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}e^{-\frac{2\pi f\delta a}{\hbar a^3}n},`$ (12) up to contour contributions associated with transient effects, the requirements for the neglect of which we shall return to afterwards. For the case $`\delta a<0`$ one performs the $`t`$ integration in Eq. (11) in the lower complex half plane. From the residues at $`t=-2\pi ni/a`$ one obtains the same result as Eq. (12) with $`\delta a`$ replaced by $`|\delta a|`$, and the sum now runs from $`1`$ to $`\mathrm{\infty }`$ since the pole at $`0+iϵ`$, which is responsible for spontaneous emission/absorption, is excluded. Thus one finally obtains $`P_{21}(\delta a)\simeq \frac{Q^2a^2L}{4f|\delta a|}\underset{n}{\overset{\mathrm{\infty }}{\sum }}e^{-\frac{2\pi f|\delta a|}{\hbar a^3}n}=\frac{Q^2a^2L}{4f|\delta a|}\frac{\sigma (\delta a)}{1-e^{-\frac{2\pi f\delta a}{\hbar a^3}}},`$ (13) where $`\sigma (x)=+1`$ for $`x>0`$ and $`\sigma (x)=-1`$ for $`x<0`$. In the above one notes the appearance of the familiar Planck distribution factor and the usual Unruh temperature $`\beta ^{-1}=\hbar a/2\pi `$ (remembering that $`\delta m=-f\delta a/a^2`$ is the change in energy) with $`a=(a_1+a_2)/2`$. Further we observe that for $`f=1/4G`$, $`\beta ^{-1}=\hbar /8\pi Gm`$, which is the Hawking temperature for a black hole of mass $`m`$. ## 3 Multiple emission/absorption and black hole analogy In the previous Section we have obtained an expression for the probability of emission (or absorption) of a scalar quantum by the accelerated observer with the corresponding back-reaction. In this section we illustrate a possible analogy with black holes. To start with, let us note that for a generic accelerated observer one needs an external static source to produce the constant force $`f`$.
Instead, if one identifies a black hole with its horizon (whose acceleration is equal to the surface gravity $`f/m=1/4Gm`$ and which is where Hawking emission takes place), the source of the force coincides with the black hole itself, and thus one does not have an external source (as is the case in our model, Eq. (1)). Hence, a change in the mass associated with the horizon is also a change in the strength (mass) $`m`$ of the source, while $`f=1/4G`$ is constant (which is a statement of the equivalence principle). One may proceed to calculate the change of energy (mass) per unit time (see for an analogous calculation for a black hole) for the accelerated observer emitting a “thermal” scalar field $`\phi `$ (see Eq. (13)) due to the coupling in our Lagrangian (1). In particular, the average loss of mass per unit time will then be given by $`\frac{\delta m}{L}\simeq -\frac{Q^2}{4}\int \frac{d\omega }{e^{\beta \hbar \omega }-1}=-\frac{Q^2f}{8\pi m}\int \frac{dx}{e^x-1},`$ (14) where we have identified $`\hbar \omega =f\delta a/a^2`$, $`x=\beta \hbar \omega `$, and we shall not concern ourselves with possible infrared divergences, which can be handled in the usual way. Since we want the mass to decrease for increasing time (emission), corresponding to the Unruh vacuum for a black hole, on identifying (for $`L`$ sufficiently small) $`\dot{m}\simeq \delta m/L`$ one obtains $`m(\tau )=m_0\left(1-\frac{\tau }{\tau _d}\right)^{1/2},`$ (15) which, on setting $`f=1/4G`$, corresponds to the evaporation of a 2-dimensional black hole (in the Schwarzschild time $`\tau `$) with initial ($`\tau =0`$) mass $`m_0`$ and decay time $`\tau _d\simeq 4\pi m_0^2/fQ^2=16\pi Gm_0^2/Q^2`$. Correspondingly $`a(\tau )=\frac{f}{m_0}\left(1-\frac{\tau }{\tau _d}\right)^{-1/2}.`$ (16) On using Eq. (14) it is straightforward to see that $`\beta \delta m`$ is constant. Clearly this result depends on the expression we obtained for $`P_{21}`$, in particular the presence of the $`1/\hbar \omega `$ factor in Eq. (13), which in turn depends on the form of the interaction in Eq. (1). The dependence of $`a`$ (or $`m`$) on time describes the semiclassical trajectory that the accelerated observer (black hole) follows. One may then imagine replacing the continuously changing acceleration $`a(\tau )`$ by a series of $`N`$ straight lines, each associated with equal time intervals $`L`$, and the emission of a scalar quantum every time the line slope changes. More precisely, one considers a trajectory beginning at $`\tau _0=0`$ for an accelerated detector having initial mass $`m_0`$ and ending with a final mass $`m_N`$ in a time interval $`\tau _N-\tau _0=NL`$ ($`\tau _N\le \tau _d`$) after emitting $`N`$ quanta. Clearly $`L`$ must be sufficiently small so that one may reasonably approximate $`a(\tau )`$ in the above fashion and higher order terms are negligible. According to the above one obtains for the probability of emission of $`N`$ quanta in the interval $`NL`$ $`P_N=\underset{r=1}{\overset{N}{\prod }}P_{r,r-1}\simeq \left(\frac{Q^2L}{4}\right)^N\underset{r=1}{\overset{N}{\prod }}\frac{(\delta m_r)^{-1}}{e^{\beta _r\hbar \omega _r}-1},`$ (17) where of course $`\delta m_r=m_r-m_{r-1}\equiv m(rL)-m((r-1)L)`$ and $`P_{r,r-1}`$ is given by Eq. (13) with $`a=(a_r+a_{r-1})/2`$ and $`\delta a=a_r-a_{r-1}`$. A remarkable point is that with the time dependence for the trajectory given in Eq.
(16) (evaporation) one has $`\beta _r\hbar \omega _r\simeq \beta \delta m=\text{constant}`$ and obtains $`P_N\simeq \left(\frac{4\pi m_0/f}{e^{\beta \delta m}-1}\right)^N\underset{r=1}{\overset{N}{\prod }}\left(1-\frac{rL}{\tau _d}\right)^{1/2},`$ (18) corresponding to a sequence of $`N`$ emissions at the most probable frequencies, that is, the ones for which the exponents in the denominators of Eq. (17) are minimum. Let us conclude by illustrating the constraints for the approximate validity of our approach. The final expression in Eq. (18) depends on the accelerating force $`f`$, the decay time $`\tau _d`$ (related to the coupling constant $`Q`$), the initial condition $`m_0`$ and the parameter $`L`$. The time interval $`L`$ is constrained by the requirements that $`\delta aL\ll 1`$ (see after Eq. (8)) and that the contour contributions in Eq. (11) are negligible, that is, that the exponent $`f\delta aL/\hbar a^2\gg 1`$. Thus one needs $`\frac{\hbar }{|\dot{m}|}\ll L^2\ll \frac{m^2}{f|\dot{m}|},`$ (19) which implies $`m^2/f\gg \hbar `$ or (for the black hole analogy $`f=1/4G`$) $`m\gg \sqrt{\hbar /G}\sim m_p`$, the Planck mass.
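As a closing numerical illustration of the evaporation law, Eq. (15) follows from $`\dot{m}\propto -1/m`$; the sketch below checks the closed form against a direct integration (the constant `K` is arbitrary and merely stands in for the prefactor $`Q^2f\int dx/(e^x-1)/8\pi `$).

```python
import math

m0, K = 1.0, 0.05            # illustrative units: dm/dtau = -K/m
tau_d = m0**2 / (2 * K)      # decay time implied by m(tau) = m0*sqrt(1 - tau/tau_d)
dtau = tau_d / 1_000_000

m, tau = m0, 0.0
while tau < 0.5 * tau_d:     # integrate to half the decay time
    m -= (K / m) * dtau
    tau += dtau

# Euler result vs. the closed form of Eq. (15): the two agree closely.
print(m, m0 * math.sqrt(1.0 - tau / tau_d))
```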
# Island diffusion on metal fcc(100) surfaces ## Abstract We present Monte Carlo simulations for the size and temperature dependence of the diffusion coefficient of adatom islands on the Cu(100) surface. We show that the scaling exponent for the size dependence is not a constant but a decreasing function of the island size and approaches unity for very large islands. This is due to a crossover from periphery dominated mass transport to a regime where vacancies diffuse inside the island. The effective scaling exponents are in good agreement with theory and experiments. Theoretical studies on island diffusion over the past two decades have led to expectations that even large islands may have substantial mobilities. A seminal study of diffusion of large islands on metallic surfaces was done by Voter, who was able to show that the diffusion coefficient $`D`$ of islands with more than $`s\simeq 10`$ atoms followed a simple scaling law with a constant scaling exponent $`\alpha `$: $$D\propto e^{-\beta E_L}s^{-\alpha },$$ (1) where $`\beta =1/(k_BT)`$ and $`E_L`$ is an effective energy barrier for island diffusion. Since then, a similar scaling law for large islands, with the scaling exponent $`\alpha `$ now depending on the diffusion mechanism, has been found in several simulation studies. However, the experimental confirmation of the early theoretical predictions had to wait for the development of advanced scanning tunneling microscope (STM) techniques. Only recently have experiments unequivocally confirmed that on metal surfaces even large islands of sizes up to 1000 atoms undergo diffusion, and that the diffusion coefficient obeys Eq. (1) with $`\alpha `$ indeed depending on the diffusion mechanism. Although the experiments and simulations have given strong support to the scaling law in Eq. (1), at least in a restricted region of sizes, the exact role of the various microscopic mechanisms in determining the value of $`\alpha `$ is still an open question. On the theoretical side, Khare et al. have explained island diffusion in terms of the shape fluctuations of the outer boundary, which makes it possible to relate the macroscopic motion of islands to the atomistic processes occurring on the boundary. The three basic mechanisms considered are particle diffusion along the periphery (PD), terrace diffusion (TD), where a particle can detach from and attach to the edge, and the evaporation and condensation limited diffusion mechanism (EC). The effective exponent $`\alpha (R)\equiv -\frac{1}{2}\partial \mathrm{ln}(D)/\partial \mathrm{ln}(R)`$ can be expressed as $$2\alpha =2+\frac{1}{1+(R/R_{st})(R_{su}/R_{st})}-\frac{2+(R/R_{st})(R_{su}/R_{st})}{1+(R/R_{st})(R_{su}/R_{st})+(R/R_{st})^2},$$ (4) where $`R=\sqrt{s/\pi }`$. The parameters $`R_{st}`$ and $`R_{su}`$ are related to the periphery and terrace diffusion coefficients, respectively. Allowing only one of the mass transport mechanisms EC, TD or PD at a time, for large enough islands the exponents 1/2, 1, 3/2 are obtained, respectively (see Fig. 3 in Ref. ). When both the TD and the PD mechanisms are present, a clear dependence of $`\alpha `$ on $`s`$ should be observed, and finally one should always find $`\alpha `$ = 1 for $`s\rightarrow \mathrm{\infty }`$. However, the crossover regime towards this limit occupies a rather narrow region in the parameter space and it has been assumed to be experimentally inaccessible.
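As a quick illustration of Eq. (4), the sketch below evaluates the effective exponent in the limiting cases quoted above (it follows the sign reconstruction of Eq. (4) given here; the parameter values are arbitrary, and `alpha_eff` is our name).

```python
def alpha_eff(R, R_st, R_su):
    """Effective size-scaling exponent from Eq. (4)."""
    x = (R / R_st) * (R_su / R_st)
    y = (R / R_st) ** 2
    return 0.5 * (2.0 + 1.0 / (1.0 + x) - (2.0 + x) / (1.0 + x + y))

print(alpha_eff(1e4, 1.0, 0.0))   # pure PD (R_su = 0), large island: -> 3/2
print(alpha_eff(1e6, 1.0, 1e-2))  # TD channel open, very large island: -> 1
print(alpha_eff(1e-3, 1.0, 1e2))  # EC-dominated regime: -> 1/2
```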
In contrast, most simulations of island diffusion on metallic fcc surfaces indicate values $`1.75<\alpha <2.1`$ that cannot be obtained from the theory of Khare et al. However, their approach is strongly supported by the recent experiments of Pai et al., whose careful STM measurements on the diffusion of Cu and Ag islands on Cu(100) and Ag(100) surfaces yielded $`\alpha \simeq 1.25`$ and $`\alpha \simeq 1.14`$, respectively, at room temperature. According to their explanation, these values of $`\alpha `$ are due to the lack of the TD mechanism, with $`R_{su}=0`$ and $`0.1<R/R_{st}<10`$ in Eq. (4). The parameter $`R_{st}`$ was interpreted as the average separation between adjacent kinks. However, the STM measurements were not able to directly confirm the nature of the microscopic diffusion mechanisms for the islands. In this report we will show, through extensive simulations of a realistic model of Cu islands on the Cu(100) surface, that these open questions can be resolved. First, our simulations show that there exists a long crossover towards $`\alpha =1`$ for this system. This indicates that large effective values of $`\alpha `$ may be obtained if only relatively small island sizes are considered. This may explain some of the large values reported in the literature in cases where there are no unusual diffusion mechanisms present. Second, we show that this crossover is actually due to PD dominated diffusion changing over to the TD dominated case, where the microscopic mechanism for the TD process comes from vacancy diffusion within large islands. In this way, the values of $`\alpha `$ obtained in Ref. can be explained by the existence of both PD and TD mechanisms for Cu islands, with vacancy diffusion now accounting for the latter. We also discuss the origin of persistent oscillations in $`D`$ for small island sizes, and vacancy island diffusion on the Cu(100) surface. The model system we consider here is based on kinetic Monte Carlo simulations of Cu adatoms on the Cu(100) surface, with energetics obtained from molecular dynamics simulations with the effective medium theory (EMT) potential. As discussed in detail elsewhere, the EMT barriers are in good agreement with available experimental data for this case. The hopping rate $`\nu `$ of an atom to a vacant nearest neighbor (NN) site can be well approximated by $$\nu =\nu _0e^{-\beta [E_S-\mathrm{min}(0,\mathrm{\Delta }_{NN})E_B]},$$ (5) where the attempt frequency $`\nu _0=3.06\times 10^{12}`$ s⁻¹ and the barrier for the jump of a single adatom is $`E_S=0.399`$ eV. When there is at least one atom diagonally next to the saddle point, the barrier is $`E_S=0.258`$ eV. The change in the bond number, $`-3\le \mathrm{\Delta }_{NN}\le 3`$, is the number of NN bonds in the final site minus the number of NN bonds in the initial site. The bond energy $`E_B=0.260`$ eV. We note that within the EMT, barriers on the Ag(100) and Ni(100) surfaces are very similar to the barriers on Cu(100) up to a scaling factor. We therefore expect that the features observed here may describe island diffusion on some other fcc(100) metal surfaces, too. In this work we prevent detachment of adatoms from the island; however, an adatom can still go around a corner, so that the PD mechanism is operational. It thus follows that $`E_S=0.258`$ eV for all the allowed jumps. Therefore, the energetics in Eq. (5) for the adatom islands is equivalent to the ferromagnetic Ising model with Metropolis transition rates and Kawasaki dynamics.
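A minimal sketch of the rate (5) with the quoted EMT parameters (the sign convention follows the reconstruction of Eq. (5) above; units are eV and kelvin, and `hop_rate` is our name):

```python
import math

NU0 = 3.06e12        # attempt frequency, 1/s
E_S = 0.258          # barrier with an atom next to the saddle point, eV
E_B = 0.260          # bond energy, eV
K_B = 8.617333e-5    # Boltzmann constant, eV/K

def hop_rate(delta_nn, T):
    """Eq. (5): rate of a jump changing the NN bond count by delta_nn."""
    barrier = E_S - min(0, delta_nn) * E_B
    return NU0 * math.exp(-barrier / (K_B * T))

# Bond-breaking jumps are exponentially suppressed; delta_nn = -2 is the
# rate-limiting step discussed below (barrier 0.258 + 2*0.260 = 0.778 eV):
for d in (0, -1, -2):
    print(d, hop_rate(d, 1000.0))
```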
We create the initial island of $`s`$ particles by adding atoms one by one to the nearest and next-nearest neighbor sites with a probability proportional to $`e^{\beta zE_B}`$, where $`0\le z\le 4`$ is the number of nearest neighbors. It is important to start the simulation with a well-thermalized island configuration, since the relaxation times for larger islands can become very long. After thermalization, we compute the tracer diffusion coefficient of the island, defined through $`D=\underset{t\rightarrow \mathrm{\infty }}{\mathrm{lim}}\frac{1}{4}d\langle r^2\rangle /dt`$, where $`\langle r^2\rangle `$ is the mean square displacement of the island. An efficient way of computing $`D`$ is given in Ref. We implement our Monte Carlo program using the BKL algorithm with a binary tree structure. In the algorithm, every trial leads to a jump. At low temperatures, a large number of unsuccessful trials inherent in the traditional Metropolis algorithm can be avoided. This allows very long simulation times in our system. We first simulate adatom island diffusion with sizes $`1\le s\le 10^4`$ at a high temperature $`T=1000`$ K. Our data, together with a fit of $`D`$ from Ref. (Eq. (36)), are shown in Fig. 1. For $`s\gtrsim 10`$ we clearly observe a crossover region where the effective scaling exponent behaves as predicted by Eq. (4) (see the inset in Fig. 1). For large islands, $`\alpha `$ finally approaches the limit $`\alpha =1`$ as predicted by theory. Due to the crossover, it is evident in Fig. 1 that for a limited window of sizes an effective exponent $`1<\alpha <3/2`$ can be obtained. A similar type of crossover region persists at lower temperatures, and we find that, for example, using the size window $`100\le s\le 1000`$ we obtain values of $`\alpha `$ that only weakly depend on temperature, i.e., $`1.12\le \alpha \le 1.23`$ at $`T=400,500,700`$ and $`1000`$ K. In particular, the overall behavior of $`D`$ for large values of $`s`$ at $`300`$ K is in very good agreement with the behavior found in the experiments of Pai et al. at room temperature, where $`80\le s\le 440`$ (see Fig. 2) ($`60\le s\le 870`$ for Ag). The behavior of $`D`$ for smaller island sizes, where Eq. (4) is not valid, is interesting. There are clear size-dependent oscillations present, as also reported by Fichthorn and Pal in their simulations. However, in the experiments such oscillations are easily smeared out by size fluctuations, as can be seen in Fig. 2, where the experimental data for $`D`$ follow closely the average behavior of $`D`$ in the same regime. At low temperatures there is a clear difference in $`D`$ between small islands of sizes $`n^2`$ and $`n^2\pm 1`$, where $`n`$ is an integer; in particular, $`D(n^2)`$ is much smaller than $`D(n^2+1)`$. This is consistent with the notion that the square configurations are very stable and therefore move slowly. However, for larger islands this is oversimplified, since entropy must be taken into account. In equilibrium, the probability for a given configuration to occur is $`P(s,E)\propto \omega (s,E)e^{-\beta E}`$, where $`\omega (s,E)`$ is the number of states of the island with size $`s`$ and energy $`E`$. There is no degeneracy at $`T=0`$ for an $`n^2`$ configuration, while the degeneracy of an “excited state” with one bond broken (e.g., an adatom moving along the edge) grows rapidly as a function of the island size. Thus, for $`n^2`$ islands the contribution of the low-mobility configuration to $`D`$ becomes less important as $`n`$ grows. Eventually, the oscillations damp out and the continuum theory becomes valid. At higher temperatures, this naturally occurs for smaller islands already.
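The definition of $`D`$ above can be estimated by fitting $`\langle r^2\rangle (t)`$; a toy sketch follows, with a plain 2-D random walk standing in for the island's center-of-mass motion (so $`D`$ should come out as 1/4 in step units; the script is illustrative, not our production code).

```python
import numpy as np

rng = np.random.default_rng(0)
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

walkers, nsteps = 500, 2000
steps = moves[rng.integers(0, 4, size=(walkers, nsteps))]
pos = np.cumsum(steps, axis=1)                 # trajectories, shape (walkers, t, 2)
r2 = (pos**2).sum(axis=2).mean(axis=0)         # <r^2>(t), ensemble averaged
t = np.arange(1, nsteps + 1)

D = np.polyfit(t, r2, 1)[0] / 4.0              # D = (1/4) d<r^2>/dt
print(D)                                       # ~0.25 for a unit-step walk
```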
We now turn to a discussion of the microscopic mechanisms for island diffusion. Most importantly, our simulation results indicate that there is a TD type of process involved in the island motion, in contrast to what was suggested in Ref. This process in the present case is due to vacancy diffusion inside the island. This conclusion is supported by the observation that the effective scaling exponent $`\alpha `$ approaches unity as the island size increases, even at room temperature, which indicates that the TD mode must be involved. Moreover, we have explicitly checked the role of the PD and TD mechanisms at $`1000`$ K and $`700`$ K with $`s\le 1000`$. We modified our model by first disallowing atoms from diffusing around corner sites, to prohibit the PD mechanism. In the second modification, we disallowed the creation of vacancies in the island, to prevent the TD mechanism from operating. Simulations of the two modified cases gave the scaling exponents $`\alpha =1.02`$ and $`\alpha =1.48`$, in complete agreement with the theoretical values for TD ($`\alpha =1`$) and PD ($`\alpha =3/2`$) dominated island diffusion. We have also measured the effective Arrhenius barriers for island diffusion for $`s=100,300,500,`$ and $`1000`$, and find that there is virtually no size dependence. Interestingly enough, whether the PD or the TD mechanism is present also makes very little difference. We have measured the barriers between $`700`$ K and $`1000`$ K for the PD and the TD dominated cases, with one of the mechanisms suppressed as discussed in the section above, and obtain $`0.77`$ eV and $`0.79`$ eV, respectively. The Arrhenius barrier for the non-modified case at $`400`$ K $`\le T\le 1000`$ K is $`0.79`$ eV. All these values are very close to the barrier of the corresponding rate-limiting process with $`\mathrm{\Delta }_{NN}=-2`$, namely $`E_S-\mathrm{min}(0,-2)E_B=0.258`$ eV $`+2\times 0.260`$ eV $`\approx 0.78`$ eV. This can be easily explained by microscopic considerations. In the PD process, two bonds are broken when a particle goes from a kink to a corner site. Symmetrically, in the TD process, the rate-limiting step is the creation of a vacancy, where an atom having three neighbors becomes a one-neighbor particle, i.e., a vacancy jumps into the island. Therefore, jumps with $`\mathrm{\Delta }_{NN}=-2`$ dominate the vacancy creation. An interesting question for (100) metal surfaces concerns vacancy island diffusion. In our model, the energetics for vacancy islands is very similar to the adatom case. Symmetrically to adatom islands, vacancies are prevented from detaching from a vacancy island, but atoms can detach from the edge into the pit. According to Eq. (5) the barriers for vacancies are then equivalent to the barriers for the adatoms, except that the jumps inside the vacancy islands for adatoms have $`E_S=0.399`$ eV, in contrast to $`0.258`$ eV for vacancies inside the islands. However, this difference is not important in practice. We have simulated vacancy island diffusion at various temperatures, and the diffusion coefficients are the same as for the adatom islands within the statistical errors. This is because the diffusion inside either an adatom or a vacancy island is not the rate-limiting process. To summarize, our model gives results in very good agreement with experiments and theory, and demonstrates that for at least Cu(100) surfaces, vacancy diffusion within the islands contributes significantly to the island mobility for larger islands.
Another interesting feature not easily seen in the experiments is the persistent oscillations in $`D`$ at low temperatures, which are due to entropic reasons; in fact, this is yet another example of the compensation effect seen in many other systems. Our model predicts that vacancy island diffusion on the Cu(100) surface is essentially similar to adatom island diffusion, since the rate-limiting mechanisms are symmetric for both cases. Finally, we note that the present model is somewhat idealized in the sense that the effect of other islands, surface steps etc. is neglected. It would be of great interest to study these issues, as well as island and vacancy diffusion on (100) surfaces of other metals, to further clarify the role of the various microscopic mechanisms. Acknowledgments: This work has been in part supported by the Academy of Finland, and J.H. in part by the Finnish Graduate School in Condensed Matter Physics. Corresponding author, e-mail address: jarkko.heinonen@helsinki.fi. FIG. 1. Adatom island diffusion coefficient $`D`$ vs. $`s`$ for $`1\le s\le 10^4`$ at $`T=1000`$ K ($`a\simeq 3.5`$ Å is the lattice constant of copper). Stars denote the results of our simulations, and the dashed line is a fit to Eq. (36) from Ref. ($`R_{st}=5.0\times 10^{-2}`$ and $`R_{su}=5.0\times 10^{-4}`$). Error bars are of the size of the symbols or smaller. The inset shows the effective exponent $`\alpha `$ from Eq. (4) using this fit. FIG. 2. Adatom island diffusion coefficient $`D`$ for $`1\le s\le 10^4`$ at $`T=`$ 1000, 700, 500, 400, and 300 K (from top to bottom). For $`T=`$ 300 K, the $`n^2`$ configurations are shown with stars. The dotted lines are just guides to the eye. The dashed lines indicate fits to Eq. (36) of Ref. ($`R_{st}`$ and $`R_{su}`$ are almost independent of $`T`$). Error bars are of the size of the symbols or smaller, except at $`T=`$ 300 K for $`s\lesssim 100`$, where the scatter in the data indicates the errors. The thick line at $`T=300`$ K shows the experimental results of Ref. for Cu. See text for details.
# Regular Flip Equivalence of Surface Triangulations ## 1. Results on flip equivalence Let $`F`$ be a closed surface and let $`\chi (F)`$ be its Euler characteristic. A singular triangulation of $`F`$ is a graph $`T`$ embedded in $`F`$ such that each face of $`F\setminus T`$ is bounded by an edge path of length three. We denote by $`v(T)`$, $`e(T)`$ and $`f(T)`$ the number of vertices, edges and faces of $`T`$. If $`T`$ is without loops and multiple edges and has more than three faces, then $`T`$ corresponds to a triangulation of $`F`$ in the classical meaning of the word; in order to avoid confusion, we use for it the term regular triangulation. Let $`e`$ be an edge of a singular triangulation $`T`$ and suppose that there are two distinct faces $`\delta _1`$ and $`\delta _2`$ adjacent to $`e`$. The faces $`\delta _1`$ and $`\delta _2`$ form a (possibly degenerate) quadrilateral, containing $`e`$ as a diagonal. A flip of $`T`$ along $`e`$ replaces $`e`$ by the opposite diagonal of this quadrilateral; see Figure 1. The flip is called regular if both $`T`$ and the result of the flip are regular triangulations. Two singular (resp. regular) triangulations $`T_1`$, $`T_2`$ of a closed surface are called flip equivalent (resp. regularly flip equivalent) if they are related by a finite sequence of flips (resp. regular flips) and isotopy. The following result is well known, and there are many proofs for it. There are interesting applications to the automatic structure of mapping class groups. ###### Lemma 1. Any two singular triangulations $`T_1`$ and $`T_2`$ of a closed surface $`F`$ with $`v(T_1)=v(T_2)`$ are flip equivalent.∎ One might ask whether any two regular triangulations of $`F`$ with the same number of vertices are *regularly* flip equivalent. The answer is “Yes” in special cases: any two regular triangulations of the sphere, the torus, the projective plane or the Klein bottle with the same number of vertices are regularly flip equivalent. But in general, the answer is “No”: it is known that there are 59 different triangulations of the closed oriented surface of genus six based on the complete graph with 12 vertices. Such a triangulation does not admit any regular flip; thus the different triangulations are not regularly flip equivalent. This paper is devoted to the proof of the following theorem. A preliminary version of this paper has appeared previously. ###### Theorem 1. Let $`F`$ be a closed surface and $`N(F)=9450-6020\chi (F)`$. Any two regular triangulations $`T_1`$ and $`T_2`$ of $`F`$ with $`v(T_1)=v(T_2)\ge N(F)`$ are regularly flip equivalent. Negami stated the mere existence of $`N(F)`$ without an estimate. The estimate in Theorem 1 is far from being best possible, at least for the surfaces up to genus one. The number $`N(F)`$ is negative if and only if $`F`$ is a sphere, in which case the statement is true since the transformation by regular flips is always possible, by Wagner’s Theorem. We assume in the following that $`F`$ is not the sphere. ## 2. Proof of the Theorem We need some additional notions. A contraction of a regular triangulation $`T`$ along an edge $`e`$ shrinks $`e`$ to a vertex and eliminates the two faces adjacent to $`e`$; see Figure 2. The edge $`e`$ is called contractible if the result of the contraction is still a regular triangulation. A regular triangulation $`T`$ is called irreducible if it does not contain contractible edges. The number of vertices of irreducible triangulations is bounded by the following result of Nakamoto and Ota.
###### Proposition 1. If $`T`$ is an irreducible triangulation of a closed surface $`F`$ which is not the sphere, then $`v(T)\le 270-171\chi (F)`$. ∎ Let $`\delta `$ be a face of a regular triangulation $`T`$. A face subdivision of $`T`$ along $`\delta `$ replaces $`\delta `$ by the cone over its boundary (see Figure 3), and the result is denoted $`s_\delta T`$. If $`\delta `$ and $`\delta ^{}`$ are two faces of $`T`$, then $`s_\delta T`$ and $`s_\delta ^{}T`$ are regularly flip equivalent, which is easy to see. Let $`T_1`$ and $`T_2`$ be regular triangulations and $`\delta _1`$, $`\delta _2`$ faces of $`T_1`$, $`T_2`$. It follows that if $`T_1`$ and $`T_2`$ are regularly flip equivalent, then so are $`s_{\delta _1}T_1`$ and $`s_{\delta _2}T_2`$. If $`T_2`$ is obtained from $`T_1`$ by $`m`$ successive face subdivisions, then we write $`T_2=s^m(T_1)`$. The notation is ambiguous, but by the preceding remark only up to regular flip equivalence. After these preliminaries, we can cite a lemma of Negami. ###### Lemma 2. Let $`T_1`$ and $`T_2`$ be regular triangulations of $`F`$. If $`T_2`$ is obtained by contracting some edges of $`T_1`$, then $`T_1`$ is regularly flip equivalent to $`s^m(T_2)`$, with $`m=v(T_1)-v(T_2)`$.∎ Let $`T^{}`$ denote the barycentric subdivision of a singular triangulation $`T`$ of $`F`$. ###### Lemma 3. Let $`T_1`$ and $`T_2`$ be two singular triangulations of $`F`$ with $`v(T_1)=v(T_2)`$. Then $`T_1^{\prime \prime }`$ and $`T_2^{\prime \prime }`$ are regularly flip equivalent. ###### Proof. It is easy to verify that $`T_1^{\prime \prime }`$ and $`T_2^{\prime \prime }`$ are regular triangulations of $`F`$. By Lemma 1, we know that $`T_1`$ and $`T_2`$ are related by not necessarily regular flips. Let $`T_2`$ be obtained from $`T_1`$ by a single flip. Then $`T_1^{}`$ can be transformed into $`T_2^{}`$ by the sequence of flips and isotopies that is explicitly given in Figure 4. The edges of $`T_i`$ are drawn bold, and the edges of $`T_i^{}`$ under flip are dotted. None of these flips introduces a loop. It is possible that some flip for $`T_i^{}`$ introduces a multiple edge. This happens only if some of the vertices $`A`$, $`B`$, $`C`$ and $`D`$ of $`T_1`$ coincide. We iterate the construction, i.e., we replace each flip for $`T_i^{}`$ by a flip sequence for $`T_i^{\prime \prime }`$. Since the four vertices of $`T_1^{}`$ of each quadrilateral involved in a flip are pairwise distinct, none of these flips introduces a loop or a multiple edge; thus all flips are regular. ∎ ###### Corollary 1. Let $`T_1`$ and $`T_2`$ be two regular triangulations of $`F`$ with $`v(T_1)=v(T_2)`$. Then $`s^m(T_1)`$ and $`s^m(T_2)`$ are regularly flip equivalent, with $`m=35\left(v(T_1)-\chi (F)\right).`$ ###### Proof. For any singular triangulation $`T`$ of $`F`$, we have $`v(T^{})=v(T)+e(T)+f(T)`$. Since $`2e(T)=3f(T)`$, we obtain $`f(T)=2(v(T)-\chi (F))`$ and $`v(T^{})=6v(T)-5\chi (F)`$. It follows easily that $`v(T_i^{\prime \prime })-v(T_i)=m`$ for $`i=1,2`$. One obtains $`T_i^{\prime \prime }`$ from $`T_i`$ by $`m`$ face subdivisions and some regular flips; see Figure 5 for the first barycentric subdivision. The figure shows the neighbourhood of a face, and the edges under flip are dotted. So $`s^m(T_1)\sim T_1^{\prime \prime }\sim T_2^{\prime \prime }\sim s^m(T_2)`$ by the preceding Lemma. ∎ Now we finish the proof of Theorem 1.
Let $`T_1`$, $`T_2`$ be two regular triangulations of $`F`$ with $`v(T_1)=v(T_2)=M\ge N(F)`$, where $$N(F)=35\left(\left(270-171\chi (F)\right)-\chi (F)\right)=9450-6020\chi (F).$$ By contractions along some edges, $`T_i`$ ($`i\in \{1,2\}`$) can be transformed into an irreducible triangulation $`S_i`$. By Lemma 2, $`T_i`$ is regularly flip equivalent to $`s^{M-v(S_i)}S_i`$. By Proposition 1 and Corollary 1, $`s^{N(F)-v(S_1)}S_1`$ and $`s^{N(F)-v(S_2)}S_2`$ are regularly flip equivalent, and so are also $`s^{M-v(S_1)}S_1`$ and $`s^{M-v(S_2)}S_2`$ after further face subdivisions. Therefore also $`T_1`$ and $`T_2`$ are regularly flip equivalent. q.e.d.
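The vertex counting used in Corollary 1 and in the constant $`N(F)`$ can be checked mechanically. A small sketch (function names are ours), using only $`2e(T)=3f(T)`$ and Euler's formula $`v-e+f=\chi `$:

```python
# Counts of a closed-surface triangulation under barycentric subdivision.
def subdivided_vertices(v, chi):
    f = 2 * (v - chi)      # f(T) = 2(v - chi)
    e = 3 * f // 2         # 2e(T) = 3f(T)
    return v + e + f       # v(T') = v + e + f = 6v - 5chi

def m_of_corollary(v, chi):
    """m = v(T'') - v(T) = 35(v - chi), via two subdivisions."""
    return subdivided_vertices(subdivided_vertices(v, chi), chi) - v

print(m_of_corollary(100, 0))  # torus (chi = 0): 35 * 100 = 3500
print(m_of_corollary(10, 2))   # sphere (chi = 2): 35 * 8 = 280
```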
# A Database of COBE-Normalized CDM Simulations (Abbreviated Version) ## 1 INTRODUCTION ### 1.1 Importance of Numerical Simulations in Cosmology Observations of the nearby universe reveal the existence of the large-scale structure. The visible matter is clumped into galaxies, and these galaxies are not distributed uniformly in space, but instead grouped into structures such as clusters, filaments, and walls, separated by deep voids. Velocity structures (deviations from Hubble flow) are observed as well. By contrast, observations of the Cosmic Microwave Background (CMB) reveal that the universe was extremely uniform near the epoch of recombination. Hence, the present large-scale structure must result from an evolutionary process that took place between recombination and the present. The most widely accepted scenario assumes that the present large-scale structure originates from the growth, by gravitational instability, of primordial density fluctuations present in the early universe. Any fluctuation larger than the Jeans length can grow by gravitational instability once the universe becomes matter-dominated. If the primordial density fluctuations originate from a Gaussian random process (the usual assumption), then the primordial density field is entirely described in terms of its power spectrum $`P(k)`$. The particular form of the power spectrum essentially depends upon the amount and nature of the various components (baryonic matter, dark matter, cosmological constant, and so on) present in the universe. If we assume a certain power spectrum, we can describe the primordial density field, and the formation and evolution of large-scale structure in the universe becomes an initial value problem: starting from the primordial density field, we can compute its evolution using the laws of general relativity. Unfortunately, this initial value problem is far too complex to be solved analytically. We can simplify the problem by noticing that the largest structures observed in the universe are significantly smaller than the horizon. This enables us to describe the evolution of the large-scale structure using Newtonian mechanics instead of general relativity (Peebles 1980, Chapter 2). Even so, the general problem cannot be solved analytically. This leaves two possible approaches: analytical approximations, or numerical simulations. Two different kinds of analytical approximations have been considered. The first one is based on the fact that the initial fluctuations are small. We can expand the equations describing the evolution of these fluctuations in powers of the fluctuations, and solve them using perturbation theory. This approach is extremely useful in describing the early evolution of the fluctuations, and has led to very important results. However, it becomes inapplicable as soon as the fluctuations become nonlinear. Such fluctuations still have to grow by a factor of $`10^2`$ to reach the density of a cluster of galaxies, and $`10^5`$ to reach the density of a galaxy. Clearly, perturbation theory cannot be used to describe the late stages of large-scale structure formation. The second analytical approach consists of considering systems with a particular geometry (see, for instance, Zel’dovich 1970; Peebles 1980, §§19–21; Fillmore & Goldreich 1984a, 1984b; Bertschinger 1985a, 1985b). The most popular analytical models for large-scale structure formation are the Spherical Model, which assumes spherical symmetry, and the Pancake Model, which assumes planar symmetry.
An important assumption of some of these models is that the system considered is isolated. For instance, the spherical model can describe the evolution of a self-gravitating spherical overdensity, but we must assume that any tidal influence from nearby structures can be neglected, an assumption that might be valid at late times but certainly not at early times. These analytical approximations can therefore describe the universe at early times or at late times, but not both. This problem can be solved by using mixed schemes, which combine various analytical approximations in a way that allows an analytical description of the evolution of large-scale structure at all epochs. The most important ones are the Press-Schechter Approximation (Press & Schechter 1979), which combines perturbation theory with the spherical model, and the Zel'dovich Approximation (Zel'dovich 1970), which combines perturbation theory with the pancake model.

The alternative consists of using numerical simulations. Unlike analytical models, numerical simulations suffer from problems such as limited resolution and numerical noise. Also, simulations provide very little insight into the physical processes taking place, compared to analytical models. However, numerical simulations can describe the evolution of the large-scale structure entirely, from the initial conditions all the way to the present, without making any approximation or imposing any restriction on the geometry of the system. Cosmological N-body simulations have played a central role in the study of the formation and evolution of large-scale structure in the universe during most of the last two decades. These simulations have contributed to improving our understanding of the physical process of gravitational instability that leads to structure formation, have enabled us to conceive and test various cosmological scenarios, and have produced simulated universes that can be directly compared with observations (Efstathiou & Eastwood 1981; Centrella & Melott 1983; Klypin & Shandarin 1983; Miller 1983; Shapiro, Struck-Marcell, & Melott 1983; White, Frenk, & Davis 1983; Davis et al. 1985; Efstathiou et al. 1985; Barnes & Hut 1986, 1989; Evrard 1986, 1987; Melott 1986; White et al. 1987a, 1987b; Frenk et al. 1988; Gramann 1988; Carlberg & Couchman 1989; Villumsen 1989; West, Oemler, & Dekel 1989; Couchman 1991; Fukushige et al. 1991; Hernquist, Bouchet, & Suto 1991; Martel 1991a; Moutarde et al. 1991; West, Villumsen, & Dekel 1991; Bouchet & Hernquist 1992; Fry, Melott, & Shandarin 1992; Park et al. 1992; Bahcall, Cen, & Gramann 1993; Gramann, Cen, & Bahcall 1993; Melott & Shandarin 1993; Babul et al. 1994; Pen 1995; Colombi, Bouchet, & Hernquist 1996; Moore, Katz, & Lake 1996; Yess & Shandarin 1996; Klypin, Nolthenius, & Primack 1997; Kravtsov, Klypin, & Khokhlov 1997; Navarro, Frenk, & White 1997; Gross et al. 1998; Thomas et al. 1998).¹

¹ Cosmological numerical simulations have been used intensively since 1981, and their results have appeared in hundreds of publications, so this list is necessarily incomplete. We decided to include only the key publications by each research group. We also excluded one- and two-dimensional simulations (for brevity), and simulations with hydrodynamics, which involve the next generation of numerical algorithms.
### 1.2 The Standard Model

Theoretical developments in particle theory and early-universe physics, combined with numerical simulations and observations of the large-scale structure of the universe, led to the emergence during the 1980's of what became known as the Standard Cosmological Model. The inflationary scenario (Guth 1981; Linde 1982; Albrecht & Steinhardt 1982) requires that the universe be spatially flat to great accuracy. In a matter-dominated universe, in the absence of any exotic components such as a nonzero cosmological constant $\Lambda$, this requires that the mean density of the universe equal its critical density, or, alternatively,

$$\Omega_0=1,$$ (1)

where $\Omega_0\equiv8\pi G\bar{\rho}_0/3H_0^2$ is the density parameter, $\bar{\rho}_0$ is the mean density of the universe, and $H_0$ is the Hubble constant (throughout this paper, we use subscripts 0 to designate present values). In this scenario, the large-scale structure of the universe that we observe today results from the growth, by gravitational instability, of small density perturbations present at recombination, which originate from quantum processes in the early universe.

There are two difficulties with this scenario. First, there is strong evidence that the amount of "ordinary matter" in the universe is insufficient to satisfy equation (1). Primordial nucleosynthesis provides a stringent upper limit to the baryonic content of the universe, and shows that the present baryon contribution to the density parameter, $\Omega_{\rm B0}$, is less than $0.026\,h^{-2}$, where $h$ is the Hubble constant in units of $100\,{\rm km\,s^{-1}\,Mpc^{-1}}$ (Krauss & Kernan 1995; Copi, Schramm, & Turner 1995; Krauss 1998). Furthermore, dynamical studies of rich clusters of galaxies show that the contribution to the density parameter of the matter that clusters at that scale is $\Omega_{\rm clusters}=0.2\pm0.1$ (Gott et al. 1974; Carlberg et al. 1996; Lin et al. 1996). Second, observations of the temperature fluctuations in the Cosmic Microwave Background (CMB) provide an upper limit to the density fluctuations at recombination, of order $\delta\rho/\rho\sim10^{-5}$. Such fluctuations would grow by gravitational instability to reach an amplitude of order $10^{-2}$ by the present, clearly insufficient to explain the origin of large-scale structure and galaxies.

These two difficulties were solved by postulating the existence of a component known as dark matter. This dark matter, which can be detected only through its gravitational influence, makes up the difference between the amount of matter required to satisfy equation (1) and the amount of matter that is observed or indirectly measured. We can then reconcile the dynamical estimates of $\Omega_0$ with equation (1) by assuming that the dark matter is distributed more smoothly than the luminous matter that makes up galaxies and clusters, an idea known as biasing (Kaiser 1984). However, since the dynamical estimates of $\Omega_0$ exceed the limit imposed by primordial nucleosynthesis, some amount of dark matter must be clustered on galactic and cluster scales, even though the bulk of the dark matter is smoothly distributed in space.
Also, density fluctuations in the dark matter start growing when the universe becomes matter-dominated, and by the time recombination occurs, the dark matter fluctuations have already grown by a factor of $20h^2/\Omega_{\rm B0}$ (Kolb & Turner 1990, §9.5). These fluctuations provide potential wells into which the baryons fall soon after recombination. This enables us to reconcile the CMB measurements (which are sensitive to the fluctuations in the baryonic matter, but not the dark matter) with the existence of galaxies and large-scale structure.

Many candidates for dark matter have been suggested, several of them emerging from particle theory. These various forms of dark matter are usually classified as Hot Dark Matter (HDM) and Cold Dark Matter (CDM). Cosmological numerical simulations performed in the 1980's have shown that an $\Omega_0=1$ universe in which most matter is in the form of Cold Dark Matter, and in which the distribution of luminous matter is biased relative to the distribution of dark matter, can successfully reproduce all observations of large-scale structure that were then known (see, e.g., Davis et al. 1985), while satisfying the constraints imposed by the inflationary scenario, primordial nucleosynthesis, and the CMB (unlike HDM, which has difficulties explaining structure formation at galactic scales). These simulations played a central role in establishing the $\Omega_0=1$, biased CDM model as the Standard Cosmological Model.

### 1.3 Non-Standard Models

The Standard Model, which has been hailed by many as the "final answer" to the problem of structure formation and evolution in the universe, ran into serious problems during the 1990's. In this subsection, we briefly review these problems.

#### 1.3.1 The Age Problem

In a flat, matter-dominated universe, the age of the universe $t_0$ is 2/3 of the Hubble time, that is, $t_0=2/(3H_0)=6.52\times10^9\,h^{-1}\,{\rm years}$. For $h$ in the range 0.5–1, this corresponds to an age in the range $6.52$–$13.04\times10^9\,{\rm years}$. Measurements of the ages of globular clusters indicate that the oldest clusters are certainly older than $9.5\times10^9\,{\rm years}$, and most likely in the range $11$–$13\times10^9\,{\rm years}$ (Jimenez et al. 1996; Chaboyer 1998; Chaboyer et al. 1998; Jimenez 1998). These measurements are only marginally consistent with the standard model, in the sense that they require a Hubble constant near its smallest possible value, $h\approx0.5$. However, recent observations have significantly reduced the range of plausible values for the Hubble constant, showing that $h$ is likely to be in the range 0.65–0.75 (Freedman 1998, and references therein). In the standard model, this corresponds to an age in the range $8.69$–$10.03\times10^9\,{\rm years}$. The upper end of this range is still compatible with the measured ages of globular clusters, but just barely.

#### 1.3.2 Large-Scale Structure

Until 1992, the amplitude of the primordial density fluctuation power spectrum was unknown. We were free to tune this amplitude in order to reproduce the correct amount of galaxy clustering observed today. The latter is usually characterized by the rms density fluctuation $\sigma_8$ at a scale of $8\,h^{-1}\,{\rm Mpc}$. Observations of clusters of galaxies show that $\sigma_8\approx0.6$ for the standard model (Viana & Liddle 1996; see eq. [42] below). The discovery by the COBE DMR experiment of degree-scale fluctuations in the CMB temperature (Smoot et al.
1992) has eliminated this freedom by fixing the amplitude of the power spectrum. A COBE-normalized Standard Model produces too much structure at cluster scales (Bartlett & Silk 1993). The resulting value of $\sigma_8$ is 1.22 for $h=0.5$ (Bunn & White 1997), too large by a factor of 2, and becomes even larger for larger $h$. By combining the COBE result with observations of the present large-scale structure, we obtain a constraint on the quantity $\Omega_0h$, which is

$$0.2\lesssim\Omega_0h\lesssim0.3$$ (2)

(Peacock & Dodds 1994). Since $h$ is certainly larger than 0.5, this implies $\Omega_0<0.6$.

#### 1.3.3 The Baryon Catastrophe

In the Standard Model, the baryon fraction of the universe is small. Primordial nucleosynthesis imposes the constraint that $\Omega_{\rm B0}h^2<0.026$ (Krauss & Kernan 1995; Copi, Schramm, & Turner 1995; Krauss 1998). For $h=0.65$, this corresponds to $\Omega_{\rm B0}=0.061$. Hence, if $\Omega_0=1$, at most 6% of the matter of the universe is composed of baryons, the rest being dark matter. However, observations of X-ray clusters reveal that the baryon fraction in these clusters is $0.1h^{-1.5}$, or 19% for $h=0.65$ (Briel, Henry, & Böhringer 1992). Hence, X-ray clusters contain a large excess of baryons relative to dark matter compared with the average values in the universe, a situation often referred to as the baryon catastrophe. This problem could be solved if we could think of a physical process that would concentrate the baryons inside clusters, creating a bias relative to the dark matter. However, no such physical process is known. It is much simpler to assume that the density parameter is less than unity. In this case, the universal baryon fraction is not $\Omega_{\rm B0}$, but $\Omega_{\rm B0}/\Omega_0$. This baryon fraction is $0.1h^{-1.5}$ according to observations of X-ray clusters, and smaller than $0.026/(\Omega_0h^2)$ according to primordial nucleosynthesis. Combining these two results, we get

$$\Omega_0h^{1/2}<0.26.$$ (3)

This rules out the Standard Model (unless $h<0.07$!). For $h>0.5$, this limit becomes $\Omega_0<0.37$.

#### 1.3.4 Evolution of Cluster Abundance

In the Standard Model, the density parameter $\Omega$ is unity at all times, and density fluctuations can grow by gravitational instability all the way to the present. In other models, density fluctuations can grow at early times, when $\Omega$ is near unity, but eventually $\Omega$ drops significantly below unity, and the density fluctuations "freeze out." Hence, in a model with $\Omega_0<1$, the present abundance of clusters should be comparable to the abundance immediately after freeze-out, since not much growth has taken place since then. Conversely, in the Standard Model, the present abundance of clusters should be larger than the past one, since the growth of density fluctuations never freezes out. Bahcall, Fan, & Cen (1997) and Bahcall & Fan (1998) have determined the masses of three massive distant clusters, located at redshift $z>0.5$, and showed that in an $\Omega_0=1$ universe there should be only $\sim10^{-3}$ such clusters at $z>0.5$. They conclude that the density parameter is in the range

$$0.1<\Omega_0<0.35$$ (4)

(Bahcall 1999).
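For concreteness, the arithmetic behind the limits quoted above is easily checked; the following lines (our illustration, not part of the original analysis, using only the published limits cited in the text) reproduce eq. (3) and the numbers for $h=0.65$:

```python
h = 0.65                          # Hubble constant in units of 100 km/s/Mpc

omega_b_max = 0.026 / h**2        # nucleosynthesis: Omega_B0 h^2 < 0.026 -> 0.062
f_b_cluster = 0.1 * h**-1.5       # X-ray cluster baryon fraction         -> 0.19

# Requiring Omega_B0 / Omega_0 >= 0.1 h^-1.5 together with the
# nucleosynthesis limit gives eq. (3): Omega_0 h^(1/2) < 0.026/0.1 = 0.26.
omega0_max = 0.26 / h**0.5        # -> 0.32 for h = 0.65
print(omega_b_max, f_b_cluster, omega0_max)
```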
#### 1.3.5 Distant Type Ia Supernovae

The relationship between the luminosity distance $D_L$ and the redshift is model-dependent. If standard candles can be observed at cosmological distances, then the $D_L(z)$ relationship can be inferred, and limits can be placed on the values of the cosmological parameters. This method was recently applied to samples of distant ("high-$z$") Type Ia supernovae by two independent research teams (Garnavich et al. 1998, and references therein; Perlmutter et al. 1998, and references therein). Applied to models with a nonzero cosmological constant $\Lambda$, their observations provide severe constraints in the $\Omega_0$–$\lambda_0$ phase space (where $\lambda_0\equiv\Lambda/3H_0^2$). Not only is the Standard Model excluded with a high degree of confidence, but open models ($\Omega_0<1$) without a cosmological constant are also excluded, unless $\Omega_0$ is very small. Observations of the CMB (White 1998; Tegmark et al. 1998) provide a different constraint, one that does not rule out the Standard Model. However, combining the CMB and Type Ia supernovae observations leads to separate determinations of $\Omega_0$ and $\lambda_0$. The preferred values are $\Omega_0\approx0.3$ and $\lambda_0\approx0.7$.

#### 1.3.6 Anthropic Considerations

In models such as chaotic inflation, in which the observed big bang is just one of an infinite number of expanding regions in each of which the fundamental property takes a different value (Linde 1986, 1987, 1988), and models in which a state vector is derived for the universe which is a superposition of terms with different values of the fundamental property (e.g., Hawking 1983, 1984; Coleman 1988), the probability of observing any particular values of the cosmological parameters is conditioned by the existence of observers in those "subuniverses" in which the parameters take these values (Efstathiou 1995; Vilenkin 1995; Weinberg 1996; Martel, Shapiro, & Weinberg 1998). This probability is proportional to the fraction of matter which is destined to condense out of the background into mass concentrations large enough to form observers. Using this approach, Martel et al. (1998) calculated the relative likelihood of observing any given value of the cosmological constant $\Lambda$ within the context of the flat CDM model normalized to COBE, and found that small but finite values of the cosmological constant, in the range suggested by observations, are favored over the value $\Lambda=0$. Garriga, Tanaka, & Vilenkin (1998) have performed a similar analysis, but applied to the density parameter, and found that intermediate values of $\Omega_0$ are more likely to be observed than values near 0 or near 1. Anthropic arguments do not favor the values that the parameters take in the Standard Model, $\Omega_0=1$ and $\lambda_0=0$.

#### 1.3.7 Alternatives to the Standard Model

The problems listed above strongly argue against the Standard Model, and force us to consider alternatives. The age problem, the large-scale structure problem, and the baryon catastrophe can all be solved by considering Open CDM, or OCDM, models, in which the density parameter $\Omega_0<1$. However, such models do not satisfy the flatness requirement of inflation.
CDM models with a nonzero cosmological constant $\lambda_0$ equal to $1-\Omega_0$, known as $\Lambda$CDM models, satisfy this flatness requirement, and the addition of the cosmological constant improves the age and large-scale structure problems, while providing a better agreement with the supernovae data. Recently, several authors have shown that it is possible to reconcile the inflationary scenario with an open universe, thus eliminating the flatness requirement (Ratra & Peebles 1994; Bucher, Goldhaber, & Turok 1995; Yamamoto, Sasaki, & Tanaka 1995; Linde 1995; Linde & Mezhlumian 1995). This not only supports open, matter-dominated models, but also allows for the possibility of an open universe with $\lambda_0\neq0$ and $\Omega_0+\lambda_0<1$.

The large-scale structure problem can also be solved by introducing a "tilt" in the primordial power spectrum. In this Tilted CDM, or TCDM, model, the primordial power spectrum $P(k)$ at large scales does not have the Harrison-Zel'dovich form $P(k)\propto k$, but instead varies as $P(k)\propto k^n$, where the primordial exponent $n$ can differ from unity. The universe might also contain a mixture of two different forms of dark matter, one cold and one hot, a model known as CHDM. Finally, the universe might contain a smooth component whose pressure $p$ and density $\rho$ are related by an equation of state $p=w\rho$, a concept known as "quintessence" (Caldwell, Dave, & Steinhardt 1998; see also Fry 1985; Charlton & Turner 1987; Silveira & Waga 1994; Martel 1995; Martel & Shapiro 1998). The cosmological constant is a particular form of quintessence, corresponding to $w=-1$; other forms have been suggested, such as domain walls, textures, or strings.

The Standard Model had no free parameters. The values of $\Omega_0$ and $\lambda_0$ were fixed at 1 and 0, respectively. The value of $h$ had to be close to 0.5 in order to avoid conflicts with the ages of globular clusters, and the primordial power spectrum was assumed to be a CDM spectrum with no tilt. With the emergence of alternative models, there are now many free parameters. The density parameter $\Omega_0$ no longer has to be unity. The cosmological constant $\lambda_0$ can be nonzero and, if open inflation is correct, a nonzero $\lambda_0$ does not have to be equal to $1-\Omega_0$. The Hubble constant can vary over a certain range without conflicting with observations, and the slope $n$ of the primordial power spectrum does not have to be 1. In models such as CHDM and quintessence models, there are additional parameters: the contribution of each component to the density parameter, and, in the case of quintessence models, the coefficient $w$ appearing in the equation of state.

During the 1980's, N-body simulations played a central role in establishing the Standard Model, and then went more or less into hibernation, as efforts were invested into adding more physics to the original algorithms (hydrodynamics in particular). The emergence of alternative cosmological models has led to a renewal of interest in N-body simulations. Such numerical simulations are essential for testing cosmological models against observations. Furthermore, they are useful from a theoretical viewpoint, since they can reveal how each cosmological parameter affects the process of large-scale structure formation.
### 1.4 The Need for a Database

Numerical methods such as the Particle-Mesh algorithm (PM) and the Particle-Particle/Particle-Mesh algorithm ($\mathrm{P^3M}$) are well documented. Details of the algorithms can be found in textbooks (e.g., Hockney & Eastwood 1981) and papers (e.g., Efstathiou et al. 1985). Hence, any researcher can easily access all the information and knowledge necessary to develop such algorithms. However, the effort required to develop, test, and optimize a PM or $\mathrm{P^3M}$ algorithm from scratch can be quite substantial, and can be regarded as a waste of effort, since it essentially amounts to "reinventing the wheel." Also, performing simulations with large numbers of particles can demand a substantial investment in resources such as computer time, which is also wasteful if these simulations, or similar ones, have already been performed by other researchers. Consequently, it is a common practice among researchers to share either their programs or the results of their simulations.

Klypin & Holtzman (1997) have combined into a single package their version of the PM algorithm and programs for generating initial conditions and analyzing the results. This package has been made available to the astronomical community, and can be downloaded from a World Wide Web site. This allows other researchers interested in performing cosmological numerical simulations to "get started" immediately, without having to develop and test any computer program. However, installing and running these programs might pose some difficulties, depending upon the kind of computer resources available to the user.

We use a different approach. Instead of making our programs available to the astronomical community (something we might do eventually), it is the results of the simulations themselves that we are making available. We performed a very large number of numerical simulations, a total of 160, for 68 different cosmological models. This constitutes by far the largest database of cosmological simulations ever assembled, and it is still growing as more simulations are being performed. We are making this database available to the astronomical community (see §3.4 below). This approach is complementary to the one used by Klypin & Holtzman. By providing the results of the simulations, we eliminate the need for researchers to perform these simulations themselves, and the same simulations can be used by many different researchers. However, someone might be interested in simulating a cosmological model which is not included in the database, in which case the algorithm of Klypin & Holtzman can be used. Alternatively, we can, upon request, perform additional simulations and include them in the database.

An interesting question is whether the results of simulations from the database can be analyzed using Klypin & Holtzman's programs. In principle, this should be possible. The output files in the database are not written in the same format as the ones produced by Klypin & Holtzman's PM code, but it is fairly trivial to write a program that translates files from one format to another. There is an important difference that must be pointed out, however: the program of Klypin & Holtzman is based on the PM algorithm, while the simulations in the database were performed using a $\mathrm{P^3M}$ algorithm.
For the same number of particles, the $\mathrm{P^3M}$ algorithm has a length resolution superior to that of the PM algorithm by a factor of order 6 (depending upon the particular choice of smoothing length). However, since the PM algorithm is significantly faster than the $\mathrm{P^3M}$ algorithm, it is possible to make up for the lack of resolution of the PM code by simply using more particles. We used $64^3$ particles in all simulations.²

² We intend to add simulations with $128^3$ particles to the database in the near future. These have the same length resolution as PM calculations with $384^3$ particles (such as those performed by Gross et al. 1998).

As mentioned above, there are numerous alternatives to the Standard Model. In this paper we consider CDM models in which the only components are ordinary matter (dark and baryonic) and possibly a nonzero cosmological constant (thus excluding CHDM models and generic quintessence models). We consider the three cases $\Omega_0=1$, $\lambda_0=0$ (the Einstein-de Sitter model); $\Omega_0<1$, $\lambda_0=0$; and $\Omega_0+\lambda_0=1$. We also allow the primordial power spectrum to have a tilt. These models are usually referred to as Tilted CDM (TCDM), Tilted Open CDM (TOCDM), and Tilted Lambda CDM (T$\Lambda$CDM).

## 2 THE NUMERICAL SIMULATIONS

### 2.1 The Algorithm

All simulations presented in this paper were done using the $\mathrm{P^3M}$ algorithm (Hockney & Eastwood 1981; Efstathiou et al. 1985). The computational volume is a cubic box of comoving size $L_{\rm box}$ and comoving volume $V_{\rm box}=L_{\rm box}^3$ with triply periodic boundary conditions, expanding with the Hubble flow. The matter distribution inside the computational volume is represented by $N$ equal-mass particles. The forces on particles are computed by solving Poisson's equation on a cubic grid using a Fast Fourier Transform method. The resulting force field represents the Newtonian interaction between particles down to a separation of a few mesh spacings. At shorter distances the computed force is significantly smaller than the physical force. To increase the dynamical range of the code, the force at short distance is corrected by direct summation over pairs of particles separated by less than some cutoff distance $r_e$. With the addition of this so-called short-range correction, the code accurately reproduces the Newtonian interaction down to the softening length $\epsilon$, which is a fraction of the grid spacing. The system is evolved forward in time using a second-order Runge-Kutta time-integration scheme with a variable time step.
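The $\mathrm{P^3M}$ code itself is not distributed with this abbreviated version, but the mesh part of the method just described is easy to sketch. The following Python fragment is our illustration, not the code used for the database; it simplifies the mass assignment to nearest-grid-point and omits the short-range pair correction:

```python
import numpy as np

def pm_potential(pos, ngrid, boxsize):
    """Sketch of the mesh ("PM") part of a P3M force calculation.

    pos : (N, 3) array of particle positions in [0, boxsize).
    Returns the potential (up to physical constants) on the grid.
    """
    cell = boxsize / ngrid
    idx = (pos // cell).astype(int) % ngrid
    rho = np.zeros((ngrid, ngrid, ngrid))
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    delta = rho / rho.mean() - 1.0              # density contrast on the grid

    # Solve nabla^2 phi = delta with the FFT: phi_k = -delta_k / k^2.
    k = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=cell)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                           # avoid 0/0 for the mean mode
    phi_k = -np.fft.fftn(delta) / k2
    phi_k[0, 0, 0] = 0.0                        # zero-mean potential
    return np.real(np.fft.ifftn(phi_k))
```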
Our particular version of the $\mathrm{P^3M}$ algorithm uses supercomoving variables (Martel & Shapiro 1998; see also Shandarin 1980). In these variables, the position $\tilde{\mathbf{r}}$, peculiar velocity $\tilde{\mathbf{v}}$, time $\tilde{t}$, density $\tilde{\rho}$, and peculiar gravitational potential $\tilde{\varphi}$ are related to their Eulerian counterparts by

$$\tilde{\mathbf{r}}=\frac{\mathbf{r}}{ar_*},$$ (5)

$$\tilde{\mathbf{v}}=\frac{a\mathbf{v}t_*}{r_*},$$ (6)

$$d\tilde{t}=\frac{dt}{a^2t_*},$$ (7)

$$\tilde{\rho}=\frac{a^3\rho}{\rho_*},$$ (8)

$$\tilde{\varphi}=\frac{a^2\varphi t_*^2}{r_*^2},$$ (9)

where

$$\rho_*=\bar{\rho}_0=\frac{3H_0^2\Omega_0}{8\pi G},$$ (10)

$$t_*=\frac{2}{H_0(\Omega_0a_0^3)^{1/2}}.$$ (11)

In these equations, $a(t)$ is the Friedmann-Robertson-Walker scale factor, $a_0$ is its present value, and $r_*$ is a free parameter whose value is chosen according to the characteristic length scale of the problem. These variables are similar to the more standard comoving variables in many respects. In particular, equations (5) and (8) imply that a volume expanding with the Hubble flow remains fixed in supercomoving variables, and that the mean density inside that volume remains constant. The main difference is in the change of time variable given by equation (7). In supercomoving coordinates, the time $\tilde{t}$ is negative, and equal to $-\infty$ at the big bang. In an Einstein-de Sitter model, $\tilde{t}=-1$ at present. This change of time variable has the virtue of eliminating the cosmological drag term in the momentum equation. In all simulations, we set $r_*=L_{\rm box}/a_0$. Equation (5) then implies that the box size in supercomoving variables is unity at all times.

The time-evolution of the scale factor $a(t)$ is governed by the Friedmann equation. For universes composed of ordinary, nonrelativistic matter and a nonzero cosmological constant $\lambda_0$, the Friedmann equation takes the form

$$\left(\frac{1}{a}\frac{da}{dt}\right)^2=H(t)^2=H_0^2\left[(1-\Omega_0-\lambda_0)\left(\frac{a}{a_0}\right)^{-2}+\Omega_0\left(\frac{a}{a_0}\right)^{-3}+\lambda_0\right].$$ (12)

In supercomoving variables, there is a precise normalization for the scale factor, which depends upon the particular cosmological model. For the models considered in this paper, the solutions of the Friedmann equation and the present values of the scale factor are the following:

(a) Einstein-de Sitter model ($\Omega_0=1$, $\lambda_0=0$):

$$a=\tilde{t}^{\,-2},\qquad a_0=1.$$ (13)

(b) Open models ($\Omega_0<1$, $\lambda_0=0$):

$$a=(\tilde{t}^{\,2}-1)^{-1},\qquad a_0=(1-\Omega_0)/\Omega_0.$$ (14)

(c) Flat models with nonzero cosmological constant ($\Omega_0+\lambda_0=1$):

$$\tilde{t}=\frac{1}{2}\int_1^a\frac{dy}{y^{3/2}(1+y^3)^{1/2}},\qquad a_0=\left(\frac{\lambda_0}{\Omega_0}\right)^{1/3}.$$ (15)

Notice that the solutions for $a(\tilde{t})$ do not depend explicitly upon the cosmological parameters, which are absorbed in the definition of $a_0$. Hence, for all models included in the database, there are only 3 different solutions of the Friedmann equation. This is one of the most useful properties of supercomoving variables. For simplicity, we shall drop the tilde notation for supercomoving variables in the remainder of this paper, except in §3.3.
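The three scale-factor solutions transcribe directly to code. The sketch below (ours) follows eqs. (13)–(15) as printed, including the quadrature lower bound in eq. (15); scipy's adaptive quadrature stands in for whatever integrator the authors used:

```python
from scipy.integrate import quad

def a_eds(t):
    """Eq. (13): Einstein-de Sitter, a = t^(-2) (supercomoving t < 0, a_0 = 1)."""
    return t**-2.0

def a_open(t):
    """Eq. (14): open models with lambda_0 = 0, a = (t^2 - 1)^(-1), for t < -1."""
    return 1.0 / (t * t - 1.0)

def t_flat_lambda(a):
    """Eq. (15): flat models with a cosmological constant; returns the
    supercomoving time for a given scale factor, by direct quadrature."""
    val, _ = quad(lambda y: y**-1.5 * (1.0 + y**3)**-0.5, 1.0, a)
    return 0.5 * val
```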
### 2.2 The Power Spectrum

For all simulations presented in this paper, we use the Cold Dark Matter (CDM) power spectrum of Bardeen et al. (1986), with the normalization of Bunn & White (1997). The power spectrum at redshift $z$ is given by

$$P(k,z)=2\pi^2\left(\frac{c}{H_0}\right)^{3+n}\delta_H^2\,D^{-2}(z,0)\,k^nT_{\rm CDM}^2(k),$$ (16)

where $c$ is the speed of light, $D(z,0)\equiv\delta_+(0)/\delta_+(z)$ is the linear growth factor between redshift $z$ and the present, $\delta_+$ is the linear growing mode (see eqs. [35]–[38] below), $n$ is the tilt, and $T_{\rm CDM}$ is the transfer function, given by

$$T_{\rm CDM}(q)=\frac{\ln(1+2.34q)}{2.34q}\left[1+3.89q+(16.1q)^2+(5.46q)^3+(6.71q)^4\right]^{-1/4}$$ (17)

(Bardeen et al. 1986), where $q$ is defined by

$$q=\left(\frac{k}{{\rm Mpc}^{-1}}\right)\alpha^{-1/2}(\Omega_0h^2)^{-1}\Theta_{2.7}^2,$$ (18)

$$\alpha=a_1^{-\Omega_{\rm B0}/\Omega_0}\,a_2^{-(\Omega_{\rm B0}/\Omega_0)^3},$$ (19)

$$a_1=(46.9\,\Omega_0h^2)^{0.670}\left[1+(32.1\,\Omega_0h^2)^{-0.532}\right],$$ (20)

$$a_2=(12.0\,\Omega_0h^2)^{0.424}\left[1+(45.0\,\Omega_0h^2)^{-0.582}\right]$$ (21)

(Hu & Sugiyama 1996, eqs. [D-28] and [E-12]), where $\Theta_{2.7}$ is the temperature of the cosmic microwave background in units of 2.7 K, and $\delta_H$ is the density perturbation at horizon crossing (Liddle & Lyth 1993). Fits for $\delta_H$ are given by Bunn & White (1997), as follows:

$$10^5\delta_H=\begin{cases}1.95\,\Omega_0^{-0.35-0.19\ln\Omega_0-0.17\tilde{n}}\,e^{-(\tilde{n}+0.14\tilde{n}^2)},&\lambda_0=0;\\ 1.94\,\Omega_0^{-0.785-0.05\ln\Omega_0}\,e^{-(0.95\tilde{n}+0.169\tilde{n}^2)},&\lambda_0=1-\Omega_0;\end{cases}$$ (22)

where $\tilde{n}\equiv n-1$.
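For reference, eqs. (16)–(22) at $z=0$ (where $D=1$) can be assembled in a few lines. This is our Python transcription, not the authors' code; the unit conventions are spelled out in the comments:

```python
import numpy as np

def delta_h(omega0, lambda0, n):
    """Horizon-crossing amplitude, Bunn & White (1997) fits of eq. (22)."""
    nt = n - 1.0
    if lambda0 == 0.0:
        return 1.95e-5 * omega0**(-0.35 - 0.19*np.log(omega0) - 0.17*nt) \
               * np.exp(-(nt + 0.14*nt**2))
    return 1.94e-5 * omega0**(-0.785 - 0.05*np.log(omega0)) \
           * np.exp(-(0.95*nt + 0.169*nt**2))

def t_cdm(q):
    """BBKS transfer function, eq. (17)."""
    poly = 1.0 + 3.89*q + (16.1*q)**2 + (5.46*q)**3 + (6.71*q)**4
    return np.log(1.0 + 2.34*q) / (2.34*q) * poly**-0.25

def p_cdm(k, omega0=1.0, lambda0=0.0, h=0.65, n=1.0, theta27=1.0):
    """Present-day P(k) of eqs. (16)-(22); k in Mpc^-1, P(k) in Mpc^3.

    Omega_B0 = 0.015 h^-2 as fixed in Sec. 3.1 of the text.
    """
    omega_b = 0.015 / h**2
    o_h2 = omega0 * h * h
    # Hu & Sugiyama (1996) baryon correction, eqs. (18)-(21)
    a1 = (46.9*o_h2)**0.670 * (1.0 + (32.1*o_h2)**-0.532)
    a2 = (12.0*o_h2)**0.424 * (1.0 + (45.0*o_h2)**-0.582)
    alpha = a1**(-omega_b/omega0) * a2**(-(omega_b/omega0)**3)
    q = k * theta27**2 / (alpha**0.5 * o_h2)
    c_over_h0 = 2.9979e5 / (100.0*h)           # c/H_0 in Mpc
    return 2.0*np.pi**2 * c_over_h0**(3.0+n) \
           * delta_h(omega0, lambda0, n)**2 * k**n * t_cdm(q)**2
```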
### 2.3 Setting up Initial Conditions

We assume that the initial fluctuations originate from a Gaussian random process. The initial density contrast can then be represented as a superposition of plane waves with random phases, and amplitudes related to the power spectrum $P(k)$, where $\mathbf{k}$ is the wavenumber and $k=|\mathbf{k}|$. In an infinite universe, all values of $\mathbf{k}$ are allowed. The power spectrum is therefore continuous, and the number of modes (that is, plane waves) present in the initial density contrast is infinite. The simulations, however, are performed inside a finite comoving cubic volume $V_{\rm box}=L_{\rm box}^3$ with periodic boundary conditions. This periodicity implies that only modes with wavenumbers $\mathbf{k}=(k_x,k_y,k_z)=(n_x,n_y,n_z)k_0$, where $n_x$, $n_y$, $n_z$ are integers and $k_0\equiv2\pi/L_{\rm box}$ is the fundamental wavenumber, can be present in the simulated initial conditions. Furthermore, since the initial conditions are represented by particles, the components $k_x$, $k_y$, $k_z$ of the wavenumber cannot exceed the Nyquist frequency $k_{\rm nyq}=N^{1/3}k_0/2$, where $N$ is the number of particles in the computational volume, and $N^{1/3}$ is the number of particles along one dimension. Modes with higher wavenumbers cannot be represented because of undersampling. Hence we are faced with the task of representing continuous initial conditions using a discrete sample of plane waves.

This discretization step, a key aspect of any numerical cosmological simulation, is surprisingly seldom discussed in the literature. Here we present a detailed description. In a periodic universe with comoving cubic volume $V_{\rm box}=L_{\rm box}^3$, the density contrast $\delta$ can be decomposed into a sum of plane waves,

$$\delta(\mathbf{r})=\sum_{\mathbf{k}}\delta_{\mathbf{k}}^{\rm disc}e^{i\mathbf{k}\cdot\mathbf{r}},$$ (23)

where $\mathbf{r}$ is the comoving, or supercomoving, position, and $\delta_{\mathbf{k}}^{\rm disc}$ is the amplitude of the mode with wavenumber $\mathbf{k}$. The superscript "disc" stands for "discrete." The real universe is of course not periodic, in which case all values of $\mathbf{k}$ are allowed. To convert equation (23) from the discrete limit to the continuous limit, consider first any function $f(\mathbf{k})$ that is summed over all possible values of $\mathbf{k}$. In the discrete limit, we have

$$\sum_{\mathbf{k}}f_{\mathbf{k}}^{\rm disc}=\sum_{\rm all\ V.E.}f_{\mathbf{k}}^{\rm disc}=\frac{1}{k_0^3}\sum_{\rm all\ V.E.}f_{\mathbf{k}}^{\rm disc}\,k_0^3=\frac{V_{\rm box}}{(2\pi)^3}\sum_{\rm all\ V.E.}f_{\mathbf{k}}^{\rm disc}\int_{\rm V.E.}d^3k,$$ (24)

where "V.E." represents a volume element in $\mathbf{k}$-space, which is a cube of volume $k_0^3=(2\pi)^3/V_{\rm box}$ centered at $\mathbf{k}$. Assuming that the function $f$ does not vary significantly over one volume element, we can pull it inside the integral,

$$\sum_{\mathbf{k}}f_{\mathbf{k}}^{\rm disc}\approx\frac{V_{\rm box}}{(2\pi)^3}\sum_{\rm all\ V.E.}\int_{\rm V.E.}f_{\mathbf{k}}^{\rm disc}\,d^3k.$$ (25)

Of course, integrating over the volume element, and then summing over all volume elements, is effectively like integrating over all $\mathbf{k}$-space, so equation (25) reduces to

$$\sum_{\mathbf{k}}f_{\mathbf{k}}^{\rm disc}\approx\frac{V_{\rm box}}{(2\pi)^3}\int f_{\mathbf{k}}^{\rm disc}\,d^3k=\int f_{\mathbf{k}}^{\rm cont}\,d^3k,$$ (26)

where the superscript "cont" stands for "continuous." The continuous and discrete functions are related by

$$f_{\mathbf{k}}^{\rm cont}=\frac{V_{\rm box}}{(2\pi)^3}f_{\mathbf{k}}^{\rm disc}.$$ (27)

Using these formulae, we can rewrite equation (23) as

$$\delta(\mathbf{r})=\int d^3k\,\delta_{\mathbf{k}}^{\rm cont}e^{i\mathbf{k}\cdot\mathbf{r}},$$ (28)

where

$$\delta_{\mathbf{k}}^{\rm cont}=\frac{V_{\rm box}}{(2\pi)^3}\delta_{\mathbf{k}}^{\rm disc}.$$ (29)

To find the relationships between $\delta_{\mathbf{k}}^{\rm disc}$, $\delta_{\mathbf{k}}^{\rm cont}$, and the power spectrum, consider the rms density fluctuation $\sigma_x$ at some particular scale $x$. This quantity is given by

$$\sigma_x^2=\frac{V_{\rm box}}{(2\pi)^3}\int d^3k\,|\delta_{\mathbf{k}}^{\rm disc}|^2W(kx),$$ (30)

where $W$ is a window function. We present the derivation of this result in Appendix A. In the continuous limit, $\sigma_x$ is related to the power spectrum by

$$\sigma_x^2=\frac{1}{(2\pi)^3}\int d^3k\,P(k)W(kx)$$ (31)

(see, e.g., Bunn & White 1997). By combining equations (29), (30), and (31), we get

$$P(k)=V_{\rm box}|\delta_{\mathbf{k}}^{\rm disc}|^2=\frac{(2\pi)^6}{V_{\rm box}}|\delta_{\mathbf{k}}^{\rm cont}|^2.$$ (32)

Both $P(k)$ and $\delta_{\mathbf{k}}^{\rm cont}$ have dimensions of a volume, while $\delta_{\mathbf{k}}^{\rm disc}$ is dimensionless. Notice that the form of these expressions depends upon the actual definition of the Fourier Transform, which tends to vary among authors.
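Given a $P(k)$ such as the `p_cdm` function sketched above, the discrete counterpart of eq. (30) is straightforward to evaluate: with eq. (32), it is the sum of $P(k)/V_{\rm box}\,W(kx)$ over the modes kept in the initial conditions (cf. eq. [49] below). The sketch below (ours) assumes the standard spherical top-hat window for $W$, which the text does not spell out at this point:

```python
import numpy as np

def sigma8_mode_sum(pk, lbox=128.0, nmesh=64, h=0.65):
    """Discrete mode-sum estimate of sigma_8 (top-hat window assumed).

    pk is a function k -> P(k); lbox in Mpc.
    """
    k0 = 2.0 * np.pi / lbox
    n = np.arange(-nmesh//2, nmesh//2 + 1)           # n_i = -32 ... 32
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    k = k0 * np.sqrt(nx**2 + ny**2 + nz**2)
    keep = (k > 0.0) & (k <= k0 * nmesh / 2)         # drop k=0 and |k| > k_nyq
    x = k[keep] * 8.0 / h                            # k times 8/h Mpc
    w = (3.0 * (np.sin(x) - x*np.cos(x)) / x**3)**2  # spherical top-hat W(kx)
    return np.sqrt(np.sum(pk(k[keep]) / lbox**3 * w))
```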
To set up initial conditions, we lay down the particles on a cubic lattice, and displace each particle by an amount $\Delta\mathbf{r}$ given by

$$\Delta\mathbf{r}=i\sum_{\mathbf{k}}\frac{G_{\mathbf{k}}\delta_{\mathbf{k}}^{\rm disc}\mathbf{k}}{k^2}e^{i\mathbf{k}\cdot\mathbf{r}},$$ (33)

where $\mathbf{r}$ is the unperturbed position, $\delta_{\mathbf{k}}^{\rm disc}=|\delta_{\mathbf{k}}^{\rm disc}|e^{i\varphi_{\mathbf{k}}}$ is a complex number with amplitude $|\delta_{\mathbf{k}}^{\rm disc}|=[P(k)/V_{\rm box}]^{1/2}$ and phase $\varphi_{\mathbf{k}}$ chosen randomly between 0 and $2\pi$ with uniform probability, and the sum extends over all modes included in the initial conditions (see §2.4). As Efstathiou et al. (1985) point out, assuming random phases would be sufficient to ensure that the initial conditions are Gaussian in the continuous limit (that is, in an infinite universe). However, this assumption is insufficient in the discrete limit (that is, in a finite universe with periodic boundary conditions). To ensure the Gaussianity of the initial conditions, it is necessary, and sufficient, to include the Gaussian factor $G_{\mathbf{k}}$, a random number chosen from a Gaussian distribution with mean 0 and dispersion 1. This guarantees that the initial conditions are Gaussian, even though there might be a lack of resolution at some scales; $G_{\mathbf{k}}$ does not change the ensemble-averaged spectral amplitude of the fluctuations.

To compute the initial peculiar velocity field, we assume that the initial time of the calculation is early enough for the perturbation to be in the linear regime, but late enough so that the linear decaying mode can be neglected. The initial peculiar velocities of the particles are then related to their displacements by

$$\mathbf{v}_i=\left(\frac{1}{\delta_+}\frac{d\delta_+}{dt}\right)_{z_i}\Delta\mathbf{r},$$ (34)

where $z_i$ is the initial redshift of the simulations, $\Delta\mathbf{r}$ is computed using equation (33), and $\delta_+$ is the linear growing mode of the perturbation, which depends upon the cosmological model. For the Einstein-de Sitter model ($\Omega_0=1$, $\lambda_0=0$), the growing mode is

$$\delta_+(z)=(1+z)^{-1}.$$ (35)

For open models ($\Omega_0<1$, $\lambda_0=0$), the growing mode is

$$\delta_+(z)=1+\frac{3}{x}+3\left(\frac{1+x}{x^3}\right)^{1/2}\ln\left[(1+x)^{1/2}-x^{1/2}\right]$$ (36)

(Peebles 1980), where

$$x=(\Omega_0^{-1}-1)(1+z)^{-1}.$$ (37)

Finally, for flat models with a cosmological constant ($\Omega_0+\lambda_0=1$), the growing mode is given by

$$\delta_+(z)=\left(\frac{1}{y}+1\right)^{1/2}\int_0^y\frac{dw}{w^{1/6}(1+w)^{3/2}}$$ (38)

(Martel 1991b), where

$$y=\frac{\lambda_0}{\Omega_0}(1+z)^{-3}.$$ (39)
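The growing modes of eqs. (35)–(39) also transcribe directly to code; this is our sketch, with scipy's adaptive quadrature standing in for whatever integrator the authors used:

```python
import numpy as np
from scipy.integrate import quad

def growth_eds(z):
    """Eq. (35): growing mode for the Einstein-de Sitter model."""
    return 1.0 / (1.0 + z)

def growth_open(z, omega0):
    """Eqs. (36)-(37): growing mode for open models (Peebles 1980)."""
    x = (1.0/omega0 - 1.0) / (1.0 + z)
    return 1.0 + 3.0/x + 3.0*np.sqrt((1.0 + x)/x**3) \
           * np.log(np.sqrt(1.0 + x) - np.sqrt(x))

def growth_flat_lambda(z, omega0):
    """Eqs. (38)-(39): growing mode for flat models with Lambda (Martel 1991b)."""
    y = (1.0 - omega0) / omega0 / (1.0 + z)**3
    val, _ = quad(lambda w: w**(-1.0/6.0) * (1.0 + w)**-1.5, 0.0, y)
    return np.sqrt(1.0/y + 1.0) * val
```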
### 2.4 The Simulations

We set the comoving length of the computational volume $L_{\rm box}$ equal to 128 Mpc (present length units). The total mass of the system is $M_{\rm sys}=3H_0^2\Omega_0L_{\rm box}^3/8\pi G=5.821\times10^{17}\,\Omega_0h^2\,{\rm M}_\odot$. We use $N=64^3=262{,}144$ particles of mass $M_{\rm part}=M_{\rm sys}/N=2.220\times10^{12}\,\Omega_0h^2\,{\rm M}_\odot$. We solve Poisson's equation on a $128^3$ grid. In all simulations, $\epsilon$ and $r_e$ were set equal to 0.3 and 2.7 mesh spacings, respectively. This corresponds, in physical units, to a comoving softening length $\epsilon=300\,{\rm kpc}$. This is a reasonable value for gravity-only cosmological simulations; at smaller scales, hydrodynamical effects become important and cannot be ignored.

The dynamical range in length of the algorithm is $L_{\rm box}/\epsilon=427$. The ratio of the Nyquist wavenumber $k_{\rm nyq}$ to the fundamental wavenumber $k_0$ is $N^{1/3}/2=32$. Hence each component $k_i$, $i=x$, $y$, $z$, of the wavenumber can take 65 values: $k_i=n_ik_0$, with $-32\leq n_i\leq32$. The initial conditions can therefore represent $65^3=274{,}625$ modes. However, the reality condition requires that the amplitudes of modes with equal and opposite wavenumbers be related by $\delta_{-\mathbf{k}}^{\rm disc}=(\delta_{\mathbf{k}}^{\rm disc})^*$ [in order for $\delta(\mathbf{r})$ to be real]. Furthermore, we exclude modes with $|\mathbf{k}|=(k_x^2+k_y^2+k_z^2)^{1/2}>k_{\rm nyq}$. This reduces the actual number of modes represented in the initial conditions to 68,532.

All simulations start at an initial redshift $z_i=24$. The algorithm produces "dumps" (snapshots of the system) at numerous intermediate redshifts, up to the present. These redshifts were chosen by requiring that the dumps be equally spaced in conformal time $\eta$, defined by $d\eta\equiv a_0\,dt/a(t)$. We set the difference $\Delta\eta$ between consecutive dumps equal to $L_{\rm box}/c$. Thus, if $t$ and $t'$ are the times corresponding to two consecutive dumps, they are related by

$$\frac{L_{\rm box}}{c}=\int_{t'}^{t}[1+z(t)]\,dt.$$ (40)

This particular choice results in most dumps being concentrated near the present. Typically, about half of the dumps are between redshifts $z=1$ and $z=0$. Since the relationship between time and redshift, $z(t)$, is model-dependent, the redshifts where dumps are made depend upon the cosmological parameters $\Omega_0$, $\lambda_0$, and $H_0$ (but not $\sigma_8$). Every simulation also produces a dump at $z=z_i=24$, and one at $z=0$. The number of dumps per simulation varies between 44 and 128.

## 3 THE DATABASE

### 3.1 The Cosmological Models

The power spectrum described in §2.2 is characterized by 6 independent parameters: (1) the density parameter $\Omega_0$; (2) the contribution $\Omega_{\rm B0}$ of the baryonic matter to the density parameter; (3) the cosmological constant $\lambda_0$; (4) the Hubble constant $H_0$; (5) the temperature $T_{\rm CMB}$ of the Cosmic Microwave Background; and (6) the tilt $n$ of the power spectrum. In order to keep the size of the parameter space at a manageable level, we set $T_{\rm CMB}=2.7\,{\rm K}$ and $\Omega_{\rm B0}=0.015\,h^{-2}$, thus reducing the dimensionality of the parameter space to 4. Also, the normalization of the power spectrum is often described in terms of the rms density fluctuation $\sigma_8$ at a scale of $8\,h^{-1}\,{\rm Mpc}$. The value of $\sigma_8$ is a function of the 6 aforementioned parameters. We invert this relation, treating $\sigma_8$ as an independent parameter and the tilt $n$ as a dependent one. The independent parameters in the database are therefore $\Omega_0$, $\lambda_0$, $H_0$, and $\sigma_8$. For each model, we performed up to 3 different simulations, with different realizations of the initial conditions (this amounts to choosing a different set of random numbers for the phases $\varphi_{\mathbf{k}}$ of the complex numbers $\delta_{\mathbf{k}}^{\rm disc}$, and for the Gaussian factors $G_{\mathbf{k}}$). An important question was to decide which models should be included in the database.
Our goal here is not to find the "ultimate model," which provides the best match to current observations. This would defeat the purpose of having a database, and furthermore, as former supporters of the Standard Model can appreciate, the "best model" can eventually be proven incorrect by new observations. Our intention is to provide an adequate coverage of the parameter phase-space. However, we do not want to invest much effort into simulating models that are considered "unlikely" because some of the parameters have extreme values. With this in mind, we performed 160 simulations, which provide a broad coverage of the parameter phase-space, but we favored "likely" regions of the parameter phase-space over "unlikely" ones, by performing more simulations in these regions. For instance, we consider models with Hubble constant varying in the range $H_0=50$–$85\,{\rm km\,s^{-1}\,Mpc^{-1}}$, but 139 of the calculations (87%) have a Hubble constant in the more plausible range $H_0=65$–$75\,{\rm km\,s^{-1}\,Mpc^{-1}}$.

The values of the parameters are given in Table 1 for the entire database (with $H_0$ in units of ${\rm km\,s^{-1}\,Mpc^{-1}}$). The first 4 columns contain the values of the 4 independent parameters $\Omega_0$, $\lambda_0$, $H_0$, and $\sigma_8$. The dependent parameter $n$ is in the fifth column. The sixth and seventh columns contain the number of dumps per simulation and the codes of the simulations, respectively (see §3.2). The parameter phase-space coverage of the database is illustrated in Figure 1. The top left panel shows a projection of the 4-dimensional parameter phase-space onto the $\Omega_0$–$\lambda_0$ plane. The dots indicate the cases for which there are simulations in the database. The number next to each dot indicates the number of simulations for that particular combination of $\Omega_0$ and $\lambda_0$. This panel includes all simulations in the database. The top right panel shows the same projection, but for a subset of the simulations: all simulations with $H_0=65\,{\rm km\,s^{-1}\,Mpc^{-1}}$. The remaining 4 panels show different projections and different subsets. As we see, the coverage of the parameter phase space is quite dense. The biggest "hole" is seen in the $\Omega_0$–$\lambda_0$ projection (top panels). There are currently no simulations for open models with a nonzero cosmological constant ($\lambda_0\neq0$ and $\Omega_0+\lambda_0<1$) in the database. As we pointed out in §1.3, these models are certainly worth considering, and we intend to include such models in the database in the near future.

Several interesting quantities can be computed directly from the parameters. One of them is the age of the universe. For models with $\lambda_0\geq0$, $t_0$ is given by

$$t_0=\frac{1}{H_0}\int_0^1\left[\frac{x}{\lambda_0x^3+(1-\Omega_0-\lambda_0)x+\Omega_0}\right]^{1/2}dx$$ (41)

(see, e.g., Martel 1990). It is of course independent of $\sigma_8$. Table 2 gives the ages in gigayears for the various models included in the database.
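Eq. (41) is a one-line quadrature; for example (our sketch, not the authors' code), the Einstein-de Sitter age for $h=0.65$ is recovered as follows:

```python
from scipy.integrate import quad

def age_gyr(omega0, lambda0, h):
    """Age of the universe from eq. (41), in Gyr (1/H_0 = 9.78 h^-1 Gyr)."""
    f = lambda x: (x / (lambda0*x**3 + (1.0 - omega0 - lambda0)*x + omega0))**0.5
    val, _ = quad(f, 0.0, 1.0)
    return 9.78 / h * val

# Einstein-de Sitter with h = 0.65: the integral evaluates to 2/3, so
# age_gyr(1.0, 0.0, 0.65) returns ~10.0 Gyr, i.e. 2/3 of the Hubble time.
```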
Another interesting quantity is $\sigma_8^{\rm clus}$, the value of $\sigma_8$ inferred from observations of clusters of galaxies. Using the X-ray temperature distribution function of clusters, Viana & Liddle (1996) have produced an empirical formula for $\sigma_8^{\rm clus}$,

$$\sigma_8^{\rm clus}=0.6\,\Omega_0^{-C(\Omega_0)},$$ (42)

where

$$C(\Omega_0)=\begin{cases}0.36+0.31\,\Omega_0-0.28\,\Omega_0^2,&\lambda_0=0\,;\\ 0.59-0.16\,\Omega_0+0.06\,\Omega_0^2,&\lambda_0=1-\Omega_0\,.\end{cases}$$ (43)

Table 3 gives the values of $\sigma_8^{\rm clus}$ for the various models included in the database. Notice that these values do not always match the actual values of $\sigma_8$ used for the calculations (fourth column of Table 1).

### 3.2 Nomenclature

Each cosmological model, that is, each combination of the four parameters $\Omega_0$, $\lambda_0$, $H_0$, and $\sigma_8$, is identified by a two-character code composed of an uppercase letter and a lowercase letter. For instance, the Einstein-de Sitter model with $H_0=65\,{\rm km\,s^{-1}\,Mpc^{-1}}$ and $\sigma_8=1.0$ is identified by the code Xa. The letters were chosen for practical reasons, and the reader should not try to find some logic in these choices. Each simulation is identified by a 3-character code, composed of the two-character code for the model, plus a digit to identify the simulation. For instance, the three simulations for the Xa model are identified by the codes Xa1, Xa2, and Xa3. The codes for the entire database are given in the last column of Table 1. Each simulation produces many output files, or dumps, which are snapshots of the system at various redshifts. Each file is identified by a 7-character code, which consists of the 3-character code of the simulation, an underscore, and a 3-digit number which identifies the file. For instance, the first output file created by the simulation Xa1 is called Xa1_001, and contains a snapshot of the system at the initial redshift $z_i=24$. The next file created by that simulation is called Xa1_002, and contains a snapshot at $z=22.079$, and so on. The last file is called Xa1_058, and contains a snapshot at the present ($z=0$). The lists of redshifts where dumps are available can be obtained from the authors. There is one such list for every combination of $\Omega_0$, $\lambda_0$, and $H_0$ included in the database.

### 3.3 Conversion to Physical Units

The positions and velocities stored in the dumps are expressed in supercomoving variables. They can be converted to physical units using equations (5)–(11). The positions $\mathbf{r}$ and velocities $\mathbf{u}=H(z)\mathbf{r}+\mathbf{v}$ in physical units are given by

$$\mathbf{r}=\frac{L_{\rm box}\tilde{\mathbf{r}}}{1+z},$$ (44)

$$\mathbf{u}=L_{\rm box}\left[\frac{H(z)\tilde{\mathbf{r}}}{1+z}+\frac{\Omega_0^{1/2}H_0(1+z)\tilde{\mathbf{v}}}{2a_0^{1/2}}\right].$$ (45)

In these expressions, we have reintroduced the tilde notation for the supercomoving variables. The expression for $\mathbf{r}$ is the same for all models, but the one for $\mathbf{u}$ is model dependent.
After eliminating $H(z)$ using equation (12) and $a_0$ using equations (13)–(15), we obtain the following expressions:

(a) Einstein-de Sitter model ($\Omega_0=1$, $\lambda_0=0$):

$$\mathbf{u}=H_0L_{\rm box}\left[(1+z)^{1/2}\tilde{\mathbf{r}}+\frac{(1+z)\tilde{\mathbf{v}}}{2}\right].$$ (46)

(b) Open models ($\Omega_0<1$, $\lambda_0=0$):

$$\mathbf{u}=H_0L_{\rm box}\left[(1+\Omega_0z)^{1/2}\tilde{\mathbf{r}}+\frac{\Omega_0(1+z)\tilde{\mathbf{v}}}{2(1-\Omega_0)^{1/2}}\right].$$ (47)

(c) Flat models with nonzero cosmological constant ($\Omega_0+\lambda_0=1$):

$$\mathbf{u}=H_0L_{\rm box}\left\{\left[\Omega_0(1+z)^3+\lambda_0\right]^{1/2}\frac{\tilde{\mathbf{r}}}{1+z}+\frac{\Omega_0^{2/3}(1+z)\tilde{\mathbf{v}}}{2\lambda_0^{1/6}}\right\}.$$ (48)

### 3.4 Technical Considerations

The database contains 160 simulations for 68 cosmological models. For each simulation, there is a dump at $z=z_i=24$ and one at $z=0$, plus numerous dumps at intermediate redshifts. There is a total of 11,973 dumps in the database. Each dump contains $6N=1{,}572{,}864$ numbers: the coordinates of the position and velocity for each particle in the simulation. These numbers are stored in single precision (32 bits), although the simulations themselves were performed in double precision. Each dump is a binary file (IEEE 754 standard) of size 6.3 MB, which contains first the $x$-coordinates of all particles, followed by the $y$-coordinates, the $z$-coordinates, the $v_x$-coordinates, the $v_y$-coordinates, and finally the $v_z$-coordinates. Figure 2 shows a sample FORTRAN program that reads a file from the database. The size of the entire database is 75.3 GB. The database currently resides on archival tapes at the High Performance Computing Facility, University of Texas, where the simulations were performed. Because of the size of the database, it would be impractical (if not impossible) to install it on a web site or an anonymous ftp site where it could be easily retrieved by the user. This might change in the future, but currently the only way to access the database is to contact the authors, preferably by E-mail, and send a list of the dumps requested. Then, the authors and the user can choose the best strategy for transferring files, according to the computer resources and needs of the user. Requests should be sent to database@galileo.as.utexas.edu.
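The FORTRAN reader of Figure 2 is not reproduced in this abbreviated version. A Python equivalent, under the file layout just described (six contiguous blocks of $N$ IEEE 754 single-precision numbers; we assume a plain binary stream with no Fortran record markers), might look like this, together with the Einstein-de Sitter unit conversion of eqs. (44) and (46):

```python
import numpy as np

N = 64**3   # particles per simulation

def read_dump(path):
    """Read one dump (e.g. 'Xa1_001'): 6N float32 values, stored as the
    x, y, z, vx, vy, vz blocks described in the text.  If the files carry
    Fortran record markers, those would have to be skipped (assumption)."""
    data = np.fromfile(path, dtype=np.float32)
    assert data.size == 6 * N, "unexpected file size"
    x, y, z, vx, vy, vz = data.reshape(6, N)
    return np.stack([x, y, z], axis=1), np.stack([vx, vy, vz], axis=1)

def to_physical_eds(pos_sc, vel_sc, z, h0=65.0, lbox=128.0):
    """Eqs. (44) and (46): physical positions (Mpc) and velocities (km/s)
    for the Einstein-de Sitter models; h0 in km/s/Mpc."""
    r = lbox * pos_sc / (1.0 + z)
    u = h0 * lbox * ((1.0 + z)**0.5 * pos_sc + 0.5 * (1.0 + z) * vel_sc)
    return r, u
```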
## 4 ANALYSIS OF THE SIMULATIONS

The goal of this paper is to present the database and describe its content. We supplement this description by analyzing the final state of each simulation, which corresponds to the present. We focus on four particular aspects of the present large-scale structure: the rms density fluctuation, the two-point correlation function, the moments of the peculiar velocity field, and the properties of clusters. In this abbreviated version of the paper, we only present the analysis of the rms density fluctuation and the two-point correlation function. The full version of the paper can be obtained by contacting the authors.

### 4.1 RMS Density Fluctuation

The present rms density fluctuation $\sigma_8$ at scale $8\,h^{-1}\,{\rm Mpc}$ is treated as an independent parameter in our simulations. However, we do not set up the state of the system at present. Instead, we set up initial conditions at high redshift ($z_i=24$), and evolve the system numerically all the way to the present. We adjust the initial conditions in such a way that the density fluctuation at present ends up being equal to the desired value of $\sigma_8$. To achieve this, we assume that the power spectrum evolves with time according to linear perturbation theory (hence the presence of the factor $D^{-2}(z,0)$ in eq. [16]). Actually, we do not expect the actual value of $\sigma_8$ to be precisely equal to the desired one, for several reasons. Let us designate by $\sigma_8^{\rm cont}$ the desired value of $\sigma_8$ for each simulation, that is, the quantity appearing in the fourth column of Table 1. The superscript "cont" indicates that this is the value in the real universe, where the wavenumber $\mathbf{k}$ varies continuously. We are representing the initial conditions using a finite number of discrete modes, which is clearly an approximation. We designate by $\sigma_8^{\rm disc}$ the value of $\sigma_8$ resulting from this approximation. This value is given by

$$(\sigma_8^{\rm disc})^2=D^2(z_i,0)\sum_{\mathbf{k}}|\delta_{\mathbf{k},i}^{\rm disc}|^2W(k\ell),$$ (49)

where $\ell\equiv8\,h^{-1}\,{\rm Mpc}$. Hence, $\sigma_8^{\rm disc}$ is computed by summing over all modes present in the initial conditions, ignoring the Gaussian factor, and then extrapolating to the present using linear perturbation theory. The introduction of the Gaussian factor in equation (33) further modifies the value of $\sigma_8$. We designate by $\sigma_8^{\rm gauss}$ the value of $\sigma_8$ resulting from the presence of this factor,

$$(\sigma_8^{\rm gauss})^2=D^2(z_i,0)\sum_{\mathbf{k}}G_{\mathbf{k}}^2|\delta_{\mathbf{k},i}^{\rm disc}|^2W(k\ell).$$ (50)

Finally, we designate by $\sigma_8^{\rm num}$ the "numerical" value of $\sigma_8$, which is the actual rms density fluctuation inside the computational volume at present, obtained from the numerical simulation. This value should differ from $\sigma_8^{\rm gauss}$ for several reasons. First, the numerical algorithm has a finite accuracy, owing to the fact that the time step is finite and that the gravitational force is softened at short distances. Second, the evolution of a single mode would never follow precisely the exact solution when the system is represented by a finite number of particles. Third, and more importantly, equations (49) and (50) use linear perturbation theory to extrapolate from the initial conditions to the present, but this can only be approximate, as mode coupling introduces nonlinear effects at small scales. We investigated the importance of these various effects by computing the various values of $\sigma_8$ for all simulations. The value of $\sigma_8^{\rm cont}$ is imposed. The values of $\sigma_8^{\rm disc}$ and $\sigma_8^{\rm gauss}$ can be computed directly using equations (49) and (50) (these values are provided automatically by the code that generates initial conditions). To evaluate $\sigma_8^{\rm num}$, we used a direct, somewhat brute-force approach. For each simulation, we located one million (!)
spheres of radius $`8h^{-1}\mathrm{Mpc}`$ at random locations inside the computational volume at present, and computed the density contrast $`\delta _{\mathrm{sph}}`$ inside each sphere, using $$\delta _{\mathrm{sph}}=\frac{N_{\mathrm{sph}}-\overline{N}_{\mathrm{sph}}}{\overline{N}_{\mathrm{sph}}},$$ (51) where $`N_{\mathrm{sph}}`$ is the number of particles inside the sphere, and $`\overline{N}_{\mathrm{sph}}`$ is the “mean number,” given by $$\overline{N}_{\mathrm{sph}}=\frac{4\pi \ell ^3N}{3V_{\mathrm{box}}}$$ (52) (notice that $`\overline{N}_{\mathrm{sph}}`$ is not an integer). The value of $`\sigma _8^{\mathrm{num}}`$ is then given by $$(\sigma _8^{\mathrm{num}})^2=10^{-6}\sum_{\mathrm{all}\mathrm{spheres}}(\delta _{\mathrm{sph}})^2.$$ (53) We plot these various values of $`\sigma _8`$ against each other in Figure 3. The top-left panel shows the effect of discreteness. All dots are located below the dashed line, indicating that $`\sigma _8^{\mathrm{disc}}<\sigma _8^{\mathrm{cont}}`$. This is caused not so much by the discreteness itself as by the fact that modes outside the range $`k_0kk_{\mathrm{nyq}}`$ are missing in the initial conditions. Still, the effect is very small, 7% in the worst case (which happens to be model Uc). As we see in the top right panel, the effect of introducing the Gaussian factor $`G_𝐤`$ is quite important, and causes a spread of order 10% in the value of $`\sigma _8`$. This is primarily an effect of undersampling. The modes with wavenumbers comparable to $`k_0`$ are very few, but contribute significantly to $`\sigma _8`$. This is a consequence of the fact that our computational volume is actually too small to constitute a “fair sample” of the universe. With a larger volume, there would still be very few modes with wavenumbers of order $`k_0`$, but these modes would be farther from the peak of the power spectrum, and therefore would contribute less to $`\sigma _8`$. The bottom left panel shows the effect of actually performing the simulation. The values of $`\sigma _8^{\mathrm{gauss}}`$ and $`\sigma _8^{\mathrm{num}}`$ are comparable for small $`\sigma _8`$, but differ at large $`\sigma _8`$ where nonlinear effects become important. This panel shows that (1) the onset of nonlinearity occurs at $`\sigma _8\approx 0.6`$, (2) the effect of nonlinearity on the value of $`\sigma _8`$ is small, of order 10%, and (3) the effect can go either way: it can either increase or decrease the value of $`\sigma _8`$, and the occurrences of these two cases are comparable. The bottom right panel shows the combined effect of discreteness, Gaussian factor, and nonlinearity. The spread is quite large. The value $`\sigma _8^{\mathrm{num}}`$ that comes out of the simulation can differ by as much as 20% from the value $`\sigma _8^{\mathrm{cont}}`$ we intended to obtain, that is, the values given in the fourth column of Table 1. The reader should be aware of this fact when selecting a particular model from the database. Consequently, we listed in Table 4 the values of $`\sigma _8^{\mathrm{num}}`$ for all simulations. ### 4.2 2-point Correlation Function The distribution of galaxies in the universe can be described statistically using N-point correlation functions. The first and most important of these functions is the 2-point correlation function $`\xi (r)`$, which measures the excess probability of finding two galaxies separated by a distance $`r`$. 
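(As an aside on implementation, the brute-force measurement of $`\sigma _8^{\mathrm{num}}`$ in equations (51)–(53) above is easy to reproduce from a dump; the following sketch uses a periodic k-d tree, with all names ours and purely illustrative.)

```python
import numpy as np
from scipy.spatial import cKDTree

def sigma8_num(pos, box, ell, nsph=1_000_000, seed=0):
    """Monte-Carlo rms density contrast in spheres of radius ell (eqs. 51-53).

    pos : (N, 3) particle positions in [0, box); ell must be < box / 2
    """
    N = len(pos)
    nbar = 4.0 * np.pi * ell ** 3 * N / (3.0 * box ** 3)  # eq. (52); not an integer
    tree = cKDTree(pos, boxsize=box)                      # periodic neighbour search
    centres = np.random.default_rng(seed).uniform(0.0, box, size=(nsph, 3))
    counts = tree.query_ball_point(centres, ell, return_length=True)
    delta = (counts - nbar) / nbar                        # eq. (51)
    return np.sqrt(np.mean(delta ** 2))                   # eq. (53)
```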
The 2-point correlation function can be estimated from the particle distributions by locating spherical shells around particles and counting the number of particles inside these shells. If $`N(r_1,r_2)`$ is the number of particles inside a spherical shell of inner radius $`r_1`$ and outer radius $`r_2`$ centered on a given particle, and $`\langle N(r_1,r_2)\rangle `$ is the average of $`N(r_1,r_2)`$ over all particles, then by definition $$\langle N(r_1,r_2)\rangle =\frac{4\pi (r_2^3-r_1^3)n}{3}+4\pi n\int _{r_1}^{r_2}\xi (r)r^2𝑑r,$$ (54) where $`n`$ is the number density of particles. The first term in equation (54) gives the correct answer in the case of a uniform distribution. The second term represents the effect of the correlation. We can solve this equation for $`\xi `$ (see Martel 1991a). After some algebra, we get $$\xi (x)=\frac{1}{r^3(x)\mathrm{ln}10}\frac{d}{dx}\left[\frac{\langle N(r_1,r(x))\rangle }{4\pi n}-\frac{r^3(x)-r_1^3}{3}\right],$$ (55) where $`x\equiv \mathrm{log}_{10}r`$. To compute $`\xi `$, we first evaluate $`\langle N(r_1,r(x))\rangle `$ by computing the spacing between all pairs of particles, and counting pairs in bins equally spaced in $`x`$. We then compute the derivative in equation (55) using a standard five-point finite difference operator. We computed $`\xi (r)`$ at present for all simulations in the database. The results are plotted in Figures 4–7. For models with more than one simulation (most have three), we averaged the curves. They were actually so similar, in all cases, that error bars in Figures 4–7 would be too small to be seen. This shows that the 2-point correlation function depends essentially upon the cosmological model, with very little dependence upon the particular realization of the initial conditions. Each panel in Figures 4–7 corresponds to a particular combination of $`\mathrm{\Omega }_0`$, $`\lambda _0`$, and $`H_0`$, with different values of $`\sigma _8`$ represented by different curves. The dashed lines show the observed galaxy 2-point correlation function, $$\xi (r)=\left(\frac{r}{5.4h^{-1}\mathrm{Mpc}}\right)^{-1.77}$$ (56) (Peebles 1993, eq. \[7.32\]). Models with $`\sigma _8=0.3`$ fail to reproduce the observed correlation function on three counts. The slope of $`\xi (r)`$ is too shallow, the amplitude is too small, and there tends to be a “kink” in the correlation function at separations of order 1–3 Mpc. As $`\sigma _8`$ increases, the kink goes away, and the amplitude and slope increase. Models with $`\sigma _8=0.8`$ provide the best fit to the observations, almost independently of the values of the other parameters! For larger values of $`\sigma _8`$, the slope and amplitude are too large. All curves have a shoulder at small separations, where the slope drops significantly. Martel (1991a) argues that this is a consequence of the softening of the force at small distances. The effect of this softening is to “take” pairs of particles that would have a separation $`r`$ less than the softening length $`ϵ=300\mathrm{kpc}`$ in the absence of softening, and transfer them to separations $`r\gtrsim ϵ`$, resulting in a flattening of the correlation function. To illustrate this, we indicated in all panels of Figures 4–7 the location of the softening length $`ϵ`$ by a thick line. This line is located right in the middle of the “shoulder” for all curves. With higher force resolution, pairs that are now located at separations $`r\gtrsim ϵ`$ would be located instead at separations $`r<ϵ`$, and the fit to the observed slope would be improved. 
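For readers who want to repeat this measurement, equation (54) can also be inverted bin by bin, which avoids the numerical derivative of equation (55); a sketch (shell-averaged $`\xi `$, periodic box, bin edges assumed positive and below half the box size, all names ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def xi_shell_average(pos, box, edges):
    """Shell-averaged two-point correlation function from one dump.

    In each bin [r1, r2] we compare the mean neighbour count per particle
    with the uniform expectation 4*pi*n*(r2^3 - r1^3)/3 (cf. eq. [54]).
    """
    N = len(pos)
    n = N / box ** 3                                  # mean number density
    tree = cKDTree(pos, boxsize=box)                  # periodic distances
    # ordered pair counts with separation <= r (self-pairs cancel in the diff)
    cum = tree.count_neighbors(tree, edges).astype(float)
    mean_shell = np.diff(cum) / N                     # <N(r1, r2)> per particle
    uniform = 4.0 * np.pi * n * np.diff(edges ** 3) / 3.0
    return mean_shell / uniform - 1.0                 # shell-averaged xi
```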
## 5 SUMMARY AND PROSPECTS Using a P<sup>3</sup>M algorithm with $`64^3`$ particles, we have performed 160 cosmological simulations, for 68 cosmological models. This constitutes the largest database of cosmological simulations ever assembled. We covered a four-dimensional parameter phase space by varying the density parameter $`\mathrm{\Omega }_0`$, the cosmological constant $`\lambda _0`$, the Hubble constant $`H_0`$, and the rms density fluctuation $`\sigma _8`$. We are making this database available to the astronomical community. We also performed a limited analysis of the simulations. Our results are the following: (1) The present rms density fluctuation $`\sigma _8^{\mathrm{num}}`$ differs from the one expected by linearly extrapolating the initial power spectrum to the present, because of the combined effects of having a finite number of modes in the initial conditions, introducing a Gaussian factor in the initial conditions, and having nonlinear coupling between modes. The first effect is negligible. The second effect is of order 10% or less, but this probably depends upon the size of the computational volume \[the one used for the simulations, $`(128\mathrm{Mpc})^3`$, is a little too small to constitute a fair sample of the universe\]. The effect of nonlinearity is negligible for $`\sigma _8<0.6`$, and of order 10% or less for larger $`\sigma _8`$. This can go either way: nonlinearities can either increase or decrease $`\sigma _8`$ relative to what linear theory predicts. (2) The observed two-point correlation function $`\xi (r)`$ is well reproduced by models with $`\sigma _8=0.8`$, nearly independently of the values of the parameters $`\mathrm{\Omega }_0`$, $`\lambda _0`$, and $`H_0`$. For models with $`\sigma _8<0.8`$, the correlation function is too small, its slope is too shallow, and it has a kink at separations $`r=`$ 1–3 Mpc. For models with $`\sigma _8>0.8`$, the correlation function is too large and its slope is too steep. (3) At small separations, $`r<1\mathrm{Mpc}`$, the velocity moments satisfy the relations $`|V_\mathrm{R}|\approx H_0r`$ and $`V_{\mathrm{PP}}\approx 2^{1/2}V_{\mathrm{PL}}`$, indicating that small clusters have reached virial equilibrium. At larger separations, $`|V_\mathrm{R}|`$ increases above the Hubble velocity, indicating that clusters are accreting matter from the field. The velocity moments depend essentially upon $`\mathrm{\Omega }_0`$ and $`\sigma _8`$, and not $`\lambda _0`$ and $`H_0`$. The pairwise particle velocity dispersions are much larger than the observed pairwise galaxy velocity dispersion, except for models with $`\mathrm{\Omega }_0=0.2`$ and $`\sigma _8\lesssim 0.4`$. But if the velocity dispersion of galaxies is biased relative to the velocity dispersion of dark matter, then models with larger values of $`\mathrm{\Omega }_0`$ or $`\sigma _8`$ can be reconciled with observations. (4) The multiplicity functions are decreasing functions of the multiplicity for models with $`\sigma _8\lesssim 0.3`$. At larger values of $`\sigma _8`$, the multiplicity functions have a horizontal plateau, whose length increases with $`\sigma _8`$. For models with $`\sigma _8>0.9`$, the multiplicity functions have a ∪ shape which results from the merging of intermediate-size clusters. For all models, clusters have densities in the range $`100\overline{\rho }_0\lesssim \rho \lesssim 1000\overline{\rho }_0`$. A simple analytical model suggests that clusters have a density $`\rho \approx 178\overline{\rho }`$ when they reach virial equilibrium. 
Our results suggest that many clusters have reached that equilibrium in the past, when $`\overline{\rho }`$ was larger than $`\overline{\rho }_0`$ (this could be checked by performing a cluster analysis on earlier dumps). The spin parameters $`\lambda `$ are in the range 0.008–0.2, with the median near 0.05, and the distribution of elongations favors prolate shapes ($`e_2>e_1`$) over oblate shapes ($`e_2<e_1`$). These results indicate the absence of rotationally supported disks in these simulations. The database is growing. We are currently adding new simulations to the original 160 simulations described in this paper. There are at least seven different motivations for performing additional simulations. (1) Additional Simulations for the same Models: For the sake of providing a good coverage of the parameter phase-space, we have limited the number of simulations per model to 3 or fewer. We can perform additional simulations for models already included in the database, if there is a need for doing so. This could be the case if, for some reason, a particular model (that is, a particular combination of $`\mathrm{\Omega }_0`$, $`\lambda _0`$, $`H_0`$, and $`\sigma _8`$) becomes particularly interesting, and deserves more scrutiny. Also, having more simulations per model has the virtue of improving the statistics. For instance, the size of the error bars in Figures 8–12 would be reduced if we had more than 3 simulations per model. Finally, for gravitational lensing simulations, it is necessary to combine dumps generated by different simulations, and having more simulations can be desirable (see, e.g. Premadi, Martel, & Matzner 1998). (2) Different Box Size: All simulations included in the database were performed using a computational box of size $`L_{\mathrm{box}}=128\mathrm{Mpc}`$. As described in §2.4, the softening length is comparable to the scale where nongravitational effects become important. Hence, there is no reason to consider smaller box sizes, unless we want to use fewer particles. There are, however, reasons for considering larger boxes. As we pointed out in §4.1, a box of size $`128\mathrm{Mpc}`$ is too small to constitute a “fair sample” of the universe. Using boxes of size $`256\mathrm{Mpc}`$ or even $`512\mathrm{Mpc}`$ would certainly provide a better, “fairer” description of the large-scale structure, even with the same number of particles. (3) Larger Number of Particles: There is no point in increasing the number of particles as long as we keep the box size at $`128\mathrm{Mpc}`$, since the resolution would be increased at scales where nongravitational effects are important. However, if larger boxes are used, the number of particles can be increased accordingly in order to maintain the resolution of the algorithm at small scale. If we continue to adopt $`300\mathrm{kpc}`$ as the resolution scale of the algorithm, simulations in $`(256\mathrm{Mpc})^3`$ and $`(512\mathrm{Mpc})^3`$ boxes could be performed with $`128^3`$ and $`256^3`$ particles, respectively. (4) New Background Models: The 4-dimensional parameter phase-space considered in this paper is quite large, and the set of 68 cosmological models included in the database covers a small fraction of it. There are several “holes” in the projections shown in Figure 1. In particular, there are no simulations for open models with a nonzero cosmological constant ($`\lambda _0\mathrm{}0`$, $`\mathrm{\Omega }_0+\lambda _0<1`$). 
Simulations for additional background models could be added to the database, either to provide a better coverage of the parameter phase-space, or because there is a particular model we are interested in, “we” designating either the authors, or other researchers sending us a special request. Actually, the original database contained only 151 simulations for 65 cosmological models. Following a special request by Hamana (1998), we added 9 simulations to the database, for 3 new models: Ea, Pa, and Xg. (5) Additional Parameters: The current database covers a 4-parameter phase space, because we held the CMB temperature $`T_{\mathrm{CMB}}`$ and the baryon density parameter $`\mathrm{\Omega }_{\mathrm{B0}}`$ at values of 2.7 K and $`0.015h^{-2}`$, respectively. The CMB temperature is known so accurately that treating it as a variable parameter would be pointless. This is not the case for the baryon density parameter. According to primordial nucleosynthesis, the quantity $`\mathrm{\Omega }_{\mathrm{B0}}h^2`$ has an allowed range from 0.01 to 0.026 (Krauss & Kernan 1995; Copi et al. 1995; Krauss 1998). Furthermore, X-ray observations of clusters of galaxies suggest that the ratio of gas mass to dark matter mass in these clusters exceeds the mean value in the universe, a phenomenon known as “the baryon catastrophe” (Briel et al. 1992; White et al. 1993; Martel et al. 1994). There is at present no definitive explanation for this phenomenon, but one possible explanation is that primordial nucleosynthesis is somehow incorrect, and predicts a value of $`\mathrm{\Omega }_{\mathrm{B0}}`$ which is too small. (6) Different Components: All simulations in the database used a CDM power spectrum as initial conditions. There are, however, several other models that constitute interesting alternatives to the CDM model, which could be added to the database. One of them is the Hot Dark Matter model (HDM), though this model has fallen out of favor in recent years, due to its inability to form galaxies inside deep voids such as Boötes. A more interesting alternative is the mixed Cold + Hot Dark Matter model (CHDM), which contains both a cold dark matter component and a massive neutrino component. This model introduces one additional parameter, the contribution $`\mathrm{\Omega }_{\nu 0}`$ of the neutrinos to the mean energy density of the universe. (7) Different Cosmologies: The cosmological models included in the database contain only non-relativistic matter and, in some cases, a nonzero cosmological constant. It would be very interesting to consider models with other components. Possible candidates include domain walls, cosmic strings, or relativistic particles (Fry 1985; Charlton & Turner 1987; Silveira & Waga 1994; Martel 1995; Martel & Shapiro 1998). Recently, these various candidates have been combined into a single concept called “quintessence” (Caldwell, Dave, & Steinhardt 1998). The effects of these various components are twofold: First, the presence of these components modifies the expansion rate of the universe and the growth rate of density perturbations, thus changing the history of large-scale structure formation. Second, they might affect the shape and normalization of the primordial power spectrum, in ways that remain to be determined (none of these models were considered by Bunn & White). ###### Acknowledgements. This work benefited from stimulating discussions with Paul Shapiro. 
We are pleased to acknowledge the support of NASA Grants NAG5-2785, NAG5-7363, and NAG5-7821, NSF Grants PHY93 10083, PHY98 00725 and ASC 9504046, and the University of Texas High Performance Computing Facility through the office of the Vice President for Research. HM acknowledges the support of a fellowship provided by the Texas Institute for Computational and Applied Mathematics. ## Appendix A Calculation of $`\sigma _x^{\mathrm{disc}}`$ The mass inside a sphere centered at $`𝐫_0`$ is given by $$M(𝐫_0)=\int _{\mathrm{sph}(𝐫_0)}\overline{\rho }_{\mathrm{com}}(1+\delta )d^3r=\overline{\rho }_{\mathrm{com}}\left[V_{\mathrm{sph}}+\int _{\mathrm{sph}(𝐫_0)}d^3r\sum_{𝐤}\delta _𝐤^{\mathrm{disc}}e^{i𝐤𝐫}\right],$$ (A1) where $`\overline{\rho }_{\mathrm{com}}`$ is the average comoving density, $`V_{\mathrm{sph}}`$ is the volume of the sphere, and the integral is computed over that volume. The relative mass excess in the sphere is given by $$\frac{\mathrm{\Delta }M}{M}(𝐫_0)=\frac{1}{V_{\mathrm{sph}}}\int _{\mathrm{sph}(𝐫_0)}d^3r\sum_{𝐤}\delta _𝐤^{\mathrm{disc}}e^{i𝐤𝐫}.$$ (A2) We introduce the following change of variables, $$𝐫=𝐫_0+𝐲.$$ (A3) In $`𝐲`$-space, the sphere is now located at the origin, and equation (A2) becomes $$\frac{\mathrm{\Delta }M}{M}(𝐫_0)=\frac{1}{V_{\mathrm{sph}}}\int _{\mathrm{sph}(0)}d^3y\sum_{𝐤}\delta _𝐤^{\mathrm{disc}}e^{i𝐤𝐫_0}e^{i𝐤𝐲}.$$ (A4) We now square this expression, and get $$\left(\frac{\mathrm{\Delta }M}{M}\right)^2(𝐫_0)=\frac{9}{16\pi ^2x^6}\left[\int _{\mathrm{sph}(0)}d^3y\sum_{𝐤}\delta _𝐤^{\mathrm{disc}}e^{i𝐤𝐫_0}e^{i𝐤𝐲}\right]\left[\int _{\mathrm{sph}(0)}d^3z\sum_{𝐤^{}}\delta _𝐤^{}^{\mathrm{disc}}e^{i𝐤^{}𝐫_0}e^{i𝐤^{}𝐳}\right],$$ (A5) where $`x`$ is the radius of the sphere. The rms density contrast at scale $`x`$ is obtained by averaging the above expression over all possible locations of the sphere inside the computational box, $`\sigma _x^2`$ $`\equiv `$ $`\left\langle \left({\displaystyle \frac{\mathrm{\Delta }M}{M}}\right)^2\right\rangle _{V_{\mathrm{box}}}={\displaystyle \frac{1}{V_{\mathrm{box}}}}{\displaystyle \int _{V_{\mathrm{box}}}}d^3r_0\left({\displaystyle \frac{\mathrm{\Delta }M}{M}}\right)^2(𝐫_0)`$ (A6) $`=`$ $`{\displaystyle \frac{1}{V_{\mathrm{box}}}}{\displaystyle \frac{9}{16\pi ^2x^6}}{\displaystyle \int _{V_{\mathrm{box}}}}d^3r_0{\displaystyle \int _{\mathrm{sph}(0)}}d^3y{\displaystyle \int _{\mathrm{sph}(0)}}d^3z{\displaystyle \sum_{𝐤}}{\displaystyle \sum_{𝐤^{}}}\delta _𝐤^{\mathrm{disc}}\delta _𝐤^{}^{\mathrm{disc}}e^{i𝐤𝐲}e^{i𝐤^{}𝐳}e^{i(𝐤+𝐤^{})𝐫_0}.`$ The integral over $`V_{\mathrm{box}}`$ reduces to $$\int _{V_{\mathrm{box}}}d^3r_0e^{i(𝐤+𝐤^{})𝐫_0}=V_{\mathrm{box}}\delta _{𝐤,-𝐤^{}}.$$ (A7) We substitute this expression in equation (A6), and use the Kronecker $`\delta `$ to eliminate the summation over $`𝐤^{}`$. Equation (A6) reduces to $$\sigma _x^2=\frac{9}{16\pi ^2x^6}\sum_{𝐤}|\delta _𝐤^{\mathrm{disc}}|^2\left[\int _{\mathrm{sph}(0)}d^3ye^{i𝐤𝐲}\right]^2.$$ (A8) The remaining integral can be evaluated easily. Equation (A8) reduces to $$\sigma _x^2=\sum_{𝐤}|\delta _𝐤^{\mathrm{disc}}|^2W(kx),$$ (A9) where $$W(y)\equiv \frac{9}{y^6}(\mathrm{sin}y-y\mathrm{cos}y)^2.$$ (A10) Using equation (26), we can rewrite this expression in an integral form, $$\sigma _x^2=\frac{V_{\mathrm{box}}}{(2\pi )^3}\int d^3k|\delta _𝐤^{\mathrm{disc}}|^2W(kx).$$ (A11)
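For completeness, equations (A9)–(A10) translate directly into a few lines of code; a minimal sketch (mode amplitudes and wavenumber magnitudes supplied as flat arrays, names ours and purely illustrative):

```python
import numpy as np

def top_hat_window(y):
    # W(y) = 9 (sin y - y cos y)^2 / y^6, eq. (A10); valid for y > 0
    return 9.0 * (np.sin(y) - y * np.cos(y)) ** 2 / y ** 6

def sigma_x(delta_k, k, x):
    # sigma_x^2 = sum_k |delta_k|^2 W(k x), eq. (A9)
    return np.sqrt(np.sum(np.abs(delta_k) ** 2 * top_hat_window(k * x)))
```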
# Measurement of the local Jahn-Teller distortion in LaMnO3.006 ## I Introduction The Jahn-Teller (JT) distortion of the MnO<sub>6</sub> octahedra in perovskite manganites is known to have a significant effect on their electrical and magnetic properties. The JT distortion takes the form of an elongation of the octahedra. The simplest type of distortion is one in which the octahedra elongate along the direction of the $`d_{3z^2-r^2}`$ orbitals and contract in the directions in which the $`d_{x^2-y^2}`$ orbitals point. This distortion would give rise to 2 long Mn-O bonds and 4 short Mn-O bonds in each distorted octahedron. However, it is possible to generate a symmetry-lowering distortion with a different symmetry by making some linear combination of the pure $`d_{3z^2-r^2}`$ and $`d_{x^2-y^2}`$ states. Because of the importance of the Jahn-Teller distortion in these materials, it is critical to characterize the exact nature of the JT state in the manganites. The MnO<sub>6</sub> octahedra pack together in space in a 3-dimensional corner-shared network, giving rise to the well-known perovskite structure. In general the distorted octahedra can be orientationally ordered (so-called orbital ordering) or disordered. If the orbitals are long-range ordered then a solution of the average crystal structure, as obtained from Rietveld refinement for example, will give the local bond-lengths in the octahedra accurately and reveal the nature of the local JT distortion. However, if the orbitals are not perfectly long-range ordered, the average crystal structure will not give the right result for the local JT distortion. In contrast, a local structural probe such as extended x-ray absorption fine structure (XAFS) or the atomic pair distribution function (PDF) method will still reveal the nature of the local distortion, regardless of whether the orbitals are ordered or not. Any determination of the nature of the local JT distortion using a crystallographic approach necessarily presumes perfect orbital order. This is thought to be good in the case of undoped LaMnO<sub>3</sub>, where every manganese ion is in the 3+ state. However, by measuring the local structure directly using the PDF method, we do not make this presumption. We have measured the local JT distortion in a sample of composition LaMnO<sub>3.006</sub> using the PDF analysis of neutron powder diffraction data. The PDFs we measure are essentially sample, and not resolution, limited, and the short and long bonds in the distorted MnO<sub>6</sub> octahedra are clearly resolved. These PDFs have been modelled using a full-profile least-squares refinement approach. These results are compared to crystallographic Rietveld refinements on the same data. We find excellent agreement for the Jahn-Teller distortion between the PDF and crystallographic analyses. The average crystal structure of undoped LaMnO<sub>3</sub> has been extensively studied since the 1950s. Differences between these studies occur largely because of the sensitivity of the structure to the sample stoichiometry, which depends on synthesis conditions. It appears fairly widely accepted now that the correct structure for stoichiometric LaMnO<sub>3</sub> at low temperature is orthorhombic (space group $`Pbnm`$ or $`Pnma`$ depending on convention). The data assigned to a monoclinic space group by Mitchell et al. can be refined equally well in the orthorhombic space group, with fewer degrees of freedom. An excellent summary of the situation is presented in Rodriguez-Carvajal et al. 
In this structure the long $`d_{3z^2-r^2}`$ orbitals lie in the same (basal) plane in a checkerboard type of arrangement, so the bonds are long-short-long-short as you move from Mn to Mn along the Mn-O-Mn bond. Since all the long bonds lie in this plane, the separation of the Mn ions in the perpendicular direction ($`c`$-axis in the $`Pbnm`$ setting and $`b`$ axis in the $`Pnma`$ setting) is shorter. This is the O′ structure in the Goodenough specification. There is one report of a PDF measurement on the undoped LaMnO<sub>3</sub> material. In this case the monoclinic structure of Mitchell et al. was successfully fit to the data. However, no structural parameters were published. In this paper we publish the local structure parameters of LaMnO<sub>3</sub> determined from PDF data. The results are compared to a Rietveld refinement of the same data-set. There is excellent agreement between the average and the local structures, indicating that the sample is fully long-range ordered. We find that, even locally, there is a significant orthorhombic distortion to the MnO<sub>6</sub> octahedra. ## II Experimental The LaMnO<sub>3+δ</sub> sample was prepared using standard solid state reaction methods. Stoichiometric amounts of La<sub>2</sub>O<sub>3</sub> (Alfa Aesar Reacton 99.99%) and MnO<sub>2</sub> (Alfa Aesar Puratronic 99.999%) were ground in an Al<sub>2</sub>O<sub>3</sub> mortar and pestle under acetone until well mixed. The powder sample was loaded into a 3/4” diameter die and uniaxially pressed at 1000 lbs. The pellet was placed into an Al<sub>2</sub>O<sub>3</sub> boat and fired under pure oxygen for 12 hours at 1200–1250 °C. The sample was cooled to 800 °C and removed, reground, repelletized, and refired at 1200–1250 °C for an additional 24 hours. This process was repeated until a single-phase, rhombohedral, x-ray diffraction pattern was obtained. Total reaction time was approximately 5 days. Thermogravimetric analysis (TGA) indicated that the as-prepared sample had an oxygen stoichiometry of about LaMnO<sub>3.10</sub>. The LaMnO<sub>3.10</sub> sample was ground, left in powder form, and placed into an Al<sub>2</sub>O<sub>3</sub> boat. The sample was post-annealed in ultra-high-purity Ar at 1000 °C for 24 hours, then quenched to room temperature. The oxygen stoichiometry was again determined using TGA under forming gas. The final oxygen stoichiometry was 3.006. Neutron powder diffraction data were collected on the SEPD diffractometer at the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory. The sample of about 10 g was sealed in a cylindrical vanadium tube with helium exchange gas. Data were collected at 20 K in a closed-cycle helium refrigerator. The data are corrected for detector deadtime and efficiency, background, absorption, multiple scattering, and inelasticity effects, and normalized by the incident flux and the total sample scattering cross-section to yield the total scattering structure function, $`S(Q)`$. This is Fourier transformed according to $$G(r)=\frac{2}{\pi }\int _0^{\mathrm{\infty }}Q[S(Q)-1]\mathrm{sin}(Qr)𝑑Q.$$ (1) Data collection and analysis procedures have been described elsewhere. The reduced structure factor $`F(Q)=Q[S(Q)-1]`$ is shown in Figure 1. ## III Modelling and Results The Rietveld refinements were carried out using the GSAS Rietveld code. Modelling of the PDF was carried out using a least-squares full-profile PDF fitting procedure. This is exactly analogous to the Rietveld method except that the PDF is fit (in real space) rather than the reciprocal-space data. 
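As an illustration of how the real-space data are produced, the sine transform of equation (1) reduces to a single quadrature over the measured grid; a minimal sketch (plain trapezoidal rule, function and variable names ours, purely illustrative):

```python
import numpy as np

def pdf_from_sq(Q, S, r):
    """G(r) = (2/pi) * integral_0^Qmax of Q [S(Q) - 1] sin(Qr) dQ, cf. eq. (1).

    Q, S : measured grid (here extending to Q_max ~ 27 inverse Angstroms)
    r    : output grid in Angstroms
    """
    F = Q * (S - 1.0)                          # reduced structure factor F(Q)
    integrand = F[None, :] * np.sin(np.outer(r, Q))
    return (2.0 / np.pi) * np.trapz(integrand, Q, axis=1)
```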
When the PDF is fit, the short-range order is obtained directly. The program we use is called PDFFIT. It is described in detail elsewhere and is available on request. The structural inputs for the program are atomic positions, occupancies and thermal factors. The results are shown in Table I. We chose the convention used in Ref. of putting Mn on the (0,$`\frac{1}{2},0`$) position. Two PDF refinements are reported, labelled A and B. The difference is the range of $`r`$ over which the fit was made: A was made over a range $`1.5\mathrm{\AA }<r<15.5\mathrm{\AA }`$; B over a range $`1.5\mathrm{\AA }<r<3.5\mathrm{\AA }`$. Both refinements were constrained to have the symmetry of the $`Pbnm`$ space group. In addition, PDF refinements were carried out where the space-group symmetry was relaxed. However, the results of these refinements essentially reproduced those within the $`Pbnm`$ space group and are not reported here. Of particular interest are the resulting Mn-O bond lengths in the MnO<sub>6</sub> octahedra, listed in Table II. The R-values given in Table II are calculated over the same interval, 1.5 to 3.5 Å, so they can be directly compared with each other. The observed and calculated PDFs for runs A and B are shown in Figure 2. In addition, we have determined the Debye temperature of the Mn and oxygen ions from the refinements. The data were collected at 20 K. Assuming this is close enough to 0 K, we use the expression $$\mathrm{\Theta }_D=\frac{3h^2}{16\pi ^2mk_B\langle u^2\rangle }$$ (2) to determine $`\mathrm{\Theta }_D`$ from the refined mean-square thermal displacements, $`\langle u^2\rangle `$, of the Mn and O atoms; here $`m`$ is the mass of the corresponding atom and $`k_B`$ is Boltzmann's constant. Thermal factors obtained from PDF refinements are often more accurate than Rietveld thermal factors because of the wider range of $`Q`$ over which data are analyzed. For example, in this study the PDFs were obtained from data collected up to $`Q_{max}=27`$ Å<sup>-1</sup>. This is almost double the $`Q`$-range used ($`Q_{max}=15.7`$ Å<sup>-1</sup>) in the Rietveld refinement of the same data. The Rietveld refinement was confined to a lower $`Q`$-range due to Bragg-peak overlap in the high-$`Q`$ region. We have previously shown that the PDF can give accurate absolute values of $`\mathrm{\Theta }_D`$. The values we obtain are $`\mathrm{\Theta }_D`$(Mn)$`=1000\pm 100`$ K, $`\mathrm{\Theta }_D`$(O1)$`=980\pm 30`$ K and $`\mathrm{\Theta }_D`$(O2)$`=601\pm 8`$ K. ## IV Discussion The average crystallographic structure suggests that the Jahn-Teller distorted octahedra in LaMnO<sub>3</sub> contain two short ($`s`$) bonds (1.9200 Å), two less short ($`m`$) bonds (1.9662 Å) and two long ($`l`$) bonds (2.1609 Å). The PDF peaks corresponding to these Mn-O bonds can be seen in Fig. 2 at around $`r=2`$ Å as negative peaks. A double-peak structure is clearly resolved, reflecting the high resolution of the PDF measurement. The motivation for this study was to determine whether the real, local, JT distorted octahedra had 4-$`s`$ and 2-$`l`$ bonds (pure $`Q_3`$ distortion), which one would expect for an isolated octahedron, or 2-$`s`$, 2-$`m`$ and 2-$`l`$ bonds (some $`Q_2`$ component) as suggested by the average structure. The JT distorted octahedra could be locally $`Q_3`$ but appear further distorted in the average structure if there were some orbital disorder (for example, some of the long bonds orienting parallel to the $`c`$-axis). 
These two scenarios can be distinguished in a joint Rietveld/PDF study where the average and local structures are determined from the same data-set. When the PDF is fit over a wide $`r`$-range (run A) the Mn-O bond lengths are similar to the Rietveld values, suggesting that the local bond-length distribution matches that of the average structure. However, it is possible that even by $`r=15`$ Å the effects of possible orbital disorder will bias the results of the PDF refinement towards the average values. To check this, we carried out run B, which fits only over the range 1.5–3.5 Å. This PDF-range contains only the MnO<sub>6</sub> octahedra themselves and does not depend on how they are oriented in space. It is clear from Table II that the refinement still prefers two short, two less-short and two long bonds. As a final check we carried out refinements where the $`Pbnm`$ space-group symmetry was relaxed so that all Mn and O ion positions can vary independently. This allows up to six different Mn-O bond-lengths to refine within one octahedron. The refinements again resulted in the bond lengths grouping into 2 short, 2 less short and 2 long. It is clear that the PDF peak corresponding to the short octahedral bonds is broader than can be explained by a single Mn-O bond-length. This is strong evidence that the local Jahn-Teller distortion is not a pure stretch of the $`d_{3z^2-r^2}`$ orbitals, but that there is a small amount of $`d_{x^2-y^2}`$ character mixed in and there is some $`Q_2`$ character to the static distortion. This can be explained by the fact that the average structure is orthorhombic and so the octahedra are sitting in an orthorhombic crystal field. It is helpful to understand the origin of the long-range orthorhombicity of the structure itself. This comes about because of the rotations (about $`\langle 111\rangle `$ directions) of the corner-shared octahedra. This does not in itself result in an orthorhombic distortion; however, if the octahedra themselves are elongated and the orbitals ordered, as in this case, an orthorhombic distortion does result. The basal-plane distortion ($`2(b-a)/(a+b)`$, where $`a`$ and $`b`$ are axes of the orthorhombic cell) is explained straightforwardly by reference to Fig. 3(b). In this figure the octahedra are shown elongated and it is clear how the rotations give rise to the basal plane distortion. The manganese-manganese separation along the perpendicular direction is also different from that in the basal plane, but this is for a different reason. With the pattern of orbital ordering in this O′ structure, all of the long Mn-O bonds lie in the basal plane and all the Mn-O bonds along the $`c`$-direction are short. The separation of manganese ions along the $`c`$ direction is therefore shorter than in the basal plane and $`c/\sqrt{2}<a<b`$. This would still hold if the shapes of the octahedra themselves were tetragonal (pure $`Q_3`$ distortion) and comes simply from the fact that the long bonds all lie in the basal plane. Thus, the structure could accommodate having tetragonally ($`Q_3`$) distorted octahedra within the orthorhombic unit cell by choosing the appropriate $`c`$-axis lattice parameter to make the Mn-O(2) bond along the $`c`$-direction the same as the Mn-O(1) (in-plane) short bond. In fact this does not happen. What is clear from the local Mn-O bond-lengths is that there is a large $`Q_3`$ distortion with a small $`Q_2`$ distortion superimposed. The large $`Q_3`$ distortion breaks the degeneracy of the $`e_g`$ orbitals and separates them widely in energy ($`\sim `$0.5 eV). 
The additional small $`Q_2`$ distortion comes about by mixing the pure $`d_{3z^2-r^2}`$ and $`d_{x^2-y^2}`$ states, but is presumably a small response of the local octahedra to the orthorhombic crystal field, even though this long-range orthorhombicity itself results from the large local $`Q_3`$-distorted octahedra (and their rotations). ## V Conclusions We have fit a high-resolution atomic pair distribution function obtained from powder neutron diffraction data to determine the local Jahn-Teller distortion in stoichiometric LaMnO<sub>3</sub>. We observe a small but significant difference in the length of the in-plane (1.924(2) Å) and out-of-plane (1.9742(9) Å) short bonds, and a well-separated long bond (2.177(2) Å). This implies that there is some mixing of the $`d_{3z^2-r^2}`$ and $`d_{x^2-y^2}`$ levels and that the occupied $`e_g`$ state is not pure $`d_{3z^2-r^2}`$ in character. This is in agreement with the result from crystallography; however, it is important to determine this directly from the local structure, as we report here, since any orbital disorder (for example, due to small non-stoichiometries) would affect the crystal structure but not the local structure. Finally, we report estimates of the Debye temperature of the Mn and O ions in this compound. ###### Acknowledgements. We would like to acknowledge stimulating discussions with S. D. Mahanti, P. Radaelli and T. A. Kaplan. This work was supported by the NSF through grant DMR-9700966 at MSU and the DOE through contract W-7405-ENG-36 at LANL. The IPNS is funded by the U.S. Department of Energy under Contract W-31-109-Eng-38.
# NaSt1: A Wolf-Rayet star cloaked by an η Car-like nebula? ## 1 Introduction Relatively few peculiar emission line objects identified from H$`\alpha `$ surveys in the 1960s (e.g. Henize 1967, 1976) have been studied in detail, principally because they are faint and suffer from heavy reddening. However, advances with instruments, combined with the availability of 8–10m telescopes, now permit the routine observation of such objects, which may provide new information on stellar systems and evolution. One such object is NaSt1 (V$`=14.5`$ mag), discovered by Nassau & Stephenson (1963), who proposed a Wolf-Rayet (WR) classification because of its strong emission line spectrum. Its appearance, however, was quite unlike any previously known WR star in the Galaxy or Large Magellanic Cloud (Massey & Conti 1983). Nevertheless, Massey & Conti proposed a cool, late-type nitrogen sequence WN10 spectral type for NaSt1, and it was included in the sixth WR catalogue as WR122 (van der Hucht et al. 1981). More recently, van der Hucht, Williams & Thé (1984) and Williams, van der Hucht & Thé (1987) have obtained infra-red (IR) photometry of NaSt1, and the closely related object LS4005 (WR85a), revealing the presence of circumstellar dust shells. van der Hucht et al. (1989, 1997) reported moderate IR photometric variability for NaSt1 and argued against a WR nature, instead preferring an alternative (massive) emission line nature (either B\[e\], O\[e\] or Ofpe/WN9) based on optical and infrared spectroscopy. B\[e\] supergiants appear to represent objects with an equatorial excretion disk plus an OB-type stellar wind in the polar regions. Ofpe/WN9 stars – now revised to WN9–11 (Smith, Crowther & Prinja 1994) – are intimately related to Luminous Blue Variables (LBVs) and classical WR stars (Crowther & Smith 1997). NaSt1 has recently received renewed attention, principally because it is extremely bright in the IR (K$`=6.5`$ mag). Blum, DePoy & Sellgren (1995), Tamblyn et al. (1996), Morris et al. (1996), and Figer, McLean & Najarro (1997) have presented spectral comparisons of various emission line objects, including NaSt1, still adhering to its former WN10 or Ofpe/WN9 classifications. Indeed, NaSt1 is currently used as a late WN-type spectral standard, particularly for IR studies of WR stars near the Galactic Centre, despite its nature being uncertain. In this paper we present new data for NaSt1 and consider its true nature. Specifically, in Section 2 we report on new spectroscopy of NaSt1, obtained at the Keck I, William Herschel Telescope (WHT), and UK Infrared Telescope (UKIRT). In Section 3 we discuss these new observations, which reveal a peculiar nebular appearance. In Section 4 we use various techniques to determine the interstellar extinction and distance to NaSt1, while in Section 5 we obtain its nebular properties and abundances. We finally interpret our results and discuss possible natures for NaSt1 in Section 6. ## 2 Observations We have obtained intermediate to high spectral resolution optical and infrared spectroscopy of NaSt1 during 1994 July–October at the 10m Keck I, 4.2m WHT and 3.8m UKIRT telescopes. These data were complemented by narrow-band filter imaging with the auxiliary port of the WHT during August 1996. The journal of our observations is presented in Table 1. 
### 2.1 Optical observations Intermediate dispersion spectroscopy of NaSt1 between $`\lambda \lambda `$3820–7030 Å was obtained at the 4.2m WHT during July–August 1994, using the dual-beam Intermediate dispersion Spectroscopic and Imaging System (ISIS). These observations, obtained in good seeing (0.8<sup>′′</sup>), used 600 l/mm gratings on both arms of ISIS, with Tektronix and EEV CCDs (both 24$`\mu `$m pixels) on the blue and red arms, respectively. A 1<sup>′′</sup> slit width resulted in a spectral resolution of 1.6 Å (blue) and 1.7 Å (red), as determined from the widths of CuAr and CuNe arc lines. The data were de-biased, divided by a normalised flat-field, and optimally extracted using the pamela (Horne 1986) routines within figaro (Shortridge et al. 1997). After wavelength calibration using arcs obtained between stellar exposures, the spectra were absolutely flux-calibrated using the standard star BD+28°4211. Subsequent analysis was carried out within dipso (Howarth et al. 1995). Since the observed emission features were unresolved in our ISIS observations, additional high spectral resolution observations of NaSt1 were kindly obtained for us by Dr M.H. van Kerkwijk at the 10m Keck I telescope, using the high resolution echelle spectrograph (HIRES) and a 2048$`\times `$2048 Tek CCD as the detector. Observations at two wavelength settings during good conditions (0<sup>′′</sup>.7 seeing) provided near-complete wavelength coverage between $`\lambda \lambda `$4350–8750 Å at a 2.5-pixel spectral resolution of 0.08–0.15 Å, as measured from Th-Ar arc spectra. The CCD frames were bias-subtracted, and the echelle orders of NaSt1 and the flux and atmospheric standard Feige 110 were optimally extracted using the software package echomop (Mills & Webb 1994). Subsequent analysis was again performed using the figaro and dipso packages. Synthetic Johnson filter photometry performed on the WHT and Keck spectra gives V=14.47 and 14.67 mag, respectively. Using the narrow-band photometric system derived for WR stars (Smith 1968) and convolving our flux-calibrated observations with suitable Gaussian filters, we find $`b`$=16.91 mag, $`v`$=15.20 mag and $`r`$=13.99 mag for our WHT dataset. These compare reasonably well with previous optical narrow-band photometry ($`b`$=16.9 mag, $`v`$=15.4 mag) provided by Massey & Conti (1983). To complement our optical spectroscopy, imaging was carried out with the WHT auxiliary port during August 1996 using the 1024$`\times `$1024 EEV CCD detector and several narrow-band filters, covering He ii $`\lambda `$4686, He i $`\lambda `$5876, H$`\alpha `$ + \[N ii\] $`\lambda `$6565, and \[N ii\] $`\lambda `$6583. The details of the filters and exposure times employed are given in Table 1. The images were de-biased and flat-fielded prior to analysis. The average seeing measured from the images was $`0^{\prime \prime }.75`$, with each CCD pixel corresponding to 0.22 arcsec. ### 2.2 Infrared observations Our infrared NaSt1 observations were obtained at the 3.8m UKIRT with the cooled grating spectrograph CGS4, the 300mm camera, the 75 l/mm and echelle gratings, and a 62$`\times `$58 InSb array, in 1994 August, covering selected regions in the 1.03–2.21$`\mu `$m range. The observations were bias-corrected, flat-fielded, extracted and sky-subtracted using cgs4dr (Daly & Beard 1992). In order to remove atmospheric features, the observations were divided by an appropriate standard star (whose spectral features were artificially removed) observed at around the same time and at a similar air mass. 
Our echelle observations covered only He i 2.058$`\mu `$m and were obtained at a spectral resolution of $`\lambda `$/$`\mathrm{\Delta }\lambda `$=16 000. ## 3 Discussion of observations In this section we discuss our WHT, Keck and UKIRT spectroscopy and imaging of NaSt1. An extremely unusual nebular appearance is revealed, with no clear signature of stellar emission lines, arguing strongly against a WR identification. ### 3.1 Optical spectroscopy We present our WHT-ISIS flux-calibrated observations of NaSt1 in Fig. 1. The visual spectrum of NaSt1 shows a multitude of strong, narrow, low- and high-excitation nebular features superimposed on a clear continuum. From this figure it is clear that the nebular spectrum of NaSt1 is unusual. Very strong He i features are observed relative to the Balmer series. In spite of the presence of very strong He ii $`\lambda `$4686 emission – indicating high temperatures or densities – \[O iii\] $`\lambda `$5007 is very weak and \[N ii\] $`\lambda 6583`$ is strong, suggestive of chemical peculiarities (see Sect. 6). In Fig. 1 we include our WHT-ISIS observations degraded to the resolution ($`\sim `$6 Å) of the sole previously published optical spectroscopy of NaSt1, by Massey & Conti (1983), obtained in 1982 September. On close inspection we find that the two datasets are essentially identical, indicating that, whatever the true nature of NaSt1, it has remained unchanged over the past decade. From a spectral comparison with bona-fide cool nitrogen sequence WR stars (specifically the WN9 stars HDE 313846 and BE381), Massey & Conti (1983) proposed a very late WN spectral classification for NaSt1 of WN10, on the basis that N ii $`\lambda \lambda `$4654–67 emission was stronger than N iii $`\lambda \lambda `$4634–41, and tentatively identified N i $`\lambda `$5616, in spite of very strong He ii $`\lambda `$4686 emission. (Smith et al. 1994 and Crowther & Smith 1997 have recently updated the spectral classification of WN9–11 stars, with N i absent.) The emission lines observed in NaSt1 are resolved in our Keck I HIRES data set, providing a potential means of unravelling its true nature. The stellar spectral features proposed by Massey & Conti (1983) as N ii $`\lambda \lambda `$4654–67 and N i $`\lambda `$5616, which resulted in a late WN classification, are revealed as nebular \[Fe iii\] $`\lambda `$4658 and \[Ca vii\] $`\lambda `$5619, respectively. Indeed, all optical spectral features appear to be of nebular, rather than stellar, origin. Permitted and forbidden lines cover a wide range in excitation, and include H i, He i-ii, N i-iii, \[N ii\], \[Ne iii-iv\], Mg i-ii, Si ii, \[S ii-iii\], \[Ar iii-v\], \[Ca v-vii\], \[Fe ii-vii\] and \[Ni ii-iii\]. We present selected line profiles from the Keck dataset in Fig. 2, covering a wide range of morphologies, which we now discuss. A full line list is provided in Table 2. 1. Balmer series: H$`\alpha `$ and H$`\beta `$ show very similar emission line profiles with a double component structure, comprising a major component with FWHM$`\sim `$50 km s<sup>-1</sup> at line centre plus a minor component centred at $`\sim `$$`-`$115 km s<sup>-1</sup>, which we attribute to He ii, since these features have the correct velocity displacement, the appropriate strength and an identical structure to adjacent Pickering series members. 2. He i: The numerous He i emission lines in our observations show a variety of morphologies. Most He i profiles, such as $`\lambda `$7281 in Fig. 
2, show double Gaussian, asymmetric profiles (FWHM$`\sim `$33 km s<sup>-1</sup>), separated by $`\sim `$32 km s<sup>-1</sup>, with stronger emission in the red component. Other He i lines, such as $`\lambda `$4713 and $`\lambda `$5016, show a broader profile that can be deconvolved into two components, again stronger on the red side (FWHM$`\sim `$43 km s<sup>-1</sup>, separated by $`\sim `$27 km s<sup>-1</sup>). Exceptions include $`\lambda `$5876 (see Fig. 2), which shows a double Gaussian profile with greater blue emission, and $`\lambda `$6678, which has a single symmetric profile with FWHM=62 km s<sup>-1</sup>. 3. He ii: The profiles (e.g. $`\lambda 4686`$, $`\lambda 4859`$, $`\lambda 5412`$, $`\lambda `$8237 in Fig. 2) are asymmetric, and can readily be deconvolved into two components with FWHM$`\sim `$50 km s<sup>-1</sup>, separated by $`\sim `$50 km s<sup>-1</sup>, with blue to red strengths of 2:3. 4. Low excitation forbidden lines: Spectral lines from these transitions are fairly common and include \[N i\] $`\lambda `$8680, \[N ii\] $`\lambda `$5755, $`\lambda `$6583, \[S ii\] $`\lambda `$6731 and \[Fe iii\] $`\lambda `$4658, presented in Fig. 2. These profiles can be reproduced with double Gaussian fits of similar strength, with intrinsic FWHM$`\sim `$24 km s<sup>-1</sup>, and separated by 28–31 km s<sup>-1</sup>. 5. High excitation forbidden lines: Spectral features due to \[Ca v-vii\] and \[Fe v-vii\] are observed (see Fig. 2). While the broad shape of these lines can be reproduced by a single Gaussian fit (FWHM$`\sim `$140 km s<sup>-1</sup>), a double-peaked structure is observed, with the components separated by $`\sim `$50 km s<sup>-1</sup>. 6. Permitted metal lines: Examples include N iii $`\lambda \lambda `$4634–41 and Si ii $`\lambda `$6347–71, shown in Fig. 2. These are of similar shape to the He ii profiles, with asymmetric, double-peaked profiles (FWHM$`\sim `$28–36 km s<sup>-1</sup>), separated by 36–41 km s<sup>-1</sup>, showing greater emission in the red component. In Fig. 3 we present a small portion of our HIRES spectrum in the region of the \[O iii\] $`\lambda `$5007, $`\lambda `$4959 lines. These lines, usually amongst the strongest nebular lines in hot Planetary Nebulae (PNe), are extremely weak in NaSt1. Indeed, our line measurements indicate that the feature at $`\lambda `$5007 is blended, since it is seven times stronger than $`\lambda `$4959 (the theoretical line ratio is 2.9). Although our spectroscopic observations do not extend to the \[O ii\] doublet at $`\lambda 3727`$, we have been provided with intermediate dispersion observations of NaSt1 from L.F. Smith extending to $`\lambda `$3300. From this data set, negligible emission is observed at \[O ii\] $`\lambda `$3727. The high reddening towards NaSt1 (Sect. 4.1), however, means that no useful limit can be determined for the strength of this feature. Instead, we have searched for the red \[O ii\] lines at 7320 and 7330 Å in the Keck dataset. The latter occurs in the inter-order gap, but the former is detected and has approximately the same strength as the blended \[O iii\] $`\lambda 5007`$ line. The weakness of the oxygen lines therefore indicates that the oxygen content of NaSt1 is very low. Likewise, we can make inferences about the carbon content since, by comparison with planetary nebulae and given the presence of strong He ii $`\lambda 4686`$ in NaSt1, we would expect to see the C iv recombination lines at $`\lambda 4660`$ and $`\lambda \lambda `$5801–12, but these are absent in our spectra. 
The C ii line at $`\lambda 4267`$ is also absent, suggesting that NaSt1 is carbon deficient. In Fig. 4 we present line FWHM (km s<sup>-1</sup>) versus ionization potential (in eV) for representative ions in our HIRES spectra of NaSt1, revealing a broad correlation and suggesting that lines of different excitations are formed within different regions of the nebula, as seen, for example, within symbiotic novae (e.g. V1016 Cyg, Schmid & Schild 1990). ### 3.2 Infrared spectroscopy The low resolution 1.0–1.8$`\mu `$m UKIRT/CGS4 spectrum of NaSt1 is shown in Figure 5, together with a higher resolution 1.99–2.31$`\mu `$m spectrum obtained at the Steward Bok 2.3m/FSPEC by Tamblyn et al. (1996). Once again, numerous nebular emission lines, principally attributable to H i and He i-ii, are observed, with weak features tentatively identified as nitrogen and iron (Table 3). While our observations are generally of insufficient quality to resolve individual features, the hydrogen and helium components at 2.165$`\mu `$m are resolved (He i 2.162$`\mu `$m + He ii 2.1646$`\mu `$m + Br$`\gamma `$) in the FSPEC data set, as are the Balmer-Pickering series in the optical. In Fig. 6 we present the high resolution He i 2.058$`\mu `$m UKIRT echelle profile. These observations reveal a strong emission feature with wings extending to $`\pm `$300 km s<sup>-1</sup>. This feature can be reproduced with a double Gaussian fit comprising a narrow central feature with FWHM$`\sim `$110 km s<sup>-1</sup> plus a second, broad component of similar flux with FWHM$`\sim `$360 km s<sup>-1</sup>. This profile represents the only potential feature that has a stellar origin. Consequently, $`\sim `$300 km s<sup>-1</sup> may relate to the underlying outflow wind velocity. ### 3.3 Optical imaging We now discuss our narrow-band imaging of NaSt1 obtained at the auxiliary port of the WHT. To date, the only published image of NaSt1 is an H$`\alpha `$+\[N ii\] image from Miller & Chu (1993). This image shows no nebular emission associated with the star, although the pixel size of $`0^{\prime \prime }.98`$ means that a nebula close to the central star would have been missed. Williams et al. (1987) find evidence from the IR flux distribution for a dusty circumstellar shell associated with NaSt1. In Fig. 7 we show the four narrow-band images of NaSt1 with contours superimposed. It is immediately obvious that a nebula is detected in the \[N ii\] $`\lambda `$6583 image. It is elliptical in shape, with the major axis at a position angle of $`30^{\circ }`$. The lengths of the major and minor axes (using the lowest contours shown in Fig. 7) are 8.5 and 5.1 arcsec, respectively. The nebula is also seen in the H$`\alpha +`$\[N ii\] $`\lambda `$6565 image. In contrast, there is no hint of any extension in the He ii and He i images at the measured seeing of $`0^{\prime \prime }.75`$. The \[N ii\] $`\lambda 6583`$ line is also spatially extended in the Keck HIRES spectra, which were obtained at a position angle of $`0^{\circ }`$. Examination of the \[N ii\] profile shows that the relative strengths of the blue and red components vary as a function of spatial position. At the position of the continuum, they have equal strengths and are separated by 30 km s<sup>-1</sup>; to the north, the blue component dominates; and to the south, the reverse occurs, with the red component much stronger. At all positions, both components are always seen and their velocities are constant, with no sign of the two components merging at the edge of the nebula. 
The dynamics are therefore inconsistent with a simple expanding shell, but suggest a more complex geometry. Pre-empting the derived distance for NaSt1 in the next section, an average diameter for the \[N ii\] emitting region of 6.8 arcsec corresponds to a physical size of 0.033 pc (or 6,800 AU) at a distance of 1 kpc, or 0.11 pc (22,400 AU) at a distance of 3.3 kpc. If we assume that the characteristic expansion velocity associated with the outflowing material is 15 km s<sup>-1</sup>, we derive a dynamical timescale for the \[N ii\] emitting region of 1,100–3,600 yr. ## 4 Extinction and distance towards NaSt1 We now use our spectroscopic observations presented above to investigate the interstellar extinction and distance to NaSt1. ### 4.1 Energy distribution and interstellar extinction The observed 0.4–15$`\mu `$m flux distribution for NaSt1 is presented in Fig. 8. The energy distribution is extremely red, and peaks around the L-band, suggestive of a very high interstellar reddening. We defer a comparison between the observed IR colours for NaSt1 with other emission line objects until Sect. 6.1. We use two methods to determine the interstellar extinction to NaSt1; applying Case B recombination theory (Storey & Hummer 1995) to the observed H i line strengths, and the strength of observed Diffuse Interstellar Band (DIB) absorption lines. For the first method, we have assumed an electron temperature $`T_e`$=10,000K and electron density $`N_e`$=10<sup>4</sup> cm<sup>-3</sup> and used the H$`\alpha `$, P13 and P16 fluxes relative to H$`\beta `$ from our Keck/HIRES observations. Higher Balmer series were not used since measurements of these features are restricted to less reliable WHT/ISIS observations. We obtain $`c`$(H$`\beta `$)=3.06$`\pm 0.08`$, implying E<sub>B-V</sub>=2.1$`\pm `$0.1 mag. In Sect. 5, we derive higher values of $`T_e`$ and $`N_e`$ but these have a negligible effect on the derived value of E<sub>B-V</sub>. NaSt1 lies along the line-of-sight to the Aquila Rift (200$`\pm `$100 pc) which Dame & Thaddeus (1985) suggest has a low mean visual extinction of about 2 magnitudes. It appears that the majority of the extinction we observe arises from diffuse material lying behind this cloud, and therefore that the standard mean value of $`R`$ (=$`A_\mathrm{V}`$/E<sub>B-V</sub>)=3.1 may be reasonable (i.e. $`A_\mathrm{V}`$=6.5$`\pm `$0.3 mag). Diffuse Interstellar Band (DIB) features are readily visible in our Keck/HIRES spectra of NaSt1 and allow an independent $`E_{\mathrm{B}\mathrm{V}}`$ determination (see also Le Bertre & Lequeux 1993). In particular, equivalent widths of certain DIBs ($`\lambda `$5797, $`\lambda `$5849) are known to scale fairly linearly with $`E_{\mathrm{B}\mathrm{V}}`$ (Herbig 1995). We have measured equivalent widths for these DIBs and compared them with those quoted by Herbig (1995) along the (standard) line-of-sight to HD 184143. We obtain equivalent widths of 377 and 135 mÅ for $`\lambda `$5797 and $`\lambda `$5849, implying $`E_{\mathrm{B}\mathrm{V}}`$=2.00 and 2.04 mag, in excellent agreement with our value derived from Case B recombination theory. The de-reddened flux distribution for NaSt1 is shown in Fig. 8. Clearly, the intrinsic optical flux distribution from NaSt1 is very blue, indicating a very hot ionizing source for the nebula, as illustrated by the 100,000K blackbody flux distribution in the figure. 
Our optical spectroscopy does not allow us to distinguish between temperatures for the ionizing source in the range 30kK (if E<sub>B-V</sub>=2.0 mag) to ~200kK (if E<sub>B-V</sub>=2.2 mag). The mid-IR energy distribution can be approximated with a warm blackbody of 700K (van der Hucht et al. 1984; Williams et al. 1987), while a further blackbody of ~2000K is required to reproduce the intrinsic near-IR flux distribution. The combined effect of our three blackbodies is indicated by a solid line in Fig. 8. ### 4.2 Distance We use several techniques to estimate the distance, based on the LSR radial velocities of NaSt1 and of interstellar material, and on previously obtained distance-reddening relations along this line-of-sight. To estimate the systemic LSR radial velocity for the emission lines observed in NaSt1, we have measured the velocities of thirteen emission lines which show red and blue components (see Fig. 2) in the Keck HIRES dataset. We derive $`V_{\mathrm{LSR}}`$=-4$`\pm `$6 and +34$`\pm `$5 km s<sup>-1</sup> for the blue and red components, giving a systemic $`V_{\mathrm{LSR}}`$ of $`+15`$ km s<sup>-1</sup> for NaSt1. The interstellar Na i lines are shown in Fig. 9 and have a strong, broad component centred on $`V_{\mathrm{LSR}}=+25`$ km s<sup>-1</sup>, and extending to $`+50`$ km s<sup>-1</sup>, with a weaker component at -25 km s<sup>-1</sup>. The LSR radial velocity as a function of distance for the line-of-sight towards NaSt1 is also shown in Fig. 9, using the Galactic rotation curve of Brand & Blitz (1993) and a Galactocentric distance of 8.5 kpc for the Sun. The velocity of $`+15`$ km s<sup>-1</sup> derived from the emission lines indicates a distance of ~1 kpc for NaSt1. In contrast, the main Na i absorption feature, extending up to $`+50`$ km s<sup>-1</sup>, suggests a distance of ~3.3 kpc. The first estimate assumes that NaSt1 is participating in the Galactic rotation and has no peculiar velocity. The second estimate assumes that the Na i absorption originates from diffuse interstellar clouds along the line-of-sight which are co-rotating with the Galaxy. It is possible that some of this absorption arises in the circumstellar shell associated with NaSt1 since the red emission component is at $`V_{\mathrm{LSR}}=+34`$ km s<sup>-1</sup>. On the other hand, there is no blue-shifted emission with velocities as negative as the additional Na i component at -25 km s<sup>-1</sup>. Alternatively, Cappellaro et al. (1994) have presented an approximate distance-reddening relation, based on normal stars and PNe, for objects along a line-of-sight close to that of NaSt1. Using their relation, our E<sub>B-V</sub> implies a significantly larger distance of 7$`\pm `$1 kpc. It is possible, however, that the high extinction towards NaSt1 results from dusty circumstellar material rather than interstellar material. In summary, the high reddening of E<sub>B-V</sub>=2.1 mag derived from H i line ratios indicates a large distance of ~7 kpc, but some of this reddening could be local to NaSt1. Conversely, the radial velocities of the emission and interstellar absorption lines indicate smaller distances of 1–3.3 kpc and, by inference, that most of the reddening is circumstellar. Dame & Thaddeus (1985) list nine O stars lying behind the Aquila Rift with distances in the range 1–6 kpc and extinctions of $`A_\mathrm{V}`$=2.4–4.2 mag, considerably less than the value derived for NaSt1.
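The two candidate distances propagate directly into the physical scale and dynamical age of the \[N ii\] nebula quoted in Sect. 3.3. A minimal sketch of that arithmetic (the 15 km s<sup>-1</sup> expansion velocity is taken from the text; the constants are standard):

```python
AU_PER_PC = 206265.0        # astronomical units per parsec
KM_PER_AU = 1.496e8
SEC_PER_YR = 3.156e7

def nebula_scales(diam_arcsec, dist_kpc, v_exp_kms=15.0):
    """Physical diameter and dynamical age of an expanding nebula."""
    diam_au = diam_arcsec * dist_kpc * 1000.0   # 1 arcsec at 1 kpc = 1000 au
    diam_pc = diam_au / AU_PER_PC
    t_dyn_yr = 0.5 * diam_au * KM_PER_AU / v_exp_kms / SEC_PER_YR
    return diam_pc, diam_au, t_dyn_yr

for d_kpc in (1.0, 3.3):
    pc, au, t = nebula_scales(6.8, d_kpc)
    print(f"d = {d_kpc:.1f} kpc: {pc:.3f} pc ({au:,.0f} au), t_dyn ~ {t:,.0f} yr")
# -> 0.033 pc (6,800 au), ~1,100 yr  and  0.11 pc (~22,400 au), ~3,500 yr
```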
We have a slight preference for the distances based on kinematic arguments and will assume a distance to NaSt1 of 1–3.3 kpc. This distance range yields an absolute visual magnitude for NaSt1 of $`M_\mathrm{V}`$=-1.9 to -4.5 mag. Assuming a bolometric correction in the range -3 to -7 mag, appropriate for stars with temperatures of 30,000K to 200,000K, the intrinsic luminosity of the hot component lies in the range log ($`L/L_{\odot }`$)=3.9 to 6.5. We can also obtain a luminosity from the IRAS mid-IR photometry, which reflects warm re-radiated dust. For a distance of 1 to 3.3 kpc we obtain log ($`L/L_{\odot }`$)=2.6 to 3.6 (assuming a 700K blackbody normalized at the observed 12$`\mu `$m flux). Clearly, although extremely bright at IR wavelengths, the entire bolometric luminosity of NaSt1 is not re-radiated in the IR. ## 5 Nebular diagnostics and elemental abundances We can now proceed to obtain estimates of the nebular physical parameters and abundances. From the usual diagnostic diagram relating H$`\alpha `$/\[S ii\] to \[S ii\] $`\lambda `$6717/$`\lambda `$6731 (see Sabbadin, Minello & Bianchini 1977) we find that NaSt1 falls in the photoionization-dominated region. Therefore the usual nebular diagnostic techniques used for studies of PNe and symbiotic novae are applicable. ### 5.1 Nebular parameters from observed line fluxes In Table 2 we provide observed line fluxes ($`F_\lambda `$) of features visible in our Keck and WHT observations, including de-reddened fluxes ($`I_\lambda `$), using the interstellar extinction obtained in Sect. 4. (Table 3 contains observed and de-reddened IR line fluxes from our UKIRT observations.) Nebular line identifications have been largely drawn from the lists of Kaler et al. (1976), Keyes, Aller & Feibelman (1990), Baluteau et al. (1995), and H.-M. Schmid (priv. comm.). Despite the rich emission-line spectrum of NaSt1, many of the usual optical nebular diagnostics are unavailable (e.g. \[O iii\]). Since we do not possess ultraviolet (nor are we likely to, given its high reddening!) or far-red spectroscopy, we are limited to a small number of available diagnostics. A diagnostic diagram for those line ratios that are sensitive to $`T_e`$ and $`N_e`$ is presented in Fig. 10. The curves were generated using the ratio program, written by I.D. Howarth and S. Adams, which solves the equations of statistical equilibrium, allowing a determination of $`N_e`$ as a function of $`T_e`$ for each ratio. From comparison with studies of symbiotic novae that exhibit a range of physical conditions (e.g. Schmid & Schild 1990), we might expect lines from ions with the highest ionization potentials (IP) such as \[Fe vi-vii\] to sample the highest electron densities, while lines with low ionization potentials (e.g. \[S ii\]) sample lower density regions. From Fig. 10, we see that this may be the case for NaSt1 since the $`N_e`$ obtained from \[S ii\] $`\lambda 6731/\lambda 4068`$ is a factor of ten lower than that obtained from other diagnostic line ratios. We note, however, that \[S ii\] $`\lambda `$4068 is unresolved in our ISIS dataset, and thus the solution from this ratio should be given a lower weight, since all the other line ratios are from the HIRES dataset. Indeed, the solution for \[N ii\] is consistent with the high ionization potential ions of \[Fe vi-vii\]. The intersection point of these diagnostics is shown in the figure (filled circle) at $`N_e`$=3×10<sup>6</sup> cm<sup>-3</sup> and $`T_e`$=13,000 K.
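Returning briefly to the luminosity estimates above, the distance-modulus arithmetic behind the quoted $`M_\mathrm{V}`$ and log ($`L/L_{\odot }`$) ranges can be reproduced as follows. The apparent magnitude m_V = 14.6 used below is an assumption back-solved from the quoted $`M_\mathrm{V}`$ range, not a measurement taken from the paper.

```python
import math

def log_luminosity(m_v, dist_kpc, a_v, bc, m_bol_sun=4.74):
    """log(L/Lsun) from apparent V mag, distance, extinction, and BC."""
    dist_mod = 5.0 * math.log10(dist_kpc * 1000.0 / 10.0)
    abs_v = m_v - dist_mod - a_v
    return (m_bol_sun - (abs_v + bc)) / 2.5, abs_v

# Assumed m_V = 14.6 with A_V = 6.5; BC spans the 30 kK to 200 kK range
for d_kpc, bc in ((1.0, -3.0), (3.3, -7.0)):
    log_l, abs_v = log_luminosity(14.6, d_kpc, 6.5, bc)
    print(f"d = {d_kpc} kpc, BC = {bc}: M_V = {abs_v:+.1f}, "
          f"log L/Lsun = {log_l:.1f}")
# -> M_V = -1.9 and -4.5; log L/Lsun = 3.9 and 6.5
```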
The $`N_e`$–$`T_e`$ intersection of the \[Ar iv\] diagnostic ratio (IP$`=`$60 eV) with the iron diagnostic ratios is at $`N_e`$=4×10<sup>6</sup> cm<sup>-3</sup>, $`T_e`$=20,000 K for \[Fe vi\] (IP$`=`$99 eV) and at $`N_e`$=3×10<sup>7</sup> cm<sup>-3</sup>, $`T_e`$=7,500 K for \[Fe vii\] (IP$`=`$125 eV). ### 5.2 Nebular abundances We now utilise our derived nebular parameters to obtain estimates of elemental abundances, and consider first the abundances of the collisionally-excited species. Ionic abundances were obtained by solving the equations of statistical equilibrium using the equib program, also written by S. Adams and I.D. Howarth, and are given in Table 4. Total element abundances were derived using the ionization correction factor (ICF) scheme of Kingsburgh & Barlow (1994) and these are also listed in Table 4. Turning to the helium abundance, optical He i-ii transitions are produced by radiative recombination, with additional contributions to He i line strengths from collisional processes. Helium ionic abundances were derived relative to H<sup>+</sup> using Case B recombination theory. For this we utilised interpolated effective recombination coefficients from Osterbrock (1989) for H i and He ii, and He i coefficients from Smits (1996). He i line strengths have been corrected for the effect of collisional population of their upper states (Kingdon & Ferland 1995). These collisional factors (C/R) are shown in Table 5 for $`N_e`$=3×10<sup>6</sup> cm<sup>-3</sup> and $`T_e`$=13,000 K together with the derived helium ionic abundance ratios. It is clear that there is a large scatter in the He<sup>+</sup>/H<sup>+</sup> ratios, which must result from optical depth effects. Indeed, the two lowest-lying singlets at $`\lambda 6678`$ and $`\lambda 7281`$ show the highest ionic abundances. We cannot therefore derive a reliable He<sup>+</sup>/H<sup>+</sup> ratio using Case B recombination theory. The He<sup>2+</sup>/H<sup>+</sup> ratio is, however, better determined since the He ii $`\lambda `$4686 and $`\lambda `$5412 transitions are in the correct Case B ratios and yield a mean He<sup>2+</sup>/H<sup>+</sup> ratio of 0.64. Furthermore, with this abundance, the predicted strengths of He ii $`\lambda `$4860 and $`\lambda `$6683 are 28.0 and 3.8 relative to H$`\beta `$, in reasonable agreement with the measured de-reddened values of 34.9 and 4.6. In summary, while we cannot determine a reliable He<sup>+</sup>/H<sup>+</sup> abundance because the lines are optically thick, we can determine a lower limit to the He/H abundance by using the He<sup>2+</sup>/H<sup>+</sup> ratio of 0.64. In reality, we expect the total He/H abundance to be at least twice this, given the strength of the He i lines and the expectation that He<sup>+</sup> is the dominant ionization stage of He. Even with this lower limit, it is apparent that NaSt1 is extremely helium-rich. In Table 6 we present a summary of the abundances derived for NaSt1 and a comparison with other objects. First, we find that the abundances of Ne, Ar and S are very similar to the average H ii region values (Shaver et al. 1983). This excellent agreement suggests that the use of single representative $`T_e`$ and $`N_e`$ values has produced reliable abundances. The N/O ratio for NaSt1 is very different to that expected for H ii regions because N is enhanced by a factor of 20 while O is depleted by a factor of 140. Such extreme values indicate heavily CNO-processed material.
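As a technical aside before continuing the abundance discussion: the Case B conversion from a de-reddened line ratio to an ionic abundance has the simple form sketched below. The emissivities are hypothetical placeholder values of roughly the 10,000 K order; the actual analysis interpolates Storey & Hummer (1995) and Smits (1996) coefficients at the adopted $`T_e`$ and $`N_e`$.

```python
# Hypothetical Case B effective emissivities 4*pi*j/(Ne*Nion) in erg cm^3/s;
# placeholders to be replaced by Storey & Hummer (1995) values interpolated
# at Te = 13,000 K and Ne = 3e6 cm^-3.
EMISSIVITY = {"Hbeta": 1.24e-25, "HeII_4686": 1.48e-24}

def he2_over_h(i_4686_over_hbeta):
    """N(He2+)/N(H+) from a de-reddened I(4686)/I(Hbeta) ratio (Case B)."""
    return i_4686_over_hbeta * EMISSIVITY["Hbeta"] / EMISSIVITY["HeII_4686"]

# With these placeholder emissivities, an intensity ratio of ~7.6 would
# reproduce the quoted He2+/H+ = 0.64
print(f"He2+/H+ = {he2_over_h(7.6):.2f}")
```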
Indeed, the total H ii C$`+`$N$`+`$O abundance of $`8.27\times 10^4`$ is comparable to the combined N$`+`$O abundance of $`7.47\times 10^4`$ for NaSt1 suggesting that very nearly all the carbon and oxygen have been processed to nitrogen. This is in accord with the null detection of carbon emission lines in NaSt1 (Sect. 3.1). The only object known to us which shows such extreme CNO-processing is the LBV $`\eta `$ Car. In Table 6, we list the abundances from the recent study of Dufour et al. (1997). The spectacular bipolar nebula associated with $`\eta `$ Car was ejected in 1840 during a giant eruption (Davidson & Humphreys 1997). The abundances show that the ejected outer stellar layers are composed of CNO-equilibrium products. Other LBV nebulae (e.g. AG Car, R127, S119) have much smaller nitrogen enrichments of 4–11, and little, if any, oxygen depletion, indicative of CN-processing only (Smith et al. 1997, 1998). These abundance differences have led Lamers et al. (1998) to propose that the LBV-like star whose spectrum now dominates the nucleus was not the star that ejected the nebula because the appearance of the stellar spectrum is indicative of mildly-enhanced CN products. They instead suggest that the eruptor was more evolved (possibly WR-like) and thus more massive than the LBV $`\eta `$ Car. The abundances we determine for NaSt1 are also very different to those measured for symbiotic stars. Nussbaumer et al. (1988) and Schmid & Schild (1990) find CNO abundances similar to those observed in red giants with only N enhanced from CN-processing. The abundances we determine are also not in agreement with those measured for novae which show enriched C, N and O and sometimes Ne (Livio & Truran 1994). ## 6 Discussion In this final section, we discuss the information that we have obtained for NaSt1 with the aim of identifying its true nature. ### 6.1 Comparison of NaSt1 with other peculiar emission line objects First, we will compare our observations of NaSt1 with known WR, Ofpe/WN9, LBV and B\[e\] stars, to which it has previously been compared, and symbiotic novae, to which it shows certain similarities. #### 6.1.1 Wolf-Rayet and Ofpe/WN9 stars A Wolf-Rayet (WN10) classification for NaSt1 was suggested by Nassau & Stephenson (1963) and supported by Massey & Conti (1983), while van der Hucht et al. (1989) proposed an Ofpe/WN9 classification. WR stars represent the final state in the evolution of very massive stars prior to the supernova explosion. Their optical spectrum is characterised by broad, pure emission (and P Cygni) profiles of highly excited species resulting from a fast, extremely dense stellar wind. Ofpe/WN9 stars, reclassified as WN9–11 stars by Smith et al. (1994) and Crowther, Hillier & Smith (1995a), are intimately related to both classical WR stars and LBVs. In Fig. 11, we compare our rectified WHT-ISIS spectrum of NaSt1 with the LMC star HDE 269582 (WN10, previously Ofpe/WN9; Crowther & Smith 1997). This confirms the quite different appearance of NaSt1, even at intermediate spectral resolution. Stellar He i–ii, H i and N iii features are observed in the WN10 star, including the P Cygni He i $`\lambda `$5016 profile, providing evidence for a dense stellar wind outflow. The only nebular lines observed in WR stars are those weak features originating in low-excitation, ejecta-type circumstellar nebulae (e.g. Nota et al. 1996). #### 6.1.2 B\[e\] stars van der Hucht et al. 
(1989) suggested a possible B\[e\] supergiant nature for NaSt1 – such luminous, massive objects appear to have an equatorial excretion disk plus a polar OB-type stellar wind. In the optical, their spectra are characterised by a plethora of low-excitation emission lines including the Balmer series plus narrow permitted and forbidden lines of singly ionized ions (e.g. Fe ii), as shown in Fig. 11 for the very luminous LMC B\[e\] star Hen S22. The presence of He ii $`\lambda `$4686 emission in NaSt1 led van der Hucht et al. (1989, 1997) to tentatively suggest that it may be a high temperature counterpart of B\[e\] stars, and hence they proposed a new O\[e\] classification. In common with B\[e\] stars, NaSt1 shows significant mid-IR dust excess and \[Fe ii\] lines in its optical spectrum. However, NaSt1 additionally displays high excitation, permitted (He ii $`\lambda `$4686) and forbidden (\[Fe vii\] $`\lambda `$6087) emission lines and does not show the broad, stellar Balmer lines (compare H$`\beta `$ profiles in Fig. 11). #### 6.1.3 Symbiotic novae The spectral appearance of NaSt1 shares some characteristics with symbiotic novae. These systems consist of a red giant, an ionized nebula, and a hot ionizing source, with the red star generally directly observable in the near-infrared. Fig. 11 presents a comparison with the D-type (‘dusty’) symbiotic nova V1016 Cyg, which Schmid & Schild (1990) found to have an electron density of $`N_e`$~10<sup>6</sup> cm<sup>-3</sup>, comparable to that of NaSt1. V1016 Cyg and NaSt1 show a strong nebular spectrum, including He ii $`\lambda `$4686, H$`\beta `$, \[N ii\], N iii, plus low and high-excitation forbidden Fe emission. Both systems show IR dust emission and a correlation between density and ionization potential (Schmid & Schild 1990). However, significant differences are also found: (i) there is no spectroscopic evidence for a red giant in NaSt1 at IR wavelengths; (ii) the He i-ii emission spectrum of NaSt1 is dramatically stronger; (iii) the characteristic symbiotic O vi Raman scattered lines are absent; (iv) \[O iii\] $`\lambda `$5007 emission is extremely strong in V1016 Cyg, and indeed in all dusty symbiotics, whereas it is very weak in NaSt1. Therefore, while NaSt1 shows some similarities with dusty symbiotic novae, its nebular line strengths are anomalous, and its overall properties are distinct. #### 6.1.4 $`\eta `$ Carinae From Sect. 5.2, the abundance pattern of NaSt1 most closely resembles that of $`\eta `$ Car. Consequently we include a spectroscopic comparison between NaSt1 and $`\eta `$ Car in Fig. 11. $`\eta `$ Car shows narrow emission lines of H i and He i with very broad wings, and He ii $`\lambda `$4686 is absent. Highly ionized forbidden nebular lines are not observed in $`\eta `$ Car. Therefore, although its appearance is also unusual, $`\eta `$ Car bears little spectroscopic resemblance to NaSt1. ### 6.2 What is the nature of NaSt1? Of the objects discussed in the previous subsection, NaSt1 most closely resembles symbiotic novae in its optical spectroscopy and $`\eta `$ Car in its nebular abundances. We believe we can rule out a symbiotic nature on the basis of its He-rich, O-deficient chemistry and the spectroscopic absence of a red-giant component. Specifically, the CNO-cycle products in the ejected nebula are unique to a massive post-main sequence star. We also note that NaSt1 does not appear to be a strong X-ray emitter. On the basis of Einstein data, Pollock (1987) suggested that NaSt1 was a possible X-ray source.
NaSt1, however, lies fairly close (20) to the well-known supernova remnant Kes 79 (G33.6+0.1). Higher quality ROSAT PSPC observations revealed that the majority of the X-ray emission near NaSt1 is associated with Kes 79 (Pollock, priv. comm.) and negligible emission is observed from NaSt1 itself. The unusual nature of NaSt1 is further illustrated in Fig. 12, which compares its two–colour IR index (J-H and H-K) with those of other (potentially related) objects from our Galaxy (open symbols) and the LMC (filled symbols). Included are ‘normal’ stars, B\[e\] stars, supergiants, WN6–11 stars, LBVs, ‘dusty’ late WC (WCL) stars, and D-type symbiotic novae. We also show the location of NaSt1 before and after correction for the high interstellar extinction. We find the IR characteristics of NaSt1 are extremely unusual. Most other dusty luminous objects, including B\[e\], WCL and symbiotic stars, show quite different IR characteristics. The only object known to possess similar IR properties, after correction for the interstellar reddening towards NaSt1, is $`\eta `$ Car. ### 6.3 A massive, evolved star cloaked by a dense ejecta nebula Despite the absence of a stellar signature in NaSt1, our nebular analysis suggests that it contains a hot, luminous, evolved star, hidden from direct view by the dense nebular envelope. The composition of the nebula indicates that the star ejected its outer layers when CNO-equilibrium products were present on the surface. Comparison of the lower limit of 0.64 on the He/H ratio we derive with the surface He/H ratios derived for WR stars (Crowther et al. 1995a, Crowther & Smith 1997) suggests that the central star must have been a WN star at the time of eruption rather than an LBV-type star. An early WN (WNE) star identification agrees with the high temperature we find for the ionizing source. These stars are highly evolved objects, with stellar temperatures greatly in excess of 30,000 K and hydrogen-deficient, CNO-processed stellar winds. Indeed, our estimate of the stellar luminosity lies in the range occupied by early WN stars, log ($`L/L_{\odot }`$)=5.1–6.1 (Crowther, Smith & Hillier 1995b; Hamann & Koesterke 1998). Another possibility is that the remnant star would now have a chemical composition more advanced than that of a WN star, namely that of a WC-type star, if most of the stellar envelope was lost in the ejection; this is also in accord with the temperature and luminosity we derive for the ionizing source. Early-type WR stars, however, have very dense stellar winds, so how would the characteristic broad, stellar features not be directly observed? Perhaps its stellar wind has not yet pierced the dense ejected nebula. The dynamical age obtained from the nebular analysis is of the order of a few thousand years. This would imply a very recent ejection of a large amount of material, as evidenced by the very high characteristic electron density of $`3\times 10^6`$ cm<sup>-3</sup>. The only spectral feature which shows any possible type of stellar wind outflow is the He i $`\lambda `$20581 profile, which has wings extending to ~300 km s<sup>-1</sup> and closely resembles the He i 1.0830$`\mu `$m profile of the massive young stellar object (YSO) Sh 2-106IR, with a comparable outflow velocity (Drew, Bunn & Hoare 1993 and J. Drew, priv. comm.). This velocity is, however, much lower than that of early WR stars, which have wind velocities of ~2000–3000 km s<sup>-1</sup>. It is more characteristic of an LBV during its hot phase (e.g. AG Car; Smith et al. 1994).
Theoretically, we might reconcile a high stellar temperature and low wind velocity with a massive star that is extremely close to its Eddington limit. Indeed, low velocity, aspherical outflows are anticipated for stars close to the related ‘$`\mathrm{\Omega }`$ limit’, which includes the effect of rotation (see Langer 1997). The only LBV known which has an ejected nebula composed of heavily processed CNO material is $`\eta `$ Car. As discussed in Sect. 5.2, the advanced evolutionary state of this nebula has led Lamers et al. (1998) to propose that the eruptor was not the LBV but an unseen, more evolved star. The IR two–colour index of NaSt1 in Fig. 12 shows strong similarities with $`\eta `$ Carinae, indicating comparable nebular dust conditions. It is possible that NaSt1 is a counterpart to $`\eta `$ Carinae with an unseen massive evolved central star that underwent a major instability and ejected its outer layers a few thousand years ago. Differences in the spectral appearances of NaSt1 and $`\eta `$ Car are probably attributable to geometry, age, and the hotter ionizing source in NaSt1 ($`\eta `$ Car is cooler than ~30kK, judging from the absence of He ii $`\lambda `$4686 emission). Unfortunately, we are unable to comment on details of the precise geometry of NaSt1 since we do not possess deep, high spatial resolution optical/IR imaging. Whatever the true nature of NaSt1, its properties are extremely unusual. Is there any evidence for objects with similar characteristics? van der Hucht et al. (1984) and Williams et al. (1987) have discussed similarities between NaSt1 and LS4005 (WR85a). LS4005 also shows narrow ($`\mathrm{\Delta }v`$~20 km s<sup>-1</sup>) emission lines of N ii-iii and Fe ii-iii (both allowed and forbidden), with strong He i-ii features, no absorption features, and photometric variability (van der Hucht et al. 1989). LS4005 would certainly represent an excellent target for future high resolution spectroscopy and imaging. ### 6.4 Summary and future work We have presented optical and IR spectroscopy and imaging of NaSt1, which have revealed a heavily CNO-processed nebula. NaSt1 serves as a useful reminder that great care should be taken when selecting IR sources to be classification standards. While many authors have commented that NaSt1 bears little resemblance to other late-WN type stars at IR wavelengths, it has nevertheless remained a classification standard. We interpret the spectrum of NaSt1 as arising in a dense nebula, ejected by an evolved massive star. The H-deficient, CNO-processed nebula suggests that an unseen early WN or WC star provides the ionizing flux. The only object which shares some of the peculiar characteristics of NaSt1 is $`\eta `$ Carinae. NaSt1 appears to be a remarkable object, and hints at new insights into massive star evolution. ## Acknowledgments We would like to thank Mike Barlow, Steve Fossey, Jay Gallagher, Norbert Langer, Mario Livio, Xiao-Wei Liu, Andy Pollock and Hans-Martin Schmid for many fruitful discussions. We also wish to thank Bruce Bohannan, You-Hua Chu, Hans-Martin Schmid, Lindsey Smith and Peter Tamblyn for generously forwarding additional observations. PAC acknowledges financial support from PPARC and the Royal Society. We are especially grateful to Marten van Kerkwijk for obtaining the Keck HIRES observations for us, and the staff of the now defunct Royal Greenwich Observatory for obtaining service spectroscopy and imaging.
The William Herschel Telescope is operated on the Island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. The W.M. Keck Telescope is operated by Caltech and the University of California on Mauna Kea, Hawaii, while the U.K. Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the Particle Physics and Astronomy Research Council also on Mauna Kea, Hawaii.
# A Calibration Method for Wide Field Multicolor Photometric Systems The work is supported partly by the National Natural Science Foundation of China under contracts No. 19833020 and No. 19503003 ## 1 Introduction Multi-color photometry can provide accurate SED information of low spectral resolution for a large sample of objects and is a very powerful tool for dealing with many important astrophysical problems. It can be used to measure the photometric redshifts of quasars and galaxies, and to select quasars and other interesting objects based on their characteristic SEDs. One can also use SEDs for stellar population synthesis, an important tool for studying the structure and evolution of galaxies. The more passbands in which one observes, the better one can determine the SED. The Beijing-Arizona-Taipei-Connecticut (BATC) multicolor photometric survey is designed to obtain as much information on the SEDs of celestial sources as possible. It combines the Beijing Astronomical Observatory’s 60/90 cm f/3 Schmidt telescope with a Ford Aerospace CCD of 2048$`\times `$2048 pixels (recently replaced by a thinned Loral CCD, thanks to the Steward Observatory) and 15 intermediate-band filters ranging from 320 nm to 1000 nm to obtain the SEDs of all objects down to $`B=21`$ mag in a field of one square degree. A key problem in the survey is to calibrate the SEDs of our objects accurately. The standard procedure of SED calibration is as follows: on photometric nights, we observe through the 15 BATC filters both the Oke & Gunn (1983) spectroscopic standard stars (HD19445, HD84937, BD+262606 and BD+174708) and the target fields. We convolve the known fluxes of the standard stars with the transmission curves of the BATC filters to determine magnitudes in the BATC photometric system (Fan et al. 1996, Zheng et al. 1999). In practice, however, this method is not very efficient, for two reasons. First, the calibrations are not always taken on perfectly photometric nights. This results in systematic troughs or bumps in the SEDs of all objects in a field. Second, in order to obtain accurate SED calibration for 15 passbands, we need many photometric nights, which could take a very long time at our present site. In order to solve these problems, we are developing a technique we call “SED self-calibration”. This is a statistical method, based on applying a stellar SED library to a large sample of observed stellar objects. In section 2, we describe our method of SED self-calibration. In section 3 we present some results from tests of the method. We discuss the method and give our conclusions in section 4. ## 2 Method The BATC survey covers regions at high galactic latitudes. In a typical field, several thousand objects are detected per square degree. Among them are several hundred unsaturated, bright stars with high signal-to-noise ratios. These “good” stars with highly precise photometry form a large sample which is the observational basis for the SED self-calibration method. In order to make the description concise, we will use, in the following presentation, the term $`SED_{obs}`$ to express the SED of an object which is calibrated by observations, and the term $`SED_{match}`$ to express the SED of the same object which is the closest match found in the SED library. We assume that most of these “good” stars are normal, nearby stars; therefore, most of these “good” stars should have very close matches in the SED libraries.
If the field has been well calibrated by observations, the RMS of the differences between the $`SED_{obs}`$ and the $`SED_{match}`$ for most of the “good” stars should be roughly the size of the photometric precision. If the RMS of the residuals is considerably larger than the photometric precision, then either some of the “good” stars might not actually be normal stars, or observations in some passbands might not have been made during truly photometric nights. In the former case, we can reject those stars with abnormal residuals from the data sample and repeat the process. In the latter case, we can shift the zero points for those passbands which show systematic deviations of $`SED_{obs}`$ from $`SED_{match}`$. We iterate these processes until the RMS of the residuals reaches the level of the photometric precision, which is about 0.01 mag for “good” stars. This self-calibration method is based on the following assumptions: 1. Most bright stellar objects in the field are normal stars whose SEDs can be found in the SED library. 2. The interstellar extinction of these bright stars is small and the differences in extinction between stars in the same CCD field are negligible. 3. The SED library is reliable and covers all the spectral types and luminosity classes required to fit the $`SED_{obs}`$. We now describe briefly details of the observed and model spectral energy distributions. ### 2.1 $`SED_{obs}`$ First, we must obtain $`SED_{obs}`$ for all the objects in the target field. Once a target field has been observed in several passbands, we can obtain the instrumental colors of the stars in the field by using standard photometry packages, such as DAOPHOT. The zero point for each passband is obtained through observations of Oke & Gunn standards during photometric nights. As long as the zero points are well determined, the instrumental magnitudes can be transformed into the calibrated magnitudes of the BATC system. The instrumental color index can be defined as in Table 1, where $`Co_{j,s}^i`$ is the instrumental color index of the j’th band minus the s’th band for the i’th object. Here we use the $`s`$’th band as the reference band. To calculate the flux-calibrated color index $`C_{j,s}^i`$, the color index zero point correction $`Cc_{j,s}`$ is added to the instrumental color index $`Co_{j,s}^i`$: $`C_{j,s}^i=Co_{j,s}^i+Cc_{j,s}`$ ### 2.2 SED libraries The stellar SED library used for this method is a hybrid one, including the theoretical SEDs of Kurucz (1992, 1993), and the observational SEDs of Gunn and Stryker, and of Straizys and Sviderskiene (henceforth referred to as “Vilnius”). We transfer spectral fluxes into the BATC photometric system using the following equation: $`m_i=-2.5\mathrm{log}(\widehat{f_\nu })_i-48.6`$ where $`(f_\nu )_i`$ is the monochromatic energy at the central wavelength of the $`i^{th}`$ filter, in units of erg cm<sup>-2</sup> s<sup>-1</sup> Hz<sup>-1</sup> (cf. Fan 1995; Fan et al. 1996). The Vilnius library has 49 spectra covering spectral types from O to M6 and luminosities from the main sequence to giants. The library of Gunn and Stryker contains 74 spectra with the same coverage of spectral type and luminosity. The SED libraries given by Kurucz cover temperatures from 3500K to 50000K with a range of stellar surface gravities and metallicities. Using these SEDs we can build a table of model color indices. If the $`SED_{obs}`$ is well determined by the observations, it can be matched to one of the model SEDs with a residual at the level of the photometric precision.
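A minimal sketch of how such a model color-index table can be built is given below. It is an assumption-laden illustration: the filter transmission arrays and the library format are hypothetical, and the response-weighted mean flux is used as a simple stand-in for the central-wavelength monochromatic flux of the equation above.

```python
import numpy as np

def synth_mag(wave_aa, f_nu, filt_wave, filt_resp):
    """m = -2.5 log10(<f_nu>) - 48.6, with <f_nu> the response-weighted
    mean monochromatic flux through one filter."""
    resp = np.interp(wave_aa, filt_wave, filt_resp, left=0.0, right=0.0)
    f_mean = np.trapz(resp * f_nu, wave_aa) / np.trapz(resp, wave_aa)
    return -2.5 * np.log10(f_mean) - 48.6

def model_color_table(library, filters, ref=8):
    """Cm[k][j] = m_j - m_ref for every library SED k (cf. Table 2).
    library: list of (wave_aa, f_nu) pairs; filters: list of (wave, resp)
    pairs; ref = 8 assumes zero-based indexing with BATC9 as reference."""
    table = []
    for wave, f_nu in library:
        mags = [synth_mag(wave, f_nu, fw, fr) for fw, fr in filters]
        table.append([m - mags[ref] for m in mags])
    return np.array(table)
```

The same routine also serves the library cross-check of Sect. 3.2, where synthetic magnitudes are computed for both the Gunn & Stryker and the Vilnius spectra.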
Mathematically, we search for a minimum of $`\sigma `$: $`\sigma =\sum _{i=1}^{n}\underset{k\in [1,N]}{\mathrm{min}}\left[\sum _{j=1}^{m}\left(Co_{j,s}^i+Cc_{j,s}-Cm_{j,s}^k\right)^2\right]`$ Here, $`n`$ is the number of the “good” stars, $`m`$ is the number of color indices, and $`N`$ is the total number of entries in the model SED library. $`Co_{j,s}^i`$, $`Cc_{j,s}`$ and $`Cm_{j,s}^k`$ are the instrumental color index, the color index zero-point offset, and the color index of the model SED, respectively. We look simultaneously for the nearest match to each star in the library, and for values of $`Cc`$ which lead to a minimum in $`\sigma `$. In our algorithm, we call subroutines of the MINUIT package (James 1994) from the CERNLIB software. The process converges on the correct values of the color index zero-point corrections $`Cc_{j,s}`$. During the iteration process, a different weight is given to each star according to its instrumental magnitude. The program can subsequently reject those stars having abnormally large differences between $`SED_{obs}`$ and $`SED_{match}`$. At the start of the iteration, we must provide initial values for the color index corrections. The effect of different initial values on the final converged result is less than 0.01 mag. We take the difference between the mean instrumental color index and the mean model color index as the initial correction: $`Cc_{j,s}^0=\left(\sum _{i=1}^{n}Co_{j,s}^i\right)/n-\left(\sum _{k=1}^{N}Cm_{j,s}^k\right)/N`$ Because most nearby stars are of spectral type F, G and K, we employ only these types of model SEDs to estimate the initial color correction constants. ## 3 Testing ### 3.1 Empirical comparison of the two methods We can test the method as follows: if we observe one of the Oke & Gunn standard star fields, we acquire the instrumental SED for the standard star. Since we know the real SED of the standard star, we can determine exactly the SED corrections for each passband. In other words, for this particular field, the zero-point color corrections can be derived directly. If we use our method on the same data set, we can compare its values for the zero-point corrections to the correct ones. We observed the field of HD84937 through 13 filters on Jan. 22, 1998. The transparency was good, but it was not photometric. Two exposures were taken for each filter: a short one of a few seconds to avoid saturating the bright star HD84937, and a long one of 300 seconds. The short and the long exposures for each filter were taken in quick succession in order to guarantee that both shared the same weather conditions. The short exposure was used to determine the SED corrections for the field via HD84937 directly, and the long exposure was used to determine the SED corrections via our method. In the following, we use this data set to do several tests of our method. As mentioned above, we obtained the instrumental SED of the standard star HD84937 ($`MAG\mathrm{\_}std\mathrm{\_}instr`$) from the short exposure images. Using the known SED of HD84937 in the BATC system ($`MAG\mathrm{\_}std\mathrm{\_}BATC`$), we calculated the differences in each passband $`i`$: $`dMAG\mathrm{\_}std_i=MAG\mathrm{\_}std\mathrm{\_}BATC_i-MAG\mathrm{\_}std\mathrm{\_}instr_i`$ If the SED self-calibration method is successful, it should yield corrections for the SED of the long exposure images $`Cc_i`$ which differ from $`dMAG\mathrm{\_}std_i`$ only by a constant $`K`$, due to the difference in exposure time: $`K=Cc_i-dMAG\mathrm{\_}std_i`$ The results are listed in Table 3.
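As an aside, the minimization described above can be sketched compactly. The paper uses MINUIT, so the scipy optimizer and the median-based outlier clipping below are stand-in substitutes, and the magnitude-dependent star weights are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def sigma(cc, co, cm):
    """Objective of the text: for each star, the squared residual to its
    nearest library template after applying zero-point corrections cc."""
    resid = co[:, None, :] + cc[None, None, :] - cm[None, :, :]
    return (resid ** 2).sum(axis=2).min(axis=1).sum()

def self_calibrate(co, cm, n_iter=3, clip=3.0):
    """co: (n_star, m) instrumental colors; cm: (N_model, m) model colors.
    The clip factor for rejecting abnormal stars is an assumed choice."""
    cc = co.mean(axis=0) - cm.mean(axis=0)        # initial guess (Sect. 2.2)
    good = np.ones(len(co), dtype=bool)
    for _ in range(n_iter):
        res = minimize(sigma, cc, args=(co[good], cm), method="Nelder-Mead")
        cc = res.x
        resid = (co + cc)[:, None, :] - cm[None, :, :]
        per_star = (resid ** 2).sum(axis=2).min(axis=1)
        good = per_star < clip * np.median(per_star)   # reject abnormal stars
    return cc, good
```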
Table 3 shows that the constant $`K`$ is indeed the same in each passband, except for the BATC2 filter. The RMS variation of $`K`$ with color is at the level of $`0.004`$ mag. We give two reasons for the large difference in the near-UV band BATC2: first, our thick CCD has very low quantum efficiency in the blue, causing low signal-to-noise values for all stars. Second, the SED template in this wavelength region is available only for some stellar types. ### 3.2 Cross checking between two spectral libraries We can test our method in a second way. Suppose the SED libraries, either Gunn & Stryker or Vilnius, represent well the SEDs of most types of stars. Having computed synthetically the BATC magnitudes in all passbands for the stars in each library, we can apply our method to calculate the corrections between the two sets of spectral models. If: 1) both libraries provide accurate stellar SEDs, and 2) each library covers an equal range of stellar types, i.e. any SED in one library has a corresponding SED in the other library, and 3) the method developed here is correct and effective, then after the iteration process, the final SED corrections should be close to zero. In the following test, we use the Gunn & Stryker library as the $`SED_{match}`$ catalog and the Vilnius library as the $`SED_{obs}`$ catalog. The results are shown in Table 4. Column 1 gives the filter name, Column 2 its central wavelength, and Column 3 the correction for each band. The BATC9 passband (6660 Å) is used as the reference band, and kept fixed during the iterative process. The final values for the zero point corrections are indeed very small. The mean deviation of the zero point corrections is $`\pm 0.015`$ mag; however, the BATC1 band again shows much larger deviations than the other bands. This result shows that the two SED libraries can in general be matched well to each other, but two points call for further discussion. 1) There is a systematic deviation between the two SED libraries in the blue versus the red: below 455nm, the Vilnius SEDs are flatter than those of Gunn & Stryker; above 455nm, the Vilnius SEDs are more depressed than those of Gunn & Stryker. 2) The large deviation in the UV band indicates that one or both of the libraries do not represent the stellar SEDs well in this region. ### 3.3 Stellar classification The process of matching $`SED_{obs}`$ to $`SED_{match}`$ produces as a byproduct a spectral classification for each star; or, if a library of theoretical models is used, the stellar atmospheric parameters of each star. This provides us with an indirect way to test the method presented in this paper. Figure 1 shows a good correlation between the original spectral class of stars in the Vilnius catalog and the spectral class determined for those stars via our method, using the Gunn & Stryker library for $`SED_{match}`$. ## 4 Discussion and conclusions 1. Our method is a statistical one, so it can only be effectively applied to a large stellar sample. The BATC survey satisfies this condition. Each BATC field covers about one square degree. Typically, there are more than 4000 objects detected in each image. We are guaranteed several hundred bright and unsaturated stars with reliable instrumental magnitudes to use as a “good” star sample. 2. The basis of our method is fitting the observed stellar SEDs to a library of stellar SEDs. Our assumption that most of our “good” stars are normal stars, whose SEDs can be found in the SED library, appears firm. Most abnormal stars are rejected during the iteration process. 3.
The SED library is key to the method. There is a great demand in the astronomical community for reliable SED libraries which cover a wide range in wavelength and stellar type. The theoretical libraries (e.g. Kurucz 1993) are limited by our knowledge of stellar physics. The theoretical SEDs for late-type, low-temperature stars are not as reliable as those for hot stars. There are many observational stellar libraries (Gunn & Stryker, Vilnius, etc.), but differences among them exist. It is not easy to judge which one is the best. After comparing various stellar SED libraries we have collected from the literature, we are at present using mostly the Gunn & Stryker library in our calculations. Further work on making good libraries of SEDs is necessary. 4. In principle, interstellar extinction must be considered in the final result. Our method of SED correction takes into account not only terrestrial atmospheric extinction, but also interstellar reddening. If standard stars or spectral models suffer from systematically more or less (or different) extinction than a survey’s target stars, our method will yield systematically incorrect zero-point corrections to the photometry of the target stars. This is because the method fits the observed SEDs, which are affected by reddening, to theoretical SEDs that are unaffected by reddening. In the BATC survey, the fields are located at high galactic latitudes, where the interstellar extinction is much reduced. We therefore use only bright stars, which are nearby and little affected by interstellar extinction, as standards for the survey fields. 5. We have made several tests to see if the method works. (1) We have tried different filter bands as the reference band to see if the change in the reference band affects the results. Our results show that the difference in the final constant corrections is less than 0.01 mag. This means that the choice of the reference band for the color indices is not important. Normally, we select the band of deepest exposure as the reference band, since it has the highest signal-to-noise ratio. This also allows the measurement of color indices for as many stars as possible. (2) We wanted to know if the number of filters used affects the results, and what is the minimum number of colors required to make the SED self-calibration work effectively. In order to do the test, we reduced the number of filters by removing some of the color indices. We found that the method still yields reasonable results. Obviously, the results improve when one uses a larger number of filter bands and a wider range of wavelengths. (3) We wanted to know if the final results depend on the characteristics of the “good” stars which we used for the SED self-calibration. For this purpose, we used various randomly selected subsets of the “good” stars in each image. We obtained very similar color correction constants. Furthermore, we divided the “good” star sample into several sub-groups according to apparent magnitude. The results show that the difference among the groups is about 0.03 mag. One possible reason is the low signal-to-noise ratio of the faint stars. (4) Does the choice of the initial value of the iteration affect the results? Our tests show that different initial values normally only affect the convergence time of the iteration. The difference in the final result is less than 0.01 mag. In some cases, there are several minima and the process may not iterate to the right one; this is an open issue.
The best way to solve this problem, in our experience, is to keep fixed those color indices that are very well determined by observations taken during photometric nights. This causes the iteration always to converge to the right minimum. After many tests and applications to real data, we conclude that, though there are still some problems requiring further development (such as creating larger libraries of stellar SEDs), the method presented here can work well: the accuracy of the SED calibration is comparable to the precision of the CCD photometry. A by-product of our method is the automatic classification of stellar type or the determination of stellar parameters, which is very useful for studies of Galactic structure via large-field multicolor CCD photometric surveys.
# On the enigmatic X-ray Source V1408 Aql (=4U 1957+11) ## 1. Introduction The low mass X-ray binary (LMXB) V1408 Aql ($`=`$4U 1957+11, 3U 1956+11) was detected during scans of the Aquila region by Uhuru in 1973 (Giacconi et al. (1974)), and it was subsequently identified with an 18$`\stackrel{\mathrm{m}}{\mathrm{.}}`$7 star having a strong blue excess (Margon, Thorstensen & Bowyer (1978)). The object is situated in a region of relatively small extinction ($`N_\mathrm{H}`$~$`1.3\times 10^{21}`$ cm<sup>-2</sup>; Dickey & Lockman (1990); Stark et al. (1992)). $`A_\mathrm{V}`$ measurements place the source at a distance $`>`$2.5 kpc, and comparisons of its X-ray and optical luminosity to Sco X-1 place it at a distance of ~7 kpc (Margon, Thorstensen & Bowyer (1978)). Little is known about the nature of the system. Optical spectra of V1408 Aql reveal a power-law continuum with H$`\alpha `$, H$`\beta `$, and He ii 4686Å emission lines (Cowley, Hutchings & Crampton (1988); Shahbaz et al. (1996)), typical for an accretion disk-dominated system. Thorstensen (1987) reported a nearly perfectly sinusoidal V-band luminosity modulation with 10% amplitude and a 0.389 d (=9.33 h) period, which he interpreted as due to X-ray heating of the companion. In more recent multicolor photometry a more complex lightcurve with 30% modulation amplitude was observed. Hakala, Muhli & Dubus (1999) interpret this change in the shape of the lightcurve as evidence for a disk with a large outer rim, possibly due to a warped disk, seen close to edge on (see also §4). This interpretation is also consistent with the shape of the infrared spectrum (Smith, Beall & Swain (1990)). The short orbital period is indicative of a late type main sequence star of $`M`$~1 M<sub>⊙</sub> as the donor star. The absence of X-ray eclipses and the assumption that the donor star fills its Roche lobe yield an upper limit on the orbital inclination of $`i`$ ≲ 70°–75°, consistent with the models for the optical variability (Hakala, Muhli & Dubus (1999)). V1408 Aql is one of the less well-studied possible black hole candidates (BHCs). Identification as either a BHC or a neutron star-low mass X-ray binary (NS-LMXB) is usually made by analogy with the spectral and timing behavior of better observed sources. V1408 Aql has been considered a BHC since 1984, when EXOSAT X-ray observations revealed that it has a very soft X-ray spectrum, similar to those of other BHCs. In color-color diagrams, V1408 Aql lies halfway between the black hole candidate GX 339-4 (in its high/soft state) and the neutron-star LMXBs Cyg X-2 and LMC X-2 (White & Marshall (1984); Schulz, Hasinger & Trümper (1989)). This color identification of V1408 Aql as a BHC, however, is not definitive. Previous narrow-band observations have not characterized the X-ray spectrum in a consistent manner. The analysis of the 1983 and 1985 EXOSAT observations of V1408 Aql led to contradictory results. While Singh, Apparao & Kraft (1994) succeeded in fitting a Comptonization spectrum to these data and interpreted this as an indication that V1408 Aql is a black hole candidate, Ricci, Israel & Stella (1995) interpreted the same data as being similar to those observed from NS-LMXBs. Observations with Ginga, with its larger spectral range and effective area, have shed more light on the nature of V1408 Aql (Yaqoob, Ebisawa & Mitsuda (1993)). The values of the normalizations of multicolor disk models (MCD; Mitsuda et al. (1984)), i.e.
$`(r_{\mathrm{in}}/d)^2\mathrm{cos}i`$ where $`r_{\mathrm{in}}`$ is the inner disk radius, $`d`$ is the distance to the source, and $`i`$ is the inclination, have been used to distinguish between BHCs and NS-LMXBs (Tanaka & Lewin (1995)). In the case of V1408 Aql, $`r_{\mathrm{in}}\mathrm{cos}^{1/2}i`$ ≈ 2 km assuming $`d=7`$ kpc, which is more characteristic of sources containing neutron stars. Additionally, the Ginga observation showed evidence of a hard tail (1–18 keV) comprising ~25% of the inferred flux for this system at that time. The best fit power-law photon indices for the hard component ranged from $`\mathrm{\Gamma }`$ ≈ 2 to 3. The EXOSAT observations of Ricci, Israel & Stella (1995) indicate the presence of an iron fluorescence line with an equivalent width of 90 eV or smaller and a line energy of 7.06 keV (i.e., highly ionized). Other values in the literature range from non-detection (e.g., Yaqoob, Ebisawa & Mitsuda (1993)) to 200 eV (White & Marshall (1984)), the uncertainty being mainly due to the differences in the assumed spectral continua and the different sensitivities of the instruments. Except for one observation, which hints toward a weak red-noise ($`f^{-\alpha }`$) component between $`10^{-4}`$ and $`10^{-3}`$ Hz, all EXOSAT observations are consistent with the absence of any periodic features (Ricci, Israel & Stella (1995)). The Ginga observations have yet to have their short timescale variability analyzed; however, they do show evidence of significant flux and color changes on long time scales (~10<sup>4</sup> s). The Vela 5B satellite did not detect any long-term X-ray variability from this source (Priedhorsky & Terrell (1984)); however, the upper limits to the variability were not particularly strong. If the published spectral models are accepted at face value, then the relative energetics of the disk black-body and power-law components, as well as the slope of the high-energy power-law, are very similar to those seen in BHCs such as LMC X-1 (Ebisawa, Mitsuda & Inoue (1989); Wilms et al. (1998b)), LMC X-3 (Treves et al. (1988); Wilms et al. (1998b)), and in the soft state of GX 339-4 (Miyamoto et al. (1991); Grebenev et al. (1991)). However, at luminosities as low as that of V1408 Aql, BHCs tend to show hard tails with no evidence of a disk or thermal component. On the other hand, NS-LMXBs that exhibit soft disk spectra also tend to show an additional ~2 keV blackbody component, while showing little hard flux (Miyamoto (1994), and references therein). Furthermore, low-luminosity NS-LMXBs *also* tend to be dominated by hard emission. Thus, there are good arguments that point towards V1408 Aql being a neutron star and also toward it being a black hole; however, none of the arguments are truly conclusive. In either case, V1408 Aql would still be a unique object, being either an unusually soft low-luminosity BHC, an unusually soft low-luminosity neutron star, or a soft neutron star with an unusually energetic hard tail. With the advent of X-ray detectors with much larger effective areas than those of EXOSAT and Ginga, as well as with the availability of detectors of higher energy resolution, such as those on the Advanced Satellite for Cosmology and Astrophysics (ASCA), a critical reexamination of the X-ray spectrum of V1408 Aql has become possible. In this paper we present the results from our analysis of a 30 ksec pointed observation with the Rossi X-Ray Timing Explorer (RXTE), as well as archival data from ASCA and the Röntgensatellit (ROSAT).
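As background for the spectral fits that follow, the MCD normalization argument above can be made concrete. This is a minimal sketch using the standard XSPEC diskbb convention $`K=(r_{\mathrm{in}}[\mathrm{km}]/(d/10\mathrm{kpc}))^2\mathrm{cos}i`$; the normalization $`K=8`$ below is a hypothetical value chosen only to reproduce the quoted $`r_{\mathrm{in}}\mathrm{cos}^{1/2}i`$ ≈ 2 km at 7 kpc.

```python
import math

def r_in_km(norm, dist_kpc, incl_deg=None):
    """Apparent inner-disk radius from the diskbb normalization
    K = (r_in[km] / (d / 10 kpc))**2 * cos(i)."""
    r_cos_half = math.sqrt(norm) * dist_kpc / 10.0   # r_in * cos(i)**0.5
    if incl_deg is None:
        return r_cos_half                            # inclination unknown
    return r_cos_half / math.sqrt(math.cos(math.radians(incl_deg)))

print(f"{r_in_km(8.0, 7.0):.1f} km")                 # ~2 km, cf. the text
print(f"{r_in_km(8.0, 7.0, incl_deg=70.0):.1f} km")  # if i were 70 deg
```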
In §2 we present the results from the spectral analysis. We discuss the timing analysis in §3, the long term variability of the source in §4, and we discuss our results in §5. The details of the data extraction are described in an appendix. ## 2. Spectral Analysis V1408 Aql was observed with RXTE in 1997 November in three observing blocks for a total on-source time of 27 ksec. A log of the observations is given in Table 1. Since the spectral shapes of the three observing blocks are identical, the data were analyzed together. Spectral and temporal data were extracted using the methods outlined in appendix A.1. Spectral analysis was performed with XSPEC, version 10.0s (Arnaud (1996)). The RXTE spectrum of V1408 Aql is very soft. The Proportional Counter Array (PCA) did not detect any flux above ~20 keV and the High Energy X-ray Timing Experiment (HEXTE) count rates are consistent with zero: the background subtracted count rates were $`0.4\pm 0.2`$ cps and $`0.0\pm 0.2`$ cps for HEXTE clusters A and B, respectively. The residual flux in cluster A is most probably due to a slight overestimation of the HEXTE background dead time, as the spectrum seen is similar to the HEXTE background. Therefore, we do not consider V1408 Aql to be detected with HEXTE and will not discuss these data further. To describe the PCA spectrum we use the spectral models traditionally applied to V1408 Aql: an exponentially cutoff power-law, a multicolor disk-black body (Mitsuda et al. (1984)), and a Comptonization model after Titarchuk (1994). Due to the low sensitivity of the PCA to the low absorbing column towards V1408 Aql (see Stelzer et al. (1999) for a discussion of the sensitivity of the PCA to $`N_\mathrm{H}`$), we fixed $`N_\mathrm{H}`$ to the Dickey & Lockman (1990) value of $`N_\mathrm{H}=1.3\times 10^{21}`$ cm<sup>-2</sup>. The results of our spectral fits are given in Table 2, while the PCA spectrum and the residuals are displayed in Fig. 1. All three models roughly describe the observational data. Note that we do not see any evidence for a high energy power-law tail like that seen in previous observations. The 90% confidence level upper limit to the 3–20 keV flux from a power law is $`8\times 10^{-12}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>, which is less than 2% of the observed 3–20 keV flux. The best description of the PCA data is given by the Comptonization model ($`\chi _{\mathrm{red}}^2=0.82`$ for 29 degrees of freedom), while the residuals of the MCD model and the exponentially cut-off power-law show structure in excess of that expected from calibration uncertainties of the PCA. These residuals are especially apparent in the low energy channels of the PCA, below the characteristic feature of the Xe L-edge at ~5 keV (a region of very uncertain detector calibration; see the discussion by Wilms et al. (1998a)). Inspection of our best fit values in Table 2 shows that the Comptonization model results in such a good fit because the seed photon temperature of the model, taken here as a Wien spectrum with best-fit temperature $`kT_0=0.34`$ keV, is uncharacteristically large. This is also evident in the very asymmetric confidence contour indicated in Table 2. Setting $`kT_0`$ to a small value yields residuals that more closely resemble those seen in the MCD and exponentially cut-off models. We conclude that there is unambiguous evidence for the presence of an additional soft excess below ~5 keV.
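A minimal sketch of the MCD (diskbb) model used above, i.e. a superposition of blackbodies with $`T(r)=T_{\mathrm{in}}(r/r_{\mathrm{in}})^{3/4}`$; the normalization is arbitrary and the $`kT_{\mathrm{in}}`$ below is an illustrative value, not the Table 2 best fit.

```python
import numpy as np

def diskbb_photons(e_kev, t_in_kev, n_r=200, r_out=1.0e3):
    """Multicolor-disk photon spectrum (Mitsuda et al. 1984), arbitrary
    normalization: blackbodies with T(r) = T_in * (r/r_in)**(-3/4),
    summed over annuli of area 2*pi*r*dr (r in units of r_in)."""
    r = np.logspace(0.0, np.log10(r_out), n_r)
    t = t_in_kev * r ** -0.75
    x = np.clip(e_kev[:, None] / t[None, :], None, 700.0)  # avoid overflow
    bb = e_kev[:, None] ** 2 / np.expm1(x)       # photon-flux Planck shape
    return np.trapz(2.0 * np.pi * r * bb, r, axis=1)

energy = np.logspace(np.log10(2.0), np.log10(20.0), 10)   # PCA band, keV
spectrum = diskbb_photons(energy, 1.5)                    # kT_in assumed
for e, s in zip(energy, spectrum):
    print(f"{e:5.1f} keV: {s:10.3e} (arbitrary units)")
```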
Since the low-energy cut-off of the PCA is at ~2 keV, this instrument cannot be used to further constrain the nature of this soft excess. We therefore turned to archival data from observations of V1408 Aql made with ASCA and ROSAT. The High Energy Astrophysics Science Archive Research Center (HEASARC) contains one ASCA observation of V1408 Aql, made in 1994 October (Table 1). A preliminary analysis of these data has been presented by Ricci et al. (1996). We extracted the data from all four instruments on ASCA, the two solid state detectors (SIS0 and SIS1) and the two GIS detectors (GIS2 and GIS3). Due to the uncertainty in the intercalibration of the instruments, the GIS and the SIS detectors were analyzed separately. We describe the data extraction process in appendix A.2. In Table 2 we list the results from modeling the data with the standard models for the SIS and the GIS, respectively. The data and residuals for the models are shown in Fig. 2. Note that due to our extraction procedure the model normalizations differ between the detectors. It is only possible to compare the spectral shapes (see appendix A.2). As with the RXTE-PCA data, both the exponentially cut-off power-law and the MCD model provide a rough description of the data. Due to the higher spectral resolution of the ASCA detectors, however, the causes of the spectral deviations are now apparent, and include a strong deviation at ~1 keV. We interpret this feature as evidence for the presence of line emission at this energy, which might come from the iron L complex or from emission features of other metals, such as K$`\alpha `$ lines from highly ionized neon or magnesium (see Nagase et al. (1994)). Modeling the feature with the addition of a simple Gaussian line does not result in a markedly improved fit. In contrast, including an optically thin thermal plasma spectrum after Raymond & Smith (1977) in the spectral modeling process results in a dramatic improvement of the fit ($`\mathrm{\Delta }\chi _{\mathrm{red}}^2=0.65`$ for the MCD model). The best-fit parameters for the thermal plasma are similar to the disk temperature found with the MCD model and the emission line spectrum is dominated by emission around 1 keV. In order to further check whether the 1 keV feature is always present in the X-ray spectrum of V1408 Aql we turned to archival ROSAT position sensitive proportional counter (PSPC) data. The observing log for this observation is given in Table 1 and the data extraction procedure is described in appendix A.3. As can be seen from our fit results in Table 2, the ROSAT data give similar results to the ASCA data. In fact, the ROSAT data *require* the presence of the line emission component to provide satisfactory fits (see also Fig. 3). We note that the PCA data show weak residuals in the region of an Fe line. An MCD model with a weak power law tail, for example, admits the inclusion of a 6.6 keV line with width 0.8 keV and equivalent width 80 eV. Such a weak, broad line, however, is comparable to the remaining uncertainties in the PCA response matrix, and we therefore cannot be confident of its significance nor of its parameters. Adding an Fe line (with energies ranging from 6.4 to 7.1 keV) to the models of the ASCA data also does not significantly improve the fits. Limits to the equivalent width of any line in this region were of $`𝒪(10\mathrm{eV})`$, which is comparable to the equivalent width of the Fe line present in the best fit Raymond-Smith models.
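For reference, the equivalent widths quoted here relate a line's integrated photon flux to the local continuum level. A minimal sketch for a power-law continuum, with purely hypothetical numbers of the right order for these fits:

```python
def equivalent_width_ev(line_flux, e0_kev, cont_norm, gamma):
    """EW (eV) of a line with total photon flux `line_flux` on a power-law
    continuum N(E) = cont_norm * E**(-gamma) [photons/cm^2/s/keV]."""
    continuum_at_line = cont_norm * e0_kev ** (-gamma)
    return 1000.0 * line_flux / continuum_at_line     # keV -> eV

# Hypothetical values: a 5e-5 ph/cm^2/s line at 6.6 keV on a Gamma = 2
# continuum of normalization 0.2 gives an EW of order 10 eV
print(f"{equivalent_width_ev(5.0e-5, 6.6, 0.2, 2.0):.0f} eV")
```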
We note that contrary to the EXOSAT and Ginga data, the ASCA data also do not show strong evidence of a hard tail. The upper limit to the flux from a power law tail was 12% of the 2–10 keV flux in the cutoff power law model of the ASCA GIS data. The upper limits for the MCD models and for the SIS models were 3–20 times lower. We therefore cannot rule out the possibility that the 7.06 keV line claimed by Ricci, Israel & Stella (1995) was associated with the presence of a hard tail. ## 3. Timing Analysis We employed Fourier techniques, in the same manner as for our RXTE observations of Cyg X-1 (Nowak et al. (1999a)) and GX 339$`-`$4 (Nowak et al. (1999b)), to study the short timescale variability of V1408 Aql. We use the same techniques for estimating deadtime corrections (Zhang et al. (1995); Zhang & Jahoda (1996)) to the Power Spectral Density (PSD), and for estimating uncertainties and the Poisson noise levels of the PSD (Leahy et al. (1983); van der Klis (1989)) as in our previous RXTE analyses. We use lightcurves with $`2^{-5}`$ s resolution, from the PCA top layer data only, over the energy range $`\sim 1`$–7.2 keV (absolute PCA channels 1–20), in the analysis discussed below. We also searched $`2^{-11}`$ s lightcurves over the same energy range for high-frequency features, but none were found above the Poisson noise limits. As the source intensity did not appear to vary over the course of the observation, we created a single PSD. The results are presented in Figure 4 for a normalization where integrating over positive frequency yields the mean square variability (see Belloni & Hasinger (1990); Miyamoto et al. (1992)). Note that above $`f=10^{-2}`$ Hz the power is completely consistent with Poisson noise. We estimate that the background contributes 13 cps to the lightcurves, compared to 210 cps for the signal. Based upon these count rates and from calculating the PSD of the background lightcurve generated using the RXTE software, we find that the PSD observed between $`f=10^{-3}`$–$`10^{-2}`$ Hz is consistent with background fluctuations. We find that the upper limit to the root mean square (rms) variability between $`10^{-3}`$–$`16`$ Hz is 4%. ## 4. Long-Term Variability We used data from the All Sky Monitor (ASM) on RXTE to study the long-term behavior of V1408 Aql. The ASM is an array of three shadow cameras combined with position sensitive proportional counters that provides quasi-continuous coverage of the sky visible from RXTE (Levine et al. (1996); Remillard & Levine (1997)). Lightcurves in three energy bands — 1.3–3.0 keV, 3.0–5.0 keV, and 5.0–12.2 keV — as well as over the whole ASM band are publicly available from the ASM data archives (Lochner & Remillard (1997)). Typically there are several 90 s measurements available for each day. In Figure 5 we present the ASM data of V1408 Aql that were available as of 1998 November 20. The date of our pointed RXTE observation is indicated by an arrow in this figure. Several features are immediately apparent in these data. The count rate light curve shows significant variability with fluctuations up to $`𝒪(50\%)`$ of the mean. These fluctuations occur on $`𝒪`$(100 day) timescales. The color lightcurve (we show the 1.3–3.0 keV lightcurve divided by the 5.0–12.2 keV lightcurve) shows significantly less variability, with peaks in the softness of the source occurring on $`𝒪`$(400 day) timescales. Furthermore, the peaks in the softness of the source seem to be correlated with dips in the intensity of V1408 Aql.
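The periodicity search described in the next paragraphs rests on the Lomb-Scargle periodogram. As a concrete illustration of that machinery — on a synthetic lightcurve built from the periods and sinusoid amplitudes quoted below, not the real ASM data — one might write:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.arange(0.0, 1000.0, 6.0)          # 6-day averages, times in days
rate = (1.0 + 0.32 * np.sin(2 * np.pi * t / 117.0)
            + 0.26 * np.sin(2 * np.pi * t / 235.0)
            + 0.41 * np.sin(2 * np.pi * t / 352.0)
            + rng.normal(0.0, 0.2, t.size))   # toy measurement noise

ls = LombScargle(t, rate)
freq, power = ls.autopower(minimum_frequency=1.0 / 500.0,
                           maximum_frequency=1.0 / 50.0)
best_period = 1.0 / freq[np.argmax(power)]
# Significance estimate, in the spirit of the Horne & Baliunas levels:
fap = ls.false_alarm_probability(power.max())
```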
The features in the light curve appear to be associated with possible long term periodicities. We determined the significance of these possible long term periodicities by computing the Lomb-Scargle Periodogram (Lomb (1976); Scargle (1982)) for the 1.3–12.2 keV band for 6-day averages of the ASM lightcurves. We averaged data where the best fit to the source position and flux in an ASM observation had a $`\chi _{\mathrm{red}}^2\le 1.5`$ (see Lochner & Remillard (1997)) in *each* of the three ASM energy channels. The periodogram presented in Figure 5 shows evidence of a 117 day, a 235 day, and a 352 day periodicity. Each of these periodicities is significant at greater than the 95% level, as determined by the methods of Horne & Baliunas (1986). We note that the Lomb-Scargle periodogram does not assume the presence of harmonics; this is a result of the analysis. Epoch folding (see Leahy et al. (1983); Schwarzenberg-Czerny (1989); Davies (1990)) of the ASM lightcurves also shows evidence of these periodicities, although each period has uncertainties of approximately $`\pm 10`$ days. The evidence for a periodicity in the color lightcurve is somewhat weaker. Only the longest period appears, with an approximately 370 day period, and then only at the 50% significance level in a Lomb-Scargle periodogram. Figure 5 shows the result of fitting three harmonically spaced sinusoids to the count rate and color lightcurves. In these fits, the periods were constrained to be within a few days of the periods found in the Lomb-Scargle periodogram of the count rate lightcurve; however, the phases of the sinusoids were left completely free. For the count rate lightcurve, the amplitudes of the sinusoids are 0.32 cps, 0.26 cps, and 0.41 cps for the fundamental, first harmonic and second harmonic. For the color lightcurve, the respective amplitudes are 0.1, 0.05, and 0.01. Furthermore, the phases of the sinusoids are displaced from those of the count rate lightcurves by $`𝒪(\pi )`$. In Figure 5 we also show the lightcurves folded on the 117 day period. Note that the folded color lightcurve indeed exhibits very little variation on this timescale. The count rate lightcurve shows significantly more periodic structure. The low flux points, however, display the most variations from phase bin to phase bin. Partly this could be due to interference from the 235 and 352 day periods. Additionally, if this periodicity is due to inclination effects in a warped disk, as we further discuss below, the low flux points come at times when the disk is at its greatest inclination to our line of sight. The lightcurve is most sensitive at these times to small changes in disk thickness and/or shape. Long timescale periodicities and quasi-periodicities are relatively common in ASM observations of binary sources (Remillard 1997, private communication). Evidence for a 294 d periodicity in Cyg X-1 has been previously reported (Kemp et al. (1983); Priedhorsky, Terrell & Holt (1983)), and is readily apparent in the ASM data during the hard state. A 198 day periodicity also has been observed in LMC X-3 (Cowley et al. (1991); Wilms et al. (1998b)), and a possible 240 day periodicity appears in ASM data of the low/hard state of GX 339$`-`$4 (Nowak et al. (1999b)). ## 5. Discussion — The nature of V1408 Aql To summarize, our spectral analysis has provided evidence for a very soft spectrum which can be satisfactorily modeled with any of the three traditional models used here, namely the exponentially cutoff power-law, the MCD model, and Comptonization.
We did not see any evidence for a hard power-law tail similar to that seen in previous observations. We have also found evidence for a spectral feature at $`\sim 1`$ keV, which we interpret as emission from the iron L complex or as K$`\alpha `$ lines from highly ionized metals. No short term variability in excess of the noise was detected from the source, and the upper limit to the rms variability between $`10^{-3}`$–$`16`$ Hz is 4%. On long timescales, we found evidence for periodic variability on a time-scale of about 117 days in the soft X-ray luminosity, and evidence for a periodic softening of the X-ray spectrum on a 350–400 day timescale. Below, we discuss interpretations of these results. ### Spectral Considerations Although the Comptonization model appears to provide the best fit to our broad-band RXTE data, we do not regard it as likely that Comptonization is indeed the physical process responsible for producing the X-ray spectrum. As we have shown in §2, part of the small $`\chi _{\mathrm{red}}^2`$ obtained for Comptonization is attributable to the comparatively high seed photon temperature, $`kT_0=0.34`$ keV, which mimics the soft excess seen in the RXTE data. Also, the best fit parameters hint at a very cold and optically thick Comptonizing plasma with an optical depth of almost 10. Commonly assumed models for Compton coronae, such as advection dominated accretion flows (Esin, McClintock & Narayan (1997)) or other ‘sphere plus disk’ coronal models (Dove et al. (1997)), have considered only hot, optically thin to moderately optically thick coronae. It is not clear whether a cool and very optically thick corona can be made energetically self-consistent, nor is it clear what physical processes would lead to such a configuration. We therefore conclude that Comptonization is an improbable physical mechanism for producing the observed soft spectrum. The accretion disk spectrum and the exponentially cutoff power-law both provided similar quality fits and yielded almost indistinguishable residues. The MCD model, however, seems the better phenomenological representation of the underlying physical mechanism for producing the observed spectrum. The best fit parameters for the exponentially cutoff power-law span a wide range (including a negative photon index in the RXTE spectrum), and in many ways appear to be “mimicking” the features of the MCD model. The MCD models, on the other hand, have best-fit spectral parameters that are all similar for each of the independent observations. More importantly, optical and infrared observations (Cowley, Hutchings & Crampton (1988); Shahbaz et al. (1996); Hakala, Muhli & Dubus (1999)) provide independent evidence for the presence of an extended accretion disk in V1408 Aql. As we discuss further below, additional independent evidence for the assumption that the X-rays are dominated by the accretion disk comes from the presence of the long term spectral variability. We note that the line features apparent in the ASCA and ROSAT data are also consistent with an accretion disk picture. Line features around 1 keV are a common occurrence in photoionized plasmas close to sources emitting hard X-rays (e.g., in eclipse in Vela X-1, Nagase et al. (1994)). We would also expect such features in models with warped accretion disks similar to those of Schandl (1996) (see discussion below).
Iron L features and K$`\alpha `$ lines from Mg and Ne are also predicted in models for reflection off ionized accretion disks (Ross & Fabian (1993)), and are in fact observed in several NS-LMXB such as Cygnus X-2 (Vrtilek et al. (1986); Kallman, Vrtilek & Kahn (1989)), albeit the complexity of the observed line shapes makes a direct comparison between the data and the models difficult. See Kallman et al. (1996) for a detailed discussion of these features. ### Long Term Variability The timescales of the periodicities observed with the ASM are comparable to the timescales expected from precessing accretion disk warps, whether they are driven by the radiation pressure instability discovered by Pringle (1996) (see also Maloney, Begelman & Pringle (1996); Maloney & Begelman (1997); Maloney, Begelman & Nowak (1998)), or by an X-ray heated wind as for models of Her X-1 (Schandl (1996)). As radiation pressure must typically strongly dominate gas pressure before a wind can be launched, the former mechanism may dominate (Maloney & Begelman (1997)), at least for warps large enough such that the outer disk is effectively illuminated by the X-ray flux from the inner disk. This radiation pressure driven instability is fairly generic, and is expected to cause a radiatively efficient (i.e., non-advection dominated) accretion disk to warp and precess on $`𝒪(100\mathrm{day})`$ timescales. The observed ratio between the precession period and the orbital period in V1408 Aql is too large to be explained by a tidally forced precession of the accretion disk (Larwood (1998)). In a warped disk scenario, the long term modulations could be due to a combination of the flux varying as the cosine of the inclination angle, as well as due to obscuration of the inner disk by the outer disk. For the former effect, we note that if the inclination of V1408 Aql is $`70^{\circ }`$–$`75^{\circ }`$ as suggested by Hakala, Muhli & Dubus (1999), then relatively modest inclination variations of $`\pm 10^{\circ }`$ can yield the observed X-ray luminosity variations. The softening of the spectrum observed on the 352 day timescale could be due to a warp periodically obscuring the central regions of the accreting system, which would explain why we do not detect the hard power-law tail seen in the previous observations. In analogy to other soft sources such as LMC X-3 (Wilms et al. (1998b)), we can assume that this tail is produced in a small and comparatively cold accretion disk corona close to the compact object which is then obscured by the precessing warp. A prediction of this scenario, therefore, is that a long term monitoring campaign with an instrument capable of detecting the hard power-law tail (e.g., RXTE or BeppoSAX) will detect a *periodic* change in the flux level of the power law, including a periodic disappearance of this tail. One alternative explanation is that the corona is covered by the rim of a (geometrically thick) accretion disk. Unlike the warped disk scenario, where the relative inclination of the disk to our line of sight does not change on orbital timescales (see Figure 6), in a disk rim scenario the rim is caused by interaction of the accretion stream with the outer edge of the disk. Our relative view through the rim therefore changes on orbital timescales (see Hakala, Muhli & Dubus (1999) and references therein). This seems to be less likely than the warp scenario, however, as contrary to the optical and infrared data there appears to be no evidence for a modulation of the X-ray spectrum on orbital timescales.
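As a quick arithmetic check of the inclination argument above — a back-of-the-envelope sketch, not a model fit — suppose the observed flux simply scales as $`\mathrm{cos}i`$ and the warp carries the inner disk through $`\pm 10^{\circ }`$ about a mean inclination in the quoted $`70^{\circ }`$–$`75^{\circ }`$ range:

```python
import numpy as np

i0, di = 72.5, 10.0                       # degrees; values assumed for illustration
f_hi = np.cos(np.radians(i0 - di))        # disk most face-on
f_lo = np.cos(np.radians(i0 + di))        # disk most edge-on
print(f_hi / f_lo)                        # ~3.5, ample for O(50%) fluctuations
```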
The warped disk picture could also explain the observed change in the *optical* lightcurve recently discovered by Hakala, Muhli & Dubus (1999) as a precession of a warp on long timescales. ### Short Term X-ray Variability Although we only have upper limits for the amplitude of the $`10^{-3}`$–$`16`$ Hz variability, these limits are consistent with the few observations of BHC and some NS-LMXB in nearly “pure” soft states (Miyamoto (1994)). For examples of BHC high/soft states, with little or no discernible hard tail wherein short term variability is presented, see Grebenev et al. (1991) for an observation of GX 339$`-`$4, Treves et al. (1988) for an observation of LMC X-3, Ebisawa, Mitsuda & Inoue (1989) for an observation of LMC X-1, and Miyamoto et al. (1994) for observations of Nova Muscae. The PSDs presented in these works typically have a level of $`\sim 10^{-3}\,(\mathrm{rms})^2/\mathrm{Hz}`$ at 0.01 Hz, which decreases as $`f^{-0.7}`$ for higher Fourier frequencies, and an rms variability of $`\sim 3\%`$ in the $`10^{-2}`$–$`30`$ Hz range. This is slightly below the upper limits presented in Figure 4. The ‘normal branch’ of the NS-LMXB GX5+1 also has a PSD similar in amplitude and shape to that described above for high/soft state BHC; however, its energy spectrum consists of both a 1 keV MCD component and a 2 keV blackbody spectral component (Miyamoto (1994), and references therein). The high/soft state of Cir X-1 has been similarly modeled (Miyamoto (1994), and references therein). If V1408 Aql had weak 1–10 Hz variability comparable to that discussed by Miyamoto (1994) for Cir X-1, our observations would have detected it. Other soft neutron star sources with luminosities of $`𝒪(10\%)`$ Eddington (the approximate luminosity of V1408 Aql, if it were a 1.4 $`\mathrm{M}_{\odot }`$ neutron star given our hypothesis of a highly inclined disk), especially the bright atoll sources such as GX13+1, GX3+1, GX9+1, and GX9+9, can also exhibit “very low frequency noise” with approximately 5% rms variability (Hasinger & van der Klis (1989)). The $`10^4`$–100 s low amplitude variability in the light curves of these sources has been interpreted as intermittent, slow nuclear burning on the surface of the neutron star (Bildsten (1993, 1995)). Low frequency variability at such a level is absent in V1408 Aql. Furthermore, bright atoll sources often show a 0.1–10 Hz power spectrum in excess of the upper limits discussed here (Hasinger & van der Klis (1989)). The level of the 0.1–10 Hz PSD seen in GX13+1 (Homan et al. (1998)), for example, also would have been easily detected in the PSD of V1408 Aql, yet was not. ### Black hole or neutron star? The nature of the compact object is as yet not clear. The general picture outlined above is similar to that seen in neutron star X-ray binaries such as Sco X-1, Cyg X-2, and others. Yaqoob, Ebisawa & Mitsuda (1993) pointed out that the normalization of the best-fit MCD model appears to indicate that the compact object is a neutron star. This argument, however, strongly relies on the assumed distance to V1408 Aql, for which no compelling measurement exists (a lower limit of 2.5 kpc comes from the fact that the ASCA and ROSAT measured $`N_\mathrm{H}`$ values are consistent with the full galactic column), and also relies on the assumption that the accretion disk is seen closer to *face on*. The recent optical and soft X-ray variability measurements, however, make a large inclination more probable.
Taking these points into account and assuming for the sake of argument a source distance of 7 kpc, the overall flux of V1408 Aql is comparable to that of the high state of GX 339$`-`$4, which is a very plausible BHC. The upper limits to the high frequency variability discussed above are consistent with previously observed BHC power spectra in high/soft states. If transitions from the hard state to the soft state occur at 5%–10% of the Eddington luminosity (see Nowak (1995)), then the compact object in V1408 Aql is consistent with being a 2–3 $`\mathrm{M}_{\odot }`$ black hole. Thus, although there is no compelling evidence that V1408 Aql contains a black hole, there also is no compelling evidence that V1408 Aql is a neutron star. ### Conclusions X-ray spectroscopy and the study of both the long term and the short term variability of V1408 Aql make a system geometry such as that depicted in Figure 6 seem likely. A low-mass main sequence star serves, via Roche lobe overflow, as the donor for a compact object that is surrounded by a large accretion disk, which in turn dominates the system at all wavelength ranges. The accretion disk is surrounded by an optically thin plasma, either in the form of an accretion disk wind or a stationary accretion disk photosphere, which emits the observed X-ray line radiation. A small hot corona directly surrounding the compact object produces the hard X-ray power-law. The whole accretion disk precesses on a time scale of about 117 d, obscuring the central region and causing the power-law tail to periodically disappear and reappear. Also on these long timescales, the changing view of the warp causes the orbital modulation of the optical light-curve (due to partial obscuration of the outer accretion disk) to vary from sinusoidal (Thorstensen (1987)) to a more complex pattern (Hakala, Muhli & Dubus (1999)). The nature of the compact object in V1408 Aql is still ambiguous. We have put forth a hypothesis, however, that might explain the observed phenomenology and makes predictions that are observationally testable. X-ray monitoring over the 117 d period with an instrument like RXTE or BeppoSAX should reveal whether the X-ray power-law tail really does periodically disappear and reappear as predicted by our model. Furthermore, if the source is at 10% $`L_{\mathrm{Edd}}`$ and contains a neutron star, then about one “Type I” microburst per day might be expected (Bildsten (1995)). This should be easily observable during such a campaign. One might also hope to find “kilohertz QPO” (van der Klis (1998)), as are often associated with atoll sources. For these latter two possibilities, however, we note that some of the brighter atoll sources such as GX13+1 have yet to exhibit kilohertz QPO (Homan et al. (1998), and references therein), and rarely exhibit Type I bursts (see, for example, Matsuba et al. (1984), and references therein). Finally, high spectral resolution observations, such as will be provided by the upcoming new generation of X-ray instruments — the gratings on the Advanced X-ray Astrophysics Facility (AXAF) and the X-ray Multi-Mirror Mission (XMM) — will provide the spectral resolution necessary for resolving and studying the Fe L complex. This will allow the application of plasma spectroscopic diagnostics (e.g., Liedahl et al. (1992)) to the study of this fascinating source. We thank Neil Brandt and Christopher Reynolds for valuable advice concerning the ASCA data analysis.
We would also like to acknowledge useful correspondence with Lars Bildsten and Rob Fender. Ingo Kreykenbohm made some literature references available to us. We thank Erik Kuulkers for pointing out additional references to us. Rudy Wijnands provided invaluable advice concerning the timing analysis. This work has been financed by NASA Grants NAG5-3225, NAG5-4737, and NAG5-7024. JW was also supported by a travel grant from the Deutscher Akademischer Austauschdienst. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center, and the ROSAT archive at the Max Planck Institut für Extraterrestrische Physik in Garching bei München. ## Appendix A Data Analysis Methodology ### A.1. RXTE Data Analysis Our RXTE data were analyzed using the same procedure as that for our analysis of the spectrum of GX 339$`-`$4 (Wilms et al. (1998a)). Screening criteria for the selection of good on-source data were that the source elevation was larger than $`10^{\circ }`$. Data measured within 30 minutes of passages of the South Atlantic Anomaly or during times of high particle background (as expressed by the “electron ratio” being greater than 0.1) were ignored. Using these selection criteria, a total exposure time of 27 ksec was obtained. To increase the signal to noise level of the data, we restricted the analysis to the first anode layer of the proportional counter units (PCUs), where most source photons are detected (the particle background is almost independent of the anode layer), and we combined the data from all five PCUs. To take into account the calibration uncertainty of the PCA we applied the channel dependent systematic uncertainties described by Wilms et al. (1998a). These uncertainties were determined from a power-law fit to an observation of the Crab nebula and pulsar taking into account all anode chains; however, they also provide a good estimate for the first anode layer only, since most of the photons are detected in this layer. Since V1408 Aql has a comparatively small count rate we are able to use the new background model for the PCA that was made available by the RXTE Guest Observers Facility (GOF) in 1998 June. The quality of this model was checked by looking at high detector channels which are completely background dominated. Although the measured count rate of V1408 Aql was at the high end of the applicability of the new background model, the agreement between the model and the measured background was good. This is in part due to the fact that V1408 Aql is a very soft source, which allows greater latitude in using the background model for faint sources. Remaining background residuals were minimized by using the XSPEC “corfile” facility, which renormalizes the background flux to decrease the best fit $`\chi _{\mathrm{red}}^2`$. The corrections applied to the background flux were on the order of 1.5%, indicating that at least for this source the background model provides a good background estimate. Since the spectrum is completely background dominated above 20 keV, and due to the calibration uncertainty below 3 keV, we restricted the spectral analysis to the range from 3 to 20 keV. For the timing analysis, we generated lightcurves from the ‘GoodXenon’ data.
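From such a lightcurve, the rms-normalized PSD used in §3 can be computed as sketched below — a minimal version on a synthetic Poisson lightcurve; the deadtime corrections, background PSDs, and segment averaging of the actual analysis are omitted here.

```python
import numpy as np

def rms_normalized_psd(counts, dt):
    """PSD normalized so that integrating over positive frequencies yields
    the fractional mean-square variability (the Belloni & Hasinger /
    Miyamoto normalization quoted in Sect. 3)."""
    n, mean = counts.size, counts.mean()
    freqs = np.fft.rfftfreq(n, d=dt)[1:]              # drop zero frequency
    power = np.abs(np.fft.rfft(counts))[1:]**2
    return freqs, 2.0 * dt / (n * mean**2) * power

dt = 2.0**-5                                          # bin size in seconds
rng = np.random.default_rng(0)
counts = rng.poisson(210.0 * dt, size=2**16)          # ~210 cps toy source
f, psd = rms_normalized_psd(counts, dt)

# Pure Poisson noise gives a flat level of 2/(count rate) in this norm;
# the band rms is the root of the integrated, noise-subtracted PSD.
noise_level = 2.0 * dt / counts.mean()
band = (f >= 1e-3) & (f <= 16.0)
rms = np.sqrt(max(np.trapz(psd[band] - noise_level, f[band]), 0.0))
```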
Note that although there are short data gaps of 1–4 s duration that are flagged by commensurate jumps in the value of the time coordinate from one data bin to the next, there are occasional data gaps where the extraction software generates a continuous series of time bins despite the data losses. These data gaps *do not* appear in lightcurves generated from the ‘standard2f’ data (which is processed by a different event analyzer on-board RXTE). These gaps can be recognized, however, in the high time resolution data by searching for any sequence, 1 s or greater in length, of time bins with zero count rate. Four such ‘unflagged’ sequences, with 16 s duration each, were found in our data. (Aside from these four 16 s sequences, there were a few instances where two $`1/32`$ s time bins in a row would have zero detected counts. The lack of counts in these bins was consistent with counting statistics, and we did not consider these to be data gaps.) The power spectra that we presented in Figure 4 were made from continuous data segments without internal data gaps. If we include data segments with the unflagged data gaps in the calculation of the PSD, we obtain a low amplitude (5% rms) PSD that is flat from $`10^{-3}`$–$`10^{-2}`$ Hz and is exponentially cutoff at higher Fourier frequencies. In fact, the presence of unflagged data gaps can be deduced from such a characteristic PSD shape (Wijnands 1999, priv. comm.). ### A.2. ASCA Data Analysis We extracted data from the two solid state detectors (SIS0, SIS1) and the two gas detectors (GIS2, GIS3) onboard ASCA by using the standard ftools as described in the ASCA Data Reduction Guide (Day et al. (1998)). The data extraction regions were limited by the fact that all the observations were in 1-CCD mode and that the source was placed close to the chip edge. To maximize the extraction regions, we chose rectangular regions of $`6^{\prime }\times 8^{\prime }`$ and $`6^{\prime }\times 7^{\prime }`$ for SIS0 and SIS1, respectively. Choosing a rectangular region does not affect the shape of the extracted spectrum; however, the ASCAARF ancillary response matrix generator assumes a circular region, so the flux normalization is slightly off (hence the $`30\%`$ normalization differences between the SIS and GIS detectors in Table 2). For the GIS detectors we chose circular regions centered on the source, each with a radius of $`13^{\prime }`$. The SIS count rate for V1408 Aql is large enough that the central regions of the CCD suffer from pileup (i.e., two or more events being registered as a single event). Estimates of the amount of this pileup can be found in the appendix presented by Ebisawa et al. (1996). Based upon our measured spectrum and these estimates, we chose to exclude from analysis central rectangular regions with dimensions of $`4^{\prime }\times 3^{\prime }`$ and $`3^{\prime }\times 3^{\prime }`$ for SIS0 and SIS1, respectively. With these exclusions, we estimate that pileup will contribute less than 1% of the counts at 10 keV. We used the SISCLEAN and GISCLEAN tools (Day et al. (1998)), with the default values, to remove hot and flickering pixels. As the spectrum of V1408 Aql is very similar to the low flux level of Cir X-1 described by Brandt et al. (1996), we filtered the data with the same cleaning criteria outlined in that work; however, we took the slightly larger values of $`10^{\circ }`$ for the minimum elevation angle and 7 $`\text{GeV}/c`$ for the rigidity. Also similar to the work of Brandt et al.
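The gap search described above amounts to scanning the binned lightcurve for runs of empty bins lasting at least 1 s. A possible implementation is sketched below (the function name and interface are ours). Note that at 210 cps a $`1/32`$ s bin holds about 6.6 counts on average, so a pair of empty bins is expected a few times in 27 ksec — matching the isolated instances noted above — whereas a full 1 s run of 32 empty bins is essentially impossible by chance.

```python
import numpy as np

def find_unflagged_gaps(counts, dt=1.0 / 32, min_gap=1.0):
    """Return (start, stop) bin-index pairs for runs of zero-count bins
    lasting at least min_gap seconds -- the signature of an unflagged gap."""
    zero = (counts == 0).astype(np.int8)
    # Pad with zeros so every run produces a rising and a falling edge
    edges = np.flatnonzero(np.diff(np.concatenate(([0], zero, [0]))))
    starts, stops = edges[::2], edges[1::2]           # half-open runs [start, stop)
    keep = (stops - starts) * dt >= min_gap
    return list(zip(starts[keep], stops[keep]))

# Example: synthetic 210 cps lightcurve with one injected 16 s dropout
rng = np.random.default_rng(2)
lc = rng.poisson(210.0 / 32.0, size=32 * 2048)
lc[32 * 1000:32 * 1016] = 0
print(find_unflagged_gaps(lc))     # -> [(32000, 32512)]
```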
(1996), we formed background estimates by extracting a circular region of radius $`5^{\prime }`$ near the edge of the detector for the GIS observations. For the SIS observations, we chose L shaped regions near the corner of the chip opposite from the source. Background, however, contributes relatively little to the observations. We rebinned the spectral files so that each energy bin contained a minimum of 20 photons. We retained SIS data in the 0.6 to 10 keV range and GIS data in the 1 to 10 keV range. The cross-calibration uncertainties among the instruments were accounted for by introducing a multiplicative constant for each detector in all of our fits. As discussed above, the resulting data files showed reasonable agreement between all four detectors. ### A.3. ROSAT Data Analysis The extraction of the ROSAT spectrum was performed using the standard ROSAT PSPC data analysis package, the Extended X-ray Scientific Analysis System (EXSAS) (Zimmermann et al. (1998)), following the procedures described by Brunner et al. (1997). Source counts were extracted from a circular region centered on the position of V1408 Aql with a radius of $`2^{\prime }`$, while the background was extracted from an annulus centered on the source from which the counts of detected background sources were removed. A correction for the telescope vignetting was applied to the standard ROSAT response matrix. The spectrum was then rebinned into 26 channels of $`10000`$ counts each to ensure an even signal to noise ratio over the whole ROSAT energy band. As for RXTE and ASCA, the spectral analysis of the extracted data was then performed with XSPEC, ignoring data measured below 0.5 keV and above 2.5 keV.
no-problem/9903/hep-ph9903395.html
ar5iv
text
# Electroweak Symmetry Breaking due to Confinement ## Abstract Within the framework of gauge mediated supersymmetry breaking, we consider an electroweak symmetry breaking pattern in which there is no conventional $`\mu `$ term. The pattern is made appealing by realizing it as the low energy effective description of a confining supersymmetric Yang-Mills theory. Phenomenological implications are discussed. preprint: hep-ph/9903395 Supersymmetry provides a solution to the gauge hierarchy problem if it breaks dynamically. It has been realized that the breaking should occur in a hidden sector, which is then communicated to the observable sector. In this paper, we consider the scenario in which the communication is via gauge interactions. Generally, the gauge mediated supersymmetry breaking (GMSB) models have a so-called $`\mu `$ problem, namely either the $`\mu `$ term is at the weak scale and the $`B\mu `$ term is unnaturally large, or $`B\mu `$ is at the weak scale and $`\mu `$ is very small. This means there are difficulties in obtaining correct electroweak symmetry breaking (EWSB). Although several ways were suggested for solving this problem, it would be desirable to find a simpler solution. Instead of generating the $`\mu `$ term, we suggest studying EWSB via the following superpotential, $$W_{EWSB}=\lambda X(H_uH_d-\mu ^2),$$ (1) where $`H_u`$ and $`H_d`$ are the two Higgs doublets, $`X`$ a standard model singlet; $`\mu `$ is the EWSB scale, and $`\lambda `$ the coupling constant. The physical implications of the above superpotential will be discussed later. In fact, it was used in the early days of supersymmetry phenomenology. Spontaneous EWSB is obtained as $$v_u=v_d=\mu ,$$ (2) where $`v_u`$ and $`v_d`$ denote vacuum expectation values (vevs) of the doublet Higgs fields, and the other fields have vanishing vevs. Note that $`W_{EWSB}`$ does not break supersymmetry. After taking relevant soft masses into consideration, it can be seen that there are no light Higgs or light Higgsino particles. The superpotential $`W_{EWSB}`$ of Eq. (1) may have a more fundamental origin: it can be the effective theory of a more fundamental theory. The results of supersymmetric Yang-Mills theory can be used to realize this idea. To be specific, we exploit a model of Intriligator, Seiberg and Shenker. Introduce a supersymmetric SU(2) gauge interaction with a single matter superfield $`Q`$ in the $`I=3/2`$ representation. This theory is believed to be confining. The basic gauge singlet field is $`u=Q^4`$ with a totally symmetric contraction of the gauge indices. The quantum theory has a moduli space of degenerate vacua labeled by the vev of $`u`$. The nontrivial check of the ’t Hooft anomaly matching conditions implies that the Kähler potential at low energy is $$K\sim \frac{u^{\dagger }u}{|\mathrm{\Lambda }|^6}\qquad \mathrm{for}\quad u^{\dagger }u<\mathrm{\Lambda }^8,$$ (3) with $`\mathrm{\Lambda }`$ being the dynamical scale of the SU(2) interaction. Perturbing the theory by a tree level superpotential $`\frac{k}{m}u`$, with $`m`$ being some new physics scale and $`k`$ the dimensionless coupling coefficient, would break supersymmetry. To achieve EWSB rather than supersymmetry breaking, we assume that the new physics further couples $`Q`$ with the standard model Higgs fields, which are singlets under this SU(2). The low energy effective superpotential is written as $$W_{eff}=\lambda _1mH_uH_d+\frac{k}{m}u-\frac{c}{m^3}uH_uH_d,$$ (4) where $`\lambda _1`$ and $`c`$ are dimensionless coupling constants.
By the field redefinition $`u\to u+m^4\frac{\lambda _1}{c}`$, $`W_{eff}`$ becomes $$W_{eff}=\frac{k}{m}u-\frac{c}{m^3}uH_uH_d,$$ (5) plus an unphysical constant, where we still denote the redefined field by $`u`$, without risk of confusion. Note that the Kähler potential does not change under this redefinition. To order $`1/m^3`$, the general effective superpotential includes terms $`\lambda _2(H_uH_d)^2/m+\lambda _3(H_uH_d)^3/m^3`$ with $`\lambda _2`$ and $`\lambda _3`$ being dimensionless constants. The presence of these terms does not modify the above-discussed EWSB qualitatively. The point is that whenever there appears a term proportional to $`H_uH_d`$, it can be removed through the above procedure of field redefinition. In addition, $`\lambda _2`$ and $`\lambda _3`$ can be small. The smallness is natural in the sense of ’t Hooft due to the non-renormalization theorem in supersymmetry. Apart from the fact that $`u`$ is a composite field with dimension $`4`$, the physics of Eqs. (3) and (5) is the same as that of Eq. (1) with an elementary $`X`$. Quantitatively, the rescaled field $`u/\mathrm{\Lambda }^3`$ corresponds to $`X`$, and then $$\lambda =c\frac{\mathrm{\Lambda }^3}{m^3},\qquad \mu ^2=\frac{k}{c}m^2.$$ (6) It can be seen from Eq. (1) that keeping the Higgsino mass at the weak scale requires $`\lambda \sim O(1)`$. So numerically $`c`$ is $`\sim m^3/\mathrm{\Lambda }^3>1`$. This is consistent with $`\mu ^2<m^2`$ if $`k`$ is $`O(1)`$, since $`\mu ^2\sim k\frac{\mathrm{\Lambda }}{m}\mathrm{\Lambda }^2`$. By taking $`\mathrm{\Lambda }`$ to be ($`100`$–$`1000`$) GeV, $`m`$ should be about ($`10^2`$–$`10^5`$) GeV. Therefore, viable EWSB can indeed occur dynamically due to confinement of a supersymmetric gauge theory with a certain effective tree level superpotential. It is necessary to discuss theoretical implications of the above described EWSB. First, the breaking scale $`\mu `$ is not generated by the supersymmetry breaking, which has not been dealt with yet. It is related to the SU(2) dynamical scale $`\mathrm{\Lambda }`$ and the new physics scale $`m`$. However, the EWSB is still tied to supersymmetry itself. Supersymmetry is necessary to keep the gauge hierarchy whenever there are elementary scalar particles. Second, it is not radiative breaking. Once new scales are introduced for generating the scale $`\mu `$, radiative breaking is no longer a requirement of simplicity. It is natural to relate the scales $`\mathrm{\Lambda }`$ and $`m`$ to the EWSB directly. Third, we wonder if there is a relation between the scale $`m`$ and the supersymmetry breaking scale. For instance, the scale $`m`$ can be at $`10^4`$ GeV, which might also be the supersymmetry breaking scale. It would be interesting if the EWSB were ultimately connected to the supersymmetry breaking. Fourth, in principle this EWSB mechanism may also apply to the case of supergravity. Of course, it has less relation to the supersymmetry breaking in that case, and is therefore less interesting. Supersymmetry breaks dynamically in another sector. There are several ways to get the breaking. For simplicity, we can still adopt the model of Ref. . Introduce another SU(2) with single matter $`Q^{\prime }`$ in the $`I=3/2`$ representation. The singlet composite field is $`u^{\prime }=Q^{\prime 4}`$. The tree level superpotential $`\frac{u^{\prime }}{m^{\prime }}`$, with $`m^{\prime }`$ being some scale, breaks supersymmetry dynamically.
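The scale estimates derived just above can be checked with a few lines of arithmetic — a numerical sanity check with purely illustrative inputs ($`\mu `$ set to the weak scale, $`k=1`$):

```python
# Check mu^2 ~ k (Lambda/m) Lambda^2 at both ends of the quoted Lambda range.
mu, k = 100.0, 1.0                 # GeV; illustrative values
for Lam in (100.0, 1000.0):        # SU(2) dynamical scale in GeV
    m = k * Lam**3 / mu**2         # solve mu^2 = k Lambda^3 / m for m
    c = (m / Lam)**3               # c ~ m^3/Lambda^3 needed for lambda ~ O(1)
    print(Lam, m, c)
# Lam = 100 GeV -> m ~ 10^2 GeV; Lam = 1000 GeV -> m ~ 10^5 GeV,
# reproducing the (10^2 - 10^5) GeV window quoted above.
```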
To mediate the supersymmetry breaking to the standard model sector, we introduce the so-called messenger fields, which are singlets under this new SU(2) but in the vector representation under the standard model gauge group. Couple $`u^{\prime }`$ to the messengers in the same way as the $`u`$ field couples to the Higgs fields in Eq. (4). The difference here is that the messenger mass terms in the superpotential cannot be removed by field redefinition, so as to avoid the messengers developing vevs. And these mass terms break R-symmetry explicitly. Supersymmetry breaking is mediated to the standard model sector through loops. In fact, the effective theory obtained from the above is just the O’Raifeartaigh model used by Dine and Fischler in Ref. which gives details of the messenger content. The phenomenological implications of the EWSB described in this paper should be stressed. The Higgs vevs are determined by the superpotential Eq. (1), the supersymmetric standard model gauge interactions and the soft masses, $$V=|\lambda (v_uv_d-\mu ^2)|^2+\frac{1}{8}(g^2+g^{\prime 2})(v_u^2-v_d^2)^2+M^2v_u^2+M^2v_d^2,$$ (7) where $`g`$ and $`g^{\prime }`$ are the standard model SU(2)$`\times `$U(1) gauge coupling constants, and $`M`$ the soft mass of the Higgs particles. The minimum of $`V`$ results in Eq.(2). Hence $$\mathrm{tan}\beta \equiv \frac{v_u}{v_d}=1.$$ (8) Note that the usual phenomenological constraints on $`\mathrm{tan}\beta `$ in the minimal supersymmetric standard model (MSSM) do not apply here, because the EWSB is not radiative breaking. Compared with the particle spectra of the MSSM, there is one more neutral Higgs and one more neutralino because of the introduction of the $`X`$ field. Due to the tree level electroweak breaking and the additional coupling $`\lambda `$, the spectra of the scalar bosons and the neutralinos are less constrained. Nevertheless they are all around the weak scale. Let us look at the neutralino masses which are given as $$(\stackrel{~}{\varphi }_d^0\stackrel{~}{\varphi }_u^0\stackrel{~}{W}^3\stackrel{~}{B}\stackrel{~}{X})\left(\begin{array}{ccccc}0& 0& gv_d/\sqrt{2}& g^{\prime }v_d/\sqrt{2}& \lambda v_u\\ 0& 0& gv_u/\sqrt{2}& g^{\prime }v_u/\sqrt{2}& \lambda v_d\\ gv_d/\sqrt{2}& gv_u/\sqrt{2}& M_{\stackrel{~}{W}}& 0& 0\\ g^{\prime }v_d/\sqrt{2}& g^{\prime }v_u/\sqrt{2}& 0& M_{\stackrel{~}{B}}& 0\\ \lambda v_u& \lambda v_d& 0& 0& 0\end{array}\right)\left(\begin{array}{c}\stackrel{~}{\varphi }_d^0\\ \stackrel{~}{\varphi }_u^0\\ \stackrel{~}{W}^3\\ \stackrel{~}{B}\\ \stackrel{~}{X}\end{array}\right),$$ (9) where $`\stackrel{~}{\varphi }_d`$, $`\stackrel{~}{\varphi }_u`$ and $`\stackrel{~}{X}`$ stand for the fermion components of $`H_d`$, $`H_u`$ and $`X`$. $`\stackrel{~}{W}`$ and $`\stackrel{~}{B}`$ are the Wino and Bino, with soft masses $`M_{\stackrel{~}{W}}`$ and $`M_{\stackrel{~}{B}}`$ respectively. The determinant of the matrix is about $`M_Z^4M_{\stackrel{~}{W}}`$. We see explicitly that there is no light Higgsino. And all the neutralinos are around the weak scale. The chargino mass matrix is more predictive, $$(\stackrel{~}{\varphi }_u^+\stackrel{~}{W}^+)\left(\begin{array}{cc}0& M_W\\ M_W& M_{\stackrel{~}{W}}\end{array}\right)\left(\begin{array}{c}\stackrel{~}{\varphi }_d^{-}\\ \stackrel{~}{W}^{-}\end{array}\right).$$ (10) Because of the absence of a conventional $`\mu `$ term, the product of the two chargino masses satisfies $`M_{\stackrel{~}{\chi }_1^\pm }M_{\stackrel{~}{\chi }_2^\pm }=M_W^2`$. $`M_{\stackrel{~}{W}}\ne 0`$ then implies that one of the charginos must be lighter than the W boson. Such a chargino is within the experimental reach.
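The chargino statement is easy to verify directly: diagonalizing Eq. (10) for any nonzero $`M_{\stackrel{~}{W}}`$ pushes one singular value below $`M_W`$ while their product stays fixed at $`M_W^2`$. A small numerical check (the Wino soft mass value below is a hypothetical input):

```python
import numpy as np

M_W = 80.4                                  # GeV
M_wino = 150.0                              # GeV, illustrative soft mass
X = np.array([[0.0, M_W],
              [M_W, M_wino]])
masses = np.linalg.svd(X, compute_uv=False)  # physical masses = singular values
print(masses)                  # ~[185.0, 35.0]: one chargino below M_W
print(masses.prod(), M_W**2)   # product equals M_W^2, as claimed
```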
However, if the lightest neutralino mass lies within a few GeV of this chargino mass, the chargino is hard to detect. In summary, an old EWSB pattern has been re-suggested to avoid the $`\mu `$ problem in the GMSB scenario. Our main point is that it can be the effective description of a more fundamental, confining supersymmetric gauge theory. Phenomenologically, an additional neutral Higgs boson and one more neutralino are predicted, with masses around the weak scale. One of the charginos is lighter than the W gauge boson. Finally, several remarks should be made. (i) This model was originally motivated by the works of Ref. which aim at the flavor problem. Slight lepton number violation can be introduced into the model. The scalar neutrinos then develop small vevs, and in this case $`\mathrm{tan}\beta `$ deviates slightly from unity. (ii) The relation between EWSB and super-Yang-Mills theory is not unique. In a recent model of Ref. , a conventional $`\mu `$ term is generated dynamically. (iii) This EWSB mechanism is similar in spirit to technicolor. The electroweak symmetry breaking is triggered by a strong SU(2) interaction. There is a distinction, however: the relevant property of the strong gauge interaction used in this mechanism is not spontaneous chiral symmetry breaking, but confinement. ###### Acknowledgements. I would like to thank E.J. Chun and S.Y. Choi for helpful discussions.
no-problem/9903/math9903172.html
ar5iv
text
# Theorem 1 ## 1 Background A ray, $`\gamma :[0,\infty )\to M^n`$, is a geodesic which is minimal on any subsegment, $`d(\gamma (t),\gamma (s))=|t-s|`$. Every complete noncompact Riemannian manifold contains a ray. Given a ray, one can define its associated Busemann function, $`b:M^n\to \mathbb{R}`$, as follows: $$b(x)=\underset{R\to \infty }{lim}\left(R-d(x,\gamma (R))\right).$$ (6) The Busemann function is a Lipschitz function whose gradient has unit length almost everywhere \[Bu\]. In Euclidean space, the level sets of the Busemann function associated with a given ray are the planes perpendicular to the given ray. In contrast, the Busemann function defined on a manifold with nonnegative Ricci curvature and linear volume growth has compact level sets with bounded diameter growth \[So1, Thm 15\]. In that paper, the author also proved that if such a manifold is not an isometrically split manifold, then the Busemann function is bounded below and $`b^{-1}((-\infty ,r])`$ is a compact set for all $`r`$ \[So1, Cor 19\]. ###### Lemma 2 Let $`M^n`$ be a complete manifold with nonnegative Ricci curvature. Suppose that there is a Busemann function, $`b`$, which is bounded below by $`b_{min}`$ and that the diameter of the level sets grows at most linearly, $$diam(b^{-1}(b_{min}+r))\le C_D(r+1).$$ (7) Then there exists a universal constant, $`C_n`$, depending only on the dimension, $`n`$, such that any harmonic function, $`f`$, satisfies the following gradient estimate: $$\underset{b^{-1}([b_{min},b_{min}+r))}{sup}|\nabla f|\le \frac{C_n}{2(r+D)}\underset{b^{-1}(b_{min}+2(r+D))}{sup}|f|$$ (8) for all $`D\ge C_D(r+1)`$. Proof of Lemma: First note that the boundary of the compact set, $`b^{-1}([b_{min},r))`$, is just $`b^{-1}(r)`$. So, by the maximum principle, we know that for any harmonic function, $`f`$, $$\underset{b^{-1}([b_{min},r))}{\mathrm{max}}f\le \underset{b^{-1}(r)}{\mathrm{max}}f\text{ and }\underset{b^{-1}([b_{min},r))}{\mathrm{min}}f\ge \underset{b^{-1}(r)}{\mathrm{min}}f.$$ (9) Furthermore, Cheng and Yau have proven the following gradient estimate for harmonic functions on balls in manifolds with nonnegative Ricci curvature, $$\underset{B_p(a/2)}{sup}|\nabla f|\le \frac{C_n}{a}\underset{B_p(a)}{sup}|f|$$ (10) where $`C_n`$ is a universal constant depending only on the dimension, $`n`$ \[ChgYau, Thm 6\], see also \[SchYau, p21, Cor 2.2\]. This will be the constant in (8). Thus, we need only relate balls to regions defined by the Busemann function to prove the theorem. Let $`x_0`$ be a point in $`b^{-1}(b_{min})`$. Note that $$B_{x_0}(a)\subset b^{-1}([b_{min},b_{min}+a))$$ (11) because the triangle inequality implies that $$b(x)=\underset{R\to \infty }{lim}R-d(x,\gamma (R))\le \underset{R\to \infty }{lim}R-d(x_0,\gamma (R))+d(x_0,x)=b(x_0)+d(x_0,x).$$ On the other hand, using our diameter bound in (7), we claim that $$b^{-1}([b_{min},b_{min}+r))\subset B_{x_0}(r+D)\qquad \forall D\ge C_D(r+1).$$ (12) To see this we will construct a ray, $`\sigma `$, emanating from $`x_0`$ such that for all $`t\ge 0`$, $`\sigma (t)\in b^{-1}(b_{min}+t)`$. Then, for any $`y\in b^{-1}([b_{min},b_{min}+r))`$, we let $`t=b(y)-b_{min}`$ and we have $$d(x_0,y)\le d(x_0,\sigma (t))+d(\sigma (t),y)\le t+diam(b^{-1}(b_{min}+t))\le r+D,$$ which implies (12). The ray, $`\sigma `$, is constructed by taking a limit of minimal geodesics, $`\sigma _i`$, from $`x_0`$ to $`\gamma (R_i)`$. A subsequence of such a sequence of minimal geodesics always converges.
The limit ray satisfies the required property, $$b(\sigma (t))=\underset{i\to \infty }{lim}b(\sigma _i(t))=\underset{i\to \infty }{lim}\underset{R\to \infty }{lim}R-d(\sigma _i(t),\gamma (R))=\underset{R\to \infty }{lim}R-(d(\sigma _i(0),\gamma (R))-t)=b(x_0)+t.$$ We can now combine the relationships between Busemann regions and balls, (12) and (11), with the gradient estimate, (10), and the maximum principle, (9), to prove the lemma. That is, for all $`D\ge C_D(r+1)`$, we have $$\underset{b^{-1}([b_{min},b_{min}+r))}{sup}|\nabla f|\le \underset{B_{x_0}(r+D)}{sup}|\nabla f|\text{ by (12),}$$ $$\le \frac{C_n}{2(r+D)}\underset{B_{x_0}(2(r+D))}{sup}|f|$$ $$\le \frac{C_n}{2(r+D)}\underset{b^{-1}([b_{min},b_{min}+2(r+D)))}{sup}|f|$$ $$\le \frac{C_n}{2(r+D)}\underset{b^{-1}(b_{min}+2(r+D))}{sup}|f|.$$ We employ this lemma and elements of the proof to prove our theorem. ## 2 Proof of the Theorem The given manifold, $`M^n`$, has nonnegative Ricci curvature and linear volume growth. We will assume that $`M^n`$ doesn’t split isometrically and demonstrate that the harmonic functions of polynomial growth must be constant. Since the manifold doesn’t split isometrically and has linear volume growth, any Busemann function, $`b`$, has a minimum value by \[So1, Cor 23\]. Furthermore, by \[So2, Thm ?\], the diameters of the level sets of the Busemann function grow sublinearly. Thus we satisfy the hypothesis of Lemma 2 with $`C_D=1`$ in (7). Let $`M(r)=\mathrm{max}_{b^{-1}(b_{min}+r)}|f|`$, where $`f`$ is a harmonic function of polynomial growth. Note that $`M`$ is a nondecreasing function by the maximum principle, (9). By the lemma, we know that for all $`r\ge b_{min}`$ and for all $`D\ge C_D(r+1)`$, we can bound the gradient of $`f`$ in terms of $`M`$, $$\underset{b^{-1}([b_{min},b_{min}+r))}{sup}|\nabla f|\le \frac{C_nM(2(r+D))}{2(r+D)}.$$ (13) Since $`b^{-1}(r)`$ is compact, there exist $`x_r,y_r\in b^{-1}(b_{min}+r)`$ such that $$f(x_r)=\underset{b^{-1}(b_{min}+r)}{\mathrm{min}}f\text{ and }f(y_r)=\underset{b^{-1}(b_{min}+r)}{\mathrm{max}}f.$$ (14) We claim that, for $`r`$ sufficiently large, $`M(r)\le f(y_r)-f(x_r)`$. First recall that if $`f`$ is a positive or negative harmonic function on a manifold with nonnegative Ricci curvature, then $`f`$ must be constant \[Yau1, Cor 1\]. So there exists a point $`z\in M^n`$ such that $`f(z)=0`$. Thus, by the maximum principle, if $`r\ge b(z)`$ we know that $`f(y_r)\ge 0`$ and $`f(x_r)\le 0`$. So $`M(r)=\mathrm{max}(f(y_r),-f(x_r))\le f(y_r)-f(x_r)`$. We can now estimate $`M(r)`$ from above in terms of the gradient of $`f`$ and the diameter of the level set, $`b^{-1}(r)`$. First we join $`x_r`$ to $`y_r`$ by a smooth minimal geodesic, $`\gamma _r`$. Note that the length of $`\gamma _r`$ is less than or equal to $`diam(b^{-1}(r))`$ by the definition of diameter. So $`\gamma _r\subset b^{-1}([b_{min},r+diam(b^{-1}(r))))`$.
Thus for all $`r\ge b(z)`$, for all $`D\ge (r+1)`$, we have $$M(r)\le f(y_r)-f(x_r)=\int _0^{L(\gamma _r)}\frac{d}{dt}f(\gamma (t))dt\le \int _0^{L(\gamma _r)}|\nabla f||\gamma ^{\prime }(t)|dt$$ $$\le \underset{b^{-1}([b_{min},r+diam(b^{-1}(r))))}{sup}|\nabla f|\int _0^{L(\gamma _r)}|\gamma ^{\prime }(t)|dt\le \frac{C_nM(2(r+diam(b^{-1}(r))+D))}{2(r+diam(b^{-1}(r))+D)}diam(b^{-1}(r))$$ $$\le C_nM(2(r+diam(b^{-1}(r))+D))\frac{diam(b^{-1}(r))}{2r}\le C_nM(2(r+(r+1)+D))\frac{diam(b^{-1}(r))}{2r}.$$ Setting $`D=r+1`$ and taking $`r\ge 1`$, we have $$M(r)\le C_nM(6r)\frac{diam(b^{-1}(r))}{2r}.$$ (15) Recall that our manifold has sublinear diameter growth by \[So2, Thm ?\]. So, given any $`\delta >0`$, we can find $`R_\delta \ge 1`$ such that $$\frac{diam(b^{-1}(r))}{2r}<\delta \qquad \forall r\ge R_\delta .$$ (16) Then, for all $`k\in 𝐍`$ and for all $`R\ge R_\delta `$, we have $$M(R)\le C_nM(6R)\delta \le \cdots \le C_n^kM(6^kR)\delta ^k.$$ (17) Now $`f`$ has polynomial growth of order $`q`$, (5), so $$M(r)=\underset{x\in b^{-1}(b_{min}+r)}{\mathrm{max}}|f(x)|\le \underset{x\in b^{-1}(b_{min}+r)}{\mathrm{max}}C(d(x,x_0)^q+1).$$ (18) Applying (12) with $`C_D=1`$ and $`D=C_D(r+1)`$, we get $$M(r)\le C((r+(r+1))^q+1)\le C(6r^2)^q\qquad \forall r\ge 1.$$ (19) Substituting this information into (17), we get
no-problem/9903/gr-qc9903067.html
ar5iv
text
# Bounds on 2⁢𝑚/𝑅 for static spherical objects ## I INTRODUCTION Consider any static solution of the Einstein equations with the matter satisfying the null energy condition. The Penrose singularity theorem shows that this system cannot have a trapped surface. In a spherically symmetric configuration, the first apparent horizon in the initial data occurs when the ratio of (twice) the mass to its radial extent, $`2m/R`$, is one. However, it is also well known that for a spherical star composed of ordinary matter with positive energy described as a perfect fluid with a monotonically decreasing energy profile, $`2m/R`$ cannot exceed 8/9, the constant density value. Such a bound is particularly interesting because it occurs strictly before the appearance of an apparent horizon. Here we wish to examine this bound more closely when the assumptions underlying it are relaxed. Several very strong assumptions on the distribution of matter enter its derivation. Even in an astrophysical stellar object where matter is described phenomenologically, it is not clear that the perfect fluid assumption is justified. A humble soap bubble consisting of a membrane with a given (tangential) surface tension supported by the pressure of the enclosed perfect gas violates both the monotonicity of the energy density and the perfect fluid assumption. The approximation clearly also does not represent accurately the interior of topological defects such as monopoles . The balance of forces providing the equilibrium typically turns out to be analogous to that holding a soap bubble together. In addition, in many extensions of Einstein gravity, the effective stress tensor describing a perfect fluid does not assume the perfect fluid form. Field configurations typically will not be compact, in which situation we require a generalization of the mass that holds thoughout the bulk. This requires the replacement of $`m`$ by a quasi-local mass. While the $`8/9`$ bound is not a universal one, it is robust in the sense that under physical conditions which are at least reasonable classically, the mass continues to be bounded by a value strictly below the apparent horizon value, $`m=R/2`$. It appears to be impossible, even in principle, to construct a static distribution which saturates it. In Sections II and III, we establish our notation. We show in Section III that if the matter satisfies $`\rho +S_{}0`$, where $`\rho `$ is the energy density and $`S_{}`$ is the radial stress, the object cannot have an apparent horizon and thus $`2m/R`$ is strictly bounded away from 1. This condition, $`\rho +S_{}0`$ is one of the so-called ‘null energy’ or ‘null convergence’ conditions . It is interesting because we need no restriction on the tangential stress nor do we need to assume $`\rho 0`$. In a spherically symmetric geometry, the tangential stress will generally differ from the radial one except at the center where the constraints dictate that they coincide. Consider the ratio, $`\gamma `$, of tangential to radial stress. In a perfect fluid $`\gamma =1`$. We further show, again in Section III, that static matter satisfying $`\rho 0`$ together with $`\gamma 1`$ must have positive radial pressure which monotonically decreases outwards. This guarantees that $`\rho +S_{}0`$ which provides another way of excluding apparent horizons. The rest of the article is devoted to investigating how close $`2m/R`$ can get to one, and generalizing the $`2m/R<8/9`$ bound noted above. We summarize very briefly the simple constant density star in Section IV. 
In Section V we consider ‘stars’ that are monotonic with positive density. If $`\gamma 1`$, it is simple to show that the 8/9 inequality continues to hold, not only on the boundary but through the entire bulk. Indeed, the configuration need not be compact. If $`\gamma 1`$ anywhere, however, we obtain a slightly weaker result. We construct a bound which shows that $`2m/R`$ is strictly bounded away from unity. If $`\gamma _{max}`$ approaches 1 the bound smoothly approaches $`8/9`$; as $`\gamma `$ becomes unboundedly large the bound approaches 1. In particular, for a monotonic star with positive radial pressure and for which the transverse pressure is less than the density we can show that $`2m/R<0.974`$. Finally, in Section VI, we relax the assumption of monotonicity and find essentially the same results, except that now the bound depends both on the variation of the matter as well as on $`\gamma _{max}`$. ## II Static Limit of Einstein Equations The spacetime metric describing a static solution of the Einstein equations can always be written in the form $$ds^2=N^2dt^2+g_{ab}dx^adx^b,$$ (1) where $`N`$ is the lapse function and the shift vanishes. $`N`$ is also the norm of the global timelike killing vector, $`_t`$, and so must satisfy $`N>0`$. The spatial geometry at constant $`t`$ is described by the metric tensor $`g_{ab}`$. Both the material current vector $`J^a`$ and the extrinsic curvature tensor $`K_{ab}`$ (describing the embedding of a hypersurface of fixed $`t`$ in spacetime) vanish. In the canonical formulation of the theory, the momentum constraints of the theory are then vacuous. The hamiltonian constraint reduces to the form (see also the appendix to ), $$=16\pi \rho ,$$ (2) where $``$ is the scalar curvature constructed with the spatial metric $`g_{ab}`$ and $`\rho `$ is the material energy density. Given some specification of $`\rho `$, Eq.(2) is a constraint on the spatial geometry, $`g_{ab}`$. It does not involve the stresses operating on $`\rho `$. In the spherically symmetric case we will see that the intrinsic geometry is completely specified by $`\rho `$. The advantage of working within the canonical formulation is that this constraint is isolated explicitly. Given that the time direction is Killing the evolution in this direction must be trivial. The dynamical Einstein equation reduces to $`\dot{K}_{ab}=0`$, and now reads $$_a_bN+_{ab}N=8\pi N\left(S_{ab}\frac{1}{2}g_{ab}\mathrm{tr}S+\frac{1}{2}g_{ab}\rho \right),$$ (3) where $`_a`$ is the covariant derivative compatible with $`g_{ab}`$, $`_{ab}`$ is the associated Ricci tensor, $`S_{ab}`$ is the material pressure tensor and $`S`$ is its trace. In a perfect fluid the stress is isotropic with $`S_{ab}=Pg_{ab}`$. The other evolution equation, $`\dot{g}_{ab}=0`$, is trivially satisfied. For given $`\rho `$ and $`g_{ab}`$, Eqs.(3) consist of six PDEs for the seven functions, $`N`$ and $`S_{ab}`$. This counting is not very precise, because if we had a realistic fluid/field theoretical model we would have to supplement these equations with an ‘equation of state’ which would convert the equations from an underdetermined to an overdetermined system. We will suppose that the spatial topology is $`R^3`$. For an object with energy density of compact support (a star) or falling off sufficiently rapidly at infinity the spacetime will be asymptotically flat with $`N1`$ at infinity. The appropriate boundary condition on $`S_{ab}`$ in an object of compact support is that its normal component vanishes on the surface. 
For much of our discussion these boundary conditions are irrelevant. Taking the trace of the equations, (3), and eliminating $``$ in favor of $`\rho `$ using Eq.(2), we obtain the linear elliptic equation for $`N`$, $$\mathrm{\Delta }N=4\pi (\rho +\mathrm{tr}S)N.$$ (4) If the strong energy condition is satisfied we have $`\rho +\mathrm{tr}S0`$ everywhere, and so $`\mathrm{\Delta }N0`$ when $`N>0`$. Thus the solution cannot possess an interior maximum. $`N`$ falls towards the center. Even if the potential, $`V=\rho +\mathrm{tr}S`$, is large, $`N`$ never fall to zero in the interior (e.g.,). The lapse can go to zero only when the density or pressure becomes unboundedly large. Let us assume that $`N`$ is positive in the exterior and negative on a compact region $`W`$. $`N`$ vanishes on $`W`$ but will have positive outward gradient. If we integrate $`\mathrm{\Delta }N`$ over $`W`$ we can turn it into a surface integral which must be positive. On the other hand, from Eq.(4) we see that $`\mathrm{\Delta }N0`$ on $`W`$, so we have a contradiction. We also see that it is impossible for $`N`$ to just touch zero at a point. At that point we would have that $`N`$, its first derivatives and its second derivatives all vanish. Thus the function could never grow away from zero. One can deduce the conservation law, $$_bS^{ab}=(S^{ab}+\rho g^{ab})\frac{_bN}{N},$$ (5) directly from the static Einstein equations, Eqs.(2) and (3). To do this, we simply take the divergence of Eq.(3). Exploiting the Ricci identities, $$[_a,_b]V^b=R_{ab}V^b,$$ (6) and the contracted Bianchi identity for $`_{ab}`$, $`_a^{ab}=^b/2`$, we reproduce Eq.(5). It is clear from Eq.(4) that there are no non-trivial vacuum static solutions in the theory. We have $`\mathrm{\Delta }N=0`$ everywhere. If there is no internal boundary, the solution is $`N=1`$ everywhere. Now $`_{ab}=0`$, as well as $`=0`$, so that the geometry is flat everywhere. There is a well known result that the only perfect fluid static equilibria are spherically symmetric . This result implies that for perfect fluids the spherically symmetric analysis is complete. A discussion of the symmetries of equilibrium configurations is provided in . ## III Spherical Symmetry The line element describing the spatial part of a spherically symmetric geometry can always be written as $$ds^2=d\mathrm{}^2+R^2d\mathrm{\Omega }^2,$$ (7) $`\mathrm{}`$ is the proper radial distance on the hypersurface, $`R`$ is the areal radius. For $`R^3`$ topology, $`\mathrm{}`$ has domain $`[0,\mathrm{})`$. The appropriate boundary conditions on $`R`$ are $$R(0)=0,dR/d\mathrm{}|_0=R^{}(0)=1.$$ (8) The scalar curvature $``$ is given by $$=\frac{2}{R^2}\left[2\left(RR^{}\right)^{}R^21\right],$$ (9) where primes denote derivatives with respect to $`\mathrm{}`$. The constraint equation can be cast in the form (see, for example, ) $$R^2=1\frac{2m}{R},$$ (10) where the positive quasi-local mass is given by $$m=4\pi _0^{\mathrm{}}\rho R^2R^{}𝑑\mathrm{}=4\pi _0^R\rho R^2𝑑R.$$ (11) It is immediately clear that $$mR/2$$ (12) everywhere. In general, $`R^21`$ in any regular geometry when the weak energy condition $`(\rho 0)`$ is satisfied, so that $`R\mathrm{}`$ everywhere. To show this we substitute Eq.(9) into Eq.(2) to get $$2RR^{\prime \prime }+R^21=8\pi R^2\rho .$$ (13) At the center we have $`R^{}=1`$ and $`m=0`$. From Eq.(11) we see that $`m`$ increases as soon as we meet matter and thus $`R^{}`$ drops below 1. Let us assume that it later rises up to $`+1`$. 
However, from Eq.(13) we see that at this point $`R^{\prime \prime }\le 0`$ so it cannot be rising! On the other hand $`R^{\prime }`$ can drop below $`-1`$. We again get $`R^{\prime \prime }\le 0`$, which means that it cannot ever rise up again to the asymptotic $`R^{\prime }\to 1`$. Thus for any regular spherical geometry satisfying the weak energy condition $`-1<R^{\prime }\le +1`$ and $`R^{\prime }=+1`$ only at the origin and at infinity. This holds true for any solution of Eq.(13); no static assumption is required. It is clear from Eq.(10) that $`m`$ is positive everywhere in a regular geometry and vanishes only at the center and in any vacuum region surrounding it. In a static configuration, the extrinsic curvature vanishes so that an apparent horizon is a minimal surface with $`R^{\prime }=0`$. Thus, if the geometry is free of an apparent horizon, it must also be free of singularities. In such a geometry $`0<R^{\prime }\le 1`$ and $`m`$ increases monotonically with $`\ell `$ (or $`R`$). We emphasise that the spatial geometry, and with it the ADM mass, is completely determined by the source energy density. The material stresses play no role whatever. At the surface of a compact object of radius $`R=R_0`$, the quasi-local mass coincides with the constant ADM mass, $`m_0`$. The exterior solution is given by Eq.(10) $$R^{\prime 2}=1-\frac{2m_0}{R}.$$ (14) In a spherically symmetric geometry, any symmetric tensor is completely characterized by two scalars. We have $$\mathcal{R}_{ab}=\mathcal{R}_{\ell }n_an_b+\mathcal{R}_R(g_{ab}-n_an_b)$$ (15) $$S_{ab}=S_{\ell }n_an_b+S_R(g_{ab}-n_an_b).$$ (16) Here $`n^a`$ is the outward pointing normal to a two sphere of fixed proper radius. The two scalars appearing in the Ricci tensor can be expressed in terms of $`\mathcal{R}`$, $`R`$ and $`R^{\prime }`$ as follows: $$\mathcal{R}_{\ell }=\frac{1}{2}\mathcal{R}-\frac{1}{R^2}(1-R^{\prime 2})$$ (17) $$\mathcal{R}_R=\frac{1}{4}\mathcal{R}+\frac{1}{2R^2}(1-R^{\prime 2}).$$ (18) Taking the two independent projections of Eq.(3), we therefore have in any spherically symmetric static equilibrium, $$N^{\prime \prime }=N\left\{4\pi (\rho -S_{\ell }+2S_R)-\frac{2m}{R^3}\right\}$$ (19) $$R^{\prime }N^{\prime }=RN\left\{4\pi S_{\ell }+\frac{m}{R^3}\right\}.$$ (20) We can also combine Eqs.(20) and (13) to obtain $$\frac{R^{\prime \prime }}{R}-\frac{R^{\prime }N^{\prime }}{RN}=-4\pi (\rho +S_{\ell }).$$ (21) These three equations are the complete set of equations satisfied by any static spherically symmetric system. If the matter satisfies $$\rho +S_{\ell }\ge 0,$$ (22) we can immediately deduce from Eq.(21) that any spherical static configuration cannot have an apparent horizon. The apparent horizon coincides with a minimal surface, i.e., one where $`R^{\prime }=0`$. From (21) at a minimal surface we see that $`R^{\prime \prime }\le 0`$. However, at the outermost minimal surface we must have $`R^{\prime \prime }>0`$ since the area is increasing outwards. Thus we have a contradiction. Thus we have shown that, in a static star satisfying Eq.(22), $`R^{\prime }>0`$. From Eq.(10) this is equivalent to showing $`m<R/2`$. Essentially the converse of this argument was given in , showing that if a minimal surface existed (the throat of a static wormhole) then the matter cannot satisfy Eq.(22). This result has been recently proven in . However, that proof requires that both $`\rho `$ and $`S_{\ell }`$ be positive while we only need to impose a condition on the combination. The energy condition Eq.(22) is a single component of what is called the ‘null convergence’ or ‘null energy’ condition . 
It is a consequence of each of the three standard energy conditions (the ‘strong’, ‘weak’, and ‘dominant’). Consider the outgoing radial null vector $$\xi ^\mu =(1/N,1,0,0),$$ (23) and multiply it into the spacetime Ricci tensor $`{}^{(4)}\mathcal{R}_{\mu \nu }`$ to get $${}^{(4)}\mathcal{R}_{\mu \nu }\xi ^\mu \xi ^\nu ={}^{(4)}G_{\mu \nu }\xi ^\mu \xi ^\nu =8\pi (\rho +S_{\ell }),$$ (24) where $`{}^{(4)}G_{\mu \nu }`$ is the spacetime Einstein tensor. The equality above is to be expected because the Einstein tensor only differs from the Ricci tensor by a trace term and the trace term, when dotted twice with a null vector, vanishes. The positivity of $`{}^{(4)}\mathcal{R}_{\mu \nu }\xi ^\mu \xi ^\nu `$ implies Eq.(22). Choosing $`\xi ^\mu `$ to be an outgoing tangential null vector we obtain $`\rho +S_{\ell }+2S_R\ge 0`$. If both $`S_{\ell }\ge 0`$ and $`\rho \ge 0`$ hold independently, as supposed in , it is clear that Eq.(22) is satisfied which guarantees $`R^{\prime }>0`$. We also now have that the right hand side of Eq.(20) is positive so that $`N^{\prime }\ge 0`$ everywhere. The lapse function, the length of the Killing vector, for any regular solution must grow monotonically out from the centre to its asymptotic value one. With positive radial stress and positive $`\rho `$ we do not need to assume the strong energy condition. Spherical symmetry is very restrictive. Compare this to the spherically symmetric statement of the maximum principle which was applied earlier to the trace equation, Eq.(4). For completeness, we note that the lapse is evaluated in the exterior of a compact object as follows: using Eq.(20), we have $$R^{\prime }N^{\prime }=N\frac{m_0}{R^2},$$ (25) so that using Eq.(14), $$\frac{N^{\prime }}{R^{\prime }N}=\frac{m_0}{R^2}\left(1-\frac{2m_0}{R}\right)^{-1}.$$ (26) The boundary condition at infinity, $`N\to 1`$, fixes $$N=\left(1-\frac{2m_0}{R}\right)^{1/2}.$$ (27) Eq.(27) together with Eq.(14) reproduce the exterior Schwarzschild form of the spacetime metric. The conservation of the stress tensor reduces to the single equation, $$S_{\ell }^{\prime }+2\frac{R^{\prime }}{R}(S_{\ell }-S_R)=-(S_{\ell }+\rho )\frac{N^{\prime }}{N}.$$ (28) We deduce immediately that at $`\ell =0`$ in a non-singular geometry $$S_{\ell }=S_R.$$ (29) The perfect fluid form of the stress tensor is the only one consistent with the symmetry at the origin. This is exactly as in newtonian theory. While we only needed a condition on the radial stress to eliminate apparent horizons, the transverse stress does play a role in establishing the equilibrium. This can be seen in simple mechanical models. For example, in a soap bubble, the surface tension, which is effectively a negative transverse stress, is the object which balances the positive outward pressure difference between the inside and outside. On the other hand, if we had an evacuated spherical metal shell, with a vacuum inside and positive pressure outside, the outside pressure forces the metal shell to contract, setting up a positive transverse stress. The radial stress obviously increases outwards and is balanced by the positive transverse stress. In a self-gravitating system, we expect the radial pressure to decrease outwards. However, it need not if there are large positive transverse stresses to support the external pressure. 
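As a quick consistency check on Eqs.(25)–(27), note that in vacuum Eq.(4) reduces to $`\mathrm{\Delta }N=0`$, and on the slice $`d\ell ^2+R^2d\mathrm{\Omega }^2`$ the Laplacian of $`N(\ell )`$ is $`N^{\prime \prime }+2(R^{\prime }/R)N^{\prime }`$, with $`d/d\ell =R^{\prime }\,d/dR`$. The following is only an illustrative sketch (it assumes sympy is available; it is not part of the original derivation):

```python
import sympy as sp

R, m0 = sp.symbols('R m0', positive=True)
N = sp.sqrt(1 - 2*m0/R)    # exterior lapse, Eq.(27)
Rp = sp.sqrt(1 - 2*m0/R)   # R' from Eq.(14)

# Convert l-derivatives into R-derivatives: d/dl = R' d/dR.
Np = Rp*sp.diff(N, R)              # N'
Npp = Rp*sp.diff(Np, R)            # N''
laplacian = Npp + 2*(Rp/R)*Np      # Delta N on the spherical slice

print(sp.simplify(laplacian))      # -> 0, i.e. Eq.(4) with rho = tr S = 0
```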
In a spherically symmetric geometry it is possible to exploit the first order Einstein equation, Eq.(20), to reduce the dependence on the stress tensor appearing in Eq.(19) to a dependence on the ratio of the tangential to the radial stress, $$\gamma =\frac{S_R}{S_{\ell }}.$$ (30) We have $$N^{\prime \prime }+\frac{R^{\prime }}{R}\left(1-2\gamma \right)N^{\prime }=\left\{4\pi \rho -\frac{m}{R^3}\left(1+2\gamma \right)\right\}N.$$ (31) In particular, if the stress is isotropic then $`\gamma =1`$ and Eq.(31) is independent of $`S_{ab}`$. Alternatively, we can exploit Eq.(20) to eliminate the lapse from the conservation equation, Eq.(28): $$S_{\ell }^{\prime }+2\frac{R^{\prime }}{R}(S_{\ell }-S_R)=-\frac{R}{R^{\prime }}(S_{\ell }+\rho )\left(4\pi S_{\ell }+\frac{m}{R^3}\right).$$ (32) In the isotropic limit, Eq.(32) is the Tolman-Oppenheimer-Volkoff equation. In the newtonian limit, the RHS of Eq.(32) is replaced by $`-\rho m/r^2`$. In the simple mechanical models provided above, it is clear that positive transverse stresses reduce the internal pressure and vice versa. We can exploit Eq.(28) (or Eq.(32)) to show that, in general, if the transverse pressure is smaller than the radial pressure the radial pressure builds up inside. We will give two slightly different versions of this result. First, let us assume $`S_{\ell }-S_R\ge 0`$, $`\rho +S_{\ell }\ge 0`$ and $`\rho +S_{\ell }+2S_R\ge 0`$ (both of the latter coming from the ‘strong energy’ condition). From the trace equation, Eq.(4), we have that $`N^{\prime }>0`$ and from Eq.(21) we have $`R^{\prime }>0`$. When these are substituted into Eq.(28) we get $`S_{\ell }^{\prime }<0`$ so the pressure monotonically increases inward. The object does not need to be compact. Alternatively, even more simply, let us assume $`S_{\ell }-S_R\ge 0`$ and $`\rho \ge 0`$. The hamiltonian constraint guarantees $`m>0`$. Suppose first that the object is compact. We cannot have an apparent horizon outside so we have $`R^{\prime }>0`$ on the boundary. At the boundary of a compact object $`S_{\ell }=0`$ and so from Eq.(32) we get $`S_{\ell }^{\prime }<0`$ so that $`S_{\ell }`$ is decreasing outwards and so must be positive near the boundary. However a positive $`S_{\ell }`$ makes the right hand side of Eq.(32) even more negative so $`S_{\ell }`$ becomes ever larger as one moves inwards. Thus we have shown that $`S_{\ell }>0`$ and monotonically decreases as one travels out. In turn this guarantees both $`\rho +S_{\ell }>0`$ and $`4\pi S_{\ell }+m/R^3>0`$. Hence $`R^{\prime }>0`$ and $`N^{\prime }>0`$. Note that we need not assume that $`S_R`$ vanishes on the boundary but it cannot be positive there. In other words, surface tension is good. If the object is not compact, the argument we have just presented is valid in the region bounded by any sphere on which $`S_{\ell }\ge 0`$. In this section, using very weak assumptions, we have demonstrated that in a spherical static star we have $`0<R^{\prime }\le 1`$. From Eq.(10) we now get that both $`m>0`$ and $`2m/R<1`$. However, we have to date no information on how close $`R^{\prime }`$ can get to zero, that is, how close $`2m/R`$ can get to 1. This will be discussed in the next sections, where, by imposing various restrictions on the matter, we get extra control on the behaviour of $`2m/R`$. ## IV Constant Density Perfect Fluid Star It is our good fortune that for a perfect fluid constant density star, Eq.(32) is exactly solvable. 
Eqs.(10) and (11) reduce to $$R^{\prime 2}+\left(\frac{8\pi \rho _0}{3}\right)R^2=1.$$ (33) We then have $$\frac{dP}{dR}=-4\pi \frac{R}{1-\frac{8\pi }{3}\rho _0R^2}(P+\rho _0)\left(P+\frac{1}{3}\rho _0\right),$$ (34) with the well known solution, $$P(R)=\rho _0\left(\frac{\left(1-\frac{2m_0R^2}{R_0^3}\right)^{1/2}-\left(1-\frac{2m_0}{R_0}\right)^{1/2}}{3\left(1-\frac{2m_0}{R_0}\right)^{1/2}-\left(1-\frac{2m_0R^2}{R_0^3}\right)^{1/2}}\right).$$ (35) The pressure always exceeds the newtonian value. In fact, in an isotropic uniform newtonian fluid ball of radius $`R_0`$, the central pressure is given by $`P=P_c`$, where $$P_c=\frac{2\pi }{3}\rho _0^2R_0^2.$$ (36) The pressure given by Eq.(35) diverges at the center $`R=0`$ when $`m_0=4R_0/9`$. This occurs when the surface lapse, $`N_0=1/3`$. If $`m_0>4R_0/9`$, it diverges at some finite value of $`R`$. As $`2m_0`$ is increased up to $`R_0`$, the divergence moves out to $`R_0`$. Let us now examine the lapse. In a constant density perfect fluid, Eq.(31) assumes the very simple form, $$\left(\frac{N^{\prime }}{R}\right)^{\prime }=0.$$ (37) We exploit the continuity of the lapse and its first derivative across $`R_0`$, which follow from Eqs.(19) and (20). We first integrate out from some interior point to the surface at $`R_0=R(\ell _0)`$: $$\left(\frac{N^{\prime }}{R}\right)_{R<R_0}=\left(\frac{N^{\prime }}{R}\right)_{R=R_0}=\frac{m_0}{R_0^3},$$ (38) where we have exploited Eq.(27) to evaluate the RHS. We integrate again over the same domain. We find for the surface lapse, $$N_0=N_c+\frac{m_0}{R_0^3}\int _0^{\ell _0}d\ell \,R(\ell ),$$ (39) where $`N_c`$ is the value of the lapse at the center. We note that generally $$\int _{\ell }^{\ell _0}d\ell \,R(\ell )=\int _R^{R_0}R\,dR\left(1-\frac{2m}{R}\right)^{-1/2}.$$ (40) Thus, in a constant density star, $$\int _{\ell }^{\ell _0}d\ell \,R(\ell )=\int _R^{R_0}R\,dR\left(1-\frac{2m_0R^2}{R_0^3}\right)^{-1/2}=\frac{R_0^3}{2m_0}\left[\left(1-\frac{2m_0R^2}{R_0^3}\right)^{1/2}-\left(1-\frac{2m_0}{R_0}\right)^{1/2}\right],$$ (41) and, setting $`\ell =0`$ in Eqs.(39) and (41), we get $$N_0=\left(1-\frac{2m_0}{R_0}\right)^{1/2}=N_c+\frac{1}{2}\left(1-\left(1-\frac{2m_0}{R_0}\right)^{1/2}\right).$$ (42) We require $`N_c\ge 0`$. Eq.(42) then implies $$0\le \frac{3}{2}\left(1-\frac{2m_0}{R_0}\right)^{1/2}-\frac{1}{2},$$ (43) or $$m_0\le \frac{4}{9}R_0,$$ (44) exactly as before. This route, however, has the advantage that Eq.(31) is linear in $`N`$, unlike Eq.(32) which is nonlinear in $`S_{\ell }`$. It is worth noting that $`N\to 0`$ as $`m\to 4R/9`$ should not be viewed as the Killing vector going null. It is another version of the ‘collapse of the lapse’ phenomenon, in this case driven by the fact that the pressure is becoming unboundedly large. ## V Monotonic Stars Buchdahl demonstrated that if the energy density profile in a star is monotonically decreasing, and it is modeled as a perfect fluid, this 4/9 bound continues to hold. The constant density star saturates the bound within this class of systems. In this section we follow Buchdahl in only considering objects with monotonically decreasing densities but we will push the calculations much further. We start off with a perfect fluid assumption and rederive the 4/9 bound. We then weaken this to the dominant radial pressure assumption $`(S_{\ell }\ge S_R)`$ that we used in Section III and prove that the 4/9 bound is still valid. We next extend the inequality to interior points. We finally consider the situation where $`S_R`$ may be larger than $`S_{\ell }`$. We no longer can recover the 4/9 bound; however if the ratio of the pressures is bounded we show that $`m/R`$ is strictly bounded away from 1/2. 
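The behaviour of the solution (35) near the limit (44) is easy to exhibit numerically. The following sketch (the parameter values are arbitrary illustrations, not taken from the text) evaluates the central pressure as $`2m_0/R_0`$ approaches $`8/9`$:

```python
import numpy as np

def pressure(R, m0, R0, rho0):
    # Interior pressure of the constant density star, Eq.(35).
    a = np.sqrt(1 - 2*m0*R**2/R0**3)
    b = np.sqrt(1 - 2*m0/R0)
    return rho0*(a - b)/(3*b - a)

R0 = 1.0
for f in (0.5, 0.8, 0.88, 0.8888):        # f = 2 m0 / R0
    m0 = 0.5*f*R0
    rho0 = 3*m0/(4*np.pi*R0**3)           # uniform density consistent with m0
    print(f, pressure(0.0, m0, R0, rho0))  # central pressure diverges as f -> 8/9
```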
Let us define $$\frac{4\pi }{3}\langle \rho \rangle :=\frac{m}{R^3},$$ (45) so that $$\langle \rho \rangle =\frac{\int _0^R\rho R^2\,dR}{\int _0^RR^2\,dR}$$ (46) is an average of $`\rho (R)`$ (not to be confused with the physical average) within a euclidean ball. Thus if $`\rho ^{\prime }\le 0`$, it is clear that $`\langle \rho \rangle \ge \rho `$ and $$\left(\frac{m}{R^3}\right)^{\prime }=\frac{4\pi }{3}\langle \rho \rangle ^{\prime }\le 0.$$ (47) In particular, one can deduce that $`m/R\ge m_0R^2/R_0^3`$, so that $$\left(1-\frac{2m}{R}\right)^{1/2}\le \left(1-\frac{2m_0R^2}{R_0^3}\right)^{1/2};$$ (48) a lower bound is always provided by a constant density star with the same $`m_0`$ and $`R_0`$. We mimic the constant density star calculation. This is essentially the Buchdahl derivation; however, we allow for a non-perfect fluid. We combine Eqs.(19) and (20) to give $$\left(\frac{N^{\prime }}{R}\right)^{\prime }=\frac{N^{\prime \prime }}{R}-\frac{N^{\prime }R^{\prime }}{R^2}=\frac{4\pi N}{R}\left[\left(\rho -\langle \rho \rangle \right)+2\left(S_R-S_{\ell }\right)\right].$$ (49) Both terms on the RHS of Eq.(49) are negative when $`\rho ^{\prime }\le 0`$ and $`S_{\ell }-S_R\ge 0`$. Thus we have $$\left(\frac{N^{\prime }}{R}\right)^{\prime }\le 0,$$ (50) with equality only in a constant density star supported by isotropic pressure. The remainder of the calculation in this case mimics that for a constant density star. As before, we first integrate Eq.(50) out from some interior point to the surface at $`R=R_0`$: $$\frac{N^{\prime }}{R}\ge \left(\frac{N^{\prime }}{R}\right)_0=\frac{m_0}{R_0^3}.$$ (51) We follow this by integrating out from the center at $`\ell =0`$ to the surface. We find $$N_0\ge N_c+\frac{m_0}{R_0^3}\int _0^{\ell _0}d\ell \,R(\ell ).$$ (52) We require a lower bound on the integral appearing in the second term on the RHS. We cast it, as before, in the form (40). Using Eq.(48), it is clear that $$\int _0^{\ell _0}d\ell \,R(\ell )\ge \frac{1}{2}\left(1-\left(1-\frac{2m_0}{R_0}\right)^{1/2}\right)\frac{R_0^3}{m_0}.$$ (53) When we substitute Eq.(53) and Eq.(27) into (52), together with the requirement that $`N_c\ge 0`$, we recover Eq.(43) and so we have that $`2m_0/R_0\le 8/9`$. This gives only a bound at the boundary. If the configuration has a ‘thin’ atmosphere with $`3\rho <\langle \rho \rangle `$, $`m/R`$ is decreasing so the maximum value of $`m/R`$ occurs somewhere in the interior and not on the boundary. In such a scenario the above result is not very useful. Happily, the argument can be tweaked to show that $`2m/R\le 8/9`$ through the whole system. Let us assume $`\rho \ge 0`$, $`\rho ^{\prime }\le 0`$ and $`S_{\ell }\ge S_R`$. The argument at the end of Section III shows us that $`S_{\ell }\ge 0`$. From Eq.(20) we have $$\frac{N^{\prime }}{R}=\frac{N}{R^{\prime }}\left(4\pi S_{\ell }+\frac{m}{R^3}\right)\ge \frac{N}{R^{\prime }}\left(\frac{m}{R^3}\right).$$ (54) Let us assume that $`2m/R`$ possesses a maximum at a point a distance $`\ell _1`$ from the center. The monotonicity of $`N^{\prime }/R`$ (Eq.(50)) and Eq.(54) gives (contrast Eq.(51)) $$\frac{N^{\prime }}{R}\ge \left(\frac{N}{R^{\prime }}\frac{m}{R^3}\right)_1\qquad \ell \le \ell _1.$$ (55) Integrate from the center to $`\ell _1`$ to get $$N_1\ge N_c+\left(\frac{N}{R^{\prime }}\frac{m}{R^3}\right)_1\int _0^{\ell _1}R\,d\ell $$ (56) $$=N_c+\left(\frac{N}{R^{\prime }}\frac{m}{R^3}\right)_1\int _0^{R_1}\frac{R\,dR}{\left(1-2m/R\right)^{1/2}}$$ (57) $$\ge N_c+\left(\frac{N}{R^{\prime }}\frac{m}{R^3}\right)_1\int _0^{R_1}\frac{R\,dR}{\left(1-2m_1R^2/R_1^3\right)^{1/2}},$$ (58) where the last line follows from the monotonicity of $`m/R^3`$. 
This can be integrated to give $$N_1\ge N_c+\frac{m_1N_1}{R_1^3\left(1-\frac{2m_1}{R_1}\right)^{1/2}}\frac{R_1^3}{2m_1}\left[1-\left(1-\frac{2m_1}{R_1}\right)^{1/2}\right].$$ (59) Requiring that $`N_c\ge 0`$ allows us to cancel the $`N_1`$ on both sides and we immediately get that $`2m/R\le 8/9`$. To deal with the situation where $`S_{\ell }`$ can be less than $`S_R`$ we need a somewhat more complicated argument. We add now as one of our assumptions that $`S_{\ell }\ge 0`$. If we divide Eq.(49) by Eq.(20) we can get $$\left(\frac{N^{\prime }}{R}\right)^{\prime }=\frac{N^{\prime }}{R}\frac{R^{\prime }}{R}\frac{\left(\rho -\langle \rho \rangle \right)+2\left(S_R-S_{\ell }\right)}{S_{\ell }+\frac{\langle \rho \rangle }{3}}.$$ (60) Let us assume that the term on the right hand side of Eq.(60) which depends on the sources is bounded. In other words we assume $$\frac{\left(\rho -\langle \rho \rangle \right)+2\left(S_R-S_{\ell }\right)}{S_{\ell }+\frac{\langle \rho \rangle }{3}}\le \beta .$$ (61) It is clear that $`\beta `$ cannot be negative because the numerator vanishes at the center. We have $`\beta =0`$ for a monotonic star with $`S_R\le S_{\ell }`$. In general, it will be some positive number. Eq.(60) now reads $$\left(\frac{N^{\prime }}{R}\right)^{\prime }\le \beta \frac{N^{\prime }}{R}\frac{R^{\prime }}{R}.$$ (62) Find the point where $`2m/R`$ is a maximum (call it $`\ell _1`$ as before) and integrate Eq.(62) out to it to give $$\mathrm{ln}\left(\frac{(N^{\prime }/R)_1}{(N^{\prime }/R)}\right)\le \beta \,\mathrm{ln}(R_1/R),$$ (63) so that $$\frac{N^{\prime }}{R}\ge \left(\frac{N^{\prime }}{R}\right)_1\left(\frac{R}{R_1}\right)^\beta .$$ (64) As before, we integrate this equation from the center out to $`\ell _1`$ to get $$N_1\ge N_c+\left(\frac{N^{\prime }}{R}\right)_1\int _0^{\ell _1}\left(\frac{R}{R_1}\right)^\beta R\,d\ell $$ (65) $$N_1\ge N_c+\left(\frac{N}{R^{\prime }}\left[4\pi S_{\ell }+\frac{m}{R^3}\right]\right)_1\int _0^{\ell _1}\left(\frac{R}{R_1}\right)^\beta R\,d\ell $$ (66) $$\ge N_c+\left(\frac{N}{R^{\prime }}\frac{m}{R^3}\right)_1\int _0^{R_1}\left(\frac{R}{R_1}\right)^\beta \frac{R\,dR}{\left(1-2m/R\right)^{1/2}}$$ (67) $$\ge N_c+\left(\frac{N}{R^{\prime }}\frac{m}{R^3}\right)_1\int _0^{R_1}\left(\frac{R}{R_1}\right)^\beta \frac{R\,dR}{\left(1-\frac{2m_1R^2}{R_1^3}\right)^{1/2}}.$$ (68) In going from Eq.(65) to Eq.(66) we use Eq.(20) and in going from Eq.(66) to Eq.(67) we use that $`S_{\ell }(\ell _1)\ge 0`$. It is clear that the integral in Eq.(68) is finite and well behaved for any finite $`\beta `$. Thus we get a bound on $`2m/R`$ which is strictly bounded away from 1. Only in the limit as $`\beta \to \infty `$ does the integral go to zero. In this case the bound on $`2m/R`$ approaches 1. In the other limit, when $`\beta \to 0`$, we recover Eq.(58) and so we get $`2m/R\le 8/9`$. In the special cases where $`\beta =2,4,6,\dots `$ the integral in Eq.(68) can be done simply. This includes one especially interesting case. Let us assume we are given a monotonic star with positive radial pressure (these assumptions can be justified by stability criteria). Let us further assume that the transverse pressure is bounded. More precisely let us assume $`S_R\le \rho `$. This can be justified on some kind of speed of sound argument. From the monotonicity we get $`S_R\le \langle \rho \rangle `$. From these we immediately get $`\beta \le 6`$! Now we can do the integration and get $`2m/R\le 0.974`$. 
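The quoted numbers are easy to check. Setting $`N_c=0`$ in Eq.(68), cancelling $`N_1`$ and writing $`u=2m_1/R_1`$ and $`y=R/R_1`$, the bound saturates when $`\sqrt{1-u}=(u/2)\int _0^1y^{\beta +1}(1-uy^2)^{-1/2}dy`$. A small numerical sketch (assuming scipy is available; this reduction to a dimensionless integral is ours, for illustration only):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def bound(beta):
    # Largest u = 2m/R allowed by the saturated form of Eq.(68).
    def g(u):
        I, _ = quad(lambda y: y**(beta + 1)/np.sqrt(1.0 - u*y*y), 0.0, 1.0)
        return np.sqrt(1.0 - u) - 0.5*u*I
    return brentq(g, 1e-6, 1.0 - 1e-9)

print(bound(0.0))   # -> 0.888... = 8/9, the monotonic Buchdahl bound
print(bound(6.0))   # -> ~0.974, the value quoted above for S_R <= rho
```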
Alternatively, if the material was approximately a perfect fluid we could use the ratio of the pressures, $`\gamma `$, that we introduced earlier. It is clear that $`\beta \le 2(\gamma _{Max}-1)`$. ## VI Completely general spherical configuration Let us now consider a general static spherical ball. We no longer wish to assume either monotonicity or a perfect fluid. The only constraints we place are that both $`\rho \ge 0`$ and $`S_{\ell }\ge 0`$. We also assume that $`\beta `$ as defined by Eq.(61) exists. We are less interested in obtaining the tightest bound on $`2m/R`$ than in establishing that such a bound exists. All the equations, starting from Eq.(61) up to and including Eq.(67), continue to hold. However, in going from Eq.(67) to Eq.(68) we used the monotonicity. One way of avoiding that difficulty is by replacing Eq.(68) with $$N_1\ge N_c+\left(\frac{N}{R^{\prime }}\frac{m}{R^3}\right)_1\int _0^{R_1}\left(\frac{R}{R_1}\right)^\beta \frac{R\,dR}{\left(1-\frac{8\pi \rho _{\mathrm{Min}}}{3}R^2\right)^{1/2}}.$$ (69) This uses that $`2m/R=8\pi \langle \rho \rangle R^2/3\ge 8\pi \rho _{\mathrm{Min}}R^2/3`$. Eq.(69) can be simplified by introducing a new variable $`x^2=8\pi \rho _{\mathrm{Min}}R^2/3`$ and $`x_1^2=8\pi \rho _{\mathrm{Min}}R_1^2/3=2\overline{m}_1/R_1\le 2m_1/R_1`$ where $`\overline{m}=4\pi \rho _{\mathrm{Min}}R^3/3\le m`$. We then get from $`N_c>0`$ $$\sqrt{1-\frac{2m_1}{R_1}}\ge \frac{\langle \rho \rangle _1}{2\rho _{\mathrm{Min}}}\int _0^{x_1}\left(\frac{x}{x_1}\right)^\beta \frac{x\,dx}{\sqrt{1-x^2}}.$$ (70) It is clear that the right hand side of Eq.(70) is finite and bounded away from zero as long as $`\rho _{\mathrm{Min}}`$ is non-zero. This is very misleading because of the dependence of $`x_1`$ on $`\rho _{\mathrm{Min}}`$. If we return to Eq.(69) we can see that the integral has a lower bound of $`R_1^2/(\beta +2)`$ and this is achieved when $`\rho _{\mathrm{Min}}=0`$. In this case we get a bound on $`2m_1/R_1`$ given by $$2(\beta +2)\sqrt{1-\frac{2m_1}{R_1}}\ge \frac{2m_1}{R_1}.$$ (71) Thus we get the following bound $$\frac{2m_1}{R_1}\le 2(\beta +2)^2\left[\sqrt{1+\frac{1}{(\beta +2)^2}}-1\right]\approx 1-\frac{1}{4(\beta +2)^2}.$$ (72) The approximation given in Eq.(72) holds only in the limit as $`\beta `$ becomes large. In general, we can show that the expression in Eq.(72) is always less than 1 and monotonically increases with $`\beta `$. For example, when $`\beta =0`$ we get $`2m_1/R_1\le 0.944`$. We know that $`0\le \rho _{\mathrm{Min}}\le \langle \rho \rangle _1`$. For any fixed value of $`\beta `$, if $`\rho _{\mathrm{Min}}=\langle \rho \rangle _1`$ the bound we get on $`2m_1/R_1`$ agrees with the monotonic bound as given implicitly by Eq.(68). As $`\rho _{\mathrm{Min}}`$ reduces below $`\langle \rho \rangle _1`$ the bound on $`2m_1/R_1`$ increases monotonically and in the limit $`\rho _{\mathrm{Min}}\to 0`$ it reaches the bound given in Eq.(72) and so is always strictly bounded away from 1. ## VII Conclusions We have examined the ratio of the quasi-local mass to the circumferential radius, $`2m/R`$, for physically reasonable spherically symmetric isolated static configurations in general relativity. We have demonstrated how the theory always places an upper bound on this ratio which lies strictly below the value it assumes when a horizon forms. This extends considerably earlier work on this question. The bounds we have derived do not take into account the stability of these static equilibria. It would be interesting to know if and how these bounds get tightened when only stable configurations are considered. In , Morris and Thorne addressed the problem of constructing a static wormhole. 
If one has a spherical static wormhole one must have a minimal surface and thus $`\rho +S_{\ell }<0`$ somewhere. This raises an interesting question. Assume one has a spherical static wormhole and assume $`\rho \ge 0`$. How much ‘exotic material’ violating the strong energy condition does one need? ## Acknowledgements We gratefully acknowledge support from CONACyT Grant 211085-5-0118PE to JG and Forbairt Grant SC/96/750 to NÓM.
# Quantum spacetime: what do we know? ## 1 The incomplete revolution Quantum mechanics (QM) and general relativity (GR) have modified our understanding of the physical world in depth. But they have left us with a general picture of the physical world which is unclear, incomplete, and fragmented. Combining what we have learned about our world from the two theories and finding a new synthesis is a major challenge, perhaps the major challenge, in today’s fundamental physics. The two theories have opened a major scientific revolution, but this revolution is not completed. Most of the physics of this century has been a sequel of triumphant explorations of the new worlds opened by QM and GR. QM led to nuclear physics, solid state physics, and particle physics. GR led to relativistic astrophysics and cosmology, and is today leading us towards gravitational astronomy. The urgency of applying the two theories to larger and larger domains, the momentous developments, and the dominant pragmatic attitude of the middle of this century, have obscured the fact that a consistent picture of the physical world, more or less stable for three centuries, has been lost with the advent of QM and GR. This pragmatic attitude cannot be satisfactory, or productive, in the long run. The basic Cartesian-Newtonian notions such as matter, space, time, causality, have been modified in depth. The new notions do not stay together. At the basis of our understanding of the world reigns a surprising confusion. From QM and GR we know that we live in a spacetime with quantum properties, that is, a quantum spacetime. But what is a quantum spacetime? In the last decade, the attention of theoretical physicists has been increasingly focusing on this major problem. Whatever the outcome of the enterprise, we are witnessing a large scale intellectual effort for accomplishing a major aim: completing the XXth century scientific revolution, and finding a new synthesis. In this effort, physics is once more facing conceptual problems: What is matter? What is causality? What is the role of the observer in physics? What is time? What is the meaning of “being somewhere”? What is the meaning of “now”? What is the meaning of “moving”? Is motion to be defined with respect to objects or with respect to space? These foundational questions, or sophisticated versions of these questions, were central in the thinking and in the results of Einstein, Heisenberg, Bohr, Dirac and their colleagues. But these are also precisely the same questions that Descartes, Galileo, Huygens, Newton and their contemporaries debated with passion – the questions that led them to create modern science. For the physicists of the middle of this century, these questions were irrelevant: one does not need to worry about first principles in order to apply the Schrödinger equation to the helium atom, or to understand how a neutron star stays together. But today, if we want to find a novel picture of the world, if we want to understand what is quantum spacetime, we have to return, once again, to those foundational issues. We have to find a new answer to these questions –different from Newton’s answer– which takes into account what we have learned about the world with QM and GR. Of course, we have little, if any, direct empirical access to the regimes in which we expect genuine quantum gravitational phenomena to appear. Anything could happen at those fantastically small distance scales, far removed from our experience. 
Nevertheless, we do have information about quantum gravity, and we do have indications on how to search for it. In fact, we are precisely in one of the very typical situations in which good fundamental theoretical physics has worked at its best in the past: we have learned two new extremely general “facts” about our world, QM and GR, and we have “just” to figure out what they imply, when taken together. The most striking advances in theoretical physics happened in situations analogous to this one. Here, I present some reflections on these issues.<sup>1</sup><sup>1</sup>1For recent general overviews of current approaches to quantum gravity, see (Isham 1999) and (Rovelli 1999). What have we learned about the world from QM and, especially, GR? What do we know about space, time and matter? What can we expect from a quantum theory of spacetime? To what extent does taking QM and GR into account force us to modify the notion of time? What can we already say about quantum spacetime? I present also a few reflections on issues raised by the relation between philosophy of science and research in quantum gravity. I am not a philosopher, and I can touch philosophical issues only at the risk of being naive. I nevertheless take this risk here, encouraged by Craig Callender and Nick Huggett’s extremely stimulating idea of this volume. I present some methodological considerations –How shall we search? How can the present successful theories lead us towards a theory that does not yet exist?– as well as some general considerations. In particular, I discuss the relation between physical theories that supersede each other, and the attitude we may have with respect to the truth-content of a physical theory: with respect to the reality of the theoretical objects the theory postulates in particular, and to its factual statements on the world in general. I am convinced of the reciprocal usefulness of a dialog between physics and philosophy (Rovelli 1997a). This dialog has played a major role during the other periods in which science faced foundational problems. In my opinion, most physicists underestimate the effect of their own epistemological prejudices on their research. And many philosophers underestimate the influence –positive or negative– they have on fundamental research. On the one hand, a more acute philosophical awareness would greatly help the physicists engaged in fundamental research: Newton, Heisenberg and Einstein couldn’t have done what they have done if they weren’t nurtured by (good or bad) philosophy. On the other hand, I wish contemporary philosophers concerned with science would be more interested in the ardent lava of the foundational problems science is facing today. It is here, I believe, that stimulating and vital issues lie. ## 2 The problem What is the task of a quantum theory of gravity, and how should we search for such a theory? The task of the search is clear and well defined. It is determined by recalling the three major steps that lead to the present problematic situation. ### 2.1 First step. A new actor on the stage: the field The first step is in the works of Faraday, Maxwell and Einstein. Faraday and Maxwell have introduced a new fundamental notion in physics, the field. Faraday’s book includes a fascinating chapter with the discussion of whether the field (in Faraday’s terminology, the “lines of force”) is “real”. 
As far as I understand this subtle chapter (understanding Faraday is tricky: it took the genius of Maxwell), in modern terms what Faraday is asking is whether there are independent degrees of freedom in the electric and magnetic fields. A degree of freedom is a quantity that I need to specify (more precisely: whose value and whose time derivative I need to specify) in order to be able to predict univocally the future evolution of the system. Thus Faraday is asking: if we have a system of interacting charges, and we know their positions and velocities, is this knowledge sufficient to predict the future motions of the charges? Or rather, in order to predict the future, do we have to specify the instantaneous configuration of the field (the field’s degrees of freedom) as well? The answer is in Maxwell equations: the field has independent degrees of freedom. We cannot predict the future evolution of the system from its present state unless we know the instantaneous field configuration. Learning to use these degrees of freedom led to radio, TV and the cellular phone. To which physical entity do the degrees of freedom of the electromagnetic field refer? This was one of the most debated issues in physics towards the end of the last century. The electromagnetic waves have aspects in common with water waves, or with sound waves, which describe vibrations of some material medium. The natural interpretation of the electromagnetic field was that it too describes the vibrations of some material medium – for which the name “ether” was chosen. A strong argument supports this idea: The wave equations for water or sound waves fail to be Galilean invariant. They fail to be so because they describe propagation over a medium (water, air) whose state of motion breaks Galilean invariance and defines a preferred reference frame. Maxwell equations break Galilean invariance as well, and it was thus natural to hypothesize a material medium determining the preferred reference frame. But a convincing dynamical theory of the ether compatible with the various experiments (for instance on the constancy of the speed of light) could not be found. Rather, physics took a different course. Einstein believed Maxwell theory as a fundamental theory and believed the Galilean insight that velocity is relative and inertial systems are equivalent. Merging the two, he found special relativity. A main result of special relativity is that the field cannot be regarded as describing vibrations of underlying matter. The idea of the ether is abandoned, and the field has to be taken seriously as an elementary constituent of reality. This is a major change from the ontology of Cartesian-Newtonian physics. In the best description we can give of the physical world, there is a new actor: the field. The electromagnetic field can be described by the Maxwell potential $`A_\mu (x),\mu =0,1,2,3`$. The entity described by $`A_\mu (x)`$ (more precisely, by a gauge-equivalent class of $`A_\mu (x)`$’s) is one of the elementary constituents of the physical world, according to the best conceptual scheme physics has found, so far, for grasping our world. ### 2.2 Second step. Dynamical entities have quantum properties The second step (out of chronological order) is the replacement of the mechanics of Newton, Lagrange and Hamilton with quantum mechanics (QM). As did classical mechanics, QM provides a very general framework. 
By formulating a specific dynamical theory within this framework, one obtains a number of important physical consequences, substantially different from what is implied by the Newtonian scheme. Evolution is probabilistically determined only; some physical quantities can take certain discrete values only (are “quantized”); if a system can be in a state $`A`$, where a physical quantity $`q`$ has value $`a`$, as well as in state $`B`$, where $`q`$ has value $`b`$, then the system can also be in states (denoted $`\mathrm{\Psi }=c_aA+c_bB`$) where $`q`$ has value $`a`$ with probability $`|c_a|^2/(|c_a|^2+|c_b|^2)`$, or, alternatively, $`b`$ with probability $`|c_b|^2/(|c_a|^2+|c_b|^2)`$ (superposition principle); conjugate variables cannot be assumed to have definite values at the same time (uncertainty principle); and what we can say about the properties that the system will have the-day-after-tomorrow is not determined just by what we can say about the system today, but also by what we will be able to say about the system tomorrow. (Bohr would have simply said that observations affect the system. Formulations such as Bohm’s or consistent histories force us to use intricate wording for naming the same physical fact.) The formalism of QM exists in a number of more or less equivalent versions: Hilbert spaces and self-adjoint observables, Feynman’s sum over histories, algebraic formulation, and others. Often, we are able to translate from one formulation to another. However, often we cannot easily do in one formulation what we can do in another. QM is not the theory of micro-objects. It is our best form of mechanics. If quantum mechanics failed for macro-objects, we would have detected the boundary of its domain of validity in mesoscopic physics. We haven’t.<sup>2</sup><sup>2</sup>2Following Roger Penrose’s opposite suggestion of a failure of conventional QM induced by gravity (Penrose 1995), Anton Zeilinger is preparing an experiment to test such a possible failure of QM (Zeilinger 1997). It would be very exciting if Roger turned out to be right, but I am afraid that QM, as usual, will win. The classical regime raises some problems (why are effects of macroscopic superposition difficult to detect?). Solving these problems requires a good understanding of physical decoherence, and perhaps more. But there is no reason to doubt that QM represents a deeper, not a shallower, level of understanding of nature than classical mechanics. Trying to resolve the difficulties in our grasping of our quantum world by resorting to old classical intuition is just lack of courage. We have learned that the world has quantum properties. This discovery will stay with us, like the discovery that velocity is only relational or like the discovery that the Earth is not the center of the universe. The empirical success of QM is immense. Its physical obscurity is undeniable. Physicists do not yet agree on what QM precisely says about the world (the difficulty, of course, refers to the physical meaning of notions such as “measurement”, “history”, “hidden variable”, …). It is a bit like the Lorentz transformations before Einstein: correct, but what do they mean? In my opinion, what QM means is that the contingent (variable) properties of any physical system, or the state of the system, are relational notions which only make sense when referred to a second physical system. I have argued for this thesis in (Rovelli 1996, Rovelli 1998). 
However, I will not enter into this discussion here, because the issue of the interpretation of QM has no direct connection with quantum gravity. Quantum gravity and the interpretation of QM are two major but (virtually) completely unrelated problems. QM was first developed for systems with a finite number of degrees of freedom. As discussed in the previous section, Faraday, Maxwell and Einstein had introduced the field, which has an infinite number of degrees of freedom. Dirac put the two ideas together. He believed quantum mechanics and he believed Maxwell’s field theory much beyond their established domains of validity (respectively: the dynamics of finite dimensional systems, and the classical regime) and constructed quantum field theory (QFT), in its first two incarnations, the quantum theory of the electromagnetic field and the relativistic quantum theory of the electron. In this exercise, Dirac derived the existence of the photon just from Maxwell theory and the basics of QM. Furthermore, by just believing special relativity and believing quantum theory, namely assuming their validity far beyond their empirically explored domain of validity, he predicted the existence of antimatter. The two embryonic QFTs of Dirac were combined in the fifties by Feynman and his colleagues, giving rise to quantum electrodynamics, the first nontrivial interacting QFT. A remarkable picture of the world was born: quantum fields over Minkowski space. Equivalently, à la Feynman: the world as a quantum superposition of histories of real and virtual interacting particles. QFT had ups and downs, then triumphed with the standard model: a consistent QFT for all interactions (except gravity), which, in principle, can be used to predict anything we can measure (except gravitational phenomena), and which, in the last fifteen years, has received nothing but empirical verifications. ### 2.3 Third step. The stage becomes an actor Descartes, in Le Monde, gave a fully relational definition of localization (space) and motion (on the relational/substantivalist issue, see Earman and Norton 1987, Barbour 1989, Earman 1989, Rovelli 1991a, Belot 1998). According to Descartes, there is no “empty space”. There are only objects, and it makes sense to say that an object A is contiguous to an object B. The “location” of an object A is the set of the objects to which A is contiguous. “Motion” is change in location. That is, when we say that A moves we mean that A goes from the contiguity of an object B to the contiguity of an object C<sup>3</sup><sup>3</sup>3“We can say that movement is the transference of one part of matter or of one body, from the vicinity of those bodies immediately contiguous to it, and considered at rest, into the vicinity of some others”, (Descartes, Principia Philosophiae, Sec II-25, pg 51).. A consequence of this relationalism is that there is no meaning in saying “A moves”, except if we specify with respect to which other objects (B, C,…) it is moving. Thus, there is no “absolute” motion. This is the same definition of space, location, and motion that we find in Aristotle. <sup>4</sup><sup>4</sup>4Aristotle insists on this point, using the example of the river that moves with respect to the ground, in which there is a boat that moves with respect to the water, on which there is a man that walks with respect to the boat …. 
Aristotle’s relationalism is tempered by the fact that there is, after all, a preferred set of objects that we can use as universal reference: the Earth at the center of the universe, the celestial spheres, the fixed stars. Thus, we can say, if we desire so, that something is moving “in absolute terms”, if it moves with respect to the Earth. Of course, there are two preferred frames in ancient cosmology: the one of the Earth and the one of the fixed stars; the two rotate with respect to each other. It is interesting to notice that the thinkers of the middle ages did not miss this point, and discussed whether we can say that the stars rotate around the Earth, rather than it being the Earth that rotates under the fixed stars. Buridan concluded that, on grounds of reason alone, neither view is more defensible than the other. For Descartes, who writes, of course, after the great Copernican divide, the Earth is not anymore the center of the Universe and cannot offer a naturally preferred definition of stillness. According to malicious commentators, Descartes, fearing the Church and scared by what happened to Galileo’s stubborn defense of the idea that “the Earth moves”, resorted to relationalism, in Le Monde, precisely to be able to hold Copernicanism without having to commit himself to the absolute motion of the Earth! Relationalism, namely the idea that motion can be defined only in relation to other objects, should not be confused with Galilean relativity. Galilean relativity is the statement that “rectilinear uniform motion” is a priori indistinguishable from stasis. Namely that velocity (but just velocity!) is relative to other bodies. Relationalism holds that any motion (however zigzagging) is a priori indistinguishable from stasis. The very formulation of Galilean relativity requires a nonrelational definition of motion (“rectilinear and uniform” with respect to what?). Newton took a fully different course. He devotes much energy to criticise Descartes’ relationalism, and to introduce a different view. According to him, space exists. It exists even if there are no bodies in it. Location of an object is the part of space that the object occupies. Motion is change of location.<sup>5</sup><sup>5</sup>5 “So, it is necessary that the definition of places, and hence local motion, be referred to some motionless thing such as extension alone or space, in so far as space is seen truly distinct from moving bodies”, (Newton De gravitatione et Aequipondio Fluidorum 89-156). Compare with the quotation of Descartes in the footnote above. Thus, we can say whether an object moves or not, irrespective of surrounding objects. Newton argues that the notion of absolute motion is necessary for constructing mechanics. His famous discussion of the experiment of the rotating bucket in the Principia is one of the arguments to prove that motion is absolute. This point has often raised confusion because one of the corollaries of Newtonian mechanics is that there is no detectable preferred reference frame. Therefore the notion of absolute velocity is, actually, meaningless, in Newtonian mechanics. The important point, however, is that in Newtonian mechanics velocity is relative, but any other feature of motion is not relative: it is absolute. In particular, acceleration is absolute. It is acceleration that Newton needs to construct his mechanics; it is acceleration that the bucket experiment is supposed to prove to be absolute, against Descartes. 
In a sense, Newton overdid it a bit, introducing the notion of absolute position and velocity (perhaps even just for explanatory purposes?). Many people have later criticised Newton for his unnecessary use of absolute position. But this is irrelevant for the present discussion. The important point here is that Newtonian mechanics requires absolute acceleration, against Aristotle and against Descartes. Precisely the same is true of special relativistic mechanics. Similarly, Newton introduced absolute time. Newtonian space and time or, in modern terms, spacetime, are like a stage over which the action of physics takes place, the various dynamical entities being the actors. The key feature of this stage, Newtonian spacetime, is its metrical structure. Curves have length, surfaces have area, regions of spacetime have volume. Spacetime points are at fixed distance the one from the other. Revealing, or measuring, this distance, is very simple. It is sufficient to take a rod and put it between two points. Any two points which are one rod apart are at the same distance. Using modern terminology, physical space is a linear three-dimensional (3d) space, with a preferred metric. On this space there exist preferred coordinates $`x^i,i=1,2,3`$, in terms of which the metric is just $`\delta _{ij}`$. Time is described by a single variable $`t`$. The metric $`\delta _{ij}`$ determines lengths, areas and volumes and defines what we mean by straight lines in space. If a particle deviates with respect to this straight line, it is, according to Newton, accelerating. It is not accelerating with respect to this or that dynamical object: it is accelerating in absolute terms. Special relativity changes this picture only marginally, loosening the strict distinction between the “space” and the “time” components of spacetime. In Newtonian spacetime, space is given by fixed 3d planes. In special relativistic spacetime, which 3d plane you call space depends on your state of motion. Spacetime is now a 4d manifold $`M`$ with a flat Lorentzian metric $`\eta _{\mu \nu }`$. Again, there are preferred coordinates $`x^\mu ,\mu =0,1,2,3`$, in terms of which $`\eta _{\mu \nu }=diag[-1,1,1,1]`$. This tensor, $`\eta _{\mu \nu }`$, enters all physical equations, representing the determinant influence of the stage and of its metrical properties on the motion of anything. Absolute acceleration is deviation of the world line of a particle from the straight lines defined by $`\eta _{\mu \nu }`$. The only essential novelty with special relativity is that the “dynamical objects”, or “bodies” moving over spacetime, now include the fields as well. Example: a violent burst of electromagnetic waves coming from a distant supernova has traveled across space and has reached our instruments. For the rest, the Newtonian construct of a fixed background stage over which physics happens is not altered by special relativity. The profound change comes with general relativity (GR). The central discovery of GR can be enunciated in three points. One of these is conceptually simple, the other two are tremendous. First, the gravitational force is mediated by a field, very much like the electromagnetic field: the gravitational field. Second, Newton’s spacetime, the background stage that Newton introduced, against most of the earlier European tradition, and the gravitational field are the same thing. 
Third, the dynamics of the gravitational field, of the other fields such as the electromagnetic field, and of any other dynamical object, is fully relational, in the Aristotelian-Cartesian sense. Let me illustrate these three points. First, the gravitational field is represented by a field on spacetime, $`g_{\mu \nu }(x)`$, just like the electromagnetic field $`A_\mu (x)`$. They are both very concrete entities: a strong electromagnetic wave can hit you and knock you down; and so can a strong gravitational wave. The gravitational field has independent degrees of freedom, and is governed by dynamical equations, the Einstein equations. Second, the spacetime metric $`\eta _{\mu \nu }`$ disappears from all equations of physics (recall it was ubiquitous). In its place –we are instructed by GR– we must insert the gravitational field $`g_{\mu \nu }(x)`$. This is a spectacular step: Newton’s background spacetime was nothing but the gravitational field! The stage is promoted to be one of the actors. Thus, in all physical equations one now sees the direct influence of the gravitational field. How can the gravitational field determine the metrical properties of things, which are revealed, say, by rods and clocks? Simply, the inter-atomic separation of the rods’ atoms, and the frequency of the clock’s pendulum are determined by explicit couplings of the rod’s and clock’s variables with the gravitational field $`g_{\mu \nu }(x)`$, which enters the equations of motion of these variables. Thus, any measurement of length, area or volume is, in reality, a measurement of features of the gravitational field. But what is really formidable in GR, the truly momentous novelty, is the third point: the Einstein equations, as well as all other equations of physics appropriately modified according to GR instructions, are fully relational in the Aristotelian-Cartesian sense. This point is independent from the previous one. Let me give first a conceptual, then a technical account of it. The point is that the only definition of location that makes physical sense within GR is relational. GR describes the world as a set of interacting fields and, possibly, other objects. One of these interacting fields is $`g_{\mu \nu }(x)`$. Motion can be defined only as positioning and displacements of these dynamical objects relative to each other (for more details on this, see Rovelli 1991a and especially 1997a). To describe the motion of a dynamical object, Newton had to assume that acceleration is absolute, namely that it is not relative to this or that other dynamical object. Rather, it is relative to a background space. Faraday, Maxwell and Einstein extended the notion of “dynamical object”: the stuff of the world is fields, not just bodies. Finally, GR tells us that the background space is itself one of these fields. Thus, the circle is closed, and we are back to relationalism: Newton’s motion with respect to space is indeed motion with respect to a dynamical object: the gravitational field. All this is coded in the active diffeomorphism invariance (diff invariance) of GR.<sup>6</sup><sup>6</sup>6Active diff invariance should not be confused with passive diff invariance, or invariance under change of coordinates. GR can be formulated in a coordinate free manner, where there are no coordinates, and no changes of coordinates. In this formulation, the field equations are still invariant under active diffs. 
Passive diff invariance is a property of a formulation of a dynamical theory, while active diff invariance is a property of the dynamical theory itself. A field theory is formulated in a manner invariant under passive diffs (or changes of coordinates), if we can change the coordinates of the manifold, re-express all the geometric quantities (dynamical and non-dynamical) in the new coordinates, and the form of the equations of motion does not change. A theory is invariant under active diffs, when a smooth displacement of the dynamical fields (the dynamical fields alone) over the manifold, sends solutions of the equations of motion into solutions of the equations of motion. Distinguishing a truly dynamical field, namely a field with independent degrees of freedom, from a nondynamical field disguised as dynamical (such as a metric field $`g`$ with the equations of motion Riemann\[g\]=0) might require a detailed analysis (for instance, hamiltonian) of the theory. Because active diff invariance is a gauge, the physical content of GR is expressed only by those quantities, derived from the basic dynamical variables, which are fully independent from the points of the manifold. In introducing the background stage, Newton introduced two structures: a spacetime manifold, and its non-dynamical metric structure. GR gets rid of the non-dynamical metric, by replacing it with the gravitational field. More importantly, it gets rid of the manifold, by means of active diff invariance. In GR, the objects of which the world is made do not live over a stage and do not live on spacetime: they live, so to say, over each other’s shoulders. Of course, nothing prevents us, if we wish to do so, from singling out the gravitational field as “the more equal among equals”, and declaring that location is absolute in GR, because it can be defined with respect to it. But this can be done within any relationalism: we can always single out a set of objects, and declare them as not-moving by definition<sup>7</sup><sup>7</sup>7Notice that Newton, in the passage quoted in the footnote above, argues that motion must be defined with respect to motionless space “in so far as space is seen truly distinct from moving bodies”. That is: motion should be defined with respect to something that has no dynamics.. The problem with this attitude is that it fully misses the great Einsteinian insight: that Newtonian spacetime is just one field among the others. More seriously, this attitude sends us into a nightmare when we have to deal with the motion of the gravitational field itself (which certainly “moves”: we are spending millions for constructing gravity wave detectors to detect its tiny vibrations). There is no absolute referent of motion in GR: the dynamical fields “move” with respect to each other. Notice that the third step was not easy for Einstein, and came later than the previous two. Having well understood the first two, but still missing the third, Einstein actively searched for non-generally covariant equations of motion for the gravitational field between 1912 and 1915. With his famous “hole argument” he had convinced himself that generally covariant equations of motion (and therefore, in this context, active diffeomorphism invariance) would imply a truly dramatic revolution with respect to the Newtonian notions of space and time (on the hole argument, see Earman and Norton 1987, Rovelli 1991a, Belot 1998). In 1912 he was not able to take this profoundly revolutionary step (Norton 1984, Stachel 1989). 
In 1915 he took this step, and found what Landau calls “the most beautiful of the physical theories”. ### 2.4 Bringing the three steps together In the light of the three steps illustrated above, the task of quantum gravity is clear and well defined. We have learned from GR that spacetime is a dynamical field among the others, obeying dynamical equations, and having independent degrees of freedom. A gravitational wave is extremely similar to an electromagnetic wave. We have learned from QM that every dynamical object has quantum properties, which can be captured by appropriately formulating its dynamical theory within the general scheme of QM. Therefore, spacetime itself must exhibit quantum properties. Its properties, including the metrical properties it defines, must be represented in quantum mechanical terms. Notice that the strength of this “therefore” derives from the confidence we have in the two theories, QM and GR. Now, there is nothing in the basics of QM which contradicts the physical ideas of GR. Similarly, there is nothing in the basics of GR that contradicts the physical ideas of QM. Therefore, there is no a priori impediment in searching for a quantum theory of the gravitational field, that is, a quantum theory of spacetime. The problem is (with some qualification) rather well posed: is there a quantum theory (say, in one formulation, a Hilbert space $`H`$, and a set of self-adjoint operators) whose classical limit is GR? On the other hand, all previous applications of QM to field theory, namely conventional QFTs, rely heavily on the existence of the “stage”, the fixed, non-dynamical, background metric structure. The Minkowski metric $`\eta _{\mu \nu }`$ is essential for the construction of a conventional QFT (it enters everywhere; for instance, in the canonical commutation relations, in the propagator, in the Gaussian measure …). We certainly cannot simply replace $`\eta _{\mu \nu }`$ with a quantum field, because all equations become nonsense. Therefore, to search for a quantum theory of gravity, we have two possible directions. One possibility is to “disvalue” the GR conceptual revolution, reintroduce a background spacetime with a non-dynamical metric $`\eta _{\mu \nu }`$, expand the gravitational field $`g_{\mu \nu }`$ as $`g_{\mu \nu }=\eta _{\mu \nu }+fluctuations`$, quantize only the fluctuations, and hope to recover the full content of GR somewhere down the road. This is the road followed for instance by perturbative string theory. The second direction is to be faithful to what we have learned about the world so far, namely to the QM and the GR insights. We must then search for a QFT that, genuinely, does not require a background space to be defined. But the last three decades have been characterized by the great success of conventional QFT, which neglects GR and is based on the existence of a background spacetime. We live in the aftermath of this success. It is not easy to break away from the mental habits, and from the habitual technical tools, of conventional QFT. Still, this is necessary if we want to build a QFT which fully incorporates active diff invariance, and in which localization is fully relational. In my opinion, this is the right way to go. ## 3 Quantum spacetime ### 3.1 Space Spacetime, or the gravitational field, is a dynamical entity (GR). All dynamical entities have quantum properties (QM). Therefore spacetime is a quantum object. 
It must be described (picking one formulation of QM, but keeping in mind that others may be equivalent, or more effective) in terms of states $`\mathrm{\Psi }`$ in a Hilbert space. Localization is relational. Therefore these states cannot represent quantum excitations localized in some space. They must define space themselves. They must be quantum excitations “of” space, not “in” space. The physical quantities in GR that capture the true degrees of freedom of the theory are invariant under active diffs. Therefore the self-adjoint operators that correspond to physical (predictable) observables in quantum gravity must be associated to diff invariant quantities. Examples of diff-invariant geometric quantities are physical lengths, areas, volumes, or time intervals, of regions determined by dynamical physical objects. These must be represented by operators. Indeed, a measurement of length, area or volume is a measurement of features of the gravitational field. If the gravitational field is a quantum field, then length, area and volume are quantum observables. If the corresponding operator has discrete spectrum, they will be quantized, namely they can take certain discrete values only. In this sense we should expect a discrete geometry. This discreteness of the geometry, implied by the conjunction of GR and QM, is very different from the naive idea that the world is made of discrete bits of something. It is like the discreteness of the quanta of the excitations of a harmonic oscillator. A generic state of spacetime will be a continuous quantum superposition of states whose geometry has discrete features, not a collection of elementary discrete objects. A concrete attempt to construct such a theory is loop quantum gravity. I refer the reader to Rovelli (1997b) for an introduction to the theory, an overview of its structure and results, and full references. Here, I present only a few remarks on the theory. Loop quantum gravity is a rather straightforward application of quantum mechanics to hamiltonian general relativity. It is a QFT in the sense that it is a quantum version of a field theory, or a quantum theory for an infinite number of degrees of freedom, but it is profoundly different from conventional, non-general-relativistic QFT. In conventional QFT, states are quantum excitations of a field over Minkowski (or over a curved) spacetime. In loop quantum gravity, the quantum states turn out to be represented by (suitable linear combinations of) spin networks (Rovelli and Smolin 1995a, Baez 1996, Smolin 1997). A spin network is an abstract graph with links labeled by half-integers. See Figure 1. Intuitively, we can view each node of the graph as an elementary “quantum chunk of space”. The links represent (transverse) surfaces separating the quanta of space. The half-integers associated to the links determine the (quantized) area of these surfaces. Spin networks represent relational quantum states: they are not located in a space. Localization must be defined in relation to them. For instance, if we have, say, a matter quantum excitation, this will be located on the spin network; while the spin network itself is not located anywhere. 
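To fix ideas, here is a minimal sketch of a spin network as a labeled abstract graph. The particular graph and the spin labels below are invented for illustration; the states of the theory are suitable linear combinations of such objects, and nothing here is meant as the actual formalism.

```python
from fractions import Fraction

# A toy spin network: nodes are abstract "quanta of space"; each link
# carries a half-integer spin j.  The geometry is purely relational:
# the graph records only which quanta are adjacent, not where they sit.
nodes = {"n1", "n2", "n3", "n4"}
links = {
    ("n1", "n2"): Fraction(1, 2),
    ("n2", "n3"): Fraction(1, 1),
    ("n3", "n4"): Fraction(3, 2),
    ("n4", "n1"): Fraction(1, 2),
}

# One elementary chunk of space per node.
print(f"{len(nodes)} elementary quanta of space")

# The links incident to a node are the elementary surfaces
# bounding that quantum of space.
def incident_spins(node):
    return [j for (a, b), j in links.items() if node in (a, b)]

print("spins around n1:", incident_spins("n1"))
```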
The operators corresponding to area and volume have been constructed in the theory, simply by starting from the classical expression for the area in terms of the metric, then replacing the metric with the gravitational field (this is the input of GR) and then replacing the gravitational field with the corresponding quantum field operator (this is the input of QM). The construction of these operators requires appropriate generally covariant regularization techniques, but no renormalization: no infinities appear. The spectrum of these operators has been computed and turns out to be discrete (Rovelli and Smolin 1995b, Ashtekar and Lewandowski 1997a, 1997b). Thus, loop quantum gravity provides a family of precise quantitative predictions: the quantized values of area and volume. For instance, the main sequence of the spectrum of the area is $$A=8\pi \hbar G\sum _{i=1}^{n}\sqrt{j_i(j_i+1)}$$ where $`(j_i)=(j_1,\dots ,j_n)`$ is any finite sequence of half-integers. This formula gives the area of a surface pinched by $`n`$ links of a spin network state. The half-integers $`j_1,\dots ,j_n`$ are the ones associated with the $`n`$ links that pinch the surface. This illustrates how the links of the spin network states can be viewed as transversal “quanta of area”. The picture of macroscopic physical space that emerges is then that of a tangle of one-dimensional intersecting quantum excitations, called the weave (Ashtekar, Rovelli and Smolin 1992). Continuous space is formed by the weave in the same manner in which the continuous 2d surface of a T-shirt is formed by woven threads. ### 3.2 Time The aspect of GR’s relationalism that concerns space was largely anticipated by earlier European thinking. Much less so (as far as I am aware) was the aspect of this relationalism that concerns time. GR’s treatment of time is surprising, difficult to fully appreciate, and hard to digest. The time of our perceptions is very different from the time that theoretical physics finds in the world as soon as one exits the minuscule range of physical regimes we are accustomed to. We seem to have a very special difficulty in being open minded about this particular notion. Already special relativity teaches us something about time which many of us have difficulties accepting. According to special relativity, there is absolutely no meaning in saying “right now on Andromeda”. There is no physical meaning in the idea of “the state of the world right now”, because which set of events we consider as “now” is perspectival. The “now” on Andromeda for me might correspond to “a century ago” on Andromeda for you. Thus, there is no single well defined universal time in which the history of the universe “happens”. The modification of the concept of time introduced by GR is much deeper. Let me illustrate this modification. Consider a simple pendulum described by a variable $`Q`$. In Newtonian mechanics, the motion of the pendulum is given by the evolution of $`Q`$ in time, namely by $`Q(T)`$, which is governed by the equation of motion, say $`\ddot{Q}=-\omega ^2Q`$, which has (the two-parameter family of) solutions $`Q(T)=A\mathrm{sin}(\omega T+\varphi )`$. The state of the pendulum at time $`T`$ can be characterized by its position and velocity. From these two, we can compute $`A`$ and $`\varphi `$ and therefore $`Q(T)`$ at any $`T`$. From the physical point of view, we are really describing a situation in which there are two physical objects: a pendulum, whose position is $`Q`$, and a clock, indicating $`T`$. 
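To make this concrete, here is a minimal numerical sketch of the pendulum-plus-clock system; the parameter values for $`A`$, $`\varphi `$ and $`\omega `$ are arbitrary illustrative choices, not taken from the text. Each observation is a joint reading of the two physical objects:

```python
import math

# Arbitrary illustrative parameters of the solution Q(T) = A sin(wT + phi).
A, phi, w = 1.0, 0.3, 2.0

# Each observation is a simultaneous reading of two physical objects:
# the clock (T) and the pendulum (Q).  Neither reading is privileged.
observations = [(T, A * math.sin(w * T + phi))
                for T in [0.0, 0.5, 1.0, 1.5, 2.0]]
for T, Q in observations:
    print(f"clock reads T={T:.1f}, pendulum reads Q={Q:+.3f}")
```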
If we want to take data, we have to repeatedly observe $`Q`$ and $`T`$. Their relation will be given by the equation above. The relation can be represented (for given $`A`$ and $`\varphi `$) by a line in the $`(Q,T)`$ plane. In Newtonian terms, time flows in its absolute way, the clock is just a device to keep track of it, and the dynamical system is formed by the pendulum alone. But we can view the same physical situation from a different perspective. We can say that we have a physical system formed by the clock and the pendulum together, and view the dynamical system as expressing the relative motion of one with respect to the other. This is precisely the perspective of GR: to express the relative motion of the variables, with respect to each other, in a “democratic” fashion. To do that, we can introduce an “arbitrary parameter time” $`\tau `$ as a coordinate on the line in the $`(Q,T)`$ plane. (But keep in mind that the physically relevant information is in the line, not in its coordinatization!) Then the line is represented by two functions, $`Q(\tau )`$ and $`T(\tau )`$, but a reparametrization of $`\tau `$ in the two functions is a gauge, namely it does not modify the physics described. Indeed, $`\tau `$ does not correspond to anything observable, and the equations of motion satisfied by $`Q(\tau )`$ and $`T(\tau )`$ (easy to write, but I will not write them down here) will be invariant under arbitrary reparametrizations of $`\tau `$. Only $`\tau `$-independent quantities have physical meaning. This is precisely what happens in GR, where the “arbitrary parameters”, analogous to the $`\tau `$ of the example, are the coordinates $`x^\mu `$, namely the spatial coordinates $`\stackrel{}{x}`$ and the temporal coordinate $`t`$. These have no physical meaning whatsoever in GR: the connection between the theory and the measurable physical quantities that the theory predicts is only via quantities independent of $`\stackrel{}{x}`$ and $`t`$. Thus, $`\stackrel{}{x}`$ and $`t`$ in GR have a very different physical meaning from their homonyms in non-general-relativistic physics. The latter correspond to readings on rods and clocks; the former correspond to nothing at all. Recall that Einstein described his great intellectual struggle to find GR as “understanding the meaning of the coordinates”. In the example, the invariance of the equations of motion for $`Q(\tau )`$ and $`T(\tau )`$ under reparametrization of $`\tau `$ implies that if we develop the hamiltonian formalism in $`\tau `$ we obtain a constrained system with a (weakly) vanishing hamiltonian. This is because the hamiltonian generates evolution in $`\tau `$, evolution in $`\tau `$ is a gauge, and the generators of gauge transformations are constraints. In canonical GR we have precisely the same situation: the hamiltonian vanishes, the constraints generate evolution in $`t`$, which is unobservable – it is gauge. GR does not describe evolution in time: it describes the relative evolution of many variables with respect to each other. All these variables are democratically equal: there isn’t a preferred one that “is the true time”. This is the temporal aspect of GR’s relationalism. A large part of the machinery of theoretical physics relies on the notion of time (on the different meanings of time in different physical theories, see Rovelli 1995). A theory of quantum gravity should do without. 
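To illustrate the gauge character of $`\tau `$ discussed above, here is a small sketch continuing the previous one; the two parameter functions are arbitrary monotone choices, invented for illustration. Two different coordinatizations $`T(\tau )`$ of the same motion yield exactly the same line in the $`(Q,T)`$ plane:

```python
import math

A, phi, w = 1.0, 0.3, 2.0   # same illustrative solution as above

# Two arbitrary monotone coordinatizations tau -> T(tau) of the same line
# in the (Q, T) plane; only the line itself is physical.
T_1 = lambda tau: tau               # gauge choice 1
T_2 = lambda tau: math.tan(tau)     # gauge choice 2 (a different "speed")

def line(T_of_tau, taus):
    """The gauge-invariant content: points (T, Q) on the motion's line."""
    return [(T_of_tau(tau), A * math.sin(w * T_of_tau(tau) + phi))
            for tau in taus]

# Choose tau values so that both coordinatizations hit the same T's:
Ts = [0.2, 0.5, 0.9]
pts_1 = line(T_1, Ts)                          # here tau equals T
pts_2 = line(T_2, [math.atan(T) for T in Ts])  # tau chosen differently

assert all(abs(q1 - q2) < 1e-12
           for (_, q1), (_, q2) in zip(pts_1, pts_2))
print("different tau-coordinatizations, identical (Q, T) line")
```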
Fortunately, many essential tools that are usually introduced using the notion of time can equally well be defined without mentioning time at all. This, by the way, shows that time plays a much weaker role in the structure of theoretical physics than is usually assumed. Two crucial examples are “phase space” and “state”. The phase space is usually introduced in textbooks as the space of the states of the system “at a given time”. In a general relativistic context, this definition is useless. However, it has been known since Lagrange that there is an alternative, equivalent, definition of phase space as the space of the solutions of the equations of motion. This definition does not require that we know what we mean by time. Thus, in the example above the phase space can be coordinatized by $`A`$ and $`\varphi `$, which coordinatize the space of the solutions of the equations of motion. A time independent notion of “state” is then provided by a point of this phase space, namely by a particular solution of the equations of motion. For instance, for an oscillator a “state”, in this atemporal sense, is characterized by an amplitude $`A`$ and a phase $`\varphi `$. Notice that given the (time-independent) state ($`A`$ and $`\varphi `$), we can compute any observable: in particular, the value $`Q_T`$ of $`Q`$ at any desired $`T`$. Notice also that $`Q_T`$ is independent of $`\tau `$. This point often raises confusion: one may think that if we restrict ourselves to $`\tau `$-independent quantities then we cannot describe evolution. This is wrong: the true evolution is the relation between $`Q`$ and $`T`$, which is $`\tau `$-independent. This relation is expressed in particular by the value (let us denote it $`Q_T`$) of $`Q`$ at a given $`T`$. $`Q_T`$ is given, obviously, by $$Q_T(A,\varphi )=A\mathrm{sin}(\omega T+\varphi ).$$ This can be seen as a one-parameter (the parameter is $`T`$) family of observables on the gauge invariant phase space coordinatized by $`A`$ and $`\varphi `$. Notice that this is a perfectly $`\tau `$-independent expression. In fact, an explicit computation shows that the Poisson bracket between $`Q_T`$ and the hamiltonian constraint that generates evolution in $`\tau `$ vanishes. This time independent notion of state is well known in its quantum mechanical version: it is the Heisenberg state (as opposed to the Schrödinger state). Similarly, the operator corresponding to the observable $`Q_T`$ is the Heisenberg operator that gives the value of $`Q`$ at $`T`$. The Heisenberg and Schrödinger pictures are equivalent if there is a normal time evolution in the theory. In the absence of a normal notion of time evolution, the Heisenberg picture remains viable, while the Schrödinger picture becomes meaningless.<sup>8</sup><sup>8</sup>8In the first edition of his celebrated book on quantum mechanics, Dirac used Heisenberg states (he calls them relativistic). In later editions, he switched to Schrödinger states, explaining in a preface that it was easier to calculate with these, but that it was nevertheless a pity to give up the Heisenberg states, which are more fundamental. In what was perhaps his last public seminar, in Sicily, Dirac used just a single transparency, with just one sentence: “The Heisenberg picture is the right one”. In quantum gravity, only the Heisenberg picture makes sense (Rovelli 1991c, 1991d). In classical GR, a point in the physical phase space, or a state, is a solution of the Einstein equations, up to active diffeomorphisms. A state represents a “history” of spacetime. 
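As an aside, the vanishing Poisson bracket claimed above for the oscillator example can be checked symbolically. The sketch below assumes the standard extended-phase-space form of the parametrized oscillator, with hamiltonian constraint $`C=P_T+\frac{1}{2}(P_Q^2+\omega ^2Q^2)`$ (unit mass) and the corresponding phase-space expression for the evolving constant; neither formula is spelled out in the text, so both are assumptions of the illustration.

```python
import sympy as sp

# Extended phase space of the parametrized oscillator: the clock variable T
# and the pendulum variable Q are both dynamical, with momenta P_T and P_Q.
T, Q, PT, PQ, w, t = sp.symbols('T Q P_T P_Q omega t', real=True)

# Hamiltonian constraint generating the unphysical tau-evolution
# (standard form, assumed here; mass set to 1).
C = PT + (PQ**2 + w**2 * Q**2) / 2

# Evolving constant: the value of Q "when the clock T reads t",
# written as a function on the extended phase space.
Q_t = Q * sp.cos(w * (t - T)) + (PQ / w) * sp.sin(w * (t - T))

def poisson_bracket(f, g):
    """Canonical bracket for the pairs (Q, P_Q) and (T, P_T)."""
    return (sp.diff(f, Q) * sp.diff(g, PQ) - sp.diff(f, PQ) * sp.diff(g, Q)
            + sp.diff(f, T) * sp.diff(g, PT) - sp.diff(f, PT) * sp.diff(g, T))

print(sp.simplify(poisson_bracket(Q_t, C)))  # prints: 0
```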
The quantities that can be univocally predicted are the ones that are independent of the coordinates, namely the ones that are invariant under diffeomorphisms. These quantities have vanishing Poisson brackets with all the constraints. Given a state, the value of each of these quantities is determined. In quantum gravity, a quantum state represents a “history” of quantum spacetime. The observables are represented by operators that commute with all the quantum constraints. If we know the quantum state of spacetime, we can then compute the expectation value of any diffeomorphism invariant quantity, by taking the mean value of the corresponding operator. The observable quantities in quantum gravity are precisely the same as in classical GR. Some of these quantities may express the value of certain variables “when and where” certain other quantities have certain given values. They are the analog of the reparametrization invariant observable $`Q_T`$ in the example above. These quantities describe evolution in a way which is fully invariant under the unphysical gauge evolution in the parameter time (Rovelli 1991d, 1991e). The corresponding quantum operators are Heisenberg operators. There is no Schrödinger picture, because there is no unitary time evolution. There is no need to expect or to search for unitary time evolution in quantum gravity, because there is no time in which we should have unitary evolution. A prejudice that dies hard holds that unitary evolution is required for the consistency of the probabilistic interpretation. This idea is wrong. What I have described is the general form that one may expect a quantum theory of GR to have. I have used the Hilbert space version of QM; but this structure can be translated into other formulations of QM. Of course, physics then works with dirty hands: gauge dependent quantities, approximations, expansions, unphysical structures, and so on. A fully satisfactory construction of the above does not yet exist. A concrete attempt to construct the physical states and the physical observables in loop quantum gravity is given by the spin foam models approach, which is the formulation one obtains by starting from loop quantum gravity and constructing a Feynman sum over histories (Reisenberger and Rovelli 1997, Baez 1998, Barret and Crane 1998). See (Baez 1999) in this volume for more details on the ideas underlying these developments. In quantum gravity, I see no reason to expect a fundamental notion of time to play any role. But the nostalgia for time is hard to resist, for technical as well as for emotional reasons. Many approaches to quantum gravity go out of their way to reinsert into the theory what GR is teaching us we should abandon: a preferred time. The time “along which” things happen is a notion which makes sense only for describing a limited regime of reality. This notion is meaningless already in the (gauge invariant) general relativistic classical dynamics of the gravitational field. At the fundamental level, we should, simply, forget time. ### 3.3 Glimpses I close this section by briefly mentioning two more speculative ideas. The first regards the emergence of time, the second the connection between the relationalism in GR and the relationalism in QM. (i) In the previous section, I have argued that we should search for a quantum theory of gravity in which there is no independent time variable “along” which dynamics “happens”. A problem left open by this position is to understand the emergence of time in our world, with its features, which are familiar to us. 
An idea discussed in (Rovelli 1993a, 1993b, Connes and Rovelli 1994) is that the notion of time isn’t dynamical but rather thermodynamical. We can never give a complete account of the state of a system in a field theory (we cannot access the infinite amount of data needed to completely characterize a state). Therefore we have at best a statistical description of the state. Given a statistical state of a generally covariant system, a notion of a flow (more precisely, a one-parameter group of automorphisms of the algebra of the observables) follows immediately. In the quantum context, this corresponds to the Tomita flow of the state. The relation between this flow and the state is the relation between the time flow generated by the hamiltonian and a Gibbs state: the two essentially determine each other. In the absence of a preferred time, however, any statistical state selects its own notion of statistical time. This statistical time has a striking number of properties that allow us to identify it with the time of non-general-relativistic physics. In particular, a Schrödinger equation with respect to this statistical time holds, in an appropriate sense. In addition, the time flows generated by different states are equivalent up to inner automorphisms of the observable algebra and therefore define a common “outer” flow: a one-parameter group of outer automorphisms. This determines a state independent notion of time flow, which shows that a generally covariant QFT has an intrinsic “dynamics”, even in the absence of a hamiltonian and of a time variable. The suggestion is therefore that the temporal aspects of our world have a statistical and thermodynamical origin, rather than a dynamical one. “Time” is ignorance: a reflection of our incomplete knowledge of the state of the world. (ii) What is QM really telling us about our world? In (Rovelli 1996, 1998), I have argued that what QM is telling us is that the contingent properties of any system –or: the state of any system– must be seen as relative to a second physical system, the “observing system”. That is, the quantum state and the values that observables take are relational notions, in the same sense in which velocity is relational in classical mechanics (it is a relation between two systems, not a property of a single system). I find the consonance between this relationalism in QM and the relationalism in GR quite striking. It is tempting to speculate that they are related. Any quantum interaction (or quantum measurement) involving a system $`A`$ and a system $`B`$ requires $`A`$ and $`B`$ to be spatiotemporally contiguous. Vice versa, spatiotemporal contiguity, which is the grounding of the notions of space and time (derived and dynamical, not primary, in GR), can only be verified quantum mechanically (just because any interaction is quantum mechanical in nature). Thus, the net of the quantum mechanical elementary interactions and the spacetime fabric are actually the same thing. Can we build a consistent picture in which we take this fact into account? To do that, we must identify two notions: the notion of a spatiotemporal (or spatial?) region, and the notion of quantum system. For intriguing ideas in this direction, see (Crane 1991) and, in this volume, (Baez 1999). ## 4 Considerations on method and content ### 4.1 Method Part of the recent reflection about science has emphasized the “non-cumulative” aspect in the development of scientific knowledge. 
According to this view, the evolution of scientific theories is marked by large or small breaking points, in which, to put it very crudely, the empirical facts are just reorganized within new theories. These would be to some extent “incommensurable” with respect to their antecedents. These ideas have influenced physicists. The reader will have noticed that the discussion of quantum gravity I have given above assumes a different reading of the evolution of scientific knowledge. I have based the above discussion of quantum gravity on the idea that the central physical ideas of QM and GR represent our best guide for accessing the extreme and unexplored territories of the quantum-gravitational regime. In my opinion, the emphasis on the incommensurability between theories has probably clarified an important aspect of science, but risks obscuring something of the internal logic according to which, historically, physics finds knowledge. There is a subtle, but definite, cumulative aspect in the progress of physics, which goes far beyond the growth of validity and precision of the empirical content of the theories. In moving from a theory to the theory that supersedes it, we do not save just the verified empirical content of the old theory, but more. This “more” is a central concern for good physics. It is the source, I think, of the spectacular and undeniable predictive power of theoretical physics. Let me illustrate the point I am trying to make with a historical case. There was a problem between the Maxwell equations and the Galilei transformations. There were two obvious ways out: to disvalue Maxwell theory, degrading it to a phenomenological theory of some yet-to-be-discovered ether dynamics; or to disvalue Galilean invariance, accepting the idea that inertial systems are not equivalent in electromagnetic phenomena. Both ways were pursued at the end of the nineteenth century. Both are sound applications of the idea that a scientific revolution may very well change in depth what old theories teach us about the world. Which of the two ways did Einstein take? Neither of them. For Einstein, Maxwell theory was a source of great awe. Einstein rhapsodizes about his admiration for Maxwell theory. For him, Maxwell had opened a new window onto the world. Given the astonishing success of Maxwell theory, empirical (electromagnetic waves), technological (radio) as well as conceptual (understanding what light is), Einstein’s admiration is comprehensible. But Einstein had a tremendous respect for Galileo’s insight as well. Young Einstein was amazed by a book with Huygens’ derivation of collision theory virtually out of Galilean invariance alone. Einstein understood that Galileo’s great intuition –that the notion of velocity is only relative– could not be wrong. I am convinced that in this faith of Einstein in the core of the great Galilean discovery there is very much to learn, for the philosophers of science as well as for contemporary theoretical physicists. So, Einstein believed the two theories, Maxwell and Galileo. He assumed that they would hold far beyond the regime in which they had been tested. He assumed that Galileo had grasped something about the physical world which was, simply, correct. And so had Maxwell. Of course, details had to be adjusted. The core of Galileo’s insight was that all inertial systems are equivalent and that velocity is relative, not the details of the Galilean transformations. 
Einstein knew the Lorentz transformations (found, of course, by Lorentz, not by Einstein), and was able to see that they do not contradict Galileo’s insight. If there was a contradiction in putting the two together, the problem was ours: we were surreptitiously sneaking some incorrect assumption into our deductions. He found the incorrect assumption, which, of course, was that simultaneity could be well defined. It was Einstein’s faith in the essential physical correctness of the old theories that guided him to his spectacular discovery. There are innumerable similar examples in the history of physics that could equally well illustrate this point. Einstein found GR “out of pure thought”, having Newton theory on the one hand and special relativity –the understanding that any interaction is mediated by a field– on the other; Dirac found quantum field theory from the Maxwell equations and quantum mechanics; Newton combined Galileo’s insight that acceleration governs dynamics with Kepler’s insight that the source of the force that governs the motion of the planets is the sun … The list could be long. In all these cases, confidence in the insight that came with some theory, or “taking a theory seriously”, led to major advances that largely extended the original theory itself. Of course, far be it from me to suggest that there is anything simple, or automatic, in figuring out where the true insights are and in finding the way of making them work together. But what I am saying is that figuring out where the true insights are and finding the way of making them work together is the work of fundamental physics. This work is grounded in confidence in the old theories, not in a random search for new ones. One of the central concerns of modern philosophy of science is to face the apparent paradox that scientific theories change, but are nevertheless credible. Modern philosophy of science is to some extent an after-shock reaction to the fall of Newtonian mechanics: a tormented recognition that an extremely successful scientific theory can nevertheless be untrue. But the notion of truth questioned by the event of a successful physical theory being superseded by a more successful one is a narrow-minded one. A physical theory, in my view, is a conceptual structure that we use in order to organize, read and understand the world, and to make predictions about it. A successful physical theory is a theory that does so effectively and consistently. In the light of our experience, there is no reason not to expect that a more effective conceptual structure might always exist. Therefore an effective theory may always show its limits and be replaced by a better one. On the other hand, however, a novel conceptualization cannot but rely on what the previous one has already achieved. When we move to a new city, we are at first confused about its geography. Then we find a few reference points, and we make a rough mental map of the city in terms of these points. Perhaps we see that there is part of the city on the hills and part on the plain. As time goes on, the map gets better. But there are moments in which we suddenly realize that we had it wrong. Perhaps there were indeed two areas with hills, and we were previously confusing the two. Or we had mistaken a big red building for the City Hall, when it was only a residential construction. So we update the mental map. Sometime later, we have learned the names and features of neighborhoods and streets; and the hills, as references, fade away. 
The neighborhood structure of knowledge is more effective than the hill/plain one … The structure changes, but the knowledge increases. And the big red building, now we know it, is not the City Hall, and we know it forever. There are discoveries that are forever: that the Earth is not the center of the universe, that simultaneity is relative, that we do not get rain by dancing. These are steps humanity takes, and does not take back. Some of these discoveries amount simply to cleaning our thinking of wrong, encrusted, or provisional beliefs. But discovering classical mechanics, or discovering electromagnetism, or quantum mechanics, are also discoveries forever. Not because the details of these theories cannot change, but because we have discovered that a large portion of the world can be understood in certain terms, and this is a fact that we will have to keep facing forever. One of the theses of this essay is that general relativity is the expression of one of these insights, which will stay with us “forever”. The insight is that the physical world does not have a stage, that localization and motion are relational only, that diff-invariance (or something physically analogous) is required for any fundamental description of our world. How can a theory be effective even outside the domain for which it was found? How could Maxwell predict radio waves, Dirac predict antimatter and GR predict black holes? How can theoretical thinking be so magically powerful? Of course, we may think that these successes are due to chance and to a historically deformed perspective. Hundreds of theories are proposed; most of them die; the ones that survive are the ones remembered. There is always somebody who wins the lottery, but this is not a sign that humans can magically predict the outcome of the lottery. My opinion is that such an interpretation of the development of science is unjust, and, worse, misleading. It may explain something, but there is more in science. There are tens of thousands of people playing the lottery; there were only two relativistic theories of gravity in 1916, when Einstein predicted that light would be deflected by the sun precisely by an angle of 1.75”. Familiarity with the history of physics, I feel confident to claim, rules out the lottery picture. I think that the answer is simpler. Somebody predicts that the sun will rise tomorrow, and the sun rises. It is not a matter of chance (there aren’t hundreds of people making random predictions about every sort of strange object appearing on the horizon). The prediction that tomorrow the sun will rise is sound. However, it is not guaranteed either. A neutron star could rush in, close to the speed of light, and sweep the sun away. More philosophically, who grants me the right of induction? Why should I be confident that the sun will rise, just because it has risen so many times in the past? I do not know the answer to this question. But what I know is that the predictive power of a theory beyond its own domain is precisely of the same sort. Simply, we learn something about nature (whatever this means). And what we learn is effective in guiding us to predict nature’s behavior. Thus, the spectacular predictive power of theoretical physics is nothing less and nothing more than common induction. And it is as comprehensible (or as incomprehensible) as my ability to predict that the sun will rise tomorrow. 
Simply, nature around us happens to be full of regularities that we understand, whether or not we understand why regularities exist at all. These regularities give us strong confidence –although not certainty– that the sun will rise tomorrow, as well as in the fact that the basic facts about the world found with QM and GR will be confirmed, not violated, in the quantum gravitational regimes that we have not empirically probed. This view is not dominant nowadays in theoretical physics; other attitudes dominate. The “pragmatic” scientist ignores conceptual questions and physical insights, and only cares about developing a theory. This is an attitude that was successful in the sixties in getting to the standard model. The “pessimistic” scientist has little faith in the possibilities of theoretical physics, because he worries that all possibilities are open, and anything might happen between here and the Planck length. The “wild” scientist observes that great scientists had the courage to break with old and respected ideas and assumptions, and to explore new and strange hypotheses. From this observation, the “wild” scientist concludes that to do great science one has to explore strange hypotheses and violate respected ideas. The wilder the hypothesis, the better. I think wildness in physics is sterile. The greatest revolutionaries in science were extremely, almost obsessively, conservative. So, certainly, was the greatest revolutionary, Copernicus, and so was Planck. Copernicus was pushed to the great jump by his pedantic labor on the minute technicalities of the Ptolemaic system (fixing the equant). Kepler was forced to abandon the circles by his extremely technical work on the details of the orbit of Mars. He was using ellipses as approximations to the epicycle-deferent system, when he began to realize that the approximation was fitting the data better than the (supposedly) exact curve. And Einstein and Dirac, too, were extremely conservative. Their vertiginous steps ahead were not pulled out of the blue sky. They did not come from violating respected ideas, but, on the contrary, from respect towards physical insights. In physics, novelty has always emerged from new data and from a humble, devoted interrogation of the old theories: from turning these theories around and around, immersing ourselves in them, making them clash, merge, talk, until, through them, the missing gear could be seen. In my opinion, precious research energies are today lost in these attitudes. I worry that a philosophy of science that downplays the component of factual knowledge in physical theories may bear part of the responsibility. ### 4.2 On content and truth in physical theories If a physical theory is a conceptual structure that we use to organize, read and understand the world, then scientific thinking is not much different from common sense thinking. In fact, it is only a better instance of the same activity: thinking about the world. Science is the enterprise of continuously exploring the possible ways of thinking about the world, and constantly selecting the ones that work best. If so, there cannot be any qualitative difference between the theoretical notions introduced in science and the terms of our everyday language. A fundamental intuition of classical empiricism is that nothing grants us the “reality” of the referents of the notions we use to organize our perceptions. Some modern philosophy of science has emphasized the application of this intuition to the concepts introduced by science. 
Thus, we are warned to doubt the “reality” of the theoretical objects (electrons, fields, black holes …). I find these warnings incomprehensible. Not because they are ill-founded, but because they are not applied consistently. The fathers of empiricism consistently applied this intuition to any physical object. Who grants me the reality of a chair? Why should a chair be more than a theoretical concept organizing certain regularities in my perceptions? I will not venture here into disputing or agreeing with this doctrine. What I find incomprehensible is the position of those who grant the solid status of reality to a chair, but not to an electron. The arguments against the reality of the electron apply to the chair as well. The arguments in favor of the reality of the chair apply to the electron as well. A chair, as well as an electron, is a concept that we use to organize, read and understand the world. They are equally real. They are equally volatile and uncertain. Perhaps this curious schizophrenic attitude of being antirealist about electrons and iron realist about chairs is the result of a complex historical evolution. First there was the rebellion against “metaphysics”, and, with it, the granting of confidence to science alone. From this point of view, metaphysical questioning about the reality of chairs is sterile – true knowledge is in science. Thus, it is to scientific knowledge that we apply empiricist rigor. But understanding science in empiricist terms required making sense of the raw empirical data on which science is based. With time, the idea of raw empirical data increasingly showed its limits. The common sense view of the world was reconsidered as a player in our picture of knowledge. This common sense view should give us a language and a ground from which to start – the old anti-metaphysical prejudice still preventing us, however, from applying empiricist rigor to this common sense view of the world as well. But if one is not interested in questioning the reality of chairs, then, for the very same reason, why should one be interested in questioning the “reality of the electrons”? Again, I think this point is important for science itself. The factual content of a theory is our best tool. The faith in this factual content does not prevent us from being ready to question the theory itself, if we are sufficiently compelled to do so by novel empirical evidence or by putting the theory in relation to other things we know about the world. Scientific antirealism, in my opinion, is not only a short-sighted application of a deep classical empiricist insight; it is also a negative influence on the development of science. H. Stein (1999) has recently beautifully illustrated a case in which a great scientist, Poincaré, was blocked from getting to a major discovery (special relativity) by a philosophy that restrained him from “taking seriously” his own findings. Science teaches us that our naive view of the world is imprecise, inappropriate, biased. It constructs better views of the world. Electrons, if anything at all, are “more real” than chairs, not “less real”, in the sense that they ground a more powerful way of conceptualizing the world. On the other hand, the process of scientific discovery, and the experience of this century in particular, has made us painfully aware of the provisional character of any form of knowledge. Our mental and mathematical pictures of the world are only mental and mathematical pictures. 
This is true for abstract scientific theories as well as for the image we have of our dining room. Nevertheless, the pictures are powerful and effective, and we can’t do any better than that. So, is there anything we can say with confidence about the “real world”? A large part of the recent reflection on science has taught us that raw data do not exist, and that any information about the world is already deeply filtered and interpreted by the theory. Further than that, we could even think, as in the dream of Berkeley, that there is no “reality” out there. The European reflection (and part of the American as well) has emphasized the fact that truth is always internal to the theory, that we can never exit language, that we can never exit the circle of discourse within which we are speaking. It might very well be so. But if the only notion of truth is internal to the theory, then this internal truth is what we mean by truth. We cannot exit from our own conceptual scheme. We cannot put ourselves outside our discourse. Outside our theory. There may be no notion of truth outside our own discourse. But it is precisely “from within the language” that we can assert the reality of the world. And we certainly do so. Indeed, it is more than that: it is structural to our language to be a language about the world, and to our thinking to be a thinking of the world. Therefore, precisely because there is no notion of truth except the one in our own discourse, precisely for this reason, there is no sense in denying the reality of the world. The world is real, solid, and understandable by science. The best we can say about the physical world, and about what there is in the world, is what good physics says about it. At the same time, our perceiving, understanding, and conceptualizing the world is in continuous evolution, and science is the form of this evolution. At every stage, the best we can say about the reality of the world is precisely what we are saying. The fact that we will understand it better later on does not make our present understanding less valuable, or less credible. A map is not false because there is a better map, even if the better one looks quite different. Searching for a fixed point on which to rest our restlessness is, in my opinion, naive, useless and counterproductive for the development of science. It is only by believing our insights and, at the same time, questioning our mental habits that we can go ahead. This process of cautious faith and self-confident doubt is the core of scientific thinking. Exploring the possible ways of thinking of the world, being ready to subvert, if required, our ancient prejudices, is among the greatest and the most beautiful of the human adventures. Quantum gravity, in my view, in its effort to conceptualize quantum spacetime, and to modify in depth the notion of time, is a step in this adventure. * Ashtekar A, Rovelli C, Smolin L (1992). Weaving a classical metric with quantum threads, Physical Review Letters 69, 237. * Ashtekar A, Lewandowski J (1997a). Quantum Theory of Geometry I: Area Operators, Classical and Quantum Gravity 14, A55–A81. * Ashtekar A, Lewandowski J (1997b). Quantum Theory of Geometry II: Volume Operators, gr-qc/9711031. * Baez J (1997). Spin networks in nonperturbative quantum gravity, in The Interface of Knots and Physics, ed L Kauffman (American Mathematical Society, Providence). * Baez J (1998). Spin foam models, Classical and Quantum Gravity 15, 1827–1858. * Baez J (1999). 
In Physics Meets Philosophy at the Planck Scale, C Callender and N Huggett eds (Cambridge University Press), to appear. * Barbour J (1989). Absolute or Relative Motion? (Cambridge University Press, Cambridge). * Barret J, Crane L (1998). Relativistic spin networks and quantum gravity, Journal of Mathematical Physics 39, 3296–3302. * Belot G (1998). Why general relativity does need an interpretation, Philosophy of Science 63, S80–S88. * Connes A, Rovelli C (1994). Von Neumann algebra automorphisms and time versus thermodynamics relation in general covariant quantum theories, Classical and Quantum Gravity 11, 2899. * Crane L (1991). 2d physics and 3d topology, Communications in Mathematical Physics 135, 615–640. * Descartes R (1983). Principia Philosophiae, translated by VR Miller and RP Miller (Reidel, Dordrecht). * Earman J (1989). World Enough and Spacetime: Absolute versus Relational Theories of Spacetime (MIT Press, Cambridge). * Earman J, Norton J (1987). What Price Spacetime Substantivalism? The Hole Story, British Journal for the Philosophy of Science 38, 515–525. * Isham C (1999). In Physics Meets Philosophy at the Planck Scale, C Callender and N Huggett eds (Cambridge University Press), to appear. * Newton I (1962). De Gravitatione et Aequipondio Fluidorum, translation in AR Hall and MB Hall eds, Unpublished Papers of Isaac Newton (Cambridge University Press, Cambridge). * Norton J D (1984). How Einstein Found His Field Equations: 1912–1915, Historical Studies in the Physical Sciences 14, 253–315. Reprinted in Einstein and the History of General Relativity: Einstein Studies, D Howard and J Stachel eds, Vol. I, 101–159 (Birkhäuser, Boston). * Penrose R (1995). The Emperor’s New Mind (Oxford University Press). * Reisenberger M, Rovelli C (1997). Sum over Surfaces Form of Loop Quantum Gravity, Physical Review D56, 3490–3508, gr-qc/9612035. * Rovelli C (1991a). What is observable in classical and quantum gravity?, Classical and Quantum Gravity 8, 297. * Rovelli C (1991b). Quantum reference systems, Classical and Quantum Gravity 8, 317. * Rovelli C (1991c). Quantum mechanics without time: a model, Physical Review D42, 2638. * Rovelli C (1991d). Time in quantum gravity: an hypothesis, Physical Review D43, 442. * Rovelli C (1991e). Quantum evolving constants, Physical Review D44, 1339. * Rovelli C (1993a). Statistical mechanics of gravity and thermodynamical origin of time, Classical and Quantum Gravity 10, 1549. * Rovelli C (1993b). The statistical state of the universe, Classical and Quantum Gravity 10, 1567. * Rovelli C (1993c). A generally covariant quantum field theory and a prediction on quantum measurements of geometry, Nuclear Physics B405, 797. * Rovelli C (1995). Analysis of the different meaning of the concept of time in different physical theories, Il Nuovo Cimento 110B, 81. * Rovelli C (1996). Relational Quantum Mechanics, International Journal of Theoretical Physics 35, 1637. * Rovelli C (1997a). Half way through the woods, in The Cosmos of Science, J Earman and JD Norton eds (University of Pittsburgh Press and Universitäts Verlag Konstanz). * Rovelli C (1997b). Loop Quantum Gravity, Living Reviews in Relativity (refereed electronic journal), http://www.livingreviews.org/Articles/Volume1/1998-1rovelli; gr-qc/9709008. * Rovelli C (1998). Incerto tempore, incertisque loci: Can we compute the exact time at which the quantum measurement happens?, Foundations of Physics 28, 1031–1043, quant-ph/9802020. * Rovelli C (1999). 
Strings, loops and the others: a critical survey of the present approaches to quantum gravity, in Gravitation and Relativity: At the Turn of the Millennium, N Dadhich and J Narlikar eds (Poona University Press), gr-qc/9803024. * Rovelli C, Smolin L (1988). Knot theory and quantum gravity, Physical Review Letters 61, 1155. * Rovelli C, Smolin L (1990). Loop space representation for quantum general relativity, Nuclear Physics B331, 80. * Rovelli C, Smolin L (1995a). Spin Networks and Quantum Gravity, Physical Review D53, 5743. * Rovelli C, Smolin L (1995b). Discreteness of Area and Volume in Quantum Gravity, Nuclear Physics B442, 593. Erratum: Nuclear Physics B456, 734. * Smolin L (1997). The future of spin networks, gr-qc/9702030. * Stachel J (1989). Einstein’s search for general covariance 1912–1915, in Einstein Studies, D Howard and J Stachel eds, vol 1, 63–100 (Birkhäuser, Boston). * Stein H (1999). Physics and philosophy meet: the strange case of Poincaré. Unpublished. * Zeilinger A, in Gravitation and Relativity: At the Turn of the Millennium, N Dadhich and J Narlikar eds (Poona University Press).
# Applications of Kawamata’s positivity theorem ## 0. Introduction In this paper we treat some applications of Kawamata’s positivity theorem (see Theorem (1.4)), which are related to the following important problem. ###### Problem 0.1. Let $`(X,\Delta )`$ be a proper klt pair. Let $`f:X\to S`$ be a proper surjective morphism onto a normal variety $`S`$ with connected fibers. Assume that $`K_X+\Delta \sim _{\mathbb{Q},f}0`$. Then is there any effective $`\mathbb{Q}`$-divisor $`B`$ on $`S`$ such that $$K_X+\Delta \sim _\mathbb{Q}f^{*}(K_S+B)$$ and that the pair $`(S,B)`$ is again klt? A special case of Problem (0.1) was studied in \[KeMM, Section 3\], where the general fibers are rational curves, as a step toward the proof of the three-dimensional log abundance theorem. Thanks to Kodaira’s canonical bundle formula for elliptic surfaces and \[KeMM, Section 3\], the problem has an affirmative answer under the assumption that $`\mathrm{dim}X\le 2`$. However, the problem is much harder in higher dimensions because of the lack of a canonical bundle formula and so forth. In this paper, we prove the following theorem as an application of the positivity theorem of Kawamata; it can be viewed as a partial answer to Problem (0.1). This is the main theorem of this paper. ###### Theorem 0.2. Let $`(X,\Delta )`$ be a proper sub klt pair. Let $`f:X\to S`$ be a proper surjective morphism onto a normal variety $`S`$ with connected fibers. Assume that $`\mathrm{dim}_{k(\eta )}f_{*}𝒪_X(\lceil -\Delta \rceil )\otimes _{𝒪_S}k(\eta )=1`$, where $`\eta `$ is the generic point of $`S`$. Assume moreover that $`K_X+\Delta \sim _{\mathbb{Q},f}0`$, that is, there exists a $`\mathbb{Q}`$-Cartier $`\mathbb{Q}`$-divisor $`A`$ on $`S`$ such that $`K_X+\Delta \sim _\mathbb{Q}f^{*}A`$. Let $`H`$ be an ample Cartier divisor on $`S`$, and $`ϵ`$ a positive rational number. Then there exists a $`\mathbb{Q}`$-divisor $`B`$ on $`S`$ such that $$K_S+B\sim _\mathbb{Q}A+ϵH,$$ $$K_X+\Delta +ϵf^{*}H\sim _\mathbb{Q}f^{*}(K_S+B),$$ and such that the pair $`(S,B)`$ is sub klt. Furthermore, if $`f_{*}𝒪_X(\lceil -\Delta \rceil )=𝒪_S`$, then we can make $`B`$ effective, that is, $`(S,B)`$ is klt. In particular, $`S`$ has only rational singularities. By using Theorem (0.2), we obtain the cone theorem for the base space $`S`$ (see Theorem (3.1)), whose proof is given in Section 4 for the reader’s convenience. We also prove that the target space of an extremal contraction is at worst “Kawamata log terminal” (see Corollary (3.5)). Corollary (3.7) is a reformulation of a result of Nakayama. In this paper, we will work over $`\mathbb{C}`$, the complex number field, and make use of the standard notation as in \[KoM, Notation 0.4\]. ###### Acknowledgments. I would like to express my sincere gratitude to Professors Shigefumi Mori and Noboru Nakayama for many useful comments. I am grateful to Dr. Daisuke Matsushita, who informed me of the paper \[N2\], which was a starting point of this paper. I am also grateful to Professor Yoichi Miyaoka for warm encouragement. ## 1. Definitions and Preliminaries We make some definitions and cite the key theorem in this section. ###### Definition 1.1. Let $`f:X\to S`$ be a proper surjective morphism of normal varieties with connected fibers. 1. A divisor $`D`$ is called $`f`$-exceptional if $`\mathrm{codim}_Sf(D)\ge 2`$. 
2. Two $`\mathbb{Q}`$-divisors $`\Delta `$ and $`\Delta ^{}`$ on $`X`$ are called $`\mathbb{Q}`$-linearly $`f`$-equivalent, denoted by $`\Delta \sim _{\mathbb{Q},f}\Delta ^{}`$, if there exists a positive integer $`r`$ such that $`r\Delta `$ and $`r\Delta ^{}`$ are linearly $`f`$-equivalent (see \[KoM, Notation 0.4 (5)\]). ###### Definition 1.2. A pair $`(X,\Delta )`$ of a normal variety and a $`\mathbb{Q}`$-divisor $`\Delta =\sum _id_i\Delta _i`$ is said to be sub Kawamata log terminal (sub klt, for short) (resp. divisorial log terminal (dlt, for short)) if the following conditions are satisfied: 1. $`K_X+\Delta `$ is a $`\mathbb{Q}`$-Cartier $`\mathbb{Q}`$-divisor; 2. $`d_i<1`$ (resp. $`0\le d_i\le 1`$); 3. there exists a log resolution (see \[KoM, Notation 0.4 (10)\]) $`\mu :Y\to X`$ such that $`a_j>-1`$ for all $`j`$ in the canonical bundle formula $$K_Y+\mu _{*}^{-1}\Delta =\mu ^{*}(K_X+\Delta )+\sum _ja_jE_j.$$ A pair $`(X,\Delta )`$ is called Kawamata log terminal (klt, for short) if $`(X,\Delta )`$ is sub klt and $`\Delta `$ is effective. We say that a variety $`X`$ has only canonical singularities if $`(X,0)`$ is klt and $`a_j\ge 0`$ for all $`j`$ in (3). The notion of a divisorial log terminal pair was first introduced by V. V. Shokurov in his paper \[Sh\] (for another equivalent definition, see \[Sz, Divisorial Log Terminal Theorem\] and \[KoM, Definition 2.37\]). ###### Definition 1.3. Let $`f:X\to S`$ be a smooth surjective morphism of varieties with connected fibers. A reduced effective divisor $`D=\sum _iD_i`$ on $`X`$ such that $`D_i`$ is mapped onto $`S`$ for every $`i`$ is said to be relatively normal crossing if the following condition holds. For each closed point $`x`$ of $`X`$, there exists an open neighborhood $`U`$ (with respect to the classical topology) and $`u_1,\dots ,u_k\in 𝒪_{X,x}`$ inducing a regular system of parameters on $`f^{-1}f(x)`$ at $`x`$, where $`k=\mathrm{dim}_xf^{-1}f(x)`$, such that $`D\cap U=\{u_1\cdots u_l=0\}`$ for some $`l`$ with $`0\le l\le k`$. The next theorem is \[Ka3, Theorem 2\], which plays an essential role in this paper. The conditions (2) and (3) are different from the original ones, but we do not have to change the proof in \[Ka3\]. ###### Theorem 1.4 (Kawamata’s positivity theorem). Let $`g:Y\to T`$ be a surjective morphism of smooth projective varieties with connected fibers. Let $`P=\sum _jP_j`$ and $`Q=\sum _lQ_l`$ be normal crossing divisors on $`Y`$ and $`T`$, respectively, such that $`g^{-1}(Q)\subset P`$ and $`g`$ is smooth over $`T\setminus Q`$. Let $`D=\sum _jd_jP_j`$ be a $`\mathbb{Q}`$-divisor on $`Y`$ ($`d_j`$’s may be negative), which satisfies the following conditions: 1. $`D=D^h+D^v`$ such that every irreducible component of $`D^h`$ is mapped surjectively onto $`T`$ by $`g`$, $`g:\mathrm{Supp}(D^h)\to T`$ is relatively normal crossing over $`T\setminus Q`$, and $`g(\mathrm{Supp}(D^v))\subset Q`$. An irreducible component of $`D^h`$ (resp. $`D^v`$) is called horizontal (resp. vertical). 2. $`d_j<1`$ if $`P_j`$ is not $`g`$-exceptional. 3. $`\mathrm{dim}_{k(\eta )}g_{*}𝒪_Y(\lceil -D\rceil )\otimes _{𝒪_T}k(\eta )=1`$, where $`\eta `$ is the generic point of $`T`$. 4. $`K_Y+D\sim _\mathbb{Q}g^{*}(K_T+L)`$ for some $`\mathbb{Q}`$-divisor $`L`$ on $`T`$. Let $$g^{*}Q_l=\sum _jw_{lj}P_j,$$ $$\overline{d}_j=\frac{d_j+w_{lj}-1}{w_{lj}}\text{ if }g(P_j)=Q_l,$$ $$\delta _l=\mathrm{max}\{\overline{d}_j;g(P_j)=Q_l\},$$ $$\Delta _0=\sum _l\delta _lQ_l,$$ $$M=L-\Delta _0.$$ Then $`M`$ is nef. 
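As a toy illustration of the discriminant construction in Theorem (1.4) (the weights and coefficients below are hypothetical, chosen only to display the arithmetic), suppose a component $`Q_l`$ of $`Q`$ satisfies $$g^{*}Q_l=2P_1+P_2,\qquad d_1=\frac{1}{2},\quad d_2=-\frac{1}{3}.$$ Then $$\overline{d}_1=\frac{1/2+2-1}{2}=\frac{3}{4},\qquad \overline{d}_2=\frac{-1/3+1-1}{1}=-\frac{1}{3},$$ so $`\delta _l=\mathrm{max}\{3/4,-1/3\}=3/4`$ and $`Q_l`$ enters $`\Delta _0`$ with coefficient $`3/4`$. Note that $`d_j<1`$ gives $`\overline{d}_j=1+(d_j-1)/w_{lj}<1`$, so every coefficient of $`\Delta _0`$ is less than $`1`$; this is what keeps the pairs built from $`\Delta _0`$ sub klt in the applications below. 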
## 2. Proof of the Main Theorem ###### Proof of Theorem (0.2). (cf. \[N2, Theorem 2\]) By using the desingularization theorem (cf. \[Sz, Resolution Lemma\]) we have the following commutative diagram: $$\begin{array}{ccc}Y& \stackrel{\nu }{\longrightarrow }& X\\ {\scriptstyle g}\downarrow & & \downarrow {\scriptstyle f}\\ T& \stackrel{\mu }{\longrightarrow }& S\end{array}$$ where 1. $`Y`$ and $`T`$ are smooth projective varieties, 2. $`\nu `$ and $`\mu `$ are projective birational morphisms, 3. we define $`\mathbb{Q}`$-divisors $`D`$ and $`L`$ on $`Y`$ and $`T`$ by the following relations: $$K_Y+D=\nu ^{*}(K_X+\Delta ),$$ $$K_T+L\sim _\mathbb{Q}\mu ^{*}A,$$ 4. there are simple normal crossing divisors $`P`$ and $`Q`$ on $`Y`$ and $`T`$ such that they satisfy the conditions of Theorem (1.4), and there exists a set of positive rational numbers $`\{s_l\}`$ such that $`\mu ^{*}H-\sum _ls_lQ_l`$ is ample. By the construction, the conditions (1) and (4) of Theorem (1.4) are satisfied. Since $`(X,\Delta )`$ is sub klt, the condition (2) of Theorem (1.4) is satisfied. The condition (3) of Theorem (1.4) can be checked by the following claim. Note that $`\mu `$ is birational. We put $`h:=f\circ \nu `$. ###### Claim (A). $`𝒪_S\subset h_{*}𝒪_Y(\lceil -D\rceil )\subset f_{*}𝒪_X(\lceil -\Delta \rceil )`$. ###### Proof of Claim (A). Since $`𝒪_Y\subset 𝒪_Y(\lceil -D\rceil )`$, we have $`𝒪_S=h_{*}𝒪_Y\subset h_{*}𝒪_Y(\lceil -D\rceil )`$. Note that $`\lceil -D\rceil =\nu _{*}^{-1}\lceil -\Delta \rceil +F`$, where $`F`$ is effective and $`\nu `$-exceptional. Then $$\Gamma (U,\nu _{*}𝒪_Y(\lceil -D\rceil ))\subset \Gamma (U\setminus \nu (F),\nu _{*}𝒪_Y(\lceil -D\rceil ))=\Gamma (U\setminus \nu (F),\nu _{*}𝒪_Y(\nu _{*}^{-1}\lceil -\Delta \rceil ))\subset \Gamma (U\setminus \nu (F),𝒪_X(\lceil -\Delta \rceil ))=\Gamma (U,𝒪_X(\lceil -\Delta \rceil )),$$ where $`U`$ is a Zariski open set of $`X`$. So we have $`\nu _{*}𝒪_Y(\lceil -D\rceil )\subset 𝒪_X(\lceil -\Delta \rceil )`$. Then $`h_{*}𝒪_Y(\lceil -D\rceil )\subset f_{*}𝒪_X(\lceil -\Delta \rceil )`$. We get Claim (A). ∎ So we can apply Theorem (1.4) to $`g:Y\to T`$. The divisors $`\Delta _0`$ and $`M`$ are as in Theorem (1.4). Then $`M`$ is nef. Since $`M`$ is nef, we have that $$M+ϵ\mu ^{*}H-ϵ^{}\sum _ls_lQ_l$$ is ample for $`0<ϵ^{}\ll ϵ`$. We take a general Cartier divisor $$F_0\in |m(M+ϵ\mu ^{*}H-ϵ^{}\sum _ls_lQ_l)|$$ for a sufficiently large and divisible integer $`m`$. We can assume that $`\mathrm{Supp}(F_0+\sum _lQ_l)`$ is a simple normal crossing divisor. And we define $`F:=(1/m)F_0`$. Then $$L+ϵ\mu ^{*}H\sim _\mathbb{Q}F+\Delta _0+ϵ^{}\sum _ls_lQ_l.$$ Let $`B_0:=F+\Delta _0+ϵ^{}\sum _ls_lQ_l`$ and $`B:=\mu _{*}B_0`$. We have $`K_T+B_0=\mu ^{*}(K_S+B)`$. By the definition, $`\lceil -\Delta _0\rceil \ge 0`$. So $`\lceil -(F+\Delta _0+ϵ^{}\sum _ls_lQ_l)\rceil \ge 0`$ when $`ϵ^{}`$ is small enough. Then $`(S,B)`$ is sub klt. By the construction we have $$K_S+B\sim _\mathbb{Q}A+ϵH,$$ $$K_X+\Delta +ϵf^{*}H\sim _\mathbb{Q}f^{*}(K_S+B).$$ If we assume furthermore that $`f_{*}𝒪_X(\lceil -\Delta \rceil )=𝒪_S`$, we can prove the following claim. ###### Claim (B). If $`\mu _{*}Q_l\ne 0`$, then $`\delta _l\ge 0`$. ###### Proof of Claim (B). If $`\lceil -d_j\rceil \ge w_{lj}`$ for every $`j`$, then $`\lceil -D\rceil \ge g^{*}Q_l`$. So $`g_{*}𝒪_Y(\lceil -D\rceil )\supset 𝒪_T(Q_l)`$. Then $`𝒪_S=h_{*}𝒪_Y(\lceil -D\rceil )\supset \mu _{*}𝒪_T(Q_l)`$ by Claim (A). It is a contradiction. 
So we have that $`\lceil -d_j\rceil <w_{lj}`$ for some $`j`$. Since $`w_{lj}`$ is an integer, we have that $`-d_j+1\le w_{lj}`$. Then $`\overline{d}_j\ge 0`$. We get $`\delta _l\ge 0`$. ∎ So $`B`$ is effective if $`f_{*}𝒪_X(\lceil -\Delta \rceil )=𝒪_S`$. This completes the proof. ∎ Note that Theorem (0.2) implies a generalization of Kollár’s result \[Ko2, Remark 3.16\]. ## 3. Applications of the Main Theorem The following theorem is the cone theorem for $`(S,A-K_S)`$. It implies the argument in \[C, (5.4.2)\]. ###### Theorem 3.1 (Generalized Cone Theorem). In Theorem (0.2) we assume that $`f_{*}𝒪_X(\lceil -\Delta \rceil )=𝒪_S`$. Then we have the cone theorem for $`S`$ as follows: 1. There are (possibly countably many) rational curves $`C_j\subset S`$ such that $`\mathbb{R}_{\ge 0}[C_j]`$ is an $`A`$-negative extremal ray for every $`j`$ and $$\overline{\mathrm{NE}}(S)=\overline{\mathrm{NE}}(S)_{A\ge 0}+\sum _j\mathbb{R}_{\ge 0}[C_j].$$ 2. For any $`\delta >0`$ and every ample $`\mathbb{Q}`$-divisor $`F`$, $$\overline{\mathrm{NE}}(S)=\overline{\mathrm{NE}}(S)_{(A+\delta F)\ge 0}+\sum _{\text{finite}}\mathbb{R}_{\ge 0}[C_j].$$ 3. The contraction theorem holds for any $`A`$-negative extremal face (for a more precise statement, see \[KoM, Theorem 3.7 (3), (4)\]). ###### Proof. See Section 4. ∎ The next theorem is a partial answer to Problem (0.1) under some assumptions. ###### Theorem 3.2. Let $`(X,\Delta )`$ be a proper sub klt pair. Let $`f:X\to S`$ be a proper surjective morphism onto a normal projective variety with connected fibers. Assume that $`K_X+\Delta \sim _{\mathbb{Q},f}0`$ and $`f_{*}𝒪_X(\lceil -\Delta \rceil )=𝒪_S`$. Assume that $`S`$ is $`\mathbb{Q}`$-factorial with Picard number $`\rho (S)=1`$ and irregularity $`q(S)=0`$. Then there is an effective $`\mathbb{Q}`$-divisor $`\Delta ^{}`$ on $`S`$ such that (!) $$K_X+\Delta \sim _\mathbb{Q}f^{*}(K_S+\Delta ^{}),$$ and that the pair $`(S,\Delta ^{})`$ is klt. ###### Proof. We use the same notation as in the proof of Theorem (0.2). By Theorem (0.2) and the $`\mathbb{Q}`$-factoriality of $`S`$, we have that $`(S,\mu _{*}\Delta _0)`$ is klt. The $`\mathbb{Q}`$-divisor $`\mu _{*}M`$ is either an ample $`\mathbb{Q}`$-Cartier $`\mathbb{Q}`$-divisor or $`\mu _{*}M\sim _\mathbb{Q}0`$, since $`S`$ is $`\mathbb{Q}`$-factorial with $`\rho (S)=1`$ and $`q(S)=0`$. When $`\mu _{*}M\sim _\mathbb{Q}0`$, we put $`\Delta ^{}:=\mu _{*}\Delta _0`$. So $`(S,\Delta ^{})`$ is klt and satisfies (!). When $`\mu _{*}M`$ is ample, we take a sufficiently large and divisible integer $`k`$ such that $`|k\mu _{*}M|`$ is very ample. Let $`C`$ be a general member of $`|k\mu _{*}M|`$. We put $`\Delta ^{}:=(1/k)C+\mu _{*}\Delta _0`$. Then $`(S,\Delta ^{})`$ is klt and satisfies (!). ∎ ###### Remark 3.3. In Theorem (3.2), the assumption $`q(S)=0`$ is satisfied if $`-K_S`$ is nef and big. This is because $`S`$ is klt by Theorem (0.2) and the $`\mathbb{Q}`$-factoriality of $`S`$, so $`q(S)=h^1(S,𝒪_S)=0`$ by the Kawamata–Viehweg vanishing theorem. ###### Remark 3.4. Under the assumption that $`X`$ has only canonical singularities and the general fibers of $`f`$ are smooth elliptic curves, Problem (0.1) was solved (see \[N1, Corollary 0.4\]). The next corollary is a generalization of Kollár’s theorem (see \[Ko1, Corollary 7.4\]). ###### Corollary 3.5. Let $`(X,\Delta )`$ be a projective dlt pair. Let $`f:X\to S`$ be an extremal contraction (see \[KMM, Theorem 3-2-1\]). Then there exists an effective $`\mathbb{Q}`$-divisor $`\Delta ^{}`$ on $`S`$ such that $`(S,\Delta ^{})`$ is klt. In particular, $`S`$ has only rational singularities. 
###### Proof.

Let $`H`$ be an ample $`\mathbb{Q}`$-Cartier $`\mathbb{Q}`$-divisor on $`S`$ such that $`H^{}:=-(K_X+\mathrm{\Delta })+f^{*}H`$ is ample. Let $`m`$ be a positive integer such that $`m\mathrm{\Delta }`$ is an integral divisor. Let $`m^{}`$ be a sufficiently large and divisible integer such that $`𝒪_X(m\mathrm{\Delta }+m^{}H^{})`$ is generated by global sections. We take a general member $`D^{}\in |m\mathrm{\Delta }+m^{}H^{}|`$. Then $`H^{}\sim _{\mathbb{Q}}(1/m^{})(D^{}-m\mathrm{\Delta })`$. So we have that $`K_X+\mathrm{\Delta }+(ϵ/m^{})(D^{}-m\mathrm{\Delta })`$ is $`\mathbb{Q}`$-Cartier and klt for any rational number $`0<ϵ\ll 1`$ (see \[KoM, Proposition 2.43\]). We take a sufficiently large and divisible integer $`k`$ such that $`kH^{}`$ is very ample. Let $`D^{\prime \prime }`$ be a general member of $`|kH^{}|`$. Then $$(X,\mathrm{\Delta }+\frac{ϵ}{m^{}}(D^{}-m\mathrm{\Delta })+\frac{1-ϵ}{k}D^{\prime \prime })$$ is klt and $$K_X+\mathrm{\Delta }+\frac{ϵ}{m^{}}(D^{}-m\mathrm{\Delta })+\frac{1-ϵ}{k}D^{\prime \prime }\sim _{\mathbb{Q},f}0.$$ Indeed, $`(ϵ/m^{})(D^{}-m\mathrm{\Delta })\sim _{\mathbb{Q}}ϵH^{}`$ and $`((1-ϵ)/k)D^{\prime \prime }\sim _{\mathbb{Q}}(1-ϵ)H^{}`$, so the left-hand side is $`\mathbb{Q}`$-linearly equivalent to $`K_X+\mathrm{\Delta }+H^{}=f^{*}H\sim _{\mathbb{Q},f}0`$. Apply Theorem (0.2) to $$f:(X,\mathrm{\Delta }+\frac{ϵ}{m^{}}(D^{}-m\mathrm{\Delta })+\frac{1-ϵ}{k}D^{\prime \prime })\to S.$$ We get the result. ∎

###### Corollary 3.6.

Let $`f:X\to S`$ be a Mori fiber space (for the definition of a Mori fiber space, see \[C, (1.2)\]). Then $`S`$ is klt.

###### Proof.

Apply Corollary (3.5). Note that $`S`$ is $`\mathbb{Q}`$-factorial (see \[KoM, Corollary 3.18\]). ∎

The following corollary is a reformulation of \[N3, Corollary A.4.4\], whose assumption is slightly different from ours. Our proof is much simpler than that of \[N3, Corollary A.4.4\], but we can only treat the global situation. For the non-projective case, we refer the reader to \[N3, Appendix\].

###### Corollary 3.7.

Let $`(X,\mathrm{\Delta })`$ be a proper sub klt pair. Let $`f:X\to S`$ be a proper surjective morphism onto a normal projective variety $`S`$ with connected fibers. Assume that $`f_{*}𝒪_X(\lceil -\mathrm{\Delta }\rceil )=𝒪_S`$ and that $`-(K_X+\mathrm{\Delta })`$ is $`f`$-nef and $`f`$-abundant (see \[KMM, Definition 6-1-1\]). Then there exists an effective $`\mathbb{Q}`$-divisor $`\mathrm{\Delta }^{}`$ on $`S`$ such that $`(S,\mathrm{\Delta }^{})`$ is klt. In particular, $`S`$ has only rational singularities.

###### Proof.

By \[KMM, Proposition 6-1-3\], there exists a diagram $$\begin{array}{ccc}Y& \stackrel{\mu }{\longrightarrow }& Z\\ {}_{g}\downarrow & & \downarrow {}_{\nu }\\ X& \stackrel{f}{\longrightarrow }& S\end{array}$$ which satisfies the following conditions:

1. it is commutative, that is, $`h:=f\circ g=\nu \circ \mu `$,
2. $`\mu `$, $`\nu `$ and $`g`$ are projective morphisms,
3. $`Y`$ and $`Z`$ are nonsingular varieties,
4. $`g`$ is a birational morphism and $`\mu `$ is a surjective morphism with connected fibers, and
5. there exists a $`\nu `$-nef and $`\nu `$-big $`\mathbb{Q}`$-Cartier $`\mathbb{Q}`$-divisor $`D`$ on $`Z`$ such that $$K_Y+\mathrm{\Delta }^{\prime \prime }:=g^{*}(K_X+\mathrm{\Delta })\sim _{\mathbb{Q}}\mu ^{*}(-D).$$

By \[Ka1, Lemma 1.7\], there exists an effective $`\mathbb{Q}`$-divisor $`C_0`$ on $`Z`$ such that $`D-C_0`$ is $`\nu `$-ample. We define $`C:=\mu ^{*}C_0`$. If $`m`$ is a sufficiently large integer, then $`(Y,\mathrm{\Delta }^{\prime \prime }+(1/m)C)`$ is sub klt. We put $$H:=-(K_Y+\mathrm{\Delta }^{\prime \prime })-\frac{1}{m}C\sim _{\mathbb{Q}}\mu ^{*}(D-\frac{1}{m}C_0).$$ Then $`H`$ is $`h`$-semi-ample since $`D-(1/m)C_0`$ is $`\nu `$-ample. We take a very ample Cartier divisor $`A`$ on $`S`$ such that $`H+h^{*}A`$ is semi-ample. Let $`E`$ be a general member of $`|k(H+h^{*}A)|`$ for a sufficiently large and divisible integer $`k`$.
Then $$(Y,\mathrm{\Delta }^{\prime \prime }+\frac{1}{m}C+\frac{1}{k}E)$$ is sub klt and $$K_Y+\mathrm{\Delta }^{\prime \prime }+\frac{1}{m}C+\frac{1}{k}E\sim _{\mathbb{Q},h}0$$ by the construction; indeed, the left-hand side equals $`-H+(1/k)E\sim _{\mathbb{Q}}-H+(H+h^{*}A)=h^{*}A\sim _{\mathbb{Q},h}0`$. So we can apply Theorem (0.2) to $$h:(Y,\mathrm{\Delta }^{\prime \prime }+\frac{1}{m}C+\frac{1}{k}E)\to S.$$ Note that $$𝒪_S=h_{*}𝒪_Y(\lceil -\mathrm{\Delta }^{\prime \prime }-\frac{1}{m}C-\frac{1}{k}E\rceil )=h_{*}𝒪_Y(\lceil -\mathrm{\Delta }^{\prime \prime }\rceil )=f_{*}𝒪_X(\lceil -\mathrm{\Delta }\rceil )$$ by Claim (A) in the proof of Theorem (0.2). This completes the proof. ∎

## 4. Generalized Cone Theorem

In this section we always work under the following assumption.

###### Assumptions 4.1.

Let $`S`$ be a normal projective variety and $`A`$ a $`\mathbb{Q}`$-Cartier $`\mathbb{Q}`$-divisor on $`S`$. For any positive rational number $`ϵ`$ and every ample Cartier divisor $`H`$, there exists an effective $`\mathbb{Q}`$-divisor $`B`$ on $`S`$ such that $`K_S+B\sim _{\mathbb{Q}}A+ϵH`$ and such that $`(S,B)`$ is klt.

###### Definition 4.2.

Let $`F`$ be an ample Cartier divisor. We define $$r:=\mathrm{sup}\{t\,;\,F+tA\text{ is nef}\}.$$

###### Theorem 4.3 (Generalized Cone Theorem).

We have the generalization of the cone theorem as follows:

1. There are (possibly countably many) rational curves $`C_j\subset S`$ such that $`\mathbb{R}_{\ge 0}[C_j]`$ is an $`A`$-negative extremal ray for every $`j`$ and $$\overline{\mathrm{NE}}(S)=\overline{\mathrm{NE}}(S)_{A\ge 0}+\underset{j}{\sum }\mathbb{R}_{\ge 0}[C_j].$$

2. For any $`\delta >0`$ and every ample $`\mathbb{Q}`$-divisor $`F`$, $$\overline{\mathrm{NE}}(S)=\overline{\mathrm{NE}}(S)_{(A+\delta F)\ge 0}+\underset{\text{finite}}{\sum }\mathbb{R}_{\ge 0}[C_j].$$

3. The contraction theorem holds for any $`A`$-negative extremal face (for a more precise statement, see \[KoM, Theorem 3.7 (3), (4)\]).

###### Proof.

If $`A`$ is nef, then there is nothing to prove. So we may assume that $`A`$ is not nef. Then (2) is obvious. Note that for rational numbers $`0<\delta ^{}<\delta `$ there is an effective $`\mathbb{Q}`$-divisor $`B^{}`$ such that $`A+\delta F\sim _{\mathbb{Q}}K_S+B^{}+\delta ^{}F`$ and $`(S,B^{})`$ is klt. So we can reduce it to the well-known cone theorem for klt pairs (see \[KoM, Theorem 3.7\]). Let $`ϵ`$ be a small positive rational number and $`K_S+B\sim _{\mathbb{Q}}A+ϵF`$. If $`F+r_0(K_S+B)`$ is nef but not ample, then $`r_0`$ is a rational number by the rationality theorem for klt pairs. So we get the rationality of $`r`$. By \[KMM, Lemma 4-2-2\] and the rationality of $`r`$, we have $$\overline{\mathrm{NE}}(S)=\overline{\overline{\mathrm{NE}}(S)_{A\ge 0}+\underset{j}{\sum }\mathbb{R}_{\ge 0}[C_j]},$$ where the right-hand side is the closure of the cone generated by $`\overline{\mathrm{NE}}(S)_{A\ge 0}`$ and $`\sum _j\mathbb{R}_{\ge 0}[C_j]`$. This fact and (2) imply (1) (see the proof of \[KoM, Theorem 1.24\]). (3) is also obvious: by changing $`A`$ to $`A+ϵH\sim _{\mathbb{Q}}K_S+B`$, where $`ϵ`$ is a small rational number, we can reduce it to the well-known klt case. ∎
# A TULLY-FISHER RELATION FOR S0 GALAXIES ## 1 Introduction The Tully-Fisher relation (TFR) is a correlation between some measure of the maximal, or asymptotic, circular velocity of the disk and the integrated stellar luminosity of a galaxy. Since its discovery (Tully & Fisher 1977), much effort has been invested in studying its manifestation at various wavelengths, its dependence on different kinematic tracers, and its differences among galaxy types. Measures of the circular velocity have been derived from either the 21cm H I line width or from optical rotation curves (e.g., Mathewson, Ford, & Buchhorn 1992; Raychaudhury et al. 1997; Giovanelli et al. 1997a,b). The optical rotation curves in all these cases were derived from H II emission lines. Courteau (1997) has recently compared the TFRs based on H I widths and optical rotation curves and finds basic agreement among them. Integrated galaxy magnitudes were initially measured in the $`B`$-band and later in the $`I`$ and $`H`$ bands (see Aaronson et al. 1979, 1986). The slope (Aaronson & Mould 1983), the zero point, and the scatter of the TFR depend on the band (see Jacoby et al. 1992, for a summary; see also Tully et al. 1998). The lowest scatter has been found in the $`I`$ band (∼0.1 mag, Bernstein et al. 1994). Presumably, this is because the $`B`$ magnitude is more influenced by dust extinction and short-lived stellar populations, while the infrared magnitude is a more robust measure of the total stellar mass of the galaxy. While the TFR serves as a fundamental tool for measuring extragalactic distances, the physical mechanism behind its existence is also of great interest. Possible explanations for a well defined TFR are emerging (Aaronson et al. 1979; Schechter 1980; Eisenstein & Loeb 1996; Dalcanton, Spergel, & Summers 1997; Mo, Mao, & White 1998; Elizondo et al. 1998; Heavens & Jimenez 1999). Self-regulated star-formation, cosmologically-determined initial angular momentum distributions, and adiabatic baryon infall all seem to play important roles. Alternatively, Milgrom (1983; 1989) has advocated that his Modified Newtonian Dynamics (MOND), designed to explain the rotation curves of galaxies without resorting to dark matter, also naturally predicts a TFR. In the MOND picture, the intrinsic scatter in the TFR for a given galaxy population simply reflects the spread of mass-to-light ratios ($`M/L`$) in the population. Most observational efforts have focussed on the TFR for late-type spiral galaxies, one extreme of the Hubble sequence. Rubin et al. (1985) studied the TFR for Sa, Sb and Sc galaxies, claiming a zero-point offset between Hubble types Sa and Sc that corresponds to 1.5 magnitudes in $`I`$. However, Giovanelli et al. (1997b) found an offset of only 0.3 mag between these Hubble types, and Aaronson & Mould (1983), Pierce & Tully (1988), and Bernstein et al. (1994) did not find a type dependence in their TFRs. None of these authors derived a TFR for the next Hubble type, S0s, because it is difficult to measure their rotation curves using H I or H II emission lines. Although 27% of the S0s in Roberts et al. (1991) have H I gas detected, in many of them the gas shows unusual characteristics, such as large velocity dispersions and counter-rotating components, and single-dish measurements often cannot reveal that the gas is concentrated in the inner regions or in an outer ring (e.g. Van Driel & Van Woerden 1991). In this paper we explore an analogous relation for these earliest-type disk galaxies.
S0 galaxies were classified by Hubble as a transition class between spirals and ellipticals (see van den Bergh 1997, for a recent review), and in the RSA catalog (Sandage & Tammann 1981) they comprise 11% of bright galaxies. The formation histories of S0s are not well understood and are likely to be heterogeneous. Their overabundance in cluster environments (Dressler 1980; see, e.g., Hashimoto & Oemler 1998, for an update) has led to suggestions that they are the products of disk-galaxy collisions and mergers (Schweizer 1986), or of gas stripping in later types (Gunn & Gott 1972). Numerical simulations of gas and stellar dynamics indeed suggest that the merger of two gas-rich disk galaxies of unequal mass can produce an object resembling an S0 (Hernquist & Mihos 1995; Bekki 1998a,b). In this picture, the merger induces a flow of gas to the central parts of the product galaxy, where the gas is almost completely transformed into stars during an induced central starburst. The simulated merger products resemble actual S0 galaxies in that they are much less gas-rich than their progenitors, contain a thickened disk, and exhibit little, if any, spiral structure. Observationally, this scenario is not free of problems, e.g., the absence of two distinct populations of globular clusters (old and young) in early-type galaxies (Kissler-Patig, Forbes, & Minniti 1998). As an alternative, Van den Bosch (1998) and Mao & Mo (1998) have proposed that S0s form a continuum with later types. In the context of hierarchical galaxy formation models, the bulge-to-disk ratio is a tracer of the formation redshift and/or the initial angular momentum of the dark halo in which the galaxy formed. In their gross structural properties, S0s are similar to ellipticals and share the same Fundamental Plane relations (e.g., Jorgensen, Franx, & Kjaergard 1996). Since the central velocity dispersions of S0s and ellipticals are considerably higher than the rotation velocities of spirals of a given luminosity (which usually rise quickly to their asymptotic values), it appears that the mass-to-light ratio in the inner regions of galaxies increases when going to earlier types. The existence and parameters of a TFR for S0s could help us place them relative to ellipticals and spirals, and give a better understanding of the physical mechanism behind the TFR. From a practical viewpoint, a tight TFR for S0s could improve the distance estimate to many clusters, where S0s are the dominant population. To our knowledge, there has been only one published effort to measure a TFR in S0 galaxies, by Dressler & Sandage (1983). They found no evidence for any actual correlation between stellar luminosity and the observed mean stellar rotation speed. However, their rotation curves had very limited radial extent and were not corrected for projection effects or for the stellar velocity dispersions. Furthermore, approximate (Hubble flow) distances were used, and the integrated blue magnitudes were based on photographic plates. An intrinsic TFR for S0s may therefore have been lost in the observational noise. In this paper we attempt to measure an $`I`$-band TFR in a sample of S0 galaxies. Rotation curves are obtained from major-axis long-slit optical absorption-line spectra. In §2 we describe our sample and observations. In §3 we describe the spectroscopic and photometric reduction and analysis, present rotation curves, and derive the asymptotic circular velocities of the galaxies. In §4 we derive the TFR and discuss its implications.
Our conclusions are summarized in §5. ## 2 Sample and Observations ### 2.1 Sample Selection Because of the paucity of gas in S0s, the circular velocities must be estimated from stellar absorption-line kinematics, which requires fairly high signal-to-noise (S/N) ratios. We therefore first chose the brightest possible sample of galaxies. Second, to explore or to establish any TFR we need accurate and independent distance estimates to our sample galaxies. Since the brightest S0s are nearby, Hubble distances, even when corrected for peculiar velocities using a large-scale flow model, are unreliable. We therefore chose only S0s whose distances have been measured by Tonry et al. (1998) using the surface brightness fluctuation (SBF) method (Tonry & Schneider 1988; Tonry et al. 1997; Blakeslee et al. 1998). Specifically, our sample criteria were as follows: a) the galaxy is in the Tonry et al. (1998) sample; b) heliocentric radial velocity $`<2000`$ km s<sup>-1</sup>; c) declination $`>-20\mathrm{°}`$; d) RSA classification S0/E, S0, SB0, S0/Sa, SB0/SBa or S0pec; e) RC3 (de Vaucouleurs et al. 1991) $`B`$ magnitude $`<12.6`$. These criteria lead to an initial sample of nearly forty galaxies, of which we observed a sub-sample of 20, devoid of morphological peculiarities, and with inclinations $`i35\mathrm{°}`$–$`60\mathrm{°}`$. In this inclination range both the corrections for $`\mathrm{sin}i`$ and for the line-of-sight integration through the disk (see §4) are small. Two of the galaxies, NGC 4406 and NGC 4472, were subsequently excluded from the sample because they showed little or no rotation, with their kinematics dominated by random motions. These galaxies are perhaps more suitably labeled as elliptical galaxies, which do not have a major disk component. For three of the 18 galaxies in our final sample (NGC 936, NGC 3115, and NGC 7332) the stellar kinematics have been studied by other authors, and only photometric data were needed. For NGC 3115 we used kinematic data from Capaccioli et al. (1993) and Illingworth & Schechter (1982). A deep study of NGC 7332 by Fisher, Illingworth, & Franx (1994) provided the kinematics for this galaxy. Rotation curves for NGC 936 were taken from Kent (1987) and Kormendy (1983; 1984). Table 1 lists the objects, their parameters, and our sources of data. ### 2.2 Observations Cousins $`I`$-band photometry of the sample galaxies was obtained at the Wise Observatory 1m telescope using a Tektronix $`1024\times 1024`$-pixel back-illuminated CCD with a scale of 0.696$`\pm `$0.002 arcsec pixel<sup>-1</sup>. For each galaxy we took 1–3 exposures of 300 s each. Most of the images were obtained on 1995 December 29, and a few were taken on 1996 December 15 and on 1997 February 17. All nights had photometric conditions; photometric standard stars from Landolt (1992) were observed throughout each night and used to translate counts to $`I`$-band magnitudes. The spectroscopic observations were also obtained at the Wise Observatory 1m telescope. We used the Faint Object Spectrograph Camera (Kaspi et al. 1996) coupled to the above CCD. A 2<sup>′′</sup>-wide slit and a 600 line/mm grism gave a dispersion of 3.68 Å pixel<sup>-1</sup> in the 4000–7263 Å range, corresponding to a resolution of ∼300 km s<sup>-1</sup>. The angular sampling was 2.08 arcsec pixel<sup>-1</sup>. The observations were made on 1995, October 25–26, November 29, and December 15–16, and 1996, March 14–16 and April 14–15.
On each night we also obtained spectra of bright stars, mostly K-giants, to serve as templates for modeling the galaxy spectrum. Observations typically consisted of two consecutive major-axis exposures for each galaxy. Total integration times varied from 1 hr for the brightest galaxies to 4 hrs for the faintest. He-Ar lamp exposures, for wavelength calibration, and quartz lamp exposures, for flat fielding, were taken between consecutive galaxy exposures. One spectrum of NGC 5866 was obtained at Kitt Peak National Observatory (KPNO) using the 4m telescope on 1994 March 7, with the RC spectrograph, a 1200 line mm<sup>-1</sup> grating and an exposure time of 30 min. ## 3 Data Reduction and Analysis ### 3.1 Photometry The $`I`$-band images were reduced using standard IRAF routines. (IRAF, the Image Reduction and Analysis Facility, is distributed by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under cooperative agreement with the National Science Foundation.) Images were bias-subtracted and flat-field corrected using twilight sky exposures. Foreground stars were found and removed by examining each image and replacing the affected area with an interpolated two-dimensional surface, using the Imedit task. In order to measure the ellipticity of each galaxy and the scale length of its disk we used the Ellipse task. The semi-major axis length of the fitted elliptical isophotes was increased in increments of 5% until the change in the intensity between two successive ellipses was negligible (except in two cases where a bright star near the galaxy prevented extracting additional isophotes). The task outputs for each ellipse the semi-major axis length, the mean isophotal intensity, the ellipticity, and the position angle. The Elapert task was then used to approximate each ellipse with a polygon, and the counts within each polygon were measured with the Polyphot task. The projected disk ellipticity was taken to be the ellipticity of the last well-fitted ellipse. The disk scale length was found by $`\chi ^2`$ minimization, allowing the central surface brightness, disk scale length, and sky level to vary. The parameters of the exponential disk fit and their uncertainties were used to extrapolate the counts from the last measured radius to infinity, resulting in a “total” $`I`$-band magnitude and its error. Tonry et al.’s (1998) distances to the galaxies were used to derive the absolute magnitudes, $`M_I`$. These parameters are listed in Table 1. ### 3.2 Spectroscopy The long-slit spectra were also reduced using standard IRAF routines. Each two-dimensional spectrum was bias-subtracted. Variations in slit illumination were removed by dividing each image by an illumination image derived from a spectrum of the twilight sky. Pixel-to-pixel sensitivity variations were removed by division by a quartz lamp spectrum taken after every galaxy exposure. The quartz spectrum was first normalized by a 6th-order polynomial fit to its low-frequency structure in the dispersion direction. Cosmic-ray events were removed with the IRAF tasks Ccrej, Cosmicrays and Imedit. He-Ar arc-lamp spectra with about 40 lines were used to rectify all science frames to uniform sampling in slit position and $`\mathrm{log}\lambda `$, where $`\lambda `$ is the wavelength, in the two cardinal directions. The resulting accuracy of the wavelength calibration is ∼15 km s<sup>-1</sup>. The sky background was removed by interpolating along the two ends of the slit, where the sky dominates.
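As an aside on the photometric analysis of §3.1, the disk-fit-and-extrapolate step can be summarized in a short script. This is a minimal sketch under a circular-disk approximation; the function and variable names are ours, and the actual analysis fit elliptical isophotes and propagated the fit uncertainties into the magnitude error.

```python
import numpy as np
from scipy.optimize import curve_fit

def disk_plus_sky(r, i0, r_exp, sky):
    """Exponential-disk intensity profile plus a constant sky level."""
    return i0 * np.exp(-r / r_exp) + sky

def total_disk_flux(r, counts, flux_measured, r_last):
    """Fit the isophotal profile, then extrapolate beyond the last
    measured radius: the integral of i0*exp(-r/r_exp) over 2*pi*r dr
    from r_last to infinity is 2*pi*i0*r_exp**2 * exp(-x) * (1 + x),
    with x = r_last / r_exp."""
    (i0, r_exp, sky), _ = curve_fit(
        disk_plus_sky, r, counts,
        p0=[counts.max(), r.max() / 4.0, counts.min()])
    x = r_last / r_exp
    tail = 2.0 * np.pi * i0 * r_exp**2 * np.exp(-x) * (1.0 + x)
    return flux_measured + tail
```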
Template star spectra were reduced in the same fashion, and subsequently extracted from the frames to yield one-dimensional spectra. The line-of-sight velocities $`V_{obs}(R)`$ and velocity dispersions $`\sigma (R)`$ as functions of the projected radius $`R`$ were extracted from the galaxy spectra, following Rix & White (1992) and Rix et al. (1995). The two-dimensional spectrum was first rebinned into a sequence of one-dimensional spectra of approximately constant S/N, and each of these spectra was then matched by a shifted and broadened linear combination of templates, minimizing $`\chi ^2`$. This resulted in a kinematic profile that, at each radius, is derived from an “optimal” template. Figure 1 (top and middle panels) shows rotation and velocity dispersion curves, $`V_{obs}(R)`$ and $`\sigma (R)`$, for the 15 galaxies we observed spectroscopically. One of the galaxies, NGC 5866, was measured both at Wise Observatory and at KPNO (see Fig. 1). Although the degradation in S/N when going to a small telescope is obvious, the agreement is good and shows that, for the present purpose, the Wise Observatory spectra are of sufficient quality. ### 3.3 Deriving Circular Velocities from $`V(R)`$ and $`\sigma (R)`$ Determining the true circular velocity of a galaxy, defined as $`V_c(R)\equiv \sqrt{R\,\partial \mathrm{\Phi }_{grav}/\partial R}`$, from stellar kinematics is somewhat model-dependent, even if rotation dominates (see, e.g., the discussion by Illingworth & Schechter 1982; Binney & Tremaine 1987; Raychaudhury et al. 1997). We derive the circular velocity in several steps. When several rotation curves were available for a single galaxy (see §3.2) we computed the asymptotic velocity and velocity dispersion in each curve separately and subsequently used the means. To obtain the mean stellar rotation velocity, $`V_\varphi `$, in the plane of the disk, we deproject the observed velocity, using the observed disk ellipticity and assuming an edge-on disk axis ratio $`q_0=0.22`$ (de Vaucouleurs et al. 1991): $$V_\varphi (R)=\frac{V_{obs}(R)}{\mathrm{sin}(i)}=V_{obs}\times \sqrt{\frac{1-q_0^2}{2e-e^2}}\text{ ,}$$ where $`i`$ is the inclination, $`e`$ is the ellipticity, $`V_{obs}`$ is the observed radial velocity, and $`V_\varphi `$ is the azimuthal speed. Galaxies with ellipticities greater than 0.57 ($`i>67\mathrm{°}`$) were deemed to be edge-on, and no attempt at the above inclination correction was made. However, in highly inclined galaxies the line-of-sight integration through the disk will reduce the observed mean velocity relative to the actual velocity $`V_\varphi (R)`$ at the tangent point. We constructed a simple model of an exponential disk with a vertical scale height of $`0.2R_{exp}`$ to calculate $`V_{obs}/V_\varphi (R)`$. For the edge-on case, an approximate analytic expression for $`f\equiv V_{obs}/V_\varphi (R)`$ can be found, which is shown in Figure 2. The same effect will lead to an overestimate of the azimuthal velocity dispersion. The two corrections for edge-on disks are: $$V_\varphi (R)=\frac{V_{obs}(R)}{f(\frac{R}{R_{exp}})}\text{ ,}$$ $$\text{and }\sigma _\varphi ^2=\sigma _{obs}^2-\frac{1}{2}(V_\varphi -V_{obs})^2\text{ ,}$$ $$\text{with }f(x)=x\,\mathrm{exp}(x)\left[-0.5772-\mathrm{ln}(x)+x-\frac{x^2}{2\times 2!}+\frac{x^3}{3\times 3!}-\mathrm{\cdots }\right],$$ where $`\sigma _\varphi `$ is the corrected velocity dispersion, and $`\sigma _{obs}`$ is the observed velocity dispersion.
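The two projection corrections above translate directly into a few lines of code. The sketch below uses our own names; note that the bracketed series is the small-$`x`$ expansion of the exponential integral $`E_1(x)`$, so we evaluate $`f(x)=x\,e^xE_1(x)`$ in closed form, an assumption about the intended reading of the series rather than the paper's own implementation.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1

Q0 = 0.22  # assumed edge-on axis ratio (de Vaucouleurs et al. 1991)

def deproject(v_obs, e):
    """1/sin(i) correction from the observed ellipticity e, using
    sin^2(i) = (2e - e^2) / (1 - Q0^2); valid for e <= 0.57."""
    return v_obs * np.sqrt((1.0 - Q0**2) / (2.0 * e - e**2))

def edge_on_factor(x):
    """f = V_obs / V_phi for an edge-on exponential disk, x = R/R_exp;
    f(x) = x * exp(x) * E1(x), which tends to 1 at large x."""
    return x * np.exp(x) * exp1(x)

def edge_on_corrections(v_obs, sig_obs, x):
    """Apply both edge-on corrections from the text."""
    v_phi = v_obs / edge_on_factor(x)
    sig_phi = np.sqrt(sig_obs**2 - 0.5 * (v_phi - v_obs)**2)
    return v_phi, sig_phi
```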
Note that our uncertainties in how close to edge-on these galaxies actually are lead to an error of only $`\mathrm{\Delta }\mathrm{log}_{10}(V_\varphi )\approx 0.025`$, assuming random inclinations between $`i=90\mathrm{°}`$ and $`i=70\mathrm{°}`$. For inclinations less than $`70\mathrm{°}`$, the correction is $`<4\%`$, and we neglect it. Most importantly, however, $`V_c`$ will differ from the directly observable quantities by the “asymmetric drift” correction, which accounts for the non-circular orbits of the stars, or, equivalently, their velocity dispersion. The circular velocity $`V_c`$ is related to the gravitational potential $`\mathrm{\Phi }(R)`$ in the galaxy plane by $$V_c^2(R)=R\left[\frac{\partial \mathrm{\Phi }(R)}{\partial R}\right].$$ To obtain the circular velocity (i.e. the velocity of “cold” gas in the disk) we follow Binney & Tremaine (1987), eqn. 4-33: $$V_c^2=\overline{V_\varphi ^2}+\sigma _\varphi ^2-\sigma _R^2-\frac{R}{\rho }\frac{\partial (\rho \sigma _R^2)}{\partial R}-R\frac{\partial (\overline{V_RV_z})}{\partial z},$$ where $`\rho (R)=\rho _0\mathrm{exp}(-\frac{R}{R_{exp}})`$ is the mass density, and the term $`\overline{V_RV_z}`$ is usually negligible (Binney & Tremaine 1987). For a flat rotation curve, $`\sigma _\varphi ^2(r)/\sigma _r^2(r)=0.5`$, which leads to $$V_c^2=V_\varphi ^2+\sigma _\varphi ^2\left[2\left(\frac{R}{R_{exp}}-\frac{\partial \mathrm{ln}\sigma _R^2}{\partial \mathrm{ln}R}\right)-1\right].$$ For many of the sample galaxies $`\frac{\partial \mathrm{ln}\sigma _\varphi ^2}{\partial \mathrm{ln}R}`$, and hence $`\frac{\partial \mathrm{ln}\sigma _R^2}{\partial \mathrm{ln}R}`$, is small and can be neglected, yielding: $$V_c^2=V_\varphi ^2+\sigma _\varphi ^2\left(2\frac{R}{R_{exp}}-1\right).$$ To obtain the corrected rotation curves, we first fit an exponential function to the observed dispersion profile $`\sigma _\varphi (R)`$. We then use the fit value of $`\sigma _\varphi (R)`$ to apply the asymmetric drift correction to every measurement of $`V_\varphi `$ for which $`V_\varphi /\sigma _\varphi >2.5`$ (see below). The final, corrected, curves are shown in the bottom panels of Figure 1. Finally, to estimate the deprojected, asymptotic rotation speed (usually at $`R\approx 3R_{exp}`$), we average the last three points on either side of the corrected rotation curves in Figure 1 (bottom panels). Points with errors above 100 km s<sup>-1</sup> were discarded. The radius of the measured asymptotic velocity, $`R`$, was taken as the average radius of the points in the rotation curve that we used, and the uncertainty in that radius is half the distance between the inner point and the outer point that we used to obtain the final velocity. We list all the measured and corrected velocities in Table 1. Three of the galaxies, NGC 2768, NGC 4382, and NGC 4649, have relatively large velocity dispersions even in their outer parts, such that $`\sigma _\varphi >V_\varphi /2.5`$. Under such circumstances, the approximations and systematics involved in the asymmetric drift correction may lead to an unacceptably large error in the inferred $`V_c`$, and we mark the measurements of these galaxies as uncertain in the subsequent discussion. ## 4 Results With the information assembled in Table 1 we can explore the two questions posed initially: a) To what extent do S0s follow a TFR, i.e., how well are $`M_I`$ and $`V_c`$ correlated? b) What is the mean stellar luminosity for S0s at a given circular velocity, and how does it compare to the luminosity of later-type disk galaxies? Figure 3 shows $`M_I`$ vs. $`V_c`$ for the sample galaxies.
The errorbars in $`M_I`$ include photometric errors and distance uncertainties, and the errors in $`V_c`$ include propagation of all the uncertainties involved in the calculation of the final circular velocity. The data points with dotted errorbars represent the three galaxies for which the asymmetric drift corrections were uncertain due to their relatively large velocity dispersions (see above). The dashed line shows the $`I`$-band TFR for late-type spiral galaxies, as derived from the Mathewson et al. (1992) data by Courteau and Rix (1998) and adjusted to the same distance scale ($`H_0=80`$ km s<sup>-1</sup> Mpc<sup>-1</sup>) as that implied by the SBF method for these galaxies (Tonry et al. 1998). To estimate the best fit and the intrinsic scatter in the TFR, we proceeded as follows (see also Rix et al. 1997). We assumed a relation of the form $$M_I(\mathrm{log}V_c)=M_I(2.3)-\alpha (\mathrm{log}V_c-2.3),$$ where the fit’s pivot point is 200 km s<sup>-1</sup>, i.e., $`\mathrm{log}V_c=2.3`$. Further, we assumed that the relation has an intrinsic Gaussian scatter in $`M_I`$ (at a given $`\mathrm{log}V_c`$) of $`\sigma `$ magnitudes. For each parameter set $`[M_I(2.3),\alpha ,\sigma ]`$ this defines a model probability distribution, $`P_{model}`$, in the ($`M_I,\mathrm{log}V_c`$) plane. Each data point $`i`$, with its uncertainties in $`V_c`$ and $`M_I`$, also constitutes a probability distribution in the same parameter plane, $`P_i(M_I,\mathrm{log}V_c)`$. The overall probability of a parameter set $`[M_I(2.3),\alpha ,\sigma ]`$, given the data, can be calculated as: $$P(M_I(2.3),\alpha ,\sigma )=\prod _i\int \left(P_{model}\times P_i\right)dM\,d\mathrm{log}V_c,$$ which is a measure of the overlap between the data and the model probability distributions for a given model. It is apparent from the data (Fig. 3) that the slope is poorly determined. Therefore, we fit a relation assuming the spiral TFR slope from Mathewson et al. (1992), $`\alpha =7.5`$. The best fit has a zeropoint of $`M_I(2.3)=-21.36\pm 0.15`$ mag and an intrinsic scatter of $`\sigma =0.68\pm 0.15`$ mag. The thick line in Fig. 3 is this best-fit relation, and the thin lines show the scatter. From the plot it is clear that the data indicate a steeper slope. Formally, $`\alpha >10.5`$ (at 95% confidence), with no well-defined upper bound. Note that if we have underestimated the (dominant) velocity errors by 30%, the estimated intrinsic scatter in the relation will only decrease to 0.58 magnitudes. Based on Figure 3, we can now answer the two questions posed above: • Despite the care taken in deriving $`V_c`$ and $`M_I`$, there is a great deal of intrinsic scatter in the TFR: $`0.68\pm 0.15`$ mag. • At a given $`V_c`$, there is only a small ($`0.5\pm 0.15`$ mag) systematic offset in $`M_I`$ between the S0s and the Sc galaxies from Mathewson et al. This offset is much smaller than the 1.5 magnitudes (in $`I`$) between Sa’s and Sc’s claimed by Rubin et al. (1985), and adds to the other evidence (e.g., Pierce & Tully 1988; Bernstein et al. 1994) that the zero point of the $`I`$-band TFR is only weakly dependent on galaxy type. The large scatter in Fig. 3 is particularly remarkable in light of the well-behaved Fundamental Plane (FP) relation (e.g., Jorgensen et al. 1996, and references therein) for S0s in general, as well as for this particular set of objects.
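Before turning to the Fundamental Plane comparison, we note that for Gaussian intrinsic scatter and Gaussian measurement errors the overlap integral above collapses analytically, so the fit reduces to a short calculation. The sketch below is ours (names included), with the slope fixed at the Mathewson et al. value as in the text; it illustrates the method and is not the code actually used.

```python
import numpy as np

ALPHA = 7.5  # adopted spiral TFR slope

def log_prob(zp, sigma_int, logv, m_i, sig_logv, sig_m):
    """ln P for the model M(logV) = zp - ALPHA*(logV - 2.3) with
    intrinsic scatter sigma_int: the (M, logV) overlap integral of
    two Gaussians is itself a Gaussian in the residual, with the
    variances added (the logV error mapped onto M via the slope)."""
    resid = m_i - (zp - ALPHA * (logv - 2.3))
    var = sigma_int**2 + sig_m**2 + (ALPHA * sig_logv)**2
    return np.sum(-0.5 * resid**2 / var - 0.5 * np.log(2 * np.pi * var))

# Maximize on a grid, e.g.:
# zps = np.linspace(-22.0, -20.5, 151)
# sigs = np.linspace(0.05, 1.5, 146)
# best = max((log_prob(z, s, logv, m_i, sig_logv, sig_m), z, s)
#            for z in zps for s in sigs)
```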
Figure 4 shows the FP for our sample, based on values for the effective radii, $`R_e`$, as compiled in Bender, Burstein, & Faber (1992) and Fisher (1997), and central velocity dispersions, $`\sigma _0`$, estimated both from our data and the literature, and listed in Table 1. For comparison with the existing FP literature, we reconstructed $`I_{eff}`$ from $`M_I`$ and $`R_e`$, assuming a de Vaucouleurs law. The median scatter among the points is well below 0.1 in either axis. The important difference between Figures 3 and 4 is that the FP in Figure 4 uses the central stellar dispersion as the kinematic parameter, while the TFR in Figure 3 involves $`V_c`$ at 2–3$`R_{exp}`$, characterizing the total mass within this radius. It is clear from this comparison that, at least for this sample, the central stellar dispersion is a much better predictor of the total stellar luminosity than the circular velocity at several disk exponential radii. We have searched for possible sources, either observational or intrinsic, of the large scatter we have found in the S0 TFR. Fisher (1997) obtained stellar rotation curves and velocity dispersion profiles for 18 S0 galaxies, 7 of which are in our sample. Although he presents his measurements only out to about one disk scale length, $`R_{exp}`$, while our rotation curves typically extend to $`R/R_{exp}=2`$–4, a meaningful comparison can be made, since, as seen in Fig. 1, the rotation curves usually flatten out already at small radii (10 to 25 arcsec). Our measured asymptotic line-of-sight velocities agree with Fisher’s at the ∼10% level. A similar level of agreement exists between his measurements in the $`B`$-band and our measurements in the $`I`$-band of the disk scale lengths and ellipticities. Velocity dispersions in his data are also generally consistent with ours, except in two cases, NGC 4382 and NGC 5866, in which he measures twice the values we obtained. NGC 4382, however, was already excluded from our analysis above because of its relatively low level of rotation, while for NGC 5866 we have both Wise Observatory data and high-quality data from KPNO, which are consistent with each other. Simien & Prugniel (1997), Bettoni & Galletta (1997), Fried & Illingworth (1994) and Seifert & Scorza (1996) have each derived rotation curves for some of the galaxies in our sample, and their results are in good agreement with ours. A mild exception is NGC 2549, for which Simien & Prugniel (1997) and Seifert & Scorza (1996) obtain a maximum velocity of $`150\pm 30`$ km s<sup>-1</sup> compared to our $`113\pm 13`$ km s<sup>-1</sup>. While our sample has the advantage of uniform SBF distance estimation, distance errors could contribute to the TFR scatter as well. The SBF method has an r.m.s. scatter of less than 0.1 mag, but there are a number of distance discrepancies which could affect a small sample like ours. Among the galaxies in our sample, Blakeslee et al. (1998) and Ciardullo et al. (1993) find differences of order 0.3 mag between SBF-based distance moduli and distances based on planetary nebula luminosity functions for NGC 3115, NGC 4382, and NGC 1023. However, it is difficult to see how this could be a dominant source of scatter in the TFR without introducing comparable scatter in the Fundamental Plane relation for our sample. A second potential source of errors is in the corrections for inclination and asymmetric drift we have applied to our data.
These corrections are sometimes at a level of ∼100% (most are above ∼35%) and are based on noisy velocity dispersion measurements. H I observations for some of our galaxies exist, and can partially confirm the velocity corrections. Comparisons of H I velocities and corrected stellar velocities are not straightforward, since the gas component in S0s may sometimes be concentrated only in the inner parts or in an outer ring, as a relic of a past accretion event. Furthermore, there are different measures of the 21 cm linewidth (e.g., at 50% or 20% of the peak). Nevertheless, from Roberts et al. (1991), Huchtmeier et al. (1995), and Wardle & Knapp (1986) we obtained H I velocities for five galaxies in our sample, and find excellent agreement with our corrected stellar velocities in four cases, the exception being NGC 1052, where there is a $`2\sigma `$ discrepancy between the inclination-corrected H I width of Roberts et al. (1991), 288 km s<sup>-1</sup>, and our final circular velocity of $`190\pm 39`$ km s<sup>-1</sup>. As an alternative method of calculating the asymmetric drift correction, we attempted, instead of the procedure described above, to apply the correction directly to the outermost measurements of the velocities and dispersions, after averaging the outer three points. However, this had the effect of increasing the scatter in the TFR. This is a consequence of the large $`R/R_{exp}`$ values making the dispersion term in the asymmetric drift correction, $`\sigma _\varphi ^2\left(2\frac{R}{R_{exp}}-1\right)`$, dominant compared to $`V_\varphi ^2`$. Modifications in the choice of $`R`$, or correcting $`\sigma _\varphi `$ for inclination, had little effect on the TFR scatter. Next, we searched for intrinsic sources of TFR scatter, arising from a possible dependence on additional parameters. We have checked for correlations between the residuals of the best-fitting TFR and a variety of parameters. We found no dependence of the TFR residuals on disk ellipticity, as was found, e.g., in the late-type-galaxy TFR of Bernstein et al. (1994) and interpreted as the effect of extinction by dust. The ratio $`\frac{V_\varphi }{\sigma _0}`$ can serve as a kinematic indicator of rotational vs. dispersive support in a given galaxy, and a correlation of the TFR residuals with it could indicate, e.g., that those galaxies with the least rotation (and the largest asymmetric drift corrections) are those contributing most to the scatter. However, we found no significant correlation between the TFR residuals and this ratio. Similarly, the residuals are not correlated with $`\frac{R_{exp}}{R_e}`$, a photometric measure of disk vs. bulge dominance. We found that the parameter $`x=\frac{V_\varphi }{\sigma _0}\frac{R_{exp}}{R_e}`$ is marginally correlated with the TFR residuals, at a significance level of 93%. Although the physical significance of $`x`$ is unclear, applying this correction reduces the intrinsic TFR scatter by ∼0.2 mag. In view of these tests, we conclude that the large intrinsic TFR scatter of ∼0.7 mag that we find for S0s is most likely not the result of errors in observation and analysis. Likewise, we have not found additional parameters that significantly lower the scatter. For comparison, the TFR in late-type spirals usually has an intrinsic r.m.s. scatter of $`\sigma _{in}\approx 0.25`$ mag (e.g. Giovanelli et al. 1997b), although a smaller scatter can occur in homogeneous, well-defined samples; Bernstein et al. (1994) found an r.m.s.
scatter of 0.23 mag, which, after correction for extinction based on ellipticities, was reduced to 0.1 mag. From the physical viewpoint, our result is in conflict with the idea that most S0s were disk galaxies, on their way to becoming present-day spirals, whose star-forming career was cut short by some mechanism, e.g. tidal stripping in a dense environment (Gunn & Gott 1972). In that case, we would expect the S0s to have faded significantly at constant $`V_c`$, exhibiting a larger TFR zeropoint offset. Specifically, if S0s had had star formation histories similar to Sc’s (e.g., Kennicutt et al. 1994) until a truncation, say, $`>4`$ Gyr ago, we would expect an offset of $`>0.9`$ magnitudes in $`I`$ due to the fading of the stellar population, based on Charlot & Bruzual (1991) models. Similarly, the absence of a tight S0 TFR argues against a physical continuity of S0s with later-type spirals, as suggested in the context of hierarchical structure formation models (Van den Bosch 1998; Mao & Mo 1998). Alternatively, S0s may be more closely related to ellipticals. Both may be the relics of non-cataclysmic mergers (Schweizer 1986). For individual sample members (e.g. NGC 4649, NGC 4406, NGC 4472) this may be apparent from their individual structure, but the present evidence points towards this being true for a good fraction of the morphological class. Qualitatively, the spread among S0s in the time elapsed since the merger and its ensuing gas-depleting starburst would produce the TFR scatter, while, on average, the larger concentration of stars may compensate for the fading of the stellar population and give a mean luminosity comparable to that of late-type galaxies, for a given halo mass. A quantitative examination of the TFR resulting from this scenario is, however, needed. ## 5 Conclusions We have constructed a TFR for nearby S0 galaxies, deriving corrected circular velocities from stellar velocities, and using high-quality distance estimates (Tonry et al. 1998) based on surface brightness fluctuations. Despite the care taken, the relation between $`M_I`$ and $`V_c`$ exhibits ∼0.7 magnitudes of scatter. As an illustration, NGC 2787 and NGC 4753 both have similar circular velocities of ∼230 km s<sup>-1</sup>, but their luminosities differ by over 3 mag. The reason for this large scatter is not clear. Perhaps it indicates that the S0 morphological class truly represents a “mixed bag”, with a wide range of galaxy formation channels feeding into it. The central stellar velocity dispersion is a much better predictor of the total stellar luminosity than $`V_c`$ at several exponential radii. Similarly, the fact that on average S0’s and Sc’s of the same $`V_c`$ have such similar luminosities is a puzzle. S0s have older, and hence dimmer, stellar populations, which should lead to a TFR zero-point offset. The absence of such an offset could be explained if S0s have a considerably higher fraction of their total mass in stars than Sc’s. This perhaps would be expected in the merger-formation scenario if, in fact, such events are very efficient at converting the available gas into stars. Observationally, it is desirable to reconfirm our result on a larger sample with higher S/N measurements at larger radii, where presumably the kinematic corrections will be smaller. An independent test, which is insensitive to errors in the distance estimate, is to measure the TFR for S0s in a galaxy cluster. Analysis of such a measurement for the Coma cluster is underway (Hinz, Rix, & Bernstein 1999).
We thank Rachel Somerville for useful discussions, and the referee, Brent Tully, for helpful comments. This work was supported by the US-Israel Binational Science Foundation Grant 94-00300, and by the Alfred P. Sloan Foundation (HWR).
# A more careful estimate of the charm content of 𝜂' ## Abstract We estimate the quantity $`|f_{\eta ^{\prime }}^{(c)}|`$, which is associated with the charm content of the $`\eta ^{\prime }`$ meson, from the experimentally known ratio $`R=B(\psi \to \eta ^{\prime }\gamma )/B(\psi \to \eta _c\gamma )`$. It is shown that, due to the off-shellness of the $`c\overline{c}`$ component of $`\eta ^{\prime }`$, which has been overlooked so far, $`f_{\eta ^{\prime }}^{(c)}`$ is further suppressed. Assuming that the $`\psi \to \eta ^{\prime }\gamma `$ decay is dominated by the $`\psi \to \eta _c`$ transition, we obtain $`|f_{\eta ^{\prime }}^{(c)}|\lesssim 2.4`$ MeV, which could imply that the $`b\to c\overline{c}s`$ mechanism does not play a major role in the $`B\to K\eta ^{\prime }`$ decay mode. preprint: OCHA-PP-134 Various properties of the $`\eta ^{\prime }`$ meson have been the focus of much theoretical attention. Recently, a fresh interest in this pseudoscalar particle has arisen due to the measurement of unexpectedly large branching ratios for the inclusive $`B\to X_s\eta ^{\prime }`$ and exclusive $`B\to K\eta ^{\prime }`$ decay modes by the CLEO collaboration. There have been various attempts at explaining these experimental results within or beyond the Standard Model. For example, the anomalous coupling of $`\eta ^{\prime }`$ to two gluons has been used in conjunction with the QCD penguin to reproduce the observed results. On the other hand, it has been argued that the possible charm content of $`\eta ^{\prime }`$ plus the CKM-favored $`b\to c\overline{c}s`$ transition could be responsible for the large $`\eta ^{\prime }`$ production in B meson decays. In this work, we investigate whether or not $`\eta ^{\prime }`$ contains a sizable charm component. The parameter $`f_{\eta ^{\prime }}^{(c)}`$, which is defined as $$\langle 0|\overline{c}\gamma _\mu \gamma _5c|\eta ^{\prime }(q)\rangle =f_{\eta ^{\prime }}^{(c)}q_\mu ,$$ (1) is estimated by utilizing the observed value of the ratio $`R=B(\psi \to \eta ^{\prime }\gamma )/B(\psi \to \eta _c\gamma )`$. For this purpose, one can write the $`\eta ^{\prime }`$ meson state in terms of its various possible components $$|\eta ^{\prime }\rangle =C_1|\eta _1\rangle +C_8|\eta _8\rangle +C_g|gg\rangle +C_c|\eta _c\rangle +\mathrm{\cdots },$$ (2) where $`|\eta _1\rangle `$ and $`|\eta _8\rangle `$ are flavor $`SU(3)`$ singlet and octet states, respectively, and $`|gg\rangle `$ represents a glueball state. The last term in Eq. (2) is the $`c\overline{c}`$ content of $`\eta ^{\prime }`$, which should have the same quantum numbers as $`\eta _c`$. The probability amplitude of finding $`|\eta ^{\prime }\rangle `$ in any of its components is described by the coefficients $`C_i`$, $`i=1,8,g,c`$, in Eq. (2). Here an explanation about the inclusion of the gluon and charm components, which may appear due to the $`U(1)_A`$ anomaly, is in order. The role of the strong anomaly in the low energy dynamics of the $`\eta ^{\prime }`$ meson was established by ’t Hooft, Witten and Veneziano. In fact, one can write a low energy effective chiral Lagrangian for the meson field which obeys the anomalous conservation law and in which other degrees of freedom (like glueballs etc.) are integrated out (or equivalently, eliminated by using the equations of motion). Therefore, this effective Lagrangian may be expressed purely in terms of the light meson fields, which is useful if we are interested only in the $`\eta ^{\prime }`$ meson. However, to examine various mechanisms for the fast $`\eta ^{\prime }`$ production in two-body B decays, the conventional approach is to write explicitly all possible states that mix with this anomalous pseudoscalar meson. The mixing coefficients, i.e. $`C_i`$, are in principle related if they are calculated from the underlying dynamics.
However, here they are considered as phenomenological parameters to be determined from experimental data. From Eqs. (1) and (2), to leading order in $`1/m_c`$, one obtains $$f_{\eta ^{\prime }}^{(c)}q_\mu =C_c\langle 0|\overline{c}\gamma _\mu \gamma _5c|\eta _c(q)\rangle $$ (3) $$=C_cf_{\eta _c}(q^2=m_{\eta ^{\prime }}^2)q_\mu ,$$ (4) which results in $$f_{\eta ^{\prime }}^{(c)}=C_cf_{\eta _c}(q^2=m_{\eta ^{\prime }}^2).$$ (5) We note that $`q`$ is the momentum of the physical $`\eta ^{\prime }`$ meson and hence $`f_{\eta _c}`$ should be evaluated far off the $`\eta _c`$ mass-shell, as is explicitly shown in Eqs. (3) and (4). This important issue has not been taken into account so far in the estimates of $`f_{\eta ^{\prime }}^{(c)}`$ and is the main point of the present work. In fact, we show that the off-shellness effect leads to the suppression of $`f_{\eta _c}`$ and, consequently, a smaller value for $`f_{\eta ^{\prime }}^{(c)}`$ is obtained. The value of the on-mass-shell $`f_{\eta _c}`$ is extracted from the two-photon decay rate of $`\eta _c`$: $$\mathrm{\Gamma }(\eta _c\to \gamma \gamma )=\frac{4(4\pi \alpha )^2f_{\eta _c}^2(m_{\eta _c}^2)}{81\pi m_{\eta _c}}.$$ (6) Using the measured decay width $`\mathrm{\Gamma }(\eta _c\to \gamma \gamma )=7.5_{-1.4}^{+1.6}`$ keV results in an estimate of $`f_{\eta _c}(m_{\eta _c}^2)=411`$ MeV, where $`m_{\eta _c}^2`$ in the parentheses is to emphasize that the obtained number is for an on-mass-shell $`\eta _c`$. However, as is pointed out in Ref. , a model calculation of the $`\eta _c`$-photon-photon coupling reveals a drastic suppression of the $`\eta _c\to \gamma \gamma `$ transition form factor $`g(q^2)`$ when $`q^2`$ is small compared to its on-shell value, i.e. $`q^2\ll m_{\eta _c}^2`$. In this model, the two-photon decay of $`\eta _c`$ proceeds via a triangle quark loop, which is illustrated in Fig. 1. The corresponding expression can be written in the following form $$T^{\mu \nu }(\eta _c\to \gamma \gamma )=Ng(q^2)ϵ^{\mu \nu \alpha \beta }p_{1\alpha }p_{2\beta },$$ (7) where $`p_1`$ and $`p_2`$ are the four-momenta of the photons and $`q=p_1+p_2`$. The form factor $`g(q^2)`$ is obtained from the quark loop calculation: $$g(q^2)=\int _0^1dx\int _0^{1-x}dy\,\frac{1}{m_c^2-q^2xy}=\{\begin{array}{cc}\frac{2}{q^2}\mathrm{Arcsin}^2\sqrt{\frac{q^2}{4m_c^2}},& 0\le q^2\le 4m_c^2\\ -\frac{2}{q^2}\left[\mathrm{Ln}\left(\sqrt{\frac{q^2}{4m_c^2}}+\sqrt{\frac{q^2}{4m_c^2}-1}\right)-\frac{i\pi }{2}\right]^2,& 4m_c^2\le q^2\end{array}$$ (8) where $`m_c`$ is the charm quark mass. In Fig. 2, the variation of $`g(q^2)/g(m_{\eta _c}^2)`$ in the range $`m_{\eta ^{\prime }}^2\le q^2\le m_{\eta _c}^2`$ is depicted. We observe that for $`q^2\sim m_{\eta ^{\prime }}^2`$, the form factor suppression is quite substantial. In writing Eq. (7), the constants are all swept into the factor $`N`$, which can be obtained using the requirement that for $`q^2=m_{\eta _c}^2`$ Eq. (7) should yield the experimentally measured decay rate $`\mathrm{\Gamma }(\eta _c\to \gamma \gamma )`$. Consequently, we obtain the following form for the $`\eta _c`$-$`\gamma \gamma `$ transition amplitude: $$A(\eta _c\to \gamma \gamma )=\frac{16i\sqrt{m_{\eta _c}\mathrm{\Gamma }(\eta _c\to \gamma \gamma )}}{\pi ^{3/2}}g(q^2)ϵ^{\mu \nu \alpha \beta }ϵ_\mu (p_1)ϵ_\nu (p_2)p_{1\alpha }p_{2\beta }.$$ (9) Here $`ϵ(p_i)`$ is the polarization of the photon with momentum $`p_i`$, and we assumed weak binding for charmonium, i.e. $`m_{\eta _c}\approx 2m_c`$. Eqs.
(6) and (8) lead to the following result: $$f_{\eta _c}(q^2=m_{\eta ^{\prime }}^2)=\frac{g(m_{\eta ^{\prime }}^2)}{g(m_{\eta _c}^2)}f_{\eta _c}(m_{\eta _c}^2)$$ (10) $$=\frac{m_{\eta _c}^2}{m_{\eta ^{\prime }}^2}\frac{\mathrm{Arcsin}^2\sqrt{\frac{m_{\eta ^{\prime }}^2}{m_{\eta _c}^2}}}{(\frac{\pi }{2})^2}f_{\eta _c}(m_{\eta _c}^2),$$ (11) where the last expression is obtained by using Eq. (8). As a result, we observe that $`f_{\eta _c}`$ on the $`\eta ^{\prime }`$ mass-shell, $$f_{\eta _c}(q^2=m_{\eta ^{\prime }}^2)\approx 0.42f_{\eta _c}(m_{\eta _c}^2)\approx 172\,\mathrm{MeV},$$ (12) is reduced to less than 50% of its value for an on-mass-shell $`\eta _c`$. To proceed with the numerical estimate of $`f_{\eta ^{\prime }}^{(c)}`$ via Eq. (5), we use the branching ratios $`B(\psi \to \eta ^{\prime }\gamma )=(4.31\pm 0.30)\times 10^3`$ and $`B(\psi \to \eta _c\gamma )=(1.3\pm 0.4)\times 10^2`$, which are experimentally known. Assuming that the former decay mode dominantly occurs through the $`\psi `$ transition to the $`\eta _c`$ component of $`\eta ^{\prime }`$ results in $$R=\frac{B(\psi \to \eta ^{\prime }\gamma )}{B(\psi \to \eta _c\gamma )}=C_c^2\frac{(m_\psi ^2-m_{\eta ^{\prime }}^2)^3}{(m_\psi ^2-m_{\eta _c}^2)^3}.$$ (13) We evaluate $`C_c`$ by inserting the central values of the branching ratios in Eq. (13), which yields $$|C_c|=0.014,$$ (14) and consequently leads to our estimate for $`|f_{\eta ^{\prime }}^{(c)}|`$: $$|f_{\eta ^{\prime }}^{(c)}|\lesssim 2.4\,\mathrm{MeV}.$$ (15) We note that the stringent bound in Eq. (15) is considerably lower than the estimated range of (50–180) MeV for $`f_{\eta ^{\prime }}^{(c)}`$ in Refs. and . (Some recent estimates along the same line point to smaller results.) The value of $`|f_{\eta ^{\prime }}^{(c)}|`$ obtained by us is less than half of the estimates in Refs. and , due to the fact that the off-shellness of the $`c\overline{c}`$ component of $`\eta ^{\prime }`$ has been taken into account in our evaluation. At the same time, the estimate given in Eq. (15) is within the range $`-65\,\mathrm{MeV}\le f_{\eta ^{\prime }}^{(c)}\le 15\,\mathrm{MeV}`$ presented in Ref. , based on an analysis of the transition form factor data, which is also consistent with $`f_{\eta ^{\prime }}^{(c)}=0`$. In conclusion, we estimated the parameter $`f_{\eta ^{\prime }}^{(c)}`$, which is related to the charm content of $`\eta ^{\prime }`$, by using experimental inputs and considering the fact that the pseudoscalar $`c\overline{c}`$ component of $`\eta ^{\prime }`$ is highly off mass-shell. Our stringent bound could imply that the decay mode $`B\to K\eta ^{\prime }`$ does not receive a significant contribution from the $`b\to c\overline{c}s`$ transition. Acknowledgement We would like to thank V. A. Miransky and V. Elias for useful discussions. M. A. acknowledges support from the Science and Technology Agency of Japan. E. K. acknowledges support from the Japanese Society for the Promotion of Science.
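As a numerical cross-check of the chain Eq. (8) → Eq. (12) → Eqs. (13)–(15) above, the quoted numbers can be reproduced in a few lines. The masses and the 411 MeV input are taken from the text; the script itself, including its variable names, is ours.

```python
import numpy as np

m_psi, m_etap, m_etac = 3.097, 0.958, 2.980   # GeV
mc = m_etac / 2.0                             # weak-binding assumption

def g(q2):
    """Triangle-loop form factor, Eq. (8), branch 0 <= q^2 <= 4 mc^2."""
    return (2.0 / q2) * np.arcsin(np.sqrt(q2 / (4.0 * mc**2)))**2

ratio = g(m_etap**2) / g(m_etac**2)           # ~0.42, cf. Eq. (12)
f_etac_offshell = ratio * 411.0               # ~172 MeV

R = 4.31e-3 / 1.3e-2                          # ratio of branching ratios
Cc = np.sqrt(R * ((m_psi**2 - m_etac**2) /
                  (m_psi**2 - m_etap**2))**3) # invert Eq. (13): ~0.014
print(ratio, f_etac_offshell, Cc, Cc * f_etac_offshell)  # last ~2.3-2.4 MeV
```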
# Pulsars identified from the NRAO VLA Sky Survey ## 1 Introduction Compared with other types of radio sources, pulsars are known to have strong polarization, even up to 100% if one observes them with high time resolution. Pulsar polarization is smeared to some extent if they are observed as continuum point sources over a duration much longer than a pulsar period, mainly because of the fast swing of the polarization angle across a pulse profile. However, we will show in this paper that this is not as serious as generally believed. Pulsars have high (birth) velocities, on average 450 km s<sup>-1</sup> (Lyne & Lorimer 1994) and maybe up to 1600 km s<sup>-1</sup> for individuals (e.g. Cordes & Chernoff 1998), much faster than those of other types of stars (typically a few tens of km s<sup>-1</sup>). The high velocity was probably caused by an asymmetric kick during the supernova explosion in which the pulsar was born. This leads to large proper motions for (nearby) pulsars. However, measuring the proper motion is not an easy task, since the position of a pulsar has to be precisely measured at well-separated epochs. Up to now, there are 96 pulsars with proper motion measurements (e.g. Taylor, Manchester & Lyne 1993; Fomalont et al. 1997). Recently, the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey (NVSS) has been completed; it covers the sky north of Dec(J2000) $`=-40\mathrm{°}`$ at 1.4 GHz (Condon et al. 1998). The survey detected more than 1.8 million sources, with polarization measurements, down to a flux density limit of about 2.5 mJy. The observations have a resolution of $`45^{\prime \prime }`$, but the positional accuracy is a few arcsec for weak sources, and much better for strong sources. The observations were made with two IF channels at 1.365 and 1.435 GHz with an effective bandwidth of 42 MHz each. Most sources in the NVSS were observed in three pointings of 23 sec each. The final sky map is the weighted sum from these pointings (Condon et al. 1998). We have tried to identify the pulsars in the NVSS catalog, and then to investigate the pulsar polarization properties and proper motions from continuum observations. In the sky region covered by the NVSS, there are 520 known pulsars according to the updated pulsar catalog of Taylor, Manchester & Lyne (1993; the updated catalog was kindly provided by Manchester). Using the latest version of the NVSS catalog (with 1814748 entries), we identified 97 strong pulsars by positional coincidence. While revising this paper for publication, we noticed that similar identification work had been done by Kaplan et al. (1998), but they emphasized other aspects, such as position accuracy, scintillation effects and completeness of detections. Compared to Kaplan et al. (1998), we obtained 24 further new identifications. In the following, we will not repeat their work, but present our results in Sect. 2. In Sect. 3 we briefly discuss scintillation (Sect. 3.1), pulsar polarization properties (Sect. 3.2), and proper motions (Sect. 3.3). We compared the pulsar positions with those from the pulsar catalog if the epochs were separated by more than 5 years, and obtained upper limits on the proper motions of 18 pulsars, including one pulsar which had no previous proper motion measurement. ## 2 Identification and Results We took the positions of pulsars from the updated catalog of Taylor et al. (1993).
PSR names in J2000, and B1950 if applicable, are given in Columns (1) and (2) of Table 1. Their positions are given in Columns (3) and (4), generally with an accuracy better than $`0.1^{\prime \prime }`$, but occasionally up to a few arcsec. These positions were determined by timing observations or interferometric measurements at the epoch for the position given in Column (5). (There are two epochs in the pulsar catalog, one (“pepoch”) for the pulsar period and period derivatives and the other (“epoch”) for the pulsar position; if “epoch” was not available, we used “pepoch”, as instructed by Manchester, private communication.) For comparison, we list in Column (6) the flux density at 1.4 GHz from the pulsar catalog, which was normally obtained from the average of several pulsar observation sessions to overcome scintillation effects. We searched for radio sources in the NVSS catalog within a $`30^{\prime \prime }`$ angular distance around each of the 520 pulsar positions. Only 106 radio sources were found to match the positions and are probably related to pulsars. The positions of the NVSS sources are listed in Columns (7) and (8). The angular offset from the pulsar position, “$`\mathrm{\Delta }`$”, in arcsec is given in Column (9). The flux density and polarization parameters of the NVSS sources extracted from the NVSS catalog are listed in Columns (10)–(13). A blank in these columns indicates no significant detection above the sensitivity limit of linear polarization of the NVSS (∼0.5 mJy). We mark in Column (14) any further consideration made during the identifications. Note that the epochs for the pulsar positions in the pulsar catalog differ from that of the NVSS observations. However, even if a pulsar has the largest proper motion, e.g. 400 mas per year, then after 20 years the position offset would be $`8^{\prime \prime }`$. So, our search within $`30^{\prime \prime }`$ should not miss any known pulsar if it is detectable by the NVSS. (We missed 4 pulsars which appear in Table 1 of Kaplan et al. (1998): PSRs B1823−11, B1900−06, B1901+10, and B2323+63; their position offsets from the NVSS sources or their position uncertainties are too large ($`>30^{\prime \prime }`$) to allow a significant assessment. For the same reason, we removed J1848+0651, which was included by Kaplan et al. (1998), from our sample.) On the other hand, the NVSS was carried out over a long period, from ∼1993 to ∼1996. We will take an approximate epoch of MJD 49718 (∼1995.0) in the following discussion. There should be only a very small position offset ($`<1^{\prime \prime }`$) caused by pulsar proper motions, if any, over the NVSS observation period, much smaller than the position uncertainties of the NVSS sources listed in Tab. 1. The first step of the identification is to check the position offset $`\mathrm{\Delta }`$. At this stage, we ignored proper motions. If $`\mathrm{\Delta }`$ is smaller than twice the total position uncertainty, i.e. $`\mathrm{\Delta }\le 2\sqrt{\sigma _{nvss}^2+\sigma _{psrcat}^2}`$, then we take the NVSS source as a positive identification of the pulsar. This process yielded the first 90 positive detections. If a pulsar position was obtained at an epoch several years earlier, such a pulsar must have had only a very small proper motion, so that the position offset is not significant.
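The positional test just described is simple to express in code. Here is a minimal sketch (the function names are ours), using a small-angle sky separation and the 2σ criterion quoted above.

```python
import numpy as np

def offset_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle sky separation in arcsec; RA/Dec in degrees."""
    dra = (ra1 - ra2) * np.cos(np.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return 3600.0 * np.hypot(dra, ddec)

def is_match(delta, sig_nvss, sig_psrcat):
    """Positive-identification criterion used in the text:
    Delta <= 2 * sqrt(sigma_nvss^2 + sigma_psrcat^2)."""
    return delta <= 2.0 * np.hypot(sig_nvss, sig_psrcat)
```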
Now we consider the remaining 16 sources more carefully; they are marked with “?” in Column (14) of Table 1. Nine confusion cases: (a). PSR B0531+21 (the Crab pulsar) and PSR B1951+32 are confused with their associated supernova remnants. We mark them in the Notes, i.e. Column (14) of Table 1, with “SNR”. (b). PSRs B1112+50, B1829−10 and B1831−00 are confused with nearby strong sources which have much larger flux densities (more than 10 times) than those in the pulsar catalog. One NVSS source was detected $`28.1^{\prime \prime }`$ (formally $`7.8\sigma `$) away from PSR B1920+21, an offset too large to be due to proper motion for this distant pulsar (distance ∼12.5 kpc). We consider these detections unlikely and mark them with “no” in the Notes, standing for “no detection”. (c). PSRs B1744-24A and J2129+1210A (and maybe also B1745-20, as indicated by Kaplan et al. 1998) are confused with other continuum sources in their host globular clusters, Terzan 5 and M15 (and NGC 6440?), respectively. They are marked with “glbc”. (d). PSR B1718-35 is a marginal case, possibly confused with a source $`19^{\prime \prime }.4`$ away (a 4.7$`\sigma `$ position offset) with a flux density of 27.7 mJy (pulsar: 10.0 mJy). Seven detection cases: (a). The position offset for PSR B1831−04 is only $`4.75^{\prime \prime }`$ (formally $`2.3\sigma `$, or $`2.6\sigma `$ rather than $`14\sigma `$ using the new position in Kaplan et al. 1998), much smaller than the beam size of the NVSS. Although Kaplan et al. (1998) suggested otherwise, we believe the pulsar is detected. The consistent flux densities of the pulsar and the NVSS source support the identification. We mark such a case as “yes” in the Notes. (b). PSRs B0823+26, B1133+16, B2016+28 (and B2154+40) have small position offsets caused by proper motion (see Sect. 3.3). (c). PSR B1820-31 is detected with a position offset of $`12.5^{\prime \prime }=2.6\sigma `$, as supported by the consistent flux density and, more importantly, by the high linear polarization of the NVSS source. (d). A marginal case is the strong pulsar PSR B2020+28. The NVSS detected a very weak source $`2.2\sigma `$ away, too weak for a firm identification (see further discussion below). However, the high linear polarization of the source suggests that it is the pulsar. We mark this case “yes?” in the Notes. In all, the NVSS detected 97 pulsars, including the 73 which appeared in Kaplan et al. (1998) and 24 new identifications.<sup>3</sup><sup>3</sup>3 The PSR J1615-39 in Table 1 of Kaplan et al. (1998) is missing from the pulsar catalog available to us. ## 3 Discussion ### 3.1 Scintillation and undetected pulsars The VLA measurements of the flux densities $`S_{1.4}`$ of most identified pulsars, averaged over about 84 MHz of bandwidth and 3$`\times `$23 sec in time, are comparable to the flux densities published in Lorimer et al. (1995) and Gould & Lyne (1998). They are generally within a factor of 2 of the published values (see Fig. 1), but sometimes differ by a factor of 3 or more. Most of the undetected pulsars (∼400) have flux densities below 2 or 3 mJy. Interstellar scintillation (e.g. Gupta et al. 1994) both helps and hinders the detections (Cordes & Lazio 1991). Some pulsars with flux densities of less than 2 mJy in the pulsar catalog have been detected in the NVSS with a larger flux density. The scintillation effect is more pronounced for strong pulsars.
For example, PSR B2020+28 should be as strong as 38.0 mJy, but in the NVSS it appears as a highly polarized source of only $`3.6\pm 0.5`$ mJy. Among the 61 pulsars with known flux densities larger than 5 mJy, about one fourth were missed by the NVSS (as listed in Table 2), some due to scintillation, some due to confusion (J. Condon, private communication). ### 3.2 Polarization When pulsars are observed as continuum radio sources, the polarized intensity, $`L`$, and the polarization position angle, $`PA`$, are calculated from the integrated $`Q`$ and $`U`$ values of the final images, i.e., over all the observation time and the bandwidth, so that $$L_{\mathrm{nvss}}=\sqrt{\left(\int _tQ\right)^2+\left(\int _tU\right)^2},\qquad (1)$$ and $$PA_{\mathrm{nvss}}=\frac{1}{2}\frac{180}{\pi }\mathrm{arctan}\left(\frac{\int _tU}{\int _tQ}\right).\qquad (2)$$ In pulsar observations, however, the total linearly polarized intensity is $$L_{\mathrm{psr}}=\int _t\sqrt{Q^2+U^2},\qquad (3)$$ and the polarization position angle is $$PA_{\mathrm{psr}}=\frac{1}{2}\frac{180}{\pi }\mathrm{arctan}\left(\frac{U}{Q}\right)\qquad (4)$$ for each pulse longitude. The $`PA`$ often swings by more than $`90\mathrm{°}`$ over a pulse. Since a positive value of $`Q`$ or $`U`$ in one part of a pulse may cancel a negative value in another part, the pulsar emission is partially depolarized in continuum observations. Furthermore, bandwidth depolarization occurs for pulsars with high rotation measures. Therefore the $`L/S`$ values in Table 1 should be taken as lower limits on the pulsar polarization. Even so, pulsars are still the most highly polarized sources compared to other kinds of objects (see Fig. 2). As seen from Table 1, some pulsars retain very high linear polarization even after the smearing and depolarization, such as PSR B1742-30 ($`L/S\sim 90\%`$) and PSR B1929+10 ($`L/S\sim 63\%`$). Since the NVSS has very accurate absolute position angle calibration ($`<0.2\mathrm{°}`$), the well-measured $`PA`$ of a few pulsars (with errors of $`2\mathrm{°}`$ or less) may help to establish an absolute $`PA`$ calibration for pulsar observations. One example is shown in Fig. 3. First, using the VLA measurements of $`PA`$ at 1400 MHz and the $`RM`$ values, we calculated the pulse-averaged $`PA`$ at the observation frequency. Second, from the pulsar observations, we obtained the $`PA`$ of the calibration pulsars using Eq. (2), applied to the pulse profiles (including the interpulse if applicable) of the Stokes parameters Q and U. Third, we compared the two to obtain an offset, which represents the instrumental $`PA`$ offset, and used it to calibrate all pulsar observations. In Table 3 we list 5 pulsars which can be used for calibration purposes. All of them have strong linearly polarized intensity that can be easily detected, and their rotation measures $`RM`$ are either quite small (∼10 rad m<sup>-2</sup> or less) or accurately measured ($`\sigma _{\mathrm{RM}}<1`$ rad m<sup>-2</sup>). None of them shows mode-changing (e.g. PSR B1237+25 and PSR B1822+09) or complicated variations of $`PA`$ across the profile (e.g. PSR B1933+16). All pulsars in Table 3 satisfy $`\sigma _{\mathrm{PA}}+\sigma _{\mathrm{RM}}\delta (\lambda ^2)<3\mathrm{°}`$, where $`\delta (\lambda ^2)`$ is the difference of the wavelengths squared, taken here as 1.0.
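The difference between Eqs. (1)–(2) and Eqs. (3)–(4) is easy to demonstrate numerically. The sketch below builds a toy pulse with a swinging position angle and compares the continuum-style and pulsar-style polarized intensities; the Gaussian profile and the linear 120° PA swing are illustrative assumptions, not fits to any real pulsar.

```python
import numpy as np

# Toy pulse: Gaussian total-intensity profile, fully linearly polarized,
# with a position angle that swings linearly by 120 deg across the window.
phi = np.linspace(-1.0, 1.0, 1000)           # pulse longitude (arbitrary units)
I = np.exp(-0.5 * (phi / 0.2) ** 2)          # intensity profile
PA = np.radians(60.0 * phi)                  # PA in radians (illustrative swing)
Q = I * np.cos(2 * PA)
U = I * np.sin(2 * PA)

# Continuum (NVSS-style) polarized intensity, Eqs. (1)-(2):
L_nvss = np.hypot(np.trapz(Q, phi), np.trapz(U, phi))
# Pulsar-style polarized intensity, Eq. (3):
L_psr = np.trapz(np.hypot(Q, U), phi)

print(L_nvss / np.trapz(I, phi))   # fractional polarization seen in continuum
print(L_psr / np.trapz(I, phi))    # = 1 here, since the toy pulse is fully polarized
```

Even with a substantial PA swing, the continuum measurement retains a sizable polarized fraction, consistent with the claim above that the smearing is not as serious as generally believed.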
### 3.3 Proper motions Pulsar proper motions are very important quantities to measure, since from them pulsar velocities can be determined. Pulsar timing can be used to determine the proper motions of millisecond pulsars because of their great timing stability (e.g. Nice & Taylor 1995). However, for most pulsars the proper motion can only be measured by determining the pulsar position precisely at two or more well-separated epochs using interferometry (e.g. Fomalont et al. 1997). We compared the pulsar positions given in the pulsar catalog with those from the NVSS, whose epoch is simply taken as MJD 49718, and calculated pulsar proper motions where possible. The results are listed in Table 4. Pulsars with proper motion uncertainties larger than 200 mas yr<sup>-1</sup> have been removed. Because of the large uncertainty of the NVSS positions, we obtained only a few significant measurements: the proper motion in declination of PSR B1133+16, and those in right ascension of PSRs B0823+26 and B2016+28. While the former two are consistent with the previous measurements made by Lyne, Anderson & Salter (1982), the latter is marginally inconsistent. Cross-checking with Table 2 of Taylor, Manchester & Lyne (1993), we found that all other measurements in Table 4 are consistent with (though poorer than) those given in the pulsar catalog, except for one new upper limit for PSR B0031−07. VLA A-array observations of the pulsars in Table 1 should provide much more accurate positions, and hence could produce the first measurements of the proper motions of about 20 pulsars. PSR B0031−07 is a nearby pulsar, at a distance of 0.68 kpc. Its proper motion upper limit corresponds to a velocity of $`470\pm 346`$ km s<sup>-1</sup>, quite normal according to the pulsar velocity distribution (Lyne & Lorimer 1994). ## 4 Summary We identified 97 strong pulsars in the NVSS catalog and presented their flux densities at 1.4 GHz. The linear polarization parameters are independent measurements, though defined slightly differently (see Eqs. (1) and (2) above) from those obtained in normal pulsar observations. Interstellar scintillation both helps and hinders the detection of pulsars. Table 1 presents all known pulsars detected by the NVSS. Well-calibrated VLA measurements of the average polarization angles of 5 strong pulsars can be used for $`PA`$ calibration in pulsar observations. By comparing the pulsar positions from the pulsar catalog with those from the NVSS, we obtained a proper motion upper limit for PSR B0031−07. ###### Acknowledgements. We thank the anonymous referee for constructive suggestions which helped to improve the paper significantly, and Drs. Dunc Lorimer, Elly Berkhuijsen, R. Wielebinski and Paul Arendt for their helpful comments. JLH is grateful for the hospitality of Prof. R. Wielebinski and Dr. R. Beck during his stay at the MPIfR, Bonn, as an exchange scholar between the Chinese Academy of Sciences (CAS) and the Max-Planck-Gesellschaft between 1997 May and 1998 August. He also thanks the National Natural Science Foundation of China and the Astronomical Committee of the CAS for continuous support.
# Large Thermopower in a Layered Oxide NaCo2O4 ## Abstract A transition-metal oxide NaCo<sub>2</sub>O<sub>4</sub> is a layered oxide in which CoO<sub>2</sub> and Na layers alternately stack along the $`c`$ axis. Recently we found that this compound shows a large thermopower together with a low resistivity, a combination comparable to that of Bi<sub>2</sub>Te<sub>3</sub>. The negative transverse magnetoresistance and the strongly temperature-dependent Hall coefficient suggest that electron correlation dominates the conduction mechanism in NaCo<sub>2</sub>O<sub>4</sub>. ## 1 Introduction Transition-metal oxides are known as a large class of materials in which the band width and the carrier density can be controlled by cation substitution. They exhibit various physical properties, e.g., ferroelectricity, magnetism and superconductivity. Since the discovery of the high-temperature superconductors, transition-metal oxides have attracted renewed interest, and a number of new materials and new phenomena have been discovered in the past decade. The search for new thermoelectric (TE) materials has also come to a new stage, where ternary or quaternary compounds are extensively examined. A filled skutterudite is a prime example of the newly discovered TE materials. Very recently we discovered that a layered oxide NaCo<sub>2</sub>O<sub>4</sub> shows large thermopower and low resistivity, with a power factor comparable to that of Bi<sub>2</sub>Te<sub>3</sub>. This strongly suggests that NaCo<sub>2</sub>O<sub>4</sub> is a candidate TE transition-metal oxide. NaCo<sub>2</sub>O<sub>4</sub> is an old material, synthesized in the 1970s, although its metallic conduction was reported only recently. Figure 1 shows a schematic view of the oxygen network of the CoO<sub>2</sub> block in NaCo<sub>2</sub>O<sub>4</sub>. Edge-sharing distorted octahedra (with Co at their centers) form a triangular lattice. Na cations and CoO<sub>2</sub> blocks alternately stack along the $`c`$ axis to make a layered structure. A characteristic feature is that the mobility of NaCo<sub>2</sub>O<sub>4</sub> is ten times smaller than that of Bi<sub>2</sub>Te<sub>3</sub>. In other words, the carrier density of NaCo<sub>2</sub>O<sub>4</sub> is ten times larger, implying that a small carrier density is not the origin of the large thermopower. Hence there should be another mechanism causing the large thermopower, and elucidating the conduction mechanism in NaCo<sub>2</sub>O<sub>4</sub> may provide a new principle for designing TE materials. Here we report measurements of various transport parameters of NaCo<sub>2</sub>O<sub>4</sub> single crystals, and discuss the anomalous conduction mechanism quantitatively. ## 2 Experimental Single crystals of NaCo<sub>2</sub>O<sub>4</sub> were prepared by a NaCl-flux technique with Al<sub>2</sub>O<sub>3</sub> crucibles. The as-grown crystals were washed in water to remove the NaCl flux. The crystals were thin along the $`c`$ axis, with a typical dimension of 1.5$`\times `$1.5$`\times `$0.02 mm<sup>3</sup>. Transport parameters were measured only along the in-plane direction, since the crystals were very thin along the out-of-plane direction. Thermopower was measured in a configuration where one edge of the sample was pasted on a sapphire plate and the other on a sheet heater. The temperature ($`T`$) was monitored by two diode thermometers attached to the edges. The contribution of the copper leads was carefully subtracted.
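For concreteness, the last step of such a measurement can be sketched as follows. This is purely illustrative and not the authors' actual reduction procedure; the function names, the sign convention, and the copper-lead thermopower value (a few µV/K near room temperature for typical Cu wire) are assumptions.

```python
import numpy as np

def sample_thermopower(dV, T_hot, T_cold, S_Cu):
    """Thermopower of the sample (V/K) from the voltage dV (V) measured
    across the copper leads and the temperatures (K) of the two edges.
    What is measured is the sample/Cu thermocouple, so the assumed lead
    contribution S_Cu must be added back."""
    dT = T_hot - T_cold
    S_measured = -dV / dT          # assumed convention: S > 0 for hole-like carriers
    return S_measured + S_Cu

# Hypothetical numbers: 1.05 mV over a 10 K difference, S_Cu ~ 1.8 uV/K at 300 K
print(sample_thermopower(-1.05e-3, 305.0, 295.0, 1.8e-6))  # ~1.07e-4 V/K ~ 107 uV/K
```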
Transverse magnetoresistance and the Hall coefficient were measured in a field ($`H`$) swept from 0 to 8 T, using an ac-bridge nano-ohm-meter (Linear Research LR201). ## 3 Results Figure 2(a) shows the resistivity ($`\rho `$) of NaCo<sub>2</sub>O<sub>4</sub> single crystals. The magnitude of the resistivity ranges from 200 to 600 $`\mu \mathrm{\Omega }`$cm at 300 K, which is possibly due to uncontrollable disorder or nonstoichiometry of the Na content. We observed that $`\rho `$ of polycrystalline Na<sub>1.1+x</sub>Co<sub>2</sub>O<sub>4</sub> monotonically increases with $`x`$. In contrast to $`\rho `$, the thermopower ($`S`$) is less sensitive to the crystal quality [Fig. 2(b)]. It should be noted that the magnitude of $`S`$ (100 $`\mu `$V/K at 300 K) is one order of magnitude larger than that of conventional metals. As is often observed in conventional metals, $`S`$ is roughly proportional to $`T`$, which is consistent with the fact that $`\rho `$ exhibits metallic conduction from room temperature down to 1.5 K. The Hall coefficient ($`R_H`$) is plotted as a function of $`T`$ in Fig. 3. The magnitude of $`R_H`$ at 4.2 K is 2$`\times `$10<sup>-3</sup> cm<sup>3</sup>/C, corresponding to a carrier density of the order of 10<sup>21</sup> cm<sup>-3</sup>. This is consistent with the fact that NaCo<sub>2</sub>O<sub>4</sub> has a ten times larger carrier density than Bi<sub>2</sub>Te<sub>3</sub>. Note that $`R_H`$ shows the opposite sign to $`S`$. This clearly indicates that the electronic states cannot be understood within a simple parabolic band picture. Another feature is the remarkable $`T`$ dependence. In conventional metals, $`1/R_H`$ is proportional to the carrier density (or the density of states at the Fermi energy), and is independent of $`T`$ in the lowest-order approximation. Figure 4 shows the transverse magnetoresistance (MR), where a negative MR is observed at 4.2 K. As $`T`$ is lowered, a positive MR (roughly proportional to $`H^2`$) overlaps the negative MR. A possible origin of the negative MR is weak localization or scattering by spin fluctuations. Considering that the metallic conduction continues down to 1.55 K (see the inset of Fig. 4), the former is unlikely to occur. Magnetic measurements have shown that spin fluctuations may exist in NaCo<sub>2</sub>O<sub>4</sub>: the susceptibility of NaCo<sub>2</sub>O<sub>4</sub> shows a Curie-Weiss-like $`T`$ dependence, and the Knight shifts of the Na and Co sites show different $`T`$ dependences. We further note that $`\rho `$ depends on $`T`$ even below 4.2 K, which indicates that the carriers are not scattered by ordinary phonons.
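As a rough consistency check on the carrier density quoted above, one may apply the single-band relation $`n=1/(R_He)`$ (an assumption that should be treated with caution here, since, as noted, a simple parabolic band fails to explain the sign of $`R_H`$):

$$n=\frac{1}{R_He}=\frac{1}{(2\times 10^{-3}\mathrm{cm}^3/\mathrm{C})\times (1.6\times 10^{-19}\mathrm{C})}\approx 3\times 10^{21}\mathrm{cm}^3,$$

indeed of the order of 10<sup>21</sup> cm<sup>-3</sup>, roughly ten times the carrier density of Bi<sub>2</sub>Te<sub>3</sub>.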
## 4 Discussion ### 4.1 Electric conduction mechanism As seen in the previous section, the large thermopower, the $`T`$-dependent Hall coefficient, and the negative magnetoresistance are difficult to explain within a conventional picture based on band theory and electron-phonon scattering. All these results strongly suggest that, as in the high-$`T_c`$ superconductors, strong electron correlation is important for the electric conduction in NaCo<sub>2</sub>O<sub>4</sub>. We can expect strong correlation to enhance $`|S|`$ under certain conditions. Since the diffusive part of $`S`$ corresponds to the transport entropy, a larger electronic specific heat can give a larger $`S`$. Thus $`S`$ would be enhanced if the carriers could couple to some additional entropy, such as optical phonons, spin fluctuations, or orbital fluctuations. A similar scenario has been applied to heavy fermions and valence-fluctuation systems, some of which show large $`S`$. The $`T`$ dependence of $`\rho `$ and $`S`$ of NaCo<sub>2</sub>O<sub>4</sub> is, at least qualitatively, consistent with the theories of 2D metals with spin fluctuations by Moriya, Takahashi and Ueda and by Miyake and Narikiyo. ### 4.2 Effect of layered structure It should be noted that half of the Na sites are randomly vacant; that is, the Na layer is highly disordered. Thus the mean free path (MFP) of phonons will be as short as the lattice spacing, which means that the lattice thermal conductivity is minimized. In fact, the thermal conductivity of polycrystalline NaCo<sub>2</sub>O<sub>4</sub> is as small as that of Bi<sub>2</sub>Te<sub>3</sub>, and it is hardly affected by cation substitution. On the other hand, the electric conduction is determined by the CoO<sub>2</sub> block, which includes little disorder. In low-dimensional correlated metals, the carriers are often confined to the conducting region and are hardly affected by outside disorder. This is known as “confinement”. As a result, the electric conduction remains quite good, together with the minimized lattice thermal conductivity. In other words, the MFP of the carriers can be much longer than the MFP of the phonons in NaCo<sub>2</sub>O<sub>4</sub>. We propose that strongly correlated layered conductors are promising as new TE materials, in the sense that the lattice thermal current and the electric current can flow along different paths in the crystal. In this context we may call them a new type of ‘electron crystal and phonon glass’. ## 5 Summary In summary, we prepared single crystals of the metallic layered transition-metal oxide NaCo<sub>2</sub>O<sub>4</sub>, and measured the thermoelectric power, the Hall coefficient, and the magnetoresistance. All the measured quantities are unconventional: (i) the thermoelectric power is unusually large (100 $`\mu `$V/K at 300 K), (ii) the Hall coefficient strongly depends on temperature, and (iii) the magnetoresistance is negative at 4.2 K. These results indicate that the conduction mechanism in NaCo<sub>2</sub>O<sub>4</sub> cannot be explained by a conventional band picture with electron-phonon scattering. ## Acknowledgements The authors would like to thank M. Takano, S. Nakamura, K. Fukuda and K. Kohn for fruitful discussions. They also thank H. Yakabe, K. Nakamura, K. Fujita and K. Kikuchi for collaboration.
# AdS/CFT and the Information Paradox hep-th/9903237, BROWN-HET-1176, RH-02-99, PUPT-1839 David A. Lowe, Department of Physics, Brown University, Providence, RI 02912, USA (lowe@het.brown.edu) and Lárus Thorlacius, Department of Physics, Princeton University, Princeton, NJ 08544, USA, and University of Iceland, Science Institute, Dunhaga 3, 107 Reykjavík, Iceland (lth@raunvis.hi.is) Abstract The information paradox in the quantum evolution of black holes is studied within the framework of the AdS/CFT correspondence. The unitarity of the CFT strongly suggests that all information about an initial state that forms a black hole is returned in the Hawking radiation. The CFT dynamics implies an information retention time of order the black hole lifetime. This fact determines many qualitative properties of the non-local effects that must show up in a semi-classical effective theory in the bulk. We argue that no violations of causality are apparent to local observers, but the semi-classical theory in the bulk duplicates degrees of freedom inside and outside the event horizon. Non-local quantum effects are required to eliminate this redundancy. This leads to a breakdown of the usual classical-quantum correspondence principle in Lorentzian black hole spacetimes. March, 1999 1. Introduction Hawking’s information paradox is an important theoretical problem which must be resolved by any theory that claims to provide a fundamental description of quantum gravity. The usual argument for information loss in black hole evolution is made in the context of a low-energy effective theory, defined on a set of smooth spacelike hypersurfaces in a geometry describing the formation and subsequent evaporation of a large mass black hole. The key assumption in the argument is that the effective theory is a conventional local quantum field theory. If, on the other hand, we assume that black hole evolution is a unitary process, we are led to the conclusion that spacetime physics is non-local on macroscopic length scales. The nature of this non-locality must be subtle, for it is certainly not apparent in our everyday low-energy activities. It should only become manifest under extreme kinematic circumstances, such as those that relate inertial and fiducial observers in a black hole geometry. Some evidence for the required sort of non-local behavior has been found in string theory. The commutator of operators corresponding to an observer inside the event horizon and an observer who measures low-energy Hawking radiation well outside the black hole is non-vanishing in string theory, in spite of the fact that these observers are spacelike separated. These observers are related by a trans-Planckian boost, but if one instead considers a pair of spacelike separated observers with low-energy kinematics, the effect is strongly suppressed and one recovers conventional local causality. These results suggest that the usual reasoning for information loss fails in the context of string theory, but they have a limited range of validity, having been obtained from perturbative, off-shell calculations in light-front string field theory. Similar arguments can also be made by convolving appropriate wavepackets with the S-matrix. In this case a macroscopic characteristic length scale appears in the amplitude, indicating non-local effects. The analyticity of the S-matrix is, however, consistent with conventional causality.
Recent progress towards a non-perturbative formulation of string theory has provided new tools with which to explore these issues. In the present paper we re-examine the information problem from a modern point of view, using in particular the AdS/CFT correspondence, which states that string theory in a certain background spacetime is equivalent to a supersymmetric gauge theory that lives on the boundary of the spacetime. The fact that the gauge theory is unitary strongly supports the view that no information is lost in the quantum mechanical evolution of black holes, but it is less clear how the unitarity is implemented from the spacetime point of view. In section 2 we briefly review the AdS/CFT correspondence in Euclidean space, and discuss the definition of the Lorentzian correspondence by straightforward analytic continuation. Black hole backgrounds, and their CFT descriptions, are described in section 3. A simple gedanken experiment is considered in section 4, which allows us to infer qualitative features of the non-locality that must be present in the effective theory in the bulk. The dual gauge theory description implies an information retention time for the black hole, which plays a crucial role in these arguments. We conclude that local observers inside or outside the black hole see no violation of causality. We argue, however, that the semi-classical effective theory duplicates degrees of freedom inside and outside the event horizon, and that non-local quantum effects are required to eliminate this redundancy. Classically there are no such non-local effects, so the usual classical-quantum correspondence principle breaks down in black hole spacetimes. 2. Review of AdS/CFT Correspondence As described in [5,6], the natural relation between CFT and AdS correlators is $$\left\langle \mathrm{exp}\int _{S^d}\varphi _0𝒪\right\rangle _{CFT}=Z(\varphi _0),$$ where $`\varphi _0(\mathrm{\Omega })`$ is the boundary value of a field in the bulk, $`𝒪`$ is the dual operator in the CFT, and $`Z`$ is the string partition function on AdS space with boundary conditions $`\varphi _0`$. This statement is made in the Euclidean formulation. It is made plausible by a remarkable theorem of Graham and Lee, which says that for any sufficiently smooth metric boundary values there exists a unique smooth solution in the bulk. In the following we will need to formulate the AdS/CFT correspondence in Lorentzian signature spacetime. Luscher and Mack have shown that the Euclidean Green’s functions of a quantum field theory invariant under the Euclidean conformal group can be analytically continued to Lorentzian signature, and the resulting Hilbert space of states carries a unitary representation of the infinite-sheeted universal covering group of the Lorentzian conformal group. The natural Lorentzian spacetime is an infinite-sheeted covering of Minkowski space, and thus the natural spacetime to consider in the bulk is the universal covering space of AdS. Let us review how this analytic continuation proceeds for the case of $`AdS_5`$. Projective coordinates for anti-de Sitter space, $`\xi ^a`$ with $`a=1,\dots ,6`$, satisfy $$(\xi ^6)^2-(\xi ^4)^2-(\xi ^k)^2=R^2,$$ where $`k=1,2,3,5`$ and $`R`$ is the radius of curvature of the AdS space. In the boundary limit we can drop the $`R^2`$ term, and parameterize the coordinates as $$\xi ^6=r\mathrm{cosh}\sigma ,\qquad \xi ^4=r\mathrm{sinh}\sigma ,\qquad \xi ^k=re^k,$$ where $`e^k`$ is a unit four-vector. The Euclidean conformal group $`SO(5,1)`$ acts in an obvious way on the $`\xi `$ coordinates.
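A one-line consistency check (a sketch in Python/sympy, purely illustrative) confirms that the quadratic form vanishes identically on this parameterization, which is why the $`R^2`$ term can be dropped at the boundary:

```python
import sympy as sp

r, sigma = sp.symbols('r sigma', real=True)

# Near the boundary: xi^6 = r cosh(sigma), xi^4 = r sinh(sigma), and
# xi^k = r e^k with e^k a unit four-vector, so sum_k (xi^k)^2 = r^2.
xi6 = r * sp.cosh(sigma)
xi4 = r * sp.sinh(sigma)
form = xi6**2 - xi4**2 - r**2    # the SO(5,1)-invariant quadratic form
print(sp.simplify(form))         # -> 0: the R^2 term is subleading as r -> infinity
```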
The set of Euclidean coordinates to be used for the boundary field theory is $`(x^4,\vec{x})`$ (with $`\vec{x}=x^j`$, $`j=1,2,3`$), $$x^4=\frac{\mathrm{sinh}\sigma }{\mathrm{cosh}\sigma +e^5},\qquad \vec{x}=\frac{\vec{e}}{\mathrm{cosh}\sigma +e^5}.$$ The analytic continuation corresponds to taking $`\sigma =i\tau `$ with $`-\infty <\tau <\infty `$, which leads to an infinite-sheeted covering of Minkowski space $`\tilde{M}`$ with coordinates $`(\tau ,e^k)`$. A single copy of Minkowski space can be embedded in $`\tilde{M}`$ by taking the subspace $`-\pi <\tau <\pi `$ and $`e^5>-\mathrm{cos}\tau `$. The usual Minkowski coordinates are $$x^0=\frac{\mathrm{sin}\tau }{\mathrm{cos}\tau +e^5},\qquad \vec{x}=\frac{\vec{e}}{\mathrm{cos}\tau +e^5}.$$ This space is conformal to the boundary at infinity of the Poincare patch of anti-de Sitter space, which is defined by the coordinates $`(x^\mu ,z)`$, $$ds^2=\frac{R^2}{z^2}(dx^2-dz^2).$$ The boundary at infinity corresponds to $`z=0`$. Scale/radius duality is manifest in this set of coordinates, as illustrated by the D-instanton/Yang-Mills instanton duality: the scale factor of the Yang-Mills instanton translates into the position of the D-instanton along the radial $`z`$ coordinate in the bulk. To generate translations with respect to the global time $`\tau `$, one acts with the conformal Hamiltonian of the field theory, $`H=\frac{1}{2}(P^0+K^0)`$. Luscher and Mack show that $`H`$ is positive and self-adjoint, and that there is a unique vacuum state, annihilated by $`H`$ and invariant under conformal transformations. In the following, we will take the point of view that the Lorentzian theory in the bulk is defined by this analytic continuation of the Euclidean correlation functions. As we will see later, this is a rather subtle point. Alternative proposals for the Lorentzian version of the AdS/CFT correspondence have appeared in the literature; an example is the eternal black hole solution, for which the dynamics is described instead by two disconnected boundary field theories. 3. The Information Puzzle and AdS Black Holes Hawking’s information paradox arises when one considers the quantum mechanical evolution of black holes [1,11]. The issues are most sharply defined for a black hole that is formed by gravitational collapse from non-singular initial data and subsequently evaporates by emitting apparently thermal Hawking radiation. If the initial configuration is described by a pure quantum state and the Hawking radiation is truly thermal, then this process involves evolution from a pure state to a mixed one, which violates quantum mechanical unitarity. A related problem involves perturbations of a background extremal black hole. The resulting non-extremal black hole will emit Hawking radiation until it approaches extremality once more, and the question of unitarity arises in this context as well. In both of the above settings one can also consider an equilibrium configuration where energy is fed into a black hole at the same rate that it evaporates. In this case the paradox arises from the fact that an arbitrary amount of information can be encoded in the infalling matter over time, and most of this information will be absent from the outgoing Hawking radiation if it is truly thermal. In order to take advantage of some of the recent developments in fundamental gravitational theory, one would like to formulate analogous questions in the context of black hole evolution in anti-de Sitter spacetime.
This presents us with some immediate problems. For one thing, AdS gravity does not have a well-posed initial value problem, due to the global causal structure of the AdS spacetime. While spacelike slices of AdS spacetime have infinite volume, null signals can propagate from infinity into any given region in a finite affine parameter. As a result, it is problematic to define unitary quantum mechanical evolution in AdS space even in the absence of black holes. This problem can be circumvented in a number of ways. We can, for example, impose boundary conditions at infinity which, in the absence of black hole formation, lead to unitary evolution. The important point is that within some such framework we can study the formation and evaporation of a black hole whose lifetime is short compared to the light-crossing time of the AdS geometry. We can choose parameters in such a way that this black hole is nevertheless large compared to the Planck scale and thus carries a significant amount of information. The question of possible information loss associated with the evolution of the black hole is then effectively decoupled from the unitarity problem of the underlying AdS geometry. There also exist black hole solutions in AdS space where the Schwarzschild radius is large compared to the characteristic AdS length scale. For such black holes there is no separation of scales, and it is thus difficult to disentangle the two unitarity problems. On the other hand, as we shall see below, such black holes are extremely unstable and not so useful for studying the information problem in the first place. Another way to proceed would be to consider the asymptotically flat geometry of an extremal D-brane, whose near-horizon limit is locally isomorphic to AdS spacetime. The asymptotically flat region then regulates the infrared pathology of the AdS geometry, and one can consider evolution from initial data (subject to physical boundary conditions at the D-brane horizon). One would again want to study black holes which are small compared to the characteristic AdS length scale, which in this case is the size of the extremal D-brane throat. There also exist, of course, solutions describing black holes that are larger than the throat scale. These are just the non-extremal $`p`$-branes of the higher-dimensional supergravity theory. They have non-vanishing Hawking temperature, and their evolution leads to an information problem of the usual type. On the other hand, once we are far from extremality, the gauge theory correspondence, which is the main new tool at our disposal, is no longer useful. There is a rather subtle problem with regulating the infrared problems in this manner: the thermodynamic behavior is sensitive to the asymptotic boundary conditions, i.e. whether the conformal boundary in Euclidean signature is taken to be $`S^n\times S^1`$ or $`𝐑^n\times S^1`$. In the following, we make the choice $`S^n\times S^1`$, which is appropriate for the black holes we want to study, whereas the near-horizon limit of a D-brane gives rise to $`𝐑^n\times S^1`$.
The metric of a static Schwarzschild black hole in $`n+1`$-dimensional asymptotically AdS spacetime can be written $$ds^2=-\left(\frac{r^2}{R^2}+1-\frac{\mu }{r^{n-2}}\right)dt^2+\left(\frac{r^2}{R^2}+1-\frac{\mu }{r^{n-2}}\right)^{-1}dr^2+r^2d\mathrm{\Omega }_{n-1}^2,$$ where $`R`$ is the AdS radius of curvature and $`\mu `$ is proportional to Newton’s constant in $`n+1`$ spacetime dimensions times the black hole mass, $$\mu =\frac{8\mathrm{\Gamma }(\frac{n}{2})G_NM}{(n-1)\pi ^{(n-2)/2}}.$$ As we approach the black hole from large $`r`$, the metric has a coordinate singularity at the AdS-Schwarzschild radius $`r=r_s`$, where $$\frac{r_s^2}{R^2}+1-\frac{\mu }{r_s^{n-2}}=0.$$ In the limit of small black hole mass, $`\mu \ll R^{n-2}`$, the black hole parameters approach those of a black hole of equal mass in asymptotically flat spacetime, $$r_s\sim \mu ^{1/(n-2)},$$ while in the large mass limit, $`\mu \gg R^{n-2}`$, we instead have $$r_s\sim (\mu R^2)^{1/n}.$$ One obtains the Hawking temperature in the standard way, by continuing to the Euclidean section and requiring the horizon to be smooth, $$T_h=\frac{nr_s^2+(n-2)R^2}{4\pi R^2r_s}.$$ In the small mass limit this reduces to $`T_h\approx (n-2)/4\pi r_s`$, which is the usual Hawking temperature of a Schwarzschild black hole in asymptotically flat spacetime, but in the large mass limit we find that the AdS black hole has positive specific heat, $`T_h\approx nr_s/4\pi R^2`$. We can now use the Stefan-Boltzmann law to estimate the lifetime of an AdS black hole. For small black holes we find $$\frac{d\mu }{dt}\sim -\mu ^{-2/(n-2)},$$ leading to a lifetime which grows as a power of the black hole mass, $$\tau \sim \mu ^{n/(n-2)}.$$ If we choose parameters so that $`r_s`$ is a macroscopic length (in an AdS background with an even larger radius of curvature), then the black hole will slowly evaporate, at a rate reliably predicted by semi-classical considerations. In the large mass limit the behavior is very different. In this case the evaporation rate grows with mass, $$\frac{d\mu }{dt}\sim -\mu ^2,$$ leading to a lifetime that is bounded from above,<sup>1</sup><sup>1</sup>1 Here we assume that the boundary conditions at infinity correspond to zero incoming flux. For reflecting boundary conditions, the black hole will instead rapidly come into equilibrium with the thermal radiation. We thank G. Horowitz for discussions on this point. $$\tau _0-\tau \sim 1/\mu .$$ The approximation of large mass breaks down as $`r_s\to R`$, so $`\tau `$ should be interpreted as the time that elapses before the black hole has evaporated to a size of order the AdS length scale. The subsequent evaporation rate will be independent of the original black hole mass, so the total lifetime is obtained by adding some constant to $`\tau `$. The parameter $`\tau _0`$ in the last equation is the value of $`\tau `$ in the limit where the original black hole mass becomes infinite. A black hole of arbitrarily large initial mass will reduce to a size of order the AdS scale within this time, which means that such objects are violently unstable. They do not provide us with the slowly evolving background geometries that are required for setting up the information puzzle. In fact, the instability will most likely prevent them from forming in gravitational collapse in the first place. This does not preclude their existence in thermal equilibrium with a high-temperature thermal bath, but since the heat bath is already in a mixed quantum state, that is not the ideal configuration for studying the information problem.
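The qualitative behavior quoted above is easy to verify numerically. The following sketch (illustrative only, in units with $`R=1`$ and $`n=4`$) solves the horizon condition for $`r_s`$ and evaluates $`T_h`$, exhibiting the negative specific heat at small mass and the positive specific heat at large mass:

```python
import numpy as np
from scipy.optimize import brentq

def horizon_radius(mu, R=1.0, n=4):
    """Solve r^2/R^2 + 1 - mu/r^(n-2) = 0 for the AdS-Schwarzschild radius."""
    f = lambda r: r**2 / R**2 + 1.0 - mu / r**(n - 2)
    return brentq(f, 1e-9, 10.0 * (1.0 + mu))   # generous but safe bracket

def hawking_temperature(mu, R=1.0, n=4):
    rs = horizon_radius(mu, R, n)
    return (n * rs**2 + (n - 2) * R**2) / (4.0 * np.pi * R**2 * rs)

for mu in [1e-4, 1e-2, 1.0, 1e2, 1e4]:
    print(mu, hawking_temperature(mu))   # T_h falls with mu, then rises again
```

The minimum of $`T_h`$ occurs near $`r_sR`$, the cross-over between the two regimes discussed above.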
The upshot of all this is that, for the purpose of studying information issues in black hole evolution, we want to consider black holes that are macroscopic, i.e. large compared to the string scale, but at the same time small compared to the AdS scale. This means that their Schwarzschild radius is also small compared to the radius of the transverse compact space that accompanies the AdS geometry. The favored configuration in this case is in fact not the AdS-Schwarzschild black hole that we have been discussing, but rather a higher-dimensional black hole that is localized somewhere on the transverse compact space. This is not really a problem for our discussion. The semi-classical expressions above for the Schwarzschild radius, the evaporation rate, and the black hole lifetime remain valid if we remember to replace the $`n`$ of $`AdS_n`$ by the number of space dimensions of the higher-dimensional geometry. The $`2+1`$-dimensional black hole is a rather special case. The metric for the non-rotating BTZ black hole takes the form $$ds^2=\left(\frac{r^2}{R^2}-m\right)dt^2-\left(\frac{r^2}{R^2}-m\right)^{-1}dr^2-r^2d\phi ^2,$$ where $`\phi `$ has period $`2\pi `$. The relationship between $`m`$ and the black hole mass depends on which geometry one uses as a zero-mass reference. Two choices offer themselves. One is to define $`AdS_3`$ to have zero mass, in which case we have $$m=1+8G_NM_{adS}.$$ Since $`m`$ is required to be positive, we see that $`2+1`$-dimensional black holes have a non-vanishing minimum mass with this definition. The other definition, which is perhaps more natural from the point of view of black hole physics, is to define the $`m=0`$ geometry in the BTZ metric above to have vanishing mass, so that $$m=8G_NM_{BTZ}.$$ In this case the Schwarzschild radius, $`r_s=\sqrt{m}R`$, goes to zero when the mass is taken to zero, and $`AdS_3`$ appears as an isolated smooth geometry in a family of solutions with naked singularities which formally have negative mass. The Hawking temperature of this black hole is $`T_h=\sqrt{m}/2\pi R`$ and the entropy is $`S=\pi \sqrt{m}R/2G_N`$. The lifetime of such $`2+1`$-dimensional black holes is formally infinite, because the rate of Hawking radiation slows down as $`m`$ approaches zero. This is not a problem, because such small $`2+1`$-dimensional black holes are not relevant to the physics. Here the AdS/CFT correspondence involves string theory on a background of the form $`AdS_3\times S^3\times M`$, and black holes with Schwarzschild radius small compared to the size of the $`S^3`$ are unstable to forming $`5+1`$-dimensional black holes that are localized on the $`S^3`$. Those black holes have a finite lifetime, given by the small-mass lifetime formula above with $`n=5`$.
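The quoted BTZ temperature follows from smoothness of the Euclidean horizon, $`T_h=f^{}(r_s)/4\pi `$ with the lapse function $`f(r)=r^2/R^2-m`$; a quick symbolic check (illustrative):

```python
import sympy as sp

r, R, m = sp.symbols('r R m', positive=True)
f = r**2 / R**2 - m                  # BTZ lapse function
rs = sp.sqrt(m) * R                  # horizon: f(rs) = 0

# Smoothness of the Euclidean section fixes T_h = f'(r_s) / (4 pi)
T_h = (sp.diff(f, r) / (4 * sp.pi)).subs(r, rs)
print(sp.simplify(T_h))              # sqrt(m)/(2*pi*R), as quoted in the text
```

The entropy $`S=\pi \sqrt{m}R/2G_N`$ is likewise just the horizon length $`2\pi r_s`$ divided by $`4G_N`$.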
Let us now consider the description of macroscopic black holes, which are nevertheless small compared to the scale of the transverse geometry, from the gauge theory point of view. We begin with the $`AdS_5`$ case. In the canonical ensemble, Witten has shown that there is a phase transition from the large mass AdS-Schwarzschild solution to an AdS space with certain discrete identifications, generalizing the work of Hawking and Page in $`3+1`$ dimensions. In the gauge theory, this is reflected as a confinement/deconfinement transition. For the $`AdS_3`$ case, there is no analog of the Hawking-Page phase transition; instead there is a smooth cross-over as the temperature decreases. To obtain a stable phase containing the intermediate mass black holes that we are interested in, it is convenient to consider the microcanonical ensemble instead. In general it will be necessary to impose additional constraints on the ensemble, by requiring that the energy density be sufficiently well localized, to ensure that only single black hole states dominate the ensemble. The gauge theory version of this ensemble will likewise be a microcanonical ensemble with additional constraints. To determine the form of these constraints, one must follow through the mapping of the energy-momentum tensor of the gravity theory into operators in the gauge theory. This is of course a difficult task once one wishes to go beyond the linearized approximation, but it is nevertheless a well-defined procedure. Analogous constraints appear in the discussion of black hole entropy in Matrix theory. 4. Unitarity vs. Locality The static AdS-Schwarzschild solution of section 3 describes a black hole in equilibrium with a thermal gas in an AdS background. In order to study the information problem, we want instead to consider the evolving geometry of a black hole which forms by gravitational collapse in AdS space and then evaporates as it emits Hawking radiation. We will not attempt to write down such a solution, but rather choose parameters in such a way that the black hole evaporates slowly compared to all microscopic timescales and has a long lifetime. The geometry is then described to a good approximation by the static metric with $`\mu `$ varying slowly with time. In order to separate the issue of black hole information loss from the usual unitarity problem in AdS space, we will also assume that this macroscopic black hole is formed in an AdS background with a very large radius of curvature, so that the lifetime is small compared to the light-crossing time. We can now imagine describing the bulk of the evolution of the black hole in terms of a low-energy effective theory defined on a set of ‘nice’ slices [18,19,2] that foliate the slowly evolving spacetime and approach the local free-fall frame of infalling matter at (and inside) the event horizon, but also approach the frame of fiducial observers far away from the black hole. If the low-energy effective theory on the nice slices is a local quantum field theory, then it follows from standard arguments that the quantum state of the Hawking radiation will be correlated with that of the infalling matter. If we further assume that the black hole completely evaporates, leaving only outgoing Hawking radiation behind, then the final state cannot be a pure quantum state. On the other hand, the evolution of states in a local quantum theory is unitary, and therefore the final state should be a pure state if the initial configuration before the black hole forms is described by a pure state. This apparent contradiction must somehow be resolved in a fundamental theory, and a number of scenarios have been put forward. The conjectured AdS/CFT correspondence supports the view that black hole evolution is unitary, since the gauge theory is manifestly unitary. This in turn means that the low-energy effective theory cannot be a local quantum field theory.<sup>2</sup><sup>2</sup>2 A possible loophole in this argument would be that information is stored in Planck-mass remnant states. The existence of such remnants would imply an enormous peak in the density of states around the Planck scale. This, however, is in conflict with the gauge theory calculations of black hole entropy.
This is not a problem in the AdS/CFT context, because the duality map that relates the gauge theory and spacetime physics is quite non-local, as has been emphasized in recent work [21,22,23]. In the following we will assume the validity of the AdS/CFT conjecture and ask what its implications are for the propagation of information in black hole spacetimes. For this purpose it is useful to consider gedanken experiments which highlight the conflict between unitarity and locality. Let us in particular examine a simple experiment which involves correlated degrees of freedom inside and outside the event horizon. Imagine a pair of spins prepared in a singlet state well outside the black hole. Here ‘spin’ should be understood as some internal label, because conventional spin can in principle be detected by its long range gravitational field and is therefore not suitable for this experiment. One of the spins is then carried inside the black hole, where a measurement of the spin is made, at some point $`𝒜`$. Meanwhile, an observer $`𝒪`$ outside the black hole makes measurements on the Hawking radiation. If all the information about the quantum state inside the black hole is encoded in the Hawking radiation, this observer can effectively measure a component of the spin that went inside the black hole. The observer then passes inside the event horizon, where he can receive a signal from point $`𝒜`$ which potentially contradicts his previous measurement, in violation of the laws of quantum mechanics. There is no real paradox here from the spacetime point of view, for if $`𝒪`$ is to learn of the contradiction before hitting the singularity, then the signal sent by $`𝒜`$ has to involve frequencies beyond the Planck scale. If, on the other hand, the signal from $`𝒜`$ is generated using only low-energy physics, then $`𝒪`$ will have entered a region of strong curvature before receiving it. Either way, the analysis of the gedanken experiment requires knowledge of physics beyond the Planck scale, and the apparent contradiction only arises if we make the (unwarranted) assumption that this physics is described by local quantum field theory. Let us reconsider this experiment using the boundary gauge theory. There is only a single quantum state on any given time slice from the boundary point of view, so no contradiction can arise between the spin measurements. During the evaporation phase, the Hamiltonian of the boundary theory, to a good approximation, generates evolution in the asymptotic time $`t`$ of the AdS-Schwarzschild spacetime. This time variable belongs to a coordinate system which only covers the region of spacetime exterior to the black hole, and is therefore awkward for describing the fate of observers that enter the black hole. The history of such an observer is more economically described if we instead evolve our quantum state using a different timelike generator, one which is associated with the free-fall frame at the horizon and connects the interior and exterior regions of the black hole. It is, however, an important matter of principle that the evolution in asymptotic time must contain all information about the infalling matter, even inside the black hole region. This is guaranteed by the unitarity of the boundary gauge theory.
The scale/radius duality tells us that an object far outside the black hole is described by a localized configuration in the gauge theory, but as the black hole is approached, the same object will be represented by an excitation of much larger transverse size in the gauge theory if the system evolves in asymptotic time. This should be a correct approximate statement in asymptotically AdS backgrounds, but note that here we are going beyond the application of scale/radius duality in the unperturbed AdS background. The black hole is represented by a system of particles in the gauge theory with fixed total energy. As an object falls into the black hole, the gauge theory configuration that describes it spreads in transverse size and at the same time gets entangled with the particles that make up the black hole. While the exact quantum state of the system of object plus black hole contains the information that the object continues its plunge towards the singularity, this information is not readily available to outside observers. In fact, the only way to access it is through careful observations of correlations in the entire train of outgoing Hawking radiation, as we discuss further below. Let us return to our gedanken experiment. In order for an outside observer to conduct a measurement on the spin that entered the black hole, he or she measures correlations between Hawking particles emerging from the black hole at different times. One could imagine a more active type of measurement, where the outside observer attempts to probe the black hole in various ways. This would only serve to excite the black hole and be counterproductive, since the state of the spin would then be entangled with that of the probes in addition to the original black hole. The gauge theory configuration that describes the black hole containing the spin at $`𝒜`$ behaves as a conventional thermodynamic system. Ideally we would like to calculate the entropy and lifetime using the CFT description, but such a calculation requires a better understanding of the strongly coupled CFT in the large $`N`$ limit. We stress that these are nevertheless completely well-defined computations in the CFT. The best we can do at present is to assume the validity of the AdS/CFT correspondence and infer that the gauge theory answers coincide with the semi-classical gravity results. With the understanding that the entropy and the rate of Hawking radiation can be obtained from the CFT point of view, we can then invoke a result of Page [26,27]. This states that no useful information is emitted from a radiating thermodynamic system until its coarse-grained entropy has been reduced by a factor of two. For an evaporating black hole, the time this takes is of order the black hole lifetime. In other words, there is an information retention time in the gauge theory description of the black hole. The existence of an information retention time was postulated in earlier work, and the AdS/CFT correspondence now provides a concrete realization of that idea. The information transfer between the inside and the outside of the event horizon thus effectively takes place only after a time of order the lifetime of the black hole has elapsed, from the point of view of a distant outside observer.
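Page's result can be illustrated with a standard toy model: take the black hole plus radiation to be a Haar-random pure state of $`N`$ qubits and compute the average entanglement entropy of the first $`k`$ "radiation" qubits. This numerical sketch (illustrative only; it models the radiation as a random bipartition, not any detailed dynamics) shows that the information in the radiation, maximal entropy minus actual entropy, stays near zero until roughly half the qubits have been emitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def radiation_entropy(k, N, samples=20):
    """Average entanglement entropy (nats) of k 'radiation' qubits out of N,
    for random pure states drawn as normalized complex Gaussian vectors."""
    dA, dB = 2**k, 2**(N - k)
    S = 0.0
    for _ in range(samples):
        psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
        psi /= np.linalg.norm(psi)
        p = np.linalg.svd(psi, compute_uv=False)**2   # Schmidt probabilities
        p = p[p > 1e-15]
        S += -np.sum(p * np.log(p))
    return S / samples

N = 12
for k in range(1, N):
    info = k * np.log(2) - radiation_entropy(k, N)
    print(k, round(info, 3))   # info ~ 0 for k < N/2, then grows ~ (2k - N) ln 2
```

In this toy model the deviation from maximal entropy is of order $`2^{2kN}`$ for $`k<N/2`$, which is Page's statement that essentially no information comes out before the halfway point.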
It is also important to determine at what point the outside measurements begin to have significant influence on infalling observers inside the black hole. Let us think about this, for the moment, in the context of a low-energy effective theory defined on a set of ‘nice’ slices. If we assume that the effective theory is a local field theory that has been evolved forward from a non-singular initial configuration described by some pure quantum state, and we further stipulate that all information about the initial state is to be found in the outgoing Hawking radiation, then the degrees of freedom on the part of the nice slice that is inside the black hole can carry no information about the initial state. In other words, all information about the initial state must be ‘bleached’ out of the infalling matter immediately upon crossing the event horizon, in blatant violation of the equivalence principle. For a large mass black hole the horizon sits in a region of weak curvature, where tidal effects are small and an object in free fall should pass through more or less unaffected. This is, of course, just a statement of the information paradox in the context of a local effective field theory of gravity. In the boundary theory this appears in the form of a somewhat different puzzle. There the infalling object is described by a spreading field configuration which is becoming entangled with the ambient fields describing the black hole. This entanglement gets very complicated as the object is ‘thermalized’ into the black hole configuration, yet it is very delicate. The slightest change in the combined configuration could drastically change the results of subsequent correlation measurements on the Hawking radiation, leading outside observers to conclude that the infalling object did indeed get bleached as it passed through the horizon. For a resolution of this puzzle we again have to appeal to the AdS/CFT correspondence. Since we know that an infalling object encounters no obstacle at the horizon of a large black hole in the supergravity description, the gauge theory dynamics must somehow miraculously preserve the integrity of infalling matter, even if it appears, to a casual outside observer, to be thermalized as it interacts with the black hole configuration. This is very reminiscent of earlier discussions of low-energy, large impact parameter gravitational scattering. On the supergravity side the particles hardly interact at all and move past each other with only a small deflection, but on the gauge theory side the objects completely merge during the collision and interact strongly, yet somehow disentangle themselves and go their separate ways. Some recent work [23,30–32] has shown explicitly in certain examples how the gauge theory dynamics, at large N, leads to a local and causal description of semi-classical low-energy supergravity. Building on this, we can argue that the information transfer to the outgoing Hawking radiation must take place near the black hole singularity from the point of view of infalling observers. Consider once again the gedanken experiment involving spins. The observer inside the black hole cannot be influenced by the outside measurements until their result has been communicated inside, but the outside measurements cannot be completed until at least of order the black hole information retention time has passed, according to asymptotic AdS-Schwarzschild clocks. In order to receive a signal carrying the outside results before hitting the singularity, the observer inside would have to undergo a proper acceleration of order $`\mathrm{exp}(M^{2/(n-2)})`$ (in $`n+1`$ dimensions).
Such an acceleration would require trans-Planckian energies, which are simply not available to a low-energy observer. We therefore conclude that the observation of the Hawking radiation does in fact bleach out a low-energy observer inside the horizon, but the bleaching only takes place at the singularity, where life is less than good anyway. Since the singularity is by definition not in the causal past of any of the outgoing Hawking radiation, it is clear that this information transfer is non-local on a macroscopic scale. Causality in the boundary gauge theory does not guarantee causality on the spacetime side. It does lead to an approximate causality in the bulk physics in flat or near-flat spacetime, but our black hole example illustrates that even this macroscopic causality must break down near spacetime singularities. We note that this breakdown occurs already at the level of a perturbative $`1/N`$ expansion in the boundary gauge theory: the time evolution operator must be unitary order by order in the $`1/N`$ expansion. Further insight into the nature of this non-locality in the nice slice theory can be gained by considering physics in the Euclidean continuation. The analytic continuation from Euclidean space completely determines arbitrary correlation functions in the Lorentzian CFT, and in particular the correlators corresponding to the above gedanken experiment. We learn from the theorem of Graham and Lee that in Euclidean signature the classical bulk geometry is smooth, and there is a one-to-one mapping between the field configurations in the bulk and those on the boundary. In Euclidean signature we see no sign of any non-locality in the bulk theory. Only when we analytically continue to Lorentzian signature does the singularity arise, hidden behind the event horizon. Classically, this leads to too many degrees of freedom in the bulk: the degrees of freedom inside and outside the event horizon are independent. Quantum mechanically, the picture in Lorentzian signature is radically different. The CFT tells us that the degrees of freedom inside are to be identified with degrees of freedom outside the event horizon. This implies that the usual classical-quantum correspondence principle breaks down for black hole spacetimes: the degrees of freedom of the correct quantum description in the bulk do not go over smoothly to the classical degrees of freedom of the supergravity theory. The information paradox arises when we ask questions involving those degrees of freedom that are duplicated in the classical theory. Non-local effects are required in a semi-classical description on a set of nice slices to reveal that these degrees of freedom are in fact redundant. This set of arguments also implies that the CFT formulation resolves the singularity of the black hole [33,25] and allows us to propagate states smoothly past the point where the black hole has evaporated. Classically, this point looks like a timelike (or null) naked singularity in the spacetime. Thus, from the Lorentzian point of view, the analytic continuation from Euclidean signature leads to definite boundary conditions on this naked singularity. Acknowledgments The research of D.L. is supported in part by DOE grant DE-FE0291ER40688-Task A. The research of L.T. is supported in part by a DOE Outstanding Junior Investigator award, DE-FG02-91ER40671. References [1] S.W. Hawking, “Breakdown of Predictability in Gravitational Collapse,” Phys. Rev. D14 (1976) 2460. [2] D.A. Lowe, J. Polchinski, L. Susskind, L. Thorlacius and J.
Uglum, “Black Hole Complementarity vs. Locality,” Phys. Rev. D52 (1995) 6697, hep-th/9506138. relax D.A. Lowe, “The Planckian Conspiracy: String Theory and the Black Hole Information Paradox,” Nucl. Phys. B456 (1995) 257, hep-th/9505074. relax J. Maldacena, “The Large $`N`$ Limit of Superconformal theories and gravity,” ATMP 2 (1998) 231, hep-th/9711200. relax E. Witten, “Anti De Sitter Space and Holography,” ATMP 2 (1998) 253, hep-th/9802150. relax S.S. Gubser, I. Klebanov and A.M. Polyakov, “Gauge theory correlators from noncritical string theory,” Phys. Lett. B428 (1998) 105, hep-th/9802109. relax C.R. Graham and J.M. Lee, “Einstein Metrics with Prescribed Conformal Infinity on the Ball,” Adv. Math. 87 (1991) 186. relax M. Luscher and G. Mack, “Global Conformal Invariance and Quantum Field Theory,” Comm. Math. Phys. 41 (1975) 203. relax T. Banks and M.B. Green, “Non-perturbative effects in $`AdS_5\times S^5`$ string theory and $`d=4`$ Yang-Mills,” JHEP 5 (1998) 002, hep-th/9804170. relax V. Balasubramanian, P. Kraus, A. Lawrence and S.P. Trivedi, “Holographic Probes of Anti-de Sitter Spacetimes,” hep-th/9808017. relax S.W. Hawking, “Particle Creation by Black Holes,” Comm. Math. Phys. 43 (1975) 199. relax A. Peet and S. Ross, “Microcanonical Phases of String Theory on $`AdS_m\times S^n`$,” JHEP 12 (1998) 020, hep-th/9810200. relax R. Gregory and R. Laflamme, “Black Strings and p-Branes are Unstable,” Phys. Rev. Lett. 70 (1993) 2837, hep-th/9301052. relax M. Banados, C. Teitelboim and J. Zanelli, “The Black Hole in Three-Dimensional Spacetime,” Phys. Rev. Lett. 69 (1992) 1849, hep-th/9204099. relax E. Witten, “Anti-de Sitter Space, Thermal Phase Transition, and Confinement in Gauge Theories,” ATMP 2 (1998) 505, hep-th/9803131. relax S.W. Hawking and D.N. Page, “Thermodynamics of Black Holes in Anti-de Sitter Space,” Comm. Math. Phys. 87 (1983) 577. relax D.A. Lowe, “Statistical Origin of Black Hole Entropy in Matrix Theory,” Phys. Rev. Lett. 81 (1998) 256, hep-th/9802173. relax R.M. Wald, unpublished, 1993. relax P. Kraus and F. Wilczek, “Self-Interaction Correction to Black Hole Radiance,” Nucl. Phys. B433 (1995) 403, gr-qc/9408003. relax For reviews of the information problem see f.ex. J. Preskill, “Do Black Holes Destroy Information?” hep-th/9209058; S. Giddings, “Quantum Mechanics of Black Holes,” hep-th/9412138; A. Strominger, “Les Houches Lectures on Black Holes,” hep-th/9501071; L. Thorlacius, “Black Hole Evolution,” hep-th/9411020. relax L. Susskind,“Holography in the Flat Space Limit,” hep-th/9901079. relax J. Polchinski, “S-Matrices from AdS Space-Time,” hep-th/9901076. relax V. Balasubramanian, S.B. Giddings, and A. Lawrence, “What Do CFT’s Tell Us About Anti-de-Sitter Space-Times,” hep-th/9902052. relax L. Susskind and L. Thorlacius, “Gedanken Experiments Involving Black Holes,” Phys. Rev. D49 (1994) 966, hep-th/9308100. relax T. Banks, M.R. Douglas, G.T. Horowitz and E. Martinec, “AdS Dynamics from Conformal Field Theory,” hep-th/9808016. relax D.N. Page, “Average Entropy of a Subsystem,” Phys. Rev. Lett. 71 (1993) 1291, gr-qc/9305007. relax S. Sen, “Average Entropy of a Subsystem,” Phys. Rev. Lett. 77 (1996) 1, hep-th/9601132. relax D.N. Page, “Information in Black Hole Radiation,” Phys. Rev. Lett. 71 (1993) 3743, hep-th/9306083. relax L. Susskind, L. Thorlacius, and J. Uglum, “The Stretched Horizon and Black Hole Complementarity,” Phys. Rev. D48 (1993) 3743, hep-th/9306069. relax S.R. Das, “Brane Waves, Yang-Mills theories and Causality,” hep-th/9901004. relax G.T. 
Horowitz and N. Itzhaki, “Black Holes, Shock Waves, and Causality in the AdS/CFT Correspondence,” hep-th/9901012. relax D. Bak and S.-J. Rey, “Holographic view of causality and locality via branes in AdS/CFT correspondence,” hep-th/9902101. relax G. Horowitz and S. Ross, “Possible Resolution of Black Hole Singularities from Large N Gauge Theory,” JHEP (1998) 9804:015, hep-th/9803085.
no-problem/9903/cond-mat9903118.html
ar5iv
text
# Correlation between Spin Polarization and Magnetic Moment in Ferromagnetic Alloys

## I Introduction

There has been renewed interest in spin-polarized transport over the last decade. This interest comes in part from a wide range of novel phenomena, e.g., giant and colossal magnetoresistance, spin-injection experiments, and spin-polarized tunneling experiments. One of the most fundamental properties of spin-polarized transport in a ferromagnet is the polarization of the density of states at the Fermi energy. This polarization enters either directly or indirectly into most transport calculations. In particular, since tunneling experiments measure the density of states, they should provide a direct measure of this polarization. In ferromagnet-insulator-ferromagnet tunneling experiments one measures the product of the spin polarizations. However, in ferromagnet-insulator-superconductor tunneling experiments, where the density of states in the superconductor is Zeeman split by a field in the plane of the film, one can in principle measure directly the spin polarization in the density of states. Tedrow, Meservey, and collaborators carried out a series of ferromagnet-insulator-superconductor experiments on Fe, Co, Ni, and Ni alloys in the 1970's. They found two surprising results. First, the electron spin polarization for Fe, Co, and Ni is positive. Assuming the tunneling conductance to be proportional to the total density of states at the Fermi level, band structure calculations predict positive spin polarization for Fe and negative spin polarization for Co and Ni, even though these calculations have successfully explained the Slater-Pauling curve for the magnetic moments. Second, the spin polarization for Fe, Co, Ni, and the Ni alloys is roughly proportional to the magnetic moments. On the other hand, only electrons near the Fermi level are expected to participate in the tunneling process. Given the complicated band structure of the transition metals, it is not clear why the electron spin polarization measured by tunneling, a property of the Fermi surface, and the magnetic moment, a property of the whole Fermi sea, should be related in such a simple way. Recently, Soulen et al. and Upadhyay et al. independently studied the spin polarization of ferromagnets with Andreev reflection point contact experiments. The values of the spin polarization measured in these experiments are different from those obtained in the tunneling experiments, although they are in the same range. For example, the spin polarization of Ni is 43-46.5% as measured by Soulen et al. and 32% as measured by Upadhyay et al., while the tunneling spin polarization of Ni measured by Moodera et al. is 33%. The difference may be due to the experiments measuring related but different quantities. One may have to take into account the different dependencies on the density of states and the Fermi velocity in the two sets of experiments. The discrepancies among the experiments indicate that it is important for a theory to compare trends rather than particular measured values. More recently, by using Andreev reflection techniques, Nadgorny et al. measured the spin polarization of Ni<sub>x</sub>Fe<sub>1-x</sub> alloys to be about 45%, roughly independent of the magnetic moment. Their results are different from the tunneling measurements made by Tedrow and Meservey. There have been a number of theoretical calculations to explain the results of Tedrow and Meservey's tunneling experiments.
Many of these calculations concluded that the tunneling density of states is dominated by only a fraction of the electrons. In the work of Stearns, the relevant electronic states are a $`t_{2g}`$-like band that is modeled by a parabolic dispersion near the Fermi surface. Hertz and Aoi concluded that tunneling measures the s density of states as modified by many-body effects due to the electron interaction with spin waves. Tsymbal and Pettifor studied the tunneling from Fe and Co with a tight-binding model which has only $`ss\sigma `$ bonds between the ferromagnet and the insulator; they concluded that s-electron tunneling alone was sufficient to explain the experiments. Recently, Nguyen-Manh et al. performed a self-consistent band structure calculation of the Co/Al<sub>2</sub>O<sub>3</sub> interface by an LMTO technique. Their calculation suggested that the interfacial cobalt d bands spin polarize the s-p bands in the barrier, resulting in a positive tunneling spin polarization. On the other hand, Mazin and Nadgorny et al. suggested that the spin polarization measured by both the tunneling and Andreev reflection experiments should be found from the polarization of the density of states weighted by the Fermi velocity squared (a brief numerical illustration of this weighting follows below). The values of the polarization obtained from all of the above models are in the same range as the experimental values; however, the models are all different and it is not clear which is closer to reality. To understand if there is indeed a subset of the electronic states which can account for the tunneling density of states, in this article we present a microscopic calculation of both the magnetic moment and the various densities of states based on a self-consistent tight-binding coherent potential approximation (CPA) model. In a range of alloys we find that the s density of states follows the same trend as the measured tunneling density of states. This is the first microscopic calculation to see this correlation. Furthermore, we are able to understand the correlation seen in our calculation and to show that it is not universal, i.e., the tunneling density of states is not simply proportional to the magnetic moment. There may even be some alloys where they are inversely correlated.

## II Review of Experiments

In this section, we review the series of tunneling experiments by Tedrow and Meservey which show a correlation between the spin polarization and the magnetic moment. We also discuss the spin polarization measurements by Andreev reflection. In the tunneling experiments of Tedrow and Meservey, spin-polarized electrons tunnel from films made by alloying Ni with other 3d transition metals. In Fig. 1(a), the tunneling spin polarization for the Ni alloys is plotted against the average number of valence electrons per atom, which is changed by changing the composition of the alloys. Figure 1(a) looks very similar to the Slater-Pauling curve shown in Fig. 1(b), in which the bulk magnetic moments of the alloys are plotted against the average number of valence electrons per atom. In NiFe and NiMn, the spin polarization peak is close to the magnetic moment peak. In NiCu, NiCr, and NiTi, both the spin polarization and the magnetic moment decrease monotonically as the impurity concentration increases. The thresholds at which the spin polarization and the magnetic moment drop to zero are close to each other. These similarities suggest that the spin polarization and the magnetic moment are correlated.
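To make the contrast between the two polarization definitions mentioned above concrete, the following minimal sketch compares the bare density-of-states polarization with the Fermi-velocity-weighted form advocated by Mazin and by Nadgorny et al. All four input numbers are assumptions chosen for illustration (a heavy, d-dominated minority channel with low Fermi velocity), not computed values for any real material.

```python
# Bare DOS polarization versus the Fermi-velocity-weighted form.
# N_up/N_dn: densities of states at E_F; v_up/v_dn: relative Fermi
# velocities. All four numbers are assumptions for illustration only.
N_up, N_dn = 0.2, 1.5   # minority channel dominated by heavy d states
v_up, v_dn = 1.0, 0.3   # d-like states carry low Fermi velocity

P_N = (N_up - N_dn) / (N_up + N_dn)
P_Nv2 = (N_up * v_up**2 - N_dn * v_dn**2) / (N_up * v_up**2 + N_dn * v_dn**2)

print(f"P_N = {100 * P_N:+.0f}%   P_Nv2 = {100 * P_Nv2:+.0f}%")
```

With these assumed inputs the two definitions even differ in sign, which is the essence of the disagreement between DOS-only band-structure predictions for Ni and the positive measured polarizations.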
To see how the spin polarization relates to the magnetic moment, we follow Tedrow and Meservey and plot the spin polarization against the magnetic moment for NiCr, NiCu, NiMn, and NiTi in Fig. 2. The spin polarization data are taken from the experiment of Tedrow and Meservey, and the magnetic moment data are taken from bulk measurements of alloys with the same compositions. Figure 2 shows a clear correlation between the spin polarization and the magnetic moment in these alloys. The data points for the different alloys lie roughly on a straight line passing through the origin. Finite spin polarization is obtained for a few samples with zero bulk magnetic moment. This is probably due to the difference between the estimated bulk magnetic moments, which are taken from experiments on alloys with the same composition, and the surface magnetic moments of the actual samples, which are not measured in the tunneling experiments. When the magnetic moment is zero, there should be no difference between the majority and minority spins, so the spin polarization is expected to be zero. In Fig. 3, we plot the tunneling data shown in Fig. 2 together with tunneling measurements on other samples, fabricated by different techniques, by Tedrow and Meservey and by Moodera. Also included are the spin polarization measurements using Andreev reflection by Soulen et al., Nadgorny et al., and Upadhyay et al. The two dashed lines outline an area which roughly divides the graph into an fcc region (left) and a bcc region (right). When a data point is close to this area, the structure depends on the particular sample. One can visually identify three regimes in the figure: the lower left regime (fcc NiCr, NiCu, NiMn and NiTi), the middle regime (fcc NiFe with magnetic moment less than about 1.8 $`\mu _B`$), and the right regime (bcc NiFe with magnetic moment greater than about 2 $`\mu _B`$). There is a clear correlation between the spin polarization and the magnetic moment in the lower left regime, as in Fig. 2. In the middle regime, the tunneling data and the Andreev reflection data of Upadhyay et al. show only a roughly increasing relation; on the other hand, the Andreev reflection data of Soulen et al. and Nadgorny et al. suggest that the spin polarization is independent of the magnetic moment. In the right regime of Fig. 3, no clear correlation is seen between the spin polarization and the magnetic moment. However, this regime corresponds to the peak area near Fe in Fig. 1, which clearly indicates a correlation between the tunneling spin polarization and the magnetic moment.

## III Model

To see if there is actually a relationship between the magnetic moment and the spin polarization of a subset of the electronic states, we calculate both the magnetic moment and the spin polarization using a tight-binding model. The band structure of an alloy is calculated self-consistently within the coherent potential approximation. The magnetic moments and the spin polarization of the density of states for the different bands at the Fermi level can then be obtained from the band structure. We find that when the p and d bands are included, the spin polarization of the density of states is clearly inconsistent with the experiments. For example, negative spin polarization is obtained for Ni when the d bands are included. Therefore, we present only the results for the spin polarization of the s electrons.
The alloys we consider are substitutional alloys, in which some of the host atoms are replaced by impurity atoms without changing the crystal structure. The band structure of these alloys can be found within the coherent potential approximation. Before the band structure calculation of a magnetic 3d alloy can be carried out, one has to know the splitting between the majority and minority spins, which is related to the magnetic moment. Within our model, the splitting can be found from the numbers of majority and minority electrons, which in turn have to be found from the band structure. Therefore, the band structure, the numbers of majority and minority electrons, and the splitting have to be solved for self-consistently. To calculate the alloy band structure, we consider a nine-band tight-binding Hamiltonian written as $$H=\underset{im\sigma }{}u_{im}^\sigma c_{im\sigma }^{\dagger }c_{im\sigma }+\underset{ijmn\sigma }{}t_{ijmn}c_{im\sigma }^{\dagger }c_{jn\sigma },$$ (1) where $`c_{im\sigma }^{\dagger }`$ ($`c_{im\sigma }`$) is the creation (annihilation) operator for a spin-$`\sigma `$ electron of orbital $`m`$ on lattice site $`i`$, $`u_{im}^\sigma `$ is the on-site potential, and $`t_{ijmn}`$ is the hopping energy between neighboring sites. The band structure of the alloy described by this Hamiltonian is found using the coherent potential approximation. Tight-binding parameters in the Slater-Koster two-center form are obtained from fits to the local density approximation band structure. In principle, all of the parameters of the alloy change as one varies the alloy composition. However, the 3d alloys are similar, and the differences in the hopping and in the s and p on-site energies are not as significant as the differences in the d on-site energies. Therefore, we assume that the most important changes due to alloying are contained in the on-site energies, $`u_{i,d}^{\uparrow }`$ and $`u_{i,d}^{\downarrow }`$, of the spin-up (majority) and spin-down (minority) d electrons. In other words, the alloy is assumed to have site-independent hopping parameters and s and p on-site energies equal to those of the host metal, and site-dependent d on-site energies. This assumption will be justified later by the agreement with local density calculations of supercells at different impurity concentrations. The major contributions to the on-site energies $`u_{i,d}^{\uparrow }`$ and $`u_{i,d}^{\downarrow }`$ come from the atomic core potential plus the Coulomb energy due to the opposite spin. In the itinerant electron model, it is more convenient to work with the numbers of spin-up and spin-down d electrons, $`N_{i,d}^{\uparrow }`$ and $`N_{i,d}^{\downarrow }`$, at each site $`i`$. Therefore, we rewrite the parameters in the form $`u_{i,d}^{\uparrow }`$ $`=`$ $`U_i^0+U_i^xN_{i,d}^{\downarrow }`$ (2) $`u_{i,d}^{\downarrow }`$ $`=`$ $`U_i^0+U_i^xN_{i,d}^{\uparrow },`$ (3) where $`U_i^0`$ and $`U_i^x`$ are the on-site parameter and the effective Coulomb energy per pair at site $`i`$. Both $`N_{i,d}^{\uparrow }`$ and $`N_{i,d}^{\downarrow }`$ in Eqs. (2) and (3) are obtained from the band structure, which in turn depends on $`u_{i,d}^{\uparrow }`$ and $`u_{i,d}^{\downarrow }`$. Thus, Eqs. (2) and (3) have to be solved self-consistently with the band structure calculated from the Hamiltonian of Eq. (1); a schematic version of this loop is sketched below. All parameters for the host atoms can be found from fits to local density calculations of the pure metal. The only two parameters left, $`U_{imp}^0`$ and $`U_{imp}^x`$ for the impurity atoms, are obtained from fits to local density calculations of supercells composed of the two types of atoms. For example, parameters for Fe impurities in an fcc Ni host are obtained from fits to fcc Ni<sub>3</sub>Fe band structures.
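The following is a minimal sketch, not the paper's code, of the self-consistency implied by Eqs. (1)-(3): the d on-site energy of each spin channel is set by the occupation of the opposite channel, and the occupations follow from the shifted bands. A semicircular model d band stands in for the real nine-band CPA band structure, and $`U^0`$, $`U^x`$, and the total d count are made-up values, not the paper's fitted parameters.

```python
import numpy as np

# Toy self-consistency loop for Eqs. (2)-(3) with a rigid model d band.
U0, Ux = 0.0, 3.0                  # assumed on-site parameter and Coulomb energy (eV)
N_d_total = 8.5                    # assumed total d electrons per atom

E = np.linspace(-5.0, 5.0, 4001)   # model band spans 10 eV
dos = np.sqrt(np.clip(25.0 - E**2, 0.0, None))
dos *= 5.0 / np.sum(dos * (E[1] - E[0]))    # normalize to 5 d states per spin
cum = np.cumsum(dos) * (E[1] - E[0])        # integrated DOS per spin

def occupation(u, ef):
    """Occupation of one spin channel whose band is rigidly shifted by u."""
    return float(np.interp(ef - u, E, cum))

n_up, n_dn = 4.5, 4.0              # initial guess with a small moment
for _ in range(200):
    u_up = U0 + Ux * n_dn          # Eq. (2): spin-up energy set by spin-down count
    u_dn = U0 + Ux * n_up          # Eq. (3)
    lo, hi = -30.0, 30.0           # bisect for the common Fermi level
    for _ in range(60):
        ef = 0.5 * (lo + hi)
        if occupation(u_up, ef) + occupation(u_dn, ef) < N_d_total:
            lo = ef
        else:
            hi = ef
    new_up, new_dn = occupation(u_up, ef), occupation(u_dn, ef)
    if abs(new_up - n_up) + abs(new_dn - n_dn) < 1e-9:
        break
    n_up += 0.3 * (new_up - n_up)  # linear mixing for stability
    n_dn += 0.3 * (new_dn - n_dn)

print(f"N_up = {n_up:.3f}, N_dn = {n_dn:.3f}, moment = {n_up - n_dn:.3f} mu_B")
```

With the assumed values the loop converges to a spin-split, magnetic fixed point; lowering $`U^x`$ below the Stoner threshold returns the non-magnetic solution with zero moment.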
As a check, the parameters obtained from the fit are used to calculate the tight-binding band structure of fcc Ni<sub>31</sub>Fe, which agrees with the local density approximation band structure. This indicates that the model works well at least when the impurity concentration is less than 25%. When the impurity concentration is large, or when the difference in hopping between the host and the impurity is large, the error is expected to increase. Although this simplified model requires only two parameters in addition to those of the host, it still correctly reproduces the Slater-Pauling curve for the magnetic moment. The three quantities which we will compare are the magnetic moment per atom, the polarization of the density of states as measured by tunneling experiments, and the polarization of the calculated s density of states. The magnetic moment per atom, $`\mu `$, is determined from the number of electrons per atom in the majority spin orientation, $`N^{\uparrow }`$, and in the minority spin orientation, $`N^{\downarrow }`$, via $`\mu =\mu _B(N^{\uparrow }-N^{\downarrow }),`$ (4) where $`\mu _B`$ is the Bohr magneton. The s density of states of spin $`\sigma `$ at the Fermi level at lattice site $`i`$ is defined as $$n_{i,s}^\sigma =\int \frac{d^3𝐤}{(2\pi )^3}\delta (E_F-E(𝐤))\sum _{m=1}^9|\langle is\sigma |𝐤m\sigma \rangle |^2,$$ (5) where $`|is\sigma \rangle `$ is the s orbital with spin $`\sigma `$ at site $`i`$ and $`|𝐤m\sigma \rangle `$ is the $`m`$-th eigenvector with wavevector $`𝐤`$ and spin $`\sigma `$. The net s density of states is the weighted average of the s density of states at each lattice site: $$n_s^\sigma =(1/N_{at})\sum _in_{i,s}^\sigma ,$$ (6) where $`N_{at}`$ is the number of atoms in the sample. The density of states for spin $`\sigma `$ measured experimentally is denoted by $`n^\sigma `$. Using these definitions, the polarization of the tunneling density of states, $`P`$, and of the calculated s density of states, $`P_s`$, are given by $`P`$ $`=`$ $`{\displaystyle \frac{n^{\uparrow }-n^{\downarrow }}{n^{\uparrow }+n^{\downarrow }}},`$ (7) $`P_s`$ $`=`$ $`{\displaystyle \frac{n_s^{\uparrow }-n_s^{\downarrow }}{n_s^{\uparrow }+n_s^{\downarrow }}}.`$ (8) A direct numerical transcription of these definitions is given below.
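The following is a direct transcription of Eqs. (4) and (6)-(8), with made-up inputs standing in for the CPA output:

```python
import numpy as np

# Moment and s polarization from spin-resolved inputs; all numbers below
# are illustrative placeholders, not results of the paper's calculation.
N_up, N_dn = 5.3, 4.1                     # electrons/atom in each spin channel
moment = N_up - N_dn                      # Eq. (4), in units of mu_B

n_s_up = np.array([0.021, 0.024, 0.022])  # site-resolved s DOS at E_F, spin up
n_s_dn = np.array([0.012, 0.013, 0.011])  # same, spin down (states/eV/atom)

ns_up, ns_dn = n_s_up.mean(), n_s_dn.mean()   # Eq. (6), equal-weight site average
P_s = (ns_up - ns_dn) / (ns_up + ns_dn)       # Eq. (8)

print(f"moment = {moment:.2f} mu_B, P_s = {100 * P_s:.1f}%")
```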
## IV Results

As mentioned above, when the densities of states of the p and d bands are included, the spin polarization is inconsistent with experiments. Therefore, to compare with the experiments, we plot the spin polarization of the s density of states, $`P_s`$, against the calculated magnetic moment on the same graph as the experimental data. In Fig. 4, we plot the calculated spin polarization of NiCr and NiCu (filled symbols), together with the experimental tunneling spin polarization, $`P`$, shown in Fig. 2 (unfilled symbols). We choose to calculate NiCr and NiCu because they represent different dependencies of the magnetic moment on the average number of electrons. As seen in Fig. 1(b), the magnetic moment of NiCr increases as the average number of electrons increases, while the magnetic moment of NiCu decreases as the average number of electrons increases. For both alloys, the calculated spin polarization varies from zero (non-magnetic) to about 27% (pure Ni), and has the same dependence on the magnetic moment. When the magnetic moment is zero, the majority and minority spin bands are the same, and the spin polarization is also zero. Both the calculated and experimental results show that the spin polarization is positive and roughly proportional to the magnetic moment. However, the experimental values are much lower than the calculated ones. One reason for this may be that, in order to mix the elements, the samples in this series of experiments were grown with a technique that seems to reduce the spin polarization. For example, the tunneling spin polarization measured for the Ni films in this series is only about $`8\%`$, which is much lower than the measured values of 27-33% for Ni films grown by better techniques. To compare with more experiments, we plot in Fig. 5 the calculated spin polarization of the s electrons for NiCr, NiCu, and fcc and bcc NiFe (filled symbols) together with the experimental spin polarization shown in Fig. 3 (unfilled symbols). As explained above, the calculation for the lower left regime agrees with the tunneling experiments. The calculation for the middle regime (fcc NiFe) agrees well with the tunneling experiments and with the Andreev reflection experiments of Upadhyay et al., but not with the Andreev reflection experiments of Soulen et al. and Nadgorny et al. In the right regime (bcc NiFe), the calculated results are significantly higher than the measured values. Thus far we have presented calculations of the bulk density of states. However, experiments have suggested that the spin-polarized tunneling electrons originate from the first two to three layers at the surface. To estimate how the surface affects the spin polarization, we study the variation of the s density of states as a function of depth from the surface. The band structures of bcc (100) Fe, hcp (001) Co, and fcc (111) Ni slabs, 18 atomic layers thick, are calculated using fixed tight-binding parameters. This calculation serves only as a crude estimate of the surface effects. Self-consistent iterations are not used in the band structure calculation. Effects such as the change in the surface magnetic moment and surface roughness are also neglected. We study the spin polarization of the cumulative s density of states as a function of the number of layers. The cumulative s density of states of spin $`\sigma `$ in the first $`l`$ layers from the surface is defined as $`\overline{n}_{l,s}^\sigma =\sum _{j=1}^ln_{j,s}^\sigma `$, where $`n_{j,s}^\sigma `$ is the s density of states in the $`j`$-th layer. The spin polarization of the cumulative s density of states for the first $`l`$ layers from the surface is then given by $`\overline{P}_{l,s}`$ $`=`$ $`{\displaystyle \frac{\overline{n}_{l,s}^{\uparrow }-\overline{n}_{l,s}^{\downarrow }}{\overline{n}_{l,s}^{\uparrow }+\overline{n}_{l,s}^{\downarrow }}}.`$ (9) A short numerical version of this cumulative polarization is given below. This quantity is related to the tunneling spin polarization because it takes into account the electrons contributed by the first few layers. However, it is only an approximation to the tunneling spin polarization because the contributions from different layers may not be uniformly weighted. As shown in Fig. 6, there is a reduction of the spin polarization near the surface. The reduction is more pronounced in bcc metals than in fcc metals. For example, the spin polarization at the bcc Fe surface is less than one third of the bulk value, while the spin polarization at the fcc Ni surface is about two thirds of the bulk value. The degree of reduction in the spin polarization depends mainly on the structure of the film. Therefore, as seen in Fig. 5, our calculation of the spin polarization of the bcc alloys using the bulk spin polarization is significantly higher than the experimental values, which may be related to the surface spin polarization. For the same reason, the calculated spin polarization of the fcc alloys agrees well with the experiments.
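A short numerical version of the cumulative polarization of Eq. (9); the per-layer s densities of states are invented for illustration and are not the slab-calculation values:

```python
import numpy as np

# Cumulative s DOS polarization over the first l layers of a slab, Eq. (9).
# Assumed per-layer values, mimicking a polarization suppressed at the surface.
n_s_up = np.array([0.008, 0.012, 0.016, 0.018, 0.019])  # layers 1..5 from surface
n_s_dn = np.array([0.007, 0.008, 0.009, 0.009, 0.009])

cum_up = np.cumsum(n_s_up)
cum_dn = np.cumsum(n_s_dn)
P_l = (cum_up - cum_dn) / (cum_up + cum_dn)             # Eq. (9) for l = 1..5

for l, p in enumerate(P_l, start=1):
    print(f"first {l} layer(s): P = {100 * p:.1f}%")
```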
One may now ask: why should the polarization of the s density of states, $`P_s`$, and the magnetic moment per atom, $`\mu `$, be related? In Fig. 7 we have plotted the s density of states, $`n_{i,s}^\sigma `$, for the two spin orientations, $`\sigma `$, at the different sites, $`i`$, in fcc Ni-rich alloys. The x-axis is the total number of electrons, $`N_i^\sigma `$, at site $`i`$ with spin $`\sigma `$. As is evident from the figure, for a range of compositions and different alloys, all the points lie on the same curve. The solid and dashed lines are obtained from pure Ni by varying the Fermi energy in the spin-up and spin-down bands, respectively. This curve contains the key to understanding why the s density of states and the magnetic moment are related. In the range $`3.0<N_i^\sigma <5.5`$, $`n_{i,s}^\sigma `$ is an increasing function of $`N_i^\sigma `$ for both spins. Since $`n_s^\sigma `$ increases as $`N^\sigma `$ increases, it follows that the spin polarization, $`P_s\propto (n_s^{\uparrow }-n_s^{\downarrow })`$, increases with increasing magnetic moment, $`\mu =\mu _B(N^{\uparrow }-N^{\downarrow })`$; a toy numerical version of this argument is given below. It is important to note that $`P_s`$ is only roughly proportional to $`\mu `$, because the curve in Fig. 7 is quite different from a straight line even in the range $`3.0<N_i^\sigma <5.5`$. Furthermore, this is not a universal relation, because the curve shown in Fig. 7 is for the fcc Ni-rich alloys. Other kinds of alloys would presumably produce a different curve, possibly even a decreasing instead of an increasing one. For example, applying the same analysis to bcc alloys such as FeCr shows that the s density of states is still an increasing function of $`N_i^\sigma `$ in the range of interest. However, as shown in Fig. 8, a much different curve is obtained for the bcc alloys. In bcc alloys, the s density of states has sudden jumps in the energy range of interest. One of the jumps is reflected in Fig. 8 by the curves near $`N_i^\sigma =4.5`$. The jump causes a sudden increase in the spin polarization of the s density of states in bcc FeCr with low Fe concentration, which has not been studied experimentally. While this explains the correlation, it still leaves open the question of why all the Ni-rich alloys fall on a single curve in Fig. 7 and why that curve is increasing. To understand this we examine more closely the band structure, and in particular the density of states of the s and d bands. The d density of states is responsible for the magnetic moment, while, as argued above, the s density of states is responsible for the tunneling. The s and d densities of states are related by s-d hybridization. As an example, we have plotted in Fig. 9 the s and d densities of states near the Fermi level for (a) the majority band and (b) the minority band at the Ni sites of pure Ni, Ni<sub>0.8</sub>Fe<sub>0.2</sub>, and Ni<sub>0.6</sub>Fe<sub>0.4</sub>. We have shifted the energy such that, within the same spin channel, the integrated density of states from the bottom of the bands to 0 eV (not the Fermi level) is the same for every alloy. The Fermi energies in these plots are indicated by the vertical lines. The s densities of states fall on the same curve and the d peaks have the same energy. The only difference among the alloys is the Fermi level. Thus, the primary effect of alloying at the Ni sites is to shift the s and d bands together relative to the Fermi level. We see from Fig. 9 that the s density of states $`n_s^\sigma `$ of the fcc alloys is an increasing function of the Fermi energy in the range where the Fermi level lies. On the other hand, $`N^\sigma `$, the integrated density of states up to the Fermi level, also increases with the Fermi energy. Therefore $`n_s^\sigma `$ is an increasing function of $`N^\sigma `$, as shown in Fig. 7.
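A toy numerical version of the argument just given: if all sites share one increasing curve $`n_s(N)`$, then pulling the two spin occupations apart (i.e., increasing the moment) automatically raises $`P_s`$. The linear form of $`n_s`$ below is an assumption of the example; the actual curve of Fig. 7 is not a straight line.

```python
# If n_s(N) is a single increasing function of the per-spin electron count,
# then P_s = (n_s(N_up) - n_s(N_dn)) / (n_s(N_up) + n_s(N_dn)) grows with
# the moment N_up - N_dn. The functional form here is assumed.
def n_s(N):
    return 0.005 + 0.004 * (N - 3.0)   # increasing on 3.0 < N < 5.5 (toy model)

N_total = 9.0                          # electrons per atom, both spins together
for moment in [0.0, 0.2, 0.4, 0.6]:    # in units of mu_B
    N_up = 0.5 * (N_total + moment)
    N_dn = 0.5 * (N_total - moment)
    up, dn = n_s(N_up), n_s(N_dn)
    P_s = (up - dn) / (up + dn)
    print(f"moment = {moment:.1f} mu_B  ->  P_s = {100 * P_s:.1f}%")
```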
It should be noted that the shifting of the d bands is due to energy considerations, as in the itinerant electron theory; it results in the change of the magnetic moment, $`\mu `$. The shifting of the s band, on the other hand, is due to its hybridization with the shifted d bands; it results in the change of the spin polarization of the s density of states, $`P_s`$.

## V Conclusion

In this paper, a microscopic calculation of both the magnetic moment and the spin polarization of subsets of the density of states of the 3d alloys has been presented. By interpreting the tunneling spin polarization as the spin polarization of the s density of states, the trends in the tunneling experiments of Tedrow and Meservey are reproduced. The correlation between the magnetic moment and the density of states is understood by showing that the s density of states for both spin orientations is an increasing function of the number of electrons, and by showing that the primary effect of alloying is to shift the bands together relative to the Fermi level. The correlation between the magnetic moment and the polarization of the s density of states is not universal. All of this work supports the picture that the tunneling current is dominated by s electrons. While we have explained the correlation between the spin polarization and the magnetic moment, there are still a number of open questions. Within this model, the tunneling current is assumed to be dominated by s electrons, but the mechanism behind this is not clear. Tsymbal and Pettifor suggested that the hopping from the d bands of the ferromagnet to the s band of the barrier is essentially zero. In 3d metals, the s-d hopping is normally a few times smaller than the s-s hopping, so it is reasonable to assume a similar ratio for the hopping at the metal-insulator interface. However, we note that since the d density of states is about two orders of magnitude higher than the s density of states at the Fermi level, the s-d hopping would have to be much smaller than a tenth of the s-s hopping to explain the dominance of the tunneling current by the s electrons. It is therefore unclear whether such a requirement is physical. In another model, suggested by Nguyen-Manh et al., the s band in the insulator is spin-polarized by the d band of the ferromagnet through hybridization, causing a positive spin polarization in the s current. This model predicts a very small but spin-polarized density of states at the Fermi level in the insulator. A different model was suggested by Mazin and by Nadgorny et al. They argued that the current, in both tunneling and Andreev reflection experiments, is proportional to the density of states times the Fermi velocity squared. The low (high) density of states of the s (d) electrons is therefore compensated by their high (low) Fermi velocities. Thus, both the s and the d electrons are important to the tunneling current. The spin polarization they calculated is roughly independent of the magnetic moment, much like the Andreev reflection data the group obtained. At this point, it is unclear which of the above models gives a better physical picture.
Another open question is whether the spin polarizations measured by the tunneling experiments and by the Andreev reflection point contact experiments are the same. Mazin and Nadgorny et al. argued that the two are the same. However, the situation here is more complicated. There are even disagreements among the results obtained from the Andreev reflection experiments: the results obtained by Soulen et al. and Nadgorny et al. are not the same as those obtained by Upadhyay et al. It is unclear whether this is due to differences in the sample preparation or in the method of data analysis. There are also qualitative differences between the predictions of different models. While our calculations show that the spin polarization increases with the magnetic moment, the calculations by Mazin and Nadgorny et al. show that the spin polarization is independent of the magnetic moment. However, the spin polarization can remain constant at most within a certain regime. In regimes such as Ni<sub>1-x</sub>Cr<sub>x</sub> with $`x>0.15`$, Ni<sub>1-x</sub>Cu<sub>x</sub> with $`x>0.55`$, or the Invar regime near Ni<sub>0.36</sub>Fe<sub>0.64</sub>, the magnetic moment drops to zero. When the magnetic moment is zero, the spin polarization is expected to be zero, because there is no difference between the majority and minority spins. It would be helpful to compare experiments and theories in these regimes.
no-problem/9903/gr-qc9903013.html
ar5iv
text
# Removing non-stationary, non-harmonic external interference from gravitational wave interferometer data

## I Introduction

Gravitational wave (gw) research started in the early 1960's thanks to the pioneering work of Weber. Since that time, there has been an ongoing research effort to develop detectors of sufficient sensitivity to allow the detection of these waves from astrophysical sources. The effect of a gw of amplitude $`h`$ is to produce a strain in space given by $`\mathrm{\Delta }L/L=h/2`$. The magnitude of the problem facing researchers in this area can be appreciated from the fact that theory predicts that, for a reasonable ‘event rate’, one should aim for a strain sensitivity of $`10^{-21}`$ to $`10^{-22}`$. This means that if we were monitoring the separation of two free test masses one meter apart, the change in their separation would be $`10^{-21}`$ m. Such figures show the size of the experimental challenge facing those developing gravitational wave detectors, and make it clear that the analysis of the data must realize as much of the detector sensitivity as possible. The different types of detectors can be classified into two major categories: those using laser interferometers with very long arms, and those using resonant solid masses that may be cooled to ultra-low temperatures. The former have the ability to measure the gravitational wave induced strain in a broad frequency band (expected to range from 50 Hz up to perhaps 5 kHz), while the latter measure the gravitational wave Fourier components around the resonant frequency (usually near 1 kHz), with a bandwidth currently of order a few Hz. For resonant mass antennas, the fundamental limitation to their sensitivity comes from the thermal motion of the atoms, which can be reduced by cooling them to temperatures of order 50 mK; existing antennas can then achieve a sensitivity of $`10^{-19}`$ to $`10^{-20}`$. In the early 1970's, the idea emerged that laser interferometers might have a better chance of detecting gravitational waves. Detailed studies were carried out by Forward and his group and by Weiss. Since then, several groups have developed prototype interferometric detectors at Glasgow (10 meter Fabry-Perot), Garching (30 m delay line), MIT, Caltech (40 m Fabry-Perot) and Tokyo. Projects to build long arm laser interferometers have also been funded. These are the LIGO project to build two 4 km detectors in the USA, the VIRGO project to build a 3 km detector in Italy, the GEO-600 project to build a 600 m detector in Germany, and the TAMA-300 project to build a 300 m detector in Japan. The construction of the above detectors has already started, and first observations may come as early as 2000. These observations have the potential to see the full range of gravitational wave sources: periodic, burst, quasiperiodic and stochastic. In the development of data analysis techniques, it is useful to examine data already available from prototypes. Today, prototype interferometers routinely operate at a displacement noise level of a few times $`10^{-19}`$ m$`/\sqrt{\mathrm{Hz}}`$ over a frequency range from 200 Hz to 1000 Hz, corresponding to an rms gravitational-wave amplitude noise level of $`h_{rms}\sim 2\times 10^{-19}`$. There are different noise sources that limit the sensitivity of laser-interferometer detectors.
The stochastic noise can be modeled as a sum of six main contributions: photon shot noise, seismic noise, quantum noise (which follows from the indeterminacy of the position of the test masses due to the Heisenberg uncertainty principle), the vibration of the suspension wires (‘violin modes’), and thermal noise from the vibration of the test masses and from the low frequency oscillations of the pendulum suspensions. The above noise sources may be considered among the most important, but other sources of noise cannot be ignored. In the measured noise spectra of the different prototypes, in addition to the stochastic noise, we observe peaks due to external interference, whose amplitudes are not stochastic. The most numerous are powerline frequency harmonics. We have shown how to model and remove these very effectively using a technique we call coherent line removal (clr). Other lines are clearly related to the power supply but appear at non-harmonic frequencies. In this paper, we describe their characteristics and present a procedure to remove them as well. Our goal is to remove as many interference features as possible, so that the interferometer sensitivity is limited by the genuinely stochastic noise. clr is an algorithm able to remove interference present in the data while preserving the stochastic detector noise. clr works when the interference is present in many harmonics and they remain coherent with each other. In previous work, we applied clr to interferometric data, and the entire series of wide lines corresponding to the electricity supply frequency and its harmonics was completely removed, even when the frequency of the supply was not independently known. In addition to the lines appearing at multiples of the electricity supply frequency (50 or 60 Hz), there are other interference lines whose frequencies change in step with the supply frequency, but not at the harmonic frequencies. From a data-analysis point of view, we try to develop a technique able to remove this interference while producing a minimum disturbance of the underlying noise background. Other methods are available to remove these lines, but such methods remove the noise, and any underlying real signals, as well. We do not know whether this type of interference will be absent in large-scale interferometers, since the physical process that generates all these lines is so far unknown. But it is present in prototype data, and it is therefore important to be prepared to remove it from full-scale detector data. This paper is based on a study of Glasgow interferometer data taken in March 1996. The method we propose makes use of a reference wave-form corresponding to the fundamental harmonic of the electrical interference. We can obtain it directly from the supply voltage, or we can construct it from the true harmonics present in the data using the clr algorithm. The method we propose is an adaptive procedure, tuned in such a way that the electrical interference can be removed and ‘single-line’ signals masked by it can be recovered to at least the $`75\%`$ level. The rest of the paper is organized as follows: In section II, we describe the electrical interference present in the data. In section III, we present different models of the interference. In section IV, we summarize the principle of the coherent line removal algorithm and we explain how to construct a reference wave-form of the incoming electricity signal from the data.
In section V, we present an algorithm to remove the electrical interference, and not just the harmonics of the reference wave-form. The algorithm is applied recursively to small stretches of data. This allows the parameters to change and adapt themselves, so that the interference can be removed with a minimum disturbance of the noise background. Finally, in section VI, we discuss the results obtained.

## II The electrical interference in the prototype data

In this paper, we focus our attention on the data produced by the Glasgow laser interferometer in March 1996. The data set consists of 19857408 points, sampled at 4 kHz and quantized with a 12-bit analog-to-digital converter with a dynamic range from -10 to 10 Volts. The data are divided into 4848 blocks of 4096 points each. The first 18 minutes of data were rendered useless by a failure of the autolocking; thus, in our analysis we ignore the first 1153 blocks. (Preliminary studies of these data have been reported previously.) In the study of the prototype data, we observe many instrumental lines in the power spectrum. Some of them are rather broad and appear at multiples of 50 Hz, but there are others appearing at different frequencies. In the data, the lines at 1 kHz have a width of 5 Hz. We can therefore either ignore these sections of the power spectrum, or try to understand the interference and, if possible, remove it in order to be able to detect any possible gravitational wave signal previously masked by it. We have already shown how to remove lines at integer multiples of the supply frequency. In long-term Fourier transforms, these lines are broad, and the structure of the different lines is similar apart from an overall scaling proportional to the frequency. In shorter Fourier transforms, the lines are narrow, with central frequencies that change with time, again in proportion to one another. It thus appears that all these lines are harmonics of a single source (e.g., the electricity supply) and that their broad shape is due to the wandering of the incoming electricity frequency. These lines have been observed in different interferometer prototypes. But this is not the end of the story. Further analysis of the prototype data reveals the presence of many other features that are related to the incoming electricity frequency. These other lines are not as powerful as the harmonics; in many cases, they are just slightly above the stochastic noise level. The easiest way to detect their presence is by studying in detail the spectrogram (i.e., the magnitude of the time-dependent Fourier transform versus time). This is a time-dependent frequency analysis in which the whole data set is split into small segments and a discrete Fourier transform is computed for each of them; a schematic version is sketched below. By examining the spectrogram, we identify a large number of lines whose time-frequency evolution is similar to that of the harmonics of 50 Hz. Therefore, all these lines are related in some way to the incoming electricity supply. An example is shown in figure 1. These lines are spread over the whole spectrum. A large population of them lies below 140 Hz. There are also some isolated ones around 222, 238, 444, 575, 887, 1105 and 1275 Hz and, above 1750 Hz, there is again a large population of them. Our guess is that this kind of line might be present throughout the whole spectrum, but many are buried in the stochastic noise. An interesting feature is that some of these lines appear to fall into their own harmonic families. For example, we have found the families (222, 443, 886, 1772 Hz), (48, 96 Hz), (66, 132 Hz), (72.5, 145 Hz), and (116, 232 Hz).
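A schematic version of the spectrogram diagnostic described above, with a synthetic wandering 50 Hz line standing in for the real 4 kHz data stream (which is not reproduced here); the segment length matches the 4096-point block size of the Glasgow data set:

```python
import numpy as np
from scipy.signal import spectrogram

# Split a time series into short segments and Fourier transform each one;
# the strongest bin per segment traces the wandering of the line.
fs = 4000.0                                     # sampling frequency (Hz)
t = np.arange(0, 120.0, 1.0 / fs)               # two minutes of synthetic data
inst_freq = 50.0 + 0.1 * np.sin(2 * np.pi * 0.01 * t)   # slow frequency wander
phase = 2.0 * np.pi * np.cumsum(inst_freq) / fs
x = np.sin(phase) + 0.5 * np.random.randn(t.size)

f, seg_times, Sxx = spectrogram(x, fs=fs, nperseg=4096, noverlap=2048)

peak_freqs = f[np.argmax(Sxx, axis=0)]          # per-segment peak frequency
print(np.round(peak_freqs[:8], 2))
```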
In all cases, the width of the lines (which is the interval of the frequency wandering) seems to scale with the frequency. The amplitudes of the lines are time-dependent as well. Although the nature of the process that generates these lines remains unknown, many of them may be intrinsic to the incoming electricity signal. We have analyzed electricity supply voltage recorded a posteriori (at Glasgow University) and we have observed several features, not just harmonics of 50 Hz. External switching, or the running of other electrical devices (such as computers), can generate some of these lines; this could explain their non-linear and non-stationary character. Other possible sources are ground motions due to mechanical motors running at frequencies other than 50 Hz (e.g., trains), whose frequencies nevertheless wander in a way somehow related to the power supply. Given the extremely small amplitude of the disturbances these lines represent, it may be difficult for experimenters to exclude them from the data of the detectors now under construction. It is therefore important to understand how to remove them if they are there.

## III Modeling the lines

A first model for these non-harmonic lines is that they could be beats between stationary frequencies and the supply harmonics. We have examined this possibility in detail and we believe it is unlikely. In particular, the width of these lines seems to be proportional to their frequency, which would not be the case for a beat; moreover, we would expect such lines to appear in pairs, which is not observed. Using this model, $$h(t)=\alpha M(t)^n\mathrm{exp}(i2\pi ft),$$ (1) where $`\alpha `$ is a complex amplitude, $`M(t)^n`$ is a supply harmonic, and $`f`$ is the beat frequency, we have analyzed several lines. For example, for the line at 99.7 Hz, we find that the best match corresponds to the second harmonic, $`n=2`$, with a beat frequency of $`f=0.7391`$ Hz. Using these values, we try to remove the interference by applying a least-squares method, but, as we show in figure 2, the interference is not cancelled. The same occurs with other lines. Having rejected the possibility of beats, the next simplest model is a non-integer harmonic of the supply: $$h(t)=\alpha (t)M(t)^q,$$ (2) where $`\alpha (t)`$ is a slowly varying complex amplitude, $`M(t)`$ is a reference wave-form corresponding to the fundamental harmonic of the electrical interference, and $`q`$ is now a real number, not just an integer as in the case of the harmonics. The reference signal, $`M(t)`$, can be obtained directly from the supply voltage, or it can be constructed from the data using clr, as we describe below. The justification for this model is simply the success we have in removing the interference; a least-squares fit of the complex amplitude for a fixed $`q`$ is sketched below. The model is not perfect, and our line removal is sometimes incomplete. It appears that the index $`q`$ may also be a slow function of time, but we will not pursue this refinement here.
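A sketch of the least-squares amplitude fit used to test Eq. (2) at a fixed exponent $`q`$. For simplicity the data here are complex (analytic-signal form) and synthetic, and $`M`$ is an idealized unit-amplitude reference; these are assumptions of the example, not properties of the real supply wave-form.

```python
import numpy as np

# Minimize |x(t) - beta * M(t)**q|^2 over the complex amplitude beta.
# Closed-form solution: beta = <M**q, x> / <M**q, M**q>.
fs = 4000.0
t = np.arange(0, 4.0, 1.0 / fs)
M = np.exp(2j * np.pi * 50.0 * t)        # idealized reference (fundamental)
q = 1.994                                # trial non-integer 'harmonic' index
alpha_true = 0.3 * np.exp(0.7j)          # amplitude to be recovered (assumed)

noise = 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
x = alpha_true * M**q + noise

template = M**q
beta = np.vdot(template, x) / np.vdot(template, template)  # least-squares amplitude
residual = x - beta * template           # data with this single line subtracted

print(f"|beta| = {abs(beta):.3f} (true 0.300), arg = {np.angle(beta):.3f} (true 0.700)")
```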
## IV Coherent Line Removal

For any of the previous models, we need to know the reference wave-form, $`M(t)`$, corresponding to the fundamental harmonic of the interference. In the case of electrical interference, this wave-form can be obtained from the data by applying the clr algorithm. In this section, we summarize the principle of clr; for further details we refer the reader to our earlier work. clr works when the interference is present in many harmonics, and assumes that the interference has the form $$y(t)=\sum _n\left[a_nm(t)^n+\left(a_nm(t)^n\right)^{*}\right],$$ (3) where $`a_n`$ are complex amplitudes, $`m(t)`$ is a nearly monochromatic function near a frequency $`f_0`$, and $`{}^{*}`$ denotes complex conjugation. The idea is to use the information in the harmonics of the interference to construct a reference function $`M(t)`$ that is as close a replica as possible of $`m(t)`$. Assuming additive noise, the data produced by the system are $$x(t)=y(t)+n(t),$$ (4) where $`y(t)`$ is the interference given by Eq. (3) and the noise $`n(t)`$ in the detector is a zero-mean stationary stochastic process. The algorithm consists in defining a set of functions $`\stackrel{~}{z}_k(\nu )`$ in the frequency domain as $$\stackrel{~}{z}_k(\nu )\equiv \{\begin{array}{cc}\stackrel{~}{x}(\nu )& \nu _{ik}<\nu <\nu _{fk}\\ 0& \text{elsewhere},\end{array}$$ (5) where a tilde denotes the Fourier transform, $`(\nu _{ik},\nu _{fk})`$ are the lower and upper frequency limits of the harmonics of the interference, and $`k`$ denotes the harmonic considered. These functions are equivalent to $$\stackrel{~}{z}_k(\nu )=a_k\stackrel{~}{m^k}(\nu )+\stackrel{~}{n}_k(\nu ),$$ (6) where $`\stackrel{~}{n}_k(\nu )`$ is the noise in the frequency band of the harmonic considered. Their inverse Fourier transforms yield $$z_k(t)=a_km(t)^k+n_k(t).$$ (7) Since $`m(t)`$ is supposed to be a narrow-band function near a frequency $`f_0`$, each $`z_k(t)`$ is a narrow-band function near $`kf_0`$. Then, we define $$B_k(t)\equiv \left[z_k(t)\right]^{1/k},$$ (8) which can be rewritten as $$B_k(t)=(a_k)^{1/k}m(t)\beta _k(t),$$ (9) where $$\beta _k(t)=\left[1+\frac{n_k(t)}{a_km(t)^k}\right]^{1/k}.$$ (10) All these functions, $`\{B_k(t)\}`$, are almost monochromatic around the fundamental frequency, $`f_0`$, but they differ from one another basically by a complex scale factor. These factors, $`\mathrm{\Gamma }_k`$, can easily be calculated, and we can construct a set of functions $`\{b_k(t)\}`$, $$b_k(t)=\mathrm{\Gamma }_kB_k(t),$$ (11) such that they all have the same mean value. Then, $`M(t)`$ can be constructed as a function of all the $`\{b_k(t)\}`$ in such a way that it has the same mean and minimum variance. If $`M(t)`$ is linear in the $`\{b_k(t)\}`$, then statistically the best choice for $`M(t)`$ is $$M(t)=\left(\sum _k\frac{b_k(t)}{\mathrm{Var}[\beta _k(t)]}\right)/\left(\sum _k\frac{1}{\mathrm{Var}[\beta _k(t)]}\right),$$ (12) where $$\mathrm{Var}[\beta _k(t)]=\frac{\langle n_k(t)n_k(t)^{*}\rangle }{k^2|a_km(t)^k|^2}+\text{corrections}.$$ (13) In practice, we approximate $$|a_km(t)^k|^2\approx |z_k(t)|^2,$$ (14) and we assume stationary noise. Therefore, $$\langle n_k(t)n_k(t)^{*}\rangle =\int _{\nu _{ik}}^{\nu _{fk}}S(\nu )d\nu ,$$ (15) where $`S(\nu )`$ is the power spectral density of the noise. The amplitudes of the different harmonics of the interference can then be obtained by applying a least-squares method; a toy implementation of this construction is sketched below.
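A toy implementation of the clr reference construction of Eqs. (5)-(12) on a synthetic signal. The band edges, harmonic amplitudes and noise level are assumptions of the example, and the constant factors $`\mathrm{\Gamma }_k`$ are obtained here by a least-squares alignment against the $`k=1`$ band rather than by mean-matching; this is a sketch, not the production algorithm.

```python
import numpy as np

# Band-pass each harmonic, take a continuous k-th root (Eq. (8)), align the
# roots (the role of Gamma_k, Eq. (11)), and combine with inverse-variance
# weights (Eqs. (12)-(13)). All tuning numbers below are assumptions.
fs, f0 = 4000.0, 50.0
t = np.arange(0, 8.0, 1.0 / fs)
m = np.exp(2j * np.pi * f0 * t + 0.3j * np.sin(2 * np.pi * 0.2 * t))  # wandering line
amps = {1: 1.0, 2: 0.5, 3: 0.25}
x = sum((a * m**k).real for k, a in amps.items()) + 0.2 * np.random.randn(t.size)

X = np.fft.fft(x)
freqs = np.fft.fftfreq(t.size, 1.0 / fs)

Bs, ws = [], []
for k in sorted(amps):
    band = np.abs(freqs - k * f0) < 2.0          # Eq. (5): one positive-frequency band
    z_k = np.fft.ifft(np.where(band, X, 0.0))    # Eq. (7): complex narrow-band signal
    phase = np.unwrap(np.angle(z_k))             # continuous phase avoids branch cuts
    Bs.append(np.abs(z_k) ** (1.0 / k) * np.exp(1j * phase / k))  # Eq. (8)
    ws.append(k**2 * np.mean(np.abs(z_k) ** 2))  # ~ 1 / Var[beta_k], cf. Eq. (13)

M = np.zeros_like(Bs[0])
for B_k, w in zip(Bs, ws):
    gamma = np.vdot(B_k, Bs[0]) / np.vdot(B_k, B_k)   # least-squares alignment
    M += w * gamma * B_k
M /= sum(ws)

overlap = np.abs(np.vdot(M, m)) / (np.linalg.norm(M) * np.linalg.norm(m))
print(f"overlap of reconstructed M with true m: {overlap:.4f}")
```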
## V Removal of non-harmonic external interference

clr can remove the integer harmonics of a reference signal, but it does not remove other interference present in the data. We can use clr to construct a reference wave-form $`M(t)`$, but we have to design another technique able to get rid of the electrical interference present at non-harmonics of the 50 Hz line frequency. The signal models we propose are buried in noise; we therefore face the problem of detecting signals and estimating their parameters.

### A Maximum likelihood detection

A standard method is maximum likelihood detection, which consists of maximizing the likelihood function $`\mathrm{\Lambda }`$ with respect to the parameters of the signal. If the maximum of $`\mathrm{\Lambda }`$ exceeds a certain threshold, we say that the signal is present. (See the standard literature on signal analysis in the context of gravitational wave broadband detectors.) We assume that the noise $`n(t)`$ in the detector is an additive, zero-mean, Gaussian and stationary random process. Then the data $`x(t)`$ (if the expected signal model $`h(t)`$ is present) can be written as $$x(t)=n(t)+h(t).$$ (16) The logarithm of the likelihood function has the form $$\mathrm{ln}\mathrm{\Lambda }=\sum _{k=0}^{N-1}\frac{\stackrel{~}{x}_k\stackrel{~}{h}_k^{*}}{S_k}-\frac{1}{2}\sum _{k=0}^{N-1}\frac{|\stackrel{~}{h}_k|^2}{S_k},$$ (17) where $`S_k`$ is the power spectral density of the noise, $`k`$ is the frequency index running from $`0`$ to $`N-1`$, and $`N`$ is the number of sampled points. The likelihood ratio $`\mathrm{\Lambda }`$ depends on the particular data set $`x(t)`$ only through the sum $$G=\sum _k\frac{\stackrel{~}{x}_k\stackrel{~}{h}_k^{*}}{S_k}.$$ (18) This sum is called the detection statistic for the signal $`h`$. Its variance is $$d^2=\sum _k\frac{|\stackrel{~}{h}_k|^2}{S_k}.$$ (19) If no signal is present, the mean of $`G`$ is zero, but if the signal $`h`$ is present, the mean of $`G`$ will be equal to its variance. The thresholds on the likelihood function $`\mathrm{\Lambda }`$ must be set with regard to the false alarm probability. For a detection statistic $`G`$ with variance $`d^2`$, the false alarm probability is $$P_F=\frac{1}{2}\mathrm{erfc}\left(\frac{G}{\sqrt{2}d}\right).$$ (20) This is equivalent to studying the output signal-to-noise ratio (snr), i.e., the value of the detection statistic divided by its standard deviation, $$\mathrm{snr}\equiv \frac{G}{d}=\sum _k\frac{\stackrel{~}{x}_k\stackrel{~}{h}_k^{*}}{S_k}/\sqrt{\sum _k\frac{|\stackrel{~}{h}_k|^2}{S_k}}.$$ (21) Notice that the snr (and therefore the false alarm probability) is independent of the amplitude of the model signal $`h(t)`$ used in this pattern-matching procedure. Of course, the snr is proportional to the amplitude of whatever multiple of $`h(t)`$ is contained in $`x(t)`$; this is thus a linear detector. A toy computation of this statistic is sketched below.

### B The parameter space

Assuming the form $`h(t)=M(t)^q`$ for the electrical interference, we have to construct as many filters as there are different values of $`q`$ to be considered and, for each of them, calculate the snr. In order to set the parameter space, we can consider $`M(t)`$ as a monochromatic signal at a frequency $`f_0`$. Hence, $`M(t)^q`$ will be a monochromatic signal at $`qf_0`$. Thus, the maximum value of $`q`$ to be considered corresponds to $$q_f=\frac{f_{Nyquist}}{f_0},$$ (22) where $$f_{Nyquist}=\frac{f_s}{2},$$ (23) and $`f_s`$ is the sampling frequency. The frequency resolution is $`\mathrm{\Delta }\nu =1/T`$. Therefore, we can resolve two signals if their separation in $`q`$ is of order $$\mathrm{\Delta }q=\frac{\mathrm{\Delta }\nu }{f_0}.$$ (24) This is the maximum separation in $`q`$ we can allow. Note that the size of the parameter space, $$N_q=\frac{q_f}{\mathrm{\Delta }q}=\frac{f_s}{2}T,$$ (25) increases with the observation time $`T`$. For 128 blocks of the Glasgow data (the usual stretch of data we work with at once), we obtain $$\mathrm{\Delta }q\approx 0.00015.$$ (26) From this we can calculate the minimum number of filters and the number of floating-point operations needed to compute the snr for all of them.
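A toy computation of the detection statistic of Eqs. (18)-(21) for a sinusoidal template in synthetic white noise; the template frequency, signal amplitude and noise level are assumptions of this example:

```python
import numpy as np

# snr of a template h against data x, Eqs. (18)-(21), over frequency bins.
fs = 4000.0
t = np.arange(0, 16.0, 1.0 / fs)
h = np.sin(2 * np.pi * 99.7 * t)        # template; its overall amplitude cancels
sigma = 1.0
x = 0.05 * h + sigma * np.random.randn(t.size)   # weak signal buried in noise

H = np.fft.rfft(h)
Xf = np.fft.rfft(x)
# flat S_k chosen so that G has unit variance under pure noise
# (sigma^2 * N / 2 for this one-sided, unnormalized FFT convention)
S = np.full(H.size, sigma**2 * t.size / 2.0)

G = np.sum(Xf * np.conj(H) / S).real    # Eq. (18); real part for a real signal
d = np.sqrt(np.sum(np.abs(H) ** 2 / S)) # Eq. (19)
print(f"snr = {G / d:.2f}")             # typically ~9 for these assumed values
```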
### C An approximate matched filter

The calculation of the exact matched filter over the whole parameter space of $`q`$ is computationally expensive. Given the reference function $`M(t)`$, for each value of $`q`$ we need to calculate $`h(t)=M(t)^q`$, perform its Fourier transform, and then compute the filter output via the snr formula of Eq. (21). Since we assume $`M(t)`$ is a nearly monochromatic function, all the functions $`h(t)=M(t)^q`$ are also going to be nearly monochromatic, but at different frequencies. This means that the values of their Fourier transforms are relevant only in small frequency bands. Hence, the first approximation we can make is to reduce the index summation in Eq. (21) to a small interval. This is equivalent to considering the Fourier transform of the template to be zero outside this interval. But this is not enough: what is really computationally expensive is the construction of the templates. Since the reference signal $`M(t)`$ is almost monochromatic and changes frequency smoothly in time, we can approximate the values of $`\stackrel{~}{M^q}(\nu )`$ in a certain interval, $`\nu _{iq}<\nu <\nu _{fq}`$, by those of $`\stackrel{~}{H^q}(\nu )`$ defined by $$\stackrel{~}{H^q}(\nu )\equiv \stackrel{~}{M^n}\left(\nu \frac{n}{q}\right),$$ (27) where $`n`$ can be chosen to be an integer, i.e., we can build $`\stackrel{~}{H^q}`$ as a rescaled copy of $`\stackrel{~}{M^n}`$, where $`M^n(t)`$ corresponds to a harmonic of the reference signal. If we construct $`\stackrel{~}{H^q}(\nu )`$ via Eq. (27) using the nearest harmonic, we expect $`\stackrel{~}{H^q}(\nu )`$ to be close to $`\stackrel{~}{M^q}(\nu )`$. Therefore, we just need to calculate the Fourier transforms of all the harmonics (a small number in comparison with all the possible values of $`q`$), calculate $`\stackrel{~}{H^q}(\nu )`$ from the nearest harmonic, and substitute its values into Eq. (21); a toy version of this shortcut is sketched below. The power spectral density, $`S_k`$, can be estimated from the data using the Welch method, averaging over short periodograms. With all these simplifications, we apply the approximate matched filter to 128 blocks (approximately two minutes) of prototype data. Setting a threshold of snr = 4.5, we find several values of $`q`$ at which the electrical interference might be present; see Table I. If we assume that the interference follows the model $$h(t)=\alpha M(t)^q,$$ (28) with $`\alpha `$ constant, then for each $`q`$ we can find the value of the complex amplitude $`\alpha `$ by applying a least-squares method in the time domain, i.e., $`\alpha `$ is the value of $`\beta `$ that minimizes the quantity $`|x(t)-\beta M(t)^q|^2`$. With this method, using the same 128 blocks of data, we try to remove the interference. The result is that the interference is attenuated but not removed. We therefore conclude that this model is too simple, and that to remove the real interference we need a more complicated model.
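A toy version of the template shortcut of Eq. (27), comparing the rescaled-harmonic template with the exactly computed one for a synthetic wandering reference; the modulation parameters and band width are assumptions of the example:

```python
import numpy as np

# Instead of computing M(t)**q and its FFT for every trial q, reuse the FFT
# of the nearest integer harmonic M**n, read out at rescaled frequencies
# nu * n / q in a narrow band around q * f0 (cf. Eq. (27)).
fs, f0 = 4000.0, 50.0
t = np.arange(0, 8.0, 1.0 / fs)
M = np.exp(2j * np.pi * f0 * t + 0.2j * np.sin(2 * np.pi * 0.1 * t))

q = 1.994
n = int(round(q))                          # nearest integer harmonic (here n = 2)
Mn = np.fft.fft(M**n)
freqs = np.fft.fftfreq(t.size, 1.0 / fs)

idx = np.where(np.abs(freqs - q * f0) < 2.0)[0]        # narrow band around q*f0
src = np.argmin(np.abs(freqs[:, None] - (freqs[idx] * n / q)[None, :]), axis=0)
Hq = np.zeros_like(Mn)
Hq[idx] = Mn[src]                          # approximate FFT of M**q

exact = np.fft.fft(M**q)                   # the costly computation we avoid
num = np.abs(np.vdot(Hq[idx], exact[idx]))
den = np.linalg.norm(Hq[idx]) * np.linalg.norm(exact[idx])
print(f"overlap of approximate and exact templates: {num / den:.3f}")
```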
### D Amplitude modulation The extra complication is to allow the amplitude to vary slowly with time. Therefore, we assume that the interference takes the form $$h(t)=\alpha (t)M(t)^q,$$ (29) where the $`q`$ are real constants, $`\alpha (t)`$ is a slowly changing function of time, and the values of $`q`$ do not differ much from those calculated with the matched filter as described in the previous subsection. We have to find a procedure to determine $`\alpha (t)`$, the amplitude modulation. Since the signal is buried in noise, we cannot find its exact value. What we do is find an approximation $`\beta (t)`$ to $`\alpha (t)`$ that allows us to remove the interference and, at the same time, keep the intrinsic level of noise present in the interferometer. The method we use consists of splitting the data into small pieces (i.e., a small number of blocks) and, for each piece, calculating a value $`\beta `$ as if it were a constant. Then, we construct $`\beta (t)`$ as the succession of the values obtained. In practice, we separate the data into sets of $`2n_b+1`$ blocks with overlaps of $`n_b`$ blocks. For each of them we calculate the $`\beta `$ value and associate it with the block $`n_b+1`$. In this way, we construct $`\beta (t)`$ as a set of discrete values that change smoothly in time. The number of blocks, $`n_b`$, must be tuned according to the data. If the value of $`n_b`$ is too big, it does not allow enough amplitude modulation and the interference is not cancelled. By contrast, if $`n_b`$ is too small, the function $`M(t)^q`$ for that small number of blocks will be almost monochromatic, in the sense that it will affect only a few frequency bins. Then, this method will behave as an adaptive multitaper method and, hence, it will remove whatever is in those frequency bins (any signal plus noise). For the prototype data, we have found that the values of $`n_b`$ need to depend on $`q`$. Thus, we build the function $`n_b(q)`$ satisfying the following requirements: * it must be able to remove the interference, * it should leave a noise level comparable to the intrinsic noise background of the interferometer, * an artificial ‘single-line’ signal with an amplitude of the order of the electrical interference should not be attenuated by more than 25 %. To satisfy all these requirements, we propose the following function $$n_b(q)=\{\begin{array}{cc}7,\hfill & q<2\\ 5,\hfill & 2\le q<4\\ 4,\hfill & 4\le q<6\\ 3,\hfill & 6\le q<8\\ 2,\hfill & 8\le q<16\\ 1,\hfill & q\ge 16.\end{array}$$ (30) Using this function $`n_b`$ and the values of $`q`$ displayed in table I, we have succeeded in removing the interference present in 128 blocks. The results are shown in figures 3 and 4.
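A sketch of the overlapping-block estimate of $`\beta (t)`$ described above, combined with the tuning function $`n_b(q)`$ of Eq. (30), is shown below; the truncated windows at the edges of the data stream are an assumption of this sketch.

```python
import numpy as np

def nb_of_q(q):
    """The tuning function n_b(q) of Eq. (30)."""
    for qmax, nb in ((2, 7), (4, 5), (6, 4), (8, 3), (16, 2)):
        if q < qmax:
            return nb
    return 1

def remove_modulated_line(x_blocks, M_blocks, q):
    """Subtract alpha(t)*M(t)**q with a piecewise amplitude beta(t):
    beta is fitted on 2*n_b+1 consecutive blocks (overlapping by n_b
    with the neighbouring sets) and assigned to the central block."""
    nb = nb_of_q(q)
    # Continuous M**q for non-integer q via the unwrapped phase.
    power = lambda m: np.abs(m) ** q * np.exp(1j * q * np.unwrap(np.angle(m)))
    cleaned = []
    for i in range(len(x_blocks)):
        lo, hi = max(0, i - nb), min(len(x_blocks), i + nb + 1)
        x = np.concatenate(x_blocks[lo:hi])
        m = power(np.concatenate(M_blocks[lo:hi]))
        beta = np.vdot(m, x) / np.vdot(m, m)    # local least-squares amplitude
        cleaned.append(x_blocks[i] - beta * power(M_blocks[i]))
    return cleaned
```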
### E Frequency drift We proceed now to remove the interference from the whole data stream. We split the data into fragments of 128 blocks and apply the previous method using the amplitude modulation. As a first attempt, we assume that the values of $`q`$ remain constant for all time. As a result, some of the lines are removed but others are not after a certain time. This is not surprising. Visual inspection of the spectra shows that these lines drift, $$q(t)=q_0+\delta q(t),$$ (31) where $`\delta q(t)`$ is small in comparison with $`q_0`$. Therefore, for the longer data set, we allow small changes in $`q`$. We apply to each fragment of 128 blocks the approximate matched filter, as described before, but using a much reduced parameter space, i.e., allowing only a maximum variation of $`10\mathrm{\Delta }q`$ about the $`q`$ values of the previous fragment of 128 blocks. In order to construct the templates, we again make use of Eq. (27), but we use the old values of $`q`$ (those obtained from the previous data fragment). Then we choose the new values of $`q`$ by maximizing the snr. Using this procedure, we have removed all the lines corresponding to the initial values of $`q`$ listed in table I. See figure 5. From the evolution of $`q`$, we observe that some values of $`q`$ remain almost constant, but many others change in time, with a maximum variation of 0.005 over 3695 blocks of data. Assuming the incoming signal is monochromatic, we can estimate the best timescale on which to perform the matched filtering, i.e., the length of the stretch of data for which the frequency resolution is greater than the maximum expected frequency variation due to the drift of $`q`$. This yields 120 blocks, while we were using 128. ## VI Discussion We have described an algorithm able to remove any kind of interference related to the incoming mains electricity supply. The study is based on the data produced by the Glasgow interferometer prototype. In the data, we have observed many interference lines that are highly non-linear and non-stationary. The form of the interference can be modeled by $`h(t)=\alpha (t)M(t)^{q(t)}`$, where $`M(t)`$ corresponds to the fundamental harmonic of the incoming electrical signal, $`\alpha (t)`$ is a slowly varying function of time, and $`q(t)`$ is almost constant, but can drift in time, i.e., $`q(t)=q_0+\delta q(t)`$, where $`\delta q(t)`$ is small in comparison to $`q_0`$. In order to detect these signals and estimate their parameters, we use the method of maximum likelihood detection to determine the values of $`q`$ (at a given instant). Then, we apply an adaptive procedure to determine the amplitude modulation, and we repeat it recursively for the whole data stream. The result is that all the lines which were initially detected (i.e., those with sufficient snr) have been tracked and completely removed. This method is able to recover monochromatic signals that are buried by the interference. The signal distortion is less than $`25\%`$. Thus, this procedure can assist in the search for continuous waves and also clean the statistics of the noise in the time domain. As we pointed out in an earlier paper , the removal of lines like these can reduce the level of non-Gaussian noise. Therefore, line removal is important since it can raise the sensitivity and duty cycle of the detectors to short bursts of gravitational waves as well. ###### Acknowledgements. We would like to thank J. Hough and the gravitational waves group at Glasgow University for providing their gravitational wave interferometer data for analysis. This work was partially supported by the European Union, TMR Contract No. ERBFMBICT972771.
# Modern nucleon-nucleon interactions and charge-symmetry breaking in nuclei ## Abstract Coulomb displacement energies, i.e., the differences between the energies of corresponding nuclear states in mirror nuclei, are evaluated using recent models for the nucleon-nucleon (NN) interaction. These modern NN potentials account for breaking of isospin symmetry and reproduce $`pp`$ and $`pn`$ phase shifts accurately. The predictions of these new potentials for the binding of $`{}_{}{}^{16}O`$ are calculated. A particular focus of our study is on the effects due to nuclear correlations and charge-symmetry breaking (CSB). We find that the CSB terms in the modern NN interactions substantially reduce the discrepancy between theory and experiment for the Coulomb displacement energies; however, our calculations do not completely explain the Nolen-Schiffer anomaly. Potential sources for the remaining discrepancies are discussed. The differences between the energies of corresponding states in mirror nuclei, the so-called Coulomb displacement energies, are due to charge-symmetry breaking (CSB) of the nucleon-nucleon (NN) interaction. If one assumes that the strong part of the nuclear force is charge symmetric, i.e., that the strong proton-proton interaction is identical to the interaction between two neutrons, then the Coulomb displacement energies originate entirely from the electromagnetic interaction between the nucleons. The dominant contribution is the Coulomb repulsion. After accurate experimental data on the charge distribution became available from electron scattering experiments, Hartree-Fock calculations with phenomenological models for the NN interaction, like the Skyrme forces, were performed which reproduced these measured charge distributions with good accuracy. The Coulomb displacement energies which were evaluated with these Hartree-Fock wave functions, however, underestimated the experimental data by typically seven percent. This has become known as the Nolen-Schiffer anomaly. Many attempts have been made to explain this discrepancy by the inclusion of electromagnetic corrections, many-body correlations beyond the Hartree-Fock approach, or explicit charge-symmetry breaking terms in the NN interaction. During the last few years, a new generation of realistic NN interactions has been developed, which yield very accurate fits of the $`pp`$ and $`pn`$ data. These new interactions account for isospin symmetry breaking (ISB) and also for CSB (which is a special case of ISB). The long-range part of these interactions is described in terms of the one-pion-exchange model, accounting correctly for the mass difference between the neutral pion, $`\pi ^0`$, and the charged pions, $`\pi ^\pm `$. This distinction between the masses of the neutral and charged pions is one origin of ISB in the resulting NN interactions. Moreover, these interactions also account for the mass difference between the proton and neutron. This gives rise to a difference in the matrix elements of the meson-exchange diagrams between two protons as compared to two neutrons. Within the one-boson-exchange model, this yields only a very small contribution that breaks charge symmetry. Besides the latter effect, the Argonne and Bonn potentials (which we will apply here) include additional CSB terms necessary to correctly reproduce the empirical differences in the scattering length and effective range parameters for $`pp`$ and $`nn`$ scattering in the $`{}_{}{}^{1}S_{0}^{}`$ state.
The Argonne $`V_{18}`$ (AV18) potential is constructed in a ($`S,T`$) decomposition (where $`S`$ and $`T`$ denote the total spin and isospin of the two interacting nucleons). The local potentials in the ($`S=0,T=1`$) channel are adjusted so as to reproduce the $`{}_{}{}^{1}S_{0}^{}`$ scattering length for the various isospin projections. This method of constructing CSB potentials implies that the information on CSB from the $`{}_{}{}^{1}S_{0}^{}`$ scattering length and effective range parameters is simply extrapolated to channels with $`L>0`$. More reliably, the information on CSB in NN partial waves with $`L>0`$ can be derived from a comprehensive meson-exchange model that includes diagrams beyond the simple one-meson-exchange approximation. Based upon the Bonn Full Model , ISB effects due to hadron mass-splitting have been calculated carefully up to partial waves with $`J=4`$ in ref. . The new high-precision NN potential CDBonn99 includes these ISB effects plus the effects from irreducible $`\pi \gamma `$ exchange as derived in . The difference between CDBonn99 and CDBonn96 is that the latter takes CSB into account only in $`{}_{}{}^{1}S_{0}^{}`$ and not in higher partial waves. Thus, a comparison between CDBonn99 and CDBonn96 demonstrates the effect of CSB in states with $`L>0`$. This will be useful for our discussion below. It is the aim of the present work to investigate the predictions of these new potentials for the properties of finite nuclei. Our example nucleus is $`{}_{}{}^{16}O`$. In particular, we want to determine the effects of correlations and of CSB by these potentials on the calculated Coulomb displacement energy. One possibility would be to perform self-consistent Brueckner-Hartree-Fock (BHF) calculations and extract the Coulomb displacement energies from the single-particle energies for protons and neutrons. We do not take this approach, for the following reasons: (i) Such self-consistent BHF calculations typically predict radii for the charge-density distributions which are too small. This implies that the leading Coulomb contribution to the displacement energy would be overestimated. Also, the calculation of the correction terms would be based on single-particle wavefunctions which are too localized. (ii) BHF calculations are appropriate for short-range correlations. However, long-range correlations involving the admixture of configurations with low excitation energies in the uncorrelated shell-model basis require a more careful treatment. (iii) The BHF single-particle energies do not account for any distribution of the single-particle strength consistent with realistic spectral functions. For the reasons listed, we take the following approach. We use single-particle wave functions from Hartree-Fock calculations with effective nuclear forces, which yield a good fit to the empirical charge distribution. These wave functions are used to determine the leading Coulomb contribution and corrections like the effects of finite proton size, the electromagnetic spin-orbit interaction, the kinetic energy correction due to the mass difference between proton and neutron, and the effects of vacuum polarization. Actually, for these contributions we use the results by Sato. The first column of our table 2 is taken from table 2 of ref. , which includes all the effects just listed. The correlation effects are taken into account in a two-step procedure.
We assume a model space defined in terms of shell-model configurations including oscillator single-particle states up to the 1p0f shell. We use the oscillator parameter $`b=1.76`$ fm, which is appropriate for $`{}_{}{}^{16}O`$. The effects of short-range correlations are calculated by employing an effective interaction, i.e. a $`𝒢`$-matrix suitable for the model space. This $`𝒢`$-matrix is determined as the solution of the Bethe-Goldstone equation $$𝒢(\mathrm{\Omega })=V+V\frac{Q_{\text{mod}}}{\mathrm{\Omega }-Q_{\text{mod}}TQ_{\text{mod}}}𝒢(\mathrm{\Omega }),$$ (1) where $`T`$ is identified with the kinetic energy operator, while $`V`$ stands for the bare two-body interaction including the Coulomb interaction and accounting for ISB terms in the strong interaction. The Pauli operator $`Q_{\text{mod}}`$ in this Bethe-Goldstone eq.(1) is defined in terms of two-particle harmonic oscillator states $`|\alpha \beta >`$ by $$Q_{\text{mod}}|\alpha \beta >=\{\begin{array}{cc}0\hfill & \text{if }\alpha \text{ or }\beta \text{ from }0s\text{ or }0p\text{ shell}\hfill \\ 0\hfill & \text{if }\alpha \text{ and }\beta \text{ from }1s0d\text{ or }1p0f\text{ shell}\hfill \\ |\alpha \beta >\hfill & \text{elsewhere}\hfill \end{array}$$ (2) As a first approximation we use the resulting $`𝒢`$-matrix elements and evaluate single-particle energies $`ϵ_\alpha `$ in the BHF approximation. This approximation, which will be denoted as BHF in the discussion below, accounts for short-range correlations, which are described in terms of configurations outside our model space. In a next step we add to this BHF definition of the nucleon self-energy the irreducible terms of second order in $`𝒢`$, which account for intermediate two-particle one-hole and one-particle two-hole configurations within the model space $$𝒰_\alpha ^{(2)}=\frac{1}{2}\underset{p_1,p_2,h}{}\frac{<\alpha h|𝒢|p_1p_2><p_1p_2|𝒢|\alpha h>}{\omega -(ϵ_{p_1}+ϵ_{p_2}-ϵ_h)+i\eta }+\frac{1}{2}\underset{h_1,h_2,p}{}\frac{<\alpha p|𝒢|h_1h_2><h_1h_2|𝒢|\alpha p>}{\omega -(ϵ_{h_1}+ϵ_{h_2}-ϵ_p)-i\eta }.$$ (3) Applying the techniques described in we can solve the Dyson equation for the single-particle Green's function $`G_\alpha (\omega )`$ $$G_\alpha (\omega )=g_\alpha (\omega )+g_\alpha (\omega )𝒰_\alpha ^{(2)}(\omega )G_\alpha (\omega )$$ (4) with $`g_\alpha `$ the BHF approximation for the single-particle Green's function, and determine its Lehmann representation $$G_\alpha (\omega )=\underset{n}{}\frac{\left|<\mathrm{\Psi }_n^{A+1}|a_\alpha ^{\dagger }|\mathrm{\Psi }_0>\right|^2}{\omega -(E_n^{A+1}-E_0)+i\eta }+\underset{m}{}\frac{\left|<\mathrm{\Psi }_m^{A-1}|a_\alpha |\mathrm{\Psi }_0>\right|^2}{\omega -(E_0-E_m^{A-1})-i\eta }.$$ (5) This directly yields the energies of the states with $`A\pm 1`$ nucleons we are interested in, as well as the spectroscopic factors for nucleon addition or removal. Numerical results for the binding energy of $`{}_{}{}^{16}O`$ and some single-particle properties are listed in table 1. This table contains two columns for each NN interaction considered. The columns labeled “ISB” contain results of calculations in which the ISB terms of the interactions are taken into account. In order to demonstrate the size of these ISB terms, we also performed calculations in which we restored isospin symmetry of the strong interaction by replacing the $`pp`$ and $`nn`$ interactions by the corresponding $`pn`$ interaction (see columns labeled $`pn`$).
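To make the bookkeeping of Eq. (2) concrete, the following sketch encodes the Pauli projection for the model space, labeling each oscillator orbital by its quantum numbers $`(n,l)`$ and the major-shell number $`N=2n+l`$ (so that $`0s`$ and $`0p`$ correspond to $`N=0,1`$, while $`1s0d`$ and $`1p0f`$ correspond to $`N=2,3`$). This labeling convention is our own illustration, not a prescription from the calculation.

```python
def major_shell(n, l):
    """Harmonic-oscillator major shell quantum number N = 2n + l."""
    return 2 * n + l

def Q_mod(orb_a, orb_b):
    """Pauli projection of Eq. (2); orb = (n, l).
    Returns 1 if the pair |ab> survives in intermediate states, 0 otherwise."""
    Na, Nb = major_shell(*orb_a), major_shell(*orb_b)
    if Na <= 1 or Nb <= 1:              # alpha or beta in the 0s or 0p shell
        return 0
    if Na in (2, 3) and Nb in (2, 3):   # both inside the 1s0d or 1p0f shells
        return 0
    return 1                            # elsewhere: pair kept

# e.g. a (1s, 0d) pair is projected out, a (0d, 0g) pair is kept:
print(Q_mod((1, 0), (0, 2)), Q_mod((0, 2), (0, 4)))
```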
It is worth noting that the inclusion of long-range correlations by means of the Green's function approach outlined above yields an additional binding energy of around 2 MeV per nucleon for all interactions considered. About 1.5 MeV of these 2 MeV per nucleon can be attributed to the admixture of the low-lying particle-particle states within the model space. This energy would also be included in a BHF calculation using the BHF Pauli operator in the Bethe-Goldstone eq.(1) instead of $`Q_{\text{mod}}`$ defined in (2). An additional 0.5 MeV per nucleon arises from the inclusion of the hole-hole scattering terms in the Green's function approach. The effects of long-range correlations are also very important for the quasiparticle energies. For this quantity, however, the effects of the low-energy particle-particle and hole-hole contributions tend to cancel each other to a large extent. The inclusion of 2p1h configurations within the model space lowers the proton quasiparticle energy for the $`p_{1/2}`$ state (using AV18) from -12.12 MeV to -15.38 MeV. If, however, the admixture of 2h1p configurations is also included in the definition of the self-energy, the quasiparticle energy yields -12.54 MeV, close to its original BHF value. Similar cancellations are observed for other states and also for other NN interactions. The Bonn potentials CDBonn96 and CDBonn99 yield around 0.9 MeV per nucleon more binding energy than the Argonne potential AV18. This is to be compared with a difference of 1.2 MeV per nucleon, which has been obtained comparing the results of BHF calculations for these potentials in nuclear matter at saturation density. These energy differences can be related to the fact that the Argonne potential is “stiffer” than the Bonn potential. More stiffness creates more correlations. This can be seen from the spectroscopic factors in table 1, which deviate more from unity in the case of AV18 as compared to Bonn. Including the ISB terms in the NN interaction, rather than using the $`np`$ interaction for all isospin channels, has a small but non-negligible effect on the calculated binding energies. The $`pp`$ and $`pn`$ scattering lengths imply an NN interaction which is slightly more attractive in the $`pn`$ channel than in $`pp`$. This small difference translates into about 0.2 MeV per nucleon in $`{}_{}{}^{16}O`$. We finally discuss the effects of correlations and of charge-symmetry breaking in the NN interaction on the calculated Coulomb displacement energies. Results are listed in table 2 for various one-hole and one-particle states relative to $`{}_{}{}^{16}O`$. The first column of this table, $`C^{(1)}`$, contains the results of ref. for the leading Coulomb contributions, the corrections due to the finite proton size, the electromagnetic spin-orbit interaction, the kinetic energy correction due to nucleon mass splitting, and the effects of vacuum polarization. As discussed above, we think that it is more realistic to evaluate these contributions for single-particle wave functions which are derived from Hartree-Fock calculations with phenomenological forces rather than using the wavefunctions derived from a microscopic BHF calculation. The second and third columns of table 2 list the corrections to the Coulomb displacement energies which originate from the treatment of short-range ($`\delta _{SR}`$) and long-range correlations ($`\delta _{LR}`$) discussed above.
The correction $`\delta _{SR}`$ has been derived from the differences of the BHF single-particle energies for protons and neutrons, subtracting the Coulomb displacement energy evaluated in the mean-field approximation, $$\delta _{SR}=ϵ_i^{BHF}(\text{proton})-ϵ_i^{BHF}(\text{neutron})-\delta _{\text{mean field}}.$$ (6) In this case the BHF calculations have been performed with the $`pn`$ versions of the different interactions, i.e. without any ISB terms. The correction terms $`\delta _{LR}`$ have been evaluated in a similar way from the quasiparticle energies determined in the Green's function approach, subtracting the BHF effects already contained in $`\delta _{SR}`$. The correction terms $`\delta _{SR}`$ and $`\delta _{LR}`$ include the effects represented by irreducible diagrams of second and higher order in the interaction, in which at least one of the interaction lines represents the Coulomb interaction. In addition, they contain the effects of folded diagrams discussed by Tam et al. . We find that the correlation effects are rather weak. The short- and long-range contributions tend to cancel each other. This is true in particular for the one-hole states $`p_{3/2}^{-1}`$ and $`p_{1/2}^{-1}`$. The effects of short-range correlations dominate in the case of the particle states, $`d_{5/2}`$ and $`1s_{1/2}`$, leading to a total correlation effect of around 100 keV in the Coulomb displacement energies. This effect is slightly larger for the Argonne potential than for the Bonn potentials because of the stronger correlations in the case of Argonne. The contributions to the Coulomb displacement energies caused by the CSB terms in the NN interactions, $`\delta _{CSB}`$, are listed in the fourth column of table 2. The CDBonn96 potential, which includes CSB only in the $`{}_{}{}^{1}S_{0}^{}`$ state to fit the empirical $`pp`$ and $`nn`$ scattering lengths, provides the smallest $`\delta _{CSB}`$. When CSB as derived from a comprehensive meson-exchange model is included in partial waves with $`L>0`$, as done in CDBonn99, the $`\delta _{CSB}`$ contribution approximately doubles. The Argonne AV18 potential also includes CSB for $`L>0`$ and, therefore, produces a relatively large $`\delta _{CSB}`$. Note, however, that the CSB for $`L>0`$ in AV18 is just an extrapolation of what $`{}_{}{}^{1}S_{0}^{}`$ needs to fit the $`pp`$ and $`nn`$ scattering lengths; it is not based upon theory. In any case, when an NN potential includes CSB beyond the $`{}_{}{}^{1}S_{0}^{}`$ state, a contribution to the Coulomb displacement energies of about 100 keV is created, while CSB in $`{}_{}{}^{1}S_{0}^{}`$ alone generates merely about 50 keV. This demonstrates how important it is to include CSB in all relevant NN partial waves if one wants to discuss a phenomenon like the Nolen-Schiffer anomaly in a proper way. However, it also turns out that even this careful consideration of CSB does not fully explain the Nolen-Schiffer anomaly in our calculations. Our final predictions, given in column $`C^{Tot}`$ of table 2, still differ by about 100 keV from the experimental values. Thus the CSB NN force contribution has cut in half the original discrepancy of about 200 keV. For the remaining discrepancy, many explanations are possible. First, the nuclear structure part of our calculations may carry some uncertainty. To obtain an idea of how large such uncertainties may be, we compare the results for Coulomb displacement energies using the Skyrme II force and no CSB by Sato with the more recent ones by Suzuki et al. .
For the single-hole state $`p_{1/2}^{-1}`$, Suzuki’s result is larger by 167 keV as compared to Sato’s; and for the single-particle state $`d_{5/2}`$, the two calculations differ by 138 keV. Uncertainties of this size can well explain the remaining discrepancies in our results. Another possibility is that the CSB forces contained in CDBonn99 and AV18 are too weak. Based upon our results, it may be tempting to conclude that CSB forces of about twice their current strength are needed. Notice, however, that one cannot just add more CSB forces to these potentials. A crucial constraint for any realistic CSB NN force is that it reproduces the empirical difference between the $`pp`$ and $`nn`$ $`{}_{}{}^{1}S_{0}^{}`$ scattering lengths, $`\mathrm{\Delta }a_{CSB}=1.5\pm 0.5`$ fm . The CSB contained in CDBonn99 is based upon nucleon mass-difference effects as obtained in a comprehensive meson-exchange model which completely explains the entire $`\mathrm{\Delta }a_{CSB}`$, leaving no room for additional CSB contributions. The only possibility that remains, then, is to simply ignore the above CSB effects and consider an alternative source for CSB, namely $`\rho ^0`$-$`\omega `$ mixing. Traditionally, it was believed that $`\rho ^0`$-$`\omega `$ mixing causes essentially all CSB in the nuclear force . However, recently some doubt has been cast on this paradigm. Some researchers found that $`\rho ^0`$-$`\omega `$ exchange may have a substantial $`q^2`$ dependence, such as to cause this contribution to nearly vanish in $`NN`$. The recent finding of ref. that the empirically known CSB in the nuclear force can be explained solely from nucleon mass splitting (leaving essentially no room for additional CSB contributions from $`\rho ^0`$-$`\omega `$ mixing or other sources) fits well into this new scenario. However, since the issue of the $`q^2`$ dependence of $`\rho ^0`$-$`\omega `$ exchange and its impact on $`NN`$ is by no means settled (see Refs. for critical discussions and more references), it is premature to draw any definite conclusions. In any case, for test purposes one may invoke the $`\rho ^0`$-$`\omega `$ mechanism as an alternative. Note, however, that due to the constraint that the $`{}_{}{}^{1}S_{0}^{}`$ $`\mathrm{\Delta }a_{CSB}`$ be reproduced quantitatively, the $`{}_{}{}^{1}S_{0}^{}`$ contribution will most likely not change, no matter what microscopic mechanism is assumed for CSB. However, the CSB contributions in partial waves with $`L>0`$ may depend sensitively on the underlying mechanism. The CSB force caused by nucleon mass-splitting has an essentially scalar character, while $`\rho ^0`$-$`\omega `$ exchange is of vector nature. Since we have seen above that the $`L>0`$ partial waves produce about 50% of the total CSB effect, the higher partial waves may carry the potential for substantial changes. This would be an interesting topic for a future investigation. In summary, we have compared results for bulk properties of finite nuclei derived from modern models of the nucleon-nucleon interaction. Effects of short-range as well as long-range correlations are taken into account. The different models for the NN interaction are essentially phase-shift equivalent. Nevertheless, they predict differences in the binding energy of $`{}_{}{}^{16}O`$ of up to 1 MeV per nucleon. The main source of this discrepancy could be the local versus non-local description of the pion-exchange interaction, as discussed in the literature for the deuteron and nuclear matter.
The CSB force components contained in the CDBonn99 and AV18 potentials cut in half the discrepancy that is known as the Nolen-Schiffer anomaly. The remainder of the discrepancy may be due to subtle nuclear structure effects left out of our current calculations. The consideration of alternative mechanisms for CSB also has the potential to shed light on the open issues. This work was supported in part by the Graduiertenkolleg “Struktur und Wechselwirkung von Hadronen und Kernen” (DFG, GRK 132/3) and by the U.S. National Science Foundation under Grant No. PHY-9603097.
# Measuring the Higgs Boson Yukawa Couplings at an NLC ## I Introduction The search for the Higgs boson is one of the primary objectives of present and future colliders. Once the Higgs boson has been discovered, it will be important to measure its couplings to fermions and to gauge bosons. These couplings are completely determined in the Standard Model, and the process $`e^+e^{-}\to t\overline{t}h`$ provides a direct mechanism for measuring the $`t\overline{t}h`$ Yukawa coupling. Since this coupling can be significantly different in a supersymmetric model from that in the Standard Model, the measurement would provide a means of discriminating between different models. The associated production of a Higgs boson with a top quark pair in $`e^+e^{-}`$ collisions has a small rate, around $`1`$ fb for $`\sqrt{s}=500`$ GeV and $`M_h\simeq 100`$ GeV. However, the signature, $`e^+e^{-}\to t\overline{t}h\to W^+W^{-}b\overline{b}b\overline{b}`$, is distinctive and a precise measurement may be possible. A similar reaction in the $`b`$ quark system, $`e^+e^{-}\to b\overline{b}h`$, is suppressed in the Standard Model due to the smallness of the $`b\overline{b}h`$ Yukawa coupling. In a supersymmetric model, however, this coupling can be enhanced for large values of the parameter $`\mathrm{tan}\beta `$. In addition, a supersymmetric model contains resonant contributions not present in the Standard Model such as, for example, the process $`e^+e^{-}\to A^0h_i^0`$, $`A^0\to b\overline{b}`$. In order to extract the Yukawa couplings, precise predictions for the rates, including QCD corrections, are necessary. The QCD corrections to the associated production of a Higgs boson with a heavy quark pair have been computed by two groups and are the subject of this note. ## II Associated Higgs-top quark production in the Standard Model The Standard Model cross section for $`e^+e^{-}\to t\overline{t}h`$ receives contributions from both $`s`$-channel photon and $`s`$-channel $`Z`$ exchange. The most relevant contributions are those in which the Higgs boson is emitted from a top quark leg, which are directly proportional to the $`t\overline{t}h`$ Yukawa coupling. The contribution in which the Higgs boson is emitted from the $`Z`$ boson is always less than a few per cent at $`\sqrt{s}=500`$ GeV and can safely be neglected. In addition, at $`\sqrt{s}=500`$ GeV, the photon exchange contribution provides the bulk of the cross section. The $`𝒪(\alpha _s)`$ inclusive cross section for $`e^+e^{-}\to t\overline{t}h`$ receives contributions from real gluon emission from the final quark legs, $$e^+e^{-}\to t\overline{t}hg,$$ (1) and also from virtual gluon contributions to the lowest order process. The real gluon emission is separated into a hard and a soft contribution by introducing an arbitrary cutoff on the gluon momentum, $`E_{min}`$. The infrared divergences in the soft gluon emission are then regulated by the introduction of a small gluon mass, $`m_g`$. When the one-loop virtual and the real contributions are combined, the final result is finite and independent of both $`E_{min}`$ and $`m_g`$. In Fig. 1, we show the various contributions to the total cross section. $`\sigma _1`$ is the complete $`𝒪(\alpha _s)`$ corrected rate, $$\sigma _1=\sigma _0+\sigma _{virt}+\sigma _{hard}+\sigma _{soft}.$$ (2) The counterterms are included in $`\sigma _{virt}`$. The combination $`\sigma _{virt}+\sigma _{soft}`$ is independent of the gluon mass, but retains a dependence on $`E_{min}`$ which is cancelled by $`\sigma _{hard}`$. At $`\sqrt{s}=500`$ GeV, the corrections are large and positive, significantly increasing the rate. The corrections are smaller at $`\sqrt{s}=1`$ TeV, with large cancellations between the hard and the virtual plus soft contributions. The size of the QCD corrections can be described by a $`K`$ factor, $$K(\mu )\equiv \frac{\sigma _1}{\sigma _0},$$ (3) which is shown in Fig. 2. Note that after the cancellation of the ultraviolet divergences, the only $`\mu `$ dependence is in $`\alpha _s(\mu )`$. If $`\mu =\sqrt{s}`$, then $`K(M_h=100\text{GeV})`$ is reduced to $`1.4`$ from the value $`K=1.5`$ obtained with $`\mu =M_t`$ for $`\sqrt{s}=500`$ GeV.
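Since the residual scale dependence resides entirely in $`\alpha _s(\mu )`$, the shift of the $`K`$ factor with $`\mu `$ can be estimated by a simple rescaling, $`K(\mu )1+[K(\mu _0)-1]\alpha _s(\mu )/\alpha _s(\mu _0)`$. The sketch below uses one-loop running with assumed inputs ($`\alpha _s(M_Z)=0.118`$, $`M_t=175`$ GeV); it is an illustration of this scaling, not the calculation behind Fig. 2.

```python
import numpy as np

def alpha_s_one_loop(mu, nf=5, alpha_mz=0.118, mz=91.187):
    """One-loop running of the strong coupling from alpha_s(M_Z)."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * np.pi)
    return alpha_mz / (1.0 + 2.0 * b0 * alpha_mz * np.log(mu / mz))

# K(mu) - 1 scales with alpha_s(mu) at this order, so the K factor quoted
# at mu = M_t can be rescaled to mu = sqrt(s):
K_mt, mt, roots = 1.5, 175.0, 500.0   # K(M_t) from the text; M_t assumed
K_roots = 1.0 + (K_mt - 1.0) * alpha_s_one_loop(roots) / alpha_s_one_loop(mt)
print(f"K(mu = sqrt(s)) ~ {K_roots:.2f}")   # ~1.4, consistent with the text
```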
## III Associated Higgs-top quark production in a supersymmetric model In the minimal supersymmetric model, a top quark pair can be produced in association with either of the neutral Higgs bosons, $`h_i=h^0,H^0`$, or with the pseudoscalar, $`A^0`$. The production of the pseudoscalar is highly suppressed, and the rate for $`e^+e^{-}\to t\overline{t}A^0`$ is less than $`10^{-2}`$ fb at $`\sqrt{s}=500`$ GeV for all values of $`\mathrm{tan}\beta `$ and $`M_A`$. The rate for either $`e^+e^{-}\to t\overline{t}h^0`$ or $`e^+e^{-}\to t\overline{t}H^0`$ is greater than $`0.75`$ fb throughout most of the $`M_A`$–$`\mathrm{tan}\beta `$ plane, and we show this in Fig. 3. We see that this region includes much of the parameter space. The results shown in Fig. 3 are relatively insensitive to changing the squark masses or the mixing parameters of the supersymmetric sector. ## IV Associated Higgs-bottom quark production in a supersymmetric model In the Standard Model, it will be difficult to extract the bottom quark-Higgs Yukawa coupling from a measurement of $`e^+e^{-}\to b\overline{b}h`$, since the coupling itself is tiny and the $`Z`$ contribution is important, so that there is a significant dependence on the $`ZZh`$ coupling. In the minimal supersymmetric model, however, there are $`5`$ Higgs bosons, $`\varphi =h^0,H^0,A^0,H^\pm `$, so that additional processes not present in the Standard Model may be useful to pin down the fermion-Higgs boson Yukawa couplings. In addition, for certain values of $`\mathrm{tan}\beta `$, the $`b\overline{b}\varphi `$ Yukawa couplings receive significant enhancements, and so the processes $`e^+e^{-}\to b\overline{b}\varphi `$ may be larger than in the Standard Model. The physics of $`b\overline{b}h_i^0`$ production is significantly different from that of Higgs production with a $`t\overline{t}`$ pair. In the case of the $`b`$ quark, there is a large resonant contribution from the process $`e^+e^{-}\to A^0h_i^0`$, $`A^0\to b\overline{b}`$. This enhancement occurs when $`M_{h_i}\simeq M_A`$ and so is relevant for $`M_A`$ below about $`120`$ GeV for $`e^+e^{-}\to b\overline{b}h_i^0`$. Fig. 4 shows the different contributions to the process $`e^+e^{-}\to b\overline{b}h^0`$ for $`\mathrm{tan}\beta =40`$ at $`\sqrt{s}=500`$ GeV. The curve labelled “$`total_{NW}`$” is the narrow width approximation to the $`A^0`$ resonance, while the curve labelled “$`AhZ`$” is only the contribution from the square of the resonant diagram. At small $`M_A`$ ($`<120`$ GeV), the narrow width approximation is an excellent approximation to the total rate for this value of $`\mathrm{tan}\beta `$. For smaller $`\mathrm{tan}\beta `$, the narrow width approximation becomes increasingly inaccurate, since the $`Z`$ exchange contribution becomes more and more relevant. At large $`M_A`$, the rate is given predominantly by the $`Z`$ boson exchange contribution and is typically between $`5`$ and $`10`$ fb.
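The narrow-width estimates used here have the generic form of a production cross section times a branching fraction. A sketch under simple assumptions is given below: the leading-order $`A^0\to b\overline{b}`$ width with the $`\mathrm{tan}\beta `$-enhanced coupling, with the bulk of the QCD correction absorbed into a running mass and any residual multiplicative factor supplied by the user. This is an illustrative sketch, not the expression used for the figures.

```python
import numpy as np

GF = 1.16639e-5   # Fermi constant in GeV^-2

def gamma_A_to_bb(mA, mb_run, tan_beta, k_qcd=1.0):
    """Leading-order width for A0 -> b bbar with the tan(beta)-enhanced
    coupling; mb_run = mb(mA) is the running mass, and k_qcd is a
    placeholder for the residual multiplicative QCD correction."""
    beta_b = np.sqrt(1.0 - 4.0 * mb_run**2 / mA**2)  # threshold factor (beta for a pseudoscalar)
    return (3.0 * GF * mA * mb_run**2 * tan_beta**2
            / (4.0 * np.sqrt(2.0) * np.pi)) * beta_b * k_qcd

def sigma_bbh_nwa(sigma_Ah, gamma_bb, gamma_tot):
    """Narrow-width estimate: sigma(e+e- -> A0 h) times BR(A0 -> b bbar)."""
    return sigma_Ah * gamma_bb / gamma_tot
```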
In the narrow width approximation, the QCD corrections to the rate are included simply through the QCD corrections to the pseudoscalar width. Away from the pseudoscalar resonance (large $`M_A`$), inclusion of the QCD corrections would require a complete calculation, which we do not attempt in the present analysis, since the interesting region is near the resonance where the rate is enhanced. For heavy Higgs production, $`H^0`$, the narrow width approximation is an excellent approximation for all values of $`\mathrm{tan}\beta `$, so the QCD corrections can be accurately included everywhere. For $`\mathrm{tan}\beta <5`$, the cross section is larger than $`20`$ fb even for $`M_A\simeq 200`$ GeV. For $`\mathrm{tan}\beta >5`$, the rate is greater than $`20`$ fb for $`M_A>110`$ GeV, as shown in Fig. 5. This process can potentially be used to probe the couplings of the heavier neutral Higgs boson and to obtain a precise measurement of the product of the Higgs couplings, $`g_{bbH}g_{ZAH}`$. We can safely work in the narrow width approximation also for the case of $`e^+e^{-}\to b\overline{b}A^0`$ production. In fact, in this case the contributions of the $`h_i^0`$ resonances are completely dominant, and the exact cross section can be distinguished from the one obtained using the narrow width approximation only at very high values of $`\mathrm{tan}\beta `$. The case $`\mathrm{tan}\beta =40`$ is illustrated in Fig. 6. Unlike $`t\overline{t}A^0`$ production, the process $`e^+e^{-}\to b\overline{b}A^0`$ is not suppressed relative to $`e^+e^{-}\to b\overline{b}h_i`$ production. For $`M_A<200`$ GeV, the cross section is always greater than $`20`$ fb. ## V Conclusion We have computed the $`𝒪(\alpha _s)`$ corrected rate for $`e^+e^{-}\to t\overline{t}h_i^0`$. At $`\sqrt{s}=500`$ GeV, the corrections are large and positive, and this process can be used to measure the $`t\overline{t}h_i^0`$ Yukawa couplings, both in the Standard Model and over much of the parameter space of the minimal supersymmetric model. In a supersymmetric model, the rates for $`e^+e^{-}\to b\overline{b}\varphi `$ can be enhanced for large values of $`\mathrm{tan}\beta `$ and relatively small values of $`M_A`$. In such models, the QCD corrections can be accurately included using the narrow width approximation in the region where the scalar or pseudoscalar resonance dominates. The $`b\overline{b}\varphi `$ production processes will measure a combination of Higgs Yukawa couplings. ## Acknowledgments The work of S. D. is supported by the U.S. Department of Energy under contract DE-AC02-76CH00016. The work of L. R. is supported by the U.S. Department of Energy under contract DE-FG02-95ER40896.
# Search for bottom squarks in 𝑝⁢𝑝̄ collisions at √𝑠=1.8 TeV ## Abstract We report on a search for bottom squarks ($`\stackrel{~}{b}`$) produced in $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$ TeV using the DØ detector at Fermilab. Bottom squarks are assumed to be produced in pairs and to decay to the lightest supersymmetric particle (LSP) and a $`b`$ quark with a branching fraction of 100%. The LSP is assumed to be the lightest neutralino and stable. We set limits on the production cross section as a function of $`\stackrel{~}{b}`$ mass and LSP mass. preprint: FNAL-PUB-99-046-E Supersymmetry (SUSY) is a hypothetical fundamental space-time symmetry relating bosons and fermions . Supersymmetric extensions to the standard model (SM) feature as yet undiscovered supersymmetric partners for every SM particle. The scalar quarks (squarks) $`\stackrel{~}{q}_L`$ and $`\stackrel{~}{q}_R`$ are the partners of the left-handed and right-handed quarks, respectively. These are weak eigenstates, and can mix to form the mass eigenstates, with $`\stackrel{~}{q}_1=\stackrel{~}{q}_L\mathrm{cos}\theta +\stackrel{~}{q}_R\mathrm{sin}\theta `$ for the lighter squark, and the orthogonal combination for the heavier squark $`\stackrel{~}{q}_2`$. In most SUSY models, the masses of the squarks are approximately degenerate. But in some models, the lighter top and bottom squarks could have a lower mass than the other squarks because of the high mass values of the top and bottom quarks. In particular, lighter bottom squarks could arise for large values of tan$`\beta `$, the ratio of the vacuum expectation values of the two Higgs fields in the minimal supersymmetric standard model. We report the results of a mixing-independent search for bottom squarks produced in $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$ TeV. Squarks are produced in pairs by QCD processes with the production cross section depending on the mass of the squark but not on the mixing angle $`\theta `$. We search for events where both squarks decay to the lightest neutralino $`\stackrel{~}{\chi }_1^0`$ via $`\stackrel{~}{b}\to \stackrel{~}{\chi }_1^0+b`$ and assume that the $`\stackrel{~}{\chi }_1^0`$ is the lightest supersymmetric particle (LSP) and stable. This should be the dominant decay channel provided that the mass of the squark ($`m_{\stackrel{~}{b}}`$) is larger than the combined masses of the $`b`$ quark and LSP ($`m_{\text{LSP}}`$); therefore we assume its branching fraction is 100%. This yields a final state consisting of two $`b`$ quarks and two unobserved stable particles resulting in missing transverse energy ($`E\text{/}_T`$) in the detector. In this paper, we give limits on the squark pair production cross section for different values of $`m_{\stackrel{~}{b}}`$ and $`m_{\text{LSP}}`$. Limits on the cross section are used to exclude a region in the ($`m_{\text{LSP}}`$, $`m_{\stackrel{~}{b}}`$) plane. Limits from the CERN $`e^+e^{}`$ collider (LEP) experiments depend on the $`Z/\gamma `$-to-squark coupling, which is a function of the mixing angle. For maximal coupling, the LEP exclusion region can extend to the kinematic maximum; for example, to about 85 GeV/$`c^2`$ at $`\sqrt{s}=183`$ GeV. The data used for our analysis were collected during 1992–1996 by the DØ detector at the Fermilab Tevatron Collider.
The DØ detector is composed of three major systems: an inner detector for tracking charged particles, a uranium/liquid argon calorimeter for measuring electromagnetic and hadronic energies, and a muon spectrometer consisting of a magnetized iron toroid and three layers of drift tubes. The detector measures jets with an energy resolution of approximately $`\sigma /E=0.8/\sqrt{E}`$ ($`E`$ in GeV) and muons with a momentum resolution of $`\sigma /p=[(\frac{0.18(p-2)}{p})^2+(0.003p)^2]^{1/2}`$ ($`p`$ in GeV$`/c`$). $`E\text{/}_T`$ is determined by summing the calorimeter and muon transverse energies, and is measured with a resolution of $`\sigma `$ = 1.08 GeV + 0.019$`(\mathrm{\Sigma }|E_T|)`$ . Four channels are combined to set limits on the production of bottom squarks. The first required an $`E\text{/}_T`$ and jets topology. This channel was previously used to set limits on the mass of the top squark, which was assumed to decay $`\stackrel{~}{t}\to \stackrel{~}{\chi }_1^0+c`$ . The other three channels in addition required that at least one jet has an associated muon, thereby tagging $`b`$ quark decay, and were used to set limits on a charge 1/3 third generation leptoquark for the decay $`LQ\to \nu _\tau +b`$ . We use identical data samples and event selections for the bottom squark limits presented in this paper. For all channels, the presence of significant $`E\text{/}_T`$ is used to identify the non-interacting LSPs. Figure 1 shows the expected $`E\text{/}_T`$ distribution for two values of $`m_{\stackrel{~}{b}}`$ and different $`m_{\text{LSP}}`$ . Our requirement that $`E\text{/}_T`$$`>35`$–$`40`$ GeV reduces the acceptance for small values of the mass difference $`m_{\stackrel{~}{b}}`$$`-`$$`m_{\text{LSP}}`$. Backgrounds arise from events where neutrinos produce significant $`E\text{/}_T`$; for example, in $`W`$+jets events, where $`W\to l\nu `$. Events for the $`E\text{/}_T`$+jets channel were collected using a trigger that required $`E\text{/}_T`$$`>35`$ GeV. The offline analysis required two jets ($`E_T^{\mathrm{jet}}>30`$ GeV), $`E\text{/}_T`$$`>40`$ GeV, and no isolated electrons or muons. Events had to have only one primary vertex to assure an unambiguous calculation of $`E\text{/}_T`$. To eliminate QCD backgrounds, additional cuts were made on the angles between the two jets, and between jets and the direction of the $`E\text{/}_T`$. Data with an integrated luminosity of 7.4 pb<sup>-1</sup>, satisfying the above selection criteria, yielded three candidate events. Background was estimated to be $`3.5\pm 1.2`$ events, with $`3.0\pm 0.9`$ events from $`W`$ boson decays and $`0.5\pm 0.3`$ events from $`Z`$ boson decays .
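For concreteness, the offline selection for the $`E\text{/}_T`$+jets channel can be written as a single predicate. The two angular thresholds below are illustrative placeholders, since the text states only that angular cuts were applied; the event-record field names are likewise hypothetical.

```python
def passes_met_jets_selection(event,
                              dphi_jet_jet_max=2.9,      # hypothetical threshold
                              dphi_met_jet_min=0.3):     # hypothetical threshold
    """Sketch of the MET+jets offline selection described in the text."""
    jets = [j for j in event["jets"] if j["et"] > 30.0]  # two jets, E_T > 30 GeV
    if len(jets) < 2:
        return False
    if event["met"] <= 40.0:                             # MET > 40 GeV
        return False
    if event["n_isolated_leptons"] > 0:                  # no isolated e or mu
        return False
    if event["n_primary_vertices"] != 1:                 # unambiguous MET
        return False
    if event["dphi_jet_jet"] >= dphi_jet_jet_max:        # anti-back-to-back (QCD)
        return False
    if event["dphi_met_nearest_jet"] <= dphi_met_jet_min:  # MET not along a jet
        return False
    return True
```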
To remove QCD backgrounds, events were selected with $`E\text{/}_T`$ $`>35`$ GeV and an azimuthal angular separation between the $`E\text{/}_T`$ and the nearest jet of $`>0.7`$ radians. For the single muon channels, backgrounds from $`W`$ boson decays were reduced by cuts on muon-jet correlations, while background from top quark production was minimized by cuts on the scalar sum of jet $`E_T`$. After imposition of all selection criteria, two events remained in the data. We considered background contributions to the muon channels from $`t\overline{t}`$ and $`W`$ and $`Z`$ boson decays . Top quark events have multiple $`b`$ quarks and $`E\text{/}_T`$, and we estimated that $`1.4\pm 0.5`$ $`t\overline{t}`$ events remained in our sample. $`W`$ and $`Z`$ events have $`E\text{/}_T`$ from $`W\to l\nu `$ or $`Z\to \nu \overline{\nu }`$. They can also have muons near jets that can mimic $`b`$ quark decays when a prompt muon overlaps a jet, or a jet fragments into a muon via a $`c`$ quark or a $`\pi /K`$ decay. We estimated there were $`1.0\pm 0.4`$ $`W`$ boson events and $`0.1\pm 0.1`$ $`Z`$ boson events in the sample. The total background for the muon channels was therefore $`2.5\pm 0.6`$ events. Combining the four channels yields five events, with a total estimated background of $`6.0\pm 1.3`$ events. We set limits on the cross section by combining the detection efficiencies and integrated luminosities for the different channels. We calculate the detection efficiency using Monte Carlo (MC) generated acceptances , multiplied by trigger and reconstruction efficiencies obtained from data . The total efficiencies for different squark and neutralino masses are summarized in Table I. Using a muon to tag $`b`$ quark decays reduces the efficiency for those channels, but their higher integrated luminosities yield a sensitivity comparable to that of the $`E\text{/}_T`$+jets channel. Including systematic errors and statistics for the MC, the total uncertainty on the combined efficiency varies between 8.6% and 29%, depending on the assumed masses. The jet energy scale dominates the systematic error for $`m_{\stackrel{~}{b}}`$ = 70 GeV/$`c^2`$, while uncertainties on the muon trigger and reconstruction efficiency dominate at higher squark masses. The 95% confidence level (C.L.) upper limits on the pair production cross section are determined using Bayesian methods, and include the systematic uncertainty on the efficiency and a 5.3% uncertainty in the integrated luminosity. The resulting upper limits are given in Table I for different values of $`m_{\stackrel{~}{b}}`$ and $`m_{\text{LSP}}`$. We use the program prospino to calculate the bottom squark pair production cross section as a function of $`m_{\stackrel{~}{b}}`$. The cross section is evaluated assuming a renormalization scale $`\mu =m_{\stackrel{~}{b}}`$. The program includes next-to-leading order diagrams, and uses cteq4m parton distribution functions . For any given $`m_{\stackrel{~}{b}}`$, we determine the value of $`m_{\text{LSP}}`$ where our 95% C.L. limit intersects the theoretical cross section. The excluded region in the ($`m_{\text{LSP}}`$,$`m_{\stackrel{~}{b}}`$) plane is shown in Fig. 2. We exclude values of $`m_{\stackrel{~}{b}}`$ below 115 GeV/$`c^2`$ for $`m_{\text{LSP}}<20`$ GeV/$`c^2`$. For $`m_{\stackrel{~}{b}}`$ = 85 GeV/$`c^2`$, we exclude the region with $`m_{\text{LSP}}<47`$ GeV/$`c^2`$. Also shown are limits from ALEPH for $`\sqrt{s}=181`$–$`184`$ GeV.
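A minimal sketch of a Bayesian 95% C.L. upper limit on the signal count, with a flat prior on the signal and a Gaussian prior on the background truncated at zero, is shown below; it illustrates the method named above but is not the DØ analysis code. Dividing the resulting count limit by the signal efficiency times the integrated luminosity (with their uncertainties marginalized in the same way) converts it into a cross-section limit, whose intersection with the theoretical curve then defines the excluded region.

```python
import numpy as np
from scipy.stats import poisson, norm

def bayes_upper_limit(n_obs, b_mean, b_sigma, cl=0.95, s_max=30.0, ns=3000, nb=200):
    """95% C.L. Bayesian upper limit on the mean signal count: flat signal
    prior, truncated-Gaussian background prior (its normalization cancels)."""
    s = np.linspace(0.0, s_max, ns)
    b = np.linspace(max(0.0, b_mean - 5 * b_sigma), b_mean + 5 * b_sigma, nb)
    wb = norm.pdf(b, b_mean, b_sigma)
    # Likelihood marginalized over the background nuisance parameter.
    like = np.trapz(poisson.pmf(n_obs, s[:, None] + b[None, :]) * wb, b, axis=1)
    cdf = np.cumsum(like)
    cdf /= cdf[-1]
    return s[np.searchsorted(cdf, cl)]

# Combined channels: 5 events observed over an expected background of 6.0 +- 1.3.
print(bayes_upper_limit(5, 6.0, 1.3))
```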
For most allowable values of $`m_{\text{LSP}}`$, they exclude the region with $`m_{\stackrel{~}{b}}`$ $`<83`$ GeV/$`c^2`$, assuming maximal coupling ($`\theta =0^o`$) . In conclusion, we observe five candidate events consistent with the final state $`b\overline{b}+`$$`E\text{/}_T`$. We estimate that $`6.0\pm 1.3`$ events are expected from $`t\overline{t}`$ and $`W`$ and $`Z`$ boson production, and find no excess of events that can be attributed to bottom squark production. We interpret our result as an excluded region in the ($`m_{\text{LSP}}`$,$`m_{\stackrel{~}{b}}`$) plane. This result is independent of the mixing between $`\stackrel{~}{b}_L`$ and $`\stackrel{~}{b}_R`$. We thank S.P. Martin and M. Spira for their assistance. We thank the Fermilab and collaborating institution staffs for contributions to this work and acknowledge support from the Department of Energy and National Science Foundation (USA), Commissariat à L’Energie Atomique (France), Ministry for Science and Technology and Ministry for Atomic Energy (Russia), CAPES and CNPq (Brazil), Departments of Atomic Energy and Science and Education (India), Colciencias (Colombia), CONACyT (Mexico), Ministry of Education and KOSEF (Korea), and CONICET and UBACyT (Argentina).
# Apparent finite-size effects in the dynamics of supercooled liquids ## Abstract Molecular dynamics simulations are performed for a supercooled simple liquid, varying the system size from $`N=108`$ to $`10^4`$, to examine possible finite-size effects. Although almost no systematic deviation is detected in the static pair correlation functions, it is demonstrated that the structural $`\alpha `$ relaxation in a small system becomes considerably slower than that in larger systems for temperatures below $`T_c`$, at which the size of the cooperative particle motions becomes comparable to the unit cell length of the small system. The discrepancy increases with decreasing temperature. As liquids are cooled toward the glass transition temperature $`T_g`$, a drastic slowing-down occurs in dynamical properties, such as the structural relaxation time, the diffusion constant, and the viscosity , while only small changes are detected in static properties. The goal of theoretical investigations of the glass transition is to understand the universal mechanism which gives rise to this drastic slowing-down. To this end, a great number of molecular dynamics (MD) simulations have been carried out for supercooled liquids . Several large scale simulations have been performed very recently and revealed that the dynamics in supercooled liquids are spatially heterogeneous ; rearrangements of particle configurations in glassy states occur cooperatively, involving many molecules. We have examined bond breakage processes among adjacent particle pairs and found that the spatial distribution of broken bonds in an appropriate time interval ($`\tau _\alpha \sim 0.1\tau _b`$, where $`\tau _\alpha `$ is the structural $`\alpha `$ relaxation time and $`\tau _b`$ is the average bond life time) is very analogous to the critical fluctuations in Ising spin systems. The structure factor is excellently fitted by the Ornstein-Zernike form , and the correlation length $`\xi `$ thus obtained grows rapidly with decreasing temperature. Furthermore, we demonstrated that $`\xi `$ is related to $`\tau _\alpha `$ through the dynamical scaling law, $`\tau _\alpha \propto \xi ^z`$ with $`z\simeq 4`$ in 2D and $`z\simeq 2`$ in 3D. The heterogeneity structure in our bond breakage is essentially the same as that in the local diffusivity , which leads to a systematic violation of the Stokes-Einstein law. To investigate the long-time behavior of glassy materials by MD simulation, rather small systems, typically composed of $`N=10^2`$–$`10^3`$ particles, have been used with the periodic boundary condition (PBC). Such small systems have generally been considered to be large enough to avoid finite-size effects in the case of amorphous materials, in which no long-range order exists. In fact, static properties such as the radial distribution function $`g(r)`$ or the static structure factor $`S(q)`$ of glassy materials are not seriously affected by the system size as long as reasonably large systems ($`N\gtrsim 10^2`$) are used. However, this is not always the case for dynamical properties. For example, it is known that the use of a small system with PBC gives a manifest effect in the relatively short-time behavior of the density-time correlation function. An artifact appears on a time scale of order $`L/c`$, where $`L`$ is the size of the simulation cell and $`c`$ is the sound velocity . As we already mentioned, the dynamical correlation length $`\xi `$ in supercooled liquids grows rapidly with lowering temperature. It is thus possible that some kind of finite-size effect may appear in the dynamics of supercooled liquids when $`\xi `$ becomes comparable to $`L`$, even if no such effect is detected in the static correlation functions. The main purpose of this paper is to examine this point carefully for a simple soft-sphere mixture.
It is thus possible that some kinds of finite-size effects may appear in the dynamics of supercooled liquids when $`\xi `$ becomes comparable to $`L`$ even if no such effect is detected in the static correlation functions. The main purpose of this paper is to examine carefully this point for a simple soft sphere mixture. Our model mixture is composed of two soft sphere components $`1`$ and $`2`$ having the size ratio $`\sigma _1/\sigma _2=1/1.2`$ and the mass ratio $`m_1/m_2=1/2`$ while $`ϵ_1=ϵ_2=ϵ`$. The units of length, time, and temperature are $`\sigma _1`$, $`\tau _0=(m_1\sigma _1^2/ϵ)^{1/2}`$, and $`ϵ/k_B`$ in this paper. Details of simulations are given in our earlier paper . We presently performed MD simulations only in three-dimensional space with the systems composed of $`N=N_1+N_2=108`$, $`10^3`$, and $`10^4`$ particles while the density $`\rho =N/L^3=0.8`$ and the composition $`N_1/N=0.5`$ are fixed. The corresponding system linear dimensions are $`L^{N=108}=5.13`$, $`L^{N=10^3}=10.8`$, and $`L^{N=10^4}=23.2`$. Simulations were carried out at $`T=0.772`$, $`0.473`$, $`0.352`$, $`0.306`$ and $`0.267`$ with the time step $`\mathrm{\Delta }t=0.005`$. The PBC was used in all cases. At each temperature, the systems were carefully equilibrated in the canonical condition so that no appreciable aging effect takes place. Data are then taken in the microcanonical condition. We first calculate the partial static structure factor, $$S_{ab}(q)=\frac{1}{N}\underset{j=1}{\overset{N_a}{}}\underset{k=1}{\overset{N_b}{}}\mathrm{exp}(i𝒒(𝒓_j^a𝒓_k^b)),$$ (1) to investigate whether finite-size effects are detectable in static particle configurations. Here $`𝒓_j^a`$ and $`𝒓_k^b`$ are the positions of the $`j`$-th and $`k`$-th particles in the $`a`$ and $`b`$ components ($`a,b1,2`$) and $`\mathrm{}`$ indicates the ensemble average over different initial configurations. The dimensionless wave number $`q`$ is in units of $`\sigma _1^1`$. In Fig. 1, we plotted $`S_{11}(q)`$ for $`N=108`$, $`10^3`$, and $`10^4`$ at $`T=0.473`$ (a) and $`0.267`$ (b). One can find that $`S_{11}(q)`$ for all cases excellently agrees with each other both in (a) and (b); no systematic size dependence can be detected among them. We examined also $`S_{12}`$ and $`S_{22}`$ and confirmed the same tendency. Our results indicate that finite-size effects are very small or almost negligible in static pair correlations for $`N10^2`$, as that is generally believed. Let us next consider finite-size effects in dynamical properties. The structure relaxation in glassy materials can be measured by calculating the coherent or the incoherent intermediate scattering functions, $`F(q,t)`$ or $`F_s(q,t)`$. The decay profiles of those two functions tend to coincide at the first peak wave number $`q_m`$ in $`F(q,0)`$. This has been confirmed for the present soft-sphere mixture and also for a Lennard-Jones binary mixture . Since $`F_s(q,t)`$ can be more accurately determined via MD simulation, we here calculate the incoherent scattering function for the component 1, $$F_s(q,t)=\frac{1}{N_1}\underset{j=1}{\overset{N_1}{}}\mathrm{exp}[i𝒒\mathrm{\Delta }𝒓_j^1(t)],$$ (2) where $`\mathrm{\Delta }𝒓_j^1(t)=𝒓_j^1(t)𝒓_j^1(0)`$ is the displacement vector. Although $`F_s(q,t)`$ decays monotonically in normal liquid states, it exhibits multi-step relaxations in highly supercooled states. This is due to the fact that at lower temperatures the particle motions are highly jammed and thus trapped considerably in effective cages formed by their neighbors. 
We then defined the $`\alpha `$ relaxation time $`\tau _\alpha `$, which corresponds to a characteristic life time of the effective cage, by $`F_s(q,\tau _\alpha )=e^{-1}`$ at $`q=2\pi `$ for several temperatures. Fig. 2 shows the decay profiles of $`F_s(q=2\pi ,t)`$ obtained for $`N=108`$, $`10^3`$, and $`10^4`$ at $`T=0.473`$ and $`0.267`$. At $`T=0.473`$, we see that the two curves from $`N=10^3`$ and $`10^4`$ entirely coincide, and the one from $`N=108`$ is also close to them. The relaxation times thus obtained are $`\tau _\alpha ^{N=108}\simeq 3.5`$ and $`\tau _\alpha ^{N=10^3}=\tau _\alpha ^{N=10^4}\simeq 2.3`$. However, the situation is different at $`T=0.267`$, where $`F_s(q,t)`$ exhibits a two-step relaxation. The faster and the slower parts of the decay are called the fast-$`\beta `$ (thermal) and the $`\alpha `$ relaxations, respectively. We note that the decay profiles for the three systems differ significantly in the $`\alpha `$ regime ($`t\gtrsim 10^2`$), whereas they agree well in the fast-$`\beta `$ regime ($`t\lesssim 1`$). Here we determined $`\tau _\alpha ^{N=108}\simeq 11000`$, $`\tau _\alpha ^{N=10^3}\simeq 6500`$, and $`\tau _\alpha ^{N=10^4}\simeq 2000`$ at $`T=0.267`$. Fig. 3 shows the temperature dependence of $`\tau _\alpha ^{N=108}`$, $`\tau _\alpha ^{N=10^3}`$ and $`\tau _\alpha ^{N=10^4}`$. At the highest temperature, $`T=0.772`$, $`\tau _\alpha ^{N=108}`$, $`\tau _\alpha ^{N=10^3}`$ and $`\tau _\alpha ^{N=10^4}`$ are exactly equal. However, $`\tau _\alpha ^{N=108}`$ begins to deviate from the others around $`T=0.473`$, at which $`\xi \simeq 5`$ is comparable to $`L^{N=108}=5.13`$. The deviation increases with further decreasing temperature, and $`\tau _\alpha ^{N=108}`$ becomes almost one order of magnitude larger than $`\tau _\alpha ^{N=10^4}`$ for $`T\lesssim 0.352`$. Furthermore, $`\tau _\alpha ^{N=10^3}`$ begins to deviate from $`\tau _\alpha ^{N=10^4}`$ around $`T=0.306`$, at which $`L^{N=10^3}<\xi <L^{N=10^4}`$. We thus suppose that the present finite-size effects are attributable to the suppression of cooperative particle motions due to insufficient system size. The structural relaxation time of smaller systems thus tends to show a stronger (super-Arrhenius) temperature dependence as a result of the finite-size effects. Remembering the fact that the static structure factors are almost identical among those three systems at all temperatures, the origin of this effect may be purely kinetic, or higher-order correlations in particle configurations may be relevant to it. In their recent paper , Horbach et al. found similar finite-size effects in a model silica glass, which is known as a typical strong glass former, in addition to the present soft sphere mixture, which is usually classified as a fragile glass former. To understand what happens on a microscopic scale, we next visualize individual particle motions in the $`N=10^4`$ system at $`T=0.267`$. First, we select mobile particles by the condition $`|\mathrm{\Delta }𝒓_j^a(t)|>l_c^a`$ in a time interval $`[t_0,t_0+t]`$, where $`t=0.125\tau _\alpha =250`$, and $`l_c^a`$ is defined separately for the components $`a\in \{1,2\}`$ such that the sum of $`\mathrm{\Delta }𝒓_j^a(t)^2`$ over the mobile particles covers $`66\%`$ of the total sum $`\sum _j^{N_a}\mathrm{\Delta }𝒓_j^a(t)^2`$. Then we define clusters of the mobile particles by connecting $`i`$ and $`j`$ if $`|𝒓_i(t)-𝒓_j(0)|<0.3(\sigma _i+\sigma _j)`$ or $`|𝒓_i(0)-𝒓_j(t)|<0.3(\sigma _i+\sigma _j)`$, similar to Donati et al. . In Fig. 4, we show the spatial distribution of the clusters having size $`n\ge 5`$; they are all chain-like and have large scale correlations.
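The cluster construction can be sketched as follows, assuming a cubic box of side $`L`$ with the minimum-image convention; the $`O(N^2)`$ pair loop is for clarity only.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def mobile_clusters(r0, rt, sigma, mobile, L):
    """Cluster mobile particles: i and j are connected if one has moved into
    the other's initial position, |r_i(t)-r_j(0)| < 0.3*(sigma_i+sigma_j)
    (or with i and j exchanged), using minimum-image distances."""
    idx = np.flatnonzero(mobile)
    n = len(idx)
    rows, cols = [], []
    for a in range(n):
        for b in range(a + 1, n):
            i, j = idx[a], idx[b]
            cut = 0.3 * (sigma[i] + sigma[j])
            for d in (rt[i] - r0[j], r0[i] - rt[j]):
                d = d - L * np.rint(d / L)          # minimum image
                if np.linalg.norm(d) < cut:
                    rows.append(a); cols.append(b)
                    break
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    n_clusters, labels = connected_components(adj, directed=False)
    # Returns clusters as arrays of original particle indices (singletons
    # included; filter by size, e.g. n >= 5, for plots like Fig. 4).
    return [idx[labels == c] for c in range(n_clusters)]
```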
Although only $`5`$% of the total particles are shown in Fig. 4, the sum of $`\mathrm{\Delta }𝒓_j(t)^2`$ covers approximately $`40`$% of the total $`_{i=1}^N\mathrm{\Delta }𝒓_j(t)^2`$. This clearly indicates that the cooperative motions become dominant in glassy states. To investigate finite-size effects in cooperative motions quantitatively, we here introduce the distribution function, $$P(n)=\underset{i=1}{\overset{N}{}}{}_{}{}^{}\mathrm{\Delta }𝒓_j(t)^2\delta (nn_i)/\underset{i=1}{\overset{N}{}}{}_{}{}^{}\mathrm{\Delta }𝒓_j(t)^2,$$ (3) where the sum runs over mobile particles only. $`n_i`$ is the size of the cluster in which the mobile particle $`i`$ belongs, and thus $`\delta (nn_i)`$ is $`1`$ if $`i`$ is a member of the cluster having the size $`n`$ and $`0`$ if not. The physical meaning of $`P(n)`$ is as follows; clusters having the size $`n`$ contribute $`P(n)`$ to the total squared displacements of the mobile particles. In Fig. 5, we show $`P(n)`$ for $`N=108`$, $`10^3`$, and $`10^4`$ at $`T=0.267`$ at which $`\xi 40`$ obtained for $`N=10^4`$ is even larger than $`L^{N=10^4}=23.2`$. We found that the cooperative motions in $`N=108`$ system are strongly suppressed. By comparing $`N=10^3`$ and $`10^4`$, it is found that larger scale cooperative motions ($`n>10`$) are considerably suppressed also in $`N=10^3`$ system. Infinitely large clusters which percolate the system though the PBC have never been found in all cases. The characteristic cluster size $`\overline{n}=_{i=1}^{\mathrm{}}nP(n)`$ thus obtained is 3.38, 6.11, and 7.73 for $`N=108`$, $`10^3`$, and $`10^4`$, respectively. In the framework of conventional liquid theories , changes in static particle configurations upon cooling lead to a drastic slowing-down toward the grass transition. The mode coupling theory (MCT) is the most successful self-consistent approach within this framework. MCT describes onset of glassy slowing down (or slow structural relaxations) in the density-time correlation functions. In the original MCT, however, a sharp Ergodic/Non-ergodic transition is predicted at a temperature $`T_0`$ which is considerably above $`T_g`$. Although such a tendency has been found in colloidal systems in which thermal activation processes are negligible, it has never been observed in real glassy materials. It is thus believed that the MCT has some difficulties for describing the true dynamics of supercooled liquids apparently below $`T_0`$. The main problem is the fact that the original MCT do not take into account the hopping motions of particles, which must be cooperative and thus long ranged as is seen in recent MD simulations . Unfortunately the problem has not yet been overcome in fully self-consistent way because efforts for including thermal activations make the theory more or less ad hoc. It is a interesting fact that the behavior of structural relaxations in our smallest system, in which cooperative hopping motions are highly suppressed, becomes somehow closer to the original MCT prediction. It is worth mentioning several experimental attempts to find the evidence of the dynamical heterogeneity in glassy materials. One of the most interesting and useful approaches are the recent experiments on glass-forming thin films. The thickness $`d`$ dependence of film properties is the main interest in these studies . 
The motivations of those studies are quite similar to the present study; they aimed to control the size $`\xi `$ of the cooperative motions by changing $`d`$ and found that the relaxation time and $`T_g`$ considerably depend on $`d`$. They considered that the cooperative particle motions in the direction normal to the film may be truncated near the interface, and this effect become dominant when $`\xi d`$. Thus the system size restriction can enhance the particle motions. Note that this seems to contradict to the finite-size effect in the present MD simulations in which the size restriction suppresses the cooperative particle motions . The mechanism of this discrepancy is still an open question; we naively speculate that the situation is much more complicated in polymer films than in MD simulations. The system size restriction occurs only in one direction normal to the film and other two in-film directions are free in thin films, whereas all directions are equally restricted in the present simulations. We believe that investigating the microscopic relaxation mechanisms in glass-forming (both simple and polymeric) thin films are definitely important. In summary, we have examined system-size effects in the dynamics of supercooled liquids by MD simulation. We found significant finite-size effects in the structural relaxation at lower temperatures, whereas no such effect is detectable in static pair correlation functions. The cooperative particle motions, which leads to the $`\alpha `$ relaxation in glassy states, are strongly suppressed in smaller systems for temperatures lower than $`T_c`$ at which $`\xi `$ becomes comparable to the system size. The present finite-size effects are regarded as a natural consequence of the dynamical heterogeneity appearing in supercooled liquids. We point out that finite-size effects are significant in the dynamics of highly supercooled liquids, and thus special attention must be payed to investigate true relaxation dynamics in computer simulations particularly in the $`\alpha `$ regime. We thank Profs. T. Munakata and A. Onuki for helpful discussions. This work is supported by a Grant in Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture of Japan. Calculations have been carried out at the Supercomputer Laboratory, Institute for Chemical Research, Kyoto University and Human Genome Center, Institute of Medical Science, University of Tokyo.
no-problem/9903/math9903153.html
ar5iv
text
# Three-player impartial games ## 0 Introduction Let us begin with a very specific problem: Assume $`G`$ is an impartial (positional) game played by three people who alternate moves in cyclic fashion (Natalie, Oliver, Percival, Natalie, Oliver, Percival, …), under the convention that the player who makes the last move wins. Let $`H`$ be another such game. Suppose that the second player, Oliver, has a winning strategy for $`G`$. Suppose also that Oliver has a winning strategy for $`H`$. Is it possible for Oliver to have a winning strategy for the disjunctive sum $`G+H`$ as well? Recall that an impartial positional game is specified by (i) an initial position, (ii) the set of all positions that can arise during play, and (iii) the set of all legal moves from one position to another. The winner is the last player to make a move. To avoid the possibility of a game going on forever, we require that from no position may there be an infinite chain of legal moves. The disjunctive sum of two such games $`G,H`$ is the game in which a legal move consists of making a move in $`G`$ (leaving $`H`$ alone) or making a move in $`H`$ (leaving $`G`$ alone). Readers unfamiliar with the theory of two-player impartial games should consult or . It is important to notice that in a three-player game, it is possible that none of the players has a winning strategy. The simplest example is the Nim game that starts from the position $`1+2`$, where 1 and 2 denote Nim-heaps of size one and two respectively. As usual, a legal move consists of taking a number of counters from a single heap. In this example, the first player has no winning move, but his actions determine whether the second or third player will win the game. None of the players has a winning strategy. That is, any two players can cooperate to prevent the remaining player from winning. It is in a player’s interest to join such a coalition of size two if he can count on his partner to share the prize with him – unless the third player counters by offering an even bigger share of the prize. This kind of situation is well known in the theory of “economic” (as opposed to positional) games. In such games, however, play is usually simultaneous rather than sequential. Bob Li has worked out a theory of multi-player positional games by decreeing that a player’s winnings depend on how recently he has moved when the game ends (the last player to move wins the most, the player who moved before him wins the next most, and so on), and by assuming that each player will play rationally so as to get the highest winnings possible. Li’s theory, when applied to games like Nim, leads to quite pretty results, and this is perhaps sufficient justification for it; but it is worth pointing out that, to the extent that game theory is supposed to be applicable to the actual playing of games, it is a bit odd to assume that one’s adversaries are going to play perfectly. Indeed, the only kind of adversaries a sensible person would play with, at least when money is involved, are those who do not know the winning strategy. Only in the case of two-player games is it the case that a player has a winning strategy against an arbitrary adversary if and only if he has a winning strategy against a perfectly rational adversary. Phil Straffin has his own approach to three-player games. He adopts a policy (“McCarthy’s revenge rule”) governing how a player should act in a situation where he himself cannot win but where he can choose which of his opponents will win. 
Straffin analyzes Nim under such a revenge rule, and his results are satisfying if taken on their own terms, but the approach is open to the same practical objections as Li’s. Specifically, if a player’s winning strategy depends on the assumption that his adversaries will be able to recognize when they can’t win, then the player’s strategy is guaranteed to work only when his opponents can see all the way to the leaves of the game tree. In this case, at least one of them (and perhaps each of them) believes he can’t win; so why is he playing? The proper response to such objections, from the point of view of someone who wishes to understand real-world games, is that theories like Li’s and Straffin’s are prototypes of more sophisticated theories, not yet developed, that take into account the fact that players of real-life games are partly rational and partly emotional creatures, capable of such things as stupidity and duplicity. It would be good to have a framework into which the theories of Li and Straffin, along with three-player game-theories of the future, can be fitted. This neutral framework would make no special assumptions about how the players behave. Here, we develop such a theory. It is a theory designed to answer the single question “Can I win?,” asked by a single player playing against two adversaries of unknown characteristics. Not surprisingly, the typical answer given by the theory is “No”; in most positions, any two players can gang up on the third. But it turns out that there is a great deal to be said about those games in which one of the players does have a winning strategy. In addition to the coarse classification of three-player games according to who (if anyone) has the winning strategy, one can also carry out a fine classification of games analogous to, but much messier than, the classification of two-player games according to Grundy-value. The beginnings of such a classification permit one to answer the riddle with which this article opened; the later stages lead to many interesting complications which have so far resisted all attempts at comprehensive analysis. ## 1 Notation and Preliminaries Games will be denoted by the capital letters $`G`$, $`H`$, $`X`$, and $`Y`$. As in the two-player theory, we can assume that every position carries along with it the rules of play to be applied, so that each game may be identified with its initial position. The game $`G^{}`$ is an option of $`G`$ if it is legal to move from $`G`$ to $`G^{}`$. To build up all the finite games, we start from the null-game $`0`$ (the unique game with no options) and recursively define $`G=\{G_1^{},G_2^{},\mathrm{}\}`$ as the game with options $`G_1^{},G_2^{},\mathrm{}`$. The game $`\{0\}`$ will be denoted by 1, the game $`\{0,1\}`$ will be denoted by 2, and so on. (It should always be clear from context whether a given numeral denotes a number or a Nim game.) We recursively define the relation of identity by the rule that $`G`$ and $`H`$ are identical if and only if for every option $`G^{}`$ of $`G`$ there exists an option $`H^{}`$ of $`H`$ identical to it, and vice versa. We define (disjunctive) addition, represented by $`+`$, by the rule that $`G+H`$ is the game whose options are precisely the games of the form $`G^{}+H`$ and $`G+H^{}`$. It is easy to show that identity is an equivalence relation that respects the “bracketing” and addition operations, that addition is associative and commutative, and that $`0`$ is an additive identity. 
The following abbreviations will prove convenient: $$\begin{array}{ccc}\hfill GH& \text{means}\hfill & G+H\hfill \\ \hfill G^n& \text{means}\hfill & G+G+\mathrm{}+G\text{(}n\text{ times)}\hfill \\ \hfill m_n& \text{means}\hfill & \{\{\mathrm{}\{m\}\mathrm{}\}\}\text{(}n\text{ layers deep)}\hfill \end{array}$$ Thus, $`\{12\}^34_5`$ denotes $$\{1+2\}+\{1+2\}+\{1+2\}+\{\{\{\{\{4\}\}\}\}\}.$$ (We’ll never need to talk about Nim-heaps of size $`>9`$, so our juxtaposition convention won’t cause trouble.) Note that for all $`G`$, the games $`G0`$, $`G^1`$, $`G_0`$, and $`G`$ are identical. Relative to any non-initial position in the course of play, one of the players has just moved (the Previous player) and one is about to move (the Next player); the remaining player is the Other player. At the start of the game, players Next, Other, and Previous correspond to the first, second, and third players (even though, strictly speaking, there was no “previous” move). We call $`G`$ a Next-game ($`𝒩`$-game) if there is a winning strategy for Next, and we let $`𝒩`$ be the set of $`𝒩`$-games; $`𝒩`$ is the type of $`G`$, and $`G`$ belongs to $`𝒩`$. We define $`𝒪`$-games and $`𝒫`$-games in a similar way. If none of the players has a winning strategy, we say that $`G`$ is a Queer game ($`𝒬`$-game). In a slight abuse of notation, I will often use “$`=`$” to mean “belongs to”, and use the letters $`N,O,P,Q`$ to stand for unknown games belonging to these respective types. Thus I will write $`1=N`$, $`11=O`$, $`111=P`$, etc.; and the problem posed in the Introduction can be formulated succinctly as: solve $`O+O=O`$ or prove that no solution exists. (At this point I invite the reader to tackle $`Q+Q=O`$. There is a simple and elegant solution.) The following four rules provide a recursive method for classifying a game: * $`G`$ is an $`𝒩`$-game exactly if it has some $`𝒫`$-game as an option. * $`G`$ is an $`𝒪`$-game exactly if all of its options are $`𝒩`$-games, and it has at least one option (this proviso prevents us from mistakenly classifying $`0`$ as an $`𝒪`$-game). * $`G`$ is a $`𝒫`$-game exactly if all of its options are $`𝒪`$-games. * $`G`$ is a $`𝒬`$-game exactly if none of the above conditions is satisfied. Using these rules, it is possible to analyze a game completely by classifying all the positions in its game-tree, from leaves to root. ## 2 Some Sample Games Let us first establish the types of the simpler Nim games. It’s easy to see that $`0`$ $`=P,`$ $`1`$ $`=\{0\}=\{P\}=N,`$ $`11`$ $`=\{1\}=\{N\}=O,`$ $`111`$ $`=\{11\}=\{O\}=P,`$ and so on; in general, the type of $`1^n`$ is $`𝒫`$, $`𝒩`$, or $`𝒪`$ according as the residue of $`n`$ mod 3 is 0, 1, or 2. Also $`2`$ $`=N,`$ $`3`$ $`=N,`$ and so on, because in each case Next can win by taking the whole heap. $`12`$ $`=\{1,2,11\}=\{N,N,O\}=Q,`$ $`112`$ $`=\{11,12,111\}=\{O,Q,P\}=N,`$ $`1112`$ $`=\{111,112,1111\}=\{P,N,N\}=N,`$ $`11112`$ $`=\{1111,1112,11111\}=\{N,N,O\}=Q,`$ and so on; in general, the type of $`1^n2`$ is $`𝒩`$, $`𝒬`$, or $`𝒩`$ according as the residue of $`n`$ mod 3 is 0, 1, or 2. The winning strategy for these $`𝒩`$-games is simple: reduce the game to one of the $`𝒫`$-positions $`1^{3k}`$. $`1+1=11`$ is a solution of the equation $`N+N=O`$. Does $`G=N`$ imply that $`G+G=O`$ in general? We can easily see that the answer is “No”: $$2+2=22=\{12,2\}=\{Q,N\}=Q.$$ ($`12`$ is identical to $`21`$, so they can be treated as a single option.) Here are some more calculations which will be useful later. 
$`\{2\}`$ $`=\{N\}=O`$ $`\{\{2\}\}`$ $`=\{O\}=P`$ $`\{1,11\}`$ $`=\{N,O\}=Q`$ $`\{2,11\}`$ $`=\{N,O\}=Q`$ ## 3 Adding Games The type of $`G+H`$ is not in general determined by the types of $`G`$ and $`H`$. (For example, 1 and 2 are both of type $`𝒩`$, but $`1+1=O`$ while $`2+2=Q`$.) That is, addition does not respect the relation “belongs to the same type as”. To remedy this situation we define equivalence ($``$) by the condition that $`GH`$ if and only if for all games $`X`$, $`G+X`$ and $`H+X`$ belong to the same type. It is easy to show that “equivalence” is an equivalence relation, that it respects bracketing and addition, and that if $`G^{}H^{}`$ then $`\{G^{},H^{},\mathrm{}\}\{H^{},\mathrm{}\}`$ (that is, equivalence options of a game may be conflated). We are now in a position to undertake the main task of this section: determining the addition table. Recall that in the two-player theory, there are only two types ($`𝒩`$ and $`𝒫`$) and their addition table is as shown in Table 1. Here, the entry $`PN`$ denotes the fact that the sum of two $`𝒩`$-games can be either a $`𝒫`$-game or an $`𝒩`$-game. The analogous addition table for three-player games is given by Table 2. Notice that in one particular case (namely $`G=P`$ and $`H=Q`$, or vice versa), knowing the types of $`G`$ and $`H`$ does tell one which type $`G+H`$ belongs to, namely $`Q`$. A corollary of this is that $`P+P+\mathrm{}+P+Q=Q`$. To prove that Table 1 applies, one simply finds solutions of the allowed “equations” $`P+P=P`$, $`P+N=N`$ (from which $`N+P=N`$ follows), $`N+N=P`$, and $`N+N=N`$, and proves that the forbidden equations $`P+P=N`$ and $`P+N=P`$ have no solutions. To demonstrate the validity of Table 2, we must find solutions to twenty-two such equations, and prove that the remaining eighteen have no solutions. Table 3 shows the twenty-two satisfiable equations and their solutions. And now, the proofs of impossibility for the eighteen impossible cases. ###### Claim 1. None of the following is possible. $`O+P`$ $`=N`$ (1) $`N+P`$ $`=P`$ (2) $`O+O`$ $`=P`$ (3) $`P+P`$ $`=O`$ (4) $`O+N`$ $`=O`$ (5) ###### Proof 3.1. By (joint) infinite descent. Here, as in subsequent proofs, the infinite-descent “boilerplating” is omitted. Note that none of the hypothetical $`𝒫`$-games in equations (1)-(4) can be the $`0`$-game, so all of these games $`X,Y`$ have options. Suppose (1) holds; say $`X=O`$, $`Y=P`$, $`X+Y=N`$. Some option $`X^{}+Y`$ or $`X+Y^{}`$ must be a $`𝒫`$-game. But then we have either $`N+P=P`$ (every option $`X^{}`$ must be an $`𝒩`$-game), which is (2), or $`O+O=P`$ (every option $`Y^{}`$ must be an $`𝒪`$-game), which is (3). Suppose (2) holds; say $`X=N`$, $`Y=P`$, $`X+Y=P`$. Then there exists $`X^{}=P`$, which must satisfy $`X^{}+Y=P+P=O`$ (equation (4)). Suppose (3) holds; say $`X=O`$, $`Y=O`$, $`X+Y=P`$. Then there exists $`Y^{}=N`$, which must satisfy $`X+Y^{}=O+N=O`$ (equation (5)). Suppose (4) holds; say $`X=P`$, $`Y=P`$, $`X+Y=O`$. Then there exists $`X^{}=O`$, which must satisfy $`X^{}+Y=O+P=N`$ (equation (1)). Finally, suppose (5) holds; say $`X=O`$, $`Y=N`$, $`X+Y=O`$. Then there exists $`Y^{}=P`$, which must satisfy $`X+Y^{}=O+P=N`$ (equation (1)). ###### Claim 1. None of the following is possible. $`P+P`$ $`=N`$ (6) $`O+P`$ $`=P`$ (7) $`N+P`$ $`=O`$ (8) ###### Proof 3.2. By infinite descent. A solution to (6) yields an (earlier-created) solution to (7), which yields a solution to (8), which yields a solution to (6). ###### Claim 2. It is impossible that $$N+N=P$$ (9) ###### Proof 3.3. 
By contradiction. A solution to (9) would yield a solution to (8). ###### Claim 3. None of the following is possible. $`Q+P`$ $`=N`$ (10) $`Q+P`$ $`=P`$ (11) $`Q+O`$ $`=P`$ (12) $`Q+P`$ $`=O`$ (13) $`Q+N`$ $`=O`$ (14) ###### Proof 3.4. By infinite descent (making use of earlier results as well). Suppose (10) holds with $`X,Y`$. Some option $`X^{}+Y`$ or $`X+Y^{}`$ must be a $`𝒫`$-game. In the former event, we have $`X^{}P`$ (since $`X=Q`$), so that either $`N+P=P`$ (equation (2)), $`O+P=P`$ (equation (7)), or $`Q+P=P`$ (equation (11)); in the latter event we have $`Q+O=P`$ (equation (12)). Suppose (11) holds with $`X,Y`$. Since $`X=Q`$, it has an option $`X^{}`$ of type $`𝒩`$ or type $`𝒬`$ (for if all options of $`X`$ were $`𝒪`$-games and $`𝒫`$-games, $`X`$ would be of type $`𝒫`$ or $`𝒩`$). If $`X^{}=N`$, then we have $`X^{}+Y=N+P=O`$ (equation (8)), and if $`X^{}=Q`$, then we have $`X^{}+Y=Q+P=O`$ (equation (13)). Suppose (12) holds with $`X,Y`$. Then $`X+Y^{}=Q+N=O`$ (equation (14)). Suppose (13) holds with $`X,Y`$. Since $`X=Q`$, it has an option $`X^{}`$ of type $`𝒪`$ or of type $`𝒬`$ (for if all options of $`X`$ were $`𝒩`$-games and $`𝒫`$-games, $`X`$ would be of type $`𝒪`$ or $`𝒩`$). $`X^{}=O`$ yields $`X^{}+Y=O+P=N`$ (equation (1)), and $`X^{}=Q`$ yields $`X^{}+Y=Q+P=N`$ (equation (10)). Finally, suppose (14) holds with $`X,Y`$. Then there exists $`Y^{}=P`$, which must satisfy $`X+Y^{}=Q+P=N`$ (equation (10)). ###### Claim 4. It is impossible that $$Q+N=P$$ (15) ###### Proof 3.5. By contradiction. A solution to (15) would yield a solution to (13). ###### Claim 5. Neither of the following is possible: $`Q+Q`$ $`=N`$ (16) $`Q+Q`$ $`=P`$ (17) ###### Proof 3.6. By infinite descent. Suppose (16) holds with $`X,Y`$. Then some option of $`X+Y`$ must be a $`𝒫`$-game; without loss of generality, we assume $`X+Y^{}=P`$. But $`X=Q`$, and we have already ruled out $`Q+P=P`$ (equation (11)), $`Q+N=P`$ (equation (5)), and $`Q+O=P`$ (equation (12)), so we have $`X+Y^{}=Q+Q=P`$ (equation (17)). Suppose (17) holds with $`X,Y`$. $`X`$ must have an $`𝒩`$-option or $`𝒬`$-option $`X^{}`$, but if $`X^{}=N`$ then $`X^{}+Y=N+Q=O`$ (equation (14)), which can’t happen; so $`X^{}=Q`$. Similarly, $`Y`$ has a $`𝒬`$-option $`Y^{}`$. $`X^{}+Y=O`$, so $`X^{}+Y^{}=Q+Q=N`$ (equation (16)). (Note that the second half of this proof requires us to look two moves ahead, rather than just one move ahead as in the preceding proofs.) The remaining case is surprisingly hard to dispose of; the proof requires us to look five moves ahead. ###### Claim 6. It is impossible that $$O+O=O$$ (18) ###### Proof 3.7. By infinite descent. Suppose (18) holds with $`X,Y`$. For all $`X^{}`$ we have $`X^{}+Y=N`$, so that $`X^{}+Y`$ must have some $`𝒫`$-option; but this $`𝒫`$-option cannot be of the form $`X^{}+Y^{}`$, since $`N+NP`$ (equation (9)). Hence there must exist an option $`X^{\prime \prime }`$ of $`X^{}`$ such that $`X^{\prime \prime }+Y=P`$. This implies that $`X^{\prime \prime }=N`$, since none of the cases $`O+O=P`$ (equation (3)), $`P+O=P`$ (equation (7)), $`Q+O=P`$ (equation (12)) can occur. Similarly, every $`Y^{}`$ has an option $`Y^{\prime \prime }`$ such that $`X+Y^{\prime \prime }=P`$, $`Y^{\prime \prime }=N`$. Since $`X^{\prime \prime }+Y`$ is a $`𝒫`$-game, $`X^{\prime \prime }+Y^{}`$ and $`X^{}+Y^{\prime \prime }`$ are $`𝒪`$-games and $`X^{\prime \prime }+Y^{\prime \prime }`$ is an $`𝒩`$-game. 
One of the options of $`X^{\prime \prime }+Y^{\prime \prime }`$ must be a $`𝒫`$-game; without loss of generality, say $`X^{\prime \prime \prime }+Y^{\prime \prime }=P`$. Since $`Y^{\prime \prime }=N`$ and since none of the cases $`N+N=P`$ (equation (9)), $`P+N=P`$ (equation (2)), $`Q+N=P`$ (equation (15)) can occur, $`X^{\prime \prime \prime }`$ must be an $`𝒪`$-game. But recall that $`X^{\prime \prime }+Y`$ is a $`𝒫`$-game, so that its option $`X^{\prime \prime \prime }+Y`$ is an $`𝒪`$-game. This gives us $`X^{\prime \prime \prime }+Y=O+O=O`$, which is an earlier-created solution to (18). The proof of Claim 18 completes the proof of the validity of Table 2. Observe that this final clinching claim, which answers the article’s opening riddle in the negative, depends on five of the preceding six claims. Our straightforward question thus seems to lack a straightforward solution. In particular, one would like to know of a winning strategy for the Natalie-and-Percival coalition in the game $`G+H`$ that makes use of Oliver’s winning strategies for $`G`$ and $`H`$. Indeed, it would be desirable to have strategic ways of understanding all the facts in this section. At this point it is a good idea to switch to a notation that is more mnemonically helpful than $`N`$, $`O`$, and $`P`$, vis-à-vis addition. Let $`\mathrm{𝟎}`$, $`\mathrm{𝟏}`$, and $`\mathrm{𝟐}`$ denote the Nim-positions $`0`$, $`1`$, $`11`$, respectively. Also, let $`\mathbf{}`$ be the Nim-position $`22`$. (Actually, we’ll want these symbols to represent the equivalence classes of these respective games, but that distinction is unimportant right now.) We will say that two games $`G`$, $`H`$ are similar if they have the same type; in symbols, $`GH`$. Every game is thus similar to exactly one of $`\mathrm{𝟎}`$, $`\mathrm{𝟏}`$, $`\mathrm{𝟐}`$, and $`\mathbf{}`$. We can thus use these four symbols to classify our games by type; for instance, instead of writing $`G=N`$, we can write $`G\mathrm{𝟏}`$. Here is the rule for recursively determining the type of a game in terms of the types of its options, restated in the new notation: * $`G`$ is of type $`\mathrm{𝟏}`$ exactly if it has some option of type $`\mathrm{𝟎}`$. * $`G`$ is of type $`\mathrm{𝟐}`$ exactly if all of its options are of type $`\mathrm{𝟏}`$, and it has at least one option. * $`G`$ is of type $`\mathrm{𝟎}`$ exactly if all of its options are of type $`\mathrm{𝟐}`$. * $`G`$ is of type $`\mathbf{}`$ exactly if none of the above conditions is satisfied. Here is the new addition table for 3-player game types; it resembles a faulty version of the modulo 3 addition table. It is also worthwhile to present the “subtraction table” as an object of study in its own right. To this end define $`\mathrm{𝟑}=111`$ as an alternative to $`\mathrm{𝟎}`$. The minuend is indicated by the row and the subtrahend by the column. Note that subtraction is not a true operation on games; rather, the assertion “$`\mathrm{𝟏}\mathrm{𝟐}`$ is $`\mathrm{𝟏𝟐}\mathbf{}`$” means that if $`G,H`$ are games such that $`G+H\mathrm{𝟏}`$ and $`G\mathrm{𝟐}`$ then $`H`$ $`\mathrm{𝟏}`$, $`\mathrm{𝟐}`$, or $`\mathbf{}`$. The six entries in the upper left corner of the subtraction table (the only entries that are single types) correspond to assertions that can be proved by joint induction without any reference to earlier tables. 
In fact, a good alternative way to prove that addition satisfies Table 4 would be to prove that addition satisfies the properties implied by the six upper-left entries in Table 5 (by joint induction) and then to prove three extra claims: (i) if $`G\mathrm{𝟐}`$ and $`H\mathrm{𝟐}`$ then $`G+H\sim ̸\mathrm{𝟐}`$; (ii) if $`G\mathbf{}`$ and $`H\mathbf{}`$ then $`G+H\sim ̸\mathrm{𝟎}`$; and (iii) if $`G\mathbf{}`$ and $`H\mathbf{}`$ then $`G+H\sim ̸\mathrm{𝟏}`$. ## 4 Adding Games to Themselves Another sort of question related to addition concerns the disjunctive sum of a game with itself. Recall that in two-player game theory, a strategy-stealing argument can be used to show that the sum of a game of type $`𝒩`$ with itself must be of type $`𝒫`$ (even though a sum of two distinct games of type $`𝒩`$ can be of either type $`𝒫`$ or type $`𝒩`$). We seek a similar understanding of what happens when we add a three-player game to itself. Table 6 shows the possible types $`G+G`$ can have in our three-player theory, given the type of $`G`$. To verify that all the possibilities listed here can occur, one can simply look at the examples given at the beginning of Section 3. To verify that none of the omitted possibilities can occur, it almost suffices to consult Table 4. The only possibility that is not ruled out by the addition table is that there might be a game $`X`$ with $`X\mathrm{𝟏}`$, $`X+X\mathrm{𝟏}`$. Suppose $`X`$ were such a game. Then $`X`$ would have to have a $`𝒫`$-option $`X_1^{}`$ (now we call it a $`\mathrm{𝟎}`$-option) along with another option $`X_2^{}`$ such that $`X+X_2^{}\mathrm{𝟎}`$. This implies that $`X_1^{}+X_2^{}\mathrm{𝟐}`$ and $`X_2^{}+X_2^{}\mathrm{𝟐}`$. Since $`X_1^{}\mathrm{𝟎}`$, the condition $`X_1^{}+X_2^{}\mathrm{𝟐}`$ implies (by way of Table 4) that $`X_2^{}\mathrm{𝟐}`$. But $`X_2^{}+X_2^{}\mathrm{𝟐}`$ implies (by way of Table 4) that $`X_2^{}\mathrm{𝟏}\text{ or}\mathbf{}`$. This contradiction shows that no such game $`X`$ exists, and completes the verification of Table 6. In the same spirit, we present a trebling table (Table 7), showing the possible types $`G+G+G`$ can have given the type of $`G`$. To prove that all the possibilities listed in the first three rows can actually occur, one need only check that $`0+0+0\mathrm{𝟎}`$, $`\{\{2\}\}+\{\{2\}\}+\{\{2\}\}\mathbf{}`$, $`1+1+1\mathrm{𝟎}`$, $`2+2+2\mathbf{}`$, $`11+11+11\mathrm{𝟎}`$, and $`\{2\}+\{2\}+\{2\}\mathbf{}`$. To prove that the nine cases not listed cannot occur takes more work. Four of the cases are eliminated by the observation that $`G+G+G`$ can never be of type $`\mathrm{𝟏}`$ (the second and third players can always make the Next player lose by using a copy-cat strategy). Tables 3 and 5 allow one to eliminate three more cases. The next two claims take care of the final two cases. ###### Claim 7. If $`G\mathbf{}`$, then $`G+G+G\sim ̸\mathrm{𝟐}`$. ###### Proof 4.1. Suppose $`X\mathbf{}`$ with $`X+X+X\mathrm{𝟐}`$. Let $`X^\alpha `$ be an option of $`X`$. Since $`X^\alpha +X+X\mathrm{𝟏}`$, $`X^\alpha +X+X`$ must have a $`\mathrm{𝟎}`$-option of the form $`X^\alpha +X^\beta +X`$ (for $`X^\beta `$ some option of $`X`$) or of the form $`X^{\alpha \gamma }+X+X`$ (for $`X^{\alpha \gamma }`$ some option of $`X^\alpha `$). In either case, we find that the $`\mathbf{}`$-game $`X`$, when added to some other game ($`X^\alpha +X^\beta `$ or $`X^{\alpha \gamma }+X`$), yields a game of type $`\mathrm{𝟎}`$; this is impossible, by Table 4. ###### Claim 8. If $`G\mathrm{𝟐}`$, then $`G+G+G\sim ̸\mathrm{𝟐}`$. ###### Proof 4.2. 
Suppose $`X\mathrm{𝟐}`$ with $`X+X+X\mathrm{𝟐}`$. Notice that $`X^{}+X+X\mathrm{𝟏}`$ for every option $`X^{}`$ of $`X`$. Case I: There exist options $`X^\alpha `$, $`X^\beta `$ of $`X`$ (possibly the same option) for which $`X^\alpha +X^\beta +X\mathrm{𝟎}`$. Then its option $`X^\alpha +(X^\beta +X^\beta )\mathrm{𝟐}`$. Since $`X^\alpha \mathrm{𝟏}`$, Table 5 gives $`X^\beta +X^\beta \mathrm{𝟏}`$. But this contradicts Table 6, since $`X^\beta \mathrm{𝟏}`$. Case II: There do not exist two such options of $`X`$. Let $`X^\alpha `$ be an option of $`X`$. Since $`X^\alpha +X+X\mathrm{𝟏}`$, and since there exists no $`X^\beta `$ for which $`X^\alpha +X^\beta +X\mathrm{𝟎}`$, there must exist an option $`X^{\alpha \gamma }`$ of $`X^\alpha `$ such that $`X^{\alpha \gamma }+X+X\mathrm{𝟎}`$. $`X+X\mathrm{𝟏}\text{ or }\mathbf{}`$, by Table 6, but $`X+X`$ cannot be of type $`\mathbf{}`$, since adding $`X^{\alpha \gamma }`$ yields a $`\mathrm{𝟎}`$-position. Hence $`X+X\mathrm{𝟏}`$, and Table 5 implies $`X^{\alpha \gamma }\mathrm{𝟐}`$. Since $`X+X\mathrm{𝟏}`$, there must exist an option $`X^\delta `$ with $`X^\delta +X\mathrm{𝟎}`$. Everything we’ve proved so far about $`X^\alpha `$ applies equally well to $`X^\delta `$ (since all we assumed about $`X^\alpha `$ was that it be some option of $`X`$). In particular, $`X^\delta `$ must have an option $`X^{\delta ϵ}`$ such that $`X^{\delta ϵ}\mathrm{𝟐}`$. However, since $`X^{\delta ϵ}+X`$ is an option of the $`\mathrm{𝟎}`$-position $`X^\delta +X`$, $`X^{\delta ϵ}+X\mathrm{𝟐}`$. Hence $`X^{\delta ϵ}`$ and $`X`$ are two $`\mathrm{𝟐}`$-positions whose sum is a $`\mathrm{𝟐}`$-position, contradicting Table 4. ## 5 Nim for Three We wish to classify all Nim-positions as belonging to $`𝒩`$, $`𝒪`$, $`𝒫`$, or $`𝒬`$ — or rather, as we now put it, as being similar to $`\mathrm{𝟎}`$, $`\mathrm{𝟏}`$, $`\mathrm{𝟐}`$, or $`\mathbf{}`$. We will actually do more, and determine the equivalence classes of Nim games. Table 8 shows the games we have classified so far (on the left) and their respective types (on the right). We will soon see that every Nim-game is equivalent to one of the Nim-games in Table 8. We call these reduced Nim-positions. The last paragraph of this section gives a procedure for converting a three-player Nim-position into its reduced form. Throughout this section (and the rest of this article), the reader should keep in mind the difference between the notations 2 and $`\mathrm{𝟐}`$. The former is a single Nim-heap of size 2; the latter is the game-type that corresponds to a second-player win. Note in particular that 2 is not of type $`\mathrm{𝟐}`$ but rather of type $`\mathrm{𝟏}`$. We start our proof of the validity of Table 8 by showing that no two games in the table are equivalent to each other. In this we will be assisted by Tables 9 and 10. Table 9 gives the types for games of the form $`1^m+2_n`$. Each row of the chart gives what we shall call the signature of $`1^m`$, relative to the sequence $`2,\{2\},\{\{2\}\},\mathrm{}`$. Since no two games of the form $`1^m`$ have the same signature, no two are equivalent. Similarly, Table 10 is the signature table for games of the form $`1^m2`$, relative to $`2_n`$. We see that all the games $`1^m`$ and $`1^m2`$ are distinct. What about $`22`$? It can’t be equivalent to $`1^{3k+1}2`$ for any $`k`$ (even though both are $`\mathbf{}`$-games), because $`22+1\mathbf{}`$ while $`1^{3k+1}2+1=1^{3k+2}2\mathrm{𝟏}`$. What about 3? 
It can’t be equivalent to $`1^{3k+1}`$ for any $`k`$, because $`3+1\mathbf{}`$ while $`1^{3k+1}+1\mathrm{𝟐}`$; it can’t be equivalent to $`1^{3k}2`$ because $`3+2_2\mathrm{𝟏}`$ while $`1^{3k}2+2_2\mathbf{}`$; it can’t be equivalent to $`1^{3k+2}2`$ because $`3+1\mathbf{}`$ while $`1^{3k+2}2+1=1^{3k+3}2\mathrm{𝟏}`$; and it can’t be equivalent to $`2`$ because $`\{0,11\}+2\mathrm{𝟐}`$ while $`\{0,11\}+3\mathbf{}`$. Now that we know that all of the Nim games in Table 8 are inequivalent, let us show that every Nim game is equivalent to one of these. ###### Claim 9. $`mn\mathbf{}`$ for all $`m,n2`$. ###### Proof 5.1. Any two players can gang up on the third, by depleting neither heap until the victim has made his move, and then removing both heaps. ###### Claim 10. The following are true for all games $`G`$: 1. $`Gn\sim ̸\mathrm{𝟎}`$ for $`n2`$. 2. $`Gn\sim ̸\mathrm{𝟐}`$ for $`n3`$. 3. If $`Gm\mathrm{𝟏}`$ then $`Gn\mathrm{𝟏}`$, for $`m,n2`$. 4. $`G1n\sim ̸\mathrm{𝟐}`$ for $`n2`$. 5. $`Gmn\sim ̸\mathrm{𝟏}`$ for $`m,n2`$. 6. $`Gmn\sim ̸\mathrm{𝟐}`$ for $`m,n2`$. 7. $`Gmn\sim ̸\mathrm{𝟎}`$ for $`m,n2`$. ###### Proof 5.2. (a) Suppose $`Gn\mathrm{𝟎}`$. Then its options $`G1`$ and $`G`$ are $`\mathrm{𝟐}`$-games. But since $`G`$ is also an option of $`G1`$, this is a contradiction. (b) Suppose $`Gn\mathrm{𝟐}`$. Then $`G`$, $`G1`$, and $`G2`$ are all $`\mathrm{𝟏}`$-games, and in particular $`G2`$ must have a $`\mathrm{𝟎}`$-option. That $`\mathrm{𝟎}`$-option can be neither $`G`$ nor $`G1`$, so there must exist $`G^{}2\mathrm{𝟎}`$, contradicting (a). (c) Assume $`Gm\mathrm{𝟏}`$. Then either $`G\mathrm{𝟎}`$ or $`G1\mathrm{𝟎}`$ (no other option of $`Gm`$ can be of type $`\mathrm{𝟎}`$, by (a)), and in either case $`Gn\mathrm{𝟏}`$. (d) Suppose $`G1n\mathrm{𝟐}`$. Then $`G1`$, $`G11`$, and $`Gn`$ are all $`\mathrm{𝟏}`$-games. $`Gn`$ must have a $`\mathrm{𝟎}`$-option, but $`G1\mathrm{𝟏}`$ and no option $`G^{}n`$ or $`Gm`$ ($`2m<n`$) can be a $`\mathrm{𝟎}`$-game (by (a)), so $`G`$ itself must be a $`\mathrm{𝟎}`$-game. Also, since $`G11\mathrm{𝟏}`$ and $`G1\sim ̸\mathrm{𝟎}`$, there must exist $`G^{}`$ with $`G^{}11\mathrm{𝟎}`$. Then $`G^{}1\mathrm{𝟐}`$ and $`G^{}\mathrm{𝟏}`$, which is inconsistent with $`G\mathrm{𝟎}`$. (e) Every option of $`Gmn`$ has a component heap of size 2 or more, so $`G+m+n`$ has no $`\mathrm{𝟎}`$-options, by (a). (f) Suppose $`Gmn\mathrm{𝟐}`$. Then $`G`$ can’t be $`0`$ (by Claim 9), so it must have an option $`G^{}`$; $`G^{}mn\mathrm{𝟏}`$, contradicting (e). (g) Suppose $`Gmn\mathrm{𝟎}`$. Then $`G`$ can’t be $`0`$ (by Claim 9), so it must have an option $`G^{}`$; $`G^{}mn\mathrm{𝟐}`$, contradicting (f). Note that (e), (f), and (g) together imply that $`Gmn\mathbf{}`$ for all $`m,n2`$. ###### Claim 11. The following are true for all games $`G`$: 1. $`mn`$ for $`m,n3`$. 2. $`1m1n`$ for $`m,n2`$. 3. $`Gmn22`$ for $`m,n2`$. ###### Proof 5.3. (A) Take an arbitrary game $`X`$. We know that each of $`Xm`$, $`Xn`$ is either of type $`\mathrm{𝟏}`$ or type $`\mathbf{}`$ (by (a), (b) above). If either of them is a $`\mathrm{𝟏}`$-game, then so is the other (by (c)), and if neither of them is a $`\mathrm{𝟏}`$-game, then both are $`\mathbf{}`$-games. Either way, $`m+X`$ and $`n+X`$ have the same type. (B) The proof is similar, except that one needs (d) instead of (b). (C) For all $`X`$, $`Gmn+X=(GX)mn\mathbf{}`$ and $`22+X=(X)22\mathbf{}`$. To reduce a given Nim-position $`G=n_1+n_2+\mathrm{}+n_r`$ to one of the previously tabulated forms, first replace every $`n_i>3`$ by 3. 
This puts $`G`$ in the form $`1^a2^b3^c`$. If $`b+c2`$, then we have $`G22`$. Otherwise, we have $`G`$ in the form $`1^a`$, $`1^a2`$, or $`1^a3`$. Since $`1312`$, the last of these cases can be reduced to $`1^a2`$ unless $`a=0`$. ## 6 Equivalence Classes The Nim game $`22`$ has the property that if one adds to it any other Nim-position, one gets a game of type $`\mathbf{}`$. In fact, if one adds any game whatsoever to $`22`$, one still gets a game of type $`\mathbf{}`$. $`22`$ is thus an element of an important equivalence class, consisting of all games $`G`$ such that $`G+X\mathbf{}`$ for all games $`X`$. We call this class the equivalence class of infinity. This equivalence class is a sort of a black hole, metaphorically speaking; add any game to the black hole, and all you get is the black hole. If you take a two-player game for which a nice theory exists and study the three-player version, then it is unfortunately nearly always the case that most of the positions in the game are in the equivalence class of infinity. There are some games which are “close” to infinity. Paradoxically, such games can give us interesting information about games that are very far away from infinity. Consider, for instance, the $`\mathrm{𝟐}`$-game $`2_1=\{2\}`$ (the game whose sole option is a Nim-heap of size 2). ###### Claim 12. The only game $`G`$ for which $`G+2_1\sim ̸\mathbf{}`$ is the game 0. ###### Proof 6.1. Let $`X`$ be the simplest game not identical to 0 such that $`X+2_1\sim ̸\mathbf{}`$. Case I: $`X+2_1\mathrm{𝟎}`$. Then $`X+2\mathrm{𝟐}`$. But Claim 10(b), together with the fact that $`2`$ is equivalent to every Nim-position $`n`$ with $`n3`$, tells us that this can’t happen. Case II: $`X+2_1\mathrm{𝟏}`$. The winning option of $`X+2_1`$ can’t be $`X+2`$, by Claim 10(a), so it must be an option of the form $`X^{}+2_1`$. But then $`X^{}+2_1\mathrm{𝟎}`$, which contradicts the assumed minimality of $`X`$. ($`X^{}=0`$ won’t help us, since $`0+2_1\mathrm{𝟐}`$, not $`\mathrm{𝟎}`$.) Case III: $`X+2_1\mathrm{𝟐}`$. Letting $`X^{}`$ be any option of $`X`$, we have $`X^{}+2_1\mathrm{𝟏}`$. This contradicts the assumed minimality of $`X`$. This implies that no game is equivalent to 0. ## 7 Open Questions ###### Question 7.1. How do the doubling and tripling tables (Tables 6 and 7) extend to higher compound sums of a game with itself? ###### Question 7.2. Is there a decision procedure for determining when two impartial three-player games are equivalent to each other? ###### Question 7.3. What does the “neighborhood of infinity” look like? The game $`2_1\mathrm{𝟐}`$ has the property that when you add it to any non-trivial game, you get $`\mathbf{}`$. Is there a game of type $`\mathrm{𝟏}`$ with this property? Is there one of type $`\mathrm{𝟎}`$ with this property? ###### Question 7.4. How does the theory generalize to $`n`$ players, with $`n>3`$? It is not hard to show that the portion of Table 5 in the upper left corner generalizes to the case of more than three players in a straightforward way. However, carrying the theory beyond this point seems like a large job. Here are two particular questions that seem especially interesting: Can an $`n`$-fold sum of a game with itself be a win for any of the players other than the $`n`$th? Does there exist a “black hole” $`X`$ such that for all games $`Y`$, $`X+Y`$ is a win for any coalition with over half the players? ## 8 Acknowledgments This research was supported by a Knox Fellowship from Harvard College. 
I express deep appreciation to John Conway for his encouragement and for stimulating conversations. I also thank Richard Guy and Phil Straffin for many helpful remarks on the manuscript.
no-problem/9903/astro-ph9903299.html
ar5iv
text
# Expected Sub-mm Emission and Dust Properties of Lyman Break Galaxies at High Redshift ## 1. Introduction A number of high-redshift star-forming galaxies have been identified through the Lyman break surveys (Steidel et al. 1996a, b; Lowenthal et al. 1997). Since the number density of these Lyman Break Galaxies (LBG) is comparable to or somewhat higher than that of the local $`L_{}`$ galaxies, they are thought to be natural candidates of the progenitors of nearby normal galaxies (Lowenthal et al. 1997). From the observed UV continuum emission, a typical star-formation rate (SFR) of LBGs is estimated to be moderate, $`10`$ M yr<sup>-1</sup> ($`H_0`$=50 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0`$=0.5). However, there are increasing evidences of the presence of significant amount of extinction in the UV continuum by interstellar dust in LBGs (Pettini et al. (1998); Sawicki & Yee (1998)). If the inferred UV extinction is corrected, their SFR would be more than $`100M_{}`$ yr<sup>-1</sup> (Dickinson 1998), which may imply that high-redshift LBGs are in their major star-formation period. On the other hand, deep surveys in sub-mm wavelength with Sub-millimeter Common-User Bolometer Array (SCUBA; Holland et al. 1998) revealed a population of FIR-luminous galaxies possibly at high redshift and have a large SFR, $`100`$ M yr<sup>-1</sup> (Smail et al 1997; Hughes et al. 1998; Barger et al. 1998; Lilly et al. 1999). Although the optical identification of the SCUBA sources has not been well established in some cases, they are also candidates of high-redshift forming galaxies. It is then natural to ask what the relationship between these two populations is, or, how the high-redshift LBGs appear in sub-mm wavelength. In the deepest sub-mm imaging so far at the Hubble Deep Field (HDF; Williams et al. 1996) by Hughes et al. (1998), none of the optically-selected LBGs (Steidel et al. 1996b; Lowenthal et al. 1997) was detected. The upper-limit flux value at 850 $`\mu `$m, however, can be used to put some important constraints for the spectral energy distribution (SED) of the LBGs. In this Letter, we investigate sub-mm emission properties of the LBGs in HDF to constrain their physical properties such as dust temperature by using the results of the SCUBA observations by Hughes et al. (1998). ## 2. Spectral Shape of the Lyman Break Galaxies Our sample of LBGs is the same as that used in Sawicki and Yee(1998). They are the 17 galaxies at z$`3`$ with firm spectroscopic redshifts and with photometric data from optical to near-infrared (NIR) wavelength, taken from the 41 high-redshift LBG candidates originally provided by Steidel et al. (1996b) and Lowenthal et al. (1997). Redshifts and magnitudes are listed in Table 1 of Sawicki and Yee (1998). Hughes et al. (1998) detected five sources at 850 $`\mu `$m with SCUBA in HDF. We found none of the optically-selected LBGs is detected within the error circles of the SCUBA sources <sup>1</sup><sup>1</sup>1 There may be small systematic errors in the positions quoted in Hughes et al. (Richards 1998) but it does not affect the identification with LBGs discussed here. . Hughes et al. evaluated their detection limit $`2`$ mJy (5$`\sigma `$). Hereafter we quote this value as the upper-limit flux density at 850 $`\mu `$m for all the LBGs although the true detection limit may change across the field slightly due to the position-dependent confusion noises and detector sensitivities. 
The upper limit of sub-mm flux density gives a constraint on the spectral shape from the rest-frame UV to FIR wavelength of the LBGs. We compare the observed optical-NIR flux densities and the upper limit of sub-mm flux density of LBGs with the three template spectra of local starburst galaxies. For the template spectra, we adopt the SED of Arp220 (Klaas et al. 1997), as an example of heavily dust enshrouded starburst galaxies, and the averaged SED of high-reddening starburst galaxies (SBH) and low-reddening starburst galaxies (SBL) compiled by Schmitt et al. (1997). These templates are redshifted and then fitted to the optical-NIR photometric data. Figure 1a and 1b show the best and the worst case in the results of our least-square fitting procedure. In many cases, the template of SBL gives a better fit but those of Arp220 and SBH are always steeper than the spectra of LBGs (Figure 1a). For several objects, spectra of LBGs are even flatter than the SBL template (Figure 1b). Reduced $`\chi ^2`$ values (degree of freedom is four) ranges 300-550 for Arp220, 50-250 for SBH, and 1.5-30 for SBL. The absorption at the rest-frame UV wavelength in the LBGs thus seems much smaller than in Arp220- or SBH-like galaxies. If we assume the templates of Arp220 and SBH (fitted at optical-NIR wavelength), the expected flux density at $`850\mu `$m exceeds 2 mJy for 11 and 9 of the 17 objects, respectively. If the SBL template is applied, none of the objects has a $`850\mu `$m flux density larger than 2 mJy, which is consistent with the results of the SCUBA observation. Thus, not only the rest-frame UV-optical SEDs but also the FIR-UV SEDs of LBGs are better represented by the SBL template. On the other hand, SCUBA-selected bright sub-mm sources show more FIR excess. For example, Barger et al. (1998) detected a sub-mm source with $`S_{\nu (850\mu m)}=4.6`$ mJy by SCUBA in the Lockman-Hole field and found a plausible optical counterpart with $`K_{AB}=21.8`$. They argued that only the SED of ultra-luminous infrared galaxies like Arp220 can match this flux ratio. The SEDs of the LBGs are thus much shallower than those of the SCUBA sources detected above a few mJy level at 850 $`\mu `$m. The observed star-forming regions of LBGs are not likely to be heavily enshrouded by dust compared with the SCUBA-selected bright sub-mm sources or the local ultra-luminous infrared galaxies. ## 3. Dust Properties of Lyman Break Galaxies ### 3.1. Model Fitting of the UV Spectra We first estimate the amount of dust extinction of LBGs by fitting the model UV/optical spectra (at 900-10000 Å; Salpeter IMF is assumed ) with an extinction to the observed SEDs following the similar manner as in Sawicki and Yee (1998) but using the different evolutionary-synthesis model developed by Kodama & Arimoto (1997). We assumed the reddening law derived by Calzetti for the local starburst galaxies <sup>2</sup><sup>2</sup>2The formula is presented in Sawicki and Yee (1998). The estimated extinction, $`E(BV)`$, ranges 0.3-0.5 mag with ages of $``$ 0.03 Gyr, which agrees with the results in Sawicki & Yee (1998). For 13 of the 17 objects, acceptable fittings are obtained with finite amounts of extinction and we will discuss the 13 objects in this section. The $`850\mu `$m flux densities of LBGs are then calculated, with the assumption that all the energy absorbed in UV/optical wavelength is re-emitted by the modified black-body radiation with single temperature. 
The observed 850 $`\mu `$m flux density, $`S_{\nu (850\mu m)}`$, is given by $$S_{\nu (850\mu m)}=\frac{(1+z)F_{abs}}{_0^{\mathrm{}}\nu ^\beta B_\nu (T)𝑑\nu }\nu _{1}^{}{}_{}{}^{\beta }B_{\nu _1}(T)$$ (1) where $`F_{abs}`$ is absorbed flux, $`T`$ is a dust temperature, $`\beta `$ is a power of the emissivity law, $`B_\nu (T)`$ is the Planck function, and $`\nu _1=(1+z)\nu _{850\mu m}`$. Figure 2 shows the dependence of the estimated flux density on the dust temperature and emissivity for a typical case. We assume $`\beta =1`$ in the following discussion, in order to compare with the properties of the local galaxies which are obtained with $`\beta =1`$. For the given $`\beta `$ value, the upper limit of the flux density at 850 $`\mu `$m (2 mJy) imposes the lower limit of the dust temperature, $`T_{lim}`$. In Figure 3 (the fourth panel from the top), the distribution of the obtained $`T_{lim}`$ is shown along with those of the dust temperature of local galaxies from the samples of the optically-selected CfA galaxies classified as E-S0/a and Sb-Sbc (Sauvage and Thuan 1994) and the $`IRAS`$-selected infrared-bright galaxies (Young et al. 1989). These dust temperature of the local galaxies are measured by fitting the $`IRAS`$ 60 and 100 $`\mu `$m flux densities with single-temperature modified-Planck function with $`\beta =1`$. The distribution of $`T_{lim}`$ of the LBGs has a peak at around 45 K and significantly higher than those of the local galaxies. The median value of the temperature lower limit is 43 K while those of CfA and infrared-bright samples are $`T_{med}=33`$ K (E/S0), 30 K (Sb-Sbc), and 36 K (infrared bright). According to the Peto-Prentice Generalized Wilcoxon Test, the temperature distribution of LBGs is different from those of the local galaxies with a confidence level larger than 99.9999 % . It is possible that the high dust temperature of the LBGs is introduced by our assumption of single dust temperature. Most of the nearby star-forming galaxies have a hot dust component ($`T100150`$ K) in addition to a cold one ($`T3050`$ K) (Eales, Wynn-Williams, and Duncan 1989; Klaas et al. 1997). IRAS 60 and 100 $`\mu `$m flux densities of the local galaxies may be dominated by only the cold component. So as to estimate the energy drained to the hot dust, we fitted the SEDs of the three template spectra used in the previous section by the two component models. The results are $`L(cold)/L(hot)=1.9`$ for the SBL, $`L(cold)/L(hot)=3.2`$ for the SBH, and $`L(cold)/L(hot)=9.8`$ for the Arp220 template (The result for Arp220 is consistent with that of Klaas et al.). If we consider the contribution by the possible hot component in calculating the FIR luminosity of the LBGs, adopting the ratio $`L(cold)/L(hot)=1.9`$ which gives the lowest limiting temperature, the peak of $`T_{lim}`$ of the LBGs is shifted to $`37`$ K. According to the Peto-Prentice Generalized Wilcoxon Test, they are still different higher than 99.9999 % level. True $`T_{lim}`$ must be between these two extreme cases, and it may be $`T_{lim}40`$ K. In any case, the dust temperature of the LBGs are likely to be still higher than those of the nearby normal galaxies. The difference of local galaxies and high-redshift LBGs can be in dust extinction properties instead of dust temperature. The reddening law in the LBGs is not understood very well (e.g., Pettini et al. 1998). 
If we use the SMC-like reddening law instead of the Calzetti law, the total amount of absorbed UV luminosity becomes smaller and the expected sub-mm flux density becomes several times smaller. In such a case, only a weak constraint for dust temperature is obtained. ### 3.2. FIR emission Estimated from UV spectral Slope UV spectra of the local starburst galaxies are well approximated by a power-law of index $`p`$ (1250 $`2200`$ Å), $`f_\lambda =A\lambda ^p`$ (Calzetti Kinney & Storchi-Bergmann (1994), Meurer et al., (1995)), and there is an empirical relation between $`p`$ and $`F_{FIR}/F_{UV}`$ values (Meurer et al., (1997)). In the range of $`p1.8`$, approximately, $$R\mathrm{log}\left(\frac{F_{FIR}}{F_{UV}}\right)0.65p+1.8$$ (2) where $`F_{FIR}=1.26\times 10^{11}\left[2.58\left(\frac{S_{\nu (60\mu m)}}{\mathrm{Jy}}\right)+\left(\frac{S_{\nu (100\mu m)}}{\mathrm{Jy}}\right)\right]`$ $`\mathrm{erg}\mathrm{s}^1`$ $`\mathrm{cm}^2`$, and $`F_{UV}=\lambda _cf_{\lambda _c}`$ ($`\lambda _c=2320`$ Å). Hence, $$F_{FIR}A\lambda _{c}^{}{}_{}{}^{p+1}\times 10^R.$$ (3) If we assume the modified black-body radiation with the power of emissivity law $`\beta =1.0`$, the dust temperature $`T`$ implies the flux ratio $`k(T)S_{\nu (100\mu m)}/S_{\nu (60\mu m)}`$. Then we obtain, $$S_{\nu (60\mu m)}=\frac{1}{1.26\times 10^{11}(2.58+k(T))}\left(\frac{F_{FIR}}{\mathrm{erg}\mathrm{s}^1\mathrm{cm}^2}\right)\mathrm{Jy}.$$ (4) The sub-mm flux is predicted by the spectrum from eq.4. Again, the upper limit of $`S_{\nu lim(850\mu m)}=2`$ mJy gives $`T_{lim}`$ values. The bottom panel of Figure 3 shows a distribution of $`T_{lim}`$ obtained by this empirical method. It peaks at $`38K`$, which is similar to or somewhat lower than the value calculated from the model UV extinction. It is not surprising that the empirical method gives wider $`T_{lim}`$ distribution since there is fairly large scatter in the empirical relation (eq.2) itself (Meurer et al 1997). ## 4. Contribution to the CFIRB The average dust temperature of LBGs can also be examined with the observed intensity of cosmic far-infrared background (CFIRB) at $`850\mu `$m, although the field of view of HDF (4.7 arcmin<sup>2</sup>) is very small and statistical uncertainty may not be negligible; a typical fluctuation of CFIRB at this scale is still unknown. The total $`850\mu `$m flux in HDF evaluated from the average intensity of the CFIRB radiation, $`\nu I_\nu =3.5\times 10^{10}`$ W m<sup>-2</sup> sr<sup>-1</sup> (Fixsen et al. 1997). The five SCUBA sources contribute about 45 % of this flux. The contribution of the LBGs in HDF should not significantly exceed the remaining 55% of the CFIRB flux. In Figure 4, we plot the contribution of the LBGs studied in the previous section to the CFIRB with respect to the assumed dust temperature. Three curves are those given by the model calculations with single and double temperature components and the prediction using the empirical relation. Depending on temperature, LBGs contribute a significant fraction ($`30\%`$). With the constraint not to exceed the CFIRB flux, we obtain fairly high limiting temperature, $`T_{lim}40`$ K, which is consistent with the limit obtained by using the SCUBA upper-limit flux density. ## 5. Discussions We showed that the LBGs in HDF have UV-to-FIR SEDs different from those of the ultra-luminous FIR galaxies in the local universe and of the SCUBA-selected bright sub-mm sources. 
It may imply that the birght sub-mm sources detected above a few mJy at 850 $`\mu `$m are rather extremely dust-rich objectes among the high-redshift star-forming galaxies selected by the Lyman break technique although it does not have to mean that the entire parent populations of sub-mm sources and LBGs are fundamentally different. We also claimed that dust temperature in the LBGs should be higher than $`40`$K, if we assume the reddening law appropriate for local starburst galaxies. What causes the relatively high dust temperature in the LBGs? High dust temperature may be results of some intrinsic properties of star formation in high-redshift galaxies. There are some possible explanations for effective dust heating. For example, a flatter initial mass function (IMF) which produces a larger number of massive stars for a given mass of dust may provide the effective dust heating. High star-formation efficiency (a higher SFR for a given mass of dust regardless of IMF) brings the similar situation. The difference may also be in properties of dust grains. For the case of low metal abundance (possibly true in young galaxies), dust may be dominated by small-size grains which are easily heated. Finally, we discuss the detectability of LBGs in sub-mm observations. Assuming an appropriate spectral shape like SBL or dust temperature of $`4050K`$, we predict that sub-mm flux at $`850\mu `$m of the LBGs in HDF is typically 0.1-1 mJy. So within the reasonable amount of observational time, only the brightest LBGs could be detected with SCUBA. In the coming decade, however, new sub-mm observational facilities such as Large Millimeter and Sub-millimeter Array (NAOJ) or MilliMeter Array (NRAO), etc. are planned. Our results presented in this letter provides rather optimistic future prospects; only a few hour exposure time will be needed to detect typical LBGs with such new facilities and then we will learn about true star-formation properties of typical high-redshift galaxies. We thank N.Arimoto and T. Kodama for kindly providing their evolutionary synthesis code.
no-problem/9903/cond-mat9903353.html
ar5iv
text
# Stability of vortices in rotating traps: A 3D analysis ## I Introduction Since the first experimental realization of Bose-Einstein condensation (BEC) in weakly interacting gases , there has been a huge theoretical and experimental effort to study its properties in the framework of fully quantum theories and in the so called mean field limit (Gross-Pitaevskii -GP- equations). These equations are formally Nonlinear Schrödinger Equations (NLS) which appear in many fields of physics, e.g. in bulk superfluids and nonlinear optics to cite only a few examples. All of these physical systems have been long known to exhibit solutions corresponding to topological defects , one of the simplest being known as vortices (in two spatial dimensions) or vortex-lines (in three spatial dimensions). Vortices are localized phase singularities with integer topological charge which analogous in the hydrodynamic interpretation to vortices those appearing in fluid dynamics . In the framework of BEC studies it has been raised the question of whether these nonuniform clouds of condensed gases may support the existence of vortices in a stable form. There is a huge literature on vortices and vortex properties in the framework of NLS equations (including its particular cubic version, the GP equation) and its nonconservative extensions, the Ginzburg-Landau (GL) system, and vector GL models. In particular the stability of $`m`$-charged GP vortices in two dimensions was studied in . In three dimensions the GL case has been recently considered and vortex lines geometric instabilities have been found to strongly deform the vortex lines. However the GP equation cannot be obtained as a limit of the GL studied there since dissipation and diffusion are essential ingredients of the models studied in Ref. . This fact makes the conservative case (GP) interesting by itself. Other analysis of vortices and vortex stability in the framework of Nonlinear Optics are included in Ref. . The current setups utilized to generate Bose-Einstein condensates use a magnetic trap to confine the atomic cloud which is modeled by a parabolic trapping potential. This is a distinctive feature from the common NLS systems, in which the vortices are free and move in an homogeneous background. The dynamics of a vortex in a spatially inhomogeneous two dimensional GP problem was studied in Ref. using the method of matched asymptotic expansions, but the authors did not consider the stability of the 2D vortex itself. In principle, the vortex motion equations can be used to study the motion of a single 2D point vortex in spatially inhomogeneous GP problems. However, the dynamics of the many vortex case is more complicated and by no means trivial. For simple approaches to the problem which do not include the effect of vortex cores on the background field see Ref. . The dynamics of 3D vortices is yet more complicated allowing the so-called reconnection. To our knowledge there are no analytical results but only qualitative numerical observations available . Another theoretical framework where non-homogeneous dynamics of vortices has been investigated is the possibility of pinning vortices in type-II superconductors , but it is only the dynamics has been considered through analytical approximation techniques with no comparison with numerics. In all the previously discussed cases the vortex stability is given for granted. 
In the framework of studies of Bose–Einstein condensed gases, the problem of vortex stability has been considered in various papers that try to solve the problem of linear and global stability, either from a purely analytical point of view, such as in , or by mixing analytical and numerical techniques . In Ref. the authors solve the Gross-Pitaevskii equation and find the energies of the condensate in vortex states, for a number of particles up to $`N=10^4`$. In Ref. the authors solve the Bogoliubov equations for a unit charge vortex in a stationary trap with axial symmetry, their results also being limited to $`N<10^4`$. In Ref. the authors address the problem of minimizing the energy functional with a reduced basis of trial states that is only valid in the limit of small $`U`$. In this paper we unify and substantially extend what has been done in previous works regarding two questions: the global energetic stability and the local stability of vortex states. First, in Sect. II we solve the GPE for an axially symmetric harmonic potential, with or without the action of a uniform magnetic field, which resembles the effect of a rotating trap. We calculate the lowest stationary solutions that have a well defined value $`m`$ of the third component of the angular momentum, $`L_z`$, and we do this for small and for very large values of the nonlinearity ($`N\sim 10^7`$). We find that there are continuous intervals of the “angular velocity”, $`[\Omega _m,\Omega _{m+1})`$, in which the $`m`$-charged vortex state becomes energetically stable with respect to other states of well defined vorticity. In Sect. III, we study Bogoliubov’s equations from two different points of view: as a consequence of a linear stability analysis of the Gross-Pitaevskii equation (GPE), and as the first corrections to the mean field theory of the dilute condensate. The concepts of dynamical and energetic stability are defined, and it is demonstrated that any possible destabilization of the system must be either of an energetic nature or grow polynomially with respect to time. We next solve the Bogoliubov equations for $`m=1`$ and $`m=2`$ unperturbed vortex states in stationary traps. It is found that the $`m=1`$ and $`m=2`$ vortices are only energetically unstable, which means that the lifetime of both configurations is only limited by dissipation. A similar treatment reveals that rotation can only stabilize the unit charge vortex-line if the angular speed is in a suitable range, $`\Omega \in [\Omega _1,\Omega _2)`$, while outside of this range, $`\Omega _2<\Omega <\Omega _c`$, the minimum of the energy functional is not an eigenstate of the $`L_z`$ operator, i.e., it is not symmetric under rotations. These results are confirmed by numerical simulations of the evolution of perturbed vortices. In Sect. IV we summarize our work and discuss its implications. ## II Vortex solutions of the GPE ### A Stationary states of GPE in a uniform and constant magnetic field For small temperatures and small densities, the condensate is modeled by the Gross-Pitaevskii equation (GPE) . We will always refer to an axially symmetric trap with a term that accounts for rotation around the Z axis, which may be generated by a weak magnetic field.
The form of the equation is $$i\hbar \frac{\partial \psi }{\partial t}=-\frac{\hbar ^2}{2m}\Delta \psi +\frac{1}{2}m\omega ^2\left(\gamma ^2r^2+z^2\right)\psi +U_0N|\psi |^2\psi +\tilde{\Omega }L_z\psi .$$ (1) Here $`U_0=4\pi \hbar ^2a/m`$ characterizes the interaction and is defined in terms of the ground state scattering length $`a`$. In all cases we will take the normalization condition to be $$\int |\psi |^2d^3x=1.$$ (3) It is convenient to express Eq. (1) in a natural set of units, which for our problem is built from two scales: the trap size (measured by the width of the linear ground state), $`a_0=\sqrt{\hbar /m\omega }`$, and its period, $`\tau =1/\omega `$. With these definitions the equation simplifies to $$i\frac{\partial \psi }{\partial t}=\left[-\frac{1}{2}\Delta +i\Omega \frac{\partial }{\partial \theta }+\frac{1}{2}(\gamma ^2r^2+z^2)+U|\psi |^2\right]\psi ,$$ (4) while maintaining the normalization. The new parameters, $`\Omega =\hbar \tilde{\Omega }`$ and $`U=4\pi Na/a_0`$, represent the “angular speed” of the trap and the adimensionalized interaction strength, respectively. For stability reasons (see below), $`\Omega `$ will be of the order of, or smaller than, the strength of the trapping, $`\omega `$. The other parameter, $`U`$, will take values from $`0`$ to $`6\times 10^4`$. For the experiments with rubidium and sodium, this implies a minimum of $`10^6`$ and a maximum of $`10^7`$ atoms, which is in the range of current and projected experiments. The shape of the trap is dictated by the geometry factor, which in this work will typically take one of two values: $`\gamma =1`$, corresponding to a spherically symmetric trap, and $`\gamma =2`$, corresponding to an axially symmetric, elongated trap. A stationary solution of (4) will be of the form $`\psi (\stackrel{}{x},t)=e^{-i\mu t}\varphi (\stackrel{}{x})`$, where $`\mu `$ may be interpreted both as a frequency and as the chemical potential $$\mu \varphi =\left[-\frac{1}{2}\Delta +i\Omega \frac{\partial }{\partial \theta }+\frac{1}{2}(\gamma ^2r^2+z^2)+U|\varphi |^2\right]\varphi .$$ (5) Any solution of (4) has an energy *per* particle, which is given by the functional $$E(\psi ,N)=\int \left(\frac{1}{2}|\nabla \psi |^2-i\Omega \overline{\psi }\partial _\theta \psi \right)+\int \frac{1}{2}\left(\gamma ^2r^2+z^2+U\left|\psi \right|^2\right)\left|\psi \right|^2.$$ (6) For a stationary solution it becomes $$E(\psi ,N)=\mu -\frac{U}{2}\int \left|\varphi \right|^4.$$ (8) The stationary solutions of (4) may also be interpreted as minimizers of $$\mathcal{F}_\mu =E(\psi ,N)-\mu \int |\psi |^2$$ (9) subject to the constraint of Eq. (3). In that case $`\mu `$ is nothing but the Lagrange multiplier of the norm. Since we are interested in single vortex solutions of the GPE, we will restrict our analysis to stationary states that are also eigenstates of the $`L_z`$ operator. That is, we will look for solutions of the form $`\psi (r,z,\theta ,t)=e^{-i\mu t}e^{im\theta }\varphi (r,z).`$ Summarizing, our goal will be to find the unit norm functions $`\varphi _\mu ^{(m)}(r,z)`$ and real numbers $`\mu `$ which are solutions of the equation $$\mu \varphi _\mu ^{(m)}=\left[-\frac{1}{2}\Delta -m\Omega +\frac{1}{2}(\gamma ^2r^2+z^2)+U|\varphi _\mu ^{(m)}|^2\right]\varphi _\mu ^{(m)}.$$ (10)
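As a quick numerical check of the scales just introduced, the following sketch evaluates $`a_0`$ and $`U=4\pi Na/a_0`$ for parameters of the order of those used in rubidium experiments (the trap frequency and scattering length below are illustrative assumptions, not values taken from this paper):

```python
import numpy as np

# Illustrative Rb-87 parameters (assumed for this example):
hbar = 1.0545718e-34        # J s
m = 1.44316e-25             # kg, mass of Rb-87
omega = 2 * np.pi * 100.0   # rad/s, assumed trap frequency
a = 5.3e-9                  # m, assumed s-wave scattering length

a0 = np.sqrt(hbar / (m * omega))   # trap size (width of the linear ground state)
for N in (1e6, 1e7):
    U = 4 * np.pi * N * a / a0     # adimensionalized interaction strength
    print(f"N = {N:.0e}:  a0 = {a0*1e6:.2f} um,  U = {U:.2e}")
```

With these assumed numbers, $`N=10^6`$ to $`10^7`$ atoms indeed give $`U`$ in the $`10^4`$ to $`10^5`$ range quoted above.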
Our treatment in the following sections will be fully three-dimensional and no spurious conditions (e.g. periodicity) will be imposed on the boundaries. We want to obtain at least the lowest energy state for each value of the vorticity, $`m`$. The dependence of the spectrum on the nonlinearity and on the angular velocity, $`\Omega `$, is also of interest, since it will allow us to find whether the vortex-line states may become energetically favorable. ### B Numerical method Due to the nonlinear nature of the problem we want to solve \[Eq. (10)\], there are not many analytical tools available. The most common (and maybe easiest) approach to the problem is to discretize the spatial part and perform time evolution in imaginary time while trying to preserve the normalization, a method which is related to steepest descent. The precision of the solution depends on the type of spatial discretization used: finite differences (used for example in Refs. ) or spectral methods (such as the one used in Ref. ). However these common methods, such as finite differences and similar spectral methods , have reached a maximum value of the interaction of $`U=10^3`$, which should be contrasted with the value $`U=10^5`$ that can be attained with the technique presented below. Properly speaking, our technique is a Galerkin-type method, in which one expands the unknown solution in a complete basis of the Hilbert space under consideration. For convenience we have used the basis of eigenstates of the harmonic oscillator with fixed vorticity. In this basis our stationary solution is expressed as $$\psi _\mu ^{(m)}(\stackrel{}{x},t)=e^{-i\mu t}e^{im\theta }\sum _nc_nP_n^{(m)}(r,z).$$ (11) Here the single index, $`n`$, denotes two quantum numbers, $`(n_z,n_r)`$, for the axial and radial degrees of freedom, and $`P_n^{(m)}`$ is the product of a Hermite polynomial, a Laguerre polynomial and a Gaussian, $$P_n^{(m)}=C_nH_{n_z}(z)L_{n_r}^m(\rho ^2)r^me^{-(\rho ^2+z^2)/2},$$ (13) $$C_n=\sqrt{\frac{1}{\sqrt{\gamma }\sqrt{\pi }2^{n_z}n_z!}}\sqrt{\frac{n_r!}{\pi \left(n_r+m\right)!}},$$ (14) with $`\rho =r/\sqrt{\gamma }`$. Next, following the same convention for the indices, we insert this expansion into Eq. (10) to obtain $$\left(E_i^{ho}-m\Omega -\mu \right)c_i+U\sum _{jkl}A_{ijkl}^{(m)}\overline{c}_jc_kc_l=0,$$ (15) where $`E_i^{ho}`$ is the harmonic oscillator energy of the mode $`P_i^{(m)}`$ and the tensor $`A_{ijkl}^{(m)}`$ has the following definition $$A_{ijkl}^{(m)}=2\pi \int \overline{P}_i^{(m)}\overline{P}_j^{(m)}P_k^{(m)}P_l^{(m)}dr\,dz.$$ (16) Since the $`P_i^{(m)}`$ are products of known polynomials and exponentials, it would be possible, in principle, to evaluate the coefficients of the tensor exactly with a Gaussian quadrature formula of the appropriate order. This approach was used in Ref. for the three-dimensional case. However, when one wishes to use a large number of modes (about $`1600`$ for each value of $`m`$ in our case) to achieve large nonlinearities, the search for the quadrature points becomes more difficult than performing a stable integration by some other method, the simplest accurate one being Simpson’s rule. Once we fix all of the constants, $`E_i^{ho}`$, $`A_{ijkl}^{(m)}`$, $`\mu `$, and a guess for the solution, it is feasible to solve (15) iteratively, e.g. by Newton’s method.
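The following is a minimal sketch of the Galerkin strategy in a reduced, one-dimensional setting (illustrative only: the actual computation is three-dimensional, uses on the order of $`1600`$ modes per $`m`$, and a Newton iteration; here we use a small basis and a normalized gradient descent instead):

```python
import numpy as np

# 1D analogue: expand psi in harmonic-oscillator eigenfunctions, build the
# quartic tensor A_ijkl by quadrature, then relax to the ground state.
M = 12                        # number of basis modes (toy size)
x = np.linspace(-8, 8, 801)
dx = x[1] - x[0]

# Oscillator eigenfunctions from the normalized Hermite-function recurrence
phi = np.zeros((M, x.size))
phi[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
phi[1] = np.sqrt(2.0) * x * phi[0]
for n in range(2, M):
    phi[n] = np.sqrt(2.0/n)*x*phi[n-1] - np.sqrt((n-1)/n)*phi[n-2]

E = np.arange(M) + 0.5        # oscillator energies E_n = n + 1/2
A = np.einsum('ix,jx,kx,lx->ijkl', phi, phi, phi, phi) * dx  # quartic tensor

U = 50.0                      # toy interaction strength
c = np.zeros(M); c[0] = 1.0   # initial guess: the linear ground state
for it in range(5000):
    g = E*c + U*np.einsum('ijkl,j,k,l->i', A, c, c, c)  # analogue of Eq. (15)
    mu = c @ g                                          # Lagrange multiplier
    c_new = c - 0.002*(g - mu*c)                        # descend, keep norm
    c_new /= np.linalg.norm(c_new)
    if np.linalg.norm(c_new - c) < 1e-12:
        break
    c = c_new
print("chemical potential mu =", mu)
```

At a stationary point the residual $`g-\mu c`$ vanishes, which is exactly the condition of Eq. (15) with real coefficients.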
However, it is wiser to perform two simplifications before implementing the algorithm. The first is that all of the eigenfunctions, $`P_n^{(m)}`$, can be made real, so we can impose the coefficients in the expansion, $`\{c_n\}`$, to be real as well. The second optimization is that, thanks to the symmetry of the problem, the ground state of Eq. (5) has a well defined positive parity. This allows us to eliminate redundant modes , saving memory and reaching higher energies and nonlinearities which would otherwise be computationally hard to attain. On the other hand, we have always checked that this method produces the same results as the complete one for a selected and significant set of parameter values. Finally, it is important to note that the four-index tensor (16) is in fact the product of two smaller tensors, corresponding to the integrations over the $`z`$ and $`r`$ variables . This decomposition is most important when working with a large number of modes, because then the size of $`A`$ becomes extremely large (i.e., $`1600^4`$ elements for $`1600`$ modes). Concerning the evaluation of the very high order polynomials involved in our computations, it must be said that this is not a simple task, especially for intermediate values of the spatial variables, since there are many comparable terms with usually different signs, and the cancellations induce numerical instabilities. The usual procedure to avoid this difficulty is to use Horner’s method to evaluate the polynomial, which is comparable to using FFT techniques, but in our case this is not enough, and the evaluation of the higher order polynomials could only be done with the recursion formulas for the Hermite and Laguerre polynomials, as sketched below. We remark that the choice of this spectral technique was largely influenced by the need to reach high nonlinearities, which are not achievable with the other approaches. Further details on the numerical technique, as well as convergence proofs, will be given elsewhere.
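For concreteness, here is a minimal implementation of the recurrence route just mentioned (plain Python written for illustration; it is not the authors' code). The three-term recurrences avoid the catastrophic cancellation that a naive evaluation of the expanded polynomial coefficients would suffer at high order:

```python
def hermite(n, x):
    """Physicists' Hermite H_n(x) via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def laguerre(n, m, x):
    """Generalized Laguerre L_n^m(x) via
    (k+1) L_{k+1}^m = (2k + m + 1 - x) L_k^m - (k + m) L_{k-1}^m."""
    l0, l1 = 1.0, 1.0 + m - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2*k + m + 1 - x) * l1 - (k + m) * l0) / (k + 1)
    return l1

# Checks: H_3(0.5) = 8x^3 - 12x = -5, and L_2^0(1) = 1 - 2x + x^2/2 = -0.5
print(hermite(3, 0.5), laguerre(2, 0, 1.0))
```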
### C Results for the stationary states and spectrum Using the preceding technique, we have searched for the lowest states, $`(n_z,n_r=0)`$, for each branch of the spectrum with a different vorticity, $`m=0,\ldots ,6`$. This was done for two geometries, $`\gamma =1`$ (spherically symmetric trap) and $`\gamma =2`$ (cigar shaped trap), in a static trap, $`\Omega =0`$, while varying the intensity of the interaction from $`0`$ to approximately $`50000`$. The results of this study are plotted in Fig. 1. Remarkably, in the absence of rotation, and up from the lowest states, both the spectrum and the energies can be fitted to a simple formula $$\mu _{0m}(N)=\mu _{00}(N)+\omega _{\text{eff}}(N)m.$$ (17) The first term is the chemical potential of the $`m=0`$ ground state and is not relevant to the dynamics. Using the Thomas-Fermi approximation one can show that it grows as $`\mu \propto N^{2/5}`$, a behavior which is approximately reflected in the numerical results shown in Fig. 1(c). The second term is much more relevant to the evolution of the condensate. It grows linearly, like the energy levels of a linear harmonic oscillator, with an effective frequency, $`\omega _{\text{eff}}(N)`$, that decreases with the interaction. The fact that the higher levels of the spectrum of $`\mu `$ remain equispaced even for large interactions is the reason why the condensate exhibits an exponentially divergent response to parametric perturbation of the trap frequencies, as shown in Refs. and . Now we want to study the stationary solutions in the presence of rotation. For $`\Omega \ne 0`$ the eigenfunctions with definite vorticity remain the same, while the chemical potential and the energy suffer a shift that depends on the vorticity of the state $$E_{nm}(U,\Omega )=E_{nm}(U,0)-m\Omega .$$ (18) This shift gives rise to an ample phenomenology, which is pictured in Fig. 2. First, we see that the degeneracy with respect to $`m`$ is broken. The only other possible degeneracy that remains is with respect to the $`r`$ and $`z`$ variables, but this is removed in the case without spherical symmetry, $`\gamma \ne 1`$. Second, the $`m=1,2,3,\ldots `$ branches of the spectrum become minima of the energy functional with respect to the other branches for continuous intervals of the angular velocity, $`[\Omega _m,\Omega _{m+1}]`$, where $$\Omega _m=E_{0,m+1}-E_{0m}.$$ (19) However this does not mean that in these intervals the $`m`$-th vortex state becomes a global minimum. Indeed, in Sect. III we will only be able to prove that the $`m=1,2`$ vortex lines achieve the status of local minima. It remains an open question under which situations the ground state must have a well defined vorticity. Third, even though the separation between the $`m=0`$ and $`m=1`$ states becomes very narrow for large interactions, the stabilization frequency $`\Omega _1`$ only approaches zero asymptotically with $`U`$. As a consequence, $`m=1`$ states are never a global minimum of the energy in a stationary trap, a fact that can be checked by simply inspecting the energy functional. And finally, there is a critical value of $`\Omega `$ for which the energy functional becomes unbounded from below \[see Fig. 2\], and which coincides with the separation between energy levels for large values of the vorticity. This critical frequency, $`\Omega _c`$, is such that all of the ground states for each value of the vorticity have the same energy $$E_{0m}(U,\Omega _c)=E_{0k}(U,\Omega _c),\forall k,m.$$ (20) Using Eq. (20) and a fit similar to the one in Eq. (17), one finds that it is always smaller than the critical frequency of the linear case, $$\Omega _c=\omega _{\text{eff}}(U).$$ (21) ## III Stability of stationary states ### A The linear stability equations In the preceding section we obtained stationary solutions of the mean field model for the Bose-Einstein condensate, all of which had a well defined value of the third component of the angular momentum operator. We named those states vortices. In this section we want to study the stability of these solutions according to several criteria of a local nature: local energetic stability and linear stability. We begin our study from the adimensionalized Gross-Pitaevskii equation (4). First we expand the condensate wavefunction around a stationary solution with fixed vorticity, $$\Psi (r,z,\theta ,t)=\psi _0+ϵ\psi _1=\left[f(r,z)e^{im\theta }+ϵ\alpha (r,z,\theta ,t)\right]e^{-i\mu (\Omega )t}.$$ (22) We insert this expansion into Eq. (4) and truncate the equations at $`𝒪(ϵ^1)`$, thus getting $$i\partial _t\alpha =\left[H_0+i\Omega \partial _\theta +2Uf^2\right]\alpha +Uf^2e^{2im\theta }\overline{\alpha },$$ (25) $$-i\partial _t\overline{\alpha }=\left[H_0-i\Omega \partial _\theta +2Uf^2\right]\overline{\alpha }+Uf^2e^{-2im\theta }\alpha ,$$ (26) with $`H_0=-\frac{1}{2}\Delta +\frac{1}{2}(\gamma ^2r^2+z^2)-\mu (\Omega )`$.
We can also write these equations in a more compact form, $$i\frac{\partial }{\partial t}\stackrel{}{W}=\sigma _z\mathcal{H}(\Omega )\stackrel{}{W}=\mathcal{B}(\Omega )\stackrel{}{W},$$ (27) by using the definitions $$\stackrel{}{W}=\left(\begin{array}{c}\alpha \\ \overline{\alpha }\end{array}\right),\qquad \sigma _z=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right),\qquad \mathcal{H}(\Omega )=H_0+\left(\begin{array}{cc}i\Omega \partial _\theta +2Uf^2& Uf^2e^{2im\theta }\\ Uf^2e^{-2im\theta }& -i\Omega \partial _\theta +2Uf^2\end{array}\right).$$ In the rest of this work we wish to study the dynamics contained in Eq. (27). The simplest way to do so is to find a suitable basis in which Eq. (27) becomes diagonal or almost diagonal. In other words, we want a set of vectors, $`\stackrel{}{W}_k^t=(u_k(r),v_k(r))`$, such that $$\lambda _k\stackrel{}{W}_k=\mathcal{B}\stackrel{}{W}_k.$$ (38) If $`\mathcal{B}`$ has such a diagonal Jordan form, then the perturbation evolves simply as $$\stackrel{}{W}=\sum _kc_ke^{-i\lambda _kt}\stackrel{}{W}_k,$$ (39) $$\alpha (\stackrel{}{r},t)=\sum _kc_ke^{-i\lambda _kt}u_k(\stackrel{}{r}).$$ (40) On the other hand, the lack of a diagonal form, or the existence of complex eigenvalues, leads to instability in a way that we will make precise later. Associated with Eq. (27) there is an energy functional, $$E_2(\alpha )=\int \left(2\overline{\alpha }H_0\alpha +\psi _0^2\overline{\alpha }^2+\overline{\psi _0}^2\alpha ^2+4|\psi _0|^2\alpha \overline{\alpha }\right),$$ (41) and a constrained energy functional $$\mathcal{F}_2(\alpha )=E_2(\alpha )-\mu \int |\alpha |^2,$$ (42) which are the $`𝒪(ϵ^2)`$ terms in the expansion of (8) and (9), i.e., the energy introduced into the system by the perturbation. If a diagonal Jordan form like that of (38) is possible, then it is easy to check that the second functional becomes diagonal too, $$\mathcal{F}_2(\alpha )=\sum _k|c_k|^2\lambda _kG(\stackrel{}{W}_k),$$ (44) $$G(\stackrel{}{W}_k)=\int |u_k|^2-\int |v_k|^2.$$ (45) If the stationary state, $`\psi _0`$, is a local minimum of the energy subject to the constraint of fixed norm (3), then $`\mathcal{F}_2`$ must be positive for all perturbations, which has serious implications for the eigenvalues and eigenstates. We will return to this later. When studying the condensate using tools from Quantum Field Theory, one may try a similar procedure , which is known as Bogoliubov’s theory. In that framework, $`\overline{\alpha }`$ and $`\alpha `$ are linear operators on a Fock space, and one seeks an expansion of these operators in terms of others that diagonalize the energy functional (41) and the evolution equations (27). The resulting equations for the coefficients are known as Bogoliubov’s equations and correspond to equations (38) for $`u_k`$ and $`v_k`$.
### B Operational procedure It is now useful to expand $`\alpha `$ and $`\overline{\alpha }`$ in states of fixed vorticity, so that the modes separate into subspaces according to their vorticities $$\stackrel{}{W}_i^{(n)}=\left(\begin{array}{c}u_i^{(n)}(r)e^{in\theta }\\ v_i^{(2m-n)}(r)e^{i(2m-n)\theta }\end{array}\right).$$ (46) These subspaces are not mixed by the action of the operators in (27), and we can define their restrictions to these subspaces $$\mathcal{B}_n(\Omega )=\sigma _z\mathcal{H}_n(\Omega ),$$ (48) $$\mathcal{H}_n(\Omega )=\mathcal{H}_{0n}(\Omega )+\mathcal{U}_n,$$ (49) $$\mathcal{H}_{0n}=\left(\begin{array}{cc}H^n-(n-m)\Omega & \\ & H^{2m-n}-(m-n)\Omega \end{array}\right),$$ (52) $$H^n=-\frac{1}{2}\Delta +\frac{1}{2}(\gamma ^2r^2+z^2)+\frac{n^2}{2r^2}+Uf^2-\mu (0),$$ (53) $$\mathcal{U}_n=Uf^2\left(\begin{array}{cc}1& 1\\ 1& 1\end{array}\right).$$ (56) With these definitions the diagonalization procedure (38) becomes $$\lambda _k^n\stackrel{}{W}_k^{(n)}=\mathcal{B}_n(\Omega )\stackrel{}{W}_k^{(n)},\qquad n\ge m.$$ (57) If $`G(\stackrel{}{W}_k^{(n)})>0`$, then $`(u_k^{(n)},v_k^{(2m-n)})`$ is a Bogoliubov mode with energy $`ϵ=\lambda _k^{(n)}`$ and vorticities $`(n,2m-n)`$, whereas if $`G(\stackrel{}{W}_k^{(n)})<0`$ then the excitation is $`(v_k^{(2m-n)},u_k^{(n)})`$ with energy $`ϵ=-\lambda _k^{(n)}`$. As a rule of thumb, the $`u`$ function must always be the one with the largest contribution, which is formally stated as $`G(\stackrel{}{W})>0`$. In the following we will refer to these branches of the spectrum by the pairs of quantum numbers $`(n,2m-n)`$ and $`(2m-n,n)`$, respectively. One may find, in principle, two kinds of solutions. First, the Bogoliubov operator may have complex eigenvalues, or even fail to have a diagonal Jordan form. In both cases we speak of *dynamical instability*, because an arbitrarily small perturbation departs from the original state exponentially or polynomially in time. Second, the linearized operator may have only real eigenvalues, which should be interpreted as the change of energy of the condensate due to excitations \[see Eq. (42)\]. If $`\lambda >0`$ the state under study, $`\psi _0`$, is a local minimum of the energy functional (6) with respect to this family of perturbations; the $`\lambda =0`$ case corresponds to the existence of degeneracy in the system; and finally if $`\lambda <0`$ the system is said to be energetically unstable, i.e., excitations are energetically favorable and the state is not a local minimum of the energy. All five of the cases described above have the same implications for the stability of Eq. (4), which is a simple partial differential equation for an order parameter, and for the more complete Bogoliubov theory, where the perturbations are regarded as many-body corrections and involve more degrees of freedom. Nevertheless, it must be remarked that of the two types of instability that can be found, i.e., dynamical and energetic instabilities, the second is less harmful, because it only affects the dynamics when there is some kind of dissipation that drives the system through the unstable branch. And even then the lifetime of the system can be significant if the intensity of the destabilizing mode is small compared to the typical times of evolution. ### C Numerical procedure We have discretized Eq. (57) in a basis which is essentially the same one we used to solve the stationary GPE.
To be more precise, the expansion is as follows $$\stackrel{}{W}_i^{(n)}=\sum _ka_k\left(\begin{array}{c}P_k^{(n)}\\ 0\end{array}\right)+\sum _lb_l\left(\begin{array}{c}0\\ P_l^{(2m-n)}\end{array}\right).$$ (58) Here we have used the index convention explained above. In this basis the operator $`\mathcal{H}_{0n}`$ is diagonal, while the operator $`\mathcal{U}_n`$ can be calculated either by means of integrals of the wavefunction itself in position representation, or by using a four-index tensor similar to the one in Eq. (16). In any case, the equations are always linear, so the study of the Bogoliubov spectrum consists in building and diagonalizing a large matrix of real numbers. Even though the procedure is quite simple, the matrices that one must build in order to resolve the case of strong interaction are very large and tend to exhaust computational resources. To be able to reach large values of the nonlinearity we have had to work in a subspace of states with even parity with respect to the Z axis. This way we could find the excitations with lowest energy for different vorticities, at the cost of missing those with odd parity, which are more energetic anyway . ### D Analytical results The study explained above does not have to be performed for all of the Bogoliubov operators in all possible situations. Here we present several important results regarding when Eq. (57) may imply destabilizing modes. Lack of exponential instabilities in the Bogoliubov theory.- Any eigenvalue $`\lambda `$ satisfying Eq. (57) with $`G(\stackrel{}{W})\ne 0`$ must be real. Eigenstates with $`G(\stackrel{}{W})=0`$ may involve complex eigenvalues, but these are spurious and are introduced by the linearization procedure. The first part is shown simply by projecting the left and right hand sides of Eq. (57) onto the vector $`\stackrel{}{W}_i^{(n)}`$. Omitting the indices, the result is $$\lambda _n\int (|u|^2-|v|^2)=\int \left(\overline{u}H^nu+\overline{v}H^{2m-n}v\right)+\int Uf^2|u+v|^2-\int (n-m)\Omega (|u|^2+|v|^2).$$ (59) The second part is more subtle. To prove it we must remember that solutions of Eq. (27) are stationary points of the action , $`S=\int L(t)dt`$, corresponding to the following Lagrangian density $$L=\frac{i}{2}(\alpha \overline{\alpha }_t-\overline{\alpha }\alpha _t)+\mathcal{F}_2(\alpha ).$$ (61) Using (44) it is easy to prove that the $`G(\stackrel{}{W})=0`$ modes are null modes which do not appear in the Lagrangian, and thus are not affected by the dynamics. It must be remarked that this result characterizes the possible eigenvalues, but does not guarantee that $`\mathcal{B}_n`$ has a Bogoliubov diagonalization. Sufficient condition for stability.- If the linearized Hamiltonian $`\mathcal{H}_n`$ is positive definite, then $`\mathcal{B}_n`$ may be diagonalized, all of its eigenvalues are positive real numbers, and there are neither dynamical nor energetic instabilities. To prove this theorem one only needs to show that there is a one-to-one correspondence between the eigenfunctions of $`\mathcal{H}_n^{1/2}\sigma _z\mathcal{H}_n^{1/2}`$ and the eigenfunctions of $`\sigma _z\mathcal{H}_n`$, so that $$\mathcal{H}_n^{1/2}\sigma _z\mathcal{H}_n^{1/2}|n\rangle =\lambda |n\rangle ,$$ (62) if and only if $$\sigma _z\mathcal{H}_n\left(\mathcal{H}_n^{-1/2}|n\rangle \right)=\lambda \left(\mathcal{H}_n^{-1/2}|n\rangle \right).$$ (63) Then one uses this result to show that the eigenvalue in (57) must be positive.
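The reality part of this statement is easy to illustrate numerically. The toy below (an illustration only, not the paper's computation) builds a random symmetric positive definite $`\mathcal{H}`$ and verifies that the spectrum of $`\sigma _z\mathcal{H}`$ is real, which follows from the similarity of $`\sigma _z\mathcal{H}`$ to the symmetric matrix $`\mathcal{H}^{1/2}\sigma _z\mathcal{H}^{1/2}`$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # block size; full matrix is 2n x 2n

# Random symmetric positive definite "linearized Hamiltonian" H
A = rng.standard_normal((2*n, 2*n))
H = A @ A.T + 2*n*np.eye(2*n)           # the shift guarantees H > 0

sigma_z = np.diag([1.0]*n + [-1.0]*n)   # sigma_z on the doubled space
lam = np.linalg.eigvals(sigma_z @ H)    # spectrum of the "Bogoliubov operator"

assert np.all(np.linalg.eigvalsh(H) > 0)            # H really is positive
print("max |Im(lambda)| =", np.abs(lam.imag).max()) # ~1e-13: all real
```

At the matrix level the eigenvalues of $`\sigma _z\mathcal{H}`$ come in pairs of opposite sign; it is the $`G(\stackrel{}{W})>0`$ rule stated earlier that selects the physical branch, whose excitation energies are then positive.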
Stability in stationary traps.- In Eq. (57), if $`\Omega =0`$ and $`n>3m`$, then the linearized Hamiltonian $`\mathcal{H}_n`$ is positive, and the Bogoliubov operator $`\mathcal{B}_n`$ can be diagonalized and is also positive. Furthermore, if $`n>m`$ then any real eigenvalue is positive, $`\lambda >0`$. The demonstration has four steps. First, one takes any value of $`n`$ satisfying that condition and proves that $`H^{2m-n}>H^m`$ and $`H^n>H^m\ge 0`$. Second, this is used to prove that $`\mathcal{H}_{0n}>\mathcal{H}_{0m}`$. Third, it is shown that $`\mathcal{U}_n`$ is positive, which altogether implies $`\mathcal{H}_n>0`$. The last assertion may be easily checked with the help of (59). The preceding two theorems imply that in a stationary trap any mode with negative energy must be contained in the $`(0,2m),\ldots ,(m,m)`$ families, and any dynamical instability must lie in $`(0,2m),\ldots ,(3m,-m)`$. Thus we need only diagonalize a finite number of operators to make sure the system is stable or unstable. This result is an extension of the one obtained in Ref. , where a sufficient condition for stability was found to be $`n^2\ge 4m^2`$, without taking into account possible complex eigenvalues. Local stability under rotation.- In Eq. (57), the operator $`\mathcal{B}_n(\Omega )`$ exhibits a linear dependence on $`\Omega `$, $$\mathcal{B}_n(\Omega )=\mathcal{B}_n(0)-(n-m)\Omega .$$ (64) While the wave functions of the modes are the same as those of the stationary trap, the energies of the excitations suffer a global shift that depends on the vorticity $$\lambda (\Omega )=\lambda (0)-(n-m)\Omega .$$ (65) In general, the influence of these shifts has to be checked numerically. It is easy to show, however, that the shift is positive for $`n<m`$, which means that the possibly negative eigenvalues in the range $`0<n<m`$ can be suppressed if $`\Omega `$ is large enough. Even more, as the shift is a real number, if one demonstrates that there are no dynamical instabilities in the stationary trap, then there will be no dynamical instabilities in the rotating trap either. ### E Numerical results Summing up, from a practical point of view, the issue of stability consists of two different steps. The first is the search for a stationary solution of the GPE with the appropriate vorticity, which we have already performed in Sect. II, and the second is the study of the spectrum of the Bogoliubov operators for this particular state. Stability of the $`m=1`$ vortex-line in a stationary trap.- In this case of unit charge, one only has to study a single operator, $`\mathcal{B}_0`$, to know whether the system is stable. This calculation provides us with the branch of the spectrum of excitations characterized by the quantum numbers $`(0,2)`$ and $`(2,0)`$, as already explained. We have done this for a wide range of nonlinearities in the absence of rotation, $`\Omega =0`$, and the first conclusion is that the Bogoliubov operator has a diagonal Jordan form with all eigenvalues real. In Fig. 4 we show a selected set of the eigenvalues of the Bogoliubov operator, both for a spherically symmetric trap and for an elongated trap. In these pictures one sees several things. First, there are two constant eigenvalues, $`\lambda =1`$, which correspond to oscillations of the vortex line along the Z axis. Second, there is a single neutral mode, $`\lambda =0`$, only for the spherically symmetric trap; it corresponds to the symmetry of rotation of the condensate around an axis in the XY plane. The symmetry and the mode disappear when $`\gamma =2`$ \[see Fig. 4\]. And finally there is at least one negative eigenvalue, $`\lambda <0`$ (more in the case of an elongated trap), which is responsible for the energetic destabilization of the system.
The largest contribution to this destabilizing mode is a wavefunction captured in the vortex line which has zero vorticity (i.e., it is a *core* mode) \[see Fig. 5\], as was qualitatively predicted by Rokhsar in Ref. . We must remark that the number of unstable modes increases with the geometry factor: the more elongated the trap is, the easier it is to transfer energy from the vortex to the core plus longitudinal excitations. In other words, for $`\gamma \le 1`$ (spherical or “pancake” traps) there is only one negative eigenvalue, which corresponds to an excitation with a different vorticity than the unperturbed function, while for $`\gamma >1`$ we still have that mode, plus some more which are excited with respect to the Z axis. As a consequence, if the experiment is subject to dissipation and these unstable modes play a significant role in the dynamics, then the more elongated the trap is, the less stable the vortex will be. In Fig. 6 we also show the lowest eigenvalues of the families $`(-1,3)`$, $`(1,1)`$, $`(0,2)`$, $`(2,0)`$ and $`(-2,4)`$, that is, excitations whose main contribution is an eigenstate of $`L_z`$ with eigenvalue $`m=0,\pm 1,\pm 2`$. In these pictures one sees that subspaces with excitations of the same vorticity but opposite sign also have different energies, a phenomenon which is solely due to the interaction. Stability of $`m=1`$ vortex lines in rotating traps.- It was already proved in Eq. (64) that, as the effect of rotation is gradually turned on, the modes with $`n<m`$ and with $`n>m`$ are shifted up and down in the spectrum, respectively. There remained the question of whether the shift is enough to stabilize the vortex states, and the answer is yes, according to the numerical experiments. First, as shown in Fig. 7, the negative eigenvalue is slightly smaller in absolute value than the stabilizing frequency, $`|\lambda _0|<\Omega _1`$, which implies that for $`\Omega >\Omega _1`$ the energetically unstable branch with vorticity $`m=0`$ disappears. And second, the eigenvalues of $`\mathcal{B}_n`$ for $`n>m`$ are found to be larger than $`(n-m)\Omega _1`$. In consequence, for at least the interval $`[\Omega _1,\Omega _2)`$ all of the $`\mathcal{H}_n`$ operators are positive and the vortex with unit charge is a local minimum of the energy functional. In any case the shifts are always real, which implies that the $`\mathcal{B}_n`$ operators remain diagonalizable, with real eigenvalues and without dynamical instabilities. Stability of the $`m=2`$ vortex line.- Another interesting configuration is the $`m=2`$ multicharged vortex line. Here one suspects that a configuration with several vortices of unit charge has less energy than a single multicharged vortex under all circumstances. In other words, the latter must always be energetically unstable. This intuitive perception is confirmed by the numerics. First, the diagonalization of $`\mathcal{B}_1`$ reveals that this operator has at least one negative eigenvalue, while $`\mathcal{B}_0`$ has both negative eigenvalues and a pair of complex eigenvalues which, as we saw above, do not participate in the dynamics and must be discarded. Regarding the negative eigenvalues, they do not decrease with the nonlinearity, but are always larger in absolute value than their linear limits. This implies that there are always negative eigenvalues which cannot be suppressed by any rotation below the critical value, $`\Omega _c>\Omega >\Omega _2`$.
The immediate consequence of this linear stability analysis is that, since the linearization of the energy (41) is not positive, the $`m=2`$ vortex-line is never a local minimum of the energy. This is true even in the parameter interval, $`[\Omega _2,\Omega _3)`$, in which it has less energy than the rest of the stationary states of well defined symmetry. If the $`m=2`$ ground state is not a minimum, and the other symmetric states have more energy, we can conclude that the minimum of the energy functional in the rotating trap with $`\Omega \in [\Omega _2,\Omega _3)`$ must be a state which is not symmetric with respect to rotations . A similar analysis can be performed for the stationary states with $`m=3,4,\ldots `$, which extends this result to larger rotation frequencies, all below the critical one. ### F Lyapunov stability Roughly speaking, a solution of Eq. (4) is Lyapunov stable when every perturbed solution which starts close enough to the original wave remains close to it throughout the evolution. The concepts of Lyapunov stability and linear stability are related, but the latter does not imply the former, as it is only defined in the limit of infinitesimal perturbations. Studying the Lyapunov stability of Eq. (4) theoretically is a difficult task that should be the subject of further investigation. In the meantime we have performed an “empirical” study of the Lyapunov stability of the stationary solutions with $`m=1`$ and $`m=2`$ vorticities, by simulating numerically how they evolve under small perturbations and for long times. The simulation was performed with a three-dimensional split-step pseudospectral method like the one of Ref. , using an $`80\times 80\times 80`$ point grid to study both the $`\gamma =1`$ and $`\gamma =2`$ problems; a one-dimensional sketch of this propagation scheme is given below. The main result of this complementary work is that both the unit charge vortex line and the multicharged vortex line are stable to perturbations which involve the destabilizing modes defined by (57). For example, one may add a small contribution ($`0.5\%`$) of a core mode to the $`m=2`$ vortex, with the result that the vortex line is split into two unit charge vortex lines, which rotate but remain close to the origin. We must remark that, although these simulations only work for finite times, which are dictated by the precision of the scheme and the computational resources, these times are typically 20 or 30 periods of the trap, which is much longer than any of the time scales that one may theoretically associate with the destabilization process (i.e., the negative or complex eigenvalues of Eq. (57)). In the end, what this type of simulation reveals is that the $`m=1`$ and $`m=2`$ stationary states are energetically unstable, but that this has no influence on the dynamics unless some other “mixing” or dissipative terms participate in the model.
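For reference, here is a minimal one-dimensional split-step (Strang) integrator for the adimensionalized GPE. It only illustrates the scheme named above; the grid, time step and interaction strength are assumed toy values, not those of the 3D runs:

```python
import numpy as np

# 1D split-step Fourier for i psi_t = [-(1/2) d2/dx2 + V + U |psi|^2] psi
nx, L, dt, steps = 256, 16.0, 1e-3, 5000
x = np.linspace(-L/2, L/2, nx, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(nx, d=L/nx)   # Fourier wavenumbers
V = 0.5*x**2                             # harmonic trap
U = 100.0                                # toy interaction strength

psi = np.exp(-x**2/2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*(L/nx))  # unit norm

expK = np.exp(-0.25j*dt*k**2)            # half-step kinetic propagator
for _ in range(steps):
    psi = np.fft.ifft(expK*np.fft.fft(psi))        # kinetic half step
    psi *= np.exp(-1j*dt*(V + U*np.abs(psi)**2))   # full potential step
    psi = np.fft.ifft(expK*np.fft.fft(psi))        # kinetic half step

print("norm after evolution:", np.sum(np.abs(psi)**2)*(L/nx))  # stays ~1
```

Each step is unitary, so the norm is conserved to machine precision, which makes long runs of many trap periods feasible.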
## IV Conclusions We have studied the vortex solutions of a dilute, nonuniform Bose condensed gas as modeled by the Gross-Pitaevskii equation (4), both in a stationary, axially symmetric trap and subject to rotation (or a uniform magnetic field). First, we have searched for solutions of Eq. (5) that have the lowest energy and are also eigenstates of the third component of the angular momentum operator, $`\psi (r,z,\theta )=f(r,z)e^{im\theta }`$, both in a stationary trap and in a rotating trap, and from small to very large nonlinearities. It has been found that a nonzero angular speed (or magnetic field) is necessary in order to turn a vortex-line state into a minimum of the energy functional with respect to other states of well-defined vorticity. However, the question of whether the minimum of the energy must have a well defined vorticity remains open. Next we have studied the stability of these stationary solutions of the GPE. We have formulated a set of coupled equations that describe both the linearization of the GPE around a stationary solution and Bogoliubov’s corrections to the mean field theory that describes the condensate. It has been proved that the problem cannot exhibit dynamical instabilities of an exponential nature, along with several other theorems that describe the phenomenology associated with the possible instabilities. The perturbative equations have been solved numerically for stationary states with $`m=1`$ and $`m=2`$ vorticities. In both cases it has been found that the only instability is of an energetic nature, being limited to a small number of modes whose nature had already been predicted in . For the vortex with unit charge we have found that this instability may be suppressed by rotating the trap at a suitable speed, and that even when the trap is stationary, it is expected to play no significant role in the dynamics unless there is enough dissipation to take the system through the unstable branch. On the other hand, the linear stability analysis for the $`m=2`$ multicharged vortex reveals that the energetic instability may never be suppressed, and that this configuration is never a minimum of the energy functional, even though its lifetime is, once more, only conditioned by possible dissipation. The last and probably most important conclusion of this work is that in the rotating trap, for $`\Omega >\Omega _2`$, the state of minimum energy is not an eigenstate of the $`L_z`$ operator, and thus it is not symmetric with respect to rotations. A similar result has been found in Ref. by means of a minimization procedure which is only justified in the limit of very small $`U`$, while our demonstration remains valid for all nonlinearities, as far as the linearization procedure may be carried out. From an experimental point of view, this work has several implications. First, there is the clear conclusion that vortex lines with unit charge may be produced by rotating the trap at a suitable speed and then cooling the gas. Second, once rotation is removed, these vortices will survive for a long time if dissipation is small. Third, the multicharged vortices are not minima of the energy functional, and thus it will be difficult to produce them by cooling a rotating gas. And finally, if these multicharged vortices are produced by some other means, such as quantum engineering, then we can assert that their lifetime will depend only on the intensity of dissipation, whose effect is to take the system either to the $`m=0`$ ground state if $`\Omega <\Omega _1`$, to the unit charge vortex-line state if $`\Omega _1<\Omega <\Omega _2`$, or to a symmetry-less multicharged state if $`\Omega >\Omega _2`$ (a phenomenon which is referred to as splitting in the literature). The numerical results presented in this paper have been made possible by the use of a powerful Galerkin spectral method, optimized to allow the consideration of thousands of modes, which is a step forward with respect to previous analyses. ###### Acknowledgements.
This work has been partially supported by DGICyT under grants PB96-0534 and PB95-0389. We thank Prof. Cirac, from the Institut für Theoretische Physik of Innsbruck, for proposing the problem to us and for helping in the development of the stability theorems.
# The Importance of Static Correlation in the Band Structure of High Temperature Superconductors Jason K. Perry First Principles Research, Inc. 8391 Beverly Blvd., Suite #171, Los Angeles, CA 90048 www.firstprinciples.com J. Phys. Chem., in press. Abstract: Recently we presented a new band structure for La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> and other high temperature superconductors in which a second narrow band was seen to cross the primary band at the Fermi level. The existence of this second Fermi level band is in complete disagreement with the commonly accepted LDA band structure. Yet it provided a crucial piece of physics which led to an explanation for superconductivity and other unusual phenomena in these materials. In this work we present details as to the nature of the failure of conventional methods in deriving the band structure of the cuprates. In particular, we use a number of chemical analogues to describe the problem of static correlation in the band structure calculations and show how this can be corrected, with the predictable outcome of a Fermi level band crossing. Introduction. Since their discovery more than a dozen years ago,<sup>1</sup> the cuprate high temperature superconductors have proven to be among the most unusual and intriguing materials devised this century. While their most obvious and important characteristic is that they superconduct at temperatures far in excess of the commonly accepted upper limit for conventional BCS superconductors, various experimental probes of their superconducting and normal state properties have revealed anomalous behavior of a much more general nature. NMR,<sup>2</sup> angle resolved photoemission (ARPES),<sup>3</sup> neutron scattering,<sup>4</sup> Josephson tunnelling,<sup>5</sup> and IR<sup>6</sup> measurements have all characterized these materials as extremely exotic. The materials can generally be described as having two-dimensional CuO<sub>2</sub> sheets sandwiched between other metal oxide sheets which serve as charge reservoirs.<sup>7</sup> In the case of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, the prototypical high temperature superconductor, the environment around each Cu is a distorted octahedron, with the apical O’s, which belong to the La/Sr/O planes, further from the Cu center than the in-plane O’s. When the material is undoped, $`x=0`$, the charge on the La is formally +3, the charge on each O is formally -2, and the charge on the Cu is formally +2. The Cu(II) is expected to be in its open-shell $`d^9`$ configuration, with the La and O ions closed shell. This leads to the existence of a “half-filled band” from simple electron counting arguments. Upon doping, substitution of La(III) with Sr(II), Cu(III) ions are formally created as more electrons are removed from that “half-filled band”. Superconductivity is observed over the very narrow doping range of approximately $`x=0.10-0.25`$, with optimal doping ($`T_c=39K`$) at $`x=0.15`$.<sup>8</sup> From early LDA band structure calculations it was generally concluded that the materials were indeed very two-dimensional.<sup>9</sup> A Fermi surface arose from a single half-filled band composed of the anti-bonding arrangement of the Cu $`d_{x^2-y^2}`$ and O $`p_\sigma `$ orbitals in the signature CuO<sub>2</sub> planes, confirming simple expectations. However, this band structure poses a great problem for physicists, since there is virtually nothing remarkable about it that would suggest exotic superconducting properties.
This has led to the development of a rather odd attitude toward these LDA calculations. It is clearly agreed that they are missing some crucial physics. Beyond that, the overwhelming collection of unusual data which characterizes these materials has led physicists to agree only that this missing physics must be deeply complicated. Somehow, in spite of these deficiencies, the qualitative picture of the LDA band structure has effectively become conventional wisdom. Yet Tahir-Kheli and Perry<sup>10,11</sup> recently offered a new theory of high temperature superconductivity which is remarkably simple and explains substantially more than all previous theories. We showed that much of the confusion about these materials stems from incorrect assumptions about their band structure. The LDA band structure calculations are based on the mean-field approximation, which is known to break down in the limit of weakly interacting particles. Such is the case for the cuprates, for which it has been well accepted that many-body effects (or dynamic correlation) are important. Correlation has been introduced in some models to correct the problem, but to our knowledge this has always been done in a limited way, applying the correction only to the three bands produced by the Cu $`d_{x^2-y^2}`$ and two O $`p_\sigma `$ orbitals.<sup>12</sup> These three band Hubbard models, which are often reduced to one band Hubbard models, ignore the effect that correlation has on the other bands in the material, since it is widely assumed that they are irrelevant. Yet we have argued that the underlying assumption that the single particle band structure is qualitatively correct is in fact false, and such a limited approach to the incorporation of correlation actually misses its most important consequence: that the relative energy of the half-filled band changes with respect to the full bands. This is due to the improper description of static correlation in the LDA band structure. In our model, where the correlation correction is applied more universally, the effect is so dramatic that a second band appears at the Fermi level. This is shown in Figure 1. This new band structure still has the approximately half-filled 2-D Cu $`d_{x^2-y^2}`$/O $`p_\sigma `$ band, but a second 3-D Cu $`d_{z^2}`$/O $`p_z`$ band is seen at the Fermi level as well, such that electrons are removed from both bands upon doping. Significantly, we identified a symmetry-allowed Fermi level crossing of the two bands which we showed was the crucial element in understanding the physics of these materials. This band crossing allows for the formation of a new type of interband Cooper pair, representing a simple twist on the conventional BCS theory of superconductivity. Moreover, the wealth of experimental data which demonstrates more general anomalous behavior can easily be explained by this unusual band structure, and in a number of cases has already been quantitatively reproduced.<sup>10,11,13,14</sup> In this work, we present arguments as to the nature of the correlation problem in conventional LDA calculations and why correcting this problem intuitively leads to the new band structure. We develop these arguments from a chemist’s perspective, using a number of familiar molecular systems to illustrate various aspects of the correlation problem. In particular, the chemistry of H<sub>2</sub>, benzene, and the Cu ion dimer will be discussed, leading up to a discussion of the band structure of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>.
The Problem with H<sub>2</sub> Dissociation To understand the basic problem with the LDA band structure calculations of the cuprates, it is only necessary to consider the fundamental problem of dissociation consistency in single configuration based methods.<sup>15</sup> In Figure 2 we show the dissociation curves for H<sub>2</sub> as calculated at the Hartree-Fock (HF) and B3LYP<sup>16</sup> density functional (DFT) levels, using both a restricted spin and symmetry approach and an unrestricted spin and symmetry approach. For both methods (and the case is the same for other DFT functionals) the restricted approach leads to dissociation to an excited state description of two H atoms. The unrestricted approach leads to proper dissociation. This behavior is well understood and represents the primary motivation behind the development of methods such as Generalized Valence Bond (GVB).<sup>17</sup> The problem with the restricted approach is that two electrons are forced to occupy the same orbital (the $`\sigma _g`$ orbital in the case of H<sub>2</sub>). This is a fine approximation near the equilibrium bond length, and indeed both the restricted and unrestricted approaches lead to the same state in this region. However, upon dissociation, forcing two electrons to occupy the same orbital is clearly not appropriate, since the local representation of this state can be seen to be 50% covalent (the correct dissociation limit) and 50% ionic (an excited state). Explicitly, that is $$\begin{array}{cc}\hfill \mathrm{\Psi }_g=& (\sigma _g)^2\hfill \\ \hfill =& \frac{1}{2}(1s(H1)+1s(H2))^2\hfill \\ \hfill =& \frac{1}{2}((1s(H1))^2+(1s(H2))^2+(1s(H1))^1(1s(H2))^1+(1s(H2))^1(1s(H1))^1)\hfill \\ \hfill =& \frac{1}{\sqrt{2}}(\mathrm{\Psi }(IONIC)+\mathrm{\Psi }(COVALENT))\hfill \end{array}$$ For the HF wavefunction, the energy of this state at infinite separation is $$E_g(r=\infty )=2E_{1s}+\frac{1}{2}J_{1s,1s}$$ where $`E_{1s}`$ is the ground state energy of an H atom and $`J_{1s,1s}`$ is the self-Coulomb energy associated with the H $`1s`$ orbital. The situation is similar for DFT, where the exchange and correlation functionals will cancel some but not all of the self-Coulomb term. As a result the error for HF (7.1 eV) is seen to be larger than that for B3LYP (2.8 eV), but the error for B3LYP and other DFT functionals is nevertheless non-zero. The unrestricted approach overcomes the problem of the self-Coulomb energy by breaking spin and symmetry and localizing the alpha spin electron on one H atom and the beta spin electron on the other. As a result, there is dissociation to the proper $`\mathrm{\Psi }(COVALENT)`$ limit. Alternatively, a method which introduces static correlation, such as GVB (or more generally CASSCF), overcomes this problem without breaking spin by describing the bond with two configurations as $$\mathrm{\Psi }_{GVB}=c_1(\sigma _g)^2-c_2(\sigma _u)^2$$ where $`c_1^2+c_2^2=1`$. The energy upon dissociation is $$E_{GVB}(r=\infty )=c_1^2E_g(r=\infty )+c_2^2E_u(r=\infty )-c_1c_2J_{1s,1s}$$ Clearly, since $`E_g=E_u`$ upon dissociation, the optimal set of coefficients is $`c_1=c_2=\frac{1}{\sqrt{2}}`$. Hence the GVB wavefunction dissociates properly to $$E_{GVB}(r=\infty )=2E(1s)$$ While this is all very familiar, the point is that it is pertinent to the electronic structure of high temperature superconductors. In these materials the Cu(II) $`d^9`$ spins of the half-filled band are separated by 3.8 Å.
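A quick way to see why $`c_1=c_2=\frac{1}{\sqrt{2}}`$ is optimal: parameterizing the normalized coefficients as $`c_1=\mathrm{cos}\theta `$, $`c_2=\mathrm{sin}\theta `$ gives $`E_{GVB}(\infty )=E_g-\frac{1}{2}J_{1s,1s}\mathrm{sin}2\theta `$, which is minimized at $`\theta =\pi /4`$. The toy script below (illustrative only; $`E_g`$ and $`J`$ are arbitrary placeholder values, not computed integrals) confirms this numerically:

```python
import numpy as np

Eg = -0.5   # placeholder for the restricted dissociation-limit energy (a.u.)
J = 0.625   # placeholder for the self-Coulomb integral J_1s,1s (a.u.)

theta = np.linspace(0.0, np.pi/2, 100001)
c1, c2 = np.cos(theta), np.sin(theta)        # enforces c1^2 + c2^2 = 1
E = c1**2 * Eg + c2**2 * Eg - c1 * c2 * J    # uses E_g = E_u at r = infinity

i = np.argmin(E)
print("optimal c1 = c2 =", c1[i], c2[i])     # both ~ 1/sqrt(2) = 0.7071
print("energy lowering:", E[i] - Eg)         # ~ -J/2 = -0.3125
```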
At this separation, a breakdown in the mean-field approximation is expected, resulting in a substantial overestimate of the self-Coulomb term. Recognition of this has been the motivation behind calculations in which the La<sub>2</sub>CuO<sub>4</sub> unit cell has been doubled to allow for spin polarization.<sup>18,19</sup> In these calculations alpha and beta spins localize to alternating sites in the undoped material, thus removing the self-Coulomb term associated with the half-filled band, much as the unrestricted spin and symmetry calculations remove the self-Coulomb term from dissociated H<sub>2</sub>. The work of Svane<sup>18</sup> is particularly important in this regard, since it also accounts for the fact that the self-Coulomb term and the self-exchange and correlation terms do not completely cancel. As a solution, he applies a self-interaction correction (SIC) to those orbitals that can be well localized. While in context this is correct, and to some extent his calculations are in agreement with ours, as we show next, correlation of delocalized orbitals is important, too. Static Correlation in Benzene A more complicated example of static correlation is the case of aromatic benzene. At the HF level, there are three orbitals having the symmetries $`A_{2u}`$ and $`E_{1g}`$ under the $`D_{6h}`$ point group which represent the delocalized form of the three benzene $`\pi `$ orbitals. Yet the six atomic p<sub>π</sub> orbitals which form these three molecular orbitals have only a moderate overlap with each other. This leads to an overestimate of the self-Coulomb term associated with the bonds, which requires correlation of the type just described. The easiest way to introduce such correlation is through the GVB approach, in which symmetry is broken and the three delocalized HF $`\pi `$ orbitals are localized to three $`\pi `$ bonds corresponding to one of the two resonating Kekulé structures. Similarly, the three corresponding $`\pi `$ antibonding orbitals are localized, and the GVB wavefunction becomes $$\mathrm{\Psi }_{GVB}=(c_1(\pi _g(1))^2-c_2(\pi _u(1))^2)(c_1(\pi _g(2))^2-c_2(\pi _u(2))^2)(c_1(\pi _g(3))^2-c_2(\pi _u(3))^2)$$ The energy of the GVB wavefunction is 1.12 eV lower than that of the HF wavefunction using a 6-311G\*\* basis set.<sup>20</sup> This represents a lowering of 0.37 eV per bond, which can be directly related to a reduction in the self-Coulomb term associated with each bond. Additional correlation, to account for spin polarization of the bonds, can be introduced through the RCI wavefunction, which adds the single excitation configuration $`c_3(\pi _g)^1(\pi _u)^1`$ for each bond in the above equation for $`\mathrm{\Psi }_{GVB}`$, while also relaxing some inherent constraints on the GVB coefficients. This correlation effectively allows alpha and beta spins to separate and lowers the total energy by another 0.30 eV. Resonance can then be included by allowing all excitations between the bonds (i.e., all excitations of the six electrons within the space of the six GVB orbitals). This GVBCI wavefunction lowers the total energy by another 0.53 eV. Significantly, this GVBCI wavefunction is also strictly equivalent to the commonly used CASSCF wavefunction. The two are related by a simple transformation from the localized space (GVBCI) to the delocalized space (CASSCF). The very existence of this transformation implies that the correlation which exists in the GVBCI also exists in the CASSCF.
Since it is clear that the most important correlation in the GVBCI is that which reduces the self-Coulomb energy of the $`\pi `$ bonds, the same must be true of the CASSCF, although it is much less transparent. In other words, the correlation which reduces the self-Coulomb energy is independent of whether the orbitals are localized or delocalized. The presence of this same type of correlation in systems that are delocalized is often overlooked. In the case of the superconductors, methods that depend on the localization of orbitals in order to reduce the self-Coulomb energy<sup>18</sup> are in fact biased toward such well localized states, since they miss the fact that the energy can be similarly lowered by applying such correlation to states that cannot be well localized. This is not to say that undoped La<sub>2</sub>CuO<sub>4</sub> does not in fact have well localized spins, since the undoped material is clearly an antiferromagnet. But upon doping, when orbitals can no longer be easily localized, this type of correlation should not be expected to simply disappear. By our argument here, reduction of the self-Coulomb energy should be considered for both localized and delocalized orbitals in evaluating the band structure. The consequences of this are addressed in the next section. The Problem with Separated Cu Ions The ground state of Cu(I) is known to be $`^1S`$ $`d^{10}`$, the ground state of Cu(II) is known to be $`^2D`$ $`d^9`$, and the ground state of Cu(III) is known to be $`^3F`$ $`d^8`$.<sup>21</sup> While there is only one possible $`d^{10}`$ configuration for Cu(I), and the five possible $`d^9`$ configurations for Cu(II) are degenerate, for Cu(III) the ten different possible triplet $`d^8`$ configurations lead to different mixtures of the $`^3F`$ and higher energy $`^3P`$ states. Only the two configurations in which one hole is in the $`d_\sigma `$ orbital and the other is in a $`d_\delta `$ orbital lead to a pure $`^3F`$ state in a single reference description. Using a triple-$`\zeta `$ contraction of Hay and Wadt’s ECP basis set,<sup>22</sup> we calculate a second ionization potential (the difference between Cu(I) and Cu(II)) of 17.54 eV at the HF level and 20.65 eV at the B3LYP level, in comparison to the experimental value of 20.29 eV. Similarly, we calculate a third ionization potential (the difference between Cu(II) and Cu(III)) of 34.32 eV at the HF level and 37.06 eV at the B3LYP level, in comparison to the experimental value of 36.83 eV. Clearly, B3LYP is a suitable method for studying the Cu ions. Yet we find that when two Cu ions are low spin coupled and separated by a long distance, these methods have difficulty. As with H<sub>2</sub>, an unrestricted spin and symmetry approach will properly describe the two ions, but a restricted spin and symmetry approach fails. The nature of this failure is quite revealing, however, in how it relates to the band structure of the high temperature superconductors. Results of calculations on various Cu ion dimers are given in Table I. As can be seen, the energy of the Cu(I)$`+`$Cu(I) dimer, where both ions are $`d^{10}`$, is correct. The energy of the Cu(I)$`+`$Cu(II) dimer, where each ion is an average of $`d^9`$ and $`d^{10}`$, is also correct. However, the energy of the singlet state of Cu(II)$`+`$Cu(II) is high by 14.42 eV at the HF level and by 4.30 eV at the B3LYP level.
This state has the following orbital occupations

$$Cu(II)+Cu(II)=(xy_g)^2(xy_u)^2(xz_g)^2(xz_u)^2(yz_g)^2(yz_u)^2(z^2_g)^2(z^2_u)^2((x^2{-}y^2)_g)^2((x^2{-}y^2)_u)^0$$

As shown for H<sub>2</sub> and benzene, the error in the Cu(II)$`+`$Cu(II) energy can be unambiguously attributed to the lack of static correlation in the half-filled $`d_{x^2-y^2}`$ pair of orbitals, which leads to this copper dimer being described as 50% Cu(II)$`+`$Cu(II) and 50% Cu(I)$`+`$Cu(III). This state can be correctly described by the GVB or CASSCF method, or by breaking symmetry and spin in an unrestricted approach. Alternatively, changing the spin to triplet and singly occupying each of the two $`d_{x^2-y^2}`$ orbitals leads to the correct ground state. This Cu(II)$`+`$Cu(II) model by itself offers a good argument for what might be wrong with conventional LDA band structure calculations of the cuprate superconductors. Doubling the unit cell to allow breaking of symmetry and spin, with localization of the alpha and beta spins on alternating copper sites, may be one logical solution for understanding the undoped material. Alternatively, introducing more rigorous correlation with a Hubbard model of the isolated Cu $`d_{x^2-y^2}`$ / O p<sub>σ</sub> band may be another. However, when our model is taken one step further to consider Cu(II)$`+`$Cu(III), the most important aspect of the lack of static correlation in the half-filled band can be seen, and this point has received little attention until now. When one more electron is removed from the $`d_{x^2-y^2}`$ pair of orbitals to form Cu(II)$`+`$Cu(III), the doublet state is again described correctly even though it corresponds to an excited state configuration of Cu(III). The state is actually $`^{2}D`$ Cu(II) $`+`$ $`^{1}G`$ Cu(III), where the $`^{1}G`$ $`d^8`$ configuration of Cu(III) corresponds to having the $`d_{x^2-y^2}`$ orbital empty. We calculate the $`^{3}F\rightarrow{}^{1}G`$ excitation energy to be 4.29 eV at the HF level and 3.86 eV at the B3LYP level. However, when an electron is instead removed from the $`d_{z^2}`$ pair of orbitals, which should lead to a ground state description of $`^{2}D`$ Cu(II) $`+`$ $`^{3}F`$ Cu(III), the doublet coupling of the two ions is too high in energy by 15.67 eV at the HF level and 5.03 eV at the B3LYP level. Even after correcting the improper exchange interaction between the $`d_{z^2}`$ and $`d_{x^2-y^2}`$ electrons in this configuration, the HF energy is still 14.95 eV too high, and the B3LYP energy is still 4.48 eV too high. The difference between these two states of Cu(II)$`+`$Cu(III) can be understood in that removing an electron from the $`d_{x^2-y^2}`$ orbitals removes the problem with static correlation, whereas removing an electron from the $`d_{z^2}`$ orbitals does not. In the former case, there is only one electron remaining in the $`d_{x^2-y^2}`$ orbitals and it is shared equally between the two ions. In the latter case, there are still two electrons in the $`d_{x^2-y^2}`$ orbitals, and without proper correlation the self-Coulomb energy remains too high. In the end, this means that in starting with a half-filled set of $`d_{x^2-y^2}`$ orbitals in Cu(II)$`+`$Cu(II), there is an improper bias of 14.42 eV at the HF level and 4.30 eV at the B3LYP level toward removing an additional electron from $`d_{x^2-y^2}`$. However, there is actually a bias of 0.53 eV at the HF level and 0.18 eV at the B3LYP level against removing an electron from $`d_{z^2}`$.
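These energetics can be cross-checked against Table I. The short sketch below (using only the ionization potentials and excitation energies quoted in the text; the dictionary layout is ours) reconstructs the "exact" non-interacting dimer energies and the static-correlation errors of the restricted calculations:

```python
# Reconstruct the "exact" dimer entries of Table I from the calculated
# ionization potentials and the 3F -> 1G excitation energy, then recover
# the static-correlation errors of the restricted calculations.
IP2 = {"HF": 17.54, "B3LYP": 20.65}     # Cu(I) -> Cu(II), eV
IP3 = {"HF": 34.32, "B3LYP": 37.06}     # Cu(II) -> Cu(III), eV
exc = {"HF": 4.29,  "B3LYP": 3.86}      # Cu(III) 3F -> 1G, eV
calc22 = {"HF": 49.50, "B3LYP": 45.60}  # restricted singlet Cu(II)+Cu(II)
calc23 = {"HF": 67.53, "B3LYP": 62.74}  # restricted doublet Cu(II)+Cu(III/3F)

for lvl in ("HF", "B3LYP"):
    exact22 = 2 * IP2[lvl]              # 35.08 / 41.30 eV
    exact23F = IP2[lvl] + IP3[lvl]      # 51.86 / 57.71 eV
    exact23G = exact23F + exc[lvl]      # 56.15 / 61.57 eV (described correctly)
    print(lvl,
          "Cu(II)+Cu(II) error:", round(calc22[lvl] - exact22, 2),        # 14.42 / 4.30
          "Cu(II)+Cu(III/3F) error:", round(calc23[lvl] - exact23F, 2))   # 15.67 / 5.03
```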
In other words, the lack of correlation in the $`d_{x^2-y^2}`$ orbitals raises the energy of those particular orbitals with respect to all the other orbitals. The three models discussed here (H<sub>2</sub>, benzene, and the Cu ion dimer) suggest that static correlation needs to be considered in the band structure of the cuprate superconductors, that it needs to be applied to all orbitals regardless of whether or not they can be well localized, and that the primary result will surely be to lower the energy of the entire half-filled band with respect to the other filled bands.

The Importance of Static Correlation in the Band Structure of High Temperature Superconductors

We have chosen to study the band structure of optimally doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> with a Hubbard model which uses parameters derived from DFT calculations on a CuO<sub>6</sub> cluster. The details of the cluster calculations and the procedure for extracting the Hubbard parameters are given explicitly in Perry and Tahir-Kheli.<sup>11</sup> All parameters necessary to describe the Cu $`d_{x^2-y^2}`$ / O $`p_\sigma `$ and Cu $`d_{z^2}`$ / O $`p_z`$ bands were derived. These parameters include orbital energies, Coulomb and exchange energies, and orbital couplings. Our original set of parameters, published in that work, came from BLYP/6-31+G\* calculations (using an ECP on the Cu). We have since derived parameters from B3LYP/6-311+G\* calculations and found the resulting 2-D band structure (detailed below) to be qualitatively the same as that obtained with the earlier parameter set. However, we have also included a 3-D coupling in this new band structure, and as a result we can now calculate such experimental observables as the NMR Cu and O spin relaxation rates,<sup>13</sup> the ARPES Fermi surface, the neutron scattering, and the mid-IR absorption<sup>14</sup> with near quantitative accuracy, something that has not been done with any other band structure. The validity of the general approach can be tested by calculating the Hubbard model band structure within the mean-field approximation. The calculation must be done iteratively until self-consistency is achieved, because the orbital energies depend on the Coulomb and exchange fields, which depend on the orbital occupations, which in turn depend on the orbital energies. The first step is to calculate the orbital energies as a function of the orbital occupations. Under the mean-field approximation, this is

$$E_i=E_i^0-\underset{j}{\sum}(2-N_j)\left(J_{i,j}-\frac{1}{2}K_{i,j}\right)$$

where $`E_i^0`$ are the calculated orbital energies when all valence bands are full (formally La(III), Sr(II), Cu(I), and O(-II)), $`N_j`$ are the atomic orbital occupations, $`J_{i,j}`$ are the Coulomb terms between orbitals, and $`K_{i,j}`$ are the exchange terms. Details of how the long range Coulomb field is handled are given in the cited reference.<sup>11</sup> Once the orbital energies are determined, a Hubbard matrix is constructed at every k vector on a grid covering the first Brillouin zone; the eigenvectors and eigenvalues of each matrix are determined, corresponding to the orbitals and orbital energies at each k point; the Fermi level is adjusted such that the correct number of orbitals are occupied for the particular doping level; the atomic orbital occupations are then determined; and the process is repeated. It should be noted that in our model $`J_{i,i}=K_{i,i}`$, such that when an orbital is half-occupied its energy is $`E_i=E_i^0-\frac{1}{2}J_{i,i}`$.
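The iteration just described is straightforward to sketch. The following schematic uses a made-up two-orbital, one-dimensional tight-binding model purely to illustrate the loop; the actual orbital energies, J and K integrals, and Hubbard couplings are those derived in Ref. 11, not these placeholder values:

```python
import numpy as np

# Schematic self-consistency loop: orbital energies -> band structure ->
# occupations -> orbital energies. All parameters below are placeholders.
norb = 2
e0 = np.array([0.0, -0.13])                  # bare orbital energies E_i^0 (eV)
Jmat = np.array([[8.0, 2.0], [2.0, 8.0]])    # Coulomb J_ij, illustrative
Kmat = np.array([[8.0, 0.5], [0.5, 8.0]])    # exchange K_ij; note J_ii = K_ii
t = 1.0                                      # nearest-neighbor coupling
nelec = 3.0                                  # electrons per cell (doped)

kgrid = np.linspace(-np.pi, np.pi, 64, endpoint=False)
N = np.full(norb, nelec / norb)              # initial occupations

for it in range(200):
    E = e0 - (2.0 - N) @ (Jmat - 0.5 * Kmat)   # mean-field orbital energies
    eps, w = [], []
    for k in kgrid:                            # Hubbard matrix at each k
        H = np.diag(E) + 2.0 * t * np.cos(k) * (np.ones((norb, norb)) - np.eye(norb))
        vals, vecs = np.linalg.eigh(H)
        eps.extend(vals)
        w.extend((vecs**2).T)                  # orbital weight of each band state
    order = np.argsort(eps)
    nfill = int(nelec * len(kgrid) / 2.0)      # 2 electrons per filled band state
    occ = np.zeros(len(eps))
    occ[order[:nfill]] = 2.0                   # fill up to the Fermi level
    N_new = np.array(w).T @ occ / len(kgrid)   # new atomic orbital occupations
    if np.max(np.abs(N_new - N)) < 1e-8:
        break
    N = 0.5 * N + 0.5 * N_new                  # damped update for stability

print("converged occupations:", N)
```

The damped update in the last step is a common stabilization choice for self-consistent field loops of this kind.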
As shown in Figure 1a, using the mean-field approximation to determine orbital energies as above and constraining the model to a 2-D description of the material leads to a band structure which is nearly quantitatively identical to those published using conventional LDA band structure techniques.<sup>9</sup> A single, widely dispersing Cu $`d_{x^2-y^2}`$ / O $`p_\sigma `$ band is seen to cross the Fermi level. A second Cu $`d_{z^2}`$ / O $`p_z`$ band is seen to be several eV lower in energy. This good agreement effectively validates the procedure. It is interesting to note, however, that the bottom of the $`d_{z^2}`$ band is several eV below the bottom of the $`d_{x^2-y^2}`$ band, even though at k=(0,0) the $`d_{x^2-y^2}`$ orbital represents a non-bonding combination of the Cu orbitals, having no O $`p_\sigma `$ character at all, while the $`d_{z^2}`$ orbital has significant anti-bonding O $`p_z`$ character. Ligand field theory would suggest that the $`d_{z^2}`$ band should be higher in energy than the $`d_{x^2-y^2}`$ band at this k point, unless the $`d_{z^2}`$ atomic orbital is itself significantly more stable than the $`d_{x^2-y^2}`$ atomic orbital. This is indeed the case, but it cannot be explained by differences in the intrinsic $`E_i^0`$ atomic orbital energies for $`d_{z^2}`$ and $`d_{x^2-y^2}`$, since this difference is only 0.13 eV. The stabilization of the $`d_{z^2}`$ band with respect to the $`d_{x^2-y^2}`$ band appears only upon removal of electrons from the $`d_{x^2-y^2}`$ band. This is counterintuitive; exactly the opposite behavior would be expected from such basic principles as Hund's rule. It is, though, a direct result of the improper accounting of the self-Coulomb energy in the mean-field approximation for this strongly correlated system. This behavior is completely analogous to that seen for the Cu ion dimer discussed above. Thus we expect that correlation which reduces the self-Coulomb term of partially occupied orbitals would lower the energy of the Cu $`d_{x^2-y^2}`$ orbital with respect to the Cu $`d_{z^2}`$ orbital. Introducing static correlation into the band structure in a rigorous way is an extremely difficult problem. However, the effect of this correlation on the self-Coulomb term in the mean-field equation can easily be approximated. This is best seen by considering Figure 3 and asking what the self-Coulomb energy should be when a particular atomic orbital is half-filled. Figure 3 depicts a localized description of the Cu $`d_{x^2-y^2}`$ / O $`p_\sigma `$ band. Such localization can be exact only when the band is half-filled. The localization can still be approximately correct with the addition or removal of electrons if the ensuing delocalized states are viewed as arising from the resonance of localized states. Figure 3a shows the mean-field spin coupling in the CuO<sub>2</sub> plane, while Figure 3b shows an antiferromagnetic spin coupling which is relevant when the material is undoped. Upon doping, this antiferromagnetic order is destroyed and a correlated paramagnetic spin coupling such as that depicted in Figures 3c and 3d is expected. In the mean-field picture, when the Cu $`d_{x^2-y^2}`$ orbital is half occupied, the local spin is 50% alpha and 50% beta, leading to a self-Coulomb term which is $`\frac{1}{2}J`$. However, in both the antiferromagnetic and correlated paramagnetic pictures, when the Cu $`d_{x^2-y^2}`$ orbital is half occupied, a resonance exists between states that have a local spin in that orbital that is purely alpha or purely beta.
This picture is fundamentally different from that of the mean-field approximation and leads to a self-Coulomb term which is $`0J`$. From the arguments used to make the connection between the GVBCI and CASSCF descriptions of benzene, the same can be said of the Cu $`d_{z^2}`$ and O $`p_z`$ orbitals, even though localization of these orbitals is not as straightforward. That is, delocalized states must be viewed as arising from the resonance of very low symmetry localized states. So for the Cu $`d_{x^2-y^2}`$ and $`d_{z^2}`$ orbitals and the O $`p_z`$ orbital, the correlation corrected mean-field equation becomes

$$E_i=E_i^0-(2-N_i)J_{ii}-\underset{j\ne i}{\sum}(2-N_j)\left(J_{ij}-\frac{1}{2}K_{ij}\right),\qquad N_i>1$$

$$E_i=E_i^0-J_{ii}-\underset{j\ne i}{\sum}(2-N_j)\left(J_{ij}-\frac{1}{2}K_{ij}\right),\qquad N_i\le 1$$

Upon examination, it can easily be seen that if an orbital is half-occupied or less, the full self-Coulomb term is removed from $`E_i^0`$. The situation is a little less clearcut for the O $`p_\sigma `$ orbitals. In the antiferromagnetic picture of Figure 3b, alpha or beta spin is localized to alternating Cu sites, but as a result each O site is then 50% alpha and 50% beta. Thus, the self-Coulomb term is expected to be $`\frac{1}{2}J`$ for the half-occupied orbital, as it is under the mean-field approximation. In the correlated paramagnetic picture of Figures 3c and 3d, for the one O atom that lies between two spin paired Cu atoms, the self-Coulomb term also turns out to be $`\frac{1}{2}J`$. However, for the three other O atoms surrounding any particular Cu site, the self-Coulomb term is expected to be $`\frac{1}{4}J`$. This is because the uncorrelated spins between the two Cu atoms lead to spin on the O which is 25% pure alpha, 25% pure beta, and 50% half alpha/half beta. The latter term leads to the $`\frac{1}{4}J`$ Coulomb repulsion. On average then, when the O $`p_\sigma `$ orbital is half-occupied, the self-Coulomb term is $`\frac{3}{4}\times \frac{1}{4}J+\frac{1}{4}\times \frac{1}{2}J=\frac{5}{16}J`$. The correlation corrected mean-field equation for this orbital then becomes

$$E_i=E_i^0-\frac{11}{16}(2-N_i)J_{ii}-\underset{j\ne i}{\sum}(2-N_j)\left(J_{ij}-\frac{1}{2}K_{ij}\right),\qquad N_i>1$$

$$E_i=E_i^0-\left(\frac{5}{16}(2-N_i)+\frac{3}{8}\right)J_{ii}-\underset{j\ne i}{\sum}(2-N_j)\left(J_{ij}-\frac{1}{2}K_{ij}\right),\qquad N_i\le 1$$

This latter set of equations is clearly approximate and may vary substantially from that obtained from the exact wavefunction, which is of course unknown. We should note, therefore, that we have generated band structures with a variety of values for the extent of the self-Coulomb term removed from the O $`p_\sigma `$ $`E_i^0`$ atomic orbital energies, to test the importance of this term. For values ranging from $`\frac{1}{2}J`$ removed at half-occupancy to a full $`J`$ removed, no qualitative difference in the band structure was observed. We thus feel that the choice of $`\frac{11}{16}J`$ removed from the orbital energy for O $`p_\sigma `$ at half-occupancy is reasonable. The results of including this static correlation in the Hubbard model can be seen in Figure 1b. Here we present the two-dimensional band structure obtained with the newer B3LYP/6-311+G\* parameters. As occurs with the older BLYP/6-31+G\* band structure, the Cu $`d_{x^2-y^2}`$ / O $`p_\sigma `$ band is seen to be stabilized with respect to the Cu $`d_{z^2}`$ / O $`p_z`$ band. The change is so dramatic that the second band now lies just below the Fermi level at optimal doping, a rather robust effect.
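The piecewise rules above are easy to encode. A minimal sketch (the function name and the sample value of $`J`$ are ours, for illustration only) that also verifies that both branches of each rule agree at half-occupancy:

```python
# Correlation-corrected self-Coulomb accounting from the piecewise equations
# above. 'kind' selects the full correction (Cu d and O p_z orbitals) or the
# averaged 11/16 rule argued for the O p_sigma orbitals.
def self_coulomb_removed(N_i, J_ii, kind="full"):
    """Self-Coulomb energy subtracted from E_i^0 at occupation N_i (0..2)."""
    if kind == "full":                        # Cu d_{x2-y2}, d_{z2}, O p_z
        if N_i > 1.0:
            return (2.0 - N_i) * J_ii         # twice the mean-field (2-N)J/2
        return J_ii                           # full self-Coulomb term removed
    elif kind == "psigma":                    # O p_sigma
        if N_i > 1.0:
            return (11.0 / 16.0) * (2.0 - N_i) * J_ii
        return ((5.0 / 16.0) * (2.0 - N_i) + 3.0 / 8.0) * J_ii
    raise ValueError(kind)

J = 8.0  # illustrative self-Coulomb integral (eV), not a value from the paper
for kind in ("full", "psigma"):
    below = self_coulomb_removed(1.0 - 1e-9, J, kind)
    above = self_coulomb_removed(1.0 + 1e-9, J, kind)
    print(kind, round(below, 6), round(above, 6))  # branches agree at N_i = 1
```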
As we pointed out in our first published work on this subject, a symmetry allowed crossing of the two bands is observed very near the Fermi level.<sup>11</sup> For this newer set of parameters, it is just 0.024 eV below the Fermi level. The existence of such a crossing provides the unique opportunity for a new type of Cooper pair to form. In conventional BCS superconductors, pairs of electrons near the Fermi level form an attractive coupling when one of the electrons is in state k and the other is in state -k. With the existence of a Fermi level band crossing, such an attractive coupling can be formed between electrons in states k and -k where each of the electrons belongs to a different band. This new and simple twist on the conventional theory immediately provides an explanation for the $`d`$-wave gap observed in the Josephson tunneling<sup>5,10</sup> and ARPES.<sup>3,14a</sup> While our early work resorted to empirical modifications of the Hubbard model to achieve a band crossing at exactly the Fermi level, we have recently found that the introduction of a small 3-D coupling, on the order of 0.05-0.15 eV, between O $`p_z`$ orbitals of neighboring planes is enough to produce a Fermi level band crossing.<sup>14c</sup> This is shown in Figure 4. The crossing occurs in a limited area of the 3-D Brillouin zone, but this is all that is necessary for the formation of interband Cooper pairs. We should mention that several researchers have previously noted $`z^2`$ character near the Fermi level in spin-polarized band structure calculations on undoped La<sub>2</sub>CuO<sub>4</sub>,<sup>18,19</sup> so this new band structure should not come as a complete surprise, even though it is radically different from the band structure that has gained common acceptance. To our knowledge, though, no one has noted the band crossing before, and it is this that leads directly to the unusual physics of high temperature superconductivity.

Conclusions

We have shown that the conventional LDA band structure calculations for La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> and other high temperature superconductors have failed due to an underestimation of the static correlation. This same failure affects molecular systems such as H<sub>2</sub>, benzene, and the Cu ion dimer in a well understood way. We have corrected the problem within the framework of a Hubbard model by altering the accounting associated with the self-Coulomb term. The result is a radically different band structure in which a second Cu $`d_{z^2}`$ / O $`p_z`$ band crosses the primary Cu $`d_{x^2-y^2}`$ / O $`p_\sigma `$ band at the Fermi level. The observation of this band crossing leads to a new interband pairing theory for the mechanism of superconductivity in these materials.
Finally, we must stress that the new band structure and interband pairing theory not only explain the origin of $`d`$-wave superconductivity in these materials, they explain the origin of the high T<sub>c</sub> as resulting from unusual behavior in the dielectric constant stemming from the band crossing.<sup>14b</sup> They also quantitatively explain the anomalous behavior of the NMR Cu and O spin relaxation rates as simply the result of rapidly changing orbital character near the Fermi level.<sup>13</sup> They explain the ARPES pseudogap as originating from the very narrowly dispersing Cu $`d_{z^2}`$ band.<sup>14a</sup> They further explain the incommensurate peaks of the neutron scattering and the mid-IR absorption.<sup>14d</sup> None of the physics associated with understanding these experiments is particularly difficult when this new band structure is used. In contrast, the physics that has been proposed by various sources, in reference to the conventional band structure, to explain any one of the above mentioned experiments has always been deeply complex and limited in its predictive capability. We suggest that nature usually prefers the simpler solution.

Acknowledgment

We would like to acknowledge the substantial contributions of Dr. Jamil Tahir-Kheli to this work. We would also like to acknowledge many useful discussions with Dr. Jean-Marc Langlois. We especially thank Prof. William A. Goddard III for his continuing guidance.

References

<sup>1</sup>J.G. Bednorz and K.A. Müller, Z. Phys. B 64, 189 (1986).
<sup>2</sup>R.E. Walstedt, B.S. Shastry, and S.-W. Cheong, Phys. Rev. Lett. 72, 3610 (1994).
<sup>3</sup>M.R. Norman, H. Ding, M. Randeria, J.C. Campuzano, T. Yokoya, T. Takeuchi, T. Takahashi, T. Mochiku, K. Kadowaki, P. Guptasarma, and D.G. Hinks, Nature 392, 157 (1998).
<sup>4</sup>P. Dai, H.A. Mook, and F. Dogan, Phys. Rev. Lett. 80, 1738 (1998).
<sup>5</sup>C.C. Tsuei, J.R. Kirtley, C.C. Chi, L.S. Yu-Jahnes, A. Gupta, T. Shaw, J.Z. Sun, and M.B. Ketchen, Phys. Rev. Lett. 73, 593 (1994).
<sup>6</sup>D.B. Tanner and T. Timusk, in Physical Properties of High Temperature Superconductors III, ed. D.M. Ginsberg (World Scientific, New Jersey; 1990), 363.
<sup>7</sup>R.M. Hazen, in Physical Properties of High Temperature Superconductors II, ed. D.M. Ginsberg (World Scientific, New Jersey; 1990), 121.
<sup>8</sup>H. Takagi, R.J. Cava, M. Marezio, B. Batlogg, J.J. Krajewski, W.F. Peck, Jr., P. Bordet, and D.E. Cox, Phys. Rev. Lett. 68, 3777 (1992).
<sup>9</sup>J. Yu, A.J. Freeman, and J.H. Xu, Phys. Rev. Lett. 58, 1035 (1987); L.F. Mattheiss, Phys. Rev. Lett. 58, 1028 (1987); W.E. Pickett, Rev. Mod. Phys. 61, 433 (1989), and references therein.
<sup>10</sup>J. Tahir-Kheli, Phys. Rev. B 58, 12307 (1998).
<sup>11</sup>J.K. Perry and J. Tahir-Kheli, Phys. Rev. B 58, 12323 (1998).
<sup>12</sup>M.S. Hybertsen, E.B. Stechel, W.M.C. Foulkes, and M. Schlüter, Phys. Rev. B 45, 10032 (1992).
<sup>13</sup>J. Tahir-Kheli, J. Phys. Chem., in press.
<sup>14</sup>a) J.K. Perry and J. Tahir-Kheli, submitted to Phys. Rev. Lett.; b) J. Tahir-Kheli, submitted to Phys. Rev. Lett.; c) J.K. Perry and J. Tahir-Kheli, submitted to Phys. Rev. Lett.; d) J. Tahir-Kheli, to be published. See www.firstprinciples.com.
<sup>15</sup>The reader is referred to any number of standard texts, such as I.N. Levine, Quantum Chemistry, 4th ed. (Prentice Hall, Englewood Cliffs, New Jersey; 1991); A. Szabo and N.S. Ostlund, Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, 1st ed. rev.
(McGraw-Hill, New York; 1989).
<sup>16</sup>A.D. Becke, J. Chem. Phys. 98, 5648 (1993).
<sup>17</sup>F.W. Bobrowicz and W.A. Goddard III, in Modern Theoretical Chemistry: Methods of Electronic Structure Theory, edited by H.F. Schaefer III (Plenum, New York; 1977).
<sup>18</sup>A. Svane, Phys. Rev. Lett. 68, 1900 (1992).
<sup>19</sup>K. Shiraishi, A. Oshiyama, N. Shima, T. Nakayama, and H. Kamimura, Solid State Comm. 66, 629 (1988).
<sup>20</sup>R. Krishnan, J.S. Binkley, R. Seeger, and J.A. Pople, J. Chem. Phys. 72, 650 (1980).
<sup>21</sup>C.E. Moore, Atomic Energy Levels, NSRDS-NBS 35 (reprint of NBS circular 467) (U.S. Government Printing Office, Washington, D.C.; 1971).
<sup>22</sup>P.J. Hay and W.R. Wadt, J. Chem. Phys. 82, 299 (1985).

Table I. Calculated energetics for the Cu ion dimer (in eV). HF(calc) and B3LYP(calc) are computed under a spin and symmetry restricted formalism. HF(exact) and B3LYP(exact) represent the correct values for two non-interacting ions.

| Dimer | HF(calc) | HF(exact) | B3LYP(calc) | B3LYP(exact) |
| --- | --- | --- | --- | --- |
| $`Cu(I/^1S)+Cu(I/^1S)`$ | 0.00 | 0.00 | 0.00 | 0.00 |
| $`Cu(I/^1S)+Cu(II/^2D)`$ | 17.54 | 17.54 | 20.65 | 20.65 |
| $`Cu(II/^2D)+Cu(II/^2D)`$ | 49.50 | 35.08 | 45.60 | 41.30 |
| $`Cu(II/^2D)+Cu(III/^1G)`$ | 56.15 | 56.15 | 61.57 | 61.57 |
| $`Cu(II/^2D)+Cu(III/^3F)`$ | 67.53 | 51.86 | 62.74 | 57.71 |

Figure Captions

Figure 1. a Calculated 2-D band structure for optimally doped La<sub>1.85</sub>Sr<sub>0.15</sub>CuO<sub>4</sub> using our Hubbard model and retaining the mean-field approximation. b Calculated 2-D band structure using our Hubbard model and including static correlation. The two bands are seen to cross along the $`(0,0)-(\pi,\pi)`$ direction very near the Fermi level. Note that other bands are not shown, for clarity.

Figure 2. Calculated dissociation curves for H<sub>2</sub> at the HF (top) and B3LYP (bottom) levels using both a spin and symmetry restricted approach and a spin and symmetry unrestricted approach. For both computational levels the restricted approach is seen to dissociate to an incorrect higher limit.

Figure 3. a Schematic description of Cu spin couplings under the mean-field approximation. Each Cu site is 50% alpha spin and 50% beta spin. b Schematic description of the antiferromagnetic state where alternating Cu sites are either alpha spin or beta spin. c and d Two schematic descriptions of the paramagnetic state where a given Cu site may be spin paired with any of the four adjacent Cu sites.

Figure 4. 3-D Fermi surface for optimally doped La<sub>1.85</sub>Sr<sub>0.15</sub>CuO<sub>4</sub>. Cross sections of this Fermi surface are given at a $`k_z=0`$, b $`k_z=1.3\frac{\pi}{c}`$, c $`k_z=1.54\frac{\pi}{c}`$, and d $`k_z=2\frac{\pi}{c}`$. Electrons begin to come out of the second band at $`k_z=1.54\frac{\pi}{c}`$, allowing the formation of interband Cooper pairs in the vicinity of the band crossing.
# On the Dynamical Foundations of α Disks

## 1 Introduction

For many years, the principal uncertainty and greatest impediment to the development of accretion disk theory was an understanding of the origin of turbulent transport. In their classic paper, Shakura & Sunyaev (1973) made the physically reasonable and enormously productive ansatz that, whatever the underlying cause of its existence, the turbulent stress tensor $`T_{ij}`$ scaled with the local gas pressure $`P`$. They denoted the constant of proportionality as $`\alpha `$, and the "$`\alpha `$ disk" moniker has since become synonymous with the standard disk model. Despite the ongoing development of increasingly sophisticated large scale numerical models, $`\alpha `$ disk modeling still remains the central link between theory and observations, the cornerstone of accretion disk phenomenology. In the last several years, a promising candidate has emerged as the physical basis for $`\alpha `$ disk models. This is the magneto-rotational ("Balbus-Hawley") MHD instability (Balbus & Hawley 1998, and references therein). A large and ever-increasing body of numerical simulations leaves little doubt that this instability leads to the turbulent enhancement of angular momentum transport within an accretion disk. What has been lacking, however, is a systematic explanation of how turbulence may or may not lead to the phenomenological $`\alpha `$ disk equations which have been in use for many decades. Since this approach has been (and continues to be) the link between accretion disk theory and observations, a better understanding of the $`\alpha `$ formalism is clearly desirable. The present paper seeks to fill this role. The dynamical foundations of viscous disk theory have always been somewhat fuzzy, and the benefits of sharpening our understanding are numerous. For example, it is desirable to clarify which results of $`\alpha `$ disk phenomenology are truly fundamental, and which have more limited domains of applicability. The nature of time-dependent turbulent transport could be more fully elucidated: in what sense is it equivalent to a viscous stress? Most interestingly, by introducing the intermediate integral scales of turbulence into the investigation, an enormously richer class of physical problem emerges. Classical $`\alpha `$ disk theory addresses mean flow (macroscopic) dynamics, subsuming all integral scale structure into a viscous stress tensor. In this approach, macroscopic disk structure is coupled directly to the dissipative scales. One cannot begin to answer such questions as whether the turbulence is self-maintaining, or whether the transport is local or global; everything is simply prescribed. Finally, there are important questions facing disk modelers in nonmagnetized disks, such as protostellar disks on scales larger than a few AU. Is it sensible to model such regions with an $`\alpha `$ viscosity? Are disks which evolve under the influence of self-gravity $`\alpha `$ disks? What distinguishes turbulence well-modeled by an $`\alpha `$ viscosity from turbulence which is not? These and related questions form the focus of this paper. An overview of this paper is as follows. In §2, we present a review of classical viscous accretion theory. Although this material is well-known, we revisit it with renewed attention to how the viscous stress appears in the angular momentum and energy fluxes. This becomes a benchmark for the turbulent theories discussed in §§ 3 and 4.
In §3 we show that MHD turbulence acts very much along the lines of classical viscous theory, in both steady-state and evolutionary disk models. In §4, it is shown that the turbulent transport arising from self-gravity is not, in general, compatible with a viscous formalism. We discuss the physical basis for this behavior, and show that there are limiting cases, of restrictive generality, which are compatible with $`\alpha `$ disk theory. Finally, §5 summarizes our findings.

## 2 Preliminaries

### 2.1 Classical Viscous Disk Theory

We begin with a brief review of classical viscous disk theory (Lynden-Bell & Pringle 1974, Pringle 1981). In Cartesian coordinates, with $`x`$, $`y`$, $`z`$ represented by dummy indices $`i`$, $`j`$, $`k`$, the viscous stress tensor takes the form (Landau & Lifschitz 1959):

$$\sigma_{ij}=\eta\left(\partial_jv_i+\partial_iv_j-\frac{2}{3}\delta_{ij}\partial_kv_k\right).$$ (1)

We use the standard notational convention of $`\partial_i`$ denoting the partial derivative with respect to spatial coordinate $`i`$, and summation over repeated indices is implied unless stated otherwise. $`v_i`$ is the $`i`$th component of the velocity vector, and $`\eta `$ is the dynamical viscosity. To the extent that the fluid behaves incompressibly, we may ignore the divergence term, a standard and generally well-justified procedure for the class of turbulence we wish to consider here. The idea of viscous disk theory is to regard the effects of turbulence as greatly enhancing the magnitude of $`\eta `$ beyond its microscopic value, and doing nothing else. The dynamical equations are mass conservation,

$$\partial_t\rho+\partial_j(\rho v_j)=0,$$ (2)

and the equation of motion

$$\rho(\partial_tv_i+v_j\partial_jv_i)=-\partial_j(P\delta_{ij}-\sigma_{ij})-\rho\partial_i\Phi,$$ (3)

($`\delta_{ij}`$ is the Kronecker delta function) which may also be written

$$\partial_t(\rho v_i)+\partial_j(\rho v_iv_j+P\delta_{ij}-\sigma_{ij})=-\rho\partial_i\Phi,$$ (4)

an explicit statement of momentum conservation. (Our notation is again standard: $`\rho `$ is the mass density, $`\mathrm{\Phi }`$ is the gravitational potential, $`P`$ is the gas pressure.) At this stage, we assume that $`\mathrm{\Phi }`$ is an imposed central disk potential; self-gravity is considered in §4. In viscous disk theories, momentum transport—or more usefully, angular momentum transport—is the task of $`\sigma_{ij}`$. If the differential rotation rate decreases with increasing radius, viscosity transports angular momentum outward. Multiplying eq. (3) by $`v_i`$, integrating terms by parts, using mass conservation, and finally summing over $`i`$, leads to a mechanical energy equation

$$\partial_t(\rho v^2/2+\rho\Phi)+\partial_j(\rho v^2v_j/2+\rho\Phi v_j+Pv_j-v_i\sigma_{ij})=P\partial_jv_j-(\partial_jv_i)\sigma_{ij},$$ (5)

where

$$v^2=v_iv_i$$ (6)

The right hand side of eq. (5) represents work done on the fluid and heating of the fluid, respectively. The presence of work and heating terms links disk mechanics with thermodynamics. Here, we wish to highlight the dual role of $`\sigma_{ij}`$: in eq. (5), it is a term in the mechanical energy flux; coupled to the strain $`\partial_jv_i`$, it is a mechanical energy loss term. In viscous disk models, the latter is, of course, the origin of accretion disk luminosity. Since $`\sigma_{ij}`$ is a symmetric tensor,

$$(\partial_jv_i)\sigma_{ij}=(1/2)(\partial_jv_i+\partial_iv_j)\sigma_{ij}=(1/2)\eta^{-1}\sigma_{ij}\sigma_{ij}>0$$ (7)

so the dissipated energy ultimately radiated is necessarily positive definite for incompressible turbulence (Landau & Lifschitz 1959).
In cylindrical coordinates $`(R,\varphi,z)`$ the azimuthal equation of motion for an axisymmetric viscous disk expresses angular momentum conservation, and takes the explicit form

$$\partial_t(\rho Rv_\varphi)+\nabla\cdot\left(\rho Rv_\varphi\boldsymbol{v}-\eta R^2\nabla\Omega\right)=0.$$ (8)

The rotational velocity $`v_\varphi `$ is Keplerian,

$$v_\varphi^2=\frac{GM}{R}$$ (9)

where $`M`$ is the central mass, and

$$R\Omega=v_\varphi.$$ (10)

In the simplest form of viscous disk theory, we ignore the vertical structure, treating the disk as flat, set $`v_\varphi=R\Omega`$, and assume axisymmetry. That is, we use the height integrated form of the mass, angular momentum, and energy equations. With $`\mathrm{\Sigma }`$ denoting the disk column density, mass conservation becomes

$$\frac{\partial\Sigma}{\partial t}+\frac{1}{R}\frac{\partial(R\Sigma v_R)}{\partial R}=0,$$ (11)

while angular momentum conservation follows immediately from eq. (8):

$$\frac{\partial}{\partial t}(\Sigma R^2\Omega)+\frac{1}{R}\frac{\partial}{\partial R}\left(\Sigma R^3\Omega v_R-\nu\Sigma R^3\frac{d\Omega}{dR}\right)=0,$$ (12)

where $`\nu `$ is the kinematic viscosity, $`\eta\equiv\rho\nu`$. The energy dissipated per unit area of the disk, $`Q_e`$ ("emissivity"), is found from eq. (5) to be

$$Q_e=(9/8)\nu\Sigma\Omega^2.$$ (13)

Under steady state conditions, the mass flux $`\dot{M}\equiv-2\pi R\Sigma v_R`$ and the angular momentum flux must both be constant. Assuming that the viscous stress vanishes at the inner edge of the disk $`R_0`$ leads to the relation

$$\dot{M}\left(1-\left(\frac{R_0}{R}\right)^{1/2}\right)=3\pi\nu\Sigma.$$ (14)

The turbulent parameter $`\nu\Sigma`$ is severely restricted by this relation, and can be eliminated in favor of $`\dot{M}`$ in the expression for the emissivity, leading to (Pringle 1981):

$$Q_e=\frac{3GM\dot{M}}{8\pi R^3}\left(1-\left(\frac{R_0}{R}\right)^{1/2}\right).$$ (15)

This is the classical $`(Q_e,\dot{M})`$ relationship, which leads to a surface emission temperature profile $`T_{eff}(R)\propto R^{-3/4}`$. Its utility lies in the absence of an explicit viscosity term, which has been eliminated by the requirements of constant angular momentum and mass flux. We may drop the assumption of time steady conditions, forgoing a functional restriction on $`\nu\Sigma`$ in the process. Using eq. (12) in eq. (11) leads to a generalized mass accretion formula:

$$\Sigma v_RR=-3R^{1/2}\frac{\partial}{\partial R}\left(\nu\Sigma R^{1/2}\right).$$ (16)

This in turn may be used back in eq. (11), yielding an equation for $`\mathrm{\Sigma }`$ in terms of $`\nu `$ (Lynden-Bell & Pringle 1974):

$$\frac{\partial\Sigma}{\partial t}=\frac{3}{R}\frac{\partial}{\partial R}\left[R^{1/2}\frac{\partial}{\partial R}\left(\nu R^{1/2}\Sigma\right)\right]$$ (17)

This is the classical evolutionary equation commonly used in accretion disk modeling. It requires an a priori specification of the functional dependence of $`\nu `$ to be useful, and leads to diffusive behavior (disk spreading) in eruptive systems. Ultimately mass is transported inwards and angular momentum outwards: all of the latter is carried by a vanishingly small fraction of the former.
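Equation (17) is straightforward to integrate numerically. The following is a minimal sketch, assuming a constant kinematic viscosity $`\nu `$ and an illustrative narrow-ring initial condition (neither taken from the paper), of the diffusive spreading it describes:

```python
import numpy as np

# Explicit finite-difference integration of eq. (17),
#   dSigma/dt = (3/R) d/dR [ R^{1/2} d/dR ( nu R^{1/2} Sigma ) ],
# for constant nu. Grid, ring width, and step count are illustrative.
nu = 1.0
R = np.linspace(0.1, 4.0, 400)
dR = R[1] - R[0]
Sigma = np.exp(-((R - 1.0) / 0.05) ** 2)   # narrow ring of material at R = 1

dt = 0.1 * dR**2 / (3.0 * nu)              # safely below the diffusive limit
for _ in range(20000):
    inner = np.gradient(nu * np.sqrt(R) * Sigma, dR)   # d/dR (nu R^{1/2} Sigma)
    Sigma += dt * (3.0 / R) * np.gradient(np.sqrt(R) * inner, dR)
    Sigma[0] = 0.0                          # crude zero-density inner boundary
```

Running this shows the classic Lynden-Bell & Pringle behavior: the ring spreads, most of the mass drifts inward, and the angular momentum is carried outward by a small fraction of the mass.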
## 3 Magnetohydrodynamical Turbulence

The fundamental assumption underlying essentially all phenomenological modeling of turbulent disks is the following: it makes sense to use a two scale approach to mathematically represent disk attributes of astrophysical interest. With the possible exception of "flickering" in CV systems (e.g., Welsh, Wood, & Horne 1996), observational data are assumed to involve length and time scales which are larger than the characteristic "eddy turnover" scales of the turbulence. One works with averages that are assumed to be well-defined and to represent the large-scale properties of the disk, much as classical dynamo theory (Krause & Rädler 1980) represents mean fields and mean helicity. While this statement probably does not strike the reader as startling or controversial, it masks subtleties. The assumption has remarkably restrictive consequences. The problem is that the power spectrum of almost all nondissipative quantities (including the stress tensor itself) is dominated by the largest scales of the turbulence, in this case the disk scale height and rotation period. The implicit averaging must be on scales large compared to the scale height but small compared with the radius, and long compared with the orbital period but short compared with viscous and thermal time scales. There must be an asymptotic domain where these scales are cleanly separated, so that a computed radial disk profile is insensitive to the averaging procedure. Let us assume this is the case and pursue its consequences. The dynamical equation of motion in the presence of a magnetic field $`\boldsymbol{B}`$ is

$$\rho\frac{\partial\boldsymbol{v}}{\partial t}+(\rho\boldsymbol{v}\cdot\nabla)\boldsymbol{v}=-\nabla\left(P+\frac{B^2}{8\pi}\right)-\rho\nabla\Phi+\left(\frac{\boldsymbol{B}}{4\pi}\cdot\nabla\right)\boldsymbol{B}+\eta_V\nabla^2\boldsymbol{v}.$$ (18)

We have denoted the viscosity as $`\eta_V`$ to distinguish it from the resistivity associated with the magnetic field, which we will denote as $`\eta_B`$. We have dropped terms proportional to $`\nabla\cdot\boldsymbol{v}`$ in the viscous term. The azimuthal component of this equation can be written in a form which expresses angular momentum conservation:

$$\frac{\partial}{\partial t}(\rho Rv_\varphi)+\nabla\cdot R\left[\rho v_\varphi\boldsymbol{v}-\frac{B_\varphi}{4\pi}\boldsymbol{B}_p+\left(P+\frac{B_p^2}{8\pi}\right)\hat{\boldsymbol{e}}_\varphi-\eta_VR^2\nabla(v_\varphi/R)\right]=0,$$ (19)

where the subscript $`p`$ denotes a poloidal vector component. We now separate the circular motion $`R\Omega`$ from the noncircular motion $`\boldsymbol{u}`$, treating the latter as a fluctuating quantity, though not necessarily with vanishing mean. We have

$$\boldsymbol{v}=R\Omega\hat{\boldsymbol{e}}_\varphi+\boldsymbol{u}.$$ (20)

Although mean drift velocities may be present (the disk must accrete), we assume that such motions are small compared with the fluctuation amplitudes,

$$|\langle\boldsymbol{u}\rangle|^2\ll\langle u^2\rangle,$$ (21)

where the angle brackets denote a suitable average, discussed below. Furthermore, the direct contribution to the angular momentum flux from the microscopic viscosity $`\eta_V`$ is generally negligible. (This is why turbulence is necessary!) We may drop this term. Substituting for $`\boldsymbol{v}`$ in eq. (19) and averaging over azimuth, denoting such means as $`\langle\cdot\rangle_\varphi`$, gives

$$\frac{\partial}{\partial t}\langle\rho R^2\Omega\rangle_\varphi+\nabla\cdot R\left[\langle\rho R\Omega\boldsymbol{u}_p\rangle_\varphi+\boldsymbol{T}\right]=0,$$ (22)

where $`\boldsymbol{T}`$ is the poloidal stress tensor

$$\boldsymbol{T}=\langle\rho u_\varphi\boldsymbol{u}_p-B_\varphi\boldsymbol{B}_p/4\pi\rangle_\varphi$$ (23)

We have dropped the $`\rho u_\varphi`$ term in comparison with $`\rho R\Omega`$ in the leading time derivative. Henceforth, we will use the Alfven velocity

$$\boldsymbol{u}_A\equiv\frac{\boldsymbol{B}}{\sqrt{4\pi\rho}}$$ (24)

in favor of the magnetic field vector $`\boldsymbol{B}`$.
### 3.1 MHD Turbulence and Viscous Disk Theory

To make contact with the classical viscous disk theory of the previous section, we need to integrate eq. (22) over $`z`$, and to assume that surface terms can be dropped. Furthermore, we wish to regard height-integrated, azimuthally averaged flow quantities as smooth functions of $`R`$. As indicated above, this also implies some sort of radial smoothing—an average over a volume small compared with $`R`$, but larger than a disk scale height. We follow the notation of Balbus & Hawley (1998), and define the density weighted mean of flow attribute $`X`$ to be

$$\langle X\rangle_\rho\equiv\frac{1}{2\pi\Sigma\Delta R}\int_{-\infty}^{\infty}\int_{R-\Delta R/2}^{R+\Delta R/2}\int_0^{2\pi}\rho X\,d\varphi\,dR\,dz$$ (25)

where $`\mathrm{\Sigma }`$ is the height-integrated and similarly radially averaged disk column density. The radial component of $`\boldsymbol{T}/\rho`$ resulting from this operation will be denoted simply as $`W_{R\varphi}`$. Angular momentum conservation becomes

$$\frac{\partial}{\partial t}(\Sigma R^2\Omega)+\frac{1}{R}\frac{\partial}{\partial R}\left(R^3\Omega\Sigma\langle u_R\rangle_\rho+R^2\Sigma W_{R\varphi}\right)=0$$ (26)

Mass conservation follows straightforwardly from integrating the fundamental equation, and leads to a form essentially identical to eq. (11):

$$\frac{\partial\Sigma}{\partial t}+\frac{1}{R}\frac{\partial(R\Sigma\langle u_R\rangle_\rho)}{\partial R}=0$$ (27)

Using this in eq. (26) gives a general formula for the mass accretion rate,

$$\Sigma\langle u_R\rangle_\rho=-\frac{1}{R(R^2\Omega)^{\prime}}\frac{\partial}{\partial R}(\Sigma R^2W_{R\varphi}),$$ (28)

where the prime denotes differentiation with respect to $`R`$. Combining eqs. (28) and (27) gives us the analogue of eq. (17) in a turbulent disk, for any angular momentum profile:

$$\frac{\partial\Sigma}{\partial t}=\frac{1}{R}\frac{\partial}{\partial R}\left[\frac{1}{(R^2\Omega)^{\prime}}\frac{\partial}{\partial R}(\Sigma R^2W_{R\varphi})\right].$$ (29)

Since $`W_{R\varphi}`$ is not known a priori, in practical terms eq. (29) represents only a marginal improvement on the phenomenological equation (17). But we may see that "viscous" evolution does not require the explicit adoption of a viscous stress tensor. Any disk in which $`u_R`$ and $`u_\varphi`$ are positively correlated (and $`u_{AR}`$ and $`u_{A\varphi}`$ negatively correlated, so that both terms transport angular momentum outward) must behave similarly, with the caveat that the correlation tensor must be a locally defined quantity. We have discussed thus far only the dynamics of the turbulence. Once $`W_{R\varphi}`$ is known—and it depends primarily on correlations on the largest turbulent scales—the disk evolution may be directly calculated from equation (29). Classical viscous disk theory also addresses the energetics. Since viscosity is the agent of transport, there must be dissipation as well. The energy is directly thermalized from its free source in the differential rotation, down to thermal scales. In a turbulent disk, matters are more complex. Energy cascades from the differential rotation to the scales of the largest fluctuations, thence to the integral self-similar scales, and finally to the dissipative Kolmogorov scale (which may be set by resistivity rather than viscosity). In a steady state disk, we expect the rate at which energy is extracted from the differential rotation, which may be easily calculated in terms of the stress tensor, to be equal to the rate at which it is thermalized, which would otherwise not be directly calculable.
The upshot of this is that steady state turbulent disks behave viscously in their energetics as well as in their dynamics. But classical viscous theory makes a stronger assumption by its very nature: the rate of thermalization of the free energy of differential rotation is the same in both steady and evolutionary models. This may be true in an evolving turbulent disk, but it is not obviously true. It depends upon whether the cascade is efficient. Fortunately, we shall see that this question may be directly and quantitatively answered within the stress tensor formalism we are using. The evolution of the magnetic field in a plasma with resistivity $`\eta_B`$ is given by

$$\frac{\partial\boldsymbol{B}}{\partial t}=\nabla\times\left(\boldsymbol{v}\times\boldsymbol{B}-\eta_B\nabla\times\boldsymbol{B}\right).$$ (30)

The mechanical energy equation is obtained by dotting eq. (18) with $`\boldsymbol{v}`$, dotting eq. (30) with $`\boldsymbol{B}`$, and combining the two. After some simplification (e.g., Balbus & Hawley 1998), we arrive at

$$\frac{\partial}{\partial t}\left(\frac{1}{2}\rho v^2+\rho\Phi+\frac{B^2}{8\pi}\right)+\nabla\cdot[\,]=P\nabla\cdot\boldsymbol{v}-\eta_V(\partial_iv_j)(\partial_iv_j)-\frac{\eta_B}{4\pi}|\nabla\times\boldsymbol{B}|^2$$ (31)

We have dropped the term $`\eta_V|\nabla\cdot\boldsymbol{v}|^2`$ on the right hand side, since in a turbulent disk it is generally small compared with the other viscous term. The right hand pressure term, though also proportional to $`\nabla\cdot\boldsymbol{v}`$, cannot be dropped; as noted in §2, it represents a link to the internal energy of the disk via the first law of thermodynamics. The unwritten energy flux in square brackets is

$$\boldsymbol{v}\left(\frac{1}{2}\rho v^2+\rho\Phi+P\right)+\frac{\boldsymbol{B}}{4\pi}\times(\boldsymbol{v}\times\boldsymbol{B})$$ (32)

where, as before, we have not included the transport due to explicit viscosity or resistivity. The energy flux is more complex than its angular momentum counterpart, but can be greatly simplified by retaining only leading terms in a $`u/R\Omega`$ expansion. When averaged as before, the radial energy flux is

$$\Sigma\left(\frac{1}{2}R^2\Omega^2+\Phi\right)\langle u_R\rangle_\rho+\Sigma R\Omega W_{R\varphi},$$ (33)

which may be compared with the radial angular momentum flux of eq. (26),

$$\Sigma R^2\Omega\langle u_R\rangle_\rho+\Sigma RW_{R\varphi}.$$ (34)

(Note that to effect the height integration, we have assumed that the magnetic field is force-free above the disk. This is a physically reasonable assumption, but one that is less than general. The energy dissipation rate is, however, indifferent to the presence or absence of surface terms resulting from vertical integrations, since the ignored vertical fluxes would contribute nothing to the local mechanical energy losses in the disk.) The key point is to observe that the only turbulence parameters entering into either the angular momentum or energy radial fluxes are $`\langle u_R\rangle_\rho`$ and $`W_{R\varphi}`$. This is the essence of an $`\alpha `$ disk. The issue is clearest for a steady model (Balbus & Hawley 1998). In this case the accretion rate

$$\dot{M}\equiv-2\pi\Sigma R\langle u_R\rangle_\rho$$ (35)

is constant, and if the angular momentum flux at the inner edge of the disk $`R_0`$ is vanishingly small, then

$$W_{R\varphi}=\frac{\dot{M}\Omega}{2\pi\Sigma}\left[1-\left(\frac{R_0}{R}\right)^{1/2}\right],$$ (36)

Now, one cannot calculate directly the thermalization losses, since the small scale gradients are not known.
But one can calculate the divergence of the large scale flux, since the spatial dependence of $`W_{R\varphi}`$ is determined by angular momentum conservation. This must be the small scale dissipation rate. We find

$$Q_e=-\Sigma W_{R\varphi}\frac{d\Omega}{d\mathrm{ln}R},$$ (37)

which is precisely the analogue of eq. (13) if $`W_{R\varphi}`$ is replaced by a large scale viscous stress. (Note that $`Q_e>0`$.) This result can also be obtained directly from the energy equation for the $`u`$ fluctuations themselves, by demanding that sources and sinks balance in steady state (Balbus & Hawley 1998). More is required, however. In viscous models, the thermalization rate is given by eq. (13) whether steady conditions prevail or not. The question before us is whether the thermalization rate (37) is just as general: does it hold when $`\dot{M}`$ is not constant and when the energy density of the disk changes with time? We now show that it does. First, let us recall the fundamental relations for the angular and epicyclic frequencies,

$$\Omega^2=\frac{1}{R}\frac{\partial\Phi}{\partial R},\qquad\kappa^2=\frac{1}{R^3}\frac{\partial}{\partial R}(R^4\Omega^2)=\frac{1}{R^3}\frac{\partial}{\partial R}(R^3\Phi^{\prime}),$$ (38)

as well as the specific energy,

$$\frac{1}{2}R^2\Omega^2+\Phi=\frac{1}{2R}\frac{\partial}{\partial R}(R^2\Phi).$$ (39)

Thus, the energy flux of eq. (33) becomes

$$\frac{1}{2R}\Sigma\langle u_R\rangle_\rho\frac{\partial}{\partial R}(R^2\Phi)+\Sigma R\Omega W_{R\varphi}.$$ (40)

We may substitute for $`\langle u_R\rangle_\rho`$ using equation (28). This gives an energy flux of

$$-\Omega\frac{(R^2\Phi)^{\prime}}{(R^3\Phi^{\prime})^{\prime}}\frac{\partial}{\partial R}(\Sigma R^2W_{R\varphi})+\Sigma R\Omega W_{R\varphi}.$$ (41)

The quantity of interest is the heating rate $`Q_e`$ due to turbulent dissipation per unit area; it is given by the negative of the vertically integrated left-hand side of eq. (31). Making use of the above, it may be written in the form

$$Q_e=-\frac{1}{2R}\frac{\partial(R^2\Phi)}{\partial R}\frac{\partial\Sigma}{\partial t}+\frac{1}{R}\frac{\partial}{\partial R}R\left[\Omega\frac{(R^2\Phi)^{\prime}}{(R^3\Phi^{\prime})^{\prime}}\frac{\partial}{\partial R}(\Sigma R^2W_{R\varphi})-\Sigma R\Omega W_{R\varphi}\right].$$ (42)

If we now use eq. (29) for $`\partial\Sigma/\partial t`$ in eq. (42), we obtain

$$Q_e=-\frac{1}{R^2}\frac{\partial(R^2\Phi)}{\partial R}\frac{\partial}{\partial R}\left[\frac{R^2\Omega}{(R^3\Phi^{\prime})^{\prime}}\frac{\partial}{\partial R}(\Sigma R^2W_{R\varphi})\right]+\frac{1}{R}\frac{\partial}{\partial R}\left[R\Omega\frac{(R^2\Phi)^{\prime}}{(R^3\Phi^{\prime})^{\prime}}\frac{\partial}{\partial R}(\Sigma R^2W_{R\varphi})-\Sigma R^2\Omega W_{R\varphi}\right].$$ (43)

This unwieldy formula immediately simplifies to

$$Q_e=\frac{R\Omega}{(R^3\Phi^{\prime})^{\prime}}\frac{\partial}{\partial R}(\Sigma R^2W_{R\varphi})\frac{\partial}{\partial R}\left[\frac{1}{R}\frac{\partial}{\partial R}(R^2\Phi)\right]-\frac{1}{R}\frac{\partial}{\partial R}(\Sigma R^2\Omega W_{R\varphi}).$$ (44)

Furthermore, for any function $`\mathrm{\Phi }`$, the following identity is easily verified,

$$\left(R^{-1}(R^2\Phi)^{\prime}\right)^{\prime}=R^{-2}(R^3\Phi^{\prime})^{\prime},$$ (45)

leading to a complete collapse of our expression down to the single term

$$Q_e=-\Sigma W_{R\varphi}\frac{d\Omega}{d\mathrm{ln}R},$$ (46)

which is the desired result.
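Identity (45), on which the final simplification rests, is easily confirmed symbolically; a short check (a sketch, not part of the paper's own calculation):

```python
import sympy as sp

# Verify eq. (45): (R^{-1} (R^2 Phi)')' = R^{-2} (R^3 Phi')' for arbitrary Phi(R).
R = sp.symbols('R', positive=True)
Phi = sp.Function('Phi')(R)

lhs = sp.diff(sp.diff(R**2 * Phi, R) / R, R)
rhs = sp.diff(R**3 * sp.diff(Phi, R), R) / R**2
print(sp.simplify(lhs - rhs))  # -> 0
```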
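A second consistency check, this time on the steady-state relations: substituting the stress of eq. (36) into the accretion formula (28) should return a constant mass flux, $`\Sigma\langle u_R\rangle_\rho=-\dot{M}/2\pi R`$, as in eq. (35). A sympy sketch assuming Keplerian rotation (again not from the paper):

```python
import sympy as sp

# Substitute the steady-state stress (36) into the accretion formula (28)
# and confirm a constant inflow rate for Keplerian rotation.
R, R0, GM, Mdot = sp.symbols('R R_0 GM Mdot', positive=True)
Sigma = sp.Function('Sigma')(R)            # arbitrary surface density profile

Omega = sp.sqrt(GM / R**3)
W = Mdot * Omega / (2 * sp.pi * Sigma) * (1 - sp.sqrt(R0 / R))  # eq. (36)

J = R**2 * Omega                           # specific angular momentum
Sigma_uR = -sp.diff(Sigma * R**2 * W, R) / (R * sp.diff(J, R))  # eq. (28)
print(sp.simplify(Sigma_uR + Mdot / (2 * sp.pi * R)))           # -> 0
```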
To leading order in the turbulent fluctuation amplitudes, the thermalization rate per unit area of a magnetized disk is given by eq. (46), whether the disk is evolving or in a steady state. This result is nicely compatible with classical viscous thin disk theory. That not all disk turbulence is so easily subsumed will be seen in the next section.

## 4 Self-Gravity

Self-gravitational forces can be important for galactic and protostellar disks. In its most extreme manifestation, self-gravity can hold the disk together and cause substantial deviations from a Keplerian rotation law. But this requires a disk mass comparable to or in excess of the central compact mass, and we will not consider this limit. Instead we focus on a more common situation, in which the local self-gravitating free fall time $`(G\rho)^{-1/2}`$ is comparable to or smaller than $`1/\Omega`$. There are several equivalent ways of expressing this condition, the classical Toomre (1964) Q criterion being the best known (Binney & Tremaine 1987). With $`c_S`$ denoting the sound speed, if

$$Q\equiv\frac{\kappa c_S}{\pi G\Sigma}<1,$$ (47)

then local density perturbations are unstable to gravitational collapse in a thin disk. If we define the vertical scale height $`H`$ by $`c_S=H\Omega`$ and the disk density $`\rho `$ by $`\rho H=\Sigma`$, then for a Keplerian disk the $`Q`$ criterion becomes

$$\frac{\Omega^2}{\pi G\rho}<1,$$ (48)

in rough agreement with our initial estimate, and we may avoid an explicit reference to the disk temperature. Self-gravity is obviously important in the formation stages of a galaxy or a star, but it is also likely to be a key component in later evolutionary stages, especially in the outer regions of the disk where $`\Omega^2/\rho`$ is likely to be small. We shall concentrate here on the latter case, assuming a well-defined Keplerian disk is present, with self-gravity causing small but critical departures from circular flow. The ratio of disk mass $`M_d`$ to central mass $`M`$ is found from eq. (48) to be of order $`(H/QR)`$, so our approximation is justified for thin disks. Progress in the numerical modeling of disk systems has been impressive, and sophisticated simulations are now possible, although investigators understandably tend to explore the more dramatic behavior of very massive disks. One of the interesting questions these modelers are addressing is whether turbulence wrought by self-gravity is amenable to a viscous diffusion treatment (e.g., Laughlin & Rozyczka 1996). We now examine this point.
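As a concrete illustration of criterion (47), the following short evaluation uses made-up round values for an outer protostellar disk around a solar-mass star; none of the numbers are taken from the paper:

```python
import numpy as np

# Evaluate the Toomre criterion (47), Q = kappa c_s / (pi G Sigma),
# for a Keplerian disk (kappa = Omega) around a solar-mass star.
G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13   # cgs units

R = 50 * AU                                  # evaluation radius
Omega = np.sqrt(G * Msun / R**3)             # Keplerian angular frequency
c_s = 3.0e4                                  # sound speed ~ 0.3 km/s
Sigma = 10.0                                 # surface density, g cm^-2

Q = Omega * c_s / (np.pi * G * Sigma)
print(f"Q = {Q:.2f}")                        # Q < 1 => locally unstable
```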
### 4.1 Dynamical and Energy Fluxes

The self-gravity potential $`\mathrm{\Phi }_S`$ satisfies the Poisson equation, most conveniently written in the form

$$\rho=\frac{1}{4\pi G}\partial_i\partial_i\Phi_S,$$ (49)

where the subscript $`i`$ (or $`j,k`$ below) denotes a Cartesian coordinate, and the summation convention on repeated subscripts is used unless otherwise stated. The connection between the gravitational force and its associated stress tensor was first made by Lynden-Bell & Kalnajs (1972):

$$-\rho\partial_i\Phi_S=-\frac{\partial_i\Phi_S}{4\pi G}\partial_j\partial_j\Phi_S=\frac{1}{4\pi G}\partial_j\left[-(\partial_j\Phi_S)(\partial_i\Phi_S)+\frac{\delta_{ij}}{2}(\partial_k\Phi_S)(\partial_k\Phi_S)\right]$$ (50)

To keep both the gravitational and nongravitational components of the stress tensor on an equal footing, define the velocities

$$\boldsymbol{u}_G\equiv\frac{\nabla\Phi_S}{\sqrt{4\pi G\rho}}.$$ (51)

Then, in the presence of self-gravity and magnetic fields, the $`R\varphi`$ component of the stress tensor becomes

$$W_{R\varphi}=\langle u_Ru_\varphi+u_{GR}u_{G\varphi}-u_{AR}u_{A\varphi}\rangle_\rho$$ (52)

Equations (26) and (29), angular momentum conservation and the disk evolution equation, continue to hold in precisely the same form when self-gravity is present, if the stress tensor $`W_{R\varphi}`$ is amended simply as above. Gravitational torques are calculated formally in exactly the same way as turbulent and magnetic torques. When $`Q`$ is of order unity, the kinetic $`u`$ terms and gravitational $`u_G`$ terms of $`W_{R\varphi}`$ are comparable if $`u\sim|\nabla\Phi_S|/2\Omega`$—i.e., if the fluctuation velocities are due to self-gravity impulses on a rotation time scale. Consider next the energetics of self-gravity. Is the volumetric dissipation rate still given by equation (46), with the gravitationally amended version of $`W_{R\varphi}`$? The answer is no in general, though under some interesting conditions it is. Let us see how this emerges. We seek to write the expression $`\rho v_i\partial_i\Phi_S`$ in conservation form: the time derivative of an energy density plus the divergence of a flux. We have,

$$\rho v_i\partial_i\Phi_S=\partial_i(\rho v_i\Phi_S)-\Phi_S\partial_i(\rho v_i)=\partial_i(\rho v_i\Phi_S)+\partial_t(\rho\Phi_S)-\rho\partial_t\Phi_S$$ (53)

where the last equality follows from mass conservation plus an integration by parts. There is no sign yet of the gravitational stress tensor putting in an appearance, but the final term remains dangling for the moment. This may be written

$$\rho\partial_t\Phi_S=\frac{1}{4\pi G}(\partial_i\partial_i\Phi_S)(\partial_t\Phi_S)=\frac{1}{4\pi G}\partial_i\left[(\partial_i\Phi_S)(\partial_t\Phi_S)\right]-\frac{1}{8\pi G}\partial_t\left[(\partial_j\Phi_S)(\partial_j\Phi_S)\right].$$ (54)

Returning to vector invariant notation,

$$\rho\boldsymbol{v}\cdot\nabla\Phi_S=\frac{\partial}{\partial t}\left(\rho\Phi_S+\frac{1}{8\pi G}|\nabla\Phi_S|^2\right)+\nabla\cdot\left(\rho\boldsymbol{v}\Phi_S-\frac{\nabla\Phi_S}{4\pi G}\frac{\partial\Phi_S}{\partial t}\right)$$ (55)

There is some ambiguity as to whether one assigns terms to the energy density or to the flux. For example, an equivalent formulation of eq. (55) is

$$\rho\boldsymbol{v}\cdot\nabla\Phi_S=-\frac{1}{8\pi G}\frac{\partial}{\partial t}|\nabla\Phi_S|^2+\nabla\cdot\left[\rho\boldsymbol{v}\Phi_S-\frac{\nabla\Phi_S}{4\pi G}\frac{\partial\Phi_S}{\partial t}+\frac{1}{8\pi G}\frac{\partial}{\partial t}\nabla\Phi_S^2\right].$$ (56)

But there is no apportionment that of itself produces an $`R\varphi`$ component of the gravitational stress tensor. Since the energy flux is most readily interpretable in eq. (55), we shall use this form of energy conservation in our discussion below.
The combination $`\nabla\Phi_S\,\partial\Phi_S/\partial t`$ will be familiar to students of acoustical theory (e.g., Lighthill 1978), where precisely this form of energy flux is associated not with a gravitational potential, but with the velocity potential of irrotational sound waves. In the acoustic case, this emerges from the "$`Pv`$" term in the energy flux, a term which is third order in the fluctuation amplitudes for incompressible turbulence, and therefore negligible. Indeed, an important physical distinction between a disk in which there is a superposition of waves and a disk which is truly turbulent is the dominance of the $`W_{R\varphi}`$ term over the $`Pv`$ term in the latter's energy flux. If waves were present in a turbulent disk, would this change the relative dominance? Not in a thin non-self-gravitating Keplerian disk with good $`R\varphi`$ correlations in the stress tensor. In a density wave, the pressure contribution to the energy flux will be of order $`u^2c_S`$ in the velocities, whereas the stress tensor term is of order $`\alpha R\Omega c_S^2`$. Since $`u^2\sim\alpha c_S^2`$, the stress tensor contribution will always be dominant (by a factor of $`R/H`$) in a thin disk. The appearance of a second order contribution of the potential in the energy flux suggests qualitatively new transport features in self-gravitating disks. In retrospect, the breakdown of the $`\alpha `$ formalism is perhaps not surprising. Turbulence in hydrodynamical shear flows or MHD disks arises because vorticity fields and magnetic fields are "ensnared" by shear, and funnel this free energy into fluctuations. These fields may become ensnared because both are frozen into their respective fluids. Their evolution is entirely local, and the vorticity and magnetic fields are governed by essentially identical equations. Gravitational fields are not frozen into the fluid, and we should not expect local dissipation of the associated turbulence, which is the inevitable consequence of an energy flux depending upon the stress tensor and drift velocity, as may be seen in eq. (33). Self-gravity is generally a global phenomenon (its field equation is elliptic), and one has no cause to expect a repetition of our earlier magnetic success with a local theory.

### 4.2 The Local Limit

If, instead of the combination $`\rho\boldsymbol{v}\Phi_S-(\nabla\Phi_S/4\pi G)\,\partial_t\Phi_S`$ appearing in the energy flux, the combination $`\Omega(\nabla\Phi_S/4\pi G)\,\partial_\varphi\Phi_S`$ emerged, we would be able to construct a local model of the dissipation. In this case, the gravitational component of the stress tensor would couple energetically precisely as magnetic and Reynolds stresses couple. It will turn out that the vanishing of both these terms corresponds to

$$\left(\frac{\partial}{\partial t}+\Omega\frac{\partial}{\partial\varphi}\right)\Phi_S=0,$$ (57)

when the disturbances are analyzed in terms of WKB waves. This is a very revealing requirement, for it is just this condition that defines the corotation resonance in linear density wave theory, and it is only at this location that waves couple directly to the disk (e.g., Goldreich & Tremaine 1979, hereafter GT). It is quite natural, therefore, that this condition reemerges as the requirement for gravitationally driven energy stresses to be thermalized. Let us examine the structure of the energy conservation equation further. We focus for simplicity upon an unmagnetized disk, and assume that $`R\Omega\gg u_G`$.
Denoting the volumetric mechanical energy losses as $`\epsilon `$, the self-gravitational analogue to equation (31) becomes

$$\frac{\partial }{\partial t}\left[\rho \left(\frac{v^2}{2}+\Phi +\Phi _S\right)+\frac{1}{8\pi G}|\nabla \Phi _S|^2\right]+\nabla \cdot \left[\rho 𝒗\left(\frac{v^2}{2}+\Phi +\Phi _S\right)-\frac{\nabla \Phi _S}{4\pi G}\left(\frac{\partial \Phi _S}{\partial t}\right)\right]=-\epsilon .$$ (58)

We have neglected the pressure contribution to the energy flux (but see below). We rewrite the rate of production of mechanical work as given by the left hand side of (58), separating the terms in a suggestive manner:

$$\frac{\partial \mathcal{E}}{\partial t}+\nabla \cdot 𝓕+\nabla \cdot \left(\rho \Phi _S𝒗-\frac{\nabla \Phi _S}{4\pi G}\frac{D\Phi _S}{Dt}\right)=-\epsilon ,$$ (59)

where the energy density $`\mathcal{E}`$ is

$$\mathcal{E}=\rho \left(\frac{v^2}{2}+\Phi +\Phi _S\right)+\frac{1}{8\pi G}|\nabla \Phi _S|^2,$$ (60)

the local flux $`𝓕`$ is

$$𝓕=\rho 𝒗\left(\Phi +\frac{v^2}{2}\right)+\Omega \frac{\nabla \Phi _S}{4\pi G}\frac{\partial \Phi _S}{\partial \varphi },$$ (61)

and

$$\frac{D}{Dt}\equiv \frac{\partial }{\partial t}+\Omega \frac{\partial }{\partial \varphi }.$$ (62)

The final terms on the left side of eq. (59) are “anomalous” from the point of view of $`\alpha `$ disk theory, and this flux will henceforth be denoted $`𝑭_𝑬^{an}`$. The radial component of $`𝓕`$ is just given by eq. (33), with $`W_{R\varphi }`$ amended as in eq. (52). Were $`𝑭_𝑬^{an}`$ negligible, we would be led directly to an $`\alpha `$ disk model via the route we followed in §§ 2 and 3. However, since these terms may be of order $`R\Omega u_G^2`$ in the velocities, they cannot be neglected. Their physical interpretation is discussed in the next section.

### 4.3 Wave Fluxes in Self-Gravitating Disks

To understand the role of the anomalous flux, it is helpful to study it in the context of the simplest possible fluctuating self-gravitating disk model: WKB waves in a thin, pressureless disk. (The inclusion of pressure terms leads to a more complicated calculation, but with precisely the same final conclusion.) The waves have the canonical form $`\mathrm{exp}\left(i\int ^Rk\,dx+im\varphi -i\omega t\right)`$, where $`k`$ is the local radial wavenumber, $`m`$ the azimuthal wavenumber, and $`\omega `$ the fixed wave frequency, and they satisfy the dispersion relation

$$(\omega -m\Omega )^2=\kappa ^2-2\pi G\Sigma |k|.$$ (63)

The potential $`\Phi _S`$ has the vertical spatial dependence $`e^{-|kz|}`$ out of the disk midplane (Lin & Shu 1966, Binney & Tremaine 1987). The radial anomalous energy flux, averaged over azimuth and integrated over height, is

$$𝑭_𝑬^{an}\cdot \widehat{𝒆}_𝑹=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\left\langle \rho u_R\Phi _S-\frac{1}{4\pi G}\frac{\partial \Phi _S}{\partial R}\left(\frac{\partial \Phi _S}{\partial t}+\Omega \frac{\partial \Phi _S}{\partial \varphi }\right)\right\rangle _\varphi dz=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\langle \rho u_R\Phi _S\rangle _\varphi dz-\frac{1}{4\pi G}\left(\Omega -\frac{\omega }{m}\right)\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\left\langle \frac{\partial \Phi _S}{\partial R}\frac{\partial \Phi _S}{\partial \varphi }\right\rangle _\varphi dz$$ (64)

To do the first integral, we need to be able to express $`u_R`$ in terms of $`\Phi _S`$. This relation may be read off directly from eq.
(11) of GT:

$$u_R=\frac{m\Omega -\omega }{2\pi G\Sigma }\Phi _S(0)\,\mathrm{sgn}(k),$$ (65)

where $`\Phi _S(0)`$ is the midplane ($`z=0`$) value of the potential. Denoting the potential amplitude by $`\tilde{\Phi }_S`$, and assuming $`\rho (z)=\Sigma \delta (z)`$, the first integral is then

$$\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\langle \rho u_R\Phi _S\rangle _\varphi dz=\frac{1}{4\pi G}(m\Omega -\omega )\tilde{\Phi }_S^2\,\mathrm{sgn}(k)$$ (66)

The second integral may be evaluated by noting that the integrand depends on $`z`$ as $`\mathrm{exp}(-2|kz|)`$. This gives

$$-\frac{1}{4\pi G}\left(\Omega -\frac{\omega }{m}\right)\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\left\langle \frac{\partial \Phi _S}{\partial R}\frac{\partial \Phi _S}{\partial \varphi }\right\rangle _\varphi dz=-\frac{1}{8\pi G}(m\Omega -\omega )\tilde{\Phi }_S^2\,\mathrm{sgn}(k).$$ (67)

Thus,

$$𝑭_𝑬^{an}\cdot \widehat{𝒆}_𝑹=-\frac{1}{8\pi G}(\omega -m\Omega )\tilde{\Phi }_S^2\,\mathrm{sgn}(k)$$ (68)

The angular momentum flux is also to be found in GT (eq. (30); note their definition differs by a factor of $`2\pi R`$ from ours). It is simply

$$F_J=-\frac{m}{8\pi G}\tilde{\Phi }_S^2\,\mathrm{sgn}(k).$$ (69)

In other words, the anomalous radial energy flux is the product of this angular momentum flux and the Doppler-shifted wave pattern speed $`\omega /m-\Omega `$. It is therefore identifiable as a true wave energy flux. (Turbulent energy and angular momentum fluxes, by way of contrast, are related by a factor of $`\Omega `$.)
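As a concreteness check (our own sketch, with arbitrary illustrative parameters in $`G=1`$ units), one can pick a propagating WKB wave from eq. (63) and confirm numerically that eqs. (68) and (69) satisfy $`𝑭_𝑬^{an}\cdot \widehat{𝒆}_𝑹=(\omega /m-\Omega )F_J`$:

```python
# Evaluate the WKB relations (63), (68), (69) for one wave and verify that
# the anomalous energy flux equals (omega/m - Omega) times the angular
# momentum flux.  All numbers are illustrative, in G = 1 units.
import numpy as np

G, Sigma, kappa, Omega = 1.0, 1.0e-3, 1.0, 1.0
m, omega = 2, 1.5                  # pattern inside corotation (omega/m < Omega)
Phi_amp, sgn_k = 1.0e-4, 1.0       # potential amplitude and sign of k

# dispersion relation (63) fixes |k| for a propagating wave
abs_k = (kappa**2 - (omega - m*Omega)**2) / (2*np.pi*G*Sigma)
assert abs_k > 0, "parameters do not correspond to a propagating WKB wave"

F_J = -m/(8*np.pi*G) * Phi_amp**2 * sgn_k                   # eq. (69)
F_E = -(omega - m*Omega)/(8*np.pi*G) * Phi_amp**2 * sgn_k   # eq. (68)
print(F_E, (omega/m - Omega)*F_J)   # the two agree
```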
Its significance is that in a $`Q\sim 1`$ disk it will contribute to the total energy flux at a level comparable to the stress tensor $`W_{R\varphi }`$ if $`|\omega /m-\Omega |/\Omega `$ is of order unity. The effect is to prevent self-gravitating disks from behaving like $`\alpha `$ disks; only if the anomalous flux vanishes can a self-gravitating disk behave like a local $`\alpha `$ disk. The “forbidden zone” of wave propagation near the corotation point $`\omega =m\Omega `$ will display properties similar to an $`\alpha `$ disk. However, when a disk location undergoes forcing due to a potential from a wave pattern rotating with a frequency very different from the local rotation frequency, it will not behave like an $`\alpha `$ disk. Such a situation may occur, for example, when an exterior disk is forced by the potential caused by a developing central bar instability. Energy can be exchanged between fluctuations and the differential rotation of the disk; unlike angular momentum, it need not be conserved in the noncircular motions. In contrast to a turbulent $`\alpha `$ disk, a self-gravitating disk can evolve by extracting energy from the background shear and allocating it to the fluctuations (wave energy) without the need for mechanical energy dissipation. This allows for angular momentum transport with no associated local energy losses. Significant angular momentum transport of this type can occur if a global nonaxisymmetric mode develops in an initially gravitationally unstable disk. Such a construction need not be merely a transient initial condition. Such features are semi-permanent, slowly evolving as the disk background parameters change (Papaloizou & Savonije 1991, Papaloizou 1996, Laughlin & Rozyczka 1996).

More recently, careful analyses of self-gravitating disk simulations carried out by Laughlin, Korchagin & Adams (1997, 1998) clearly show angular momentum transport produced by the onset of a global nonaxisymmetric instability and the subsequent generation of a global wave pattern which has extracted energy from the background shear. Transport of this type, which is seen to persist even after initial saturation, certainly does not have the character of that exhibited by an $`\alpha `$ disk (Laughlin & Rozyczka 1996). These simulations, however, involve massive disks (comparable stellar and disk masses); similar studies of lower mass Keplerian disks have yet to be done.

### 4.4 Conditions Under Which Self-Gravity Leads to an $`\alpha `$ Disk

There is a local limit in which the nonlocal energy flux terms vanish and eq. (37) is recovered. It occurs when the shearing box limit is used to study self-gravitating disks. The shearing box approximation is a standard approach to the dynamics of thin disks, both self-gravitating (Goldreich & Lynden-Bell 1965, Julian & Toomre 1966, Toomre 1981) and non-self-gravitating (Goldreich, Goodman, & Narayan 1986; Hawley, Gammie, & Balbus 1995). In this scheme, the disk is divided into local Cartesian patches with periodic boundary conditions applied on their boundaries. Thus one considers a small box in the disk, and sets up local coordinates corotating with the patch center. Strictly periodic boundary conditions are applied in the azimuthal direction, so that $`\varphi `$-averaging amounts to averaging over one azimuthal width of the box. However, in the radial direction, because of the presence of large scale shear, periodicity is applied at boundary points which are azimuthally separating from one another. Related periodic points must shear apart with time. Thus, strict periodicity holds only in comoving, shearing coordinates. Note that there is no preferred location in this description, so the box may be centered anywhere (except, of course, the origin). The significance of these boundary conditions is that they force the divergence of $`𝑭_𝑬^{an}`$ to be zero when averaged over the box. If $`\Delta R`$ is the radial extent of the box, and $`R\pm \Delta R/2`$ represent the outer ($`+`$) and inner ($`-`$) boundaries, then the integrated box average leads to a term of the form

$$\left[\left\langle \frac{\partial \Phi _S}{\partial R}\frac{D\Phi _S}{Dt}\right\rangle _\varphi \right]_{R-\Delta R/2}^{R+\Delta R/2},$$ (70)

which must vanish. The square bracket notation denotes a difference to be taken between the upper and lower indicated locations. The boundary conditions force every fluid element on the inner edge to have a corresponding partner on the outer edge, and the appearance of $`D/Dt`$, rather than $`\partial /\partial t`$, ensures cancellation. Were the partial time derivative used, we would not be forced to this conclusion, because the disk passes by “faster” at one of the boundaries compared with the other. If $`𝑭_𝑬^{an}`$ vanishes (in this averaged sense), we are led to a standard $`\alpha `$ disk model. Clearly, however, this conclusion is entirely driven by the choice of boundary conditions. Energy loss or gain from an evolving wave-like flux would be quite incompatible with this type of periodicity. There is no physical reason for the above boundary conditions to be satisfied in the neighborhood of an arbitrarily chosen disk location.
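To make the shearing-periodic identification explicit, the small sketch below (our own illustration; the sign of the azimuthal drift is a convention, and $`q=3/2`$ assumes a Keplerian rotation law) maps a point leaving the radial boundary back into the fundamental box. Radial images drift apart azimuthally at the rate $`q\Omega L_x`$, which is why strict periodicity holds only in shearing coordinates:

```python
# Shearing-periodic remap for a local box of size Lx x Ly.  A point that
# crosses the radial boundary re-enters on the far side with an azimuthal
# offset that grows linearly in time.  Sign conventions and the Keplerian
# shear rate q = 3/2 are illustrative assumptions.
import numpy as np

def shear_periodic_remap(x, y, t, Lx, Ly, q=1.5, Omega=1.0):
    shift = q * Omega * Lx * t            # azimuthal drift between radial images
    n = np.floor(x / Lx)                  # number of radial box-widths crossed
    return x - n * Lx, (y - n * shift) % Ly

# a fluid element exiting the outer radial edge at t = 2.0
print(shear_periodic_remap(1.2, 0.3, t=2.0, Lx=1.0, Ly=1.0))
```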
Nevertheless, it is possible that there are circumstances under which $`𝑭_𝑬^{an}`$ may in effect vanish. Disks evolving under the influence of their own self-gravity tend to hover near the critical $`Q=1`$ level (Laughlin & Bodenheimer 1994). WKB waves with radial wavenumber $`k_{crit}=\pi G\Sigma /c_S^2\sim 1/(QH)`$ are neutrally stable ($`\omega -m\Omega =0`$); all other wavenumbers propagate. While ostensibly comparable in magnitude to $`𝓕`$, $`𝑭_𝑬^{an}`$ may be smaller in a $`Q=1`$ disk. The most responsive (dominant?) local modes have $`\omega -m\Omega \ll \Omega `$, and this may be enough to suppress $`𝑭_𝑬^{an}`$. The effectiveness of this process depends both upon the disk’s alacrity in maintaining $`Q`$ near unity, and upon the shape of the wave power spectra. Clearly, numerical simulations are needed to resolve the question of whether $`Q=1`$ disks can be treated within the $`\alpha `$ formalism. Values of $`Q`$ near unity are favored, of course, because dropping below this critical level results in vigorous dissipative shock heating, raising the temperature and stabilizing the disk. Rising above $`Q=1`$ allows the disk to cool and become destabilized (e.g., Sellwood & Carlberg 1984). The critical criterion also has some observational support through the work of Kennicutt (1989), who has found that the gaseous $`Q`$ value of active star-forming regions of disk galaxies is near critical. More generally, it is not yet well understood under what conditions heating and cooling will be able to regulate $`Q`$ efficiently in disks, thereby allowing the use of a simplifying $`\alpha `$ formalism.

## 5 Summary

The dynamical foundations of viscous $`\alpha `$ disk models are rooted in the correlated fluctuations which create the underlying turbulent stresses. In this paper, we have shown that the mean flow dynamics of MHD turbulence follows the $`\alpha `$ prescription, and in particular that the disk energy dissipation rate is always given by eq. (37), even if the disk is evolving. The local character of MHD disturbances is itself rooted in the flux freezing equation, which forces local dissipation of the magnetic field in turbulent flow, analogous to vorticity dynamics in an unmagnetized shear layer. The mean flow dynamics of a self-gravitating disk in general cannot be described so simply. Classical viscous disk theory requires a simple restrictive form for the mean momentum and energy fluxes (eqs. and ); neither can depend upon transport properties other than $`\langle u_R\rangle _\rho `$ and $`W_{R\varphi }`$. The energy flux of self-gravitating disks is not reducible to a superposition of these quantities. Instead, what we refer to as anomalous flux terms are present. These terms allow self-gravitating disturbances (not necessarily of WKB form) to propagate nonlocally in the disk via the perturbed gravitational potential; a viscous disk cannot communicate with itself in a similar fashion. The angular momentum flux (strictly conserved) in a self-gravitating disk has the same canonical form it has in a non-self-gravitating disk, proportional to $`W_{R\varphi }`$; the energy flux is fundamentally different. In a non-self-gravitating thin disk, wave energy transport depends upon terms in the flux which, while formally present, are small by order $`H/R`$. In a self-gravitating disk, the additional (non-pressure) terms that are present in the energy flux couple directly to the differential rotation of the disk, as does $`W_{R\varphi }`$.
This additional coupling means in effect that transport becomes global on rotational time scales. Over similar times without self-gravity, the domain of wave influence is restricted to the vertical scale height $`H`$. Shearing box simulations of self-gravitating disks employ boundary conditions which force local behavior, and inevitably must give rise to an $`\alpha `$ disk. Because self-gravity is intrinsically nonlocal in its manifestations, analyzing transport phenomena in self-gravitating disks within the shearing box formalism may be misleading. On the other hand, it is possible that critical $`Q\sim 1`$ disks will be dominated by wavenumbers for which $`|\omega /m-\Omega |/\Omega `$ is small, in which case $`\alpha `$ modeling might be a fair phenomenological description. To date, however, global numerical simulations of massive self-gravitating disks do not seem to lend themselves readily to an $`\alpha `$ formalism. Whether the same is true for self-gravitating disks much less massive than their central stars is not yet known.

We thank J. Hawley for useful discussions. S.A.B. acknowledges support from NASA grants NAG5-3058 and NAG5-7500, and NSF grant AST-9423187. J.C.B.P. acknowledges support from PPARC Grant GR/H/09454.

References

Balbus, S. A., & Hawley, J. F. 1991, ApJ, 376, 214
Balbus, S. A., & Hawley, J. F. 1998, Rev. Mod. Phys., 70, 1
Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton University Press)
Goldreich, P., Goodman, J., & Narayan, R. 1986, MNRAS, 221, 339
Goldreich, P., & Lynden-Bell, D. 1965, MNRAS, 130, 125
Goldreich, P., & Tremaine, S. D. 1979, ApJ, 233, 857 (GT)
Hawley, J. F., Gammie, C. F., & Balbus, S. A. 1995, ApJ, 440, 742
Julian, W. H., & Toomre, A. 1966, ApJ, 146, 810
Kennicutt, R. C. 1989, ApJ, 344, 685
Krause, F., & Rädler, K.-H. 1980, Mean-Field Magnetohydrodynamics and Dynamo Theory (Oxford: Pergamon)
Landau, L. D., & Lifshitz, E. M. 1959, Fluid Mechanics (Oxford: Pergamon)
Laughlin, G., & Bodenheimer, P. 1994, ApJ, 436, 335
Laughlin, G., Korchagin, V., & Adams, F. C. 1997, ApJ, 477, 410
Laughlin, G., Korchagin, V., & Adams, F. C. 1998, ApJ, 504, 945
Laughlin, G., & Rozyczka, M. 1996, ApJ, 456, 279
Lighthill, J. 1978, Waves in Fluids (Cambridge: Cambridge University Press)
Lin, C. C., & Shu, F. H. 1966, Proc. Natl. Acad. Sci., 55, 229
Lynden-Bell, D., & Kalnajs, A. J. 1972, MNRAS, 157, 1
Lynden-Bell, D., & Pringle, J. E. 1974, MNRAS, 168, 603
Papaloizou, J. C. 1996, in Gravitational Dynamics, ed. O. Lahav, E. Terlevich, & R. Terlevich (Cambridge: Cambridge Univ. Press), p. 119
Papaloizou, J. C., & Savonije, G. J. 1991, MNRAS, 248, 353
Pringle, J. E. 1981, ARA&A, 19, 137
Sellwood, J. A., & Carlberg, R. G. 1984, ApJ, 282, 61
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Toomre, A. 1964, ApJ, 139, 1217
Toomre, A. 1981, in The Structure and Evolution of Normal Galaxies, eds. S. M. Fall and D. Lynden-Bell (Cambridge: Cambridge University Press), p. 111
Welsh, W. F., Wood, J. H., & Horne, K. 1996, in Cataclysmic Variables and Related Objects, Proceedings of the 158th Colloquium of the International Astronomical Union, eds. A. Evans and J. H. Wood (Dordrecht: Kluwer), p. 29
# Photon Stars

## 1 Introduction

The starting point of this investigation was the discussion of the Carnot-Bekenstein process in the environment of a Schwarzschild black hole \[HUS\]. There it was assumed that the black hole is surrounded by a cloud of radiation with a local temperature according to Tolman’s equation \[TOL\]

$$T(r)=\frac{T_{\mathrm{\infty }}}{\sqrt{g_{00}(r)}}$$ (1)

where $`T_{\mathrm{\infty }}`$ is the usual Hawking temperature at $`r=\mathrm{\infty }`$. This equation also follows for the equilibrium distribution of photons in relativistic kinetic theory \[NEU\] or from elementary thermodynamical gedanken experiments \[HUS\]. These yield a modified formula for the efficiency of the Carnot-Bekenstein process

$$\eta =1-\frac{T_2\sqrt{g_{00}(2)}}{T_1\sqrt{g_{00}(1)}}$$ (2)

between two heat reservoirs at different heights. In equilibrium we have $`\eta =0`$, which implies (1). All this holds as long as the back reaction of the radiation on the metric can be neglected. But, since $`g_{00}(r)=1-r_S/r`$, at the Schwarzschild radius $`r=r_S`$ the temperature and the energy density diverge and can no longer be regarded as a mere perturbation. Rather, one would have to solve the field equation anew, this time allowing for the energy-stress tensor of the photon gas to act as a source of gravitation. It is not at all clear whether the resulting metric will be a modified black hole in some sense. So we have the following problem, which may be considered independently of the original motivation: calculate the static, spherically symmetric metric of Einstein’s field equations with the energy-stress tensor of a perfect fluid

$$T_{ab}=(\rho +P)u_au_b+Pg_{ab}$$ (3)

consisting of photons, i.e.

$$\rho =3P.$$ (4)

In section 2 the corresponding field equations are transformed into an autonomous two-dimensional system of differential equations which is discussed in section 3. In section 4 we investigate the metric of a photon star for $`r\to 0`$ and study its global properties by means of numerical solutions. Physical characteristics like radius, mass and temperature are defined in section 5. Section 6 contains concluding remarks.

## 2 Transformation of the field equations

Except for eq. (4), the problem stated above is just the well-known problem of constructing the interior solution of a star. We may choose coordinates $`(t,r,\theta ,\varphi )`$ such that the metric is given by

$$ds^2=-f(r)dt^2+h(r)dr^2+r^2d\Omega ^2$$ (5)

where $`f`$ and $`h`$ are unknown functions and $`d\Omega ^2`$ is the surface element of a unit sphere. With respect to these coordinates the field equations boil down to a system of three coupled differential equations (cf. \[WAL\] 6.2.3–6.2.5):

$$\frac{8\pi G}{c^2}\rho =\frac{h^{\prime }}{rh^2}+\frac{h-1}{hr^2}$$ (6)

$$\frac{8\pi G}{3c^2}\rho =\frac{f^{\prime }}{rfh}-\frac{h-1}{hr^2}$$ (7)

$$\frac{16\pi G}{3c^2}\rho =\frac{f^{\prime }}{rfh}-\frac{h^{\prime }}{rh^2}+\frac{1}{\sqrt{fh}}\frac{d}{dr}\left(\frac{f^{\prime }}{\sqrt{fh}}\right)$$ (8)

We recall that Tolman’s equation \[TOL\] was derived under the same assumptions we make here, except spherical symmetry. Hence we may adopt (1) or, equivalently,

$$\rho (r)=\frac{\rho _1}{f^2(r)},$$ (9)

since

$$\rho =\sigma T^4$$ (10)

from local statistical mechanics, where $`\sigma `$ is the Stefan-Boltzmann constant.
We write

$$\rho _1=\frac{c^4}{8\pi G}C$$ (11)

and obtain from (6) and (7)

$$\frac{C}{f^2}=\frac{h^{\prime }}{rh^2}+\frac{h-1}{hr^2}$$ (12)

$$\frac{C}{3f^2}=\frac{f^{\prime }}{rfh}-\frac{h-1}{hr^2}.$$ (13)

Eq. (8) is identically satisfied, which confirms Tolman’s result. Otherwise the system of differential equations would be overdetermined. The set of solutions $`𝒮`$ of the system (12), (13) can be parameterized by 3 parameters, e.g. $`C`$ and two initial values for $`f`$ and $`h`$. But $`𝒮`$ is invariant under the 2-parameter group of scale transformations

$$r\to \lambda r,\quad f\to \mu f,\quad h\to h,\quad C\to \frac{\mu ^2}{\lambda ^2}C$$ (14)

Thus there exists only a 1-parameter family of solutions looking qualitatively different. An equivalent second order equation is obtained by eliminating $`h`$ using

$$h=\frac{3f(f+rf^{\prime })}{Cr^2+3f^2}$$ (15)

and inserting the derivative of (15) into (12):

$$f^{\prime \prime }=\frac{6Crf^2+6Cr^2ff^{\prime }-6f^3f^{\prime }+2Cr^3f^{\prime 2}}{Cr^3f+3rf^3}$$ (16)

For $`C=0`$ we have the well-known equations which lead to a 2-parameter family of Schwarzschild metrics. For $`C\ne 0`$ we may scale every solution such that it becomes a solution with

$$C=1.$$ (17)

The remaining subgroup of (14) with $`\lambda =\mu `$ may be used to simplify the differential equation (16). We perform the transformation

$$s=\mathrm{ln}\frac{r}{r_0},\quad x=\frac{f(r)}{r},\quad y=f^{\prime }(r)+x,$$ (18)

where $`r_0>0`$ is arbitrary. The resulting reduced equations read

$$\frac{dx}{ds}=y-2x$$ (19)

$$\frac{dy}{ds}=\frac{y(2y-3x(x^2-1))}{x(3x^2+1)}.$$ (20)

Note that this system is autonomous. The resulting symmetry $`s\to s+s_0`$ reflects the scale invariance of (16) with respect to the subgroup $`\lambda =\mu `$. Even if we cannot solve it exactly, the reduced system is more accessible to intuition and to the development of approximation schemes. Any two different solutions of (19), (20) correspond to different similarity classes of $`𝒮`$.

## 3 Discussion of the reduced equations

We calculate some typical solutions of (19), (20) numerically and display them as curves in the $`xy`$-plane.

Figure 1: A selection of numerical solution curves of the reduced equations (19), (20) together with the parabolic Schwarzschild approximation.

It is obvious that

$$x_0=\sqrt{\frac{7}{3}},\quad y_0=2\sqrt{\frac{7}{3}}$$ (21)

is a stable stationary point of (19), (20), which is an attractor of the whole open quadrant $`x>0`$, $`y>0`$. It follows by inverting the transformation (18) that

$$f(r)=\sqrt{\frac{7}{3}}\,r,\quad h(r)=\frac{7}{4}$$ (22)

is an exact solution of (12), (13) which is asymptotically approached by any other solution for $`r\to \mathrm{\infty }`$. It follows that the space-time of photon stars is not asymptotically flat. The other exact solution of (19), (20),

$$y=0,\quad x=ae^{-2s},$$ (23)

yields

$$f(r)=\frac{a}{r},\quad h(r)=0,$$ (24)

and is hence unphysical. A typical solution curve of (19), (20) starts from $`x=+\mathrm{\infty }`$, $`y=0`$, $`s=-\mathrm{\infty }`$ and runs close to the solution (23) until it reaches small values of $`x`$. Then, according to the “$`x`$” in the denominator of (20), $`dy/ds`$ increases rapidly and the curve is turned up towards the $`y`$-axis. It then describes a parabola-like bow and approaches the stationary point in a clockwise vortex.
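The qualitative behaviour just described is easy to reproduce. The following small sketch (our own illustration, not the original MATHEMATICA calculation) integrates the reduced system (19), (20) from an arbitrary starting point in the open quadrant and confirms the approach to the stationary point (21):

```python
# Integrate the reduced system (19)-(20) and check convergence to the
# stationary point (x0, y0) = (sqrt(7/3), 2*sqrt(7/3)) of eq. (21).
# The initial condition is an arbitrary point with x, y > 0.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(s, z):
    x, y = z
    return [y - 2.0*x,
            y*(2.0*y - 3.0*x*(x*x - 1.0)) / (x*(3.0*x*x + 1.0))]

sol = solve_ivp(rhs, (0.0, 25.0), [1.0, 1.0], rtol=1e-10, atol=1e-12)

x0 = np.sqrt(7.0/3.0)
print(sol.y[:, -1])        # final (x, y) of the trajectory
print(x0, 2.0*x0)          # stationary point, for comparison
```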
It is instructive to draw the general Schwarzschild solutions ($`C=0`$)

$$f_S(r)=a-\frac{b}{r}$$ (25)

into the $`xy`$-diagram. They are given by the family of parabolas

$$x=y-\frac{b}{a^2}y^2,$$ (26)

which approximate the solution curves of (19), (20) having the same vertex at

$$x_1=\frac{a^2}{4b},\quad y_1=\frac{a^2}{2b}.$$ (27)

In this way, for each solution $`f(r),h(r)`$ of (12), (13) we can define a unique Schwarzschild approximation $`f_S(r),h_S(r)`$.

## 4 Properties of the metric

We now turn to the discussion of the solutions of (16) for $`f(r)`$, which yield $`h(r)`$ by (15). To study the behaviour for smaller $`r`$ we expand $`f`$ into a Laurent series and insert this series into (16). It turns out that the series starts with

$$f(r)=\frac{A}{r}+B+\mathrm{\cdots }$$ (28)

in accordance with a Schwarzschild solution for $`C=0`$. The coefficients $`A,B`$ are left undetermined since they represent the two initial values for (16). The next terms are uniquely determined by (16). It is straightforward to calculate the first, say, 20 terms by using computer algebra software like MATHEMATICA. Here we only note down the first nonvanishing extra terms for $`f`$ and $`h^{-1}`$:

$$f(r)=\frac{A}{r}+B+\frac{CB}{15A^2}r^4+𝒪(r^5),$$ (29)

$$h^{-1}(r)=\frac{A}{Br}+1-\frac{C}{15A^2}r^4+𝒪(r^5).$$ (30)

For small $`r`$, $`f`$ and $`h`$ are also approximated by Schwarzschild solutions, but unlike the approximation discussed above, we have to choose $`A>0`$ in order to obtain a positive solution $`f(r)>0`$. $`f`$ cannot change its sign in a continuous way, since $`f(r_0)=0`$ would imply $`f^{\prime \prime }(r_0)=\mathrm{\infty }`$ by (16). (According to the scale invariance (14), $`f(r)`$ may be multiplied by $`-1`$, but this gives no physically different solution.) Thus we may state that for $`r\to 0`$ the metric looks like that of a Schwarzschild black hole with negative mass, independent of $`C`$. For the geodesic motion close to $`r=0`$ we may thus adopt the effective potential of Schwarzschild theory (\[WAL\] 6.3.15) (with $`M\to -M`$):

$$V=\frac{1}{2}\kappa +\kappa \frac{M}{r}+\frac{L^2}{2r^2}+\frac{ML^2}{r^3}$$ (31)

where

$$\kappa =\{\begin{array}{cc}1&\text{ (timelike geodesics)}\\ 0&\text{ (null geodesics)}\end{array}$$ (32)

It follows that $`r=0`$ can never be reached by particles or photons, due to the infinitely high potential barrier. Although curvature blows up for $`r\to 0`$, as in the Schwarzschild case, the nature of the singularity is less harmful. We conjecture that it cannot be regarded as a “naked singularity” in whatever technical sense (see \[EAR\] for details of the various definitions) and that Cauchy surfaces still exist in the spacetime given by (12), (13), if the line $`r=0`$ is excluded from the spacetime manifold.
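The infinite barrier is immediate from eq. (31): with $`M>0`$ every term is repulsive as $`r\to 0`$. A minimal numerical sketch of our own, in $`G=c=1`$ units with illustrative values of $`M`$ and $`L`$:

```python
# Effective potential (31) for the negative-mass Schwarzschild form near r = 0:
# every term is repulsive, so the barrier diverges and neither particles
# (kappa = 1) nor photons (kappa = 0) can reach r = 0.  M, L are illustrative.
def V(r, kappa=1.0, M=1.0, L=1.0):
    return 0.5*kappa + kappa*M/r + L**2/(2.0*r**2) + M*L**2/r**3

for r in (1.0, 0.1, 0.01):
    print(r, V(r), V(r, kappa=0.0))   # grows without bound as r -> 0
```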
From a computational point of view, the singularity of $`f(r)`$ at $`r=0`$ suggests transforming (16) into a differential equation for

$$F(r):=rf(r)$$ (33)

and re-transforming to $`f(r)`$ after a numerical solution for $`F`$ has been obtained. We used the NDSolve command of MATHEMATICA to produce the following numerical solutions.

Figure 2: A typical numerical solution $`f(r),h(r)`$ of the system (12), (13). $`h(r)`$ has its maximum at $`r=r_0`$. The corresponding Schwarzschild solution $`f_S(r)`$ with $`f_S(r_0)=0`$ is also displayed.

A typical solution is shown in Fig. 2. Recall that for the Schwarzschild metric $`h(r)`$ diverges at $`r=r_S`$ and $`f(r_S)=0`$. For the solution of Fig. 2, $`h(r)`$ has a relatively sharp maximum at $`r_0`$ and $`f(r_0)`$ becomes small. For $`r>r_0`$, $`f`$ and $`h`$ are comparable with their Schwarzschild approximations $`f_S`$ and $`h_S`$, if $`r`$ is not too large. For $`r<r_0`$, $`f`$ remains small within some shell $`r_1<r<r_0`$ and diverges for $`r\to 0`$ according to (29). By (9) this means that the energy density is concentrated within that shell, and $`r_0`$ may be viewed as the “radius of the photon star”. Other solutions with larger values of $`C`$ show a more diffuse cloud of photons and a less sharp maximum of $`h`$; see Figs. 3 and 4. These solutions are all scaled to the same value of $`r_0`$.

Figure 3: Numerical solutions $`h(r)`$ for different $`C`$. The solutions are scaled such that they obtain their maximum at the same value $`r_0=1`$.

Figure 4: Numerical solutions $`f(r)`$ for different $`C`$ and the same scaling as in Fig. 3.

## 5 Physical parameters of a photon star

We have seen that the set of solutions $`𝒮`$ may be characterized by 3 parameters, e.g. $`A,B`$ and $`C`$ in (29). From the analogy with the Schwarzschild case ($`C=0`$) we expect that only a 2-parameter family represents physically different spacetimes. In the Schwarzschild case, one parameter is set to 1 by the choice of units, and the remaining parameter $`r_S`$ distinguishes between black holes of different mass. More specifically, one postulates that the velocity of light, expressed by $`dr/dt`$, approaches 1 for $`r\to \mathrm{\infty }`$. The metric then obtains the form

$$f_S(r)=1-\frac{r_S}{r},\quad h_S=f_S^{-1}.$$ (34)

In the case of the photon star, we cannot proceed in the same way, since the metric will not be asymptotically flat. But instead we may postulate the “gauge condition” that the Schwarzschild approximation of $`f,h`$, defined in section 3, should obey condition (34). If this is not the case, one has to perform a suitable scale transformation (14). In this way we obtain a two-fold of physically different solutions. We now consider physical parameters characterizing this two-fold of solutions. One could be the radius $`r_0`$ of the photon star defined above. In analogy to Schwarzschild theory (\[WAL\] 6.2.7) we introduce the (gravitational) mass function

$$m(r):=\frac{r}{2}\left(1-h^{-1}(r)\right)\frac{c^2}{G}.$$ (35)

In the domain where $`f(r)\approx f_S(r)`$ this is the “would-be mass” of an equivalent black hole. In the domain $`r<r_0`$ the interpretation of (35) is not so obvious. As is to be expected from the above discussion of the metric for $`r\to 0`$, it turns out that $`m(0)<0`$. A typical mass function is shown in Fig. 5, where also the “proper mass”

$$m_p(r)=\frac{4\pi }{c^2}\int _0^r\rho (r^{\prime })h(r^{\prime })^{1/2}r^{\prime 2}dr^{\prime }$$ (36)

and the difference $`m_p-m`$ are displayed.

Figure 5: A typical numerical solution of the mass functions $`m(r),m_p(r)`$ and the difference $`m_p(r)-m(r)`$.

It may be, as in this case, that the majority of the photons are “hidden”, with respect to gravitation, by the apparent negative mass in the center. Nevertheless, we could use

$$m_0=m(r_0)$$ (37)

as a further physical parameter characterizing a photon star. Generally, by (35) and $`h(r_0)>\frac{7}{4}=\underset{r\to \mathrm{\infty }}{lim}h(r)`$,

$$\frac{3}{7}M_S(r_0)<m_0<M_S(r_0),$$ (38)

where

$$M_S(r_0):=\frac{r_0c^2}{2G}.$$ (39)

If $`h(r_0)\gg 1`$ we have $`m_0\approx M_S(r_0)`$, as in the Schwarzschild case. As another physical parameter we consider the “surface temperature”

$$T_0:=T(r_0)=\left(\frac{\rho (r_0)}{\sigma }\right)^{1/4}.$$ (40)

If we have only a two-fold of physically different solutions, as we claimed above, $`T_0`$ should be a function of $`r_0`$ and $`m_0`$.
Indeed, (12) together with $`h^{\prime }(r_0)=0`$ shows that $`f(r_0)`$ is a function of $`r_0`$ and $`h(r_0)`$, hence of $`r_0`$ and $`m_0`$. Then, by (40) and (9), also $`T_0`$ depends only on $`r_0`$ and $`m_0`$. Since $`\sigma `$ depends on $`\hbar `$, the result can be conveniently expressed by using Planck units, indicated by a subscript P:

$$\frac{T_0}{T_P}=\frac{15^{1/4}}{(2\pi )^{3/4}}\left[\frac{m_0}{M_P}\left(\frac{L_P}{r_0}\right)^3\right]^{1/4}.$$ (41)

To give a numerical example, if we take $`m_0`$ as the mass of the sun and $`r_0`$ as the corresponding Schwarzschild radius, we obtain $`T_0\approx 4\cdot 10^{12}`$ K. This would correspond to very hard gamma radiation with a wavelength $`\lambda \sim 10^{-15}`$ m. The Hawking temperature of this example is $`T_H\sim 10^{-8}`$ K, since $`T_H\propto m^{-1}`$ whereas $`T_0\propto m_0^{-1/2}`$ for $`m_0\approx M_S(r_0)`$.

## 6 Conclusion

It is difficult to assess the physical relevance of our findings. But one point seems clear: the global character of the solutions $`f,h`$ is completely different from the Schwarzschild approximations $`f_S,h_S`$, no matter how small $`C`$ is. So the class $`𝒮`$ of solutions of (12), (13) does not depend continuously on $`C`$ in any reasonable sense. Any small amount of radiation will destroy the event horizon, at least if an equilibrium is approached. If this were also the case in simulations of the birth of “black holes” by collapsing matter, then they would never be born, and perhaps do not exist at all.
# The Prediction and Detection of UHE Neutrino Bursts

## I Introduction

Gamma ray bursts (GRBs) are presently the most enigmatic astrophysical phenomenon. Recent observations indicate that they originate from cosmological sources. They are observed by satellites near the Earth at a rate of $`\sim `$ 1 per day. The relativistic fireball model is consistent with all observed features of GRBs and has been used by Waxman and Bahcall to predict a measurable flux of $`10^{14}`$ eV neutrinos. According to this model, a detector of $`\sim `$ 1 $`\mathrm{km}^2`$ effective area will observe $`\sim `$ 20 neutrino-induced muons per year in coincidence with GRBs. Other models of astrophysical processes also demand the production of high-energy neutrinos, including other burst models, AGN models, and topological string models. The Neutrino Burster Experiment (NuBE) will measure the flux of UHE ($`>`$ 1 TeV) neutrinos over a $`\sim `$ $`\mathrm{km}^2`$ effective area and will test the fireball model stringently and uniquely, with an inexpensive, quick and robust experiment.

## II UHE Neutrinos Coincident with GRBs, Different Models

### A Ultra-relativistic Fireball Model

General phenomenological considerations indicate that GRBs could be produced by the dissipation of the kinetic energy of a relativistically expanding fireball. According to Waxman and Bahcall, a natural consequence of the dissipative cosmological fireball model of gamma ray bursters is the conversion of a significant fraction of the fireball energy into an accompanying burst of $`10^{14}`$ eV neutrinos, created by photomeson production of pions in interactions between the fireball $`\gamma `$ rays and accelerated protons. The basic picture is that of a compact source producing a relativistic wind. The variability of the source output results in fluctuations of the wind bulk Lorentz factor, which lead to internal shocks in the ejecta. Both protons and electrons are accelerated at the shocks, and $`\gamma `$ rays are radiated by synchrotron and inverse Compton radiation of the shock-accelerated electrons. The accelerated protons undergo photomeson interactions and produce a burst of neutrinos to accompany the GRB. Figure 1 illustrates the expected neutrino flux from the GRB model described above in comparison to a typical AGN model and the expected atmospheric neutrino background. These neutrino bursts should be easily detected above the background, since they would be correlated both in time and in angle with the GRB $`\gamma `$ rays.

### B Cosmic String Model of Neutrino Production

UHE neutrinos may also originate in cosmic strings. Cosmic strings are topological relics from the early universe which could be superconducting and carry electric current under certain circumstances. A free string (a nonconducting string uncoupled from electromagnetic and gravitational fields) generically attains the velocity of light at isolated points in time and space, which are known as cusps. Superconducting cosmic strings (SCS) emit energy in the form of classical electromagnetic radiation and ultra-heavy fermions or bosons which decay or cascade at or near the cusp. Using recent progress on the nature of electromagnetic symmetry restoration in strong magnetic fields, a study of the decay products of ultraheavy fermions near SCS cusps, consistent with an SCS explanation of $`\gamma `$ ray bursts, shows that the energy emitted from the cusps is mostly in the form of high energy neutrinos. The neutrino flux is roughly nine orders of magnitude higher than that of the $`\gamma `$ rays.
Therefore this is another model that predicts high energy neutrinos to be observed in coincidence with $`\gamma `$ ray bursts.

## III Detection of UHE Neutrino Bursts

### A Description of the Detector

The Neutrino Burst Experiment (NuBE) is designed to search for UHE neutrinos in coincidence with GRBs. NuBE is a water Cherenkov detector whose simple design derives from the very high energy of the GRB neutrinos. The expected energy of the neutrinos in the fireball model is $`\sim `$ 100 TeV, which leads to Cherenkov signals detectable with high efficiency at large distances from the core track. We can detect a muon core when the muon neutrino interacts in material within $`\sim `$ 10 km of the array, leading to a highly radiative muon observable with high efficiency at perpendicular distances $`>`$ 150 m from the core all along its multi-km length. An electron neutrino has a core track which is itself only a few meters in length, but the light from this short core is intense and can be seen by the proposed array at distances in excess of 500 m. Coincidence is required with measurements of the photon arrival time and the GRB location provided by detection in satellites. The 4$`\pi `$ NuBE detector approximates a sphere of diameter $`>`$ 700 m, creating an effective area of $`>`$ 1.5 $`\mathrm{km}^2`$. The detector consists of four strings placed in the clear water of the deep ocean with their anchors at the corners of a square having $`>`$ 400 m sides, as shown in Figure 2. Each string has two photon-detector nodes separated by $`>`$ 400 m along the string. Each node acts independently of the other 7 nodes in the array, having its own local trigger, data acquisition and storage, thus providing robustness and redundancy. Local node clocks are periodically synchronized using bright flashes of blue light from calibration spheres at each node. Absolute time is kept via these clocks to an accuracy of $`<`$ 1 second per year. A signal of a high energy event in NuBE consists of a locally triggered event in any node occurring within 5 $`\mu `$s of a locally triggered event in any other node. The 5 $`\mu `$s accounts for the muon or photon transit time across the array. The coincidence that indicates high energy events is determined off-line. In a 2-node event the time difference between the arrival times at each node gives the incident track direction on a cone, while events having more nodes hit provide the incident direction to within as little as $`\sim `$ 10 degrees. This angular resolution capability provides robust verification of correlation with the GRB for any signal that arrives in the GRB time window. The electronics connecting the photon detector to the data acquisition system is straightforward, requiring no further R&D. It includes a 5 ns resolution TDC for relative arrival times, which allows up/down discrimination on local events and provides a measurement of the number of photoelectrons via the time over threshold of the signal. Much of the detector can be assembled from “off-the-shelf” items; anchors, strings and housing spheres are items of commerce familiar to many of our collaborators. Deep-sea rated battery packs on each node can provide $`>`$ 1 year of untended operation. The detector is easy to deploy and to recover in any of a variety of locations, since it doesn’t require accurate positioning. Placing the strings at a site off the coast of St. Croix in the US Virgin Islands can be done easily by vessels of opportunity and with minimal schedule lead time.
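The two-node timing cone mentioned above is simple geometry: for a track moving at essentially $`c`$, the arrival-time difference between two nodes separated by a baseline $`d`$ fixes the angle between the track and the baseline. A small sketch of our own, with made-up numbers:

```python
# Two-node timing: for a relativistic track, the time difference dt between
# nodes a baseline d apart constrains the direction to a cone about the
# baseline with cos(theta) = c*dt/d.  The inputs below are illustrative.
import numpy as np

C = 299792458.0   # m/s

def cone_half_angle(dt, baseline):
    cos_theta = C * dt / baseline
    if abs(cos_theta) > 1.0:
        raise ValueError("dt too large for this baseline")
    return np.degrees(np.arccos(cos_theta))

print(cone_half_angle(0.5e-6, 400.0))   # ~68 degrees for dt = 0.5 us, d = 400 m
```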
The St. Croix site has the additional advantage of providing 4 km deep water within 15 km of the shore, a clear virtue for site visits and data retrieval. The 4 km depth attenuates the cosmic ray muon background to a few per minute per node, an ideal calibration rate. Up/down discrimination allows us to calibrate against the Superkamiokande experiment for their highest energy upward signals, giving verification of an energy threshold for each node. NuBE provides $`>`$ 1 $`\mathrm{km}^2`$ collecting area in its 4-string implementation and can tell us quickly whether the fireball model is correct in its predictions of high-energy neutrino bursts. The total project, from initial approval to completion of data analysis, will take $`<`$ 3 years and cost $`<`$ $3M.

### B Underlying Physics in the Detection of Neutrino Bursts

It is important to maximize the physical inferences that can be drawn from coincident photon and neutrino detection. The possible simultaneous or near-simultaneous observation of neutrinos and $`\gamma `$ rays will provide many important new insights into the properties of neutrinos and GRB sources. It will also yield a novel test of the weak equivalence principle (WEP). This has been previously noted for the SN1987A explosion, where neutrinos were observed within a known (but large) time interval of the $`\gamma `$ ray emission. The same tests can be done with much higher accuracy and better statistics, since we will be dealing with multiple sources at cosmological distances. The fact that the mystery of the distance scale for GRBs has been solved for some sources makes this statement stronger. The neutrinos from GRBs can be used to test the limits of the relativity principles, as was done for the neutrino emission from SN1987A. Neutrinos from GRBs could be used to test the simultaneity of neutrino and photon arrival to an accuracy of $`\sim `$ 1 s ($`\sim `$ 1 ms for short bursts), checking the assumption that photons and neutrinos should have the same limiting speed. Considering a burst at $`\sim `$ 100 Mpc observed with $`\sim `$ 1 s accuracy, as an example, a fractional difference in limiting speed of $`10^{-16}`$ could be revealed. This may be compared to the SN1987A value of $`10^{-8}`$. According to the WEP, photons and neutrinos should suffer the same time delay as they pass through a gravitational potential. If the most influential gravitational potential along the path is that of the local galaxy, we can compute the time difference that would result from various trajectories with respect to the galactic nucleus, the suspected site of a black hole. NuBE detection of GRB neutrinos would allow a test of the WEP to an accuracy of $`10^{-7}`$. Measurements of the low energy neutrinos from supernova 1987A probed this value to $`10^{-2}`$. On the other hand, the most influential gravitational potential sampled may be near the source itself. If we see nearly the same delay for all GRB events regardless of distance, this may point to a failure of general relativity in predicting the exit time from the source. Since there are several GRB sources, the corresponding statistics would improve, and the GRB sources being much further away than SN1987A would offer a new distance scale and improved sensitivity.
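The limiting-speed sensitivity quoted above follows from dividing the timing accuracy by the light travel time. A quick sketch of the arithmetic (our own check):

```python
# Fractional limiting-speed difference probed by an arrival-time difference
# dt from a source at distance D:  |delta c| / c  ~  dt / t_travel.
MPC_M = 3.0857e22      # one megaparsec in meters
C = 299792458.0        # m/s

def speed_fraction_bound(dt_s, distance_mpc):
    t_travel = distance_mpc * MPC_M / C    # light travel time, seconds
    return dt_s / t_travel

print(speed_fraction_bound(1.0, 100.0))     # ~1e-16 for 1 s at 100 Mpc
print(speed_fraction_bound(1.0e-3, 100.0))  # ~1e-19 for a 1 ms short burst
```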
It would be interesting to investigate the possible detection of tau neutrinos. This would imply neutrino oscillations in transit, because none of our astrophysical models predict tau neutrinos to be produced at the source. The key signature is the charged current tau neutrino interaction, which produces a double cascade, one on either end of a minimum ionising track. Tau neutrinos could in principle be identified by these double bang events, but the two individual bangs would be very difficult to resolve in our proposed detector. This experiment could also be an indirect test pointing to the model for the highest energy cosmic ray production. There have been suggestions that GRBs could be the source of Ultra High Energy (UHE) Cosmic Rays (CRs). This source model is consistent with the observed CR flux above $`10^{20}`$ eV. For a homogeneous GRB distribution this model predicts an isotropic, time dependent CR flux. Thus the large distances, short emission times, and trajectories through varying gravitational fields lead to the potential for tests of some fundamental neutrino properties not possible in terrestrial laboratories. Limits may also be placed on the neutrino mass, lifetime, and electric charge, and on neutrino oscillation parameters.

## IV Conclusions and Present Status of Field

There are a number of active efforts to observe high energy astrophysical neutrinos, including AMANDA, NESTOR, Baikal, ANTARES and Superkamiokande. The relatively dense instrumentation of these experiments, compared to NuBE, is intended to derive the source origin by pointing back along the neutrino trajectory with a high degree of accuracy. The Superkamiokande detector in particular provides an excellent calibration point for a large area detector because of its very high efficiency for GeV neutrinos. However, these are all relatively small arrays and consequently will detect at best only one or two neutrinos coincident with GRBs per year. NuBE, in comparison, is aimed at making a large ($`>`$ 1 $`\mathrm{km}^2`$ effective area), sparse detector to look specifically for neutrinos $`>`$ 10 TeV and to determine whether they are in coincidence with GRBs. It is the most efficient and robust way of constructing a detector for UHE neutrino bursts and will detect $`\sim `$ 20 events per year (as predicted by the fireball model).
# Stripe Disordering Transition

## Abstract

We have recently begun Monte Carlo simulations of the dynamics of stripe phases in the cuprates. A simple model of spinodal decomposition of the holes allows us to incorporate Coulomb repulsion and coherency strains. We find evidence for a possible stripe disordering transition, at a temperature below the pseudogap onset. Experimental searches for such a transition can provide constraints for models of stripe formation.

The relationship between stripe phases and the pseudogap in underdoped cuprates is not well understood. In our model \[Pstr, MKK, Mia\] the pseudogap is primary. It represents an instability of the hole Fermi liquid driven by Van Hove nesting \[RiSc\]. However, there is a competition of instabilities, with an antiferromagnet (or flux phase \[Affl, Laugh, WeL\]) at half filling and a charge-density wave (CDW) at the bare Van Hove singularity (VHS) near optimal doping. This competition leads to a classical phase separation of the holes – two minima in the free energy \[RM3, Pstr\]. This is restricted to a nanoscopic scale by long-range Coulomb effects, leading to phases similar to the experimentally observed stripe phases \[Tran\]. For such nanoscale phase separation, the correct dispersion and pseudogap must be found by appropriate averaging over the heterogeneous, usually fluctuating stripes. Fortunately, tunneling and photoemission are sensitive mainly to the pseudogaps, and hence can be described by a simple Ansatz of the stripe phase \[MKK, Mia\]. For other purposes, a more detailed picture of the stripes is needed. As a first step, we have begun Monte Carlo calculations of a classical picture of this restricted phase separation. Using the derived form of the free energy vs doping, we calculate the dynamic spinodal decomposition of the holes in the presence of Coulomb interactions. We find that there can be a stripe disordering transition, Fig. 1, at a temperature below the pseudogap onset.

Technical details of the calculation are as follows: we work with a generic form of the free energy, $`F=F_0x(x-x_c)^2`$, which approximates the calculated free energy of Ref. \[Pstr\]. The calculations are done on 128$`\times `$128 lattices, with periodic boundary conditions. The critical doping $`x_c`$ is taken as 1/6, which necessitates a non-Markovian algorithm – a particular lattice site must retain memory of the average hole occupation over several cycles. We typically choose 30 cycles, which means that a single hole must spread out over 6 lattice sites – close to the size of a magnetic polaron \[Auer\]. The algorithm chosen is able to find the correct ground states in the low doping limit (which can be found analytically). The stripes are not topological, and the stripe-like domains are produced by coherency strains \[FraPe\]. In the absence of such strains, the domains would be irregularly shaped and approximately equiaxed, as found by Veillette et al. \[VBBK\]. The coherency strains produce a mixture of stripes along both $`x`$ and $`y`$ axes; to get single-axis stripes, as in the figure, it is assumed that there are local martensitic domains.
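For orientation, the sketch below gives a deliberately stripped-down version of such a simulation (our own toy code, not the algorithm of the paper): particle-conserving Kawasaki moves with the double-well free energy evaluated on a coarse-grained local doping, and a short-range repulsion standing in for the long-range Coulomb term. The coherency strains and the 30-cycle occupation memory are omitted, and all parameter values are illustrative.

```python
# Toy lattice-gas version of hole spinodal decomposition: Kawasaki (exchange)
# Metropolis dynamics with F = F0*x*(x - x_c)^2 on a coarse-grained local
# doping, plus a nearest-neighbour repulsion V.  Everything here is a
# simplified stand-in for the calculation described in the text.
import numpy as np

rng = np.random.default_rng(1)
L, x_c, F0, V, T = 64, 1.0/6.0, 8.0, 0.05, 0.02
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def site_energy(occ, i, j):
    nb = (occ[(i+1) % L, j] + occ[(i-1) % L, j]
          + occ[i, (j+1) % L] + occ[i, (j-1) % L])
    x = (occ[i, j] + nb) / 5.0               # coarse-grained local doping
    return F0*x*(x - x_c)**2 + V*occ[i, j]*nb

def patch_energy(occ, sites):
    return sum(site_energy(occ, i, j) for i, j in sites)

def kawasaki_sweep(occ):
    for _ in range(L*L):
        i, j = rng.integers(0, L, size=2)
        di, dj = MOVES[rng.integers(4)]
        k, l = (i+di) % L, (j+dj) % L
        if occ[i, j] == occ[k, l]:
            continue                         # exchange would change nothing
        patch = {(i, j), (k, l)}
        patch |= {((i+a) % L, (j+b) % L) for a, b in MOVES}
        patch |= {((k+a) % L, (l+b) % L) for a, b in MOVES}
        e0 = patch_energy(occ, patch)
        occ[i, j], occ[k, l] = occ[k, l], occ[i, j]
        dE = patch_energy(occ, patch) - e0
        if dE > 0 and rng.random() >= np.exp(-dE/T):
            occ[i, j], occ[k, l] = occ[k, l], occ[i, j]   # reject the move

occ = (rng.random((L, L)) < x_c).astype(int)   # holes at overall doping x_c
for _ in range(100):
    kawasaki_sweep(occ)
# histogram of the coarse-grained local doping then shows the two-peak structure
```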
The phase separation can be most clearly seen in a plot of the distribution of site occupancies by holes, Fig. 2. At low temperatures, this is a two-peaked structure, with one peak (off scale in the figure) at zero doping, and the other near $`x_c`$ (it is actually at a doping below $`x_c`$, due to charging effects). As the temperature increases, the two-peak structure is gradually smeared out, and at high temperatures there is only a monotonic distribution. This finite system has a crossover rather than a sharp transition. For the parameters chosen, the transition is centered near $`k_BT_m\approx 30`$ meV, which is approximately the barrier height of the free energy (inset). This result is not very sensitive to the value of the dielectric constant, $`ϵ`$.

Thus, as an underdoped cuprate cools from high temperatures, there can be a series of phase transitions. At high temperatures, there will be the pseudogap onset. In our simplified mean field Ansatz \[Mia\], this appears as a long-range ordered CDW phase, but the inclusion of two-dimensional fluctuations \[KaSch, RM5\] leads to appropriate pseudogap behavior. The stripe phase ordering temperature found here could in principle fall at a lower temperature. The stripes in our simulations continue to fluctuate, and the long-range stripe order phase seen by Tranquada \[Tran\] may be yet another transition. The two-branched transition to a stripe phase bears some resemblance to the phase diagram of Emery, Kivelson, and Zachar \[EKZ\], but is in fact different. Their upper transition ($`T_1^{\ast }`$) corresponds to the onset of stripe order, their lower ($`T_2^{\ast }`$) to the onset of a spin gap on the hole-doped stripes. There is not much experimental evidence for the onset of short-range stripe order, although phase separation in La<sub>2</sub>CuO<sub>4+δ</sub> starts near 400 K \[Rad\], much lower than the pseudogap onset temperature, $`\sim `$ 800 K \[BatT\]. In most materials, the incommensurate magnetic modulations near $`(\pi ,\pi )`$ broaden out and disappear near the pseudogap $`T^{\ast }`$, which is a lower temperature ($`\sim `$ 150 K for the compositions studied) \[Moo\]. The best place to look would be in the extremely underdoped regime, where $`T^{\ast }`$ is highest.

While the above calculations reproduce the general properties of the stripes, there are a number of features which are not well reproduced. First, for the elastic constants of LSCO \[Mig\], the stripes lie along the orthorhombic axes – i.e., they are diagonal stripes. Further, for the parameters assumed, the charged stripes tend to grow wider with increased doping, maintaining a constant interstripe spacing, whereas experiment \[Yam\] suggests that the stripe shape stays constant but the stripes move closer together as doping increases, at least for $`x\lesssim 0.12`$. This suggests that some important feature has been omitted from the model, most probably the topological nature of the stripes as magnetic antiphase boundaries.

MTV’s work was supported by DOE Grant DE-FG02-85ER40233. Publication 758 of the Barnett Institute.
# The Laser Interferometer Gravitational-wave Observatory Scientific Data Archive

## 1 Introduction

Despite an 83 year history, our best theory explaining the workings of gravity — Einstein’s theory of general relativity — is relatively untested compared to other physical theories. This owes principally to the fundamental weakness of the gravitational force: the precision measurements required to test the theory were not possible when Einstein first described it, or for many years thereafter. It is only in the last 35 years that general relativity has been put to significant test. Today, the first effects of static relativistic gravity beyond those described by Newton have been well-studied using precision measurements of the motion of the planets, their satellites and the principal asteroids. Dynamical gravity has also been tested through the (incredibly detailed and comprehensive) observations of the slow, secular orbital decay of the Hulse-Taylor binary pulsar system. What has not heretofore been possible is the direct detection of dynamical gravity — gravitational radiation. That is about to change. Now under construction in the United States and Europe are large detectors whose design sensitivity is so great that they will be capable of measuring the minute influence of gravitational waves from strong, but distant, sources. The United States project, the Laser Interferometer Gravitational-wave Observatory (LIGO), is funded by the National Science Foundation under contract to the California Institute of Technology and the Massachusetts Institute of Technology. Both LIGO and its European counterpart VIRGO will generate enormous amounts of data, which must be sifted for the rare and weak gravitational-wave signals they are designed to detect. To understand the LIGO data problem, one must first understand something of the LIGO detector (§2) and the signals it hopes to observe (§3), since these determine the size of the data archive and place challenging constraints on its organization. In the following sections I describe the magnitude and character of the data generated by LIGO (§4), how the data will be collected and staged to its final archive (§5), the kinds of operations on the data that must be supported by the archive and associated data analysis system (§6), anticipated data access patterns (§7), some of the criteria involved in the design of the LIGO Data Analysis System (LDAS) (§8), and a proposed strategy for the staged use of the several components of the LIGO Data Analysis System (§9).

## 2 The LIGO Detector

The LIGO Project consists of three large interferometric gravitational wave detectors. Two of these detectors are located in Hanford, Washington; the remaining detector is located in Livingston, Louisiana. At each LIGO site is a large vacuum system, consisting of two 4 km long, 1 m diameter vacuum pipes that form two adjacent sides of a square: the detector’s arms. Laser light of very stable frequency is brought to the corner, where a partially reflecting mirror, or beamsplitter, allows half the light to travel down one arm and half the light to travel down the other arm. At the end of each arm a mirror reflects the light back toward the corner, where it recombines optically at the beamsplitter. (In fact, LIGO utilizes several additional mirrors that permit the light to traverse the detector arms many times before recombining at the beamsplitter.
This detail, while important for increasing the sensitivity of the detector, is not important for understanding the basic operation of the instrument.) This basic configuration of lasers and mirrors, illustrated schematically in figure 1, is called an interferometer. The nature of light is such that, when it recombines in this way at the beamsplitter, some of the light will travel back toward the laser and some of the light will travel in an orthogonal direction. The amount of light traveling in each direction depends on the ratio of the difference in the arm lengths to the wavelength of the light, modulo unity. The laser light wavelength used in LIGO is approximately 1000 nm; consequently, by monitoring the amplitude of the light emerging from the beamsplitter and away from the laser, each LIGO interferometer is sensitive to changes in the arm length difference of much better than one part in $`10^{10}`$. (How much better depends on the laser power incident on the beamsplitter and the number of arm traversals before recombination at the beamsplitter; see the previous note.) The initial LIGO instrumentation will be capable of measuring changes in the arm length difference to better than one part in $`10^{21}`$ of the arm length. The signature of a gravitational wave incident on a single LIGO interferometer is a time-varying change in its arm length difference. Since the arm length difference is a single number at each moment of time, the “gravitational-wave” data channel is a single number as a function of time: a time series. The two Hanford interferometers are of different lengths: one has arms of length 4 km, while the other has arms of length 2 km. The Livingston interferometer has 4 km arms. Together the three interferometers can be used to increase confidence that observed signals are actually due to gravitational waves: the geographic separation of the two sites reduces the likelihood that coincident signals in the two detectors are due to something other than gravitational waves; additionally, a real gravitational wave will have a signal in the 2 km Hanford interferometer of exactly half the amplitude of the corresponding signal in the 4 km Hanford interferometer. Finally, while each interferometer is relatively insensitive to the incident direction of a gravitational wave signal, the geographic separation of the two 4 km detectors, together with data from the French/Italian VIRGO detector, may permit the sky location of an observed source to be determined from the relative arrival times of the signal in the several detectors. Joint analyses of the output of several interferometers are critical to the scientific success of the gravitational wave detection enterprise.
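As a simple illustration of the interference readout described above, the sketch below evaluates the output-port power of an idealized, lossless Michelson interferometer as a function of the arm length difference. This is our own toy model, not LIGO code, and the 1064 nm wavelength is an assumed value near the roughly 1000 nm quoted in the text:

```python
# Idealized Michelson response: fraction of the input power appearing at the
# output (antisymmetric) port as the arm-length difference dL varies on the
# scale of the laser wavelength.  Lossless, single-pass toy model.
import numpy as np

WAVELENGTH = 1064e-9   # assumed laser wavelength in meters

def output_power_fraction(dL):
    phase = 4.0 * np.pi * dL / WAVELENGTH   # round-trip phase difference
    return np.sin(phase / 2.0) ** 2

for dL in (0.0, WAVELENGTH/8, WAVELENGTH/4):
    print(dL, output_power_fraction(dL))    # dark port, half power, full power
```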
## 3 LIGO Signals The nature of the signals expected to be present in the LIGO data stream determines the character of the data analysis. That, in turn, determines how the data will be accessed, the archive structure and the data life-cycle. In this section we consider the types of signals that may be expected in the LIGO data stream and how these determine the amount of data that must be archived and made accessible. Despite their unprecedented sensitivity, the LIGO detectors will be able to observe only the strongest gravitational radiation sources the Universe has to offer. These are all astronomical in origin. It is in this sense that LIGO is an observatory, as opposed to an experiment: while in an experiment both the source and the receiver can be controlled, astronomical sources can only be studied in situ. The most intense radiation LIGO may observe is thought to come in short bursts, such as arise shortly before or during the collision of orbiting neutron stars or black holes. These bursts of radiation are expected to last from seconds to minutes. The character of the anticipated burst sources is such that, for many, the only anticipated signature of the source is the imprint it leaves in a gravitational wave detector. Consequently, LIGO cannot rely on some other instrument, such as an optical or gamma-ray telescope, to signal when to look, or not look, for most burst sources. Since burst sources of gravitational radiation are expected, for the most part, to leave no significant signature in instruments other than gravitational wave detectors, we have very little real knowledge of the expected rate of burst sources. Present estimates of burst rates are based on limited astronomical observations of nearby burst source progenitors, coupled with theoretical estimates of their formation rate and evolution. These estimates suggest that the rate of bursts from anticipated sources observable directly with the initial LIGO instrumentation is unlikely to exceed one per year, even in the most optimistic scenarios (planned enhancements and upgrades will increase the expected rate by several orders of magnitude). These estimates are, in reality, quite weak. The rate estimate for bursts from inspiraling binary neutron star systems, which is the firmest of all event rates, is uncertain by several orders of magnitude. Several anticipated burst sources are unobservable except by gravitational wave detectors. Finally, all source rate estimates apply only to anticipated burst sources, and the nature of our knowledge of the cosmos gives good reason to believe that there may be unanticipated sources that these new detectors can observe. The proper conclusion, then, is that the first observations will likely teach us more than we can presently anticipate. In addition to burst sources, LIGO may also be able to detect radiation from sources that are long-lived and nearly monochromatic. The instantaneous power in these periodic sources will be much less than in the burst sources; however, through coherent observation over time scales of several months or longer, a measurable signal may emerge. Unlike burst sources, periodic signals are always “on”; as with burst sources, continuous observations over month to year periods are necessary if LIGO is to have a reasonable prospect of observing any that are present. Finally, LIGO may be sensitive to a stochastic signal, arising from processes in the early Universe or from the confusion limit of, e.g., a large number of sources each too weak to be detected individually. Like a periodic source, a stochastic signal is always on; also like a periodic source, LIGO will require continuous observation over a period of several months if it is to detect a stochastic signal of even the most optimistic strength. Lastly, unlike either a burst source or a periodic source, a stochastic signal appears in a single interferometer to be no different from intrinsic detector noise: it is only in the correlation of the output of two or more geographically separated detectors that a stochastic signal can be distinguished from intrinsic instrumental or terrestrial noise sources.
Detection of any of the anticipated LIGO sources thus requires continuous and high duty-cycle observations over periods of months to years. Additionally, the signature of gravitational wave sources in LIGO is apparent in the behavior of the detector over a period of time, which may be quite long. As a consequence, LIGO data cannot immediately be organized into “events” that are cataloged, stored and analyzed independently: the temporal relationships in the detector output are of fundamental importance and must be preserved over the entire duration of the experiment if the data is to be analyzed successfully. Finally, analysis of the LIGO data for at least one potential source — a stochastic signal — requires the cross-correlation of the data from several, geographically separated interferometers, which places an additional requirement on the simultaneous accessibility of data from multiple interferometers at the same epoch. ## 4 LIGO data types The LIGO data archive will include the data collected at the instrument, information about the data and the instrument, and information derived from the data about the data and the instrument. Different classes of data will have different lifetimes; similarly, the kinds of access required of different data classes differ. In recognition of this, several different high-level data types will be supported by LIGO, and different data classes will be stored in different cross-reference databases, catalogs or repositories. In this section I describe the four different data types and three different catalogs that will be created and maintained for LIGO data. The first two data types — frame data (§4.1) and meta-data (§4.2) — are long-lived objects associated with their own catalogs. The third data type — “events” (cf. §4.3) — is also associated with its own catalog, but is more transient. The fourth data type — “light-weight” data — is intended to support import and export of LIGO data to and from the LIGO Data Analysis System (LDAS), so that investigations can take advantage of the wide range of general purpose tools developed for studying data sets. ### 4.1 LIGO frame data and frame data catalog LIGO data will be recorded digitally. Since LIGO is sensitive only to radiation at audio frequencies, the gravitational-wave channel is recorded with a bandwidth typical of audio frequencies: 8.192 KHz, corresponding to a Nyquist sampling frequency of 16.384 KHz.<sup>3</sup><sup>3</sup>3We adopt the usual, if confusing, convention that a KHz is $`10^3`$ Hz, while a KByte is $`2^{10}`$ bytes. The signal itself will be recorded with 2 byte integer dynamic range; consequently, the gravitational-wave channel generates data at a rate of 32 KBytes/s/IFO (where IFO denotes a single interferometer). By itself this is a relatively modest data rate: 2 days of a single LIGO interferometer’s gravitational-wave channel could fit on a single uncompressed exabyte tape. In order for each LIGO interferometer to achieve the requisite sensitivity, however, numerous control systems must operate to continuously adjust the laser, mirrors and other detector sub-systems. Additionally, physical environment monitors will record information on the seismic, acoustic, electromagnetic, cosmic ray, power-grid, residual vacuum gas, vacuum contamination, and local weather conditions that could affect the detector operation.
There will be 1,262 data channels of this kind recorded at the Hanford, Washington Observatory, and 515 data channels recorded at the Livingston, Louisiana Observatory, at a variety of rates and dynamic ranges, corresponding to a total data rate of 9,479 KBytes/s at Hanford and 4,676 KBytes/s at Livingston. In the course of a year, LIGO will have acquired over 416 TBytes, and the first LIGO science observation is expected to last for 2 yrs, from 2002 to 2004. ### 4.2 LIGO meta-data and meta-data catalog In addition to LIGO data arising from the instrument control systems and environmental monitors, a separate data catalog will be accumulated, consisting initially of at least the operator logbook, instrument state or configuration information, and other summary information about each detector and its physical environment that may be deemed relevant to the later understanding of the data stream. The resulting meta-data is neither continuous nor periodic. On the other hand, entries are keyed to the main data, either precisely or by epoch. The rate of meta-data is expected to be, on average, 10 KBytes/s. Meta-data entries will include text narratives, tables, figures, and camera images. Entries may also include snippets of data derived or summarized from one or more channels of the main LIGO data stream, from other experiments or from observations made at other facilities. Finally, the meta-data is, unlike the main data stream, meant to be extensible: as the LIGO data stream is analyzed, annotations and results will be summarized as meta-data. The meta-data is thus the record of everything that is known or learned about the frame data at any given time or during any given epoch. ### 4.3 LIGO event data and event data catalog As analysis proceeds, certain features of the LIGO data will be identified as “events”. These will be recorded in an event data catalog, which is distinct from the meta-data catalog. An event, in this context, is not necessarily of short or limited duration and may not even have a definite start or end time: for example, evidence of an unanticipated coherent, periodic signal in some data channel would be considered an event. Some data features classified as events may eventually be recognized as gravitational wave sources; however, the vast majority of events will be instrument artifacts or have some other, terrestrial or non-gravitational wave origin. As events are investigated and come to be understood, they will move from the event catalog to the meta-data catalog. ### 4.4 LIGO “light-weight” data The LIGO Data Analysis System (LDAS) will provide specialized tools for the efficient manipulation of LIGO frame data, meta-data and event data. To permit LIGO data analysis to take advantage of the much wider range of general purpose tools developed for investigating data sets, a mechanism for exporting relatively small amounts of LIGO data to these applications, and importing the annotated results of investigations made outside the LDAS framework, will be provided. This mechanism will be provided in the form of a “light-weight” data format, which is sufficiently flexible that it can be read and written by other applications (e.g., Matlab) with a minimum amount of overhead. Light-weight data will not have the permanence of event data, meta-data or raw data: the results of investigations undertaken outside the LDAS framework will eventually be integrated into the LDAS framework as event data or meta-data.
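As a cross-check of the rates and volumes quoted above, the short Python sketch below reproduces the per-interferometer gravitational-wave channel rate and the aggregate yearly volume from the stated numbers, using the conventions of footnote 3:

```python
# Back-of-the-envelope check of the LIGO data rates quoted in Sec. 4.
# Conventions follow the text: 1 KHz = 1e3 Hz, 1 KByte = 2**10 bytes.

KBYTE = 2 ** 10

# Gravitational-wave channel: 16.384 KHz sampling, 2-byte integer samples.
sample_rate_hz = 16.384e3
bytes_per_sample = 2
gw_rate = sample_rate_hz * bytes_per_sample / KBYTE    # KBytes/s per IFO
print(f"GW channel rate: {gw_rate:.0f} KBytes/s/IFO")  # -> 32

# All channels, both sites, for one year.
hanford_rate, livingston_rate = 9479, 4676             # KBytes/s
total_rate = hanford_rate + livingston_rate
seconds_per_year = 365 * 86400
yearly_tbytes = total_rate * KBYTE * seconds_per_year / 2 ** 40
print(f"Yearly volume: {yearly_tbytes:.0f} TBytes")    # -> ~416
```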
## 5 The LIGO data life-cycle During normal operations, the LIGO Livingston Observatory will generate data at a rate of 4,676 KBytes/s; the LIGO Hanford Observatory, with its two interferometers, will generate data at a rate of 9,479 KBytes/s (cf. §4.1). Meta-data (cf. §4.2) is expected to be generated at a mean cumulative rate of approximately 10 KBytes/s. Data generated at the sites is packaged by the data acquisition system into frames. A frame is a flexible, self-documenting, formatted data structure, with a header consisting of instrument state and calibration information followed by one or more channels of LIGO data over a common epoch. A frame may also contain meta-data fields. While the period of time, number and identity of the channels covered by a frame is flexible, the data acquisition system will write a series of uniform frames of approximately 1 s duration. The frame data object used to hold LIGO data from acquisition onward was developed cooperatively with the VIRGO project, with the explicit goal of reducing the logistical problems that would arise in future, collaborative data analysis exercises. Immediately after it is closed, each acquired frame is passed to the “on-line” LIGO Data Analysis System (LDAS) at the corresponding site (Hanford or Livingston). The on-site or on-line LDAS maintains the past 16 hours of frame data on local disk (corresponding to just over 520 GBytes at Hanford and just over 256 GBytes at Livingston). Each hour the least recently acquired data is transferred to more permanent storage (e.g., tapes) and purged from the system. As data is transferred to more permanent storage, several redundant and identical copies will be made. One copy from each site will be shipped via commercial carrier to a central, long-term archival center, associated with the “off-line” LIGO Data Analysis System and located on the Caltech campus. This data will be in transit for at least one and up to several days. After it arrives at the central data archive, the data from the two LIGO sites will be ingested into the archive. It is at the central data archive that LIGO data from the two observatories will first be both widely and simultaneously accessible; prior to that, data acquired at Hanford will only be available at Hanford and data acquired at Livingston will only be available at Livingston. As data is ingested into the archive a combination of compression and selection of the data will occur, reducing the volume by approximately 90%.<sup>4</sup><sup>4</sup>4A determination of which data channels may be compressed using lossy algorithms, or discarded entirely, has not yet been made. The compression and selection will not be uniform in time: certain epochs chosen at random or deemed particularly interesting, either because of instrument testing or diagnoses, or because of suggestive behavior of the gravitational-wave channel, may be recorded at full bandwidth. Once the data has been successfully ingested and verified, redundant data at the interferometer sites will be purged and the central data archive will become the single repository and authoritative source for LIGO data. The central LIGO data archive will hold up to 5 yrs of accumulated data from three interferometers. Beyond that period the data volume will be reduced further by a combination of compression and selection of the data, except that the gravitational-wave channel will be preserved with full fidelity indefinitely.
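The frame format itself is specified jointly with VIRGO; purely as an illustration of the frame structure described at the start of this section (a header of instrument state and calibration information followed by channels over a common epoch), a frame might be modeled as follows. The field names in this sketch are invented for illustration and are not taken from the actual frame specification:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Channel:
    name: str               # e.g. the gravitational-wave channel or a monitor
    sample_rate_hz: float
    samples: List[int]      # raw samples over this frame's epoch

@dataclass
class Frame:
    gps_start: float        # start of the epoch covered by the frame
    duration_s: float       # ~1 s for acquisition frames
    instrument_state: Dict  # calibration and configuration header
    channels: List[Channel] = field(default_factory=list)
    meta: Dict = field(default_factory=dict)  # optional meta-data fields
```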
## 6 LIGO Data Analysis LIGO data are time series. The principal component of the gravitational wave channel is noise; all anticipated signals have amplitudes small compared to the noise. All detectable signals have some characteristic that gives them a coherence that is not expected of noise. For example, weak burst sources are detectable if their time dependence or energy power spectrum is well known; periodic signals are detectable even though their frequency is Doppler-modulated by Earth’s rotation and motion about the sun; a stochastic signal is manifest as a cross-correlation of the noise in the gravitational-wave channel of two detectors with a frequency dependence characteristic of the separation between the detectors. The principal tool for time series data analysis is linear filtering; correspondingly, the important computational operations are linear algebra operations, eigenvalue/vector analyses, discrete Fourier transforms, and convolutions. The eigenvalue/vector analyses do not involve high dimensional systems; however, the discrete Fourier transforms and convolutions can involve very long vectors: for periodic signal searches over a large bandwidth, the vector dimensions correspond to weeks to months of the gravitational-wave channel at full bandwidth. To meet the estimated computational needs of LIGO data analysis, three Beowulf clusters of commodity personal computers will be constructed. Two of these, each sized to provide approximately 10 Gflops of sustained computing on a prototypical analysis problem (detection of a radiation burst arising from the inspiral of a compact neutron star or black hole binary system), will be located at the observatory sites in Hanford and Livingston; one, sized to provide approximately 30 Gflops of sustained computing on this same problem, will be co-located with the LIGO data archive (cf. §5). These Beowulf clusters form the computational muscle of the LIGO Data Analysis System, which is described further in §8.
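To make the filtering operations of this section concrete, the sketch below correlates a data segment against a signal template using fast transforms. It is a minimal illustration (ours, assuming NumPy, with the template no longer than the data), not LIGO analysis code; a real search would also whiten the data by the detector's noise spectrum before correlating:

```python
import numpy as np

def template_correlation(data, template):
    """Correlate a data segment against a signal template via FFTs.

    Minimal sketch of the linear filtering described in Sec. 6: the
    correlation is computed in the frequency domain by conjugating one
    factor. Assumes len(template) <= len(data); no noise weighting.
    """
    data = np.asarray(data, dtype=float)
    n = len(data)
    padded = np.zeros(n)
    padded[:len(template)] = template
    corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(padded)), n)
    return corr  # peaks mark candidate arrival times of the template
```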
## 7 Data access patterns Access to data collected during LIGO operations places constraints on data organization, the mechanisms by which data are retrieved from the archive, and the mechanisms by which data are annotated. The challenge of manipulating a data archive as large as LIGO’s requires that the archive organization and the mechanisms for ingestion, access and annotation reflect the anticipated data access patterns. Many of these decisions regarding the data archive have not yet been made; consequently, in this section I can describe only the nature of the anticipated data access patterns that are considerations in these decisions. “Users” of LIGO data comprise scientists searching for radiation sources and scientists monitoring and diagnosing instrument performance. (Scientists involved in real-time instrument operations will require access to data as it is generated and before it is migrated to the central data archive. This does not directly affect the central data archive, but does affect the organization and accessibility of the data at each site.) Some of these user types sub-divide further: for example, searching for gravitational wave bursts requires a different kind of access than searching for periodic or stochastic gravitational wave signals. Each user type requires a different kind of visibility into the data archive. These patterns of access can be distinguished by focusing on * data quantity per request, * predictability of data requests, * number of data channels per request, * type of data channels requested. The data access patterns for gravitational wave signal identification are expected to be quite complex. The character of burst, periodic and stochastic signals in the detector leads to access patterns that differ markedly in data quantity, number of channels, and type of data channels per request. Additionally, the analysis for signals of all three types will have an automated component, which makes regular and predictable requests of the archive for data, and a more “interactive” component, which makes irregular and less predictable requests of the archive. Data analysis for burst signals generally involves correlation operations, wherein a signal template, describing the expected character of the signal, is correlated with the observed data. The correlations will generally be performed using fast transform techniques; consequently, the minimum period of time that a data request will involve is the length of a template. Since burst signals are expected to be of relatively short duration and the detector bandwidth is relatively large, the templates are themselves short. Consequently, the data requests are expected to be for segments of data of relatively short duration. Periodic signal sources are manifest in the data as a frequency modulated but otherwise nearly monochromatic signal. The frequency modulation is determined entirely by the source’s sky position. For these sources, the signal power is expected to be of the same magnitude as the noise power only when the analysis bandwidth can be made narrower than approximately 1/month. Thus, data requests associated with periodic signal searches will involve segments much longer than for burst sources. Stochastic signals appear in the data stream of a single detector no differently than other instrumental noise sources. They become apparent only when the data streams of two or more detectors are cross-correlated. For a schematic picture of how a stochastic signal is identified, let $`x(\tau )`$ be the cross correlation of the gravitational wave channels $`h_1(t)`$ and $`h_2(t)`$ of two detectors; then $$x(\tau )=\frac{1}{T}\int _0^Tdt\,h_1(t)\,h_2(t+\tau )$$ (1) for $`T`$ large compared to the correlation time of the detector noise. The stochastic signal is apparent in $`x(\tau )`$ as excess power at “frequencies” (inverse $`\tau `$) less than the inverse of the light travel time between the two detectors. For the two geographically distinct LIGO detectors, this corresponds to frequencies less than approximately 100 Hz. To detect a stochastic signal is to detect this excess power. Estimates of the strength of possible stochastic signals suggest that detection might require years of data. Nevertheless, because the signal signature is the (incoherent) excess power, the volume of data per request need not be great at all: data segments of a few seconds’ duration will be sufficient. What is unique about stochastic signal analysis, however, is that the analysis requires data from both the Hanford and Livingston interferometers simultaneously.
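For illustration, a direct time-domain estimate of Eq. (1) can be written in a few lines; this sketch is ours, not LIGO analysis code, and assumes equal-length, simultaneously sampled channels:

```python
import numpy as np

def cross_correlation(h1, h2, max_lag):
    """Estimate x(tau) of Eq. (1) for lags 0..max_lag samples.

    h1, h2: simultaneous gravitational-wave channels of two detectors,
    as NumPy arrays of equal length. A stochastic signal appears as
    excess power in x(tau) at frequencies below the inverse light
    travel time between the sites (~100 Hz for Hanford-Livingston).
    """
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    T = len(h1) - max_lag  # averaging length, per Eq. (1)
    return np.array([np.mean(h1[:T] * h2[lag:lag + T])
                     for lag in range(max_lag + 1)])
```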
The automated component of the gravitational wave data analysis will make the greatest demands, by data volume, on the LIGO data archive: the full length of the gravitational wave channel, as well as a subset of the instrument and physical environment monitor channels, will be processed by the system. These requests will be predictable by the archive; consequently, pre-reading and caching can be used to eliminate any latency associated with data retrieval for these requests. As discussed in §9, data analysis will almost certainly be hierarchical, with an automated first pass selecting interesting events that will be analyzed with increasing levels of interactivity. At each stage of the hierarchy, the number of events analyzed will decrease and the volume of interferometer data requested of the archive (in channels, not time) will increase. Shortly after operations begin we can expect that the analyses performed at each level of the hierarchy, except the upper-most, will be systematized, meaning that the requests, while less frequent, are still predictable. Thus, an event identified at one level can lead to the caching of all data that will be needed at the next level of the hierarchy, again eliminating the latency involved in the data requests. Scientists who are diagnosing or monitoring the instrument can be expected to have access patterns similar to those of scientists searching directly for gravitational wave events. The principal difference is that the data volumes are expected to be smaller (the study is of noise, not signals of low level embedded in the noise) and the range of channels involved in the analysis larger (many of the diagnostic channels recorded will not directly influence the gravitational wave channel even if they are important for understanding and tuning the operation of the detector). Finally, an important class of users, especially as the observatories are coming on-line, will be more interactive users who are “experimenting” with new analysis techniques, or studying the characteristics of the instrument. (Interactive, in this usage, includes small or short batch jobs that are not part of an on-going, continuous analysis process.) These users, who include scientists searching for radiation sources and scientists diagnosing or monitoring the operations of the detectors, will be requesting relatively small volumes of data, both by segment duration and by channel count. ## 8 Accessing and manipulating LIGO Data User access to, and manipulation of, the LIGO data archive will be handled through the LIGO Data Analysis System (LDAS). While the general architecture of the LDAS has been determined, most of its design and implementation details have yet to be determined; consequently, in this section, I will describe LDAS only in the broadest of terms. At the highest level, LDAS consists of three components: two “on-line” systems, one each at the Hanford and Livingston sites, and one “off-line” system located with the central data archive on the Caltech campus. The on-line systems are responsible for manipulating and providing access to data that has not yet been transferred to the central data archive, while the off-line system provides the equivalent functionality for data stored in the central data archive. The bulk of LIGO data analysis will take place entirely within LDAS: users will, generally, see only calculation results or highly abstracted or reduced summaries of the data. This capability is critical given both the sheer volume of the LIGO data and the geographically distributed LIGO Science Collaboration membership, which includes researchers based throughout North America, Europe, Japan and Australia.
Except for operations that involve exporting LIGO data to applications outside of LDAS (where issues of network bandwidth arise), LDAS is required to support users not physically co-located with the data archive on a par with local users. To meet this requirement the LDAS is being designed to be more than a data archive, library or repository: it is a remotely programmable data analysis environment, tailored to the kinds of analysis that are required of the full bandwidth LIGO data. In the LDAS model, data analysis involves an action taken on a data object. The user specifies the data, the action, and the disposition of the results. At the user level there are several different ways of specifying the same data: e.g., by epoch (“thirty seconds of all three gravitational-wave channels beginning Julian Day 2453317.2349”), by logical name (“Hanford magnetometer channel 13 of event CBI1345”), or by some selection criteria (“gravitational wave channels from Hanford-2 from Julian Day 2453238 where beamsplitter seismometer rms is less than 13.23”). There will be a variety of analysis actions available to the user, which may be built up from a set of “atomic” actions like discrete Fourier transform, linear filtering, and BLAS-type operations. These operations are denoted “filters.” Finally, the results of these filter actions on the data can be stored for further action, displayed in some fashion (e.g., as a figure or table), or exported from the LDAS as light-weight data. Figure 2 is a block diagram schematic of the LDAS system. The user interaction with LDAS will be through either an X11 or web-based interface. These two interfaces generate instructions to the LDAS in its native control language, which will be Tcl with extensions. Instructions to the LDAS are handled by the Distributed Data Analysis Manager. This software component is responsible for allocating and scheduling the computational resources available to LDAS. In particular, * it determines what data is required by the user-specified operation and requests it from the appropriate data archives, which are shown below the Data Analysis Manager on the block diagram; * it allocates and instructs the analysis engines (the Beowulf cluster) on the operations that are to be performed on the data, including pre-conditioning of the data stream (in the Data Conditioning Unit), generalized filtering operations (in the filter units), and event identification and management operations on the output of the filtering operations (in the Event Manager); and * it disposes of the results of the analysis, either back into the data archive, onto a disk cache, or back to the user in the form of, e.g., a figure. The Distributed Data Analysis Manager never itself actually manipulates the data; rather, it issues instructions to the other units that include where to expect data from and where to send results to. The other units (the data archives, the data conditioning unit, the filters and the event manager) then negotiate their own connections and perform the analysis as instructed.
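To make the data/action/disposition model concrete, a user request might be represented schematically as below. Every name in this sketch is hypothetical, invented for illustration; the actual LDAS interface (Tcl-based) had not been finalized at the time of writing:

```python
# Hypothetical sketch of an LDAS-style request: the user names the data,
# the action (a "filter" built from atomic operations), and where the
# result should go. None of these identifiers come from the real design.
request = {
    "data": {
        # By epoch, mirroring the example request quoted in the text.
        "channels": ["Hanford-1:GW", "Hanford-2:GW", "Livingston:GW"],
        "epoch": {"start_jd": 2453317.2349, "duration_s": 30.0},
    },
    "action": {
        "filter": "bandpass_then_correlate",  # composed of atomic actions
        "params": {"low_hz": 40.0, "high_hz": 1000.0},
    },
    "disposition": "return_figure",  # or: store, export as light-weight data
}
```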
## 9 On-line and off-line data analysis The LDAS sub-systems installed at each LIGO observatory and at the central data archive will be functionally equivalent, although their relative scales will vary: the sub-system installed at the central archive will have access to data from all three interferometers and computing resources adequate to carry out more sophisticated and memory intensive analyses than the sub-systems installed at the separate observatories, which will only have access to data collected locally over the past several hours. When operating as a scientific instrument, LIGO will acquire data automatically. Correspondingly, a significant component of the data analysis resources is devoted to an automatic analysis of the data carried out in lock-step with data acquisition. The details of that automatic analysis have not been decided on, nor has the disposition of the automatic part of the data analysis among the LDAS components at the observatories and the centralized data archive. Nevertheless, certain fundamental requirements that any data analysis system must fulfill suggest how the analysis workload at the observatories might differ from that undertaken at the central data archive and how the total data analysis workload might best be distributed. A principal requirement of the data analysis system is that it maintain pace with the data generated by the instrument: unanalyzed data is no better than data never taken. Sophisticated data analysis can maximize the probability of detecting weak signals when present and minimize the probability of mistakenly identifying noise as a signal; however, the most sophisticated analyses cannot be carried out uniformly on all the data while still maintaining pace with data acquisition rates. Another important consideration is that the computational resources placed at each site have access only to locally acquired data no more than several hours old. Computational resources located with the central data archive, on the other hand, are available to work with data from all three sites over nearly the entire past history of the detector: only data acquired during the immediate past several days, before it reaches the archive, will not be available for analysis. This last caveat is an important one: while many potential gravitational wave sources are not expected to have an observable signature in more conventional astronomical instruments (e.g., optical or $`\gamma `$-ray telescopes), some anticipated sources may very well have such a signature that follows a gravitational wave burst by moments to hours. In this case, prompt identification of a gravitational wave burst could be used to alert other observatories, allowing astronomers to catch some of these sources at early times in their optically visible life. Exploiting gravitational wave observations in this way requires on-site analysis, since data will not reach the central archive for several days after it has been acquired. All these considerations suggest a two-pass strategy for data analysis. The first pass takes place at the observatory sites: in it, all data acquired during normal operations is subjected to quick, but relatively unsophisticated, analyses whose goal is to rapidly identify stretches of data that might contain a burst signal. No consideration is given, in the on-line system, to searching for stochastic or periodic gravitational wave signals.
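As a toy example of such a quick, deliberately unsophisticated first-pass analysis (the actual on-site algorithm has not been specified), an excess-power threshold over short blocks could flag candidate segments:

```python
import numpy as np

def flag_candidates(h, block_len, threshold):
    """Toy first-pass burst trigger: flag blocks whose mean power exceeds
    `threshold` times the median block power.

    Deliberately cheap and high in false alarms, trading purity for
    detection efficiency as described in Sec. 9. Our illustration only;
    not the actual on-site algorithm.
    """
    h = np.asarray(h, dtype=float)
    n_blocks = len(h) // block_len
    power = np.array([np.mean(h[i * block_len:(i + 1) * block_len] ** 2)
                      for i in range(n_blocks)])
    return np.nonzero(power > threshold * np.median(power))[0]
```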
In accepting the goal of identifying candidate burst signals in the on-site system, one willingly accepts a relatively high level of false alarms in order to achieve a relatively high detection efficiency. The on-site systems can also monitor the detector behavior, identifying and flagging in the meta-data periods where detector mis-behavior disqualifies data from further analysis. Periodically, then, analysis at the site will identify intervals that include candidate gravitational wave bursts. If an identified candidate is believed to be among the type that can be associated with observations at another astronomical observatory, a more sophisticated analysis can be triggered to determine the likelihood of an actual detection in this limited data interval. If the identified candidate is not of this kind, or if the more sophisticated analysis suggests that the event is not conclusively a gravitational wave, then the data segment can be flagged in the meta-data by the on-site system for later consideration. Thus, the first pass of the data does four things: 1. it keeps up with the flow of data; 2. it flags data segments that bear at least some of the characteristics that we associate with gravitational waves; 3. it flags data segments as disqualified from further analysis for gravitational waves; and 4. it handles time-critical analyses. The second pass of the data takes place in the LDAS component co-located with the central data archive. Here we capitalize on the work performed at the sites by focusing attention on the “suspicious” data segments identified at the sites. The time available for this more critical and in-depth analysis is expanded in proportion to the fraction of the entire data stream occupied by the suspicious data segments; additionally, the computational resources are used more effectively, because data from the two sites is available simultaneously to the analysis system. Finally, analysis aimed at periodic and stochastic gravitational wave signals is performed exclusively in the off-line system. This choice is made both because the analysis is not time critical and because the duration of the data that must be analyzed in order to observe evidence of a signal is long compared to the time it takes to move the data from the sites to the central data archive. Thus, the second pass of the data 1. keeps up with the flow of interesting data; 2. introduces more critical judgment into the analysis process; and 3. handles analysis tasks that are not time critical. The apparently conflicting requirements of keeping up with the data flow while still maintaining a high degree of confidence in the final results are thus satisfied by splitting the analysis into two components. The first component identifies “interesting” data segments that are subjected to a more critical — and time consuming — examination in the second component. The second component of the analysis takes place only at the data archive, where access to the entire LIGO data stream from both detectors is available, while the first component takes place at the individual sites, where only limited access to recent data from a single instrument is available. ## 10 Conclusions LIGO is an ambitious project to detect directly gravitational waves from astrophysical sources.
The signatures that these sources produce in the detector output are not discrete events that occur at predictable times, but weak, coherent excitations, lasting anywhere from seconds to years, that occur randomly in one or more “detectors”. Correspondingly, the data acquired at LIGO are time series and the analysis depends on correlating the observed detector output with a model of the anticipated signal, or cross-correlating the output of several detectors in search of coherent excitations of extra-terrestrial origin. The duration of the signals, their bandwidth, and the randomness of their occurrence together require that LIGO be prepared to handle on the order of 400 TBytes of data, involving three detectors, per year of operation. The nature of the time-series analysis that will be undertaken with this data and the geographical distribution of the scientists participating in the LIGO Science Collaboration pose requirements on the data archive and on the analysis software and hardware. Data collected from LIGO are divided into two kinds: frame data and meta-data. Frame data is the raw interferometer output and includes instrument control and monitoring information as well as physical environment monitors. Meta-data includes operator logbooks, commentary, and diagnostic data about the data and the instrument: i.e., it is data about data. (If the frame data is the Torah, then the meta-data is the Talmud.) As LIGO data is analyzed, a third category of data is created — “event” data, which includes results of intermediate analyses that explore the detector behavior, highlight a possible gravitational wave source, or set limits on source characteristics. As event data matures, it becomes meta-data: further commentary on the data. LIGO data analysis will be carried out by collaborating scientists at institutions around the globe. The character of the analysis and the volume of the data preclude any significant analysis being carried out on computing hardware local to a given collaborator. To support LIGO data analysis, a centralized LIGO Data Analysis System (LDAS) is being built, which is designed to support remote manipulation and analysis of LIGO data through web and X11 interfaces. In this system, significant amounts of data rarely leave LDAS: only highly abstracted summaries of the data are communicated to local or distant researchers. Finally, there is an inherent conflict involved in the twin requirements of keeping pace with the flow of the data and maintaining high confidence in the conclusions reached by the analysis. This conflict is exacerbated by the geographical separation of the LIGO detectors: the bandwidth of the data generated at each site makes it infeasible to bring all the LIGO data together for analysis until several days after it has been acquired. By taking advantage of local computing at each site and the approximately one day that the data from each site is locally available, this conflict can be mitigated: data local to a site can be analyzed using tests of low sophistication, to identify subintervals of the LIGO time series that have “suspicious” character. After the data from the two sites is brought together at the central archive, more time-consuming — but more sophisticated — analyses can focus on those suspicious intervals. It is a pleasure to acknowledge Kent Blackburn, Albert Lazzarini, and Roy Williams for many helpful and informative discussions on the technical details of the LIGO data analysis and archive system design.
The ideas discussed here on the use of the on-line and off-line data analysis system have been informed by discussions with Rainer Weiss. This work was supported by National Science Foundation award PHY 98-00111 to The Pennsylvania State University.
# Non-linear Microwave Surface Impedance of Epitaxial HTS Thin Films in Low DC Magnetic Fields ## I Introduction Understanding the mechanisms of the non-linearity of high-$`T_c`$ superconductors (HTS) at microwave frequencies is very important from the point of view of applying the materials in both passive and active microwave devices. Recently, unusual features such as a decrease of the surface resistance $`R_s`$ and reactance $`X_s`$ of HTS thin films with microwave field $`H_{rf}`$ have been reported. Similar observations were made in weak ($`\le 20`$ mT) static fields $`H_{dc}`$, which have shown that a small dc magnetic field can cause a decrease of $`R_s`$ and $`X_s`$ in both the linear and nonlinear regimes. In the present paper we report measurements of the microwave field dependences of $`R_s`$ and $`X_s`$ of high-quality epitaxial YBaCuO thin films in zero and finite ($`\le 12`$ mT) applied dc magnetic fields. All the samples show rather different functional forms of $`R_s(H_{rf})`$, but $`X_s(H_{rf})`$ is universal and nearly temperature-independent. At the same time, $`H_{dc}`$ applied parallel to the c-axis of the films has a qualitatively similar effect on both $`R_s(H_{rf})`$ and $`X_s(H_{rf})`$, giving evidence of non-monotonic behavior of $`R_s`$ and $`X_s`$ as a function of $`H_{dc}`$ both in the linear and nonlinear regimes. An even more striking feature is that for some of the samples the dc field can decrease $`R_s`$ below its low-power zero-field value, thereby offering a possible way of reducing the microwave losses of HTS thin films. ## II Experimental Results The films are deposited by e-beam co-evaporation onto polished (001)-oriented MgO single crystal $`10\times 10`$ mm<sup>2</sup> substrates. The films are 350 nm thick. The c-axis misalignment of the films is typically less than 1$`\%`$, and the $`dc`$ critical current density $`J_c`$ at 77 K is around $`2\times 10^6`$ A/cm<sup>2</sup>. The films were patterned into linear coplanar transmission line resonators with a resonance frequency of $`8`$ GHz using the technique described in . The nonlinear measurements were performed using a vector network analyzer with a microwave amplifier providing CW output power up to 0.3 W. The low-power values of $`R_s`$ and $`\lambda `$ at 15 K are 60, 35, 50 $`\mu \mathrm{\Omega }`$ and 260, 210, 135 nm for samples TF1, TF2 and TF3, respectively. Changes in $`R_s`$ and $`X_s`$ with $`H_{rf}`$, $`\mathrm{\Delta }R_s=R_s(H_{rf})-R_s(0)`$ and $`\mathrm{\Delta }X_s=X_s(H_{rf})-X_s(0)`$, are plotted in Fig. 1 for all three samples. It is seen that the $`H_{rf}`$-dependence of $`\mathrm{\Delta }R_s`$ is rather different for different samples, whereas $`\mathrm{\Delta }X_s(H_{rf})`$ is universal. For sample TF1, $`\mathrm{\Delta }R_s\propto H_{rf}^2`$ from the lowest $`H_{rf}`$. For sample TF2, a decrease in $`R_s`$ is observed with increased $`H_{rf}`$, and the absolute value of $`R_s`$ falls below the corresponding low-power value. Finally, for sample TF3, $`\mathrm{\Delta }R_s`$ is rather independent of $`H_{rf}`$ up to sufficiently high fields ($`\sim `$60 kA/m), after which a skewing of the resonance curve is observed. The surface reactance, $`X_s`$, for all three samples is a sublinear function of $`H_{rf}`$ ($`\propto H_{rf}^n`$, $`n<1`$) at low powers, then has a kink, followed by a superlinear functional dependence ($`\propto H_{rf}^n`$, $`n>1`$). The effect of dc magnetic fields ($`\le 12`$ mT) on the microwave power dependence of $`R_s`$ and $`X_s`$ for all the samples is illustrated in Fig. 2 and Fig. 3.
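Throughout, exponents $`n`$ characterizing power-law dependences such as $`\mathrm{\Delta }R_s\propto H_{rf}^n`$ can be extracted by a least-squares fit in log-log space; the sketch below illustrates one such fit. This is our illustration in Python: the fitting procedure actually used by the authors is not specified in the text.

```python
import numpy as np

def power_law_exponent(H_rf, delta_Rs):
    """Estimate n in |Delta R_s| ~ H_rf**n by least squares in log-log space.

    Generic sketch of how exponents like those quoted in Sec. III
    (n = 2, 1.12, ...) can be obtained; assumes nonzero Delta R_s values.
    """
    x = np.log(np.asarray(H_rf, dtype=float))
    y = np.log(np.abs(np.asarray(delta_Rs, dtype=float)))
    n, _ = np.polyfit(x, y, 1)   # slope of the log-log fit = exponent n
    return n
```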
The common feature for all three samples is that the dependences of $`R_s(H_{rf})`$ and $`X_s(H_{rf})`$ upon $`H_{dc}`$ are non-monotonic. For samples TF1 and TF2 (for particular $`H_{rf}`$ ranges and $`H_{dc}`$ values), the static field leads to a decrease in $`R_s`$ compared to the low-power zero-field value. This means that both dc and rf fields can cause a reduction of the microwave losses in YBaCuO (see Fig. 1a and Fig. 2a,b). A possible mechanism of such behavior is discussed later. One can see that for sample TF1, a dc field of a certain strength (10 mT) can cause a decrease in $`R_s`$, whereas $`X_s`$ is always enhanced by a dc field. Similarly, for sample TF3, the behavior of $`R_s(H_{rf})`$ and $`X_s(H_{rf})`$ in $`H_{dc}`$ is also uncorrelated. However, for TF1 we observe a reduction of $`R_s`$ without an accompanying decrease in $`X_s`$, whereas for TF3 the effect is opposite; for particular values of $`H_{dc}`$ (5, 10 mT) the in-field ($`H_{dc}\ne 0`$) value of $`X_s(H_{rf})`$ is lower than the corresponding value for $`H_{dc}=0`$ (Fig. 2c), while the in-field value of $`R_s(H_{rf})`$ is always higher than the corresponding zero-field value (Fig. 2c). Here, the most pronounced decrease in $`X_s`$ for TF3 is observed at $`H_{dc}=5`$ mT. Finally, for sample TF2 there is a well pronounced correlated behavior of $`R_s(H_{rf})`$ and $`X_s(H_{rf})`$ in a dc field; $`H_{dc}`$ of any value from 5 to 12 mT decreases both $`R_s`$ and $`X_s`$ (see Fig. 2b and Fig. 3b). ## III Discussion and Conclusions A powerful approach to distinguishing between various non-linear mechanisms is a parametric representation of the data in terms of the $`r`$ parameter, where $`r=\mathrm{\Delta }R_s/\mathrm{\Delta }X_s`$. In Fig. 4 we plot the $`H_{rf}`$-dependence of the $`r`$ parameter for all three samples in different dc magnetic fields from 0 to 12 mT. One can see that for sample TF1, all the in-field $`r(H_{rf})`$ curves almost collapse over the entire range of $`H_{rf}`$, whereas the zero-field $`r(H_{rf})`$ data are clearly different from the in-field ones. This is especially noticeable at low $`H_{rf}`$ (between 3-7 kA/m), where the $`r`$ values differ by up to a factor of 10 between the zero-field and in-field $`r(H_{rf})`$ dependences. At the same time, at the lowest $`H_{rf}`$ (2-3 kA/m), the zero-field $`r`$-values match very well with the in-field ones (see Fig. 4a). Therefore, the low-power nonlinearity for sample TF1 appears to have the same origin in the zero-field and in-field regimes, whereas the high-power mechanisms are likely to be different. For sample TF2 at $`H_{dc}=5`$ mT and 10 mT, $`r(H_{rf})`$ is rather noisy, which clearly correlates with the noisy dependence of $`X_s(H_{rf})`$ for this sample at the relevant dc fields (see inset in Fig. 3b). The $`r`$ parameter oscillates between $`-4`$ and 6 with average values close to 0.3–0.4 and 0.2–0.3 for 5 and 10 mT, respectively. For zero field and 12 mT, $`r(H_{rf})`$ are quite consistent, starting to increase from large negative values ($`\lesssim -1`$) at low powers, and saturating to a level of $`-0.2`$ to $`-0.1`$ at higher $`H_{rf}`$. Finally, for sample TF3 at zero dc field, $`r`$ increases with $`H_{rf}`$ at low powers, whereas the 10 and 12 mT $`r`$-values decrease, but all three curves level off for $`H_{rf}\gtrsim 10`$ kA/m to a value of $`\sim 0.1`$. However, $`r(H_{rf})`$ at 12 mT appears to tend to small negative values at high $`H_{rf}`$, consistent with the decrease in $`R_s(H_{rf})`$ at 12 mT in the relevant $`H_{rf}`$ range (Fig. 2c).
Standing apart from the other dependences is $`r(H_{rf})`$ at 5 mT, which shows very high values ($`\sim 2`$) at low $`H_{rf}`$, saturating at a level of $`\sim 0.4`$ at higher $`H_{rf}`$. Note that this value is about a factor of 4 larger than the saturation level for the other curves. This seems to imply that $`H_{dc}=5`$ mT causes a switching of the mechanism of the nonlinearity in the film, as compared to the mechanism at other fields, including the zero-field results. Recently Ma et al. have found that YBaCuO thin films deposited by the same method exhibit a correlation of $`R_s(H_{rf})`$ with the values of the low-power residual $`\lambda _{res}`$ and the normal-fluid conductivity $`\sigma _n`$. At the same time, they failed to note any correlation between the power dependence and $`R_{res}`$. A similar conclusion can be drawn from our results (see Fig. 1a). One can see that $`R_s`$ is almost independent of $`H_{rf}`$ for sample TF3, which has the lowest $`\lambda (15K)=135`$ nm, whereas sample TF1 with the largest $`\lambda (15K)=260`$ nm exhibits the strongest $`H_{rf}`$-dependence. On the other hand, there is no strict correlation between $`R_s(H_{rf})`$ and the low-power $`R_s`$ (see Sec. II and Fig. 1a), which is also consistent with the results of Ma et al. However, the strongest power dependence, $`R_s\propto H_{rf}^2`$, is observed for sample TF1 with both the highest $`R_{res}`$ and $`\lambda _{res}`$, in agreement with recent results on YBaCuO thin films with different low-power characteristics. There are two further distinctive features for sample TF1 when compared with the two other samples. The functional form of $`R_s(H_{rf})`$ is noticeably changed by a dc field ($`R_s\propto H_{rf}^n`$, where $`n=`$2, 1.12, 0.8 and 1.24 for 0, 5, 10 and 12 mT, respectively), whereas $`X_s(H_{rf})`$ is not affected by $`H_{dc}`$. In addition, for TF1, a dc magnetic field changes not only the power dependence of $`R_s`$, but also the absolute value of the low-power $`X_s`$ (see Fig. 2a), while for TF2 and TF3 no such effect is observed (Fig. 2b,c). The effect of a dc field on $`R_s(H_{rf})`$ is also seen for sample TF3 at $`H_{dc}=5`$ mT which, as will be argued later, may switch the mechanism of nonlinearity for this sample. Recently Habib et al. have found that for a stripline resonator with a weak link in the middle, $`R_s(H_{rf})`$ is strongly affected by the junction, whereas $`X_s(H_{rf})`$ was found to be insensitive to the presence of the weak link. Based on this finding, we suggest that the difference between the $`R_s(H_{rf})`$ dependences of our samples may originate from the different microstructure (type, dimension and number of defects) of the samples, whereas the similar form of $`X_s(H_{rf})`$ appears to reflect the intrinsic behavior of each film, mostly exhibited by grains. This assumption is further supported by the strong effect of a small dc field on $`R_s(H_{rf})`$ for samples TF1 (Fig. 2a) and TF3 (at 5 mT, Fig. 2c), whereas the functional form of $`X_s(H_{rf})`$ is unchanged by $`H_{dc}`$. ### A Analysis of Possible Mechanisms As we have shown earlier, such uncorrelated behavior of $`R_s(H_{rf})`$ and $`X_s(H_{rf})`$ as we observed for our samples (Fig.
1), cannot be explained by any of the known theoretical models, including Josephson vortices (where $`r_{JF}<1`$, $`\mathrm{\Delta }R_s,\mathrm{\Delta }X_s\propto H_{rf}^n`$, $`0.5<n<2`$), heating of weak links ($`r_{HE}<1`$, $`\mathrm{\Delta }R_s,\mathrm{\Delta }X_s\propto H^2`$) and the RSJ model ($`r_{RSJ}<1`$, $`\mathrm{\Delta }R_s`$ increasing in a stepwise manner and $`\mathrm{\Delta }X_s`$ oscillating with $`H_{rf}`$), or intrinsic pair breaking or uniform heating (for both mechanisms $`r<10^{-2}`$, and $`\mathrm{\Delta }R_s,\mathrm{\Delta }X_s\propto H^2`$). We can also rule out the mechanism of superconductivity stimulation by microwave radiation, recently claimed by us and by Choudhury et al. In this mechanism, the dc magnetic field decreases the order parameter, increasing both $`R_s`$ and $`X_s`$, which we do not observe for any of our samples. Moreover, we see that even in the low-power regime $`H_{dc}`$ can cause a reduction of both $`\mathrm{\Delta }R_s`$ and $`\mathrm{\Delta }X_s`$, which is not explained by the above model at all. The most plausible mechanism responsible for the decrease in $`R_s`$ and $`X_s`$ with both $`H_{rf}`$ and $`H_{dc}`$ fields seems to be field-induced alignment of the spins of magnetic impurities, which are likely to be present in most HTS (particularly in YBaCuO). This mechanism was recently claimed by Hein et al. to explain their results on the non-monotonic behavior of $`R_s`$ and $`X_s`$ in $`H_{dc}`$ and $`H_{rf}`$ for YBaCuO thin films. However, because our non-linear results for $`\mathrm{\Delta }R_s`$ and $`\mathrm{\Delta }X_s`$ are not correlated and, moreover, exhibit different $`R_s(H_{rf})`$ dependences, we suppose that other strong nonlinear mechanism(s) may interfere with the spin-alignment mechanism. We suggest that this mechanism might be Cooper pair breaking at low powers, and nucleation and motion of rf vortices at higher powers. Heating effects at high powers may also play an important role. However, additional investigations are necessary to answer this question unambiguously. In conclusion, we have presented results on the non-monotonic microwave power dependence of $`R_s`$ and $`X_s`$ in both zero and weak ($`\le 12`$ mT) dc magnetic fields for very high-quality epitaxial YBaCuO thin films. Since this unusual behavior has come to light only owing to significant progress in thin-film fabrication over the past few years, we conclude that the features we observe seem to originate from the intrinsic properties of the superconductors. However, the different functional forms of $`R_s(H_{rf})`$ for different samples and the universal $`X_s(H_{rf})`$ behavior seem to imply that the microstructure still plays a significant role in the macroscopic properties of the samples. In addition, the observed decreases in $`R_s`$ and $`X_s`$ below their zero-field low-power values mean that there is still room for improvement of the microwave properties of the thin films. This can be realized upon adequate understanding of the mechanisms responsible for the unusual behavior observed, and can lead to improved characteristics of HTS-based microwave devices.
# NEAR-INFRARED PHOTOMETRY OF BLAZARS ## Abstract ABSTRACT The rapid variability of blazars at almost all wavelengths is now well established. Two days of observations were conducted at the Palomar Observatory during the nights of 25 and 26 February 1997 with the 5-meter Hale telescope, in order to search for rapid variability in the near-infrared (NIR) bands J, H, K<sub>s</sub> for a selection of eight blazars. With the possible exception of 1156+295 (4C 29.45), no intraday or day-to-day variability was observed during these two nights. However, for these eight blazars, we have measured the NIR $`\mathrm{\nu }`$F<sub>ν</sub> luminosities and spectral indices. It has recently been reported that the $`\mathrm{\gamma }`$-ray emission is better correlated with the near-infrared luminosity than with the X-ray luminosity (Xie et al. 1997). This correlation is suggested as a general property of blazars because hot dust is the main source of soft photons which are scattered off the relativistic jets of electrons to produce the gamma rays by inverse Compton scattering. We thus used this relationship to estimate the $`\mathrm{\gamma }`$-ray luminosity. 1) Service d’Astrophysique DAPNIA, CEA Saclay F-91191 Gif sur Yvette cedex 2) Département de physique, Université de Versailles, F-78035 Versailles cedex 3) Jet Propulsion Laboratory 169-327, 4800 Oak Grove Dr., Pasadena, CA 91109 KEYWORDS: AGN, blazar, near infrared, observations 1. INTRODUCTION 1.1 The blazar properties The discovery that blazars (i.e., optically violently variable quasars and BL Lac objects) and flat radio-spectrum quasars emit most of their power in high-energy gamma rays (Fichtel et al. 1994) probably represents one of the most surprising results from the Compton Gamma-Ray Observatory (CGRO). Their luminosity above 100 MeV in some cases exceeds 10<sup>48</sup> ergs s<sup>-1</sup> (assuming isotropic emission) and can be larger (by a factor of 10-100) than the luminosity in the rest of the electromagnetic spectrum. Moreover, the $`\gamma `$-ray emission can be strongly variable on time-scales as short as days, indicating that the emission region is extremely compact (Kniffen et al. 1993). Blazars have smooth, rapidly variable, polarized continuum emission from radio through UV/X-ray wavelengths. All have compact flat-spectrum radio cores and many exhibit superluminal motions. 1.2 The origin of gamma rays in blazars A variety of theoretical models have recently been proposed to explain the origin of the $`\gamma `$-ray emission of blazars. Most models describing the high-energy emission involve beaming from a jet of highly relativistic particles and include: (1) synchrotron self-Compton. The $`\gamma `$-ray spectrum is the high-energy extension of the inverse-Compton radiation responsible for the X-ray radiation (Maraschi et al. 1992), i.e., the scattering of synchrotron radiation by relativistic electrons gives rise to a higher frequency flux, which can be scattered a second time and so on. (2) inverse Compton scattering of accretion-disk photons by relativistic nonthermal electrons in the jet (Dermer et al. 1992). (3) inverse Compton scattering of ambient soft X-rays by relativistic pairs accelerated in situ by shock fronts in a relativistic jet (Blandford & Levinson 1995). (4) synchrotron emission by ultrarelativistic electrons and positrons (Ghisellini et al. 1993).
Various relations between the emission at different wavelengths are implied by these models and can be used to observationally distinguish among a variety of emission mechanisms. 1.3 The infrared and near-infrared luminosities A strong correlation between $`\gamma `$-ray and near-infrared luminosities was recently reported for a sample of blazars and it was suggested that this relation might be a common property of these objects (Xie et al. 1997). For that reason, the authors conclude that hot dust is likely to be the main source of the soft photons (near-infrared) which are continuously injected within the knot and then produce $`\gamma `$-ray flares by inverse Compton scattering on relativistic electrons. Given this correlation, it is easy to use the near-infrared luminosities to deduce the $`\gamma `$-ray fluxes, and then the total emitted fluxes. 2. OBSERVATIONS We observed eight blazars with the 5-meter Hale telescope on Mt. Palomar during the nights of 25 and 26 February 1997, using the Cassegrain Infrared Camera, an instrument based on a $`256\times 256`$-pixel InSb array with the J (1.25 $`\mu `$m), H (1.65 $`\mu `$m) and K<sub>s</sub> (2.15 $`\mu `$m) filters and a field-of-view of 32 arcsec. The reduction of the data was done under IRAF and included subtraction of the dark noise, flat-field corrections, and combination of images to remove bad pixels, cosmic rays, and the sky. Then aperture photometry for each object was performed using nearby faint standards for calibration. The apparent magnitudes are summarized in Table 1 and plotted in Figure 1. Due to the steadiness of the sources, it was possible to fit the energy flux to a power law (defined as $`f(\nu )\propto \nu ^{-\alpha }`$) by $`\chi ^2`$ minimization, giving the spectral index, $`\alpha `$, for each source (Table 1). Finally, we calculated the luminosity, $`L(\nu )=4\pi d_L^2\nu f(\nu )`$, (1) using the luminosity distance, $`d_L`$, where $`q_0=0.5`$, $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, z is the redshift, and c the velocity of light in vacuum: $`d_L=\frac{c}{H_0q_0^2}\left(zq_0-(1-q_0)(\sqrt{1+2q_0z}-1)\right)`$ (Weinberg 1972) (2) and the K-correction, where $`\alpha `$ is the spectral index: $`f(\nu )=f_{obs}(\nu )(1+z)^{\alpha -1}`$. The K-corrected $`\overline{L}_\nu `$ luminosities in the K<sub>s</sub>-band (as defined in Dondi & Ghisellini (1995)), calculated from our near-infrared observations, are given in Table 1. For 0716+714, we took the lower limit $`z>0.3`$ of Wagner et al. (1996). All other redshifts were taken from the compilations of Ghisellini et al. (1993) and Dondi & Ghisellini (1995). A strong correlation between $`\gamma `$-ray and near-infrared luminosities was shown by Xie et al. (1997), who suggest that it may be a common property of blazars. According to them, inverse Compton scattering of the infrared radiation from hot circumnuclear dust by a relativistic electron beam should be responsible for the $`\gamma `$-ray flares. According to Xie et al. (1997), the near-infrared and $`\gamma `$-ray luminosities of blazars can be related by: $`\mathrm{log}\overline{L}_\gamma =1.26\mathrm{log}\overline{L}_{IR}-11.38`$ Using this relationship, we then estimated the $`\nu `$F<sub>ν</sub> luminosity in the $`\gamma `$-ray range, using the K-corrected $`\overline{L}_\nu `$ luminosity in the $`K_s`$-band. These results are summarized in Table 1 and their discussion is given in Chapuis et al. (1998).
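For concreteness, Eqs. (1) and (2) and the K-correction can be combined in a few lines; the following Python sketch (our illustration, adopting the cosmology quoted above) computes the K-corrected $`\nu `$L<sub>ν</sub> luminosity from an observed flux density:

```python
import numpy as np

C_KM_S = 2.998e5          # speed of light [km/s]
H0, Q0 = 50.0, 0.5        # cosmology adopted in the text

def lum_distance_mpc(z):
    """Luminosity distance of Eq. (2) (Weinberg 1972), in Mpc."""
    return C_KM_S / (H0 * Q0**2) * (
        z * Q0 - (1 - Q0) * (np.sqrt(1 + 2 * Q0 * z) - 1))

def nu_L_nu(flux_nu_cgs, nu_hz, z, alpha):
    """K-corrected nu*L_nu [erg/s] from an observed flux density
    [erg/s/cm^2/Hz], per Eq. (1) and the K-correction in the text
    (spectral index alpha defined by f_nu proportional to nu**-alpha)."""
    d_cm = lum_distance_mpc(z) * 3.086e24           # Mpc -> cm
    f_em = flux_nu_cgs * (1 + z)**(alpha - 1)       # K-correction
    return 4 * np.pi * d_cm**2 * nu_hz * f_em
```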
Acknowledgement Observations at the Palomar Observatory were made as part of a continuing collaborative agreement between Palomar Observatory and the Jet Propulsion Laboratory. The research described in this paper was carried out in part by the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration. REFERENCES Blandford, R.D., & Levinson, A. 1995, ApJ, 441, 79 Chapuis, C., et al. 1998, in preparation Dermer, C., Schlickeiser, R., & Mastichiadis, A. 1992, A&A, 256, L27 Dondi, L., & Ghisellini, G. 1995, MNRAS, 273, 583 Fichtel, C. E., et al. 1994, ApJS, 94, 551 Ghisellini, G., Padovani, P., Celotti, A., & Maraschi, L. 1993, ApJ, 407, 65 Kniffen, D. A., et al. 1993, ApJ, 411, 133 Maraschi, L., Ghisellini, G., & Celotti, A. 1992, ApJ, 397, L5 Wagner, S.J., et al. 1996, AJ, 111, 2187 Weinberg, S. 1972, Gravitation and Cosmology (New York: John Wiley & Sons) Xie, G., Zhang, Y., & Fan, J. 1997, ApJ, 477, 114
# X-ray observations through the outburst cycle of the dwarf nova YZ Cnc ## 1 Introduction Dwarf novae are named after their outbursts, during which their luminosity at optical and ultraviolet wavelengths increases by factors $`\lesssim 100`$ (for a review see the monograph on Cataclysmic Variables by Warner 1995). The outbursts are thought to be due to increased accretion onto the white dwarf. Such an increase can be the consequence of increased transfer of matter from the donor star to the accretion disk that surrounds the white dwarf; alternatively, an instability inside the accretion disk could trigger higher accretion onto the white dwarf. The latter model has been prominent in recent theoretical work, but is not without difficulties (see reviews by e.g. Cannizzo 1993, Verbunt 1991, Livio 1999). X-rays of dwarf novae arise from close to the white dwarf and thus reflect the conditions in the accretion disk close to the white dwarf (see, e.g., the review by Verbunt 1996). For a comparison between different models, it is necessary to consider the whole outburst cycle, including the quiescent interval (e.g. Pringle et al. 1986). YZ Cnc is a member of the class of SU UMa type dwarf novae, in which short outbursts are occasionally interspersed by longer and brighter superoutbursts. Its orbital period is 0.0868(2) d (Shafter & Hessman 1988); a more accurate period of 0.086924(7) d is suggested by Van Paradijs et al. (1994). YZ Cnc is remarkable for the behaviour of its ultraviolet resonance lines during its dwarf nova outbursts: during each orbit the profiles of these lines change from almost pure emission to P Cygni profiles with deep absorption, and back again; these changes are not accompanied by changes in the continuum (Drew & Verbunt 1988, Woods et al. 1992). In X-rays YZ Cnc has been studied with the Einstein satellite (Córdova & Mason 1984, Eracleous et al. 1991), with EXOSAT (van der Woerd 1987), and with the ROSAT PSPC during the ROSAT All Sky Survey and in subsequent pointings (Verbunt et al. 1997, van Teeseling & Verbunt 1994). In this paper we report on a ROSAT campaign intended to determine the X-ray fluxes throughout the outburst cycle of YZ Cnc, and also to determine whether the orbital variation in the ultraviolet lines is accompanied by orbital variation in the X-ray flux. In Sect. 2 we describe the observations and data analysis; the results and their interpretation are given in Sect. 3; comparison with earlier X-ray observations is made in Sect. 4; and the implications for the models of dwarf nova outbursts are discussed in Sect. 5. ## 2 Observations and data reduction All 1998 observations were obtained with the ROSAT X-ray telescope (Trümper et al. 1991) in combination with the high-resolution imager (HRI, David et al. 1995). The log of the observations is given in Table 1. The data reduction was done with the Extended Scientific Analysis System (Zimmermann et al. 1996). YZ Cnc was detected in every pointing; the countrate was determined by applying a maximum-likelihood technique which compares the observed photon distribution with the point spread function of the HRI (Cruddace et al. 1988). The resulting countrates are given in Table 1. YZ Cnc is much brighter than the background; the errors in the countrate are therefore dominated by Poisson statistics on the detected number of source counts. ## 3 Results and interpretation We investigate the variation of the X-ray flux of YZ Cnc through the outburst cycle, and also on the orbital timescale. 
### 3.1 Outburst cycle In Figure 1 we show the optical lightcurve of YZ Cnc from April 5 to 26, 1998, as determined by the American Association of Variable Star Observers (AAVSO), together with the HRI countrates listed in Table 1. The optical lightcurve shows maxima of ordinary outbursts occurring on JD 2450912 and JD 2450921, and a superoutburst maximum near JD 2450930. In comparing the X-ray countrates with the optical lightcurves we remark on three features of Fig. 1. First, the HRI countrates are lower during the optical outbursts than in the quiescent intervals. Second, in both quiescent intervals that we cover, the countrate is lower in the later observation. Third, in both outbursts that we cover, the countrate is lower in the later observation. The distribution of the photons over the energy channels of the HRI is the same for the observations taken during outburst as for those taken during quiescence. Comparison of the distributions obtained for the first observations during outburst (i.e. those of April 8 and 16) with those obtained during the later outburst observations (of April 10 and 18) suggests that the decrease in X-rays is marginally less at the lower energies, i.e. that the spectrum becomes slightly softer as the optical outburst proceeds. The significance of this softening is marginal, but it suffices to show that the decrease of the X-ray flux cannot be due to the disappearance of an ultra-soft component. In accordance with these findings, we interpret the change in HRI countrates during the outburst cycle as a change mainly in the amount of gas that emits keV photons. This amount drops gradually during quiescence, more dramatically in the beginning of an outburst, and gradually again as the outburst proceeds. ### 3.2 Short-term variability We have searched for short-term variability by dividing the individual observations into smaller intervals. For the outburst data we determine the average countrate during each ROSAT orbit; for the higher countrate during quiescence we use bins of 256 s. Figure 2 shows the resulting lightcurves. Significant variation is present both during outburst and during quiescence. During the early ordinary outbursts (April 8, 16) the variation appears dominated by a long-term decline. No orbital variation is apparent in any of the outburst data. During quiescence, the flux level at a given orbital phase varies as much as the overall variation. We have folded the variation on the orbital period of 0.086924 d, and find no significant variation on the orbital period, in quiescence or in outburst. Any orbital variation is less than the irregular variations seen in Fig. 2. ## 4 Comparison with previous X-ray observations ### 4.1 Previous observations of YZ Cnc To compare our observations with previous ROSAT PSPC observations, we note that for a 2-3 keV thermal spectrum as found for YZ Cnc by Van Teeseling & Verbunt (1994) the ROSAT PSPC (channels 50-201) countrate is similar to the Einstein IPC countrate, and about twice the ROSAT HRI countrate. From the results listed in Table 1, we therefore expect countrates in the ROSAT PSPC (ch. 50-201) or Einstein IPC of 0.22-0.17 cts/s in quiescence and of 0.047-0.025 cts/s during outburst. The All-Sky Survey observation was obtained from 10 to 12 October 1990, i.e. during the outburst which peaked on October 10 (Bortle 1990). The countrate (in PSPC channels 52-201) is about 0.1 cts/s (Verbunt et al. 1997), and does not vary significantly. 
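As an aside to Sect. 3.2, the orbital-modulation test amounts to epoch folding of the binned light curve on the 0.086924-d period. A minimal sketch follows (the light curve is synthetic and the helper `fold_lightcurve` is an invented name for the illustration; this is not the actual HRI analysis code):

```python
import numpy as np

P_ORB = 0.086924  # orbital period in days (Van Paradijs et al. 1994)

def fold_lightcurve(times_d, rates, period_d=P_ORB, nbins=10):
    """Fold a binned light curve on the orbital period.

    times_d: bin midpoints in days; rates: countrates in those bins.
    Returns (phase bin centres, mean rate, error on the mean) per phase bin.
    """
    phases = (times_d / period_d) % 1.0
    idx = np.digitize(phases, np.linspace(0.0, 1.0, nbins + 1)) - 1
    centres = (np.arange(nbins) + 0.5) / nbins
    mean = np.array([rates[idx == i].mean() for i in range(nbins)])
    err = np.array([rates[idx == i].std(ddof=1) / np.sqrt((idx == i).sum())
                    for i in range(nbins)])
    return centres, mean, err

# Synthetic example: a constant 0.1 cts/s source in 256-s bins with Poisson
# noise only; the folded profile should come out flat to within the errors.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 256.0 / 86400.0)          # two days of 256-s bins
counts = rng.poisson(0.1 * 256.0, size=t.size)    # counts per bin
phase, mean, err = fold_lightcurve(t, counts / 256.0)
print(np.round(mean, 3))
```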
The pointed observations with the ROSAT PSPC gave countrates (in channels 50-201) of 0.4 cts/s on 3 April 1991, immediately before an optical outburst maximum, and of 0.27 cts/s on 7-11 October 1993, in quiescence (van Teeseling & Verbunt 1994). We have analyzed a previously unpublished ROSAT PSPC observation, obtained on 1 May 1994. The countrate (ch. 50-201) is 0.249(14) cts/s, marginally lower than the 1993 countrate. All these countrates are significantly higher than the corresponding ones during outburst and in quiescence in April 1998, and indicate long-term variability in both the quiescent and the outburst X-ray fluxes of YZ Cnc. An Einstein IPC countrate of about 0.04 cts/s was observed on 8 April 1979 (Córdova & Mason 1984), and has hitherto been interpreted as obtained during quiescence. This Einstein countrate corresponds to the outburst level of April 1998. Outbursts of YZ Cnc were observed by the AAVSO peaking on March 8, 18 and 28 and on April 15 and 23 in 1979 (Bortle 1979). AAVSO measurements of YZ Cnc in March and April 1979 are shown in Fig. 3. The quiescent interval separating the March 28 and April 15 outbursts was longer than the intervals preceding and following it. A single, uncertain measurement obtained close in time to the Einstein observation suggests that YZ Cnc was brighter than its quiescent level. We suggest that the Einstein observation was obtained during an outburst peaking close to 8 April 1979, which was missed by the optical observers. The countrates observed with the EXOSAT Low Energy detector (with 3000 Lexan filter) between 27 October and 13 November 1983 were about 0.01 cts/s during quiescence, and a factor 4 lower during outbursts (Van der Woerd 1987). For an assumed column in the range $`N_\mathrm{H}=10^{19}-10^{20}\mathrm{cm}^{-2}`$, an HRI countrate of 0.1 cts/s predicts a countrate for the EXOSAT LE (3000 Lexan) in the range $`0.02-0.01`$ cts/s. Depending on the assumed column, the EXOSAT observations are thus compatible both with the higher X-ray luminosity observed in the ROSAT PSPC observations, and with the somewhat lower luminosity of our ROSAT HRI observations. ### 4.2 Orbital variability Our finding that the observed X-ray flux of YZ Cnc varies on short time scales unrelated to the orbital phase is in accordance with similar findings by Van der Woerd (1987) in his analysis of the EXOSAT data. Any explanation of the marked change in the profiles of the ultraviolet resonance lines in YZ Cnc in terms of a variable absorption column between the ultraviolet continuum source and Earth must be compatible with a much less marked variation in the X-ray flux. For a cold gas with cosmic abundances, an upper limit to the X-ray variability on orbital time scales of $`\lesssim 10`$% (see the April 18 data in Fig. 2) corresponds to an upper limit on the column of $`N_\mathrm{H}\lesssim 10^{20}\mathrm{cm}^{-2}`$. The upper limit to the column in the more realistic case of the complicated ionization structure in the wind of a dwarf nova can be determined only with a detailed model of this structure (see e.g. the review by Drew 1997 and references therein). ### 4.3 Observations of other dwarf novae Various other dwarf nova outbursts have been covered in X-rays. 
A general description applies to the EUV/X-ray lightcurves of VW Hyi, Z Cam and SS Cyg observed so far: a hard component is more luminous in quiescence, and less luminous but constant during outburst, whereas a soft component is brighter during outburst than in quiescence, and decreases rapidly after reaching its maximum early in the outburst (Wheatley et al. 1996a,b, Ponman et al. 1995). The situation is different for the dwarf nova outbursts of U Gem, where both the soft (0.15-0.5 keV) and hard (2-10 keV) X-ray fluxes are higher during outburst than in quiescence, and both components appear to decrease faster during the outburst than the optical flux (Mason et al. 1978, Swank et al. 1978). The EUVE observations of U Gem and SS Cyg show that the soft component is not altogether optically thick (Mauche et al. 1995, Long et al. 1996). The soft component has a different temperature in each of the above dwarf novae, but in most cases the HRI band is dominated by the hard component. (The exception is SS Cyg, whose soft component has a relatively high characteristic temperature; Van Teeseling 1997.) This supports our conclusion in Sect. 3.1 that the hard component is responsible for the observed decrease of the X-ray flux of YZ Cnc during outburst. As regards the quiescent interval between outbursts, we are aware of only one other system that has been observed throughout several full outburst cycles, viz. VW Hyi. EXOSAT observations of this system showed a decrease of the flux in the 0.05-1.5 keV energy range during each of three covered quiescent intervals (van der Woerd & Heise 1987, Pringle et al. 1987), in accordance with our findings for YZ Cnc. ## 5 Implications for outburst models In Sect. 3.1 we concluded that the changes in the X-ray flux of YZ Cnc that we observe through the outburst cycle are due mainly to changes in the amount of X-ray emitting gas. We now compare our results with the predictions of the two classes of models of dwarf nova outbursts: the mass transfer instability and the disk instability. Both models can explain a lower X-ray flux during the outburst as a consequence of a transition of an optically thin, very hot boundary layer during quiescence into a rather less hot, optically thick gas during outburst, when the accretion rate onto the white dwarf is high (Pringle & Savonije 1979). The observations of dwarf novae in outburst indicate that not all the X-ray emitting gas disappears during the outburst, but that some of it remains, at much the same temperature as during quiescence. It would be tempting to locate this remaining component away from the disk, e.g. in a white dwarf or disk corona, if it is to escape becoming optically thick with the increased accretion rate. However, it then has to be explained why this component has a spectrum very similar to the component in the disk, and why it changes at all during outburst. A location of the X-ray emitting gas separate from the optically thick accretion disk could also be an ingredient in understanding how the small variation in X-rays is compatible with the large absorbing column required to explain the strong variations in the ultraviolet resonance lines. The mass transfer instability model is not sufficiently developed to predict the mass transfer as a function of time, and thus to predict the evolution of the X-ray flux, during outburst. 
However, the model does predict a continued decrease of the accretion rate onto the white dwarf during quiescence, perhaps levelling off to a constant level in long quiescent intervals when the disk reaches equilibrium with the lower mass inflow rate at its outer edge. A continuing decrease of the X-ray flux in quiescence, as observed for YZ Cnc and various other dwarf novae, is thus in accordance with the mass transfer instability model (for model accretion rates onto the white dwarf during the outburst cycle, see e.g. Pringle et al. 1986). The disk instability model in its simple form predicts a gradual increase of the accretion rate onto the white dwarf during quiescence, and therefore a gradual increase in the optical and ultraviolet flux. An increased accretion rate through an optically thin disk also predicts an increase in the X-ray flux, contrary to our observations of YZ Cnc. The rise of the ultraviolet flux in quiescence predicted by the disk instability model is contrary to observations of several dwarf novae, in particular the eclipsing system Z Cha, but also VW Hyi and WX Hyi (Van Amerongen et al. 1990, Verbunt et al. 1987, Hassall et al. 1985). Szkody et al. (1991) investigated the evolution of the ultraviolet flux of many dwarf novae in quiescence, and did not find a single case where the ultraviolet flux increases when measured in a single interoutburst interval. A white dwarf that dominates the ultraviolet flux in quiescence and cools after the outburst has been suggested as an explanation for the observed ultraviolet flux decrease. This explanation is not compatible with the ultraviolet observations of Z Cha, in which the contributions of the white dwarf and the disk are determined separately. A cooling white dwarf also does not explain our X-ray observations of YZ Cnc. A well-known problem of the disk instability model is its failure to describe the observation, in short outbursts, that the optical rise precedes the ultraviolet rise by several hours (Pringle et al. 1986). Various ad hoc suggestions have been made to explain this ultraviolet delay. These models suggest that the inner part of the disk continues to drain in quiescence. Such models, which include the effect of a magnetic field of the white dwarf (Livio & Pringle 1992), a wind from the accretion disk in quiescence (Meyer & Meyer-Hofmeister 1994), and irradiation of the disk by the (relatively) hot white dwarf (King 1997), are possibly compatible with the decrease of the ultraviolet and X-ray flux during quiescence. Finally, we note that the X-ray flux at the end of the quiescent interval preceding the outburst peaking on JD 2450912 is higher than the flux measured during the two subsequent quiescent intervals. This indicates variability in the flux level between different outburst intervals, and shows that a trend in the quiescent interval is best measured from a single interval. The X-ray flux in the beginning of the superoutburst, on April 24, is similar to the fluxes we measure in the beginning of the ordinary outbursts. This variability may be similar to the long-term variations that we find by comparing our new observations with earlier observations made with the ROSAT PSPC (see Sect. 4.1). We do not find any clear correlation with the outburst pattern. Thus, the ROSAT PSPC measurements were made both in relatively long quiescent intervals (11 days, in Oct 1993 and May 1994) and in a relatively short quiescent interval (5 days, in April 1991). 
The ROSAT PSPC observations were obtained longer before the next superoutburst than our new HRI observations; but the Einstein observation was also made long before the next superoutburst. The high countrate observed with the ROSAT PSPC is therefore due neither to a different length of the quiescent periods, nor to a different location in the interval between superoutbursts. ###### Acknowledgements. PJW acknowledges support by PPARC as a postdoctoral fellow.
# A prognosis oriented microscopic stock market model ## I Introduction In the last years a number of microscopic models for price fluctuations have been developed by physicists and economists. The purpose of these models is, in our view, not to make specific predictions about the future development of the stock market (for instance with the intention to make a fortune) but to reproduce the universal statistical properties of liquid markets. Some of these properties are an exponentially truncated Levy distribution for the price differences on short time scales (significantly less than one month) and a linear autocorrelation function of the prices which decays to zero within a few minutes. We present a new microscopic model with interacting investors who speculate on price changes that are produced by themselves. The main features of the model are individual forecasts (or prognoses) for the stock price in the future, a very simple trading strategy to gain profit, limit orders for buying and selling stocks, and various versions of interaction among the investors during the stage of forecasting the future price of a stock. The paper is organized as follows: in section 2 we define our model; in section 3 we present the results of numerical simulations of this model, including specific examples of the price fluctuations using different interactions among the investors, the autocorrelation function of the price differences and, most importantly, their distribution, which turns out to be an (exponentially) truncated Levy distribution. Section 4 summarizes our findings and provides an outlook for further refinements of the model. ## II The model The system consists of one single stock with actual price $`K(t)`$ and $`N`$ investors labeled by an index $`i=1,\dots ,N`$. In the most simplified version of the model the investors have identical features and are described at each time step by three variables: * The personal prognosis of investor $`i`$ at time $`t`$ about the price of the stock at time $`t+1`$. * The cash capital (real variable) of investor $`i`$ at time $`t`$. * The number of shares (integer variable) of investor $`i`$ at time $`t`$. The system at time $`t=0`$ is initialized with some appropriately generated initial values for $`P_i(t=0)`$, $`C_i(t=0)`$ and $`S_i(t=0)`$, plus a particular price for the stock. The dynamics of the system evolves in discrete time steps $`t=1,2,3,\mathrm{}`$ and is defined as follows. Suppose time step $`t`$ has been finished, i.e. the variables $`K(t)`$, $`P_i(t)`$, $`C_i(t)`$ and $`S_i(t)`$ are known. Then the following consecutive procedures are executed. Make Prognosis Each investor sets up a new personal prognosis via $$P_i(t+1)=(xP_i(t)+(1-x)K(t))e^{r_i},$$ (1) where $`x\in [0,1]`$ is a model dependent weighting factor (for the investor’s old prognosis and the price of the stock) and the $`r_i`$ are independent identically distributed random variables of mean zero and standard deviation $`\sigma `$ that mimic a (supposedly) stochastic component in the individual prognosis (external influence, greed, fear, sentiments, …). Make Orders Each investor gives his limit order on the basis of his old and his new prognosis: $`P_i(t+1)-P_i(t)>0`$: investor $`i`$ puts a buy-order limited by $`P_i(t)`$, which means that he wants to transform all cash $`C_i(t)`$ into $`\mathrm{int}[C_i(t)/P_i(t)]`$ shares if $`K(t+1)\le P_i(t)`$. 
$`P_i(t+1)-P_i(t)<0`$: investor $`i`$ puts a sell-order limited by $`P_i(t)`$, which means that he wants to transform all stocks into $`S_i(t)K(t+1)`$ cash if $`K(t+1)\ge P_i(t)`$. Now let $`i_1,i_2,\dots ,i_{N_A}`$ be the investors that have put a sell-order, with limits $`P_{i_1}(t)\le P_{i_2}(t)\le \dots \le P_{i_{N_A}}(t)`$, and let $`j_1,j_2,\dots ,j_{N_B}`$ be the investors that have put a buy-order, with limits $`P_{j_1}(t)\ge P_{j_2}(t)\ge \dots \ge P_{j_{N_B}}(t)`$. Calculate new price Define the supply and demand functions $`A(K)`$ and $`B(K)`$, respectively, via $$A(K)=\sum _{a=1}^{N_A}S_{i_a}\theta (K-P_{i_a}(t))$$ (2) $$B(K)=\sum _{b=1}^{N_B}\mathrm{\Delta }S_{j_b}[1-\theta (K-P_{j_b}(t))]$$ (3) with $`\mathrm{\Delta }S_{j_b}=\mathrm{int}[C_{j_b}(t)/P_{j_b}(t)]`$ the number of shares demanded by investor $`j_b`$, and $`\theta (x)=1`$ for $`x\ge 0`$ and $`\theta (x)=0`$ for $`x<0`$. Then the total turnover at price $`K`$ would be $$Z(K)=\mathrm{min}\{A(K),B(K)\}$$ (4) and the new price is determined in such a way that $`Z(K)`$ is maximized. Since $`Z(K)`$ is a piece-wise constant function it is maximal in a whole interval, say $`K\in [P_{i_{\mathrm{max}}},P_{j_{\mathrm{max}}}]`$ for some $`i_{\mathrm{max}}\in \{i_1,\dots ,i_{N_A}\}`$ and $`j_{\mathrm{max}}\in \{j_1,\dots ,j_{N_B}\}`$. Then we define the new price to be the weighted mean $$K(t+1)=\frac{P_{i_{\mathrm{max}}}A(P_{i_{\mathrm{max}}})+P_{j_{\mathrm{max}}}B(P_{j_{\mathrm{max}}})}{A(P_{i_{\mathrm{max}}})+B(P_{j_{\mathrm{max}}})}.$$ (5) Note that the weighting by the total supply and demand takes care of the price being slightly higher (lower) than the arithmetic mean $`(P_{i_{\mathrm{max}}}+P_{j_{\mathrm{max}}})/2`$ if the supply is smaller (larger) than the demand. Execute orders Finally the sell-orders of the investors $`i_1,\dots ,i_{\mathrm{max}}`$ and the buy-orders of the investors $`j_1,\dots ,j_{\mathrm{max}}`$ are executed at the new price $`K(t+1)`$, i.e. the buyers $`j_1,\dots ,j_{\mathrm{max}}`$ update $$S_{j_b}(t+1)=S_{j_b}(t)+\mathrm{int}[C_{j_b}(t)/P_{j_b}(t)]$$ (6) $$C_{j_b}(t+1)=C_{j_b}(t)-K(t+1)(S_{j_b}(t+1)-S_{j_b}(t))$$ (7) and the investors $`i_1,\dots ,i_{\mathrm{max}}`$ sell all their shares at price $`K(t+1)`$: $$S_{i_a}(t+1)=0$$ (8) $$C_{i_a}(t+1)=C_{i_a}(t)+S_{i_a}(t)K(t+1)$$ (9) If $`A(P_{i_{\mathrm{max}}})<B(P_{j_{\mathrm{max}}})`$ then investor $`j_{\mathrm{max}}`$ cannot buy $`\mathrm{int}[C_{j_{\mathrm{max}}}(t)/P_{j_{\mathrm{max}}}(t)]`$ shares but only the remaining ones, whereas in the case $`A(P_{i_{\mathrm{max}}})>B(P_{j_{\mathrm{max}}})`$ investor $`i_{\mathrm{max}}`$ cannot sell all his shares. The orders of the investors $`i_{\mathrm{max}+1},\dots ,i_{N_A}`$ and $`j_{\mathrm{max}+1},\dots ,j_{N_B}`$ cannot be executed due to their limits. The execution of orders completes one round; measurements of observables can be made and then the next time step can be processed. A huge variety of interactions among the investors can be modeled; here we restrict ourselves to three different versions taking place at the level of the individual prognosis genesis: * Each investor $`i`$ knows the prognoses $`P_{i_1}(t),\dots ,P_{i_m}(t)`$ of $`m`$ randomly selected (once at the beginning of the simulation) neighbors. 
When making an order, he modifies his strategy and puts, in the case $$P_i(t+1)-\left[g_i(t)P_i(t)+\sum _{n=1}^{m}g_{i_n}(t)P_{i_n}(t)\right]>(<)0$$ (10) a buy (sell) order, limited still by his own prognosis $`P_i(t)`$. We choose the weights $`g_i(t)=1/2`$ and $`g_{i_n}(t)=1/(2m)`$ for $`n=1,\dots ,m`$. * In addition to interaction I<sub>1</sub>, investor $`i`$ changes the weights $`g`$ after the calculation of the new price $`K(t+1)`$ according to the success of the prognoses: $$g_{i_{-}}(t+1)=g_{i_{-}}(t)-\mathrm{\Delta }g$$ (11) $$g_{i_{+}}(t+1)=g_{i_{+}}(t)+\mathrm{\Delta }g$$ (12) where for each investor $`i`$ the index $`i_{-}`$ ($`i_{+}`$) denotes the investor from the set $`\{i,i_1,\dots ,i_m\}`$ with the worst (best) prognosis, i.e.: $$i_{-}\in \{i,i_1,\dots ,i_m\}\text{ such that }|P_{i_{-}}(t)-K(t+1)|\text{ is maximal}$$ (13) $$i_{+}\in \{i,i_1,\dots ,i_m\}\text{ such that }|P_{i_{+}}(t)-K(t+1)|\text{ is minimal}$$ (14) The weight $`g_i`$ is forced to be positive, because an investor should believe in his own prognosis $`P_i(t)`$. * In addition to interaction I<sub>2</sub>, neighbors with weights $`g_{i_n}(t+1)<0`$ are replaced by randomly selected new neighbors. ## III Results In this section we present the results of numerical simulations of the model described above. In what follows we consider a system with 1000 investors and build ensemble averages over 10000 independent samples (i.e. simulations) of the system. We checked that the results we are going to present below do not depend on the system size, i.e. the number $`N`$ of investors: changing $`N`$ does not change the statistical properties of the price differences qualitatively; increasing $`N`$ only decreases the average volatility (variance of the price changes). For concreteness we have chosen the following parameters: the initial price of the stock is $`K_0=100`$ (arbitrary units); each trader initially has $`C_i(t=0)=50000`$ units of cash and $`S_i(t=0)=500`$ stocks (thus the total capital of each trader is initially 100000 units); the standard deviation of the Gaussian random variables $`r_i`$ is $`\sigma =0.01`$ (with mean zero). We performed the simulations over 1000 time steps, which is roughly 10 times longer than the transient time of the process for these parameters. In other words, we are looking at its stationary properties. First we should note that in the deterministic case $`\sigma =0`$ no trade would take place; hence the stochastic component in the individual forecasts is essential for any interesting time evolution of the stock market price. We focus on the time dependence of the price $`K(t)`$, the price change $`\mathrm{\Delta }_T(t)=K(t+T)-K(t)`$ over an interval $`T`$, their time dependent autocorrelation $$C_T(\tau )=\frac{\langle \mathrm{\Delta }_T(t+\tau )\mathrm{\Delta }_T(t)\rangle -\langle \mathrm{\Delta }_T(t+\tau )\rangle \langle \mathrm{\Delta }_T(t)\rangle }{\langle (\mathrm{\Delta }_T(t))^2\rangle -\langle \mathrm{\Delta }_T(t)\rangle ^2}$$ (15) and their probability distribution $`P(\mathrm{\Delta }_T(t))`$. The statistical properties of the price changes produced by our model depend very sensitively on the parameter $`x`$ in equation (1). In particular, for the case $`x=1`$ it turns out that the total turnover decays like $`t^{-1/2}`$ in the interaction-free case, which implies that after a long enough time no investor will buy or sell anything anymore. However, only an infinitesimal deviation from $`x=1`$ leads to a saturation of the total turnover at some finite value, and trading will never cease. 
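To make the update cycle of Sect. II concrete, here is a minimal one-time-step sketch of the non-interacting model in Python (our own illustration, not the authors’ code; the initialization and the value of $`x`$ are arbitrary, and the partial fill of the marginal order when supply and demand do not balance exactly is ignored):

```python
import numpy as np

rng = np.random.default_rng(1)

N, x, sigma = 1000, 0.5, 0.01    # investors, weighting factor x (illustrative), noise width
K = 100.0                        # current price K(t), i.e. K_0
P = 100.0 * np.exp(rng.normal(0.0, sigma, N))   # illustrative initial prognoses P_i(0)
C = np.full(N, 50000.0)          # cash C_i(t)
S = np.full(N, 500.0)            # shares S_i(t)

def step(K, P, C, S):
    """One round: prognoses (Eq. 1), limit orders, price (Eqs. 2-5), execution."""
    P_new = (x * P + (1.0 - x) * K) * np.exp(rng.normal(0.0, sigma, N))
    buy, sell = P_new > P, P_new < P      # order type; the limit price is P_i(t)
    if not buy.any() or not sell.any():
        return K, P_new
    dS = np.floor(C / P)                  # int[C_i(t)/P_i(t)] shares demanded

    grid = np.unique(P[buy | sell])       # candidate prices = the limit prices
    A = np.array([S[sell & (P <= k)].sum() for k in grid])   # supply, Eq. (2)
    B = np.array([dS[buy & (P > k)].sum() for k in grid])    # demand, Eq. (3)
    Z = np.minimum(A, B)                                     # turnover, Eq. (4)
    if Z.max() <= 0:                      # no crossing orders: no trade
        return K, P_new
    flat = np.flatnonzero(Z == Z.max())   # interval where Z is maximal
    Ai, Bj = A[flat[0]], B[flat[-1]]
    K_new = (grid[flat[0]] * Ai + grid[flat[-1]] * Bj) / (Ai + Bj)   # Eq. (5)

    fb = buy & (P >= K_new)               # a buy executes if K(t+1) <= limit
    fs = sell & (P <= K_new)              # a sell executes if K(t+1) >= limit
    bought = np.floor(C[fb] / P[fb])
    S[fb] += bought
    C[fb] -= K_new * bought               # Eqs. (6)-(7)
    C[fs] += S[fs] * K_new                # Eqs. (8)-(9)
    S[fs] = 0.0
    return K_new, P_new

for t in range(5):
    K, P = step(K, P, C, S)
    print(round(K, 2))
```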
In Figs. 1–4 we present the results of the interaction-free case with $`x=1`$ (Fig. 1) and $`x=0`$ (Fig. 2) and contrast them with the results of the model with interaction $`I_1`$, also for $`x=1`$ (Fig. 3) and $`x=0`$ (Fig. 4). For $`x=0`$ investor $`i`$ does not look at his old prognosis but only at the actual stock price when making a new prognosis. In this case the distribution of the price differences can be fitted very well by a Gaussian distribution, irrespective of the version of interaction or no interaction. The self-similarity exponent $`1/\mu \approx 0.5`$ agrees with the scaling behavior of a Gaussian distribution. The autocorrelation function of the price differences decays to zero, alternating in sign, within a few time steps. In the opposite case $`x=1`$ investor $`i`$ makes his new prognosis $`P_i(t+1)`$ based on his own old one and never looks at the current stock price. Now we can show that the distribution of the price differences decays exponentially in its tails, but the self-similarity exponent $`1/\mu \approx 0.2`$ is too small to agree with a Levy stable distribution. The autocorrelation function of the price differences decays very quickly, so that there are significant linear anti-correlations only between consecutive differences. | $`1/\mu `$ | I<sub>0</sub> | I<sub>1</sub> | I<sub>2</sub> | I<sub>3</sub> | | --- | --- | --- | --- | --- | | $`x=0`$ | $`0.442`$ | $`0.466`$ | $`0.472`$ | $`0.472`$ | | $`x=1`$ | $`0.228`$ | $`0.212`$ | $`0.185`$ | $`0.185`$ | The self-similarity exponent has been determined via the scaling relation $`P(\mathrm{\Delta }_T=0)\propto T^{-1/\mu }`$ and a linear fit to the data of $`P(\mathrm{\Delta }_T=0)`$ versus $`T`$ in a log-log plot. These least-squares fits yield the relative errors for our estimates of the self-similarity exponent $`1/\mu `$ in the table above, which lie between $`0.1\%`$ and $`0.3\%`$. ## IV Summary and outlook We presented a new microscopic model for liquid markets that produces an exponentially truncated Levy distribution with a self-similarity exponent $`1/\mu \approx 0.2`$ for the price differences on short time scales. Studying the distribution on longer time scales we find that it converges to a Gaussian distribution. The autocorrelation function of the price changes decays to zero within a few time steps. The statistical properties of our prognosis oriented model depend very sensitively on the rules by which the investors make their prognoses. There are many possible variations of our model that could be studied. It is plausible that a heterogeneous system of traders leads to stronger price fluctuations and thus a smaller value of the self-similarity exponent $`\mu `$ (which appears to satisfy $`1/\mu \approx 0.7`$ for real stock price fluctuations). The starting wealth could be distributed according to a power law (comparable with the cluster sizes in the Cont-Bouchaud model). Or the investors could have different rules for making prognoses and could follow different trading strategies. Another possible variation is to implement a threshold in the simple strategy in order to simulate risk aversion (the value of the threshold could depend on the actual volatility). Unfortunately, forecasts for real stock markets cannot be made with our model, because it is a stochastic model. We see possible applications for this model in the pricing and the risk measurement of complex financial derivatives. Acknowledgment We thank D. Stauffer for helpful discussions. H. R.’s work was supported by the Deutsche Forschungsgemeinschaft (DFG).
# Similar Shot Noise in Cyg X-1, GRO J0422+32 and 1E 1724-3045 ## 1 Introduction The aperiodic X-ray variability of the black hole X-ray binary Cyg X-1 and other stellar black hole candidates has been described by phenomenological “shot noise” models. Such rapid X-ray variability, or flickering, is more pronounced during their hard state. In hard states, the power density spectra show a flat top followed by a power law above a certain break frequency. The higher the amplitude of the rapid variability, the lower the break frequency. This suggests that the shot properties vary with time in the hard states. In the framework of “shot noise” models, the properties of the shots and superposed shot profiles in Cyg X-1 have been studied with data obtained from Uhuru, HEAO 1 A-2, EXOSAT/ME, Ginga, and RXTE/PCA. In summary, the shot properties are as follows: (1) a shot has a nearly time-symmetric rise and decay lasting for up to a few seconds; (2) the energy spectrum of the shot changes with time; (3) the shot duration changes with the states. Recently, striking timing similarities between the power density spectra of the black hole candidate GRO J0422+32 (Nova Persei 1992) and the X-ray burster 1E 1724-3045 were reported. In this paper, we show similar aperiodic variability in Cyg X-1 in the hard state and compare the shot noise properties with those of GRO J0422+32 and 1E 1724-3045. We also present our results from the study of $`\sim `$ 500 shots observed in Cyg X-1 with RXTE/PCA. ## 2 RXTE/PCA Observation of Cyg X-1 We analyze the data obtained from RXTE/PCA observations conducted on 1997 Jan 17 and 20. The entire PCA observation lasted for about 12.5 hours. The average count rates in the entire PCA band for the two days are 4330 and 3880 cps, respectively. High time resolution ($`\sim 2^{-12}`$ s) Single-Bit mode data in the energy ranges 1.0-5.1 keV, 5.1-8.7 keV, 8.7-18.3 keV and 18.3-98.5 keV were used in studying the aperiodic variability. We combine the data in the 4 bands to calculate the average power density spectra (PDS). The average PDS of Jan 20 is plotted in Fig.1. The PDS displays a flat top below a low-frequency break around 0.03 Hz, a peaked-noise component centered at 0.2 Hz with a FWHM of $`\sim `$ 0.2 Hz, and a second break frequency around 3 Hz. Similar to the previous study of GRO J0422+32 and 1E 1724-3045, we apply a model consisting of two “shot noise” components characterized by two Lorentzian functions in the frequency ranges below 0.1 Hz and above 1.0 Hz (dashed line and dash-dot line), and fit the residual noise power in the frequency range 0.1–1.0 Hz with a model composed of a linear rise with an exponential decay (solid line in the inset panel). The model is shown as the solid line in Figure 1. ## 3 Comparison with Observations of GRO J0422+32 and 1E 1724-3045 The PDS of the OSSE observation of GRO J0422+32 and the PDS of 1E 1724-3045 observed with RXTE/PCA can also be characterized by two “shot noise” components and one peaked-noise feature. The characteristic duration of the two “shot noise” components ($`\tau _1`$ or $`\tau _2`$) is related to the Half Width at Half Maximum (HWHM) of each Lorentzian as $$\tau _{1,2}=\frac{1}{2\pi \mathrm{HWHM}}$$ The parameters of the noise components of the three sources are compared in Table 1. ## 4 X-ray Shots of Cyg X-1 in Hard State To investigate the above interpretations in terms of “shot noise”, we study the X-ray shots observed with RXTE/PCA in the energy range $`\sim `$ 2–60 keV. 
The X-ray shots were selected from the light curve. The criteria are that the peak X-ray counts in a 0.125 s bin should be larger than 1250, and should be the maximum within the neighboring 5.0 s on both sides. In the entire observation, we found 513 shots which meet these requirements. ### Shot Width The width of each shot is derived from the auto-correlation coefficients of the 10 s light curve around each shot, $`A(i)`$. They are defined as follows: $$A(i)=\frac{\sum _{k=0}^{N-i-1}(X_k-\overline{X})(X_{k+i}-\overline{X})}{\sum _{k=0}^{N-1}(X_k-\overline{X})^2}$$ where $`\delta t`$ is $`\sim `$ 2.44 ms, the time resolution of the light curves, and $`A(i)`$ ($`i=0,\dots ,N-1`$) the auto-correlation coefficient at lag $`i\delta t`$. Then we define the average shot width as $$T_{width}=2.0\times \sqrt{\frac{\sum _{k=1}^{M}k^2A(k)+(0.25)^2A(0)}{\sum _{k=0}^{M}A(k)}}\delta t$$ where 0.25 represents the average time shift of the central bin $`A(0)`$, and $`M`$ is the maximum $`i`$ with $`A(i)`$ no less than 0.0 in the main peak of the auto-correlation function. (A minimal numerical sketch of this estimator is given at the end of the paper.) In Fig.2, we show the profile of one of the brightest shots. The profile was obtained from the Standard 1 data mode. The corresponding auto-correlation coefficients are shown in Fig.3. They were obtained from Single-Bit data and binned to a time resolution of $`\sim `$ 2.44 ms. The coefficients used in the calculation of the shot width are shaded in the figure. The distribution of shot width for the 513 shots is shown in Fig.4. ### Peak-aligned Shot Profile and Spectral Variation We superposed the 513 shots by aligning them at the centers of their 0.125 s peak bins (combining all 4 energy channels). The time resolution used in the alignment is $`\sim `$ 2.44 ms. In the upper panel of Fig.5, we show the peak-aligned shot profiles of the soft band (channels 1+2) and the hard band (channels 3+4), respectively. They were normalized to their peak counts. In the lower panel of the same figure, we plot the residuals of the profile subtraction (3+4)–(1+2). In general, the peak-aligned profiles in Fig.5 show: (1) the time scale of the shot rise in the hard band is smaller than that in the soft band; (2) the shot decay in the hard band is slower than the decay in the soft band; (3) there are more hard photons after the shot peak than before the shot peak, indicating a spectral hardening. Three factors contribute to the difference between the soft-band and hard-band profiles and to the observed residuals in the lower panel of Fig.5. The first is the time lag of the shot rise between the soft band and the hard band. The second is the slower decay in the hard band compared with that of the soft band. The third is that the shot peak in the hard band is narrower than that in the soft band. To study the spectral variation, we plot the ratio between the peak-aligned profile in the hard band and that in the soft band in Fig.6. Both sides around the peak were fitted to a 5-degree polynomial. The spectrum after the shot peak is harder than that before the peak, as shown in Fig.6. This is consistent with the Ginga results. ## 5 Summary In summary, we have obtained the following results: * We have found that the aperiodic X-ray variability of Cyg X-1 in the hard state is similar to that observed in another black hole candidate, GRO J0422+32, and in the neutron star X-ray binary 1E 1724-3045. 
Based on the masses of the companion stars and of the central objects in the three X-ray binaries, we conclude that the generation mechanism of the X-ray shots is probably independent of the mass of the companion star, the mass of the central compact object (BH or NS), and the type of accretion. * The spectral evolution around the shot peak can last for as long as a few seconds, and there is a spectral hardening after the shot peak. These results are consistent with the previous study of the X-ray shots in Cyg X-1 observed with Ginga. * The durations of the 513 bright shots in Cyg X-1, defined from their auto-correlation coefficients, range from $`\sim `$ 0.1 s to $`\sim `$ 2.0 s and are not bimodally distributed. This does not support the assumption that there are two kinds of shots with different durations, as inferred from the power spectra. Thus the attribution of each of the noise components to a group of shots with a certain characteristic duration is probably wrong. ## Acknowledgments WY appreciates the support and the helpful discussions and comments of Prof. J. van Paradijs, Dr. C. Kouveliotou and Dr. M. Finger at NASA/MSFC.
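For completeness, a minimal numerical sketch of the shot-width estimator of Sect. 4 (our own illustration on a synthetic light curve, not the actual PCA analysis code):

```python
import numpy as np

DT = 2.44e-3  # light-curve time resolution [s]

def shot_width(x, dt=DT):
    """T_width from the auto-correlation coefficients A(i) defined in Sect. 4."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    denom = (x * x).sum()
    A = np.array([(x[: n - i] * x[i:]).sum() / denom for i in range(n)])

    M = 0                       # last lag of the main peak with A(i) >= 0
    while M + 1 < n and A[M + 1] >= 0.0:
        M += 1

    k = np.arange(1, M + 1)
    num = (k ** 2 * A[1 : M + 1]).sum() + 0.25 ** 2 * A[0]
    return 2.0 * np.sqrt(num / A[: M + 1].sum()) * dt

# Synthetic check: a two-sided exponential shot (0.3 s decay constant) on a
# flat background, sampled in 2.44 ms bins with Poisson noise.
t = np.arange(-5.0, 5.0, DT)
lam = 500.0 * DT + 2000.0 * DT * np.exp(-np.abs(t) / 0.3)  # counts per bin
lc = np.random.default_rng(0).poisson(lam).astype(float)
print(round(shot_width(lc), 3), "s")
```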
For proceedings of Frontiers in Neutron Scattering Research, ISSP, University of Tokyo, Nov. 24–27, 1998. To be published in J. Phys. Chem. Solids. # CHARGE SEGREGATION AND ANTIFERROMAGNETISM IN HIGH-$`T_c`$ SUPERCONDUCTORS ## 1 INTRODUCTION Neutron-scattering, nuclear-magnetic-resonance (NMR), and muon-spin-rotation ($`\mu `$SR) studies have all provided evidence for the coexistence of local antiferromagnetic spin correlations with superconductivity in the layered cuprates. The nature of this coexistence has been the subject of considerable debate. Neutron scattering studies of certain cuprate superconductors have indicated that the magnetic excitations are surprisingly similar to those in the undoped parent compounds, which are both insulating and antiferromagnetic due to strong electronic interactions. One way to accommodate both antiferromagnetic correlations and mobile holes is through charge segregation. One type of segregation of particular interest involves periodically-spaced charge stripes that act as antiphase domain walls between narrow antiferromagnetic domains. Perhaps the first evidence of such correlations was the measurement of inelastic magnetic scattering peaked at incommensurate wave vectors in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> with $`x\approx 0.15`$. Study of the charge modulation has become possible with the discovery that, for $`x\approx \frac{1}{8}`$, stripes can be pinned by the structural modulation induced by partial substitution of the smaller ion Nd<sup>3+</sup> for La<sup>3+</sup>. The charge order originally detected by neutron scattering has been confirmed with high-energy x-rays, and there is considerable evidence for intimate coexistence of stripe order and superconductivity at $`x=0.15`$. Elastic incommensurate magnetic peaks have also been discovered in Zn-doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> with $`x=0.14`$, and even in samples with no Zn and $`x=0.12`$. In samples of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> with no static order, the incommensurate splitting of the inelastic magnetic scattering is nearly identical to that of the elastic peaks in Nd-doped samples with the same Sr concentration. The possibility that the magnetic scattering in underdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> is incommensurate had been considered earlier, and recently it has been demonstrated that, indeed, at least part of the scattering is incommensurate, with modulation wave vectors consistent with those of the 214 system. In this paper, we briefly discuss several topics related to stripes in the cuprates. In the next section we review the connection between the experimentally observed superlattice peaks and the real-space modulations of the spin and charge densities. In section 3, we present some new results on a Zn-doped 214 sample. Finally, in section 4 we list some of the open questions concerning stripes and superconductivity in the cuprates. ## 2 NATURE OF THE SPIN AND CHARGE DENSITY MODULATIONS There have been questions raised concerning the interpretation of the observed superlattice peaks that suggest some confusion over the distinction between scattering from a 1D system and that from a 2- or 3D system with a 1D modulation. In order to clarify things, first consider a line of atoms with a spacing $`a`$. In reciprocal space, the scattering from such a 1D system consists of constant-intensity sheets separated by $`2\pi /a`$. 
In contrast, consider a Bravais lattice of atoms with positions $`𝐑_j`$, and suppose that the positions undergo a small sinusoidal modulation of the form $$𝐮_j=𝐀\mathrm{sin}(𝐠\cdot 𝐑_j+\varphi ),$$ (1) where 𝐀 is the amplitude, 𝐠 is the modulation wave vector, and $`\varphi `$ is an arbitrary phase shift that we will ignore. Overhauser has shown that the scattering from such a system is given by $$I(𝐐)=\sum _{𝐆,n}J_n^2(𝐐\cdot 𝐀)\delta (𝐐-𝐆-n𝐠).$$ (2) When $`n=0`$ the scattering corresponds to fundamental Bragg peaks with $$I(𝐆)\propto 1-\frac{1}{2}(𝐐\cdot 𝐀)^2\approx e^{-(𝐐\cdot 𝐀)^2/2},$$ (3) for $`𝐐\cdot 𝐀\ll 1`$. The modulation causes an intensity reduction with a form similar to a Debye-Waller factor. For $`n=1`$, one finds superlattice peaks split about each reciprocal lattice vector 𝐆 by 𝐠, with $$I(𝐆+𝐠)\propto \frac{1}{4}(𝐐\cdot 𝐀)^2.$$ (4) Higher-order peaks will also occur, but they are extremely weak for small $`𝐐\cdot 𝐀`$. For example, $`I(𝐆+2𝐠)\approx \frac{1}{4}I^2(𝐆+𝐠)`$. The configuration of superlattice peaks observed in stripe-ordered La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> has been reviewed elsewhere. Briefly, there are two sets of superlattice peaks, which are most easily described in terms of a unit cell with $`a\approx 3.8`$ Å. One set of peaks is split about the antiferromagnetic wave vector by an amount $`ϵ\times 2\pi /a`$ along the $`[100]`$ and $`[010]`$ directions, indicating antiphase antiferromagnetic domains. A second set of peaks occurs about nuclear Bragg peaks, split by $`2ϵ\times 2\pi /a`$, indicative of charge order. Given that we have peaks split in two directions, there are two possible interpretations: either we are averaging over domains, each of which has a single modulation direction, or the two modulations are superimposed in each region of the sample. The simple stripe picture is based on the former model. Can we rule out the latter? If there is a stripe grid, then the phase of the antiferromagnetic domains must be modulated in two directions. The arrangement of domains and their relative phase factors forms a checker-board structure, similar to a simple Néel antiferromagnet. The unit cell of such a structure has its axes rotated by $`\pi /4`$ relative to the modulation directions, and it has an area twice that of a single domain. This means that in reciprocal space, the first superlattice peaks should be rotated by $`\pi /4`$ relative to the modulation wave vectors. Experimentally, we have checked for magnetic peaks split along $`[110]`$ and $`[1\overline{1}0]`$, and have found nothing. (In principle, a stripe grid should result in a square lattice of superlattice peaks, so that there should be peaks in these directions regardless of the modulation directions; however, they might be anomalously weak.) Hence, using the grid interpretation, the peaks split along $`[100]`$ and $`[010]`$ imply stripe modulations in real space along $`[110]`$ and $`[1\overline{1}0]`$ (i.e., diagonal stripes). Unlike the magnetic peaks, the charge-order peaks from such a grid of diagonal stripes should not be rotated. The first charge-order peaks should appear at $`(ϵ,ϵ,0)`$, so that the peaks we have observed at $`(0,2ϵ,0)`$ would involve the sum of the two modulation wave vectors. To test this possibility, we performed neutron scattering measurements on the PONTA (5G) triple-axis spectrometer at the JRR-3 reactor at JAERI. The $`x=0.12`$ Nd-doped crystal characterized previously was used, and elastic scans in the appropriate directions were performed (using 14.7-meV neutrons and relatively open collimation) at two temperatures: 7 K and 65 K. 
We subtracted the data at 65 K from the 7 K measurement in order to isolate any signal that might appear only at low temperature. The results are shown in Fig. 1. The peak found in the scan through $`(0,2ϵ,0)`$ is consistent with previous work; however, only noise is present in the scan through $`(ϵ,ϵ,0)`$. Thus, a 2D grid interpretation appears to be incompatible with experiment. ## 3 STATIC MODULATIONS IN Zn-DOPED La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> It is of interest to search for evidence of charge-stripe order in other cuprate systems. So far, charge-order superlattice peaks have only been observed in Nd-doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, where the stripes are pinned by the low-temperature-tetragonal lattice structure. One obvious candidate system is Zn-doped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>. Elastic magnetic peaks have been observed by Hirota et al. in a superconducting crystal with $`x=0.14`$. With the intention of looking for charge order, a single crystal ($`0.45`$ cm<sup>3</sup>) of La<sub>1.88</sub>Sr<sub>0.12</sub>Cu<sub>0.98</sub>Zn<sub>0.02</sub>O<sub>4</sub> was grown with an infrared image furnace at the University of Tokyo. Again, we made use of the PONTA (5G) triple-axis spectrometer, with an incident energy of 14.7 meV and a pyrolytic graphite filter before the sample. Relatively tight collimation was used to measure the lattice parameters, yielding $`a=5.2657`$ Å and $`b=5.3042`$ Å ($`b-a=0.0385`$ Å) at 40 K. Although the crystal is orthorhombic at low temperature, we chose to work in tetragonal coordinates ($`a=b=3.74`$ Å) to search for elastic magnetic peaks. Opening the horizontal collimations to $`40^{\prime }`$-$`40^{\prime }`$-$`80^{\prime }`$-$`80^{\prime }`$, we scanned along $`𝐐=(\frac{1}{2},\frac{1}{2}+\xi ,0)`$ and found peaks at $`\xi =\pm ϵ`$ with $`ϵ=0.121`$. An example is shown in Fig. 2. The peak width is roughly 40% greater than that found under the same conditions in La<sub>1.48</sub>Nd<sub>0.4</sub>Sr<sub>0.12</sub>CuO<sub>4</sub>, with no correction for resolution. The temperature dependence of the peak intensity is presented in Fig. 3. The disordering temperature of $`\sim 20`$ K is intermediate with respect to those ($`T_m\approx 30`$ K and 17 K) found in crystals of La<sub>1.88</sub>Sr<sub>0.12</sub>Cu<sub>1-y</sub>Zn<sub>y</sub>O<sub>4</sub> with $`y=0`$ and 0.03 (respectively) by Kimura et al. Initial attempts to observe charge-order peaks were unsuccessful. $`\mu `$SR and NMR studies indicate that Zn suppresses superconductivity locally, resulting in electronic inhomogeneity. One might expect that if Zn serves to pin stripes, this effect would also be inhomogeneous. Hence, it is of interest to compare the magnetic peak intensity with that found in La<sub>1.48</sub>Nd<sub>0.4</sub>Sr<sub>0.12</sub>CuO<sub>4</sub>, where $`\mu `$SR has shown the magnetic order to be relatively uniform. We measured the same magnetic peaks in the latter compound under identical conditions, except that we worked at a temperature of 7 K in order to avoid any significant contribution from the Nd moments. The relative crystal volumes were determined by phonon measurements. Normalizing by volume, and assuming no substantial difference in the $`l`$ dependence of the scattering, the magnetic intensity in the Zn-doped crystal was found to be just $`(22\pm 6)\%`$ of that in the Nd-doped crystal. If this represents the volume fraction showing stripe order, then it is not surprising that charge-order peaks were not observed. 
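As a footnote to Sect. 2, the intensity hierarchy of Eqs. (2)–(4) is easy to verify numerically. A minimal sketch (the value of 𝐐⋅𝐀 is purely illustrative, not fitted to any data; SciPy's `scipy.special.jv` provides the Bessel functions):

```python
# Check the small-amplitude limits of the modulated-lattice intensities:
# the n-th order satellite of Eq. (2) carries weight J_n^2(Q.A).
from scipy.special import jv

qa = 0.3                      # illustrative Q.A << 1
I0 = jv(0, qa) ** 2           # fundamental peak, Eq. (3): ~ 1 - (Q.A)^2/2
I1 = jv(1, qa) ** 2           # first-order satellite, Eq. (4): ~ (Q.A)^2/4
I2 = jv(2, qa) ** 2           # second-order satellite

print(f"I(G)    = {I0:.5f}   vs 1-(Q.A)^2/2 = {1 - qa**2 / 2:.5f}")
print(f"I(G+g)  = {I1:.5f}   vs (Q.A)^2/4   = {qa**2 / 4:.5f}")
print(f"I(G+2g) = {I2:.6f}  vs I(G+g)^2/4  = {I1**2 / 4:.6f}")
```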
## 4 OPEN QUESTIONS While there is substantial evidence for stripe correlations in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, the issue of whether charge stripes are common to all superconducting cuprates remains controversial. The recent observations of incommensurate magnetic scattering in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.6</sub> provide an important connection. One would also like to see evidence of related charge correlations, but this is more difficult. Mook and coworkers have made some progress in this direction using a special energy-integrated technique. It may also be necessary to investigate phonon anomalies, such as the high-energy longitudinal optical branch in La<sub>1.85</sub>Sr<sub>0.15</sub>CuO<sub>4</sub> that has been studied by Egami and coworkers. Another issue in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> concerns the so-called resonance peak. The resonance peak is centered on the antiferromagnetic wave vector, and the energy at which it is centered increases with $`x`$. Bourges has noted that the ratio of the resonance-peak energy to the superconducting transition temperature, $`T_c`$, is roughly constant. In the BCS model for superconductivity, the ratio of the superconducting gap to $`T_c`$ is also a constant. This similarity might lead one to suspect a connection between the resonance-peak energy and the superconducting gap. However, measurements of the gap in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> by tunneling and photoemission indicate that the superconducting gap increases while $`T_c`$ decreases on the underdoped side. While similar measurements are not yet available for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>, infrared conductivity studies of the latter system suggest that the size of the gap does not decrease as $`x`$ is reduced from 1. What is the possible relationship between the resonance peak and stripe correlations? So far, we have discussed only hole-doped superconductors. Might stripes be relevant to electron-doped superconductors? Moving beyond the cuprates, charge stripes are already known to be important in nickelates and certain manganates. Do stripes occur in other transition-metal oxide systems? Clearly, there is a great deal of work left to do, and neutron scattering will be a prominent tool in this effort. ## 5 ACKNOWLEDGMENT Work at Brookhaven is supported by Contract No. DE-AC02-98CH10886, Division of Materials Sciences, U.S. Department of Energy.
# SO(6)-Generalized Pseudogap Model of the Cuprates ## Abstract The smooth evolution of the tunneling gap of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> with doping, from a pseudogap state in the underdoped cuprates to a superconducting state at optimal and overdoping, reflects an underlying SO(6) instability structure of the $`(\pi ,0)`$ saddle points. The pseudogap is probably not associated with superconductivity, but is related to competing nesting instabilities, which are responsible for the stripe phases. We earlier introduced a simple Ansatz for this competition in terms of a pinned Balseiro-Falicov (pBF) model of competing charge density wave and (s-wave) superconductivity. This model gives a good description of the phase diagram and the tunneling and photoemission spectra. Here, we briefly review these results, and discuss some recent developments: experimental evidence for a non-superconducting component to the pseudogap, and SO(6) generalizations of the pBF model, including flux phase and d-wave superconductivity. Recent photoemission [Gp0, Gp2] and tunneling [tu1, tu3] studies in underdoped cuprates find a remarkably smooth evolution of the pseudogap into the superconducting gap as doping increases. This has led to the suggestion that the pseudogap is caused by superconducting fluctuations or precursor pairing [Rand]. We suggest alternatively that the pseudogap represents a competing ordered state closely related to the stripe phases, with the smooth evolution due to an underlying SO(6) symmetry of the instabilities of the Van Hove singularity (VHS). In this picture, the stripe phases represent a nanoscale phase separation between a magnetic (spin-density wave or flux phase) instability at half filling and a charge-density wave (CDW) near optimal doping [Pstr]. We have introduced a simple Ansatz, the pinned Balseiro-Falicov (pBF) model [MKK, BFal], which captures the essential features of the stripe-superconductivity competition. Pseudogap Phase Diagram: By comparing simultaneous measurements [DCN] of the photoemission gap $`\mathrm{\Delta }`$ with the pseudogap onset temperature $`T^{*}`$, we find an approximately constant ratio $`2\mathrm{\Delta }(0)/k_BT^{*}\approx 8`$, which allows us to plot the Bi-2212 pseudogap phase diagram as $`T^{*}`$ vs $`x`$, providing a direct comparison with transport-derived pseudogaps in LSCO and YBCO [BatT] (Fig. 1a). Remarkably, all three materials scale onto a single, universal phase diagram, the scaling involving only a shift of the x-axes relative to LSCO. Such a shift is, however, not consistent with a universal scaling of the superconducting $`T_c`$’s – optimal T<sub>c</sub> falls at a different $`x`$ for each material (parabolic curves in Fig. 1a,b). On the other hand, the Uemura plots [Uem] also find that optimal T<sub>c</sub> falls at very different values of $`n/m`$ for different cuprates. The simple assumption [MG] that $`n/m\propto x`$ (with the constant of proportionality fixed by the LSCO data) unifies the Uemura plot (symbols in Fig. 1b) with the pseudogap scaling of $`T_c`$ (curves, Fig. 1b). This strongly suggests that the scaling for YBCO and Bi-2212 merely converts the data to the correct value of $`x`$. The resulting phase diagram can be well fit by the pBF model, Figure 1c, although the ratio $`2\mathrm{\Delta }/k_BT^{*}`$ is 4.1, somewhat lower than experiment. SO(6): The group structure of the model should be thought of not as a symmetry group, but more in a renormalization group sense, as in the one-dimensional metal g-ology. 
(The group structure of g-ology has been discussed by Solomon and Birman [SoBir].) Due to the logarithmic divergence of the density of states near a VHS, the Fermi surface almost reduces to two points – the VHS’s at $`(\pi ,0)`$ and $`(0,\pi )`$. The possible instabilities of the model have an underlying SO(6) symmetry [SO6], but which instabilities are observed depends sensitively on the form of the coupling constants – corresponding to the g’s of g-ology. There are fundamentally two classes of instability – nesting instabilities, which couple the two VHS’s, and pairing instabilities, which are intra-VHS. The SO(6) symmetry of the model is most clearly manifested in the equation for the total gap at $`(\pi ,0)`$: $$\mathrm{\Delta }_t=\sqrt{\sum _i\mathrm{\Delta }_i^2},$$ (1) where the $`\mathrm{\Delta }_i`$ are the individual gaps associated with each instability. (Note the g-ology flavor of this result: there is no underlying symmetry which says that all the $`\mathrm{\Delta }_i`$’s are equal.) In this case, the pBF model amounts to the replacement $`\mathrm{\Delta }_{SDW}^2+\mathrm{\Delta }_{CDW}^2\to \mathrm{\Delta }_p^2`$, where $`\mathrm{\Delta }_p`$ is the net pseudogap, which has a form similar to a CDW gap. Equation 1 shows that the smooth evolution of the tunneling gap with doping is consistent with a crossover from magnetic behavior near half filling to superconducting behavior at optimal doping. Photoemission and Tunneling Spectra: Fig. 2 compares the energy dispersion (a) and the tunneling spectra (b) near the Fermi level, in the underdoped regime. It can be seen that structure in the tunneling DOS is directly related to features in the dispersion of the gapped bands. Thus, peak A is associated with the dispersion at $`(\pi ,0)`$ – the VHS peak split by the combined CDW-superconducting gap. Peak B is due to the superconducting gap away from $`(\pi ,0)`$ – particularly near $`(\pi /2,\pi /2)`$. Feature C is associated with the CDW gap $`G_k`$ near $`(\pi /2,\pi /2)`$. An equation similar to Eq. 1 arises in the theory of Bilbro and McMillan [BM] – also a (three-dimensional) VHS theory – and was postulated to explain thermodynamic data on the pseudogap [Lor]. What is new here is that the vector addition is found to hold only near the saddle points, while the gaps split near $`(\pi /2,\pi /2)`$, and only the superconducting gap remains near the Fermi surface there. This is consistent with Panagopoulos and Xiang [PX], who found that, near the gap zero at $`(\pi /2,\pi /2)`$, the slope of the gap scales with $`T_c`$, and not with the gap near $`(\pi ,0)`$. Similarly, Mourachkine [Mou] has found evidence for two tunneling gaps, very similar to features A and B of Fig. 2b; as predicted, feature B scales with T<sub>c</sub>, and not with the pseudogap, feature A. Feature B arises from the superconducting gap at the hole pockets, as can be seen from the energy dispersion at the B gap energy, curves b,d in Fig. 3. The phase diagram is most naturally fit by assuming that the pseudogap represents a nesting instability, and superconductivity a pairing instability. A more precise determination will require careful experimentation. Thus, Fig. 2c compares the tunneling spectra for two models, the original pBF model in terms of a CDW and an s-wave superconductor, and a modified version involving flux phase – d-wave superconductivity competition. The resulting spectra are, as expected, nearly identical. Close inspection shows differences near $`(\pi ,\pi )`$, where the gap is purely due to the pairing instability.
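To make the vector addition of Eq. (1) concrete, the following is a minimal numerical cartoon of our own, not the pBF calculation itself: a two-dimensional tight-binding band is gapped BCS-style by the combined gap and the density of states is histogrammed, so that the coherence peaks land at the total gap rather than at either component. The hopping and the two gap values are arbitrary illustrative choices.

```python
import numpy as np

# Toy illustration of the vector gap addition, Eq. (1): a 2D tight-binding band
# gapped BCS-style by Delta_t = sqrt(Delta_p^2 + Delta_s^2).  This is a cartoon
# of the idea, not the pBF model; t, Delta_p and Delta_s are arbitrary values.
t, Delta_p, Delta_s = 1.0, 0.30, 0.15
Delta_t = np.hypot(Delta_p, Delta_s)

k = np.linspace(-np.pi, np.pi, 400)
kx, ky = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))   # VHS of this band sits at E = 0

E = np.sqrt(eps**2 + Delta_t**2)             # quasiparticle branches at +/- E
energies = np.concatenate([E.ravel(), -E.ravel()])
dos, edges = np.histogram(energies, bins=200, range=(-2.0, 2.0), density=True)

# The coherence peaks sit at +/- Delta_t, not at Delta_p or Delta_s separately.
peak_bin = np.argmax(np.where(edges[:-1] > 0, dos, 0.0))
print(Delta_t, edges[peak_bin])              # both ~0.335
```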
Van Hove Pinning: An essential ingredient of the model is that the gap remains centered at $`(\pi ,0)`$ over the full doping range from half filling to optimal doping – that is, that the VHS is pinned at the Fermi level. This remarkable consequence of strong correlation effects was first pointed out in 1989 [RM3], and has been rederived numerous times since then [Surv1]. We have noted that this pinning should be measurable, both in tunneling and in photoemission, and a preliminary analysis of the data appears to confirm the prediction [MKK]. Evidence for a Nonsuperconducting Pseudogap: The photoemission and tunneling spectra near optimal doping have a very characteristic form below $`T_c`$. There is a sharp quasiparticle peak at an energy $`\mathrm{\Delta }`$ above (or below) the Fermi level, with a pronounced dip near 2$`\mathrm{\Delta }`$, followed by a broad hump at higher energies. The dip is most probably associated with reduced quasiparticle scattering within the superconducting state, which terminates when pairbreaking sets in at energies above twice the superconducting gap, 2$`\mathrm{\Delta }_s`$ [CoCo]. Recently, Miyakawa et al. [tu7] showed how the tunneling gap in Bi-2212 evolves with doping, scaling a series of tunneling curves to the respective $`\mathrm{\Delta }`$’s. These curves show significant deviations from scaling of the dip feature with the tunneling gap, which suggest that in the underdoped regime $`\mathrm{\Delta }_s<\mathrm{\Delta }`$. By assuming that the dip scales exactly with $`\mathrm{\Delta }_s`$, it is possible to extract the doping dependence of $`\mathrm{\Delta }_s`$, and correspondingly of $`\mathrm{\Delta }_p`$, the non-superconducting component of the gap [MK2], Fig. 4. Shown also is recent terahertz data from Corson et al. [COre], who extract a bare superconducting transition temperature $`T_{c0}>T_c`$ from a Berezinskii-Kosterlitz-Thouless analysis of superconducting fluctuations. (The effective $`T_{c0}`$ is found to be strongly frequency dependent; the squares in Fig. 4 are an estimate based on the highest frequency data, 600 GHz.) Note that $`T_{c0}`$ is considerably smaller than the pseudogap onset and has very different scaling, actually decreasing with increased underdoping. Indeed, this $`T_{c0}`$ is consistent with the values of $`\mathrm{\Delta }_s`$ estimated in Ref. [MK2], with the same ratio of $`\mathrm{\Delta }`$ to $`T_c`$ as found for the total gap in the overdoped regime (where the nonsuperconducting component is absent). The data suggest a rather modest pair-breaking effect of the stripes, reducing the optimal $`T_c`$ from $`125`$ K to $`95`$ K. Conclusions: The simple pinned Van Hove Ansatz for the striped pseudogap phase in the cuprates provides a detailed explanation for the phase diagram and the experimental tunneling and photoemission spectra. In particular: (1) The fact that the tunneling peaks are experimentally found to coincide with the $`(\pi ,0)`$ photoemission dispersion [tu3] shows that the $`(\pi ,0)`$ dispersion has a gap – that is, that the pseudogap is associated with VHS nesting [Pstr]. (2) The interpretation is self-consistent, in that the experiments seem to find that the Fermi level is pinned near the VHS in the underdoped regime [MKK]. (3) The tunneling gap has a characteristic asymmetry which vanishes at optimal doping; this is evidence that optimal doping is that point at which the Fermi level exactly coincides with the VHS [MKK].
(4) While there are superconducting fluctuations in the underdoped regime, a large fraction of the pseudogap has a non-superconducting origin. (5) Portions of the tunneling spectra associated with the Fermi surface near $`(\pi /2,\pi /2)`$ show distinct scaling with $`T_c`$, not $`T^{\ast }`$. (6) The pseudogap phase diagrams for Bi-2212, LSCO, and YBCO appear to be universal and consistent with the Uemura plot, while optimal doping $`x_c`$ varies from compound to compound. Our interest in the tunneling studies was sparked by conversations with A.M. Gabovich. We would like to thank NATO for enabling him to visit us. MTV’s work was supported by DOE Grant DE-FG02-85ER40233. Publication 757 of the Barnett Institute. † On leave of absence from Inst. of Atomic Physics, Bucharest, Romania
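The gap decomposition used in the nonsuperconducting-pseudogap discussion above is compact enough to state in two lines of code. A hypothetical bookkeeping example (the numbers are invented for illustration and are not the Miyakawa et al. or Corson et al. data): if the dip is assumed to track 2Δ_s while the coherence peak gives the total gap Δ, then Eq. (1) yields Δ_p.

```python
import numpy as np

# Hypothetical illustration of extracting the nonsuperconducting gap component:
# assume the dip energy marks 2*Delta_s while the peak marks the total Delta,
# then Eq. (1) gives Delta_p.  The numbers below are invented for illustration.
Delta_total = 44.0   # meV, peak position (made-up underdoped value)
E_dip = 56.0         # meV, dip position -> Delta_s = E_dip / 2

Delta_s = E_dip / 2.0
Delta_p = np.sqrt(Delta_total**2 - Delta_s**2)
print(Delta_s, Delta_p)   # 28.0, ~33.9 meV
```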
no-problem/9903/solv-int9903004.html
ar5iv
text
# Finite genus solutions for the Ablowitz-Ladik hierarchy. ## 1 Introduction. The problem of constructing the quasiperiodic solutions (QPS) is one of the most challenging problems of the theory of integrable systems, and many mathematicians and physicists have spent much effort to obtain the QPS for almost all equations that are known to be integrable. The Ablowitz-Ladik hierarchy (ALH), which has been introduced in , is not an exception. So, e.g., one should mention the works by N. N. Bogolyubov (Jr) et al. and by S. Ahmad and A. R. Chowdhury devoted to the discrete nonlinear Schrödinger equation (DNLSE) and the discrete modified Korteweg-de Vries equation (DMKdV), which are the best studied equations of the ALH. These authors studied this problem in the framework of the inverse scattering transform (IST). Another, the so-called algebraic-geometrical, approach has been used by Miller, Ercolani, Krichever and Levermore, who, in , considered the complex version of the DNLSE and obtained the Baker-Akhiezer function and the QPS corresponding to finite genus Riemann surfaces. This work provides an almost exhaustive solution of the problem of the finite genus QPS, but its results need some further simplification to be useful for practical purposes, especially if one wants to extend them to the higher equations of the ALH. In this work I will try to avoid the algebro-geometrical language and will use a more direct (and simpler) strategy. As has been established in , each finite genus QPS of the DNLSE can be presented as a quotient of $`\theta `$-functions of some arguments multiplied by an exponential of some phase, all of them being linear functions of the coordinates (the same is true in the case of the DMKdV as well as of all other equations of the hierarchy). Thus, since we know the structure of the solutions, all we have to do to derive them is to calculate some number of constant parameters. So it is desirable to develop a method which will enable us to obtain these constants (and hence the solutions) directly, without using the (sometimes rather complicated) technique of the theory of functions and differentials on hyperelliptic Riemann surfaces. It turns out that this can be done. Moreover, this can be done not only for the DNLSE or the DMKdV but, in principle, for all equations of the hierarchy simultaneously. Namely this is the main question of this paper. The key point is that the $`\theta `$-functions of the finite genus Riemann surfaces (of which the finite genus QPS are built up) satisfy an algebraic relation, the so-called Fay trisecant formula , which can be used to obtain an infinite number of differential identities, which, as will be shown below, are closely related to the ALH and can be used to obtain the QPS we are looking for. Such an approach also demonstrates a feature of the ALH that is, to my knowledge, new (and namely this was one of the main motives for writing this paper): the equations of the ALH naturally appear when flows over Riemann surfaces are considered (I will return to this question below). The plan of the paper is as follows. After presenting some basic facts on the ALH (section 2) I will discuss the Fay formula and its differential consequences (section 3). These results will be used to obtain the finite genus QPS for the ALH (section 4). ## 2 Ablowitz-Ladik hierarchy The ALH is an infinite set of integrable differential-difference equations, which has been introduced in .
All equations of the ALH can be presented as the compatibility condition for the linear system $`\mathrm{\Psi }_{n+1}`$ $`=`$ $`U_n\mathrm{\Psi }_n`$ (2.1) $`{\displaystyle \frac{\partial }{\partial z_j}}\mathrm{\Psi }_n`$ $`=`$ $`V_n^{(j)}\mathrm{\Psi }_n,j=\pm 1,\pm 2,\mathrm{}`$ (2.2) where $`\mathrm{\Psi }_n`$ is a $`2`$-column, $`U_n`$ and $`V_n`$ are $`2\times 2`$ matrices with $`U_n`$ being given by $$U_n=U_n(\lambda )=\left(\begin{array}{cc}\lambda & r_n\\ q_n& \lambda ^{-1}\end{array}\right)$$ (2.3) (here $`\lambda `$ is the auxiliary (spectral) constant parameter) and the matrices $`V_n^{(j)}`$ are polynomials in $`\lambda `$, $`\lambda ^{-1}`$. The ALH can be split in a natural way into two subsystems (subhierarchies). One of them corresponds to the case when $`V_n^{(j)},j=1,2,\mathrm{}`$ are $`j`$th order polynomials in $`\lambda ^{-1}`$ (the ’positive’ subhierarchy). Its simplest equations are $`{\displaystyle \frac{\partial q_n}{\partial z_1}}`$ $`=`$ $`ip_nq_{n+1}`$ (2.4) $`{\displaystyle \frac{\partial r_n}{\partial z_1}}`$ $`=`$ $`ip_nr_{n-1}`$ (2.5) where $$p_n=1-q_nr_n$$ (2.6) The ’negative’ subhierarchy is built up of the $`V`$-matrices being polynomials in $`\lambda `$. Its simplest equations are $`{\displaystyle \frac{\partial q_n}{\partial \overline{z}_1}}`$ $`=`$ $`ip_nq_{n-1}`$ (2.7) $`{\displaystyle \frac{\partial r_n}{\partial \overline{z}_1}}`$ $`=`$ $`ip_nr_{n+1}`$ (2.8) (I use the notation $`\overline{z}_j=z_{-j},j=1,2,\mathrm{}`$). It has been shown in that the ALH can be presented in the form of the functional-difference equations: $`q_n(z,\overline{z})-q_n(z-i[\xi ],\overline{z})`$ $`=`$ $`\xi \left[1-q_n(z,\overline{z})r_n(z-i[\xi ],\overline{z})\right]q_{n+1}(z,\overline{z})`$ (2.9) $`r_n(z,\overline{z})-r_n(z+i[\xi ],\overline{z})`$ $`=`$ $`\xi \left[1-q_n(z+i[\xi ],\overline{z})r_n(z,\overline{z})\right]r_{n-1}(z,\overline{z})`$ (2.10) for the ’positive’ subhierarchy, and $`q_n(z,\overline{z})-q_n(z,\overline{z}-i[\xi ^{-1}])`$ $`=`$ $`\xi ^{-1}\left[1-q_n(z,\overline{z})r_n(z,\overline{z}-i[\xi ^{-1}])\right]q_{n-1}(z,\overline{z})`$ (2.11) $`r_n(z,\overline{z})-r_n(z,\overline{z}+i[\xi ^{-1}])`$ $`=`$ $`\xi ^{-1}\left[1-q_n(z,\overline{z}+i[\xi ^{-1}])r_n(z,\overline{z})\right]r_{n+1}(z,\overline{z})`$ (2.12) for the ’negative’ one. Here the designations $$f(z,\overline{z})=f(z_1,z_2,z_3,\mathrm{};\overline{z}_1,\overline{z}_2,\overline{z}_3,\mathrm{})$$ (2.13) and $`f(z\pm [\xi ],\overline{z})`$ $`=`$ $`f(z_1\pm \xi ,z_2\pm \xi ^2/2,z_3\pm \xi ^3/3,\mathrm{};\overline{z}_1,\overline{z}_2,\overline{z}_3,\mathrm{})`$ (2.14) $`f(z,\overline{z}\pm [\xi ^{-1}])`$ $`=`$ $`f(z_1,z_2,z_3,\mathrm{};\overline{z}_1\pm \xi ^{-1},\overline{z}_2\pm \xi ^{-2}/2,\overline{z}_3\pm \xi ^{-3}/3,\mathrm{})`$ (2.15) are used. Expanding equations (2.9), (2.10) in power series in $`\xi `$ one can obtain all equations of the ’positive’ subhierarchy. Analogously, expanding equations (2.11), (2.12) in power series in $`\xi ^{-1}`$ one can obtain all equations of the ’negative’ one.
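As a quick sanity check of the lattice flows (2.4)-(2.6), the sketch below integrates the simplest 'positive' flow on a periodic lattice and verifies that the product of the $`p_n`$'s is a constant of motion, which follows from the equations by a shift of the summation index. Since several signs in the typeset equations are ambiguous, the relative sign between the two equations is chosen here so that this conservation law holds; that choice, and all parameter values, are assumptions of this illustration, which is not a reference implementation.

```python
import numpy as np

# Minimal sketch: integrate dq_n/dz = i*p_n*q_{n+1}, dr_n/dz = -i*p_n*r_{n-1},
# p_n = 1 - q_n*r_n, on a periodic lattice; the relative sign is chosen so that
# prod_n p_n is conserved (the typeset signs are ambiguous).
def rhs(q, r):
    p = 1.0 - q * r
    dq = 1j * p * np.roll(q, -1)    # q_{n+1}
    dr = -1j * p * np.roll(r, 1)    # r_{n-1}
    return dq, dr

def rk4_step(q, r, h):
    k1q, k1r = rhs(q, r)
    k2q, k2r = rhs(q + 0.5*h*k1q, r + 0.5*h*k1r)
    k3q, k3r = rhs(q + 0.5*h*k2q, r + 0.5*h*k2r)
    k4q, k4r = rhs(q + h*k3q, r + h*k3r)
    return (q + h*(k1q + 2*k2q + 2*k3q + k4q)/6,
            r + h*(k1r + 2*k2r + 2*k3r + k4r)/6)

rng = np.random.default_rng(0)
N = 16
q = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

I0 = np.prod(1.0 - q * r)
for _ in range(2000):
    q, r = rk4_step(q, r, 1e-3)
print(abs(np.prod(1.0 - q * r) - I0))   # ~1e-12: prod p_n is conserved
```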
In what follows I will also use the tau-functions of the ALH, $`\sigma _n`$, $`\rho _n`$ and $`\tau _n`$, which are defined by $$q_n=\frac{\sigma _n}{\tau _n},r_n=\frac{\rho _n}{\tau _n}$$ (2.16) and $$\tau _{n-1}\tau _{n+1}=\tau _n^2-\sigma _n\rho _n$$ (2.17) The functional representation of the ALH in terms of the tau-functions can be written as $`\tau _n(z)\sigma _n(z+i[\xi ])-\sigma _n(z)\tau _n(z+i[\xi ])`$ $`=`$ $`\xi \tau _{n-1}(z)\sigma _{n+1}(z+i[\xi ])`$ (2.18) $`\rho _n(z)\tau _n(z+i[\xi ])-\tau _n(z)\rho _n(z+i[\xi ])`$ $`=`$ $`\xi \rho _{n-1}(z)\tau _{n+1}(z+i[\xi ])`$ (2.19) $`\tau _n(z)\tau _n(z+i[\xi ])-\rho _n(z)\sigma _n(z+i[\xi ])`$ $`=`$ $`\tau _{n-1}(z)\tau _{n+1}(z+i[\xi ])`$ (2.20) (where the dependence on the $`\overline{z}_j`$’s is omitted) and $`\tau _n(\overline{z})\sigma _n(\overline{z}+i[\xi ^{-1}])-\sigma _n(\overline{z})\tau _n(\overline{z}+i[\xi ^{-1}])`$ $`=`$ $`\xi ^{-1}\tau _{n+1}(\overline{z})\sigma _{n-1}(\overline{z}+i[\xi ^{-1}])`$ (2.21) $`\rho _n(\overline{z})\tau _n(\overline{z}+i[\xi ^{-1}])-\tau _n(\overline{z})\rho _n(\overline{z}+i[\xi ^{-1}])`$ $`=`$ $`\xi ^{-1}\rho _{n+1}(\overline{z})\tau _{n-1}(\overline{z}+i[\xi ^{-1}])`$ (2.22) $`\tau _n(\overline{z})\tau _n(\overline{z}+i[\xi ^{-1}])-\rho _n(\overline{z})\sigma _n(\overline{z}+i[\xi ^{-1}])`$ $`=`$ $`\tau _{n+1}(\overline{z})\tau _{n-1}(\overline{z}+i[\xi ^{-1}])`$ (2.23) (where the dependence on the $`z_j`$’s is omitted). The key idea of the present work is to establish the relation between these equations and the famous Fay identity for the $`\theta `$-functions, which can be used to derive the finite-gap QPS of the ALH. ## 3 Fay’s identity. In this paper we will deal with the compact Riemann surface $`X`$ of genus $`g`$ corresponding to the hyperelliptic curve $$s^2=𝒫_{2g+2}(\xi )$$ (3.1) where $`𝒫_{2g+2}(\xi )`$ is a polynomial of degree $`2g+2`$ without multiple roots. In the framework of the IST such curves appear in the analysis of the scattering problem (2.1). For example, in the case of the periodic conditions $$q_{n+g+1}=q_n,r_{n+g+1}=r_n$$ (3.2) the polynomial $`𝒫_{2g+2}(\xi )`$ is defined by $$𝒫_{2g+2}(\lambda ^2)=\lambda ^{2(g+1)}\left\{\left[\mathrm{tr}T_n(\lambda )\right]^2-4\mathrm{det}T_n(\lambda )\right\}$$ (3.3) where $`T_n(\lambda )`$ is the transfer matrix of the scattering problem (2.1), $$T_n(\lambda )=U_{n+g}(\lambda )\cdots U_n(\lambda )$$ (3.4) (it can be straightforwardly shown that the right-hand side of (3.3) under the restriction (3.2) does not depend on the index $`n`$). Topologically, $`X`$ is a sphere with $`g`$ handles.
One can choose a set of $`2g`$ closed contours (cycles) $`\{a_i,b_i\}_{i=1,\mathrm{},g}`$ with the intersection indices $$a_i\circ a_j=b_i\circ b_j=0,a_i\circ b_j=\delta _{ij},i,j=1,\mathrm{},g$$ (3.5) and find $`g`$ independent holomorphic differentials, say ones given locally by $$\stackrel{~}{\omega }_k=\frac{\xi ^{k-1}d\xi }{\sqrt{𝒫_{2g+2}(\xi )}},k=1,\mathrm{},g$$ (3.6) which can be used to construct the canonical basis of the holomorphic 1-forms $$\omega _k=\sum _{l=1}^{g}C_{k,l}\stackrel{~}{\omega }_l$$ (3.7) where the $`\omega _k`$’s satisfy the normalization conditions $$\oint _{a_i}\omega _k=\delta _{ik}$$ (3.8) Then, the matrix of the $`b`$-periods, $$\mathrm{\Omega }_{ik}=\oint _{b_i}\omega _k$$ (3.9) determines the so-called period lattice, $`L_\mathrm{\Omega }=\left\{𝒎+\mathrm{\Omega }𝒏,𝒎,𝒏\in \mathbb{Z}^g\right\}`$, the Jacobian of this surface $`\mathrm{Jac}(X)=\mathbb{C}^g/L_\mathrm{\Omega }`$ (a $`2g`$-torus) and the Abel mapping $`X\to \mathrm{Jac}(X)`$, $$P\mapsto \int _{P_0}^{P}𝝎$$ (3.10) where $`𝝎`$ is the $`g`$-vector of the 1-forms, $`𝝎=(\omega _1,\mathrm{},\omega _g)^T`$ and $`P_0`$ is some fixed point of $`X`$. A central object of the theory of the compact Riemann surfaces is the $`\theta `$-function, $`\theta (𝜻)=\theta (𝜻,\mathrm{\Omega })`$, $$\theta \left(𝜻\right)=\sum _{𝒏\in \mathbb{Z}^g}\mathrm{exp}\left\{\pi i𝒏\mathrm{\Omega }𝒏+2\pi i𝒏𝜻\right\}$$ (3.11) which is a quasiperiodic function on $`\mathbb{C}^g`$: $`\theta \left(𝜻+𝒏\right)`$ $`=`$ $`\theta \left(𝜻\right)`$ (3.12) $`\theta \left(𝜻+\mathrm{\Omega }𝒏\right)`$ $`=`$ $`\mathrm{exp}\left\{-\pi i𝒏\mathrm{\Omega }𝒏-2\pi i𝒏𝜻\right\}\theta \left(𝜻\right)`$ (3.13) for $`𝒏\in \mathbb{Z}^g`$. To simplify the following formulae I will use the designations $$\theta _B^A\left(𝜻\right)=\theta \left(𝜻+\int _B^A𝝎\right)$$ (3.14) and $$\widehat{\theta }_B^A=\theta \left[𝜹\right]\left(\int _B^A𝝎\right)$$ (3.15) Here $`\theta \left[𝒄\right]\left(𝜻\right)`$ is the so-called $`\theta `$-function with characteristics, $$\theta \left[𝒄\right]\left(𝜻\right)=\mathrm{exp}\left\{\pi i𝒂\mathrm{\Omega }𝒂+2\pi i𝒂\left(𝜻+𝒃\right)\right\}\theta \left(𝜻+\mathrm{\Omega }𝒂+𝒃\right),𝒄=(𝒂,𝒃)$$ (3.16) and $`𝜹=(𝜹^{},𝜹^{\prime \prime })\in \frac{1}{2}\mathbb{Z}^{2g}/\mathbb{Z}^{2g}`$ is a non-singular odd characteristic, $$\theta \left[𝜹\right]\left(\mathrm{𝟎}\right)=0,\mathrm{grad}_𝜻\theta \left[𝜹\right]\left(\mathrm{𝟎}\right)\ne \mathrm{𝟎}$$ (3.17) Function (3.15) is closely related to the prime form , $$E(P,Q)=\frac{\widehat{\theta }_Q^P}{\sqrt{\chi (P)}\sqrt{\chi (Q)}}$$ (3.18) where $`\chi `$ is given by $$\chi (P)=\sum _{i=1}^{g}\left(\frac{\partial }{\partial \zeta _i}\theta \left[𝜹\right]\right)\left(\mathrm{𝟎}\right)\omega _i(P)$$ (3.19) The prime form $`E(P,Q)`$ is skew-symmetric, $`E(P,Q)=-E(Q,P)`$, has a first-order zero along the diagonal $`P=Q`$ and is otherwise non-zero. Analogously, $$\widehat{\theta }_Q^P=-\widehat{\theta }_P^Q,\widehat{\theta }_P^P=0$$ (3.20) One of the most interesting results of the theory of the $`\theta `$-functions is the following identity for the $`\theta `$-functions associated with the finite-genus Riemann surfaces, the Fay trisecant identity: $$\widehat{\theta }_{P_3}^{P_1}\widehat{\theta }_{P_2}^{P_4}\theta _{P_2}^{P_1}\left(𝜻\right)\theta _{P_3}^{P_4}\left(𝜻\right)-\widehat{\theta }_{P_2}^{P_1}\widehat{\theta }_{P_3}^{P_4}\theta _{P_3}^{P_1}\left(𝜻\right)\theta _{P_2}^{P_4}\left(𝜻\right)=\widehat{\theta }_{P_3}^{P_2}\widehat{\theta }_{P_1}^{P_4}\theta \left(𝜻\right)\theta _{P_2P_3}^{P_1P_4}\left(𝜻\right)$$ (3.21) (here $`P_1,\mathrm{},P_4`$ are arbitrary points of $`X`$) and this formula will be the basis of the following considerations.
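Before the Fay identity is put to work, the basic quasiperiodicity (3.12)-(3.13) is easy to check numerically in the simplest case $`g=1`$, where $`\mathrm{\Omega }`$ is a single complex number with positive imaginary part. The sketch below is only an illustration of the definitions; the truncation range and the test values are arbitrary choices.

```python
import numpy as np

# Genus-1 sketch of the theta series (3.11) and its quasiperiodicity
# (3.12)-(3.13).  Omega is one complex number with Im(Omega) > 0; the
# truncation nmax and the test point z are arbitrary.
def theta(z, Omega, nmax=40):
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(1j*np.pi*Omega*n**2 + 2j*np.pi*n*z))

Omega = 0.3 + 1.1j
z = 0.17 - 0.05j

# (3.12): theta(z + 1) = theta(z)
print(abs(theta(z + 1, Omega) - theta(z, Omega)))
# (3.13): theta(z + Omega) = exp(-i*pi*Omega - 2i*pi*z) * theta(z)
lhs = theta(z + Omega, Omega)
rhs = np.exp(-1j*np.pi*Omega - 2j*np.pi*z) * theta(z, Omega)
print(abs(lhs - rhs))   # both differences ~1e-14
```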
## 4 Quasiperiodic solutions. It is already known that in the quasiperiodic case the tau-functions of the ALH are (up to some simple factors) $`\theta `$-functions of different arguments, and I am now going to present the Fay identity and some of its corollaries in a form similar to (2.18)–(2.20) and (2.21)–(2.23), which will enable us to obtain the finite-gap solutions of these functional equations, i.e. to obtain the finite-gap solutions of the ALH. Hereafter I will use the letters $`A`$, $`B`$, $`C`$, $`D`$ for the points of the Riemann surface which correspond to the points $`0`$ and $`\mathrm{\infty }`$ of the complex plane, $$A=\infty _+,D=\infty _{-},B=0_{-},C=0_+$$ (4.1) Since $`A,D`$ and $`B,C`$ are poles and zeroes of the meromorphic function $`\pi (P)`$, which is the projection of $`X`$ onto the extended complex plane $`\mathbb{C}P^1`$ sending a point $`P=(s,\xi )`$ into $`\xi `$, they satisfy, according to Abel’s theorem, the condition $`\int _{BC}^{AD}𝝎\in L_\mathrm{\Omega }`$. The integration paths in (3.10) can be chosen in such a way that $$\int _{BC}^{AD}𝝎=\mathrm{𝟎}$$ (4.2) (here zero stands for $`\mathrm{𝟎}`$ from $`\mathbb{C}^g`$, not from $`\mathrm{Jac}(X)`$) and in what follows I will take (4.2) to hold. Now I am going to use (3.21), thinking of three points from $`(P_1,P_2,P_3,P_4)`$ as constant (I will choose them from the set $`(A,B,C,D)`$) and of the fourth one (I will denote it by $`P`$) as variable. Setting $`(P_1,P_2,P_3,P_4)=(A,B,C,P)`$ one can rewrite (3.21) as $$\widehat{\theta }_C^B\widehat{\theta }_A^P\theta \left(𝜻\right)\theta _D^P\left(𝜻\right)+\widehat{\theta }_B^A\widehat{\theta }_C^P\theta _C^A\left(𝜻\right)\theta _B^P\left(𝜻\right)=\widehat{\theta }_C^A\widehat{\theta }_B^P\theta _B^A\left(𝜻\right)\theta _C^P\left(𝜻\right)$$ (4.3) This formula is the quasiperiodic analog of (2.18). Shifting the arguments of the $`\theta `$-functions, $`𝜻\to 𝜻+\int _A^C𝝎`$, one can obtain the equation which will be transformed below to (2.19): $$\widehat{\theta }_C^B\widehat{\theta }_A^P\theta _A^C\left(𝜻\right)\theta _B^P\left(𝜻\right)+\widehat{\theta }_B^A\widehat{\theta }_C^P\theta \left(𝜻\right)\theta _{AB}^{CP}\left(𝜻\right)=\widehat{\theta }_C^A\widehat{\theta }_B^P\theta _B^C\left(𝜻\right)\theta _A^P\left(𝜻\right)$$ (4.4) At last, replacing in (3.21) $`(P_1,P_2,P_3,P_4)`$ with $`(A,P,C,D)`$, using (4.2) and making the shift $`𝜻\to 𝜻+\int _A^C𝝎`$ one can write the identity $$\widehat{\theta }_D^A\widehat{\theta }_C^P\theta \left(𝜻\right)\theta _B^P\left(𝜻\right)-\widehat{\theta }_B^A\widehat{\theta }_A^P\theta _A^C\left(𝜻\right)\theta _D^P\left(𝜻\right)=\widehat{\theta }_C^A\widehat{\theta }_D^P\theta _B^A\left(𝜻\right)\theta _A^P\left(𝜻\right)$$ (4.5) which is a quasiperiodic analogue of (2.20). Our first goal is to present equations (4.3)–(4.5) in bilinear form.
To this end I will first shift the arguments of the $`\theta `$-functions: $`𝜻\to 𝜻_n`$, $$𝜻_n=𝜻+n\int _A^B𝝎$$ (4.6) Next, I will introduce the functions $`\sigma _n(P)`$, $`\rho _n(P)`$ and $`\tau _n(P)`$, $`\tau _n(P)`$ $`=`$ $`\alpha _n(P)\theta _B^P\left(𝜻_n\right)`$ (4.7) $`\sigma _n(P)`$ $`=`$ $`\beta _n(P)\theta _D^P\left(𝜻_n\right)`$ (4.8) $`\rho _n(P)`$ $`=`$ $`\gamma _n(P)\theta _{AB}^{CP}\left(𝜻_n\right)`$ (4.9) It is not difficult to verify that if one chooses the functions $`\alpha _n`$, $`\beta _n`$, $`\gamma _n`$ as follows $`\alpha _n(P)`$ $`=`$ $`\alpha _{}\mu ^{n^2/2}\mathrm{exp}\left\{n\phi _{DC}(P)\right\}`$ (4.10) $`\beta _n(P)`$ $`=`$ $`q_{}\epsilon ^n\mathrm{exp}\left\{\phi _{AC}(P)\right\}\alpha _n(P)`$ (4.11) $`\gamma _n(P)`$ $`=`$ $`r_{}\epsilon ^n\mathrm{exp}\left\{-\phi _{AC}(P)\right\}\alpha _n(P)`$ (4.12) where the functions $`\phi _{QR}`$ are defined in the vicinity of the point $`B`$ by $$\mathrm{exp}\left\{\phi _{QR}(P)-\phi _{QR}(B)\right\}=\frac{\widehat{\theta }_Q^P}{\widehat{\theta }_R^P}\frac{\widehat{\theta }_R^B}{\widehat{\theta }_Q^B},$$ (4.13) the constant $`\mu `$ is given by $$\mu =\frac{\left(\widehat{\theta }_C^A\right)^2}{\widehat{\theta }_D^A\widehat{\theta }_C^B}$$ (4.14) and $`\alpha _{}`$, $`q_{}`$, $`r_{}`$ and $`\epsilon `$ are arbitrary constants satisfying $$q_{}r_{}=\frac{\left(\widehat{\theta }_B^A\right)^2}{\widehat{\theta }_D^A\widehat{\theta }_C^B}$$ (4.15) then (4.3)–(4.5) can be rewritten in terms of the functions $`\sigma _n(P)`$, $`\rho _n(P)`$ and $`\tau _n(P)`$ as $`\tau _n(B)\sigma _n(P)-\sigma _n(B)\tau _n(P)`$ $`=`$ $`K(P)\tau _{n-1}(B)\sigma _{n+1}(P)`$ (4.16) $`\rho _n(B)\tau _n(P)-\tau _n(B)\rho _n(P)`$ $`=`$ $`K(P)\rho _{n-1}(B)\tau _{n+1}(P)`$ (4.17) $`\tau _n(B)\tau _n(P)-\rho _n(B)\sigma _n(P)`$ $`=`$ $`\tau _{n-1}(B)\tau _{n+1}(P)`$ (4.18) where $$K(P)=\frac{1}{\epsilon }\frac{\widehat{\theta }_D^A}{\widehat{\theta }_C^B}\frac{\widehat{\theta }_B^P\widehat{\theta }_C^P}{\widehat{\theta }_A^P\widehat{\theta }_D^P}$$ (4.19) Thus we have presented the Fay identities in a bilinear form similar to (2.18)–(2.20). What I have to do now is to introduce a $`z`$-dependence in such a way that a shift over the Riemann surface from the point $`B`$ to a point $`P`$ (which correspond to the points $`0`$ and $`\xi `$ of the complex plane) can be taken into account by the simultaneous shifts $`z_m\to z_m+i\xi ^m/m`$: $`f_n(B)`$ $`=`$ $`f_n(z)=f_n(z_1,z_2,z_3,\mathrm{}),`$ (4.20) $`f_n(P)`$ $`=`$ $`f_n(z+i[\xi ])=f_n(z_1+i\xi ,z_2+i\xi ^2/2,z_3+i\xi ^3/3,\mathrm{})`$ (4.21) (I hope that the usage of the same letters for functional dependence on both the point of the Riemann surface and the ALH variables $`z_m`$ will not lead to confusion). In other words, I want to introduce such functions $`𝜻(z_1,z_2,\mathrm{})`$ and $`\phi _{QR}(z_1,z_2,\mathrm{})`$ that $$𝜻(z+i[\xi ])-𝜻(z)=\int _B^P𝝎$$ (4.22) and $$\phi _{QR}(z+i[\xi ])-\phi _{QR}(z)=\phi _{QR}(P)-\phi _{QR}(B)$$ (4.23) This can be done as follows. In the neighborhood of the point $`B`$ (which is a preimage of the point $`\xi =0`$ of the complex plane) the components of the integral in (4.22) can be presented in terms of the $`\xi `$-coordinate as $$\int _B^P\omega _k=W_k(\xi )=\sum _{l=1}^{g}C_{k,l}\int _0^\xi \frac{x^{l-1}\mathrm{d}x}{\sqrt{𝒫_{2g+2}(x)}}$$ (4.24) where the sign of the square root is fixed by $`\sqrt{1}=1`$.
Hence, taking $`𝜻`$ to be a linear function of the coordinates $`z_m`$, $$𝜻=𝜻(z)=\sum _{m=1}^{\mathrm{}}𝜻^{(m)}z_m$$ (4.25) one can conclude that to satisfy (4.22) the vectors $`𝜻^{(m)}`$ should be defined as the coefficients of the series $$\sum _{m=1}^{\mathrm{}}𝜻^{(m)}\xi ^m=i\xi \frac{\mathrm{d}}{\mathrm{d}\xi }𝑾(\xi )$$ (4.26) (here $`𝑾`$ is the vector with the components $`W_k`$). Using (4.24) one can rewrite (4.26) as $$\sum _{m=1}^{\mathrm{}}\zeta _k^{(m)}\xi ^m=i\sum _{l=1}^{g}C_{k,l}\frac{\xi ^l}{\sqrt{𝒫_{2g+2}(\xi )}}$$ (4.27) (the right-hand side of this equation should be understood as a power series in $`\xi `$). In a similar way one can tackle equation (4.23) and derive the following result: $`\phi _{QR}(z)`$ is the linear function $$\phi _{QR}(z)=\sum _{m=1}^{\mathrm{}}\phi _{QR}^{(m)}z_m$$ (4.28) with the coefficients $`\phi _{QR}^{(m)}`$ being defined by $$\sum _{m=1}^{\mathrm{}}\phi _{QR}^{(m)}\xi ^m=i\xi \frac{\mathrm{d}}{\mathrm{d}\xi }\mathrm{ln}\frac{\theta \left[𝜹\right]\left(\int _Q^B𝝎+𝑾(\xi )\right)}{\theta \left[𝜹\right]\left(\int _R^B𝝎+𝑾(\xi )\right)}$$ (4.29) (the right-hand side is again a power series in $`\xi `$). Thus one can write the following expressions for the tau-functions: $`\tau _n(z)`$ $`=`$ $`\alpha _{}\mu ^{n^2/2}\mathrm{exp}\left\{n\phi _{DC}(z)\right\}\theta \left(𝜻_n(z)\right)`$ (4.30) $`\sigma _n(z)`$ $`=`$ $`q_{}\epsilon ^n\mu ^{n^2/2}\mathrm{exp}\left\{n\phi _{DC}(z)+\phi _{AC}(z)\right\}\theta \left(𝜻_n(z)-\int _A^C𝝎\right)`$ (4.31) $`\rho _n(z)`$ $`=`$ $`r_{}\epsilon ^n\mu ^{n^2/2}\mathrm{exp}\left\{n\phi _{DC}(z)-\phi _{AC}(z)\right\}\theta \left(𝜻_n(z)+\int _A^C𝝎\right)`$ (4.32) At last, we have to rewrite the function $`K(P)`$ from the right-hand side of (4.16)–(4.18). This is the first time since the Fay identity was written down that we need some facts from the theory of Riemann surfaces – until now everything was done by simple algebra. Consider the function $$f(P)=\frac{\widehat{\theta }_B^P\widehat{\theta }_C^P}{\widehat{\theta }_A^P\widehat{\theta }_D^P}$$ (4.33) Due to the condition (4.2), this is a single-valued function which possesses zeroes at the points $`B`$, $`C`$ and poles at $`A`$, $`D`$. Remembering that $`B`$, $`C`$ correspond to $`\xi =0`$, and $`A`$, $`D`$ – to $`\xi =\infty `$, one can easily find another function with the same divisor, $`B+C-A-D`$, namely, the projection $`\pi (P)`$ discussed above (see the paragraph before (4.2)). The quotient $`\pi (P)/f(P)`$ has no poles (and no zeroes as well) on $`X`$, hence it is a constant: $$f(P)=C\xi \text{for}P=(s,\xi )$$ (4.34) Thus, if we take $$\epsilon =C\frac{\widehat{\theta }_D^A}{\widehat{\theta }_C^B}$$ (4.35) then $$K(P)=\xi $$ (4.36) and relations (4.16)–(4.18) become (2.18)–(2.20); in other words, the functions defined by (4.30)–(4.32) solve equations (2.18)–(2.20). Until now we have been operating in a neighborhood of the point $`B`$ and have obtained solutions of equations (2.18)–(2.20), and hence of (2.9)–(2.10), i.e. we have solved the ’positive’ part of the ALH. To take into account the ’negative’ equations (2.11)–(2.12), or (2.21)–(2.23), one can proceed in a similar way, but this time considering flows near another distinguished point, $`D`$, which is a preimage of the point $`\xi =\infty `$.
It can be shown that the functions $`\tau _n`$, $`\sigma _n`$, $`\rho _n`$ given by (4.30)–(4.32) will solve (2.21)–(2.23) provided we introduce the $`\overline{z}`$-dependence by replacing $`𝜻(z)`$ $`\to `$ $`𝜻(z,\overline{z})`$ (4.37) $`\phi _{DC}(z)`$ $`\to `$ $`\phi _{DC}(z)+\overline{\phi }_{BA}(\overline{z})`$ (4.38) $`\phi _{AC}(z)`$ $`\to `$ $`\phi _{AC}(z)+\overline{\phi }_{CA}(\overline{z})`$ (4.39) (the overbar does not mean the complex conjugation!) where $$𝜻(z,\overline{z}+i[\xi ^{-1}])-𝜻(z,\overline{z})=\overline{𝑾}(\xi ^{-1})=\int _D^P𝝎$$ (4.40) and $$\overline{\phi }_{QR}(\overline{z}+i[\xi ^{-1}])-\overline{\phi }_{QR}(\overline{z})=\mathrm{ln}\frac{\widehat{\theta }_Q^P}{\widehat{\theta }_R^P}\frac{\widehat{\theta }_R^D}{\widehat{\theta }_Q^D}$$ (4.41) Thus, we now have all that is necessary to formulate the main result of this paper. The finite genus solutions of the ALH can be presented as $`q_n(z,\overline{z})`$ $`=`$ $`q_{}\epsilon ^n\mathrm{exp}\left\{\phi (z,\overline{z})\right\}{\displaystyle \frac{\theta \left(𝜻(z,\overline{z})+n𝑼-𝑽\right)}{\theta \left(𝜻(z,\overline{z})+n𝑼\right)}}`$ (4.42) $`r_n(z,\overline{z})`$ $`=`$ $`r_{}\epsilon ^n\mathrm{exp}\left\{-\phi (z,\overline{z})\right\}{\displaystyle \frac{\theta \left(𝜻(z,\overline{z})+n𝑼+𝑽\right)}{\theta \left(𝜻(z,\overline{z})+n𝑼\right)}}`$ (4.43) where $$𝑼=\int _A^B𝝎,𝑽=\int _A^C𝝎$$ (4.44) The functions $`𝜻(z,\overline{z})`$ and $`\phi (z,\overline{z})`$ are given by $`𝜻(z,\overline{z})`$ $`=`$ $`\sum _{m=1}^{\mathrm{}}\left(𝜻^{(m)}z_m+\overline{𝜻}^{(m)}\overline{z}_m\right)+\text{constant}`$ (4.45) $`\phi (z,\overline{z})`$ $`=`$ $`\sum _{m=1}^{\mathrm{}}\left(\phi ^{(m)}z_m+\overline{\phi }^{(m)}\overline{z}_m\right)+\text{constant}`$ (4.46) where the constants $`𝜻^{(m)}`$, $`\overline{𝜻}^{(m)}`$ and $`\phi ^{(m)}`$, $`\overline{\phi }^{(m)}`$ are defined as coefficients of the series $`\sum _{m=1}^{\mathrm{}}\zeta _k^{(m)}\xi ^m`$ $`=`$ $`i\sum _{l=1}^{g}C_{k,l}\frac{\xi ^l}{\sqrt{𝒫_{2g+2}(\xi )}}`$ (4.47) $`\sum _{m=1}^{\mathrm{}}\overline{\zeta }_k^{(m)}\xi ^m`$ $`=`$ $`i\sum _{l=1}^{g}C_{k,g+1-l}\frac{\xi ^l}{\sqrt{\overline{𝒫}_{2g+2}(1/\xi )}}`$ (4.48) with $$\overline{𝒫}_{2g+2}(\xi )=\xi ^{2g+2}𝒫_{2g+2}(1/\xi )$$ (4.49) and $`\sum _{m=1}^{\mathrm{}}\phi ^{(m)}\xi ^m`$ $`=`$ $`i\xi \frac{\mathrm{d}}{\mathrm{d}\xi }\mathrm{ln}\frac{\theta \left[𝜹\right]\left(𝑼-𝑽+𝑾(\xi )\right)}{\theta \left[𝜹\right]\left(𝑼+𝑾(\xi )\right)}`$ (4.50) $`\sum _{m=1}^{\mathrm{}}\overline{\phi }^{(m)}\xi ^m`$ $`=`$ $`i\xi \frac{\mathrm{d}}{\mathrm{d}\xi }\mathrm{ln}\frac{\theta \left[𝜹\right]\left(𝑼+\overline{𝑾}(1/\xi )\right)}{\theta \left[𝜹\right]\left(𝑼+𝑽+\overline{𝑾}(1/\xi )\right)}`$ (4.51) The constant $`\epsilon `$ is given by (4.35) and $`q_{}`$, $`r_{}`$ are arbitrary constants related by (4.15).
The ’real’ tau-function $`\tau _n`$ can be written as $$\tau _n(z,\overline{z})=\alpha _{}\mu ^{n^2/2}\mathrm{exp}\left\{n\psi (z,\overline{z})\right\}\theta \left(𝜻(z,\overline{z})+n𝑼\right)$$ (4.52) where the constant $`\mu `$ is given by (4.14), $$\psi (z,\overline{z})=\sum _{m=1}^{\mathrm{}}\left(\psi ^{(m)}z_m+\overline{\psi }^{(m)}\overline{z}_m\right)+\text{constant}$$ (4.53) and $`\sum _{m=1}^{\mathrm{}}\psi ^{(m)}\xi ^m`$ $`=`$ $`i\xi \frac{\mathrm{d}}{\mathrm{d}\xi }\mathrm{ln}\frac{\theta \left[𝜹\right]\left(𝑼-𝑽+𝑾(\xi )\right)}{\theta \left[𝜹\right]\left(𝑽+𝑾(\xi )\right)}`$ (4.54) $`\sum _{m=1}^{\mathrm{}}\overline{\psi }^{(m)}\xi ^m`$ $`=`$ $`i\xi \frac{\mathrm{d}}{\mathrm{d}\xi }\mathrm{ln}\frac{\theta \left[𝜹\right]\left(𝑽+\overline{𝑾}(1/\xi )\right)}{\theta \left[𝜹\right]\left(𝑼+𝑽+\overline{𝑾}(1/\xi )\right)}`$ (4.55) ## 5 Discussion. In this paper we have obtained the finite genus solutions for the ALH. The results can also be used to derive the finite genus solutions for other integrable hierarchies which can be ’embedded’ into the ALH (see ). So, for example, the functions $`Q`$ $`=`$ $`{\displaystyle \frac{\sigma _1}{\tau _0}}=Q_{}\mathrm{exp}\left\{\psi (z,\overline{z})+\phi (z,\overline{z})\right\}{\displaystyle \frac{\theta \left(𝜻(z,\overline{z})-\stackrel{~}{𝑽}\right)}{\theta \left(𝜻(z,\overline{z})\right)}}`$ (5.1) $`R`$ $`=`$ $`{\displaystyle \frac{\rho _1}{\tau _0}}=R_{}\mathrm{exp}\left\{\psi (z,\overline{z})-\phi (z,\overline{z})\right\}{\displaystyle \frac{\theta \left(𝜻(z,\overline{z})+\stackrel{~}{𝑽}\right)}{\theta \left(𝜻(z,\overline{z})\right)}}`$ (5.2) where $$\stackrel{~}{𝑽}=𝑽-𝑼=\int _B^C𝝎,$$ (5.3) and the constants $`Q_{}`$, $`R_{}`$ are related by $$Q_{}R_{}=\left[\epsilon \frac{\widehat{\theta }_B^A\widehat{\theta }_C^A}{\widehat{\theta }_D^A\widehat{\theta }_C^B}\right]^2$$ (5.4) solve the nonlinear Schrödinger equation $`i\partial _2Q+\partial _{11}Q+2Q^2R=0`$ (5.5) $`-i\partial _2R+\partial _{11}R+2QR^2=0`$ (5.6) where $`\partial _m=\partial /\partial z_m`$, as well as all higher equations of the hierarchy (see ). The quantities $$p_n=\frac{\tau _{n+1}\tau _{n-1}}{\tau _n^2}=\mu \frac{\theta \left(𝜻_T(x,\overline{x})+(n-1)𝑼\right)\theta \left(𝜻_T(x,\overline{x})+(n+1)𝑼\right)}{\theta ^2\left(𝜻_T(x,\overline{x})+n𝑼\right)}$$ (5.7) where $$𝜻_T(x,\overline{x})=𝜻^{(1)}x+\overline{𝜻}^{(1)}\overline{x}+\mathrm{constant}$$ (5.8) solve the 2D Toda lattice equation $$\frac{\partial ^2}{\partial x\partial \overline{x}}\mathrm{ln}p_n=p_{n-1}-2p_n+p_{n+1}$$ (5.9) In , relations between the ALH and the Davey-Stewartson equation (together with the Ishimori model) have been derived; one can find there expressions for the corresponding finite genus solutions. The last example stems from the fact that for any fixed $`n`$ the quantity $$u=r_{n-1}p_nq_{n+1}=\frac{\rho _{n-1}\sigma _{n+1}}{\tau _n^2}$$ (5.10) solves the Kadomtsev-Petviashvili (KP) equation, $$\partial _1\left(4\partial _3u+\partial _{111}u+12u\partial _1u\right)=3\partial _{22}u$$ (5.11) Thus, the results of the previous section yield the following finite genus solution for the KP: $$u=Q_{}R_{}\frac{\theta \left(𝜻_{KP}(z_1,z_2,z_3)-\stackrel{~}{𝑽}\right)\theta \left(𝜻_{KP}(z_1,z_2,z_3)+\stackrel{~}{𝑽}\right)}{\theta ^2\left(𝜻_{KP}(z_1,z_2,z_3)\right)}$$ (5.12) $$𝜻_{KP}(z_1,z_2,z_3)=𝜻^{(1)}z_1+𝜻^{(2)}z_2+𝜻^{(3)}z_3+\mathrm{constant}$$ (5.13) Here I set $`n=0`$ in (5.10) and omitted the $`\overline{z}_m`$-dependence for all $`m`$’s as well as the dependence on $`z_m`$ for $`m>3`$.
This solution differs from the already known one , which corresponds to an odd-order polynomial $`𝒫_{2g+1}`$ and, what is crucial, has been obtained by considering flows near the infinity ($`\pi (P)=\infty `$), which in that case is a ramification (Weierstrass) point (the point $`\xi =\infty `$ has only one preimage on the Riemann surface). I cannot at present discuss solution (5.12) in detail. For example, I do not know whether it is possible to obtain from (5.12) any non-trivial real solutions. In any case, solution (5.12) seems to be interesting and worth further study. To conclude, I want to point out the main differences between the approach of this paper and the ones used earlier . In the IST-based methods the hyperelliptic curves appear in the analysis of the spectral data of the scattering problem (2.1), while the dependence on the coordinates is derived from the system (2.2). Here we did not use the zero-curvature representation (2.1)–(2.2) explicitly (though it is surely hidden in the functional equations (2.9)–(2.12)). We started with an almost arbitrary polynomial $`𝒫_{2g+2}`$ (the fact that it is related to the transfer matrix of the scattering problem (2.1) was not crucial for our consideration) and obtained the $`z_j`$-, $`\overline{z}_j`$-dependence directly from the equations of the ALH (and not from the corresponding equations for the transfer matrix $`T_n`$). As to the algebro-geometrical method, the main distinguishing point is that the approach of the work (and of analogous works devoted to other integrable equations) is, so to speak, ’global’, while ours is ’local’. The authors of used the Baker-Akhiezer function and other structures defined on the whole Riemann surface $`X`$. At the same time, we did not use globally defined objects: each time we introduced some function depending on the point $`P`$ of the Riemann surface $`X`$ it was understood that it is defined in some vicinity of some distinguished point ($`B`$ or $`D`$). We did not even discuss the question whether our functions, say $`\phi _{QR}(P)`$, are well-defined, or single-valued, for all $`P`$’s, $`P\in X`$. All we needed were the Taylor expansions, say (4.28); hence for our purposes it was enough that our functions exist locally, for $`P`$ belonging to some (arbitrarily small) neighborhood of the point $`B`$ (or $`D`$). At last, I would like to note that the idea of applying the Fay identity to differential equations is far from new. For example, in the book one can find a few examples of how to demonstrate that $`\theta `$-functions of some arguments solve the KP, KdV and sine-Gordon equations. However, in these examples the main question is how some combinations of differential operators (flows) act on a $`\theta `$-function. To my knowledge, the problem of the action of these operators taken separately has not been considered before. Now we know a partial answer: the flows near a regular (not Weierstrass) point of a Riemann surface can be described by means of the equations of the ALH. Such an appearance of the ALH seems to be new and rather interesting. Combined with the results of the works it can be viewed as one more point indicating the ’universality’ of this hierarchy.
no-problem/9903/physics9903021.html
ar5iv
text
# Universality of the Hohenberg-Kohn functional In a recent article H. L. Neal presents a pedagogical approach to density-functional theory in the formulation of Kohn and Sham, which is still largely ignored in undergraduate teaching despite its enormous significance in many branches of physics, by discussing the application to one-dimensional two-particle systems. In this context Neal derives an analytic expression for the Hohenberg-Kohn functional $`F[\rho ]`$, given by Eq. (30) of the original paper, that he suggests is exact for all systems with the harmonic interaction $`u(x_1,x_2)=k(x_1-x_2)^2/2`$. The purpose of this comment is to refute this claim for arbitrary external potentials and to point out that the functional used in Ref. really constitutes an approximation in the same spirit as the local-density approximation. Exploiting the universality of the Hohenberg-Kohn functional, Neal calculates $$F=E-\int v(x)\rho (x)dx$$ (1) for an analytically solvable model of two coupled harmonic oscillators. He then rewrites the total energy $`E`$ and the external potential $`v`$ on the right-hand side in terms of the density $`\rho `$, using exact relations that are, however, only valid at the ground state of that particular model. This substitution hence replaces the explicit dependence on the external potential by a system-specific energy surface and fails to produce a universal functional. In particular, the minimum of the total energy obtained in this way in general differs from the true ground state. This subtle point is best seen if the resulting functional $`F[\rho ]`$ is directly inserted into the variational expression $$\frac{\delta }{\delta \rho (x)}\left(F[\rho ]+\int \left(v(x)-\mu \right)\rho (x)dx\right)=0$$ (2) that determines the ground-state density for arbitrary external potentials. The Lagrange multiplier $`\mu `$ enforces the proper normalization. In the notation of Ref. , which also provides $`\delta F[\rho ]/\delta \rho (x)`$ \[Eq. (31)\], one thus obtains $$\rho (x)=\rho _0\mathrm{exp}\left(-\frac{4\omega (v(x)-v(0))}{\mathrm{\hbar }\omega _0(\omega _0+\omega )}\right)$$ (3) and hence recovers the specific relation between $`v`$ and $`\rho `$ that was previously employed to construct the functional $`F[\rho ]`$ in the first place. By design, this expression is exact for the coupled harmonic oscillators. For all other external potentials, however, Eq. (3) constitutes an approximation to the ground-state density that does not coincide with the minimum of the true total-energy surface. The Kohn-Sham formalism requires $`F[\rho ]`$ as an auxiliary quantity to define $$F_r[\rho ]=F[\rho ]-T_r[\rho ],$$ (4) where $`T_r[\rho ]`$ denotes the kinetic energy of the noninteracting reference system. In the usual terminology, $`F_r[\rho ]`$ contains the Hartree and the exchange-correlation part of the total energy, and its derivative $`\delta F_r[\rho ]/\delta \rho (x)`$ contributes to the effective potential that appears in the single-particle Kohn-Sham equations. It follows from the above argument that this term is not treated exactly in Ref. , because the Hohenberg-Kohn functional as well as $`T_r[\rho ]`$ are constructed for a particular system and hence are not free from an implicit dependence on the harmonic-oscillator potential. The Hartree and exchange-correlation potential is thus modelled by that of two coupled harmonic oscillators.
In this sense the approach is analogous in spirit to the local-density approximation widely used in atomic and solid-state physics, which similarly replaces the exact exchange-correlation energy by that of a homogeneous electron gas with the same local density. Much as the local-density approximation is successful for weakly inhomogeneous systems, so Neal’s effective potential may be applied to one-dimensional two-particle systems with the harmonic interaction in general confining potentials. Indeed, Ref. presents meaningful results for several potential wells with different shapes. For a quantitative analysis we carefully reexamine the examples discussed in the original paper. We diagonalize the Hamiltonians using a basis of noninteracting harmonic-oscillator eigenfunctions. In this way all matrix elements can be calculated either analytically or by a numerically exact Gauss-Hermite quadrature. The results for case 2: $`v(x)=\alpha |x|`$ and case 3: $`v(x)=-\alpha \mathrm{exp}(-\beta x^2)`$, converged with respect to the number of basis functions, are more accurate and hence differ from those quoted in Ref. . We have set $`\alpha =1.0`$, $`\beta =0.1`$ and $`k=1.0`$. In Table I we contrast the exact total energy $`E`$ with the Kohn-Sham solution $`E_{\mathrm{KS}}`$. Although small, the deviation is genuine. For comparison we also quote the estimate $`E_{\mathrm{var}}`$ from a two-parameter variational wave function given in Ref. . The two approximate schemes yield similar small errors if the quantum well resembles a harmonic potential near the origin (case 3), otherwise the variational wave function is less appropriate and the effective potential gives better agreement with the exact solution (case 2).
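For completeness, the fixed-point density of Eq. (3) is trivial to evaluate for any confining potential once the model frequencies are fixed. The sketch below is our own illustration, not code from either paper; the parameter values and the choice of the case-2 potential are arbitrary, and atomic units with hbar = 1 are assumed.

```python
import numpy as np

# Sketch: evaluate the density of Eq. (3) for a confining potential v(x).
# omega0, omega and v(x) are arbitrary choices; hbar = 1 is assumed.
omega0, omega = 1.0, 1.2

def v(x):
    return np.abs(x)               # the "case 2" shape with alpha = 1

x = np.linspace(-8.0, 8.0, 4001)
rho = np.exp(-4.0 * omega * (v(x) - v(0.0)) / (omega0 * (omega0 + omega)))
rho *= 2.0 / np.trapz(rho, x)      # fix rho_0 by normalizing to 2 particles
print(np.trapz(rho, x))            # 2.0
```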
no-problem/9903/gr-qc9903057.html
ar5iv
text
# On the uniqueness of the expected stress-energy tensor in renormalizable field theories ## Acknowledgments This work is supported in part by funds provided by the Spanish DGICYT grant no. PB95-1204 and Junta de Andalucía grant no. FQM0225.
no-problem/9903/chao-dyn9903001.html
ar5iv
text
# Improvement of SNR with Chaotic Spreading Sequences for CDMA ITW 1999, Kruger National Park, South Africa, June 20 – June 25 Ken Umeno (This work was partly supported by President’s Special Research Grant of RIKEN.) and Ken-ichi Kitayama, Communications Research Laboratory, Ministry of Posts and Telecommunications, 4-2-1 Nukui-Kitamachi, Koganei, Tokyo 184-8795, Japan. {umeno,kitayama}@crl.go.jp Abstract — We show that chaotic spreading sequences generated by ergodic mappings of Chebyshev orthogonal polynomials have better correlation properties for CDMA than the optimal binary sequences (Gold sequences) in the sense of ensemble average. I. Introduction Recently, the applications of chaos to practical communication systems have been gaining attention. Here, we investigate the correlation properties of some ideal chaotic signals for their use as spreading sequences in CDMA. II. Model and Theory We consider chaotic spreading sequences for CDMA as follows: $`X_{n+1}^{\left(1\right)}=T\left[X_n^{\left(1\right)}\right]\text{(User 1)}`$ $`\mathrm{}`$ $`X_{n+1}^{\left(K\right)}=T\left[X_n^{\left(K\right)}\right]\text{(User K)}.`$ Here, $`T(x)`$ is assumed to be one of the Chebyshev polynomials $`T_m(x),m\ge 2`$ defined by $`T_m[\mathrm{cos}(\theta )]=\mathrm{cos}(m\theta )`$: $$T_1\left(x\right)=x,T_2\left(x\right)=2x^2-1,T_3\left(x\right)=4x^3-3x,\mathrm{}.$$ (1) We consider periodic sequences ($`X_{n+N}=X_n`$) as the spreading sequences. With the explicit expression $`\rho (x)dx=\frac{dx}{\pi \sqrt{1-x^2}}`$ for the ergodic invariant measure of $`T_m(x),m\ge 2`$, they satisfy the orthogonality relation $$\int _{-1}^{1}T_i\left(x\right)T_j\left(x\right)\rho \left(x\right)dx=\delta _{i,j}\frac{\left(1+\delta _{i,0}\right)}{2}.$$ (2) Here, $`\delta _{i,j}`$ is the Kronecker delta function. It is known that the periodic orbits of such ergodic dynamical systems are distributed according to the ergodic invariant measure. Thus, we can estimate the mean energy of the spreading sequences $$\sum _{j=1}^{N}T^2\left(X_j\right)=N\int _{-1}^{1}T\left(x\right)^2\rho \left(x\right)dx=\frac{1}{2}N.$$ (3) $$\sum _{j=1}^{N}T\left(X_j\right)T\left(X_{j+l}\right)=N\int _{-1}^{1}T_m\left(x\right)T_{m^{l+1}}\left(x\right)\rho \left(x\right)dx=0$$ (4) The mean interference noise $`Pn`$ is 0, as derived in $$\sum _{j=1}^{N}T\left(X_j\right)T\left(Y_j\right)=N\int _{-1}^{1}T\left(x\right)\rho \left(x\right)dx\int _{-1}^{1}T\left(y\right)\rho \left(y\right)dy=0.$$ (5) By Eq. (8) of Ref. , the mean variance of the interference noise can also be estimated as follows: $$\left\langle Pn^2\right\rangle =\left\langle \left[\sum _{j=1}^{N}T\left(X_j\right)T\left(Y_j\right)\right]^2\right\rangle =\frac{1}{4}N.$$ (6) Thus, with the use of the Gaussian assumption for the $`K-1`$ interference noises, we finally obtain the mean SNR, denoted by $`R_{\text{chaos}}(K)`$, as follows: $$R_{\text{chaos}}\left(K\right)=\frac{\frac{1}{2}N}{\frac{1}{2}\sqrt{N\left(K-1\right)}}=\sqrt{\frac{N}{K-1}}.$$ (7) On the other hand, the mean SNR for Gold sequences obtained by Tamura, Nakano, and Okazaki is given by $$R_{\text{Gold}}\left(K\right)=\sqrt{\frac{N^3}{\left(K-1\right)\left(N^2+N-1\right)}}.$$ (8) Thus, we show the following relations about the mean SNR between chaotic spreading sequences and Gold sequences.
$$R_{\text{Gold}}\left(K\right)<R_{\text{chaos}}\left(K\right)\text{for}N<\infty $$ (9) $$\lim _{N\to \infty }R_{\text{Gold}}\left(K\right)/R_{\text{chaos}}\left(K\right)=1,$$ (10) which exhibits the fact that suitable chaotic sequences can have better correlation properties for CDMA than the optimal binary sequences. Our estimation of the SNR for chaotic sequences can also be applied to other chaotic maps with explicit invariant measures . III. Conclusion We elucidate the correlation merit of chaotic spreading sequences by using the orthogonality properties of the Chebyshev ergodic maps. Acknowledgements We thank T. Itabe and Y. Furuhama of CRL for their encouragement.
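The ensemble estimates (3), (5) and (6) are easy to reproduce empirically. The sketch below is ours, not the authors' code; the map choice T_2, the sequence length N and the ensemble size are arbitrary, and the seeds are drawn from the arcsine invariant measure.

```python
import numpy as np

# Empirical check of Eqs. (3), (5), (6) for T_2(x) = 2x^2 - 1:
# mean energy N/2, zero mean cross-correlation, cross-correlation
# variance N/4.  Map choice, N and the ensemble size are arbitrary.
rng = np.random.default_rng(1)
N, trials = 1024, 400

def chebyshev_orbit(x0, n):
    x = np.empty(n)
    x[0] = x0
    for j in range(1, n):
        x[j] = 2.0 * x[j-1]**2 - 1.0
    return x

def seed():
    return np.cos(np.pi * rng.random())   # sample the invariant measure

energy, cross = [], []
for _ in range(trials):
    X = chebyshev_orbit(seed(), N)
    Y = chebyshev_orbit(seed(), N)        # an independent user's sequence
    energy.append(np.sum(X * X))
    cross.append(np.sum(X * Y))

print(np.mean(energy) / N)   # ~0.5   (Eq. 3)
print(np.mean(cross) / N)    # ~0     (Eq. 5)
print(np.var(cross) / N)     # ~0.25  (Eq. 6)
```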
no-problem/9903/gr-qc9903077.html
ar5iv
text
# A simple shear-free non-singular spherical model with heat flux ## Abstract We obtain an exact simple solution of the Einstein equation describing a spherically symmetric cosmological model without the big-bang or any other kind of singularity. The matter content of the model is shear free isotropic fluid with radial heat flux and it satisfies the weak and strong energy conditions. It is pressure gradient combined with heat flux that prevents the occurrence of singularity. So far all known non-singular models have non-zero shear. This is the first shear free non-singular model, which is also spherically symmetric. PACS numbers: 04.20Jb, 04.60, 98.80Hw Since Senovilla’s discovery of an exact singularity free cosmological solution of the Einstein equation representing a perfect fluid with the equation of state $`\rho =3p`$ (and subsequently, in the same framework, the one with $`\rho =p`$), it is now being recognised that the singularity theorems cannot, as generally believed earlier, prevent the occurrence of non-singular cosmological solutions satisfying all the energy and causality conditions. And there is no conflict with the theorems in this. The theorems became inapplicable because one of the assumptions, the existence of a closed trapped surface, is not respected by these solutions, and its violation does not entail any unphysical behaviour for the matter content. This assumption was however always a suspect, but this fact was not fully appreciated in the absence of a non-singular solution. The Senovilla solution did the signal service of dispelling the folklore belief. A large family of non-singular cosmological models and its generalization with heat flux has been considered, but they are all cylindrically symmetric (see an excellent recent review ). For practical cosmology, spherical symmetry is however more appropriate. It is therefore pertinent to seek spherically symmetric non-singular models. The first model of this kind was obtained by one of us ; it has imperfect fluid with heat flux (note, the expression for $`\theta `$ should have a negative sign before it) and it satisfies all the energy conditions and has no singularity of any kind. It was obtained by letting one of Tolman’s solutions expand. The solution has a free time function which can be chosen suitably to have non-singular behaviour for the physical and kinematical parameters, and there exist multiple such choices. It is also possible to have a non-singular model with null radiation flux . These models are both inhomogeneous and anisotropic and have the typical behaviour, beginning with low density at $`t\to -\infty `$, contracting to high density at $`t=0`$ and then again expanding to low density as $`t\to \infty `$. Nowhere does any physical parameter diverge. In the Raychaudhuri equation , which governs cosmological dynamics, it is acceleration (pressure gradient) and rotation (centrifugal force) that counteract gravitational collapse. In cosmology there is an absence of overall rotation, and hence, to check collapse and avoid singularity, the presence of acceleration becomes necessary. All the known non-singular models \[1-2,5,7,9\] are not only accelerating but also shearing. Though shear acts in favour of collapse in the Raychaudhuri equation, its dynamical action through tidal acceleration makes collapse incoherent, which acts against the concentration of a large mass in a small enough region. This would ultimately work against the formation of a compact trapped surface.
Raychaudhuri in one of his recent theorems establishes that the necessary condition for a non-singular cosmological model is that the space average of the physical and kinematical parameters must vanish. That means the parameters must depend upon space variables. The space gradient of expansion is, in vorticity-free spacetime, given by the space divergence of shear and heat flux . Hence for non-singularity the presence of at least one of them is necessary. The Ruiz-Senovilla family of non-singular cylindrical models is an example of the presence of shear without heat flux. It can be shown that the presence of shear is in general essential for perfect fluid G-2 symmetric non-singular models . The spherical non-singular model has both shear and heat flux. Then the question arises: could heat flux alone, of course combined with pressure gradient, avoid singularity? This is what we wish to demonstrate in this letter by obtaining a simple non-singular solution which describes an inhomogeneous shear-free spherical model filled with isotropic fluid and radial heat flux. The model satisfies the weak and strong energy conditions and has a physically acceptable fall-off behaviour in both $`r`$ and $`t`$ for the physical and kinematic parameters. Again there is a free time function which can be chosen suitably to give non-singular behaviour to the model, and there exist multiple such choices. The metric of the model is given by $$ds^2=(r^2+P)^{2n}dt^2-(r^2+P)^{2m}[dr^2+r^2(d\theta ^2+sin^2\theta d\varphi ^2)]$$ (1) where $$2n=2m+1\pm \sqrt{8m^2+8m+1}$$ in particular $$2m=1-\sqrt{3/2}<0,2n=\sqrt{3/2}.$$ (2) Here $`P=P(t)`$, which can be chosen freely. The Einstein field equation for perfect fluid with radial heat flux reads $$R_{ik}-\frac{1}{2}Rg_{ik}=-[(\rho +p)u_iu_k-pg_{ik}+\frac{1}{2}(q_iu_k+q_ku_i)]$$ (3) where we have set $`8\pi G/c^2=1`$, $`u_iu^i=1=-q_iq^i,q_iu^i=0`$, $`\rho ,p`$ denote fluid density and isotropic pressure, and $`q_i`$ is the radial heat flux vector. From eqns. (1) and (3) we obtain $`\rho `$ $`=`$ $`{\displaystyle \frac{3m^2\dot{P}^2}{(r^2+P)^{2n+2}}}-4m{\displaystyle \frac{3P+(m+1)r^2}{(r^2+P)^{2m+2}}},`$ $`p`$ $`=`$ $`{\displaystyle \frac{m}{(r^2+P)^{2n+2}}}[2(r^2+P)\ddot{P}+(3m-2n-2)\dot{P}^2]`$ $`+{\displaystyle \frac{4}{(r^2+P)^{2m+2}}}[(m+n)P+n^2r^2],`$ $`q`$ $`=`$ $`{\displaystyle \frac{4m(n+1)r\dot{P}}{(r^2+P)^{n+2}}}`$ (4) and the expansion and acceleration are given by $$\theta =\frac{3m\dot{P}}{(r^2+P)^{n+1}},\dot{u}_r=\frac{nr}{r^2+P}.$$ (5) We have freedom in the choice of the function $`P(t)`$, which could be chosen suitably to give non-singular behaviour to the above parameters. As a matter of fact there exist multiple choices, for instance $`P(t)=a^2+b^2t^2`$, $`a^2+e^{bt^2}`$, $`a^2+b^2\mathrm{cos}\omega t`$ with $`a^2>b^2`$. For all these choices it is clear that all the physical and kinematic parameters remain regular and finite for the entire range of the variables. Note that the last choice admits an interesting oscillating behaviour in time, in which the model oscillates between two finite regular states. The first case of an oscillating non-singular model was recently considered in the spherical family . The oscillating non-singular models are quite novel and interesting of their own accord. In the non-oscillating case, all the parameters given above tend to zero as $`r\to \infty `$ or $`t\to \pm \infty `$. The universe begins with low density, contracts to maximum density and then again expands to low density without ever becoming singular. This is a typical behaviour for non-singular models .
However in the oscillating case, the model oscillates in time between two regular finite states, and the parameters fall off to zero as $`r\to \infty `$. This is how oscillatory and non-oscillatory singularity free models differ from each other in their global behaviour. It is obvious from the simple expression for the metric that the spacetime is causally stable. For verification of the energy conditions, we will have to find the eigenvalues of the energy momentum tensor, which are given as follows: $$[\frac{1}{2}(\rho -p+D),\frac{1}{2}(\rho -p-D),-p,-p],D^2=(\rho +p)^2-4q^2.$$ (6) Note that in all the above expressions there would, in view of eqn. (2), be relative dominance of the term with $`(r^2+P)^{2(m+1)}`$ in the denominator. The weak and strong energy conditions would require $`\rho \ge 0,D\ge 0,\rho +p+D\ge 0,2p+D\ge 0`$. It can be easily verified that these conditions hold good for the choices of $`P(t)`$ given above. The dominant energy condition, which would require $`\rho \ge p`$, cannot however be satisfied as it is clearly violated for large $`r`$. Thus the model satisfies the weak and strong but not the dominant energy condition. We have thus obtained a spherical model with isotropic pressure fluid and radial heat flux without the big-bang or any other kind of singularity. This is the first shear free non-singular model. It is inhomogeneous but isotropic. It is heat flux that combines with pressure gradient to avoid singularity. From the point of view of realistic cosmology, the merits of the present model are its isotropy and spherical symmetry. Apart from the first Senovilla model and a large family of cylindrical non-singular models , there now also exists a large family of spherical non-singular models , and to that the present one adds a novel family of shear free models. Even though it does not satisfy the dominant energy condition, it is a very simple and interesting model. It is remarkable to note that for the first time cosmic singularity has been avoided in the absence of shear. There have been interesting cases of cosmological models , for instance , which do not satisfy all the energy conditions, yet deserve consideration for their other remarkable properties. The present model is simple, shear free and isotropic and hence is interesting enough. Above all it is a very simple spherical model and thus also points to an important fact that non-singular cosmological solutions are no longer isolated but could occur more generally even in spherical symmetry. Acknowledgement: LKP thanks IUCAA for a visit that made this work possible.
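The regularity and positivity claims above can be probed numerically for a concrete choice of P(t). The sketch below is our own check, using the density of Eq. (4) with the signs as written there; the values of a and b and the grid ranges are arbitrary.

```python
import numpy as np

# Numerical probe of regularity for P(t) = a^2 + b^2 t^2, using the density
# of Eq. (4) as written above.  a, b and the grid are arbitrary choices;
# this is a check sketch, not part of the derivation.
m = 0.5 * (1.0 - np.sqrt(1.5))        # 2m = 1 - sqrt(3/2) < 0
n = 0.5 * np.sqrt(1.5)                # 2n = sqrt(3/2)
a, b = 1.0, 0.5

t = np.linspace(-50.0, 50.0, 1001)
r = np.linspace(0.0, 50.0, 1001)
tt, rr = np.meshgrid(t, r)

P = a**2 + b**2 * tt**2
Pdot = 2.0 * b**2 * tt
rho = (3*m**2 * Pdot**2) / (rr**2 + P)**(2*n + 2) \
      - 4*m * (3*P + (m + 1)*rr**2) / (rr**2 + P)**(2*m + 2)

# rho stays finite and positive on the whole grid (both terms are >= 0
# since m < 0), consistent with the weak energy condition.
print(rho.min() > 0, np.isfinite(rho).all(), rho.max())
```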
## 1 Introduction

The search for the quark-gluon plasma, the deconfined phase of strongly interacting matter, has entered a decisive phase with the relativistic heavy-ion collision experiments slated to begin within a few months at the Brookhaven National Laboratory. This will put to a severe test the scores of theoretical models, conjectures, and speculations put forward over the last decade as plausible signatures of the quark-gluon plasma (QGP). The corresponding search at the CERN SPS has already yielded a large body of data which have been carefully examined for evidence of QGP. It has been suggested that the partonic phase may indeed have been reached in sulfur- and lead-induced collisions at the CERN SPS . The parton cascade model proposed by Geiger and Müller, and refined and studied exhaustively by Geiger and coworkers, attempts to describe the relativistic collision of nuclei using the partonic picture of hadronic interactions, where the nuclear dynamics is described in terms of quark and gluon interactions within perturbative QCD, embedded in the framework of relativistic transport theory. The complete space-time picture of the evolution is simulated by solving an appropriate transport equation in six-dimensional phase space using Monte Carlo techniques. The model, supplemented with a cluster hadronization scheme, has been developed into the computer code VNI . It was recently shown to give a reasonable description of particle spectra at SPS energies for $`S+S`$ and $`Pb+Pb`$ collisions , which needs to be understood in greater detail in view of the suggestion made in Ref. . In the present work we continue this effort and use the power of the parton cascade model to see how quickly the partonic matter produced in such collisions thermalizes at CERN SPS and BNL RHIC energies. It should be added that this question has been studied within the context of RHIC (and LHC) energies by Geiger ; we display the results somewhat differently (and with much better statistics) for a ready and easy comparison. Even though the developments in the parton cascade model have been very well documented, it is worthwhile recalling the most important steps:

* The initial state associated with the incoming nuclei involves their decomposition into nucleons, and of the nucleons into partons, on the basis of experimentally measured nucleon structure functions and elastic form factors. This procedure translates the initial nucleus-nucleus system into two colliding clouds of virtual partons.

* The parton cascade development starts from the initial inter-penetrating parton clouds and traces their space-time development with mutual interactions and self-interactions of the system of quarks and gluons. The model includes multiple elastic and inelastic interactions described as sequences of elementary $`2\rightarrow 2`$ scatterings, $`1\rightarrow 2`$ emissions, and $`2\rightarrow 1`$ fusions. Several important effects which characterize the space-time evolution of a many-parton system in nuclear collisions, like the individual time scale of each parton-parton collision, the formation time of the parton radiation, the effective suppression of radiative emissions from virtual partons due to an enhanced absorption probability of others in regions of dense phase-space occupation, and the effects of soft gluon interference in low-energy gluon emissions, are explicitly accounted for.
* And finally, the hadronization dynamics of the evolving system, in terms of parton coalescence into colour-neutral clusters, is described as a local statistical process that depends on the spatial separation and colour of the nearest-neighbour partons . The pre-hadronic clusters then decay to form hadrons.

In the present paper we shall be concerned only with the first two stages of the collision, for times $`\lesssim 2`$ fm/$`c`$, so that the matter is still in the form of primary, secondary, and space-like partons; this essentially spans the so-called pre-equilibrium era of the evolution. We consider four cases: $`S+S`$ and $`Pb+Pb`$ collisions at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV and 200 A$`\cdot `$GeV. In all cases we place the two nuclei centred at $`z=\pm 1`$ fm at $`t=-1`$ fm/$`c`$, which brings them to complete overlap at around $`t=0`$ fm/$`c`$. We analyze the results for the time evolution of the longitudinal and rapidity distributions of partons and also see how thermalization is attained and maintained, if at all. It should be added right at the outset that only the 'real' partons have been included in these spectra . Thus the initial state, before the collision, will reflect only the distribution of valence quarks; as the collision proceeds, more and more of the (initially) space-like gluons and sea-quarks gain enough energy to become either time-like or on the mass-shell. One may also add that the rapidity variable is not defined for space-like particles (as for them $`E<|p_z|`$) and, moreover, the partons which remain space-like throughout the collision do not contribute to the reaction dynamics and will be reabsorbed during the hadronization.

## 2 $`\sqrt{s}=`$ 20 A$`\cdot `$GeV

### 2.1 Production of (semi)hard partons

In Fig. 1 we show the time development of the longitudinal distribution of (real) partons for the $`S+S`$ collision at 20 A$`\cdot `$GeV. We see that the partonic distributions almost touch at $`t=-0.4`$ fm/$`c`$ and the nuclei disengage by $`t\approx 1`$ fm/$`c`$, leaving a trail of secondary partons near $`z=0`$, which are created in the (semi)hard collisions and radiations. The evolution of the rapidity distribution of the partons for $`S+S`$ collisions is shown in Fig. 2. We see that initially they are distributed over about four units of rapidity and peak around $`y=\pm 2`$. As a result of the collision the number of partons having $`y\approx \pm 2`$ is reduced, and most of the secondary partons materialize by the time $`t=`$ 0.4 fm/$`c`$. They are seen to be confined to $`|y|\lesssim 2`$. Approximating the width of the region over which the secondary partons are spread as $`\mathrm{\Delta }y\approx 4`$, we see that about two partons are produced per participating nucleon. The dip in the rapidity distribution is quite interesting too, as it implies that a large fraction of the partons has simply passed through without interacting. The situation for the collision of lead nuclei is quite different. We see (Fig. 3) that the nuclei start touching at $`t=-0.8`$ fm/$`c`$ and there is already a considerable overlap by $`t=-0.4`$ fm/$`c`$. The nuclei disengage only by $`t\approx 1.5`$ fm/$`c`$, leaving a trail of secondary partons in their wake. The differences between the two cases are much more dramatic when we look at the evolution of the rapidity distribution of the partons (Fig. 4). We see that there is considerable production of partons already at $`t=0`$ fm/$`c`$, even when the nuclei have not yet ploughed through each other.
The particle production continues until about 1.2 fm/$`c`$, as is clear from the modification of the rapidity distribution up to that time. We also see that the parton production is now much larger: the dip at $`y=0`$ in the rapidity distribution before the collision is completely filled up by the end of the collision, and we have a flat-top distribution of partons! We also see that again the secondary partons are spread over $`|y|\lesssim 2`$, and up to four partons per participating nucleon are produced. Recalling that the nuclei are thicker at smaller transverse distances from the collision axis and that nucleons there are more likely to re-scatter, these observations imply considerable multiple scattering among the partons.

### 2.2 Thermalization of partons

As indicated in the Introduction, the thermalization of partons has been addressed in detail in the parton cascade model studies. Several phenomenological estimates have also been obtained in the literature . We adopt a slightly different strategy here and look at the $`|p_x|,|p_y|,`$ and $`|p_z|`$ distributions of the materialized partons at different times in the centre-of-mass system of the colliding nuclei. We confine our attention to $`|z|\lesssim 0.5`$ fm and determine whether these distributions become isotropic at some stage during the evolution. A similar approach was used in Ref. when only the radiation of the gluons following the first (semi)hard scatterings was included in such collisions. However, it is expected that the multiple scatterings included in the parton cascade model used in the present work will hasten this process and also maintain the thermal equilibrium; in the absence of multiple scatterings a thermal equilibrium can neither be achieved nor maintained. The results of our simulations for $`S+S`$ collisions at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV are shown in Fig. 5. We see that the scatterings, which began even before $`t=0`$ fm/$`c`$ (see Figs. 1 & 2), increase the values of $`<|p_x|>`$ and $`<|p_y|>`$ and decrease $`<|p_z|>`$, as seen from the reduction of the number of partons having large $`|p_z|`$ and their reappearance with smaller $`|p_z|`$. There will also be an escape of partons having large rapidity from the zone $`|z|\lesssim 0.5`$ fm chosen here. As a result of these two processes the momentum distribution becomes isotropic around $`t=`$ 1 fm/$`c`$ and stays so for a brief while after that. The corresponding results for $`Pb+Pb`$ collisions (Fig. 6) are similar in nature, and we see that the number of partons released is much larger in the heavier system. Isotropy of the momenta is attained in the system near $`z=0`$ around $`t=`$ 1 fm/$`c`$ and is maintained afterwards. Thus we conclude that the partonic system produced in heavy-ion collisions at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV will have an excursion into a thermalized zone at about 1 fm/$`c`$ after the nuclei overlap, as a result of (semi)hard partonic collisions and gluonic radiations.
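The two diagnostics used above are easy to reproduce on any event record. The sketch below is our own illustration (the four-momentum array layout is a hypothetical format, not VNI output): it histograms the rapidity of the 'real' partons, for which $`E>|p_z|`$, and forms the $`<|p_x|>`$, $`<|p_y|>`$, $`<|p_z|>`$ comparison in the central slice $`|z|\lesssim 0.5`$ fm used in Figs. 5 and 6.

```python
# Sketch of the rapidity and isotropy diagnostics, for a hypothetical parton
# record with columns (E, px, py, pz, z). Not the actual VNI analysis code.
import numpy as np

def dn_dy(partons, bins=np.linspace(-5.0, 5.0, 41)):
    """Rapidity distribution of 'real' partons, i.e. those with E > |pz|."""
    E, pz = partons[:, 0], partons[:, 3]
    real = E > np.abs(pz)                     # y is undefined for space-like partons
    y = 0.5 * np.log((E[real] + pz[real]) / (E[real] - pz[real]))
    n, edges = np.histogram(y, bins=bins)
    return n / np.diff(edges), edges          # dN/dy per bin

def isotropy(partons, zmax=0.5):
    """<|px|>, <|py|>, <|pz|> in the central slice |z| <= zmax (fm);
    the three averages coincide for an isotropic momentum distribution."""
    central = np.abs(partons[:, 4]) <= zmax
    return np.abs(partons[central, 1:4]).mean(axis=0)

# toy usage with massless on-shell partons (E = |p|)
rng = np.random.default_rng(0)
toy = rng.normal(size=(2000, 5))
toy[:, 0] = np.sqrt((toy[:, 1:4]**2).sum(axis=1))
print(isotropy(toy))
```

Applied at successive time steps, the three averages returned by `isotropy` converging to a common value is precisely the criterion read off Figs. 5 and 6.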
## 3 $`\sqrt{s}=`$ 200 A$`\cdot `$GeV

While attempts to use the parton cascade model at energies as low as $`\sqrt{s}=`$ 20 A$`\cdot `$GeV are rather recent, those at the energies relevant to BNL RHIC, 200 A$`\cdot `$GeV, are quite well documented, and thus we shall give only the results for the rapidity distribution and the approach to isotropic momenta in the central zone of the collision volume.

### 3.1 Production of (semi)hard partons

In Fig. 7 we show the time evolution of the rapidity distribution of the real partons for central collisions of sulfur nuclei. As before, the dashed histograms show the results for the primary (uninteracted) partons, while the solid histograms show the sum of the primary and the secondary (semi)hard partons produced in the collision. We note that the production of partons in the parton cascade approach utilized here is completed by 0.4 fm/$`c`$ after the nuclei fully overlap. We can also estimate that up to 6 partons per 'participating' nucleon may be produced in these collisions, spread over $`|y|\lesssim 2.5`$. The corresponding results for central collisions of lead nuclei (Fig. 8) are very revealing, as the dip in the rapidity distribution of the partons is much less pronounced, a consequence of the much larger production of partons due to multiple scatterings. We also estimate that up to 9 partons per nucleon may be produced in this case. Before moving on, it is of interest to note that the shape of the initial-state parton distribution here differs from the one in Fig. 2 (at 20 A$`\cdot `$GeV), as we prepare the assembly of the partons at a higher $`Q_0^2`$ (see Discussion), resulting in a larger number of partons at the lower end of $`x`$, which translates into smaller $`|y|`$ in this plot.

### 3.2 Thermalization of partons

The evolution of the momentum distributions of the partons produced by the (semi)hard scatterings and radiations is shown in Figs. 9 and 10 for the central collisions of sulfur and lead nuclei, respectively, at energies relevant to BNL RHIC. We see that initially the partons have a large $`|p_z|`$ (note also the dip at $`|p_z|\approx 0`$, depicting the separation of the partons in rapidity space) and the collisions transfer $`p_z`$ into $`p_x`$ and $`p_y`$, which increase substantially and rapidly. The peak in the $`|p_z|`$ spectrum around 1 GeV holds the primary partons which have not yet interacted. We note that by the time $`t=`$ 0.4 fm/$`c`$ the slopes of the momenta along the three directions are similar, but for the above-mentioned peak, which persists till the end. Ignoring these uninteracted partons, we see that the partons which materialize during the collision attain an isotropic momentum distribution by $`t\approx 0.5`$ fm/$`c`$. Once again, the partons having large rapidities will escape the longitudinal slice which we have considered.

## 4 Discussion

Before concluding, it is worthwhile to reiterate some aspects of the model used for the studies reported here. The first concerns the so-called $`p_0`$, which is used to divide the scatterings in the parton cascade model into soft (elastic) and (semi)hard reactions. It is taken as 1.12 GeV for the collisions at 20 A$`\cdot `$GeV and 2.09 GeV for collisions at 200 A$`\cdot `$GeV, based on considerations of $`pp`$ cross-sections discussed in detail . There is no reason to believe that these values should be the same for nuclear collisions, but they provide a convenient starting point. An increase in $`p_0`$ will obviously decrease the number of (semi)hard scatterings. The scale $`Q_0`$ at which the nucleon structure functions are initialized is chosen as 1.32 and 2.35 GeV at the two energies, based on an estimate of $`<p_T^2>`$ in collisions among the primary partons. We have already remarked that the larger $`Q_0`$ leads to different shapes for the rapidity distributions at the higher energy considered here. A careful reader will have noticed that the transverse components of the momenta "stabilize" fairly quickly.
This has its origin in the fact that no collisions between partons are permitted if their relative $`\sqrt{s}`$ is less than 2 GeV. This is done to ensure that the perturbative QCD treatment used in the parton cascade model remains valid. Again, a parton created in a scattering with a large momentum loses energy quickly due to gluonic radiations (till its virtuality drops to the cut-off $`\mu _0\approx 1`$ GeV chosen in the calculations) and its momentum is considerably reduced before it undergoes its next collision. When the initial energy is high and the parton density is large, it may participate in further hard scatterings with production of additional partons; otherwise it will undergo only soft scatterings (see, e.g., Ref. ). A more comprehensive approach could proceed as follows: use the (practical) procedure of soft and (semi)hard scatterings of the parton cascade model to initiate the collision and, once the partonic density is large enough to ensure screening of the long-range forces on a scale where the Debye mass $`\mu _D`$ is much larger than $`\mathrm{\Lambda }_{QCD}`$, remove the division into soft and hard scatterings implemented in the model, evaluating the scattering in terms of the Debye mass at all possible $`\sqrt{s}`$ between partons and for all momentum transfers. This treatment will, however, quickly get beyond the capability of most computers due to the very large number of scatterings taking place. The other, more serious, problem is the comparatively small number of partons in a given event: implementing this scheme over the entire space-time spanned by the system would be plagued by large fluctuations. A more practical approach could be to invoke hydrodynamics to pursue the evolution beyond the point of thermalization, after an average is taken over a large number of events. This is in progress.

## 5 Conclusions

To conclude, we have seen that the parton cascade model suggests that there is a substantial production of partonic matter as a result of (semi)hard scatterings and QCD branchings in $`S+S`$ and $`Pb+Pb`$ collisions at 20 A$`\cdot `$GeV, which may attain an isotropic momentum distribution at about 1 fm/$`c`$ after the nuclei overlap completely at $`|z|\approx 0`$. The simulations reveal increased multiple-scattering activity in lead-induced collisions. The corresponding production at 200 A$`\cdot `$GeV is (obviously) much larger and the isotropy in the momentum distribution is attained within 0.5 fm/$`c`$. It is also suggested that this could be a convenient point to initialize a hydrodynamic approach to the evolution, if desired.

## Acknowledgments

The author gratefully acknowledges the hospitality of the University of Bielefeld, where part of this work was done. He would also like to acknowledge useful comments from Hans Gutbrod and Bikash Sinha.

FIGURE CAPTIONS

Fig. 1 Longitudinal distribution of (real) partons in central collision of sulfur nuclei at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV, at different times before and after the collision. The solid histograms give the sum of primary and (semi)hard secondary partons, while the dashed histograms give the result for the primary (uninteracted) partons.

Fig. 2 Rapidity distribution of (real) partons in central collision of sulfur nuclei at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV, at different times before and after the collision. The solid histograms give the sum of primary and (semi)hard secondary partons, while the dashed histograms give the result for the primary (uninteracted) partons.
Fig. 3 Longitudinal distribution of (real) partons in central collision of lead nuclei at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV, at different times before and after the collision. The solid histograms give the sum of primary and (semi)hard secondary partons, while the dashed histograms give the result for the primary (uninteracted) partons.

Fig. 4 Rapidity distribution of (real) partons in central collision of lead nuclei at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV, at different times before and after the collision. The solid histograms give the sum of primary and (semi)hard secondary partons, while the dashed histograms give the result for the primary (uninteracted) partons.

Fig. 5 Time evolution of the $`|p_x|`$ (crosses), $`|p_y|`$ (diamonds) and $`|p_z|`$ (histogram) distribution of the (real) partons in central collision of sulfur nuclei at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV.

Fig. 6 Time evolution of the $`|p_x|`$ (crosses), $`|p_y|`$ (diamonds) and $`|p_z|`$ (histogram) distribution of the (real) partons in central collision of lead nuclei at $`\sqrt{s}=`$ 20 A$`\cdot `$GeV.

Fig. 7 Rapidity distribution of (real) partons in central collision of sulfur nuclei at $`\sqrt{s}=`$ 200 A$`\cdot `$GeV, at different times before and after the collision. The solid histograms give the sum of primary and (semi)hard secondary partons, while the dashed histograms give the result for the primary (uninteracted) partons.

Fig. 8 Rapidity distribution of (real) partons in central collision of lead nuclei at $`\sqrt{s}=`$ 200 A$`\cdot `$GeV, at different times before and after the collision. The solid histograms give the sum of primary and (semi)hard secondary partons, while the dashed histograms give the result for the primary (uninteracted) partons.

Fig. 9 Time evolution of the $`|p_x|`$ (crosses), $`|p_y|`$ (diamonds) and $`|p_z|`$ (histogram) distribution of the (real) partons in central collision of sulfur nuclei at $`\sqrt{s}=`$ 200 A$`\cdot `$GeV.

Fig. 10 Time evolution of the $`|p_x|`$ (crosses), $`|p_y|`$ (diamonds) and $`|p_z|`$ (histogram) distribution of the (real) partons in central collision of lead nuclei at $`\sqrt{s}=`$ 200 A$`\cdot `$GeV.
# The Projected Three-point Correlation Function: Theory and Observations

## 1 Introduction

Traditionally, two-point statistics, the auto-correlation function $`\xi (r)`$ and the power spectrum $`P(k)`$, have been the dominant benchmarks for testing theories of structure formation. However, with the advent of large galaxy surveys and the development of non-linear cosmological perturbation theory and large N-body simulations, it has become clear that higher-order correlations provide new probes of large-scale structure. In particular, the $`N>2`$-point functions and moments test models for bias—the relation between the galaxy and mass distributions—and constrain non-Gaussianity in the initial conditions (Fry & Gaztañaga 1993, Frieman & Gaztañaga 1994, Gaztañaga 1994, Gaztañaga & Frieman 1994, Fry 1994, Fry & Scherrer 1994, Jaffe 1994, Fry, Melott, & Shandarin 1995, Juszkiewicz, et al. 1995, Chodorowski & Bouchet 1996, Gaztañaga & Mahonen 1996, Szapudi, Meiksin, & Nichol 1996, Jing 1997, Matarrese et al. 1997, Mo, Jing, & White 1997, Verde et al. 1998, Scoccimarro et al. 1998, Scoccimarro, Couchman, & Frieman 1998). The galaxy three-point function $`\zeta `$ has been measured in several angular and redshift catalogs (Peebles & Groth 1975, Peebles 1975, Groth & Peebles 1977, Fry & Seldner 1982, Bean et al. 1983, Efstathiou & Jedredjewski 1984, Hale-Sutton et al. 1989, Baumgart & Fry 1991, Jing, Mo, & Boerner 1991, Jing & Boerner 1998). Given the limited volumes covered by these surveys, these measurements generally probed the three-point function on scales $`0.1\lesssim r\lesssim 10h^{-1}`$ Mpc. They established that galaxies cluster hierarchically: on these scales, the hierarchical three-point amplitude, defined by

$$Q(x_{12},x_{13},x_{23})\equiv \frac{\zeta (\text{x}_1,\text{x}_2,\text{x}_3)}{\xi (x_{12})\xi (x_{13})+\xi (x_{12})\xi (x_{23})+\xi (x_{13})\xi (x_{23})},$$ (1)

(with $`x_{ij}=|\text{x}_i-\text{x}_j|`$) is nearly constant, independent of scale and configuration, with values in the range $`Q\approx 0.6`$–$`1.3`$, depending on the catalog. This hierarchical form is consistent with expectations from N-body simulations in the non-linear regime (e.g., Matsubara & Suto 1994, Fry, Melott, & Shandarin 1993, Scoccimarro et al. 1998, Scoccimarro & Frieman 1998). On larger scales, $`r\gtrsim 10h^{-1}`$ Mpc, in the weakly non-linear regime where $`\xi (r)\lesssim 1`$, non-linear perturbation theory (PT)—corroborated by N-body simulations—predicts that $`Q`$ becomes strongly dependent on the shape of the triangle defined by the three points $`\text{x}_i`$ (Fry 1984, Fry, Melott, & Shandarin 1993, Jing & Boerner 1997, Scoccimarro 1997, Gaztañaga & Bernardeau 1998, Scoccimarro et al. 1998). The large-scale configuration-dependence of the three-point function is characteristic of the non-linear dynamics of gravitational instability and is sensitive to the initial power spectrum, to the bias, and to non-Gaussianity in the initial conditions. Thus measurement of the three-point function and its shape dependence should provide a powerful probe of structure formation models. Measurements of higher-order galaxy correlations on large scales have so far been confined to volume-averaged correlation functions—the one-point cumulants $`S_N`$—and their cousins (e.g., cumulant correlators).
These statistics, based on counts-in-cells, are computationally easier to measure than the corresponding $`N`$-point functions and, being averages over many $`N`$-point configurations, can be measured with higher signal-to-noise as well. On the other hand, this averaging by definition destroys the information about configuration-dependence contained in the $`N`$-point functions themselves. In this Letter, we report a measurement of the projected three-point function on large scales in the APM Galaxy Survey (Maddox et al. 1990), compare the results with predictions of non-linear perturbation theory (PT) and N-body simulations, and briefly discuss their implications for bias and non-Gaussianity. We first recall the perturbative expression for the three-point function. To lowest order, the statistical properties of the density contrast field, $`\delta (\text{x})=\rho (\text{x})/\overline{\rho }-1`$, are characterized by the auto-correlation function $`\xi (x_{12})=\langle \delta (\text{x}_1)\delta (\text{x}_2)\rangle `$, and its Fourier transform, the power spectrum $`P(k)`$, where $`\langle \delta (\text{k}_1)\delta (\text{k}_2)\rangle =(2\pi )^3P(k_1)\delta _\mathrm{D}^3(\text{k}_1+\text{k}_2)`$; explicitly,

$$\xi (r)=\frac{1}{2\pi ^2}\int dk\,k^2P(k)\frac{\mathrm{sin}(kr)}{kr}.$$ (2)

In the results shown below, we consider the cold dark matter (CDM) family of models, with shape parameter $`\mathrm{\Gamma }=\mathrm{\Omega }h`$ ranging from 0.5 (standard CDM) to 0.2. We also consider a model consistent with the linear power spectrum inferred from the APM survey itself (Baugh & Gaztañaga 1996),

$$P_{\mathrm{APM}\mathrm{like}}(k)=\frac{Ak}{\left[1+(k/0.05)^2\right]^{1.6}},$$ (3)

with $`k`$ in $`h\,\mathrm{Mpc}^{-1}`$. For Gaussian initial conditions, to linear order in $`\delta `$, the connected $`N>2`$-point functions vanish. In second-order perturbation theory, the three-point correlation function $`\zeta (\text{x}_1,\text{x}_2,\text{x}_3)\equiv \langle \delta (\text{x}_1)\delta (\text{x}_2)\delta (\text{x}_3)\rangle `$ can be expressed as (Fry 1984, Jing & Boerner 1997, Gaztañaga & Bernardeau 1998)

$$\begin{array}{cc}\zeta (\text{x}_1,\text{x}_2,\text{x}_3)=& \frac{10}{7}\xi (x_{12})\xi (x_{13})+\frac{4}{7}\left[\frac{\mathrm{\Phi }^{\prime }(x_{12})\mathrm{\Phi }^{\prime }(x_{13})}{x_{12}x_{13}}+\left(\xi (x_{12})+2\frac{\mathrm{\Phi }^{\prime }(x_{12})}{x_{12}}\right)\left(\xi (x_{13})+2\frac{\mathrm{\Phi }^{\prime }(x_{13})}{x_{13}}\right)\right]\\ & -\frac{\text{x}_{12}\cdot \text{x}_{13}}{x_{12}x_{13}}\left(\xi ^{\prime }(x_{12})\mathrm{\Phi }^{\prime }(x_{13})+\xi ^{\prime }(x_{13})\mathrm{\Phi }^{\prime }(x_{12})\right)\\ & +\frac{4}{7}\left(\frac{\text{x}_{12}\cdot \text{x}_{13}}{x_{12}x_{13}}\right)^2\left(\xi (x_{12})+3\frac{\mathrm{\Phi }^{\prime }(x_{12})}{x_{12}}\right)\left(\xi (x_{13})+3\frac{\mathrm{\Phi }^{\prime }(x_{13})}{x_{13}}\right)+\mathrm{cyclic}\ \mathrm{permutations},\end{array}$$ (4)

where

$$\mathrm{\Phi }(r)\equiv \frac{1}{2\pi ^2}\int dk\,P(k)\frac{\mathrm{sin}(kr)}{kr},$$ (5)

and $`f^{\prime }(x)=df/dx`$. In eqns. (2)–(5), the power spectrum and auto-correlation function are implicitly evaluated at linear order in perturbation theory; we expect the leading-order result (4) to be valid in the weakly non-linear regime ($`\xi \lesssim 1`$) but to break down for $`\xi (r)\gtrsim 0.5`$.
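The ingredients of eq. (4) are simple quadratures over the linear spectrum. As an illustration, the sketch below (our own, not code from the paper) tabulates $`\xi (r)`$ and $`\mathrm{\Phi }(r)`$ of eqs. (2) and (5) for the APM-like spectrum of eq. (3); the normalization $`A`$ is set to unity, since the hierarchical amplitudes do not depend on it.

```python
# Sketch: evaluate xi(r) (Eq. 2) and Phi(r) (Eq. 5), plus their derivatives,
# for the APM-like linear spectrum of Eq. (3) with A = 1 (Q is independent of A).
import numpy as np

k = np.logspace(-4.0, 1.7, 20000)               # wavenumber in h/Mpc
Pk = k / (1.0 + (k / 0.05)**2)**1.6             # Eq. (3)

def trap(y, x):                                  # simple trapezoid rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def xi(r):                                       # Eq. (2)
    return trap(k**2 * Pk * np.sinc(k * r / np.pi), k) / (2.0 * np.pi**2)

def phi(r):                                      # Eq. (5); np.sinc gives sin(kr)/(kr)
    return trap(Pk * np.sinc(k * r / np.pi), k) / (2.0 * np.pi**2)

def deriv(f, r, h=1.0e-3):                       # central difference for xi', Phi'
    return (f(r + h) - f(r - h)) / (2.0 * h)

r = 20.0                                         # Mpc/h, a weakly non-linear scale
print(xi(r), deriv(xi, r), deriv(phi, r))
```

Feeding these tabulations into eq. (4), together with its cyclic permutations, yields the tree-level $`Q`$ for any triangle; $`q_3`$ then follows from the projections (9), (10) and (13) below.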
Eqn. (4) was derived for the Einstein-de Sitter ($`\mathrm{\Omega }_m=1`$) model, but it is known to be an excellent approximation for $`\mathrm{\Omega }\gtrsim 0.1`$ (Bouchet et al. 1992, 1995, Bernardeau 1994, Catelan et al. 1995, Scoccimarro et al. 1998, Kamionkowski & Buchalter 1998). For non-Gaussian initial conditions, there will in general be corrections to (4) which depend on the initial (linear) three- and four-point functions (Fry & Scherrer 1994). The expressions above apply to unbiased tracers of the density field; since galaxies of different types (e.g., ellipticals and spirals) have different clustering properties, we know that at least some galaxy species are biased. As an example, suppose the probability of forming a luminous galaxy depends only on the underlying mean density field in its immediate vicinity. Under this simplifying assumption, the relation between the galaxy density field $`\delta _g(\text{x})`$ and the mass density field $`\delta (\text{x})`$ is $`\delta _g(\text{x})=f(\delta (\text{x}))=\mathrm{\Sigma }_nb_n\delta ^n`$, where the $`b_n`$ are the bias parameters. To leading order in perturbation theory, this local bias scheme implies

$$\xi _g(x)=b_1^2\xi (x),$$ (6)

$$\zeta _g(\text{x}_1,\text{x}_2,\text{x}_3)=b_1^3\zeta (\text{x}_1,\text{x}_2,\text{x}_3)+b_1^2b_2[\xi (x_{12})\xi (x_{13})+\xi (x_{12})\xi (x_{23})+\xi (x_{13})\xi (x_{23})],$$ (7)

and therefore (Fry & Gaztañaga 1993, Fry 1994)

$$Q_g=\frac{1}{b_1}Q_\delta +\frac{b_2}{b_1^2}.$$ (8)

Gaztañaga & Frieman (1994) have used the corresponding relation for the skewness $`S_3`$ to infer $`b_1\approx 1`$, $`b_2\approx 0`$ from the APM catalog, but the results are degenerate due to the relative scale-independence of $`S_3`$. <sup>1</sup>Bernardeau (1995) pointed out a systematic correction to the way the $`S_J`$ predictions should be projected. After taking into account the correct selection function and the uncertainties in the APM shape of $`P(k)`$, this effect is less significant than claimed by Bernardeau, and the original interpretation is still valid, in agreement with the results for $`q_3`$ presented here. On the other hand, Fry (1994) used the projected bispectrum from the Lick catalog to infer $`b_1\approx 3`$; however, in order to extract a statistically significant $`Q_g`$ from the catalog, an average over scales including those beyond the weakly non-linear regime was required. As we will see below, the configuration-dependence of $`Q_g`$ on large scales in the APM catalog is quite close to that expected in PT, suggesting that $`b_1`$ is of order unity for these galaxies. The simple model above undoubtedly does not capture the full complexity of biasing (e.g., Mo & White 1996, Blanton et al. 1998, Dekel & Lahav 1998, Sheth & Lemson 1998), but it provides a convenient framework that is well matched to the quality of the current data. In a projected catalog with radial selection function $`\varphi (x)`$ (normalized such that $`\int dx\,x^2\varphi (x)=1`$), the galaxy angular two- and three-point functions at small angular separations ($`\theta _{ij}\ll 1`$) are given by (Peebles & Groth 1975, Peebles 1980)

$$w_g(\theta )=2\int _0^{\infty }dx\,x^4\varphi ^2(x)F^2(x)\int _0^{\infty }du\,\xi _g(r;z)$$ (9)

$$z_g(\theta _{12},\theta _{13},\theta _{23})=\int _0^{\infty }dx\,x^6\varphi ^3(x)F^3(x)\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }du\,dv\,\zeta _g(r_{12},r_{13},r_{23};z),$$ (10)

where $`\theta _{12},\theta _{13}`$, and $`\theta _{23}`$ are the sides of a triangle projected on the sky.
We assume the proper separations $`r_{ij}`$ are small compared to the mean depth of the sample; in this case,

$$r_{ij}=\frac{1}{1+z}\left[(f_{ij}/F)^2+x^2\theta _{ij}^2\right]^{1/2},$$ (11)

with $`f_{12}=u`$, $`f_{13}=v`$, and $`f_{23}=u-v`$. Here, $`x`$ is the comoving radial (coordinate) distance to redshift $`z`$, and $`F(x,\mathrm{\Omega })`$ is a geometrical factor which relates proper and coordinate distance intervals, $`F(x)=[1-(H_0x/c)^2(\mathrm{\Omega }-1)]^{1/2}`$. The radial selection function for the APM Galaxy Survey can be approximated by (Gaztañaga & Baugh 1998)

$$\varphi (x)_{APM}=Cx^b\mathrm{exp}(-x^2/D^2),$$ (12)

with $`b=0.1`$ and $`D=335h^{-1}`$ Mpc. At this depth, to an accuracy of better than a few percent, we can approximate $`F`$ by its Einstein-de Sitter value $`F=1`$. The projected hierarchical amplitude is defined by analogy with eqn. (1),

$$q_3(\theta _{12},\theta _{13},\theta _{23})\equiv \frac{z_g(\theta _{12},\theta _{13},\theta _{23})}{w_g(\theta _{12})w_g(\theta _{13})+w_g(\theta _{12})w_g(\theta _{23})+w_g(\theta _{13})w_g(\theta _{23})}.$$ (13)

In projecting eqns. (2) and (4) in PT we assume leading-order perturbative growth for the redshift-evolution of $`\xi `$ and $`\zeta `$ for $`\mathrm{\Omega }_m=1`$; in this case, both $`Q`$ and $`q_3`$ are independent of the power spectrum normalization (e.g., independent of $`\sigma _8`$). For the analysis, we use the equal-area projection pixel map of the APM survey, with an area of roughly $`120^o\times 60^o`$, containing $`N_{gal}\approx 1.3\times 10^6`$ galaxies brighter than $`b_j=20`$ and fainter than $`b_j=17`$. Each pixel has an area $`(\mathrm{\Theta }_p)^2\approx (0.06)^2`$ sq. deg., and the mean galaxy count per pixel is $`N=N_{gal}/N_{pix}=0.97`$. The estimator of the density fluctuation amplitude at the $`i`$th pixel is $`\widehat{\delta }_i=(N_i/N)-1`$, where $`N_i`$ is the galaxy count in that pixel. The estimator for the galaxy angular two-point function is then (Peebles & Groth 1975)

$$\widehat{w}(\theta )=\frac{1}{N_\theta ^{(2)}}\sum _{i,j}\delta _i\delta _jW_{ij}(\theta ),$$ (14)

where $`N_\theta ^{(2)}=\sum _{i,j}W_{ij}(\theta )`$ is the number of pairs of pixels at separation $`\theta `$ in the survey region, and the angular window function $`W_{ij}(\theta )=1`$ if pixels $`i`$ and $`j`$ are separated by $`|\vec{\theta }_i-\vec{\theta }_j|=\theta \pm \mathrm{\Theta }_p`$, and 0 otherwise. The reduced angular three-point function is estimated as

$$\widehat{z}(\theta _{12},\theta _{13},\theta _{23})=\frac{1}{N_\theta ^{(3)}}\sum _{i,j,k}\delta _i\delta _j\delta _kW_{ijk}(\theta _{12},\theta _{13},\theta _{23}),$$ (15)

where $`N_\theta ^{(3)}=\sum _{i,j,k}W_{ijk}`$ is the count of triplets of pixels with angular separations $`\theta _{12},\theta _{13},\theta _{23}`$ in the survey region, and $`W_{ijk}=1`$ for pixel triplets with that angular configuration and 0 otherwise. We note that, in the limit of counts of pairs and triplets of objects, these estimators are equivalent to the minimum-variance estimators of Landy & Szalay (1993) and Szapudi & Szalay (1998). We employ these estimators for angular separations $`\gtrsim 0.5`$ deg, at least an order of magnitude larger than the pixel scale; in this limit, finite pixel-size corrections should be less than a few percent and have been neglected.
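The estimators (14) and (15) translate directly into code. A brute-force sketch for $`\widehat{w}(\theta )`$ is given below (our own illustration; the pixel-map input and binning conventions are assumptions, not the survey pipeline):

```python
# Sketch of the pair estimator of Eq. (14) on a small pixel map of galaxy
# counts; separations are in pixel units. Hypothetical input format.
import numpy as np

def w_hat(counts, theta_edges):
    delta = counts / counts.mean() - 1.0                 # delta_i = N_i/N - 1
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    y, x, d = yy.ravel(), xx.ravel(), delta.ravel()
    sep = np.hypot(y[:, None] - y[None, :], x[:, None] - x[None, :])
    prod = d[:, None] * d[None, :]
    w = np.empty(len(theta_edges) - 1)
    for b in range(len(w)):
        mask = (sep >= theta_edges[b]) & (sep < theta_edges[b + 1])
        np.fill_diagonal(mask, False)                    # drop zero-lag terms
        w[b] = prod[mask].mean()                         # sum / N_theta^(2)
    return w

rng = np.random.default_rng(1)
toy = rng.poisson(1.0, size=(32, 32)).astype(float)      # unclustered toy map
print(w_hat(toy, np.array([8.0, 9.0, 10.0])))            # ~0 up to shot noise
```

For the full $`10^6`$-pixel map this $`O(N_{pix}^2)`$ pairing is impractical; the annuli-based bookkeeping described next reduces the triplet count to $`N_{pix}(m_{12}\times m_{13})/2`$ operations.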
To test the validity of PT on the scales of interest, to verify the algorithm for measuring the angular three-point function, and to check projection and finite-sampling (and boundary) effects in the APM survey, we use the simulated APM maps of Gaztañaga & Baugh 1998. These are created from N-body simulations of the SCDM (with $`\sigma _8=1`$) and APM-like ($`\sigma _8=0.8`$) models, with a box size of $`600h^{-1}`$ Mpc and $`128^3`$ particles, both with $`\mathrm{\Omega }_m=1`$. From each N-body realization we make 5 APM-like maps from 5 different observers; the observers are spaced sufficiently far apart compared to $`D`$ that the 'galaxies' they observe do not appear to be strongly correlated (Gaztañaga & Bernardeau 1998). As the simulation is done in a periodic box, we replicate the box to cover the full radial extent of the APM (around 1800 $`h^{-1}`$ Mpc, at which distance the expected number of galaxies is smaller than unity). To account for possible boundary effects, we employ the APM angular survey mask, including plate shapes and holes. We have also made larger angular maps without the APM mask and find no significant differences from the results shown below, from which we conclude that the APM mask does not affect the estimation of the 2- or 3-point functions on the scales under study here. The projected simulations have a $`30\%`$ lower mean surface density of galaxies than the APM. We have diluted the APM pixel maps by this amount (by randomly sampling the galaxies) and found that this dilution has a negligible effect on the clustering properties within the errors. To estimate the projected 3-point function we need to find all triplets in the pixel maps which have a given triangle configuration, and repeat the process for all configurations. Consider a triplet of pixels with labels 1, 2, 3 on the sky. Let $`\theta _{12}`$ and $`\theta _{13}`$ be the angular separations between the corresponding pairs of pixels and $`\alpha `$ the interior angle between these two triangle sides. One can characterize the configuration-dependence of the three-point function by studying the behavior of $`q_3(\alpha )`$ for fixed $`\theta _{12}`$ and $`\theta _{13}`$. Our algorithm for counting such triplets is as follows. For each pixel in the map, we find all pixels that lie in two concentric annuli of radii $`\theta _{12}\pm 0.5`$ and $`\theta _{13}\pm 0.5`$ about it, where $`\theta _{ij}`$ is now measured in units of $`\mathrm{\Theta }_p`$. We count all pairs of pixels between the two annuli; this requires $`(m_{12}\times m_{13})/2`$ operations, where $`m_{ij}=2\pi \theta _{ij}`$. As seen from the central pixel, each pair subtends an angle $`\alpha `$, and the results are binned in $`\alpha `$. We repeat this procedure for each pixel in the map, building up the estimators (14) and (15). Thus, finding all triplets with separations $`\theta _{12}`$ and $`\theta _{13}`$ requires only $`N_{pix}(m_{12}\times m_{13})/2`$ operations; for $`\theta _{12}=\theta _{13}=1`$ deg, this computation takes only a few hours of CPU time on a modest workstation. By contrast, a naive $`𝒪(N_{pix}^3)`$ operation with $`N_{pix}\approx 10^6`$ would be quite a lengthy computational task given current computer power. The two lower panels in Fig. 1 show results for $`q_3(\alpha )`$ for the SCDM and APM-like models, for $`\theta _{12}=\theta _{13}=2`$ degrees, projected at the depth of the APM survey. The configuration-dependence of the hierarchical three-point amplitude is seen to be quite sensitive to the shape of the power spectrum.
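The annuli-based triplet count described two paragraphs above can be written compactly. The following is our own sketch (bin conventions and array layout are assumptions), returning the numerator $`\widehat{z}`$ of eq. (15) binned in the opening angle $`\alpha `$:

```python
# Sketch of the annuli algorithm: for each central pixel, pair all pixels of
# the theta12 annulus with all pixels of the theta13 annulus and bin
# delta_i*delta_j*delta_k in alpha, per Eq. (15). Separations in pixel units.
import numpy as np

def z_hat(delta, t12, t13, n_alpha=18, width=0.5):
    ny, nx = delta.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    sums, counts = np.zeros(n_alpha), np.zeros(n_alpha)
    for cy in range(ny):
        for cx in range(nx):
            r = np.hypot(yy - cy, xx - cx)
            ring1 = np.abs(r - t12) <= width
            ring2 = np.abs(r - t13) <= width
            v1 = np.stack([yy[ring1] - cy, xx[ring1] - cx], axis=1).astype(float)
            v2 = np.stack([yy[ring2] - cy, xx[ring2] - cx], axis=1).astype(float)
            if len(v1) == 0 or len(v2) == 0:
                continue
            cosa = (v1 @ v2.T) / np.outer(np.linalg.norm(v1, axis=1),
                                          np.linalg.norm(v2, axis=1))
            alpha = np.arccos(np.clip(cosa, -1.0, 1.0))
            idx = np.minimum((alpha / np.pi * n_alpha).astype(int), n_alpha - 1)
            trip = delta[cy, cx] * np.outer(delta[ring1], delta[ring2])
            np.add.at(sums, idx, trip)        # running sum of triplet products
            np.add.at(counts, idx, 1.0)       # N_theta^(3) per alpha bin
    return sums / np.maximum(counts, 1.0)

# q3(alpha) then follows by dividing by the symmetrized product of the
# w_hat's, as in Eq. (13).
```

Per central pixel the cost is $`O(m_{12}\times m_{13})`$, as stated above.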
Both the shape and amplitude of $`q_3(\alpha )`$ predicted by PT (solid curves) are reproduced by the N-body results (points), even on these moderately small scales (at the mean depth of the APM, 2 deg corresponds to $`14`$ h<sup>-1</sup> Mpc). Part of this agreement traces to the fact that $`q_3`$ involves a weighted sum over spatial 3-point configurations covering a range of scales; due to the shape of the power spectrum, $`Q`$ increases on large scales, so these configurations (which are further into the perturbative regime) are more heavily weighted in projection. The error bars on the simulation results are estimated from the variance between 10 maps (5 observers each in 2 N-body realizations), assuming they are independent, and correspond to the 1-$`\sigma `$ interval of confidence for a single observer (i.e., they are not divided by $`\sqrt{10-1}`$). The top panel of Fig. 1 and all those of Fig. 2 show the measurements of $`q_3(\alpha )`$ in the APM survey itself at $`\theta _{12}=\theta _{13}=0.5`$–$`4.5`$ degrees. Closed squares correspond to estimations in the full APM map. The values at $`\alpha =0`$ are in agreement with the cumulant correlators, $`c_{12}\approx q_3(0)`$, estimated (with 4×4 bigger pixels) by Szapudi & Szalay (1999). The mean values are comparable to the values of $`s_3/3`$ in the APM (Gaztañaga 1994, Szapudi et al. 1995). Also notice that at the scales considered here, $`\gtrsim 0.5`$ deg, the values of $`s_3`$ in the APM and the EDSGC (Szapudi, Meiksin, & Nichol 1996) are very similar (Szapudi & Gaztañaga 1998). The APM results are compared with the values of $`q_3`$ for the APM-like spectrum in PT (solid curves) and in simulations (open triangles with error bars). As the APM-like model has, by construction, the same $`w(\theta )`$ as the real APM map, we assume that the sampling errors should be similar in the APM and in the simulations. This might not be true on the largest scales, however, where systematics in both the APM survey and the simulations are more important (e.g., the simulation might be affected by the periodic boundaries, and the power in the simulation and in the survey may differ on the largest scales). At scales $`\theta \gtrsim 1`$ deg, the agreement between the APM-like model and the APM survey is quite good; this corresponds roughly to physical scales $`r\gtrsim 7`$ h<sup>-1</sup> Mpc, not far from the non-linear scale (where $`\xi _2\approx 1`$). At 4.5 deg, the APM agrees better with the PT prediction than with the simulation results, which show large variance; this could be an indication that the 600 h<sup>-1</sup> Mpc simulation box is not big enough, leading to larger sampling errors than in the APM itself. We also note that the SCDM model clearly disagrees with the APM data for $`q_3`$; this can be seen at $`\theta _{12}=\theta _{13}=2`$ deg (Fig. 1) and at $`\theta _{12}=\theta _{13}=3.5`$ deg (dashed curve in the lower-left panel of Fig. 2). This conclusion is independent of the power spectrum normalization. At smaller angles, $`\theta \lesssim 1`$ deg, $`q_3`$ in the simulations is larger than in either the real APM or PT (top-left panel in Fig. 2). The discrepancy between simulations and PT on these relatively small scales is clearly due to non-linear evolution. The interpretation of the discrepancy with the real APM is less clear: a number of assumptions underlying the simulations could affect the final results at these non-linear scales.
For example, systematic uncertainties in the APM selection function or a linear bias would lead one to infer a different linear APM-like power spectrum from the $`w(\theta )`$ data. Also, a model with low $`\mathrm{\Omega }_m`$ would undergo less non-linear evolution, which might give a better match to the APM results for $`q_3`$; this could provide an interesting test for a low-density universe. This discrepancy at non-linear scales is similar to the one found in Baugh & Gaztañaga (1996), where the real APM values of $`S_J`$ were closer to the PT predictions than to the simulations. Other possible contributions to this effect include non-linear bias and non-linear projection effects (Gaztañaga & Bernardeau 1998). The open circles in each figure show the mean of the estimations of $`q_3`$ in 4 disjoint subsamples of the APM survey (equally spaced in right ascension, as in Baugh & Efstathiou 1994). For illustration, the values of $`q_3`$ for each of the 4 zones are shown in the top panel of Figure 1: the dotted, short-dashed, long-dashed, and dot-dashed curves correspond to zones of increasing RA (the middle two of these correspond to relatively lower galactic latitude). These estimations of $`q_3`$ are subject to larger finite-volume effects, because each zone is only 1/4 the size of the full APM. Because the zones cover a range of galactic latitude, a number of the systematic errors in the APM catalog (star-galaxy separation, obscuration by the Galaxy, plate-matching errors) might be expected to vary from zone to zone. We find no evidence for such systematic variation in $`q_3`$: the individual zone values are compatible with the full survey within the (sampling) errors in the simulations (compare the top and middle panels of Figure 1). On larger scales, $`\theta \gtrsim 3`$ deg, the individual zone amplitudes exhibit large variance, and boundary effects come into play. On all scales, we find large covariance between the errors of $`q_3`$: the data points at different $`\alpha `$ are strongly correlated. This is illustrated in the middle panel of Fig. 1: the dotted and continuous curves correspond to results for 2 of the 10 observers. Sampling or finite-volume effects are seen to produce a systematic vertical shift in the curve rather than a scatter around some mean value. A similar trend is found in the real APM catalog (compare the 4 curves in the top panel of Fig. 1). This covariance can be studied analytically (Hui, Scoccimarro, Frieman, & Gaztañaga, in preparation) and comes from fluctuations on scales comparable to the sample size. Ratio bias and integral-constraint bias (e.g., Hui & Gaztañaga 1999) could also be important. Other possible sources of systematic discrepancies between the model predictions and the APM results include the shape of the APM radial selection function, the evolution of clustering, and the shape of the linear power spectrum. We find that the first two effects introduce differences smaller than $`10\%`$ in the amplitude of $`q_3`$ (in agreement with Gaztañaga 1995), which are not significant given the errors. The uncertainty in the shape of the linear $`P(k)`$ is more important and, as mentioned above, is critical for the interpretation of $`q_3`$ at the smallest angles. Nevertheless, at large scales ($`\theta >1`$ deg) these uncertainties appear to be within the errors (when we take into account that the errors are strongly correlated). This is illustrated in the bottom-left panel of Fig. 2:
the two solid curves bracketing the APM-like results show the PT predictions for two power spectra which conservatively bracket the uncertainties in the linear spectrum inferred from the APM $`w(\theta )`$ (see Fig. 13 of Gaztañaga & Baugh 1998): $`P(k)\propto k^a/[1+(k/0.06)^3]`$, with $`a=0.2`$ and $`1.2`$. The model with $`a=1.2`$ (the lower solid curve at small $`\alpha `$) appears to give a better match to the $`q_3`$ results in the APM than the central APM-like model of Eq. (3); for reference, a CDM model with $`\mathrm{\Gamma }=0.3`$ gives a nearly identical value for $`q_3(\alpha )`$ on these scales. Thus, although the APM results for $`q_3`$ generally fall below the PT predictions on angular scales $`\theta \gtrsim 2`$ deg, they are consistent within the sampling errors, given the uncertainties in the shape of the linear power spectrum inferred from the APM $`w(\theta )`$. To place accurate constraints upon bias models and initially non-Gaussian fluctuations, we must quantitatively model the covariance between the estimates of $`q_3`$; this will be done elsewhere, but we can nevertheless get a qualitative sense of the limits here. We expect the strongest constraints to come from intermediate scales, $`\theta \approx 1`$–$`2.5`$ deg, where both the sampling errors and the non-linearities are small. The upper-right panel of Fig. 2 shows the PT predictions for the APM-like model with linear bias parameter $`b_1=2`$ (dashed curve) and a non-linear bias model with $`b_1=1`$, $`b_2=0.5`$ (dotted curve). Even if the errors are $`100\%`$ correlated, these models are clearly ruled out by the APM data; we conservatively conclude that $`b_1\lesssim 1.5`$ is required for a simple linear bias model to fit the APM data. Note that a model with $`b_1=1.5`$ and $`b_2=0.5`$ would have roughly the correct amplitude for $`q_3`$, but its shape would be flatter than the data, especially at larger $`\theta `$. As a simple example of a non-Gaussian model, the dotted curve in the lower-left panel of Fig. 2 shows the leading-order prediction for the $`\chi ^2`$ isocurvature model (Peebles 1997, 1998a, 1998b, Antoniadis et al. 1997, Linde & Mukhanov 1997, White 1998) with the APM-like spectrum. In this model, the initial density field is the square of a Gaussian random field, and the leading-order 3-point function is simply $`\zeta =2[2\xi (x_{12})\xi (x_{13})\xi (x_{23})]^{1/2}`$. Clearly, the projected three-point function for this model is substantially larger than that of the corresponding Gaussian model at intermediate $`\alpha `$; both the amplitude and the shape are discrepant with the APM data. To make this comparison precise, the non-linear corrections for this model should be self-consistently included; however, these corrections are expected to increase rather than reduce the $`q_3`$ amplitude, likely making the disagreement worse. In general, if we assume no biasing, initial non-Gaussianities are restricted to $`|q_3|\lesssim 0.5`$. At angular scales $`\theta \gtrsim 1`$ deg, corresponding to physical scales for which $`\xi \lesssim 1`$, the agreement between PT and the APM survey for the angular three-point amplitude $`q_3`$ is quite good, implying that APM galaxies are not significantly biased on these scales and that their spatial distribution is consistent with non-linear evolution from Gaussian initial conditions.
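As a closing illustration (ours, not from the paper) of how such constraints operate, note that eq. (8) makes the bias fit linear in $`1/b_1`$ and $`b_2/b_1^2`$. The sketch below recovers $`(b_1,b_2)`$ from a biased amplitude; the curves are toy stand-ins, and the strong error covariance emphasized above is deliberately ignored.

```python
# Sketch: least-squares fit of the local-bias parameters of Eq. (8),
# q3_model = q3_PT/b1 + b2/b1^2, linear in c1 = 1/b1 and c2 = b2/b1^2.
# q3_pt and q3_obs are hypothetical tabulated curves on a common alpha grid.
import numpy as np

def fit_bias(q3_pt, q3_obs):
    A = np.column_stack([q3_pt, np.ones_like(q3_pt)])   # columns multiply c1, c2
    (c1, c2), *_ = np.linalg.lstsq(A, q3_obs, rcond=None)
    b1 = 1.0 / c1
    return b1, c2 * b1**2                                # (b1, b2)

alpha = np.linspace(0.0, np.pi, 19)
q3_pt = 0.6 + 0.8 * np.cos(alpha)**2          # toy stand-in for the PT curve
q3_obs = q3_pt / 1.2 + 0.1 / 1.2**2           # fake 'data' with b1=1.2, b2=0.1
print(fit_bias(q3_pt, q3_obs))                # recovers (1.2, 0.1)
```

A realistic fit would weight the residuals by the full covariance matrix of the $`q_3(\alpha )`$ estimates.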
This substantiates and extends the conclusions of Gaztañaga (1994, 1995) and Gaztañaga & Frieman (1994). ###### Acknowledgements. We thank A. Buchalter, A. Jaffe, and M. Kamionkowski, who have recently carried out an independent computation of the angular three-point function in PT, for discussions, as well as L. Hui, R. Juszkiewicz, and R. Scoccimarro. EG would like to thank G. Dalton, G. Efstathiou, and the Astrophysics group at Oxford, where this work started. This research was supported in part by the DOE and by NASA grant NAG5-7092 at Fermilab and by NATO Collaborative Research Grants Programme CRG970144 between IEEC and Fermilab. EG also acknowledges support by IEEC/CSIC and by DGES(MEC) (Spain), project PB96-0925.
# Detecting planets around stars in nearby galaxies

## 1 Introduction

The existence of 'other worlds' has always been one of the most discussed topics in the history of philosophy and science. The question has fascinated researchers for more than 2000 years, but the first attempt in modern astronomy to discover extrasolar planets was made by Huyghens (), in the XVII century. One had to wait nearly another 300 years until the first extrasolar planets were discovered (Mayor & Queloz ; Marcy & Butler ), namely by observing the radial velocity of the parent star through Doppler-shift measurements. All of the confirmed detections of extrasolar planets so far result from this technique, and $`\sim 20`$ planets have been found (Schneider ). Already in 1991, Mao & Paczyński () pointed out that not only does a (dark) foreground star that passes close to the line-of-sight of an observed luminous background source star yield a detectable variation in the observed light of the source star, but a planet around the foreground (lens) star can also significantly modify the observed light curve. Gould & Loeb () have shown that there is a significant probability to detect jupiter-mass and saturn-mass planets around stars in the Galactic disk that act as microlenses by magnifying the light of observed stars in the Galactic bulge. Bennett & Rhie () have pointed out that the capability of detecting planets by this photometric microlensing ($`\mu `$L) technique extends to earth-mass planets, where the limit is given by the finite size of the source stars. Contrary to all other techniques employed or suggested to search for planets, photometric $`\mu `$L does not favour nearby objects. This makes it the unique technique to search for planets around stars at distances larger than a few kpc. Moreover, for disk lenses and bulge sources, a separation between planet and parent star of 2–6 AU is favoured, making it an ideal method to look for jupiter-like systems. Since the parent star of the planet acts as a gravitational lens only through its gravitational field, there is no luminosity bias for the parent stars, which are generally not even seen. Moreover, it is the only method to discover Earth-like planets from ground-based observations.<sup>1</sup> <sup>1</sup>In 1992, Earth-mass objects were discovered around the pulsar PSR1257+12 (Wolszczan & Frail ; Wolszczan ) through time-delay measurements. The discovery is not in doubt, but the very nature of these objects is completely unknown: it is difficult, at the moment, to reconcile this discovery with our picture of planetary systems. A precise definition of a planet is a subtle question (see Marcy & Butler ). Several teams have started to look for planetary anomalies in $`\mu `$L light curves with monitoring programs that perform frequent and precise observations, namely PLANET (Albrow et al. ; Dominik et al. ), MPS (Rhie et al. [1999a]), and MOA (Hearnshaw et al. ). All these teams rely on the microlensing 'alerts' issued by teams that undertake surveys of $`\sim 10^7`$ stars: OGLE (Udalski et al. ), MACHO<sup>2</sup> (Alcock et al. ), and EROS (Palanque-Delabrouille et al. ). <sup>2</sup>MACHO will discontinue its operation by the end of 1999. While most of these alerts are on Galactic bulge stars, MACHO and EROS also observe(d) fields towards the Magellanic Clouds. However, the number of events towards SMC and LMC comprises only 5–10% of the total number of events.
In addition to detecting planets around stars in the Galactic disk (typically at $`\sim 4\,\text{kpc}`$ distance), one could also think of detecting planets around stars in the Magellanic Clouds (at $`\sim 50\,\text{kpc}`$ distance). However, in addition to the relatively small number of detected events, finite-source effects play a much more prominent role for lensing of stars in the Magellanic Clouds by stars in the Magellanic Clouds than for lensing of Galactic bulge stars by Galactic disk stars (Sahu ), resulting in a dramatic decrease in the probability to detect planetary signals. Safizadeh et al. () have pointed out that planets around disk stars can also be detected by looking at the shift of the light centroid of observed source stars, caused by microlensing of disk stars and surrounding planets, with upcoming space interferometers that allow astrometric shifts to be measured at the $`\mu `$as level. Contrary to photometric $`\mu `$L, the observed signal of this 'astrometric $`\mu `$L' technique decreases with the distance of the lenses (e.g. Dominik & Sahu ). With $`\mu `$as-astrometry, jupiter-mass planets can only be detected for distances up to $`\sim 30\,\text{kpc}`$. This leaves photometric $`\mu `$L as the only method ever capable of detecting planets in nearby galaxies like M31. In contrast to microlensing observations towards the Galactic bulge and the Magellanic Clouds, a large number of source stars fall onto the same pixel of the detector for observations towards M31. However, it is still possible to detect $`\mu `$L events even in unresolved star fields (Baillon et al. ; Gould ). Since standard photometric methods cannot be used to reveal $`\mu `$L events, new techniques have been developed: super-pixel photometry (Ansari et al. ) and difference image photometry (Tomaney & Crotts ; Alard & Lupton ). These techniques are used in the $`\mu `$L searches towards M31 carried out by the Columbia-VATT search (Crotts & Tomaney ), AGAPE (Ansari et al. ), SLOTT-AGAPE (Bozza et al. ), and MEGA (Crotts et al. ). In this paper we investigate the possibility of detecting planets around stars in M31 with experiments that make use of either of these techniques. By searching for planets (or, at least, brown dwarfs) even in other galaxies, the limit for planet detection is pushed further towards larger distances. The paper is organized in the following way: in Sect. 2, we discuss the characteristics of microlensing signals caused by planets. In Sect. 3, the conditions for detecting anomalies in light curves of M31 are discussed. In Sect. 4, we calculate the probability to detect planetary signals in M31, and in Sect. 5, we discuss the extraction of planetary parameters. Finally, in Sect. 6, we summarize and conclude.

## 2 Microlensing signals of planets

A microlensing event occurs if a massive lens object with mass $`M`$ located at a distance $`D_\mathrm{L}`$ from the observer passes close to the line-of-sight towards a luminous source star at the distance $`D_\mathrm{S}`$ from the observer. Let $`u`$ denote the angular separation between lens and source in units of the angular Einstein radius

$$\theta _\mathrm{E}=\sqrt{\frac{4GM}{c^2}\frac{D_\mathrm{S}-D_\mathrm{L}}{D_\mathrm{L}D_\mathrm{S}}}.$$ (1)
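For orientation, eq. (1) is quick to evaluate numerically. The sketch below is our own illustration; the distances are assumptions representative of M31 self-lensing (a source lying about 10 kpc behind a solar-mass lens in M31).

```python
# Order-of-magnitude evaluation of Eq. (1); the distances are illustrative
# assumptions, not values prescribed by the paper.
import numpy as np

G, c = 6.674e-11, 2.998e8               # SI units
M_sun, kpc = 1.989e30, 3.086e19         # kg, m

def theta_E(M, D_L, D_S):
    """Angular Einstein radius of Eq. (1), in radians."""
    return np.sqrt(4.0 * G * M / c**2 * (D_S - D_L) / (D_L * D_S))

rad_to_muas = (180.0 / np.pi) * 3600.0e6    # radians -> micro-arcseconds
print(theta_E(M_sun, 600.0 * kpc, 610.0 * kpc) * rad_to_muas)   # ~15 muas
```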
point-like sources and lenses, the magnification $`\mu `$ is then given by (Paczyński ) $$\mu (u)=\frac{u^2+2}{u\sqrt{u^2+4}}.$$ (2) If one assumes uniform rectilinear motion between lens and source with the relative proper motion $`\mu `$, one has $$u(t)=\sqrt{u_0^2+\left(\frac{tt_0}{t_\mathrm{E}}\right)^2},$$ (3) where $`t_\mathrm{E}=\theta _\mathrm{E}/\mu `$, $`u_0`$ gives the impact parameter, and $`t_0`$ gives the time of the smallest separation between lens and source. This means that one observes a light curve $`\mu (u(t))`$ that has the form derived by Paczyński (), the so-called Paczyński curve. For recent and complete reviews of the theory of microlensing and of the observational results we further refer the to the works of Paczyński (), Roulet & Mollerach (), and Jetzer (). More sophisticated models of the lens and the source include the finite source and the binarity (or multiplicity) of these objects. For such models, the light curves can differ significantly from Paczyński curves. If one neglects the binary motion, a binary lens is characterized by two parameters, the mass ratio between the lens objects $`q`$ and their instantanous angular separation $`d`$, measured in units of $`\theta _\mathrm{E}`$. The model of a binary lens includes the configuration of a star that is surrounded by a planet. In the following, we let $`M`$ denote the mass of the more massive object (star), while $`m`$ denotes the mass of the less massive object (planet) and $`q=m/M<1`$. This means that $`\theta _\mathrm{E}`$ refers to the mass $`M`$ of the more massive object. For any mass ratio $`q`$, the caustics of a binary lens can show three different topologies (Schneider & Weiß ; Erdl & Schneider ) depending on the separation $`d`$: For ’wide binaries’ there are two disjoint diamond-shaped caustic near the positions of each of the lens objects, for ’intermediate binaries’ there is only one caustic with 6 cusps, and for ’close binaries’ there is one diamond-shaped caustic near the center-of-mass and two small triangular shaped caustics. As $`q0`$, the region of intermediate binaries vanishes as $`q^{1/3}`$ and the transition close-intermediate-wide occurs at $`d=1`$ (Dominik ). This means that for planets, one has a ’central caustic’ near the star and either a diamond-shaped caustic (for $`d>1`$) or two triangular shaped caustics (for $`d<1`$) at the position that had an image under the lens action of the star, considered at the position of the planet. We will refer to the latter caustic(s) as ’planetary caustic(s)’. Since the caustics are small and well-separated, the light curve mainly follows a Paczyński curve and is only locally distorted by either of the caustics. This allows us to distinguish two main types of anomalies in the light curve, namely the events affected by the central caustic (type I), and the ones affected by one of the planetary caustics (type II). To produce a Type I anomaly, the source has to pass the lens star with a small impact parameter, say $`u_00.1`$. Unless the source size is larger than variations in the magnification pattern, type I anomalies occur in high-magnification events ($`\mu 1/u`$ for $`u1`$). Moreover, the anomaly occurs near the maximum of the underlying Paczyński curve. Griest & Safizadeh () have pointed out that for high-magnification events, the probability to detect a planetary signal, namely as type I anomaly, is very large. 
Underlying this high detection probability is the shape of the central caustic: it is elongated along the lens axis, so that the magnification pattern around the lens star is highly asymmetric. If there are $`N`$ planets with masses $`m_i`$ around the parent star with mass $`M`$, they all perturb the central caustic (Gaudi et al. ), where the effect is proportional to the mass ratios $`q_i=m_i/M`$ (Dominik ). Though in principle one can obtain information about the whole planetary system, the extraction of this information is non-trivial and the results are likely to be ambiguous (Dominik & Covone, in preparation). Type II anomalies are produced when the source passes close enough to the lens to produce a detectable Paczyński curve ($`u_0\lesssim 1`$), but not close enough to feel the effects of the central caustic ($`u_0\gtrsim 0.1`$), and also gets affected by the planetary caustics, so that the source light beam is also deflected by the planet and a perturbation of the Paczyński curve is produced at a time that depends on the angular separation between star and planet. From this time and from the duration of the perturbations, the mass ratio $`q`$ and the separation $`d`$ can be determined from high-quality observations, unless the duration is strongly influenced by the source size (Gaudi & Gould ; Dominik & Covone, in preparation). Experiments towards unresolved star fields in nearby galaxies set very restrictive conditions on the detection of $`\mu `$L events in general and on the detection of anomalies in particular. First, only the parts of the light curve that correspond to large magnifications can be observed. Second, anomalies can only be seen when they constitute very large deviations in the received flux. Therefore, all observed events are high-magnification events, which provides many candidates in which to look for type I anomalies. On the other hand, the background Paczyński curve for type II anomalies is not observed, and the planetary caustic has to be approached very closely to produce a high magnification. Therefore, type II anomalies are not likely to be detected in M31 experiments. Griest & Safizadeh () have studied the influence of the finite source size for type I anomalies. For sources in the Galactic bulge and lenses in the Galactic disk, they find that the finite source size can be neglected even for giant sources ($`R\sim 10R_{\odot}`$) for a solar-mass parent star and a mass ratio $`q>10^{-3}`$. The characteristic quantity for the effect of the finite source size is the ratio between the source size and the physical size of the angular Einstein radius at the position of the source $$r_\mathrm{E}^{\prime}=D_\mathrm{S}\theta _\mathrm{E}=\sqrt{\frac{4GM}{c^2}\frac{D_\mathrm{S}(D_\mathrm{S}-D_\mathrm{L})}{D_\mathrm{L}}}.$$ (4) For lensing of bulge stars by disk stars, $`D_\mathrm{S}\approx 8\text{kpc}`$ and $`D_\mathrm{L}\approx D_\mathrm{S}/2`$, while for M31 sources and lenses, $`D_\mathrm{S}\approx D_\mathrm{L}\approx 600\text{kpc}`$ and $`D_\mathrm{S}-D_\mathrm{L}\approx 10\text{kpc}`$. Therefore $`r_\mathrm{E}^{\prime}`$ is approximately the same in the two cases and the estimates for the effect of the finite source size made for bulge stars and disk lenses are also valid for M31 sources and lenses. If the finite source size becomes non-negligible, the planetary signal is suppressed. We therefore restrict our discussion to planets with mass ratio $`q>10^{-3}`$, i.e. Jupiter-like planets around solar-mass stars and systems with larger mass ratio.
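As a numerical cross-check of this comparison, one can evaluate Eq. (4) for the two configurations; the sketch below assumes a solar-mass lens and uses the distances quoted above.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8        # SI units
M_sun, kpc, au = 1.989e30, 3.086e19, 1.496e11

def r_E_prime(M, D_S, D_L):
    """Einstein radius projected to the source plane, Eq. (4)."""
    return np.sqrt(4 * G * M / c**2 * D_S * (D_S - D_L) / D_L)

# Bulge sources, disk lenses: D_S ~ 8 kpc, D_L ~ D_S/2
print(f"bulge: {r_E_prime(M_sun, 8 * kpc, 4 * kpc) / au:.1f} AU")
# M31 self-lensing: D_S ~ D_L ~ 600 kpc, D_S - D_L ~ 10 kpc
print(f"M31:   {r_E_prime(M_sun, 600 * kpc, 590 * kpc) / au:.1f} AU")
```

Both configurations give $`r_\mathrm{E}^{\prime}`$ of the order of 10 AU, which is why the bulge estimates of the finite-source effect carry over to M31 sources and lenses.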
## 3 Detectability of anomalies in M31 experiments For $`\mu `$L searches towards M31, each pixel of the detector contains light from many unresolved stars. There are several differences between classical microlensing surveys (i.e. surveys on resolved stars) and surveys towards unresolved star fields. The first one concerns the photometric errors. While in the classical regime the photon noise is generally dominated by the light from the lensed star, for observations towards unresolved star fields it is dominated by the flux from stars that are not lensed. This means that the noise does not depend on the magnification. A second important difference is that it is impossible to determine the baseline flux of the lensed star. This means that the actual magnification and the Einstein time $`t_\mathrm{E}`$ of the event are not known. Moreover, in surveys towards unresolved star fields, there is a natural selection bias for the events with respect to the impact parameters and the luminosity of the lensed sources (e.g. Kaplan ): events that involve lensing of giant stars and events with small impact parameters are preferred. Searches for $`\mu `$L events towards unresolved star fields (Crotts ; Baillon et al. ), M31 in particular, have motivated the development of new photometric methods. While the AGAPE team has implemented a ’super-pixel photometry’ method (Ansari et al. ; Kaplan 1998), the Columbia-VATT team has used a ’difference image photometry’ method (Crotts & Tomaney ; Tomaney & Crotts ). Recently, Alard & Lupton () have improved the latter method, yielding the ’Optimal image subtraction’ (OIS) technique. The Columbia-VATT collaboration has found six candidate events towards M31 (Crotts & Tomaney ). AGAPE has observed 7 fields towards M31 in the autumns of 1994 and 1995, using the 2-m Bernard Lyot telescope at the Pic du Midi Observatory. Their data analysis has selected 19 microlensing candidate events that are broadly consistent with Paczyński curves. Only two of them can be retained as convincing candidates at the moment (Melchior ). One of these events shows a small but statistically significant deviation from a Paczyński curve (Ansari et al. ). This event could be due to lensing of a binary source, or even to a binary lens. There are too few data points to resolve the question, and further observations are needed to confirm that the event is due to $`\mu `$L and not to stellar variability. In any case, the possibility of detecting binary lens events towards unresolved star fields has been demonstrated. This gives us some confidence that future $`\mu `$L searches towards nearby galaxies could not only detect binary-lens events, but also reveal Jupiter-like planets. From a general point of view, we expect a larger fraction of anomalous microlensing events, since smaller impact parameters are favoured, so that source trajectories are more likely to pass through the more asymmetric parts of the magnification pattern. However, the less accurate photometry sets a severe limit on the detection of anomalies. In the following, we determine how large an anomaly has to be in order to be detected in an M31 $`\mu `$L experiment. The light in an observed pixel is composed of contributions from the lensed star and many other unresolved stars. Since the light from the lensed star is in general spread over several pixels, only a fraction $`f`$ of it is received on a given pixel.
If $`\mu `$ denotes the magnification of the lensed star, and $`F_{\mathrm{star}}^{(0)}`$ denotes its unlensed flux, the flux variation on the pixel is given by $$\mathrm{\Delta }F_{\mathrm{pixel}}=(\mu -1)fF_{\mathrm{star}}^{(0)},$$ (5) where $`\mu `$, $`f`$ and $`F_{\mathrm{star}}^{(0)}`$ are not observed individually. Let us now consider an anomaly in an event, i.e. a deviation from a Paczyński curve. Let $`\mu `$ denote the magnification for the Paczyński curve and $`\mu ^{\prime}`$ the magnification for the anomalous curve. The difference in the pixel flux variations is then given by $$\mathrm{\Delta }(\mathrm{\Delta }F_{\mathrm{pixel}})=(\mu ^{\prime}-\mu )fF_{\mathrm{star}}^{(0)}.$$ (6) This difference is detectable when it exceeds the rms fluctuation $`\sigma _{\mathrm{pixel}}`$ by a factor $`Q`$, i.e. $$\mu ^{\prime}-\mu \geq Q\frac{\sigma _{\mathrm{pixel}}}{fF_{\mathrm{star}}^{(0)}}.$$ (7) One sees that the brighter the star, the smaller the magnification variation needs to be in order to be detected. Thus, giant stars are preferred as sources. For $`\mu \gg 1`$, one obtains with Eq. (5) a detection threshold $`\delta _{\mathrm{th}}`$ for anomalies, $$\delta _{\mathrm{th}}\equiv \left|\frac{\mu ^{\prime}-\mu }{\mu }\right|_{\mathrm{th}}=Q\frac{\sigma _{\mathrm{pixel}}}{\mathrm{\Delta }F_{\mathrm{pixel}}}.$$ (8) To obtain an estimate, we examine the values of $`\sigma _{\mathrm{pixel}}`$ and $`(\mathrm{\Delta }F_{\mathrm{pixel}})_{\mathrm{max}}`$, i.e. $`\mathrm{\Delta }F_{\mathrm{pixel}}`$ at the maximum, for the 19 candidate events detected by AGAPE and analyzed using the super-pixel photometry technique (Ansari et al. ). This analysis has been made on squares of $`7\times 7`$ pixels, the so-called “super-pixels”, which correspond more or less to the average PSF dimension. It has been found that $`\sigma _{\mathrm{pixel}}\approx 1.7\sigma _\gamma `$, where $`\sigma _\gamma `$ denotes the photon noise. The values of $`\sigma _{\mathrm{pixel}}`$ and $`(\mathrm{\Delta }F_{\mathrm{pixel}})_{\mathrm{max}}`$ as well as their ratio are listed in Table 1. The ratio $`\sigma _{\mathrm{pixel}}/(\mathrm{\Delta }F_{\mathrm{pixel}})_{\mathrm{max}}`$ has the mean value $`0.078\pm 0.026`$. Therefore, for $`Q=2`$, we obtain $`\delta _{\mathrm{th}}\approx 15\%`$ for the detection of anomalies near the maximum. For ’optimal image subtraction’, the effective rms fluctuation can be pushed closer to the photon-noise limit (Alard & Lupton ), yielding $`\sigma _{\mathrm{pixel}}\approx 1.2\sigma _\gamma `$, so that the detection threshold reduces to $`\delta _{\mathrm{th}}\approx 10\%`$. ## 4 Detection probability for planetary signals For $`\mu `$L events towards M31, the lens can be located in the Milky Way halo, the M31 halo, or the M31 bulge. It is almost impossible to discriminate among these different possible locations of the lens from a single observed light curve, though for a very small subset of microlensing events it is possible to tell something about the lens location (Han & Gould ). Since we expect only those events for which the lens is in the M31 bulge to be due to stars, we will consider only those events as potential targets for a search for planetary anomalies. As pointed out before, one also needs a small impact parameter in order to produce an observable signal. Therefore, we restrict our attention to events that satisfy the following two conditions: 1. $`u_0<u_{\mathrm{th}}=0.1`$ (for smaller $`u_{\mathrm{th}}`$, the detection probability will be larger); 2. the lens lies in the bulge or in the disk of the target galaxy.
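As a quick numerical cross-check of the thresholds entering these detection criteria, the following sketch reproduces the estimates of Sect. 3 from the quoted numbers alone; nothing beyond the mean ratio, $`Q=2`$, and the two noise levels is assumed.

```python
# Threshold of Eq. (8) from the quoted mean ratio
# sigma_pixel/(Delta F_pixel)_max = 0.078 (AGAPE candidates), with Q = 2:
Q, ratio = 2.0, 0.078
print(f"super-pixel photometry: delta_th ~ {Q * ratio:.1%}")                 # ~ 15%
# OIS reduces sigma_pixel from ~1.7 to ~1.2 photon-noise units:
print(f"optimal image subtraction: delta_th ~ {Q * ratio * 1.2 / 1.7:.1%}")  # ~ 10%
```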
Since we need more than one observed data point to be confident that we observe a $`\mu `$L anomaly, we require an observable anomaly to deviate by more than $`\delta _{\mathrm{th}}`$ and during more than $`t_\mathrm{E}/100`$, i.e. $`\sim 7`$ hours for a month-long event, therefore requiring some dense sampling over the peak of the $`\mu `$L event. The probability of detecting a signal depends on the projected separation $`d`$ between the star and the Jupiter-like planet, as defined in Sect. 2. Our calculation of the detection probability is similar to the one done by Griest & Safizadeh (), but we use different detection criteria here. For calculating the magnifications, we have used the approach developed by Dominik (), released as the ’Lens Computing Package (LCP)’. The “cross section” of the central caustic depends strongly on the direction of the source trajectory. Due to the elongated shape along the lens axis, it has a maximum for trajectories orthogonal to this axis, and a minimum for parallel trajectories. We have calculated the largest impact parameter $`u_{\mathrm{max}}\leq u_{\mathrm{th}}`$ that satisfies our detection criterion for several different source directions. The detection probability for a planet for each of the considered directions $`\alpha `$ is then simply given by $`P(\alpha )=u_{\mathrm{max}}(\alpha )/u_{\mathrm{th}}`$, using the fact that the distribution of impact parameters is approximately uniform for small impact parameters for events from microlensing experiments towards unresolved star fields. The final detection probability has been calculated by averaging over the different trajectories. The results are shown in Fig. 1. For both values of $`\delta _{\mathrm{th}}`$, there is some reasonable probability to detect planetary signals for planets in the lensing zone (i.e. the range of planetary positions for which the planetary caustics lie within the Einstein ring of the major component of the system, $`0.618\leq d\leq 1.618`$). In agreement with previous work (Griest & Safizadeh ; Dominik ), the detection probability reaches a maximum for planets located close to the Einstein ring of their parent star (the caustic size increases towards $`d\to 1`$). Averaged over the lensing zone, the detection probability is $`\sim 20\%`$ for $`\delta _{\mathrm{th}}=15\%`$ and $`\sim 35\%`$ for $`\delta _{\mathrm{th}}=10\%`$. With a 2-m telescope, one can detect $`\sim 400`$ events per year towards the M31 bulge (Han ). Present-day microlensing surveys towards M31 are still far away from this theoretical limit, but the technique has been demonstrated to be successful, and fruitful developments can be expected in the near future. With $`\sim 50\%`$ of these events being due to M31 bulge lenses (Han ) and $`\sim 50\%`$ of these bulge lens events having $`u_0<0.1`$ (Baillon et al. ), one can expect to detect up to 35 anomalies caused by Jupiter-like planets per year if every M31 bulge star has such a planet in its lensing zone. To be able to observe and characterize the planetary anomaly, frequent observations (every few hours) during the anomaly are necessary. Future observing programs towards M31 or other neighboring galaxies should take this into account. ## 5 Extraction of planetary parameters There is a crucial difference between the detection of a signal that is consistent with a planet and the detection of a planet, i.e. the determination of parameters that unambiguously characterize its nature. In fact, it has been shown that the first microlensing event MACHO LMC-1 is consistent with a planet (Rhie & Bennett ; Alcock et al. ).
However, it appears to be consistent with a binary lens of practically any mass ratio $`q`$ (Dominik & Hirshfeld ), so that the existence of a planet cannot be claimed from this event. Moreover, most of the papers about the detection of planets only show the possibility that a signal that arises from a planet can be detected (Mao & Paczyński ; Griest & Safizadeh ; Safizadeh et al. ), while the question of the extraction of parameters has only been addressed by a few people. Dominik () has stressed that this is complicated by several points: there may be several different models that are consistent with the data, the fit parameters have finite uncertainties (in particular, blending strongly influences $`t_\mathrm{E}`$), and the physical lens parameters only result on a statistical basis using assumptions about galaxy dynamics. Gaudi & Gould () have shown that one needs frequent and precise observations to determine the mass ratio $`q`$ and the separation $`d`$ from type II anomalies. However, it is more difficult to constrain these parameters in type I anomalies. Additional complications arise because one does not obtain information about the time separation between the main peak and the planetary peak, there is a degeneracy between $`d`$ and $`q`$ (Dominik ), and the observed anomaly results from the combined action of all planets around the lens star (Gaudi et al. ). Regardless of the question whether $`d`$ and $`q`$ are well-determined, these parameters give neither the mass of the planet $`m`$ nor its true separation $`a`$. Moreover, an additional uncertainty enters because $`d=a_\mathrm{p}/r_\mathrm{E}`$ corresponds only to the projected instantaneous separation $`a_\mathrm{p}`$. Using models for the galactic dynamics, rather broad probability distributions for $`a`$ and $`m`$ result. However, as we stated before, photometric microlensing is the only method able to detect signals of planets around stars in M31, so if there is a way to find such planets, this is the only one. As we have shown, the prospects for detecting planetary signals are good. This means that even if planets can be truly characterized in only a fraction of the events where signals consistent with a planet can be detected, there is still a chance of being able to claim a planet. Such a subset of events could, e.g., consist of events where the source trajectory crosses the caustic. Such caustic crossing events are likely to provide additional information. A complete discussion of the extraction of planetary parameters is beyond the scope of this paper and will be presented elsewhere (Dominik & Covone, in preparation). ## 6 Summary and conclusions While microlensing is already the only method to detect planets around stars that are at several kpc distance, namely by precise and frequent monitoring of $`\mu `$L events towards the Galactic bulge, future $`\mu `$L experiments towards nearby galaxies such as M31 can push this distance limit much further. Pixel lensing and difference image photometry have been demonstrated to be successful methods to search for $`\mu `$L events towards unresolved star fields, and improvements are expected from the ’Optimal Image Subtraction (OIS)’ technique (Alard & Lupton ). While AGAPE recently reported the observation of possibly the first anomalous $`\mu `$L event towards M31 (Ansari et al. ), we have shown that even planetary systems can give rise to measurable anomalies. These planetary anomalies are due to passages of the source close to the central caustic near the parent star, i.e.
the detection channel discussed by Griest & Safizadeh (). Using the estimate of Han () that about 400 events per year towards M31 can be detected with a 2-m telescope, we estimate that up to 35 Jupiter-mass planets per year can be detected if such planets occur frequently in the lensing zones around their parent stars. Following theoretical work by Gaudi & Sackett (), PLANET (Albrow et al. ) and MPS and MOA (Rhie et al. \[1999b\]) have recently published first results concerning the determination of the abundance of planets from the absence of observed signals. From our estimates it follows that future $`\mu `$L experiments towards M31 can have the power to yield strong constraints on the abundance of Jupiter-mass planets. ## Acknowledgements We gratefully thank V. Cardone, J. Kaplan, Y. Giraud-Heraud, P. Jetzer and E. Piedipalumbo for stimulating discussions. We also thank the referee for remarks that helped to improve the paper. RdR and AAM are financially supported by the M.U.R.S.T. grant PRIN97 “SIN.TE.SI”. GC has received support from the European Social Funds. The work of MD has been financed by a Marie Curie Fellowship (ERBFMBICT972457) from the European Union.
# AdS3 Black Hole Entropy and the Spectral Flow on the Horizon ## Abstract We consider the entropy problem of $`AdS_3`$ black holes using the conformal field theory at the horizon. We observe that the supersymmetry is enhanced at the horizon of the massless $`AdS_3`$ black hole. This allows us to determine the vacuum of the modular invariant conformal field theory to be the NS-ground state (which corresponds to $`AdS_3`$ spacetime). This is smoothly related to the R-ground state (corresponding to the massless black hole) by a spectral flow, which can be understood as a superconformal transformation. preprint: APCTP-1999006// hep-th/9903058 The microscopic origin of the entropy of black holes has been a challenging problem in quantum gravity since its original formulation. It has proven to be a fertile ground for testing the ideas of string theory, and the conformal field theory on D-branes has been quite successful. However, in the D-brane approach, the geometric picture was not so clear because it was formulated in the weak coupling limit. The fact that it can be extended to near-extremal cases has been taken with a grain of salt, since the D-branes are BPS objects. A better understanding of the Bekenstein-Hawking entropy could follow from the relationship between the BTZ black hole in 2+1 dimensions and higher dimensional black holes in string theory. This possibility is due to the observation that the near horizon geometry of higher dimensional black hole configurations can be related to that of the BTZ black hole by some duality transformations. This makes the study of the BTZ black hole quite important. Since black holes in 2+1 dimensions have the asymptotic geometry of anti-de Sitter ($`AdS`$) space, some kind of holographic principle might hold the key to the problem of black hole entropy, in light of recent developments in $`AdS`$/CFT. However, at least for the case of the BTZ black hole, we need not resort to the full string theory on $`AdS_3\times S^3\times M^4`$ to solve the entropy problem. In this paper, we consider the entropy problem of $`AdS_3`$ black holes using the conformal field theory at the horizon (to be precise, we mean the apparent horizon). First we observe that the supersymmetry is enhanced at the horizon of the massless $`AdS_3`$ black hole. This allows us to determine the vacuum of the modular invariant conformal field theory to be the NS-ground state (which corresponds to $`AdS_3`$ spacetime). This is smoothly related to the R-ground state (corresponding to the massless black hole) by a spectral flow, which can be understood as a superconformal transformation. Let us now describe the BTZ black hole. The solution has no curvature singularities, so the BTZ black hole solution came as a surprise. The metric of a black hole of mass $`M`$ and angular momentum $`J`$ is $$ds^2=-N^2dt^2+N^{-2}dr^2+r^2(d\varphi +N^\varphi dt)^2,$$ (1) where the lapse and shift functions are $$N=\left(-8GM+\frac{r^2}{l^2}+\frac{16G^2J^2}{r^2}\right)^{1/2},N^\varphi =-\frac{4GJ}{r^2}.$$ (2) The asymptotic symmetry of $`AdS_3`$ is generated by two copies of the Virasoro algebra with generators $`L_n`$, $`n`$ integer, with central charge $`c=3l/2G`$. Although the bulk degrees of freedom are nondynamical, we have a nontrivial dynamical conformal field theory (CFT) on the boundary. Such conformal symmetry is also found on the horizon of the black hole. Let us sketch the current status of studies on the BTZ black hole.
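Before turning to that status, a small numerical aside on the solution (1)-(2) may be useful: the horizon radii $`r_\pm `$ follow from $`N^2=0`$ as $`r_\pm ^2=4GMl^2(1\pm \sqrt{1-(J/Ml)^2})`$. The sketch below, in illustrative units $`G=l=1`$ with arbitrary $`M`$ and $`J`$, verifies that the lapse vanishes at $`r_+`$.

```python
import numpy as np

def horizons(M, J, G=1.0, l=1.0):
    """Roots r_+ >= r_- of N^2 = -8GM + r^2/l^2 + 16 G^2 J^2/r^2 = 0
    (a black hole requires |J| <= M l)."""
    disc = np.sqrt(1.0 - (J / (M * l))**2)
    return (np.sqrt(4 * G * M * l**2 * (1 + disc)),
            np.sqrt(4 * G * M * l**2 * (1 - disc)))

r_plus, r_minus = horizons(M=1.0, J=0.5)          # illustrative values, G = l = 1
N2 = -8.0 + r_plus**2 + 16 * 0.5**2 / r_plus**2   # lapse squared at r_+
print(r_plus, r_minus, N2)                        # N^2 = 0 up to rounding
```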
Applying Cardy’s formula for the asymptotic growth of states of a CFT, one can count the number density of the microscopic degrees of freedom. Using this, Carlip obtained the black hole entropy from the CFT on the horizon. The derivation was based on the fact that gravity in $`2+1`$ dimensions can be formulated as a topological Chern-Simons theory, and the boundary dynamics at the horizon is a CFT described by the $`SL(2,R)\times SL(2,R)`$ Wess-Zumino-Witten model. However, the difficulties with this approach were spelled out recently. Carlip’s original work is flawed by the fact that, although he resorts to the large $`k`$ limit, where $`k`$ is the level number of the $`SL(2,R)`$ Kac-Moody algebra, $`k`$ is actually not so large on shell at the point most relevant to the black hole entropy in his calculation. This was remedied with a simpler boundary condition for the horizon; he obtained the central charge without the details of the boundary CFT and derived entropies even for higher dimensional black holes. Similar results were also found by Solodukhin. On the other hand, Strominger obtained the entropy from the CFT at the asymptotic boundary of the $`AdS_3`$ black hole. This very simple and elegant observation, however, cannot tell what the boundary degrees of freedom are, or whether the central charge is the effective central charge $`c_{\mathrm{eff}}=c-24\mathrm{\Delta }_{\mathrm{min}}`$ or not, just as in the case of Carlip’s approach. (See also .) Here $`\mathrm{\Delta }_{\mathrm{min}}`$ is the minimum value of the conformal dimension of the CFT. We will elaborate on this point later. Applying the Regge-Teitelboim method, Bañados et al. obtained the algebra satisfied by the global charges without the details of the boundary theory. There is also an explicit derivation using a boundary system coupled to the bulk geometric background. However, as is noted by Carlip, the central charge is modified to be $`c_{\mathrm{eff}}=1`$ and is too small to account for the black hole entropy. There have been many related works due to the recent keen interest in the $`AdS`$/CFT duality of Maldacena. For example, the work of Martinec takes the $`AdS`$/CFT correspondence seriously and argues that Liouville field theory (derived from Chern-Simons gravity) is just an effective theory corresponding to the macroscopic description, and cannot account for the black hole entropy. There is also a stringy interpretation of the entropy. Despite these conformal field theoretic approaches, it has already been pointed out that none is completely satisfactory. In this paper we revisit the CFT approach to the black hole entropy problem. In deriving Cardy’s formula used to calculate the entropy, the following two ingredients are quite essential. First, the partition function of a CFT must have modular invariance, $`\tau \to -1/\tau `$, where $`\tau `$ is the modular parameter. Second, to evaluate the partition function using the saddle point approximation, the value of the central charge has to be shifted to $`c_{\mathrm{eff}}=c-24\mathrm{\Delta }_{\mathrm{min}}`$ whenever the ground state eigenvalue $`\mathrm{\Delta }_{\mathrm{min}}`$ of $`L_0`$ does not vanish. Here we stress that $`\mathrm{\Delta }_{\mathrm{min}}`$ should be evaluated on the plane, i.e. $`L_0`$ is the zero mode of the stress energy tensor on the plane, even though the partition function is that of a CFT on a cylinder. Getting the correct number for $`\mathrm{\Delta }_{\mathrm{min}}`$ is the source of the difficulty in the problem.
Extracting the exact information on the conformal data ($`c`$, $`\mathrm{\Delta }_{\mathrm{min}}`$) of the CFT starting from a black hole geometry is usually quite difficult. One way to obtain $`\mathrm{\Delta }_{\mathrm{min}}`$ easily is to make use of supersymmetry. In the Neveu-Schwarz (NS) sector of a superconformal field theory, $`\mathrm{\Delta }_{\mathrm{min}}=0`$ always, while the Ramond (R) sector has the value $`c/24`$. The $`AdS_3`$ geometries are identified as the bosonic backgrounds of (1,1)-type $`AdS`$ supergravity. To be more specific, we have the following cases: (i) the $`AdS_3`$ vacuum (global $`AdS_3`$ spacetime) has four Killing spinors which are antiperiodic (NS-sector), (ii) a massless black hole has two periodic Killing spinors (R-sector), (iii) a massive extremal black hole has one periodic Killing spinor (R-sector). This can be easily seen by analyzing the Killing spinor equations for each geometry: $$D_\lambda \chi =\frac{ϵ}{2l}\gamma _\lambda \chi .$$ (3) In the above $`D_\lambda =\partial _\lambda +\frac{1}{4}\omega _\lambda ^{ab}\gamma _a\gamma _b`$ is the covariant derivative with respect to the spin connection $`\omega ^{ab}`$, $`ϵ=\pm 1`$ and $`\{\gamma _a,\gamma _b\}=2\eta _{ab}`$. Two values of $`ϵ`$ are possible because there are two independent representations of the Clifford algebra in three spacetime dimensions. Naively one expects that the massless black hole corresponds to the ground state of the boundary CFT, and this looks reasonable because the $`AdS`$ vacuum ($`M=-1/8G`$) is disconnected from the black-hole spectrum ($`M\geq 0`$), although the $`AdS`$ vacuum has the lowest energy. However, this choice of the ground state gives a wrong answer for the Bekenstein-Hawking entropy, while using the $`AdS`$ vacuum instead gives the correct answer. If we restrict the spectrum solely to the black holes, the corresponding CFT seems to be restricted to the R-sector only. We know very well that we cannot have a modular invariant theory restricted to the R-sector only, because the contribution to the partition function from the R-sector transforms into that of the NS-sector under some modular transformations. Furthermore, the operator product expansion (OPE) algebra for a superconformal theory is such that $$[R]\times [R]\sim [NS],[NS]\times [NS]\sim [NS],[R]\times [NS]\sim [R].$$ (4) This means that we cannot restrict the CFT to the R-sector only. Moreover, the identity will eventually come out of the OPE algebra. If we examine this more carefully, we note that the $`AdS`$ vacuum is not quite disjoint from the black hole spectrum, but is connected to it via singular point particle geometries between them (see the preprint version, hep-th/9204099, of ). This becomes quite clear as we consider the boundary geometries. Actually one can create a massless black hole out of $`AdS_3`$ by a head-on collision of two massless particles. In this case the mass gap, $`\mathrm{\Delta }M=1/8G`$, between the $`AdS_3`$ spacetime and the massless black hole exactly matches the energy of those two particles. The spatial geometry of a point particle has a conical singularity. Here the conical singularity is not a serious problem. One can resolve the singularity by replacing the point particle with a matter distribution over a small region. In fact, at high energy scales, this point particle structure can be resolved. The most important thing is that the string does not see this orbifold fixed point as singular. Now we look into the boundary conformal structure starting from its geometry.
The metric on the $`r=r_0`$ surface for the geometry generated by a point particle source of mass $`m=-\alpha ^2/8G`$ in the bulk (the mass is negative here because we set the massless black hole case to have $`m=0`$) is $`ds^2=r_0^2\left(-{\displaystyle \frac{dt^2}{l^2}}+d\varphi ^2\right),\varphi \sim \varphi +2\pi \alpha ,0<\alpha \leq 1,`$ (5) from which we note the deficit angle $`2\pi (1-\alpha )`$. In the above we have redefined $`(1+l^2/r_0^2)^{1/2}t`$ as $`t`$. It is convenient to work in the Euclidean scheme, $$ds_E^2=r_0^2\left(d\tau ^2+d\varphi ^2\right),$$ (6) where $`\tau =it/l`$ is the Euclidean time. One can focus on the $`r=r_0`$ region by rescaling $`d\stackrel{~}{s}^2\equiv ds^2/r_0^2=d\tau ^2+d\varphi ^2`$. Although the boundary topology is $`S^1\times R`$, the angular geometry has a deficit angle. We can map this cylindrical geometry with a deficit angle to a conical one, i.e. the boundary conformal cone, by the exponential conformal mapping $`w=e^{\tau +i\varphi }`$ ($`\overline{w}=e^{\tau -i\varphi }`$): $$ds_{\mathrm{cone}}^2=dwd\overline{w}=R^{2\alpha -2}(dR^2+R^2d\theta ^2),$$ (7) where $`w`$ is the holomorphic coordinate on the cone. In the above we have introduced the polar coordinates $`R=(\alpha e^\tau )^{1/\alpha }`$ and $`\theta =\varphi /\alpha `$ to exhibit the conical structure, where $`0\leq R<\mathrm{\infty }`$, $`\theta \sim \theta +2\pi `$. With another conformal transformation $`w(z)=z^\alpha /\alpha `$, $`\overline{w}=\overline{z}^\alpha /\alpha `$ we get to the conformal plane: $`ds_{cone}^2=z^{\alpha -1}\overline{z}^{\alpha -1}dzd\overline{z}.`$ (8) So $`\alpha =1`$, which is the plane geometry, corresponds to the $`AdS`$ vacuum, and the singular limit $`\alpha \to 0`$ approaches the massless black hole. The mass parameter of the point particle is a continuous parameter which interpolates between these two limits. The stress energy tensor of the CFT on the cone is obtained from the following conformal transformation with the Schwarzian derivative: $`T_{\mathrm{plane}}(z)=z^{2(\alpha -1)}T_{\mathrm{cone}}(w)-{\displaystyle \frac{c}{24z^2}}(\alpha -1)(\alpha +1).`$ (9) Thus the conformal weight of a primary on the cone is related to that on the plane as follows (with similar expressions for $`\overline{L}_0`$): $`(L_{\mathrm{cone}})_0=(L_{\mathrm{plane}})_0+{\displaystyle \frac{c}{24}}(\alpha -1)(\alpha +1).`$ (10) Here we note that $`L_{\mathrm{cone}}`$ interpolates between $`L_{\mathrm{cylinder}}`$ (for $`\alpha \to 0`$) and $`L_{\mathrm{plane}}`$ (for $`\alpha =1`$). For the spinning particle case, we can follow similar steps. In that case $`L_0`$ and $`\overline{L}_0`$ are shifted differently, so that $`L_0-\overline{L}_0\neq 0`$. There is another connection between these two limits: the spectral flow between the corresponding states in the boundary CFT. However, to realize the spectral flow one actually needs extended supersymmetry. In fact, one can show that at the black hole horizon there is supersymmetry enhancement. To see this, let us consider the Killing spinor equation for the massless black hole case.
$`D\chi `$ $`=`$ $`\left(d+{\displaystyle \frac{1}{2}}{\displaystyle \frac{r}{l}}\left({\displaystyle \frac{dt}{l}}\gamma _0\gamma _1-d\varphi \gamma _1\gamma _2\right)\right)\chi `$ (11) $`=`$ $`{\displaystyle \frac{ϵ}{2l}}\left({\displaystyle \frac{rdt}{l}}\gamma _0+{\displaystyle \frac{ldr}{r}}\gamma _1+rd\varphi \gamma _2\right)\chi `$ (12) The solutions are $`ϵ=1:\chi `$ $`=`$ $`\sqrt{r}\chi _+`$ (13) $`\chi `$ $`=`$ $`\left({\displaystyle \frac{1}{\sqrt{r}}}+{\displaystyle \frac{\sqrt{r}}{2l}}x^\alpha \gamma _\alpha \right)\chi _{-}`$ (14) $`ϵ=-1:\chi `$ $`=`$ $`\left({\displaystyle \frac{1}{\sqrt{r}}}-{\displaystyle \frac{\sqrt{r}}{2l}}x^\alpha \gamma _\alpha \right)\chi _+`$ (15) $`\chi `$ $`=`$ $`\sqrt{r}\chi _{-},`$ (16) where $`x^\alpha \gamma _\alpha =t\gamma _0/l+\varphi \gamma _2`$ and the constant spinors satisfy $`\gamma _1\chi _\pm =\pm \chi _\pm `$. Due to the $`\varphi `$-dependent terms, only two out of these four Killing spinors survive upon the identification $`\varphi \sim \varphi +2\pi `$. However, near the horizon the $`\varphi `$-dependent terms drop out because they are relatively small compared to the other $`1/\sqrt{r}`$ terms, and all four solutions survive, enhancing the supersymmetry. Such an enhancement is not surprising, as we can see in similar cases discussed before. Actually this enhancement of supersymmetry in the boundary CFT is consistent with the spacetime supersymmetry. So if the horizon is where the boundary CFT (for the microscopic degrees of freedom of BTZ black holes) is located, then we can solve the entropy problem. The (1,1)-type $`AdS`$ supergravity in the bulk gives rise to a (2,2) supersymmetric horizon CFT. One can ask whether the boundary CFT makes any sense when the mass of the black hole is zero, because the horizon is then just a point. This is not a problem because the BTZ black hole coordinate patch cannot cover the whole region of $`AdS_3`$. In fact, the $`r=0`$ ‘point’ is a null surface in the global coordinates for $`AdS_3`$, delimiting the Poincaré region. Below the black hole spectrum, that is to say, for the $`AdS_3`$ spacetime and the conical geometries of point particles, one can take the boundary at any point with finite radius. This is so because the boundary geometry looks the same regardless of the value of the radius. The isomorphism which maps the R-sector to the NS-sector in the (2,2) supersymmetric CFT is the spectral flow. We want to show that the spectral flow is in fact a symmetry transformation, i.e. a superconformal transformation in superspace. To see this we write down the super stress-energy tensor in the superspace formalism as follows: $`𝒥(z,\theta ^+,\theta ^{-})=J(z)+\theta ^+G^{-}(z)+\theta ^{-}G^+(z)+i\theta ^+\theta ^{-}T(z).`$ (17) The superconformal transformation of the stress energy tensor is given by $`𝒥(z,\theta ^+,\theta ^{-})=(𝒟^+\stackrel{~}{\theta }^{-})(𝒟^{-}\stackrel{~}{\theta }^+)\stackrel{~}{𝒥}(\stackrel{~}{z},\stackrel{~}{\theta }^+,\stackrel{~}{\theta }^{-})+{\displaystyle \frac{ik}{4}}S(Z,\stackrel{~}{Z}),`$ (18) where $`𝒟^\pm =\partial /\partial \theta ^{\mp }+\theta ^\pm \partial /\partial z`$ are the superderivatives and $`S(Z,\stackrel{~}{Z})`$ is the $`N=2`$ super-Schwarzian derivative. $`Z=(z,\theta ^+,\theta ^{-})`$ is the complex $`N=2`$ supercoordinate.
In general, a superconformal transformation on a super Riemann surface is given as $`\stackrel{~}{z}=f(z),\stackrel{~}{\theta }^+=\mu ^{++}(z)\theta ^++\mu ^{+-}(z)\theta ^{-},\stackrel{~}{\theta }^{-}=\mu ^{-+}(z)\theta ^++\mu ^{--}(z)\theta ^{-},`$ (19) with the following superconformal condition: $`\mu ^{++}=\sqrt{f^{\prime}}e^{i\xi },\mu ^{--}=\sqrt{f^{\prime}}e^{-i\xi },\mu ^{+-}=\mu ^{-+}=0,`$ (20) for some functions $`f(z)`$ and $`\xi (z)`$. The conventional conformal transformation corresponds to the case $`\xi =0`$. The prime denotes the derivative with respect to $`z`$. Under this the components of the super stress-energy tensor transform as $`J(z)`$ $`\to `$ $`f^{\prime}\stackrel{~}{J}(\stackrel{~}{z})-{\displaystyle \frac{k}{2}}\xi ^{\prime}`$ (21) $`G^+(z)`$ $`\to `$ $`(f^{\prime})^{\frac{3}{2}}e^{i\xi }\stackrel{~}{G}^+(\stackrel{~}{z}),G^{-}(z)\to (f^{\prime})^{\frac{3}{2}}e^{-i\xi }\stackrel{~}{G}^{-}(\stackrel{~}{z})`$ (22) $`T(z)`$ $`\to `$ $`(f^{\prime})^2\stackrel{~}{T}(\stackrel{~}{z})+2\xi ^{\prime}f^{\prime}\stackrel{~}{J}(\stackrel{~}{z})+{\displaystyle \frac{k}{4}}\left[-2(\xi ^{\prime})^2+{\displaystyle \frac{f^{\prime \prime \prime }}{f^{\prime}}}-{\displaystyle \frac{3}{2}}\left({\displaystyle \frac{f^{\prime \prime }}{f^{\prime}}}\right)^2\right].`$ (23) Denoting the right hand sides as $`J_\xi (z)`$, $`G_\xi ^\pm (z)`$ and $`T_\xi (z)`$ respectively, one can simplify the whole expression as $`J_\xi (z)=J_0(z)-{\displaystyle \frac{c}{6}}\xi ^{\prime},G_\xi ^\pm (z)=G_0^\pm (z)e^{\pm i\xi (z)},`$ (24) $`T_\xi (z)=T_0(z)+2\xi ^{\prime}J_0(z)-{\displaystyle \frac{c}{6}}(\xi ^{\prime})^2,`$ (25) where $`k=c/3`$ was used. We note that $`J_0(z)`$, $`G_0^\pm (z)`$ and $`T_0(z)`$ here are nothing but the standard superconformal transformations. Rephrasing $`J`$ as $`iJ/2`$ and $`\xi `$ as $`-i\eta \mathrm{ln}z`$, we are led to the spectral flow map for the $`N=2`$ superconformal symmetry. This means that the spectral flow can be understood as a kind of superconformal transformation. The ‘twist’ operator which connects the NS-vacuum and the R-vacuum is not just a tool here but belongs to the set of primary operators. (The same interpretation is expected for the spectral flow of the higher extended superconformal symmetries.) The true vacuum turns out to be the NS-vacuum. This makes $`c_{\mathrm{eff}}=c`$, whatever theory the horizon CFT is. Another advantage of the spectral flow is that we can understand black hole creation (an R-state) from particle (NS-state) collisions in terms of the OPE. A particle collision would be an OPE of two NS-states, and it seems that making an R-state is impossible. However, with the spectral flow we can actually have $$[NS]_\eta \times [NS]_{\eta ^{\prime}}\sim [NS]_{\eta +\eta ^{\prime}}.$$ (26) If $`\eta +\eta ^{\prime}=1`$, then in fact $`[NS]_1=[R]`$. Cardy’s formula, which counts the physical degrees of freedom, must be invariant under this symmetry, and thus under the spectral flow. The relevance of the spectral flow for black hole entropy was discussed in ref. , where the unitary representation of the $`N=2`$ superconformal algebra is used. A description of the spectral flow in terms of charged particles coupled to Chern-Simons gauge theory was discussed in a different setting. To calculate the black hole entropy from this horizon CFT, we have to know the central charge $`c`$ and the conformal weight $`\mathrm{\Delta }`$ of the horizon state of the black hole. We use the results recently obtained by Carlip, $$c=\frac{3r_+\beta }{GT},\mathrm{\Delta }=\frac{r_+T}{8G\beta },$$ (27) where $`\beta `$ is the inverse Hawking temperature and $`T`$ is an arbitrary periodicity, which does not affect the result for the entropy.
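That $`T`$ (and $`\beta `$) indeed drop out of the entropy can be verified symbolically. A minimal sympy sketch applies Cardy's formula $`S=2\pi \sqrt{c\mathrm{\Delta }/6}`$ to the holomorphic sector (anticipating the remark below that the right-moving central charge of the horizon CFT vanishes):

```python
import sympy as sp

r_p, beta, T, G = sp.symbols('r_+ beta T G', positive=True)
c = 3 * r_p * beta / (G * T)            # central charge, Eq. (27)
Delta = r_p * T / (8 * G * beta)        # conformal weight, Eq. (27)
S = 2 * sp.pi * sp.sqrt(c * Delta / 6)  # Cardy formula, holomorphic sector
print(sp.simplify(S))                   # -> pi*r_+/(2*G), i.e. S = 2*pi*r_+/(4G)
```

The result $`S=2\pi r_+/4G`$ is independent of both $`T`$ and $`\beta `$, as claimed.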
Unlike the asymptotic CFT, the central charge $`c`$ depends on the inner and outer horizon radii $`r_{-}`$, $`r_+`$ of the black hole. Different black holes give different CFTs on the horizon. Another remarkable feature of the horizon CFT is that the central charge $`\overline{c}`$ of the right-moving mode vanishes in this case. The same features were also found in . In three dimensions, one can determine the arbitrary periodicity $`T`$ as follows. Given a black hole of mass $`\widehat{M}`$ and angular momentum $`\widehat{J}`$, the conformal data are fixed as in (27). Each horizon state of conformal weight $`(\delta ,\overline{\delta })`$ contributes to the bulk mass $`M`$ and angular momentum $`J`$. Although the energy of a horizon state need not be equal to the bulk mass, they must be proportional to each other: $`\delta +\overline{\delta }=\gamma Ml+\zeta `$, where $`\gamma `$ and $`\zeta `$ are dimensionless constants that adjust the different scalings and the different base points of the energy, respectively. One can also find the relation between the angular momentum of the horizon state and the bulk angular momentum $`J`$ as $`\delta -\overline{\delta }=\gamma J`$. The energy gap between the R-vacuum and the NS-vacuum of the horizon CFT matches the mass gap between the massless black hole and the $`AdS`$ vacuum, $`{\displaystyle \frac{c}{24}}={\displaystyle \frac{\gamma l}{8G}},`$ (28) which tells us that $`c=3\gamma l/G`$ and $`\beta /T=\gamma l/r_+`$. Therefore the ambiguity in the periodicity $`T`$ is related to the different energy scalings of the horizon CFT and the bulk $`AdS`$ geometry. From Carlip’s results and some known facts about the $`AdS`$ vacuum one can determine $`\gamma `$ and $`\zeta `$ completely, and therefore fix the periodicity: $`\delta (M,J)={\displaystyle \frac{r_+}{2l}}\left(Ml+J+{\displaystyle \frac{l}{8G}}\right){\displaystyle \frac{\beta }{T}}`$ (29) $`\overline{\delta }(M,J)={\displaystyle \frac{r_+}{2l}}\left(Ml-J+{\displaystyle \frac{l}{8G}}\right){\displaystyle \frac{\beta }{T}}`$ (30) $`{\displaystyle \frac{\beta }{T}}={\displaystyle \frac{\sqrt{2}l}{\left(l^2+\left(r_++r_{-}\right)^2\right)^{1/2}}}.`$ (31) The central charge $`c`$ together with the conformal weight $`\delta (\widehat{M},\widehat{J})=\mathrm{\Delta }`$ results in the correct statistical entropy $`S=2\pi r_+/4G`$. Lastly, we would like to comment on the success of string theory, which gives the correct entropy regardless of the delicate arguments we had to give in the supergravity theory. The successes which specify the microscopic structure are those works making use of BPS arguments. At least in the weak coupling regime, they point out that the microscopic structure resides on the world volume of D-branes. In this weak coupling region, it is meaningless to speak of the bulk geometry. For the D1-D5-KK case, the effective world volume theory is the (4,4) supersymmetric sigma model on the symmetric product of K3. Owing to the extended supersymmetry, Cardy’s formula can be applied at any point of the ground-state family generated by the spectral flow, as long as both the R-sector and the NS-sector are fully included in the calculation. Fortunately for this world volume theory, there is no reason to pick out the R-sector ground state as the true ground state. There are several works which seem to produce the correct central charge and entropy using the AdS/CFT correspondence, also without referring to the points we have resolved above.
In fact, one needs to address the same question of the true ground state in this scheme as well, as long as the microscopic structure is not specified. (Even though the microscopic structure is not specified, one can determine the ground state using supersymmetry.) In the calculation of the two-point function of the stress energy tensor, it has been assumed a priori that the ground state is in the NS-sector, since the Poincaré coordinates (without identification) are usually used. However, if one neglects the 3-sphere part of $`AdS_3\times S_3`$, the supersymmetry on the asymptotic boundary is just $`N=1`$ for the R-sector ground state, because there is no supersymmetry enhancement on the asymptotic boundary, in contrast to the horizon. Therefore the spectral flow is not expected in this asymptotic CFT. One has to resort to another method to show that the NS-vacuum is the true vacuum of the asymptotic CFT. Acknowledgements We would like to thank J. de Boer, A. Giveon and N. Ohta for discussions. This work is supported by KOSEF (981-0201-002-2) and by the Korea Research Foundation (1998-015-D00073).
# Raman scattering in a two-dimensional electron gas: Boltzmann equation approach ## Abstract The inelastic light scattering in a 2-d electron gas is studied theoretically using Boltzmann equation techniques. Electron-hole excitations produce a Raman spectrum essentially different from the one predicted for the 3-d case. In the clean limit it has the form of a strong non-symmetric resonance, due to the square-root singularity at the electron-hole frequency $`\omega =vk`$, while in the opposite dirty limit the usual Lorentzian shape of the cross section is reestablished. The effects of the electromagnetic field are considered self-consistently and the contribution from collective plasmon modes is found. It is shown that, unlike 3-d metals, where plasmon excitations are unobservable (because of the very large transferred frequencies required), the two-dimensional electron system gives rise to a low-frequency ($`\omega \propto k^{1/2}`$) plasmon peak. A measurement of the width of this peak can provide data on the magnitude of the electron scattering rate. PACS: 73.50.-h, 78.30.-j Raman scattering is a powerful method for experimental studies of elementary excitations in various structures. In particular, high-$`T_c`$ superconductors produce Raman spectra which remain mysterious over a broad region of frequency. Namely, the high-frequency continuum, the $`2\mathrm{\Delta }`$-peak (see, e.g., Ref. ), and the two-magnon spectra, revealing strong mutual influence between antiferromagnetism and superconductivity, still do not have a robust self-consistent theoretical description. Therefore, the development of the theory of Raman scattering from different excitations is still of considerable current interest. Here we are interested in the Raman scattering from excitations of a 2-d normal electron system, namely from electron-hole pairs and collective plasmon excitations. We show that the Raman scattering cross section in 2-d systems differs from the spectrum of a 3-d metal in two respects. First, the scattering from electron-hole pairs becomes more singular (due to the square-root singularity in the density of states). The finite strength of the electron-hole contribution is determined by the electron scattering rate. Second, since the plasmon spectrum is gapless ($`\omega \propto k^{1/2}`$), the corresponding peak is located in the reasonably low-frequency ($`\sim 10`$ meV) range. Inelastic light scattering in 2-d systems has been extensively used in investigations of the excitations of the Fermi sea, the energy gap in the fractional quantum Hall regime, exciton states, and spin-density and charge-density excitations. Experimental evidence for 2-d plasmons in GaAs heterostructures comes from Raman spectroscopy measurements in a magnetic field. By varying the direction of the magnetic field it is possible to distinguish the Raman response of a 2-d electron system from the contribution of a background. The $`k^{1/2}`$ spectrum is clearly observed; however, no data are available on the dependence of the lineshape of the plasmon peak on the electron scattering rate. The standard quantum mechanical theory of Raman scattering in electron systems applies the Green function formalism. In the present paper we use a different approach, based on the Boltzmann equation. Such a semiclassical method is valid when the characteristic scale of the transferred light momentum is less than the Fermi momentum of the 2-d electron gas.
For the typical situation of a GaAs/AlGaAs heterojunction, the concentration of carriers is of the order of $`(1-6)\times 10^{11}\,\mathrm{cm}^{-2}`$, which corresponds to a Fermi momentum of $`(3-8)\times 10^5\,\mathrm{cm}^{-1}`$. On the other hand, the typical values of the momentum transfer are considerably lower, $`(0.2-1)\times 10^5\,\mathrm{cm}^{-1}`$. The main advantage of the kinetic approach is the possibility of including effects of electron scattering and of accounting for the electromagnetic field in a self-consistent manner by solving the Maxwell equation simultaneously. This gives a plasmon contribution together with the electron-hole continuum. It is also possible to generalize this theory easily to the case of an applied external magnetic field. The system under study is shown in Fig. 1. The two-dimensional electron gas (the $`z=0`$ plane) is embedded in a host material with dielectric constant $`ϵ`$. The polarizations of the incident (i) and, hence, of the scattered (s) light waves are assumed to be parallel to the plane $`z=0`$. The effective Hamiltonian describing Raman scattering from electronic fluctuations is bilinear in the vector potential of light, $$H_{eff}=\frac{e^2}{2mc^2}\int d^2s\,\delta n_\gamma (𝐬,t)𝐀^2(𝐬,t),$$ (1) where the fluctuation $`\delta n_\gamma `$, expressed via the nonequilibrium distribution function $`\delta f_p(𝐬,t)`$, $$\delta n_\gamma (𝐬,t)=\int \frac{2d^2p}{(2\pi )^2}\gamma _p\delta f_p(𝐬,t),$$ (2) differs from the usual electronic density only by the anisotropic dimensionless factor $`\gamma _p`$ (the electron-light vertex). This factor depends on the light polarization and accounts for the virtual interband transitions. Its exact form is not essential for what follows (see Ref. ). Varying the expression (1) with respect to the vector potential $`𝐀`$, we obtain the electron current induced by the incident light with frequency $`\omega ^{(i)}`$ and in-plane wave vector $`𝐤_s^{(i)}`$: $$𝐣^{(i)}(𝐬,t)=\frac{e^2}{mc}\delta n_\gamma (𝐬,t)𝐀^{(i)}\mathrm{exp}(-i\omega ^{(i)}t+i𝐤_s^{(i)}𝐬).$$ (3) The 2-d current (3) produces a scattered electromagnetic wave with a different frequency $`\omega ^{(s)}`$ and wave vector $`𝐤_s^{(s)}`$. The solution of the corresponding inhomogeneous Maxwell equation is straightforward. After some simple calculations it gives, for the amplitude of light scattered into the half-space $`z>0`$, the expression $$A(\omega ^{(s)},𝐤_s^{(s)})=\frac{2\pi ie^2A^{(i)}}{mc^2k_z^{(s)}}\delta n_\gamma (\omega ,𝐤_s),$$ (4) where $`k_z^{(s)2}=ϵ\omega ^{(s)2}/c^2-𝐤_s^{(s)2}`$; the Fourier component of the density fluctuations depends on the transferred energy $`\omega =\omega ^{(i)}-\omega ^{(s)}`$ and momentum $`𝐤_s=𝐤_s^{(i)}-𝐤_s^{(s)}`$. The Raman scattering cross section, defined as the normalised energy flow $`|\omega ^{(s)}A^{(s)}/\omega ^{(i)}A^{(i)}|^2`$ related to the interval $`d\omega ^{(s)}d^2k_s^{(s)}/(2\pi )^3`$, has the form $$\frac{d^2\sigma }{d\omega ^{(s)}do^{(s)}}=\frac{ϵ^{1/2}e^4}{2\pi m^2c^5}\frac{\omega ^{(s)3}}{\omega ^{(i)2}k_z^{(s)}}K(\omega ,𝐤_s),$$ (5) where $`K(\omega ,𝐤_s)`$ is the Fourier component of the correlator of the density fluctuations $$K(𝐬-𝐬^{\prime},t-t^{\prime})=\langle \delta n_\gamma (𝐬,t)\delta n_\gamma (𝐬^{\prime},t^{\prime})\rangle .$$ One can argue that the expression (5) diverges as the direction of the scattered light approaches the electron plane, $`k_z^{(s)}\to 0`$. In fact, this means that, as soon as the ”width” $`l`$ of the two-dimensional system is assumed to be the smallest of all the characteristic lengths of the problem, we are restricted to the limit $`k_z^{(s)}\ll l^{-1}`$.
To evaluate this correlator we apply the fluctuation-dissipation theorem, which expresses it via the imaginary part of the generalized response $`\delta n_\gamma (\omega ,𝐤)`$ to an arbitrary external potential $`U(\omega ,𝐤)`$ (in what follows we omit the subscript $`s`$): $$K(\omega ,𝐤)=-\frac{2}{1-\mathrm{exp}(-\omega /T)}\text{Im}\left(\frac{\delta n_\gamma (\omega ,𝐤)}{U(\omega ,𝐤)}\right).$$ (6) The simplest way to derive the generalized response (6) is to make use of the linearized Boltzmann equation for the nonequilibrium part of the distribution function, $`\delta f_p=-\chi _p\partial f_0/\partial \epsilon `$: $$i(\omega -𝐤𝐯+i\tau ^{-1})\chi _p(𝐤,\omega )=i\omega \gamma _pU(𝐤,\omega )-e𝐯𝐄(𝐤,\omega ),$$ (7) where $`f_0(\epsilon )`$ is the local-equilibrium Fermi-Dirac distribution function. The second term on the right hand side of Eq. (7) accounts for the fluctuating electromagnetic field $`𝐄`$. It satisfies the Maxwell equation with the nonequilibrium electric current determined from Eq. (7): $$\text{rot rot}𝐄(z,𝐬,\omega )-\frac{ϵ\omega ^2}{c^2}𝐄(z,𝐬,\omega )=\frac{4\pi ie\omega }{c^2}\delta (z)\langle 𝐯\chi _p(𝐬,\omega )\rangle .$$ (8) Here the angular brackets denote the integral over the Fermi line, $$\langle \mathrm{}\rangle =\int \frac{2dp_F}{v(2\pi )^2}(\mathrm{}).$$ We are interested in the solution of Eq. (8) at $`z=0`$. A straightforward derivation gives for the Fourier component of the electric field $$𝐄(z=0,𝐤,\omega )=-\frac{2\pi ie}{ϵ\omega }\sqrt{k^2-ϵ\omega ^2/c^2}\,\langle 𝐯\chi _p(𝐤,\omega )\rangle .$$ (9) Substituting the solution (9) into the Boltzmann equation (7), one gets an integral equation for the electronic density fluctuation $`\chi _p(𝐤,\omega )`$. Such an equation has a simple solution which, after substitution into Eq. (2) and then into the fluctuation-dissipation theorem (6), gives the Raman cross section. Finally we obtain (see Fig. 2) $`K(𝐤,\omega )`$ $`\propto `$ $`\text{Im}\left\langle {\displaystyle \frac{\omega \gamma _p^2}{\omega -𝐤𝐯+i\tau ^{-1}}}\right\rangle +`$ (10) $`\text{Im}\,F_\alpha (𝐤,\omega )D_{\alpha \beta }(𝐤,\omega )F_\beta (𝐤,\omega ),`$ (11) where the proportionality coefficient (the Bose factor of Eq. (6)) is omitted; $`D(𝐤,\omega )`$ is the two-dimensional electromagnetic Green function $$D_{\alpha \beta }^{-1}(𝐤,\omega )=-\frac{1}{\omega }\left\langle \frac{v_\alpha v_\beta }{\omega -𝐤𝐯+i\tau ^{-1}}\right\rangle +\frac{ϵ\delta _{\alpha \beta }}{2\pi e^2\sqrt{k^2-ϵ\omega ^2/c^2}}$$ (12) and $`F_\alpha (𝐤,\omega )`$ is the oscillator strength $$F_\alpha (𝐤,\omega )=\left\langle \frac{v_\alpha \gamma _p}{\omega -𝐤𝐯+i\tau ^{-1}}\right\rangle .$$ (13) We devote the rest of the paper to the discussion of the different terms in the Raman cross section (10). The first term represents the scattering from electron-hole pairs. For the estimates we suppose the Fermi line to be isotropic, i.e., $$K_{eh}(k,\omega )\propto \frac{m\gamma ^2}{\pi }\text{Im}\frac{\omega }{\sqrt{(\omega +i\tau ^{-1})^2-k^2v^2}}.$$ (14) In the dirty limit $`kv\tau <<1`$ (the so-called zero-momentum-transfer limit) one can neglect $`v^2k^2`$ in the denominator. The cross section then takes the same well-known Lorentzian form as in the 3-d case: $`\omega \tau /(\omega ^2\tau ^2+1)`$. In the clean limit $`kv\tau >>1`$ the expression (14) has a square-root singularity at $`\omega =kv`$ rather than a step-like one (as in the 3-d case). This results in the strong non-symmetric resonance (see Fig. 3); the finite height of this resonance is controlled by the scattering rate $`\tau ^{-1}`$. For an anisotropic Fermi line the resonance location is defined by the maximum value of the electron velocity along the momentum transfer, $`\omega =𝐤𝐯_{max}`$.
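A short numerical sketch of Eq. (14) makes the two limits explicit. The units are illustrative ($`v=1`$), the values of $`kv\tau `$ are arbitrary choices, and the omitted proportionality factor of Eq. (6) is dropped.

```python
import numpy as np

def K_eh(omega, k, v=1.0, tau=1.0):
    """Electron-hole contribution of Eq. (14), up to the omitted
    overall proportionality factor (arbitrary units)."""
    return np.imag(omega / np.sqrt((omega + 1j / tau)**2 - (k * v)**2))

omega = np.linspace(0.01, 3.0, 1000)
clean = K_eh(omega, k=1.0, tau=50.0)   # k*v*tau = 50 >> 1: sharp resonance
dirty = K_eh(omega, k=1.0, tau=0.1)    # k*v*tau = 0.1 << 1: broad Lorentzian
print("clean-limit peak at omega =", omega[np.argmax(np.abs(clean))])  # ~ kv = 1
```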
The second term in Eq. (10) represents the effects of the Coulomb interaction and of collective electron excitations, namely 2-d plasmons. This term is important in the clean limit only. At low transferred frequencies $`\omega <<kv`$ it results in the screening of the isotropic scattering channel through the renormalization of the electron-light vertex, $`\gamma _p\to \gamma _p-\langle \gamma _p\rangle /\langle 1\rangle `$, similar to the usual 3-d case. At high transferred frequencies $`\omega >>kv`$ the second term in Eq. (10) gives a 2-d plasmon peak located at the plasmon frequency $`\omega _{pl}(k)`$, which is determined by the dispersion equation $$\omega ^2=\frac{2\pi e^2}{ϵ}\langle v_k^2\rangle \sqrt{k^2-ϵ\omega ^2/c^2},$$ (15) where $`v_k`$ means the component of the electron velocity in the $`k`$-direction. The typical momenta transferred $`k`$ are of the order of the light momenta. Hence from formula (15) one can see that $`\omega >>vk`$, confirming that the initial assumption was correct. We can also omit the term $`ϵ\omega ^2/c^2`$ (this term accounts for the finite plasmon velocity) in Eq. (15) in comparison with the term $`k^2`$, due to the fact that $`c^2k>>v^2p_F`$ for typical values of $`k`$. Indeed, this means that it is enough to use the Poisson equation for the electromagnetic fluctuations instead of the Maxwell equation (8). The only difference occurs at very small transferred momenta, where the Poisson equation gives an infinite plasmon velocity in the limit $`k\to 0`$. The formula (15) is valid only if the plasmon wavelength becomes large compared to the layer thickness, $`kl<<1`$. If this condition is violated, the three-dimensional problem has to be solved with boundary conditions satisfied on both sides of the layer. Its solution gives the expression $$\omega _{pl}^2(k)=\frac{4\pi e^2}{ϵ}\sqrt{\langle v_k^2\rangle \langle v_z^2\rangle }\,\text{tanh}\left(\sqrt{\frac{\langle v_k^2\rangle }{\langle v_z^2\rangle }}\frac{kl}{2}\right),$$ where the electron velocity along the perpendicular direction, $`v_z`$, appears. Note that the angular brackets now denote the integral over the three-dimensional Fermi surface. When $`l\to 0`$ this expression reduces to Eq. (15), and for $`l\to \mathrm{\infty }`$ it gives the frequency of the ordinary 3-d plasmon. Near the plasmon resonance the Raman cross section has the symmetric Lorentzian lineshape $$K_{pl}(k,\omega )\propto \frac{m\gamma ^2}{8\pi \omega }\frac{k^2v^2\tau ^{-1}}{(\omega _{pl}(k)-\omega )^2+\tau ^{-2}/4}.$$ (16) The relative height of the two resonances (14) and (16) is $`K_{eh}/K_{pl}\sim k^{3/2}v^{3/2}\tau ^{1/2}/\omega _{pl}`$ and can be either more or less than unity, depending on the momentum transfer $`k`$ and the scattering rate $`\tau ^{-1}`$. In conclusion, we have calculated the Raman scattering intensity from two-dimensional electronic fluctuations. The main features distinguishing it from a usual three-dimensional metal are the more singular electron-hole contribution (14) and the low-frequency plasmon resonance (16). The electronic Raman scattering in a 2-d system in a transverse magnetic field can be studied using the same Boltzmann equation technique, as has been done for the 3-d electron system. The author thanks Prof. L.A. Falkovsky for numerous fruitful discussions and valuable comments. The work was supported by the Russian Foundation for Basic Research, Grant No 97-02-16044, and by a scholarship from KFA, Forschungszentrum in Juelich, Germany.
# CANTED ANTIFERROMAGNETIC PHASE IN A DOUBLE QUANTUM WELL IN A TILTED QUANTIZING MAGNETIC FIELD ## Abstract We investigate the double-layer electron system in a parabolic quantum well at filling factor $`\nu =2`$ in a tilted magnetic field using capacitance spectroscopy. A competition between two ground states is found at a Zeeman splitting appreciably smaller than the symmetric-antisymmetric splitting. Although at the transition point the system breaks up into domains of the two competing states, the activation energy turns out to be finite, signaling the occurrence of a new insulator-insulator quantum phase transition. We interpret the obtained results in terms of a predicted canted antiferromagnetic phase. Much interest in double-layer systems is aroused by the presence of an additional degree of freedom associated with the third dimension. In a double-layer system with symmetric electron density distributions, in a magnetic field normal to the interface, at filling factor $`\nu =2`$, a competition between different ground states is expected, controlled by the relation between the Coulomb interaction energy, the spin splitting, and the symmetric-antisymmetric splitting caused by interlayer tunneling. In the simplest single-particle picture each Landau level has four sublevels originating from the spin and subband splittings. With increasing spin splitting a transition should occur from a spin unpolarized ground state with anti-parallel spin orientations of the occupied sublevels to a ferromagnetic one with parallel spins when the Zeeman energy $`\mu gB`$ is equal to the symmetric-antisymmetric splitting $`\mathrm{\Delta }_{SAS}`$. Experimentally, however, a clear transition was observed at a Zeeman energy significantly smaller than $`\mathrm{\Delta }_{SAS}`$ , which points to the importance of many-body effects for the transition. Recent theoretical considerations have revealed the crucial role of electron-electron interaction for the spin structure of double-layer electron systems. For the symmetric bilayer electron system in a potential well that is stable against symmetry breaking (a so-called easy-plane two-dimensional ferromagnet ), in addition to a shift of the phase transition point to smaller magnetic fields, consistent with experiment, a new so-called canted antiferromagnetic phase occurs between the spin unpolarized and ferromagnetic states . It is shown that, due to the Coulomb repulsion of electrons, mixing of the symmetric and antisymmetric states with opposite spin directions forms a new ground state which is a two-particle spin singlet. The transition point between this spin singlet state and the ferromagnetic state, in which the spins in both layers point in the direction of the applied magnetic field, is defined by the relation $$\mu gB\sim \frac{\mathrm{\Delta }_{SAS}^2}{E_c}$$ (1) with $`E_c`$ denoting the Coulomb energy. Because normally $`E_c>\mathrm{\Delta }_{SAS}`$, the transition is expected at $`\mu gB<\mathrm{\Delta }_{SAS}`$. Near the transition the intralayer exchange interaction connects both lowest states of the electron system and gives rise to the intermediate canted antiferromagnetic phase, which is characterized by interlayer antiferromagnetic spin correlations in the two-dimensional plane and is related to a zero-energy spin excitation mode . The appearance of this phase signifies a new class of quantum phase transitions between insulators with different spin structures. 
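For orientation, Eq. (1) implies a rough scale for the transition field. A minimal estimate, using the values quoted below for this sample ($`\mathrm{\Delta }_{SAS}=1.3`$ meV, $`E_c\approx 6`$ meV) together with an assumed bulk-GaAs g-factor of 0.44 (an assumption of this sketch, not a measured input):

```python
# Order-of-magnitude transition field from Eq. (1): mu*g*B ~ Delta_SAS^2/E_c.
# Delta_SAS and E_c are quoted later in the text; |g| = 0.44 (bulk GaAs)
# and reading "~" as "=" are assumptions of this estimate.
mu_B = 5.788e-2        # Bohr magneton, meV/T
g = 0.44               # assumed GaAs g-factor
delta_sas = 1.3        # meV
e_c = 6.0              # meV

zeeman = delta_sas**2 / e_c            # required Zeeman splitting, meV
print(f"mu*g*B ~ {zeeman:.2f} meV  ->  B ~ {zeeman / (g * mu_B):.0f} T")
```

With these numbers the transition is expected near $`B\sim 11`$ T, comfortably inside the experimentally accessible field range quoted below.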
The idea of a novel phase has been supported by recent experiments on inelastic light scattering, in which transitions were observed near $`\nu =2`$ between ground states of a bilayer system with different spin structures . However, these states have so far not been studied by transport measurement methods. The existence of a canted antiferromagnetic phase is predicted for a double layer with asymmetric electron density distributions as well; moreover, an external bias allows a continuous tuning of the $`\nu =2`$ state within a single gated sample . As a result, in the general case one has two transitions with increasing bilayer asymmetry: ferromagnetic – canted antiferromagnetic – spin unpolarized phase. For a sufficiently large ratio $`\mathrm{\Delta }_{SAS}/\mu gB`$, the canted antiferromagnetic phase becomes the ground state at the symmetry/balance point, and the first transition disappears. A further increase of the ratio leads to the disappearance of the second transition. In a disordered system the interval between the two transition points is expected to have intrinsic structure and can include different spin Bose glass phases . Here, we employ a capacitance spectroscopy technique to study the phase transition in the double-layer electron system in a parabolic quantum well at filling factor $`\nu =2`$ in a tilted magnetic field. The scenario of the observed transition gives strong evidence for a new insulator-insulator quantum phase transition and supports the formation of the recently predicted canted antiferromagnetic phase, although for tilted magnetic fields a rigorous theory is not yet available. The samples are grown by molecular beam epitaxy on a semi-insulating GaAs substrate. The active layers form a 760 Å wide parabolic well. In the center of the well a 3-monolayer-thick Al<sub>x</sub>Ga<sub>1-x</sub>As ($`x=0.3`$) sheet is grown, which serves as a tunnel barrier between the two parts of the well. The symmetrically doped well is capped by 600 Å AlGaAs and 40 Å GaAs layers. The sample has ohmic contacts (each of them connected to both electron systems in the two parts of the well) and two gates on the crystal surface with areas $`120\times 120`$ and $`220\times 120`$ $`\mu `$m<sup>2</sup>. The gate electrode enables us both to tune the carrier density in the well, which is equal to $`4.2\times 10^{11}`$ cm<sup>-2</sup> at zero gate bias, and to measure the capacitance between the gate and the well. For capacitance measurements we apply an ac voltage $`V_{ac}=2.4`$ mV at frequencies $`f`$ in the range 3 to 600 Hz between the well and the gate and measure both current components as a function of gate bias $`V_g`$ in the temperature interval between 30 mK and 1.2 K at magnetic fields of up to 14 T. Our measurements are similar to magnetotransport measurements in Corbino geometry: when disturbed, the sample edge becomes equipotential within the edge magnetoplasmon roundtrip time $`L/\sigma _{xy}`$, which is normally much shorter than the time of charge redistribution normal to the edge, $`C_0L^2/\sigma _{xx}`$, where $`\sigma _{xx}`$ and $`\sigma _{xy}`$ are the dissipative and Hall conductivity, $`C_0`$ is the capacitance per unit area between gate and quantum well, and $`L`$ is the characteristic sample dimension. At low frequencies $`f\ll \sigma _{xx}/C_0L^2`$, the imaginary current component reflects the thermodynamic density of states in the double-layer system. 
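To make the frequency criterion concrete, a back-of-the-envelope sketch (all numbers below are illustrative assumptions, not values reported in the text) shows that the crossover frequency $`\sigma _{xx}/C_0L^2`$ can indeed fall inside the 3–600 Hz measurement window near a quantum Hall minimum:

```python
# Crossover frequency f* = sigma_xx/(C0*L^2) separating the quasi-static
# regime from the transport-limited regime. Illustrative assumptions only:
eps0, eps_gaas = 8.854e-12, 12.9     # F/m, GaAs dielectric constant
d = 100e-9                           # assumed gate-to-well distance, m
c0 = eps_gaas * eps0 / d             # capacitance per unit area, F/m^2
length = 150e-6                      # characteristic gate dimension, m
sigma_xx = 1e-9                      # assumed conductivity in a QH minimum, S

f_star = sigma_xx / (c0 * length**2)
print(f"C0 = {c0:.2e} F/m^2, f* = {f_star:.0f} Hz")
```

With these assumed numbers the crossover lies near 40 Hz, so sweeping $`f`$ across the 3–600 Hz window moves the measurement between the two regimes described above.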
In this low-frequency limit the imaginary current component minimum at, e.g., $`\nu =2`$ is accompanied by a peak in the active current component, which is proportional to $`(fC_0)^2\sigma _{xx}^{-1}`$ and is used for measurements of the temperature dependence of $`\sigma _{xx}`$. At high frequencies the minimum in the imaginary current component should deepen, being limited by in-plane transport, so that both current components tend to zero. The positions of the $`\nu =2`$ imaginary current component minimum (or, equivalently, active current component maximum) in the ($`B_{\perp },V_g`$) plane are shown in Fig. 1, alongside those at filling factors $`\nu =1,3,4`$, for normal and tilted magnetic fields. In the normal magnetic field, at gate voltages $`V_{th1}<V_g<V_{th2}`$, at which one subband is filled with electrons in the back part of the well with respect to the gate, the experimental points lie along a straight line with a slope defined by the capacitance between the gate and the bottom electron layer (Fig. 1a). Above $`V_{th2}`$, where a second subband collects electrons in the front part of the well, a minimum in the imaginary current component at integer $`\nu `$ corresponds to a gap in the spectrum of the bilayer electron system, and the slope is inversely proportional to the capacitance between gate and top electron layer . At a tilt angle of 30°, a splitting of the line indicating the position of the $`\nu =2`$ minimum is observed close to the balance point (Fig. 1b). Knowing $`\mathrm{\Delta }_{SAS}=1.3`$ meV from far-infrared measurements and model calculations, we can estimate from Eq. (1) the Coulomb energy $`E_c\approx 6`$ meV at the transition. This value is smaller than the energy $`e^2/\epsilon l=15`$ meV (where $`l`$ is the magnetic length) because of the finite extension of the electron wave functions in the $`z`$ direction. As seen from Figs. 1c and 1d, with increasing tilt angle $`\mathrm{\Theta }`$ the center of the splitting moves towards more negative gate voltages. It is important that the occurrence of two distinct minima at fixed magnetic field points to the competition between the two ground states of our bilayer system. Fig. 2 represents the behaviour of the activation energy along the $`\nu =2`$ line in Fig. 1 at different tilt angles. For $`\mathrm{\Theta }=0^{\circ }`$ at $`V_g>V_{th2}`$, the activation energy passes through a maximum and then monotonically decreases with increasing magnetic field. In a tilted magnetic field a deep minimum of the activation energy emerges at the field corresponding to the splitting point. We find that for all tilt angles the minimum activation energy is finite (Fig. 2). A set of experimental traces near the splitting point for $`\mathrm{\Theta }=45^{\circ }`$ is displayed in the inset of Fig. 2. An interplay is seen of two deep minima in the magnetocapacitance at filling factors slightly above and slightly below $`\nu =2`$, respectively, which correspond to maxima of the activation energy with a minimum in between at exactly $`\nu =2`$. At the splitting point both magnetocapacitance minima are observable simultaneously with roughly equal amplitudes. It is interesting to compare our experimental findings with the results of Ref. , where a double-layer system with higher mobility and smaller $`\mathrm{\Delta }_{SAS}`$ was investigated at $`\nu =2`$ in a normal magnetic field. In both experiments a change of the ground state at the balance point is reached by varying a tuning parameter: the total electron density in Ref. and the tilt angle in our case. 
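As a consistency check, the quoted Coulomb scale $`e^2/\epsilon l=15`$ meV pins down the perpendicular field at the transition. A short sketch (assuming $`\epsilon =12.9`$ for GaAs; the magnetic length is $`l=25.66`$ nm$`/\sqrt{B[\mathrm{T}]}`$):

```python
import numpy as np

# e^2/(eps*l) in meV for the magnetic length l = 25.66 nm / sqrt(B[T]);
# eps = 12.9 (GaAs) and e^2/(4*pi*eps0) = 1440 meV*nm are assumed inputs.
eps = 12.9
e2 = 1440.0                                  # meV*nm

def coulomb_mev(b_tesla):
    l_nm = 25.66 / np.sqrt(b_tesla)
    return e2 / (eps * l_nm)

for b in (8.0, 12.0, 14.0):
    print(f"B = {b:4.1f} T: e^2/(eps*l) = {coulomb_mev(b):5.1f} meV")
```

The quoted 15 meV thus corresponds to $`B_{\perp }\approx 12`$ T, i.e., the transition field lies within the accessible range.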
In addition to the similarities with Ref. , we observe the coexistence of two ground states near the transition point and find a finite value of the activation energy at the transition, i.e., it is an insulator-insulator transition. To verify that our experimental results are beyond the single-particle model we compare them with the single-particle spectrum in a tilted magnetic field calculated in the self-consistent Hartree approximation (details of the calculation will be published elsewhere). In the calculation we take into account neither the spin splitting (assuming a small $`g`$ factor) nor the exchange energy. For this simplest case the calculated gap for filling factor $`\nu =2`$ also exhibits a minimum (Fig. 3). It is quite easy to understand the physical origin of the minimum: on the one hand, a parallel component of the magnetic field leads to an increase of the subband energies because it narrows the electron density distribution in the $`z`$ direction. As a result, $`\mathrm{\Delta }_{SAS}`$ increases a little. On the other hand, to form a gap in the spectrum the tunneling between the layers should occur with conservation of the in-plane momentum. Therefore, it is accompanied by a shift of the center of the in-plane wave function by an amount $`d_0\mathrm{tan}\mathrm{\Theta }`$ (where $`d_0`$ is the distance between the centers of mass of the electron density distributions in the two lowest subbands), which grows both with deviation from the balance point, due to the increase of $`d_0`$, and with tilting of the magnetic field. The increase of the effective tunneling distance $`\sqrt{d^2+d_0^2\mathrm{tan}^2\mathrm{\Theta }}`$ (where $`d`$ is the tunnel barrier width) results in a decrease of the subband spacing. The combination of these contributions, taking into account the known behaviour of the $`\nu =2`$ gap in the normal magnetic field (see Fig. 3), gives rise to the non-monotonic dependences of the gap in tilted magnetic fields. From the calculation it follows that the position of the minimum shifts to lower magnetic fields $`B_{\perp }`$ with increasing tilt angle, which is consistent with our experimental finding. Nevertheless, the measured minimum activation energy is far smaller than the calculated half-gap. In principle, lower values of the measured activation energy might be explained in the single-particle picture by assuming that the quantum level width is finite. However, we reject such an explanation for the following reasons: first, at $`\mathrm{\Theta }=0^{\circ }`$ at balance we do observe an activation energy very close to half of $`\mathrm{\Delta }_{SAS}`$, and practically at the same magnetic field where a deep minimum in the activation energy is observed at $`\mathrm{\Theta }=30^{\circ }`$; second, when sweeping the gate voltage through the splitting point the activation energy shows two maxima with a non-zero minimum in between at $`\nu =2`$, which indicates the coexistence of two ground states in the form of domains at the critical point and is in contradiction with the single-particle picture. To apply the line of reasoning developed in a many-body lattice model to the case of a tilted magnetic field one has to introduce two significant changes: (i) as one rung one should consider two sites shifted by the distance $`d_0\mathrm{tan}\mathrm{\Theta }`$; (ii) the value of $`\mathrm{\Delta }_{SAS}`$ should be replaced by the subband spacing as determined from the self-consistent Hartree approximation. Then the conclusion of Ref. about the existence of an intermediate canted antiferromagnetic phase is expected to remain valid. 
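The geometric origin of the gap suppression is transparent in numbers. A minimal sketch of the effective tunneling distance (taking $`d\approx 0.85`$ nm if one identifies $`d`$ with the 3-monolayer barrier, and an assumed, purely illustrative $`d_0=5`$ nm):

```python
import numpy as np

# Effective tunneling distance sqrt(d^2 + d0^2*tan^2(Theta)) versus tilt.
# d ~ 0.85 nm (3 monolayers) follows from the sample description;
# d0 = 5 nm is an assumed off-balance charge separation (illustrative only).
d, d0 = 0.85, 5.0                                # nm
for theta in (0, 30, 45, 60):                    # degrees
    d_eff = np.hypot(d, d0 * np.tan(np.radians(theta)))
    print(f"Theta = {theta:2d} deg: d_eff = {d_eff:5.2f} nm")
```

Even a modest $`d_0`$ makes the effective distance grow quickly with $`\mathrm{\Theta }`$, which is why the subband spacing collapses away from balance in tilted fields.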
We note that the theory deals with a change of the ground state, while experimentally we measure the energy of a charge excitation with $`k=\infty `$. Therefore, the comparison of theory with experiment is straightforward only in the single-particle picture, where the nearest excited state is identical with the competing ground state. According to Ref. , in the many-body problem peculiarities of the activation energy are expected near the phase transition. In our opinion, the observed deep minimum in the activation energy at tilted magnetic fields is a manifestation of the transition from a spin unpolarized state to a canted antiferromagnetic phase: at gate voltages right above $`V_{th2}`$ only the spin unpolarized state can be realized for filling factor $`\nu =2`$. The ferromagnetic and canted antiferromagnetic phases should then be considered as possible states near the balance point. At the transition point a finite activation energy is found, which is not the case for a direct transition from the spin unpolarized to the ferromagnetic state . Hence, the transition scenario forces us to recognize the phase around the balance point as a canted antiferromagnetic phase in a disordered sample, or, as shown in Refs. , a Bose glass of the singlet bosons. In summary, we have studied the double-layer electron system in a parabolic quantum well at $`\nu =2`$ in tilted magnetic fields using capacitance spectroscopy. We observe a change of the ground state of the system at a Zeeman splitting far smaller than $`\mathrm{\Delta }_{SAS}`$. At the transition point the activation energy is found to be finite although the ground state is composed of domains of the two competing states. Our data correspond well to an insulator-insulator quantum phase transition from a spin unpolarized to a canted antiferromagnetic state in a disordered system . We are thankful to V. Pellegrini for valuable discussions. This work was supported in part by the Deutsche Forschungsgemeinschaft, by AFOSR under Grant No. F49620-94-1-0158, by the Russian Foundation for Basic Research under Grants No. 97-02-16829 and No. 98-02-16632, and by the Programme “Nanostructures” from the Russian Ministry of Sciences under Grant No. 97-1024. The Munich - Santa Barbara collaboration has also been supported by a joint NSF-European Grant and the Max-Planck research award.
# Investigation on the Bimodal Distribution of the Duration of Gamma-ray Bursts from BATSE Light Curves ## 1 Introduction The distribution of the duration of gamma-ray bursts showed an indication of two distinct groups already in earlier experiments. Data from the Burst and Transient Source Experiment (BATSE) have confirmed the bimodal distribution of the duration of gamma-ray bursts. In terms of the parameter T90, which is the time interval during which the integrated counts of a burst go from 5% to 95% of the total integrated counts, the bursts are separated into two groups around T90 $`\sim `$ 2 s . Time dilation, evidence for the cosmological origin of GRBs, was found in the long GRBs . It is not yet known whether the two kinds of bursts are intrinsically different or not. A recent study of the pulses in GRBs suggests that the duration of the equivalent width of each pulse and the mean duration of individual pulses are bimodal . In this paper, we present a different approach to investigate the average pulse width of GRBs. ## 2 Data Preparation and Pulse Width Definition Light curves of the BATSE GRBs in the 4B catalogue are studied. The light curves are from concatenated DISCLA, PREB and DISCSC data, and were obtained from the Compton Observatory Science Support Center (COSSC). They have been arranged into 64 ms time bins. First we subtract the BATSE background from the GRB light curves. The BATSE background was estimated by a fifth-degree polynomial. The total number of GRBs with visually acceptable background estimates is 1186. Then we calculate the average pulse width $`T_P`$ of each GRB as follows (a numerical sketch of this estimator is appended at the end of the paper). First we calculate the auto-correlation of the light curve of each GRB. The auto-correlation coefficients of the GRB, $`A(i)`$, are defined as follows: $$A(i)=\frac{\sum _{k=0}^{N-i-1}(X_k-\overline{X})(X_{k+i}-\overline{X})}{\sum _{k=0}^{N-1}(X_k-\overline{X})^2}$$ where $`A(i)`$ ($`i=0,\mathrm{\dots },N-1`$) is the auto-correlation coefficient at lag $`i\delta t`$. We define the average pulse width $`T_P`$ as $$T_P=2.0\times \sqrt{\frac{\sum _{k=1}^Mk^2A(k)+(0.25)^2A(0)}{\sum _{k=0}^MA(k)}}\delta t$$ where 0.25 represents the average time shift of the central bin of the auto-correlation coefficient $`A(0)`$, and $`M`$ is the largest $`i`$ with $`A(i-1)+A(i)`$ no less than 0.0 in the main peak of the auto-correlation. The auto-correlation coefficients of BATSE trigger No. 143 are shown in Fig. 1. The data in the shaded region are used to calculate $`T_P`$. We calculate $`T_P`$ for each GRB and study the distribution of the average pulse width of the 1186 GRBs. ## 3 Results We have obtained the following results from the study of the average pulse width $`T_P`$: * The distribution of $`T_P`$ of GRBs is bimodal. This suggests that the average pulse width is bimodally distributed, and GRBs can be divided into two groups, namely short-pulse bursts and long-pulse bursts. The distribution of $`T_P`$ is peaked at about 0.5 s and 14 s for the two groups, respectively. They are roughly separated around 2 s. This is shown in Fig. 2. * The average pulse width of the dim long-pulse bursts is longer than that of the bright long-pulse bursts. However, the average pulse width of the short-pulse bursts does not show a simple relation with GRB peak flux. This is shown in Fig. 3. ## 4 Summary We have presented our preliminary analyses of 1186 BATSE GRB light curves in order to study the bimodal distribution of the duration of GRBs. We conclude: * The average pulse widths of GRBs are bimodally distributed. This is consistent with a different approach (Mitrofanov et al. 1998). 
* Long-pulse bursts show evidence for the time dilation effect; no such effect is seen for the short-pulse bursts. Further study of the short-pulse bursts is needed, and it will probably need to include a correction for the BATSE selection effect and a study of short GRBs with high-time-resolution TTE data. ## Acknowledgments WY appreciates various assistance by Dr. R. S. Mallozzi at MSFC/UAH.
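For concreteness, the following is a minimal sketch of the $`T_P`$ estimator of Sect. 2 (the toy Gaussian light curve and the 64 ms binning are illustrative; real use would start from background-subtracted BATSE light curves):

```python
import numpy as np

def average_pulse_width(x, dt=0.064):
    """T_P of Sect. 2 for a background-subtracted light curve x (dt in s)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    a = np.correlate(x, x, mode="full")[n - 1:]       # lags 0 .. N-1
    a = a / a[0]                                      # normalize: A(0) = 1
    # M = largest i inside the main peak with A(i-1) + A(i) >= 0
    m = n - 1
    for i in range(1, n):
        if a[i - 1] + a[i] < 0.0:
            m = i - 1
            break
    k = np.arange(1, m + 1)
    num = np.sum(k**2 * a[1:m + 1]) + 0.25**2 * a[0]
    return 2.0 * np.sqrt(num / np.sum(a[:m + 1])) * dt

# Toy example: a single Gaussian pulse (sigma = 1 s) in 64 ms bins
t = np.arange(0.0, 20.0, 0.064)
print(f"T_P = {average_pulse_width(np.exp(-0.5 * (t - 10.0)**2)):.2f} s")
```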
# On the origin of \[O iv\] emission in Wolf-Rayet galaxies ## 1 Introduction Infrared observations of emission line galaxies give access to ions not visible in the optical domain. Among those is the high excitation \[O iv\] 25.9 $`\mu `$m line, which has been observed not only in active galactic nuclei but also in several starburst galaxies, although at a much fainter level (Genzel et al. 1998, Lutz et al. 1998, hereafter LKST98). Hot, massive stars generally emit only a few ionizing photons with energies above the He ii edge at 54.42 eV required for the production of \[O iv\]. Therefore the origin of this high excitation line in starbursts has been unclear so far. Different excitation mechanisms (weak AGNs, super-hot stars, planetary nebulae, and ionizing shocks) have been discussed by LKST98. Based on simple estimates, photoionization and shock models, these authors favor ionizing shocks related to the starburst activity as the most likely explanation for \[O iv\] 25.9 $`\mu `$m emission in general. According to LKST98, massive super-hot stars remain, however, an option for the high excitation dwarf galaxies included in their sample. One of these objects (NGC 5253) was studied in more detail by Crowther et al. (1999, hereafter C99), who showed, by computing photoionization models around single stars, that WNE-w stars can indeed produce strong \[O iv\] and \[Ne v\] in surrounding H ii regions. The fact that these lines were not prominent in NGC 5253 led them to exclude the possibility of a significant number of such stars being present in this galaxy. As will be shown below, we do not support their conclusion, for a variety of reasons. To shed more light on the origin of the \[O iv\] 25.9 $`\mu `$m emission in dwarf galaxies, we use the information on nebular properties and stellar content derived from optical studies to complement the information from IR data. In addition to NGC 5253, we also consider II Zw 40. These are the two compact low metallicity galaxies which show the highest excitation among the starbursts observed by LKST98. Both objects are known as so-called WR galaxies (cf. Conti 1991, Schaerer et al. 1999b), where the presence of broad stellar emission lines testifying to the presence of WR stars provides powerful constraints on the burst age and massive star content (e.g., Schaerer et al. 1999a). In this paper, we present a stellar population model which reproduces the observed stellar features and which, used as an input for photoionization models, explains at the same time the ionization structure of the nebular gas as revealed by the optical and IR fine structure lines. ## 2 On the association of \[O iv\] 25.9 $`\mu `$m with He ii In the sample of \[O iv\] emitting starbursts of LKST98, II Zw 40 and NGC 5253 show the strongest excitation (measured by \[Ne iii\]/\[Ne ii\]), the largest \[O iv\] strength (quantified by \[O iv\]/(\[Ne ii\]+0.44\[Ne iii\]); cf. LKST98, Fig. 2) and stand out through several properties: * Nebular He ii $`\lambda `$4686 emission indicative of high excitation is present in the region dominating the optical emission (Walsh & Roy 1989, 1993, hereafter WR89, WR93, Guseva et al. 1998) * A significant number of Wolf-Rayet stars has been detected in these regions (Kunth & Schild 1981, Walsh & Roy 1987, Vacca & Conti 1992, Schaerer et al. 1997, hereafter SCKM97) * In both the dwarf II Zw 40 and the amorphous galaxy NGC 5253 one or a few young star-forming regions clearly dominate the production of ionizing photons (Vanzi et al. 1996, Beck et al. 
1996, Calzetti et al. 1997). Finding 1) confirms the presence of high energy photons ($`>`$ 54 eV) deduced from the IR observations of \[O iv\] and naturally suggests a direct link between the nebular He ii and \[O iv\] emission. Furthermore, the optical observations allow a more precise localisation of the high excitation regions. 1) and 2) indicate that II Zw 40 and NGC 5253 are objects where the observed He ii emission is likely due to hot WR stars (Schaerer 1996, 1997, 1998; De Mello et al. 1998). 3) justifies, at least to first order, the use of a “spectral template” of the brightest starburst region as a representation of the ionizing spectrum of the entire region covered by the ISO observations. ## 3 Stellar population WR stars of both WN and WC types have been observed in the two dominant regions of NGC 5253 by SCKM97. In II Zw 40 broad He ii $`\lambda `$4686 indicative of WR stars was detected by Kunth & Sargent (1981), Vacca & Conti (1992) and Guseva et al. (1998). The latter also detect broad C iv $`\lambda `$5808 emission due to WC stars. The WR and O star content was already analysed by Schaerer (1996), SCKM97 and Schaerer et al. (1999a). In Fig. 1 the observed intensities of the various WR features are shown and compared to an instantaneous burst model of Schaerer & Vacca (1998) with a Salpeter IMF at the appropriate metallicity ($`Z/Z_{\odot }\simeq 1/5`$). The observations of Guseva et al. (1998) refer to the entire “WR bump” (4643-4723 Å) and therefore represent an upper limit. Shifts in $`W(\mathrm{H}\beta )`$ of the theoretical predictions for the WC lines with respect to the observations are not significant, since they correspond to very short timescales. This and other potential uncertainties affecting such a comparison have been extensively discussed in Schaerer et al. (1999a). Figure 1 shows that all line strengths are reasonably well reproduced by the model. At the corresponding ages of $`\sim `$ 3-5 Myr (cf. SCKM97) our synthesis model therefore provides a good description of the massive star content in these regions. ## 4 Photoionization models The spectral energy distributions predicted by the synthesis models described above have been used as input to the photoionization code PHOTO (same version as in Stasińska & Leitherer 1996, hereafter SL96). The remaining input parameters are the total number of stars, the density distribution and the chemical composition, here taken as $`Z/Z_{\odot }=1/4`$ for easy comparison with SL96 (the ionization structure of a nebula is insensitive to a small change in the abundances in the gas). Following SL96 we calculate sequences of models for a spherical gas distribution with a uniform hydrogen density $`n`$ and filling factor $`ϵ`$, both assumed constant during the evolution of the starburst. For a given age, models with the same ionization parameter $`U=A(Q_{\mathrm{H}^0}nϵ^2)^{1/3}`$ have the same ionization structure. $`Q_{\mathrm{H}^0}`$ is the total number of photons above 13.6 eV, and $`A`$ a function of the electron temperature (see SL96). The densities derived in II Zw 40 and NGC 5253 ($`\sim `$ 70–300 cm<sup>-3</sup>, WR89, WR93, C99) are low enough that collisional deexcitation is negligible for the lines of interest. We therefore explore the parameter space by simply taking $`n`$=10 cm<sup>-3</sup> and $`ϵ`$=1, and consider three different initial masses for the starburst: $`10^3`$, $`10^6`$ and $`10^9`$ M$`_{\odot }`$. These three model sequences will be referred to as the sequences with low, intermediate and high $`U`$. In Fig. 
2 we show the temporal evolution of selected line ratios. Unlike C99, we chose to show line ratios that are independent of the abundances of the parent elements, in order to facilitate comparison with observations of different galaxies. The only exceptions are \[O iv\]/\[Ne iii\] (the O/Ne ratio is 5.0 in the models, compared to 4.9–5.5 in II Zw 40 (WR93) and 4.0–7.4 in NGC 5253 (WR89)) and He i $`\lambda `$5876/$`\mathrm{H}\beta `$ when helium is fully ionized in the H ii region. Also, we limit ourselves to line ratios that involve a dominant ionic stage in the nebula. Line ratios like \[O iv\] 25.9 $`\mu `$m/\[Ne ii\] 12.81 $`\mu `$m or \[Ne v\] 14.3 $`\mu `$m/\[Ne ii\] 12.81 $`\mu `$m as used by Genzel et al. (1998) are difficult to interpret, as the lines are likely emitted by very different regions. As expected, the line ratios from adjacent ionic stages show a progressive decrease of the overall excitation with time for the models we are considering. Helium remains fully ionized up to 4 Myr. Notable exceptions to this trend are \[O iv\], \[Ne v\] and He ii, species with ionization potentials at or above the He ii edge, which appear during a short phase (at ages $`t\sim `$ 3-4 Myr) where hot WR stars provide a non-negligible flux above 54 eV (see Schaerer & Vacca 1998). Slower temporal changes would of course be obtained for non-instantaneous bursts. It must be noted that those line ratios which are functions of the ionization parameter also depend somewhat on the adopted geometry, and this in a non-trivial way. For example, in models with a thin shell geometry, the line ratios shown here differ by factors of up to 3 from models for full spheres with the same mean ionization parameter. In the case of the \[Ne v\]/\[Ne iii\] ratio, the value predicted during the WR phase can be smaller by a factor of about 10 since, despite the presence of high energy photons, there is no matter emitting at a high ionization parameter, close to the star cluster. In the following we compare the model predictions with observations of NGC 5253 and II Zw 40. ## 5 Comparison with observations of NGC 5253 and II Zw 40 Observed line ratios are overplotted on the model predictions shown in Fig. 2. The optical data are taken from WR89 and WR93 (region 1 in both objects). IR fluxes for NGC 5253 are taken from Genzel et al. (1998), LKST98, and C99. For \[S iv\]/\[S iii\] a lower limit is obtained, since different ISO apertures are involved in the measurements. All other limits are “real” detection limits. Adopting the Draine (1989) extinction curve and $`A_v=7.7`$ mag (C99) increases the \[Ar iii\]/\[Ar ii\] and \[S iv\]/\[S iii\] ratios by $`\sim `$ 40%. Other IR line ratios are much less affected. C99 consider two separate emission regions to be responsible for the high excitation lines (e.g. \[S iv\]) and lower excitation lines, respectively (e.g. \[Ne ii\]). We see no compelling reason for such a somewhat “artificial” separation. A similar structure is, e.g., naturally obtained in an ideal spherical nebula. Instead of using line fluxes corrected for such effects we therefore use the original measurements. IR line ratios for II Zw 40 are from LKST98. We have no access to the acquired ISO SWS spectra, which should, however, become available soon. From the top panels of Figure 2 it is evident that no single model can reproduce at the same time all the observed line ratios. 
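The relative ionization parameters of the three sequences follow directly from the definition of $`U`$. A small sketch (assuming, as is implicit in the setup, that $`Q_{\mathrm{H}^0}`$ scales linearly with the initial burst mass at fixed age and IMF):

```python
# U = A*(Q_H0 * n * eps^2)^(1/3) with fixed n, eps and A at a given age;
# assuming Q_H0 ∝ M_burst, the three sequences differ by factors of 10 in U.
for mass in (1e3, 1e6, 1e9):                      # initial burst mass, M_sun
    print(f"M = {mass:.0e} Msun: U/U(1e3 Msun) = {(mass / 1e3)**(1/3):5.0f}")
```

Each sequence is thus a factor of 10 apart in $`U`$, which is why they are labeled low, intermediate and high $`U`$.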
This finding, namely that no single model reproduces all the observed line ratios simultaneously, is not surprising and may be due to several reasons: 1) The structure of the galaxies is more complicated than assumed in the models. 2) Although one or few bursts of similarly young age dominate the ionizing flux (Sect. 2), the ionizing spectrum is likely not fully described by a single burst population. 3) Atomic data may be inaccurate. In particular, the computation of collision strengths for fine structure transitions is a very delicate problem, and the evaluation of the formal uncertainty is difficult. A comparison between the plasma diagnostics obtained using different IR and optical lines for the planetary nebula NGC 6302 led Oliva et al. (1996) to suggest that the collision strengths which enter into the calculation of the intensity of \[Ne v\] 14.3 $`\mu `$m are overestimated by a factor of 3! Similar problems are likely to occur for other fine structure lines. The \[O iii\]/\[O ii\] ratio, which is one of the best studied from all points of view, indicates that, if the age of the starburst lies between 3 and 4 Myr, as indicated by the Wolf-Rayet features, the models with intermediate $`U`$ are the most adequate to represent the two galaxies under study. At such an age, helium is still completely ionized, because the radiation field is hard enough. The discrepancy with the measurement of He i $`\lambda `$5876/$`\mathrm{H}\beta `$ in II Zw 40 is likely due to absorption by Galactic interstellar sodium intervening at this redshift (Izotov 1999, private communication). Our main result is illustrated in the last three panels of Fig. 2, where we show that during a short phase the stellar population provides enough photons above 54.4 eV to naturally produce the He ii $`\lambda `$4686, \[O iv\] and \[Ne v\] lines at levels comparable to the observed ones. The emission is due to the presence of hot WR stars at ages $`t\sim `$ 3-4 Myr (cf. Schaerer 1996, Schaerer & Vacca 1998). The predicted strength of these lines even somewhat exceeds the observations (the observations of Guseva et al. (1998) indicate a larger value, He ii $`\lambda `$4686/$`\mathrm{H}\beta `$=0.018, for II Zw 40). However, this does not invalidate our conclusion. A more realistic population “mix” can easily reconcile the intensity of He ii $`\lambda `$4686 with the observations. As for the \[O iv\] and \[Ne v\] lines, they are sensitive to the geometry (see above), which provides ample space for fitting with tailored photoionization models. This should, however, only be undertaken when the relevant atomic data have been validated by detailed multiwavelength studies of simpler objects and by comparisons with photoionization models. ## 6 Summary and discussion We propose that \[O iv\] 25.9 $`\mu `$m emission in NGC 5253 and II Zw 40 is due to the presence of the hot WR stars observed in both objects. We draw this conclusion from both empirical and theoretical facts. First, we note that nebular He ii $`\lambda `$4686 and \[O iv\] emission occur simultaneously in these objects. Furthermore, a close link between nebular He ii and WR stars has now been established for the so-called WR galaxies, extra-galactic H ii regions and the few Local Group H ii regions exhibiting this feature (Schaerer 1996, 1997, 1998; see Schaerer et al. 1999b for a catalogue of these objects). 
Second, quantitative models of the stellar populations, using up-to-date non-LTE atmospheres including stellar winds and the most recent evolutionary tracks, are able to explain the observed massive star population and the optical emission lines (including nebular He ii $`\lambda `$4686) during the WR rich phase (Schaerer 1996, 1998), even for the lowest metallicity object, I Zw 18 (De Mello et al. 1998, Stasińska & Schaerer 1999), where such stars were detected recently. In addition, as demonstrated here, the \[O iv\] emission (and other IR lines) is also naturally reproduced by these models. The two objects considered here are the best available to constrain the origin of \[O iv\] emission: they show the highest excitation, and represent the simplest objects in terms of their ionizing population. IR observations of the 8 Local Group (LG) H ii regions known to exhibit nebular He ii (cf. Garnett et al. 1991) can provide a simple “consistency” test: the presence of this line is a necessary condition for showing \[O iv\] emission. The presence or absence of \[O iv\] 25.9 $`\mu `$m is, however, also influenced by the ionization parameter and the nebular geometry (cf. above). Observational evidence suggests that such high excitation H ii regions occur preferentially at low metallicities. This holds both for the LG and extragalactic objects (cf. Schaerer 1997, 1998), including II Zw 40 and NGC 5253. The same can thus be expected for the contribution of WR stars to \[O iv\]. Low metallicity may indeed also justify the neglect of line blanketing in the WR models of Schmutz et al. (1992) included in our synthesis models. The effects discussed by C99 and Crowther (1998), suppressing the output of photons above the He ii edge in metal-rich WR models and/or high density winds, could well be ineffective at low metallicities, as also suggested by the empirical evidence. Our explanation of the stellar photoionization origin of \[O iv\] in dwarf-like low metallicity galaxies cannot necessarily be generalised to all the objects of LKST98. Although indeed 6 out of 14 from their list are known WR “galaxies” (Schaerer et al. 1999b), it is unlikely that the regions where WR stars are detected contribute a significant fraction of the total ionizing flux in these complex objects. Outflows, weak Seyfert activity, and other phenomena are known in some of them and provide alternative explanations, as discussed by LKST98. More complex models will be required to interpret such objects, to provide a theoretical understanding of new empirical IR diagnostic diagrams (cf. Genzel et al. 1998), and to assess the contribution of stellar sources to high energy photons. ###### Acknowledgements. Yuri Izotov kindly provided us with data prior to publication. We thank Marc Sauvage and Suzanne Madden for useful discussions. DS acknowledges a grant from the Swiss National Foundation of Scientific Research.
# Improved Upper Bound to the Entropy of a Charged System. II ## Abstract Recently, we derived an improved universal upper bound to the entropy of a charged system, $`S\le \pi (2Eb-q^2)/\hbar `$. There was, however, some uncertainty in the value of the numerical factor which multiplies the $`q^2`$ term. In this paper we remove this uncertainty; we rederive this upper bound from an application of the generalized second law of thermodynamics to a gedanken experiment in which an entropy-bearing charged system falls into a Schwarzschild black hole. A crucial step in the analysis is the inclusion of the effect of the spacetime curvature on the electrostatic self-interaction of the charged system. According to the thermodynamical analogy in black-hole physics, the entropy of a black hole is given by $`S_{bh}=A/4\hbar `$, where $`A`$ is the black-hole surface area. (We use gravitational units in which $`G=c=1`$). Moreover, a system consisting of ordinary matter interacting with a black hole is widely believed to obey the generalized second law of thermodynamics (GSL): “The sum of the black-hole entropy and the common (ordinary) entropy in the black-hole exterior never decreases”. This general conjecture is one of the cornerstones of black-hole physics. It is well known, however, that the validity of the GSL depends on the (plausible) existence of a universal upper bound to the entropy of a bounded system: Consider a box filled with matter of proper energy $`E`$ and entropy $`S`$ which is dropped into a black hole. The energy delivered to the black hole can be arbitrarily red-shifted by letting the assimilation point approach the black-hole horizon. If the box is deposited with no radial momentum a proper distance $`R`$ above the horizon, and then allowed to fall in such that $$R<\hbar S/2\pi E,$$ (1) then the black-hole area increase (or equivalently, the increase in black-hole entropy) is not large enough to compensate for the decrease of $`S`$ in common (ordinary) entropy. Arguing from the GSL, Bekenstein has proposed the existence of a universal upper bound to the entropy $`S`$ of any system of total energy $`E`$ and effective proper radius $`R`$: $$S\le 2\pi RE/\hbar ,$$ (2) where $`R`$ is defined in terms of the area $`A`$ of the spherical surface which circumscribes the system, $`R=(A/4\pi )^{1/2}`$. This restriction is necessary for the enforcement of the GSL; the box’s entropy disappears, but an increase in black-hole entropy occurs which ensures that the GSL is respected, provided $`S`$ is bounded as in Eq. (2). Evidently, this universal upper bound is a quantum phenomenon (the upper bound goes to infinity as $`\hbar \to 0`$). This provides a striking illustration of the fact that the GSL is intrinsically a quantum law. The universal upper bound Eq. (2) has the status of a supplement to the second law; the latter only states that the entropy of a closed system tends to a maximum, without saying how large that should be. Other derivations of the universal upper bound Eq. (2) based on black-hole physics have been given in the literature. Few pieces of evidence exist concerning the validity of the bound for self-gravitating systems. However, the universal bound Eq. (2) is known to be true independently of black-hole physics for a variety of systems in which gravity is negligible. We noted, however, that there is one disturbing feature of the universal bound Eq. (2): black holes conform to the bound; however, it is only the Schwarzschild black hole which actually saturates the bound. 
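The saturation is an identity rather than a numerical coincidence, as a short check makes explicit (units with $`G=c=\hbar =1`$):

```python
import numpy as np

# Schwarzschild saturation of Eq. (2): S_bh = A/4 = 4*pi*M^2,
# while the bound 2*pi*R*E with R = 2M, E = M is also 4*pi*M^2.
for M in (0.5, 1.0, 7.3):
    s_bh = 4.0 * np.pi * M**2
    bound = 2.0 * np.pi * (2.0 * M) * M
    print(f"M = {M}: S_bh = {s_bh:.4f}, 2*pi*R*E = {bound:.4f}")
```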
This uniqueness of the Schwarzschild black hole (in the sense that it is the only black hole which has the maximum entropy allowed by quantum theory and general relativity) is somewhat disturbing. Recently, Hod derived an (improved) upper bound to the entropy of a spinning system and proved that all electrically neutral Kerr black holes have the maximum entropy allowed by quantum theory and general relativity. The unity of physics (and of black holes in particular) motivates us to look for an improved upper bound to the entropy of a charged system. Moreover, the plausible existence of an upper bound stronger than Eq. (2) on the entropy of a charged system has nothing to do with black-hole physics; a part of the energy of the electromagnetic field residing outside the charged system seems to be irrelevant for the system’s statistical properties. This reduces the phase space available to the components of a charged system. Evidently, an improved upper bound to the entropy of a charged system must decrease with the (absolute) value of the system’s charge. However, our simple argument cannot yield the exact dependence of the entropy bound on the system’s parameters: its energy, charge, and proper radius. It is black-hole physics (more precisely, the GSL) which yields a concrete expression for the universal upper bound; recently, we derived an improved universal upper bound to the entropy of a charged system, $`S\le \pi (2Eb-q^2)/\hbar `$. There was, however, some uncertainty in the value of the numerical factor which multiplies the $`q^2`$ term. In this paper we remove this uncertainty. We consider a charged body of rest mass $`\mu `$ and charge $`q`$ which is dropped into a Schwarzschild black hole. The equation of motion of a charged body on a Schwarzschild background is a quadratic equation for the conserved energy $`E`$ (energy-at-infinity) of the body $$r^4E^2-\mathrm{\Delta }(\mu ^2r^2+p_{\varphi }^2)-(\mathrm{\Delta }p_r)^2=0,$$ (3) where $`\mathrm{\Delta }=r^2-2Mr`$. The quantities $`p_\varphi `$ and $`p_r`$ are the conserved angular momentum of the body and its covariant radial momentum, respectively. The conserved energy $`E`$ of a body having a radial turning point at $`r=r_++\xi `$ (for $`\xi \ll r_+`$, where $`r_+=2M`$ is the location of the black-hole horizon) is given by Eq. (3): $$E=\sqrt{\mu ^2+p_\varphi ^2/r_{+}^2}(\xi /r_+)^{1/2}[1+O(\xi /r_+)].$$ (4) This expression is actually the effective potential (gravitational plus centrifugal) for given values of $`\mu `$ and $`p_\varphi `$. It is clear that it can be minimized by taking $`p_\varphi =0`$ (which also minimizes the increase in the black-hole surface area). However, this well-known analysis is not complete, because it does not take into account the effect of the spacetime curvature on the particle’s electrostatic self-interaction. The black-hole gravitational field modifies the electrostatic self-interaction of a charged particle in such a way that the particle experiences a repulsive (i.e., directed away from the black hole) self-force. A variety of techniques have been used to demonstrate this effect. The physical origin of this force is the distortion of the charge’s long-range Coulomb field by the spacetime curvature. The contribution of this effect to the particle’s energy is $`Mq^2/2r^2`$. In order to find the change in black-hole surface area caused by an assimilation of the body, one should evaluate $`E`$ at the point of capture, a proper distance $`b`$ outside the horizon. 
The relevant dimension of the body in our gedanken experiment is its shortest length. In other words, the entropy bound is set by the smallest dimension of the body (provided $`b\gtrsim \hbar /E`$). This conclusion is supported by numerical computations for neutral systems. Thus, we should evaluate $`E`$ at $`r=r_++\delta (b)`$, where $`\delta (b)`$ is determined by $$\int _{r_+}^{r_++\delta (b)}(1-2M/r)^{-1/2}dr=b.$$ (5) Integrating Eq. (5) one obtains (for $`b\ll r_+`$) $$\delta (b)=b^2/8M,$$ (6) which implies (to leading order in $`b/M`$) $$E=(2\mu b+q^2)/8M.$$ (7) An assimilation of the charged body results in a change $`\mathrm{\Delta }M=E`$ in the black-hole mass and a change $`\mathrm{\Delta }Q=q`$ in its charge. The relation $`A=4\pi [M+(M^2-Q^2)^{1/2}]^2`$ implies that (for $`Q=0`$) $`\mathrm{\Delta }A=8\pi [4M\mathrm{\Delta }M-(\mathrm{\Delta }Q)^2]`$ (terms of order $`(\mathrm{\Delta }M)^2`$ are negligible for $`b\ll M`$). Thus, taking cognizance of Eq. (7) we find $$(\mathrm{\Delta }A)_{min}=4\pi (2\mu b-q^2),$$ (8) which is the minimal black-hole area increase for given values of the body’s parameters $`\mu ,q,`$ and $`b`$. Assuming the validity of the GSL, one can derive an upper bound to the entropy $`S`$ of an arbitrary system of proper energy $`E`$, charge $`q`$ and circumscribing radius $`R`$ (by definition, $`R\ge b`$): $$S\le \pi (2ER-q^2)/\hbar .$$ (9) It is evident from the minimal black-hole area increase Eq. (8) that in order for the GSL to be satisfied \[$`(\mathrm{\Delta }S)_{tot}\equiv (\mathrm{\Delta }S)_{bh}-S\ge 0`$\], the entropy $`S`$ of the charged system must be bounded as in Eq. (9). This upper bound is universal in the sense that it depends only on the system’s parameters (it is independent of the black-hole mass which was used to derive it). This improved bound is very appealing from a black-hole physics point of view: consider a charged Reissner-Nordström black hole of charge $`Q`$. Let its energy be $`E`$; then its surface area is given by $`A=4\pi r_{+}^2=4\pi (2Er_+-Q^2)`$. Now since $`S_{bh}=A/4\hbar `$, $`S_{bh}=\pi (2Er_+-Q^2)/\hbar `$, which is the maximal entropy allowed by the upper bound Eq. (9). Thus, all Reissner-Nordström black holes saturate the bound. This proves that the Schwarzschild black hole is not unique from a black-hole entropy point of view, removing the disturbing feature of the entropy bound Eq. (2). This is precisely the kind of universal upper bound we were hoping for! Evidently, systems with negligible self-gravity (the charged system in our gedanken experiment) and systems with maximal gravitational effects (i.e., charged black holes) both satisfy the upper bound Eq. (9). Therefore, this bound appears to be of universal validity. One piece of evidence exists concerning the validity of the bound for the specific example of a system composed of a charged black hole in thermal equilibrium with radiation. The intriguing feature of our derivation is that it uses a law whose very meaning stems from gravitation (the GSL, or equivalently the area-entropy relation for black holes) to derive a universal bound which has nothing to do with gravitation \[written out fully, the entropy bound would involve $`\hbar `$ and $`c`$, but not $`G`$\]. This provides a striking illustration of the unity of physics. 
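Both the near-horizon expansion and the saturation claim are easy to verify numerically; the following sketch (units $`G=c=\hbar =1`$) checks Eq. (6) by direct quadrature of Eq. (5) and confirms that Reissner-Nordström holes saturate Eq. (9) identically:

```python
import numpy as np

# 1) Proper-distance integral of Eq. (5) versus delta(b) = b^2/(8M), b << M.
M, b = 1.0, 1e-3
delta = b**2 / (8.0 * M)
r = np.linspace(2.0 * M * (1.0 + 1e-12), 2.0 * M + delta, 200_000)
proper = np.trapz((1.0 - 2.0 * M / r) ** -0.5, r)
print(f"proper distance = {proper:.4e}  (target b = {b:.4e})")

# 2) Saturation of Eq. (9): r_+^2 = 2*M*r_+ - Q^2, so S_bh = pi*r_+^2
#    equals the bound pi*(2*E*r_+ - Q^2) with E = M.
for m, q in [(1.0, 0.0), (1.0, 0.5), (2.0, 1.9)]:
    r_plus = m + np.sqrt(m**2 - q**2)
    print(f"M = {m}, Q = {q}: S_bh/pi = {r_plus**2:.4f}, "
          f"bound/pi = {2.0 * m * r_plus - q**2:.4f}")
```

The quadrature reproduces $`b`$ to within the small near-horizon cutoff, and the two columns of the second check agree exactly, as they must.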
In summary, an application of the generalized second law of thermodynamics to a gedanken experiment in which an entropy-bearing charged system falls into a Schwarzschild black hole enables us to derive an improved universal upper bound to the entropy of a charged system. In doing so, we removed the former uncertainty regarding the precise value of the numerical coefficient which multiplies the $`q^2`$ term. A crucial step in the analysis is the inclusion of the influence of the spacetime curvature on the system’s electrostatic self-interaction. Note added: I have learned that recently Bekenstein and Mayo \[Phys. Rev. D 61, 024022 (2000)\] analyzed the same problem, and independently obtained the universal upper bound (which was already derived in our earlier work). ACKNOWLEDGMENTS I thank Jacob D. Bekenstein and Avraham E. Mayo for helpful discussions. This research was supported by a grant from the Israel Science Foundation.
# Optical Investigations of Charge Gap in Orbital Ordered La<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> ## Abstract The temperature and polarization dependent electronic structure of La<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> was investigated by optical conductivity analyses. With decreasing temperature, for $`E\parallel ab`$, a broad mid-infrared (MIR) peak of La<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> becomes narrower and moves to higher frequency, while that of Nd<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> remains nearly temperature independent. We show that the MIR peak in La<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> originates from orbital ordering associated with CE-type magnetic ordering and that the Jahn-Teller distortion has a significant influence on the width and the position of the MIR peak. Through extensive studies of 3d transition metal oxides, it has been recognized that correlations among spin, charge, and orbital degrees of freedom play important roles in their physical properties. Especially in doped manganites, such coupling exhibits very interesting phenomena: colossal magnetoresistance, magnetic field induced structural phase transitions, and charge/orbital ordering. Recently, much interest has been focused on the charge/orbital ordering, which can be characterized by a real space ordering of Mn<sup>3+</sup>/Mn<sup>4+</sup> ions at commensurate values of the charge carrier concentration, such as 1/8, 1/2, and 2/3. The charge/orbital ordering is usually accompanied by a sharp increase of resistivity, a suppression of magnetic susceptibility, and changes of lattice constants. To gain understanding of the charge/orbital ordering, many efforts have been devoted to La<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub>, which is known to have a CE-type antiferromagnetic (AFM) ordering below $`T_N\sim `$ 110 K. Murakami et al. reported diffraction studies of La<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> using x-rays near the Mn K-absorption edge. From the anomalous dispersion of the scattering factors for Mn<sup>3+</sup> and Mn<sup>4+</sup>, they claimed that the charge/orbital ordering was observed directly. To explain why the Mn 3d orbital ordering can influence such Mn 1s $`\to `$ 4p dipole transitions, Ishihara et al. suggested Coulomb repulsion between the Mn 3d and 4p electrons. However, Elfimov et al. pointed out that band structure effects rather than the local Coulomb repulsion should dominate the polarization dependence of the K edge scattering. In this Letter, we report optical conductivity spectra, $`\sigma (\omega )`$, of La<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> (LSMO) and Nd<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> (NSMO). Note that the former shows a charge/orbital ordering around $`T_{CO}\sim `$ 220 K, but the latter does not show any ordering at all. As temperature ($`T`$) decreases, the mid-infrared (MIR) peak in the LSMO ab-plane becomes narrower and the corresponding optical gap significantly increases. On the contrary, the MIR peak of NSMO shows little $`T`$-dependence. To understand these interesting phenomena, we calculated the polarization dependent $`\sigma (\omega )`$ using the linearized muffin-tin orbital (LMTO) method and analyzed a tight-binding (TB) model. The LMTO results were in remarkable agreement with the experimental ones, indicating that the strong orbital ordering associated with the CE-type AFM ordering brings forth the MIR peak. 
Furthermore, the TB analysis clearly suggests that the strong $`T`$-dependences of the optical gap $`\mathrm{\Delta }`$ and the MIR peak of LSMO should be caused cooperatively by the orbital ordering, the CE-type AFM ordering, and the Jahn-Teller (JT) distortion. We prepared LSMO and NSMO single crystals using the floating zone method. Details of sample growth and characterization were reported earlier. Near-normal-incidence reflectivity spectra $`R(\omega )`$ were measured from 0.01 to 6.0 eV at various temperatures and polarizations. Just before the reflectivity measurements, we polished the crystals up to 0.3 $`\mu `$m using diamond pastes. To subtract surface scattering effects, a gold normalization technique was used. Using the Kramers-Kronig (KK) transformation, $`\sigma (\omega )`$ were obtained. To reduce errors of the KK analysis, we also independently measured $`\sigma (\omega )`$ in the frequency region of 1.5 – 5.0 eV using spectroscopic ellipsometry (SE). For such optically uniaxial samples, we should measure ratios of reflectances for p- and s-polarized light at several incident angles and then calculate the optical constants. The SE results agreed quite well with the KK results, demonstrating the validity of our KK analysis. Figures 1(a) and (b) show the polarization dependent $`\sigma (\omega )`$ of LSMO and NSMO at 290 K, respectively. Note that the behaviors of $`\sigma (\omega )`$ at 290 K are quite similar for both crystals, suggesting that the optical transitions related to the La and the Nd ions are located in the energy region higher than 4.0 eV. The $`\sigma (\omega )`$ in the ab-plane ($`E\parallel ab`$) are quite different from those along the c-axis ($`E\parallel c`$). \[Similar anisotropy could be seen in a bilayer manganite, La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub>.\] Gap values were estimated from the crossing points of the abscissa with linear extrapolations of $`\sigma (\omega )`$. For both crystals, $`\sigma (\omega )`$ for $`E\parallel ab`$ show broad peaks around 1.0 and 3.5 eV with $`\mathrm{\Delta }\sim `$ 0.2 eV, and $`\sigma (\omega )`$ for $`E\parallel c`$ show peaks around 1.2 and 4.0 eV with $`\mathrm{\Delta }\sim `$ 0.7 eV. Since the broad peaks located above 2.0 eV are similar to those in cubic perovskite manganites, these features can be assigned to O 2p $`\to `$ Mn $`e_g`$ transitions. Although the $`\sigma (\omega )`$ of both crystals are very similar at 290 K, their $`T`$-dependences are quite different. Figures 2(a) and (b) show the $`T`$-dependent $`\sigma (\omega )`$ of LSMO and NSMO for $`E\parallel ab`$. \[The $`T`$-dependences of $`\sigma (\omega )`$ for $`E\parallel c`$ are quite small.\] For LSMO, there are large spectral weight changes up to 2.0 eV. With decreasing $`T`$, the spectral weight below 0.8 eV is transferred to higher energy: $`\mathrm{\Delta }`$ increases significantly and the broad peak around 1.0 eV becomes narrower. For NSMO, there is little $`T`$-dependence in $`\sigma (\omega )`$ and $`\mathrm{\Delta }`$ is also nearly independent of $`T`$. From this comparison, we can argue that the large spectral changes in LSMO should come from the charge/orbital ordering associated with the CE-type AFM ordering. To get further insights, we compared our experimental results with theoretical predictions. Figure 3 shows $`\sigma (\omega )`$ calculated for Y<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> using the LMTO method. 
\[Even though we calculated for Y<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub>, the main features of $`\sigma (\omega )`$ are thought to be nearly the same as those of LSMO.\] Since we used a phenomenological Lorentzian broadening with $`\mathrm{\Delta }\epsilon \sim 0.13`$ eV, the value of $`\sigma (0)`$ is finite even in the insulating state. The overall features, especially the polarization dependence, of the theoretical $`\sigma (\omega )`$ are nearly the same as those in Fig. 1(a). Due to the limitation of the LMTO method for the higher-energy excitations, the theoretical value of $`\sigma (\omega )`$ around 4.0 eV is a factor of two smaller than the experimental value. As shown in Fig. 3, the oxygen displacement $`\delta `$ between Mn(1) and Mn(2) along the zigzag chain can induce large spectral weight changes below 2.0 eV. One of the important issues is what drives the charge/orbital ordering. As possible candidates, the intersite Coulomb repulsion and the JT distortion have been considered. Compared to LSMO, NSMO is known to have a shorter Mn-O-Mn straight bond, which results in a larger intersite Coulomb interaction and a larger hopping energy of the e<sub>g</sub> conduction electrons. However, as shown in the insets of Fig. 2, the measured values of $`\mathrm{\Delta }`$ for LSMO are larger than those of NSMO. This implies that the conduction electron screening in NSMO should be dominant, which leads to no magnetic spin ordering. These results are consistent with recent neutron scattering data which showed no magnetic ordering in NSMO and an AFM ordered phase in the MnO<sub>2</sub> layer of LSMO. Our first principles calculations revealed that the CE-type AFM ordering produces a strong orbital ordering even without the JT distortion. Once the orbital ordering occurs, the JT distortion will be induced. Then it will enhance the orbital ordering and stabilize the CE-type AFM ordering cooperatively. \[Consistent with this argument, the magnitude of the JT distortion in the ab-plane of LSMO is larger than that of NSMO.\] To clarify the effects of the orbital ordering and the JT distortion on the electronic structure of LSMO, we set up a TB model for the MnO<sub>2</sub> plane in LSMO by taking into account only the $`e_g`$ orbitals at the Mn sites. \[Here, the TB orbital $`|e_g\rangle `$ should be considered as a Wannier state, i.e. a superposition of the Mn 3d and the O 2p states.\] The model Hamiltonian can be written as $`H`$ $`=`$ $`-{\displaystyle \underset{ij\alpha \beta \sigma }{\sum }}t_{ij}^{\alpha \beta }d_{i\alpha \sigma }^+d_{j\beta \sigma }-J_H{\displaystyle \underset{i\alpha \sigma \sigma ^{}}{\sum }}\stackrel{}{S}_i\cdot \stackrel{}{\sigma }_{\sigma \sigma ^{}}d_{i\alpha \sigma }^+d_{i\alpha \sigma ^{}}`$ (2) $`-g{\displaystyle \underset{i\alpha \beta \sigma }{\sum }}\stackrel{}{Q}_i\cdot \stackrel{}{\tau }_{\alpha \beta }d_{i\alpha \sigma }^+d_{i\beta \sigma }+{\displaystyle \underset{i}{\sum }}{\displaystyle \frac{c}{2}}\stackrel{}{Q}_i^2,`$ where $`d_{i\alpha \sigma }`$ represents an annihilation operator for the state at site $`i`$ with orbital index $`\alpha `$ and spin index $`\sigma `$. It is noted that the $`e_g`$ states consist of two orbitals, $`|x^2-y^2\rangle `$ and $`|3z^2-r^2\rangle `$. 
The second term corresponds to the Hund coupling of the $`e_g`$ conduction electrons with the $`t_{2g}`$ localized spin $`\stackrel{}{S}_i`$ at site $`i`$, the third term to the JT-type electron-lattice interaction with coupling constant $`g`$, and the last term to the elastic energy of the JT phonon mode $`\stackrel{}{Q}=(Q_2,Q_3)`$. $`\stackrel{}{\sigma }`$ and $`\stackrel{}{\tau }`$ are Pauli matrices. The parameters in the TB Hamiltonian were determined as $`t_{dd\sigma }`$ = 0.7 eV, $`J_H`$ = 0.75 eV, $`g`$ = 3.85 eV/Å, and $`c`$ = 13.58 eV/Å<sup>2</sup>. Assuming the CE-type AFM ordering of the $`t_{2g}`$ spins, we obtained the density of states (DOS) without any JT distortion. As shown in Fig. 4(a), the DOS has three separate main peaks, which correspond to the bonding (B), non-bonding (N), and anti-bonding (A) states of the Mn(1) and the Mn(2) $`e_g`$ orbitals. The B states are fully occupied and separated from the unoccupied N states by a band gap of $`\simeq `$ 0.2 eV. Due to the peculiar nature of the 1D zigzag chain geometry in the CE-type AFM configuration, the $`|3x^2-r^2\rangle _1`$ orbitals at the Mn(1) sites are strongly hybridized with the $`|x^2-y^2\rangle _2`$ components of the $`e_g`$ orbitals at the neighboring Mn(2) sites along the chain, while the inter-chain hybridization is suppressed by the exchange splitting due to the AFM coupling. The strong hybridization along the zigzag chains separates the B and A states by $`\simeq `$ 2.0 eV. As a result, the $`|3x^2-r^2\rangle _1`$ orbital state dominates the occupancy at the Mn(1) site and leads to the orbital-ordered structure in the MnO<sub>2</sub> layer. The orbital-ordered electronic structure together with the JT distortion in the CE-type AFM state leads to interesting consequences for the interband transitions. While the Mn(1) site maintains its inversion symmetry, the Mn(2) site at the edge of the zigzag chain has no inversion symmetry due to the CE-type AFM ordering. Thus, the $`e_g`$-type Wannier state at the Mn(2) site becomes a mixture of the $`d`$\- and the $`p`$-orbital states. Since both types of Mn atoms are on the mirror plane with respect to the $`z`$-reflection, no dipole transition is allowed for $`E\parallel c`$. On the other hand, in the case of $`E\parallel ab`$, the dipole transition at the Mn(2) site becomes allowed because $`\langle \mathrm{B}_{Mn(2)}|p_{x,y}|\mathrm{N}_{Mn(2)}\rangle \ne 0`$. Therefore, we expect that $`\sigma (\omega )`$ for $`E\parallel c`$ should be strongly suppressed below 2.0 eV, while $`\sigma (\omega )`$ for $`E\parallel ab`$ has its first peak near 1.0 eV, which corresponds to the B $`\to `$ N interband transition. These TB analyses are consistent with the experimental result of Fig. 1(a) as well as the LMTO result of Fig. 3. In Fig. 4(b), we show the joint DOS (JDOS) projected on the Mn(2) site. When the frequency and polarization dependences of the dipole matrix element are neglected, $`\sigma (\omega )`$ is considered to be proportional to the JDOS, since the dipole transition at the Mn(2) site without inversion symmetry is the major contributor. The solid line represents the JDOS without any JT distortion, and the dashed line that with an oxygen distortion of $`\delta =0.10`$ a.u. We can obtain the JT distortions $`Q_2\simeq 3\delta \sqrt{2}`$ and $`Q_3\simeq 3\delta \sqrt{6}`$ for Mn(1), and $`Q_2\simeq 0`$ and $`Q_3\simeq 3\delta \sqrt{6}`$ for Mn(2), by requiring that the volumes of the octahedra remain unchanged. The peak near 1.0 eV corresponds to the B $`\to `$ N transition, and the peak near 2.0 eV corresponds to the B $`\to `$ A transition.
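A minimal numerical sketch of such a TB model is given below. It diagonalizes a single CE-type zigzag chain in real space, in the $`J_H\to \mathrm{\infty }`$ limit where the spin degree of freedom can be dropped, with standard dd$`\sigma `$ Slater-Koster hopping matrices and an on-site splitting E_JT standing in for the JT term of Eq. (2). Only $`t_{dd\sigma }`$ is taken from the text; E_JT, the chain length, and the periodic ring geometry are assumptions of this illustration, not the calculation behind Fig. 4.

```python
import numpy as np

t, E_JT = 0.7, 0.3            # dd-sigma hopping from the text (eV); JT splitting illustrative
Ncell = 100
Ns = 4 * Ncell                # 4 Mn sites per CE unit cell, closed into a ring

# e_g basis per site: (|3z^2-r^2>, |x^2-y^2>); Slater-Koster dd-sigma blocks
Tx = -t / 4 * np.array([[1.0, -np.sqrt(3.0)], [-np.sqrt(3.0), 3.0]])
Ty = -t / 4 * np.array([[1.0,  np.sqrt(3.0)], [ np.sqrt(3.0), 3.0]])

# CE zigzag geometry: bond directions repeat x, x, y, y, ...
bdir = ['x' if i % 4 in (0, 1) else 'y' for i in range(Ns)]

H = np.zeros((2 * Ns, 2 * Ns))
for i in range(Ns):           # bond i connects sites i and (i+1) mod Ns
    j = (i + 1) % Ns
    blk = Tx if bdir[i] == 'x' else Ty
    H[2*i:2*i+2, 2*j:2*j+2] += blk
    H[2*j:2*j+2, 2*i:2*i+2] += blk.T

# Bridge sites Mn(1) have both bonds along the same axis.  The JT term lowers
# |3x^2-r^2> on x-segments and |3y^2-r^2> on y-segments.
v3x = np.array([-0.5,  np.sqrt(3.0) / 2])   # |3x^2-r^2> in this basis
v3y = np.array([-0.5, -np.sqrt(3.0) / 2])   # |3y^2-r^2>
for i in range(Ns):
    if bdir[i - 1] == bdir[i]:              # bridge site
        v = v3x if bdir[i] == 'x' else v3y
        H[2*i:2*i+2, 2*i:2*i+2] += -E_JT * (2.0 * np.outer(v, v) - np.eye(2))

E = np.linalg.eigvalsh(H)
nB = Ns // 2                  # half doping: one e_g electron per two Mn sites
print("B bandwidth: %.3f eV,  B-N gap: %.3f eV" % (E[nB-1] - E[0], E[nB] - E[nB-1]))
# Re-running with larger E_JT lets one check the band-narrowing trend
# invoked in the text for the T-dependence of the MIR peak.
```

The lowest quarter of the levels plays the role of the B band at half doping; the printed gap separates it from the non-bonding group.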
The overall shape of the JDOS in Fig. 4(b) is in close agreement with the LMTO result of Fig. 3, but the B $`\to `$ A feature turns out to be very weak in the experimental spectrum shown in Fig. 2(a). In Fig. 4(b), it is emphasized that an increasing JT distortion results in a narrowing of the B band, and consequently of the width of the B $`\to `$ N transition as well, which is quite consistent with the experimental observation of the $`T`$-dependence of $`\sigma (\omega )`$. As $`T`$ decreases, fluctuations in the CE-type AFM ordering are suppressed, the JT distortion increases, and the orbital ordering is enhanced. The observed strong $`T`$-dependence of $`\sigma (\omega )`$ of LSMO is the result of this cooperative enhancement of the orbital ordering. Even though the $`T`$-dependence of the MIR peak below $`T_{CO}`$ in LSMO can be well understood in terms of the orbital-ordered electronic structure with the JT distortion, the $`\sigma (\omega )`$ of LSMO above $`T_{CO}`$ and of NSMO at all $`T`$ still exhibit similar MIR features. As in the case of Fe<sub>3</sub>O<sub>4</sub>, these could be attributed to local orbital fluctuations without long-range CE-type AFM ordering or charge ordering. In summary, we investigated the orbital ordering in La<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> using optical conductivity analyses. With decreasing temperature, the peak corresponding to the bonding $`\to `$ non-bonding transition shifts to higher frequency and becomes narrower. Comparing with the optical conductivity of Nd<sub>1/2</sub>Sr<sub>3/2</sub>MnO<sub>4</sub> and with theoretical results, we conclude that such behaviors can be explained by the CE-type orbital ordering within the MnO<sub>2</sub> layers, stabilized by the Jahn-Teller distortion. This work was financially supported by the Ministry of Education through the Basic Science Research Institute Program No. BSRI-98-2416, by the Korea Science and Engineering Foundation through RCDAMP of Pusan National University, and by the Ministry of Science and Technology through grant No. I-3-061. This work was also supported by a Grant-In-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture and from Precursory Research for Embryonic Science and Technology (PRESTO), Japan Science and Technology Corporation (JST). This work was also supported in part by NEDO.
## 1 Introduction One of the essential features of a living system is that most of its elements possess the ability to replicate themselves by copying their own information. The copying mechanisms in the various elements are mostly the same, but their effectiveness is not perfect owing to chemical and quantum fluctuations. If copying were perfect, there would be no change of species in the time evolution of the system. The random fluctuations involved in the copying process are, hence, another important ingredient in understanding the time evolution of the system. The natural consequences of replication with random fluctuations are of great interest since they may explain the main characteristics of living systems. Admitting the complications of treating a real living system, one may introduce a simplified version of the system that nevertheless comprises its essential features with respect to replication and the distribution of information numbers. In this note, we shall consider the following simplified version. The system consists of objects that each carry a characteristic information element belonging to the n-fold tensor product of a base space, for some nonnegative integer n. We then define the information number of an object to be N when its element belongs to the N-fold tensor product space. The elements in the base space are fixed and their total number, m, is finite. In the real life system, for example, these base elements correspond to the four nucleobases, namely adenine, guanine, cytosine, and thymine. The information ensemble space is the place where the dynamics of interest occurs. A solution (moduli) space is defined as the subset of the information ensemble space restricted to the objects whose information elements allow them to replicate themselves. We assume that the replication mechanism is essentially the same for every base element and that the unit processes copying base information are independent events. Due to the chemical and quantum mechanical fluctuations, there may be copying errors that map one specific element of the base space to an element of the ensemble space. Of course, the errors are quite small, but they are nevertheless a main source of the dynamical flow of the system over many generations. The errors include the change of the original base element into another base element, omission of the original base element, and mapping of the base element into more than one base element. The probability of mapping a base element to more than two base units is quite small compared to the total probability of errors and, hence, the mean of the fluctuation is very close to that of the case of no errors. The mean of the errors is slightly inclined toward growth of the average number after one copying event, because of an asymmetry: the mappings of a base element to more than two base elements have no counterparts on the other side of the symmetry point, the mapping to exactly one element. The unit time between successive generations is taken to be the same throughout, and the flow of time is measured by the number of generations. This implies that all the objects in the system at a given time have exactly the same number of ancestors born after the initial time. The next generation is defined as the set of two descendants from each object of the current generation that belongs to the solution category. The solution space is determined as a function of the environment and of the system itself.
The properties of the solution space are mostly unknown for realistic living systems. We shall introduce the minimal assumption that the density of solutions over the total ensemble space can be defined as a smooth function. This means that the solutions are densely distributed over the whole information configuration space. The basic features of the time evolution of the system are as follows. Because of the copying errors, the replication of all the objects of the original generation that belong to the solution space yields a next-generation set whose elements may or may not belong to the solution space. After a unit time, the next generation again replicates those of its elements that belong to the solution space. Repeating this process from generation to generation, the generations flow through the information configuration space as time passes, but the flow is restricted to regions near the solution space. The errors propagate and the initial set diffuses into new regions of the configuration space. What are the properties of the dynamics of the system as the number of generations grows? In this note, we shall concentrate on the information number fluctuations of the system objects that result from the replication. In particular, we will consider the averaged information number and its variance as functions of time. In this way, we will demonstrate that there is a finite probability for the appearance of objects carrying a large number of information units compared to that of the initial ancestor. ## 2 Self-replicating system and its evolution As mentioned in the introduction, the self-replicating system consists of objects that carry an information element belonging to the information configuration space. To define the configuration space, we first introduce a base space $`B`$ with a finite number of elements: $$B=\{b_1,b_2,\mathrm{\dots },b_m\}.$$ (2.1) The information configuration space is then the direct sum of all the n-fold tensor products, $$E=\bigoplus _{n=0}^{\mathrm{\infty }}B^n,$$ (2.2) where $`B^n`$ denotes the n-fold tensor product of the base space B. For each element of the information configuration space, we specify its information number by the order of the tensor product to which it belongs. As a subset of the ensemble space, the solution space $`S_k`$ is composed of the information elements whose objects have the ability to replicate themselves from the $`k`$-th generation to the $`(k+1)`$-th generation. As remarked earlier, we will assume that the solution space elements are densely distributed over the information ensemble space, so that one may define the density of solutions $`\rho (n,k)`$, $$\rho (n,k)=Z_\rho \frac{N(S_k\cap B^n)}{N(B^n)},$$ (2.3) as a smooth function of $`n`$, where $`N(A)`$ is the number of elements in a set $`A`$ and $`Z_\rho `$ is a normalization constant. The number fluctuation involved in the replication event of a base element follows a distribution $`d(1+\mu _0,\sigma _0)`$ with mean $`1+\mu _0`$ and standard deviation $`\sigma _0`$. If one denotes by $`p_l`$ the probability that $`l`$ base elements result from a unit base in one copy, one may infer that $`p_0\simeq p_2=p`$ ($`p_2-p_0\ll p`$), $`p_3\ll p`$, and the contributions of all higher modes may be ignored, as explained earlier. By an explicit computation, one finds that $`\sigma _0^2\simeq 2p`$ and $`\mu _0=2p_3+p_2-p_0\ll \sigma _0^2`$.
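The moment relations just quoted are elementary to verify. A minimal check, with illustrative values for the $`p_l`$ (the exact numbers are not given in the text):

```python
import numpy as np

# Per-base copy-number distribution: l copies with probability p_l.
# Illustrative values with p0 ~ p2 ~ p and p3 << p, as assumed in the text.
p0, p2, p3 = 1e-4, 1.1e-4, 1e-6
p1 = 1.0 - p0 - p2 - p3
l = np.array([0, 1, 2, 3])
p = np.array([p0, p1, p2, p3])

mean = (l * p).sum()
var = ((l - mean)**2 * p).sum()
print("mu0      =", mean - 1.0, "  vs  2*p3 + p2 - p0 =", 2*p3 + p2 - p0)
print("sigma0^2 =", var,        "  vs  2p ~ p0 + p2   =", p0 + p2)
```

The first identity is exact; the second holds up to corrections of order $`\mu _0^2`$ and $`p_3`$, and the sample values also confirm $`\mu _0\ll \sigma _0^2`$.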
As noted, the variance $`\sigma _0^2`$ is much smaller than one, and we use the approximation of describing the distribution as if $`n`$ were a continuous parameter. When one replicates an object carrying $`n`$ information units, the number fluctuation associated with this is described by the normal distribution $$\delta n=z(n(1+\mu _0),\sqrt{n}\sigma _0),$$ (2.4) owing to the independence of all the replication events as well as the central limit theorem. The information content of the system is likewise given as a subset $`A_k`$ of the configuration space at the k-th generation. The distribution of the system information over the ensemble space at the k-th generation is then described by $$\varphi (n,k)=Z_\varphi N(A_k\cap B^n),$$ (2.5) where $`\varphi (n,0)`$ represents the initial distribution of the information and $`Z_\varphi `$ denotes a normalization factor. With the help of the distribution function in (2.4), one finds that the density distribution of the system in the next generation is determined by the diffusion process from the set $`A_k\cap S_k`$ at the k-th generation. Namely, all the elements in $`A_k\cap S_k`$ are doubled by replication and the number distribution of each descendant is described by (2.4). This is summarized in terms of the density description by $$\varphi (n,k+1)=\int _0^{\mathrm{\infty }}dn^{\prime }G(n,n^{\prime })\rho (n^{\prime },k)\varphi (n^{\prime },k),$$ (2.6) where $`G(n,n^{\prime })`$ is the propagator of the distribution, $$G(n,n^{\prime })=\frac{1}{\sqrt{2\pi n^{\prime }}\sigma _0}\mathrm{Exp}\left\{-\frac{(n-n^{\prime }-n^{\prime }\mu _0)^2}{2n^{\prime }\sigma _0^2}\right\}.$$ (2.7) In (2.6), the factor of two from the doubling as a result of replication is absorbed into the normalization factor of the density function. By mathematical induction, the density of the system information at an arbitrary generation is obtained from the initial data by $$\varphi (n,k)=\int _0^{\mathrm{\infty }}dn^{\prime }P(n,n^{\prime };k)\varphi (n^{\prime },0),$$ (2.8) where the propagator is defined as $$P(n,n^{\prime };k)=\left(\prod _{i=1}^{k-1}\int _0^{\mathrm{\infty }}dn_i\right)\left(\prod _{j=0}^{k-1}G(n_{j+1},n_j)\rho (n_j,j)\right),$$ (2.9) with $`n_k=n`$ and $`n_0=n^{\prime }`$. The propagator in (2.6) can be represented in a differential form for small $`\mu _0`$ and $`\sigma _0^2`$. To measure the smallness of the mean and the variance, we introduce new parameters $`\mu `$ and $`\sigma `$ by $$\mu _0=\mu ϵ,\sigma _0^2=\sigma ^2ϵ,$$ (2.10) such that the new parameter $`\sigma ^2`$ is $`O(1)`$. To obtain the differential form of the propagation, we define the infinitesimal time evolution by $$\psi (s,t+ϵ)=\int _0^{\mathrm{\infty }}ds^{\prime }G(s,s^{\prime })\psi (s^{\prime },t),$$ (2.11) where one measures the time by $`ϵ`$ multiplied by the number of generations $`k`$. Introducing a variable $`\xi `$ by $$\frac{s-s^{\prime }(1+\mu ϵ)}{\sigma \sqrt{s^{\prime }}}=\xi \sqrt{ϵ},$$ (2.12) one may rewrite the above integral as $`\psi (s,t+ϵ)={\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}d\xi {\displaystyle \frac{(1+b\xi )}{(1+\mu ϵ)\sqrt{2\pi }}}e^{-\xi ^2/2}\psi ({\displaystyle \frac{(1+2b\xi +2b^2\xi ^2)s}{1+\mu ϵ}},t)+O(ϵ^2)`$ $`={\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}d\xi e^{-\xi ^2/2}{\displaystyle \frac{(1+b\xi )}{\sqrt{2\pi }}}[(1-\mu ϵ)\psi +s(2b\xi +2b^2\xi ^2-\mu ϵ)\psi ^{\prime }+2b^2\xi ^2s^2\psi ^{\prime \prime }]+O(ϵ^2)`$ (2.13) with $`b=\sigma \sqrt{ϵ}/\sqrt{4(1+\mu ϵ)s}`$. Integrating over the $`\xi `$-variable, one obtains the following differential equation, $$\frac{\partial }{\partial t}\psi (s,t)=\left[\frac{\sigma ^2}{2}\left(s\frac{\partial ^2}{\partial s^2}+2\frac{\partial }{\partial s}\right)-\mu s\frac{\partial }{\partial s}-\mu \right]\psi (s,t),$$ (2.14) where the terms of $`O(ϵ^2)`$ are ignored.
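Equation (2.4) is easy to explore by direct simulation. The sketch below follows many independent lineages under the Gaussian replication rule and compares the sample mean and variance with the exact recursions implied by (2.4); all parameter values are illustrative, and the solution-space density is set to unity (the uniform-density case treated later).

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, sig0, n0, K, M = 1e-3, 0.05, 100.0, 200, 20000   # illustrative parameters

# M independent lineages; one replication step draws
#   n_{k+1} ~ Normal( n_k (1 + mu0), sqrt(n_k) * sig0 ),   cf. Eq. (2.4).
n = np.full(M, n0)
for k in range(K):
    n = rng.normal(n * (1.0 + mu0), sig0 * np.sqrt(n))
    n = np.maximum(n, 0.0)        # the information number cannot go negative

# Exact moment recursions for the unconstrained Gaussian process:
#   E[n_{k+1}]   = (1 + mu0) E[n_k]
#   Var[n_{k+1}] = (1 + mu0)^2 Var[n_k] + sig0^2 E[n_k]
m, v = n0, 0.0
for k in range(K):
    v = (1.0 + mu0)**2 * v + sig0**2 * m
    m = (1.0 + mu0) * m
print("mean:", n.mean(), "  expected:", m)
print("var :", n.var(),  "  expected:", v)
```

With these parameters the clamp at zero is essentially never active, so the sample moments track the recursions closely; the growth of the variance with $`k`$ is the diffusion that the propagator (2.7) encodes.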
Alternatively, equation (2.14) may be presented as $`{\displaystyle \frac{\partial }{\partial t}}\psi (r^2,t)=-H(𝐩,𝐱)\psi (r^2,t)`$ (2.15) with the Hamiltonian $`H(𝐩,𝐱)\equiv {\displaystyle \frac{\sigma ^2}{8}}𝐩^2+{\displaystyle \frac{i\mu }{2}}𝐱𝐩+\mu ={\displaystyle \frac{\sigma ^2}{8}}(𝐩+{\displaystyle \frac{2i\mu }{\sigma ^2}}𝐱)^2+{\displaystyle \frac{\mu ^2}{2\sigma ^2}}𝐱^2,`$ (2.16) where the definition $`r^2=s=x_1^2+x_2^2+x_3^2+x_4^2`$ and the four-dimensional gradient $`\nabla =i𝐩`$ are introduced. Although the Hamiltonian is not a Hermitian operator, the time evolution of the system is well defined. The inclusion of the density contribution in the time evolution is a complicated problem. Some speculations on the dynamics with a generic density function will be relegated to the conclusion. Here, let us consider the case where it is possible to write the density function in the form $`\rho (s,t)=\rho _0(t)(1-ϵU(s,t)+O(ϵ^2)),`$ (2.17) where the function $`U(s,t)`$ satisfies the requirement $`ϵ\underset{L\to \mathrm{\infty }}{lim}{\displaystyle \frac{1}{L}}{\displaystyle \int _0^L}ds|U(s,t)|\ll 1.`$ (2.18) Note that the overall factor $`\rho _0`$ will be absorbed into the normalization of the distribution function $`\varphi `$ without changing the probability amplitude. Thus the time evolution of the system function $`\varphi (s,t)`$ for this case can easily be identified as $$\varphi (r^2,t+ϵ)=[1-ϵH(𝐩,𝐱)][1-ϵU(r^2,t)]\varphi (r^2,t)+O(ϵ^2),$$ (2.19) from the combination of (2.6), (2.14) and (2.17). Hence the differential equation describing the time evolution with the density function (2.17) is $`{\displaystyle \frac{\partial }{\partial t}}\varphi (r^2,t)=-[H(𝐩,𝐱)+U(r^2,t)]\varphi (r^2,t).`$ (2.20) As discussed in the introduction, the form of the potential $`U(s,t)`$ may depend upon the system function $`\varphi (s,t)`$, because the system itself acts as an environmental element. However, we won’t discuss this possibility further in the following, for simplicity. ## 3 Solution and statistics of the self-replicating system In the preceding section, we discussed the time evolution process and the differential equation governing the system dynamics. For the case where one may characterize the solution density contribution by the potential $`U`$, the time evolution of the system follows the differential equation (2.20). Let us first consider the case $`U=0`$, which implies that the density function is constant in its argument. Namely, the solution space distribution is uniform over the information configuration space. To find the dynamics, it is convenient to construct the kernel $`K(r^2,r^{\prime 2};t)`$ defined as the solution of equation (2.20) with the initial condition $`\varphi _0(r^2)=\delta (r^2-r^{\prime 2}).`$ (3.1) With the help of the kernel, the solution for a general initial condition $`\varphi (r^2,0)`$ is obtained from $`\varphi (r^2,t)={\displaystyle \int _0^{\mathrm{\infty }}}d(r^{\prime 2})K(r^2,r^{\prime 2};t)\varphi (r^{\prime 2},0).`$ (3.2) In order to find the kernel, we first introduce the function $`\stackrel{~}{\varphi }(r^2,t)`$ by $`\varphi (r^2,t)=e^{\mu r^2/\sigma ^2}\stackrel{~}{\varphi }(r^2,t),`$ (3.3) and insert this into equation (2.20).
One finds that the equation for $`\stackrel{~}{\varphi }`$ now reads $`{\displaystyle \frac{\partial }{\partial t}}\stackrel{~}{\varphi }(r^2,t)=-\stackrel{~}{H}(𝐩,𝐱)\stackrel{~}{\varphi }(r^2,t)`$ (3.4) with the new Hamiltonian $`\stackrel{~}{H}(𝐩,𝐱)={\displaystyle \frac{\sigma ^2}{8}}𝐩^2+{\displaystyle \frac{\mu ^2}{2\sigma ^2}}𝐱^2.`$ (3.5) It is interesting to note that this is the Hamiltonian describing a simple harmonic oscillator in four-dimensional flat Euclidean space. The kernel for equation (3.4) is simply a product of the propagators for the one-dimensional simple harmonic oscillator: $`\stackrel{~}{K}(𝐱,𝐱^{\prime };t)=\left({\displaystyle \frac{\mu }{2\pi \mathrm{sinh}\mu t}}\right)^2\mathrm{Exp}\left\{-{\displaystyle \frac{2\mu }{\sigma ^2\mathrm{sinh}\mu t}}[(𝐱^2+𝐱^{\prime 2})\mathrm{cosh}\mu t-2𝐱𝐱^{\prime }]\right\},`$ (3.6) with the solution of the differential equation (3.4) given by $`g(𝐱,t)={\displaystyle \int d^4x^{\prime }\stackrel{~}{K}(𝐱,𝐱^{\prime };t)g(𝐱^{\prime },0)}`$ (3.7) for an arbitrary initial condition $`g(𝐱,0)`$. The kernel for our system is then obtained by taking $`g(𝐱^{\prime },0)=e^{-\mu r^{\prime 2}/\sigma ^2}\delta (r^{\prime 2}-r_0^2)`$ in (3.7) and multiplying by the factor $`e^{\mu r^2/\sigma ^2}`$. Using the integral representation of the Bessel function, one finds that the expression for the kernel reads $`K(r^2,r_0^2;t)=e^{\mu r^2/\sigma ^2}{\displaystyle \int d^4x^{\prime }\stackrel{~}{K}(𝐱,𝐱^{\prime };t)e^{-\mu r^{\prime 2}/\sigma ^2}\delta (r^{\prime 2}-r_0^2)}`$ $`={\displaystyle \frac{\mu r_0}{2r\mathrm{sinh}\mu t}}\mathrm{Exp}\left\{{\displaystyle \frac{2\mu }{\sigma ^2}}[(r^2-r_0^2)-(r^2+r_0^2)\mathrm{coth}\mu t]\right\}I_1({\displaystyle \frac{2\mu rr_0}{\sigma ^2\mathrm{sinh}\mu t}}),`$ (3.8) where $`I_1(x)`$ is the modified Bessel function of the first kind. For consistency of equation (3.2), the kernel must satisfy the relation $`K(r^2,r^{\prime 2};t+t^{\prime })={\displaystyle \int _0^{\mathrm{\infty }}}d(z^2)K(r^2,z^2;t)K(z^2,r^{\prime 2};t^{\prime }),`$ (3.9) which may be obtained by applying equation (3.2) twice, for the time intervals $`[0,t^{\prime }]`$ and $`[t^{\prime },t+t^{\prime }]`$. Upon using the expression (3.8), this composition rule may be checked explicitly by a straightforward computation with the help of formulas for definite integrals involving Bessel functions. Following the standard rules, the mean $`M(t)`$ and the variance $`V(t)`$ of the information number are defined by $`M(t)`$ $`\equiv `$ $`{\displaystyle \frac{\int _0^{\mathrm{\infty }}ds\,s\varphi (s,t)}{\int _0^{\mathrm{\infty }}ds\,\varphi (s,t)}},`$ (3.10) $`V(t)`$ $`\equiv `$ $`{\displaystyle \frac{\int _0^{\mathrm{\infty }}ds\,s^2\varphi (s,t)}{\int _0^{\mathrm{\infty }}ds\,\varphi (s,t)}}-M^2(t).`$ (3.11) For the case of the initial condition $`\varphi _0(s)=\delta (s-s_0)`$, one may compute the mean and variance explicitly, noting that $`\varphi (s,t)`$ is given by $`K(s,s_0;t)`$.
For this initial condition, we first compute the total amplitude $`P(t)`$; the resulting expression reads $`P(t)\equiv {\displaystyle \int _0^{\mathrm{\infty }}}ds\,\varphi (s,t)=1-\mathrm{Exp}\left\{-{\displaystyle \frac{2\mu s_0e^{-\mu t}}{\sigma ^2\mathrm{sinh}\mu t}}\right\}.`$ (3.12) As functions of $`t`$, $`\mu `$ and $`\sigma `$, the mean and variance for this initial condition are explicitly $`M(t)`$ $`=`$ $`s_0e^{2\mu t}P^{-1}(t),`$ (3.13) $`V(t)`$ $`=`$ $`\left[\sigma ^2s_0P(t)e^{3\mu t}\mu ^{-1}\mathrm{sinh}\mu t-s_0^2\mathrm{Exp}\{4\mu t-{\displaystyle \frac{2\mu s_0e^{-\mu t}}{\sigma ^2\mathrm{sinh}\mu t}}\}\right]P^{-2}(t).`$ (3.14) By taking the limit $`\mu \to 0`$ in these expressions, one may obtain the mean and variance of the system with $`\mu =0`$: $`M_0(t)`$ $`=`$ $`{\displaystyle \frac{s_0}{1-e^{-\frac{\lambda }{t}}}},`$ (3.15) $`V_0(t)`$ $`=`$ $`{\displaystyle \frac{s_0^2e^{\frac{\lambda }{t}}}{(e^{\frac{\lambda }{t}}-1)^2}}\left[{\displaystyle \frac{2t}{\lambda }}(e^{\frac{\lambda }{t}}-1)-1\right],`$ (3.16) where $`\lambda =2s_0/\sigma ^2`$. For $`t\ll \lambda `$, one finds that the mean and variance are $`M_0(t)\simeq s_0,V_0(t)\simeq 2s_0^2t/\lambda ,`$ (3.17) which agree with those computed from the initial condition $`\varphi (s,0)=\delta (s-s_0)`$. On the other hand, for $`t\gg \lambda `$, we have $`M_0(t)\simeq {\displaystyle \frac{\sigma ^2t}{2}},V_0(t)\simeq \left({\displaystyle \frac{\sigma ^2t}{2}}\right)^2.`$ (3.18) Hence the mean and variance are independent of the initial parameter $`s_0`$ and grow as a power law in time as the time gets larger. With nonvanishing $`\mu `$, one finds that as long as $`|\mu t|\ll 1`$, the mean and variance behave in the same way as in the case $`\mu =0`$. On the other hand, for the case $`\mu t\gg 1`$, the mean and variance are approximated by $`M(t)`$ $`\simeq `$ $`{\displaystyle \frac{s_0e^{2\mu (t+\lambda )}}{e^{2\mu \lambda }-1}},`$ (3.19) $`V(t)`$ $`\simeq `$ $`{\displaystyle \frac{s_0^2e^{2\mu (2t+\lambda )}}{(e^{2\mu \lambda }-1)^2}}\left[{\displaystyle \frac{1}{\mu \lambda }}(e^{2\mu \lambda }-1)-1\right].`$ (3.20) For negative $`\mu `$, the large-time (i.e. $`t\gg |\mu |^{-1}`$) behaviors are simply $`M(t)\simeq \sigma ^2/(4|\mu |)`$ and $`V(t)\simeq \sigma ^4/(4\mu )^2`$, and hence only in this case are the mean and variance of the system bounded from above for all time. One may further consider the case of a nonvanishing potential $`U(s,t)`$. Of course, it is not possible to solve the system explicitly without an explicit form of the potential. However, the influence of the potential on the time evolution of the system is not so complicated to understand. Since the potential $`U`$ appears as a weighting factor $`e^{-ϵU(s,t)}`$ on the amplitude in each propagation interval $`[t,t+ϵ]`$, a larger positive value of $`U(s,t)`$ at a certain position results in a stronger suppression of the amplitude, while a larger negative value leads to a stronger amplification of the probability amplitude. In particular, for the case where the harmonic potential term in (3.5) dominates the potential $`U`$ at large $`s`$, e.g. $`|U(s,t)|\ll \mu ^2s/2\sigma ^2\text{ as }s\to \mathrm{\infty },`$ (3.21) the large-time features of the system mostly agree with those in (3.19) and (3.20). For the case where $`U`$ is much larger than $`U_0=\mu ^2s/2\sigma ^2`$, the growth of the mean and variance is relatively suppressed. In the other limiting case, $`U/U_0\ll -1`$, the mean and variance grow much faster in time compared to the case $`U=0`$.
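The limits (3.17) and (3.18) can be confirmed by simply evaluating the closed forms (3.15) and (3.16); a minimal check with illustrative parameters:

```python
import numpy as np

s0, sigma = 10.0, 1.0
lam = 2.0 * s0 / sigma**2                   # lambda = 2 s0 / sigma^2

def M0(t):                                  # Eq. (3.15)
    return s0 / (1.0 - np.exp(-lam / t))

def V0(t):                                  # Eq. (3.16)
    e = np.exp(lam / t)
    return s0**2 * e / (e - 1.0)**2 * ((2.0 * t / lam) * (e - 1.0) - 1.0)

for t in (0.05 * lam, 20.0 * lam):          # deep in the early and late regimes
    print("t =", t)
    print("  M0 = %.4g   early limit s0 = %.4g   late limit sigma^2 t/2 = %.4g"
          % (M0(t), s0, sigma**2 * t / 2))
    print("  V0 = %.4g   early limit 2 s0^2 t/lam = %.4g   late limit (sigma^2 t/2)^2 = %.4g"
          % (V0(t), 2 * s0**2 * t / lam, (sigma**2 * t / 2)**2))
```

At $`t=0.05\lambda `$ the printed values sit on the early-time asymptotes of (3.17), and at $`t=20\lambda `$ on the late-time asymptotes of (3.18), to within a few percent.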
What happens in the case where the influence of the density cannot be represented by the potential description? In this case, the amplification by the density multiplication in each generation is $`O(1)`$ and becomes huge after a large number of generations have passed. The distribution function of the system will be sharply peaked at the maximum of the density after a large number of generations. This is characterized by the mean $`M(t)\simeq \mathrm{Max}(\rho (s,t))`$ with $`V(t)\simeq 0`$. Thus, in this case one may conclude that there is no evolution of the system at all after a large time, as far as the probability distribution of the system is concerned. One may speculate that the real life system resides in between the two limiting cases of the $`O(1)`$ variation of the density function and $`U=0`$. ## 4 Outlook As a simplified version of the real life system, we have modeled a self-replicating system whose dynamics can be projected onto the space of information configurations. Each object in the system carries an information element in the configuration space. The unit copying process for the replication of the base information is characterized by number fluctuations with mean $`\mu _0`$ and standard deviation $`\sigma _0`$, and each unit copying is assumed to be an independent event. We define the solution space by the condition that its elements have enough information to replicate themselves to the next generation. To characterize the properties of the solution space, we define the density function of the solution space over the information configuration space. We have shown that the dynamics of the system is described with the help of the time evolution kernel and may be mapped onto the evolution of the four-dimensional harmonic oscillator for the case of a uniform density of the solution space. In addition, we have discussed the system dynamics for various cases of nonuniform density. In particular, we have shown that the mean and variance of the information numbers over the system configuration grow with time, with a few exceptions. The exceptions comprise the case of negative $`\mu `$, which is improbable in a realistic system. In our model, some details of the real life system are not included, for simplicity. For example, in the real life system the unit time intervals between generations are not all the same but depend upon the generation and the information number variable. The number of descendants from a system object after a generation is also dependent on the time and the number variable. Moreover, there are two kinds of replication processes, namely asexual and sexual reproduction. There may also be a small probability of producing something other than base elements in the unit copying process, though it is expected to be extremely small. As mentioned earlier, it may be the case that the density of the solution space is a functional of the system distribution $`\varphi (s,t)`$. In this case, the equation governing the time evolution inevitably becomes nonlinear in $`\varphi (s,t)`$, and this nonlinear effect may be important in understanding the decelerating force on the population growth due to the limitation of resources. These kinds of fluctuations may be important in understanding the local dynamical evolution within a certain local time interval $`[t,t+\mathrm{\Delta }t]`$.
However, if the fluctuations are averaged over long times, the effects of these deviations may be effectively ignored without introducing any serious change in the global picture. Furthermore, the detailed description of these variations may be incorporated by slight modifications of the model presented here. Nevertheless, the effect of these variations may be crucial in comparing the theory to the characteristics of the real life system, because observations of the real life system are confined to a short present-day time interval. In this sense, the experimental implications of our model need more investigation, because how to extract the essential features of the real system dynamics from the present data depends considerably on the detailed form of the local fluctuations. ACKNOWLEDGEMENTS The authors would like to thank Choonkyu Lee, Joohan Lee, Hyunsoo Min, and Jae Hyung Yee for enlightening discussions.
## 1 Introduction Quantum field theory permits the existence of states where the renormalized energy density can become arbitrarily negative in regions of spacetime, even though the total energy is always positive. Negative energy is an essential ingredient in many bizarre effects, including wormholes, warp drives, and time machines, and it may be used to violate the $`2^{nd}`$ law of thermodynamics. Fortunately (or unfortunately!) there appear to be severe restrictions on the magnitude and duration of negative energies that might occur in a quantum field. One form of these restrictions are the “quantum inequalities”, originally proposed by Ford and Roman and studied by numerous authors since, which essentially state that large amounts of negative energy can only be “seen” for very short intervals of time. These inequalities have been used to place stringent limitations on warp drive and wormhole geometries. Recently, Ford and Roman proposed the “quantum interest conjecture” and proved it for delta function pulses of negative energy for massless scalar fields in 2D and 4D Minkowski spacetime. This conjecture is a consequence of the quantum inequalities (QI’s), and states that any negative energy pulse (the “loan”) must be accompanied (“repaid”) by a positive energy pulse within a certain maximum time interval, and the larger the separation of the pulses, the larger the magnitude of the positive pulse must be relative to the negative pulse (i.e., it is repaid with “interest”). At first glance this statement may not seem too profound: after all, the total energy must be positive, so if there is a location with negative energy there will be compensating positive energy somewhere in the spacetime. But the quantum interest conjecture tells us a lot more about the nature of negative energies in free fields: negative energy is always in close proximity to an entourage of positive energy. This, for instance, has immediate consequences for attempts to violate the $`2^{nd}`$ law of thermodynamics. For suppose negative energies were “substantial” enough that one could in principle reflect only the negative energy part of the flux produced by an accelerating mirror, as shown in Figure 1 (a variant of a device first proposed by Davies, who used it to construct a reversible process that effectively transferred energy from a cold body to a hot one without doing work). The resultant stream of negative energy could be sent far enough away from the device that one could reasonably apply the free-field quantum inequalities to the stream. Even though each pulse within the stream may be consistent with the original quantum inequality, the stronger quantum interest conjecture strictly forbids such a flux of negative energy. This implies that the mirror device in Figure 1 cannot exist; if we want to reflect negative energy we must reflect its supporting positive energy, which is at least as large in magnitude. Thus one cannot subject a hot body to a pure flux of negative energy to lower its entropy (at least using scalar quantum fields), as has been suggested. In this paper, using a simple scaling argument, we present a proof of quantum interest for arbitrary distributions of negative energy of scalar fields in 4D Minkowski spacetime (slightly weaker results are obtained in 2D Minkowski spacetime). We do this first for the massless scalar field in section 3, after introducing the quantum inequalities in section 2.
In section 4 we show that a massive scalar field has stronger constraints on the magnitude and duration of negative energies than a massless field, thus making the results of section 3 applicable to both types of scalar field. In section 5 we briefly comment on the possibility of extending quantum interest to the Electromagnetic and Dirac fields, to curved spacetimes, and to situations in Minkowski space where mirror-like boundary conditions are imposed on the fields. ## 2 Quantum inequalities The quantum inequalities can be stated as follows. An inertial observer samples the local energy density $`\rho (t)`$ over a period of time with a sampling function $`g(t)`$ to obtain an average energy density $`\langle \rho \rangle `$: $$\langle \rho \rangle =\int _{-\mathrm{\infty }}^{\mathrm{\infty }}g(t)\rho (t)dt.$$ (1) The only conditions imposed upon $`g(t)`$ are that $$\int _{-\mathrm{\infty }}^{\mathrm{\infty }}g(t)dt=1,\text{and}g(t)\ge 0\text{ for all }t.$$ (2) Then, $$\langle \rho \rangle \ge \rho _{min},$$ (3) where $`\rho _{min}`$ is a constant that depends upon the sampling function $`g(t)`$ and the dimensionality $`d`$ of the spacetime. Note that for a given energy density $`\rho (t)`$, (3) must be satisfied by *all* choices of $`g(t)`$. Flanagan’s optimal bound for a massless scalar field in $`2D`$ is $$\rho _{min}=-\frac{1}{24\pi }\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\frac{g^{\prime }(t)^2}{g(t)}dt,$$ (4) while Fewster and Eveson obtained the following bounds in 2D and 4D Minkowski spacetime: $`\rho _{min}=-{\displaystyle \frac{1}{16\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{g^{\prime }(t)^2}{g(t)}}dt,\text{(2D)}`$ (5) $`\rho _{min}=-{\displaystyle \frac{1}{16\pi ^2}}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}\left(g^{1/2}(t)^{\prime \prime }\right)^2dt,\text{(4D)}.`$ (6) Certain sampling functions will not give a lower bound, in particular if there are discontinuities in $`g(t)`$ or $`g^{\prime }(t)`$. For example, the rectangular pulse function ($`g(t)=\frac{1}{t_0}`$ when $`-\frac{t_0}{2}<t<\frac{t_0}{2}`$ and $`0`$ elsewhere) doesn’t give a finite lower bound $`\rho _{min}`$. This makes sense if we recall the positive/negative energy delta pulse pair produced by a mirror that instantaneously accelerates from rest (producing a negative pulse) and then, after undergoing a period of uniform acceleration, decelerates to zero acceleration (emitting a positive pulse). The magnitude of the energy produced by the mirror is proportional to its change in acceleration with time. We can thus make the negative pulse as energetic as we want, but doing so shortens the time interval before the positive pulse arrives (the mirror is decelerated before it crashes into the observer). If we sample the negative energy with the rectangular function we can avoid measuring any positive energy by timing the rectangular function to turn off before the positive pulse arrives. More insight into the intimate relationship between the sampling function and the minimum bound can be obtained from the derivation of Fewster and Eveson. One can write (6) as $$\rho _{min}=-\frac{1}{16\pi ^3}\int _0^{\mathrm{\infty }}|\widehat{g^{1/2}}(w)|^2w^4dw,$$ (7) where $`\widehat{g^{1/2}}(w)`$ is the Fourier transform of the square root of $`g(t)`$. Smooth sampling functions, like the Lorentzian function originally employed by Ford, decay rapidly in the frequency domain, smoothing over the higher frequency (hence higher energy) transient components of the flux. Negative energies in a free field appear to be coherence or interference effects produced by peculiar superpositions of the positive mode quanta of the field.
For example, the well-known vacuum + 2 particle state $`|\psi \rangle =\alpha |0\rangle +\beta |2\rangle `$ has negative energy at periodic intervals for appropriate choices of $`\alpha `$ and $`\beta `$: the frequency and energy density of the negative regions are proportional to the frequency of the 2-particle modes. This suggests that if we want to see a lot of negative energy we need to look at such high frequency transient phenomena, and the only way to “catch” the negative energy is to use a sampling function with steep edges. But as discussed in the introduction, the quantum interest conjecture seems to say that one cannot interact with this negative energy as one can with positive energy; “catch” may be an overstatement. ## 3 Quantum interest for massless scalar fields The key to obtaining useful information from the quantum inequalities, in light of the arbitrariness of the sampling function and hence of the lower bound, is to choose an appropriate class of sampling functions. To prove quantum interest, we will use a function $`g(t)`$ with compact support ($`g(t)`$ is zero outside the range $`[-t_0/2,t_0/2]`$) that has a single maximum at $`t=0`$ and is sufficiently smooth that a lower bound in (4) - (6) exists. For simplicity we will also assume that $`g(t)`$ is symmetric about $`t=0`$. For example, the following sampling functions will do (though for the most part the particular choice won’t matter): $`g(t)\propto \mathrm{cos}^n\left({\displaystyle \frac{\pi t}{t_0}}\right),-{\displaystyle \frac{t_0}{2}}<t<{\displaystyle \frac{t_0}{2}},(n\ge 2)`$ (8) $`0`$ elsewhere, or $`g(t)\propto \left(t^2-{\displaystyle \frac{t_0^2}{4}}\right)^n,-{\displaystyle \frac{t_0}{2}}<t<{\displaystyle \frac{t_0}{2}},(n\ge 2)`$ (9) $`0`$ elsewhere. The minimum bounds are strongest (least negative) when $`n=2`$; as $`n\to \mathrm{\infty }`$ these functions approach $`\delta (t)`$, which has no lower bound. Now consider the hypothetical situation shown in Figure 2. We have an isolated distribution of negative energy flowing past the observer, who samples it with a function $`g(t)`$ like (8) or (9), timed to snugly encompass the negative flux. We want to answer two questions: 1) How isolated can the negative pulse be? In other words, how soon before or after the negative flux arrives *must* one see positive energy? 2) When we do start sampling positive energy, must one pay quantum interest? I.e., does the total positive energy outweigh the negative energy by an amount that increases the further the two pulses are apart? To answer these questions we sample the distribution again with a second function $`\overline{g}(t)`$ that is merely a copy of $`g(t)`$ scaled by a factor $`x\ge 1`$: $$\overline{g}(t)=\frac{1}{x}g\left(\frac{t}{x}\right).$$ (10) The support of $`\overline{g}(t)`$ is thus $`[-xt_0/2,xt_0/2]`$, and the leading factor of $`1/x`$ is a normalization constant to give $`\overline{g}`$ unit integral. If we calculate the minimum negative energy density $`\overline{\rho }_{min}`$ allowed by the quantum inequalities using $`\overline{g}`$ in (4) or (5) for 2D and (6) in 4D Minkowski spacetime, we obtain the key result: $$\overline{\rho }_{min}=\frac{\rho _{min}}{x^d}.$$ (11) Here $`\rho _{min}`$ is the lower bound associated with $`g(t)`$ and $`d`$ is the spacetime dimension (2 or 4). This expression immediately suggests the principle of quantum interest. We have total negative energy $`E_m=\int _{-t_0/2}^{t_0/2}\rho (t)dt`$ and an average energy density $`\rho _{avg}=E_m/t_0\ge \langle \rho \rangle `$.
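Before sharpening this statement, the bounds (4)-(6) and the scaling relation (11) can be checked numerically for a concrete window. The sketch below uses the $`n=2`$ cosine window of Eq. (8); the analytic reference values in the comments follow from a short hand computation for this particular window and are included only as a cross-check of the code, not as results quoted from the text.

```python
import numpy as np

def rho_min_2d_flanagan(g, t, eps=1e-30):      # Eq. (4)
    gp = np.gradient(g, t)
    return -np.trapz(gp**2 / (g + eps), t) / (24.0 * np.pi)

def rho_min_4d(g, t):                          # Eq. (6)
    spp = np.gradient(np.gradient(np.sqrt(g), t), t)
    return -np.trapz(spp**2, t) / (16.0 * np.pi**2)

def window(t0, x=1.0, N=20001):
    # g(t) = (2/(x t0)) cos^2(pi t / (x t0)) on [-x t0/2, x t0/2]:
    # the n = 2 window of Eq. (8), stretched by x as in Eq. (10).
    t = np.linspace(-x * t0 / 2, x * t0 / 2, N)
    return (2.0 / (x * t0)) * np.cos(np.pi * t / (x * t0))**2, t

t0 = 1.0
g, t = window(t0)
r2, r4 = rho_min_2d_flanagan(g, t), rho_min_4d(g, t)
print("2D bound:", r2, "  hand value -pi/(6 t0^2)    =", -np.pi / 6 / t0**2)
print("4D bound:", r4, "  hand value -pi^2/(16 t0^4) =", -np.pi**2 / 16 / t0**4)

for x in (2.0, 5.0):                           # scaling check, Eq. (11)
    gs, ts = window(t0, x)
    print("x=%g  2D ratio=%.4f (1/x^2=%.4f)  4D ratio=%.4f (1/x^4=%.4f)"
          % (x, rho_min_2d_flanagan(gs, ts) / r2, 1 / x**2,
             rho_min_4d(gs, ts) / r4, 1 / x**4))
```

The stretched-window ratios reproduce $`1/x^2`$ and $`1/x^4`$ to the accuracy of the finite-difference derivatives, which is the content of (11).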
If we now increase our sampling range to $`xt_0`$, and $`\rho (t)`$ is zero outside of $`[-t_0/2,t_0/2]`$, then $`\rho _{avg}`$ will scale as $`1/x`$. But the maximum allowed negative energy density scales as $`1/x^d`$; thus positive energy (and probably quite a lot of it) is eventually needed to satisfy the quantum inequalities. We can make the preceding statement more precise. Define a constant $`y`$, with $`0<y\le 1`$, such that $$\langle \rho \rangle =\int _{-t_0/2}^{t_0/2}g(t)\rho (t)dt=y\rho _{min}.$$ (12) Note that for most sampling functions $`g(t)`$ there will probably not be any quantum state that achieves the minimum ($`y=1`$). Now stretch $`g(t)`$ by the factor $`x>1`$; to answer the first question we will show that there is a largest possible $`x=x_{max}`$ allowed by the QI’s if we assume zero energy density outside of the negative pulse, as illustrated in Figure 2: $$\int _{-xt_0/2}^{xt_0/2}\rho (t)\overline{g}(t)dt=\frac{1}{x}\int _{-t_0/2}^{t_0/2}\rho (t)g(t/x)dt\ge \overline{\rho }_{min}=\frac{\rho _{min}}{x^d}.$$ (13) Using (12) we can rewrite the inequality as $$x^{d-1}\le \frac{1}{y}\frac{\int _{-t_0/2}^{t_0/2}\rho (t)g(t)dt}{\int _{-t_0/2}^{t_0/2}\rho (t)g(t/x)dt}.$$ (14) This clearly shows that if we have some negative energy ($`y\ne 0`$) then there is an upper bound on $`x`$, for, recalling that $`g(t)`$ is positive with a single maximum at $`t=0`$ so that $`g(0)\ge g(t/x)\ge g(t)`$, one can see that the ratio of the two integrals in (14) is $`\le 1`$ (but is at least as large as $`\frac{\int _{-t_0/2}^{t_0/2}\rho (t)g(t)dt}{g(0)\int _{-t_0/2}^{t_0/2}\rho (t)dt}`$). Thus we can write $$x_{max}^{d-1}=\frac{1}{y}\frac{\int _{-t_0/2}^{t_0/2}\rho (t)g(t)dt}{\int _{-t_0/2}^{t_0/2}\rho (t)g(t/x_{max})dt}.$$ (15) This upper bound depends on the sampling function and in general will over-estimate the maximum allowed separation, since a real distribution of energy must satisfy (15) for all choices of $`g(t)`$. Without a specific sampling function or energy distribution we cannot reduce (15) any further, but we can see that the range of possible $`x`$ is most strongly influenced by $`y`$. If $`y=1`$ (we have a state that actually achieves the minimum allowed by $`g(t)`$) then the only way (14) or (15) can be satisfied is if $`x=1`$; i.e. positive energy must *immediately* follow and/or precede the negative energy. If $`y`$ is close to zero then $`x`$ can be large and we can approximate the integral in the denominator of (15) by evaluating $`g(t/x)`$ at $`t=0`$: $$x_{max}^{d-1}\simeq \frac{1}{y}\frac{\langle \rho \rangle }{g(0)E_m},1/y\gg 1.$$ (16) In most situations $`\langle \rho \rangle /g(0)E_m`$ will be a number of order unity. If we have a delta function pulse of negative energy centered at $`t=0`$ (as considered by Ford and Roman) we obtain the similar relation $$x_{max}^{d-1}=\frac{1}{y}.$$ (17) The above expressions (15) - (17) all show that stronger distributions of negative energy (larger $`y`$) are required to be closer to positive energy (smaller $`x_{max}`$). Also note that the bound on $`x`$ is stronger in 4 dimensional spacetime. To answer the second question, namely whether the quantum interest $`ϵ`$ defined by $$\frac{E_p}{|E_m|}=(1+ϵ)$$ (18) is positive, consider the situation in Figure 3 (note that in this figure we have omitted the $`1/x`$ normalization constants in the plots of $`\overline{g}`$), where $`E_p`$ is the total positive energy, i.e. $`E_p=\int _{t_1}^{xt_0/2}\rho (t)dt`$. Here we stretch $`g(t)`$ by a new factor $`x`$ (possibly larger than $`x_{max}`$, which is the maximum $`x`$ if we only sample negative energy), and the positive energy flux arrives at time $`t_1`$, with $`t_0/2\le t_1\le x_{max}t_0/2`$.
For simplicity we only consider positive energy that arrives after the negative energy, but this doesn’t affect the generality of the argument. Applying the QI’s and the scaling relation to this situation yields $$\int _{-xt_0/2}^{xt_0/2}\rho (t)\overline{g}(t)dt=\frac{1}{x}\int _{-t_0/2}^{t_0/2}\rho (t)g(t/x)dt+\frac{1}{x}\int _{t_1}^{xt_0/2}\rho (t)g(t/x)dt\ge \overline{\rho }_{min}=\frac{\rho _{min}}{x^d}.$$ (19) To simplify the appearance of this expression we assume that $`\rho (t)`$ is negative semi-definite in the range $`[-t_0/2,t_0/2]`$, and positive semi-definite elsewhere (again this does not qualitatively affect the conclusions). Then we can find a number $`a`$, where $`0\le a<t_0/2`$, such that $`\int _{-t_0/2}^{t_0/2}\rho (t)g(t/x)dt=g(a/x)E_m`$, and a number $`b`$, where $`a<t_1\le b<xt_0/2`$, such that $`\int _{t_1}^{xt_0/2}\rho (t)g(t/x)dt=g(b/x)E_p`$ (see Figure 3). Thus we can rewrite (19) as $$-|E_m|g(a/x)+E_pg(b/x)\ge -\frac{|\rho _{min}|}{x^{d-1}}.$$ (20) (This expression is already quite suggestive: if the right hand side of (20) is close to zero then $`E_p`$ will have to outweigh $`|E_m|`$ by roughly $`g(a/x)/g(b/x)`$ to satisfy the inequality.) Using (12) and (15), we can write (20) as $$-|E_m|g(a/x)+E_pg(b/x)\ge \left(\frac{x_{max}}{x}\right)^{d-1}\int _{-t_0/2}^{t_0/2}g(t/x_{max})\rho (t)dt.$$ (21) As we did for (19), we can simplify (21) using $`\int _{-t_0/2}^{t_0/2}g(t/x_{max})\rho (t)dt=-g(a_{max}/x_{max})|E_m|`$, where $`0\le a_{max}<t_0/2`$ (note that $`a_{max}`$ simply labels the evaluation of the integral when $`x=x_{max}`$ and doesn’t refer to any maximization of the label $`a`$ defined earlier; in particular, because $`g(t)`$ decreases monotonically away from $`t=0`$, $`x<x_{max}`$ implies that $`a>a_{max}`$ and hence $`g(a/x)<g(a_{max}/x_{max})`$, and vice versa). This gives, after some rearrangement and utilizing (18), $$(1+ϵ)\ge \frac{1}{g(b/x)}\left(g(a/x)-g(a_{max}/x_{max})\left(\frac{x_{max}}{x}\right)^{d-1}\right).$$ (22) Inequality (22) must be satisfied for all choices of the scaling factor $`x`$. For smaller $`x`$ ($`x\lesssim x_{max}`$) $`ϵ`$ can be negative, but we want to show that as $`x`$ increases $`ϵ`$ must eventually become positive. Later we will choose a more restrictive distribution of positive energy to better illustrate quantum interest, but first we will show that (at least when $`d=4`$) the total amount of positive energy is strictly greater than the total negative energy that passes the observer. To do so, evaluate (22) in the limit as $`x\to \mathrm{\infty }`$. In this limit, for $`t=a/x`$ and $`t=b/x`$ we can accurately evaluate $`g(t)`$ in a Taylor series about $`t=0`$: $$g(t)=g(0)-\frac{|g^{\prime \prime }(0)|}{2}t^2+O(t^4).$$ (23) There are no odd powers because of the assumed symmetry in $`g`$, but even if we don’t require $`g`$ to be symmetric there will not be any $`t`$ term in the series, because of the maximum at $`t=0`$ (which also forces $`g^{\prime \prime }(0)`$ to be negative). Thus (22) can be written as $$ϵ\ge \frac{|g^{\prime \prime }(0)|}{2g(0)}\frac{b^2-a^2}{x^2}-\frac{g(a_{max}/x_{max})}{g(0)}\left(\frac{x_{max}}{x}\right)^{d-1}+O(1/x^3).$$ (24) In the limit $`x\to \mathrm{\infty }`$, $`ϵ\ge 0`$, and when the dimension $`d=4`$, $`ϵ`$ is strictly greater than $`0`$ for $`x`$ sufficiently large. In 2 dimensional Minkowski space we can only conclude that $`ϵ`$ is at least zero for arbitrary fluxes using the large $`x`$ behavior of the inequality (22). To gain more insight into inequality (22) it is useful to restrict the positive flux to last for a time $`t_0`$.
Then $$\frac{xt_0}{2}=t_1+t_0$$ (25) and $$\frac{t_0}{2}\le t_1\le \frac{x_{max}t_0}{2},$$ (26) hence $$3\le x\le x_{max}+2.$$ (27) To obtain a lower bound estimate $`ϵ_\ell `$ for the quantum interest $`ϵ`$, set $`a=t_0/2`$, $`a_{max}=0`$ and $`b=t_1`$ in (22) (this will be a good approximation for larger $`x`$; see Figure 3): $$1+ϵ_\ell \ge \frac{g(t_0/2x)-g(0)(x_{max}/x)^{d-1}}{g((t_0/2)(1-2/x))},$$ (28) where we have used (25) to eliminate $`t_1`$ from the expression. For a concrete example we will use the polynomial sampling function (9) with n=2, i.e. $`g(t)\propto (t^2-t_0^2/4)^2`$. Define $`z\equiv \frac{\mathrm{\Delta }t}{t_0}=\frac{t_1-t_0/2}{t_0}`$, so $`z`$ is the time interval separating the positive and negative pulses divided by $`t_0`$. Using (25) to (27) we can find the range of $`z`$: $`0\le z\le z_{max}`$, $`z_{max}=(x_{max}-1)/2`$. When $`x=x_{max}`$ (and the exact inequality (22) gives $`ϵ\ge -1`$), $`z=z_{max}-1`$. With these definitions (28) becomes (after some simplification) $$1+ϵ_\ell \ge \frac{1}{4}\left[(z+2)^2-\frac{(z+3/2)^{5-d}(z_{max}+1/2)^{d-1}}{(z+1)^2}\right].$$ (29) For large $`z`$ and $`z_{max}`$, $$ϵ_\ell \simeq z(\frac{z}{4}+1)-\frac{z^{3-d}z_{max}^{d-1}}{4}\left(\frac{3(5-d)-4}{2z}+\frac{d-1}{2z_{max}}+1\right).$$ (30) When $`z`$ is in the range $`[z_{max}-1,z_{max}]`$, (30) is almost a straight line, with $`ϵ_\ell `$ ranging from a minimum of $`-5/4`$ to a maximum of $`z_{max}/4`$ in 2D spacetime (compare Figure 4, where expression (29) is plotted), and from $`-9/8`$ to $`(3/4)z_{max}`$ in 4D spacetime (compare Figure 5). This shows quite clearly that quantum interest grows (almost linearly) as the pulse separation increases; a short numerical evaluation of (29) is sketched below. But a note of caution: this example will give an accurate lower bound on the quantum interest only if our choice of sampling function doesn’t overestimate the “real” $`x_{max}`$ or $`z_{max}`$ for a given distribution of negative energy. Recall that the “real” $`x_{max}`$ must satisfy inequality (15) for *any* choice of sampling function. For example, a sharply peaked sampling function (e.g. (9) with large $`n`$) will not give very stringent lower bounds on $`\rho _{min}`$, and consequently (15) will overestimate $`x_{max}`$ for a small pulse of negative energy ($`y\ll 1`$). A similar analysis to that above would then seem to indicate that the quantum interest diverges in the limit as $`n\to \mathrm{\infty }`$ at $`z=z_{max}`$, but in truth the value of $`z_{max}`$ was overestimated. ## 4 Massive scalar fields In this section we will briefly show that the quantum interest inequalities (14) and (22), and hence all the results from the previous section, also apply to the massive scalar field in 4 dimensional Minkowski spacetime. Fewster and Eveson obtained the following expression for $`\rho _{min}`$ in $`4D`$ Minkowski spacetime for a scalar field of mass $`m`$: $$\rho _{min}=-A\int _0^{\mathrm{\infty }}ds\int _m^{\mathrm{\infty }}d\omega _k\omega _k^2[\omega _k^2-m^2]^{1/2}|\widehat{g^{1/2}}(s+\omega _k)|^2,$$ (31) where $`A`$ is a positive constant, $`\widehat{g^{1/2}}(s)`$ is the Fourier transform of $`g^{1/2}(t)`$, and one integrates over the spectrum of field modes (i.e. $`\omega _k=\sqrt{|k|^2+m^2}`$, where $`\stackrel{}{k}`$ is the 3-momentum of a mode with frequency $`\omega _k`$).
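The numerical evaluation promised above: a minimal sketch that tabulates the lower bound (29) near the endpoint region $`[z_{max}-1,z_{max}]`$ for $`d=2`$ and $`4`$ (the value of $`z_{max}`$ is illustrative).

```python
import numpy as np

def eps_lower(z, zmax, d):
    # Eq. (29): 1 + eps >= ((z+2)^2 - (z+3/2)^(5-d) (zmax+1/2)^(d-1) / (z+1)^2) / 4
    return 0.25 * ((z + 2.0)**2
                   - (z + 1.5)**(5 - d) * (zmax + 0.5)**(d - 1)
                     / (z + 1.0)**2) - 1.0

zmax = 20.0
for d in (2, 4):
    z = np.linspace(zmax - 1.0, zmax, 5)
    print("d=%d:" % d, ["%.3f" % v for v in eps_lower(z, zmax, d)])
# The bound rises almost linearly across the interval, reaching roughly
# zmax/4 at z = zmax for d = 2 and roughly 3*zmax/4 for d = 4, matching
# the growth rates quoted in the discussion of Figs. 4 and 5.
```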
If $`\rho _{min}(m)`$ denotes the minimum negative energy bound in (31) for a field of mass $`m`$ with sampling function $`g(t)`$, then $$\overline{\rho }_{min}(m)=\frac{\rho _{min}(mx)}{x^4},$$ (32) where $`\overline{\rho }_{min}(m)`$ is the minimum bound with a sampling function $`\overline{g}(t)=g(t/x)/x`$ (the Fourier transform of the scaling relation is $`\widehat{\overline{g}^{1/2}}(s)=\sqrt{x}\widehat{g^{1/2}}(sx)`$). But notice from (31) that $`\rho _{min}(mx)\ge \rho _{min}(m)`$ for $`x\ge 1`$ (due to the $`m`$ dependence in the integrand and in the lower limit of the second integral), hence $$\overline{\rho }_{min}(m)\ge \frac{\rho _{min}(m)}{x^4}.$$ (33) Thus a massive scalar field will have tighter constraints on allowed negative energies than a massless field (compare (11)), and all the inequalities derived in the previous section remain valid for a massive field. (In 2D Minkowski space (32) holds with $`x^4`$ replaced by $`x^2`$, but one cannot conclude that (33) is valid for all $`x`$.) ## 5 Beyond scalar fields in Minkowski spacetime The scaling argument used to prove quantum interest for scalar fields might readily be applied to other quantum fields, such as the Electromagnetic (EM) field or the Dirac field, and possibly to certain curved spacetimes or to Minkowski space with boundary conditions, as in the Casimir effect. Ford and Roman found a quantum inequality for EM fields in 4D Minkowski space using a Lorentzian sampling function: $$\langle \rho \rangle _{EM}\ge -\frac{3}{16\pi ^2t_0^4}.$$ (34) This expression certainly indicates that a scaling relation like (11) holds for EM fields. The only complication in obtaining definitive results in this case is that the Lorentzian sampling function does not have compact support, so one cannot rule out the possibility that long distance interference effects may spoil quantum interest for arbitrary energy fluxes of the EM field (though this seems unlikely). There is some evidence that the Dirac field might also satisfy negative energy inequalities similar to those of the scalar and EM fields. Vollick has recently shown that a superposition of two single particle electron states can exhibit negative energy densities, but they are constrained by an inequality identical in form to that of the EM and scalar fields. Fewster and Teo have derived lower bounds of the form (31) for states of scalar quantum fields in static, curved spacetimes (those with timelike Killing vector fields that are hypersurface orthogonal). The scaling argument will work in certain static spacetimes. For example, one can easily show that the scaling relation (33) holds in an open static Robertson-Walker universe ($`ds^2=-dt^2+a^2[d\xi ^2+\mathrm{sinh}^2(\xi )d\mathrm{\Omega }^2]`$, $`a`$ is constant), as the lower bound for the sampled energy density takes the form $$\rho _{min}=-A\int _0^{\mathrm{\infty }}ds\int _C^{\mathrm{\infty }}d\omega _k\omega _k^2[\omega _k^2-C^2]^{1/2}|\widehat{g^{1/2}}(s+\omega _k)|^2,$$ (35) where $`C=\sqrt{1/a^2+m^2}`$ and $`m`$ is the mass of the scalar field (compare (31)). In a spacetime with a non-zero expectation value $`\rho _0`$ for the ground state energy density, such as the Boulware state outside a static star or the Casimir effect between two conducting plates, one might expect a scaling relation of the form $$\overline{\rho }_{min}=\frac{\rho _{min}}{x^d}+\rho _0$$ (36) to hold.
In other words, perhaps one may be able to prove the quantum interest conjecture for energies *relative* to the ground state energy; see, for example, cases where the quantum inequalities take on the form $`\langle \rho \rangle \ge `$ *free field term* \+ *Casimir terms*.<sup>1</sup>1In fact, such types of inequalities, called ‘difference inequalities’, have been derived before in several contexts. I was unaware of these results when I wrote this paper, and would like to thank Tom Roman for pointing them out to me. ## 6 Conclusion In this paper we have proven the quantum interest conjecture of Ford and Roman for arbitrary distributions of negative energy of scalar fields in 4D Minkowski spacetime (slightly weaker results hold in 2D). Specifically, any flux of negative energy flowing past an inertial observer *must* be followed or preceded by positive energy within a finite time interval that decreases the larger the amount of negative energy. In addition, the total amount of positive energy seen ($`E_p`$) is always greater than the total amount of negative energy ($`|E_m|`$). In a more restricted scenario where the durations of the positive and negative fluxes are equal, we showed that the quantum interest $`ϵ\equiv \left(\frac{E_p}{|E_m|}-1\right)`$ grows almost linearly with pulse separation. The nature of the existing QI’s for EM fields, the Dirac field and scalar fields in certain static spacetimes suggests that quantum interest may have broader application than free scalar fields in Minkowski spacetime. In a situation where the ground state energy density of the field is non-zero (e.g. in the Casimir effect) we may still expect quantum interest to hold, but then “negative” energy would refer to energies less than that of the ground state. An important consequence of quantum interest is what it tells us about the nature of negative energies in free fields. A local pulse of negative energy is not an entity that can be manipulated or interacted with independently of the accompanying positive energy that must be nearby. Even if there are states where the positive and negative energies are separated by a sizeable distance (as suggested by (14) when the amount of negative energy is very small), one could still only interact with the pulse pair as a single entity. For example, absorbing, reflecting or scattering only the positive part of the flux would create an isolated negative pulse, violating the quantum inequalities. Furthermore, this implies that one cannot subject a hot body to a net flux of negative energy that might otherwise have lowered its entropy in violation of the $`2^{nd}`$ law of thermodynamics. Acknowledgement I would like to thank Werner Israel for many stimulating discussions.
# Concepts of Space, Time, and Consciousness in Ancient India ## 1 Introduction Ancient Indian ideas of physics, available to us through a variety of sources, are generally not known in the physics world. Indian astronomer/physicists, starting with a position that sought to unify space, time, matter, and consciousness, argued for relativity of space and time, cyclic and recursively defined universes, and a non-anthropocentric view. The two most astonishing numerical claims from the ancient Indians are: a cyclic system of creation of the universe with a period of 8.64 billion years, although there exist longer cycles as well; and a speed of light of 4,404 yojanas per nimeṣa, which is almost exactly 186,000 miles per second (Kak, 1998a)! A critic would see the numbers as no more than idle coincidences. But within the Indian tradition it is believed that reality, as a kind of universal state function, transcends the separate categories of space, time, matter, and observation. In this function, called Brahman in the literature, inhere all categories including knowledge. The conditioned mind can, by “tuning” in to Brahman, obtain knowledge, although it can only be expressed in terms of the associations already experienced by the mind. Within the Indian tradition, scientific knowledge describes aspects of outer reality as much as the topography of the mindscape. Furthermore, there are connections between the outer and the inner: we can comprehend reality only because we are already equipped to do so! My own papers listed in the bibliography can serve as an introduction to these ideas and point to further references for the reader to examine. Two philosophical systems at the basis of Indian physics—and metaphysics—are Sāṃkhya and Vaiśeṣika. Sāṃkhya, which is an ancient system that goes back to the 3rd millennium BC, posits 25 basic categories together with 3 constituent qualities, which evolve in different ways to create the universe at the microcosmic as well as the macrocosmic levels. It also presupposes a “potential” (tanmātra) to be more basic than the material entity. Vaiśeṣika is a later system which is an atomic theory with the non-atomic ground of ether, space, and time upon which rest four different classes of indestructible atoms which combine in a variety of ways to constitute all matter; it also considers mind to be atomic (Kak, 1999). These systems presuppose genesis and evolution both at the cosmic and psychological levels. They also accept cyclic and multiple universes, and centrality of observers. Unfortunately, historians of science are generally oblivious of Indian physics, astronomy or cosmology. Amongst popular books, Paul Halpern’s The Cyclical Serpent (1995) is unusual in that it places modern speculations regarding an oscillating universe within the context of the cyclic cosmology of the Purāṇas, but even this book doesn’t define a context for the Indian ideas. In this paper we present, in a capsule form, the basic Indian ideas on space, time, and observation from the age of the epics and the early Purāṇas. The ideas of this period seem to belong to the last centuries BC; they are described in the Mahābhārata, Purāṇas, and the early Siddhāntas. To keep our sources to a minimum, we mainly use Yoga-Vāsiṣṭha (YV), an ancient Indian text, over 29,000 verses long, traditionally attributed to Vālmīki, author of the epic Rāmāyaṇa, which is over two thousand years old. ## 2 Vedic and Purāṇic Cosmology We first look at Vedic cosmology.
The Vedas are texts that represent the ancient knowledge tradition of India. While their compilations go back to at least the third millennium BC, some of their contents might be even older. The Vedic tradition is a part of the Indian culture tradition that has been traced back, archaeologically, to about 8000 BC (Feuerstein et al, 1995). The antiquity of the Vedic texts is, in part, confirmed by their celebration of the Sarasvati river as the greatest river of their age, and modern hydrological studies have established that this river dried up around 2000 BC. The king-lists in the Purāṇas take us to several millennia before the period of the drying up of the Sarasvati. There is also a rock art tradition in India that has been traced to about 40000 BC (Wakankar, 1992). There are several statements in the Vedic texts about the universe being infinite, while at the same time the finite distance to the sun is explicitly mentioned (Kak, 1998a-d). Aditi, the great mother of the gods, is a personification of the concept of infinity. A famous mantra speaks of how taking infinity out of infinity leaves it unchanged. This indicates that paradoxical properties of the notion of infinity were known. In a reference to mapping the outer world into an altar made of bricks, the Yajurveda (hymn 17) names numbers in multiples of ten that go up to ten hundred thousand million. This also suggests a belief in a very large universe. The Śatapatha Brāhmaṇa, a commentatorial prose text on the Veda, which most likely goes back to the early centuries of the second millennium BC, provides an overview of some broad aspects of Vedic cosmology. The sixth chapter of the book, entitled “Creation of the Universe”, speaks of the creation of the earth later than that of other stars. Creation is seen to proceed under the aegis of the Prajāpati (reference either to a star or to abstract time) with the emergence of Aśva, Rāsabha, Aja and Kūrma before the emergence of the earth. Viśvanātha Vidyālaṅkāra suggests that these are the sun (Aśva), Gemini (Rāsabha), Capricorn (Aja) and Cassiopeia (Kūrma). This identification is supported by etymological considerations. The Ṛgveda 1.164.2 and Nirukta 4.4.27 define Aśva as the sun. Rāsabha, which literally means the twin asses, is defined in Nighanṭu 1.15 as Aśvinau, which later usage suggests are Castor and Pollux in Gemini. In Western astronomy the twin asses are to be found in the next constellation of Cancer as Asellus Borealis and Asellus Australis. Aja (goat) is defined by Nighanṭu 1.15 as a sun, and owing to the continuity that we see in the Vedic and later European names for constellations (as in the case of the Great Bear) it is reasonable to identify it as the constellation Capricorn (from Latin caper, goat, and cornu, horn). Kūrma is a synonym of Kaśyapa (tortoise), which is like Cassiopeia (from Greek Kassiopeia), and the identification is appropriate because it is near the pole. The Purāṇas take the universe to have a diameter of about 500 million yojanas, but beyond the universe lies the limitless Pradhāna, that has within it countless other universes (Kak, 1998a). ## 3 The Yoga-Vāsiṣṭha The internal evidence of the Yoga-Vāsiṣṭha (YV) indicates that it was authored or compiled later than the Rāmāyaṇa. Chapple (1984) summarizes the views of various scholars who date it variously as early as the sixth century AD or as late as the 13th or the 14th century.
Dasgupta (1975, 1932) dated it to about the sixth century AD on the basis that one of its verses appears to be copied from one of Kālidāsa’s plays, taking Kālidāsa to have lived around the fifth century. The traditional date of Kālidāsa is 50 BC, and new arguments (Kak 1990) support this earlier date, so the estimates regarding the age of YV are further muddled; it is possible that this text could be 2000 years old. YV may be viewed as a book of philosophy or as a philosophical novel. It describes the instruction given by Vasiṣṭha to Rāma, the hero of the epic Rāmāyaṇa. Its premise may be termed radical idealism and it is couched in a fashion that has many parallels with the notion of a participatory universe argued by Wheeler and others. Its most interesting passages from the scientific point of view relate to the description of the nature of space, time, matter, and consciousness. It should be emphasized that the YV ideas do not stand in isolation. Similar ideas are to be found in the earlier Vedic books. At its deepest level the Vedic conception is to view reality in a monist manner; at the next level one may speak of the dichotomy of mind and matter. Ideas similar to those found in YV are also encountered in Purāṇas and Tantric literature. YV is a text that belongs to the mainstream of the ancient Vedic tradition that professes to deal with knowledge. Astronomical references in the Vedic texts take us back to the 4th or 5th millennium BC or even earlier (e.g. Kak 1994-6). Roughly speaking, the Vedic system speaks of an interconnectedness between the observer and the observed. A similar conception appears to have informed many ancient peoples including the Greeks. The Vedic system of knowledge is based on a tripartite approach to the universe where connections exist in triples in categories of one group and across groups: sky, atmosphere, earth; object, medium, subject; future, present, past; and so on. Beyond the triples lies the transcendental “fourth”. Three kinds of motion are alluded to in the Vedic books: these are the translational motion, sound, and light, which are taken to be “equivalent” to earth, air, and sky. The fourth motion is assigned to consciousness; and this is considered to be infinite in speed. At least one of the founders of quantum theory was directly inspired by the Vedic system of knowledge. Schrödinger (1961) claims that the Vedic slogan “All in One and One in All” was an idea that led him to the creation of quantum mechanics (see also Moore, 1989). Even before Schrödinger, the idealist philosophical tradition in Europe had long been moulded by Vedic ideas. It should also be noted that many parts of the Vedic literature are still not properly understood although considerable progress has recently taken place in the study of Vedic science. It is most interesting that the books in this Indian tradition speak about the relativity of time and space in a variety of ways. The medieval books called the Purāṇas speak of countless universes, time flowing at different rates for different observers and so on. Universes defined recursively are described in the famous episode of Indra and the ants in Brahmavaivarta Purāṇa 4.47.100-160, the Mahābhārata 12.187, and elsewhere. These flights of imagination are to be traced to more than a straightforward generalization of the motions of the planets into a cyclic universe. They must be viewed in the background of an amazingly sophisticated tradition of cognitive and analytical thought (see e.g. Staal 1988; Rao and Kak 1998).
### Selected Passages The page numbers given at the end of each passage are from the Venkatesananda (1993) translation. YV consists of 6 books where the sixth book itself has two parts. The numbers in the square brackets refer to the book, (part), section, verse. The reference to the Sanskrit original is also listed in the bibliography. ### Time * Time cannot be analyzed; for however much it is divided it survives indestructible. \[1.23\] * There is another aspect of this time, the end of action (kṛtānta), according to the law of nature (niyati). \[1.25.6-7\] * The world is like a potter’s wheel: the wheel looks as if it stands still, though it revolves at a terrific speed. \[1.27\] * Just as space does not have a fixed span, time does not have a fixed span either. Just as the world and its creation are mere appearances, a moment and an epoch are also imaginary. \[3.20\] * Infinite consciousness held in itself the notion of a unit of time equal to one-millionth of the twinkling of an eye: and from this evolved the time-scale right up to an epoch consisting of several revolutions of the four ages, which is the life-span of one cosmic creation. Infinite consciousness itself is uninvolved in these, for it is devoid of rising and setting (which are essential to all time-scales), and it is devoid of a beginning, middle and end. \[3.61\] ### Space * There are three types of space—the psychological space, the physical space and the infinite space of consciousness. \[3.17\] The infinite space of undivided consciousness is that which exists in all, inside and outside… The finite space of divided consciousness is that which created divisions of time, which pervades all beings… The physical space is that in which the elements exist. The latter two are not independent of the first. \[3.97\] * Other universes/wormholes. I saw within \[the\] rock \[at the edge of the universe\] the creation, sustenance and the dissolution of the universe… I saw innumerable creations in the very many rocks that I found on the hill. In some of these creation was just beginning, others were populated by humans, still others were far ahead in the passage of their times. \[6.2.86\] * I perceived within each molecule of air a whole universe. \[6.2.92\] ### Matter * In every atom there are worlds within worlds. \[3.20\] * I saw reflected in that consciousness the image of countless universes. I saw countless creations though they did not know of one another’s existence. Some were coming into being, others were perishing, all of them had different shielding atmospheres (from five to thirty-six atmospheres). There were different elements in each, they were inhabited by different types of beings in different stages of evolution… \[In\] some there was apparent natural order, in others there was utter disorder, in some there was no light and hence no time-sense. \[6.2.59\] ### Experience * Direct experience alone is the basis for all proofs… That substratum is the experiencing intelligence which itself becomes the experiencer, the act of experiencing, and the experience. \[2.19-20\] * Everyone has two bodies, the one physical and the other mental. The physical body is insentient and seeks its own destruction; the mind is finite but orderly. \[4.10\] * I have carefully investigated, I have observed everything from the tips of my toes to the top of my head, and I have not found anything of which I could say, ‘This I am.’ Who is ‘I’?
I am the all-pervading consciousness which is itself not an object of knowledge or knowing and is free from self-hood. I am that which is indivisible, which has no name, which does not undergo change, which is beyond all concepts of unity and diversity, which is beyond measure. \[5.52\] * I remember that once upon a time there was nothing on this earth, neither trees and plants, nor even mountains. For a period of eleven thousand years the earth was covered by lava. In those days there was neither day nor night below the polar region: for in the rest of the earth neither the sun nor the moon shone. Only one half of the polar region was illumined. Then demons ruled the earth. They were deluded, powerful and prosperous, and the earth was their playground. Apart from the polar region the rest of the earth was covered with water. And then for a very long time the whole earth was covered with forests, except the polar region. Then there arose great mountains, but without any human inhabitants. For a period of ten thousand years the earth was covered with the corpses of the demons. \[6.1\] ### Mind * The same infinite self conceives within itself the duality of oneself and the other. \[3.1\] * Thought is mind, there is no distinction between the two. \[3.4\] * The body can neither enjoy nor suffer. It is the mind alone that experiences. \[3.115\] * The mind has no body, no support and no form; yet by this mind is everything consumed in this world. This is indeed a great mystery. He who says that he is destroyed by the mind which has no substantiality at all, says in effect that his head was smashed by the lotus petal… The hero who is able to destroy a real enemy standing in front of him is himself destroyed by this mind which is \[non-material\]. * The intelligence which is other than self-knowledge is what constitutes the mind. \[5.14\] ### Complementarity * The absolute alone exists now and for ever. When one thinks of it as a void, it is because of the feeling one has that it is not void; when one thinks of it as not-void, it is because there is a feeling that it is void. \[3.10\] * All fundamental elements continued to act on one another—as experiencer and experience—and the entire creation came into being like ripples on the surface of the ocean. And, they are interwoven and mixed up so effectively that they cannot be extricated from one another till the cosmic dissolution. \[3.12\] ### Consciousness * The entire universe is forever the same as the consciousness that dwells in every atom, even as an ornament is non-different from gold. \[3.4\] * The five elements are the seed of which the world is the tree; and the eternal consciousness is the seed of the elements. \[3.13\] * Cosmic consciousness alone exists now and ever; in it are no worlds, no created beings. That consciousness reflected in itself appears to be creation. \[3.13\] * This consciousness is not knowable: when it wishes to become the knowable, it is known as the universe. Mind, intellect, egotism, the five great elements, and the world—all these innumerable names and forms are all consciousness alone. \[3.14\] * The world exists because consciousness is, and the world is the body of consciousness. There is no division, no difference, no distinction. Hence the universe can be said to be both real and unreal: real because of the reality of consciousness which is its own reality, and unreal because the universe does not exist as universe, independent of consciousness. 
\[3.14\] * Consciousness is pure, eternal and infinite: it does not arise nor cease to be. It is ever there in the moving and unmoving creatures, in the sky, on the mountain and in fire and air. \[3.55\] * Millions of universes appear in the infinite consciousness like specks of dust in a beam of light. In one small atom all the three worlds appear to be, with all their components like space, time, action, substance, day and night. \[4.2\] * The universe exists in infinite consciousness. Infinite consciousness is unmanifest, though omnipresent, even as space, though existing everywhere, is manifest. \[4.36\] * The manifestation of the omnipotence of infinite consciousness enters into an alliance with time, space and causation. Thence arise infinite names and forms. \[4.42\] * Rudra is the pure, spontaneous self-experience which is the one consciousness that dwells in all substances. It is the seed of all seeds, it is the essence of this world-appearance, it is the greatest of actions. It is the cause of all causes and it is the essence of all beings, though in fact it does not cause anything nor is it the concept of being, and therefore cannot be conceived. It is the awareness in all that is sentient, it knows itself as its own object, it is its own supreme object and it is aware of infinite diversity within itself… The infinite consciousness can be compared to the ultimate atom which yet hides within its heart the greatest of mountains. It encompasses the span of countless epochs, but it does not let go of a moment of time. It is subtler than the tip of a single strand of hair, yet it pervades the entire universe… It does nothing, yet it has fashioned the universe… All substances are non-different from it, yet it is not a substance; though it is non-substantial it pervades all substances. The cosmos is its body, yet it has no body. \[6.1.36\] ### The YV model of knowledge YV is not written as a systematic text. Its narrative jumps between various levels: psychological, biological, and physical. But since the Indian tradition of knowledge is based on analogies that are recursive and connect various domains, one can be certain that our literal reading of the passages is valid. YV appears to accept the idea that laws are intrinsic to the universe. In other words, the laws of nature in an unfolding universe will also evolve. According to YV, new information does not emerge out of the inanimate world but is a result of the exchange between mind and matter. It accepts consciousness as a kind of fundamental field that pervades the whole universe. One might speculate that the parallels between YV and some recent ideas of physics are a result of the inherent structure of the mind. ## 4 Other Texts Our readings of the YV are confirmed by other texts such as the Mahābhārata and the Purāṇas, as they are by the philosophical systems of Sāṃkhya and Vaiśeṣika, or the various astronomical texts. Here is a reference to the size of the universe from the Mahābhārata 12.182: > The sky you see above is infinite. Its limits cannot be ascertained. The sun and the moon cannot see, above or below, beyond the range of their own rays. There where the rays of the sun and the moon cannot reach are luminaries which are self-effulgent and which possess splendor like that of the sun or the fire. Even these last do not behold the limits of the firmament in consequence of the inaccessibility and infinity of those limits.
This space which the very gods cannot measure is full of many blazing and self-luminous worlds each above the other. > (Ganguly translation, vol. 9, page 23) The Mahābhārata has a very interesting passage (12.233), virtually identical with the corresponding material in YV, which describes the dissolution of the world. Briefly, it is stated how a dozen suns burn up the earth, and how elements get transmuted until space itself collapses into wind (one of the elements). Ultimately, everything enters into primeval consciousness. If one leaves out the often incongruous commentary on these ideas, which were strange to him, we find al-Bīrūnī, in his encyclopaedic book on India written in 1030, speaking of essentially the same ideas. Here are two little extracts: > The Hindus have divided duration into two periods, a period of motion, which has been determined as time, and a period of rest, which can only be determined in an imaginary way according to the analogy of that which has first been determined, the period of motion. The Hindus hold the eternity of the Creator to be determinable, not measurable, since it is infinite. > > They do not, by the word creation, understand a formation of something out of nothing. They mean by creation only the working with a piece of clay, working out various combinations and figures in it, and making such arrangements with it as will lead to certain ends and aims which are potentially in it. > (Sachau, 1910, vol. 1, pages 321-322) The mystery of consciousness is a recurring theme in Indian texts (Kak, 1997). Unfortunately, the misrepresentation that Indian philosophy is idealistic, where the physical universe is considered an illusion, has become very common. For an authoritative modern exposition of Indian ideas of consciousness one must turn to Aurobindo (e.g. 1939, 1956). ## 5 Concluding Remarks It appears that Indian understanding of physics was informed not only by astronomy and terrestrial experiments but also by speculative thought and by meditations on the nature of consciousness. Unfettered by either geocentric or anthropocentric views, this understanding unified the physics of the small with that of the large within a framework that included metaphysics. This was a framework consisting of innumerable worlds (solar systems), where time and space were continuous, matter was atomic, and consciousness was atomic, yet derived from an all-pervasive unity. The material atoms were defined first by their subtle form, called tanmātra, which was visualized as a potential, from which emerged the gross atoms. A central notion in this system was that all descriptions of reality are circumscribed by paradox (Kak, 1986). The universe was seen as dynamic, going through ceaseless change. ## 6 References Sri Aurobindo, 1939. The Life Divine. Aurobindo Ashram, Pondicherry. Sri Aurobindo, 1956. The Secret of the Veda. Aurobindo Ashram, Pondicherry. C. Chapple, 1984. Introduction and bibliography in Venkatesananda (1984). S. Dasgupta, 1975. A History of Indian Philosophy. Motilal Banarsidass, Delhi. G. Feuerstein, S. Kak, D. Frawley, 1995. In Search of the Cradle of Civilization. Quest Books, Wheaton. K.M. Ganguly (tr.), 1883-1896. The Mahābhārata. Reprinted Munshiram Manoharlal, Delhi, 1970. P. Halpern, 1995. The Cyclical Serpent: Prospects for an Ever-Repeating Universe. Plenum Press, New York. S. Kak, 1986. The Nature of Physical Reality. Peter Lang, New York. S. Kak, 1990. Kalidasa and the Agnimitra problem. Journal of the Oriental Institute 40: 51-54. S. Kak, 1994.
The Astronomical Code of the Ṛgveda. Aditya, New Delhi. S. Kak, 1995a. From Vedic science to Vedānta. Brahmavidyā: The Adyar Library Bulletin, 59: 1-36. S. Kak, 1995b. The astronomy of the age of geometric altars. Quarterly Journal of the Royal Astronomical Society 36: 385-396. S. Kak, 1996. Knowledge of planets in the third millennium BC. Quarterly Journal of the Royal Astronomical Society 37: 709-715. S. Kak, 1997. On the science of consciousness in ancient India. Indian Journal of History of Science 32: 105-120. S. Kak, 1997-8. Vaiṣṇava metaphysics or a science of consciousness. Prāchya Pratibhā 19: 113-141. S. Kak, 1997-8. Consciousness and freedom according to the Śiva Sūtra. Prāchya Pratibhā 19: 233-248. S. Kak, 1998a. The speed of light and Purāṇic cosmology. LANL physics archive 9804020. Also in Rao and Kak (1998). S. Kak, 1998b. Sāyaṇa’s astronomy. Indian Journal of History of Science 33: 31-36. S. Kak, 1998c. Early theories on the distance to the sun. Indian Journal of History of Science 33: 93-100. S. Kak, 1998d. The orbit of the sun in the Brāhmaṇas. Indian Journal of History of Science 33: 175-191. S. Kak, 1999. Physical concepts in Sāṃkhya and Vaiśeṣika. Chapter in Science and Civilization in India, Vol. 1, Part 2, edited by G.C. Pande, Oxford University Press, Delhi, in press. W. Moore, 1989. Schrödinger: Life and Thought. Cambridge University Press, Cambridge. T.R.N. Rao and S. Kak, 1998. Computing Science in Ancient India. USL Press, Lafayette. E.C. Sachau, 1910. Alberuni’s India. Reprinted by Low Price Publications, Delhi, 1989. E. Schrödinger, 1961. Meine Weltansicht. Paul Zsolnay, Vienna. F. Staal, 1988. Universals. University of Chicago Press, Chicago. S. Venkatesananda (tr.), 1984. The Concise Yoga Vāsiṣṭha. State University of New York Press, Albany. S. Venkatesananda (tr.), 1993. Vāsiṣṭha’s Yoga. State University of New York Press, Albany. Yoga Vāsiṣṭha, 1981. Munshiram Manoharlal, Delhi. V.S. Wakankar, 1992. Rock painting in India. In Rock Art in the Old World, M. Lorblanchet (ed.), 319-336. New Delhi.
## 1 INTRODUCTION AND SUMMARY Perhaps the most fascinating questions confronting contemporary physics concern the search for the appropriate framework for the unified description of Gravity and Quantum Mechanics. This search for “Quantum Gravity” is proving very difficult, especially as a result of the scarce experimental information available on the interplay between Gravity and Quantum Mechanics. However, in recent years there has been a small (but nevertheless encouraging) number of new proposals of experiments probing the nature of the interplay between Gravity and Quantum Mechanics. At the same time the “COW-type” experiments, initiated with the celebrated experiment by Colella, Overhauser and Werner, have reached levels of sophistication such that even gravitationally induced quantum phases due to local tides can be detected. In light of these developments there is now growing (although still understandably cautious) hope for data-driven insight into the structure of Quantum Gravity. The primary objective of the present Article is the one of providing a careful discussion of the most recent addition to the (still far from numerous) family of Quantum Gravity experiments, which this author proposed in the short Letter in Ref. . This most recent proposal probes in a rather direct way the properties of space-time, which is of course the most fundamental element of a Quantum Gravity, by exploiting the remarkable accuracy achievable with advanced modern interferometers, such as the ones used for searches of gravity waves. While perhaps (especially in light of the gloomy overall status of “Quantum Gravity phenomenology”) sufficient interest in the experiment proposed in Ref. could already come from a pragmatic phenomenological viewpoint, in this Article I shall also relate the class of observations accessible to modern interferometers to a physical picture of the (necessarily small) way in which Quantum Gravity might affect phenomena probing space-time at distances significantly larger than the Planck length $`L_{planck}\sim 10^{-35}m`$ (but significantly shorter than distance scales probed in ordinary particle-physics or gravity experiments). This physical picture is motivated by the huge gap between the minute Planck length and the distance scales probed in present-day particle-physics or gravitational experiments. The size of this gap provides motivation for exploring the possibility that on the way to Planck-length physics a few intermediate steps of partial unification of Gravity and Quantum Mechanics might be required before reaching full unification. Of course, as long as we are lacking direct experimental evidence to the contrary, it is also reasonable to work (as many distinguished colleagues do) on the hypothesis that Gravity and Quantum Mechanics should merge directly into a fully developed Quantum Gravity, but in the present Article (as in the previous papers) I shall be concerned with the investigation of the properties that one could demand of a theory suitable for a first stage of partial unification of Gravity and Quantum Mechanics. In particular, I shall review the arguments presented in Refs. suggesting that the most significant implications of Quantum Gravity for low-energy (large-distance) physics might be associated to the structure of the non-trivial “Quantum Gravity vacuum”.
A satisfactory picture of this Quantum Gravity vacuum is not available at present, and therefore we must generically characterize it as the appropriate new concept that in Quantum Gravity takes the place of the ordinary concept of “empty space”; however, it is plausible that some of the arguments by Wheeler, Hawking and others (see, e.g., Refs. and references therein), which have attempted to develop an intuitive description of the Quantum Gravity vacuum, might have captured at least some of its actual properties. Other possible elements for the search of a theory suitable for a first stage of partial unification of Gravity and Quantum Mechanics come from studies suggesting that this unification might require a novel relationship between “measuring apparatus” and “system”. My intuition on the nature of this new relationship is mostly based on work by Bergmann and Smith and on the observations I reported in Refs. , which took as starting point an analysis by Salecker and Wigner . The intuitions emerging from these considerations on a novel relationship between measuring apparatus and system and from a Wheeler-Hawking picture of the Quantum Gravity vacuum are not sufficient for the full development of a new formalism describing the first stage of partial unification of Gravity and Quantum Mechanics, but they provide encouragement for the search of a formalism based on a mechanics not exactly of the type of ordinary Quantum Mechanics. Moreover, one can use this emerging intuition for rough estimates of certain candidate Quantum-Gravity effects. The estimates most relevant for the present Article are the ones concerning the space-time “fuzziness” which modern interferometers could investigate following Ref. . A prediction of nearly all approaches to the unification of Gravity and Quantum Mechanics is that at very short distances the sharp classical concept of space-time should give way to a somewhat “fuzzy” (or “foamy”) picture (see, e.g., Refs. ), but it is usually very hard to characterize this fuzziness in physical operative terms. In Section 2 I provide an operative definition of fuzzy distance that has completely general applicability. My operative definition of fuzzy distance involves the use of interferometers, and the remarkable recent progress in the accuracy of these devices provides motivation for an analysis aimed at investigating the possible observable implications of Quantum Gravity for modern interferometers. In Section 3 I provide estimates for the quantum fluctuations that could affect distances if the above-mentioned intuition on the first stage of partial unification of Gravity and Quantum Mechanics is correct. I shall proceed with the attitude of searching for plausible (but admittedly “optimistic”) estimates of the relevant Quantum Gravity effects, and, although quantitative estimates will be derived, the true emphasis is on the qualitative aspects of the phenomena, since this type of information could be helpful to colleagues on the experimental side in establishing how to look for these phenomena. Some of the estimates I provide are motivated by studies of the measurability of distances in Quantum Gravity. A second group of estimates is based on elementary toy models of the stochastic processes that might characterize space-time fuzziness.
The third and final group of estimates is motivated by arguments of “consistency” (in the sense discussed later) with recent proposals of Quantum-Gravity induced deformation of the dispersion relation that characterizes the propagation of massless particles. All of these arguments indicate that a priority for interferometry-based tests of space-time fuzziness must be high sensitivity at low frequencies, and I hope this will be taken into account in planning future gravity-wave interferometers. In Section 4 I shall observe (extending the related observations reported in Ref. ) that the remarkable sensitivity achieved by modern interferometers, especially the ones used to search for gravity waves, allows one to set highly significant bounds on some of the fuzziness scenarios discussed in Section 2. Perhaps the most intuitive way to characterize the obtained bounds is given by the fact that we are now in a position to rule out a picture of fuzzy space-time such that minute Planck-length ($`\sim 10^{-35}m`$) fluctuations would affect distances at a rate of one per Planck time ($`\sim 10^{-44}s`$). In Section 5 I derive a novel absolute bound on the measurability of the amplitude of a gravity wave. This measurability bound is obtained by combining a well-known “standard quantum limit,” which depends on the mass of the mirrors used by the gravity-wave interferometers, and a limitation on the mass of the mirrors that is imposed by gravitational effects. I find that this measurability bound is too weak to be tested with available or planned gravity-wave interferometers. Its significance mostly resides in the fact that it illustrates, even more clearly than previous measurability analyses, that the unification of Gravity and Quantum Mechanics requires a new relationship between measuring apparatus and system. In Section 6 I discuss the aspects of certain existing Quantum Gravity approaches which are in one way or another related to the type of fuzzy space-times considered in Section 2. In Section 7 I discuss how the class of experiments proposed in Ref. (and here analyzed in detail) complements other proposals of Quantum Gravity experiments. I also outline the general features that an experiment must have in order to uncover aspects of the interplay between Gravity and Quantum Mechanics. In Section 8 I use the results discussed in Sections 2-6 to better define the idea of a theory appropriate for the description of a first stage of partial unification of Gravity and Quantum Mechanics. Closing remarks, also on the outlook for Quantum-Gravity phenomenology, are offered in Section 9. ## 2 OPERATIVE DEFINITION OF FUZZY DISTANCE While nearly all approaches to the unification of Gravity and Quantum Mechanics appear to lead to a somewhat fuzzy picture of space-time, within the various formalisms it is often difficult to characterize physically this fuzziness. Rather than starting from formalism, I shall advocate an operative definition of fuzzy space-time.<sup>2</sup><sup>2</sup>2Once we have a physical definition of fuzzy space-time the analysis of the various Quantum Gravity formalisms could be aimed at providing predictions for this fuzziness. Of course, in order for the formalisms to provide such physical predictions it is necessary to equip them with at least some elements of a “measurement theory”. More precisely, for the time being I shall just consider the concept of fuzzy distance.
I shall be guided by the expectation that at very short distances the sharp classical concept of distance should give way to a somewhat fuzzy distance. Since interferometers are ideally suited to monitor the distance between test masses, I choose as operative definition of Quantum-Gravity induced fuzziness one which is expressed in terms of Quantum-Gravity induced noise in the read-out of interferometers. In order to articulate this proposal it will prove useful to briefly review some aspects of the physics of Michelson interferometers. These are schematically composed of a (laser) light source, a beam splitter and two fully-reflecting mirrors placed at a distance $`L`$ from the beam splitter in orthogonal directions. The light beam is decomposed by the beam splitter into a transmitted beam directed toward one of the mirrors and a reflected beam directed toward the other mirror; the beams are then reflected by the mirrors back toward the beam splitter, where they are superposed<sup>3</sup><sup>3</sup>3Although all modern interferometers rely on the technique of folded interferometer arms (the light beam bounces several times between the beam splitter and the mirrors before superposition), I shall just discuss the simpler “no-folding” conceptual setup. Readers familiar with the subject can easily realize that the observations here reported also apply to more realistic setups, although in some steps of the derivations the length $`L`$ would have to be understood as the optical length (given by the actual length of the arms times the number of foldings).. The resulting interference pattern is extremely sensitive to changes in the positions of the mirrors relative to the beam splitter. The achievable sensitivity is so high that planned interferometers with arm lengths $`L`$ of $`3`$ or $`4`$ $`km`$ expect to detect gravity waves of amplitude $`h`$ as low as $`3\cdot 10^{-22}`$ at frequencies of about $`100Hz`$. This roughly means that these modern gravity-wave interferometers should monitor the (relative) positions of their test masses (the beam splitter and the mirrors) with an accuracy of order $`10^{-18}m`$ and better. In achieving this remarkable accuracy experimentalists must deal with classical-physics displacement noise sources (e.g., thermal and seismic effects induce fluctuations in the relative positions of the test masses) and displacement noise sources associated to effects of ordinary Quantum Mechanics (as I shall mention again later, the combined minimization of photon shot noise and radiation pressure noise leads to an irreducible noise source which has its root in ordinary Quantum Mechanics). The operative definition of fuzzy distance which I advocate characterizes the corresponding Quantum Gravity effects as an additional source of displacement noise. A theory in which the concept of distance is fundamentally fuzzy in this operative sense would be such that even in the idealized limit in which all classical-physics and ordinary Quantum-Mechanics noise sources are completely eliminated the read-out of an interferometer would still be noisy as a result of Quantum Gravity effects. Adopting this operative definition of fuzzy distance, interferometers are of course the natural tools for experimental tests of proposed space-time fuzziness scenarios.
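To make the sensitivity figures just quoted concrete, note that a gravity wave of amplitude $`h`$ changes the length of an arm by $`\delta L\sim hL`$. The following short sketch (a rough numerical illustration using only the figures quoted above, not the specifications of any particular instrument) performs this conversion:

```python
# Convert the quoted gravity-wave strain sensitivity into the
# displacement sensitivity required of the interferometer.
# Both numbers are the illustrative figures quoted in the text.

h = 3e-22   # detectable wave amplitude (strain) near 100 Hz
L = 4e3     # arm length in meters (planned arms of 3 or 4 km)

delta_L = h * L   # arm-length change induced by such a wave
print(f"displacement to be resolved: {delta_L:.1e} m")
# -> 1.2e-18 m, consistent with the ~10^-18 m accuracy quoted above
```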
However, even the remarkable sensitivity estimate of order $`10^{-18}m`$ given above is quite far from the Planck length $`\sim 10^{-35}m`$, and it might appear safe to assume that any scenario for space-time fuzziness would not observably affect the operation of even the most sophisticated modern interferometers. In spite of the intuition emerging from these preliminary considerations, in Sections 3 and 4 I shall show that some plausible (albeit somewhat speculative) fuzziness scenarios can be tested in a rather significant way by modern interferometers. The key observation is based on the fact that the physics of an interferometer involves other length scales besides the $`10^{-18}m`$ length scale discussed above, and the combinations of length scales which characterize on the one hand the noise levels achievable by modern interferometers and on the other hand the Quantum-Gravity induced noise levels turn out to be comparable. In particular, a proper description of noise levels in an interferometer must provide the displacement sensitivity as a function of frequencies $`f`$ (notice the additional length scale $`cf^{-1}`$ obtained combining $`f`$ with the speed-of-light constant $`c\simeq 3\cdot 10^8m/s`$), and similarly the “amount of fuzziness” predicted by certain space-time fuzziness scenarios turns out to be $`f`$-dependent. Within certain ranges of values of $`f`$ one finds that the experimental limits are actually significant with respect to the theoretical predictions. Before providing this phenomenological analysis I shall use the next Section to discuss estimates of the type of noise levels that could be expected within certain space-time fuzziness scenarios. ## 3 SOME CANDIDATE FUZZY SPACE-TIMES ### 3.1 Minimum-length noise In many Quantum Gravity approaches there appears to be a length scale $`L_{min}`$, often identified with the string length ($`L_{string}\sim 10^{-34}m`$) or the Planck length, which sets an absolute bound on the measurability of distances (a minimum uncertainty): $`\delta D\gtrsim L_{min}.`$ (1) This property emerges in approaches based on canonical quantization of Einstein’s gravity when analyzing certain gedanken experiments (see, e.g., Ref. and references therein). In Critical Superstring Theories, theories whose mechanics is still governed by the laws of ordinary Quantum Mechanics but with one-dimensional (rather than point-like) fundamental objects, a relation of type (1) follows from the stringy modification of Heisenberg’s uncertainty principle $`\delta x\delta p\geq 1+L_{string}^2\delta p^2.`$ (2) In fact, whereas Heisenberg’s uncertainty principle allows $`\delta x=0`$ (for $`\delta p\rightarrow \infty `$), for all choices of $`\delta p`$ the uncertainty relation (2) gives $`\delta x\gtrsim L_{string}`$. The relation (2) is suggested by certain analyses of string scattering, but it might have to be modified when taking into account the non-perturbative solitonic structures of Superstring Theory known as Dirichlet branes. In particular, evidence has been found in support of the possibility that “Dirichlet particles” (Dirichlet 0 branes) could probe the structure of space-time down to scales shorter than the string length. In any case, all evidence available on Critical Superstring Theory is consistent with a relation of type (1), although it is probably safe to say that some more work is still needed to firmly establish the string-theory value of $`L_{min}`$.
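For completeness, the elementary step behind the statement that (2) excludes $`\delta x=0`$ can be spelled out explicitly (a sketch, in the units with $`\hbar =1`$ implicit in (2)): $$\delta x\geq \frac{1}{\delta p}+L_{string}^2\delta p\geq 2L_{string},$$ where the second inequality follows from minimizing the middle expression over $`\delta p`$ (the minimum is attained at $`\delta p=1/L_{string}`$). Every choice of $`\delta p`$ therefore leads to $`\delta x\gtrsim L_{string}`$, which is the minimum-uncertainty statement (1) with $`L_{min}\sim L_{string}`$.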
Having clarified that a relation of type (1) is a rather common prediction of theoretical work on Quantum Gravity, let us then consider how such a relation could affect the noise levels of an interferometer, i.e. let us consider the type of fuzziness (in the sense of the operative definition I advocated) which could be encoded in relation (1). First let us observe that relation (1) does not necessarily encode any fuzziness; for example, relation (1) could simply emerge from a theory based on a lattice of points with spacing $`L_{min}`$ and equipped with a measurement theory consistent with (1). The concept of distance in such a theory would not necessarily be affected by the type of stochastic processes that lead to noise in an interferometer. However, it is also possible for relation (1) to encode the net effect of some underlying physical processes of the type one would qualify as quantum space-time fluctuations. These fluctuations, following work initiated by Wheeler and Hawking, are often visualized as involving geometry and topology fluctuations, virtual black holes, and other novel phenomena. A very intuitive description of the way in which the dynamics of matter distributions would be affected by this type of fuzziness of space-time is obtained by noticing certain similarities between a thermal environment and the environment of quantum space-time fluctuations consistent with (1). This (however preliminary) network of intuitions suggests that (1) could be the result of fuzziness for distances $`D`$ of the type associated to stochastic fluctuations with root-mean-square deviation $`\sigma _D`$ given by $$\sigma _D\sim L_{min}.$$ (3) The associated displacement amplitude spectral density $`S_{min}(f)`$ should roughly have a $`1/\sqrt{f}`$ behaviour $$S_{min}(f)\sim \frac{L_{min}}{\sqrt{f}}.$$ (4) This can be justified using the observation that for a frequency band limited from below only by the time of observation $`T_{obs}`$ the relation between $`\sigma `$ and $`S(f)`$ is given by $`\sigma ^2=\int _{1/T_{obs}}^{f_{max}}[S(f)]^2df.`$ (5) Substituting the $`S_{min}(f)`$ of Eq. (4) for the $`S(f)`$ of Eq. (5) one obtains a $`\sigma `$ that approximates the $`\sigma _D`$ of Eq. (3) up to small (logarithmic) $`T_{obs}`$-dependent corrections. A more detailed description of the displacement amplitude spectral density associated to Eq. (3) can be found in Refs. . For the objectives of the present article the rough estimate (4) is sufficient since, if indeed $`L_{min}\sim L_{planck}`$, from (4) one obtains $`S_{min}(f)\sim (10^{-35}m)/\sqrt{f}`$, which is still very far from the sensitivity of even the most advanced modern interferometers, and therefore we should not be concerned with corrections to Eq. (4). ### 3.2 Random-walk noise motivated by the analysis of a Salecker-Wigner gedanken experiment The above argument relating the measurability bound (1) to fuzziness of type (3) can be used in general to relate any bound on the measurability of distances to an estimate of the possible stochastic quantum fluctuations affecting the operative definition of distances. In this Subsection 3.2 I shall consider a measurability bound that emerges when taking into account the quantum properties of devices. It is well understood (see, e.g., Refs. ) that the combination of the gravitational properties and the quantum properties of devices can have an important role in the analysis of the operative definition of gravitational observables. Since the analyses that led to the proposal of Eq.
(3) only treated the devices in a completely idealized manner (assuming that one could ignore any contribution to the uncertainty in the measurement of $`D`$ due to the gravitational and quantum properties of devices), it is not surprising that analyses that took into account the gravitational and quantum properties of devices found more significant limitations to the measurability of distances. Actually, by ignoring the way in which the gravitational properties and the quantum properties of devices combine in measurements of geometry-related physical properties of a system one misses some of the fundamental elements of novelty we should expect for the interplay of Gravity and Quantum Mechanics; in fact, one would be missing an element of novelty which is deeply associated to the Equivalence Principle. In measurements of physical properties which are not geometry-related one can safely resort to an idealized description of devices. For example, in the famous Bohr-Rosenfeld analysis of the measurability of the electromagnetic field it was shown that the accuracy allowed by the formalism of ordinary Quantum Mechanics could only be achieved using idealized test particles with vanishing ratio between electric charge and inertial mass. Attempts to generalize the Bohr-Rosenfeld analysis to the study of gravitational fields (see, e.g., Ref. ) are of course confronted with the fact that the ratio between gravitational “charge” (mass) and inertial mass is fixed by the Equivalence Principle. While ideal devices with vanishing ratio between electric charge and inertial mass can be considered at least in principle, devices with vanishing ratio between gravitational mass and inertial mass are not admissible in any (however formal) limit of the laws of gravitation. This observation provides one of the strongest elements in support of the idea that the mechanics on which Quantum Gravity is based must not be exactly the one of ordinary Quantum Mechanics, since it should accommodate a somewhat different relationship between “system” and “measuring apparatus”. \[In particular, the new mechanics should not rely on the idealized “measuring apparatus” which plays such a central role in the mechanics laws of ordinary Quantum Mechanics, see, e.g., the “Copenhagen interpretation”.\] In trying to develop some intuition for the type of fuzziness that could affect the concept of distance in Quantum Gravity, it might be useful to consider the way in which the interplay between the gravitational and the quantum properties of devices affects the measurability of distances. In Refs. I have argued that a natural starting point for this type of analysis is provided by the procedure for the measurement of distances which was discussed in influential work by Salecker and Wigner. These authors “measured” (in the “gedanken” sense) the distance $`D`$ between two bodies by exchanging a light signal between them. The measurement procedure requires attaching<sup>4</sup><sup>4</sup>4Of course, for consistency with causality, in such contexts one assumes devices to be “attached non-rigidly,” and, in particular, the relative position and velocity of their centers of mass continue to satisfy the standard uncertainty relations of Quantum Mechanics. a light-gun (i.e. a device capable of sending a light signal when triggered), a detector and a clock to one of the two bodies and attaching a mirror to the other body.
By measuring the time $`T_{obs}`$ (time of observation) needed by the light signal for a two-way journey between the bodies one also obtains a measurement of the distance $`D`$. For example, in Minkowski space and neglecting quantum effects one simply finds that $`D=cT_{obs}/2`$. Within this setup it is easy to realize that the interplay between the gravitational and the quantum properties of devices leads to an irreducible contribution to the uncertainty $`\delta D`$. In order to see this it is sufficient to consider the contribution to $`\delta D`$ coming from the uncertainties that affect the motion of the center of mass of the system composed by the light-gun, the detector and the clock. Denoting with $`x^{}`$ and $`v^{}`$ the position and the velocity of the center of mass of this composite device relative to the position of the body to which it is attached, and assuming that the experimentalists prepare this device in a state characterised by uncertainties $`\delta x^{}`$ and $`\delta v^{}`$, one easily finds $`\delta D\geq \delta x^{}+T_{obs}\delta v^{}\geq \delta x^{}+\left(\frac{1}{M_b}+\frac{1}{M_d}\right)\frac{\hbar T_{obs}}{2\delta x^{}}\geq \sqrt{\frac{\hbar T_{obs}}{2}\left(\frac{1}{M_b}+\frac{1}{M_d}\right)},`$ (6) where $`M_b`$ is the mass of the body, $`M_d`$ is the total mass of the device composed of the light-gun, the detector, and the clock, the middle relation follows from observing that Heisenberg’s Uncertainty Principle implies $`\delta x^{}\delta v^{}\geq (1/M_b+1/M_d)\hbar /2`$, and the final inequality follows from minimizing over $`\delta x^{}`$. \[N.B.: the reduced mass $`(1/M_b+1/M_d)^{-1}`$ is relevant for the relative motion.\] Clearly, from (6) it follows that in order to eliminate the contribution to the uncertainty coming from the quantum properties of the devices it is necessary to take the formal “classical-device limit,” i.e. the limit<sup>5</sup><sup>5</sup>5A rigorous definition of a “classical device” is beyond the scope of this Article. However, it should be emphasized that the experimental setups being here considered require the devices to be accurately positioned during the time needed for the measurement, and therefore an ideal/classical device should be infinitely massive so that the experimentalists can prepare it in a state with $`\delta x\delta v\sim \hbar /M\rightarrow 0`$. It is the fact that the infinite-mass limit is not accessible in a gravitational context that forces one to consider only “non-classical devices.” This observation is not inconsistent with conventional analyses of decoherence for macroscopic systems; in fact, in appropriate environments, the behavior of a macroscopic device will still be “closer to classical” than the behavior of a microscopic device, although the limit in which a device has exactly classical behavior is no longer accessible. of infinitely large $`M_d`$. Up to this point I have not yet taken into account the gravitational properties of the devices and in fact the “classical-device limit” encountered above is fully consistent with the laws of ordinary Quantum Mechanics. From a physical/phenomenological and conceptual viewpoint it is well understood that the formalism of Quantum Mechanics is only appropriate for the description of the results of measurements performed by classical devices.
It is therefore not surprising that the classical-device (infinite-mass) limit turned out to be required in order to reproduce the prediction $`min\delta D=0`$ of ordinary Quantum Mechanics (which, as is well known, allows $`\delta A=0`$ for any single observable $`A`$, since it only limits the combined measurability of pairs of conjugate observables). If one also takes into account the gravitational properties of the devices, a conflict with ordinary Quantum Mechanics immediately arises because the classical-device (infinite-mass) limit is in principle inadmissible for measurements concerning gravitational effects.<sup>6</sup><sup>6</sup>6This conflict between the infinite-mass classical-device limit (which is implicit in the applications of the formalism of ordinary Quantum Mechanics to the description of the outcome of experiments) and the nature of gravitational interactions has not been addressed within any of the most popular Quantum Gravity approaches, including “Canonical/Loop Quantum Gravity” and “Critical Superstring Theory”. In a sense somewhat similar to the one appropriate for Hawking’s work on black holes, this “classical-device paradox” appears to provide an obstruction for the use of the ordinary formalism of Quantum Mechanics for a description of Quantum Gravity. As the devices get more and more massive they increasingly disturb the gravitational/geometrical observables, and well before reaching the infinite-mass limit the procedures for the measurement of gravitational observables cannot be meaningfully performed. In the Salecker-Wigner measurement procedure the limit $`M_d\rightarrow \infty `$ is not admissible when gravitational interactions are taken into account. At the very least the value of $`M_d`$ is limited by the requirement that the apparatus should not turn into a black hole (which would not allow the exchange of signals required by the measurement procedure). These observations, which render unavoidable the $`\sqrt{T_{obs}}`$-dependence of Eq. (6), provide motivation for the possibility that in Quantum Gravity any measurement that monitors a distance $`D`$ for a time $`T_{obs}`$ is affected by quantum fluctuations such that<sup>7</sup><sup>7</sup>7Note that Eq. (7) sets a minimum uncertainty which takes into account only the quantum and gravitational properties of the measuring apparatus. Of course, an even tighter bound might emerge when taking into account also the quantum and gravitational properties of the system under observation. However, according to the estimates provided in Refs. the contribution to the uncertainty coming from the system is of the type $`\delta D\sim L_{planck}`$, so that the total contribution (summing the system and the apparatus contributions) would be of the type $`\delta D\sim L_{planck}+\sqrt{L_{QG}cT_{obs}}`$, which in nearly all contexts of interest (those with $`cT_{obs}\gg L_{planck}`$) can be approximated by completely neglecting the $`L_{planck}`$ correction originating from the quantum and gravitational properties of the system. $`\delta D\gtrsim \sqrt{L_{QG}cT_{obs}},`$ (7) where $`L_{QG}`$ could in principle be an independent fundamental length scale (a length scale characterizing the nature of the novel Quantum-Gravity relationship between system and apparatus), but one is tempted to consider the possibility that $`L_{QG}`$ be simply related to the Planck length.
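Before proceeding it may be useful to attach numbers to (7). The short sketch below evaluates the bound for the naive guess $`L_{QG}\sim L_{planck}`$, taking as $`T_{obs}`$, purely for illustration, the duration of the two-way light journey involved in a Salecker-Wigner measurement of a km-scale distance (all numerical inputs are illustrative assumptions, not measured parameters):

```python
import math

# Order-of-magnitude evaluation of the measurability bound (7),
# delta_D ~ sqrt(L_QG * c * T_obs), for the naive guess L_QG ~ L_planck.
# The distance D is an illustrative km-scale value, not a real setup.

c = 3.0e8            # speed of light [m/s]
L_planck = 1.6e-35   # Planck length [m]
D = 4.0e3            # illustrative distance [m]

T_obs = 2 * D / c                          # two-way light travel time
delta_D = math.sqrt(L_planck * c * T_obs)  # equals sqrt(2 * L_planck * D)

print(f"T_obs   = {T_obs:.1e} s")    # ~ 2.7e-05 s
print(f"delta_D = {delta_D:.1e} m")  # ~ 3.6e-16 m
# This is roughly twenty orders of magnitude above L_planck, and also
# above the ~10^-18 m displacement accuracy quoted in Section 2.
```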
Interestingly, according to (7) the Salecker-Wigner measurement of a distance $`D`$, which requires a time $`2D/c`$, would be affected by an uncertainty of magnitude $`\sqrt{L_{QG}D}`$. A $`\delta D`$ that increases with $`T_{obs}`$ (e.g. as in (7)) is not surprising for space-time fuzziness scenarios; in fact, the same phenomena that would lead to fuzziness are also expected to induce “information loss” (the information stored in a quantum system degrades as $`T_{obs}`$ increases). The argument based on the Salecker-Wigner setup provides motivation to explore the specific form $`\delta D\propto \sqrt{T_{obs}}`$ of this $`T_{obs}`$-dependence. Of course, the analyses reported above and in Ref. do not necessarily indicate that fuzziness of the type operatively defined in Section 2 should be responsible for the measurability bound (7). The intuitive/heuristic arguments I advocated can provide a (tentative) estimate of the measurability bound, but a full Quantum Gravity theory would be required in order to be able to determine which phenomena could be responsible for the bound. If one assumes that indeed fuzziness of the type operatively defined in Section 2 is responsible for the measurability bound (7) one is led to the possibility that a distance $`D`$ would be affected by fundamental stochastic fluctuations with root-mean-square deviation $`\sigma _D`$ given by $`\sigma _D\sim \sqrt{L_{QG}cT_{obs}}.`$ (8) From the type of $`T_{obs}`$-dependence of Eq. (8) it follows that the quantum fluctuations responsible for (8) should have displacement amplitude spectral density $`S(f)`$ with the $`f^{-1}`$ dependence<sup>8</sup><sup>8</sup>8Of course, one expects that an $`f^{-1}`$ dependence of the Quantum-Gravity induced $`S(f)`$ could only be valid for frequencies $`f`$ significantly smaller than the Planck frequency $`c/L_{planck}`$ and significantly larger than the inverse of the time scale over which, even ignoring the gravitational field generated by the devices, the classical geometry of the space-time region where the experiment is performed manifests significant curvature effects. typical of “random walk noise” : $`S(f)=f^{-1}\sqrt{L_{QG}c}.`$ (9) In fact, there is a general relation (which follows from the general property (5)) between $`\sigma _D\propto \sqrt{T_{obs}}`$ and $`S(f)\propto f^{-1}`$. If indeed $`L_{QG}\sim L_{planck}`$, from (9) one obtains $`S(f)\simeq f^{-1}(5\cdot 10^{-14}m\sqrt{Hz})`$. As I shall discuss in detail later, by the standards of modern interferometers this noise level is quite significant, and therefore, before discussing other estimates of distance fuzziness, let us see whether the naive guess $`L_{QG}\sim L_{planck}`$ can be justified within the argument used in arriving at (7). Since (7) was motivated from (6), and in going from (6) to (7) the scale $`L_{QG}`$ was introduced to parametrize the minimum allowed value of $`1/M_b+1/M_d`$, we could get some intuition for $`L_{QG}`$ from trying to establish this minimum allowed value of $`1/M_b+1/M_d`$. As mentioned, a conservative (possibly very conservative) estimate of this minimum value can be obtained by enforcing that $`M_b`$ and $`M_d`$ be at least sufficiently small to avoid black hole formation. In leading order (e.g., assuming corresponding spherical symmetries) this amounts to the requirement that $`M_b<\hbar S_b/(cL_{planck}^2)`$ and $`M_d<\hbar S_d/(cL_{planck}^2)`$, where the lengths $`S_b`$ and $`S_d`$ characterize the sizes of the regions of space where the matter distributions associated to $`M_b`$ and $`M_d`$ are localized.
This observation implies $`\frac{1}{M_b}+\frac{1}{M_d}>\frac{cL_{planck}^2}{\hbar }\left(\frac{1}{S_b}+\frac{1}{S_d}\right).`$ (10) This suggests that $`L_{QG}\sim \min [L_{planck}^2(1/S_b+1/S_d)]`$: $`\delta D\ge \min \sqrt{\left(\frac{1}{S_b}+\frac{1}{S_d}\right)\frac{L_{planck}^2cT_{obs}}{2}}.`$ (11) Of course, this estimate is very preliminary since a full Quantum Gravity theory would be needed here; in particular, the way in which black holes were handled in my argument might have missed important properties which would become clear only once we have the correct theory. However, it is nevertheless striking to observe that the naive guess $`L_{QG}\sim L_{planck}`$ appears extremely far from the intuition emerging from this estimate; in fact, $`L_{QG}\sim L_{planck}`$ would require that the maximum admissible value of $`S_d`$ be of order $`L_{planck}`$. \[I take $`S_b`$ as fixed since it characterizes the size of the bodies whose distance is being measured, but of course the observer can choose the size $`S_d`$ of the devices.\] Since our analysis only holds for bodies and devices that can be treated as approximately rigid<sup>9</sup><sup>9</sup>9The fact that I have included only one contribution from the quantum properties of the devices, the one associated to the quantum properties of the motion of the center of mass, implicitly relies on the assumption that the devices and the bodies can be treated as approximately rigid. Any non-rigidity of the devices would of course introduce additional contributions to the uncertainty in the measurement of $`D`$. I shall further comment on the additional uncertainties that are introduced by the non-rigidity of devices in Section 5, where I consider some properties of the mirrors used in gravity-wave interferometry. and any non-rigidity would introduce additional contributions to the uncertainties, it is reasonable to assume that $`\max [S_d]`$ should be some small length (small enough that any non-rigidity would negligibly affect the measurement procedure), but the condition $`\max [S_d]\sim L_{planck}`$ appears rather extreme. As I shall discuss in Section 4, already available experimental data rule out $`L_{QG}\sim L_{planck}`$ in Eq. (9), and therefore if the $`f^{-1}`$-dependence of Eq. (9) is verified in the physical world (which is of course only one of the possibilities, and a rather speculative one) $`\max [S_d]`$ must be somewhat larger than $`L_{planck}`$. As long as this type of analysis involves a $`\max [S_d]`$ which is independent of $`\delta D`$ one still finds the $`\sqrt{T_{obs}}`$-dependence of $`\sigma _D`$ (i.e. the $`f^{-1}`$-dependence of $`S(f)`$). If the correct Quantum Gravity is such that something like (11) holds but with a $`\max [S_d]`$ that depends on $`\delta D`$, one would have a different $`T_{obs}`$-dependence (and corresponding $`f`$-dependence), as I shall show in one example discussed in Subsection 3.6.

### 3.3 Random-walk noise from random-walk models of quantum space-time fluctuations

Since in this Article, like in Ref. , I am advocating a rather pragmatic phenomenological approach to Quantum Gravity, and taking into account the operative definition of fuzzy distance given in Section 2, it seems reasonable to consider the possibility that the properties of a distance $`D`$ in a quantum space-time would involve a fluctuation of magnitude $`L_{planck}\sim 10^{-35}m`$ over each time interval $`t_{planck}=L_{planck}/c\sim 10^{-44}s`$.
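A toy simulation (a minimal sketch; the walker counts and step numbers are illustrative assumptions) makes the statistics of such a model explicit, confirming that the root-mean-square deviation accumulated after $`N=T_{obs}/t_{planck}`$ steps of size $`L_{planck}`$ grows like $`\sqrt{N}`$, i.e. like $`\sqrt{T_{obs}}`$:

```python
import numpy as np

rng = np.random.default_rng(0)
L_p, t_p, c = 1.6e-35, 5.4e-44, 3.0e8   # Planck length (m), Planck time (s)

# Endpoint of a walk taking one +/- L_p step per t_p: the sum of n_steps
# random signs is distributed as 2*Binomial(n_steps, 1/2) - n_steps.
n_walkers = 100_000
for n_steps in (10**2, 10**4, 10**6):
    endpoints = 2.0 * rng.binomial(n_steps, 0.5, size=n_walkers) - n_steps
    sigma_D = endpoints.std() * L_p
    T_obs = n_steps * t_p
    print(f"n_steps = {n_steps:7d} : sigma_D = {sigma_D:.2e} m vs "
          f"sqrt(L_p c T_obs) = {np.sqrt(L_p * c * T_obs):.2e} m")
```

The two columns agree because $`ct_{planck}=L_{planck}`$, so that $`\sigma _D=L_{planck}\sqrt{N}=\sqrt{L_{planck}cT_{obs}}`$, which is exactly Eq. (12) derived below.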
The type of interferometer noise that would result from such a random-walk model of quantum space-time has the same qualitative structure as the noise I discussed in the previous Subsection motivated by the Salecker-Wigner measurement procedure. In fact, experiments monitoring the distance $`D`$ between two bodies for a time $`T_{obs}`$ (in the sense appropriate, e.g., for a gravity-wave interferometer) would involve a total effect associated to quantum space-time amounting to $`n_{obs}\simeq T_{obs}/t_{planck}`$ randomly directed fluctuations of magnitude $`L_{planck}`$. An elementary analysis allows one to establish that in such a context the root-mean-square deviation $`\sigma _D`$ would be proportional to $`\sqrt{T_{obs}}`$: $$\sigma _D\sim \sqrt{L_{planck}cT_{obs}}.$$ (12) We encounter again the $`\sqrt{T_{obs}}`$-dependence already considered in relation to the analysis of the Salecker-Wigner measurement procedure. Of course, this means that also for this random-walk model of quantum space-time the displacement amplitude spectral density has the characteristic $`f^{-1}`$ behaviour. It also means that Eq. (12) as it stands predicts too much fuzziness. Therefore, if such a random-walk model of quantum space-time is verified in the physical world it must be that some of the simplifying assumptions made in deriving Eq. (12) were too naive. One possibility one might want to consider is the one in which the quantum properties of space-time are such that fluctuations of magnitude $`L_{planck}`$ would occur with frequency somewhat lower than $`1/t_{planck}`$. In closing this Subsection it seems worth adding a few comments on the stochastic processes here considered. In most physical contexts a series of random steps does not lead to a $`\sqrt{T_{obs}}`$ dependence of $`\sigma `$, because the fluctuation-dissipation theorem usually tempers the source of that dependence. The hypothesis explored in this Subsection, which can be partly motivated from the analysis of the Salecker-Wigner measurement procedure reported in the previous Subsection, is that the underlying dynamics of quantum space-time is such that the fluctuation-dissipation theorem is satisfied without spoiling the $`\sqrt{T_{obs}}`$ dependence of $`\sigma `$. This is an intuition which apparently is shared by other authors; in fact, the study reported in Ref. (which followed Ref. by a few months, but clearly was the result of completely independent work) also models some implications of quantum space-time (the ones that affect clocks) with stochastic processes whose underlying dynamics does not produce any dissipation, so that the “fluctuation contribution” to the $`T_{obs}`$ dependence remains unaffected, although the fluctuation-dissipation theorem is fully taken into account. Since the mirrors of interferometers are basically extremities of a pendulum, another aspect that the reader might at first find counter-intuitive is that the $`\sqrt{T_{obs}}`$ dependence of $`\sigma `$, although coming in with a very small prefactor, for extremely large $`T_{obs}`$ would seem to give values of $`\sigma `$ too large to be consistent with the structure of a pendulum. This is a misleading intuition which originates from the experience with ordinary (non-Quantum-Gravity) analyses of the pendulum. In fact, the dynamics of an ordinary pendulum has one extremity “fixed” to a very heavy and rigid body, while the other extremity is fixed to a much lighter body.
The usual stochastic processes considered in the study of the pendulum affect the heavier body in a totally negligible way, while they have strong impact on the dynamics of the lighter body. A pendulum analyzed in the spirit of the present Subsection would be affected by stochastic processes which are of the same magnitude both for its heavier and its lighter extremity. In particular, in the directions orthogonal to the vertical axis the stochastic processes affect the position of the center of mass of the entire pendulum just as they would affect the position of the center of mass of any other body (the string that connects the two extremities of the pendulum would not affect the motion of its center of mass).

### 3.4 Random-walk noise motivated by linear deformation of dispersion relation

Both the analysis of the Salecker-Wigner measurement procedure and the analysis of simple-minded random-walk models of quantum space-time fluctuations have provided some encouragement for the study of interferometer noise of random-walk type. A third candidate Quantum Gravity effect that provides some encouragement for the random-walk noise scenario has emerged in the context of studies of Quantum-Gravity induced deformation of the dispersion relation that characterizes the propagation of massless particles. Deformed dispersion relations are not uncommon in the Quantum Gravity literature. For example, they emerge naturally in Quantum Gravity scenarios requiring a modification of Lorentz symmetry. Modifications of Lorentz symmetry could result from space-time discreteness, a possibility extensively investigated in the Quantum Gravity literature (see, e.g., Ref. ), and would also naturally result from an “active” Quantum-Gravity vacuum of the type advocated by Wheeler and Hawking (such a vacuum might physically label the space-time points). While most Quantum-Gravity approaches will lead to deformed dispersion relations, the specific structure of the deformation can differ significantly from model to model. Assuming that the deformation admits a series expansion at small energies $`E`$, and parametrizing the deformation in terms of an energy<sup>10</sup><sup>10</sup>10I parametrize deformations of dispersion relations in terms of an energy scale $`E_{QG}`$, which is implicitly assumed to be rather close to $`E_{planck}`$, while I parametrize the proposals for measurability bounds with a length scale $`L_{QG}`$, which is implicitly assumed to be rather close to $`L_{planck}`$. This is somewhat redundant, since of course $`E_{planck}=\hbar c/L_{planck}`$, but it can help the reader in identifying the origin of a conjectured fuzziness scenario by simply looking at the type of parametrization that describes the stochastic processes. scale $`E_{QG}`$ (a scale characterizing the onset of Quantum-Gravity dispersion effects, often identified with the Planck energy $`E_{planck}\sim 10^{19}GeV`$), one would expect to be able to approximate the deformed dispersion relation at low energies according to $$c^2𝐩^2\simeq E^2\left[1+\xi \left(\frac{E}{E_{QG}}\right)^\alpha \right]$$ (13) where the power $`\alpha `$ and the sign ambiguity $`\xi =\pm 1`$ would be fixed in a given dynamical framework. For example, in some of the approaches based on dimensionful “$`\kappa `$” quantum deformations of Poincaré symmetries one finds evidence of a dispersion relation for massless particles $`c^2𝐩^2=E_{QG}^2\left[1-e^{E/E_{QG}}\right]^2`$, and therefore $`\xi =\alpha =1`$.
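The stated values of $`\xi `$ and $`\alpha `$ can be checked with a short series expansion; the following sketch (in Python/sympy, using the dispersion relation exactly as quoted above) recovers the leading low-energy correction:

```python
import sympy as sp

E, E_QG = sp.symbols('E E_QG', positive=True)

# kappa-Poincare-inspired dispersion relation for massless particles:
# c^2 p^2 = E_QG^2 * (1 - exp(E/E_QG))^2
c2p2 = E_QG**2 * (1 - sp.exp(E / E_QG))**2

# Low-energy expansion, to be compared with c^2 p^2 ~ E^2 [1 + xi (E/E_QG)^alpha]
print(sp.series(c2p2, E, 0, 4).removeO().expand())
# -> E**2 + E**3/E_QG : the leading correction is + E^2 * (E/E_QG),
#    i.e. xi = +1 and alpha = 1, as stated in the text.
```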
Scenarios (13) with $`\alpha =1`$ are in a sense consistent with random-walk noise. In fact, an experiment involving as a device (as a probe) a massless particle satisfying the dispersion relation (13) with $`\alpha =1`$ would be naturally affected by a device-induced uncertainty that grows with $`\sqrt{T_{obs}}`$. This is for example true in Quantum-Gravity scenarios in which the Hamiltonian equation of motion $`\dot{x}_i=\partial H/\partial p_i`$ is still valid (at least approximately), where the deformed dispersion relation (13) leads to energy-dependent velocities for massless particles $$v\simeq c\left[1-\left(\frac{1+\alpha }{2}\right)\xi \left(\frac{E}{E_{QG}}\right)^\alpha \right],$$ (14) and consequently the uncertainty in the position of the massless probe when a time $`T_{obs}`$ has lapsed since the observer (experimentalist) set off the measurement procedure is given by $$\delta x\simeq c\delta t+\delta vT_{obs}\simeq c\delta t+\frac{1+\alpha }{2}\alpha \frac{E^{\alpha -1}\delta E}{E_{QG}^\alpha }cT_{obs},$$ (15) where $`\delta t`$ is the quantum uncertainty in the time of emission of the probe, $`\delta v`$ is the quantum uncertainty in the velocity of the probe, $`\delta E`$ is the quantum uncertainty in the energy of the probe, and I used the relation between $`\delta v`$ and $`\delta E`$ that follows from (14). Since the quantum uncertainty in the time of emission of a particle and the quantum uncertainty in its energy are related<sup>11</sup><sup>11</sup>11It is well understood that the $`\delta t\delta E\gtrsim \hbar `$ relation is valid only in a weaker sense than, say, Heisenberg’s Uncertainty Principle $`\delta x\delta p\gtrsim \hbar `$. This has its roots in the fact that the time appearing in Quantum-Mechanics equations is just a parameter (not an operator), and in general there is no self-adjoint operator canonically conjugate to the total energy, if the energy spectrum is bounded from below . However, the $`\delta t\delta E\gtrsim \hbar `$ relation does relate $`\delta t`$ intended as quantum uncertainty in the time of emission of a particle and $`\delta E`$ intended as quantum uncertainty in the energy of that same particle. by $`\delta t\delta E\gtrsim \hbar `$, Eq. (15) can be turned into an absolute bound on the uncertainty in the position of the massless probe when a time $`T_{obs}`$ has lapsed since the observer set off the measurement procedure: $$\delta x\gtrsim \frac{c\hbar }{\delta E}+\frac{1+\alpha }{2}\alpha \frac{E^{\alpha -1}\delta E}{E_{QG}^\alpha }cT_{obs}\ge \sqrt{\left(\frac{\alpha +\alpha ^2}{2}\right)\left(\frac{E}{E_{QG}}\right)^{\alpha -1}\frac{c^2\hbar T_{obs}}{E_{QG}}},$$ (16) where I also used the fact that in principle the observer can prepare the probe in a state with desired $`\delta t`$, so it is legitimate to minimize the uncertainty with respect to the free choice of $`\delta t`$. For $`\alpha =1`$ the $`E`$-dependence on the right-hand side of Eq. (16) disappears and one is led again (see Subsections 3.2 and 3.3) to a $`\delta x`$ of the type $`(constant)\sqrt{T_{obs}}`$: $$\delta x\gtrsim \sqrt{\frac{c^2\hbar T_{obs}}{E_{QG}}}.$$ (17) When massless probes are used in the measurement of a distance $`D`$, as in the Salecker-Wigner measurement procedure, the uncertainty (17) in the position of the probe translates directly into an uncertainty on $`D`$: $$\delta D\gtrsim \sqrt{\frac{c^2\hbar T_{obs}}{E_{QG}}}.$$ (18) This was already observed in Refs. which considered the implications of deformed dispersion relations (13) with $`\alpha =1`$ for the Salecker-Wigner measurement procedure.
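A quick numerical check (a sketch; the probe energies and observation time are illustrative choices of mine, with $`E_{QG}`$ set at the Planck scale) confirms both the minimization leading to Eq. (16) and the fact that for $`\alpha =1`$ the result is independent of the probe energy:

```python
import numpy as np

c, hbar = 3.0e8, 1.054571817e-34
E_QG = 1.22e19 * 1.602e-10   # ~Planck energy, converted from GeV to J
T_obs, alpha = 1.0, 1        # illustrative observation time; linear case

def delta_x(dE, E):
    """Eq. (15) with delta_t = hbar/delta_E: emission-time term plus
    dispersion-induced velocity-spread term."""
    return (c * hbar / dE
            + 0.5 * (1 + alpha) * alpha * E**(alpha - 1)
              * dE / E_QG**alpha * c * T_obs)

dE = np.logspace(-25, -5, 2001)            # trial values of delta_E, in J
for E_GeV in (1e-3, 1.0, 1e3):             # illustrative probe energies
    E = E_GeV * 1.602e-10
    print(f"E = {E_GeV:5.0e} GeV : min delta_x = {delta_x(dE, E).min():.2e} m")
print(f"sqrt(c^2 hbar T_obs/E_QG) = {np.sqrt(c**2*hbar*T_obs/E_QG):.2e} m")
```

The grid minimum exceeds the analytic square root of Eq. (17) by the expected factor of 2, and, as anticipated, it does not change with $`E`$ when $`\alpha =1`$.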
Since deformed dispersion relations (13) with $`\alpha =1`$ have led us to the same measurability bound already encountered both in the analysis of the Salecker-Wigner measurement procedure and the analysis of simple-minded random-walk models of quantum space-time fluctuations, if we assume again that such measurability bounds emerge in a full Quantum Gravity as a result of corresponding quantum fluctuations (fuzziness), we are led once again to random-walk noise: $$\sigma _D\sim \sqrt{\frac{c^2\hbar T_{obs}}{E_{QG}}}.$$ (19)

### 3.5 Noise motivated by quadratic deformation of dispersion relation

In the preceding Subsection 3.4 I observed that Quantum-Gravity deformed dispersion relations (13) with $`\alpha =1`$ can also motivate random-walk noise $`\sigma _D\sim (constant)\sqrt{T_{obs}}`$. If we use the same line of reasoning that connects a measurability bound to a scenario for fuzziness when $`\alpha \ne 1`$, we find $`\sigma _D\sim c(E/E_{QG})\sqrt{T_{obs}}`$, where $`c(E/E_{QG})`$ is an ($`\alpha `$-dependent) function of $`E/E_{QG}`$. However, in these cases with $`\alpha \ne 1`$ clearly the connection between measurability bound and fuzzy-distance scenario cannot be too direct; in fact, the energy of the probe $`E`$, which naturally plays a role in the context of the derivation of the measurability bound, does not have a natural counterpart in the context of the conjectured fuzzy-distance scenario. In order to preserve the conjectured connection between measurability bounds and fuzzy-distance scenarios one can be tempted to envision that if $`\alpha \ne 1`$ the interferometer noise levels induced by space-time fuzziness might be of the type \[see Eq. (16)\] $$\sigma _D\sim \sqrt{\left(\frac{\alpha +\alpha ^2}{2}\right)\left(\frac{E^{*}}{E_{QG}}\right)^{\alpha -1}\frac{c^2\hbar T_{obs}}{E_{QG}}},$$ (20) where $`E^{*}`$ is some energy scale characterizing the physical context under consideration. \[For example, at the intuitive level one might conjecture that $`E^{*}`$ could characterize some sort of energy density associated with quantum fluctuations of space-time or an energy scale associated with the masses of the devices used in the measurement process.\] Since $`\alpha \ge 1`$ in all Quantum-Gravity approaches believed to support deformed dispersion relations, and since it is quite plausible that $`E_{QG}`$ would be rather close to $`10^{19}GeV`$, it appears likely that the factor $`(E^{*}/E_{QG})^{\alpha -1}`$ would suppress the random-walk noise effect.

### 3.6 Noise with $`f^{-5/6}`$ amplitude spectral density

In Subsection 3.2 a bound on the measurability of distances based on the Salecker-Wigner procedure was used as motivation for experimental tests of interferometer noise of random-walk type, with $`f^{-1}`$ amplitude spectral density and $`\sqrt{T_{obs}}`$ root-mean-square deviation. In this Subsection I shall pursue further the observation that the relevant measurability bound could be derived by simply insisting that the devices do not turn into black holes. That observation allowed me to derive Eq. (11), which expresses the minimum uncertainty $`\delta D`$ on the measurement of a distance $`D`$ (i.e. the measurability bound for $`D`$) as proportional to $`\sqrt{T_{obs}}`$ and $`\sqrt{(1/S_b+1/S_d)}`$. Within that derivation the minimum uncertainty is therefore obtained in correspondence with the minimum value of $`1/S_b+1/S_d`$ consistent with the structure of the measurement procedure.
Since, given the size $`S_b`$ of the bodies whose distance is being measured, the minimum of $`1/S_b+1/S_d`$ corresponds to $`\max [S_d]`$, I was led to consider how large $`S_d`$ could be while still allowing one to disregard any non-rigidity in the quantum motion of the device (which would otherwise lead to additional contributions to the uncertainties). I managed to motivate the random-walk noise scenario by simply assuming that $`\max [S_d]`$ be independent of the accuracy $`\delta D`$ that the observer would wish to achieve. However, as already argued earlier in this Article, the same physical intuition that motivates some of the fuzzy space-time scenarios here considered also suggests that Quantum Gravity might require a novel measurement theory, possibly involving a new type of relation between system and measuring apparatus. Based on this intuition, it seems reasonable to contemplate the possibility that $`\max [S_d]`$ might actually depend on $`\delta D`$. It is such a scenario that I want to consider in this Subsection. In particular I want to consider the case $`\max [S_d]\sim \delta D`$, which, besides being simple, has the plausible property that it allows only small devices if the uncertainty to be achieved is small, while it would allow correspondingly larger devices if the observer was content with a larger uncertainty. This is also consistent with the idea that elements of non-rigidity in the quantum motion of extended devices might be negligible if anyway the measurement is not aiming for great accuracy, while they might even lead to the most significant contributions to the uncertainty if all other sources of uncertainty are very small. Salecker and Wigner would also argue that “large” devices are not suitable for very accurate space-time measurements (they end up being “in the way” of the measurement procedure) while they might be admissible if space-time is being probed rather softly. In this scenario with $`\max [S_d]\sim \delta D`$, Eq. (11) takes the form $`\delta D\ge \sqrt{\left(\frac{1}{S_b}+\frac{1}{S_d}\right)\frac{L_{planck}^2cT_{obs}}{2}}\ge \sqrt{\frac{L_{planck}^2cT_{obs}}{2\delta D}},`$ (21) which actually gives $`\delta D\ge \left(\frac{1}{2}L_{planck}^2cT_{obs}\right)^{1/3}.`$ (22) As already done with the other measurability bounds discussed in this Article, I shall take Eq. (22) as motivation for the investigation of the corresponding fuzziness scenario characterised by $`\sigma _D\sim \left(\stackrel{~}{L}_{QG}^2cT_{obs}\right)^{1/3}.`$ (23) Notice that in this equation I replaced $`L_{planck}`$ with a generic length scale $`\stackrel{~}{L}_{QG}`$, since it is possible that the heuristic argument leading to Eq. (23) might have captured the qualitative structure of the phenomenon while providing an incorrect estimate of the relevant length scale. As discussed later in this Article significant bounds on this length scale can be set by experimental data, so we can take a phenomenological attitude toward $`\stackrel{~}{L}_{QG}`$. As one can verify for example using Eq. (5), the $`T_{obs}^{1/3}`$ dependence of $`\sigma _D`$ is associated with a displacement amplitude spectral density with $`f^{-5/6}`$ behaviour: $`S(f)=f^{-5/6}(\stackrel{~}{L}_{QG}^2c)^{1/3}.`$ (24) For $`\stackrel{~}{L}_{QG}\sim 10^{-35}m`$ this equation would predict $`S(f)=f^{-5/6}(3\cdot 10^{-21}mHz^{1/3})`$.
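For orientation, a short numerical sketch (the observation times are illustrative assumptions; $`\stackrel{~}{L}_{QG}`$ is set at the Planck length only to reproduce the estimate just quoted) evaluates the fuzziness (23) and the prefactor of the spectral density (24):

```python
import numpy as np

c = 3.0e8
L_t = 1.0e-35   # L~_QG set at the Planck length (an assumption to be tested)

# Eq. (23): sigma_D ~ (L~^2 c T_obs)^(1/3), from solving
# delta_D^3 >= L_p^2 c T_obs / 2 up to factors of order one.
for T_obs in (1e-3, 1.0, 1e3):
    print(f"T_obs = {T_obs:5.0e} s : sigma_D ~ "
          f"{(L_t**2 * c * T_obs) ** (1.0/3.0):.1e} m")

# Eq. (24): prefactor of the f^(-5/6) amplitude spectral density,
# in units of m * Hz^(1/3)
print(f"(L~^2 c)^(1/3) = {(L_t**2 * c) ** (1.0/3.0):.1e} m Hz^(1/3)")
```

The last line reproduces the $`3\cdot 10^{-21}mHz^{1/3}`$ prefactor quoted above.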
## 4 COMPARISON WITH GRAVITY-WAVE INTERFEROMETER DATA

From the point of view of the operative definition of fuzzy distance given in Section 2, the scenarios for space-time fuzziness considered in the previous Section can all be characterized in terms of three alternative possibilities for the root-mean-square deviation $`\sigma _D`$ associated to the fluctuations induced on $`D`$ by conjectured quantum properties of space-time. For convenience I report here the three alternatives for $`\sigma _D`$ that I ended up considering: $$\sigma _D\sim L_{min},$$ (25) $`\sigma _D\sim \sqrt{L_{QG}cT_{obs}},`$ (26) $`\sigma _D\sim \left(\stackrel{~}{L}_{QG}^2cT_{obs}\right)^{1/3}.`$ (27) The discussion of the fuzziness scenarios considered in the previous Section was consistent with the assumption that the length scale characterizing fuzziness (be it $`L_{min}`$, $`L_{QG}`$ or $`\stackrel{~}{L}_{QG}`$) would be a general fundamental property of Quantum Gravity, independent of the peculiarities of the specific experimental setup and of its environment. However, the fuzziness scenario considered in Subsection 3.5 provided some motivation for the idea that at least $`L_{QG}`$ (if Eq. (26) were to be realized in the physical world) might not be a universal length scale, i.e. it might depend on some specific properties of the experimental setup, and in particular in some contexts (those with small $`E^{*}/E_{QG}`$) one might find $`L_{QG}`$ to be significantly smaller than $`L_{planck}`$. The possibility that the “magnitude” of space-time fuzziness might depend on the specific context and experimental setup is also consistent with the arguments which support the possibility of a novel Quantum-Gravity relationship between system and measuring apparatus. If the length scale characterizing fuzziness depended on this relationship it might take different values in different experimental setups. Setting aside these possible complications associated to a novel Quantum-Gravity relationship between system and measuring apparatus, I shall proceed to discuss the bounds set on the length scales $`L_{min}`$, $`L_{QG}`$ and $`\stackrel{~}{L}_{QG}`$ by available experimental data. Let me start by observing that, while conceptually they represent drastic departures from conventional physics, phenomenologically the proposals (25), (26) and (27) appear to encode only minute effects. For example, assuming that $`L_{min}`$, $`L_{QG}`$ and $`\stackrel{~}{L}_{QG}`$ are not much larger than the Planck length, all of these proposals encode submeter uncertainties on the size of the whole observable universe (about $`10^{10}`$ light years). However, the precision of modern gravity-wave interferometers is such that they can provide significant information at least on the proposals (26) and (27). In fact, as already mentioned in Section 2, the operation of gravity-wave interferometers is based on the detection of minute changes in the positions of some test masses (relative to the position of a beam splitter). If these positions were affected by quantum fluctuations of the type discussed above, the operation of gravity-wave interferometers would effectively involve an additional source of noise due to Quantum Gravity. This observation allows one to set interesting bounds already using existing noise-level data obtained at the Caltech 40-meter interferometer, which has achieved displacement noise levels with amplitude spectral density lower than $`10^{-18}m/\sqrt{Hz}`$ for frequencies between $`200`$ and $`2000`$ $`Hz`$.
While these sensitivity levels are still very far from the levels required in order to test proposal (25) (from the analysis reported in Subsection 3.1 it follows that for $`L_{min}\sim L_{planck}`$ and $`f\sim 1000Hz`$ the Quantum-Gravity noise induced in that scenario is only of order $`10^{-36}m/\sqrt{Hz}`$), as seen by straightforward comparison with Eq. (9) these sensitivity levels clearly rule out all values of $`L_{QG}`$ down to the Planck length. Actually, even values of $`L_{QG}`$ significantly smaller than the Planck length are inconsistent with the data reported in Ref. ; in particular, by confronting Eq. (9) with the observed noise level of $`3\cdot 10^{-19}m/\sqrt{Hz}`$ near $`450`$ $`Hz`$, which is the best achieved at the Caltech 40-meter interferometer, one obtains the bound $`L_{QG}\lesssim 10^{-40}m`$. While, as mentioned, at present we should allow for some relatively small factor to intervene in the relation between $`L_{QG}`$ and $`L_{planck}`$, the exclusion of all values of $`L_{QG}`$ down to $`10^{-40}m`$ appears to be quite significant, perhaps even problematic, for the proposal (26). In particular, this experimental bound rules out the possibility that (26) might be the result of space-time fluctuations of the random-walk type discussed in Subsection 3.3, with a fluctuation of magnitude $`L_{planck}\sim 10^{-35}m`$ for each time interval $`t_{planck}\sim 10^{-44}s`$; in fact, as shown above, such a picture would lead to values of $`L_{QG}`$ not significantly smaller than $`L_{planck}`$. The fact that this picture is ruled out is perhaps the most striking lesson coming out of available interferometer data. Only a few years ago it might have seemed impossible to test a scenario involving fluctuations of magnitude $`L_{planck}`$, even if such fluctuations might have been quite frequent (one per $`t_{planck}`$). From the point of view of modeling quantum fluctuations of space-time our simple-minded random-walk model might still be useful, but clearly some new element must be introduced in order to temper the associated fuzziness of space-time; for example, as mentioned in closing Subsection 3.3, one might consider the possibility that fluctuations of magnitude $`L_{planck}`$ would not be as frequent as $`1/t_{planck}`$. In any case, of course, even more stringent bounds on $`L_{QG}`$ are within reach of the next LIGO/VIRGO generation of gravity-wave interferometers. Very little room for adjustments of the random-walk noise scenario would remain available if LIGO and VIRGO also give negative results for this scenario. The sensitivity achieved at the Caltech 40-meter interferometer also sets a bound on the proposal (23)-(24). By observing that Eq. (24) would imply Quantum-Gravity noise levels for gravity-wave interferometers of order $`\stackrel{~}{L}_{QG}^{2/3}(10m^{1/3}/\sqrt{Hz})`$ at frequencies of a few hundred $`Hz`$, one obtains from the data reported in Ref. that $`\stackrel{~}{L}_{QG}\lesssim 10^{-29}m`$. This bound is remarkably stringent in absolute terms, but is still quite far from the range of values one ordinarily considers as likely candidates for length scales appearing in Quantum Gravity. A more significant bound on $`\stackrel{~}{L}_{QG}`$ should be obtained by the LIGO/VIRGO generation of gravity-wave interferometers. For example, it is plausible that the “advanced phase” of LIGO will achieve a displacement noise spectrum of less than $`10^{-20}m/\sqrt{Hz}`$ near $`100`$ $`Hz`$, and this would probe values of $`\stackrel{~}{L}_{QG}`$ as small as $`10^{-34}m`$.
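The arithmetic behind these two bounds is elementary and can be reproduced in a few lines (a sketch; the noise figure and frequency are the Caltech 40-meter values quoted above):

```python
import numpy as np

c = 3.0e8
S_obs, f_obs = 3.0e-19, 450.0   # best noise level (m/sqrt(Hz)) and frequency

# Random-walk scenario, Eq. (9): S(f) = sqrt(L_QG c)/f, so demanding that
# the Quantum-Gravity noise stay below S_obs at f_obs gives
# L_QG < (S_obs * f_obs)^2 / c.
print(f"L_QG  < {(S_obs * f_obs)**2 / c:.0e} m")

# f^(-5/6) scenario, Eq. (24): S(f) = f^(-5/6) (L~^2 c)^(1/3), giving
# L~ < sqrt((S_obs * f_obs^(5/6))^3 / c).
print(f"L~_QG < {np.sqrt((S_obs * f_obs**(5.0/6.0))**3 / c):.0e} m")
```

Both numbers agree with the bounds quoted above, up to the expected factors of order one.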
Looking beyond the LIGO/VIRGO generation of gravity-wave interferometers, one can envisage still quite sizeable margins for improvement by optimizing the performance of the interferometers at low frequencies, where both (9) and (24) become more significant. It appears natural to perform such studies in the quiet environment of space, perhaps through future refinements of LISA-type setups . The indication of the low-frequency range as most promising for Quantum Gravity tests at interferometers should be seen as the most robust result obtained in this Article. The arguments advocated in the previous Section 3 were all rather speculative and it would not be surprising if some of the details of the estimates turned out to be completely off the mark, but the fact that nearly all of those arguments pointed us toward the low-frequency region might nevertheless be indicative. I hope that, in spite of the heuristic nature of the arguments advocated in the previous Section, colleagues on the experimental side will take the low-frequency hint into consideration in planning future experimental tests of quantum properties of space-time.

## 5 ABSOLUTE MEASURABILITY BOUND FOR THE AMPLITUDE OF A GRAVITY WAVE

Up to this point I have discussed how certain plausible quantum properties of space-time would affect the noise levels in interferometers. I considered the sensitivity levels of gravity-wave interferometers because these are the most advanced modern interferometers, not because of their function as gravity-wave detectors. In this Section I instead consider an aspect of the physics of gravity waves; specifically, I discuss the way in which the interplay between Gravity and Quantum Mechanics could affect the measurability of the amplitude of a gravity wave. The reader should notice that in this Section nothing is assumed about Quantum Gravity: I just combine known properties of Gravity and Quantum Mechanics. This is also different from the analyses reported in the previous sections, which concerned candidate Quantum Gravity phenomena. The motivation for considering those Quantum Gravity phenomena came from combining known properties of Gravity and Quantum Mechanics, but the phenomena (e.g. the models for space-time fuzziness) could not be seen as a straightforward combination of Gravity and Quantum Mechanics; they truly pertained to a novel type of physics. Having clarified in which sense this Section represents a deviation from the main bulk of observations reported in the present Article, let me start the discussion by reminding the reader of the fact that, as already mentioned in Section 2, the interference pattern generated by a modern interferometer can be remarkably sensitive to changes in the positions of the mirrors relative to the beam splitter, and is therefore sensitive to gravitational waves (which, as described in the proper reference frame , have the effect of changing these relative positions). With just a few lines of simple algebra one can show that an ideal gravitational wave of amplitude $`h`$ and reduced<sup>12</sup><sup>12</sup>12I report these results in terms of reduced wavelengths $`\lambda ^o`$ (which are related to the wavelengths $`\lambda `$ by $`\lambda ^o=\lambda /(2\pi )`$) in order to avoid cumbersome factors of $`\pi `$ in some of the formulas.
wavelength $`\lambda _{gw}^o`$ propagating along the direction orthogonal to the plane of the interferometer would cause a change in the interference pattern as for a phase shift of magnitude $`\mathrm{\Delta }\varphi =D_L/\lambda ^o`$, where $`\lambda ^o`$ is the reduced wavelength of the laser beam used in the measurement procedure and $`D_L\simeq 2h\lambda _{gw}^o\left|\sin \left(\frac{L}{2\lambda _{gw}^o}\right)\right|,`$ (28) is the magnitude of the change caused by the gravitational wave in the length of the arms of the interferometer. (The changes in the lengths of the two arms have opposite sign .) As already mentioned in Section 2, modern techniques allow the construction of gravity-wave interferometers with truly remarkable sensitivity; in particular, at least for gravitational waves with $`\lambda _{gw}^o`$ of order $`10^3Km`$, the next LIGO/VIRGO generation of detectors should be sensitive to $`h`$ as low as $`3\cdot 10^{-22}`$. Since $`h\sim 3\cdot 10^{-22}`$ causes a $`D_L`$ of order $`10^{-18}m`$ in arm lengths $`L`$ of order $`3Km`$, it is not surprising that in the analysis of gravity-wave interferometers, in spite of their huge size, one ends up having to take into account the type of quantum effects usually significant only for the study of processes at or below the atomic scale. In particular, there is the so-called standard quantum limit on the measurability of $`h`$ that results from the combined minimization of photon shot noise and radiation pressure noise. While a careful discussion of these two noise sources (which the interested reader can find in Ref. ) is quite insightful, here I shall rederive this standard quantum limit in an alternative<sup>13</sup><sup>13</sup>13While the standard quantum limit can be equivalently obtained either from the combined minimization of photon shot noise and radiation pressure noise or from the application of Heisenberg’s uncertainty principle to the position and momentum of the mirror, it is this author’s opinion that there might actually be a fundamental difference between the two derivations. In fact, it appears (see, e.g., Ref. and references therein) that the limit obtained through combined minimization of photon shot noise and radiation pressure noise can be violated by careful exploitation of the properties of squeezed light, whereas the limit obtained through the application of Heisenberg’s uncertainty principle to the position and momentum of the mirror is so fundamental that it could not possibly be violated. and straightforward manner (also discussed in Ref. ), which relies on the application of Heisenberg’s uncertainty principle to the position and momentum of a mirror relative to the position of the beam splitter. This can be done along the lines of my analysis of the Salecker-Wigner procedure for the measurement of distances. Since the mirrors and the beam splitter are macroscopic, and therefore the corresponding momenta and velocities are related non-relativistically, Heisenberg’s uncertainty principle implies that $`\delta x\delta v\ge \frac{\hbar }{2}\left(\frac{1}{M_m}+\frac{1}{M_b}\right)\ge \frac{\hbar }{2M_m},`$ (29) where $`\delta x`$ and $`\delta v`$ are the uncertainties in the relative position and relative velocity, $`M_m`$ is the mass of the mirror, and $`M_b`$ is the mass of the beam splitter.
\[Again, the relative motion is characterised by the reduced mass, which is given in this case by $`(1/M_m+1/M_b)^{-1}`$.\] Clearly, the high precision of the planned measurements requires that the position of the mirrors be kept under control during the whole time $`2L/c`$ that the beam spends in between the arms of the detector before superposition. When combined with (29) this leads to the finding that, for any given value of $`M_m`$, the $`D_L`$ induced by the gravitational wave can be measured only up to an irreducible uncertainty, the so-called standard quantum limit: $`\delta D_L\ge \delta x+\delta v\frac{2L}{c}\ge \delta x+\frac{\hbar L}{cM_m\delta x}\ge \sqrt{\frac{\hbar L}{cM_m}}.`$ (30) Here of course the reader will realize that the conceptual steps are completely analogous to those of the discussion given in Section 3 of the Salecker-Wigner procedure for the measurement of distances. The similarities between the analysis of measurability for Salecker-Wigner distance measurements and the analysis of measurability by gravity-wave interferometers are a consequence of the fact that in both contexts a light signal is exchanged and the measurement procedure requires that the relative positions of some devices be known with high accuracy during the whole time that the signal spends between the bodies. The case of gravity-wave measurements is a canonical example of my general argument that the infinite-mass classical-device limit underlying ordinary Quantum Mechanics is inconsistent with the nature of gravitational measurements. As the devices get more and more massive they not only increasingly disturb the gravitational/geometrical observables, but eventually (well before reaching the infinite-mass limit) they also render impossible the completion of the procedure of measurement of gravitational observables. In trying to assess how this observation affects the measurability of the properties of a gravity wave let me start by combining Eqs. (28) and (30): $`\delta h=\delta \left(\frac{D_L}{L}\right)=h\frac{\delta D_L}{D_L}\ge \frac{\sqrt{\frac{\hbar L}{cM_m}}}{2\lambda _{gw}^o\left|\sin \left(\frac{L}{2\lambda _{gw}^o}\right)\right|}.`$ (31) In complete analogy with some of the observations made in Section 3 concerning the measurability of distances, I observe that, when gravitational effects are taken into account, there is an obvious limitation on the mass of the mirror: $`M_m`$ must be small enough that the mirror does not turn into a black hole.<sup>14</sup><sup>14</sup>14This is of course a very conservative bound, since a mirror stops being useful as a device well before it turns into a black hole, but even this conservative approach leads to an interesting finding. In order for the mirror not to be a black hole one requires $`M_m<\hbar S_m/(cL_{planck}^2)`$, where $`L_{planck}\sim 10^{-33}cm`$ is the Planck length and $`S_m`$ is the size of the region of space occupied by the mirror. This observation combined with (31) implies that one would have obtained a bound on the measurability of $`h`$ if one found a maximum allowed mirror size $`S_m`$. In estimating this maximum $`S_m`$ one can easily be led to two extreme assumptions that go in opposite directions. It is perhaps worth commenting on the weaknesses of these assumptions, as this renders more intuitive the discussion of the correct estimate.
On one extreme, one could suppose that in order to achieve a sensitivity to $`D_L`$ as low as $`10^{-18}m`$ it might be necessary to “accurately position” each $`10^{-36}m^2`$ surface element of the mirror. If this were really necessary, our line of argument would then lead to a rather large measurability bound. Fortunately, the phase of the wavefront of the reflected light beam is determined by the average position of all the atoms across the beam’s width, and microscopic irregularities in the structure of the mirror only lead to scattering of a small fraction of light out of the beam. This suggests that in our analysis the size of the mirror should be assumed to be of the order of the width of the beam . Once this is taken into account another extreme assumption might appear to be viable. In fact, especially when guided by intuition coming from table-top interferometers, one might simply assume that the mirror could be attached to a very massive body. Within this assumption our line of argument would not lead to any bound on the measurability of $`h`$. However, whereas for the type of accuracies typically involved in table-top experiments the idealization of a mirror attached to the table is appropriate<sup>15</sup><sup>15</sup>15Of course, in a table-top experiment it is possible to attach a mirror to the table in such a way that the noise associated to the residual relative motion of the mirror with respect to the table be smaller than all other sources of noise., in gravity-wave interferometers the precision is so high that it becomes necessary to take into account the fact that no attachment procedure can violate Heisenberg’s Uncertainty Principle (and causality). Clearly by attaching a mirror of size $`S_m`$ to a massive body one would not avoid the minimum uncertainty $`\sqrt{cTL_{planck}^2/S_m}`$ in the position of the mirror over a time $`T`$. Actually, mirrors provide a good context in which to illustrate the interplay between gravitational and quantum properties of devices. If a mirror is extended enough that one might not be able to neglect the fact that it is not really moving rigidly with its center of mass, additional contributions to quantum uncertainties are found. The relative motion of different parts of the mirror is, of course, not “immune” to the Uncertainty Principle, and over a time $`T_{obs}`$ the relative position of different parts of the mirror will necessarily have an uncertainty proportional to $`\sqrt{\hbar T_{obs}/m}`$, where $`m`$ is the mass of the small portions of the mirror whose relative position we are considering (rather than the larger mass of the entire mirror). In order to be able to use the mirror these uncertainties must be small enough to render the mirror consistent with the level of accuracy required by the measurement.<sup>16</sup><sup>16</sup>16It is worth emphasizing that ideal mirrors (like other ideal classical devices) are consistent with the laws of ordinary (non-gravitational) Quantum Mechanics, but are inadmissible once gravitational interactions are turned on. In the limit of ordinary Quantum Mechanics in which each small portion of the mirror has infinite mass the Uncertainty Principle ceases to induce non-rigidity in the mirror. However, when gravitational interactions combine with Quantum Mechanics this infinite-mass limit is inconsistent with the procedure of measurement of gravitational observables.
These considerations further support the point that in the present analysis the size $`S_m`$ of the mirror should be taken to be of the order of the width of the beam, rather than being replaced by the size of some massive body attached to the mirror. In light of these considerations one clearly sees an upper bound on $`S_m`$; in fact, if the width of the beam (and therefore the effective size of the mirror) is larger than the $`\lambda _{gw}^o`$ of the gravity wave<sup>17</sup><sup>17</sup>17Note that for the gravitational waves to which LIGO will be most sensitive, which have $`\lambda _{gw}^o`$ of order $`10^3Km`$, the requirement $`S_m<\lambda _{gw}^o`$ simply states that the size of mirrors should be smaller than $`10^3Km`$. This bound might appear very conservative, but I am trying to establish an in-principle limitation on the measurability of $`h`$, and therefore I should not take into account that present-day technology is very far from being able to produce a $`10^3Km`$ mirror with the required profile precision. which one is planning to observe, that same gravity wave would cause phenomena that would not allow the proper completion of the measurement procedure (e.g. deforming the mirror and leading to a nonlinear relation between $`D_L`$ and $`h`$). One concludes that $`M_m`$ should be smaller than $`\hbar \lambda _{gw}^o/(cL_{planck}^2)`$, and this can be combined with (31) to obtain the measurability bound $`\delta h>\frac{L_{planck}}{2\lambda _{gw}^o}\frac{\sqrt{L/\lambda _{gw}^o}}{\left|\sin \left(\frac{L}{2\lambda _{gw}^o}\right)\right|}.`$ (32) This result not only sets a lower bound on the measurability of $`h`$ with given arm length $`L`$, but also encodes an absolute (i.e. irrespective of the value of $`L`$) lower bound, as a result of the fact that the function $`\sqrt{x}/|\sin (x/2)|`$ has an absolute minimum: $`\min [\sqrt{x}/|\sin (x/2)|]\simeq 1.66`$. This novel measurability bound is a significant departure from the principles of ordinary Quantum Mechanics, especially in light of the fact that it describes a limitation on the measurability of a single observable (the amplitude $`h`$ of a gravity wave), and that this limitation turns out to depend on the value (not the associated uncertainty) of another observable (the reduced wavelength $`\lambda _{gw}^o`$ of the same gravity wave). It is also significant that this new bound (32) encodes an aspect of a novel type of interplay between system and measuring apparatus in Quantum-Gravity regimes; in fact, in deriving (32) a crucial role was played by the fact that in accurate measurements of gravitational/geometrical observables it is no longer possible to advocate an idealized description of the devices. The $`T_{obs}`$-dependent bound on the measurability of distances which I reviewed in Section 3 also encodes a departure from ordinary Quantum Mechanics and a novel type of interplay between system and measuring apparatus, but the bound (32) on the measurability of the amplitude of a gravity wave (which is one of the new results reported in the present Article) should provide even stronger motivation for the search for a formalism in which Quantum Gravity is based on a new mechanics, not exactly given by ordinary Quantum Mechanics.
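Before commenting further on the significance of (32), it may be useful to evaluate it numerically; the following sketch (with $`\lambda _{gw}^o`$ set to the $`10^3Km`$ value quoted above) locates the minimum of $`\sqrt{x}/|\sin (x/2)|`$ and the resulting absolute bound:

```python
import numpy as np
from scipy.optimize import minimize_scalar

hbar, c = 1.054571817e-34, 3.0e8
L_p = 1.6e-35       # Planck length, m
lam_gw = 1.0e6      # reduced wavelength of the gravity wave, m (~1e3 Km)

# Conservative black-hole limit on the mirror mass: M_m < hbar*lam/(c*L_p^2)
print(f"maximum mirror mass    : {hbar * lam_gw / (c * L_p**2):.1e} kg")

# Minimum of sqrt(x)/|sin(x/2)| over x = L/lam_gw
g = lambda x: np.sqrt(x) / abs(np.sin(x / 2.0))
res = minimize_scalar(g, bounds=(0.1, np.pi), method='bounded')
print(f"min sqrt(x)/|sin(x/2)| : {res.fun:.2f} at x = {res.x:.2f}")

# Absolute measurability bound of Eq. (32)
print(f"delta_h > {L_p / (2.0 * lam_gw) * res.fun:.1e}")
```

The resulting $`\delta h\gtrsim 10^{-41}`$ is some nineteen orders of magnitude below the $`h\sim 3\cdot 10^{-22}`$ sensitivity of the LIGO/VIRGO generation, consistent with the observation made below that the bound (32) would not observably affect planned interferometers.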
In fact, while one might still hope to find alternatives to the Salecker-Wigner measurement procedure that allow one to measure distances while evading the bound (8) (or its $`\delta D\sim \max [S_d]`$ version (23)), it appears hard to imagine that there could be anything (even among “gedanken laboratories”) better than an interferometer for measurements of the amplitude of a gravity wave. The fact that in the limit $`\lambda _{gw}^o\rightarrow \infty `$ (the no-gravity-wave limit) the bound (32) reduces to the bound $`\delta h>\delta L/L`$ is of course consistent with the fact that when no gravity wave is going through the interferometer the only Quantum-Gravity related noise sources (if any) come directly from the distance fuzziness $`\delta L`$, which I considered in the previous Sections. The analysis reported in this Section appears to indicate that the interferometer noise associated to distance fuzziness could be simply seen as the $`\lambda _{gw}^o\rightarrow \infty `$ limit of a more complicated $`\lambda _{gw}^o`$-dependent type of Quantum-Gravity related noise affecting the observation of gravity waves in a full Quantum Gravity context. It is also important to realize that the bound (32) cannot be obtained by just assuming that the Planck length $`L_{planck}`$ provides the minimum uncertainty for lengths . In fact, if the only limitation were $`\delta D_L\ge L_{planck}`$ the resulting uncertainty on $`h`$, which I denote with $`\delta h^{(L_{planck})}`$, would have the property $`\min [\delta h^{(L_{planck})}]=\min \left[\frac{L_{planck}}{2\lambda _{gw}^o\left|\sin \left(\frac{L}{2\lambda _{gw}^o}\right)\right|}\right]=\frac{L_{planck}}{2\lambda _{gw}^o},`$ (33) whereas, exploiting the above-mentioned properties of the function $`\sqrt{x}/|\sin (x/2)|`$, from (32) one finds<sup>18</sup><sup>18</sup>18I am here (for “pedagogical” purposes) somewhat simplifying the comparison between $`\delta h`$ and $`\delta h^{(L_{planck})}`$. As mentioned, in principle one should take into account both uncertainties inherent in the “system” under observation, which are likely to be characterized exclusively by the Planck-length bound, and uncertainties coming from the “measuring apparatus”, which might easily involve other length (or time) scales besides the Planck length. It would therefore be proper to compare $`\delta h^{(L_{planck})}`$, which would be the only contribution present in the conventional idealization of “classical devices”, with the sum $`\delta h+\delta h^{(L_{planck})}`$, which, as appropriate for Quantum Gravity, provides a sum of system-inherent uncertainties plus apparatus-induced uncertainties. $`\min [\delta h]>\min \left[\frac{L_{planck}}{2\lambda _{gw}^o}\frac{\sqrt{L/\lambda _{gw}^o}}{\left|\sin \left(\frac{L}{2\lambda _{gw}^o}\right)\right|}\right]>\min [\delta h^{(L_{planck})}].`$ (34) In general, the dependence of $`\delta h^{(L_{planck})}`$ on $`\lambda _{gw}^o`$ is different from the one of $`\delta h`$. Actually, in light of the comparison of (33) with (34) it is amusing to observe that the bound (32) could be seen as the result of a minimum length $`L_{planck}`$ combined with a $`\lambda _{gw}^o`$-dependent correction.
This would be consistent with some of the ideas mentioned in Section 3, the energy-dependent effect of in vacuo dispersion and the corresponding proposal (19) for distance fuzziness, in which the magnitude of the Quantum Gravity effect depends rather sensitively on some energy-related aspect of the problem under investigation (just as $`\lambda _{gw}^o`$ fixes the energy of the gravity wave). It is easy to verify that the bound (32) would not observably affect the operation of even the most sophisticated planned interferometers. However, in the spirit of what I did in the previous Sections considering the operative definition of distances, also for the amplitudes of gravity waves the fact that we have encountered an obstruction in the measurement analysis based on ordinary Quantum Mechanics (and the fact that by mixing Gravity and Quantum Mechanics we have obtained some intuition for novel qualitative features of such gravity-wave amplitudes in Quantum Gravity) could be used as a starting point for the proposal of novel Quantum Gravity effects, possibly larger than the estimate (32) obtained by naive combination of Gravity and Quantum Mechanics without any attempt at a fully Quantum-Gravity picture of the phenomenon. Although possibly very interesting, these fully Quantum-Gravity scenarios for the properties of gravity-wave amplitudes will not be explored in the present Article.

## 6 RELATIONS WITH OTHER QUANTUM GRAVITY APPROACHES

The general strategy for the search of Quantum Gravity which has led to the arguments reviewed and/or presented in the previous sections is evidently quite different from the strategy adopted in other approaches to the unification of Gravity and Quantum Mechanics. \[I shall discuss these differences in greater detail in Section 8.\] However, it is becoming increasingly clear (especially in discussions and research papers that were motivated by Refs. ) that in spite of these differences some common elements of intuition concerning the interplay of Gravity and Quantum Mechanics are emerging. In this Section I want to emphasize these relationships with some Quantum Gravity approaches, and at the same time clarify the differences with respect to other Quantum Gravity approaches.

### 6.1 Canonical Quantum Gravity

One of the most popular Quantum Gravity approaches (whose popularity might have been the reason for the diffusion of the possibly misleading name “Quantum Gravity”) is the one in which the ordinary canonical formalism of Quantum Mechanics is applied to (some formulation of) Einstein’s Gravity. While I must emphasize again that some of the observations reviewed and/or reported in the previous sections strongly suggest that Quantum Gravity should require a new mechanics, not exactly given by ordinary Quantum Mechanics, it is nonetheless encouraging that some of the phenomena considered in the previous sections have also emerged in studies of Canonical Quantum Gravity. The most direct connection was found in the study reported in Ref. , which was motivated by Ref. . In fact, Ref. shows that the popular Canonical/Loop Quantum Gravity admits the phenomenon of deformed dispersion relations, with the deformation going linearly with the Planck length.
Concerning the bounds on the measurability of distances it is probably fair to say that the situation in Canonical/Loop Quantum Gravity is not yet clear, because the present formulations do not appear to lead to a compelling candidate “length operator.” This author would like to interpret the problems associated with the length operator as an indication that perhaps something unexpected might actually emerge in Canonical/Loop Quantum Gravity as a length operator, possibly something with properties fitting the intuition emerging from the analyses in Subsections 3.2, 3.3, and 3.6. Actually, the random-walk space-time fuzziness models discussed in Subsection 3.3 might have a (somewhat weak, but intriguing) connection with “Quantum Mechanics applied to Gravity”, at least at the level suggested by comparison with the scenario discussed in Ref. , which was motivated by the intuition that is emerging from investigations of Canonical/Loop Quantum Gravity. The “moves” of Ref. share many of the properties of the “random steps” of my random-walk models. Unfortunately, in both approaches one is still searching for a more complete description of the dynamics, and particularly for estimates of how frequently (in time) an $`L_{planck}`$-size step/move is taken.

### 6.2 Non-commutative geometry and deformed symmetries

Although this was not emphasized in the present Article, some of the Quantum Gravity intuition emerging from the observations in the previous sections fits rather naturally within certain approaches based on non-commutative geometry and deformed symmetries. In particular, there is growing evidence that theories living in the non-commutative Minkowski space proposed in Refs. , which involves a dimensionful (possibly Planck-length related) deformation parameter, would host both the phenomenon of Planck-length-linear deformations of dispersion relations and phenomena setting $`T_{obs}`$-dependent bounds on the measurability of distances. In general, the possibility of dimensionful deformations of symmetries might be quite natural if indeed the relation between system and measuring apparatus is modified at the Quantum Gravity level. For example, the symmetries we observe in ordinary Quantum Mechanics experiments at low energies might be the ones valid in the limit in which the interaction between system and measuring apparatus can be neglected. The dimensionful parameter characterizing the deformation of symmetries could mark a clear separation between (high-energy) processes in which the violations of ordinary symmetries are large and (low-energy) processes in which ordinary symmetries hold to a very good approximation. On the subject of quantum deformations of space-time symmetries interesting work has also been devoted (see, e.g., Refs. ) to frameworks that would host a bound on the measurability of distances of type (1).

### 6.3 Critical and non-critical String Theories

Unfortunately, in the popular Quantum Gravity approach based on Critical Superstring Theory<sup>19</sup><sup>19</sup>19As already mentioned, the mechanics of String Theory is just an ordinary Quantum Mechanics; the novelty of the approach comes from the fact that the fundamental dynamical entities are extended objects rather than point particles. not many results have been derived directly concerning the quantum properties of space-time. Perhaps the most noticeable such results are the ones on limitations on the measurability of distances that emerged in the scattering analyses reported in Refs.
, which I already mentioned in Subsection 3.1, since they provide support for the hypothesis that Critical Superstring Theory might also host a bound on the measurability of distances of type (1). A rather different picture is emerging (within the difficulties of this rich formalism) in Liouville (non-critical) String Theory , whose development was partly motivated by intuition concerning the “Quantum Gravity vacuum” that is rather close to the one traditionally associated with the works of Wheeler and Hawking . Evidence has been found in Liouville String Theory supporting the validity of deformed dispersion relations, with the deformation going linearly with the Planck/string length. In the sense clarified in Subsection 3.4 this approach might also host a bound on the measurability of distances which grows with $`\sqrt{T_{obs}}`$. ### 6.4 Other types of measurement analyses In light of the scarce opportunities to get any experimental input in the search for Quantum Gravity, it is not surprising that many authors have been seeking some intuition by formal analyses of the ways in which the interplay between Gravity and Quantum Mechanics could affect measurement procedures. A large portion of these analyses produced a “$`min[\delta D]`$” with $`D`$ denoting a distance; however, the same type of notation was used for structures defined in significantly different ways. Different meanings have also been given by different authors to the statement “absolute bound on the measurability of an observable.” Quite important for the topics here discussed are the differences (which might not be totally transparent as a result of this unfortunate choice of overlapping notations) between the approach advocated in the present Article (and in Refs. ) and the approaches advocated in Refs. . In the present Article “$`min[\delta D]`$” denotes an absolute limitation on the measurability of a distance $`D`$. The studies analyzed the interplay of Gravity and Quantum Mechanics in defining a net of time-like geodesics, and in those studies “$`min[\delta D]`$” characterizes the maximum “tightness” achievable for the net of time-like geodesics. Moreover, in Refs. it was required that the measurement procedure should not affect/modify the geometric observable being measured, and “absolute bounds on the measurability” were obtained in this specific sense. Instead, here and in Refs. I allowed the possibility for the observable which is being measured to depend also on the devices (the underlying view is that observables in Quantum Gravity would always be, in a sense, shared properties of “system” and “apparatus”), and I only required that the nature of the devices be consistent with the various stages of the measurement procedure (e.g., a black-hole device would not allow some of the required exchanges of signal). My measurability bounds are therefore to be understood from this more fundamental perspective, and this is crucial for the possibility that these measurability bounds be associated with a fundamental Quantum-Gravity mechanism for “fuzziness” (quantum fluctuations of space-time). The analyses reported in Refs. did not include any reference to fuzzy space-times of the type operatively defined in Section 2. The more fundamental nature of the bounds I obtained is also crucial for the arguments suggesting that Quantum Gravity might require a new mechanics, not exactly given by ordinary Quantum Mechanics. The analyses reported in Refs. did not include any reference to this possibility.
I also notice that the conjectured relation between measurability bounds and noise levels in interferometers (e.g. the ones characterized by $`S(f)∼f^{-1}`$ or $`S(f)∼f^{-5/6}`$) is based on the dependence of the measurability bounds on the time of observation $`T_{obs}`$. In fact, this $`T_{obs}`$-dependence has been here emphasized, while in Refs. the emphasis was placed on observed lengths rather than on the time needed to observe them. Having clarified that there is a “double difference” (different “$`min`$” and different “$`\delta D`$”) between the meaning of $`min[\delta D]`$ adopted in the present Article and the meaning of $`min[\delta D]`$ adopted in Refs. , it is however important to notice that the studies reported in Refs. were among the first studies which showed how in some aspects of measurement analysis the Planck length might appear together with other length scales in the problem. For example, a Quantum Gravity effect naturally involving something of length-squared dimensions might not necessarily go like $`L_{planck}^2`$; in some cases it could go like $`\mathrm{\Lambda }L_{planck}`$, with $`\mathrm{\Lambda }`$ some other length scale in the problem. Some of my arguments are examples of this possibility; in particular, I find in some cases relations of the type (see, e.g., Eq. (6)) $`\delta D≳\delta x′+{\displaystyle \frac{A}{\delta x′}}≥\sqrt{A},`$ (35) where $`A`$, which has length-squared dimensions, turns out to be given by the product of the $`L_{planck}`$-like small fundamental length $`L_{QG}`$ and the typically larger length scale $`cT_{obs}`$. Interestingly, the analysis of the interplay of Gravity and Quantum Mechanics in defining a net of time-like geodesics reported in Ref. concluded that the maximum “tightness” achievable for the geodesics would be characterized by $`\sqrt{L_{planck}^2R^{-1}s}`$, where $`R`$ is the radius of the (spherically symmetric) clocks whose world lines define the network of geodesics, and $`s`$ is the characteristic distance scale over which one is intending to define such a network. The $`\sqrt{L_{planck}^2R^{-1}s}`$ maximum tightness discussed in Ref. is formally analogous to my Eq. (11), but, as clarified above, this “maximum tightness” was defined in a way that is very (“doubly”) different from my “$`min[\delta D]`$”, and therefore the two proposals have completely different physical implications. Actually, in Ref. it was also stated that for a single geodesic distance (which might be closer to the type of distance measurability analysis reported here and in Refs. ) one could achieve accuracy significantly better than the formula $`\sqrt{L_{planck}^2R^{-1}s}`$, which was interpreted in Ref. as a direct result of the structure of a network of geodesics. Relations of the type $`min[\delta D]∼(L_{planck}^2D)^{1/3}`$, which are formally analogous to Eq. (22), were encountered in the analysis of maximum tightness achievable for a geodesic network reported in Ref. and in the analysis of measurability of distances reported in Ref. . Although once again the definitions of “$`min`$” and “$`\delta D`$” used in these studies are completely different from the ones relevant for the “$`min[\delta D]`$” of Eq. (22), the analyses reported in Ref. do provide some additional motivation for the scenario (22), at least in as much as they give examples of the fact that behaviour of the type $`L_{planck}^{2/3}`$ can naturally emerge in Quantum-Gravity measurement analyses.
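For completeness, the one-line step behind the $`\sqrt{A}`$ lower bound in Eq. (35) is just a minimization over the free scale $`\delta x′`$, using $`A=cT_{obs}L_{QG}`$ as stated above:

```latex
\frac{d}{d(\delta x')}\left[\delta x' + \frac{A}{\delta x'}\right] = 0
\;\Rightarrow\; \delta x' = \sqrt{A},
\qquad
\min_{\delta x'}\left[\delta x' + \frac{A}{\delta x'}\right] = 2\sqrt{A} \ge \sqrt{A},
\qquad A = c\,T_{obs}\,L_{QG}.
```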
### 6.5 Other interferometry-based Quantum-Gravity studies Several authors have put forward ideas which combine, in one way or another, some aspects of interferometry and candidate Quantum Gravity phenomena. While the viewpoints and the results of all of these works are significantly different from the ones of the present Article, it seems appropriate to at least briefly mention these studies, for the benefit of the interested reader. A first example, on which I shall return in the next Section, is provided by the idea that we might be able to use modern gravity-wave interferometers to investigate certain candidate early-universe String Theory effects. The studies reported in Ref. (and references therein) have considered how certain effectively stochastic properties of space-time would affect the evolution of quantum-mechanical states. The stochastic properties there considered are different from the ones discussed in the present Article, but were introduced within a similar viewpoint, i.e. stochastic processes as effective description of quantum space-time processes. The implications of these stochastic properties for the evolution of quantum-mechanical states were modeled via the formalism of “primary state diffusion”, but only rather crude models turned out to be treatable. Atom interferometers were found to have properties suitable for tests of this scenario. I should however emphasize that in Ref. the proposed tests concerned the Quantum Mechanics of systems living in a fuzzy space-time, whereas here and in Ref. I have discussed direct tests of effectively stochastic properties of space-time. The studies reported in Refs. are more closely related to the physics of gravity-wave interferometers. In particular, combining a detailed analysis of certain aspects of interferometry and the assumption that quantum space-time effects could be estimated using ordinary Quantum Mechanics applied to Einstein’s gravity, Refs. developed a model of Quantum-Gravity induced noise for interferometers which fits within the scenario I here discussed in Subsection 3.1. \[Actually, Refs. discuss in greater detail the spectral features encoded in Eqs. (3)-(4), while, as explained in Subsection 3.1, it was sufficient for me to provide a simplified discussion.\] As mentioned in Subsection 3.1, it is not surprising that the assumption that Quantum Gravity be given by an ordinary Quantum Mechanics applied to (some formulation of) Einstein’s gravity would lead to noise levels of the type encoded in Eqs. (3)-(4). The recent paper of Ref. proposed certain quantum properties of gravity waves and discussed the implications for gravity-wave interferometry. Let me emphasize that instead the effects considered here and in Ref. concern the properties of the interferometer and would affect the operation of any interferometer, whether or not it would be used to detect gravity waves. Here and in Ref. the emphasis on modern gravity-wave interferometers is only due to the fact that these interferometers, because of the extraordinary challenges posed by the detection of classical gravity waves, are the most advanced interferometers available and therefore provide the best opportunity to test scenarios for Quantum-Gravity induced noise in interferometers.
## 7 A QUANTUM-GRAVITY PHENOMENOLOGY PROGRAMME While opportunities to test experimentally the nature of the interplay between Gravity and Quantum Mechanics remain extremely rare, the proposals now available represent a small fortune with respect to the expectations of not many years ago. We have finally at least reached the point that the most optimistic/speculative estimates of Quantum Gravity effects can be falsified. In searching for even more opportunities to test Quantum Gravity it is useful to analyze the proposals put forward in Refs. as representatives of the two generic mechanisms that one might imagine to use in Quantum-Gravity experiments. Let me comment here on these mechanisms. The most natural discovery strategy would of course resort to strong Quantum Gravity effects, of the type we expect for collisions of elementary particles endowed with momenta of order the Planck mass ($`10^{19}GeV`$). Since presently and for the foreseeable future we do not expect to be able to set up such collisions, the only opportunities to find evidence of strong Quantum Gravity effects should be found in natural phenomena (e.g. astrophysical contexts that might excite strong Quantum Gravity effects) rather than in controlled laboratory setups. An example is provided by the experiment proposed in Ref. which would be looking for residual traces of some strong Quantum Gravity effects<sup>20</sup>Most of the effects considered in Ref. actually concern the interplay between classical Gravity and Quantum Mechanics, so they pertain to a very special regime of Quantum Gravity. This is also true of the experiments on gravitationally induced quantum phases . Instead the experiments discussed here and in Refs. concern proposed quantum properties of space-time itself, and could therefore probe even more deeply the structure of Quantum Gravity. (specifically, Critical Superstring Theory effects) which might have occurred in the early Universe. Another class of Quantum Gravity experiments is based on physical contexts in which small Quantum Gravity effects lead to observably large signatures thanks to the interplay with a naturally large number present in such contexts. This is the basic mechanism underlying all the proposals in Refs. and underlying the interferometric studies of space-time fuzziness proposed in Ref. which I have here discussed in detail. For the interferometric studies which I am proposing the large number is essentially provided by the ratio between the inverse of the Planck time and the typical frequencies of operation of gravity-wave interferometers. In practice, if some of the space-time fuzziness scenarios discussed in Section 3 capture actual features of quantum space-time, in a time as long as the inverse of the typical gravity-wave interferometer frequency of operation an extremely large number of minute quantum fluctuations in the distance $`D`$ could add up. A large sum of small quantities can give a sizeable final result, and in fact this final result would be observable if for example the noise induced by fuzziness was characterised by $`f^{-2}cL_{QG}`$, which is comparable in size to the corresponding quantity $`[S_{exp}(f)]^2`$ characterising the noise levels achievable with modern interferometers.
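The comparison invoked here is easy to make quantitative. The following sketch is our own rough estimate; the frequency and displacement-noise values are assumed for illustration and are not the specifications of any actual detector:

```python
# Setting f^{-2} c L_QG equal to [S_exp(f)]^2, as in the text, gives the
# smallest fuzziness scale L_QG that an interferometer of assumed sensitivity
# could probe.  All numbers are illustrative assumptions.
c = 3.0e8            # speed of light, m/s
f = 4.5e2            # assumed observation frequency, Hz
S_exp = 3.0e-19      # assumed displacement noise level, m / sqrt(Hz)

L_QG_max = S_exp**2 * f**2 / c
print(f"L_QG probed down to ~ {L_QG_max:.1e} m")      # ~ 6e-41 m

L_planck = 1.6e-35   # m
print(f"i.e. ~ {L_QG_max / L_planck:.1e} Planck lengths")  # far below 1
```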
For the physical context of gamma rays reaching us from far away astrophysical objects the large number can be provided by the ratio between the time travelled by the gamma rays and the time scale over which the signal presents significant structure (time spread of peaks etc.). The proposal made in Ref. basically uses the fact that this allows one to add up a very large number of very minute dispersion-inducing Quantum Gravity effects, and if the deformation of the dispersion relation goes linearly with the Planck length the resulting energy-dependent time-delay turns out to be comparable to the time scale that characterizes some of these astrophysical signals, thereby allowing a direct test of the Quantum Gravity scenario. Similarly, experiments investigating the quantum phases induced by large gravitational fields (the only aspect of the interplay between Gravity and Quantum Mechanics on which we already have positive “discovery” data ) exploit the fact that gravitational forces are additive and therefore, for example, gravitational effects due to the earth are the result of a very large number of very minute gravitational effects (instead we would not be able to measure the quantum phases induced by a single elementary particle). The large number involved in the possibility that Quantum Gravity effects might leave an observable trace in some aspects of the phenomenology of the neutral-kaon system cannot be directly interpreted as the number of minute Quantum Gravity effects to which the system is exposed. It is rather that the conjectured Quantum Gravity effects would involve, in addition to the small dimensionless ratio between the energy of the kaons and the Planck energy, also a very large dimensionless ratio characterising the physics of neutral kaons. This idea of figuring out ways to put together many minute effects (which until a short time ago had been strangely dismissed by the Quantum Gravity community) has a time-honored tradition in physics. Perhaps the clearest example is the particle-physics experiment setting bounds on proton lifetime. The relevant dimensionless ratio characterising proton-decay analyses is extremely small (somewhere in the neighborhood of $`10^{-64}`$, since it is given by the fourth power of the ratio between the mass of the proton and the grand-unification scale), but by keeping under observation a correspondingly large number of protons experimentalists are managing<sup>21</sup>This author’s familiarity with the accomplishments of proton-decay experiments has certainly contributed to the moderate optimism for the outlook of Quantum Gravity phenomenology which is implicit in the present Article. to set highly significant bounds. Another point of contact between proposed Quantum Gravity experiments and proton decay experiments is that a crucial role in rendering the experiment viable is played by the fact that the process under investigation would violate some of the symmetries of ordinary physics. This plays a central role in the experiments proposed in Refs. . ## 8 MORE ON A LOW-ENERGY EFFECTIVE THEORY OF QUANTUM GRAVITY While the primary emphasis has been on direct experimental tests of crude scenarios for space-time fuzziness, part of this Article has been devoted to the discussion (expanding on what was reported in Refs. ) of the properties that one could demand of a theory suitable for a first stage of partial unification of Gravity and Quantum Mechanics.
This first stage of partial unification would be a low-energy effective theory capturing only some rough features of Quantum Gravity, possibly associated with the structure of the non-trivial “Quantum Gravity vacuum”. One of the features that appear desirable for an effective low-energy theory of Quantum Gravity is that its mechanics be not exactly given by ordinary Quantum Mechanics. I have reviewed some of the arguments in support of this hypothesis when I discussed the Salecker-Wigner setup for the measurement of distances, and showed that the problems associated with the infinite-mass classical-device limit provide encouragement for the idea that the analysis of Quantum Gravity experiments should be fundamentally different from the one of the experiments described by ordinary Quantum Mechanics. A similar conclusion was already drawn in the context of attempts (see, e.g., Ref. ) to generalize to the study of the measurability of gravitational fields the famous Bohr-Rosenfeld analysis of the measurability of the electromagnetic field. In fact, in order to achieve the accuracy allowed by the formalism of ordinary Quantum Mechanics, the Bohr-Rosenfeld measurement procedure resorts to ideal test particles of infinite mass, which would of course not be admissible probes in a gravitational context . Since all of the (extensive) experimental evidence for ordinary Quantum Mechanics comes from experiments in which the behaviour of the devices can be meaningfully approximated as classical, and moreover it is well-understood that the conceptual structure of ordinary Quantum Mechanics makes it only acceptable as the theoretical framework for the description of the outcomes of this specific type of experiment, it seems reasonable to explore the possibility that Quantum Gravity might require a new mechanics, not exactly given by ordinary Quantum Mechanics and probably involving a novel (in a sense, “more democratic”) relationship between “measuring apparatus” and “system”. Other (related) plausible features of the correct effective low-energy theory of Quantum Gravity are novel bounds on the measurability of distances. This appears to be an inevitable consequence of relinquishing the idealized methods of measurement analysis that rely on the artifacts of the infinite-mass classical-device limit. If indeed one of these novel measurability bounds holds in the physical world, and if indeed the structure of the Quantum-Gravity vacuum is non-trivial and involves space-time fuzziness, it also appears plausible that these two features are related, i.e. that the fuzziness of space-time would be ultimately responsible for the measurability bounds. It is this scenario which I have investigated here and in Ref. , emphasizing the opportunity for direct tests which is provided by modern interferometers. The intuition emerging from these first investigations of the properties of a low-energy effective Quantum Gravity might or might not turn out to be accurate, but additional work on this first stage of partial unification of Gravity and Quantum Mechanics is anyway well motivated in light of the huge gap between the Planck regime and the physical regimes ordinarily accessed in present-day particle-physics or gravity experiments. Results on a low-energy effective Quantum Gravity might provide a perspective on Quantum Gravity that is complementary with respect to the one emerging from approaches based on proposals for a one-step full unification of Gravity and Quantum Mechanics.
On one side of this complementarity there are the attempts to find a low-energy effective Quantum Gravity, which are necessarily driven by intuition based on direct extrapolation from known physical regimes; they are therefore rather close to the phenomenological realm, but they are confronted by huge difficulties when trying to incorporate the physical intuition within a completely new formalism. On the other side there are the attempts of one-step full unification of Gravity and Quantum Mechanics, which usually start from some intuition concerning the appropriate formalism (e.g., “Canonical/Loop Quantum Gravity” or “Critical Superstring Theory” ) but are confronted by huge difficulties when trying to “come down” to the level of phenomenological predictions. These complementary perspectives might meet at the mid-way point leading to new insight in Quantum Gravity physics. One instance in which this mid-way-point meeting has already been successful is provided by the candidate phenomenon of Quantum-Gravity induced deformed dispersion relations, which was proposed within a purely phenomenological analysis of the type needed for the search of a low-energy theory of Quantum Gravity, but was then shown to be consistent with the structure of Canonical/Loop Quantum Gravity. ## 9 OUTLOOK The panorama of opportunities for Quantum Gravity phenomenology is certainly becoming richer. In this Article I have taken the conservative viewpoint that the length scales parametrizing proposed Quantum Gravity phenomena should be somewhere in the neighborhood of the Planck length, but I have taken the optimistic (although supported by various Quantum Gravity scenarios, including Canonical/Loop Quantum Gravity ) viewpoint that there should be Quantum Gravity effects going linearly or quadratically with the Planck length, i.e. effects which are penalized only by one or two powers of the Planck length. An exciting recent development is that results in the general area of String Theory have motivated work (see, e.g., Ref. ) on theories with large extra dimensions in which Quantum Gravity effects would rather naturally become significant at scales much larger than the conventional Planck length. In such scenarios one expects to find phenomena for which the length scale characterizing the onset of large Quantum-Gravity corrections is much larger than the conventional Planck length. The example of advanced modern interferometers here emphasized provides further evidence (in addition to the one emerging from Refs. ) of the fact that we should eventually be able to find signatures of Quantum Gravity effects if they are linear in the conventional Planck length. If the physical world only hosts effects that are quadratic in the deformation length scale, values of this length scale of order the Planck length would probably be out of reach for the foreseeable future, but effects quadratic in the larger length scales characterizing scenarios of the type in Ref. might be experimentally accessible. On the theory side an exciting opportunity for future research appears to be provided by the possibility of exchanges of ideas between the more phenomenological/intuitive studies appropriate for the search of a low-energy effective Quantum Gravity and the more rigorous/formal studies used in searches of fully consistent Quantum Gravity theories.
As mentioned at the end of the preceding Section, the first example of such an exchange has led to the exciting realization that deformed dispersion relations linear in the Planck length appear plausible from the point of view of heuristic phenomenological analyses and are also a rather general prediction of Canonical/Loop Quantum Gravity . Additional exchanges of this type appear likely. For example, the intuition coming from the low-energy effective Quantum Gravity viewpoint on distance fuzziness which I discussed here might prove useful for those Quantum Gravity approaches (again an example is provided by Canonical/Loop Quantum Gravity) in which there is substantial evidence of space-time fuzziness but one has not yet achieved a satisfactory description of fuzzy distances. Acknowledgements I owe special thanks to Abhay Ashtekar, since he suggested to me that gravity-wave interferometers might be useful for experimental tests of some of the Quantum-Gravity phenomena that I have been investigating. My understanding of Refs. and benefited from conversations with N.E. Mavromatos and G. Veneziano. I am also happy to acknowledge a kind email message from A. Camacho which provided positive feed-back on my Ref. and also made me aware of the works in Refs. . Still on the “theory side” I am grateful to several colleagues who provided encouragement and stimulating feed-back, particularly D. Ahluwalia, J. Ellis, J. Lukierski, C. Rovelli, S. Sarkar, L. Smolin and J. Stachel. On the “experiment side” I would like to thank F. Barone, J. Faist, R. Flaminio, L. Gammaitoni, T. Huffman, L. Marrucci and M. Punturo for useful conversations on various aspects of interferometry.
# Renormalization group analysis of the small-world network model ## Abstract We study the small-world network model, which mimics the transition between regular-lattice and random-lattice behavior in social networks of increasing size. We contend that the model displays a normal continuous phase transition with a divergent correlation length as the degree of randomness tends to zero. We propose a real-space renormalization group transformation for the model and demonstrate that the transformation is exact in the limit of large system size. We use this result to calculate the exact value of the single critical exponent for the system, and to derive the scaling form for the average number of “degrees of separation” between two nodes on the network as a function of the three independent variables. We confirm our results by extensive numerical simulation. Folk wisdom holds that there are “six degrees of separation” between any two human beings on the planet—i.e., a path of no more than six acquaintances linking any person to any other. While the exact number six may not be a very reliable estimate, it does appear that for most social networks quite a short chain is needed to connect even the most distant of the network’s members, an observation which has important consequences for issues such as the spread of disease, oscillator synchrony, and genetic regulatory networks. At first sight this does not seem too surprising a result; random networks have average vertex–vertex distances which increase as the logarithm of the number of vertices and which can therefore be small even in very large networks. However, real social networks are far from random, possessing well-defined locales in which the probability of connection is high, and only a very low probability of connection between two vertices chosen at random. Watts and Strogatz have recently proposed a model of the “small world” which reconciles these observations. Their model does indeed possess well-defined locales, with vertices falling on a regular lattice, but in addition there is a fixed density of random “shortcuts” on the lattice which can link distant vertices. Their principal finding is that only a very small density of such shortcuts is necessary to produce vertex–vertex distances comparable to those found on a random lattice. In this paper we study the model of Watts and Strogatz using the techniques of statistical physics, and show that it possesses a continuous phase transition in the limit where the density of shortcuts tends to zero. We investigate this transition using a renormalization group (RG) method and calculate the scaling forms and the single critical exponent describing the behavior of the model in the critical region. Previous studies have concentrated on the one-dimensional version of the small-world model, and we will start with this version too, although we will later generalize our results to higher dimensions. In one dimension the model is defined on a lattice with $`L`$ sites and periodic boundary conditions (the lattice is a ring). Initially each site is connected to all of its neighbors up to some fixed range $`k`$ to make a network with average coordination number $`z=2k`$. Randomness is then introduced by independently rewiring each of the $`kL`$ connections with probability $`p`$. “Rewiring” in this context means moving one end of the connection to a new, randomly chosen site. The behavior of the network thus depends on three independent parameters: $`L`$, $`k`$ and $`p`$.
In this paper we will study a slight variation on the model in which shortcuts are added between randomly chosen pairs of sites, but no connections are removed from the regular lattice. For sufficiently small $`p`$ and large $`L`$ this makes no difference to the mean separation between vertices of the network for $`k≥2`$. For $`k=1`$ it does make a difference, since the original small-world model is poorly defined in this case—there is a finite probability of a part of the lattice becoming disconnected from the rest and therefore making an infinite contribution to the average distance between vertices, and this makes the distance averaged over all networks for a given value of $`p`$ also infinite. Our variation does not suffer from this problem and this makes the analysis significantly simpler. In Fig. 1 we show some examples of small-world networks. We consider the behavior of the model for low density $`p`$ of shortcuts. The fundamental observable quantity that we measure is the shortest distance between a pair of vertices on the network, averaged both over all pairs on the network and over all possible realizations of the randomness. This quantity, which we denote $`ℓ`$, has two regimes of behavior. For systems small enough that there is much less than one shortcut on the lattice on average, $`ℓ`$ is dominated by the connections of the regular lattice and can be expected to increase linearly with system size $`L`$. As the lattice becomes larger with $`p`$ held fixed, the average number of shortcuts will eventually become greater than one and $`ℓ`$ will start to scale as $`\mathrm{log}L`$. The transition between these two regimes takes place at some intermediate system size $`L=\xi `$, and from the arguments above we would expect $`\xi `$ to take a value such that the number of shortcuts $`pk\xi ≃1`$. In other words we expect $`\xi `$ to diverge in the limit of small $`p`$ as $`\xi ∼p^{-1}`$. The quantity $`\xi `$ plays a role similar to the correlation length in an interacting system in conventional statistical physics, and its divergence leaves the small-world model with no characteristic length scale other than the fundamental lattice spacing. Thus the model possesses a continuous phase transition at $`p=0`$, and, as we will see, this gives rise to specific finite-size scaling behavior in the region close to the transition. Note that the transition is a one-sided one, since $`p`$ can never take a value less than zero. In this respect the transition is similar to transitions seen in other one-dimensional systems such as 1D bond or site percolation, or the 1D Ising model. Barthélémy and Amaral have suggested that the arguments above, although correct in outline, are not correct in detail. They contend that the length-scale $`\xi `$ diverges as $$\xi ∼p^{-\tau }$$ (1) with $`\tau `$ different from the value of 1 given by the scaling argument. On the basis of numerical results, they conjecture that $`\tau =\frac{2}{3}`$. Barrat, on the other hand, has given a simple physical argument which directly contradicts this, indicating that $`\tau `$ should be greater than or equal to 1. Amongst other things, we demonstrate in this paper that in fact $`\tau `$ is exactly 1 for all values of $`k`$. Let us first consider the small-world model for the simplest case $`k=1`$. As discussed above, the average distance $`ℓ`$ scales linearly with $`L`$ for $`L≪\xi `$ and logarithmically for $`L≫\xi `$.
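The following is a minimal sketch (ours, not the authors' code) of the variant just described: it builds a ring with shortcuts added at density $`p`$ per bond and measures $`ℓ`$ by breadth-first search, enough to see the crossover from linear to logarithmic scaling. Function names and parameter values are our own choices.

```python
import random
from collections import deque

def nw_small_world(L, k, p, rng):
    """Adjacency sets for the variant studied here: a ring of L sites, each
    joined to its k nearest neighbours on either side, plus shortcuts added
    between random pairs with probability p per underlying bond (kL bonds;
    no bonds are removed)."""
    adj = [set() for _ in range(L)]
    for i in range(L):
        for j in range(1, k + 1):
            adj[i].add((i + j) % L)
            adj[(i + j) % L].add(i)
    for _ in range(L * k):
        if rng.random() < p:
            a, b = rng.randrange(L), rng.randrange(L)
            if a != b:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def mean_path_length(adj):
    """Average shortest-path length over all vertex pairs (BFS from each site)."""
    L, total, pairs = len(adj), 0, 0
    for s in range(L):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(1)
for L in (128, 512, 2048):
    ell = mean_path_length(nw_small_world(L, 1, 0.01, rng))
    print(L, ell)   # ell/L shrinks once L grows past xi ~ 1/(pk)
```

A single realization is shown for brevity; in practice one would average over many realizations of the randomness, as described below.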
If $`\xi `$ is the only non-trivial length-scale in the problem and is much larger than one (i.e., we are close to the phase transition), this implies that $`ℓ`$ should obey a finite-size scaling law of the form $$ℓ=Lf(L/\xi ),$$ (2) where $`f(x)`$ is a universal scaling function with the limiting forms $$f(x)∼\{\begin{array}{cc}\text{constant}\hfill & \text{for }x≪1\hfill \\ (\mathrm{log}x)/x\hfill & \text{for }x≫1\text{.}\hfill \end{array}$$ (3) In fact, it is easy to show that the limiting value of $`f(x)`$ as $`x→0`$ is $`\frac{1}{4}`$. A scaling law similar to this has been proposed previously by Barthélémy and Amaral for the small-world model, although curiously they suggested that scaling of this type was evidence for the absence of a phase transition in the model, whereas we regard it as the appropriate form for $`ℓ`$ in the presence of one. We now assume that, in the critical region, $`\xi `$ takes the form (1), and that we do not know the value of the exponent $`\tau `$. Then we can rewrite Eq. (2) in the form $$ℓ=Lf(p^\tau L),$$ (4) where we have absorbed a multiplicative constant into the argument of $`f(x)`$, but otherwise it is the same scaling function as before, with the same limits, Eq. (3). Now consider the real-space RG transformation on the $`k=1`$ small-world model in which we block sites in adjacent pairs to create a one-dimensional lattice of half as many sites. (We assume that the lattice size $`L`$ is even. In fact the transformation works fine if we block in groups of any size which divides $`L`$.) Two vertices are connected on the renormalized lattice if either of the original vertices in one was connected to either of the original vertices in the other. This includes shortcut connections. The transformation is illustrated in Fig. 1a for a lattice of size $`L=24`$. The number of shortcuts on the lattice is conserved under the transformation, so the fundamental parameters $`L`$ and $`p`$ renormalize according to $$L′=\frac{1}{2}L,\qquad p′=2p.$$ (5) The transformation generates all possible configurations of shortcuts on the renormalized lattice with the correct probability, as we can easily see since the probability of finding a shortcut between any two sites $`i`$ and $`j`$ is uniform, independent of $`i`$ and $`j`$ both before and after renormalization. The geometry of the shortest path between any two points is unchanged under our transformation. However, the length of the path is, on average, halved along those portions of the path which run around the perimeter of the ring, and remains the same along the shortcuts. For large $`L`$ and small $`p`$, the portion of the length along the shortcuts tends to zero and so can be neglected. Thus $$ℓ′=\frac{1}{2}ℓ$$ (6) in this limit. Eqs. (5) and (6) constitute the RG equations for this system and are exact for $`L≫1`$ and $`p≪1`$. Substituting into Eq. (4) we then find that $$\tau =\frac{\mathrm{log}(L/L′)}{\mathrm{log}(p′/p)}=1.$$ (7) Now we turn to the case of $`k>1`$. To treat this case we define a slightly different RG transformation: we group adjacent sites in groups of $`k`$, with connections assigned using the same rule as before. The transformation is illustrated in Fig. 1b for a lattice of size $`L=24`$ with $`k=3`$.
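The bookkeeping of the pair-blocking step is easy to check directly. In this sketch (our illustration, with hypothetical helper names) site $`i`$ maps to block $`i//2`$; the number of shortcuts is manifestly conserved while the number of sites halves, so the shortcut density per bond doubles, reproducing Eq. (5).

```python
import random

def block_pairs(L, shortcuts):
    """One step of the k=1 pair-blocking RG: site i of the original ring maps
    to site i//2 of a ring of L/2 sites; shortcut endpoints are carried along."""
    return L // 2, [(a // 2, b // 2) for a, b in shortcuts]

rng = random.Random(2)
L, p = 4096, 0.005
# draw shortcuts with probability p per bond, as in the k=1 model
shortcuts = [(rng.randrange(L), rng.randrange(L))
             for _ in range(L) if rng.random() < p]

L2, shortcuts2 = block_pairs(L, shortcuts)
print(len(shortcuts), len(shortcuts2))                # equal: number conserved
print((len(shortcuts2) / L2) / (len(shortcuts) / L))  # 2.0: p' = 2p
```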
Again the number of shortcuts in the network is preserved under the transformation, which gives the following renormalization equations for the parameters: $$L′=L/k,\qquad p′=k^2p,\qquad k′=1,\qquad ℓ′=ℓ.$$ (8) Note that, in the limit of large $`L`$ and small $`p`$, the mean distance $`ℓ`$ is not affected at all; the number of vertices along the path joining two distant sites is reduced by a factor $`k`$, but the number of vertices that can be traversed in one step is reduced by the same factor, and the two cancel out. For the same reasons as before, this transformation is exact in the limit of large $`L`$ and small $`p`$. We can use this second transformation to turn any network with $`k>1`$ into a corresponding network with $`k=1`$, which we can then treat using the arguments given before. Thus, we conclude, the exponent $`\tau =1`$ for all values of $`k`$ and, substituting from Eq. (8) into Eq. (4), the general small-world network must satisfy the scaling form $$ℓ=\frac{L}{k}f(pkL).$$ (9) This form should be correct for $`L′≫1`$ and $`p′≪1`$, which implies that $`L/k≫1`$ and $`k^2p≪1`$. The first of these conditions is trivial—it merely precludes inaccuracies of $`\pm k`$ in the estimate of $`ℓ`$ because positions on the lattice are rounded off to the nearest multiple of $`k`$ by the RG transformation. The second condition is interesting however; it is necessary to ensure that the average distance traveled along shortcuts in the network is small compared to the distance traveled around the perimeter of the ring. This condition tells us when we are moving out of the scaling regime close to the transition, which is governed by (9), into the regime of the true random network, for which (9) is badly violated and $`ℓ`$ is known to scale as $`\mathrm{log}L/\mathrm{log}k`$. It implies that we need to work with values of $`p`$ which decrease as $`k^{-2}`$ with increasing $`k`$ if we wish to see clean scaling behavior, or conversely, that true random-network behavior should be visible in networks with values of $`p∼k^{-2}`$ or greater. We have tested our predictions by extensive numerical simulation of the small-world model. We have calculated exhaustively the minimum distance between all pairs of points on a variety of networks and averaged the results to find $`ℓ`$. We have done this for $`k=1`$ (coordination number $`z=2`$) for systems of size $`L`$ equal to a power of two from $`128`$ up to $`8192`$ and $`p=1\times 10^{-4}`$ up to $`3\times 10^{-2}`$, and for $`k=5`$ ($`z=10`$) with $`L=512`$ up to $`32768`$ and $`p=3\times 10^{-6}`$ up to $`1\times 10^{-3}`$. Each calculation was averaged over 1000 realizations of the randomness. In Fig. 2 we show our results plotted as the values of $`ℓk/L`$ against $`pkL`$. Eq. (9) predicts that when plotted in this way, the results should collapse onto a single curve and, as the figure shows, they do indeed do this to a reasonable approximation. As mentioned above, Barthélémy and Amaral also performed numerical simulations of the small-world model and extracted a value of $`\tau =\frac{2}{3}`$ for the critical exponent. In the inset of Fig. 2 we show our simulation results for $`k=1`$ plotted according to Eq. (4) using this value for $`\tau `$. As the figure shows, the data collapse is significantly poorer in this case than for $`\tau =1`$. It is interesting to ask then how Barthélémy and Amaral arrived at their result.
It seems likely that the problem arises from looking at systems that are too small to show the true scaling behavior. In our calculations, we find good scaling for $`L/k≳60`$. Barthélémy and Amaral examined networks with $`k=5`$, 10 and 15 ($`z=10`$, 20, 30) so we should expect to find good scaling behavior for values of $`L`$ larger than about 600. However, the systems studied by Barthélémy and Amaral ranged in size from about $`L=50`$ to about $`500`$ in most cases, and in no case exceeded $`L=1000`$. Their calculations therefore had either no overlap with the scaling regime, or only a small overlap, and so we would not expect to find behavior typical of the true value of $`\tau `$ in their results. It is possible to generalize the calculations presented here to small-world networks built on lattices of dimension $`d`$ greater than one. For simplicity we consider first the case $`k=1`$. If we construct a square or (hyper)cubic lattice in $`d`$ dimensions with linear dimension $`L`$, connections between nearest neighbor vertices, and shortcuts added with a rewiring probability of $`p`$, then as before the average vertex–vertex distance scales linearly with $`L`$ for small $`L`$, logarithmically for large $`L`$, and the length-scale $`\xi `$ of the transition diverges according to Eq. (1) for small $`p`$. Thus the scaling form (4) applies for general $`d`$ also. The appropriate generalization of our RG transformation involves grouping sites in square or cubic blocks of side 2, and the quantities $`L`$, $`p`$ and $`ℓ`$ then renormalize according to $$L′=\frac{1}{2}L,\qquad p′=2^dp,\qquad ℓ′=\frac{1}{2}ℓ.$$ (10) Thus $$\tau =\frac{\mathrm{log}(L/L′)}{\mathrm{log}(p′/p)}=\frac{1}{d}.$$ (11) As an example, we show in Fig. 3 numerical results for the $`d=2`$ case, for $`L`$ equal to a power of two from $`64`$ up to $`512`$ (i.e., a little over a quarter of a million vertices for the largest networks simulated) and six different values of $`p`$ for each system size from $`p=3\times 10^{-6}`$ up to $`1\times 10^{-3}`$. The results are plotted according to Eq. (4) with $`\tau =\frac{1}{2}`$ and, as the figure shows, they again collapse nicely onto a single curve. A number of generalizations are possible for $`k>1`$. Perhaps the simplest is to add connections along the principal axes of the lattice between all vertices whose separation is $`k`$ or less. This produces a graph with average coordination number $`z=2dk`$. By blocking vertices in square or cubic blocks of edge $`k`$, we can then transform this system into one with $`k=1`$. The appropriate generalization of the RG equations (8) is then $$L′=L/k,\qquad p′=k^{d+1}p,\qquad k′=1,\qquad ℓ′=ℓ,$$ (12) which gives $`\tau =1/d`$ for all $`k`$ and a scaling form of $$ℓ=\frac{L}{k}f\left((pk)^{1/d}L\right).$$ (13) Alternatively, we could redefine our scaling function $`f(x)`$ so that $`ℓk/L`$ is given as a function of $`pkL^d`$. Writing it in this form makes it clear that the number of vertices in the network at the transition from large- to small-world behavior diverges as $`(pk)^{-1}`$ in any number of dimensions. Another possible generalization to $`k>1`$ is to add connections between all sites within square or cubic regions of side $`2k`$. This gives a different dependence on $`k`$ in the scaling relation, but with $`\tau `$ still equal to $`1/d`$. To conclude, we have studied the small-world network model of Watts and Strogatz using an asymptotically exact real-space renormalization group method.
We find that in all dimensions $`d`$ the model undergoes a continuous phase transition as the density $`p`$ of shortcuts tends to zero and that the characteristic length $`\xi `$ diverges according to $`\xi ∼p^{-\tau }`$ with $`\tau =1/d`$ for all values of the connection range $`k`$. We have also deduced the general finite-size scaling law which describes the variation of the mean vertex–vertex separation as a function of $`p`$, $`k`$ and the system size $`L`$. We have performed extensive numerical calculations which confirm our analytic results.
# Hessian quartic surfaces that are Kummer surfaces ## 1. Introduction Let $`C`$ be a cubic surface in $`ℙ^3`$. Among the many interesting geometrical objects associated to $`C`$ is its Hessian, a quartic surface $`H`$ in $`ℙ^3`$. It was found in the nineteenth century \[Seg42\] that $`H`$ will have ten double points, and will contain ten lines through those points. Conversely, it was shown that any irreducible quartic surface containing an appropriate configuration of lines and double points would be the Hessian of a unique cubic surface. Another class of objects of interest in classical algebraic geometry is the class of Kummer surfaces. Given an abelian surface $`A`$, we have an action of the group $`\{\pm 1\}`$ by multiplication, and we can take the quotient $`K=A/\{\pm 1\}`$, a surface with 16 double points. On a Kummer surface, there are 16 special curves, called tropes, each of which passes through six of the double points. It was found \[Hut99\] that one can choose a certain other subset $`W`$ of 6 of the double points, called a Weber hexad, and blow these points up, to obtain a surface $`K_W`$ that embeds in $`ℙ^3`$ as the Hessian of a cubic surface. That is, there remain 10 double points on $`K_W`$, and one can embed the surface such that of the sixteen tropes, ten are taken to straight lines in $`ℙ^3`$, and in the same configuration as referred to above. So we conclude that $`K_W`$ is the Hessian of some cubic surface. Now, it is known that the moduli space of cubic surfaces is a four-dimensional normal variety, while there is only a three-parameter family of Kummer surfaces, each of which has only finitely many Weber hexads. So we can hope that the locus of cubic surfaces whose Hessians are Kummer in the above sense will be a divisor in the space of all cubic surfaces. In this paper we prove ###### Theorem 1.1. Let $`ℙ^3`$ be taken to be the hyperplane in $`ℙ^4`$ with $`\sum_{i=0}^4X_i=0`$. Let $`C⊂ℙ^3`$ be the cubic surface $`𝕍(\sum_{i=0}^4\frac{1}{\mu _i}X_i^3)`$, where $`\mu _0\mu _1\mu _2\mu _3\mu _4\ne 0`$, and assume $`C`$ is smooth. Then the Hessian of $`C`$, $$𝕍(\mu _0X_1X_2X_3X_4+\mu _1X_0X_2X_3X_4+\mu _2X_0X_1X_3X_4+\mu _3X_0X_1X_2X_4+\mu _4X_0X_1X_2X_3),$$ is the blowup of a Weber hexad on a Kummer surface if and only if the coefficients $`\mu _i`$ satisfy the following irreducible cubic condition: $$\sum_{i=0}^{4}\mu _i^3-\sum_{i\ne j}\mu _i^2\mu _j+2\sum_{i\ne j\ne k}\mu _i\mu _j\mu _k=0.$$ The reader familiar with the invariant theory of cubic surfaces will be curious how this locus fits in with the classical invariants. We can interpret the previous result in that language, obtaining the following. ###### Corollary 1.2. Let $`ℙ^{19}`$ be the parameter space of cubic forms on $`ℙ^3`$. Let $`X`$ be the locus in $`ℙ^{19}`$ of cubic surfaces whose Hessians are isomorphic to blowups of Weber hexads on Kummer surfaces, embedded as in Theorem 1.1. Then $`X`$ is $`SL(4)`$-invariant, and the closure of $`X`$ is a divisor in $`ℙ^{19}`$. If we label the classical invariants as $`I_8,I_{16},I_{24},I_{32},I_{40}`$, following \[Hun96\], then the polynomial on $`ℙ^{19}`$ given by $$I_8I_{24}+8I_{32}$$ is irreducible, is degree 32, and vanishes on $`X`$. This result, while somewhat satisfying, is only the beginning of the story. The next natural question to ask is, “If this divisor in the parameter space of $`\mu _i`$s is associated to the moduli space of Kummer surfaces with Weber hexads, then what is this correspondence?” We also answer this question.
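The Hessian computation behind Theorem 1.1 is mechanical and can be reproduced symbolically. The sketch below is our own cross-check, not part of the paper's argument: it eliminates $`X_4`$ on the hyperplane, takes the $`4\times 4`$ Hessian determinant of the pentahedral cubic, and confirms that it is $`6^4/(\mu _0\mu _1\mu _2\mu _3\mu _4)`$ times the quartic displayed in the theorem.

```python
import sympy as sp

X0, X1, X2, X3 = sp.symbols('X0 X1 X2 X3')
mu = sp.symbols('mu0:5')
X4 = -(X0 + X1 + X2 + X3)          # eliminate X4 on the hyperplane sum(X_i) = 0
Xs = [X0, X1, X2, X3, X4]

F = sum(Xs[i]**3 / mu[i] for i in range(5))        # pentahedral cubic of Theorem 1.1
Hmat = sp.Matrix(4, 4, lambda i, j: sp.diff(F, Xs[i], Xs[j]))

quartic = sum(mu[i] * sp.Mul(*(Xs[j] for j in range(5) if j != i))
              for i in range(5))
# Hessian determinant = 6^4/(mu0*mu1*mu2*mu3*mu4) times the stated quartic
print(sp.expand(sp.Mul(*mu) * Hmat.det() - 6**4 * quartic))   # prints 0
```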
Recall that the generic abelian surface is the Jacobian of a unique genus 2 curve, say $`A=J(B)`$, and that $`B`$ can be specified as the double cover of $`ℙ^1`$ branched at six points $`a,b,c,d,e,f`$. It turns out that upon placing an ordering on these six points, we can specify a unique Weber hexad in $`K`$. We then have the following theorem. ###### Theorem 1.3. Let $`a,b,c=0,d=1,e,f=\mathrm{\infty }`$ be six distinct points in $`ℙ^1`$. Let $`B`$ be the double cover of $`ℙ^1`$ branched at these six points, and let $`A`$ be the abelian surface $`J(B)`$. Let $`W⊂A`$ be the Weber hexad $$\{0,b+c-2a,c+d-2a,d+e-2a,e+f-2a,f+b-2a\}.$$ Then the surface $`K_W`$ obtained by blowing up $`K=A/\{\pm 1\}`$ at $`W`$ can be embedded in $`ℙ^3`$ as the Hessian of the surface $$𝕍(\sum_{i=0}^{4}\frac{1}{\mu _i}X_i^3),$$ where the coefficients $`\mu _i`$ are given by $`\mu _0=a(1-b)`$, $`\mu _1=e(1-a)`$, $`\mu _2=b(e-a)`$, $`\mu _3=(e-b)`$, $`\mu _4=(a-b)(1-e)`$. Conversely, if $`\mu _0\mu _1\mu _2\mu _3\mu _4\ne 0`$ and $$\sum_{i=0}^{4}\mu _i^3-\sum_{i\ne j}\mu _i^2\mu _j+2\sum_{i\ne j\ne k}\mu _i\mu _j\mu _k=0,$$ then let $$a=\frac{\mu _0+\mu _3+\mu _4-\mu _1-\mu _2}{2\mu _3},\qquad b=\frac{2\mu _2}{\mu _1+\mu _2+\mu _3-\mu _0-\mu _4},\qquad e=\frac{\mu _0+\mu _3-\mu _4-\mu _1-\mu _2}{\mu _0+\mu _3+\mu _4-\mu _1-\mu _2}$$ be points in $`ℙ^1`$. If these points are all distinct, and none of these points are $`0`$, $`1`$, or $`\mathrm{\infty }`$, then let $`B`$ be the double cover of $`ℙ^1`$ branched at $`a,b,c=0,d=1,e,f=\mathrm{\infty }`$, and let $`K_W`$ be the blown up Kummer specified above. Then the Hessian surface $$H=𝕍(\mu _0X_1X_2X_3X_4+\mu _1X_0X_2X_3X_4+\mu _2X_0X_1X_3X_4+\mu _3X_0X_1X_2X_4+\mu _4X_0X_1X_2X_3)$$ is isomorphic to $`K_W`$. This still is not the whole story as it should be. If one recalls that any smooth cubic surface can be obtained as the blow up of $`ℙ^2`$ at six points, the question arises, given a genus 2 curve $`B`$ and a Weber hexad $`W⊂J(B)`$, what six points we should blow up in $`ℙ^2`$ to obtain a cubic surface $`C`$ whose Hessian $`H`$ is isomorphic to $`K_W`$. As far as this author is aware, this question has not been answered. Failing that, we present proofs of the above two theorems, in the hope that our techniques can be extended to answer the remaining questions about Kummer Hessian surfaces. ## 2. The geometry of Hessian quartic surfaces We will begin our exploration by collecting some results about the geometries of Hessian surfaces and Kummer surfaces, and then apply them to the theorems at hand. First, let us describe some of the geometry of the Hessian of a generic cubic surface. Let $`ℙ^3`$ be the hyperplane $$𝕍(\sum_{i=0}^{4}X_i)⊂ℙ^4,$$ as above, and for $`0\le i<j\le 4`$, let $$ℓ_{ij}=𝕍(X_i,X_j)⊂ℙ^3.$$ Let $`L=\bigcup_{ij}ℓ_{ij}`$. Then we have the following lemma. ###### Lemma 2.1. Let $`H`$ be a quartic form on $`ℙ^3`$ that vanishes on $`L`$. Then $`H`$ is in the linear span of $$X_0X_1X_2X_3,X_0X_1X_2X_4,X_0X_1X_3X_4,X_0X_2X_3X_4,X_1X_2X_3X_4.$$ As a result, $`H`$ is double at the 10 points $`p_{ijk}=𝕍(X_i,X_j,X_k)`$. ###### Proof. For the first statement, we observe that $`𝕍(H,X_0)`$ by assumption contains the four lines $`ℓ_{01},ℓ_{02},ℓ_{03},ℓ_{04}`$. So $`H`$ lies in the ideal $`(X_0,X_1X_2X_3X_4)`$. Applying this symmetrically yields the result. The second statement is immediate.
∎ Observe that if $`\mu _0\mu _1\mu _2\mu _3\mu _4\ne 0`$, then the cubic surface $`𝕍(\sum_{i=0}^4\frac{1}{\mu _i}X_i^3)`$ has Hessian equal to $$H=\mu _0X_1X_2X_3X_4+\mu _1X_0X_2X_3X_4+\mu _2X_0X_1X_3X_4+\mu _3X_0X_1X_2X_4+\mu _4X_0X_1X_2X_3.$$ Clebsch showed \[Sal82\] that the generic cubic surface is isomorphic to an essentially unique cubic surface of this (pentahedral) form, but this result is largely irrelevant to our work here, so we simply recall the result and do not pursue it further. So a Hessian in this family contains ten lines and ten double points, such that each line passes through three of the double points and each double point lies on three of the lines. We now give one result each about the double points and the lines of a Hessian quartic $`H`$. ###### Proposition 2.2. Projection away from the node $`p_{012}=𝕍(X_0,X_1,X_2)`$ gives a rational map from $`𝕍(H)`$ to $`ℙ^2`$, which is generically 2-to-1. The branch locus of this map is the union of two cubic curves in $`ℙ^2`$, with equations $$\begin{array}{c}\left(sX_0X_1X_2-(\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1)(X_0+X_1+X_2)\right)\hfill \\ \hfill \left(\overline{s}X_0X_1X_2-(\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1)(X_0+X_1+X_2)\right)\end{array}$$ where $`s`$ and $`\overline{s}`$ are the roots of $$s^2-2(\mu _3+\mu _4)s+(\mu _3-\mu _4)^2=0.$$ ###### Proof. The first statement follows because a generic line through $`p_{012}`$ meets $`𝕍(H)`$ twice there, and at two other points. The second statement follows by computing the discriminant of $`H`$, viewed as a quadric in $`X_3`$. ∎ Observe that the two cubics given are tangent to the quadric $`\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1=0`$ at the three points $`𝕍(X_i,X_j)_{0\le i<j\le 2}`$, and so meet each other to order two there. These points are the images of the lines in $`H`$ through $`p_{012}`$. The cubic curves have their remaining three intersections transverse, all along the line $`X_0+X_1+X_2=0`$. This line is the image of the line $`ℓ_{34}`$. We next prove a result about this line. ###### Proposition 2.3. The plane $`\mu _4X_3+\mu _3X_4=0`$ is tangent to $`H`$ at every point of the line $`ℓ_{34}`$. ###### Proof. This is immediate from the equation of the surface. ∎ Observe that the intersection of this plane with the surface then consists of the line $`ℓ_{34}`$, counted twice, and a conic with equation $`\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1=0`$, the same conic referred to above. Finally, observe that if a cubic surface $`C`$ has a node at a point $`p`$, then its Hessian is also nodal at $`p`$, with the same tangent cone. Conversely, if a Hessian in our four-parameter family acquires a node other than the ten coordinate points, the corresponding cubic surface also acquires a node. So, if we restrict our attention to Hessians $`H`$ of smooth cubic surfaces, we may assume that $`H`$ contains only ten nodes, and that the discriminant sextics described above are smooth away from the six images of the nodes. ## 3. The geometry of Kummer surfaces For this section, we will largely follow the development in \[GH94\], with one exception. Since we have already made use of subscripted numbers for our cubic surface in pentahedral form, we will begin with six distinct points labelled $`a,b,c,d,e,f∈ℙ^1`$. So begin with these points, and let $`B→ℙ^1`$ be the genus 2 curve that is the double cover branched over these six points. We will also label the ramification points in $`B`$ by the letters $`a,\mathrm{},f`$.
Then the Jacobian of $`B`$ is an abelian surface $`A`$, with 16 two-torsion points, and these correspond to the divisors $$0,\{b-a,c-a,\mathrm{},f-a\},\{b+c-2a,\mathrm{},e+f-2a\}.$$ Recall that a theta divisor on $`A`$ is an image of the curve $`B`$ under a map $`p↦p-D`$, where $`D`$ is some divisor of degree 1 on $`B`$. If $`D`$ is any “two-torsion” point, i.e., if $`2D∼2a`$, then the theta-divisor given will pass through 6 of the two-torsion points of $`A`$. We will refer to these 16 divisors on $`A`$ as tropes, and will label the trope corresponding to the divisor $`D`$ by the symbol $`\mathrm{\Theta }_D`$. Note that they give 16 distinguished subsets of 6 two-torsion points. We will now define a different sort of set of six two-torsion points, called a Weber hexad. A Weber hexad is a set of 6 points of the two-torsion of $`A`$ such that 10 of the tropes each contain 3 of the points of the hexad, and the other 6 tropes each contain exactly one of the points of the hexad. For example, the six points $$0,b+c-2a,c+d-2a,d+e-2a,e+f-2a,f+b-2a$$ have this property: only 0 lies on $`\mathrm{\Theta }_a`$, etc. On a given abelian surface, there are exactly 192 Weber hexads, which can be obtained from the one above (each one arising 60 times) by acting by translation by the two-torsion of $`A`$, and by acting on $`a,\mathrm{},f`$ with the group $`S_6`$. Now, as discussed in the introduction, if we identify points on $`A`$ with their negatives, we obtain a surface $`K`$ with sixteen double points, the image of the two-torsion. We can desingularize these nodes by blowing up, or equivalently by blowing up the two-torsion points on $`A`$ before taking the quotient. Observe that if we blow up the six points of a Weber hexad, we will be left with a surface with 10 nodes, just like our Hessians above. In fact, one makes the following claim, first noticed by Hutchinson \[Hut99\]. ###### Proposition 3.1. Let $`a,\mathrm{},f,B,A,K`$ be as above. Let $$W=\{0,b+c-2a,c+d-2a,d+e-2a,e+f-2a,f+b-2a\}⊂A.$$ If one maps $`A`$ to projective space using the linear series $`|4\mathrm{\Theta }_a-2W|`$, one gets a map to a quartic surface $`K_W⊂ℙ^3`$, with 10 nodes, with the following properties. There exist 5 planes in $`ℙ^3`$ such that $`K_W`$ is nodal at the intersection of any three of these planes, and contains the line that is the intersection of any two of these planes. Further, the image of each of the points of $`W`$ is a conic in $`K_W`$. ###### Proof. One may check using homological criteria that the linear series $`|4\mathrm{\Theta }_a-2W|`$ has rank 4, and that all of its sections are even functions, so identify points with their negatives. So, the map is $`\rho :A→ℙ^3`$, and factors through $`K`$. The points of $`W`$ are in the base locus, so get blown up. Now, the self-intersection $`(4\mathrm{\Theta }_a-2W)^2=8`$, which divided by two gives 4, so $`K_W`$ is a quartic surface, as stated. Also, if $`E_0`$ is the exceptional divisor over $`0∈A`$, then $`E_0`$ is part of the ramification locus of the map. So if $`C_0`$ is the image of $`E_0`$ in $`ℙ^3`$, and $`\omega `$ the hyperplane class in $`ℙ^3`$, then $`C_0\cdot \omega =E_0\cdot (4\mathrm{\Theta }_a-2W)=2`$, and $`C_0`$ is a conic. Likewise the remaining points of $`W`$ also map to conics. Now, for clarity we will introduce the notation $`p_\lambda `$ for the two-torsion point we have been calling $`\lambda ∈A`$.
Observe that $`(\mathrm{\Theta }_bp_0p_{b+c2a}p_{e+b2a})+`$ $`(\mathrm{\Theta }_dp_0p_{d+e2a}p_{c+d2a})+`$ $`(\mathrm{\Theta }_{b+ca}p_{b+c2a}p_{d+e2a}p_{c+d2a})+`$ $`(\mathrm{\Theta }_{c+da}p_{c+d2a}p_{e+a2a}p_{d+e2a})=4\mathrm{\Theta }_a2W.`$ So these four tropes have coplanar image, and are all lines. Letting the group $`/5`$ act by $`(bcdef)`$, we get five such planes, and the result. ∎ Hutchinson largely ignores the conics coming from $`W`$, because his purpose is to study low-degree curves that arise on every Hessian quartic, not solely on the Kummer surfaces. In the next section, we will take the opposite approach, and study those Hessian surfaces that do contain conics like these, and find that this extra class of curves is enough to make a surface Kummer. ### 3.1. Labelling Before we proceed to our main results, we pause here for a discussion of labelling. To conform with our names for the 5 planes in the discussion of the Hessian, we will let $`P_0=𝕍(X_0)`$ $`=span\rho (p_b),\rho (p_c),\rho (p_d),`$ $`P_1=𝕍(X_1)`$ $`=span\rho (p_c),\rho (p_d),\rho (p_e),`$ $`P_2=𝕍(X_2)`$ $`=span\rho (p_d),\rho (p_e),\rho (p_f),`$ $`P_3=𝕍(X_3)`$ $`=span\rho (p_e),\rho (p_f),\rho (p_b),`$ $`P_4=𝕍(X_4)`$ $`=span\rho (p_f),\rho (p_b),\rho (p_c).`$ Then we may label the lines on $`K_W`$ as $`\mathrm{}_{ij}`$ as before, and obtain another set of names for the ten nodes. However, the real interest lies in what we can say about the conics. Observe that the conic $`C_0`$ meets the tropes $$\mathrm{\Theta }_d,\mathrm{\Theta }_f,\mathrm{\Theta }_c,\mathrm{\Theta }_e,\mathrm{\Theta }_b,$$ that is, the lines $$\mathrm{}_{02},\mathrm{}_{24},\mathrm{}_{41},\mathrm{}_{13},\mathrm{}_{30}.$$ So, we would do well to associate to $`C_0`$ the cyclic ordering $`(02413)`$. Similarly, one finds that the other exceptional divisors should be assigned the labels $`C_{b+ca}`$ $`(03214),`$ $`C_{c+da}`$ $`(01432),`$ $`C_{d+ea}`$ $`(04312),`$ $`C_{e+fa}`$ $`(01324),`$ $`C_{f+ba}`$ $`(03421).`$ This accounts for 6 of the 12 cyclic orders on five letters. The astute reader will ask to what do the other cyclic orders correspond. The astute reader will also have noticed that a conic is a plane curve, and since a plane meets a quartic surface in a degree 4 curve, each conic on $`K_W`$ must be coplanar with a residual conic on $`K_W`$. The answer, of course, is that we should label these residual conics with the complementary orderings. Now take one of these conics, say the one labelled $`(ijklm)`$, and consider how it meets the lines on $`K_W`$ through a node $`p_{rst}`$. If $`rst`$ are consecutive letters in the cyclic order $`(ijklm)`$, then the conic will meet the lines $`\mathrm{}_{rs}`$ and $`\mathrm{}_{st}`$, but not $`\mathrm{}_{rt}`$. If the letters $`rst`$ are not consecutive in $`(ijklm)`$, for example, $`ijl`$, then the conic will only meet the one line $`\mathrm{}_{ij}`$, while its residual will meet the other two lines. In summary, the plane containing $`C_{(ijklm)}`$ and $`C_{(ikmjl)}`$ corresponds to one of the 6 subgroups of order 5 in $`S_5`$. The elements of order 5 are all conjugate under the action of $`S_5`$, but break into two orbits under the action of $`A_5`$, with each element conjugate to its inverse. In one of these $`A_5`$-orbits we get the 6 cyclic orders we assigned to the exceptional divisors in $`K_W`$, while in the other $`A_5`$-orbit we get the labels of the residual conics. ## 4. 
Conics on Hessian quartic surfaces Now, we return to the question of when a Hessian quartic surface contains “extra” conics. We observed in section 2 that every Hessian quartic surface contains 10 conics, each lying in the tangent plane to one of the ten lines. We saw that the conic in the plane of $`\mathrm{}_{34}`$ met all three of the lines through the node $`p_{012}`$. In the last section, we saw that in a Kummer Hessian surface, there existed twelve other conics, not coplanar with any node, and that for each node, each of these conics met exactly one or two of the lines through that node. So, we assume that we have a surface $`H`$ in the four-parameter family we have been studying, that $`H`$ contains a conic $`C`$ in its smooth locus such that $`C`$ meets the lines $`\mathrm{}_{01}`$ and $`\mathrm{}_{02}`$, but not $`\mathrm{}_{12}`$. We will find that this condition is necessary and sufficient for the Hessian to be Kummer, and so prove Theorem 1.1. We begin with the following lemma. ###### Lemma 4.1. Let $`H`$ be the quartic surface $$𝕍(\underset{i=0}{\overset{4}{}}\mu _i\underset{ji}{}X_j),$$ and assume $`_{i=0}^4\mu _i0`$, so $`H`$ is a Hessian, and that $`H`$ has only the ten nodes it should, i.e., the cubic surface is smooth. Assume there exists a conic $`C`$ contained in the smooth locus of $`H`$ such that $`C`$ meets the lines $`\mathrm{}_{01}`$ and $`\mathrm{}_{02}`$, but not $`\mathrm{}_{12}`$. Then the $`\mu _i`$ satisfy the following irreducible cubic form: $$\underset{i=0}{\overset{4}{}}\mu _i^3\underset{ij}{}\mu _i^2\mu _j+2\underset{ijk}{}\mu _i\mu _j\mu _k=0.$$ ###### Proof. To begin with, since $`C`$ meets $`\mathrm{}_{01}`$ at a smooth point, it must be tangent to the plane $`\mu _1X_0+\mu _0X_1=0`$, and likewise tangent to the plane $`\mu _2X_0+\mu _0X_2=0`$. Now, let $`\pi :K_W^2`$ be the projection away from the point $`p_{012}`$, and let $`Q=\pi (C)`$, a conic. Our observations have placed four linear conditions on $`Q`$, so $`Q`$ must lie in the pencil $$(\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1)+\alpha X_0^2.$$ Now our goal is to find which $`Q`$ in this pencil can have a conic in its preimage. Since $`Q`$ passes through the images of the lines $`\mathrm{}_{01}`$ and $`\mathrm{}_{02}`$, and with the right tangent direction, the pullback $`\pi ^1(Q)`$ will always contain these lines, each counted twice. The rest of the preimage will then be a quartic curve, dominating $`Q`$ and mapping 2-to-1 to it. This quartic will be branched over $`Q`$ at four points, the remaining intersections of $`Q`$ with the discriminant locus away from the lines through $`p_{012}`$. To have the preimage decompose, these four intersections must coincide in pairs. So for each of the cubic curves making up the branch locus, we ask which elements of the pencil meet it a third time non-reducedly. So let $$E_s=𝕍\left(sX_0X_1X_2+(\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1)(X_0X_1X_2)\right).$$ Since $`H`$ is assumed to have only the ten nodes, $`E_s`$ will be smooth, so an elliptic curve, and there will be exactly four elements of the pencil where the $`g_2^1`$ given by residual intersection with the conic branches. Two of these are uninteresting: we already know the preimage of the conic $$\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1,$$ and the double line $`X_0^2`$ pulls back to the plane $`P_0`$. So we are left with a quadratic equation in $`\alpha `$ indicating which conics meet $`E_s`$ interestingly. 
This quadratic is $$T(s,\alpha )=4\mu _0\alpha ^2+[(s\mu _0)^2+(\mu _2\mu _1)^22(s+\mu _0)(\mu _2+\mu _1)]\alpha +4s\mu _1\mu _2.$$ We want for there to be a conic that meets both $`E_s`$ and $`E_{\overline{s}}`$ interestingly, so we take the resultant of $`T(s,\alpha )`$ and $`T(\overline{s},\alpha )`$, to find for what values of $`\mu _0,\mu _1,\mu _2,\mu _3,\mu _4`$ these quadratics have a common solution. We find that the resultant is $$512\mu _0\mu _1\mu _2\mu _3\mu _4\left[\underset{i=0}{\overset{4}{}}\mu _i^3\underset{ij}{}\mu _i^2\mu _j+2\underset{ijk}{}\mu _i\mu _j\mu _k\right],$$ and since we are not interested in the cases where some $`\mu _i=0`$, we keep only the last factor. ∎ Observe that this form is symmetric in the five variables. Now, since this cuts out an irreducible threefold in the space of $`\mu `$s, we have almost proven Theorem 1.1. We deal with the remaining issues below. But while we have the quadratics $`T(s,\alpha )`$ and $`T(\overline{s},\alpha )`$ in hand, we observe that their difference is linear in $`\alpha `$, and solving, we may write $$\alpha =\frac{2\mu _1\mu _2}{\mu _0+\mu _1+\mu _2\mu _3\mu _4}.$$ We now return to the theorem, which we restate as follows: ###### Theorem 4.2. Let $`H`$ be the quartic surface $$𝕍(\underset{i=0}{\overset{4}{}}\mu _i\underset{ji}{}X_j),$$ and assume $`_{i=0}^4\mu _i0`$, so $`H`$ is a Hessian, and that $`H`$ has only the ten nodes it should, i.e., the cubic surface is smooth. Assume the $`\mu _i`$ satisfy the cubic form $$\underset{i=0}{\overset{4}{}}\mu _i^3\underset{ij}{}\mu _i^2\mu _j+2\underset{ijk}{}\mu _i\mu _j\mu _k=0.$$ Then there exists a Kummer surface $`K`$ and a Weber hexad $`W`$ such that $`HK_W`$. ###### Proof. We begin by showing that for any $`H`$ satisfying this hypothesis, there are only finitely many conics satisfying the hypothesis of the lemma. But this is easy, since at most two choices of $`\alpha `$ can give acceptable image conics $`Q`$, and each of these will have at most two preimages in $`H`$. In fact, we observe that since $`s\overline{s}`$, only one choice of $`\alpha `$ will work. Next, we show that from the existence of such a conic $`C`$, we can deduce the existence of eleven others meeting different subsets of the lines, as in section 3.1. We know that $`C`$ meets the lines $`\mathrm{}_{01}`$ and $`\mathrm{}_{02}`$. If we look at its intersection with the plane $`P_2`$, we see that it must hit another line, which we will assume is $`\mathrm{}_{23}`$. We then intersect it with the plane $`P_4`$, and conclude that it must hit $`\mathrm{}_{14}`$ and $`\mathrm{}_{34}`$. So we label our conic $`C_{(01432)}`$. As in the proof of the lemma, we may project away from $`p_{012}`$ and pull back, residuating $`C_{(01432)}`$ to a conic, which we will call $`C_{(01342)}`$, and we can verify that it meets the appropriate lines and so deserves that name. Similarly, we can project the Hessian away from $`p_{014}`$, $`p_{134}`$, $`p_{234}`$, and $`p_{023}`$, and obtain four other conics. Residuating each of these six conics in the plane containing them gives us our total of twelve. Using the observation that only two conics will meet, $`\mathrm{}_{01}`$ and $`\mathrm{}_{02}`$, but not $`\mathrm{}_{12}`$, and the image under $`S_5`$ of this fact, we can conclude that these are the only twelve interesting conics on $`H`$. 
Now to prove the theorem, we observe that the conics in one $`A_5`$-orbit, say $$C_{(02413)},C_{(03214)},C_{(01432)},C_{(04312)},C_{(01324)},C_{(03421)},$$ are all disjoint, and each had self-intersection $`2`$. So we may blow these down to obtain a 16-nodal K3 surface, which is known to be Kummer. ∎ Given this theorem, we may now restate, and prove, Corollary 1.2. ###### Corollary 4.3. Let $`^{19}`$ be the parameter space of cubic forms on $`^3`$. Let $`X`$ be the locus in $`^{19}`$ of cubic surfaces whose Hessians are isomorphic to blowups of Weber hexads on Kummer surfaces, embedded as in Theorem 1.1. Then $`X`$ is $`SL(4)`$-invariant, and the closure of $`X`$ is a divisor in $`^{19}`$. If we label the classical invariants as $`I_8,I_{16},I_{24},I_{32},I_{40}`$, following \[Hun96\], then the polynomial on $`^{19}`$ given by $$I_8I_{24}+8I_{32}$$ is irreducible, is degree 32, and vanishes on $`X`$. ###### Proof. Inside $`^{19}`$, consider the 4-plane $`P`$ of cubic forms $$\underset{i=0}{\overset{4}{}}\lambda _iX_i^3,$$ where as usual $`X_0+X_1+X_2+X_3+X_4=0`$. We have been concentrating our attention on the locus in this family where no $`\lambda _i`$ equals zero, and have been writing $`\mu _i=\frac{1}{\lambda _i}`$. If we pull back our cubic condition $$\underset{i=0}{\overset{4}{}}\mu _i^3\underset{ij}{}\mu _i^2\mu _j+2\underset{ijk}{}\mu _i\mu _j\mu _k=0$$ in the $`\mu _i`$ variables to the 4-plane $`P`$, we see that we have described an open subset of a degree 12 threefold $`TP^{19}`$. The locus $`X`$ is the $`SL(4)`$-orbit of $`T`$, and since $`T`$ has finite stabilizer in $`SL(4)`$, the closure of $`X`$ is a divisor in $`^{19}`$. We next look for those $`SL(4)`$-invariants on $`^{19}`$ which vanish on $`X`$. These must vanish on $`T`$, a degree 12 threefold inside a $`^4`$ in $`^{19}`$. The ring of invariants of cubic forms has no element of degree 12, but if we refer to \[Hun96\] or \[Sal82\] to recall how the invariants restrict to $`P`$, we will find that the invariant $$=I_8I_{24}+8I_{32}$$ vanishes on $`T`$. Indeed, this irreducible degree 32 invariant cuts out the closure of $`X`$ in $`^{19}`$, and $``$ restricts on $`P`$ to an irreducible degree 12 polynomial, multiplied by $`(\lambda _0\lambda _1\lambda _2\lambda _3\lambda _4)^4`$. ∎ ## 5. Finding the correspondence So given a Hessian quartic surface satisfying the hypothesis of Theorem 1.1, we know it to be $`K_W`$ for some choice of Kummer surface and Weber hexad, and the next question is to which genus 2 curve it corresponds and to which Weber hexad. We answer this by observing that the lines $`\mathrm{}_{ij}`$ correspond to tropes on the Kummer, and that the three nodes on a line and the right choice of three places where conics meet the line give six points on $`^1`$ which specify the genus 2 curve $`B`$. So our task is to explicitly find the six planes containing the conics. To do this, we return to the pullback of the conic $`Q`$ of the last section. So for any Hessian $`H`$, not necessarily Kummer, let $$\alpha =\frac{2\mu _1\mu _2}{\mu _0+\mu _1+\mu _2\mu _3\mu _4},$$ and let $$Q=(\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1)+\alpha X_0^2,$$ a singular quadric surface. Then as above, the intersection of $`Q`$ with $`H`$ consists of two lines, each counted twice, and a quartic elliptic curve $`F`$. 
This quartic elliptic curve is the base locus of a pencil of quadrics, $$Q,\alpha \mu _0X_3X_4+(\mu _1X_2+\mu _2X_1+\alpha X_0)(\mu _3X_4+\mu _4X_3).$$ Since we are looking, in the Kummer case, for an element of this pencil that decomposes into two planes, we begin by looking at the singular elements of the pencil. We find, inter alia, the following result. ###### Proposition 5.1. Let $`\mu _0\mu _1\mu _2\mu _3\mu _40`$, and assume $$\alpha =\frac{2\mu _1\mu _2}{\mu _0+\mu _1+\mu _2\mu _3\mu _4},\beta =\frac{2\mu _3\mu _4}{\mu _0+\mu _3+\mu _4\mu _1\mu _2}$$ are finite. Then let $$\begin{array}{c}R=(\mu _1X_2+\mu _2X_1)(\mu _3X_4+\mu _4X_3)+\alpha (\mu _0X_3X_4+\mu _3X_0X_4+\mu _4X_0X_3)\hfill \\ \hfill +\beta (\mu _0X_1X_2+\mu _1X_0X_2+\mu _2X_0X_1)+\alpha \beta X_0^2.\end{array}$$ Then $`R`$ is always singular, i.e., of rank$`3`$, with singular point $$[\mu _1+\mu _2\mu _3\mu _4:\mu _1:\mu _2:\mu _3:\mu _4].$$ Further, $`R`$ has rank$`2`$, i.e., decomposes, exactly if $$\underset{i=0}{\overset{4}{}}\mu _i^3\underset{ij}{}\mu _i^2\mu _j+2\underset{ijk}{}\mu _i\mu _j\mu _k=0.$$ ###### Proof. This can all be checked by computing the necessary determinants. ∎ Observe that $`R`$ is symmetric with respect to the notational involution $`_1`$ $`_3,`$ $`_2`$ $`_4,`$ $`\alpha `$ $`\beta ,`$ and in the case that $`H`$ is Kummer, we know that $`R`$ breaks into the planes containing $`C_{(03214)}`$ and $`C_{(01432)}`$. Also, the proposition provides for us a point on the intersection of these planes. If let $`S_5`$ act on this proposition, so to speak, by relabelling the variables, we obtain points on each of the intersections of the six planes we are interested in. For example, the plane $`P_{(03214)}`$ contains the following five points: $`[\mu _1+\mu _2\mu _3\mu _4:\mu _1:\mu _2:\mu _3:\mu _4]`$ $`[\mu _0:\mu _0+\mu _3\mu _2\mu _4:\mu _2:\mu _3:\mu _4]`$ $`[\mu _0:\mu _1:\mu _2:\mu _1+\mu _4\mu _0\mu _2:\mu _4]`$ $`[\mu _0:\mu _1:\mu _2:\mu _3:\mu _2+\mu _3\mu _0\mu _1]`$ $`[\mu _0:\mu _1:\mu _0+\mu _4\mu _1\mu _3:\mu _3:\mu _4]`$ Taking minors of this matrix, we obtain an equation for the plane $`P_{(03214)}`$ with coefficients cubic in the $`\mu _i`$s, and likewise for the other five planes. More interestingly, we can find the intersections of the conics $$C_{(01324)},C_{(03421)},C_{(01432)}$$ with the line $`\mathrm{}_{01}`$, that is to say, the locations of the points $`p_{e+f2a},p_{f+b2a},p_{c+d2a}`$ on the trope $`\mathrm{\Theta }_{c+da}`$. This gives the following theorem. ###### Theorem 5.2. If $`\mu _0\mu _1\mu _2\mu _3\mu _40`$ and $$\underset{i=0}{\overset{4}{}}\mu _i^3\underset{ij}{}\mu _i^2\mu _j+2\underset{ijk}{}\mu _i\mu _j\mu _k=0,$$ and if the Hessian quartic surface $`H`$ given by $$𝕍(\underset{i=0}{\overset{4}{}}\mu _i\underset{ji}{}X_j)$$ has only ten nodes, then $`H`$ is Kummer. Specifically, let $`B`$ be the branched cover of $`^1`$ over $`a`$ $`={\displaystyle \frac{\mu _1+\mu _4\mu _0\mu _2\mu _3}{2\mu _3}},`$ $`b`$ $`={\displaystyle \frac{2\mu _2}{\mu _0+\mu _4\mu _1\mu _2\mu _3}},`$ $`c`$ $`=0,`$ $`d`$ $`=1,`$ $`e`$ $`={\displaystyle \frac{\mu _0+\mu _3\mu _1\mu _2\mu _4}{\mu _1+\mu _2\mu _0\mu _3\mu _4}},`$ $`f`$ $`=\mathrm{}.`$ Then these six points will be distinct, so $`B`$ will be a smooth genus 2 curve. Let $`K`$ be its Kummer surface, and let $`W`$ be the Weber hexad $$\{0,b+c2a,c+d2a,d+e2a,e+f2a,f+b2a\}K.$$ Then $`HK_W`$. 
Conversely, if $`\{a,b,c=0,d=1,e,f=\mathrm{}\}`$ are six distinct points on $`^1`$, and $`B`$ is the genus 2 curve branched over those six points, and $`K`$ and $`W`$ are as usual, the surface $`K_W`$ can be embedded as a Hessian, with equation $$H=𝕍(\mu _0X_1X_2X_3X_4+\mu _1X_0X_2X_3X_4+\mu _2X_0X_1X_3X_4+\mu _3X_0X_1X_2X_4+\mu _4X_0X_1X_2X_3),$$ where the coefficients $`\mu _i`$ are given by $`\mu _0`$ $`=a(b+1),`$ $`\mu _1`$ $`=e(a+1),`$ $`\mu _2`$ $`=b(ae),`$ $`\mu _3`$ $`=eb,`$ $`\mu _4`$ $`=(ab)(e+1).`$ ###### Proof. Given the previous theorem and the proposition, this reduces to a computation. ∎ ## 6. Suggestions for further research As stated in the introduction, this exploration is by no means done. To begin with, there is the problem of finding 6 points in $`^2`$ to blow up to obtain the cubic surfaces associated to these Hessians. Igor Dolgachev has presented a candidate sextuple, but this author cannot see a good technique to answer his question. Another intriguing line of research is broached by observing that among all cubic surfaces, there is a codimension one subfamily of singular cubic surfaces. On the moduli space, this divisor meets the Kummer divisor studied in this paper, and the two divisors are everywhere tangent along their intersection. However, the condition of smoothness of the cubic surface has only barely made its presence known in the results of this paper. A related question is brought up by asking what happens if we allow our genus 2 curve to degenerate. ## Acknowledgments We thank Igor Dolgachev for suggesting this problem. Also, we gratefully acknowledge Bert van Geemen, who has been approaching the same questions using theta function techniques. He independently found the cubic relation described in this paper, and made several helpful suggestions and comments while I was pursuing the geometric approach to the problem.
no-problem/9903/chao-dyn9903011.html
ar5iv
text
# Defect-freezing and Defect-unbinding in the Vector Complex Ginzburg-Landau Equation. ## 1 Introduction Spatially extended nonlinear dynamical systems display an amazing variety of behavior including pattern formation, self-organization, and spatiotemporal chaos. Transition phenomena between different kinds of states share some characteristics with phase transitions in equilibrium systems. Symmetry breaking, topological defects, and Goldstone modes, for instance, are commonly found. Nevertheless, a much larger variety of collective effects are possible in these far-from-equilibrium systems. In this paper we report some numerical results on the behavior of the Vector Complex Ginzburg-Landau (VCGL) equation , a model originally developed in the study of pattern formation in optical systems . It consists of a set of two coupled complex Ginzburg-Landau equations which could be thought as the two components of a vector equation : $$_tA_\pm =A_\pm +(1+i\alpha )^2A_\pm (1+i\beta )(|A_\pm |^2+\gamma |A_{}|^2)A_\pm .$$ (1) The VCGL equation appears naturally in situations where a two-component vector field starts to oscillate after undergoing a Hopf bifurcation. This is the case of the transverse electric vector field in a resonant optical cavity near the onset of laser emission. The two complex fields $`A_\pm `$ are the complex envelopes of the two components (the circularly polarized components in the optical case) of the oscillating field. The parameter $`\alpha `$ measures dispersion or diffraction effects whereas $`\beta `$ is a measure of nonlinear frequency renormalization. $`\gamma `$ is the coupling between the components, so that for $`\gamma =0`$ one obtains two uncoupled scalar Ginzburg-Landau equations. The onset of oscillations breaks two continuous symmetries. On the one hand, the phase of the oscillations destroys time translation invariance. On the other the direction of oscillations breaks isotropy by singling out a vector orientation. Typically these symmetries are broken differently in different parts of the system, so that regions in different oscillation states, with topological defects between them, appear and compete. For the case $`\gamma =0`$, equivalent to the scalar case, a phase diagram charting the different states at different parameter values has been obtained both in one and in two dimensions . In the general vectorial or coupled case, however, our knowledge is much more partial. Here we will describe states appearing in two spatial dimensions for $`\gamma `$ real, $`0\gamma <1`$, and $`\alpha `$ and $`\beta `$ such that plane waves $`A_\pm =Q_\pm e^{i(𝐤_\pm 𝐱\omega _\pm 𝐭)}`$ are linearly stable solutions (a necessary condition is $`1+\alpha \beta >0`$). The range of parameters that we consider is relevant to describe laser emission when atomic properties favor linear polarization in a broad area laser with large detuning between atomic and cavity frequencies. The following two sections describe our results for the behavior of the system in our range of parameter values. We show the existence of a transition between a frozen phase and a gas-like phase. After the conclusions section, an Appendix gives some details on the numerical algorithm used. ## 2 Defect-dominated frozen phase Despite the existence and stability of plane-wave solutions, typical evolution starting from random initial conditions leads to complex evolving states. For $`\gamma `$ small the state of each component superficially resembles the one obtained for the scalar equation (see Fig. 
1): the dominant objects are spiral waves, emanating from or sinking into a defect (a zero of the complex field, giving a phase singularity) core. Despite the similarities, there are important differences between defects in the scalar case and in the present vectorial case. In the scalar case there is only one complex field, so that there is a single phase and thus a single type of charge associated to its singularities or defects. In our case there are two complex fields, $`A_+`$ and $`A_{}`$, which can vanish independently, giving rise to two independent charges. The topological charges of a defect are defined by $$n_\pm =\frac{1}{2\pi }_\mathrm{\Gamma }\stackrel{}{}\varphi _\pm 𝑑\stackrel{}{r},$$ (2) where $`\mathrm{\Gamma }`$ is a closed path around the defect, and the phases $`\varphi _\pm `$ are defined by the relations $`A_\pm =|A_\pm |e^{i\varphi _\pm }`$. Numerically we do not find in the system the spontaneous emergence of any topological charge with modulus greater than 1 starting from random initial conditions. It is possible to make a classification of defects using standard topological arguments . We call vectorial defect a defect which is a singularity of both components of the field (i.e. both components vanish at the same point). A vectorial defect is of argument type when the charges of the two field components have the same signs, i.e., when $`n_+=n_{}=1`$ or $`n_+=n_{}=1`$. If the charges are of opposite signs, i.e., when $`n_+=n_{}=1`$ or $`n_+=n_{}=1`$, the vectorial defect is of director type. We call mixed defect a defect that is present just in one component of the field. For $`\gamma =0`$ there is no interaction between the two fields and thus binding of mixed defects to form vectorial defects would not occur generically. By increasing $`\gamma `$ we observe that all kinds of defects appear leading to configurations which evolve very slowly in time. Such configurations are representative of a frozen or glassy state. For example, for $`\gamma =0.1`$, $`\alpha =0.2`$, $`\beta =2`$ and large times (Fig. 1), the system evolves into a state in which the fields are organized in domains of nearly constant modulus separated by shocks. There is a vectorial defect at the center of each domain. This defect core emits or receives phase waves which entrain the whole domain. Perturbations and mixed defects are ejected away from the defect core with a group velocity. The mixed defects accumulate at the domain borders. In Fig. 1 we also show the global and relative phases, $`\varphi _g=\varphi _++\varphi _{}`$ and $`\varphi _r=\varphi _+\varphi _{}`$. An argument defect has a global phase $`\varphi _g`$ that rotates $`4\pi `$ around the defect core, while the relative phase $`\varphi _r`$ rotates 0. For a director defect, $`\varphi _g`$ rotates 0 and $`\varphi _r`$ rotates $`4\pi `$. In consequence argument and director defects are easily distinguished in the plot of the global phase: A two-armed spiral is formed around an argument defect, while a target pattern is seen in the domain of a director defect. Mixed defects appear as points around which the global or relative phase rotates by $`2\pi `$. The modulus of this kind of configuration evolves very slowly in time, so that we could call it a frozen or glassy state. As $`\gamma `$ increases the structure of the mixed defects becomes such that a maximum in the modulus of one of the components appears where the other component presents a singularity (see Fig. 2). 
Such anticorrelation, which also occurs for the shocks separating the regions dominated by a vectorial defect, becomes more evident by further increasing $`\gamma `$. This feature is also present in the one-dimensional case: no topological defects exist in $`d=1`$, but a spatially localized minimum of one field, which moves in time, goes together with a maximum of the other field. ## 3 Unbinding transition to a gas phase There is a critical value ($`\gamma 0.35`$ for $`\alpha =0.2`$, $`\beta =2`$, as in Figs. 1 and 2) above which vectorial defects disappear. We observe two different annihilation processes: a) One of the two singularities that form the vectorial defect is annihilated in the collision with a mixed defect, in the same component but of opposite charge, which migrates from the boundaries of a domain. A mixed defect, with charge associated to the other component, is thus left in the system. b) The vectorial defect splits into two spatially separated mixed defects, one in each component. When the vectorial defects disappear, spiral-wave domains dissolve and the frozen structure transforms into a mobile configuration with fast active dynamics. Fig. 2 (bottom) shows a typical snapshot: mixed defects travel freely around the system as in a kind of “gas phase”. The anticorrelation between the two components is quite evident at this large value of $`\gamma `$. The transition between the frozen and the gas behavior is rather sharp, and can be thought as a kind of vortex unbinding. One way of characterizing the different kinds of behavior and transitions between them is by means of an entropy measure $`H(X)=_xp(x)\mathrm{ln}p(x)`$, where $`p(x)`$ is the probability that $`X`$ takes the value $`x`$. $`H(X)`$ measures the randomness of a discrete variable $`X`$. We can compute the single-point entropies of the modulus of the field components by considering the discretized values of $`|A_+|`$ and $`|A_{}|`$ as random variables ($`X=|A_+|`$ or $`|A_{}|`$; we discretize the range of these variables into 200 values). The associated probability distributions are defined from the ensemble of values collected from different space-time points. In Fig. 3 we plot the entropy of $`|A_+|`$ and $`|A_{}|`$ as functions of $`\gamma `$. For low values of $`\gamma `$ the system is in the frozen state consisting in large domains of uniform modulus surrounding vectorial defects. These domains impose some degree of order which gives low values to the entropies. For $`\gamma =0.25`$ the size of the domains diminishes, and the system becomes more disordered as indicated by the increase of the entropies. There is a maximum of the entropies at $`\gamma 0.3`$, which is the value at which the argument defects are seen to annihilate. Thus the maximum in the entropies is signaling the transition from the frozen structure to the gas-like phase. For $`\gamma 0.35`$ the director vectorial defects disappear also, so that for higher values of $`\gamma `$ there are only mixed defects. When $`\gamma `$ leaves the transition region, the entropies initially decrease, but they increase later with growing $`\gamma `$ in correspondence with the increasing dynamic disorder in the fields. This behavior of the entropies is in contrast with the one-dimensional case , where topological defects are absent. There, entropies maintain an essentially constant value when $`\gamma `$ varies. The presence of defects in the two-dimensional case is responsible for the distinct behavior of the entropies. 
## 4 Conclusions In this Paper we have described qualitatively some aspects of the dynamics of the VCGL equation, focusing in a particular parameter regime of relevance in optics. The presence of different kinds of defects is the characteristic phenomenon organizing other features of the dynamics. Two main “phases”, a frozen or glassy state and a more dynamic gas-like phase, have been identified. The transition between these two phases originates in the unbinding of vectorial defects. Financial support from DGES (Spain) Project PB94-1167 and from the European Union TMR network QSTRUCT (Project FMRX-CT96-0077)is acknowledged. ## Appendix A Numerical Integration Scheme The time evolution of the complex fields $`A_\pm (x,t)`$ subjected to periodic boundary conditions is obtained numerically from the integration of the VCGL in Fourier space. The method is pseudospectral and second-order accurate in time. It is the straightforward generalization to two dimensions and two components of the algorithm described in for the scalar Ginzburg-Landau equation. Each Fourier mode $`A_\pm ^q`$ evolves according to: $$_tA_\pm ^q(t)=\alpha _qA_\pm ^q(t)+\mathrm{\Phi }_\pm ^q(t),$$ (3) where $`\alpha _q`$ is $`(1+ic_1)q^21`$, and $`\mathrm{\Phi }_\pm ^q`$ are the $`q`$-modes of the non-linear terms in the VCGL equation. When a large number of modes $`q`$ is used, the linear time scales $`\alpha _q`$ can take a wide range of values. A way of circumventing this stiffness problem is to treat exactly the linear terms by using the formal solution: $$A_\pm ^q(t)=e^{\alpha _qt}\left(A_\pm ^q(t_0)e^{\alpha _qt_0}+_{t_0}^t\mathrm{\Phi }_\pm ^q(s)e^{\alpha _qs}𝑑s\right).$$ (4) From here the following relationship can be obtained: $$A_\pm ^q(n+1)=e^{2\alpha _q\delta t}A_\pm ^q(n1)+\frac{1e^{2\alpha _q\delta t}}{\alpha _q}\mathrm{\Phi }_\pm ^q(n)+𝒪(\delta t^3).$$ (5) Expressions of the type $`f(n)`$ are shortcuts for $`f(t=n\delta t)`$. Scheme (5) alone is unstable for the VCGL equation. To fix this one can derive the auxiliary expression $$A_\pm ^q(n)=e^{\alpha _q\delta t}A_\pm ^q(n1)+\frac{1e^{\alpha _q\delta t}}{\alpha _q}\mathrm{\Phi }_\pm ^q(n1)+𝒪(\delta t^2),$$ (6) and the algorithm proceeds as follows: 1. Starting from $`A_\pm ^q(n1)`$ and Fourier inverting to get $`A_\pm (𝐱,𝐧\mathrm{𝟏})`$ one can calculate the nonlinear terms in direct space and then obtain $`\mathrm{\Phi }_\pm ^q(n1)`$. 2. Eq. (6) is used to obtain an approximation to $`A_\pm ^q(n)`$. 3. The non-linear terms $`\mathrm{\Phi }_\pm ^q(n)`$ are now calculated from these $`A_\pm ^q(n)`$ by going to real space as before. 4. The fields at step $`n+1`$ are calculated from (5) by using $`A_\pm ^q(n1)`$ and $`\mathrm{\Phi }_\pm ^q(n)`$. At each iteration, we get $`A_q(n+1)`$ from $`A_q(n1)`$, and the time advances by $`2\delta t`$. The number of Fourier modes depends on the space discretization. We have used $`dx=1`$ in lattices of size $`128\times 128`$ or $`256\times 256`$. The time step was usually $`dt=2\delta t=0.05`$.
no-problem/9903/astro-ph9903398.html
ar5iv
text
# X-ray Nova XTE J1550-564: Optical Observations ## 1 Introduction Soft X-ray transients, also called X-ray novae (XN), are mass transferring binaries in which long periods of quiescence (when the X-ray luminosity is $`10^{33}`$ergs s<sup>-1</sup>) are occasionally interrupted by luminous X-ray and optical outbursts (Tanaka & Shibazaki 1996). X-ray novae are unique objects since they provide the most compelling evidence for the existence of stellar mass black holes (Cowley 1992). Using optical photometry and spectroscopy, eight XN have been shown to contain a black hole, (van Paradijs & McClintock 1995; Bailyn et al. 1998; Orosz et al. 1998a) since the mass of the primary exceeds the maximum stable limit of a neutron star ($`3M_{\mathrm{}}`$, Chitre & Hartle 1976). The soft X-ray transient XTE J1550-564 was discovered with the All Sky Monitor (ASM ; Levine et al. 1996) on the Rossi X-ray Timing Explorer (RXTE) on September 6, 1998 (Smith et al. 1998). This object became the brightest X-ray nova yet observed by RXTE (Remillard et al. 1998b). A high frequency QPO at $`185`$ Hz has been observed on two well separated occasions (McClintock et al. 1998; Remillard et al. 1999). Although the true nature of this object has yet to be confirmed, XTE J1550-564 is likely to be a black hole based on its characteristic soft X-ray spectrum and the hard power law tail (Sobczak et al. 1999), and high frequency QPOs (Remillard et al. 1999). Shortly after the X-ray discovery the optical counterpart was identified within the RXTE error box (Orosz, Bailyn & Jain 1998). We present optical light curves obtained during the outburst of XTE J1550-564 as part of a multi-wavelength campaign. The spectral analysis of the RXTE PCA data as well as the ASM light curve and a timing study based on the same RXTE observations are presented in companion papers (Sobczak et al. 1999 and Remillard et al. 1999; hereafter paper I and paper II, respectively). Although much can be learned about XN by studying the X-ray data alone, simultaneous optical observations can provide tighter constraints for various accretion disk models. For example, optical, UV and X-ray data obtained during quiescence of several BHXN have been used to demonstrate the successful application of the advection dominated accretion flow (ADAF) model, whereas thermal emission from a thin disk model is inconsistent with these observations (Narayan et al. 1996, 1997a,b). Furthermore, the six day time delay between the optical and X-ray outbursts of GRO J1655-40 observed in April 1996 (Orosz et al. 1997) has also been successfully modeled by an accretion flow consisting of a cold outer disk and a hot inner ADAF region (Hameury et al. 1997). The extensive X-ray and optical coverage of the outburst of XTE J1550-564 provides further opportunities to test accretion disk models and ADAF models in particular. We report below our optical observations, data reductions, and results. ## 2 Observations and Reductions We obtained photometry using the Yale 1m telescope at CTIO, which is currently operated by the YALO (Yale, AURA, Lisbon, Ohio State) consortium. This telescope is ideally suited for observing X-ray transients and other objects which require continuous long-term monitoring. Data are taken every clear night by two permanent staff observers, and observations are requested by a queue which can be changed quickly in response to discoveries. 
The data reported here were acquired using the ANDICAM optical/IR camera which contains a TEK $`2048\times 2048`$ CCD with $`10.2\times 10.2`$ arcmin<sup>2</sup> field of view with a scale of 0.3 arcsec pixel<sup>-1</sup>. The IR array was not available at the time of our observations. On September 8.99 (UT), we obtained images of two fields in response to the announcement of the initial detection of XTE J1550-564 by RXTE (Smith et al. 1998). For each field a 60 second and a 300 second exposure was obtained in both $`V`$ and $`I`$. The 300 second $`V`$ band images were compared to images of the same regions extracted from the Digitized Sky Survey (DSS) CD ROM set (Sturch et al. 1993). We identified a $`V16`$ star as the optical counterpart, since it appeared in all of the CCD images we obtained, but not in the DSS image (Orosz, Bailyn, & Jain 1998 — see Figure 1). There were several HST guide stars in the CCD image, which allowed us to determine the J2000 coordinates of $`\alpha =15^\mathrm{h}50^\mathrm{m}58\stackrel{\mathrm{s}}{\mathrm{.}}78`$, $`\delta =56^{}28^{}35\stackrel{}{\mathrm{.}}0`$, with errors of $`2^{\prime \prime }`$. The quoted error corresponds to the maximum systematic error present in the HST Guide Star Catalog (Russell et al. 1990). A variable radio source detected on September 9 at a position consistent with that of the optical transient (Campbell-Wilson et al. 1998). Finally, spectroscopic confirmation came on September 16, when Castro-Tirado et al. (1998) showed that the optical variable had emission lines of H, He II, and N III, typical of X-ray transients in outburst. We observed the source on all nights for which weather and instrumentation permitted between September 8.99 to October 26.9, 1998, when the object was no longer observable in the night sky (see Figure 2). The exposure times were 120-300 seconds for $`V`$ and $`I`$, 300-600 seconds for $`B`$. The seeing varied from night to night ranging from 1.3 to 3 arcseconds, with a typical value of 1.7 arcseconds. A time series of the optical data was obtained using the IRAF versions of DAOPHOT and ALLSTAR and the stand-alone code DAOMASTER (Stetson 1987; Stetson, Davis, & Crabtree 1991; Stetson 1992a, 1992b). The DAOPHOT instrumental magnitudes were calibrated to the standard scales using standard stars from the list of Landolt (1992). ## 3 The Quiescent Optical Counterpart The Royal Observatory Edinburgh (ROE) maintains a large archive of photographic plates of Southern sky fields taken with the UK Schmidt telescope located at the Anglo-Australian Observatory. There are ten plates on which the XTE J1550-564 field was well-centered and the exposure times were fairly long. Sue Tritton of the ROE kindly examined these ten plates and photographed the $`4\times 4`$ arcminute regions surrounding the positions of XTE J1550-564. The best quality plate is No. J2977 from March 20, 1977, which is the atlas plate for the SERC J survey. A print of this plate was scanned using the Yale PDS-microdensitometer at a resolution of 0.3 arcseconds/pixel. There is a faint star close to the position of XTE J1550-564 (see Figure 1). We used the pixel coordinates of several bright comparison stars to determine the coordinate transformation between the CCD image and the scanned image. Based on this transformation, we find that the faint star in the scanned image is within 0.5 arcseconds of the position of XTE J1550-564. 
Hence this star is most likely the quiescent optical counterpart, although we can not completely rule out the possibility that it is an unrelated field star. We performed aperture photometry of this faint star and several comparison stars in the scanned image and determined a $`B`$ magnitude of $`B=22.0\pm 0.5`$ for the faint counterpart. This error represents the scatter in the differences between the calibrated magnitudes of the comparison stars and the magnitudes obtained from the scanned photographic image. The quiescent counterpart is not visible on any of the the remaining nine plates, all of which have limiting magnitudes of $`21`$. The amplitude of the outburst thus appears to be $`4`$ magnitudes, although this could be larger if the star appearing on the SERC J plate is not the actual quiescent counterpart. The outburst amplitude is intermediate between the relatively small optical outbursts seen in X-ray novae with early type (F and A) companions like GRO J1655-40 and 4U1543-47 (Bailyn et al. 1995; Orosz et al. 1998a) and the much larger outbursts seen in systems with later spectral types (van Paradijs & McClintock 1995). If this pattern holds, one may expect the secondary of XTE J1550-56 to be a main-sequence G star or perhaps an evolved giant. For main sequence companions, Shahbaz & Kuulkers (1998) propose an empirical formula relating the orbital period to the outburst magnitude, which yields $`P23`$ hours for this case. ## 4 The Outburst Light Curve The optical magnitude of XTE J1550-564 varied much less during our observing period than the X-ray flux (see Figure 2). During the span of 49 days that we have data for XTE J1550-564, the optical brightness in $`B`$, $`V`$ and $`I`$ dropped by $`1.5`$ magnitudes and the daily fluctuations were less than $`0.15`$ magnitude between any two adjacent nights. In general, the $`B`$, $`V`$, and $`I`$ magnitudes decayed steadily with the exception of an optical flare near September 21, which occurred approximately one day after the X-ray flare. There is also a seven day plateau lasting from approximately October 15 to October 21, during which the $`I`$ magnitude fluctuated by less than 0.03 magnitudes (the $`B`$ and $`V`$ magnitudes also fluctuated much less during this period compared to the previous seven days). The general features of the optical and X-ray light curves can be compared with other LMXBs using the classifications by Chen, Shrader & Livio (1997). Based on their classification, the exponentially decaying optical light curve of XTE J1550-564 with an e-folding time of $`30`$ days can be described as a possible FRED (light curves that have either a Fast-Rise or an Exponential Decay). In fact, most optical light curves of LMXBs are best described as possible FREDs, although the average e-folding time of 67.6 days (Chen, Shrader, & Livio 1997) is longer than what we find for XTE J1550-564. Curiously, the optical decay is not correlated with the ASM X-ray light curve, which remains fairly constant after the flare. Before we can compare properties such as the average intrinsic color and ratio of the X-ray to optical flux of XTE J1550-564 to those of other LMXBs, we must correct for interstellar reddening. The reddening can be estimated from the average expected values of $`N_H`$ which can be derived from the HI map by Dicky & Lockman (1990), in this case by using the FTOOLS routine nh. 
We obtained $`N_H9\times 10^{21}`$ cm<sup>-2</sup>, which yields an estimate of $`A_V=5.0`$ assuming the relation between $`N_H`$ and $`A_V`$ of Predehl & Schmitt (1995). The Dicky & Lockman (1990) map represents the column density to infinity, and thus might be an overestimate for a galactic source embedded in the plane. However, the tentative distance of 6 kpc suggested in paper I places the source 200pc below the plane, well out of the galactic dust layer. We note that the X-ray spectral analysis yields values of $`N_H`$ about twice as large as is suggested by the HI maps — this may indicate self-absorption in the source. Using the conventional relationship $`A_V=3.1\times E(BV)`$ (Savage & Mathis 1979), we find $`E(BV)=1.6`$. This implies an intrinsic color of $`(BV)_0=0.25\pm 0.04`$ right after the flare, consistent with the average value of $`0.09\pm 0.14`$ for a sample of LMXBs obtained by van Paradijs & McClintock (1995). Note that the much larger reddening implied by the X-ray column density in the absence of self-absorption results in an implausibly blue intrinsic color for XTE J1550-564. Using $`A_V=5.0`$, we find an optical to X-ray flux ratio of $`450`$ during the flare, comparable to the average value of 500 found by van Paradijs & McClintock (1995). This estimate assumes a flat optical spectrum between $`3000\AA `$ and $`7000\AA `$, and a 2-20 keV X-ray flux derived from the spectral decomposition described in paper I. The optical response to the large X-ray flare which occurred near Sept. 21 (see paper I) was quite muted, with an amplitude of $`0.2`$ magnitudes. The (V-I) color increased during the flare, meaning the optical outburst was redder than during the decay (see Figure 2). On the other hand, the ASM hardness ratio (HR2, defined as the ratio of ASM count rates between 5 to 12 keV band and 3 to 5 keV bands) increased, meaning the X-ray light curve was “bluer” during the flare (see paper I for details). Although detailed spectral information about the optical flare is unavailable, the color and hardness ratios indicate a change in the spectrum across a wide range of wavelengths during the flare. The optical response to the flare is delayed by about a day relative to the X-rays. To quantify this delay, we parameterized the light curves as described below. The fits are not perfect, and somewhat different results can be obtained with different fitting schemes. However the qualitative relations between the different light curves persist no matter how the data are described. We fit the ASM flare with a Lorentzian with a centroid of $`51,076.2\pm 0.1`$ and a FWHM of $`1.5`$ days (for convenience we express time in days in the units of MJD=JD-2,400,000.5, where MJD 51,075 is September 19, 1998). We estimated the centroid and FWHM of the optical flare using a Gaussian to fit the flare and a linear component to fit the decay. We determined the centroids for the $`B`$, $`V`$, and $`I`$ bands to be $`51,077.3\pm 0.2`$, $`51,077.05\pm 0.02`$, $`51,077.05\pm 0.03`$, respectively, all of which occur approximately one day later than the X-ray peak as noted above. There is no significant offset between the times of the peaks of the optical flare in the $`B`$, $`V`$, and $`I`$ light curves. In contrast, the duration of the optical event varied strongly with filter. One self-consistent solution yields the following values of FWHM for $`B`$, $`V`$, and $`I`$ respectively: $`2.1\pm 0.5`$, $`1.6\pm 0.1`$, and $`1.6\pm 0.05`$ days. 
The relationship between the X-ray and optical flares is in dramatic contrast to the onset of the April 1996 outburst of GRO J1655-40, described by Orosz et al. (1997), in which the optical event preceded the X-rays by six days. That event was interpreted (Hameury et al. 1997) as an “outside-in” instability in the accretion disk. The flare in XTE J1550-564, on the other hand, appears to have begun deep in the accretion flow, and subsequently propagated outwards. ## 5 Summary We have identified the optical counterpart of XTE J1550-564, and analyzed the B, V and I light curves from September 8.99 to October 26.9, 1998. We find that the X-ray and optical light curves are poorly correlated. The large X-ray flare was followed a day later by a small ($`0.2`$ mag) increase in the optical brightness. The tentative identification of a quiescent counterpart in sky survey images suggests that it will be possible to measure the mass function of the object after it has returned to quiescence. We thank the two YALO observers, David Gonzalez Huerta and Juan Espinoza, for providing data in a timely manner. We also thank Sue Tritton of the ROE for her help with the archival plates and John Lee, Terry Girard, and Imants Platais for assistance with the Yale PDS scanner. We would like to thank Suzanne Tourtellotte and Elene Terry for their assistance with data reduction. Financial support for this work was provided by the National Science Foundation through grant AST-9730774.
no-problem/9903/quant-ph9903006.html
ar5iv
text
# Quantum counter erasure ## I INTRODUCTION In order to gain concrete experimental notions, we start by discussing the well-known two-slit interference experiment , which is theoretically the simplest and best known example of interference. Let the indices 1 and 2 refer to the two slits, and let $`|\psi _1`$ and $`|\psi _2`$ be the spatial state vectors of the photon having traversed only the first or only the second slit respectively. Then the superposition (also called coherent mixture) of these two state vectors, i. e., $$|\psi (1/2)^{1/2}\left(|\psi _1+|\psi _2\right)$$ $`(1)`$ is the interference state vector corresponding to both slits being open. (The term ”interference” actually refers to the interference pattern on the detection screen.) To bring in entanglement, we assume that the photons pass a horizontal linear polarizer at slit 1 and a vertical one at slit 2 . The entangled two-subsystem (but one-photon) state vector is then, in obvious notation: $$|\chi \left(|H|\psi _1+|V|\psi _2\right).$$ $`(2)`$ We are dealing with a minimal-term entanglement (two terms only). The state of the subsystem of spatial degrees of freedom is now an improper mixture $$\rho _sTr_p|\chi \chi |=(1/2)\left(\psi _1\psi _1|+\psi _2\psi _2|\right)$$ $`(3)`$ as easily seen. The symbol $`\rho _s`$ denotes the state operator (reduced statistical operator) of the spatial subsystem, and $`\mathrm{"}Tr_p\mathrm{"}`$ denotes the partial trace over the (linear) polarization degree of freedom of the photon. Also the mixture in (3) is a minimal-term one. The entanglement in (2) suppresses the interference replacing the interference state (1) by the nonintereference one given by (3). The entanglement (2) contains the so-called ”which-path” memory, because, in principle, measuring only if the linear polarization is horizontal or vertical, one reestablishes $`|\psi _1`$ or $`|\psi _2`$ respectively. For example, if the polarization turns out to be horizontal, then, according to the so-called Lüders formula for ideal measurement , , one has the following disentanglement: $$|\chi c\left(HH|1\right)\chi =|H|\psi _1,$$ $`(4)`$ where c is a normalization constant. Before this ”which-path” measurement is performed, there is a (potential) complementarity in $`|\chi `$ because it provides also a complementary memory, on ground of which one can revive the suppressed interference in the distant subsystem , . This revival is possible because, as easily checked, one can rewrite the same composite-system state vector given by (2) as follows: $$|\chi =(1/2)^{1/2}\left(|45^0|\psi +45^0|\psi ^c\right),$$ $`(5)`$ where, e.g., $`|45^0`$ is the polarization state at $`45^0`$ between horizontal and vertical, $`|\psi `$ is given by (1), and, what we call the counter-interference state (for reasons seen below), $`|\psi ^c`$ is defined by $$|\psi ^c(1/2)^{1/2}\left(|\psi _1|\psi _2\right).$$ $`(6)`$ Further, the corresponding linear polarization state vectors at the given angles, evidently, satisfy $$|45^0=(1/2)^{1/2}\left(|H+|V\right),45^0=(1/2)^{1/2}\left(|H|V\right).$$ $`(7a,b)`$ If one measures the linear polarization at $`45^0`$ or at $`45^0`$ (since $`45^0|45^0=0,`$ this is, essentially, an observable), and if the former result is obtained, then, on account of (5), the following disentanglement takes place (cf. or ): $$|\chi 45^0|\psi .$$ Thus, the spatial interference state $`|\psi `$ is revived. 
This phenomenon is called quantum erasure , because the ”which-path memory” in the entanglement in $`|\chi `$, which suppresses the interference, is erased. If in the $`45^0`$-angle linear polarization measurement the result is $`45^0`$, then the Lüders formula gives $`|\psi ^c`$, i. e., it is the counter-interference state that is revived. As to what is actually observed in the laboratory, one cannot ”see” the interference state $`|\psi `$ itself (cf. (1)) in full. One usually observes an interference pattern implied by $`|\psi `$ (on a detection screen). The pattern is actually the localization probability distribution: $$p_i(\text{r})|\psi (\text{r})|^2=(1/2)\left(\psi _1(\text{r})|^2+|\psi _2(\text{r})|^2+\psi _1^{}(\text{r})\psi _2(\text{r})+\psi _1(\text{r})\psi _2^{}(\text{r})\right),$$ $`(8a)`$ where ”i” refers to interference, and $`\psi (\text{r})\text{r}|\psi `$ is determined by (1), etc.. It follows from (5) that the state $`\rho _s`$ of the spatial subsystem (cf (3)) can also be written as $$\rho _s=(1/2)\left(|\psi \psi |+\psi ^c\psi ^c|\right).$$ $`(9)`$ The probability distribution defined by the counter-interference state $`|\psi ^c`$, what we call counter interference , and the one defined by the incoherent mixture $`\rho _s`$ are respectively: $$p_i^c(\text{r})|\psi ^c(\text{r})|^2=$$ $$(1/2)\left(|\psi _1(\text{r})|^2+\psi _2(\text{r})|^2\psi _1^{}(\text{r})\psi _2(\text{r})\psi _1(\text{r})\psi _2^{}(\text{r})\right),$$ $`(8b)`$ $$p(\text{r})\text{r}|\rho _s|\text{r}=(1/2)\left(p_i(\text{r})+p_i^c(\text{r})\right)=(1/2)\left(|\psi _1(\text{r})|^2+|\psi _2(\text{r})|^2\right).$$ $`(8c)`$ The empirical, i. e., ensemble view of the phenomena of interference and counter-interference consists in realizing that, on account of the measurement of $`\left(|45^045^0|1\right)`$ on each individual photon of a laboratory ensemble that is described by $`|\chi `$ given by (2), the (improper ) ensemble of spatial subsystems described by $`\rho _s`$ breaks up into two subensembles (cf. the first sum in (8c)), each of which causes interference on the detection screen (cf. (8a) and (8b) respectively), but which are counter cases of each other in the sense that the two interferences cancel (cf (8c)). A thought experiment in which the above mentioned linear polarizers at the slits are replaced by maser cavities was given by Scully et al. in . The authors actually introduce quantum erasure in a pioneering way explaining the revival of the interference state $`|\psi `$. With a slight modification of the experiment one can revive the counter-interference state $`|\psi ^c`$ instead of $`|\psi `$. The first real experiment of quantum erasure was attempted in . It turned out that it was erasure in a somewhat broader sense. Actually, the entangled composite-system state $`|\chi `$ contains a nondenumerable infinity of spatial states that can, in principle, be revived . The rivival takes place via the measurement of an opposite-subsystem observable. Since the entanglement is a minimal-term one, this observable is a yes-no measurement, and the revived states appear in pairs, the counter states of each other . The ”which-way” states $`|\psi _1`$ and $`|\psi _2`$ on the one hand and the interference state $`|\psi `$ and the counter-interference state $`|\psi ^c`$ on the other are examples of counter states of each other. We explore this phenomenon in detail in this study. ## II COUNTER STATES IN MINIMAL-TERM MIXTURES In this section the following question is given an answer: How to classify, i. 
e., enumerate (in a bijective way) explicitly the set of all mathematically possible decompositions of a given minimal-term mixture (like $`\rho _s`$ in (3)) into two pure states? This question is studied with a view to find out (in the next section) how one can revive any of the two pure states of any of the mentioned decompositions by a yes-no measurement on the opposite subsystem. Let $`\rho `$ be a given minimal-term mixture state operator, i. e., one that can be written in the spectral form : $$\rho =r|11|+(1r)|22|,$$ $`(10)`$ where $`0<r(1/2)`$. It is known that each state vector from the range of $`\rho `$, and only such state vectors, can appear in a decomposition of $`\rho `$. We want to find out about the counter state vectors and the corresponding statistical weights. Our answer to the above question goes as follows: Let (10) be given. Let, further, $$|\varphi p|1+(1p^2)^{1/2}e^{i\vartheta }|2,$$ $`(11a)`$ with any values from the intervals $$0p1,0\vartheta <2\pi ,$$ $`(11b)`$ be an (up to a phase factor) arbitrary state vector from the range of $`\rho `$. Then there exists one and only one decomposition of $`\rho `$ into two pure states in which $`|\varphi \varphi |`$ appears. It is $$\rho =w|\varphi \varphi |+(1w)|\varphi ^c\varphi ^c|$$ $`(12)`$ where $$wr(1r)/\left(p^2(1r)+(1p^2)r\right),$$ $`(13)`$ and $$|\varphi ^c\left[(rwp^2)/(1w)\right]^{1/2}|1+$$ $$\left[\left((1r)w(1p^2)\right)/(1w)\right]^{1/2}e^{i(\vartheta +\pi )}|2.$$ $`(14)`$ The claims made are shown to follow as an immediate consequence of a wider lemma stated and proved in Appendix 1. To be practical, we shall call decomposition (12) of $`\rho `$ in the context of (10)-(14) ”the p,$`\vartheta `$-decomposition”. Thus, all decompositions of a given minimal-term mixture $`\rho `$ ca be classified or enumerated by the two parameters p and $`\vartheta `$. The counter state $`|\varphi ^c`$ and the statistical weight $`w`$ are uniquely implied by the state operator $`\rho `$ and $`|\varphi `$. The state vectors $`|\varphi `$ and $`|\varphi ^c`$ are counter states of each other, i. e., if one is written as (11a), then the other takes the form (14). Further, a more detailed examination of the answer given reveals the following peculiarities in the above relations: (i) If the characteristic value $`r`$ of $`\rho `$ is nondegenerate, or, equivalently, if $`r<(1/2)`$, then relation (13) establishes a monotonously decreasing bijection of the interval of the values of $`p`$ onto the interval $`[r,(1r)]`$ of the values of $`w`$. (Namely, $`dw/d(p^2)<0`$.) (ii) If $`r=(1r)=1/2`$, then $`p`$ and $`\vartheta `$ can still take all values from their respective intervals (11b), but always $`w=1w=1/2`$. In this case the counter state takes the simple form $$|\varphi ^c=(1p^2)^{1/2}|1pe^{i\vartheta }|2$$ $`(15)`$ and $`|\varphi ^c`$ is orthogonal to $`|\varphi `$. Decomposition (12) is now a spectral form of $`\rho `$ (just like (10)). In this case, every decomposition of $`\rho `$ into pure states is an orthogonal one (a spectral form), and there are no other decompositions into pure states . Further, every orthogonal decomposition of the range $`R(\rho )`$ gives also a decomposition of $`\rho `$, and vice versa. (iii) Always $`rw(1r)`$. The equality r=w is observed if and only if $`p=1`$, then $`|\varphi =|1`$; whereas $`w=(1r)`$ if and only if $`p=0`$, and then $`|\varphi =|2`$. (These are consequences of (i) and (11a).) 
In case of nondegenerate $`r`$, peculiarity (iii) implies that the spectral form (10) is the mixture in which the most dominant pure state (i. e., the one with the largest statistical weight) and the least dominant one are exhibited. All other mixture forms (i. e., $`p,\vartheta `$-decompositions) of the given state operator $`\rho `$ are less extreme. In case of nondegenerate $`r`$, it further ensues from peculiarity (i) that for any a priori given $`w\in (r,1-r)`$, there exists a family of $`p,\vartheta `$-decompositions that give this $`w`$ value: The (unique) value of $`p`$ is obtained by solving (13) for $`p`$, and $`\vartheta `$ is arbitrary. In particular, $`w=1/2`$ is obtained with $`p=\sqrt{r}`$. Before we tackle (in the next section) the problem of how to perform empirically decomposition (12) of an empirically given (subsystem) state $`\rho `$, it should be noted that this decomposition may find application in various problems. For instance, $`\rho `$ may be the state operator of a composite system, and $`|\varphi \rangle `$ (cf (11a)) an uncorrelated state vector. The evaluation of the counter state $`|\varphi ^c\rangle `$ (cf (14)) is then of interest because it decomposes $`\rho `$ into a separable and an inseparable state (cf and ).

## III WHICH YES-NO MEASUREMENT GIVES RISE TO A GIVEN DECOMPOSITION?

The state vectors $`|\varphi \rangle `$ and $`|\varphi ^c\rangle `$ in decomposition (12) are in general not orthogonal. Hence, one cannot produce decomposition (12) by measurement in the laboratory, because measurement always ends up in orthogonal states. Nevertheless, these decompositions do have physical meaning in terms of so-called distant state decomposition (empirically distant ensemble decomposition ): One views the system on hand as a subsystem of a two-subsystem composite system, and one envisages the state vector $`|\omega \rangle `$ of the latter that implies the a priori given state operator $`\rho `$ (cf (10)) as its subsystem state operator $`\rho =Tr_o|\omega \rangle \langle \omega |`$ (the letter “o” in the index of the partial trace applies to the “opposite” subsystem). Then, arguing along the lines presented in the Introduction for the Young two-slit interference, an opposite-subsystem yes-no measurement on the composite system in the state $`|\omega \rangle `$ may leave the subsystem state $`\rho `$ decomposed precisely as given in the $`p,\vartheta `$-decomposition (12). This is what we investigate in detail in this section. We have the given minimal-term mixture $`\rho `$ in spectral form (10). We write $`|\omega \rangle `$ expanded in the characteristic basis $`\{|1\rangle ,|2\rangle \}`$ of $`\rho `$ with positive expansion coefficients: $$|\omega \rangle =r^{1/2}|1\rangle _o|1\rangle +(1-r)^{1/2}|2\rangle _o|2\rangle .$$ $`(16)`$ This is a so-called Schmidt biorthogonal expansion (cf section 4 in or see ). The vectors $`\{|1\rangle _o,|2\rangle _o\}`$ are orthogonal state vectors in the state space of the opposite subsystem. (One may define $`|\omega \rangle `$ via (16) by choosing any such subbasis.) A suitable observable on the opposite subsystem that is a yes-no one on $`|\omega \rangle `$ has the following spectral form: $$A_o=a_1|\mu _1\rangle _o\langle \mu _1|_o+a_2|\mu _2\rangle _o\langle \mu _2|_o,\qquad a_1\ne a_2,$$ $`(17)`$ where (the state vectors) $`|\mu _1\rangle _o`$ and $`|\mu _2\rangle _o`$ are required to be (mutually orthogonal) linear combinations of $`|1\rangle _o`$ and $`|2\rangle _o`$. One should note that one of the exhibited characteristic values of $`A_o`$ can be zero if the opposite-subsystem state space is two dimensional. But if it is three- or more dimensional, then both $`a_1`$ and $`a_2`$ must be nonzero.
Then, as evident from (17), $`A_o`$ necessarily has zero in its spectrum (though it is not exhibited in (17)). We treat the characteristic values $`\{a_1,a_2\}`$ as irrelevant, i. e., we consider the whole class of observables having the same characteristic vectors $`\{|\mu _1\rangle _o,|\mu _2\rangle _o\}`$ as (essentially) one observable (as is often done). Further, the characteristic vectors can be written in the following suitable form: $$|\mu _1\rangle _o=q|1\rangle _o+\left(1-q^2\right)^{1/2}e^{i\lambda }|2\rangle _o,$$ $`(18a)`$ $$0\le q\le 1,\qquad 0\le \lambda <2\pi ;$$ $`(18b)`$ $$|\mu _2\rangle _o=\left(1-q^2\right)^{1/2}|1\rangle _o-qe^{i\lambda }|2\rangle _o;$$ $`(19)`$ where $`\{|1\rangle _o,|2\rangle _o\}`$ are determined by (or determine) the composite-system state vector $`|\omega \rangle `$ (cf (16)). We call the $`\left(A_o\otimes 1\right)`$ measurement on the composite system in the state $`|\omega \rangle `$ the $`q,\lambda `$-measurement. Now, one can make the following claim , which answers the question from the title of the section: If a $`p,\vartheta `$-decomposition (12) of a given minimal-term mixture state operator $`\rho `$ (cf (10)) is given, and a minimal-term-entanglement composite-system state vector $`|\omega \rangle `$ (cf (16)) implying $`\rho `$ as its subsystem state operator is also given, then the following $`q,\lambda `$-measurement performed on $`|\omega \rangle `$, and no other one, gives rise to the mentioned $`p,\vartheta `$-decomposition: $$q=\left(w/r\right)^{1/2}p,$$ $`(20a)`$ where $`w`$ is the statistical weight of $`|\varphi \rangle \langle \varphi |`$ in decomposition (12) given by (13), and $$\lambda =2\pi -\vartheta .$$ $`(20b)`$ The claim is proved in Appendix 2. Inverting the question from the title of the section, the (second) answer is as follows: A given $`q,\lambda `$-measurement (cf (17)-(19)) on the composite-system state $`|\omega \rangle `$ given by (16) gives rise to the following $`p,\vartheta `$-decomposition (12) of the subsystem state operator $`\rho `$ implied by $`|\omega \rangle `$: $$p=\left(r/w\right)^{1/2}q,$$ $`(21a)`$ where $`w`$ is the statistical weight given by (13), which can now more suitably be written as $$w=(2r-1)q^2+(1-r);$$ $`(21b)`$ and, finally, $$\vartheta =2\pi -\lambda .$$ $`(21c)`$ The validity of this claim is proved in Appendix 3. The two claims made establish a correspondence between the set of all decompositions (12) and the set of all suitable yes-no measurements (cf (17)) on the opposite subsystem. “Suitability” here means that the two characteristic state vectors $`|\mu _1\rangle _o`$ and $`|\mu _2\rangle _o`$ exhibited in (17) span the range of the opposite-subsystem state operator of $`|\omega \rangle `$ (cf (16)), and, as a consequence, one can expand $`|\omega \rangle `$ in them (cf (22) below).

## IV CONCLUDING REMARKS

Let us return from detail to the global conceptual view. If a composite-system state vector $`|\omega \rangle `$ is given in a two-term Schmidt biorthogonal expansion (16), we can expand it in any orthonormal basis $`\{|\mu _1\rangle _o,|\mu _2\rangle _o\}`$ in the range of the opposite-subsystem state operator $`\rho _o\left(\equiv Tr|\omega \rangle \langle \omega |\right)`$ (“$`Tr`$” denotes here the partial trace over the subsystem at issue): $$|\omega \rangle =|\mu _1\rangle _o|\varphi _1^{\prime }\rangle +|\mu _2\rangle _o|\varphi _2^{\prime }\rangle .$$ $`(22)`$ (The vectors $`|\varphi _i^{\prime }\rangle `$, i=1,2, are, in general, not normalized, i. e., they are not state vectors.)
Then the nonselective (or all-results) version of ideal measurement of the observable $`\left(A_o\otimes 1\right)`$, where $`A_o`$ is given by (17) in terms of the basis considered, converts $`|\omega \rangle `$ into the mixed state $$\langle \varphi _1^{\prime }|\varphi _1^{\prime }\rangle |\mu _1\rangle _o\langle \mu _1|_o\otimes \left(|\varphi _1^{\prime }\rangle \langle \varphi _1^{\prime }|/\langle \varphi _1^{\prime }|\varphi _1^{\prime }\rangle \right)+$$ $$\langle \varphi _2^{\prime }|\varphi _2^{\prime }\rangle |\mu _2\rangle _o\langle \mu _2|_o\otimes \left(|\varphi _2^{\prime }\rangle \langle \varphi _2^{\prime }|/\langle \varphi _2^{\prime }|\varphi _2^{\prime }\rangle \right)$$ $`(23)`$ with $`\langle \varphi _i^{\prime }|\varphi _i^{\prime }\rangle `$, i=1,2, as the statistical weights (cf the Lüders formula (4) that applies to selective or particular-result measurement). This composite-system mixture implies the same subsystem state $`\rho `$ as $`|\omega \rangle `$ does (as one can see from (22) and (23)), and it also implies its decomposition into pure states: $$\rho =\langle \varphi _1^{\prime }|\varphi _1^{\prime }\rangle \left(|\varphi _1^{\prime }\rangle \langle \varphi _1^{\prime }|/\langle \varphi _1^{\prime }|\varphi _1^{\prime }\rangle \right)+\langle \varphi _2^{\prime }|\varphi _2^{\prime }\rangle \left(|\varphi _2^{\prime }\rangle \langle \varphi _2^{\prime }|/\langle \varphi _2^{\prime }|\varphi _2^{\prime }\rangle \right).$$ $`(24)`$ In the two answers in the preceding section we have $$\langle \varphi _1^{\prime }|\varphi _1^{\prime }\rangle =w,\qquad \langle \varphi _2^{\prime }|\varphi _2^{\prime }\rangle =1-w;$$ $`(25a,b)`$ $$|\varphi _1^{\prime }\rangle /\left(\langle \varphi _1^{\prime }|\varphi _1^{\prime }\rangle ^{1/2}\right)=|\varphi \rangle ,\qquad |\varphi _2^{\prime }\rangle /\left(\langle \varphi _2^{\prime }|\varphi _2^{\prime }\rangle ^{1/2}\right)=|\varphi ^c\rangle .$$ $`(25c,d)`$ The state decompositions (23) and (24) are actual (not just potential or mathematically possible like, e. g., the expansion (22)) because, if one takes into account the (suppressed) states of the measuring instrument that has performed the $`\left(A_o\otimes 1\right)`$-measurement, different “positions” of the “pointer” (symbolically stated) correspond to the two terms. Finally, let us discuss the special case when (24) is an orthogonal decomposition of $`\rho `$, hence, in principle, a measurement. It is called distant measurement , , because the subsystem is not dynamically influenced by the opposite-subsystem measurement. If the characteristic value $`r`$ of $`\rho `$ is not degenerate , (10) is the only orthogonal decomposition of $`\rho `$. In this case distant measurement takes place if and only if $`|\mu _i\rangle _o=|i\rangle _o`$, $`i=1,2`$ (cf (16)), and we are dealing with a common characteristic subbasis of $`A_o`$ and $`\rho _o`$. Commutation of $`A_o`$ with $`\rho _o`$ is a necessary and sufficient condition for distant measurement for a general entangled two-subsystem state vector, as proved in and . If $`r`$ is degenerate , every choice of $`A_o`$ (as long as $`|\mu _1\rangle _o`$ given by (18a) and $`|\mu _2\rangle _o`$ given by (19) span the range of $`\rho _o`$) leads to distant measurement, because the state operator is proportional to the identity in $`R\left(\rho _o\right)`$, and, hence, $`A_o`$ always commutes with it. A beautiful realization of essentially the entangled composite state vector $`|\chi \rangle `$ given by (2) in a real experiment has been reported : Instead of two slits, there are two processes of parametric down conversion. We shall disregard, say, the so-called signal out of the pair of down-converted photons, and speak only about the so-called idler. The idler from the first process is reflected back so that it may spatially overlap with the idler created in the second process and thus approach a detector. Writing the state vector of the former as $`|\psi _1\rangle `$, and that of the latter as $`|\psi _2\rangle `$, the photon may stem either from the first or from the second process, and thus one obtains the above interference state $`|\psi \rangle `$ given by (1).
The phenomenon of interference is observed by moving the mentioned reflecting mirror, thus changing $`|\psi _1\rangle `$ and changing the detection probability. Both signal and idler are vertically polarized in the very processes of down-conversion. The role of the (mutually orthogonal) polarizers at the slits (see the Introduction) is here played by a quarter-wave plate that is put in the way of the idler from the first process (to be traversed to the mirror and back). It serves to rotate the polarization from vertical to horizontal. Thus, essentially the above entangled state $`|\chi \rangle `$ (cf. (2)) comes about. Putting an analyzer at $`45^0`$ in front of the detector, erasure is observed on the photons that pass the analyzer and reach the detector (cf. (5)). If the analyzer is at $`-45^0`$, then the counter-interference state $`|\psi ^c\rangle `$ is obtained out of $`|\chi \rangle `$. Other angles of the analyzer would, if the photon passes, give rise to, or distantly prepare, the spatial state in other linear combinations of $`|\psi _1\rangle `$ and $`|\psi _2\rangle `$. And all this is only a small part of the mentioned experiment . Incidentally, it may be compared, at least partially, with a previous experiment , because they both give realization to Franson’s idea of superposing (coherently mixing), essentially, different instants of creation of the photon, which comes about due to some spatial detour that exceeds the coherence length. But in the recent experiment polarization is included and manipulated in a practical way, and thus Ryff’s idea of observing quantum erasure in Franson’s experiment can be considered realized. As a matter of fact, the experiment seems to be independent of these ideas, because the corresponding articles are not among the references of .

## A

We rewrite the relations (10), (11a), (13) and (14) in a redundant, but more compact and, for the proof, more suitable form: $$\rho =r|1\rangle \langle 1|+r^{\prime }|2\rangle \langle 2|,\qquad r^{\prime }\equiv 1-r;$$ $`(A.1)`$ $$|\varphi \rangle \equiv p|1\rangle +p^{\prime }e^{i\vartheta }|2\rangle ,\qquad p^{\prime }\equiv \left(1-p^2\right)^{1/2};$$ $`(A.2)`$ $$w\equiv rr^{\prime }/\left(p^2r^{\prime }+p^{\prime 2}r\right),\qquad w^{\prime }\equiv 1-w;$$ $`(A.3)`$ $$|\varphi ^c\rangle \equiv \left[\left(r-wp^2\right)/w^{\prime }\right]^{1/2}|1\rangle +\left[\left(r^{\prime }-wp^{\prime 2}\right)/w^{\prime }\right]^{1/2}e^{i(\vartheta +\pi )}|2\rangle .$$ $`(A.4)`$ Lemma A1. Let a parameter $`s`$ be given such that $`0<s\le 1`$. Then for each value of $`s`$ from the given interval, one can decompose $`\rho `$ uniquely as follows: $$\rho =ws|\varphi \rangle \langle \varphi |+(1-ws)\rho ^{\prime },$$ $`(A.5)`$ $$\rho ^{\prime }\equiv \left((w-ws)/(1-ws)\right)|\varphi \rangle \langle \varphi |+\left((1-w)/(1-ws)\right)|\varphi ^c\rangle \langle \varphi ^c|.$$ $`(A.6)`$ If $`s>1`$, then there exists no statistical operator $`\rho ^{\prime }`$ such that decomposition (A.5) is valid. Proof. Replacing (A.6) in (A.5), the latter reduces to (12): $$\rho =w|\varphi \rangle \langle \varphi |+w^{\prime }|\varphi ^c\rangle \langle \varphi ^c|.$$ $`(A.7)`$ Evidently, (A.5) is valid if and only if so is (A.7). Checking this relation, one easily obtains $$\langle 1|LHS|1\rangle =\langle 1|RHS|1\rangle $$ and $$\langle 2|LHS|2\rangle =\langle 2|RHS|2\rangle .$$ Further, $`\langle 1|LHS|2\rangle =0`$, and $$\langle 1|RHS|2\rangle =wpp^{\prime }e^{-i\vartheta }-\left(r-wp^2\right)^{1/2}\left(r^{\prime }-wp^{\prime 2}\right)^{1/2}e^{-i\vartheta }=$$ $$wpp^{\prime }e^{-i\vartheta }-\left(rr^{\prime }-rwp^{\prime 2}-r^{\prime }wp^2+w^2p^2p^{\prime 2}\right)^{1/2}e^{-i\vartheta }.$$ Substituting here $`rr^{\prime }`$ from (A.3), one obtains $$\langle 1|RHS|2\rangle =$$ $$wpp^{\prime }e^{-i\vartheta }-\left(r^{\prime }wp^2+rwp^{\prime 2}-rwp^{\prime 2}-r^{\prime }wp^2+w^2p^2p^{\prime 2}\right)^{1/2}e^{-i\vartheta }=0.$$ The operator $`\rho ^{\prime }`$ is unique because it is determined by (A.5) in terms of the rest of the entities in this relation.
Assuming $`s^{\prime }>1`$ and the validity of (A.5) with $`s\rightarrow s^{\prime }`$ and $`\rho ^{\prime }\rightarrow \rho ^{\prime \prime }`$, where $`\rho ^{\prime \prime }`$ is some hypothetical statistical operator, we can write (A.5) as follows: $$\rho =w|\varphi \rangle \langle \varphi |+(ws^{\prime }-w)|\varphi \rangle \langle \varphi |+(1-ws^{\prime })\rho ^{\prime \prime }.$$ Subtracting (A.7) from this, one obtains $$\left((ws^{\prime }-w)/w^{\prime }\right)|\varphi \rangle \langle \varphi |+\left((1-ws^{\prime })/w^{\prime }\right)\rho ^{\prime \prime }=|\varphi ^c\rangle \langle \varphi ^c|.$$ This is not possible due to the homogeneity of the state on the RHS and the fact that $`|\varphi ^c\rangle \ne |\varphi \rangle `$ (or else $`\rho =|\varphi \rangle \langle \varphi |`$, which is not true because $`\rho `$ is assumed to be a mixture). This reductio ad absurdum argument proves that decomposition (A.5) with $`s>1`$ is not possible. ∎ Corollary A1. Decomposition (A.7) is the only one that decomposes the mixture $`\rho `$ into two pure states one of which is $`|\varphi \rangle \langle \varphi |`$. Proof. Let us assume ab contrario that there exists another decomposition $$\rho =w^{\prime \prime }|\varphi \rangle \langle \varphi |+(1-w^{\prime \prime })|\varphi ^{\prime \prime }\rangle \langle \varphi ^{\prime \prime }|.$$ If $`w^{\prime \prime }>w`$, then we can rewrite this in the form of (A.5) with $`s>1`$, but, according to Lemma A1, this is not possible. If $`w^{\prime \prime }<w`$, then we can, again, put this in the form of (A.5), but this time with $`s<1`$. Then one further obtains $$|\varphi ^{\prime \prime }\rangle \langle \varphi ^{\prime \prime }|=\rho ^{\prime }.$$ This is not possible because $`\rho ^{\prime }`$ is a mixture (cf (A.6)). Finally, if $`w^{\prime \prime }=w`$, then $`|\varphi ^{\prime \prime }\rangle \langle \varphi ^{\prime \prime }|`$ is determined by the rest of the entities in the above decomposition. Thus, it cannot differ from $`|\varphi ^c\rangle \langle \varphi ^c|`$ (cf (A.7)).∎

## B

Let a $`p,\vartheta `$-decomposition of $`\rho `$ (cf (10)-(14)) be given together with a composite-system state vector $`|\omega \rangle `$ that implies $`\rho `$ as its subsystem state operator (cf (16)). To evaluate the corresponding yes-no measurement, we write (22) with (25a-d) substituted in it: $$|\omega \rangle =w^{1/2}|\mu _1\rangle _o|\varphi \rangle +(1-w)^{1/2}|\mu _2\rangle _o|\varphi ^c\rangle .$$ $`(A.8)`$ Substituting here $`|\omega \rangle `$ from (16), the partial scalar product (see section 2 in or see ) with $`\langle \mu _1|_o`$ from the left gives $$r^{1/2}\left(\langle \mu _1|_o|1\rangle _o\right)|1\rangle +(1-r)^{1/2}\left(\langle \mu _1|_o|2\rangle _o\right)|2\rangle =w^{1/2}|\varphi \rangle $$ on account of $`\langle \mu _1|_o|\mu _2\rangle _o=0`$. Inserting the explicit forms of $`|\varphi \rangle `$ and $`|\mu _1\rangle _o`$, i. e., (11a) and (18a) respectively, one further has $$r^{1/2}q|1\rangle +(1-r)^{1/2}(1-q^2)^{1/2}e^{-i\lambda }|2\rangle =w^{1/2}p|1\rangle +w^{1/2}(1-p^2)^{1/2}e^{i\vartheta }|2\rangle $$ or, putting the corresponding expansion coefficients on the two sides equal, one obtains $$r^{1/2}q=w^{1/2}p,\qquad (1-r)^{1/2}(1-q^2)^{1/2}e^{-i\lambda }=w^{1/2}(1-p^2)^{1/2}e^{i\vartheta }.$$ $`(A.9a,b)`$ Relation (A.9a) can be rewritten as $$q=(w/r)^{1/2}p.$$ $`(A.10a)`$ To evaluate $`w`$, we utilize relation (A.9b), where equality of the norms, upon squaring, implies $$(1-r)(1-q^2)=w(1-p^2).$$ $`(A.11)`$ Replacing here $`q^2`$ from (A.10a), one derives $$w=r(1-r)/\left(p^2(1-r)+(1-p^2)r\right),$$ $`(A.10b)`$ which is, actually, relation (13). In relations (A.10a) and (A.10b) the dependence of $`q`$ on $`p`$ is expressed via $`w`$. The phase factors in (A.9b) give the second part of our unique solution: $$\lambda =2\pi -\vartheta .$$ $`(A.10c)`$

## C

Let a $`q,\lambda `$-measurement (cf (17)-(19)) be given together with the composite-system state vector $`|\omega \rangle `$ determined by (16) in which the measurement is to be performed.
To evaluate the corresponding $`p,\vartheta `$-decomposition of $`\rho `$, the state operator of the second subsystem, we return to the argument presented in Appendix 2 leading to (A.9a) and (A.9b). These relations connect $`p,\vartheta `$ and $`q,\lambda `$ independently of which of them is given a priori. Solving (A.10a) for $`p`$, we obtain $$p=(r/w)^{1/2}q,$$ $`(A.12a)`$ and solving (A.11) with (A.12a) for $`w`$, we end up with $$w=(2r-1)q^2+(1-r).$$ $`(A.12b)`$ The second part of our unique solution comes from inverting (A.10c): $$\vartheta =2\pi -\lambda .$$ $`(A.12c)`$
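The correspondence proved in Appendices 2 and 3 lends itself to the same kind of numerical cross-check. The sketch below (again a minimal illustration assuming numpy, with the same arbitrary $`r`$, $`p`$, $`\vartheta `$ as above) builds $`|\omega \rangle `$ from (16), forms $`|\mu _1\rangle _o`$ of (18a) with $`q`$ and $`\lambda `$ given by (20a) and (20b), and verifies via the partial scalar product that the result “yes” occurs with probability $`w`$ and distantly prepares $`|\varphi \rangle `$:

```python
import numpy as np

r, p, theta = 0.3, 0.6, 1.1   # same illustrative values as before

k1 = np.array([1.0, 0.0], dtype=complex)
k2 = np.array([0.0, 1.0], dtype=complex)

# Composite state |omega>, eq. (16); index ordering kron(opposite, system)
omega = np.sqrt(r) * np.kron(k1, k1) + np.sqrt(1 - r) * np.kron(k2, k2)

# Target p,theta-decomposition data, eqs. (11a) and (13)
w = r * (1 - r) / (p**2 * (1 - r) + (1 - p**2) * r)
phi = p * k1 + np.sqrt(1 - p**2) * np.exp(1j * theta) * k2

# q,lambda-measurement predicted by eqs. (20a) and (20b)
q, lam = np.sqrt(w / r) * p, 2 * np.pi - theta
mu1 = q * k1 + np.sqrt(1 - q**2) * np.exp(1j * lam) * k2   # eq. (18a)

# Partial scalar product <mu1|_o |omega>: the unnormalized conditional state
sub = omega.reshape(2, 2).T @ mu1.conj()   # reshape rows = opposite index
prob = np.vdot(sub, sub).real              # probability of the result "yes"
print(np.isclose(prob, w), np.allclose(sub / np.sqrt(prob), phi))  # True True
```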
no-problem/9903/cond-mat9903307.html
ar5iv
text
# Modelling the Nonlinear High-Frequency Response of a Short Josephson Junction under Two-Frequency Irradiation

## I Introduction

The high-frequency nonlinear response of a Josephson junction (JJ) is of significant interest because JJs are necessary units of almost all active microwave and millimeter (mm) wave devices , and because weak links, which are likely to be present even in the highest quality samples of high-$`T_c`$ superconductors (HTS), can be modelled as JJs . The RSJ model is often used to simulate the characteristics of JJ-based devices , and was shown to give good agreement with experimental data on point contacts and microbridges in the microwave and mm wave ranges, where the capacitance of the junction can be neglected. In the present paper we report simulations of the surface impedance of a JJ at the frequency of a low-amplitude signal, hereafter referred to as the “high-frequency response”, as a function of the current amplitude of another, elevated-power high-frequency signal. This case can be considered as a model of a microwave-biased electromagnetic radiation detector, which has been shown to have improved sensitivity and noise figures when compared with dc-biased detectors . Another possible application of the model is to the performance of microwave parametric amplifiers and Josephson mixers . In addition, the above model can also describe the nonlinear microwave response of superconducting weak links, which is often investigated using the so-called “pump-probe” technique . This technique is of particular interest because it allows one to modulate or to pulse the powerful microwave signal, whilst measuring the surface impedance of the sample with the help of the other, low-amplitude continuous-wave microwave signal at a different frequency. In such a way, the pump-probe method avoids substantial heating effects and allows the study of intrinsic nonlinear phenomena in superconductors. However, using the pump-probe technique, one has to know how the nonlinear surface impedance, measured at the pump frequency, relates to that measured at the frequency of the probe signal, with respect to which the superconductor is in the linear regime. Although the model proposed in this paper cannot be considered as a model of the nonlinear response of HTS, an extension of the model to the case of a 2D JJ array with a random distribution of $`I_cR_n`$-products (as recently proposed by Herd et al. for the single-frequency case ) would allow the description of experiments on HTS thin films in the pump-probe regime .

## II Numerical Simulation

In the case of two signals applied to a short JJ, the sine-Gordon equation (which describes the time dependence of the order-parameter phase difference $`\phi `$ across the junction) in dimensionless form can be written as follows: $$\frac{d\phi }{d\tau }=i_{pm}\mathrm{sin}(\mathrm{\Omega }_{pm}\tau )+i_{pr}\mathrm{sin}(\mathrm{\Omega }_{pr}\tau )-\mathrm{sin}\phi ,$$ (1) where $`\tau =\beta t`$, $`i_{pm}=I_{pm}/I_c`$, $`i_{pr}=I_{pr}/I_c`$, $`I_{pm}`$ and $`I_{pr}`$ are the current amplitudes of the pump and probe signals respectively, $`\omega _{pm}=2\pi f_{pm}`$ and $`\omega _{pr}=2\pi f_{pr}`$ are the corresponding circular frequencies, $`I_c`$ is the critical current of the junction, $`\beta =2eR_nI_c/\mathrm{\hbar }`$, $`\mathrm{\Omega }_{pm}=\omega _{pm}/\beta `$, $`\mathrm{\Omega }_{pr}=\omega _{pr}/\beta `$, and $`R_n`$ is the surface resistance of the junction in the normal state.
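Equation (1) is a first-order ordinary differential equation and can be integrated numerically in a straightforward way. The following minimal sketch (Python with numpy; it is not the code used for the paper, and the step size and integration length are arbitrary choices) advances $`\phi (\tau )`$ with a fourth-order Runge-Kutta scheme and returns the reduced voltage $`\dot{\phi }`$:

```python
import numpy as np

def simulate_phase(i_pm, i_pr, om_pm, om_pr, dtau=0.01, n_periods=8):
    """Integrate eq. (1) by fourth-order Runge-Kutta.

    Returns the time grid tau and the reduced voltage dphi/dtau."""
    t_end = 2 * np.pi / min(om_pm, om_pr) * n_periods   # several slow periods
    tau = np.arange(0.0, t_end, dtau)

    def f(t, phi):
        return i_pm * np.sin(om_pm * t) + i_pr * np.sin(om_pr * t) - np.sin(phi)

    phi = np.empty_like(tau)
    phi[0] = 0.0                                        # arbitrary initial phase
    for k in range(len(tau) - 1):
        t, y, h = tau[k], phi[k], dtau
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        phi[k + 1] = y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return tau, f(tau, phi)
```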
Equation (1) does not take into account the effect of the junction capacitance, which can be neglected at microwave frequencies. Generally, the solution of (1) is not necessarily periodic in time. Only when the ratio $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}`$ is a rational number is the solution of (1) periodic, with a period equal to the least common multiple (LCM) of the pump and probe signal periods. If we expand the phase derivative $`\dot{\phi }`$ into a Fourier series with respect to time over the LCM period of the two signals, we obtain coefficients which couple the voltage $`\dot{\phi }`$ to the current $`I_{rf}`$. If we then single out the series’ terms at the probe frequency, we obtain the surface impedance at the relevant frequency as follows: $$Z_s=R_s+jX_s=\underset{n\rightarrow \mathrm{\infty }}{\mathrm{lim}}\frac{\mathrm{\Omega }R_n}{\pi i_{pr}n}\int _0^{2\pi n/\mathrm{\Omega }}\dot{\phi }(\tau )\mathrm{exp}(j\mathrm{\Omega }_{pr}\tau )d\tau ,$$ (2) where $`\mathrm{\Omega }=2\pi /T`$, and $`T`$ is the LCM of the pump and probe signal periods. Because one can assume different initial conditions for (1), its solution is not strictly periodic with $`T`$, and hence in (2) integration over a few periods is required to get an appropriate convergence of the results. The parameters used for the simulation are as follows: $`R_n=10^{-3}\mathrm{\Omega }`$, $`I_c=0.5`$ A, $`f_{pm}=3.6\times 10^{10}`$ Hz, $`f_{pr}=`$(0.036–7.2)$`\times 10^{10}`$ Hz, $`\mathrm{\Omega }_{pm}=2\pi \mathrm{\hbar }f_{pm}/(2eI_cR_n)=0.141`$. Results of the simulation for ratios $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}`$ varied from 0.5 to 100, and for different values of the normalized probe current amplitude $`i_{pr}`$ (0.05, 0.25, and 0.7), are plotted in Fig. 1, Fig. 2, Fig. 3 and Fig. 4. For $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}>1`$ (Fig. 1a, Fig. 2a and Fig. 3a) at low $`i_{pr}`$ (0.05), the surface resistance $`R_s^{f_{pr}}`$ as a function of $`i_{pm}`$ is a sequence of peaks of about the same height with a “pedestal” level approximately equal to the value of $`R_s^{f_{pr}}`$ in the linear regime. This picture is rather different from that expected for the single-frequency situation. In the latter case, the surface resistance $`R_s`$ has a staircase-like form, starting to increase rapidly for $`i_{pm}>1`$, and gradually approaching the $`R_n`$ value as $`i_{pm}\rightarrow \mathrm{\infty }`$. With regard to the surface reactance $`X_s`$, in the single-frequency case it oscillates around zero with an amplitude decaying with increased $`i_{pm}`$ (see, e.g. , ). However, in the two-frequency regime, no obvious decay of the peak amplitudes in $`X_s`$ up to $`i_{pm}=4`$ is observed. In addition, every oscillation peak seen in $`X_s(i_{pm})`$ in the single-frequency regime translates into a peak with a complicated structure, containing many upward and downward minor peaks of smaller amplitude. With increased $`i_{pr}`$ the major peaks in $`X_s^{f_{pr}}(i_{pm})`$ broaden, and a more complicated “fine” structure of minor peaks develops (see Fig. 1b,c, Fig. 2b,c and Fig. 3b,c). As far as $`R_s^{f_{pr}}`$ is concerned, an increase of $`i_{pr}`$ leads to the appearance of steps in $`R_s^{f_{pr}}(i_{pm})`$, similar to those seen in $`R_s(i_{pm})`$ for the single-frequency regime. The higher $`i_{pr}`$, the more steps are observed in $`R_s^{f_{pr}}(i_{pm})`$ before it levels off and starts to oscillate around an average value close to unity for $`i_{pm}\gg 1`$.
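Given a solution of (1), the impedance follows from a finite-time version of the Fourier integral in (2). A sketch of the procedure (reusing simulate_phase from the previous block; the parameter values follow the text, while the ratio $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}=10`$ and the drive amplitudes are illustrative choices) might look like:

```python
import numpy as np

# Physical parameters from the text (note R_n = 1e-3 Ohm)
hbar, e = 1.0546e-34, 1.602e-19
r_n, i_c, f_pm = 1e-3, 0.5, 3.6e10
beta = 2 * e * r_n * i_c / hbar
om_pm = 2 * np.pi * f_pm / beta      # ~0.14, close to the quoted 0.141
om_pr = om_pm / 10                   # illustrative ratio Omega_pm/Omega_pr = 10

i_pm, i_pr = 1.5, 0.05               # illustrative drive amplitudes
tau, phidot = simulate_phase(i_pm, i_pr, om_pm, om_pr)

# Finite-time version of eq. (2): Fourier component at the probe frequency
dtau = tau[1] - tau[0]
integral = np.sum(phidot * np.exp(1j * om_pr * tau)) * dtau
z_s = 2 * r_n / (i_pr * tau[-1]) * integral
print(z_s.real, z_s.imag)            # R_s and X_s in the convention of eq. (2)
```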
Contrary to the case of $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}>1`$, where the major peaks in $`R_s^{f_{pr}}(i_{pm})`$ are almost symmetrical with respect to a vertical line drawn through the middle of their width, in the case of $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}<1`$ these peaks are clearly asymmetric (see Fig. 4). Another distinctive feature of this case is the discontinuous double-peak structures (an upward peak followed by a downward one) at the beginning and end of every major peak. One further significant difference of $`R_s^{f_{pr}}(i_{pm})`$ in the case of $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}<1`$, as compared to the case with $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}>1`$, is the appearance of regions with negative values of $`R_s^{f_{pr}}`$. This means that under these particular conditions the JJ can contribute energy to the external circuit, i. e., it works as a generator. This effect was theoretically predicted and experimentally observed in JJs made of low-temperature superconductors, and was called “the effect of nondegenerated single-frequency parametric regeneration” . As the theoretical analysis showed, this phenomenon can be realized in any parametric element whose reactive parameter can take negative values as it changes with time . All other features of $`R_s^{f_{pr}}(i_{pm})`$ for the case of $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}<1`$, such as the appearance of steps, an increase in their number, and a shift of the oscillatory part of the dependence to higher $`i_{pm}`$ with increased $`i_{pr}`$, are similar to those seen in the case of $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}>1`$. As far as $`X_s^{f_{pr}}(i_{pm})`$ is concerned, features like the asymmetry of the major peaks and the discontinuous double-peak structures are observed, similar to those present in $`R_s^{f_{pr}}(i_{pm})`$.

### A Implications for applications

One of the possible applications of the two-frequency regime simulated in this paper is a microwave-biased JJ detector. An advantage of this regime is that the amplitude of the oscillation peaks (or, equivalently, of the impedance steps) can be made several times (up to a factor of 3–4) higher than the resistance of the junction in the normal state, especially at a small probe current (see Fig. 1a, Fig. 2a and Fig. 3a). This should lead to an enhanced sensitivity of the detector, as compared to the single-frequency regime. Although similar results (increased step heights) have also been obtained for a dc-biased JJ , a microwave-biased detector was shown to benefit from a reduced noise temperature and an enhanced responsivity as compared to the dc-biased one . Moreover, since the step amplitudes in the saturation regime ($`i_{pm}\gg 1`$) are almost independent of the pump current, such a detector will possess an amplitude-independent sensitivity. In the case when $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}<1`$, the JJ can be used as a parametric amplifier operating in the single-frequency nondegenerate regime which, when compared to the self-pumped regime, was shown to give a reduced noise temperature and a narrower frequency response of the junction at the probe frequency , allowing one to use high-quality-factor resonators for matching the junction with the external circuits.
## III Conclusion

Our numerical simulations have shown that the nonlinear high-frequency response $`Z_s^{f_{pr}}`$ of a short JJ in the regime of two-frequency irradiation can be rather different from the surface impedance $`Z_s`$ measured in the single-frequency regime. Depending on the ratio of the pump to the probe frequency, a number of new features in $`Z_s^{f_{pr}}(i_{pm})`$ are predicted. Among them are the absence of steps in $`R_s^{f_{pr}}(i_{pm})`$ at low probe currents ($`i_{pr}<0.05`$); persistent oscillations of $`R_s^{f_{pr}}(i_{pm})`$ around some average value which tends to unity with increased $`i_{pm}`$; a multiple-peak structure of $`X_s^{f_{pr}}(i_{pm})`$, which becomes more complicated with increased ratio $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}`$; and the appearance of regions with negative values of the surface resistance in $`R_s^{f_{pr}}(i_{pm})`$ for the case of $`\mathrm{\Omega }_{pm}/\mathrm{\Omega }_{pr}<1`$. At the same time, there are some features which are similar to those in the single-frequency regime. These are the appearance of steps in $`R_s^{f_{pr}}(i_{pm})`$ with increased $`i_{pr}`$, and the oscillation of $`X_s^{f_{pr}}(i_{pm})`$ around zero. The model presented here provides a useful knowledge base for the application of the JJ as a microwave-biased detector of electromagnetic radiation.

## Acknowledgment

A.V.V. thanks I.V.Yurkevich and A.S.Stepanenko for useful discussions.
no-problem/9903/hep-th9903208.html
ar5iv
text
# Universality Class of Confining Strings

FU Berlin preprint

M. Cristina Diamantini<sup>∗</sup>, Hagen Kleinert<sup>∗∗</sup> and C. A. Trugenberger<sup>∗∗∗</sup>

Institut für Theoretische Physik, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin, Germany

<sup>∗</sup> Supported by an A. v. Humboldt fellowship. On leave of absence from I.N.F.N. and University of Perugia; e-mail: diamanti@einstein.physik.fu-berlin.de
<sup>∗∗</sup> e-mail: kleinert@physik.fu-berlin.de, http://www.physik.fu-berlin/~kleinert
<sup>∗∗∗</sup> e-mail: cat@kalymnos.unige.ch

A recently proposed model of confining strings has a non-local world-sheet action induced by a space-time Kalb-Ramond tensor field. Here we show that, in the large-$`D`$ approximation, an infinite set of ghost- and tachyon-free truncations of the derivative expansion of this action all lead to $`c=1`$ models. Their infrared limit describes smooth strings with world-sheets of Hausdorff dimension $`D_H=2`$ and long-range orientational order, as expected for QCD strings.

March 1999

## 1. Introduction

Notwithstanding the large amount of evidence suggesting the possibility of a string description of quark confinement, a consistent model of non-critical strings has yet to be found. The simplest possibility, provided by the Nambu-Goto string, can be consistently quantized only in $`D=26`$ or $`D\le 1`$ dimensions, due to the conformal anomaly . Large world-sheets in Euclidean space crumple, and the model is inappropriate to describe the expected smooth strings dual to QCD . In an attempt to cure this problem, a naively marginal term proportional to the square of the extrinsic curvature was added to the action . The resulting rigid string is, however, different from the Nambu-Goto string only in the ultraviolet region, since the new term turned out to be infrared irrelevant. Thus the rigidity did not really help in preventing crumpling . Recent progress in this field is based on two types of actions. A first model of confining strings [6,7] is based on an induced string action which can be explicitly derived for compact QED [6,8,9] and for Abelian-projected $`SU(2)`$ . In its non-local formulation, the model was independently proposed in . A second proposal , analyzed further in , is based on a string action in a five-dimensional curved space-time with the quarks living on a four-dimensional horizon. Both proposals, whose interrelation has been investigated in , enjoy the necessary zigzag invariance of QCD strings. In its world-sheet formulation, the induced string possesses a non-local action with negative stiffness [11,8], just as the world-sheets of magnetic strings of the Abelian Higgs model in the London limit of infinite Higgs mass [9,15]. Such an action may be brought to a quasi-local form via a derivative expansion of the interaction between the surface elements. For a conventional renormalization group study of the geometric properties of the fluctuating world-sheets we truncate this derivative expansion. This makes the model non-unitary, but in a spurious way. In the truncated action the stiffness is negative, so that a stable truncation must include at least a sixth-order term in the derivatives. In we have shown that this term has the desired properties of solving the infrared problem of Nambu-Goto and rigid strings in the large-$`D`$ approximation. While perturbatively irrelevant, it becomes relevant in the large-$`D`$ approximation, a phenomenon familiar from the 3$`D`$ Gross-Neveu model .
It suppresses crumpling, and the model has an infrared-stable fixed point corresponding to a tensionless smooth string whose world-sheet has Hausdorff dimension $`D_H=2`$. The corresponding long-range orientational order is caused by a frustrated antiferromagnetic interaction between normals, a mechanism first recognized in and confirmed by recent numerical simulations . The purpose of this paper is to determine the universality class of confining strings, as encoded in the finite-size scaling of the Euclidean effective action of the model on a cylinder of (spatial) circumference $`R`$ . In the limit of large $`R`$ this takes the form $$\underset{\beta \rightarrow \mathrm{\infty }}{\mathrm{lim}}\frac{S^{\mathrm{eff}}}{\beta }=𝒯R-\frac{\pi c(D-2)}{6R}+\mathrm{\cdots },$$ for $`(D-2)`$ transverse degrees of freedom, the universality class being encoded in the pure number $`c`$. This suggests that the effective theory describing the infrared behaviour is a conformal field theory (CFT) with central charge $`c`$ . In this case the number $`c`$ also fixes the Lüscher term in the quark-antiquark potential: $$V(R)=𝒯R-\frac{\pi c(D-2)}{24R}+\mathrm{\cdots }.$$ By interchanging $`R`$ in (1.1) with the inverse temperature $`\beta `$ we obtain immediately the low-temperature behaviour of the model. We shall give an estimate of the deconfinement temperature, as well as its range of validity. Finally, we shall generalize the results of to higher truncations of the original non-local action and show that the universality class and the geometric properties of world-sheets are largely independent of the level of the truncation, implying the irrelevance of the truncation, and of the spurious non-unitarity deriving from it, altogether.

## 2. Finite-size scaling

The truncated world-sheet model of confining strings proposed in is defined in Euclidean space by the action $$S=\int d^2\xi \sqrt{g}g^{ab}𝒟_ax_\mu \left(t-s𝒟^2+\frac{1}{m^2}𝒟^4\right)𝒟_bx_\mu ,$$ where $`g`$ and $`𝒟_a`$ represent, respectively, the determinant and the covariant derivatives with respect to the induced metric $`g_{ab}=\partial _ax_\mu \partial _bx_\mu `$ on the world-sheet $`𝐱(\xi _0,\xi _1)`$. The first term represents a bare surface tension $`2t`$, while the second accounts for rigidity with a stiffness parameter $`s`$, which is negative when generated dynamically by a tensor field in four-dimensional space-time [11,8]. The last term ensures the stability of the model. Since it contains the square of the gradient of the extrinsic curvature matrices, it suppresses the formation of spikes on the world-sheet. In the large-$`D`$ approximation it generates a string tension proportional to the squared mass $`m^2`$, which takes control of the fluctuations where the orientational correlations die off. For $`t,s,m\rightarrow 0`$, one reaches an infrared fixed-point describing tensionless smooth strings with long-range orientational order . While the model (2.1) is a toy version of the action induced by an antisymmetric tensor field, it is known that QCD strings possess a curvature expansion of exactly this type. In this paper we shall analyze the leading large-$`D`$ behaviour of the effective action on a cylinder of (spatial) circumference $`R`$. This is the extension to our model of the calculations for the Nambu-Goto string and [25,26] for the rigid string. Contrary to these papers, however, we consider periodic boundary conditions as in , in order to avoid the problem of a non-uniform saddle-point metric pointed out in [24,25].
In order to simplify analytic computations we shall moreover equate the stiffness to its fixed-point value from the beginning by setting $`s=0`$ in (2.1). The large-$`D`$ calculation requires the introduction of a Lagrange multiplier matrix $`\lambda ^{ab}`$ enforcing the constraint $`g_{ab}=\partial _ax_\mu \partial _bx_\mu `$. The action (2.1) is thus extended to $$S\rightarrow S+\int d^2\xi \sqrt{g}\lambda ^{ab}\left(\partial _ax_\mu \partial _bx_\mu -g_{ab}\right).$$ The world-sheet is parametrized in a Gauss map as $`x_\mu (\xi )=(\xi _0,\xi _1,\varphi _i(\xi ))`$ with $`i=2,\mathrm{},D-1`$. Here $`-\beta /2\le \xi _0\le \beta /2`$ and $`-R/2\le \xi _1\le R/2`$, and the $`\varphi _i(\xi )`$ describe the $`D-2`$ transverse fluctuations. We look for a saddle-point with diagonal metric $`g_{ab}=\mathrm{diag}(\rho _0,\rho _1)`$ and Lagrange multiplier $`\lambda ^{ab}=\lambda g^{ab}`$. With this ansatz the extended action becomes $$\begin{array}{cc}\hfill S& =A_{\mathrm{ext}}\sqrt{\rho _0\rho _1}\left[(t+\lambda )\frac{\rho _0+\rho _1}{\rho _0\rho _1}-2\lambda \right]\hfill \\ & +\int d^2\xi \sqrt{g}g^{ab}\partial _a\varphi ^i\left(t+\lambda +\frac{1}{m^2}𝒟^4\right)\partial _b\varphi ^i,\hfill \end{array}$$ where $`A_{\mathrm{ext}}=\beta R`$ is the extrinsic, projected area in coordinate space. By integrating over the transverse fluctuations we get, in the limit $`\beta \rightarrow \mathrm{\infty }`$, an effective action $$\begin{array}{cc}\hfill S^{\mathrm{eff}}& =S_0+S_1,\hfill \\ \hfill S_0& =A_{\mathrm{ext}}\sqrt{\rho _0\rho _1}\left[(t+\lambda )\frac{\rho _0+\rho _1}{\rho _0\rho _1}-2\lambda \right],\hfill \\ \hfill S_1& =\frac{D-2}{4\pi }\beta \sqrt{\rho _0}\sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dp_0\mathrm{ln}\left[p^2\left(t+\lambda +\frac{p^4}{m^2}\right)\right],\hfill \end{array}$$ where $$p^2\equiv p_0^2+\omega _n^2,\qquad \omega _n\equiv \frac{2\pi }{R\sqrt{\rho _1}}n.$$ By introducing the mass scale $`\mu =\sqrt{m\sqrt{t+\lambda }}`$ we can rewrite the sums and integrals in the one-loop contribution $`S_1`$ as $$\begin{array}{cc}\hfill S_1& =\frac{D-2}{4\pi }\beta \sqrt{\rho _0}\left(S_1^0+2\mathrm{Re}S_1^\mu \right),\hfill \\ \hfill S_1^0& =\sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dx\,m\,\mathrm{ln}\left(x^2+\frac{\omega _n^2}{m^2}\right),\hfill \\ \hfill S_1^\mu & =\sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dx\,\mathrm{ln}\left(x^2+\omega _n^2+i\mu ^2\right),\hfill \end{array}$$ where Re denotes the real part. We shall dispose of the ultraviolet divergences in these quantities by analytic regularization. Defining first the logarithm as $`\mathrm{ln}x=\left[(d/d\beta )x^\beta \right]_{\beta =0}`$, and using the analytic interpolation of the integral $$\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dx\frac{1}{\left(x^2+q^2\right)^n}=q^{1-2n}\frac{\mathrm{\Gamma }\left(\frac{1}{2}\right)\mathrm{\Gamma }\left(n-\frac{1}{2}\right)}{\mathrm{\Gamma }(n)}$$ to any real $`n`$, leads to the following formula for the regularized integrals: $$\int _{\mathrm{reg}}dx\,\mathrm{ln}\left(x^2+a^2\right)=2\pi a.$$ The sums are then regularized by analytic continuation of the formula $`\sum _{n=1}^{\mathrm{\infty }}n^{-z}=\zeta (z)`$ for the Riemann zeta function . Using $`\zeta (-1)=-1/12`$ one obtains immediately $$S_1^0=-\frac{2\pi ^2}{3R\sqrt{\rho _1}},$$ which leads to the well known results for the Nambu-Goto [24,29] and the rigid strings. The computation of $`S_1^\mu `$ is a bit more involved.
First we represent the right-hand side of (2.1) by the analytic continuation of the following integral representation of the gamma function, $$\frac{1}{\left(x^2+q^2\right)^s}=\frac{1}{\mathrm{\Gamma }(s)}\int _0^{\mathrm{\infty }}dt\,t^{s-1}\mathrm{exp}\left[-\left(x^2+q^2\right)t\right].$$ We then substitute the sum in $`S_1^\mu `$ by an equivalent expression by means of the duality relation $$\sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}\mathrm{exp}\left(-n^2t\right)=\sqrt{\frac{\pi }{t}}\sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}\mathrm{exp}\left(-\frac{\pi ^2n^2}{t}\right).$$ Separating out the $`n=0`$ term in the sum and using the representation $$K_\nu \left(2\sqrt{\beta \gamma }\right)=\frac{1}{2}\left(\frac{\gamma }{\beta }\right)^{\frac{\nu }{2}}\int _0^{\mathrm{\infty }}dx\,x^{\nu -1}\mathrm{exp}\left(-\gamma x-\frac{\beta }{x}\right)$$ of the modified Bessel function in the remainder, we find $$S_1^\mu =\frac{\pi \mu ^2R\sqrt{\rho _1}}{4}-\sum _{n=1}^{\mathrm{\infty }}\frac{4\sqrt{i\mu ^2}}{n}K_1\left(Rn\sqrt{i\mu ^2\rho _1}\right).$$ Altogether, we obtain the effective action on the cylinder: $$\begin{array}{cc}\hfill S^{\mathrm{eff}}& =\beta R\sqrt{\rho _0\rho _1}\left[(t+\lambda )\frac{\rho _0+\rho _1}{\rho _0\rho _1}-2\lambda +\frac{D-2}{2}\frac{\mu ^2}{4}\right]\hfill \\ & -\frac{D-2}{2}\frac{\beta \sqrt{\rho _0}}{2\pi }\left[\frac{2\pi ^2}{3R\sqrt{\rho _1}}+\mathrm{Re}\left[\sum _{n=1}^{\mathrm{\infty }}\frac{8\sqrt{i\mu ^2}}{n}K_1\left(nR\sqrt{i\mu ^2\rho _1}\right)\right]\right].\hfill \end{array}$$ Being interested only in the large-$`R`$ behaviour, we may neglect the exponentially small terms arising from the Bessel functions and arrive at the relevant approximation to $`S^{\mathrm{eff}}`$ to be used in the remaining computation: $$S^{\mathrm{eff}}=\beta R\sqrt{\rho _0\rho _1}\left[(t+\lambda )\frac{\rho _0+\rho _1}{\rho _0\rho _1}-2\lambda +\frac{D-2}{2}\frac{\mu ^2}{4}\right]-\frac{D-2}{2}\frac{\pi \beta }{3R}\sqrt{\frac{\rho _0}{\rho _1}}.$$ The factor $`(D-2)`$ in $`S^{\mathrm{eff}}`$ ensures that, for large $`D`$, the fields $`\lambda `$, $`\rho _0`$ and $`\rho _1`$ are extremal and thus satisfy the saddle-point (“gap”) equations $$\begin{array}{cc}\hfill \frac{\rho _0+\rho _1}{\rho _0\rho _1}-2+\frac{D-2}{2}\frac{\mu ^2}{8(t+\lambda )}& =0,\hfill \\ \hfill \frac{t+\lambda }{2}\left(\frac{1}{\rho _1}-\frac{1}{\rho _0}\right)-\lambda +\frac{D-2}{2}\frac{\mu ^2}{8}-\frac{D-2}{2}\frac{\pi }{6R^2\rho _1}& =0,\hfill \\ \hfill \frac{t+\lambda }{2}\left(\frac{1}{\rho _0}-\frac{1}{\rho _1}\right)-\lambda +\frac{D-2}{2}\frac{\mu ^2}{8}+\frac{D-2}{2}\frac{\pi }{6R^2\rho _1}& =0.\hfill \end{array}$$ Substituting the second of these equations in (2.1) we obtain the simplified form of the effective action $$S^{\mathrm{eff}}=\beta R𝒯\sqrt{\frac{\rho _1}{\rho _0}},$$ with $`𝒯\equiv 2(t+\lambda )`$ being the physical string tension. The saddle-point equations are easily solved as follows. The sum of the last two equations yields an equation for $`\lambda `$ alone, $$\lambda -\frac{D-2}{2}\frac{\mu ^2}{8}=0.$$ Using $`\mu ^2=m\sqrt{t+\lambda }`$ this leads to the following solution for the string tension $`𝒯=2(t+\lambda )`$: $$\begin{array}{cc}\hfill 𝒯& =\frac{a^2}{32}\left(\frac{D-2}{2}\right)^2m^2,\hfill \\ \hfill a^2& \equiv \frac{1+128\left(\frac{2}{D-2}\right)^2\frac{t}{m^2}+\sqrt{1+256\left(\frac{2}{D-2}\right)^2\frac{t}{m^2}}}{2},\hfill \end{array}$$ reproducing the result of .
In a second step we subtract the second equation from the third, and multiply the result by $`2\rho _1/𝒯`$, obtaining $$\frac{\rho _1}{\rho _0}=1-\frac{\pi (D-2)}{3𝒯R^2}.$$ Expanding the square root of this expression and multiplying by $`\beta R𝒯`$ we obtain the final result $$\frac{S^{\mathrm{eff}}}{\beta }=𝒯R-\frac{\pi (D-2)}{6R}+\mathrm{\cdots }.$$ Thus we conclude that confining strings are characterized by $`c=1`$. Although they share the same value of $`c`$, confining strings are clearly different $`c=1`$ theories from Nambu-Goto or rigid strings. Indeed, the former are smooth strings on any scale, while the latter crumple and fill the ambient space, at least in the infrared region. Our result $`c=1`$ is in agreement with recent precision numerical determinations of this constant .

## 3. The deconfinement temperature

By changing $`R`$ into $`\beta `$ and $`\rho _0`$ into $`\rho _1`$ in the above formulas we obtain the behaviour of the model (2.1) at temperature $`T=1/k_B\beta `$. Having neglected the contribution of the Bessel functions in the effective action, however, we can only study low temperatures, with $`\beta \mu \sqrt{\rho _0}>1`$. Using the last two results we get $$\left(\frac{S^{\mathrm{eff}}}{R}\right)^2=\beta ^2𝒯^2-\frac{\pi (D-2)𝒯}{3}.$$ Raising the temperature, this quantity, representing the square mass of the lowest state, crosses zero at an inverse temperature $$\beta _{\mathrm{dec}}=\sqrt{\frac{\pi (D-2)}{3𝒯}}=\frac{1}{m}\sqrt{\frac{128\pi }{3(D-2)a^2}},$$ which specifies the deconfinement temperature of the model. Note that this result coincides with the corresponding one for Nambu-Goto and rigid strings when expressed in terms of the string tension. In order to establish the range of validity of this result we need to know the value of $`\sqrt{\rho _0}`$. This is obtained by substituting the above result for $`\rho _1/\rho _0`$ into the first of the saddle-point equations, yielding $$\rho _0=\frac{2a\left(1-\frac{\pi (D-2)}{6𝒯\beta ^2}\right)}{2a-1}.$$ At the deconfinement temperature this becomes $$\left(\rho _0\right)_{\mathrm{dec}}=\frac{a}{2a-1}.$$ The value (3.1) of the deconfinement temperature is consistent with our approximation only if the condition $`\beta _{\mathrm{dec}}\mu \sqrt{(\rho _0)_{\mathrm{dec}}}>1`$ is satisfied. Only then can we neglect the Bessel functions down to the deconfinement transition. Using the above values of $`\beta _{\mathrm{dec}}`$ and $`(\rho _0)_{\mathrm{dec}}`$, this condition translates into $$a<\left(\frac{8\pi }{6}-\frac{1}{2}\right)\simeq 3.5.$$ Thus, formula (3.1) for the deconfinement temperature is reliable in the region of small $`(t/m^2)`$, where $`a\simeq 1`$. Otherwise there are sizable corrections from the sum over $`n`$ in the effective action.
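Before turning to higher truncations, the chain from the input $`t/m^2`$ to $`a`$, $`𝒯`$ and $`\beta _{\mathrm{dec}}`$ can be evaluated numerically. The following minimal sketch (Python with numpy; $`D=4`$, $`m=1`$ and $`t/m^2=0.01`$ are illustrative choices, not values used in the paper) computes the string tension, the deconfinement temperature and the validity condition $`\beta _{\mathrm{dec}}\mu \sqrt{(\rho _0)_{\mathrm{dec}}}>1`$ discussed above:

```python
import numpy as np

def string_tension(t_over_m2, D=4, m=1.0):
    """String tension T = (a^2/32) ((D-2)/2)^2 m^2 from the gap equation."""
    x = (2.0 / (D - 2))**2 * t_over_m2
    a2 = (1.0 + 128.0 * x + np.sqrt(1.0 + 256.0 * x)) / 2.0
    return (a2 / 32.0) * ((D - 2) / 2.0)**2 * m**2, np.sqrt(a2)

D, m = 4, 1.0
T, a = string_tension(0.01, D, m)                  # illustrative t/m^2
beta_dec = np.sqrt(np.pi * (D - 2) / (3.0 * T))    # inverse deconfinement temp.
rho0_dec = a / (2.0 * a - 1.0)                     # saddle-point metric there
mu = np.sqrt(m * np.sqrt(T / 2.0))                 # mu^2 = m sqrt(t + lambda)
print(T, 1.0 / beta_dec)                           # tension and k_B T_dec
print(beta_dec * mu * np.sqrt(rho0_dec) > 1.0)     # low-temperature validity
```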
## 4. Generalization to higher truncations

Having established that the model (2.1) describes smooth strings with $`c=1`$, the question arises as to how much these results depend on the truncation of the original non-local action after the $`𝒟^4`$ term. To answer this question let us consider instead of (2.1) an arbitrary truncation $$\begin{array}{cc}\hfill S|_n& =\int d^2\xi \sqrt{g}g^{ab}𝒟_ax_\mu V_n\left(𝒟^2\right)𝒟_bx_\mu ,\hfill \\ \hfill V_n\left(𝒟^2\right)& =\left(\alpha _0+\lambda \right)\mathrm{\Lambda }^2+\sum _{k=1}^{2n}\frac{\alpha _k}{\mathrm{\Lambda }^{2k-2}}𝒟^{2k}.\hfill \end{array}$$ Here $`\mathrm{\Lambda }`$ represents the fundamental mass scale in the model, to be identified with the QCD mass scale, and we have already included in the action the Lagrange multiplier $`\lambda `$ arising as in section 2 (note that here we have defined $`\lambda `$ as a dimensionless quantity). Since all expansion coefficients $`\alpha _k`$ are positive, the series is alternating in momentum space, with all terms with odd index $`k`$ being negative [11,18]. Thus, stable truncations must end with an even $`k=2n`$. Following , the only condition we shall impose on the coefficients $`\alpha _k`$ is the absence of both tachyons and ghosts. This requires that the Fourier transform $`V_n\left(p^2\right)`$ has no zeros on the real $`p^2`$-axis. The polynomial $`V_n\left(p^2\right)`$ has thus $`n`$ pairs of complex-conjugate zeros in the complex $`p^2`$-plane. For simplicity of computation we shall set all coefficients with odd $`k`$ to zero, $`\alpha _{2m+1}=0`$ for $`0\le m\le n-1`$. This, however, is no drastic restriction since, as we shall now demonstrate, this is their value at the infrared-stable fixed point anyhow. With this simplification all pairs of complex-conjugate zeros of $`V_n\left(p^2\right)`$ lie on the imaginary axis, and we can represent $`V_n\left(p^2\right)`$ as $$\frac{\mathrm{\Lambda }^{4n-2}}{\alpha _{2n}}V_n\left(p^2\right)=\prod _{k=1}^n\left(p^4+\gamma _k^2\mathrm{\Lambda }^4\right),$$ with purely numerical coefficients $`\gamma _k`$. This expression substitutes $`\left[p^4+m^2(t+\lambda )\right]`$ inside the logarithm in the one-loop contribution, which becomes $$\begin{array}{cc}\hfill S_1& =\frac{D-2}{4\pi }\beta \sqrt{\rho _0}\left(S_1^0+\sum _{k=1}^n2\mathrm{Re}S_1^k\right),\hfill \\ \hfill S_1^k& =\sum _{l=-\mathrm{\infty }}^{+\mathrm{\infty }}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dx\,\mathrm{ln}\left(x^2+\omega _l^2+i\gamma _k\mathrm{\Lambda }^2\right).\hfill \end{array}$$ Neglecting as before the Bessel functions for large $`R`$, we see that the only modification to the corresponding formulas of section 2 due to the higher-order truncation is the substitution $$\mu ^2\rightarrow \sum _{k=1}^n\gamma _k\mathrm{\Lambda }^2.$$ The Lagrange multiplier $`\lambda \mathrm{\Lambda }^2`$ and the string tension $`𝒯=2\left(\alpha _0+\lambda \right)\mathrm{\Lambda }^2`$ are now determined by the new saddle-point equation $$\lambda -\frac{D-2}{16}\sum _{k=1}^n\gamma _k=0.$$ The value $`c=1`$ of the universal finite-size term, however, remains unchanged. The new saddle-point equation for $`\lambda `$ is still polynomial, although of higher order. The requirement that this polynomial “gap” equation has at least one solution on the real axis with $`\left(\alpha _0+\lambda \right)\ge 0`$ provides the condition on the coefficients $`\alpha _{2m}`$, $`0\le m\le n`$, that defines the universality class of confining strings at level $`n`$. Note that with all $`\alpha _{2m+1}=0`$ for $`0\le m\le n-1`$, no normalization scale needs to be introduced to define the one-loop term $`S_1`$.
In other words, a scale introduced to properly define the logarithm would drop out at the end of the computation, since the result does not contain logarithms. As a consequence, in a renormalization analysis as in , there are no anomalous dimensions, and the infrared limit $`\mathrm{\Lambda }^2\rightarrow 0`$ of vanishing string tension is characterized by $`\beta \left(\alpha _{2m}\right)=0`$ for $`0\le m\le n`$. The point $`\mathrm{\Lambda }=0`$ is thus again an infrared-stable fixed point, characterized by $`\alpha _{2m+1}=0`$ for $`0\le m\le n-1`$, and $`n+1`$ renormalization-group-invariant numerical coefficients $`\alpha _{2m}`$, $`0\le m\le n`$, varying in a range where there exists a real solution to the “gap” equation. The geometric properties of world-sheets in the vicinity of this point can be easily obtained by decomposing $$\frac{1}{V_n\left(p^2\right)}=\frac{\mathrm{\Lambda }^2}{\alpha _{2n}}\sum _{k=1}^n\frac{\eta _k}{p^4+\gamma _k^2\mathrm{\Lambda }^4}.$$ This decomposition is always possible, since it is determined by a linear system of $`n`$ equations for the $`n`$ numerical coefficients $`\eta _k`$. At this point we can simply apply to each term in the above decomposition the discussion of and conclude that the infrared point of vanishing tension is characterized by long-range orientational order and Hausdorff dimension $`D_H=2`$ of world-sheets. We have thus shown that $`c`$ and the smooth geometric properties are independent of an infinite set of truncations, provided that a solution for the polynomial “gap” equation exists. These properties are presumably common to a large class of non-local world-sheet interactions.

References

[1] For a review see e.g.: J. Polchinski, “Strings and QCD”, contribution in Symposium on Black Holes, Wormholes, Membranes and Superstrings, H.A.R.C., Houston (1992); hep-th/9210045.
[2] For a review see e.g.: J. Polchinski, “String Theory”, Cambridge University Press, Cambridge (1998).
[3] A. M. Polyakov, Physica Scripta T15 (1987) 191.
[4] A. M. Polyakov, Nucl. Phys. B268 (1986) 406; H. Kleinert, Phys. Lett. B174 (1986) 335.
[5] F. David and E. Guitter, Nucl. Phys. B295 (1988) 332; Europhys. Lett. 3 (1987) 1169.
[6] A. M. Polyakov, Nucl. Phys. B486 (1997) 23.
[7] F. Quevedo and C. A. Trugenberger, Nucl. Phys. B501 (1997) 143.
[8] M. C. Diamantini, F. Quevedo and C. A. Trugenberger, Phys. Lett. B396 (1997) 115; D. Antonov, Phys. Lett. B427 (1998) 274; B428 (1998) 346.
[9] H. Kleinert, Phys. Lett. B246 (1990) 127; Int. J. Theor. Phys. A7 (1992) 4693; Phys. Lett. B293 (1992) 168.
[10] D. Antonov and D. Ebert, “Dual Formulation and Confining Properties of the SU(2)-Gluodynamics”, hep-th/9902177.
[11] H. Kleinert and A. Chervyakov, Phys. Lett. B381 (1996) 286.
[12] A. M. Polyakov, Nucl. Phys. Proc. Supp. 68 (1998) 1 (hep-th/9711002); “The Wall of the Cave”, hep-th/9809057.
[13] I. Kogan and O. Solovev, Phys. Lett. B442 (1998) 136; E. Alvarez and C. Gomez, “String Representation of Wilson Loops”, hep-th/9806075; “Non-critical Confining Strings and the Renormalization Group”, hep-th/9902012; I. Kogan, “On Zigzag Invariant Strings”, hep-th/9901131.
[14] J. Ellis and N. Mavromatos, “Confinement in Gauge Theories from the Condensation of World-Sheet Defects in Liouville Strings”, hep-th/9808172; P. Horava, “On QCD String Theory and AdS Dynamics”, hep-th/9811028.
[15] M. I. Polikarpov, U.-J. Wiese and M. A. Zubkov, Phys. Lett. B309 (1993); K. Lee, Phys. Rev. D48 (1993) 2493; P. Orland, Nucl. Phys. B428 (1994) 221.
[16] M. C. Diamantini, H. Kleinert and C. A. Trugenberger, Phys. Rev. Lett. 82 (1999) 267.
[17] See e.g.: D. Gross, “Application of the Renormalization Group to High-Energy Physics”, in “Methods in Field Theory”, R. Balian and J. Zinn-Justin eds., North-Holland & World Scientific, Singapore (1981).
[18] M. C. Diamantini and C. A. Trugenberger, Phys. Lett. B421 (1998) 196; Nucl. Phys. B531 (1998) 151.
[19] M. N. Chernodub, M. I. Polikarpov, A. I. Veselov and M. A. Zubkov, Phys. Lett. B432 (1998) 182.
[20] For a review see: J. Cardy, “Scaling and Renormalization in Statistical Physics”, Cambridge University Press, Cambridge (1996).
[21] H. W. Blöte, J. Cardy and M. Nightingale, Phys. Rev. Lett. 56 (1986) 742; I. Affleck, Phys. Rev. Lett. 56 (1986) 746.
[22] M. Lüscher, K. Symanzik and P. Weisz, Nucl. Phys. B173 (1980) 365; M. Lüscher, Nucl. Phys. B180 (1981) 317.
[23] D. Antonov, D. Ebert and Y. Simonov, Mod. Phys. Lett. A11 (1996) 1905.
[24] O. Alvarez, Phys. Rev. D24 (1981) 440.
[25] E. Braaten, R. D. Pisarski and S. M. Tze, Phys. Rev. Lett. 58 (1987) 93.
[26] H. Kleinert, Phys. Rev. Lett. 58 (1987) 1915; P. Olesen and S. K. Yang, Nucl. Phys. B283 (1987) 73.
[27] H. Kleinert, Phys. Lett. B189 (1987) 187; Phys. Rev. D40 (1989) 473.
[28] I. Gradshteyn and I. M. Ryzhik, “Table of Integrals, Series and Products”, Academic Press, Boston (1980).
[29] R. Pisarski and O. Alvarez, Phys. Rev. D26 (1982) 3735.
[30] V. V. Nesterenko and N. R. Shvetz, Z. Phys. C55 (1992) 265.
[31] M. Caselle, R. Fiore, F. Gliozzi, M. Hasenbusch and P. Provero, Nucl. Phys. B486 (1997) 245.
no-problem/9903/astro-ph9903492.html
ar5iv
text
# The Spectrum of Diffuse Cosmic Hard X-Rays Measured with HEAO–1

## 1 Introduction

The spectrum of the diffuse sky background of cosmic X- and gamma-rays has been a matter of considerable interest and some controversy since the discovery by rocket-borne X-ray counters (Giacconi 1962) and by a gamma-ray counter on the Ranger III lunar probe (Metzger et al. 1968). The known spectrum was extended beyond 100 MeV by an instrument on OSO-III (Kraushaar et al. 1972). Although there were many subsequent measurements by a variety of rocket, balloon and space-borne instruments during the 1960’s and early 1970’s (Horstman et al. 1975), the most definitive spectra below about 500 keV were obtained from HEAO–1, launched in 1977 (Marshall et al. 1980; Kinzer et al. 1997). At higher energies (i.e. $`>`$ 800 keV) the spectrum has recently been clarified with data obtained from the Compton Gamma-Ray Observatory (CGRO). The COMPTEL instrument on the CGRO has failed to confirm the “MeV Bump” (Trombka et al. 1977) in the diffuse gamma-ray spectrum in the range 0.8–30 MeV (Kappadath et al. 1995, 1996), while the EGRET instrument (Kniffen et al. 1996; Sreekumar et al. 1998) has generally confirmed the results presented earlier for the 100 MeV range by a spark chamber on the Small Astronomy Satellite–2 (SAS–2) (Fichtel et al. 1978), and has also extended the spectrum to about 100 GeV. The near isotropy of the diffuse X-ray background and its large energy density point to an extragalactic and even cosmological origin. Early attempts to account for the spectrum above about 3 keV in terms of uniform emission at truly cosmological distances seem to have been ruled out (Barcons, Fabian & Rees 1991); therefore discrete source populations extending to high redshifts must be considered (Barber & Warwick 1994). Fabian and Barcons (1992) and Hasinger (1996) provide reviews of the observational and theoretical status of the subject as of these dates. The most recent concept, summarized by Zdziarski (1996), is that the background in the range of ∼3–300 keV is due to various AGN components, particularly Seyfert II’s (Madau et al. 1994), and that the low-energy gamma-ray background (∼300 keV $`<`$ E $`<`$ 10 MeV) is due to Type Ia supernovae (The, Leising & Clayton 1993). The diffuse component at energies $`>`$ 30 MeV measured by EGRET is attributed to unresolved blazars (Stecker & Salmon 1996). The components in the range 0.4–10 keV, as determined with ASCA (Gendreau et al. 1995), may also be accounted for in terms of AGN’s; however, there exists an excess below 1 keV which, if not accounted for by effects in the local ISM, requires additional source components (Chen, Fabian & Gendreau 1997). The ROSAT deep X-ray survey in the Lockman hole has discovered enough sources to account for at least 70–80% of the diffuse flux in the 0.5–2 keV range (Hasinger 1998). Taking into account evolution, such as that which characterizes quasars, this source density actually overproduces the X-ray background in this range (Hasinger, private communication). This paper describes the final spectral results obtained by one of the UCSD/MIT Hard X-ray detectors on HEAO–1 over the 13–180 keV range. These data are compared with related data on the diffuse component, and the total spectrum to 100 GeV is fit by a simple analytic function. Preliminary results of this work have been reported earlier (Rothschild et al. 1983; Gruber 1992).
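Although the fit itself is presented later in the paper, the shape of such a simple analytic function can be illustrated schematically: a power law with an exponential folding energy at hard X-ray energies, joined to a steeper power law above a break. The sketch below (Python with numpy; every parameter value is a placeholder chosen only for illustration, not a fit result from this work) evaluates such a form over the 10 keV – 100 GeV range:

```python
import numpy as np

def diffuse_flux(e_kev, a=8.0, gamma=1.3, e_fold=40.0, gamma2=1.6, e_break=60.0):
    """Schematic two-component form; all parameters are placeholders."""
    e = np.asarray(e_kev, dtype=float)
    low = a * e**(-gamma) * np.exp(-e / e_fold)               # cut-off power law
    norm = a * e_break**(-gamma) * np.exp(-e_break / e_fold)  # continuity at break
    high = norm * (e / e_break)**(-gamma2)                    # steeper high-E law
    return np.where(e < e_break, low, high)                   # keV/(keV cm^2 s sr)

energies = np.logspace(1, 8, 200)   # 10 keV to 100 GeV
print(diffuse_flux(energies[:3]))
```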
## 2 Instrument and Operation The UCSD/MIT Hard X-ray and Gamma-Ray Instrument launched on the HEAO–1 spacecraft has been described previously (Matteson 1978; Jung 1989; Kinzer et al. 1997). The instrument consists of an array of seven NaI/CsI “phoswich” detectors collimated with a thick CsI active anticoincidence shield. Three different detector configurations were optimized to cover different sub-ranges over the 13 keV – 10 MeV total range of the instrument. Relevant properties of the various detectors are indicated in Table 1. The data reported here were taken from one of the two lower energy detectors (LED’s) which operated over a nominal 13–180 keV energy range. The LED’s had a passive lead-tin-copper multiple-slat subcollimator within the circular active CsI aperture to give a 1.4° × 20° FWHM beam response. The HEAO–1 was launched 1977 August 12 into a 22.7° inclination, 400 km circular orbit. The spacecraft rotated about the Earth-Sun line with a nominal 33 minute period. The detector fields were centered perpendicular to this line, and thus scanned across the sky and the Earth below every rotation, and made a complete sky scan every 6 months. The mission produced usable data until 1979 January 13, and the spacecraft re-entered the atmosphere on 1979 March 5. “Good” events, which met the criteria of no detectable energy losses in the CsI(Na) part of the phoswich and no anticoincidence shield event above ∼50 keV, were coded in a 128 channel pulse height analyzer and transmitted in an event-by-event manner, with detector identifications, time tag, and dead time information. Auxiliary information on counting rates of the various functions, and housekeeping information, were also transmitted. Commandable data modes allowed various diagnostics to be sent with each event. Separating diffuse fluxes from various background effects requires a more sophisticated instrumental and data analysis approach than that for localized sources. A movable 20.0 cm diameter × 5.0 cm thick CsI(Na) blocking crystal was arranged to cover the various apertures so that intrinsic detector backgrounds could be separated from fluxes entering the apertures. The blocking crystal or “shutter” could be operated in a “passive” or “active” mode by using or ignoring the telemetered anticoincidence information during data analysis. This allowed determination of second-order effects due to radiations from the blocking crystal. The analysis of the various background effects has been described in detail in the previous paper reporting on the results of the diffuse cosmic flux in the ∼80–400 keV range (Kinzer et al. 1997) obtained with two of the Medium Energy Detectors. ## 3 Observations and Data Selection One of the two Low Energy Detectors, LED #6 (Kinzer et al. 1997), was covered on 14 occasions by the blocking crystal for intervals of six to twelve hours each between 1978 November and 1979 January. Given losses from live time correction and incomplete data recovery, this resulted in 205 ks of observation with the aperture closed, thereby counting only the cosmic-ray induced internal background. The observing intervals were selected to avoid passage through the South Atlantic Anomaly (SAA) region of geomagnetically-trapped particles, which also induce sizeable internal background, most notably from the production of I<sup>128</sup>, which decays with a 28 minute half life (Gruber, Jung & Matteson 1989; Briggs 1992). 
To further avoid this induced background component, observations were initiated at least three hours after the last of the daily sequence of passages through the SAA. This selection of orbits resulted, however, in a wide range of geomagnetic cutoffs, and therefore of fluxes due to cosmic rays and their atmospheric and spacecraft secondaries. Nevertheless, uniform sampling of the geomagnetic coordinate space B (magnetic field) and L (earth radii) (McIlwain 1961) was assured by the large elapsed time, about 46 orbits. The closed aperture background was averaged over all zenith angles, since effects due to varying aspects of the internal background are expected to be very small. Sky-looking data during this period totaling 224 ks were also selected for the same geomagnetic conditions (B,L) during the SAA-quiet part of the observing day. The average of the geomagnetic parameter L agreed to within 0.2 percent of that obtained during the orbits with the aperture blocked. Data were obtained during scanning observations on a set of sky great circles whose center moved with the sun during this interval from 16h to 19h R.A. and with an average declination of −22 degrees. A small fraction of data containing catalogued sources (Levine et al. 1984) was excluded. ## 4 Control of Systematics ### 4.1 Variation of Detector Internal Background While the counting rate from the diffuse background dominates the internal background at energies from threshold to ∼20 keV, the sky contribution to the total drops rapidly with energy to a few percent at 100 keV. Since the internal background varies with geomagnetic L to a power between 1 and 2 (Gruber 1974), the 0.2 percent agreement of the average L indicated above implies an internal background mismatch between the open and blocked data sets of not more than 2 × 0.2% = 0.4%. The observed diffuse flux above 80 keV gives a count rate of about 6.7 × 10<sup>-3</sup> s<sup>-1</sup>, about 1% of the average background level. The observed agreement of this diffuse flux with that from the Medium Energy Detectors at 80–100 keV (Kinzer et al. 1997), where the latter detectors have a signal about equal to the internal background, and which is therefore very reliable, shows that this 0.4 percent limit is not an underestimate at these higher energies. The flat spectrum of the internal background insures that background estimation errors will have a completely negligible effect at all lower energies, where the real strength of this measurement lies. ### 4.2 Variation of Detector Gain The electron optics of the photomultiplier tube were insufficiently shielded from effects of the geomagnetic field, and therefore changed with orientation, resulting in gain variations as large as 20% peak-to-peak, with an RMS of the order of 5%. This gain variability was laboriously modeled in detail (Jung 1986) and corrected in the data so that the net instantaneous gain error was between 1 and 2% RMS. The propagation of this error into the average spectrum over about 135 rotations of the spacecraft reduced the net effect by another order of magnitude, making it completely negligible. ### 4.3 Energy Calibration Relative energy calibration was based on preflight measurements of the differential channel width for detector pulses. Absolute energy calibration, required by a sudden gain change post-launch and a slow drift thereafter, was monitored using two discrete features of the internal background. 
The primary calibrator was a K-capture line of I<sup>125</sup>, which produces a gamma ray of 35 keV promptly followed by K and L X-rays from the I<sup>125</sup> daughter, for a total of 67 keV. Differential response of the NaI scintillator produced light equivalent to a single photon emitted at 62.7 keV, based on ground calibrations. Measurement of bright sources such as the Crab Nebula (Jung 1989) and Cyg X-1 (Nolan & Matteson 1983) using this gain calibration produced smooth spectra, but a variation of the formal gain value by only a few percent from this produced an artifact at 40 keV in each of these sources. Our secondary calibration line, a blend of Iodine and Tellurium K X-rays, was useful only as confirmation of the prime calibration, because of the unknown and possibly variable mix of the two species. The background spectrum in Figure 1 shows the features used to determine the energy calibration. The effective energy resolution in orbit was about 15 keV FWHM at 60 keV. ### 4.4 Emission from Blocking Crystal The open minus blocked difference spectrum initially showed a strong deficit near 30 keV, the effective energy of the K X-ray blend from excited Iodine and Tellurium isotopes in the detector material. This deficit, and its identification as K X-rays, was traced to the blocking crystal, whose material also undergoes spallation by cosmic rays, followed in some cases by K-capture decay of the daughter, with a high-energy gamma that escapes the blocking crystal and K radiation that produces a count in the detector. While this process is difficult to calculate, it was easier and more reliable to measure the effect in earth-looking data. We make the reasonable assumption that the earth’s secondary X-ray spectrum is featureless near 30 keV. ## 5 Results The average counting rates of the selected Low Energy Detector (LED), after correction for gain variations, are shown in Figure 1 for both the blocked and unblocked data. The sky data were taken when the beam was above the horizon, and the blocked data are averaged over all zenith angles. These rates correspond to an average L of 1.17 and an average B of 0.30. As indicated previously, the diffuse sky component is dominant at the lower energies, and is a small fraction of the average background above 100 keV. Except for small corrections, the difference of these two curves is the rate due to diffuse hard X-rays. The resultant sky flux in units proportional to $`\nu F_\nu `$ is shown in Figure 2. The data here are corrected for the geometry factor, 3.0 cm<sup>2</sup>-sr, and for the energy response matrix. The latter has been determined from a combination of direct pre-launch measurements and Monte-Carlo calculations. At these low energies, photoelectric absorption in the thin NaI detector is the primary interaction, so a simple efficiency correction applies over most of the energy range. The sky flux is shown averaged over many PHA channel widths, comparable to the measured energy resolution of 15 keV at 59 keV. Selecting energy widths of approximately constant ratio helps to keep the statistical significance of the plotted channels comparable on a log-log plot. Also shown in Figure 2 are the results obtained by a number of other experiments. The HEAO–A2 instrument (Marshall et al. 1980), which produced the most significant result on diffuse fluxes in the 3–45 keV range, overlaps and joins smoothly with the LED results. 
The LED data also join smoothly at the higher end to the data obtained from the Medium Energy Detectors (MEDs) (Kinzer et al. 1997). Data obtained by a number of balloon and space experiments (Kinzer, Johnson & Kurfess 1978; Fukada et al. 1975) are also shown for comparison. As discussed in Kinzer et al. (1997), it is significant that data in this range obtained with a number of different experimental techniques and in various radiation environments are in agreement within statistical and systematic uncertainties. We conclude that the diffuse hard X-ray background is well determined in the range 3 $`<`$ E $`<`$ 500 keV. ## 6 Total Diffuse Spectrum With these and other recent results, it is now possible to define the spectrum over the entire observed energy range above 3 keV, and to generate an empirical analytic fit to this spectrum. Figure 3 shows selected data presented on an intensity scale, which is more useful for theoretical comparisons than the photon scale. The lower energy data ($`<`$ 500 keV) are shown again, converted to intensities. The COMPTEL results in the 0.8 $`<`$ E $`<`$ 30 MeV range, which fail to establish the “MeV bump” (Kappadath et al. 1996; Kinzer et al. 1997), are also shown. We do not plot the Apollo 15/16 results, which overlap the HEAO and the COMPTEL work, since uncertainty in the correction for induced background (Trombka et al. 1977) in detectors operating over the 0.5 $`<`$ E $`<`$ 10 MeV range has almost certainly caused the artifact resulting in the “MeV bump”. We have also discarded other reported results from scintillators above 500 keV as unreliable. At higher energies, the earlier results obtained on SAS–2 are in substantial agreement with the more definitive results obtained over an extended energy range by EGRET on CGRO (Kniffen et al. 1996). Since the various results in Figure 3 obtained with many instruments and techniques over eight decades in energy appear to join smoothly, it is possible to empirically fit an analytic function to the data from the full energy range. Such a function had been developed previously by Gruber (1992), based on the data available at the time. The present data and selected earlier data, all shown in Figure 3, have been empirically fit to a combination of exponential and power law functions, operating over different energy ranges. Criteria are that the functions join smoothly in first and second order at the break point, and that $`\chi `$<sup>2</sup> be minimized. Such a function is:

| 3–60 keV: | 7.877 E<sup>-0.29</sup> e<sup>-E/41.13</sup> | keV/keV-cm<sup>2</sup>-sec-sr |
| --- | --- | --- |
| $`>`$ 60 keV: | 0.0259 (E/60)<sup>-5.5</sup> + 0.504 (E/60)<sup>-1.58</sup> + 0.0288 (E/60)<sup>-1.05</sup> | keV/keV-cm<sup>2</sup>-sec-sr |

Overall, the reduced $`\chi `$<sup>2</sup> is about 1.3, which may be regarded as an excellent fit, considering the data used for the fitting were obtained from five different instruments. The function is shown as a solid line in Figures 2 and 3. The function below 60 keV is that introduced by Boldt (1987, 1988, 1989, 1992) as an excellent fit to the HEAO-1 A2 data, but has slightly different values for the normalization and e-folding energy, reflecting, of course, the fit to a different and larger data set consisting of the A4 LED data from HEAO-1 and HEAO-1 A2 data from the High-Energy Detector no. 1 (E. Boldt, private communication), which was independent of the set analyzed by Marshall et al. (1980). 
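As a numerical cross-check (ours, not part of the original analysis), the following sketch evaluates both branches of the empirical function, confirms that they join at the 60 keV break point, and locates the maximum of E·I(E) quoted below; the grid search is purely illustrative.

```python
import numpy as np

def diffuse_intensity(E_keV):
    """Empirical fit to the diffuse intensity I(E), in keV/(keV cm^2 s sr)."""
    E = np.asarray(E_keV, dtype=float)
    low = 7.877 * E**-0.29 * np.exp(-E / 41.13)       # 3-60 keV branch
    high = (0.0259 * (E / 60.0)**-5.5                 # > 60 keV branch
            + 0.504 * (E / 60.0)**-1.58
            + 0.0288 * (E / 60.0)**-1.05)
    return np.where(E < 60.0, low, high)

# The two branches agree at the break point (both ~0.559):
print(diffuse_intensity(59.999), diffuse_intensity(60.0))

# Locate the maximum of E * I(E) on a fine grid:
E = np.linspace(3.0, 200.0, 200000)
nuFnu = E * diffuse_intensity(E)
i = nuFnu.argmax()
print(f"nuFnu peak: {nuFnu[i]:.1f} keV/(s cm^2 sr) at {E[i]:.1f} keV")
# -> ~42.6 keV/(s cm^2 sr) at ~29.3 keV, as quoted in the text below
```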
This lower-energy fit with the present best-fit values was first reported by Gruber (1992). Boldt (1988) and Holt (1992) have both emphasized that the two spectral parameters, index and e-folding energy, of this function are particularly revealing for characterizing the residual CXB spectra obtained when subtracting various foreground components. Above 60 keV, the selected data sets included the HEAO A4 (LED and MED), balloon, COMPTEL and EGRET data. The fit required the sum of three power laws, the flattest of which largely characterizes the EGRET observations (it ignores a likely “ripple” at 70 MeV), and the next steeper, with index 1.58, may be said to represent the spectrum between 70 keV and 1 MeV. The steepest component, with index 5.5, is almost certainly only a numerical necessity for matching to the lower-energy spectrum and its derivative, and represents nothing physical. The three main functional components may possibly be identified with separate physical components. If the flat EGRET component continues unbroken to much lower energies, and the rollover at tens of keV is an actual cutoff for the lower energy component, then the index 1.58 power law characterizes a separate component dominant at hundreds of keV. Given the lower-energy spectral form, the maximum in $`\nu F_\nu `$ (see Figure 2) of 42.6 keV (sec cm<sup>2</sup> sr)<sup>-1</sup> occurs at 29.3 keV, very close to the values of 41.3 keV (sec cm<sup>2</sup> sr)<sup>-1</sup> and 28.4 keV, respectively, for A2 data alone (E. Boldt, private communication), indicating that the results from the HEAO-1 experiments are robust, both with respect to normalization and spectral shape. ## 7 Discussion The final analysis of the HEAO A4 Low Energy Detectors presented here, and that of the Medium Energy Detectors presented earlier (Kinzer et al. 1997), have provided a completely consistent set of measurements of the diffuse component of cosmic X-rays over the range 13 $`<`$ E $`<`$ 400 keV. These data join smoothly to other recent data obtained at higher energies by the COMPTEL (Kappadath et al. 1996) and EGRET (Sreekumar et al. 1998) instruments on the Compton Gamma Ray Observatory. ASCA and ROSAT (Chen, Fabian & Gendreau 1997) have presented new results in the 0.1–7 keV band. These data, and those of the HEAO A4, agree well with those previously presented in the 3–45 keV range from the HEAO A2 instrument (Marshall et al. 1980). Only in the range ∼300 keV $`<`$ E $`<`$ ∼1 MeV is a set of new or confirming data missing, the earlier Apollo 15/16 data in this range now being suspect. To obtain an accurate, definitive spectrum in this range will require a new instrumental concept, since instruments designed for this range, such as OSSE on CGRO (Kurfess 1996) and the spectrometer SPI to be launched on INTEGRAL (Mandrou et al. 1997), have relatively narrow apertures and high background, being more optimized for discrete source studies. The excellent fit (reduced $`\chi ^2`$ = 1.3) of a simple exponential at lower energies plus three summed power law functions above 60 keV to our selected set of data over the entire 3 keV to 100 GeV range can only be described as remarkable. This is particularly so, considering that different discrete source classes producing X- and gamma-rays by different mechanisms are certainly operating in the different energy ranges. 
It seems at present that a truly cosmological origin for the “diffuse” cosmic component is unlikely in any energy range, and that the integrated effects of various evolving classes of discrete sources are sufficient to explain the phenomenon. Small discontinuities and inflections expected in the combined spectra due to various physical processes predicted by The et al. (1993) have not been observed with high resolution instruments (Barthelmy et al. 1996). The data presented here are of low resolution ($`\mathrm{\Delta }`$E/E ∼ 0.1), or are averaged over wide bands ($`\mathrm{\Delta }`$E/E ∼ 0.2), precluding searches for narrow band discontinuities or inflection phenomena. Even so, it requires a rather special combination of power law quasar X-ray spectra and absorbed Seyfert II’s to produce the very smooth exponential in the 3–60 keV range. Such a class has been postulated by Madau et al. (1994), Comastri et al. (1995) and Zdziarski (1996). Studying a large red-shifted class of absorbed Seyfert II’s to determine the distributions of low- and high-energy cutoffs is crucial to resolving this problem. A similar problem exists for the integrated effect of Type Ia supernovae out to cosmological distances in explaining the diffuse spectrum in the MeV range (The et al. 1993). Here effects due to line emission are expected to produce discontinuities; as indicated above, such effects have been searched for and not found. However, the recent discovery of a class of “MeV blazars”, with emission concentrated near 1 MeV (Collmar, private communication 1998), may provide an alternative to the supernova component. Further progress in understanding the diffuse component of cosmic X- and gamma-rays therefore requires advances on two observational fronts. First, high sensitivity, high resolution class studies of postulated source components are needed to determine the luminosity function of the various spectral types. Second, high resolution, low background instruments specifically designed to measure the diffuse cosmic flux are required for precise determination of spectral features, particularly in the range about 10 keV to 1 MeV, where the various postulated components join, and where phenomena due to discrete lines may be operative. ## 8 Acknowledgements We acknowledge the contribution of many students, colleagues and technical support personnel to the HEAO program. We have received many useful comments from Elihu Boldt, R. E. Lingenfelter and R. E. Rothschild. G. V. Jung was a Ph.D. student at UCSD during the course of this work. This work was supported by NASA under contract NAS8–27974 and grant NAGW–449.

REFERENCES

Barber, C. R. & Warwick, R. S. 1994, MNRAS, 267, 270
Barcons, X., Fabian, A. C., & Rees, M. J. 1991, Nature, 350, 685
Barthelmy, S. D., Naya, J. E., Gehrels, N., Parsons, A., Teegarden, B., Tueller, J., Bartlett, L. M., & Leventhal, M. 1996, BAAS
Boldt, E. 1987, Phys. Reports, 146, 215
Boldt, E. 1988, in Physics of Neutron Stars and Black Holes, ed. Y. Tanaka (Tokyo: Universal Academy Press), 342
Boldt, E. 1989, in X-Ray Astronomy, 2. ESA SP-296 (Noordwijk: ESTEC), 797
Boldt, E. 1992, in The X-Ray Background, ed. X. Barcons & A. C. Fabian (Cambridge: Cambridge Univ. Press), 116
Briggs, M. S. 1992, Ph.D. thesis, Univ. California, San Diego
Chen, L. W., Fabian, A. C., & Gendreau, K. C. 1997, MNRAS, 285, 449, astro-ph/9711083
Comastri, A., Setti, G., Zamorani, G., & Hasinger, G. 1995, A&A, 296, 1
Fabian, A. C. & Barcons, X. 1992, Ann. Rev. Astron. & Astrophys., 30, 429
Fichtel, C. A., Simpson, G. A., & Thompson, D. J. 1978, ApJ, 222, 833
Fukada, Y., Hayakawa, S., Kasahara, I., Makino, F., & Tanaka, Y. 1975, Nature, 254, 398
Gendreau, K. C. et al. 1995, PASJ, 47, L5
Giacconi, R., Gursky, H., Paolini, R., & Rossi, B. 1962, Phys. Rev. Lett., 9, 439
Gruber, D. E. 1974, Ph.D. thesis, Univ. of CA at San Diego
Gruber, D. E., Jung, G. V., & Matteson, J. L. 1989, in High Energy Radiations in Space, ed. A. C. Restor, Jr. & J. I. Trombka (AIP: New York), 232
Gruber, D. E. 1992, in The X-Ray Background, ed. X. Barcons & A. C. Fabian (Cambridge: Cambridge Univ. Press), 46
Hasinger, G. 1996, A&A Supp. Series, 120, 607
Hasinger, G. et al. 1998, A&A, 329, 482
Holt, S. 1992, in The X-Ray Background, ed. X. Barcons & A. C. Fabian (Cambridge: Cambridge Univ. Press), 33
Horstman, H. M., Cavallo, G., & Moretti-Horstman, E. 1975, Nuovo Cimento, 5, 255
Jung, G. V. 1986, Ph.D. thesis, Univ. of CA at San Diego
Jung, G. V. 1989, ApJ, 338, 972
Kappadath, S. C., et al. 1995, in Proc. 24th Intl. Cosmic-Ray Conf. (Rome), Vol. 2, 230
Kappadath, S. C., et al. 1996, A&A Supp. Series, 120, 619
Kinzer, R. L., Johnson, W. N., & Kurfess, J. D. 1978, ApJ, 222, 370
Kinzer, R. L., Jung, G. V., Gruber, D. E., Matteson, J. L., & Peterson, L. E. 1997, ApJ, 475, 361
Kniffen, D. A., et al. 1996, A&A Supp. Series, 120, 615
Kraushaar, W. L. et al. 1972, ApJ, 177, 341
Kurfess, J. D. 1996, A&A Supp., 120, 5
Levine, A. M. et al. 1984, ApJ Supp., 54, 581
Madau, P., Ghisellini, G., & Fabian, A. 1993, ApJ, 410, L7
Madau, P., Ghisellini, G., & Fabian, A. 1994, MNRAS, 270, L17
Mandrou, P. et al. 1997, Proc. 2nd INTEGRAL Workshop (ESA Pub.), 591
Marshall, F. E., Boldt, E. A., Holt, S. S., Miller, R. B., Mushotzky, R. F., Rose, L. A., Rothschild, R. E., & Serlemitsos, P. J. 1980, ApJ, 235, 4
Matteson, J. L. 1978, in Proc. AIAA 16th Aerospace Science Meeting, 78–35, 1
McIlwain, C. E. 1961, J. Geophys. Res., 66, 3681
Metzger, A. E., Anderson, E. C., Van Dilla, M. A., & Arnold, J. R. 1964, Nature, 204, 766
Naya, J. E., Barthelmy, S. D., Bartlett, L. M., Gehrels, N., Parsons, A., et al. 1998, ApJ, 499, L169, astro-ph/9804074
Nolan, P. L. & Matteson, J. L. 1983, ApJ, 265, 389
Rothschild, R. E., Mushotzky, R. F., Baity, W. A., Gruber, D. E., Matteson, J. L., & Peterson, L. E. 1983, ApJ, 269, 423
Sreekumar, P. et al. 1998, ApJ, 494, 523, astro-ph/970925
Stecker, F. W. & Salamon, M. H. 1996, ApJ, 464, 600, astro-ph/9609102
The, L.-S., Leising, M. D., & Clayton, D. D. 1993, ApJ, 403, 32
Trombka, J. I. et al. 1977, ApJ, 212, 925
Zdziarski, A. A. 1996, MNRAS, 281, L9

TABLE 1
Detector Properties

| Detector | Number | Energy (keV, nominal) | Area (cm<sup>2</sup>) | FOV (degrees, FWHM) | Geometry (cm<sup>2</sup>-ster) |
| --- | --- | --- | --- | --- | --- |
| Low Energy (LED) | 2 | 13–180 | 103 ea | 1.7$`\times `$20 | 3.0 |
| Medium Energy (MED) | 4 | 80–2100 | 42 ea | 17 | 3.97 |
| High Energy (HED) | 1 | 150–10000 | 120 | 30 | 100 |
# The Flux Variability of Markarian 501 in Very High Energy Gamma Rays ## 1 Introduction Three active galactic nuclei (AGN) have been discovered to be very high energy (VHE, E$`>`$300 GeV) $`\gamma `$-ray sources by the Whipple Observatory $`\gamma `$-ray collaboration: Markarian 421 (Mrk 421) (Punch et al. (1992)), Markarian 501 (Mrk 501) (Quinn et al. (1996)) and 1ES 2344+514 (Catanese et al. (1998)). These are the three closest BL Lacertae objects (BL Lacs), with redshifts in the range 0.0308–0.044, and are among the brightest at X-ray energies. They are all classified as X-ray selected BL Lacs (XBLs) as their synchrotron spectra extend into the X-ray range. A fourth BL Lac, PKS 2155-304, has been detected in VHE $`\gamma `$-rays by the University of Durham group (Chadwick et al. (1999)). The Energetic Gamma-Ray Experiment Telescope (EGRET) on board the Compton Gamma-Ray Observatory has detected at least 51 AGN at energies $`>`$100 MeV (Thompson et al. (1995); Mukherjee et al. (1997)). They are all members of the blazar class of AGN, which includes flat spectrum radio quasars and BL Lacs. Of the EGRET-detected blazars, 14 are BL Lacs, of which 12 are Radio Selected BL Lacs (RBLs) and only two are XBLs. Mrk 421 is the only VHE source in this catalogue and it is among the weakest. However, Mrk 501 has recently been detected at the 4$`\sigma `$ level with EGRET (Kataoka et al. (1999)). One of the most striking characteristics of the EGRET-detected blazars is variability: 42 of the 51 AGN exhibit variability (Mukherjee et al. (1997)). Variability time-scales as short as 4 hours have been observed (Mattox et al. (1997)). VHE observations of Mrk 421 and Mrk 501 have also revealed extreme variability (e.g. Gaidos et al. (1996); Quinn et al. (1996)). The VHE flux has been measured to vary by nearly a factor of 100 in Mrk 421 (McEnery et al. (1999)) and, as we show in the following sections, Mrk 501 has been measured with fluxes ranging from 0.1 to 5 times the Crab Nebula flux with the Whipple Observatory atmospheric Čerenkov telescope. The large collection area ($`3.5\times 10^5`$ m<sup>2</sup>) of the Whipple Observatory telescope permits sensitive studies of variability on time-scales inaccessible to space-based telescopes. Indeed, the shortest observed variability of any blazar at any $`\gamma `$-ray energy, a 30 minute duration flare observed from Mrk 421 (Gaidos et al. (1996)), was measured with this telescope. Both Mrk 421 and Mrk 501 have been closely monitored by the Whipple Collaboration since their discovery, with an ∼0.5 hr exposure/night being sufficient for detection of flaring activity. Prior to 1997, the $`\gamma `$-ray emission from Mrk 421 was generally observed to have a higher mean flux (Schubnell et al. (1996)) and to have been more frequently variable (Buckley et al. (1996)) than that of Mrk 501. In Spring 1997, the Whipple Collaboration observed Mrk 501 to be in an unprecedented high emission state at VHE energies, as subsequently confirmed by several independent Čerenkov imaging groups (Protheroe et al. (1998)). A public Compton Gamma-Ray Observatory Target of Opportunity was initiated in response to a request by the Whipple Collaboration in 1997. Evidence of correlated variability in data from the Whipple Observatory $`\gamma `$-ray telescope, the Oriented Scintillation Spectrometer Experiment (OSSE) and the All-Sky Monitor (ASM) of the Rossi X-ray Timing Explorer is presented in Catanese et al. (1997). 
During this observing campaign the energy output of Mrk 501 in VHE $`\gamma `$-rays was comparable to that in the 2–100 keV range but the variability amplitude was larger. The correlations seen may imply some relativistic beaming of the emission, given that the spectrum extends to $`>`$7 TeV (Samuelson et al. (1998)). There was also some indication that the optical U-band flux was higher on average in the month of peak $`\gamma `$-ray activity. Here, based on 4 years of data, we present a study of the variability of the $`\gamma `$-ray flux above ∼350 GeV from Mrk 501. Details of the observations are given in §2 and the analysis methodology, including $`\gamma `$-ray selection criteria and the test for variability, is described in §3. The results of the analysis are presented in §4 and their implications briefly discussed in §5. ## 2 Source Observations Observations were made with the Whipple Observatory 10 m atmospheric Čerenkov imaging telescope (Cawley et al. (1991)), located on Mt. Hopkins in southern Arizona. The 10 m reflector images the Čerenkov radiation from cosmic-ray and $`\gamma `$-ray initiated air-showers onto a high resolution camera mounted in the focal plane. Subsequent off-line analysis of the images (described below) facilitates the selection of candidate $`\gamma `$-ray events. The high resolution camera utilises fast photomultiplier tubes (PMTs) arranged in a hexagonal array, with inter-tube spacing of $`0\stackrel{}{\mathrm{.}}25`$. During 1995 and 1996 the camera consisted of 109 PMTs with a resulting field of view (FOV) of $`2\stackrel{}{\mathrm{.}}8`$. An event was recorded when any two of the inner 91 PMTs registered a signal $`>`$40 photoelectrons within an effective resolving time of 15 ns. For the 1997 observations a further 42 PMTs were added, resulting in a FOV of $`3\stackrel{}{\mathrm{.}}4`$. The trigger condition remained the same as for the 109 PMT camera. In Summer 1997 a new camera, containing 331 pixels, was installed. This camera has a FOV of $`4\stackrel{}{\mathrm{.}}8`$. The trigger condition for this enlarged camera required that any two of the 331 pixels produce a signal $`>`$40 photo-electrons within an effective resolving time of 8 ns. The telescope was also triggered artificially, after every 24 events for the 109/151 pixel cameras and once every second for the 331 pixel camera, to determine the background sky-brightness level in each PMT. Light-cones, which minimise the dead-space between PMTs and reduce the albedo effect, were used on the 109 and 151 pixel cameras, but were not yet installed on the 331 pixel camera. In general, two modes of observation are used: on/off and tracking. With the on/off mode the source is tracked continuously for 28 minutes and then, to estimate the background, a region offset in right ascension (RA) by 30 minutes (allowing 2 minutes slew time) is tracked. This has the disadvantage that an equivalent amount of observation time is spent looking away from the source. Alternatively, the tracking mode, where the background is estimated from the on source run itself (see §3.2), can be used. In this case an off source run is not required for each on source run, allowing continuous monitoring of an object. However, off runs are still needed to determine the response of the telescope to background events. Observations are typically made when the source zenith angle is less than 35° and are referred to as small zenith angle (SZA) observations. 
Observations at large zenith angles (LZA, typically 55° to 70°) may also be made. Increasing the zenith angle has the effect of increasing the energy threshold, but has the benefit of increasing the collection area. Thus, it is an excellent method for increasing photon statistics to facilitate the determination of the energy spectrum at higher energies. For a detailed description of the LZA technique see Krennrich et al. (1997). Since its discovery as a $`\gamma `$-ray source in 1995 (Quinn et al. (1996)), the VHE $`\gamma `$-ray emission from Mrk 501 has been monitored intensively with the Whipple Observatory 10 m telescope. Only SZA observations taken under good sky conditions have been considered in the analysis for variability presented here. Our selection includes data from 56, 50, 55 and 49 nights of observation in the Spring – Summer periods of 1995, 1996, 1997 and 1998, respectively. Table 1 summarises the resulting database. ## 3 Data Analysis ### 3.1 $`\gamma `$-ray Selection The vast majority of events detected by Čerenkov telescopes are cosmic rays. Candidate $`\gamma `$-ray events are selected on the basis of the shape and orientation of the Čerenkov images: $`\gamma `$-ray images are typically more compact and elliptical than background hadronic images and tend to have their major axes aligned with the source location in the FOV. Background cosmic ray events, on the other hand, have random orientations. Each image is first subjected to a cleaning procedure (Fegan (1997)) which suppresses pixels which are dominated by light from fluctuations of the night-sky background. A moment-fitting routine is then used to calculate various image parameters. The shape of each image is characterised by the parameters *length* and *width* and the orientation by $`\alpha `$, the angle between the major axis of the image and the line joining the source location in the FOV to the centroid of the image. In addition, for the data taken in 1998, an *asymmetry* cut has been included. $`\gamma `$-ray images have a cometary shape with a tail which points away from the source location in the FOV (Buckley et al. (1998)) and thus their intensity profiles have positive asymmetry. The larger FOV of the camera used in 1998 allows this parameter to be accurately determined, something which was not possible with the smaller FOV cameras. Before the application of shape and orientation cuts, a *software trigger* cut is also applied to eliminate events close to threshold, some of which are induced by noise fluctuations. The software trigger involves cuts on the image *size* (i.e. the total number of photoelectrons recorded), the counts in each of the brightest two tubes (*max1, max2*), and a requirement that at least three tubes above a low noise threshold (2.25$`\sigma `$, where $`\sigma `$ is the RMS sky-noise in a PMT, as determined from the artificially triggered events) be neighbours (*NBR3*). A *distance* cut is applied to eliminate images which are too close to the center of the camera and would have poor $`\alpha `$ reconstruction, and also those events which have occurred too close to the edge of the FOV and may be truncated. Due to the continuous evolution of the high resolution camera, and changes such as deterioration of mirror reflectivity through weathering and the presence or absence of light-cones, the optimum data analysis cuts differ for each year. 
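To make the selection procedure concrete, here is a minimal sketch of how such parameter cuts might be applied to a table of parametrized events. The field names and all numerical cut values are illustrative placeholders, not the actual Whipple cuts of Table 2, and the *NBR3* neighbour requirement is omitted for brevity.

```python
import numpy as np

def select_gamma_candidates(events, cuts):
    """Apply software-trigger, shape, distance and orientation cuts to
    moment-parametrized events.  `events` maps parameter names to NumPy
    arrays (one entry per event); `cuts` holds the bounds.  All names
    and numbers here are illustrative only."""
    keep = np.ones(len(events["size"]), dtype=bool)
    # software trigger: minimum size and minimum counts in the two brightest tubes
    for key in ("size", "max1", "max2"):
        keep &= events[key] >= cuts[key]
    # shape and distance cuts: lower and upper bounds (degrees for angular sizes)
    for key in ("length", "width", "distance"):
        lo, hi = cuts[key]
        keep &= (events[key] > lo) & (events[key] < hi)
    # orientation cut: image major axis points back at the source position
    keep &= events["alpha"] < cuts["alpha"]
    return keep

# Hypothetical cut values, for illustration only:
cuts = {"size": 400, "max1": 100, "max2": 80,
        "length": (0.13, 0.25), "width": (0.05, 0.12),
        "distance": (0.4, 1.0), "alpha": 15.0}
```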
The cuts used for a given telescope configuration are optimised on an independent data set, usually data taken on the Crab Nebula or, in the case of the 1998 data, Mrk 421. Table 2 lists the cuts used for each year's analysis. ### 3.2 $`\gamma `$-ray Rate and Flux Calculation There are slightly different analysis methods for the on/off and tracking observation modes. For the on/off observations the background is estimated from the off source run, which is assumed to be on a sky region which does not include a $`\gamma `$-ray source. This analysis mode has been discussed at length elsewhere (e.g. Kerrick et al. (1995); Catanese et al. (1998)). For tracking observations the background is estimated from the on source run itself. All of the $`\gamma `$-ray selection criteria apart from orientation ($`\alpha `$) are applied to the data. The background is then estimated from events which are not oriented towards the source. In this analysis, background events with values of $`\alpha `$ between 20° and 65° are used. Images having values of $`\alpha `$ between 65° and 90° are discarded because of possible systematic effects due to truncation at the camera’s edge. Once the number of events with orientations in the 20° to 65° range is known, the expected number of background events in the *signal* domain ($`\alpha `$ of 0° to 15°, or to 10° for data taken in 1998) can be estimated. Off source data recorded for this source and others can be combined to calculate a ratio, $`r\pm \mathrm{\Delta }r`$, of the number of events in the signal region to those in the 20° to 65° region in the absence of a source. In this case, the significance of a $`\gamma `$-ray excess, $`S`$, is given by: $$S=\frac{N_{on}-rN_{off}}{\sqrt{N_{on}+r^2N_{off}+(\mathrm{\Delta }r)^2N_{off}^2}}$$ (1) and the $`\gamma `$-ray rate ($`R\pm \mathrm{\Delta }R`$) is calculated from: $$R\pm \mathrm{\Delta }R=\frac{N_{on}-rN_{off}}{t}\pm \frac{\sqrt{N_{on}+r^2N_{off}+(\mathrm{\Delta }r)^2N_{off}^2}}{t}$$ (2) where $`N_{on}`$ is the number of counts in the $`\gamma `$-ray domain ($`\alpha <`$ 10° or 15°), $`N_{off}`$ is the number of counts in the 20° to 65° $`\alpha `$ range and $`t`$ is the duration of the observation. The inclusion of the statistical error on the tracking ratio effectively limits the amount (duration) of tracking data which can be usefully analysed for an excess. Once the duration of the tracking data exceeds that of the off source data used in the calculation of the tracking ratio, the error on the tracking ratio starts to dominate and to limit the significance of a detection. For the purpose of investigating possible differences in results produced by the on/off and tracking analyses, the Crab Nebula data were analysed with both methods (Quinn (1997)). The results demonstrated that the $`\gamma `$-ray rates derived using both methods were consistent and stable; the tracking analysis did not introduce any apparent variability in the rate. For the analysis presented in this paper, all of the on source data were combined with the tracking data and the resulting database analysed using the tracking analysis. The data presented here were taken with different telescope configurations having different sensitivities. It is therefore necessary to normalize when comparing the different data sets. To a first approximation, this can be achieved by converting the rates to fractions of the $`\gamma `$-ray rate from the Crab Nebula taken with the same telescope configuration and analysed with the same cuts. 
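A minimal sketch of Eqs. (1) and (2) in code (ours, not from the original paper; the input counts below are invented for illustration):

```python
import numpy as np

def tracking_excess(N_on, N_off, r, dr, t):
    """Eqs. (1) and (2): significance S and rate R +/- dR of a tracking
    observation.  N_on: counts in the signal alpha domain; N_off: counts
    with alpha in 20-65 deg; r +/- dr: tracking ratio; t: exposure (min)."""
    excess = N_on - r * N_off
    sigma = np.sqrt(N_on + r**2 * N_off + dr**2 * N_off**2)
    return excess / sigma, excess / t, sigma / t

# Invented counts, for illustration only:
S, R, dR = tracking_excess(N_on=250, N_off=600, r=0.33, dr=0.01, t=28.0)
print(f"S = {S:.1f} sigma, R = {R:.3f} +/- {dR:.3f} gamma min^-1")
```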
The Crab Nebula is believed to be a steady source of VHE $`\gamma `$-rays, as has been observed by the Whipple Collaboration over the past decade (Cawley et al. (1999)). To convert a given $`\gamma `$-ray rate to an integral flux, the rate as a fraction of the Crab Nebula rate was multiplied by $`(1.05\pm 0.24)\times 10^{-10}`$ cm<sup>-2</sup>s<sup>-1</sup>, which represents the integral Crab Nebula flux above 350 GeV (Hillas et al. (1998)), the threshold of the analysis presented here. ### 3.3 Test for Variability To search for temporal variability in the $`\gamma `$-ray flux we apply a $`\chi ^2`$ test for a constant rate. The $`\chi ^2`$ sum is converted into a probability ($`P_{\chi ^2}`$) that the emission is constant about the mean using the incomplete gamma function *gammq*(a,x) (Press et al. (1988)) $$P_{\chi ^2}=gammq\left(\frac{N-1}{2},\frac{\chi ^2}{2}\right)$$ (3) where N−1 is the number of degrees of freedom. The number of trials is taken into account by calculating the probability ($`P_{trials}`$) of $`P_{\chi ^2}`$ occurring in N trials from: $$P_{trials}=1-(1-P_{\chi ^2})^N.$$ (4) This method is used to test whether the distribution of measured $`\gamma `$-ray rates is consistent with statistical fluctuations about the mean for a range of time-scales. ## 4 Results The data have been analysed using the tracking analysis described above. This differs from that applied to the 1995 data by Quinn et al. (1996), in that a statistical error on the tracking ratio is now included and a 10% systematic error is no longer added to tracking results, as a careful study showed that the results of the on/off and tracking analysis methods are in close agreement (Quinn (1997)). In fact, these changes tend to cancel each other. For the 1995 data set as a whole we obtain a $`\gamma `$-ray rate of approximately 10% of that of the Crab Nebula ($`0.18\pm 0.02`$ min<sup>-1</sup>, giving a 9.1$`\sigma `$ excess). This rose to approximately 20% of the Crab Nebula rate for the following season (analysis of the 1996 data reveals an excess of 11.1$`\sigma `$ and a $`\gamma `$-ray rate of $`0.26\pm 0.02`$ min<sup>-1</sup>), indicating that the average emission level had doubled since the previous year. The monthly and nightly average rates, in fractions of the Crab Nebula rate, over all four years of Mrk 501 observation are shown in Figure 1. The rate in 1995 appears to have been constant with the exception of one night, MJD 49920, when it was approximately 4.6$`\sigma `$ above the average. A $`\chi ^2`$ test gives a chance probability of $`1.2\times 10^{-3}`$ (after accounting for trials) that the daily averages were constant during that month. The $`\chi ^2`$ probability that the daily averages are constant over the entire 5 months of observation is 0.17 (after trials). The probability that the emission is constant when averaged on monthly time-scales is 0.06 (after trials). Conversely, in the 1996 data set there were no obvious flaring episodes, but when averaged on time-scales of a month there is significant variability. The $`\chi ^2`$ probability that the emission is constant for the monthly averages is $`3.8\times 10^{-6}`$, after accounting for trials. When each month is examined for variability with the rates averaged on time-scales of a day, no significant variability is found. 
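(The test of Eqs. (3) and (4), applied throughout this section, can be sketched as follows. scipy's gammaincc is the regularized upper incomplete gamma function, i.e. the *gammq* of Press et al.; the rates, errors and trials factor below are invented for illustration.)

```python
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma = gammq

def constancy_probability(rates, errors, n_trials=1):
    """Eqs. (3)-(4): chi^2 probability that `rates` (with 1-sigma `errors`)
    are statistical fluctuations about their weighted mean, plus the same
    probability corrected for `n_trials` independent tests."""
    rates, errors = np.asarray(rates, float), np.asarray(errors, float)
    w = 1.0 / errors**2
    mean = np.sum(w * rates) / np.sum(w)
    chi2 = np.sum(((rates - mean) / errors) ** 2)
    dof = len(rates) - 1
    p = gammaincc(dof / 2.0, chi2 / 2.0)
    return p, 1.0 - (1.0 - p) ** n_trials

# Invented nightly rates (gamma min^-1) and errors, for illustration:
p, p_trials = constancy_probability([0.5, 0.9, 2.1, 0.7],
                                    [0.10, 0.12, 0.15, 0.10], n_trials=5)
```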
However, the $`\chi ^2`$ probabilities are smaller than for a similar analysis of the 1995 data, suggestive of increased day-scale flickering in the $`\gamma `$-ray emission. The results of the $`\chi ^2`$ test for variability of the monthly and daily averages are shown in Tables 3 and 4 respectively. Observations of Mrk 501 in 1997 began in January using the LZA technique (Krennrich et al. (1998)). The initial observations suggested that the flux was much higher than in previous years. Conventional SZA observations commenced as soon as possible in February. Other ground-based $`\gamma `$-ray experiments were notified and high emission levels were verified. In March a joint IAU circular by the CAT, HEGRA and Whipple groups (Breslin et al. (1997)) announced preliminary results. The Whipple observations continued through June and included observations made with the LZA technique. The results of the LZA data are presented elsewhere (Krennrich et al. (1998)). Our SZA observations revealed that the VHE $`\gamma `$-ray emission was very strong throughout Spring – Summer 1997. Significant variability was observed in the monthly averages. The emission appeared to increase steadily from February through May and then to level off in June. The daily rates also exhibit dramatic variability. Day-to-day changes in the flux by factors $`>`$4 were observed and on eight occasions the flux more than doubled between consecutive nights. On four occasions there were equally rapid decays in the rate. The average flux level for the season is 1.4 times that of the Crab Nebula, an increase by a factor of 14 from the level in 1995. The peak rate, observed on MJD 50554, is 3.7 times the Crab rate. The average rates for the 1997 data were calculated from only one run (the first with elevation above 55°) per night. This was done to remove a bias in the observing strategy whereby observations of Mrk 501 on a given night continued only if the source was in a very active state. The strong signal-to-background level in the 1997 data allowed a search to be made for variability on time-scales shorter than one day. For this test, data from each night on which there were three or more runs (approx. 1.5 hours, see §2) were analysed to test for run-to-run variability. A total of 24 nights satisfied this criterion. The $`\chi ^2`$ probability that the emission was constant was calculated for each of the nights; the distribution of these probabilities is shown in Figure 2a. For a source whose run-to-run variations are purely statistical this distribution should be flat, but there is an excess of nights with small probabilities. Assuming that the variations are purely statistical, the probability of getting seven nights in this first bin (width = 0.025) out of 24 trials is 1.45$`\times `$10<sup>-6</sup>. Of these seven nights, two show statistically significant variations within themselves. The probability for constant emission on MJD 50577 is $`5.2\times 10^{-6}`$ while for MJD 50607 the probability is $`5.8\times 10^{-8}`$ (after accounting for trials). The flux on MJD 50607 has a doubling time of ∼2 hours. We thus identify these two nights as exhibiting significant hour-scale variability, while the five other nights exhibit marginal variability. Figure 3 shows the $`\gamma `$-ray rates for the two nights with significant variability. A search of the 1997 data for variability on time-scales of less than half an hour has also been performed. 
For this test each of the 28 minute runs, 144 in total, was divided into three equal length intervals. Each triplet was then analysed for variability. No significant variations were found and the distribution of $`\chi ^2`$ probabilities (Figure 2b) does not indicate any excess of low probabilities. Hence, we see no evidence for significant sub-hour scale variability. The average $`\gamma `$-ray rate for the 1998 data set was 0.42$`\pm `$0.04 min<sup>-1</sup>, approximately 20% of the rate obtained from the Crab Nebula, i.e., on average the emission was much lower than in 1997. In fact, the average rates for March, April and May are comparable to the level of the initial detection in 1995. There were, however, two significant flaring events. The first occurred in early March, where an apparent rise and decay were observed. Unfortunately this flare is poorly sampled due to bad weather. The flux was observed to be relatively high (∼1.3 times the Crab Nebula flux) on MJD 50876 and on the following night a flux of ∼5.0 times that of the Crab Nebula was recorded. This is the largest flux detected to date from Mrk 501 by the Whipple Observatory 10 m telescope. For the next observation on MJD 50880 the measured flux was still relatively high (∼1.3 times the level of the Crab Nebula flux). A second flare occurred in June. The flux increased on two consecutive nights, peaking at ∼1.1 times the Crab Nebula flux on MJD 50991, then decayed on a similar time-scale. The average flux level immediately to either side of this flare was below the sensitivity of the telescope. ## 5 Discussion We have demonstrated that rapid variability, a common characteristic of blazars at all observed energies, is also present in the VHE $`\gamma `$-ray emission from Mrk 501. Our 4 year data set spans a remarkable change in the flux level from Mrk 501: the average yearly emission level exhibited a fourteen-fold increase between 1995 and 1997 and the average daily flux varied by a factor of ∼50 (see Figure 1). In 1997 large amplitude day-scale flares occurred frequently and were usually followed by equally rapid decays. Day-scale changes in flux by factors as large as 4.7 were observed. Episodes of significant hour-scale variability were detected, with one having a doubling time of 2 hours. In addition, there is evidence of consistent hour-scale variability which is not resolved in individual episodes. A variable flux from Mrk 501 was reported by at least four other atmospheric Čerenkov observatories in 1997 (see, e.g. Protheroe et al. (1998)). The data presented here suggest that Mrk 501 was more variable when the flux level was higher. However, this effect could also be due to the sensitivity of the telescope. At low flux levels it takes longer to accumulate a significant signal, so the search for short-term variability may be limited by poor statistics. To address this issue we performed a test to see if the day-scale variability observed in 1997 would have been detected in 1996 and/or 1995, and if the month-scale variability seen in 1996 and 1997 would have been detected in 1995. We calculated the percentage deviations about the mean level from a period where significant variability was observed and then, using the mean signal and background level from another period, calculated the signal (and statistical error) which would have been observed given the same percentage deviations about that mean. 
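A rough sketch of this cross-epoch test, as we read it from the description above (the inputs are invented; in the actual analysis the statistical error would be derived from the signal and background counts of the quieter epoch, rather than supplied as a single representative number):

```python
import numpy as np
from scipy.special import gammaincc

def rescaled_variability(rates_var, mean_quiet, err_quiet):
    """Impose the fractional deviations of a variable epoch onto the mean
    level of a quieter epoch and return the chance probability that the
    resulting light curve is constant (chi^2 about the mean).  A single
    representative error `err_quiet` is used here for simplicity."""
    rates_var = np.asarray(rates_var, float)
    frac_dev = (rates_var - rates_var.mean()) / rates_var.mean()
    simulated = mean_quiet * (1.0 + frac_dev)
    chi2 = np.sum(((simulated - mean_quiet) / err_quiet) ** 2)
    dof = len(simulated) - 1
    return gammaincc(dof / 2.0, chi2 / 2.0)   # chance probability
```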
The results are that neither the 1997 month-scale nor day-scale variability would have been detected at a significant level if present in 1995. However, the 1996 month-scale variability would have been significant if present in 1995 (chance probability of ∼10<sup>-6</sup>), while the 1997 degree of day-scale variability, if present in 1996, would have been detectable (chance probability ∼10<sup>-7</sup>). Thus we conclude that there was a change in the flaring characteristics, in addition to the change in the mean flux level, between the different observing seasons. The VHE $`\gamma `$-ray emission from Mrk 501 exhibits rapid variability similar to that of the emission from Mrk 421. A major difference between the two objects is that the emission from Mrk 501 seems to have a base-level, which changes on monthly and yearly time-scales, whereas the VHE $`\gamma `$-ray emission from Mrk 421 has been described as consisting of a series of rapid flares with no underlying baseline (Buckley et al. (1996)). The variability of the VHE $`\gamma `$-ray emission of Mrk 501 in 1995 and 1996 was similar to that at other wavelengths, with small amplitude, slow variations being more common than fast, large amplitude flares. The increase in the VHE $`\gamma `$-ray power and variability in 1997 was accompanied by an increase in the hard X-ray power and an extension of the synchrotron spectrum to at least 100 keV (Catanese et al. (1997)). This is consistent with an inverse-Compton mechanism for producing VHE $`\gamma `$-rays. However, our observations of Mrk 501 to date cannot discriminate between the inverse-Compton (electron) models and those where the dominant particles producing $`\gamma `$-rays in the jet are protons. More densely sampled light curves, covering as broad a wave-band as possible, are needed to provide more insight into the mechanisms responsible for the VHE radiation from BL Lac objects. Future, more sensitive $`\gamma `$-ray observations of AGN with proposed detectors such as GLAST, HESS, MAGIC and VERITAS will allow the structure of flares on shorter time-scales to be determined. We acknowledge the technical assistance of K. Harris and E. Roache. This research is supported by grants from the US Department of Energy and by NASA, by PPARC in the UK and by Forbairt in Ireland.
# Weak-Scale Hidden Sector and Energy Transport in Fireball Models of Gamma-Ray Bursts ## Abstract The annihilation of pairs of very weakly interacting particles in the neighbourhood of gamma-ray sources is introduced here as a plausible mechanism to overcome the baryon load problem. This way we can explain how these very high energy gamma-ray bursts can be powered at the onset of very energetic events like supernova (collapsar) explosions or coalescences of binary neutron stars. Our approach uses the weak-scale hidden sector models in which the Higgs sector of the standard model is extended to include a gauge singlet that only interacts with the Higgs particle. These particles would be produced either during the implosion of the red supergiant star core or in the aftermath of a neutron star binary merger. The whole energetics and timescales of the relativistic blast wave, the fireball, are reproduced. preprint: IC/99/22 The discovery of the afterglow associated with some of the gamma–ray bursts (GRBs) and the isotropy of the emitted radiation both support the view that the GRBs occur at cosmological distances with a redshift of the order of one. In this sense, GRBs offer a new observational tool for probing the early universe. There have been several suggestions concerning the generation mechanisms as well as the distributions of photons in the core of the astrophysical object, i.e., the central engine of the GRBs. Among these, for example, the resonant production of gamma rays during the collision of two neutron stars is one possible mechanism. Pertinent to a typical neutron star, the core region of the progenitor has a characteristic radius of $`R_0`$ ∼ 10 km with roughly constant temperature $`k_BT_0`$ ∼ 50 MeV and matter density $`\rho _0`$ ∼ 10<sup>14</sup> g/cm<sup>3</sup>. The daily GRBs require a burst duration of 1 ms ≲ τ ≲ 0.1 s with a total energy release of ∼10<sup>53</sup> erg. Given the extensive parameters of the progenitor and the observed power spectrum, it is known that the gamma photons are trapped in the core region. Then one is to think about other possible agents to transfer the correct amount of energy in the given time interval through the baryonic load. One possible alternative is neutrinos; however, the mixing of the flavour neutrinos with the sterile one is strongly suppressed at such matter densities, and thus the oscillation picture runs into difficulties. Another alternative mechanism would come through the axions; however, the transferred power diminishes if the breaking scale of the Peccei–Quinn symmetry gets higher. In fact, the required axion mass cannot be reproduced in known axion models at all. In this letter we work out a different scenario for transporting the energy outside the GRB central engine. The basic agent of the process is a CP–even, presumably light, scalar particle, $`S`$, which has no baryonic charges. Such singlets have been proposed to take into account the nonobservation of the standard model (SM) Higgs particle by increasing its invisible decay rate. For clarity of the discussion, we write down the effective Lagrangian describing the interactions between this SM–singlet and the photons as $`L_{int}\frac{1}{2}m_s^2S^2+\lambda _SS^4+\lambda _\gamma S^2A_\mu A^\mu `$ (1) where $`A_\mu `$ is the photon field, and $`m_s`$ and $`\lambda _S`$ designate the mass and the quartic coupling of the singlet, respectively. 
The original model has an unbroken U(1) symmetry associated with the complex nature of this singlet. However, for the purpose of this work the global phase of the singlet is not important, so we take it real. In the framework of the hidden Higgs sector models the couplings above take the form $`\lambda _S\lambda _S^0+\kappa ^2,\lambda _\gamma \kappa \left(\frac{\alpha ^3}{\pi }\right)^{1/2}A_\gamma `$ (2) where $`\lambda _S^0`$ is the bare quartic coupling of the theory, and $`\kappa m_h`$ stands for the $`hSS`$ coupling. In writing these expressions we neglected invariant masses in a given channel compared to the Higgs boson mass. The vertex factor $`A_\gamma `$ for the $`h\gamma \gamma `$ coupling is a slowly varying function of $`\sqrt{s}`$, and it is of the order of one. In all computations below we will parametrize results in terms of $`\lambda _S`$ and $`\lambda _\gamma `$ without going back to the relations above. However, one keeps in mind that numerically $`\lambda _\gamma 𝒪(10^{-4})\kappa `$. The effective Lagrangian (1) describes a real scalar field interacting with itself and photons. The scattering processes following from this Lagrangian are shown in Fig. 1. Here diagram (a) represents $`\gamma \gamma \to SS`$ scattering, through which the conversion of the photons to singlets in the core of the GRB burster occurs. Furthermore, the conversion of singlets to photons outside the (baryon–depleted region of the) GRB central engine happens through the same process in the backward direction. To have a description of the energy transport through the strong baryonic load, it is convenient to start with the conversion of the photons to the singlets. The relevant cross section reads as $`\sigma (\gamma \gamma \to SS)=\frac{\lambda _\gamma ^2}{24\pi s}\left(1-\frac{4m_s^2}{s}\right)^{1/2}`$ (3) where $`\sqrt{s}`$ ∼ 100 MeV is the total invariant mass of the annihilating photons. This process occurs in the core of the progenitor, and the produced singlet pairs move out through the surrounding baryonic loading. Before describing the journey of the singlets in the baryon load, it is convenient to compute the rate of energy conversion from photons to singlets. Denoting the four-momenta of the photons by $`k_1=(\omega _1,\stackrel{}{k}_1)`$ and $`k_2=(\omega _2,\stackrel{}{k}_2)`$, the total amount of energy converted to singlet pairs per unit time per unit volume reads $`Q=\int \frac{d^3\stackrel{}{k}_1}{(2\pi )^3}\int \frac{d^3\stackrel{}{k}_2}{(2\pi )^3}n(\omega _1)n(\omega _2)(\omega _1+\omega _2)v_{rel}\sigma (\gamma \gamma \to SS)`$ (4) where $`n(\omega )`$ is the equilibrium Bose population of the photons and $`v_{rel}=1-\stackrel{}{k}_1\stackrel{}{k}_2/(\omega _1\omega _2)=s/(2\omega _1\omega _2)`$ is the relative velocity of the two annihilating photons. After integrating $`Q`$ over the volume of the core region, the total luminosity for photon–to–singlet conversion becomes $`L_{\gamma \gamma \to SS}\lambda _\gamma ^2\times 10^{70}\left(\frac{T}{T_0}\right)^5\left(\frac{R}{R_0}\right)^3\mathrm{erg}\mathrm{s}^{-1}`$ (5) where $`𝒪(m_s^2/(k_BT)^2)`$ terms are neglected in computing $`Q`$. This is a good approximation in the burster core, where the temperature is high enough. Assuming that this will be the luminosity observed on Earth, a comparison with the GRB standard candle luminosity requires $`\lambda _\gamma `$ ∼ 10<sup>-8</sup>. 
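As a quick order-of-magnitude check of this bound (our own arithmetic, using only the scaling of Eq. (5) with T = T₀, R = R₀ and a fiducial 10⁵³ erg s⁻¹ burst luminosity):

```python
import math

# Eq. (5): L = lambda_gamma^2 * 1e70 * (T/T0)^5 * (R/R0)^3 erg/s
L_burst = 1.0e53                 # fiducial GRB luminosity, erg/s
T_ratio = R_ratio = 1.0          # core at T = T0, R = R0
lam_gamma = math.sqrt(L_burst / (1.0e70 * T_ratio**5 * R_ratio**3))
print(f"lambda_gamma ~ {lam_gamma:.1e}")   # ~3e-9, i.e. of order 10^-8
```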
As will be discussed later, $`\lambda _\gamma `$ is a loop–induced quantity in the weak–scale hidden Higgs sector models so that such a small number is naturally expected there. Despite all these, however, what is observed on Earth is $`\mathcal{L}_{SS\to \gamma \gamma }`$, that is, one has to convert the singlets back to photons to simulate the experimental conditions, so that these naive bounds on $`\lambda _\gamma `$ may vary. The singlets, after being pair–produced by photon–photon annihilations in the core region, travel through the strong baryonic load towards the baryon–depleted region outside the GRBs burster. Since there is no interaction with the baryons they do not feel the baryonic load at all, and would move freely along radially outward trajectories were it not for their self-interactions. It is clear that the temperature of the host baryon distribution does not affect the distribution and dynamics of the singlets as they can never come to thermal equilibrium with the baryons. In this sense, even if the star is quite cold, with a small fraction of $`\mathrm{MeV}`$ temperature in the optically thin region, the singlets themselves can be quite energetic to generate the $`SS\to \gamma \gamma `$ reaction. Namely, what the singlets take out of the star is the energy accumulated in the gamma photons, and this happens independently of the temperature and density distribution of the baryons. The singlet self-interactions are depicted in diagrams (b) and (c) of Fig.1. Both diagrams are generated by the singlet quartic coupling in the Lagrangian. A close inspection of these diagrams reveals some important properties. The $`SS\to SS`$ scattering depicted in Fig.1(b) preserves the number of singlets, is a contact interaction, and is kinematically operative for $`\sqrt{s}\geq 2m_s`$. The $`SS\to SSSS`$ scattering in Fig.1(c), on the other hand, doubles the number of singlets, is a long–range interaction with range $`m_s^{-1}`$, and is kinematically allowed when $`\sqrt{s}\geq 4m_s`$. The relevant cross sections are estimated to be $`\sigma \left(\mathrm{Fig}.1(b)\right)=\frac{\lambda _s^2}{16\pi s}\gg \sigma \left(\mathrm{Fig}.1(c)\right)\sim 𝒪\left(\frac{\lambda _s^4}{(2\pi )^4s}\right)`$ (6) so that the cross section for $`SS\to SSSS`$ scattering is suppressed by two additional powers of $`\lambda _s`$ and receives further phase space suppressions. The most important quantity describing the motion of the singlets towards the GRBs baryon–depleted region is their mean free path. Having no interactions with the baryons, the singlet mean free path would be infinitely long were it not for the singlet self-interactions depicted in diagrams (b) and (c) of Fig.1. The total mean free path obeys the relation $`\ell ^{-1}=\ell _{(b)}^{-1}+\ell _{(c)}^{-1}`$ (7) where the subscripts refer to the diagrams in Fig.1. Therefore, the total path over which singlets move freely is dominated by the smaller of the individual mean free paths, i.e. by the larger of the two inverse contributions (for Fig.1(b) and Fig.1(c), respectively) $`\ell _{(b,c)}^{-1}=\int \frac{d^3\stackrel{}{p_t}}{(2\pi )^3}n(E_t)v_{rel}\left(1-\frac{4m_s^2}{s}\right)^{1/2}\sigma `$ (8) where $`n(E_t)`$ is the equilibrium Bose population for the target singlet and $`v_{rel}\simeq s/(2EE_t)`$ is the relative velocity of the incident and target singlets with respective four–momenta $`p=(E,\stackrel{}{p})`$ and $`p_t=(E_t,\stackrel{}{p_t})`$. Here we take the phase space density of singlets as a Bose distribution, ignoring the possibility of free streaming.
In any case the resulting mean free path will be a conservative estimate of the actual one. As Eq.(8) clearly suggests, the larger the cross section, the smaller the corresponding mean free path. Using the expressions for the cross sections in (6) one can make the rough estimate $`\ell \simeq \ell _{(b)}\simeq \left(\frac{E}{k_BT_0}\right)\left(\frac{50\mathrm{MeV}}{k_BT_0}\right)\left(\frac{10^{-8}}{\lambda _s}\right)^2100\mathrm{km}`$ (9) neglecting the terms $`𝒪\left(m_s^2/k_B^2T_0^2\right)`$. This mean free path results solely from the self-interactions of the singlets; that is, it is the singlets themselves which prevent their further flight. In particular, it is not the baryons that limit their motion, so it does not matter if $`\ell `$ happens to fall inside or outside the baryon load region; literally, the singlets will remain confined within a sphere of mean radius $`𝒪(\ell )`$ measured from the core of the progenitor. To clarify this point further one recalls, for instance, neutrino propagation. In that case the mean free path is determined by the neutrino–baryon interactions, and if it is outside the GRBs baryon load region, neutrinos get out of the astrophysical source; otherwise they are trapped by the baryons and form a thermal neutrino sphere. Therefore, the formation of the singlet sphere follows only from their self-interactions. Due to their self-interactions in Fig.1(b) and (c) singlets will form a thermalized cloud of particles whose number and energetics will change with a chain of such scatterings. As mentioned before, the $`2\to 2`$ scattering in Fig.1(b) preserves the number of singlets and plays an important role in restricting the singlets to have the finite path (9). The $`2\to 4`$ scattering in Fig.1(c), however, is a long–range interaction and it modifies the number of singlets. Due especially to its long–range nature it is effective everywhere in the singlet sphere, and causes largely separated singlet pairs to annihilate into four new singlets. This interaction, thus, increases the number of singlets and reduces the mean energy per singlet. The resulting cloud of singlets will thermalize itself through these self-interactions at a temperature much lower than the burster core temperature. At any point inside the singlet sphere there will be singlets coming from every direction, which is important in computing the energy accumulation in a given region. If singlets were moving along radially outward trajectories there would be a strong geometrical suppression factor for the energy deposition . As mentioned above, because of the $`2\to 4`$ process in Fig.1(c) the total number of singlets increases, and thus the average energy per singlet decreases. Similar to the electromagnetic showers initiated by photons, a straightforward computation of the total number of produced singlets as the mean energy per singlet drops from $`E_0`$ to a critical energy $`E_c`$ via the $`2\to 4`$ scattering gives $`N=N_0\left[1+(E_0-E_c)/E_c\right]`$ where $`N_0`$ is the initial number of singlets, given by the number of photons. In the problem at hand, $`E_0\simeq k_BT_0`$ and $`E_c\simeq k_BT\simeq 2m_s`$, where the latter follows from the kinematic blocking of $`2\to 4`$ scatterings. The number of singlets per unit volume can be computed over a sphere of radius $`\ell `$: $`\rho =3N/(4\pi \ell ^3)`$.
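The two estimates just quoted are easy to reproduce numerically. The sketch below uses illustrative inputs only ($`E\sim k_BT_0`$, $`\lambda _s=10^{-8}`$ and $`E_c=1`$ MeV are assumptions, not values fixed by the paper) to evaluate the mean free path (9) and the shower-like multiplication factor $`N/N_0`$:

```python
import math

# Hedged sketch: evaluate Eq. (9) and N = N0 [1 + (E0 - Ec)/Ec].
# All inputs below are illustrative assumptions.
kT0 = 50.0        # core temperature in MeV
E   = kT0         # take a typical singlet energy E ~ k_B T0
lam_s = 1e-8      # illustrative quartic coupling

ell = (E / kT0) * (50.0 / kT0) * (1e-8 / lam_s) ** 2 * 100.0   # km, Eq. (9)
print(f"mean free path ~ {ell:.0f} km")

E0, Ec = 50.0, 1.0    # MeV; Ec ~ 2 m_s is an assumed critical energy
N_over_N0 = 1.0 + (E0 - Ec) / Ec
print(f"N/N0 = {N_over_N0:.0f}")   # mean energy per singlet drops by the same factor
```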
Since the singlet cloud eventually thermalizes, it is convenient to use the usual Bose population for the number of singlets per unit volume per unit momentum state, with an appropriate scaling to reproduce the singlet density $`\rho `$: $`\rho =K\int \frac{d^3\stackrel{}{k}}{(2\pi )^3}n(E)`$ (10) where $`n(E)`$ is the Bose population of the singlets at temperature $`T`$, and $`K`$ is a constant for reproducing $`\rho `$. The next thing one needs for computing the electromagnetic power generated by the singlets is the $`SS\to \gamma \gamma `$ cross section $`\sigma (SS\to \gamma \gamma )=\frac{\lambda _\gamma ^2}{8\pi s}\left(1-\frac{4m_s^2}{s}\right)^{1/2}`$ (11) to be compared with the $`\gamma \gamma \to SS`$ cross section in (3). Then the computation of the luminosity proceeds in exact similarity with (4), after replacing the cross section there by (11): $`\mathcal{L}_{SS\to \gamma \gamma }\simeq \left(\frac{R_0}{\ell }\right)^3\left(\frac{T_0}{T}\right)^3\mathcal{L}_{\gamma \gamma \to SS},`$ (12) where a factor of $`(T_0/T)^2`$ follows from $`(E_0/E_c)^2`$. $`\mathcal{L}_{SS\to \gamma \gamma }`$ is computed over the volume of the singlet sphere. Thus, the larger the mean free path, the higher the luminosity suppression factor. However, the higher the initial (core) temperature, the higher the luminosity emitted. Since both effects scale with the same power, it is clear that no energy is lost in the global process, and thus the mechanism suggested here is able to effectively carry out the overall energy released in the GRBs central engine, as in the context of fireball models. Once the energy is out of the baryon loading, a relativistic blast wave of electron-positron pairs and radiation is formed, i. e., a fireball, which cleans away the burster environment, pushing out rather matter-free debris to produce the observed burst when colliding with the external interstellar medium. The no-time-delay feature of this scenario makes it suitable to explain bursts with both rapid risetime and prompt afterglows, as observed in GRB990123. The potential role of this mechanism in triggering GRBs from supermassive star explosions is an issue currently being pursued. To conclude, we have investigated the viability of the hidden Higgs sector models for making the energy transport in GRBs realizable in the context of fireball scenarios. As the discussions in the text show, in such models transport of the energy from the core to the outside is quite efficient, and the resulting luminosity agrees with the astronomical observations. The particle physics scenarios with neutrinos and axions are not as efficient as the present model due to the suppression of the neutrino mixing angle and the smallness of the axion mass, respectively. In the present model, conversion processes are mediated by the Higgs particle. On the other hand, transport of the energy from the core to the outside is done by the singlets, which have a rather large mean free path compared to neutrinos. In the present scenario singlets are light enough to be pair-produced by the photon annihilations, and such a light singlet does not contradict the present collider data as it affects the precision observables only at two and higher loop levels. The model employed here has two free parameters, the coupling constants $`\lambda _\gamma `$ and $`\lambda _S`$, both hiding the explicit dependence on the singlet mass $`m_s`$ and Higgs mass $`m_h`$.
The first parameter is constrained to be around $`\lambda _\gamma \approx 10^{-8}`$ by the current GRBs BATSE observations, and its value could be measured at the $`\gamma \gamma `$ mode of the TESLA collider. As a final remark, the potential realization of this scenario in the astrophysical sources triggering GRBs would render it a viable pathway for testing some of the extensions of the standard model of particle physics that are introduced to account for the overall GRBs observational properties (energetics, timescales, spectra, etc.), such as the one suggested here. ###### Acknowledgements. We would like to thank A. Dar, A. Kusenko, S. Nussinov and A. Yu. Smirnov for helpful suggestions and fruitful discussions on this work.
no-problem/9903/hep-lat9903022.html
ar5iv
text
## 1 Introduction Calorons are characterised by their holonomy, defined by the value of the Polyakov loop at spatial infinity. When non-trivial, it resolves the fact that a caloron is built from constituent monopoles, their mass ratios directly determined by the holonomy . These solutions differ from the (deformed) instantons described by the Harrington-Shepard solution , for which the holonomy is trivial. What we find by (improved ) cooling on a finite lattice, to relatively high accuracy, is $`SU(2)`$ configurations that fit these infinite volume caloron solutions for arbitrary constituent monopole mass ratios. Twist in the time direction constrains the masses of the two constituent monopoles to be equal. The constituent nature becomes evident when the instanton scale parameter $`\rho `$ is larger than the time extent $`\beta `$ (inverse temperature) of the system. The masses of the monopoles are for $`SU(2)`$ proportional to $`\omega `$ and $`{\scriptscriptstyle \frac{1}{2}}-\omega `$, where $`\omega `$ ($`0\leq \omega \leq {\scriptscriptstyle \frac{1}{2}}`$) follows from the trace of the holonomy: $`2\mathrm{cos}(2\pi \omega )`$. The distance between the monopole constituents is given by $`\pi \rho ^2/\beta `$. At $`\rho /\beta \ll 1`$ the constituents therefore hide deep inside the core of the instanton and the non-trivial holonomy plays no discernible role. But for $`\rho /\beta \gg 1`$ the situation is opposite; the instanton becomes static and will dissolve into two BPS monopoles . The transition occurs for $`{\scriptscriptstyle \frac{1}{2}}\beta <\rho <\beta `$. When, however, the holonomy is trivial, one of the monopoles is massless and will hide in the background. Charge one $`SU(N)`$ calorons have $`N`$ constituent monopoles for non-trivial holonomy. These have the same location in time, but the spatial position of each constituent monopole can be arbitrary. There are (at fixed holonomy) $`N-1`$ phases associated to the residual $`U(1)^{N-1}`$ gauge symmetry that leaves the holonomy invariant. The total number of parameters describing these calorons is therefore $`4N`$. One may speculate that the $`N-1`$ phases are replaced in a finite volume by the holonomy itself, indeed described by $`N-1`$ eigenvalues taking values in $`U(1)`$ ($`\mathrm{exp}(2\pi i\omega )`$ for $`SU(2)`$). Also it is likely that, in general, a charge $`Q`$ caloron is characterised by $`NQ`$ constituent monopoles, which we confirm for a $`Q=2`$ caloron solution obtained from cooling. At zero temperature it is tempting to explain the $`4NQ`$ parameters of an $`SU(N)`$ charge $`Q`$ instanton in terms of the positions of $`NQ`$ objects . Indeed, there are charge $`Q=1/N`$ instanton solutions on a torus with twisted boundary conditions, whose four parameters specify its position . Subdividing a given finite volume in boxes with the appropriate twisted boundary conditions, such that each cell supports a $`Q=1/N`$ instanton, provides an exact solution that has $`NQ`$ lumps. In ref. it is suggested that a typical self-dual configuration would appear as an ensemble of $`N`$ randomly placed lumps of charge $`Q=1/N`$, whose locations would account for the $`4NQ`$ parameters. The results at finite temperature presented here suggest that the assignment of $`Q=1/N`$ charge to each lump might only hold on the average. Our results point to the usefulness of studying the dynamical role of these configurations.
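A minimal numerical sketch of the $`SU(2)`$ constituent-monopole bookkeeping just described (directly evaluating the mass and separation formulas quoted above; the sample values of $`\omega `$ and $`\rho `$ are illustrative):

```python
import math

# Hedged sketch of SU(2) caloron constituent kinematics: masses
# 16*pi^2*omega/beta and 16*pi^2*(1/2 - omega)/beta, separation pi*rho^2/beta.
def su2_caloron(omega, rho, beta=1.0):
    m1 = 16.0 * math.pi**2 * omega / beta
    m2 = 16.0 * math.pi**2 * (0.5 - omega) / beta
    d  = math.pi * rho**2 / beta
    return m1, m2, d

for omega, rho in [(0.25, 0.5), (0.25, 2.0), (0.125, 2.0)]:
    m1, m2, d = su2_caloron(omega, rho)
    print(f"omega={omega}: masses {m1:.1f}, {m2:.1f}; separation d/beta={d:.2f}")
# For rho/beta >~ 1 the separation exceeds beta and two lumps become visible;
# the two masses always add up to the instanton action scale 8*pi^2/beta.
```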
A first attempt in that direction is hampered by the fact that at high temperatures, where the constituent monopoles should be well separated, the fluctuations are so large that on average topological charge cannot be supported over large enough domains of space-time to capture the configurations with cooling . On a semiclassical basis one is tempted to argue against non-trivial holonomy. It polarises the vacuum at infinity and raises the energy density above the one with a trivial holonomy . But now that we have seen that these BPS bound states can be supported in a finite volume, it is time to acknowledge this as an irrelevant objection, given the non-perturbative and non-trivial nature of the QCD vacuum. As a consequence, constituent monopoles, at least at high temperatures, are tangible objects that do not depend on a choice of Abelian projection , which till now has been used to address the monopole content of the theory . In extracting the non-trivial topological content of the theory, constituent monopoles introduce an extra parameter: their mass, $`16\pi ^2\omega /\beta `$. Up to now only the maximal mass, $`8\pi ^2/\beta `$, of such a BPS monopole was considered. It arises in terms of the caloron with trivial holonomy, described by the Harrington-Shepard solution . Rossi showed that at high temperature, equivalent to a large scale parameter, this solution indeed becomes a BPS monopole . In section 2 we discuss the numerical procedure of constructing the configurations. Apart from cooling with improved actions, twisted boundary conditions are used as a tool for biasing the cooling towards non-trivial holonomy. The twist can then be removed, while preserving the non-trivial holonomy and the constituent monopole nature of the configuration, although it should be pointed out that no exact charge one instanton solutions can exist on $`T^4`$, which remains true at finite temperature. Interesting in this respect is that the well-established $`Q=1/2`$ instanton solutions that occur with suitable combinations of spatial and temporal twists (so-called non-orthogonal twist) can be argued to become a single static BPS monopole in the infinite volume limit at finite temperature. This is discussed in section 3. Configurations of higher charge are discussed in section 4 and we conclude with some speculations and possible applications. An appendix summarises the formulae for the $`SU(2)`$ analytic caloron solutions. ## 2 Non-trivial holonomy from time-twist For finite temperature ($`T=1/\beta `$) and volume ($`L^3`$, $`L\gg \beta `$) caloron configurations with non-trivial holonomy were discovered on lattices with twisted boundary conditions. Starting with a random configuration and applying a standard cooling algorithm, one frequently reaches $`Q=1`$ self-dual configurations which are stable under many cooling steps. These configurations are later analysed. An automatic peak-searching routine identified one or (actually more frequently) two lumps in them. These are our candidate caloron configurations on the lattice. Below, we will show how twist in the time direction can help in bringing about non-trivial holonomy. To appreciate the ease with which twist can be implemented on the lattice, and because twist has been a very useful tool , neglected by large parts of the lattice community, we think it is useful to review the notion of twist-carrying plaquettes that introduce twist by modifying the lattice action , but not the measure.
In the initial formulation of ’t Hooft , $`SU(N)`$ twisted boundary conditions were implemented by defining gauge functions $`\mathrm{\Omega }_\mu (x)`$ (which are assumed independent of $`x_\mu `$), such that with $`a^\mu `$ the periods of the torus in the four directions ($`a_\mu ^\nu =L_\mu \delta _{\mu \nu }`$) $$U_\nu (x+a^\mu )=\mathrm{\Omega }_\mu (x)U_\nu (x)\mathrm{\Omega }_\mu ^{\dagger }(x+\widehat{\nu }),$$ (1) here re-formulated for a lattice of size $`\prod _\mu N_\mu `$. Calculating $`U_\nu (x+a^\mu +a^\lambda )`$ in two ways shows that for all $`x`$ one should have $$\mathrm{\Omega }_\mu (x+a^\lambda )\mathrm{\Omega }_\lambda (x)=Z_{\mu \lambda }\mathrm{\Omega }_\lambda (x+a^\mu )\mathrm{\Omega }_\mu (x),$$ (2) with $`Z_{\mu \lambda }=\mathrm{exp}(2\pi in_{\mu \lambda }/N)`$ an element of the center of the gauge group. (We define $`k_i=n_{0i}`$ and $`m_i={\scriptscriptstyle \frac{1}{2}}\epsilon _{ijk}n_{jk}`$ to distinguish the twist in the time and space directions respectively). The center freedom arises because $`U_\mu (x)`$ is invariant under constant center gauge transformations (i.e. the gauge field is in the adjoint representation). In the presence of site variables (fields in the fundamental representation) one is required to put all $`Z_{\mu \nu }`$ equal to 1. We now perform the following change of variables $$U_\mu ^{\prime }(x)=U_\mu (x)\mathrm{\Omega }_\mu (x),\text{for}x_\mu =N_\mu -1.$$ (3) As a consequence, the plaquettes at $`x_\lambda =N_\lambda -1`$ and $`x_\mu =N_\mu -1`$ (for any value of the other two components of $`x`$) can be shown to have acquired an additional factor $`Z_{\lambda \mu }`$. These corner plaquettes are called twist-carrying, and the change of variables has absorbed the twist in the action by multiplying these plaquettes with the appropriate center element (the action involves the real part of the plaquette variables after this multiplication). The location of the twist-carrying plaquette is arbitrary, as one is free to choose the boundary of the box used for defining the torus. Alternatively, the twist-carrying plaquette can be moved around by a periodic gauge transformation. It corresponds to the non-Abelian analogue of a Dirac string, and is at the heart of ’t Hooft’s definition of magnetic flux for non-Abelian gauge theories . Thus, twist is introduced by the trivial modification of the weights of the plaquettes in terms of multiplication with appropriate center elements and causes no computational overhead. Note that we have just shown that if $`Z_{\mu \nu }=1`$ for all $`\mu `$ and $`\nu `$, in a suitable gauge the links can be chosen periodic without changing the weights of the plaquettes. In the continuum, however, there remains an obstruction to making the gauge field periodic when the topological charge of the configuration is non-trivial . This shows that on the lattice, only the center charges are unambiguously defined. Interestingly this includes configurations that in the continuum would be assigned a non-trivial fractional Pontryagin index (so-called twisted instantons). To understand what the effect of the twist is on the holonomy, we use the observation that the presence of the $`Z_N`$ flux can be measured by taking a Polyakov loop in the $`a^\lambda `$ direction, which when translated over a period in the $`a^\mu `$ direction picks up a factor $`Z_{\mu \lambda }`$.
$$P_\lambda (x)=\frac{1}{N}\mathrm{Tr}P\mathrm{exp}\left(\int _0^1A_\lambda (x+sa^\lambda )ds\right)\mathrm{\Omega }_\lambda (x),P_\lambda (x+a^\mu )=Z_{\lambda \mu }P_\lambda (x).$$ (4) There are various ways to see this , but it becomes most evident when ‘pulling’ the loop over the twist-carrying plaquette. For $`SU(2)`$ this means that the Polyakov loop is anti-periodic in case the twist is non-trivial. In particular for $`Z_{0i}=-1`$, $`P_0(\stackrel{}{x})`$ is anti-periodic in the $`x_i`$-direction. As we increase the size of the spatial torus it is natural to expect that the self-dual configuration would approach a caloron solution. Then $`P_0(\stackrel{}{x})`$ would approach a constant at spatial infinity. This is only compatible with the anti-periodicity implied by the non-trivial time-twist when $`P_0(\stackrel{}{x})\to 0`$ for $`|\stackrel{}{x}|\to \infty `$, forcing $`\omega ={\scriptscriptstyle \frac{1}{4}}`$ and thus non-trivial holonomy. This therefore provides a sure way of obtaining caloron solutions with non-trivial holonomy on the lattice, which at high temperature gives rise to two constituent monopoles, albeit in this case of equal mass. Since the twist in the time direction forces the constituent monopoles to have equal mass, the lattice corrections to the value of the action (which depend on the shape of the configuration ) are affected only by the separation of the two constituents (in the next section we will encounter the situation where the mass ratio is affected by the cooling). This allows one to manipulate the positions of the two lumps by using the tool of cooling with modified actions. This can be implemented by using a lattice action that combines the traces of the $`1\times 1`$ and $`2\times 2`$ plaquettes. The two couplings are fixed in terms of the parameters multiplying the leading (continuum) and next-to-leading ($`a^2`$) terms in the expansion of the lattice action in powers of the lattice spacing $`a`$. The $`a^2`$ term is given by a unique dimension six operator, and its coefficient is called $`\epsilon `$ (it is trivial to incorporate the twist-carrying plaquettes also in these modified actions). Wilson’s action corresponds to $`\epsilon =1`$. The choice $`\epsilon <0`$ is known as over-improvement, whereas improved cooling is performed by choosing $`\epsilon =0`$. In this last case the lattice and continuum action differ only by corrections of order $`a^4`$. For that reason, we will choose $`\epsilon =0`$ whenever we compare with the analytic infinite volume continuum caloron solution. However, unlike for the continuum action, the value of the $`a^2`$ operator depends on the position of the constituent monopoles, and therefore we can use other values of $`\epsilon `$ to alter these positions. Cooling with the Wilson action has the effect of driving the constituent monopoles together, since the Wilson action is decreased with respect to the continuum when the field strength has a larger gradient . Once the two lumps merge, and can no longer be distinguished from an instanton (at which point the solution will no longer be static), it follows the usual fate of an instanton under prolonged cooling with the Wilson action: at some point it falls through the lattice . (For cooling histories see fig. 3). Over-improved cooling has the effect of pushing the two constituent monopoles apart. One can speed up the rate at which monopoles separate by decreasing $`\epsilon `$.
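A hedged sketch of how such an $`\epsilon `$-parametrized action can be set up. Assuming the $`2\times 2`$ plaquette contributes 16 times the $`1\times 1`$ plaquette to the leading $`a^4`$ term and carries an extra relative factor of 4 in the dimension-six $`a^6`$ term, and normalising the $`a^2`$ coefficient so that $`\epsilon =1`$ reproduces the Wilson action, the two couplings follow from a 2-by-2 linear system (this normalisation is our reconstruction, not a statement from the paper):

```python
from fractions import Fraction

# Hedged sketch: couplings c11, c22 of
#   S = sum over plaquettes [ c11 (1 - P_1x1) + c22 (1 - P_2x2) ]
# fixed by:  c11 + 16 c22 = 1   (continuum normalisation of the a^4 term)
#            c11 + 64 c22 = eps (a^6 term proportional to eps; eps=1 is Wilson)
def couplings(eps):
    eps = Fraction(eps)
    c22 = (eps - 1) / 48
    c11 = 1 - 16 * c22
    return c11, c22

for eps in (1, 0, -1):
    c11, c22 = couplings(eps)
    print(f"eps={eps:+d}: c11={c11} (={4 - eps}/3), c22={c22}")
```

With these conventions $`\epsilon =0`$ gives the tree-level improved combination $`(4/3,-1/48)`$, while $`\epsilon <0`$ over-weights the $`2\times 2`$ term further.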
A priori it is not clear whether, when the lumps are maximally apart, the solution will not be affected significantly by the boundary conditions. This will partly depend on the ratio $`L/\beta `$, but we find for $`L=4\beta `$ that these effects are rather small. In figure 1 we give an example of a caloron configuration with well separated constituents on a $`16^3\times 4`$ lattice with $`\stackrel{}{k}=(1,1,1)`$, initially generated by cooling with the Wilson action, switching to improved cooling to reduce lattice artifacts. Shown is the action density $`s`$. We see that the agreement with the infinite volume analytic result is very good, with the action peaks for the lattice result somewhat lower (this feature is somewhat suppressed by plotting $`\mathrm{log}(1+s/3)`$, rather than $`s`$). The total action of this static lattice configuration is very close to the required continuum value $`8\pi ^2`$. An example of a non-static configuration with overlapping constituents will be presented below (see fig. 5). There seems no doubt that a continuum solution with this constituent monopole structure should exist on the time-twisted torus. ## 3 The case of space-twist When both space and time twists are non-trivial and $`\stackrel{}{k}\cdot \stackrel{}{m}\neq 0\mathrm{mod}N`$ (called non-orthogonal twist), the minimum of the action corresponds to a so-called twisted instanton with fractional charge. Unlike the integer charge instantons, these twisted instantons cannot fall through the lattice. Their scale is fixed; only their position is a free parameter. This was used in the past to find accurate lattice results using ordinary cooling ($`\epsilon =1`$). At high temperatures such a twisted instanton becomes static and represents a single BPS monopole on $`T^3`$. The twist allows for non-zero charge in the box. As discussed in the previous section it also gives rise to a holonomy characterised by $`\omega ={\scriptscriptstyle \frac{1}{4}}`$. Indeed, we were able to fit the finite temperature twisted instanton (in a sufficiently large volume) to one of the constituent monopoles of the caloron at $`\omega ={\scriptscriptstyle \frac{1}{4}}`$ (when placing the other constituent at a sufficiently large separation). In the appropriate limits both become ordinary BPS monopoles with mass $`4\pi ^2/\beta `$. Now we will show the type and size of finite volume and lattice artifact effects. In fig. 2 (left) we display a plot of the minimum lattice (Wilson) action for lattices with different space and time extensions, $`N_s^3\times N_t`$ and twist $`\stackrel{}{k}=\stackrel{}{m}=(1,1,1)`$. In the given range, $`N_t=4`$–$`7`$ and $`N_s=16`$–$`32`$, deviations from the continuum result are of the order of a few percent. However, the pattern of deviations from the continuum value is well understood. If we set $`a_t=1/N_t`$ and $`a_s=1/N_s`$, the value of the lattice action can be fitted with great accuracy to a formula: $$S/4\pi ^2=S_0-ba_t^2-ba_s^2-ca_ta_s-d(a_t+a_s)^4.$$ (5) The extrapolated value of the continuum action matches $`4\pi ^2`$ to a precision of a few parts in $`10^5`$. Notice also that the extrapolation shows the existence of a self-dual continuum solution for any value of the ratio $`L/\beta =N_s/N_t`$.
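The extrapolation (5) is an ordinary linear least-squares problem in the four parameters $`(S_0,b,c,d)`$. A schematic sketch follows (the data tuples below are made-up placeholders standing in for the measured minimum actions; only the structure of the fit is meant):

```python
import numpy as np

# Hedged sketch of the extrapolation (5): fit S/(4 pi^2) on N_s^3 x N_t
# lattices to S0 - b(at^2 + as^2) - c*at*as - d(at + as)^4, with at = 1/N_t,
# as = 1/N_s. The numbers here are illustrative placeholders, not real data.
data = [(4, 16, 0.970), (5, 20, 0.981), (6, 24, 0.987), (7, 28, 0.990)]
rows, y = [], []
for Nt, Ns, S in data:
    at, As = 1.0 / Nt, 1.0 / Ns
    rows.append([1.0, -(at**2 + As**2), -at * As, -(at + As) ** 4])
    y.append(S)
S0, b, c, d = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)[0]
print(f"extrapolated S0 = {S0:.5f} (continuum value: 1)")
```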
Furthermore, the lattice correction to the action decreases in absolute value with the ratio $`N_s/N_t`$, consistent with the statement made before that Wilson’s action decreases with decreasing separation of the monopoles, since in this case $`N_s`$ plays the role of the separation between lumps (the periodic mirrors). To measure finite volume corrections, we performed improved cooling ($`\epsilon =0`$, to minimise lattice corrections) for $`N_s=16`$ and $`32`$. In this case, the values of the minimum lattice action attained are of the order $`S/4\pi ^2=1.0001(1)`$. In fig. 2 (right) we compare the $`x`$-profiles obtained from the lattice minimum action configuration with the corresponding one for the BPS monopole. The $`x`$-profile is the integral of the action density over all but the $`x`$ coordinate. This quantity has smaller errors and is less sensitive to the lattice discretisation than the action density itself. From the figure we see how the lattice profiles approach the infinite volume BPS monopole profile. The slow convergence is due to the power-like Abelian tail of the BPS monopole (in contrast to the exponential tail found for other cases ). Interestingly, an exact caloron solution with equal-size constituents ($`\omega ={\scriptscriptstyle \frac{1}{4}}`$) on the twisted torus can be constructed by gluing two twisted instantons together, starting from the $`Q={\scriptscriptstyle \frac{1}{2}}`$ solution defined by $`\stackrel{}{k}=\stackrel{}{m}=(1,0,0)`$. Gluing two boxes in the $`y`$\- or $`z`$-directions preserves $`\stackrel{}{k}`$, but reduces $`\stackrel{}{m}`$ to the trivial value (since $`n_{\mu \nu }`$ is defined modulo 2 for $`SU(2)`$). This exact solution corresponds to the situation studied in the previous section. Instead, gluing two boxes in the $`x`$-direction removes the time-twist, but preserves the space-twist. The same twist results when gluing the two boxes in the time-direction. In the first case we have an exact solution on a space-twisted torus with equal size constituents (corresponding to $`\omega ={\scriptscriptstyle \frac{1}{4}}`$) at maximal separation in the direction of the twist, whereas in the second case the static nature of the finite temperature solution simply leads to doubling the mass of the monopole. Therefore this solution corresponds to an exact caloron solution on a space-twisted torus with trivial holonomy (the other constituent monopole is massless). We have also performed lattice studies on a space-twisted torus, with $`\stackrel{}{k}=\stackrel{}{0}`$, which allowed us to probe the constituent monopole mass ratios by a subtle use of the cooling procedure. It can be proven that without twist there are no regular charge one instanton solutions on a torus , but for any non-trivial twist an 8-dimensional space of regular solutions exists . Part of this parameter space comes about by gluing a localised instanton to the unique curvature free background supported by the twist. The eight parameters are given by scale, space-time position and so-called attachment parameters, that describe the gauge orientation of the localised instanton relative to the fixed curvature free background. For $`\stackrel{}{m}\neq \stackrel{}{0}`$ we find that the magnetically charged constituent monopoles, superposed on this non-Abelian magnetic flux background, experience an additional force that repels them as far as the finite volume allows.
The presence of this force is evident from the fact that under prolonged cooling in all cases, $`\epsilon =1,0,-1`$, the separation between the two constituent monopoles was increasing and their centers lined up with the direction of $`\stackrel{}{m}`$. Once the constituent monopoles are placed at their maximal separation, further cooling with the Wilson action ($`\epsilon =1`$) leads to action shifting from one to the other peak, driving the constituent monopole mass ratios away from equal masses. Once one of the masses has decreased to zero, the scale parameter of the remaining (deformed) instanton configuration can shrink, resulting in the usual fate of falling through the lattice under prolonged cooling with the Wilson action. For over-improvement the effect is opposite, and the masses are pushed to equal values. The ‘force’ (due to lattice artifacts) changing the value of $`\omega `$ can be neglected for $`\epsilon =0`$ cooling. We summarise the behaviour under cooling in fig. 3, by showing the distance between the peak locations and $`\omega `$ (estimated by equating $`({\scriptscriptstyle \frac{1}{2}}-\omega )^4/\omega ^4`$ to the ratio of the peak heights) as a function of the number of cooling sweeps. Shown are the histories for $`\stackrel{}{m}=(1,1,1)`$ at $`\epsilon =1,0,-1`$ and for $`\stackrel{}{k}=(1,1,1)`$ at $`\epsilon =0,1`$. That we can have solutions that are characterised by arbitrary mass ratios of the constituent monopoles is also illustrated in figure 4, which represents two values for the parameter $`\omega `$, comparing the finite volume configurations obtained from improved cooling to the analytic infinite volume caloron solutions with non-trivial holonomy. We see again that the agreement is very good (and will improve for increasing $`L/\beta `$), with the peaks for the lattice result now somewhat higher as compared to the infinite volume results. Next, we discuss the comparison with configurations that are not static. Here the constituents are close together and therefore there is considerable overlap. This is illustrated in figure 5, both in the case of twist in time and in the case of no twist. As mentioned previously, the presence of twist in time ($`\stackrel{}{k}\neq 0`$) forces $`\omega ={\scriptscriptstyle \frac{1}{4}}`$, while in the absence of twist $`\omega `$ can be arbitrary. Obtaining configurations with no twist requires some care. By cooling random configurations one ends up quickly in the trivial vacuum configuration. Hence, it is useful to start with a $`Q=1`$ configuration having $`\stackrel{}{m}\neq \stackrel{}{0}`$ obtained by cooling. Then twist is eliminated from this configuration by setting the weights of the twist-carrying plaquettes to their standard (untwisted) value. Additional improved cooling steps were applied to the configuration, leading to a new solution still having a non-trivial $`\omega `$ value. We recall that there are no exactly self-dual $`Q=1`$ solutions on the torus without twist . However, for solutions well-localised inside the torus the configuration is very approximately self-dual. Notice, nonetheless, that this reflects itself in higher values of the minimum lattice action. For these configurations with periodic boundary conditions, performing further cooling steps with positive or zero $`\epsilon `$ will bring the constituents together and lead to the standard fate of instantons on the lattice.
This can be stabilised by $`\epsilon <0`$, and the better the solution is contained within the box, the closer to 0 one can take $`\epsilon `$ while still having a stable lattice solution . The differences with the analytic infinite volume caloron solutions only show themselves in small differences in peak heights (at $`t=0`$) and would not be clearly visible on the scale of figure 5. Instead, in figure 6, we display the analytic action density profile in the $`zt`$ plane, where $`z`$ is the axis connecting the two constituent monopole centers. The values of $`\omega `$ and the distance of the constituent monopoles are as in figure 5. It is clear that a two-lump structure is still visible. As a function of $`z`$ the constituent monopoles are best seen for $`t`$ values where the density is minimal ($`t={\scriptscriptstyle \frac{1}{2}}`$). The logarithmic scale enhances the regions of low action densities in favour of those with large densities, and brings out more clearly the constituents. For large $`L/\beta `$ the difference between the finite volume solutions and the infinite volume calorons is mostly due to the contribution of the Coulombic tails of the periodic copies of the monopole constituents. That this depends on the nature of the twist is to be expected. For twist in time the charges change sign when shifting over a period of the torus. For twist in space there is no change in sign. This behaviour of the charges is correlated to the zeros of $`A_0`$ (which plays the role of the Higgs field) at the core of the constituent monopoles, as illustrated by the behaviour of $`P_0`$, which is anti-periodic with time-twist and periodic with space-twist. It can be shown from the analytic solution (see the Appendix) that $`P_0=1`$ (corresponding to $`A_0=0`$) near one of the constituent centers, and $`P_0=-1`$ near the other (related to $`A_0=0`$ by a gauge transformation that is anti-periodic in time, the gauge transformation that changes $`\omega `$ to $`{\scriptscriptstyle \frac{1}{2}}-\omega `$). This vanishing of the Higgs field, i.e. $`P_0^2=1`$, near the constituent monopole centers is reproduced by the lattice data, as is illustrated in figure 7. ## 4 Higher charge calorons In this section we discuss our findings for higher topological charge. Analytic results in infinite volumes for higher charge calorons with non-trivial holonomy are not yet available. Due to the local (lumpy) character of the caloron solutions one would expect that higher charge configurations can be obtained by “gluing” together lower charged solutions. Indeed for configurations on the torus, considering more than one period in any direction is a sure way of producing solutions with higher topological charge. On the basis of this it is to be expected that in the case of SU(2), for example, configurations would have $`2Q`$ action density lumps. Producing high charge configurations with our method is simple. It is sufficient to monitor the value of the lattice action during cooling. Typically, this quantity shows plateaus at integer multiples of $`8\pi ^2`$. The cooling process can be interrupted at the desired value of the lattice action. We used ordinary ($`\epsilon =1`$) cooling; resulting configurations can subsequently be studied in more detail with other values of $`\epsilon `$. Figure 8 shows a configuration of charge 2, generated with ordinary cooling and twist $`\stackrel{}{k}=(1,1,1)`$. Indeed we find four lumps.
We have been able to fit these to two $`Q=1`$, $`\omega ={\scriptscriptstyle \frac{1}{4}}`$, calorons by just adding the action densities together. Other charge 2 configurations have been obtained as well. This includes a configuration with 3 lumps, one of which seems describable as a $`Q=1`$ object. With similar techniques one can generate configurations with topological charge higher than $`2`$. This process led us to study the whole cooling histories that go from randomly generated configurations to low action ones. On lattices $`N_s^3\times 4`$, with $`N_s=16`$, 20 and 24, we computed every 10 ($`\epsilon =1`$) cooling steps the total action $`S`$ of the configuration and used our peak-searching algorithm to locate action density maxima. The information was recorded whenever the density of peaks, $`N_{peak}/(N_s^3\times 4)`$, found by the algorithm was smaller than $`50/(24^3\times 4)`$ (for higher densities the results are too sensitive to the details of the peak searching algorithm to be considered reliable). For all recorded data the quotient $`S/(4\pi ^2N_{peak})`$ was found to lie between $`0.8`$ and $`2`$, and peaked around $`1`$. This means that on average every peak is associated to an action of $`4\pi ^2`$, a property shared with the exact $`Q=1`$ caloron solution with non-trivial holonomy. The same follows for configurations that are aggregates of $`Q=1`$ calorons, which each have either one or (more often) two lumps (the constituent monopoles). Our result shows that this pattern extends to higher densities, where a detailed analysis of individual peaks is hard to do. Furthermore, the sign of the topological charge of these lumps is not always the same, thereby pushing the picture of a constituent monopole ensemble beyond the case of self-dual configurations. Our result resembles the findings of ref. , where a similar behaviour was reported for Monte Carlo generated configurations at zero temperature. In our case, we have the additional advantage of having an analytic control for $`Q=1`$ self-dual configurations. This allows us to conclude that the lumps correspond to constituent monopoles and hence, at least in this finite temperature case, not all lumps carry integer or half-integer topological charge. We hope these results will help to motivate other authors to investigate this point further. ## 5 Discussion In this paper we have shown that $`Q=1`$ self-dual solutions can be obtained profusely on asymmetric lattices $`L^3\times \beta `$ with $`L\beta `$ by using twisted boundary conditions. These configurations match quite well the analytic caloron solutions on $`R^3\times S_1`$ . The main change induced by the finite spatial volume is due to the contribution of the Coulombic tails of the periodic mirrors of the caloron solutions. We have shown that with judicious use of the twist values and of the parameter $`\epsilon `$ appearing in the cooling method of ref. , one can produce caloron solutions with different values of $`\rho `$ and $`\omega `$. In comparing to the continuum expressions, the choice $`\epsilon =0`$ (improved cooling) reduces considerably the size of lattice corrections. In our analysis we have attempted to disentangle the finite size effects from the lattice artifacts, by making use of the $`\epsilon `$ engineering. We have also explored self-dual configurations with higher values of the topological charge. Our results show that these configurations look very much like ensembles of $`Q=1`$ calorons with trivial or non-trivial holonomy. 
The conclusion, sustained by our results, is that typically a configuration with topological charge $`Q`$ has $`2Q`$ lumps (constituent monopoles). Given their local nature and the non-perturbative nature of the QCD (Yang-Mills) vacuum, we maintain that these configurations ought to play a role in the dynamics of the theory. It is to be emphasised that for high charges, the existence of these solutions does not rely on the use of any particular boundary conditions (twisted or not). Twist however plays a role in stabilising these solutions under cooling and this lies at the heart of the success of our method. This is most probably related to the fact that there are no exactly self-dual $`Q=1`$ solutions on the torus in the absence of twist . This does not happen for non-zero twist . Thus, lattice studies involving cooling methods could introduce distortions for low values of the topological charge . We stress again that due to its simple implementation and zero computational overhead, the use of twisted boundary conditions is an ideal tool for non-perturbative investigations of non-Abelian gauge theories and QCD. ## Appendix Here we summarise the infinite volume analytic solutions for the $`SU(N)`$ calorons with non-trivial holonomy. After a constant gauge transformation, the holonomy $`H`$ is characterised by ($`\sum _{m=1}^n\mu _m=0`$) $$H=\mathrm{exp}[2\pi i\mathrm{diag}(\mu _1,\mathrm{\dots },\mu _n)],\mu _1<\mathrm{\cdots }<\mu _n<\mu _{n+1}\equiv \mu _1+1.$$ (6) Note that $`\mathrm{Tr}(H)/N=lim_{|\stackrel{}{x}|\to \infty }P_0(\stackrel{}{x})`$. Using the classical scale invariance to put $`\beta =1`$, one has $$s(x)=-{\scriptscriptstyle \frac{1}{2}}\mathrm{Tr}F_{\mu \nu }^{\mathrm{\hspace{0.17em}2}}(x)=-{\scriptscriptstyle \frac{1}{2}}\partial _\mu ^2\partial _\nu ^2\mathrm{log}\psi (x),\psi (x)=\mathrm{\Psi }(\stackrel{}{x})-\mathrm{cos}(2\pi t),\mathrm{\Psi }(\stackrel{}{x})={\scriptscriptstyle \frac{1}{2}}\mathrm{tr}(A_n\mathrm{\cdots }A_1),$$ (7) where $$A_m\equiv \frac{1}{r_m}(\begin{array}{cc}r_m& |\stackrel{}{y}_m-\stackrel{}{y}_{m+1}|\\ 0& r_{m+1}\end{array})(\begin{array}{cc}c_m& s_m\\ s_m& c_m\end{array}).$$ (8) Noting that $`r_{n+1}\equiv r_1`$ and $`\stackrel{}{y}_{n+1}\equiv \stackrel{}{y}_1`$, we defined $`r_m=|\stackrel{}{x}-\stackrel{}{y}_m|`$, with $`\stackrel{}{y}_m`$ the position of the $`m^{\mathrm{th}}`$ constituent monopole, which can be assigned a mass $`16\pi ^2\nu _m`$, where $`\nu _m\equiv \mu _{m+1}-\mu _m`$. Furthermore, $`c_m\equiv \mathrm{cosh}(2\pi \nu _mr_m)`$ and $`s_m\equiv \mathrm{sinh}(2\pi \nu _mr_m)`$. Restricting to the gauge group of $`SU(2)`$, choosing $`H=\mathrm{exp}(2\pi i\omega \tau _3)`$ and defining $`\pi \rho ^2=|\stackrel{}{y}_2-\stackrel{}{y}_1|`$, we can place the constituents at $`\stackrel{}{y}_1=(0,0,-\nu _2\pi \rho ^2)`$ and $`\stackrel{}{y}_2=(0,0,\nu _1\pi \rho ^2)`$ by a suitable combination of a constant gauge transformation, spatial rotation and translation. For this case the gauge field reads $$A_\mu (x)=\frac{i}{2}\overline{\eta }_{\mu \nu }^3\tau _3\partial _\nu \mathrm{log}\varphi (x)+\frac{i}{2}\varphi (x)\mathrm{Re}\left((\overline{\eta }_{\mu \nu }^1-i\overline{\eta }_{\mu \nu }^2)(\tau _1+i\tau _2)\partial _\nu \chi (x)\right),$$ (9) where the anti-selfdual ’t Hooft tensor $`\overline{\eta }`$ is defined by $`\overline{\eta }_{0j}^i=-\overline{\eta }_{j0}^i=-\delta _{ij}`$ and $`\overline{\eta }_{jk}^i=\epsilon _{ijk}`$ (with our conventions of $`t=x_0`$, $`\epsilon _{0123}=1`$) and $`\tau _a`$ are the Pauli matrices.
Furthermore, $`\varphi ^{-1}(x)=1-\frac{\pi \rho ^2}{\psi (x)}\left(\frac{s_1c_2}{r_1}+\frac{s_2c_1}{r_2}+\frac{\pi \rho ^2s_1s_2}{r_1r_2}\right)`$ and $`\chi (x)=\frac{\pi \rho ^2}{\psi (x)}\left(e^{2\pi it}\frac{s_1}{r_1}+\frac{s_2}{r_2}\right)e^{-2\pi i\nu _1t}`$, with $`\nu _1=2\omega `$ and $`\nu _2=1-2\omega `$. The solution is presented in the “algebraic” gauge, $`A_\mu (t+1,\stackrel{}{x})=\mathrm{exp}(2\pi i\omega \tau _3)A_\mu (t,\stackrel{}{x})\mathrm{exp}(-2\pi i\omega \tau _3)`$. Since the radii $`r_i`$ are even functions of $`x`$ and $`y`$, derivatives in these two directions vanish on the $`z`$-axis. Hence, along the line connecting the two constituents $`A_0`$ is Abelian, allowing for a simple result for $`P_0`$ along this axis $$P_0(z)=\mathrm{cos}(\pi \nu _1+\mathrm{\Phi }(z)),\mathrm{\Phi }(z)={\scriptscriptstyle \frac{1}{2}}\int _0^1dt\,\partial _z\mathrm{log}\varphi (t,z).$$ (10) Since $`\psi (x)`$ and $`\varphi (x)`$ are even functions of $`r_i`$ we may substitute $`r_i=z-z_i`$ (with $`z_1=-\nu _2\pi \rho ^2`$ and $`z_2=\nu _1\pi \rho ^2`$) to find $`\varphi (t,z)=(\mathrm{\Psi }(z)-\mathrm{cos}(2\pi t))/(\mathrm{cosh}(2\pi z)-\mathrm{cos}(2\pi t))`$, with $`\mathrm{\Psi }(z)=\mathrm{cosh}(2\pi z)+\pi \rho ^2\left(\frac{s_1c_2}{r_1}+\frac{s_2c_1}{r_2}+\frac{\pi \rho ^2s_1s_2}{r_1r_2}\right)>1`$ a smooth function of $`z`$. The pole of $`\varphi (x)`$ at $`x=0`$ represents the usual gauge singularity. It leads to a jump of $`2\pi `$ in $`\mathrm{\Phi }(z)`$, to which the gauge invariant observable $`P_0(z)`$ is insensitive. The integration over time can be performed explicitly and one finds $$P_0(z)=-\mathrm{cos}\left[\nu _1\pi +{\scriptscriptstyle \frac{1}{2}}\partial _z\mathrm{acosh}(\mathrm{\Psi }(z))\right].$$ (11) From this it is easily shown that each of the values $`P_0(z)=\pm 1`$ is taken only once. Only for large $`\rho `$ one finds $`P_0(z_1)=-1`$ and $`P_0(z_2)=1`$. When associating the constituent monopole locations to the zeros of the Higgs field (i.e. to $`P_0^2(\stackrel{}{x})=1`$), we find these are shifted outward from $`\stackrel{}{y}_i`$. This is illustrated in figure 9. For the cases we studied in this paper these shifts are small, but they tend to become large for the constituent monopoles with a small mass ($`\omega `$ approaching either 0 or $`{\scriptscriptstyle \frac{1}{2}}`$). We should also note that the maxima of the energy density (at $`t=0`$) are shifted inward due to overlap of the energy profiles of each constituent. The numerical evaluation of the action density $`s(x)`$ and of the Polyakov loop $`P_0(\stackrel{}{x})`$ is straightforward, but tedious. For the action density it involves taking 4 derivatives, which is most conveniently achieved by using the fact that $`\mathrm{\Psi }(\stackrel{}{x})`$ depends on $`\stackrel{}{x}`$ through the radii $`r_i`$. The C-programmes written for this purpose are available . ## Acknowledgements We are grateful to Conor Houghton, Thomas Kraan and Carlos Pena for useful discussions. This work was supported in part by a grant from “Stichting Nationale Computer Faciliteiten (NCF)” for use of the Cray Y-MP C90 at SARA. A. Gonzalez-Arroyo and A. Montero acknowledge financial support by CICYT under grant AEN97-1678. M. García Pérez acknowledges financial support by CICYT and warm hospitality at the Instituut Lorentz while part of this work was developed.
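As a cross-check of the appendix formulas as reconstructed here (in particular the overall sign in (11) is our reconstruction), the following sketch evaluates $`\mathrm{\Psi }(z)`$ and $`P_0(z)`$ numerically for $`\beta =1`$, confirming $`P_0\to \mathrm{cos}(2\pi \omega )`$ far from the constituents and $`P_0`$ close to $`\mp 1`$ near $`z_1`$ and $`z_2`$:

```python
import numpy as np

# Hedged numerical check of the (reconstructed) eq. (11); omega and rho
# are illustrative. The finite difference never hits r_i = 0 exactly.
omega, rho = 0.25, 1.2
nu1, nu2 = 2 * omega, 1 - 2 * omega
z1, z2 = -nu2 * np.pi * rho**2, nu1 * np.pi * rho**2

def Psi(z):
    r1, r2 = z - z1, z - z2
    s1, c1 = np.sinh(2 * np.pi * nu1 * r1), np.cosh(2 * np.pi * nu1 * r1)
    s2, c2 = np.sinh(2 * np.pi * nu2 * r2), np.cosh(2 * np.pi * nu2 * r2)
    pr2 = np.pi * rho**2
    return (np.cosh(2 * np.pi * z)
            + pr2 * (s1 * c2 / r1 + s2 * c1 / r2 + pr2 * s1 * s2 / (r1 * r2)))

def P0(z, h=1e-4):
    dacosh = (np.arccosh(Psi(z + h)) - np.arccosh(Psi(z - h))) / (2 * h)
    return -np.cos(nu1 * np.pi + 0.5 * dacosh)

for z in (z1, 0.0, z2, 6.0):
    print(f"z={z:+.3f}  P0={P0(z):+.4f}")
print(f"asymptotic value cos(2*pi*omega) = {np.cos(2*np.pi*omega):+.4f}")
```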
no-problem/9903/nucl-th9903030.html
ar5iv
text
# Three-Body System with Short-Range Interactions ## Abstract Within the framework of non-relativistic scalar effective field theory it is shown that the problem of the cutoff dependence of the leading order amplitude for a particle scattering off a two-body bound state can be solved without introducing three-body forces. Applications of effective field theory (EFT) to problems of nuclear physics have been under intensive investigation during the last few years. A review of recent developments (and references to the relevant papers) can be found in . Generalisation of the EFT program to the three-body problem is not straightforward. In bosonic systems and in some fermionic channels one encounters a non-trivial problem. While each leading order three-body diagram with re-summed two-body interactions is individually finite, the whole amplitude shows sensitivity to the ultraviolet cutoff. In it was argued that the addition of a one-parameter three-body force counter-term at leading order is necessary and sufficient to eliminate this cut-off dependence. The present paper considers the simple case of a non-relativistic scalar particle scattering off a two-body bound state and provides a solution of the above mentioned problem of sensitivity to the ultraviolet cut-off without introducing three-body forces. The Lagrangian of the considered EFT of a non-relativistic self-interacting boson $`\varphi `$ is given by the following expression : $$\mathcal{L}=\varphi ^{\dagger }\left(i\partial _0+\frac{\stackrel{}{\nabla }^2}{2M}\right)\varphi -\frac{C_0}{2}(\varphi ^{\dagger }\varphi )^2-\frac{D_0}{6}(\varphi ^{\dagger }\varphi )^3+\mathrm{\cdots },$$ where the ellipsis stands for terms with more derivatives and/or fields. Terms with more derivatives are suppressed at low momentum and terms with more fields do not contribute to the three-body amplitude. For the sake of convenience one can rewrite this theory introducing a dummy field $`T`$ with the quantum numbers of two bosons (referred to as “dimeron” ), $$\mathcal{L}=\varphi ^{\dagger }\left(i\partial _0+\frac{\stackrel{}{\nabla }^2}{2M}\right)\varphi +\mathrm{\Delta }T^{\dagger }T-\frac{g}{\sqrt{2}}(T^{\dagger }\varphi \varphi +\text{h.c.})+hT^{\dagger }T\varphi ^{\dagger }\varphi +\mathrm{\cdots }$$ (1) Observables depend on the parameters of Eq. (1) only through the combinations $`C_0\equiv g^2/\mathrm{\Delta }=4\pi a_2/M`$ and $`D_0\equiv -3hg^2/\mathrm{\Delta }^2`$. The (bare) dimeron propagator is a constant $`i/\mathrm{\Delta }`$ and the particle propagator is given by the usual non-relativistic expression $`i/(p^0-p^2/2M)`$. The dressing of the dimeron propagator is given in FIG.1 (a). Summing loop-diagrams, subtracting the divergent integral at $`p^0=\stackrel{}{p}^2=0`$ and removing the cut-off, one gets the following dressed dimeron propagator: $$iS(p)=\frac{1}{\mathrm{\Delta }^R+\frac{Mg^2}{4\pi }\sqrt{-Mp^0+\frac{\stackrel{}{p}^{\mathrm{\hspace{0.17em}2}}}{4}-i\epsilon }+i\epsilon }.$$ (2) Here $`\mathrm{\Delta }^R`$ is the renormalised parameter ($`\mathrm{\Delta }`$ has absorbed the linear divergence). Standard power counting shows that the diagrams which contribute to leading order calculations of the scattering of a particle off a two-body bound state are those illustrated in FIG.1 (b).
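Before turning to the three-body equation, a quick numerical check of the dressed propagator (2) may be useful (a minimal sketch in units $`M=a_2=g=1`$; the choice $`\mathrm{\Delta }^R=-Mg^2/(4\pi a_2)`$ is our assumption, made so that the two-body pole sits at the binding energy $`p^0=-1/(Ma_2^2)`$):

```python
import numpy as np

# Hedged check of the dressed dimeron propagator (2) at vec{p} = 0.
# Delta^R = -M g^2/(4 pi a2) is an assumption, not fixed by the text.
M, a2, g = 1.0, 1.0, 1.0
DeltaR = -M * g**2 / (4.0 * np.pi * a2)

def inv_S(p0):                      # inverse of S(p), vec{p} = 0
    return DeltaR + (M * g**2 / (4.0 * np.pi)) * np.sqrt(-M * p0 + 0j)

B = 1.0 / (M * a2**2)               # expected two-body binding energy
print(f"at the expected pole p0 = -B: {inv_S(-B):.3e}")      # vanishes
print(f"away from it,      p0 = -2B: {inv_S(-2.0 * B):.3e}")
```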
The sum of all these diagrams satisfies the equation represented by the second equality in FIG.1 (b) , : $$a(p,k)=K(p,k)+\frac{2\lambda }{\pi }\int _0^{\infty }dqK(p,q)\frac{q^2}{q^2-k^2-iϵ}a(q,k),$$ (3) where $`k`$ ($`p`$) is the incoming (outgoing) momentum, $`ME=3k^2/4-1/a_2^2`$ is the total energy, $`a(p=k,k)`$ is the scattering amplitude, $`a_2`$ is the two-particle scattering length, and $$K(p,q)=\frac{4}{3}\left(\frac{1}{a_2}+\sqrt{\frac{3}{4}p^2-ME}\right)\frac{1}{pq}\mathrm{ln}\left(\frac{q^2+pq+p^2-ME}{q^2-qp+p^2-ME}\right)$$ (4) Eq. (3) was first derived by Skorniakov and Ter-Martirosian (S-TM) and has $`\lambda =1`$ for the boson case. Three nucleons in the spin $`J=1/2`$ channel obey a pair of integral equations with similar properties to this bosonic equation. It was shown in that for $`\lambda =1`$ the homogeneous equation corresponding to Eq. (3) has a solution for arbitrary $`E`$. This solution is well-defined except for a normalisation constant and hence the solution of Eq. (3) contains an arbitrary parameter. The sum of the diagrams in FIG.1 (b) is only one of the solutions. Hence, given the general solution of Eq. (3), to find this sum one would have to fix the value of the arbitrary parameter appropriately. The fact that the homogeneous equation corresponding to Eq. (3) has a solution for arbitrary $`E`$ is not surprising: since Eq. (3) corresponds to a coordinate space $`\delta `$-function potential, the use of the Thomas theorem combined with the Efimov effect explains the existence of solutions for arbitrary $`E`$. Note that two-body forces are not actually of zero range in EFT. Although Eq. (3) can be derived from the leading order Lagrangian of EFT, this equation is not a leading order approximation of a more general equation: there are no consistent equations for renormalised amplitudes in EFT if the cut-off is removed after renormalisation. The problem is that EFT is a non-renormalisable theory in the traditional sense and hence, to remove all divergences which occur in the equations for amplitudes, one would need to include the contributions of an infinite number of counter-terms at any finite order (except perhaps leading order) of approximation. Hence EFT with removed cut-off describes the amplitude for a particle scattering off a two-body bound state as a sum of an infinite number of diagrams. The EFT approach is concerned with Eq. (3) only because one of its solutions corresponds to this sum of diagrams. A great advantage of cut-off theory is that one can write down consistent equations, and the solutions of these equations are equivalent to the renormalised (with removed cut-off) amplitudes up to the order one is working with. If working with equations of cut-off theory it is necessary to keep the cut-off finite, even though at leading order the cut-off can be removed, giving Eq. (3). As the equations with finite cut-off do not correspond to any system with a local ($`\delta `$-function type) potential, there are no three-body bound states with arbitrarily large negative energies. The solution of the homogeneous equation corresponding to equation (3), which exists for any value of the energy, does not carry any physical information. The existence of this solution is a result of the incorrect procedure of removing the cut-off in the leading order equations of the cut-off theory.
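The cutoff sensitivity described above is easy to exhibit numerically. The sketch below (our own illustration, not the paper's procedure) solves Eq. (3) with a sharp momentum cutoff $`\mathrm{\Lambda }`$ by Nyström discretisation at the particle–dimer threshold ($`a_2=1`$, $`k=0`$, $`ME=-1/a_2^2`$, so the integrand has no pole) and shows how strongly the threshold amplitude $`a(0,0)`$ depends on $`\mathrm{\Lambda }`$:

```python
import numpy as np

# Hedged sketch: Eq. (3) with a sharp cutoff Lambda, at threshold (k = 0,
# ME = -1, units a2 = 1). Gauss-Legendre nodes on a logarithmic grid.
lam, ME = 1.0, -1.0

def K(p, q):                       # kernel of Eq. (4)
    pref = (4.0 / 3.0) * (1.0 + np.sqrt(0.75 * p**2 - ME))
    return pref * np.log((q**2 + p * q + p**2 - ME) /
                         (q**2 - p * q + p**2 - ME)) / (p * q)

def K_q0(p):                       # exact q -> 0 limit of K(p, q)
    return (4.0 / 3.0) * (1.0 + np.sqrt(0.75 * p**2 - ME)) * 2.0 / (p**2 - ME)

def K_p0(q):                       # exact p -> 0 limit of K(p, q)
    return (4.0 / 3.0) * (1.0 + np.sqrt(-ME)) * 2.0 / (q**2 - ME)

for Lam in (10.0, 100.0, 1000.0):
    x, w = np.polynomial.legendre.leggauss(200)
    u = np.log(1e-3) + 0.5 * (x + 1.0) * np.log(Lam / 1e-3)
    q = np.exp(u)
    wq = q * 0.5 * np.log(Lam / 1e-3) * w          # dq = q du
    A = np.eye(len(q)) - (2.0 * lam / np.pi) * K(q[:, None], q[None, :]) * wq
    a = np.linalg.solve(A, K_q0(q))                # a(q, k=0) on the grid
    a00 = K_q0(0.0) + (2.0 * lam / np.pi) * np.sum(wq * K_p0(q) * a)
    print(f"Lambda = {Lam:6.0f}:  a(0,0) = {a00:+.3f}")
```

The printed values vary strongly with $`\mathrm{\Lambda }`$ (and diverge near the cutoffs at which the finite-cutoff equation develops an additional deep three-body bound state), which is precisely the finite-cutoff counterpart of the Thomas/Efimov behaviour discussed above.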
Note that the amplitude determined from the equation of cut-off theory can contain some non-perturbative contributions in addition to the sum of the infinite number of diagrams drawn in FIG.1 (b), but these non-perturbative effects cannot have anything to do with the non-physical solutions of the homogeneous equation. One can still use Eq. (3) to find the amplitude for a particle scattering off a two-body bound state, but one should keep in mind that it contains non-physical information encoded in the solution of the corresponding homogeneous equation. As will be seen below, the EFT approach uniquely fixes the arbitrary parameter present in the general solution of Eq. (3). This particular solution with an appropriately fixed value of the arbitrary parameter is the scattering amplitude. One can study the asymptotic behaviour of $`a(p,k)`$ for large $`p`$. Up to terms decreasing as $`p^{-1}`$ the function $`a(p,k)`$ has the form : $$a(p,k)\sim \underset{i}{\sum }A_i\left(k\right)p^{s_i}$$ (5) where $`s_i`$ are roots of the following equation: $$1-\frac{8\lambda }{\sqrt{3}}\frac{\mathrm{sin}\frac{\pi s}{6}}{s\mathrm{cos}\frac{\pi s}{2}}=0.$$ (6) The summation in Eq. (5) goes over all solutions of Eq. (6) for which $`|\mathrm{Re}s|<1`$. Eq. (6) has two such roots: $`s=\pm is_0,s_0\approx 1`$. Hence, Eq.(5) becomes: $$a(p,k)\sim A_1\left(k\right)p^{is_0}+A_2\left(k\right)p^{-is_0}$$ (7) One of the arbitrary constants $`A_1\left(k\right)`$ and $`A_2\left(k\right)`$ is determined by the other when this solution is joined to the solution in the region of small $`p`$. Hence the solution of Eq. (3) depends on a single arbitrary parameter. The asymptotic behaviour of the solution of the homogeneous equation corresponding to Eq. (3) is evidently the same. Iterating equation (3) one gets a series which is equivalent to the sum of the diagrams in FIG.1 (b). As $`s_0`$ does not have an expansion in $`\lambda `$, it should be clear that for the sum of the considered diagrams (if it exists) the parameters $`A_1\left(k\right)`$ and $`A_2\left(k\right)`$ must vanish. Hence the EFT with removed cut-off supports the conclusion drawn from general considerations, namely that the non-physical solution of the homogeneous equation has to be eliminated. To find the sum of the considered infinite number of diagrams one needs to construct a solution with non-oscillating asymptotic behaviour, i.e. with vanishing $`A_1(k)`$ and $`A_2(k)`$. Note that there is only one solution with such asymptotic behaviour. To summarise, the leading order EFT for a spinless particle scattering off a two-body bound state leads to the equation of S-TM together with a boundary condition at the origin (in configuration space) which eliminates the oscillating behaviour. Hence EFT resolves quite naturally the problem of the choice of the arbitrary parameter present in the general solution of this equation.
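For completeness, the root $`s_0`$ of Eq. (6) is easily obtained numerically: substituting $`s=is_0`$ and using $`\mathrm{sin}(ix)=i\mathrm{sinh}(x)`$, $`\mathrm{cos}(ix)=\mathrm{cosh}(x)`$ turns (6) into a real transcendental equation (a minimal sketch; the bracketing interval is an assumption):

```python
import math
from scipy.optimize import brentq

# Real form of Eq. (6) on the imaginary axis, s = i*s0 (lam = 1 for bosons).
def f(s0, lam=1.0):
    return 1.0 - (8.0 * lam / math.sqrt(3.0)) * math.sinh(math.pi * s0 / 6.0) \
           / (s0 * math.cosh(math.pi * s0 / 2.0))

s0 = brentq(f, 0.5, 2.0)
print(f"s0 = {s0:.5f}")   # close to 1, as stated after Eq. (6)
```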
no-problem/9903/quant-ph9903027.html
ar5iv
text
# Direct measurement of the Wigner function by photon counting ## Abstract We report a direct measurement of the Wigner function characterizing the quantum state of a light mode. The experimental scheme is based on the representation of the Wigner function as an expectation value of a displaced photon number parity operator. This allowed us to scan the phase space point-by-point, and obtain the complete Wigner function without using any numerical reconstruction algorithms. Among many representations of the quantum state, the Wigner function offers an appealing possibility to describe quantum phenomena using the classical-like concept of phase space. The Wigner function provides complete information on the state of a system, and it allows one to evaluate any quantum observable by phase space integration with an appropriate Wigner-Weyl ordered expression. Recently, the Wigner function has gained experimental significance due to the development of optical homodyne tomography, a beautiful technique for measuring the quantum state of light pioneered by Smithey et al. and further applied by Breitenbach et al. In this method, rooted in the domain of image processing, the Wigner function is a natural representation of the quantum state reconstructed from experimental data. However, the route from raw experimental results to the Wigner function is not straightforward. First, a sample of homodyne events is collected and stored. Statistics of these events for a fixed local oscillator phase is described by a marginal projection of the Wigner function. In order to retrieve the complete Wigner function, a family of homodyne statistics measured for a sufficiently dense set of local oscillator phases has to be processed using the sophisticated filtered back-projection algorithm. In this Communication we report a direct measurement of the Wigner function of a light mode. This technique, based on photon counting, avoids the detour via complex numerical reconstruction algorithms. The principle of our measurement is entirely different from optical homodyne tomography. The Wigner function at a given phase space point is itself a well defined quantum observable. Furthermore, the measurement of this observable can be implemented for optical fields using an arrangement employing an auxiliary coherent probe beam. The amplitude and the phase of the probe field define the point in the phase space at which the Wigner function is measured. This allowed us to scan the phase space point-by-point, simply by changing the parameters of the probe field. A variation of this idea has been applied by Leibfried et al. to determine the vibrational state of a trapped ion. Here we present an experiment, which to the best of our knowledge is the first direct measurement of the Wigner function for optical fields. Our experiment is based on the representation of the Wigner function at a complex phase space point denoted by $`\alpha `$ as the expectation value of the following operator: $$\widehat{W}(\alpha )=\frac{2}{\pi }\sum _{n=0}^{\infty }(-1)^n\widehat{D}(\alpha )|n\rangle \langle n|\widehat{D}^{\dagger }(\alpha ),$$ (1) where $`\widehat{D}(\alpha )`$ is the displacement operator and $`|n\rangle `$ denote Fock states, $`\widehat{n}|n\rangle =n|n\rangle `$. Thus, $`\widehat{W}(\alpha )`$ has two eigenvalues: $`2/\pi `$ and $`-2/\pi `$, corresponding to degenerate subspaces spanned respectively by even and odd displaced Fock states. Practical means to translate this formula into an optical arrangement are quite simple.
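Before turning to the optical implementation, Eq. (1) is easy to verify numerically in a truncated Fock basis. The sketch below is our own check (truncation dimension and test states are arbitrary): it reproduces $`W(0)=2/\pi `$ for the vacuum, $`W(0)=-2/\pi `$ for a one-photon state, and the Gaussian $`W(\alpha )=(2/\pi )e^{-2|\alpha |^2}`$ for the displaced vacuum.

```python
import numpy as np
from scipy.linalg import expm

dim = 60                                         # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator

def wigner_point(rho, alpha):
    # Expectation value of Eq. (1): (2/pi) D(alpha) (-1)^n D(alpha)^dagger.
    D = expm(alpha * a.conj().T - np.conjugate(alpha) * a)
    parity = np.diag((-1.0) ** np.arange(dim))
    return (2.0 / np.pi) * np.real(np.trace(rho @ D @ parity @ D.conj().T))

vac = np.zeros((dim, dim)); vac[0, 0] = 1.0      # vacuum density matrix
one = np.zeros((dim, dim)); one[1, 1] = 1.0      # one-photon Fock state
print(wigner_point(vac, 0.0), 2.0 / np.pi)                        # equal
print(wigner_point(one, 0.0), -2.0 / np.pi)                       # equal
print(wigner_point(vac, 0.7), (2.0 / np.pi) * np.exp(-2 * 0.49))  # equal
```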
The displacement transformation can be realized by superposing the measured field at a low-reflection beam splitter with a strong coherent probe beam. The value of the displacement $`\alpha `$ is equal in this setup to the reflected amplitude of the probe field. Furthermore, the projections on Fock states can be obtained by photon counting, assuming unit quantum efficiency. These two procedures, combined together, provide a practical way to measure the Wigner function at an arbitrarily selected phase space point $`\alpha `$. The experimental setup we used to measure the Wigner function is shown schematically in Fig. 1. In principle, it is a Mach-Zehnder interferometric scheme with the beams in the two arms of the interferometer serving as the signal and the probe fields. An attenuated, linearly polarized (in the plane of Fig. 1) 632.8 nm beam from a frequency-stabilized single-mode He:Ne laser is divided by a low-reflection beam splitter BS1. The weak reflected beam is used to generate the signal field whose Wigner function will be measured. The state preparation stage consists of a neutral density filter ND and a mirror mounted on a piezoelectric translator PZT. With this arrangement, we are able to create pure coherent states with variable phase as well as their incoherent mixtures. Though these states do not exhibit nonclassical properties, they constitute a nontrivial family to demonstrate the principle of the method, which provides a complete characterization of both quantum and classical field fluctuations. The strong beam leaving the beam splitter BS1 plays the role of the probe field with which we perform the displacement transformation $`\widehat{D}(\alpha )`$. In order to scan the phase space one should be able to set freely its amplitude and phase, which define respectively the radial and the angular coordinates in the phase space. The amplitude modulation is achieved with a half-wave plate, a longitudinal Pockels cell EOM1, and a polarizer oriented parallel to the initial direction of polarization. The phase modulation is done with the help of an ADP crystal electrooptic phase modulator EOM2 on the signal field. This is completely equivalent to modulating the probe field phase, but more convenient for technical reasons: in this arrangement the optical paths in both arms of the Mach-Zehnder interferometer are approximately the same, and better overlap of the signal and the probe modes is achieved at the output of the interferometer. The signal and the probe fields are interfered at a nearly completely transmitting beam splitter BS2 with the power transmission $`T=98.6\%`$. In this regime, the transmitted signal field effectively undergoes the required displacement transformation. Spurious reflections that accompany the beam leaving the interferometer are removed using the aperture A. Finally, the transmitted signal is focused on an EG&G photon counting module SPCM-AQ-CD2749, whose photosensitive element is a silicon avalanche diode operated in the Geiger regime. The overall quantum efficiency of the module specified by the manufacturer is $`\eta \approx 70\%`$. The count rate is kept low in the experiment, and thus the chance of two or more photons triggering a single avalanche signal is very small, and the probability of another photon arriving during the detector dead time can be neglected. Under these assumptions, each pulse generated by the module corresponds to the detection of a single photon.
The pulses are acquired by a computer, which also controls the voltages applied to the electrooptic modulators. The interference visibility in our setup has been measured to be $`v\approx 98.5\%`$, and the phase difference between the two arms was stable to within a few percent over times of the order of ten minutes. In Fig. 2 we depict the measured Wigner functions of the vacuum, a weak coherent state, and a phase diffused coherent state. The phase fluctuations were obtained by applying a 400 Hz sine waveform to the piezoelectric translator. For all the plots, the phase space was scanned on a grid defined by 20 amplitudes and 40 phases. The scaling of the radial coordinate is obtained from the average number of photons $`n_{\text{vac}}`$ detected for the blocked signal path. Thus the graphs are parameterized with the complex variable $`\beta =e^{i\phi }n_{\text{vac}}^{1/2}`$, where $`\phi `$ is the phase shift generated by the phase modulator EOM2. At each selected point of the phase space, the photocount statistics $`p_n(\beta )`$ was determined from a sequence of $`N=8000`$ counting intervals, each $`\tau =30`$ $`\mu `$s long. The duration of the counting interval $`\tau `$ defines the temporal envelope of the measured mode. The count statistics was used to evaluate the alternating sum $$\mathrm{\Pi }(\beta )=\sum _{n=0}^{\infty }(-1)^np_n(\beta ),$$ (2) which, up to the normalization factor $`2/\pi `$, is equal to the Wigner function of the measured state. The statistical variance of this result can be estimated by $`\text{Var}[\mathrm{\Pi }(\beta )]=\{1-[\mathrm{\Pi }(\beta )]^2\}/N`$. Thus, the statistical error of our measurement reaches its maximum value, equal to $`1/N^{1/2}\approx 1.1\%`$, when the value of the Wigner function is close to zero. The Wigner functions of the vacuum and of the coherent state are Gaussians centered at the average complex amplitude of the field, and their widths characterize quantum fluctuations. It can be noticed that the measured Wigner function of the coherent state is slightly lower than that of the vacuum state. In the following, when discussing experimental imperfections, we shall explain this as a result of non-unit interference visibility. In the plot of the Wigner function of the phase diffused coherent state, one can clearly distinguish two outer peaks corresponding to the turning points of the harmonically modulated phase. There are several experimental factors whose impact on the result of the measurement needs to be analyzed. First, there are losses of the signal field resulting from two main sources: the reflection from the beam splitter BS2 and, more importantly, imperfect photodetection characterized by the quantum efficiency $`\eta `$. Analysis of these losses shows that in such a case the alternating series evaluated from the photocount statistics is proportional to a generalized, $`s`$-ordered quasidistribution function $`W(\alpha ;s)`$, with the ordering parameter equal to $`s=-(1-\eta T)/\eta T`$. In addition, the two modes interfered at the beam splitter BS2 are never matched perfectly. The effects of the mode mismatch can be discussed most thoroughly within the multimode theory. Here, due to limited space, we shall present the main conclusions and briefly sketch the reasoning. Let us consider the normalized mode functions describing the transmitted signal field and the reflected probe field. The squared overlap $`\xi `$ of these two mode functions can be related to the interference visibility $`v`$ as $`\xi =v/(2-v)`$.
In order to describe the effects of the mode mismatch, we will decompose the probe mode function into a part that precisely overlaps with the signal, and the orthogonal remainder. The amplitude of the probe field effectively interfering with the signal is thus multiplied by $`\xi ^{1/2}`$, and the remaining part of the probe field contributes independent Poissonian counts with the average number of detected photons equal to $`(1-\xi )|\beta |^2`$. Consequently, the full count statistics is given by a convolution of the statistics generated by the interfering fields and the Poissonian statistics of mismatched photons. A simple calculation shows that the alternating sum evaluated from such a convolution can be represented as a product of the contributions corresponding to the two components of the probe field: $$\mathrm{\Pi }(\beta )=\mathrm{exp}[-2(1-\xi )|\beta |^2]\times \frac{\pi }{2\eta T}W\left(\sqrt{\frac{\xi }{\eta T}}\beta ;-\frac{1-\eta T}{\eta T}\right).$$ (4) Here on the right-hand side we have made use of the theoretical results for imperfect detection cited above. Specializing this result to a coherent state $`|\alpha _0\rangle `$ with the amplitude $`\alpha _0`$ yields: $$\mathrm{\Pi }(\beta )=\mathrm{exp}[-2|\beta -\sqrt{\xi \eta T}\alpha _0|^2-2(1-\xi )\eta T|\alpha _0|^2].$$ (5) Thus, in a realistic case $`\mathrm{\Pi }(\beta )`$ represents a Gaussian centered at the attenuated amplitude $`\sqrt{\xi \eta T}\alpha _0`$, and the width remains unchanged. This Gaussian function is multiplied by the constant factor $`\mathrm{exp}[-2(1-\xi )\eta T|\alpha _0|^2]`$. For our measurement, $`\xi \approx 97\%`$ and $`\eta T|\alpha _0|^2\approx 1.34`$, which gives a value of this factor equal to 0.92 (a numerical check of this value is sketched at the end of this Communication). This result agrees with the height of the experimentally measured Wigner function of a coherent state. We shall conclude this Communication with a brief comparison of the demonstrated direct method for measuring the Wigner function with the optical homodyne tomography approach. An important parameter in the experimental quantum state reconstruction is the detection efficiency. Currently, higher values of this parameter can be achieved in the homodyne technique, which detects quantum fluctuations as a difference between two rather intense fields. Such fields can be efficiently converted into photocurrent signals with the help of p-i-n diodes. It should also be noted that an avalanche photodiode is not capable of resolving the number of simultaneously absorbed photons, and that it delivers a signal proportional to the light intensity only in the regime described in this paper. However, continuous progress in single photon detection technology gives hope to overcome current limitations of photon counting. Alternatively, the displacement transformation implemented in the photon counting technique can be combined with efficient random phase homodyne detection. This yields the recently proposed scheme for cascaded homodyning. The simplicity of the relation (1) linking the count statistics with quasidistribution functions allows one to determine the Wigner function at a given point from a relatively small sample of experimental data. This feature becomes particularly advantageous when we consider detection of multimode light. Optical homodyne tomography requires substantial numerical effort to reconstruct the multimode Wigner function.
In contrast, the photon counting method has a very elegant generalization to the multimode case: after applying the displacement to each of the involved modes, the Wigner function at the selected point is simply given by the average parity of the total number of detected photons. Moreover, the dichotomic outcome of such a measurement provides a novel way of testing quantum nonlocality exhibited by correlated states of optical radiation. The authors thank Prof. K. Ernst for placing a single-mode He:Ne laser at their disposal. This research is supported by Komitet Badań Naukowych, Grant 2P03B 002 14.
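Because every field involved in the above analysis is coherent, Eq. (5) can also be checked by direct simulation: the interfering component produces Poissonian counts with mean $`|\beta -\sqrt{\xi \eta T}\alpha _0|^2`$, the mismatched component adds independent Poissonian counts with mean $`(1-\xi )\eta T|\alpha _0|^2`$, and the parity of a Poissonian of mean $`\mu `$ is $`e^{-2\mu }`$. The following sketch is our own check using the numbers quoted in the text (the value of $`\alpha _0`$ is fixed by $`\eta T|\alpha _0|^2\approx 1.34`$):

```python
import numpy as np

rng = np.random.default_rng(0)
v, eta, T = 0.985, 0.70, 0.986
xi = v / (2.0 - v)                       # ~0.97, as quoted in the text
alpha0 = np.sqrt(1.34 / (eta * T))       # so that eta*T*|alpha0|^2 = 1.34
beta = np.sqrt(xi * eta * T) * alpha0    # probe the peak of the Gaussian

mu1 = abs(beta - np.sqrt(xi * eta * T) * alpha0)**2   # interfering part (0 here)
mu2 = (1.0 - xi) * eta * T * alpha0**2                # mismatched photons

counts = rng.poisson(mu1, 8000) + rng.poisson(mu2, 8000)
print(np.mean((-1.0) ** counts))          # ~0.92, the reduced peak height
print(np.exp(-2.0 * (mu1 + mu2)))         # Eq. (5) evaluated at this beta
```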
no-problem/9903/hep-ex9903062.html
ar5iv
text
# Determination of the upper limit on $`m_{\nu _\tau }`$ from LEP. ## I Introduction: Motivation and indirect constraints. The neutrino masses are one of the most puzzling and hotly discussed subjects in the high energy physics community. It is believed that the smallness of the neutrino masses can be explained by assuming that they are produced by the mixing between standard Dirac mass terms and large Majorana mass terms; the Majorana masses are related to a new energy scale at which lepton number conservation is violated. This is the so-called see-saw mechanism, which is present in many grand-unified models. If the Standard Model mass hierarchy is preserved in the Dirac sector of the neutrino mass matrix, the tau-neutrino is expected to be by far the heaviest neutrino. Under this assumption the neutrino mass hierarchy is expected to be of the order $`m_{\nu _\tau }:m_{\nu _\mu }:m_{\nu _e}=m_t^2:m_c^2:m_u^2`$. Cosmology can put strong constraints on the neutrino masses because of their influence on the actual density of the universe. An unstable tau-neutrino with a mass of the order of 10-20 MeV can survive the cosmological constraints. Measurements of light-nuclei abundances which result from Big Bang Nucleosynthesis can give information on the neutrino masses. The incompatibility between the BBNS prediction and the measured D and He<sup>3</sup> abundances can be solved by an unstable tau-neutrino with a mass of the order of 10-25 MeV. The claimed SuperK discovery of atmospheric neutrino oscillations would constrain $`\mathrm{m}_{\nu _\tau }`$ to be lighter than about 170 keV (which is the direct limit on the mu-neutrino mass) if what they observe is an oscillation between tau and mu neutrinos. ## II The fit to $`\mathrm{m}_{\nu _\tau }`$: 2-dimensional method The $`\mathrm{m}_{\nu _\tau }`$ measurements at LEP are based on a fit to the $`E_h,M_h`$ spectrum in hadronic tau decays. This method was introduced for the first time by the two LEP experiments OPAL and ALEPH. The fact that for each given hadronic mass the hadronic energy is constrained between the two values $`E_h^{max,min}(M_h,\mathrm{m}_{\nu _\tau })`$ gives a sizable improvement in the sensitivity to the tau-neutrino mass with respect to the one obtained by a fit to the $`M_h`$ spectrum alone. The two decays used at LEP are $`\tau \to 5\pi ^\pm \nu _\tau `$ and $`\tau \to 3\pi ^\pm \nu _\tau `$. The first decay mode benefits from an invariant mass spectrum which extends to large values but is limited by the very small branching ratio, which is of the order of 0.08$`\%`$. The second mode benefits from large statistics (BR($`\tau \to 3\pi ^\pm \nu _\tau `$) $`\approx 9\%`$) but the $`M_h`$ spectrum is suppressed at values close to $`M_\tau `$ because of the $`a_1`$ dominance in 3$`\pi `$ tau decays. The two modes have similar sensitivities even though the regions in the $`M_h,E_h`$ plane from which this sensitivity comes are slightly different: for the five-prong mode it comes from a few events at very high values of $`M_h`$ and $`E_h`$, while for the three-prong mode events at very high energy and intermediate mass can contribute too.
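The kinematic constraint $`E_h^{max,min}(M_h,\mathrm{m}_{\nu _\tau })`$ follows from standard two-body decay kinematics: in the tau rest frame $`E_h^{}=(M_\tau ^2+M_h^2-m_\nu ^2)/2M_\tau `$, and boosting with $`E_\tau =E_{beam}`$ gives the allowed band. The sketch below is our own illustration with arbitrary numbers (radiation neglected); it shows how the band shrinks near the end-point, which is where the sensitivity to $`\mathrm{m}_{\nu _\tau }`$ comes from.

```python
import numpy as np

M_TAU = 1.777  # GeV

def e_had_band(m_h, m_nu, e_beam):
    """Allowed hadronic energy range (GeV) for tau -> hadrons + nu_tau,
    assuming E_tau = e_beam (ISR/FSR neglected)."""
    e_star = (M_TAU**2 + m_h**2 - m_nu**2) / (2.0 * M_TAU)  # tau rest frame
    p_star = np.sqrt(e_star**2 - m_h**2)
    gamma = e_beam / M_TAU
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * (e_star - beta * p_star), gamma * (e_star + beta * p_star)

# Near the hadronic-mass end-point the band collapses, and a massive
# neutrino shifts and narrows it:
print(e_had_band(1.70, 0.000, 45.6))
print(e_had_band(1.70, 0.020, 45.6))   # with a 20 MeV tau-neutrino
```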
The value of $`\mathrm{m}_{\nu _\tau }`$ is obtained by a likelihood fit to the observed events, where the likelihood has the following form: $$\mathcal{L}(m_\nu )=\prod _{events}\frac{1}{\mathrm{\Gamma }}\frac{d^2\mathrm{\Gamma }}{dE_hdm_h}\mathcal{G}(E_{beam},E_\tau )\mathcal{R}(m_h,E_h,\rho ,\sigma _{m_h},\sigma _{E_h},\mathrm{\dots })\epsilon (m_h,E_h)$$ (1) The $`\frac{d^2\mathrm{\Gamma }}{dE_hdm_h}`$ is the double-differential tau decay width and contains the unknown part related to the hadronic spectral functions. The knowledge of these functions is relevant in estimating the sensitivity of an experiment (the so-called luck factor) but it doesn't affect the limit, which comes from a region of the $`M_h,E_h`$ plane where the phase space is dominant (this is true if no narrow resonance is present with $`M_{res}`$ close to $`M_\tau `$, as discussed in the three-prong results section). The effect of initial/final state radiation is described by $`\mathcal{G}(E_{beam},E_\tau )`$; at LEP ISR is expected to be small and it has a practically negligible effect on the $`\mathrm{m}_{\nu _\tau }`$ determination. The resolution function $`\mathcal{R}(m_h,E_h,\rho ,\sigma _{m_h},\sigma _{E_h},\mathrm{\dots })`$ is the most delicate part of this measurement; the determination of the $`M_h,E_h`$ end-point requires knowledge of the tracking calibration with an accuracy better than the ratio $`\mathrm{m}_{\nu _\tau }/M_\tau `$. As will be shown in the results section this is the main source of systematics for all the LEP experiments. The detector efficiency is contained in the function $`\epsilon (m_h,E_h)`$; since this function is not expected to vary rapidly in the sensitive region, its influence on the $`\mathrm{m}_{\nu _\tau }`$ determination is expected to be very small. ## III The Results In this section the LEP results from five- and three-prong tau decays are reviewed. In the three-prong section the possible problem caused by the presence of a narrow resonance close to the hadronic mass end-point is discussed. Finally the ALEPH and OPAL results are combined with the likelihood product method by using the published five- and three-prong likelihoods. An estimate of the systematic error of the combined result is given too. ### A Results from $`\tau \to 5\pi ^\pm \nu _\tau `$ tau decays The decay mode $`\tau \to 5\pi ^\pm \nu _\tau `$ has been used by ALEPH and by OPAL to measure the tau-neutrino mass. Both experiments have analysed the full LEP1 statistics, which corresponds to about 200k tau-pairs. The ALEPH experiment has selected 52 $`\tau \to 5\pi ^\pm \nu _\tau `$ decays (and 3 $`\tau \to 5\pi ^\pm \pi ^0\nu _\tau `$ decays which, due to the worse $`\pi ^0`$ energy resolution, have very small impact on the final result) with an efficiency of about 27$`\%`$ and a background from dangerous topologies at the level of 0.6$`\%`$. In terms of the $`\mathrm{m}_{\nu _\tau }`$ upper limit the dangerous backgrounds are the tau decays in which the hadronic mass and/or the hadronic energy are reconstructed at values larger than the true ones. For example a decay $`\tau \to 3\pi ^\pm \pi ^0\to 3\pi ^\pm \gamma e^+e^{-}`$ where the two electrons are reconstructed as pions tends to have a reconstructed hadronic mass which is systematically higher than the true one. If these events are not rejected they could mimic a massless tau-neutrino, giving a fake good limit on $`\mathrm{m}_{\nu _\tau }`$.
The same problem holds for $`q\overline{q}`$ events reconstructed as $`\tau \to 5\pi ^\pm \nu _\tau `$; in fact this kind of event tends to lie in the high $`M_h,E_h`$ region. The typical ALEPH resolutions are about 15 MeV for $`M_h`$ and 350 MeV for $`E_h`$. The resolution parameters have been determined by using the so-called Monte Carlo cloning technique, which allows the determination of these parameters on an event-by-event basis. The fit to the ALEPH events shown in Fig. 1 gives a limit of $`\mathrm{m}_{\nu _\tau }<22.3`$ MeV at 95$`\%`$ confidence level. The systematic error is dominated by the knowledge of the parameters of the resolution function. The energy and the mass scales and resolutions have been determined by using $`Z\to \mu ^+\mu ^{-}`$ events and the charm decays $`D^0\to K^{-}\pi ^+`$, $`D^0\to K^{-}\pi ^+\pi ^+\pi ^{-}`$ and $`D^+\to K^{-}\pi ^+\pi ^+`$. By adding the 0.8 MeV systematic error linearly, the final 95$`\%`$ C.L. limit is $`\mathrm{m}_{\nu _\tau }<23.1`$ MeV. The OPAL experiment has performed a similar measurement by selecting 22 $`\tau \to 5\pi ^\pm \nu _\tau `$ decays. The selection efficiency is 9.3$`\%`$ with a dangerous background of the order of 2.5$`\%`$. The parameters of the resolution function have been determined, as for ALEPH, with the Monte Carlo cloning technique. In the OPAL paper it is proved that this technique is able to spot events with reconstruction problems, as shown in Fig. 2. Typical mass and energy resolutions of the OPAL analysis are 20-25 MeV and 500 MeV respectively. The fit to the 22 OPAL events gives a limit of $`\mathrm{m}_{\nu _\tau }<39.6`$ MeV at 95$`\%`$ confidence level. As for ALEPH, the systematic error is dominated by the knowledge of the resolution function parameters and is 3.6 MeV. By adding this systematic error linearly to the statistical limit, OPAL obtains a 95$`\%`$ C.L. upper limit on $`\mathrm{m}_{\nu _\tau }`$ of 43.2 MeV. ### B Results from $`\tau \to 3\pi ^\pm \nu _\tau `$ tau decays As mentioned in the introduction, the three-prong tau decay mode is competitive with the five-prong one in the determination of the tau-neutrino mass. The three LEP experiments ALEPH, DELPHI and OPAL have used this decay mode to constrain the tau-neutrino mass. The ALEPH result is based on a fit to the $`M_h,E_h`$ distribution. Due to the large statistics, the fit has been limited to a high $`M_h,E_h`$ region where 3000 $`\tau \to 3\pi ^\pm \nu _\tau `$ decays have been selected. The selection efficiency in this region is about 49$`\%`$ with a background from dangerous topologies of less than 0.2$`\%`$. The high statistics of this channel makes the cloning technique not viable. For this reason ALEPH has parameterised the quantities entering the resolution function $`\mathcal{R}`$ as a function of the hadronic mass and energy. The typical values of the mass and of the energy resolution are similar to the ones obtained in the five-prong mode. The fit gives a statistical limit of 21.5 MeV on the tau-neutrino mass at 95$`\%`$ confidence level. The systematic error on this limit is again dominated by the knowledge of the resolution function and amounts to 4.2 MeV. This error is larger than the five-prong one mainly because the cloning technique was not used. Adding the systematic error linearly to the fit result, a 95$`\%`$ C.L. limit of $`\mathrm{m}_{\nu _\tau }<25.7`$ MeV has been obtained. The OPAL experiment has tried to increase its sensitivity to the tau-neutrino mass by partially reconstructing the tau direction in three-prong versus three-prong tau events.
In these events the thrust axis is a good approximation of the tau direction, especially for events where the three prongs are very energetic. By a fit to the two variables squared missing-mass and missing-energy on a sample of 2514 events, OPAL obtained an upper limit of 32.1 MeV at 95$`\%`$ C.L. on $`\mathrm{m}_{\nu _\tau }`$. The systematic error has been estimated to be 3.2 MeV, dominated by the knowledge of the resolution function parameters. This gives a final limit of $`\mathrm{m}_{\nu _\tau }<35.3`$ MeV at 95$`\%`$ confidence level. The DELPHI experiment has selected 12538 $`\tau \to 3\pi ^\pm \nu _\tau `$ decays with a 38$`\%`$ efficiency and a dangerous background of 1.5$`\%`$. A fit to the $`M_h,E_h`$ distribution gives a 95$`\%`$ C.L. upper limit of 25 MeV on the tau-neutrino mass. In the study of the systematics DELPHI has observed a significant disagreement between the three-prong mass spectrum in the data sample and the one obtained with a Monte Carlo based on the Kühn-Santamaria model. This discrepancy is observed in the $`M_h`$ range (1.5-1.9) GeV. The DELPHI collaboration claimed that this excess could be explained if about 2.3$`\%`$ of a new resonance, the a'(1700) with a mass of 1.7 GeV and a width of 0.3 GeV, was added in the three-prong tau decay. The description of the Dalitz plots in the three-prong tau decays was also improved by the addition of this resonance. The CLEO experiment has tried to measure the amount of this resonance in their three-prong tau sample (by assuming a massless tau-neutrino) and has obtained (with different models) an a'(1700) fraction of the order of (0.1-0.4)$`\%`$, which is significantly smaller than the 2.3$`\%`$ reported by DELPHI. The ALEPH and OPAL experiments have observed the same problem as DELPHI in describing the $`\tau \to 3\pi ^\pm \nu _\tau `$ Dalitz plots; however, they do not observe any excess with respect to the Kühn and Santamaria model in the hadronic mass spectrum. The ALEPH experiment has checked the effect of such a large amount of a'(1700) on its limit: if 2.5$`\%`$ of a'(1700) with the parameters suggested by DELPHI is added in the three-prong fit, the limit on $`\mathrm{m}_{\nu _\tau }`$ is worsened by about 6 MeV. This implies a variation of the combined three- and five-prong ALEPH upper limit, reported in the following, of about 1 MeV. As DELPHI correctly states, a simultaneous fit of the a'(1700) properties and of $`\mathrm{m}_{\nu _\tau }`$ in three-prong tau decays is not possible. In view of the CLEO results and of the ALEPH check, it is unlikely that the limit on the tau-neutrino mass can be deteriorated by the presence of this new resonance. More input from theorists is welcome. ### C Combination of the ALEPH and OPAL results The ALEPH and OPAL collaborations have combined (separately) their three- and five-prong upper limits on the tau-neutrino mass. The method used to combine these results is based on the likelihood product. In doing the combination, the correlation between the systematic errors of the two decay modes has been properly taken into account. The limits obtained by the OPAL and by the ALEPH collaborations are respectively $`\mathrm{m}_{\nu _\tau }<27.6`$ MeV and $`\mathrm{m}_{\nu _\tau }<18.2`$ MeV at 95$`\%`$ confidence level, including systematic effects. The results from the different LEP experiments and from the different tau decay modes are limited by statistics.
Moreover, the dominant systematics (resolution function parameters) are mainly uncorrelated between the different LEP experiments. For this reason a combination of the LEP results would improve the sensitivity to the tau-neutrino mass. I have done this exercise in order to get an estimate of what this combined limit would be. I have used the five- and three-prong likelihoods published by the ALEPH and by the OPAL experiments (the DELPHI results have not yet been published). The method is the same as the one used in the ALEPH and OPAL publications: $$\mathcal{L}_{COMB}(m_\nu )=\mathcal{L}_{OPAL}^{3\pi }(m_\nu )\times \mathcal{L}_{OPAL}^{5\pi }(m_\nu )\times \mathcal{L}_{ALEPH}^{5\pi }(m_\nu )\times \mathcal{L}_{ALEPH}^{3\pi }(m_\nu )$$ (2) The combined likelihood is shown in Fig. 3. From this likelihood a 95$`\%`$ C.L. limit $`\mathrm{m}_{\nu _\tau }<13.6`$ MeV can be derived by requiring $`\mathrm{ln}(\mathcal{L}(m_\nu ^{95}))=\mathrm{ln}(\mathcal{L}^{MAX})-1.92`$ (this method is almost equivalent to the one based on the integration of the likelihood, used for example by CLEO, when the likelihood shape is fairly Gaussian, as in this case; a toy illustration of this procedure is sketched at the end of this contribution). To estimate the systematic error, all the modified likelihoods containing the effect of the different systematic sources would be needed. Since they are not published, a rough estimate of the systematics has been obtained by multiplying each likelihood by a constant factor which brings, for each channel, the limit on $`\mathrm{m}_{\nu _\tau }`$ to be equal to the one which includes the systematics. The total systematics has been obtained as follows: the systematic error for each combined mode is obtained by subtracting from the limit derived with the modified likelihoods the one obtained without systematics; all these errors are added in quadrature (in this way the possible correlations between the different systematic errors are not taken into account), giving a total systematics of 1.4 MeV. By adding this error linearly to the statistical result, a combined ALEPH+OPAL 95$`\%`$ C.L. upper limit of 15 MeV on $`\mathrm{m}_{\nu _\tau }`$ is obtained. I want to stress that this combination is unofficial and approximate. The aim is to give an idea of the gain which could be achieved with the combination of the LEP results and to push the ALEPH, DELPHI and OPAL collaborations to produce an official combined $`\mathrm{m}_{\nu _\tau }`$ limit. ## IV Comparison with CLEO results The CLEO experiment has collected a huge statistics of tau decays at a centre-of-mass energy close to the $`\mathrm{\Upsilon }(4s)`$ resonance and is expected to have a sensitivity to $`\mathrm{m}_{\nu _\tau }`$ larger than that of the LEP experiments. The performances of the CLEO, ALEPH and OPAL 5$`\pi `$ analyses are compared in Table I. The CLEO limits are worse than the ALEPH one even though the CLEO statistics is a factor of five larger. This brings us to the question: is CLEO unlucky or are the LEP results lucky? It would be nice to evaluate for each experiment the expected limit on $`\mathrm{m}_{\nu _\tau }`$. Its comparison with the actual one would tell us who is lucky and who is unlucky. Unfortunately the unknown hadronic dynamics doesn't allow the evaluation of the a priori sensitivity of an experiment. The ALEPH experiment claims that the probability to get such a lucky distribution in the $`M_h,E_h`$ plane is at the level of 15$`\%`$ if a model of the dynamics driven by $`\pi \pi a_1`$ is assumed in 5$`\pi `$ tau decays.
At the same time CLEO claims that the probability to get a limit on $`\mathrm{m}_{\nu _\tau }`$ as bad as or worse than what they have obtained is at the level of 23$`\%`$ if a softer mass spectrum is assumed in the five-prong tau decays. So the puzzle stays unsolved. What can be done is to compare data with data in the region where they are most sensitive to the tau-neutrino mass. This exercise is shown in Fig. 4, where the number of 5$`\pi `$ events is plotted in slices of iso-$`\mathrm{m}_{\nu _\tau }`$ in the $`M_h,E_h`$ plane; in order to have more statistics the ALEPH and the OPAL events have been summed in this comparison. Only the events in a sensitive region which corresponds to $`M_h>1.65`$ GeV and $`E_h/E_b>0.9`$ are shown. The selection efficiency is assumed to be flat in the full $`M_h,E_h`$ plane. It turns out from this plot that the shapes of the LEP and CLEO data are compatible. This is shown by the comparison between the dark-blue line and the red dots of Fig. 4; the CLEO events in the sensitive region have been normalised to the number of ALEPH+OPAL events in the same region. For what concerns the fraction of 5$`\pi `$ events selected in this region with respect to the total number of selected 5$`\pi `$ events, the compatibility is less good, as can be observed by comparing the green line in Fig. 4 with the red dots; the ALEPH and OPAL experiments select a total of 16 events in this sensitive region, which should be compared with about 7, the number of events observed by CLEO in this sensitive region rescaled to the ALEPH+OPAL 5$`\pi `$ statistics. These two numbers are barely compatible. The number of expected events normalised to the ALEPH plus OPAL statistics is about 18 with the $`\pi \pi a_1`$ dynamics (this is shown by the light-blue line of Fig. 4) and about 9 with a softer dynamics similar to phase space. Even more intriguing is the fact that the likelihood shown by the CLEO experiment at the TAU98 workshop shows a peak at $`\mathrm{m}_{\nu _\tau }\approx 18`$ MeV. This likelihood is preliminary and doesn't include the systematic errors. If the ALEPH method described above is applied to this likelihood, the value of $`\mathrm{m}_{\nu _\tau }=0`$ is excluded at more than 90$`\%`$ confidence level. This could mean that CLEO is on the verge of a very interesting result or that (more probably but less interestingly) there is in the CLEO likelihood a bias towards large neutrino masses. This bias would make the CLEO limit more conservative (explaining why their limit is so unlucky) and is therefore not worrying in terms of the validity of their $`\mathrm{m}_{\nu _\tau }`$ upper limit. One should remember that most of the systematics determined by the different experiments are studied in terms of bias towards a massless tau-neutrino, while less attention is paid to possible sources which can mimic a massive neutrino. A typical example is the fact that all the experiments reduce the dangerous background (the one which can mimic a massless tau-neutrino) to the 1-2$`\%`$ level, while backgrounds as high as $`10\%`$ from higher decay multiplicities (like a $`3\pi ^\pm \pi ^0`$ reconstructed as a $`3\pi ^\pm `$ tau decay), which can mimic a massive neutrino, are accepted. The CLEO experiment has still a large fraction of its statistics to analyse, so I think that this intriguing situation will be clarified soon.
## V Conclusions and Acknowledgements The LEP experiments have put constraints on $`\mathrm{m}_{\nu _\tau }`$ by fitting the $`M_h,E_h`$ distribution in three- and five-prong tau decays. The best result obtained by a single experiment is that of ALEPH, which obtains $`\mathrm{m}_{\nu _\tau }<18.2`$ MeV at 95$`\%`$ confidence level by combining the three- and five-prong results. An unofficial combination of the ALEPH and OPAL results shows that LEP can exclude at 95$`\%`$ C.L. values of $`\mathrm{m}_{\nu _\tau }`$ higher than 15 MeV. In my personal opinion the CLEO experiment has the statistical power to go below this limit. I want to thank Ronan McNulty from DELPHI, Achim Stahl from OPAL and Jean Duboscq from CLEO for the help that I received in preparing this talk. A special thank-you goes to my ALEPH colleague (and friend) Luca Passalacqua, who shared with me three years of $`\mathrm{m}_{\nu _\tau }`$ measurements with the ALEPH detector. I also want to thank the organisers of this very nice conference.
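As promised above, here is a toy illustration of the combination procedure of Eq. (2): log-likelihoods add, and the 95$`\%`$ C.L. limit is where $`\mathrm{ln}\mathcal{L}`$ drops by 1.92 from its maximum. The curves below are made-up parabolas standing in for the published ones, so the numbers are illustrative only.

```python
import numpy as np

m = np.linspace(0.0, 40.0, 4001)          # m_nu grid in MeV

def toy_lnL(m, m_hat, sigma):
    # Stand-in for one published likelihood curve (purely illustrative).
    return -0.5 * ((m - m_hat) / sigma)**2

# One toy curve per channel: OPAL 3pi, OPAL 5pi, ALEPH 5pi, ALEPH 3pi.
lnL = sum(toy_lnL(m, mh, s) for mh, s in
          [(5.0, 18.0), (8.0, 20.0), (0.0, 14.0), (0.0, 13.0)])
lnL -= lnL.max()

limit95 = m[lnL >= -1.92].max()           # ln L(m95) = ln L_max - 1.92
print(limit95)
```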
no-problem/9903/cond-mat9903224.html
ar5iv
text
# Monte Carlo Algorithms based on the Number of Potential Moves ## 1 Introduction The traditional Monte Carlo method applied to statistical physics is mostly a sampling method to generate standard statistical ensembles, e.g., the canonical ensemble or microcanonical ensemble. In recent years, other ensembles have been used which do not correspond to thermodynamically meaningful ensembles, but serve only as a vehicle for computing quantities of interest by the Monte Carlo method. The earliest such method is umbrella sampling. Other important recent developments are multi-canonical simulations, $`1/k`$-sampling, and broad histogram methods. According to one definition, the multi-canonical ensemble is an ensemble in which the probability $`P(E)`$ of having energy $`E`$ is a constant. This may be realized in a piecewise fashion. In the $`1/k`$-sampling method, the probability for a state having energy $`E`$ is given by $`1/\sum _{E^{}<E}n(E^{})`$, where $`n(E)`$ is the density of states. The broad histogram dynamics does not have a well characterized distribution, but $`P(E)`$ is much broader than the canonical ensemble. The canonical distribution is well approximated by a Gaussian function. It was also pointed out that this dynamics is not entirely correct. The ultimate goal of generating these distributions is usually to compute thermodynamic averages, which are mostly canonical averages. Some form of reweighting is then used to obtain the desired distribution. Recently, we proposed a dynamics which can generate a flat histogram $`P(E)=const`$. This method is exact when "self-consistency" is achieved. The meaning of this will be made clear later. Similar to the broad histogram method, the central quantity is $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$, the microcanonical average of the number of ways to move from one energy level $`E`$ to a nearby energy level $`E+\mathrm{\Delta }E`$. We can then construct either the density of states or the canonical distribution at any temperature. The canonical distribution is determined from an artificial dynamics which we call transition matrix Monte Carlo. We discuss these methods and present some preliminary results in the later sections. ## 2 Sampling the Inverse Density of States We illustrate the method using a two-dimensional Ising model on a square lattice as an example. First of all, we choose a type of permissible moves. For the purpose of connection with standard single-spin-flip Glauber (or Metropolis) dynamics, we take the set of moves to be all single-spin flips. For a given state $`\sigma `$, we can obtain $`N=L^2`$ new states through flipping each of the spins in an $`L\times L`$ system. If the original state has energy $`E=E(\sigma )`$, the new state may have energy $`E+\mathrm{\Delta }E`$. Since the energy spectrum is discrete, we have only a finite number of possibilities for the new energies; for the two-dimensional Ising model, we have five possible energy changes, $`\mathrm{\Delta }E=0,\pm 4J,\pm 8J`$. Let the count of moves for each energy change be $`N(\sigma ,\mathrm{\Delta }E)`$. Hence the total number of moves is $`\sum _{\mathrm{\Delta }E}N(\sigma ,\mathrm{\Delta }E)=N`$. Following the argument of Oliveira, we consider two energy levels $`E`$ and $`E^{}=E+\mathrm{\Delta }E`$. Each move from the state $`\sigma `$ of energy $`E`$ to the state $`\sigma ^{}`$ of energy $`E^{}`$ is through a single spin flip and the reverse move is also allowed.
Thus, the total number of moves from all the states with energy $`E`$ to $`E^{}`$ is the same as from $`E^{}`$ to $`E`$: $$\sum _{E(\sigma )=E}N(\sigma ,\mathrm{\Delta }E)=\sum _{E(\sigma ^{})=E+\mathrm{\Delta }E}N(\sigma ^{},-\mathrm{\Delta }E).$$ (1) The microcanonical average of a quantity $`A(\sigma )`$ is defined as $$\langle A\rangle _E=\frac{1}{n(E)}\sum _{E(\sigma )=E}A(\sigma ),$$ (2) where the summation is over all the states with a fixed energy $`E`$, and $`n(E)`$ is the number of such states. In terms of microcanonical averages, we can re-write Eq. (1) as $$n(E)\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E=n(E+\mathrm{\Delta }E)\langle N(\sigma ^{},-\mathrm{\Delta }E)\rangle _{E+\mathrm{\Delta }E}.$$ (3) This is the basic equation of the broad histogram method and is also our starting point for a flat histogram sampling algorithm. Consider the following flip rate for a single-spin-flip move from state $`\sigma `$ to $`\sigma ^{}`$ with energy $`E`$ and $`E^{}=E+\mathrm{\Delta }E`$, respectively: $$r(E^{}|E)=\mathrm{min}\left(1,\frac{\langle N(\sigma ^{},-\mathrm{\Delta }E)\rangle _{E^{}}}{\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E}\right).$$ (4) The site of the spin flip is chosen at random. Then the detailed balance condition for this rate $$r(E^{}|E)P(\sigma )=r(E|E^{})P(\sigma ^{})$$ (5) is satisfied for $`P(\sigma )\propto 1/n(E(\sigma ))`$. Thus the energy histogram is flat, $$P(E)=\sum _{E(\sigma )=E}P(\sigma )\propto n(E)\frac{1}{n(E)}=const.$$ (6) Suppose that such samples are generated; then in some sense, it is the optimal ensemble for the evaluation of $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$. This is because for different $`E`$, we take samples uniformly in $`E`$, and thus the relative errors in $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ are about the same for all $`E`$. Since $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ is not known in general, we cannot start the simulation unless an approximation scheme is used. We can think of the process as finding the fixed-point value of the system $`x=f(x)`$, where the vector $`x`$ represents the whole set of $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ values. While the function $`f`$ can be evaluated, its explicit form is not known. Some iterative scheme may be useful to speed up the convergence. To start the iterative process, we use a cumulative average for the true microcanonical average. For those $`E`$ for which we do not have any sample yet, we simply set $`r(E^{}|E)`$ to 1. This simple scheme is very good for small systems even without iteration (a minimal code sketch of this scheme is given at the end of this section). ## 3 The transition matrix Monte Carlo dynamics We can construct a Monte Carlo dynamics, in the space of energy, with the average number of moves $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$. Let us look at a single-spin-flip Glauber dynamics. Suppose we do not care about the spin states and only want to know the change of energy. The rate of a spin flip is given by the Glauber rate, $$w(\mathrm{\Delta }E)=\frac{1}{2}\left[1-\mathrm{tanh}\frac{\mathrm{\Delta }E}{2kT}\right].$$ (7) Since there are (on average) $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ different ways of going from $`E`$ to $`E^{}=E+\mathrm{\Delta }E`$, the total probability for a transition from $`E`$ to $`E^{}`$ is $$W(E^{}|E)=w(\mathrm{\Delta }E)\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E,\qquad E\neq E^{}.$$ (8) The diagonal elements are fixed by the requirement that $`W(E|E^{})`$ is a stochastic matrix. This transition matrix satisfies detailed balance with respect to the canonical distribution, $`P_T(E)\propto n(E)\mathrm{exp}(-E/kT)`$. Thus the stationary distribution is the canonical distribution.
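Here is the promised sketch of the flat histogram scheme of Section 2 for a small lattice (our own illustrative Python implementation: cumulative sums over visits estimate $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$, and $`r=1`$ is used when the target level has not been visited yet, exactly as described above):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8
s = rng.choice([-1, 1], size=(L, L))

def dE_flip(s, i, j):
    # Energy change (units of J) for flipping spin (i, j); values 0, +-4, +-8.
    nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
    return 2 * s[i, j] * nb

E = -sum(s[i, j] * (s[(i+1) % L, j] + s[i, (j+1) % L])
         for i in range(L) for j in range(L))

nsum = {}   # (E, dE) -> running sum of N(sigma, dE) over visits to level E
nvis = {}   # E -> number of visits

for _ in range(50000):
    nvis[E] = nvis.get(E, 0) + 1
    for i in range(L):                      # accumulate N(sigma, dE)
        for j in range(L):
            de = dE_flip(s, i, j)
            nsum[(E, de)] = nsum.get((E, de), 0) + 1

    i, j = rng.integers(L, size=2)          # propose a random spin flip
    de = dE_flip(s, i, j)
    Ep = E + de
    if Ep not in nvis:                      # unvisited level: r = 1
        r = 1.0
    else:                                   # Eq. (4) with cumulative averages
        r = min(1.0, (nsum.get((Ep, -de), 0) / nvis[Ep])
                / (nsum[(E, de)] / nvis[E]))
    if rng.random() < r:
        s[i, j] = -s[i, j]
        E = Ep

# nvis approximates a flat energy histogram, and nsum[(E, de)] / nvis[E]
# estimates the microcanonical averages <N(sigma, dE)>_E.
```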
This new dynamics in the space of energy $`E`$ is related to the single-spin-flip dynamics by $$W(E^{}|E)=\frac{1}{n(E)}\sum _{E(\sigma )=E}\sum _{E(\sigma ^{})=E^{}}\mathrm{\Gamma }(\sigma ^{}|\sigma ),$$ (9) where $`\mathrm{\Gamma }(\sigma ^{}|\sigma )`$ is the transition matrix of the single-spin-flip dynamics. An interesting aspect of this dynamics is that it has a much reduced critical slowing down. In fact, one can show that the relaxation time at the critical point $`T_c`$ is proportional to the specific heat. Thus for the two-dimensional Ising model, the divergence of the relaxation time is only logarithmic. In one dimension, the dynamics has a curious dynamical critical exponent of $`z=1`$, as opposed to 2 for the local dynamics and 0 for the Swendsen-Wang dynamics. Since the dynamics cannot be realized without first knowing the values $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$, the real usefulness is in the construction of the canonical distribution from the samples obtained by the flat histogram or any other algorithm that can compute $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$ accurately. ## 4 Results In Fig. 1, we show the energy histograms for three different types of dynamics of the $`32\times 32`$ two-dimensional Ising model: (a) the Gaussian-like peak for the standard canonical ensemble at the critical temperature $`T_c`$; (b) the broad histogram dynamics with a sharp peak near $`E=0`$; (c) the flat histogram method, with an insert showing the fluctuation on a fine scale. Given the estimates for $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$, there are a number of ways to determine the canonical distribution, $`P_T(E)\propto n(E)\mathrm{exp}(-E/kT)`$. For example, we can use Eq. (3) to determine the density of states. We can also determine $`P_T(E)`$ directly from the detailed balance of the transition matrix Monte Carlo dynamics: $$w(\mathrm{\Delta }E)\langle N(\sigma ,\mathrm{\Delta }E)\rangle _EP_T(E)=w(-\mathrm{\Delta }E)\langle N(\sigma ^{},-\mathrm{\Delta }E)\rangle _{E+\mathrm{\Delta }E}P_T(E+\mathrm{\Delta }E),$$ (10) where $`w(\mathrm{\Delta }E)`$ is given by Eq. (7). Since there are more equations than unknowns, it is natural to solve these over-determined equations with a least-squares method. However, a more direct iterative scheme is also quite accurate and more efficient (a code sketch of this reconstruction is given at the end of the paper). In Fig. 2, we show the specific heat (upper part) and relative errors as compared with exact results. The dashed lines are for the broad histogram method and solid lines are from the flat histogram sampling. The broad histogram method shows an anomalous peak around $`T=1.3`$, while the flat histogram result agrees with exact values with errors of $`10^{-2}`$ or less. Since we can compute the density of states $`n(E)`$ easily, we can also compute the free energy and entropy with ease. These quantities are more difficult to compute by the traditional methods. Fig. 3 shows the entropy and errors. The flat histogram is again better than the broad histogram method. All approaches that use the reweighting technique, such as the histogram methods of Ferrenberg and Swendsen, Lee's version of the multicanonical method, or the broad histogram method, have the problem of scalability for large systems. Our flat histogram method also suffers from this. While the simple method without an iterative process and without the requirement of self-consistency seems to work well for systems $`L\le 32`$, systematic errors are observed for large systems. Substantial deviations (extra anomalous peaks in the specific heat, for example) are present for the $`L=64`$ systems.
Such systematic deviations can be measured quantitatively by what we call the detailed balance violation: $$v(E)=\left|1-\frac{g(E,E^{\prime \prime })g(E^{\prime \prime },E^{})g(E^{},E)}{g(E,E^{})g(E^{},E^{\prime \prime })g(E^{\prime \prime },E)}\right|$$ (11) where $`g`$ is generally a transition rate; for our problem here, we'll take $`g(E,E^{})=\langle N(\sigma ,E^{}-E)\rangle _E`$, with $`E^{}=E+4J`$ and $`E^{\prime \prime }=E+8J`$. The quantity $`v(E)`$ should be zero, up to the usual Monte Carlo statistical errors, if the estimates are not systematically biased. In Fig. 4, we show this quantity as a function of $`E`$ for the $`L=64`$ system. The largest violation occurs at the two ends of the distribution. This systematic trend is also present for small systems. There are a number of ways to fix this problem. One is to do a number of canonical simulations at lower temperatures where the violation of detailed balance is biggest. This indeed proves to be effective for the Ising model. However, this solution is not very satisfactory, as such simulations may be very difficult, for example, for spin glasses. Thus, a more systematic approach is to use an iterative scheme, which can hopefully converge to the true value without any systematic bias. ## 5 Conclusion We study a recently proposed Monte Carlo dynamics in which the energy histogram is exactly flat in principle. We demonstrated that such a method is capable of giving highly accurate results for the thermodynamic quantities in a single or few simulations for the whole temperature region. While some systematic errors are present in our current simple implementation, there are ways to improve the naive algorithm. We expect that this method will be a useful alternative for thermodynamic calculations, especially for free energy and entropy calculations.
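Both the detailed-balance reconstruction of Eq. (10) and the violation measure of Eq. (11) are straightforward to evaluate from the estimated averages. A hedged sketch follows, assuming the dictionary layout of the sampler above (navg[(E, dE)] holds the estimates of $`\langle N(\sigma ,\mathrm{\Delta }E)\rangle _E`$, energies in units of $`J`$) and nonzero estimates along the energy ladder:

```python
import numpy as np

def glauber_w(dE, T):
    # Glauber rate of Eq. (7), with k_B = J = 1.
    return 0.5 * (1.0 - np.tanh(dE / (2.0 * T)))

def canonical_from_estimates(levels, navg, T):
    """levels: sorted energy ladder with spacing 4; navg[(E, dE)] are the
    estimated <N(sigma, dE)>_E. Returns the normalized canonical P_T(E)."""
    lnP = {levels[0]: 0.0}
    for E, Ep in zip(levels[:-1], levels[1:]):
        dE = Ep - E
        # Eq. (10): w(dE)<N(.,dE)>_E P(E) = w(-dE)<N(.,-dE)>_{E'} P(E')
        lnP[Ep] = (lnP[E] + np.log(glauber_w(dE, T) * navg[(E, dE)])
                   - np.log(glauber_w(-dE, T) * navg[(Ep, -dE)]))
    p = np.exp(np.array([lnP[E] for E in levels]) - max(lnP.values()))
    return p / p.sum()

def db_violation(E, navg):
    # Eq. (11) with g(E, E') = <N(sigma, E'-E)>_E, E' = E+4, E'' = E+8.
    g = lambda x, y: navg[(x, y - x)]
    forward = g(E, E + 8) * g(E + 8, E + 4) * g(E + 4, E)
    backward = g(E, E + 4) * g(E + 4, E + 8) * g(E + 8, E)
    return abs(1.0 - forward / backward)
```

Because the temperature enters only through the Glauber rates, reweighting to any temperature costs nothing beyond re-running the first function on the same estimates.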
no-problem/9903/astro-ph9903028.html
ar5iv
text
# New Structure In The Shapley Supercluster ## 1 Introduction The Shapley supercluster (SSC) has been investigated by numerous authors since its discovery in 1930 (Quintana et al. 1995; hereafter Paper I). It lies in the general direction of the dipole anisotropy of the Cosmic Microwave Background (CMB), and is located at 130 $`\mathrm{h}_{75}^{-1}`$ Mpc, beyond the Hydra-Centaurus supercluster ($`\sim 50`$ $`\mathrm{h}_{75}^{-1}`$ Mpc away from us). It consists of many clusters and groups of galaxies in the redshift range $`0.04<z<0.055`$. The central cluster A3558 has also been measured with a ROSAT PSPC observation by Bardelli et al. (1996), who derive a total mass of $`M_{tot}=3.1\times 10^{14}\mathrm{M}_{\odot }`$ within an Abell radius of 2 $`\mathrm{h}_{75}^{-1}`$ Mpc. Several other X-ray clusters form part of the Shapley supercluster (Pierre et al. 1994). The Shapley supercluster is recognised as one of the most massive concentrations of galaxies in the local universe (Scaramella et al. 1989; Raychaudhury 1989), so it is of particular interest to consider its effect on the dynamics of the Local Group. In Paper I it was estimated that for $`\mathrm{\Omega }_o=0.3`$ and $`H_o=75\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ the gravitational pull of the supercluster may account for up to 25% of the peculiar velocity of the Local Group required to explain the dipole anisotropy of the CMB radiation, in which case the mass of the supercluster would be dominated by inter-cluster dark matter. Previous studies of the Shapley supercluster (Paper I, Quintana et al. 1997; hereafter Paper II) have concentrated on the various rich Abell galaxy clusters in the region, but this might give a very biased view of the supercluster. As was noted in Paper I, "the galaxy distribution inside the supercluster must be confirmed by the detection in redshift space of bridges or clouds of galaxies connecting the different clusters". We are continuing this project, using data from wide-field multi-fibre spectrographs to measure many more galaxy redshifts and get a more complete picture of the composition of the supercluster. Our main aims are first to define the real topology of the SSC: in Paper I it was shown that the SSC is significantly flattened, but the real extent of the concentration is not well defined. Secondly we will analyse the individual X-ray clusters that are true members of the Shapley Supercluster in order to estimate the cluster masses, and investigate suspected sub-structure. Additional observations are planned before we present a full analysis of the dynamics (Proust et al. 1998, in preparation). In Paper II we presented data from the MEFOS spectrograph on the European Southern Observatory 3.60m telescope. This has 30 fibres in a 1 deg diameter field, so the observations were again mainly concentrated on the known clusters, determining for several of them whether they were members of the supercluster or not. In this paper we present new data obtained with the FLAIR-II (Parker & Watson 1995; Parker 1997) multi-fibre spectrograph on the UK Schmidt Telescope at the Anglo-Australian Observatory. This has 90 fibres in a $`5.5\times 5.5`$ deg<sup>2</sup> field and has allowed us to measure a much more uniform distribution of galaxies in the direction of the SSC, avoiding the previous bias in favour of the rich clusters. Our data reveal the existence of a sheet of galaxies connecting the main parts of the supercluster. We describe the sample and observations in Section 2.
We present the results along with previous measurements in Section 3 and discuss the significance of the measurements in Section 4. ## 2 Observations Although a large body of galaxy velocity data is available in the literature for the SSC, the existing samples of redshifts in each cluster are highly incomplete, even at the bright end of the luminosity function. We have therefore started a campaign to obtain complete samples down to the same magnitude below $`L_{\ast }`$ for each cluster. Each selected cluster has a projected diameter of 2.5 to 3.0 degrees, so the FLAIR-II system on the UKST with a $`5.5\times 5.5`$ deg<sup>2</sup> field is an ideal facility for this project. The very wide field also permits us to probe the regions between the dominant clusters neglected in previous observations. In this paper we emphasise our results from these regions. We selected targets from red ESO/SRC sky survey plates scanned by the MAMA machine at Paris Observatory (as described in Paper II; see also Infante et al. 1996). The fields observed (listed in Table 1) were the standard survey fields nearest to the centre of the cluster (13:25:00 $`-`$31:00:00 B1950). These covered an area of 77 deg<sup>2</sup>, allowing us to probe the limits of the SSC out to radii as large as 8 deg. We defined a sample of galaxies to a limit of $`R<16`$, corresponding (assuming a mean $`B-R=1.5`$) to $`B<17.5`$, the nominal galaxy limiting magnitude of the FLAIR-II system. This corresponds to an absolute magnitude of $`M_B=-19`$ at the Shapley distance of 200 $`\mathrm{h}_{75}^{-1}`$ Mpc. This gave samples of 600–1000 galaxies per field. We then removed any galaxies with published measurements in the NED database or measured by H. Quintana and R. Carrasco (private communication, 1997): 46 galaxies for F382, 81 for F383 and 200 for F444. For each observing run we then selected random subsamples of about 110 targets per field from the unobserved galaxies. When preparing each field for observation at the telescope we made a further selection of 80 targets to observe (10 fibres being reserved for measurement of the sky background). This final selection was essentially random, but we did reject any galaxies too close (less than about 1 arcminute) to another target already chosen or to a bright star. We observed a total of 3 fields with the FLAIR-II spectrograph in 1997 May and two more in 1998 April. The details of the observations are given in Table 1. In 1997, out of 6 allocated nights we were only able to observe 3 FLAIR fields successfully due to poor weather, and the first of these was repeated over 3 nights. Field F444 was observed in particularly poor weather, resulting in a much lower number of measured redshifts. In 1998 we again had poor weather, and were only able to observe two fields in an allocation of 8 half-nights. The data were reduced as in Drinkwater et al. (1996) using the dofibers package in IRAF (Tody 1993). We measured the radial velocities with the RVSAO package (Kurtz & Mink 1998) contributed to IRAF. Redshifts were measured for absorption-featured spectra using the cross-correlation task XCSAO in RVSAO. We decided to adopt as the absorption velocity the one associated with the minimum error from the cross-correlation against the templates. In the great majority of cases, this coincided also with the maximum R parameter of Tonry & Davis (1979). The redshifts for the emission line objects were determined using the EMSAO task in RVSAO.
EMSAO finds emission lines automatically, computes redshifts for each identified line and combines them into a single radial velocity with error. Spectra showing both absorption and emission features were generally measured with the two tasks XCSAO and EMSAO and the result with the lower error used. In two spectra with very poor signal (13:05:19.9 $`-`$33:00:31 and 13:23:22.9 $`-`$36:47:09) the emission lines were measured manually and a conservative error of 150 $`\mathrm{km}\,\mathrm{s}^{-1}`$ assigned. We measured velocities successfully for 306 galaxies in the sample: these are presented in Table 3. We have compared the distributions of the galaxies we measured to the input samples to check that they are fair samples. There is no significant difference in the distributions of the coordinates, but there is a small difference in the magnitude distributions in the sense that the measured sample does not have as many of the faintest galaxies as the input sample. This is to be expected as these would have the lowest signal in the FLAIR-II spectra, but this should not affect our study of the spatial distribution significantly. ## 3 Results Previous studies of the SSC have covered a very large region of sky, but we will limit our analysis in this paper to the region of sky we observed with FLAIR-II: the three UK Schmidt fields in Table 1. In some cases we will further restrict our analysis to the two Southern fields F382 and F383, where our observations were much more complete. The distribution of these fields and the galaxies we observed is shown in Fig. 1. We also show any previously observed galaxies and the known Abell clusters. We present the resulting distribution of galaxies towards the Shapley supercluster in Fig. 2 as cone diagrams and in Fig. 3 as the histogram of all velocities up to $`40000\,\mathrm{km}\,\mathrm{s}^{-1}`$. The importance of the SSC in this region of the sky is demonstrated by the fact that fully three quarters of the galaxies we measured belong to the SSC, with velocities in the range 7580–18300 $`\mathrm{km}\,\mathrm{s}^{-1}`$. In all the plots the new data are indicated by different symbols to emphasise their impact (this can also be seen by comparing these figures with the equivalent ones in Paper II). It can be seen that by probing large regions of the SSC away from the rich Abell clusters, we have revealed additional structure, which we discuss in the following sections. ### 3.1 Foreground Galaxies First, in agreement with previous work we also note the presence of a foreground wall of 269 galaxies (Hydra-Centaurus region) at $`\overline{V}=4242\,\mathrm{km}\,\mathrm{s}^{-1}`$ with $`\sigma =890\,\mathrm{km}\,\mathrm{s}^{-1}`$ in the range $`2000-6000\,\mathrm{km}\,\mathrm{s}^{-1}`$. This distribution can be related to the nearby cluster A3627 associated with the "Great Attractor" (Kraan-Korteweg et al. 1996). ### 3.2 Clusters in the Shapley Supercluster The previous observations reported in Papers I and II concentrated on the Abell clusters, clarifying the location of many of them. We reproduce a list of the main clusters in the SSC region in Table 2 for reference and plot their positions in Fig. 1. As noted above, our new measurements concentrate on galaxies outside the rich clusters in this field. In particular we observed virtually no galaxies in foreground or background clusters. We compare the distribution of the SSC galaxies to the Abell clusters in two velocity slices in Figs. 4 and 5. In the near side of the SSC ($`7580<v<12700\,\mathrm{km}\,\mathrm{s}^{-1}`$: Fig. 4) we detected several new galaxies in the clusters A3571 and A3572.
This region has a very extended velocity structure, with several galaxies in the higher range (Fig. 5). At the velocity of the main part of the SSC ($`12700<v<18300\mathrm{kms}^{-1}`$: Fig. 5) we have found additional galaxies in many of the clusters, especially the poorer ones like AS726, AS731 and A3564. The main conclusion, however, is that the clusters are seen as peaks in a sheet-like distribution rather than as isolated objects. ### 3.3 Structure of the Shapley Supercluster The main impact of our new data is to revise our knowledge of the large-scale structure of the SSC by measuring a large number of galaxies away from the rich Abell clusters previously studied. The majority of the galaxies we observed were part of the SSC, so our principal result is to show that the SSC is bigger than previously thought, with an additional 230 galaxies in the velocity range $`7580<v<18300\mathrm{kms}^{-1}`$ compared to 492 previously known in our survey area. Looking at the cone diagrams (Fig. 2) and the velocity histogram in Fig. 3, our first new observation is that the SSC is clearly separated into two components in velocity space, the nearer one at $`\overline{v}=10800\mathrm{kms}^{-1}`$ ($`\sigma _v=1300\mathrm{kms}^{-1}`$) to the East of the main concentration at $`\overline{v}=14920\mathrm{kms}^{-1}`$ ($`\sigma _v=1100\mathrm{kms}^{-1}`$). The two regions contain 200 and 522 galaxies respectively. Some evidence for this separation was noted in the velocity distribution in Paper II, but it is much clearer with our new data. Secondly, it can be seen from the Declination cone diagram in Fig. 2, as well as the sky plots in Figs. 4 and 5, that the Southern part of the SSC consists of two large sheets of galaxies of which the previously measured Abell clusters represent the peaks of maximum density. To consider the significance of this extended distribution of galaxies it is helpful to define an inter-cluster sample consisting of galaxies in the Southern fields (F382 and F383) outside the known Abell clusters in the SSC velocity range. We eliminated all galaxies within a 0.5 degree radius (about 1 Abell radius) of all the clusters shown in Figs. 4 and 5. Very few of the previously measured galaxies remain in the sample. In Fig. 6 we plot a histogram of the galaxy velocities in this inter-cluster sample compared to the predicted $`n(z)`$ distribution of galaxies. The predicted distribution was based on the number counts of Metcalfe et al. (1991), normalised to the area of the Southern sample after removing clusters (44 deg<sup>2</sup>) and corrected for completeness (304 out of a possible 1194 galaxies measured in total). We also show the histogram (shaded) and predictions (dashed) for the previously measured galaxies in the same field (128 out of a possible 1194). The histogram shows that even for the inter-cluster galaxies there is a large overdensity in the SSC region ($`7500<cz<18500\mathrm{kms}^{-1}`$): we measure 161 galaxies compared to 74 expected. This is an overdensity of $`2.0\pm 0.2`$ detected at the 10 sigma level (a short numerical check follows below). This is averaged over the whole SSC velocity range; the overdensity in individual 1000$`\mathrm{kms}^{-1}`$ bins peaks at about 7. By comparison the previous data (42 galaxies, 33 expected) gave an overdensity of 1.3 detected at only 1.5 sigma. The overdensity for the whole SSC including the Abell clusters is, of course, much larger still. These new observations mean that we must modify the conclusions of Paper I about the overall shape of the SSC.
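The quoted detections can be reproduced with a back-of-the-envelope Poisson estimate (our check; it assumes pure Poisson errors on the expected counts):

```python
# Poisson check of the inter-cluster overdensity figures quoted above.
from math import sqrt

def overdensity(observed, expected):
    ratio = observed / expected                     # overdensity factor
    sigma = (observed - expected) / sqrt(expected)  # Poisson significance
    return ratio, sigma

print(overdensity(161, 74))   # new data:      ~2.2x at ~10 sigma
print(overdensity(42, 33))    # previous data: ~1.3x at ~1.6 sigma
```

Both lines reproduce the approximately 10 sigma and 1.5 sigma figures quoted above.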
In Paper I it was concluded from the velocity distribution of the clusters that the SSC was very elongated and either inclined towards us or rotating. The SSC extends at least as far to the South as our measurements reach, so we find it is not elongated or flattened. We now suggest that it is more complex still, being composed of the known Abell clusters embedded in two sheets of galaxies of much larger extent. ## 4 Discussion Our new observations of galaxies towards the Shapley supercluster have, by surveying a large area away from known clusters, revealed substantial new structures in the region. The supercluster is part of a much larger structure than was apparent from the previous observations, extending uniformly in two sheets over the whole region we surveyed to the South of the core of the SSC. We detected an additional 230 members of the SSC in our whole survey area, representing a 50% increase on the previous total of 492 SSC galaxies. Our measurements to the North of the cluster were much less complete (only one field, in poor weather), so we cannot exclude the possibility that these sheets of galaxies extend equally to the North. Recent results by Bardelli, Zucca & Zamorani (1999) support this possibility: they have measured galaxies in 18 small (40 arcmin) inter-cluster fields North of the core of the SSC and also find an overdensity at the SSC velocity. In Paper I the effect of the SSC on the dynamics of the Local Group was estimated. It was found that the mass in the supercluster could account for at least 25% of the motion of the Local Group with respect to the cosmic microwave background. Our new data suggest that the SSC is at least 50% more massive, with a significant part of the extra mass in the closer sub-region. The SSC therefore has a more important effect on the Local Group than previously thought, although we defer a detailed calculation until we have additional data (Proust et al. 1999, in preparation). ## Acknowledgements We wish to thank Roberto de Propris for kindly providing the software to calculate the predicted galaxy $`n(z)`$ distributions and we are grateful to the staff of the UKST and AAO for their assistance with the observations. This research was partially supported by the cooperative programme ECOS/CONICYT C96U04, and HC acknowledges support from a Presidential Chair in Science. MJD acknowledges receipt of an AFCOP travel grant and a French Embassy Fellowship in support of visits to Paris Observatory, where some of this work was carried out. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
## References

Bardelli, S., Zucca, E., Malizia, A., Zamorani, G., Scaramella, R., Vettolani, G., 1996, A&A, 305, 435

Bardelli, S., Zucca, E., Zamorani, G., Vettolani, G., Scaramella, R., 1998, MNRAS, 296, 599

Bardelli, S., Zucca, E., Zamorani, G., 1999, in ‘Observational Cosmology: The Development of Galaxy Systems’, Sesto 1998 (astro-ph/9811015)

Drinkwater, M.J., Currie, M.J., Young, C.K., Hardy, E., Yearsley, J.M., 1996, MNRAS, 279, 595

Infante, L., Slezak, E., Quintana, H., 1996, A&A, 315, 657

Kraan-Korteweg, R.C., Woudt, P.A., Cayatte, V., Fairall, A.P., Balkowski, C., Henning, P.A., 1996, Nature, 379, 519

Kurtz, M.J., Mink, D.J., 1998, PASP, in press

Metcalfe, N., Shanks, T., Fong, R., Jones, L.R., 1991, MNRAS, 249, 498

Parker, Q.A., Watson, F.G., 1995, in Wide Field Spectroscopy and the Distant Universe, 35th Herstmonceux Conference, ed. S.J. Maddox & A. Aragon-Salamanca (Singapore: World Scientific), 33

Parker, Q.A., 1997, in Wide Field Spectroscopy, 2nd conference of the working group of IAU Commission 9 on Wide Field Imaging, ed. Kontizas et al. (Dordrecht: Kluwer), 25

Pierre, M., Bohringer, H., Ebeling, H., Voges, W., Schuecker, P., Cruddace, R., MacGillivray, H., 1994, A&A, 290, 725

Quintana, H., Ramirez, A., Melnick, J., Raychaudhury, S., Slezak, E., 1995, AJ, 110, 463 (Paper I)

Quintana, H., Melnick, J., Proust, D., Infante, L., 1997, A&AS, 125, 247 (Paper II)

Raychaudhury, S., 1989, Nature, 342, 251

Scaramella, R., Baiesi-Pillastrini, G., Chincarini, G., Vettolani, G., Zamorani, G., 1989, Nature, 338, 562

Tody, D., 1993, in Astronomical Data Analysis Software and Systems II, A.S.P. Conference Ser., Vol. 52, eds. R.J. Hanisch, R.J.V. Brissenden & J. Barnes, 173

Tonry, J., Davis, M., 1979, AJ, 84, 1511
no-problem/9903/hep-th9903201.html
ar5iv
text
## 1 Introduction Based on the D-dimensional Kerr solution and its generalization to a family of rotating, electrically charged black holes in , a number of solutions with the maximum number of rotational parameters in 11- and 10-dim supergravities were constructed; among them, in particular, the most general solutions representing $`N`$ coincident rotating M2-, M5- or D3-branes. However, an analogous solution representing $`N`$ coincident rotating NS5-branes has not been explicitly constructed. It is the purpose of this note to fill this gap. It turns out that, because of the absence of R–R fields, in the near-horizon limit there is a description in terms of a background corresponding to the exact conformal field theory (CFT) $`SL(2,\mathrm{IR})_N/U(1)\times SU(2)_N`$. This generalizes previous realizations that such exact string backgrounds exist in the near-horizon limit of $`N`$ coincident extremal and non-extremal NS5-branes, as well as for $`N`$ extremal NS5-branes distributed uniformly along the circumference of a ring . ## 2 Rotating NS5-branes The usual NS5-brane solution (extremal or not; see, for instance, ) with no angular parameters has a global $`SO(4)`$ symmetry. Introducing angular momentum breaks this symmetry down to the Cartan subalgebra of $`SO(4)`$, which is $`U(1)\times U(1)`$. Since the latter is two-dimensional, we may obtain a solution with at most two angular parameters $`l_1`$ and $`l_2`$. As we shall see, without loss of generality these can be taken to be non-negative. In order to obtain our solution we have used as a guide the general rotating M5-brane solution .<sup>2</sup><sup>2</sup>2It turns out that (1)–(3) correspond to a dimensional reduction of the M5-brane solution we mentioned, along a vanishing circle corresponding to one of the angular variables, after we also replace the mass parameter as $`m\to mr`$. In particular, this empirical rule can be used in eq. (2.1) of and eq. (14) of for the metric and 3-form respectively. The angular variable we mentioned is denoted by $`\psi `$ (in both papers) and we dimensionally reduce around $`\psi =\pi /2`$. The metric of our solution is given by $`ds^2=-hdt^2+dy_1^2+\cdots +dy_5^2`$ $`+f\left({\displaystyle \frac{dr^2}{\stackrel{~}{h}}}+r^2(\mathrm{\Delta }d\theta ^2+\mathrm{sin}^2\theta \mathrm{\Delta }_1d\varphi _1^2+\mathrm{cos}^2\theta \mathrm{\Delta }_2d\varphi _2^2)\right)`$ (1) $`+{\displaystyle \frac{4ml_1l_2\mathrm{sin}^2\theta \mathrm{cos}^2\theta }{r^2\mathrm{\Delta }}}d\varphi _1d\varphi _2-{\displaystyle \frac{4m\mathrm{cosh}\alpha }{r^2\mathrm{\Delta }}}dt(l_1\mathrm{sin}^2\theta d\varphi _1+l_2\mathrm{cos}^2\theta d\varphi _2),`$ the components of the antisymmetric tensor by $`B_{\varphi _1\varphi _2}`$ $`=`$ $`2m\mathrm{cosh}\alpha \mathrm{sinh}\alpha \left(1+{\displaystyle \frac{l_1^2}{r^2}}\right){\displaystyle \frac{\mathrm{cos}^2\theta }{\mathrm{\Delta }}},`$ $`B_{t\varphi _1}`$ $`=`$ $`2ml_2\mathrm{sinh}\alpha {\displaystyle \frac{\mathrm{sin}^2\theta }{r^2\mathrm{\Delta }}},`$ (2) $`B_{t\varphi _2}`$ $`=`$ $`2ml_1\mathrm{sinh}\alpha {\displaystyle \frac{\mathrm{cos}^2\theta }{r^2\mathrm{\Delta }}},`$ and the dilaton by $`e^{2\varphi }=g_s^2f,`$ (3) where $`g_s`$ is the string coupling at infinity<sup>3</sup><sup>3</sup>3The general rotating $`D5`$-brane solution in type-IIB supergravity is trivially obtained by an S-duality transformation on (1)–(3) and will not be presented here.
and the various functions are defined as $`f`$ $`=`$ $`1+{\displaystyle \frac{2m\mathrm{sinh}^2\alpha }{r^2\mathrm{\Delta }}},h=1-{\displaystyle \frac{2m}{r^2\mathrm{\Delta }}},`$ $`\stackrel{~}{h}`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{\Delta }}}\left(1+{\displaystyle \frac{l_1^2}{r^2}}+{\displaystyle \frac{l_2^2}{r^2}}+{\displaystyle \frac{l_1^2l_2^2}{r^4}}-{\displaystyle \frac{2m}{r^2}}\right),`$ $`\mathrm{\Delta }`$ $`=`$ $`1+{\displaystyle \frac{l_1^2}{r^2}}\mathrm{cos}^2\theta +{\displaystyle \frac{l_2^2}{r^2}}\mathrm{sin}^2\theta ,`$ (4) $`\mathrm{\Delta }_1`$ $`=`$ $`1+{\displaystyle \frac{l_1^2}{r^2}}+{\displaystyle \frac{2ml_1^2\mathrm{sin}^2\theta }{r^4\mathrm{\Delta }f}},`$ $`\mathrm{\Delta }_2`$ $`=`$ $`1+{\displaystyle \frac{l_2^2}{r^2}}+{\displaystyle \frac{2ml_2^2\mathrm{cos}^2\theta }{r^4\mathrm{\Delta }f}}.`$ The ADM mass, the angular momenta and the angular velocities associated with motion in $`\varphi _1`$ and $`\varphi _2`$, as well as the Bekenstein–Hawking entropy and temperature, are given by<sup>4</sup><sup>4</sup>4The angular velocities $`\mathrm{\Omega }_i`$, $`i=1,2`$ in (5) below, are determined by demanding that the three-vector (with components in the $`t,\varphi _1`$ and $`\varphi _2`$ directions) $`\eta ^a=(1,\mathrm{\Omega }_1,\mathrm{\Omega }_2)`$ be null at the horizon, i.e. $`\eta ^2|_{r_H}=0`$. The temperature is determined using the general formula $`T_H^2=-\frac{1}{16\pi ^2}\lim_{r\to r_H}\frac{\partial _\mu \eta ^2\partial ^\mu \eta ^2}{\eta ^2}`$. $`M_{ADM}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Omega }_3V_5}{16\pi G_N}}2m(2\mathrm{cosh}^2\alpha +1),\mathrm{\Omega }_3=2\pi ^2,`$ $`J_i`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Omega }_3V_5}{4\pi G_N}}ml_i\mathrm{cosh}\alpha ,i=1,2,`$ $`\mathrm{\Omega }_i`$ $`=`$ $`{\displaystyle \frac{l_i}{(r_H^2+l_i^2)\mathrm{cosh}\alpha }},i=1,2,`$ (5) $`S`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Omega }_3V_5}{4G_N}}2mr_H\mathrm{cosh}\alpha ,`$ $`T_H`$ $`=`$ $`{\displaystyle \frac{r_H^4-l_1^2l_2^2}{4\pi mr_H^3\mathrm{cosh}\alpha }},`$ where $`r_H`$ is the position of the outer horizon, given by $$r_H^2=\frac{1}{2}\left(2m-l_1^2-l_2^2+\sqrt{(2m-l_1^2-l_2^2)^2-4l_1^2l_2^2}\right).$$ (6) There is also an inner horizon, given by the above formula with a minus sign in front of the square root. Notice also that in order to have a horizon, i.e. $`r_H^2\ge 0`$, the inequality $`l_1+l_2\le \sqrt{2m}`$ should be satisfied. The parameter $`\alpha `$ is related to the mass and charge of the NS5-brane by $$\mathrm{sinh}^2\alpha =\sqrt{\left(\frac{\alpha ^{\prime }N}{2m}\right)^2+\frac{1}{4}}-\frac{1}{2}.$$ (7) Finally, we note that the thermodynamic quantities in (5) obey the first law of black-hole thermodynamics $$dM_{ADM}=T_HdS+\mathrm{\Omega }_1dJ_1+\mathrm{\Omega }_2dJ_2.$$ (8) This is easily checked by treating $`M_{ADM},S,J_1,J_2`$ as functions of the variables $`m,l_1,l_2`$ using (5)–(7); a small numerical sketch of this check is given below.
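As an illustration, the following minimal sketch verifies (8) by finite differences (our own check; the overall factor $`\mathrm{\Omega }_3V_5/4\pi G_N`$ and the combination $`\alpha ^{\prime }N`$ are set to one, since both drop out of the first law):

```python
# Finite-difference check of the first law (8) for the quantities in (5)-(7).
# Conventions: Omega_3 V_5 / (4 pi G_N) = 1 and alpha' N = 1.
import numpy as np

def cosh_alpha(m):
    # from (7): sinh^2(alpha) = sqrt((alpha' N / 2m)^2 + 1/4) - 1/2
    return np.sqrt(np.sqrt((1.0/(2*m))**2 + 0.25) + 0.5)

def r_outer(m, l1, l2):
    # outer horizon radius, eq. (6)
    b = 2*m - l1**2 - l2**2
    return np.sqrt(0.5*(b + np.sqrt(b**2 - 4*(l1*l2)**2)))

def charges(m, l1, l2):
    ca, rH = cosh_alpha(m), r_outer(m, l1, l2)
    M = 0.5*m*(2*ca**2 + 1)                 # M_ADM
    S = 2*np.pi*m*rH*ca                     # Bekenstein-Hawking entropy
    J1, J2 = m*l1*ca, m*l2*ca               # angular momenta
    T = (rH**4 - (l1*l2)**2)/(4*np.pi*m*rH**3*ca)
    W1 = l1/((rH**2 + l1**2)*ca)            # Omega_1
    W2 = l2/((rH**2 + l2**2)*ca)            # Omega_2
    return M, S, J1, J2, T, W1, W2

x0, eps = np.array([1.0, 0.3, 0.2]), 1e-6   # (m, l1, l2) and step size
M0, S0, J10, J20, T, W1, W2 = charges(*x0)
for i in range(3):                          # vary m, l1 and l2 in turn
    dx = np.zeros(3); dx[i] = eps
    M1, S1, J11, J21, *_ = charges(*(x0 + dx))
    lhs = M1 - M0
    rhs = T*(S1 - S0) + W1*(J11 - J10) + W2*(J21 - J20)
    print(f"dM = {lhs:.3e},  T dS + Omega dJ = {rhs:.3e}")
```

For each of the three variations the two printed numbers agree to the accuracy of the finite-difference step, as expected from (8).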
## 3 The extremal limit The extremal limit of the above solution is obtained by letting $`m\to 0`$. Then, after changing variables from $`(r,\theta ,\varphi _1,\varphi _2)`$ to $`(x_1,x_2,x_3,x_4)`$ as $$\left(\begin{array}{c}x_1\\ x_2\end{array}\right)=\sqrt{r^2+l_1^2}\mathrm{sin}\theta \left(\begin{array}{c}\mathrm{cos}\varphi _1\\ \mathrm{sin}\varphi _1\end{array}\right),\left(\begin{array}{c}x_3\\ x_4\end{array}\right)=\sqrt{r^2+l_2^2}\mathrm{cos}\theta \left(\begin{array}{c}\mathrm{cos}\varphi _2\\ \mathrm{sin}\varphi _2\end{array}\right),$$ (9) we find the following background $`ds^2=-dt^2+dy_1^2+\cdots +dy_5^2+Hdx_idx_i,i=1,2,3,4,`$ $`H_{ijk}=ϵ_{ijkl}\partial _lH,`$ (10) $`e^{2\mathrm{\Phi }}=g_s^2H,`$ where $`H`$ is given by $$H=1+\frac{\alpha ^{\prime }N}{\sqrt{(l_1^2-l_2^2+x_1^2+x_2^2+x_3^2+x_4^2)^2-4(l_1^2-l_2^2)(x_1^2+x_2^2)}}.$$ (11) It can easily be checked that $`H`$ is a (multicenter) harmonic function in the 4-dim Euclidean space spanned by the $`x_i`$’s (a numerical spot check is sketched at the end of this section). The metric in (11) has singularities at $`x_3=x_4=0,x_1^2+x_2^2=l_1^2-l_2^2,\mathrm{if}l_1>l_2,`$ $`x_1=x_2=0,x_3^2+x_4^2=l_2^2-l_1^2,\mathrm{if}l_1<l_2.`$ (12) Hence, the singularity structure is that of a ring with radius $`\sqrt{|l_1^2-l_2^2|}`$. In fact, (10) with (11) corresponds to a continuous uniform distribution of NS5-branes along the circumference of a ring .<sup>5</sup><sup>5</sup>5My interest in finding the rotating NS5-brane solution (1)–(3) was sparked by the (correct) remark of E. Kiritsis that the BPS solution (10) could be unstable at finite temperature, since the gravitational attraction will no longer be balanced by just the charge repulsion. In our solution (1)–(3) spin forces provide the necessary extra balance. In the field-theory limit, discussed in , the 1 in the harmonic function in (11) is effectively removed. Then, it becomes an exact string background, as it is connected by a T-duality transformation to the coset model CFT $`SL(2,\mathrm{IR})_N/U(1)\times SU(2)_N/U(1)`$ . The background (10) is an axionic instanton and as such it preserves half of the supersymmetries of flat space. From a gauge-theory viewpoint, it corresponds to a Higgs phase of a 6-dim $`SU(N)`$ SYM theory broken to $`U(1)^N`$, since the centers where the branes are put correspond to non-zero expectation values for the scalars. In our case the vacuum moduli space has a $`Z_N\times U(1)`$ symmetry, which, in the continuous limit we are discussing here, becomes a $`U(1)\times U(1)`$ symmetry. This degeneracy is, however, lifted once we turn on the temperature, and the corresponding supergravity solution can describe excitations around these points of the moduli space.
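The promised numerical spot check that $`H`$ is harmonic: the sketch below (ours) evaluates the four-dimensional Laplacian of (11) by central differences at a generic point away from the ring; the result vanishes up to truncation error.

```python
# Numerical spot check that H in (11) is harmonic in 4d (away from the ring).
import numpy as np

AN = 1.0           # alpha' N
L1, L2 = 0.8, 0.3  # l_1, l_2; the ring has radius sqrt(l1^2 - l2^2)

def H(x):
    c = L1**2 - L2**2
    rho2 = np.dot(x, x)
    return 1.0 + AN/np.sqrt((c + rho2)**2 - 4*c*(x[0]**2 + x[1]**2))

def laplacian(f, x, h=1e-3):
    # central second differences in each of the four directions
    total = 0.0
    for i in range(4):
        e = np.zeros(4); e[i] = h
        total += (f(x + e) - 2*f(x) + f(x - e))/h**2
    return total

x = np.array([1.1, -0.4, 0.7, 0.2])  # generic point off the singular ring
print(laplacian(H, x))               # ~0 up to O(h^2) truncation error
```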
## 4 Field-theory limit and exact description A natural question arises, namely what the field-theory limit of the non-supersymmetric background (1)–(3) is and, moreover, whether it also has an exact CFT interpretation. Consider the limit $`g_s\to 0`$ and $`m\to 0`$ in such a way that the ratio $`m^{1/2}/g_s`$ is held fixed. In this limit the Yang–Mills coupling constant $`g_{\mathrm{YM}}^2\alpha ^{\prime }`$ remains finite. It is convenient to define rescaled quantities as $`{\displaystyle \frac{2m}{g_s^2}}=\mu \alpha ^{\prime },r=(2m)^{1/2}\rho ,l_i=(2m)^{1/2}a_i,i=1,2,`$ (13) and then take the limit $`m\to 0`$ in (1)–(3). We find for the metric<sup>6</sup><sup>6</sup>6In the following we use the rescaled variables $`t\to \sqrt{\alpha ^{\prime }N}t`$ and $`y_i\to \sqrt{\alpha ^{\prime }N}y_i`$, $`i=1,\dots ,5`$, and omit $`\alpha ^{\prime }`$ since it drops out of the $`\sigma `$-model as well as the supergravity action. $`{\displaystyle \frac{1}{N}}ds^2`$ $`=`$ $`-\left(1-{\displaystyle \frac{1}{\mathrm{\Delta }_0}}\right)dt^2+dy_1^2+\cdots +dy_5^2+{\displaystyle \frac{d\rho ^2}{\rho ^2+a_1^2a_2^2/\rho ^2+a_1^2+a_2^2-1}}`$ $`+d\theta ^2+{\displaystyle \frac{1}{\mathrm{\Delta }_0}}\left((\rho ^2+a_1^2)\mathrm{sin}^2\theta d\varphi _1^2+(\rho ^2+a_2^2)\mathrm{cos}^2\theta d\varphi _2^2\right)`$ (14) $`-{\displaystyle \frac{2}{\mathrm{\Delta }_0}}dt(a_1\mathrm{sin}^2\theta d\varphi _1+a_2\mathrm{cos}^2\theta d\varphi _2),`$ for the antisymmetric tensor two-form $$\frac{1}{N}B=2\frac{1}{\mathrm{\Delta }_0}\left((\rho ^2+a_1^2)\mathrm{cos}^2\theta d\varphi _1\wedge d\varphi _2+a_2\mathrm{sin}^2\theta dt\wedge d\varphi _1+a_1\mathrm{cos}^2\theta dt\wedge d\varphi _2\right),$$ (15) and for the dilaton $$e^{2\mathrm{\Phi }}=\frac{N}{\mu \mathrm{\Delta }_0}.$$ (16) The function $`\mathrm{\Delta }_0`$ entering the previous expressions is defined as $$\mathrm{\Delta }_0=\rho ^2+a_1^2\mathrm{cos}^2\theta +a_2^2\mathrm{sin}^2\theta .$$ (17) Note that string-theory corrections to the supergravity result are organized in powers of $`1/N`$. Hence, by choosing $`N\gg 1`$ we suppress these perturbative corrections. On the other hand, string-loop corrections are suppressed by choosing $`N\ll \mu `$. These are the same conditions as were found in for the case of zero angular momenta. As a final remark, we note that it is very likely that the background (14)–(16) can also be obtained by gauging directly a 2-dim subgroup, isomorphic to $`U(1)\times U(1)`$, of the WZW model for $`SL(2,\mathrm{IR})\times SU(2)`$. In that case we may compute the $`\frac{1}{N}`$-corrections to the background (14)–(16) using techniques developed in . ### 4.1 The $`O(3,3)`$ duality transformation First, consider the case of vanishing angular parameters $`a_1`$ and $`a_2`$. Then, the background (14)–(16) becomes the one corresponding to the $`SL(2,\mathrm{IR})_N/SO(1,1)\times SU(2)_N`$ exact CFT, as was shown in . It turns out that by performing an $`O(3,3)`$ transformation on the latter background we can obtain the more general one given by (14)–(16). Let us first pass to the Euclidean regime by letting $`t\to i\tau `$ and $`a_1\to ia_1`$. In order to find the specific $`O(3,3)`$ matrix, we first expand the $`\sigma `$-model action, with metric and antisymmetric tensor given by (14) and (15), for small values of $`a_1,a_2`$. Then, the infinitesimal change in the $`\sigma `$-model Lagrangian density is $`\delta `$ $`=`$ $`-{\displaystyle \frac{a_1-a_2}{\mathrm{cosh}^2r}}(\mathrm{sin}^2\theta \partial _+\tau \partial _{-}\varphi _1-\mathrm{cos}^2\theta \partial _+\tau \partial _{-}\varphi _2)`$ (18) $`-{\displaystyle \frac{a_1+a_2}{\mathrm{cosh}^2r}}(\mathrm{sin}^2\theta \partial _+\varphi _1\partial _{-}\tau +\mathrm{cos}^2\theta \partial _+\varphi _2\partial _{-}\tau )+𝒪(a^2),`$ where we have changed variables as $`\rho =\mathrm{cosh}r`$, so that $`G_{rr}=1`$ to zeroth order in $`a_1,a_2`$. In the space of the three variables $`X^\mu =(\tau ,\varphi _1,\varphi _2)`$ a general $`O(3,3)`$ transformation acts as (see, for instance, ) $$\stackrel{~}{E}=(aE+b)(cE+d)^{-1},$$ (19) where the group element $`G=\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)\in O(3,3)`$ preserves the bilinear form $`J=\left(\begin{array}{cc}0& I\\ I& 0\end{array}\right)`$, i.e. $`G^TJG=J`$.
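Before specializing, a small numerical sketch (ours) of the fractional-linear action (19), checking that for a near-identity group element it reduces to the first-order variation quoted in (21) below:

```python
# Sketch: the fractional-linear O(3,3) action (19) and its linearisation.
# For a = 1+eps*A, b = eps*B, c = eps*C, d = 1-eps*A^T (B, C antisymmetric),
# the exact transform should agree with AE + E A^T + B - E C E to O(eps^2).
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(3, 3))                  # generic background matrix E = G + B

A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)); B = B - B.T     # antisymmetric generators
C = rng.normal(size=(3, 3)); C = C - C.T

def transform(E, a, b, c, d):
    # eq. (19): E~ = (aE + b)(cE + d)^{-1}
    return (a @ E + b) @ np.linalg.inv(c @ E + d)

for eps in (1e-3, 1e-4):
    a, b = np.eye(3) + eps*A, eps*B
    c, d = eps*C, np.eye(3) - eps*A.T
    exact = transform(E, a, b, c, d) - E
    linear = eps*(A @ E + E @ A.T + B - E @ C @ E)
    print(eps, np.abs(exact - linear).max())  # shrinks like eps**2
```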
The matrix $`E_{\mu \nu }=G_{\mu \nu }+B_{\mu \nu }`$ is read off from the Euclidean version of the background (14), (15) (after setting $`a_1=a_2=0`$): $$E=\left(\begin{array}{ccc}\mathrm{tanh}^2r& 0& 0\\ 0& \mathrm{sin}^2\theta & -\mathrm{cos}^2\theta \\ 0& \mathrm{cos}^2\theta & \mathrm{cos}^2\theta \end{array}\right).$$ (20) An infinitesimal version of the transformation (19) is obtained by expanding the $`O(3,3)`$ group element around the identity element using $`a=I+A`$, $`b=B`$, $`c=C`$ and $`d=I-A^T`$, where $`B`$ and $`C`$ are antisymmetric matrices. Then, the infinitesimal change (first order in the generators $`A`$, $`B`$ and $`C`$) of the $`\sigma `$-model Lagrangian density is $$\delta =(AE+EA^T+B-ECE)_{\mu \nu }\partial _+X^\mu \partial _{-}X^\nu .$$ (21) Comparing (18) and (21) we determine $$A=\left(\begin{array}{ccc}0& -a_1& -a_2\\ a_1& 0& 0\\ 0& 0& 0\end{array}\right),B=\left(\begin{array}{ccc}0& a_2& 0\\ -a_2& 0& 0\\ 0& 0& 0\end{array}\right),C=\left(\begin{array}{ccc}0& a_2& a_1\\ -a_2& 0& 0\\ -a_1& 0& 0\end{array}\right).$$ (22) Exponentiating, we find that the necessary $`O(3,3)`$ group element in (19) is $`a`$ $`=`$ $`\left(\begin{array}{ccc}\sigma _1\sigma _2& -b_1\sigma _1\sigma _2& -b_2\sigma _1\sigma _2\\ b_1\sigma _1\sigma _2& \sigma _1\sigma _2& b_1b_2\sigma _1\sigma _2\\ 0& 0& 1\end{array}\right),b=\left(\begin{array}{ccc}b_1b_2\sigma _1\sigma _2& b_2\sigma _1\sigma _2& 0\\ -b_2\sigma _1\sigma _2& b_1b_2\sigma _1\sigma _2& 0\\ 0& 0& 0\end{array}\right),`$ $`c`$ $`=`$ $`\left(\begin{array}{ccc}b_1b_2\sigma _1\sigma _2& b_2\sigma _1\sigma _2& b_1\sigma _1\sigma _2\\ -b_2\sigma _1\sigma _2& b_1b_2\sigma _1\sigma _2& 1-\sigma _1\sigma _2\\ -b_1\sigma _1\sigma _2& 1-\sigma _1\sigma _2& b_1b_2\sigma _1\sigma _2\end{array}\right),d=\left(\begin{array}{ccc}\sigma _1\sigma _2& -b_1\sigma _1\sigma _2& 0\\ b_1\sigma _1\sigma _2& \sigma _1\sigma _2& 0\\ b_2\sigma _1\sigma _2& b_1b_2\sigma _1\sigma _2& 1\end{array}\right),`$ (23) where $`\sigma _i^2\equiv {\displaystyle \frac{\rho _+^2-a_i^2}{\rho _+^2-\rho _{-}^2}},b_i^2\equiv {\displaystyle \frac{a_i^2-\rho _{-}^2}{\rho _+^2-a_i^2}},i=1,2,`$ $`\rho _\pm ^2\equiv {\displaystyle \frac{1}{2}}\left(a_1^2+a_2^2+1\pm \sqrt{(a_1^2+a_2^2+1)^2-4a_1^2a_2^2}\right).$ (24) Indeed, we may easily check that applying (19), with (20) and (23), we obtain a matrix $`\stackrel{~}{E}`$; after we change variables as $`\rho ^2=(\rho _+^2-\rho _{-}^2)\mathrm{cosh}^2r+\rho _{-}^2`$, this $`\stackrel{~}{E}`$ corresponds to the Euclidean version of the background (14) and (15). The dilaton (16) is found by demanding that the measure factor $`e^{-2\mathrm{\Phi }}\sqrt{\det G}`$ be invariant under the $`O(3,3)`$ transformation. ## Acknowledgements I would like to thank the organizers for the invitation to present this and related work.